TW200422754A - Method for determining the optical parameters of a camera

Method for determining the optical parameters of a camera

Info

Publication number
TW200422754A
Authority
TW
Taiwan
Prior art keywords
camera
image
point
center
optical axis
Prior art date
Application number
TW92109159A
Other languages
Chinese (zh)
Other versions
TW565735B (en)
Inventor
guo-zhen Zhan
Chuang-Ran Jang
Original Assignee
guo-zhen Zhan
Priority date
Filing date
Publication date
Application filed by guo-zhen Zhan filed Critical guo-zhen Zhan
Priority to TW92109159A priority Critical patent/TW565735B/en
Application granted granted Critical
Publication of TW565735B publication Critical patent/TW565735B/en
Priority to PCT/IB2004/001109 priority patent/WO2004092826A1/en
Publication of TW200422754A publication Critical patent/TW200422754A/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/002: Diagnosis, testing or measuring for television systems or their details for television cameras
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01M: TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00: Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02: Testing optical properties
    • G01M11/0221: Testing optical properties by determining the optical axis or position of lenses

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides a method, and a device implementing it, for determining the optical parameters of a camera. The method exploits the one-to-one imaging characteristic by which a sight ray in the visual field projects to a single image point on the image plane: starting from an image point, it searches the visual field for the absolute coordinate points satisfying this characteristic and thereby analyzes the imaging mechanism of the camera. In the implemented device, a centrally symmetric planar test target is used to locate the principal point of the image and to determine the absolute direction of the optical axis, by bringing the center of the target and the imaged center point into geometric similarity. The device then actively adjusts the relative distance between the camera and the target along the optical axis and captures the imaging tracks of the correction points, so that the tracks of correction points at different positions on the target overlap on the image plane; from this phenomenon the sight rays are analyzed, yielding a methodology for determining the optical parameters of the camera. Because the invention derives the camera parameters directly from the controllable and observable mapping of object coordinate values, without reference to any conventional projection-model assumption, it is suitable for analyzing cameras with unknown optical projection logic. Moreover, the larger the image distortion of the camera, the better the operating sensitivity, which extends the method to wide-angle cameras and to evaluating and specifying the optical characteristics of a camera. The invention also offers easy operation, low cost, and industrial applicability.

Description

[Technical field to which the invention belongs]

The present invention relates to a method, and a device implementing it, for obtaining the optical projection parameters of a camera, and in particular to a method for resolving the optical parameters (including the distortion center, the projection center, the projection curve, a distortion analysis, and the focal length constant) of cameras whose lenses deviate severely from the linear projection mechanism, such as fisheye lenses, together with a device carrying out that method.

[Prior art]

For measurement accuracy, the cameras used in artificial vision systems favor lenses with a small viewing angle, so that the captured image conforms as closely as possible to the ideal perspective projection mechanism; indeed, the perspective projection model of the pinhole imaging principle is commonly the reference for deriving camera parameters.
A camera usually deviates only slightly from its presumed projection mechanism, and the deviation of the image height can be described very accurately by a quadratic polynomial nonlinear function taking the image height as variable. The intrinsic and extrinsic optical parameters so obtained can support precise vision applications such as 3-D cubical inference, stereoscopy, and automatic optical inspection; the common limitation of this class of applications, however, is a small viewing angle and a short depth of field.

A fisheye lens can focus images wider and deeper: mounted on a camera it captures sharp images of unlimited depth of field, and its field of view can even exceed 180 degrees, but it brings with it severe barrel distortion. If a fisheye lens is used in a surveillance system, where it is only required that the movement of people or objects within the monitored area be visible, the distorted picture can be tolerated; if it is used to produce virtual reality images, only a visually normal appearance is required, which is also attainable. But for recognizing the physical dimensions of objects, or for developing metrology, a precise technique for obtaining the camera parameters is still lacking.

Because the optical geometry of a fisheye camera differs greatly from the linear perspective projection model, if the linear perspective mechanism is taken as the reference for building the projection mode of a fisheye camera, its optical parameters cannot be derived as accurately as those of an ordinary camera. A large body of techniques long matured in vision science therefore cannot be applied to images obtained with fisheye cameras.

R. Y. Tsai [1987] proposed deriving the camera parameters from the radial alignment constraint of a projection geometry circularly symmetric about the optical axis of the lens group. The method refers to five non-coplanar calibration points of known absolute position in the visual field together with the coordinate positions of their corresponding images (the absolute coordinate positions of the calibration points and the positions of the image plane points); from the radial alignment constraint it derives the characteristic matrices for the rotation and translation of the imaging mechanism, obtaining the orientation and position of the camera and fixing its projection center (viewpoint, VP). The focal length is then derived under the hypothesis that the imaging in the central region of the image conforms fully to linear perspective projection, and finally a nonlinear function describes the distortion mechanism of the whole image. Its main advantage is that the camera parameters are obtained with a simple experimental apparatus, and for lenses of small distortion the computed results are quite accurate. But because the underlying hypothesis is a projection function close to linear projection, when the algorithm is applied to a fisheye lens, which deviates severely from the linear projection mechanism, the computed camera parameters carry large errors, and the results can be expected to depend on the pre-arrangement of the calibration points. This calibration approach therefore cannot be carried over directly to wide-angle lenses, such as fisheye lenses, that depart so greatly from linear perspective projection.
In any case, if an artificial vision system could combine a large viewing angle and sharp images with an accurate grasp of the stereoscopic projection mechanism, its fields of application would be broader, its functions stronger, and its market practicality greater. Compared with a wide-angle lens of the same viewing angle, a fisheye lens offers an infinitely focused depth of field, a simple and robust structure, and the possibility of miniaturization. Severely deformed images, however, are a fatal drawback in some applications, so identifying the characteristics of fisheye lenses and their irregular imaging mechanisms, and developing a calibration discipline from them, is an important subject. The accuracy of image correction also affects the range of applications: endoscope systems, which commonly use fisheye lenses, or the vision systems of autonomous robots are difficult to control with high precision when accurate optical parameters of the camera are unavailable.

Because deriving fisheye camera parameters on the basis of the linear perspective projection model has proved inaccurate, various workarounds have successively been proposed for converting fisheye images. One class of methods assumes that the mounted lens makes the camera image according to a "specific projection function" and takes the image itself directly as the basis of computation. Please refer to Fig. 1A and Fig. 1B: Fig. 1A shows a circular imaging region 1 whose boundary has been outlined, and Fig. 1B shows the corresponding projection mapping of the hemispherical visual space. Both figures mark the zenithal angle of an image point (the angle between the incident ray in object space and the optical axis 21, hereinafter α) and the azimuthal angle (the angular component of the image point expressed in polar coordinates with the distortion center as origin, hereinafter β). Borrowing the positioning concept of a globe, β is the angle in the equatorial plane measured from the mapping line 13' of a chosen prime meridian 13, with the distortion center c as origin; π/2 - α is then the latitude and β the longitude. Accordingly, if several image points fall on the same radius of the imaging region 1, the trajectories of the corresponding incident rays in space lie in the same meridional plane (the plane defined by arc C'E'G' and the sphere radius), that is, their β angle is one constant; points D, E, F, and G in Fig. 1A correspond to points D', E', F', and G' in Fig. 1B. (Note: this phenomenon is not peculiar to fisheye lenses; for rectilinear perspective projection lenses it is in fact the radial alignment constraint of Tsai's methodology.)

Besides assuming that the fisheye lens conforms to a "specific projection function", the image-based algorithm above sets several further premises. First, it assumes that the image captured by the fisheye camera (hereinafter, the fisheye image) is circular or elliptical, and that the intersection of its major axis 11 and minor axis 12 (the two diameters) is the distortion center of the image (the principal point, i.e., the image of the optical axis 224). Second, it assumes that the image edge is mapped from horizontal rays (i.e., α = π/2). Third, it assumes that α and the image height (the principal distance, hereinafter ρ) stand in an exactly linear proportion, where ρ is defined as the relative distance between an image point in the imaging region 1 and the distortion center.
For example, in Fig. 1A the distance from point E to point C is exactly half the radius, so α = π/4 is inferred at point E, and the sight ray so determined also fixes the corresponding sight ray in the hemispherical visual space, which passes through point E'; the rest follows by analogy. The coordinates of an image point can be written (u, v) in a Cartesian coordinate system or (ρ, β) in a polar coordinate system, both taking the distortion center as origin, and the space vector of the corresponding sight ray can be written (α, β).

Although the prior art does not discuss what this "specific projection function" is, a lens with this imaging behavior is in optics called an equidistant projection lens (hereinafter EDP), and it is further assumed to have exactly a 180-degree viewing angle (the combination hereinafter called EDPπ). The projection function of equidistant projection is ρ = kα, where k is a constant; when the lens conforms to EDPπ, k is the focal length constant f of the lens. For a camera to meet these conditions, a qualified camera body must be matched with a qualified lens; in general this is a special combination without generality. Under the EDPπ premise, the focal length constant f is obtained by dividing the radius of the imaging region 1 by π/2, and from the image plane coordinates (u, v) the spatial projection angles (α, β) of the corresponding incident ray are easily resolved.

By the above prior-art analysis, an "ideal EDPπ" fisheye image can thus be converted into a rectilinear perspective projection. Such a purely image-based algorithm is simple and needs no additional calibration object, and the reference axis of the conversion is not restricted to the native optical axis.

As for patent disclosures, U.S. Patent 5,185,667 follows the projection imaging mechanism presented in Fig. 1A and Fig. 1B to derive an algorithm converting a fisheye image to conform to the linear perspective projection mode, presenting a hemispherical field of view (180 degrees vertical, 360 degrees horizontal), with applications to endoscopes, surveillance systems, and remote control (U.S. Patents 5,313,306, 5,359,363, and 5,384,588). It is worth noting, however, that this series of U.S. patents does not specifically demonstrate that the lenses used fit this mechanism, which calls the accuracy of the image conversion into question; in current practice, system manufacturers require fisheye lenses of special specification combined with specific camera bodies, so that the patented technique (U.S. Patent 5,185,667) can have commercial value.

In any case, such an image-based algorithm is impractical for most camera systems, because it ignores some basic factors and possible variations. First, please refer to Fig. 2, which shows three typical fisheye projection curves; the adopted EDPπ is only one special case of the projection geometries shown. The native projection mechanism of a lens may instead be one of two others: stereographic projection (SGP, ρ = 2f·tan(α/2)) or orthographic projection (OGP, ρ = f·sin(α)).
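These premises can be made concrete in a short sketch. The following is a minimal illustration only, not part of the patent: the distortion center (uc, vc), the image-circle radius R, and the focal length value f are hypothetical inputs. It converts a pixel to a sight ray under the EDPπ premise, then tabulates the image heights that the three projection functions would predict for the same zenithal angle, showing how they diverge away from the axis.

```python
import math

def edp_pi_pixel_to_ray(u, v, uc, vc, R):
    """Prior-art EDP-pi conversion: pixel -> (alpha, beta) -> sight-ray direction.

    Assumes a known distortion center (uc, vc) and image-circle radius R in
    pixels, and that the image height is linear in the zenithal angle, with
    alpha = pi/2 reached at the edge of the imaging region.
    """
    x, y = u - uc, v - vc
    rho = math.hypot(x, y)              # image height (pixels)
    beta = math.atan2(y, x)             # azimuthal angle about the center
    alpha = (math.pi / 2) * (rho / R)   # EDP-pi: rho proportional to alpha
    ray = (math.sin(alpha) * math.cos(beta),
           math.sin(alpha) * math.sin(beta),
           math.cos(alpha))             # unit vector, +Z along the optical axis
    return alpha, beta, ray

# Point E of Fig. 1A lies at half the radius, so alpha = pi/4, as the text infers:
alpha, _, _ = edp_pi_pixel_to_ray(u=440, v=240, uc=320, vc=240, R=240)
print(round(alpha, 4))  # 0.7854, i.e. pi/4

# The three classic projection functions agree near the axis and diverge at large alpha:
f = 2.8  # a nominal focal length constant in mm (hypothetical)
for deg in (10, 45, 80):
    a = math.radians(deg)
    print(deg, round(f * a, 3), round(2 * f * math.tan(a / 2), 3), round(f * math.sin(a), 3))
# 10 -> 0.489 (EDP)  0.490 (SGP)  0.486 (OGP)
# 80 -> 3.910 (EDP)  4.700 (SGP)  2.757 (OGP)
```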
Another possibility is that the coverage of the viewing angle is not π; it may be larger or smaller. Moreover, the figure shows that the differences among these three fisheye projection mechanisms grow markedly as the α angle of the incident light increases, so locking every fisheye projection model to EDP with a π viewing angle is unreasonable. Second, from the fisheye image alone it is impossible to judge whether the viewing angle of the lens is π, because whatever the viewing angle, the shape presented by the imaging region 1 is always circular (or elliptical). Third, even if the viewing angle is confirmed to be exactly π, radial falloff of the radiometric response is a common phenomenon of ordinary lenses, especially pronounced over larger viewing angles; it makes the image intensity drop sharply at the edge of the imaging region 1, most severely for cheap, simple lenses. In addition, diffraction makes the so-called image boundary hard to fix precisely. Summing up these points, whether or not the lens satisfies the perfect EDPπ assumption, this image-based mode has low accuracy, easily produces errors when deriving the image edge and fixing the distortion center, and the extracted imaging region 1 is itself questionable. It also leaves unsolved the intrinsic and extrinsic camera parameters needed to build a computer vision system, and it cannot obtain the projection center representing the placement of the camera, an important parameter in 3-D metering; practical applications are therefore greatly restricted. And when the area of the image sensor is smaller than the imaging region 1, the image edge cannot be observed at all, leaving such an image-based algorithm helpless.

Furthermore, the research results of Margaret M. Fleck [Perspective Projection: The Wrong Image Model, 1994] show that the projection mechanism of a lens can hardly conform to a single ideal projection mode over its reachable field of view; optical engineers may also design lenses with all kinds of special projection mechanisms to suit the application, such as the fovea lens, so the hypothesis that equidistant projection applies to all fisheye lenses is far-fetched.

From another perspective, even if a lens is designed and manufactured to a definite projection specification, the light-refracting properties of the available materials prevent a perfect realization, and whether the finished lens meets the original expectation is hard to verify. Moreover, once the lens is assembled into a camera body, its optical projection mode depends on the mechanical precision of the assembly. A simple and general technique for checking the optical specification formed by the lens and camera together would therefore give shipments and applications a firmer baseline and greatly increase their practical value.

The Gaussian optical model is a convenient way to describe the imaging logic of an optical system, and camera errors are always referred to it. It treats a camera as a functional "black box" whose characteristics can be defined by several cardinal points: when describing the projection behavior of light, the complex projection geometry of the internal light path can be ignored and the path of a ray described logically from the cardinal points alone.
Please refer to "Figure 3". The base points defined by the Gaussian optical mode include the first and second focal points F1, F2, the first and second main points PI, P2 (principal point), and the first and second nodes (nodal point ), If the entrance and exit interface of the optical system is air, the node can be regarded as the main point; at this time, the first main point P1 is also referred to as the front node 222 '(front nodal point (FNP) and the second main point) The point P2 is also called a back node 223 '(back nodal point, BNP). In addition to this, f, the two principal planes 141, 142 are defined as the reference planes when the light is projected into the optical system and the direction of travel is reversed. The intersection of the two principal planes 141, 142 and the optical axis 224 is Two main points PI, P2. According to these base points and the main planes 141 and 142, light is projected from infinity through the first focus F1, and then it will turn in the direction of travel on the first main plane 141 and be parallel to the optical axis 224, as shown in the figure. CO ′; Conversely, if the light is incident into the optical system in parallel, 200422754 will pass through the second focus F2 after meeting the second main surface 142, as shown by the straight line OB and the straight line BO 'in the figure. Such a projection mechanism has a characteristic: the light projected from the object point 0 to the first main point P1 (the straight line OP1 in the figure) 'turns after passing P1 and travels along the optical axis 224, and then turns after passing the second main point P2 It continues to move in a direction parallel to the straight line OP1 (as shown by the straight line P20 'in the figure) until it is mapped on the image sensor to form an image point 0'. That is, the path of the incident light passing through P1 and the incident light of P2 is spatially parallel; and a single lens close to this phenomenon is only in the paraxial region of the thin lens. However, Gaussian optical mode is the imaging logic pursued by general cameras. The wide-angle lens must be close to this imaging mechanism, which is different from the fisheye lens. Referring to the Gaussian model, the fisheye lens does not have a "single" projection center, which is the point of view of those skilled in this art; However, if the limitations of the Gaussian optical mode can be overcome, the native projection mechanism of the fisheye lens can be analyzed, and a "single" projection center can be logically positioned and its optical parameters can be deduced. In this way, it can not only increase the reliability of fisheye image analysis, but also further expand the application field of fisheye lens and the measurement of stereo images. Therefore, the present invention will accurately explore this subject, so that the camera parameterization process is not limited to the aforementioned various premise and can accurately obtain the light parameter of the fisheye camera. [Summary of the Invention] In view of this, an object of the present invention is to provide an image analysis method that analyzes the optical projection characteristics of its original nature and an apparatus for implementing the method for an imaging system that is seriously deviated from the linear perspective projection mechanism. 
[Summary of the invention]

In view of this, an object of the present invention is to provide, for imaging systems that deviate severely from the linear perspective projection mechanism, an image analysis method that resolves their native optical projection characteristics, together with a device implementing the method.

Another object of the present invention is to provide a method, and a device implementing it, for determining the optical projection parameters of a camera (including the projection center, the orientation of the optical axis, the focal length constant, and the projection mechanism of the camera) entirely from optical phenomena, so that fisheye cameras can be extended to artificial vision applications such as stereoscopic image measurement and three-dimensional positioning.

Another object of the present invention is to propose a representation of image distortion based on image plane coordinates, in which the distortion of the image is quantified directly by the zenithal focal length corresponding to each image point's coordinate position.

A further object of the present invention is to provide a method for checking the spatial projection mechanism of a lens, or of a camera device built with it, as a method and device for setting product specifications or verifying product quality.

For the above purposes, the present invention uses the deformation of the image of a centrally symmetric target: the absolute orientation of the camera is adjusted until the image becomes similar to the target pattern, and an array of object-image conjugate coordinate pairs describing the projection behavior between the visual field and the image plane is acquired as sampling data (an object-image conjugate coordinate pair consists of the absolute coordinates of a correction point on the target and the corresponding image plane coordinates), from which the projection relation between image coordinate points and sight rays in the visual field is derived. The optical parameters of the camera system are thereby obtained.

The implementation of the present invention consults no existing closed-form projection function hypothesis: it derives the optical projection mechanism of the camera and quantifies the camera parameters directly from the mapping between the known absolute coordinates of the correction points and their observed image positions. This is the principal feature of the invention. The disclosed technique breaks through limits that the prior art considered impossible; it can be applied to fisheye cameras or cameras with special projection functions, and can even serve as reverse engineering for analyzing camera-lens devices whose projection mode is unknown.

Since the present invention can derive the camera projection function precisely, its back-projection function can correct (or transform) distorted images, with further applications in the fields of stereoscopic imaging, stereoscopic measurement, and three-dimensional positioning.

To make the objects, features, and advantages of the present invention clearer, a typical embodiment is described in detail below with reference to the accompanying drawings.

[Embodiments]

Before describing the embodiment, the coordinate systems used in this text are defined, to ease the discussion that follows:

1. The absolute coordinate system W(X, Y, Z), with the center of the target layout as origin; the reference direction of the Z base axis points orthogonally away from the target.
2. The image plane coordinate system C'(x, y) or P'(ρ, β), with the distortion center as origin, expressing the image plane in Cartesian or polar coordinates.

3. The pixel coordinate system I(u, v), the directly observable coordinate system of the image presented on the computer display, in pixel units. The distortion center is imaged at position I(uc, vc) of the display. The dimensions C'(x, y) or P'(ρ, β) that the camera maps onto the image plane can be represented analogously in the I(u, v) system; pixel coordinates are also written as the Cartesian coordinates C(u, v), and as P(ρ, β), taking I(uc, vc) as origin.

4. The camera external coordinate system N(α, β, h), describing the geometry of the sight rays with reference to the field of view of the camera 22.

5. The camera internal coordinate system S(α', β', f), describing the imaging projection geometry inside the camera 22.

In the experimental procedures that follow, subscripts indicate the identity of a feature point and array indices the sampling order of the experiment. For example, Wn(a, b, c)[k] denotes that in the k-th measurement the absolute coordinate position of correction point n is (a, b, c); the rest follows by analogy, and fields are omitted where readability is not affected. Concrete examples of the coordinate systems are cited at the appropriate points in the text.

A fisheye lens is a nonlinear projection lens that deviates severely from the Gaussian optical model, meaning that its projection trajectories in space cannot be explained by the familiar linear perspective mechanism of the pinhole model. Compared with other lenses, a fisheye lens exhibits severe barrel distortion; it is often used to create dramatic or special-effect images, but the original appearance of objects is hard to judge directly from the image. Its imaging mechanism nevertheless obeys definite rules. Rule one: the degree of distortion of a fisheye image is centrally symmetric on the image plane, the center point being called the distortion center (principal point), and the optical projection trajectories are likewise symmetric about the optical axis of the camera in the visual field. Rule two: all object points on one particular sight ray in the visual field map to one particular image point on the image plane. The hypothesis of the projection mechanism can be described as follows: incident light emitted from objects in the field of view (hereinafter FOV), whether actively emitted or reflected, converges to a unique optical center in space (also called the projection center or viewpoint, VP) and is then refracted according to the projection function and imaged on the image plane. These rules and this hypothesis are phenomena and theoretical models well known to those skilled in the art.
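As a small bookkeeping aid for the coordinate systems just defined, the sketch below converts a screen pixel I(u, v) to the centered coordinates C(u, v) and to the polar form P(ρ, β). It is illustrative only: the center position, the aspect-ratio value, and the sample pixel are hypothetical, and the aspect correction shown (rescaling the v component) is one plausible convention, since the text derives the actual aspect-ratio parameter later by experiment.

```python
import math

def pixel_to_centered(u, v, uc, vc, aspect):
    """I(u, v) -> C(u, v): re-origin at the imaged distortion center I(uc, vc)
    and undo the pixel aspect ratio so that a circle on the image plane 225
    remains a circle in the corrected coordinates (convention assumed here)."""
    return (u - uc), (v - vc) * aspect

def centered_to_polar(x, y):
    """C(u, v) -> P(rho, beta): polar coordinates about the distortion center."""
    return math.hypot(x, y), math.atan2(y, x)

x, y = pixel_to_centered(400, 300, uc=320, vc=240, aspect=1.1)
print(centered_to_polar(x, y))  # (rho, beta) of the sample pixel, ~ (103.7, 0.69)
```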
The present invention uses the distortion symmetry of the fisheye image stated in rule one, together with a specially designed target, to locate the position of the distortion center on the image plane and the direction and position of the optical axis in space. It then uses the one-to-one mapping between a sight ray in space and its image point stated in rule two to fix the absolute coordinate position of the projection center on the optical axis and to resolve the absolute coordinates of particular sight rays, from which the focal length constant of the camera is derived and its projection model induced. The present invention takes no known camera projection mode (such as equidistant, stereographic, or orthographic projection) as a premise, and can therefore be applied to any camera with fisheye imaging properties or of a similar kind.

The spatial projection symmetry referred to in rule one can be expressed by Fig. 4, which shows the projected light paths between a planar target 30 and a fisheye camera in space. In the figure the fisheye camera is represented equivalently by the fisheye lens 221 and the image plane 225, and the planar target 30 is placed in the FOV of the camera. From a geometric point of view, a plane figure whose geometric arrangement is spatially symmetric about the optical axis 224 will in practice map to a centrally symmetric image inside the camera. Accordingly, a planar target 30 carrying a physical central-symmetry pattern 31 (hereinafter PCP), as shown in Fig. 5, is arranged in the camera's field of view; the PCP 31 has at least a center correction point 38 at the center of the pattern and a plurality of correction points 311-318, 321-328, 331-338 defined by centrally symmetric geometric figures. The relative orientation between the target 30 and the camera is adjusted until a centrally symmetric image 226 (imaged central-symmetry pattern, hereinafter ICP) is obtained on the image plane 225. Once the adjustment is settled, the optical axis 224 passes orthogonally and simultaneously through the distortion center 227 on the image plane 225 and the center correction point 38 of the PCP 31. Since the target 30 can be set beforehand at a known absolute orientation, it serves as a reference constraining the orientation of the optical axis 224 in space, and the feature coordinates of the image blob mapped from the center correction point 38 (that is, the centroid coordinates of the blob) are taken as the distortion center 227 of the image plane 225.

If the projection behavior of the camera can be governed by a circular function (that is, the projection function contains trigonometric functions), the incident rays from the PCP 31 necessarily realize, in essence, a collimating mechanism: the incident rays first converge at a logical optical center in the fisheye lens 221 called the front cardinal point 222 (hereinafter FCP), and are then scattered out from a back cardinal point 223 (hereinafter BCP) according to the projection function and imaged on the image plane 225, forming two cones, an outer one with the FCP 222 and an inner one with the BCP 223 as apex. The FCP 222 and BCP 223 are two reference points describing the projection behavior of the fisheye lens 221, defining the projection spaces outside and inside the fisheye camera. In resolving the projection mechanism of the fisheye camera, the FCP 222 serves as the reference for the sight rays and the BCP 223 as the reference for the image plane 225.
The distance between these two base points is not a camera parameter and may be set arbitrarily; the FCP 222 and BCP 223 can therefore be assumed to merge into a single VP, or the FCP 222 taken directly to represent the VP, to simplify the imaging model. This representation is customary in optics texts that discuss lenses.

The equivalent projection mechanism described by rule two can be illustrated with Fig. 6. As far as the projected light path is concerned, a single piece of image information on the image plane 225 (image point 91 in the figure) cannot distinguish different object points on the trajectory of one sight ray 80 in absolute space (when the target 30 in the figure moves through the three positions p, q, and r, the absolute coordinate positions of the three correction points 313, 323, and 333 are W313[p], W323[q], and W333[r]). Seen the other way round, if at least two distinct object points map to the same image position, then the absolute space coordinates of those points determine the sight ray 80 along which they project, and the intersection of that sight ray 80 with the optical axis 224 is the FCP 222, also called the projection center.

The projection mechanism of any single sight ray 80 (or incident ray) of a fisheye lens can be described by borrowing the Gaussian optics model. Suppose the sight ray 80 meets the optical axis 224 at the FCP 222 (this is the FNP 222' of the Gaussian optical model; see Fig. 3) and, after refraction by the lens, forms an image point 91 on the image plane 225 at image coordinates C'(u, v); from that image point 91 a trajectory parallel to the sight ray 80 can be traced back to obtain a corresponding back nodal point 223' (BNP). If the projection behavior of the sight ray 80 conformed to Gaussian optics, the BNP 223' would coincide with the BCP 223, and simple geometry would yield the focal length constant f of that sight ray 80 from the object distance, object height, and image height. Only for a Gaussian lens is the same focal length value obtained at every image position, so that it is a constant.

If every coordinate point on the image plane 225 can be resolved together with the sight ray 80 it corresponds to in space, the imaging geometry of the camera is completely described, regardless of the projection function of the lens; this is what the present invention discloses.

A fisheye image is severely distorted: from the standpoint of the Gaussian optical model, the sight rays 80 do not all correspond to a unique BNP 223', i.e., there is no unique Gaussian focal length constant. Nevertheless, as described in the preceding paragraph, the Gaussian model can still describe individually the projection geometry of one particular sight ray 80 and its mapped image. The present invention calls the focal length constant obtained in this way the zenithal focal length (zFL), namely the distance between the BNP 223' and the distortion center 227 in Fig. 6; the position of the BNP 223' is determined by the line parallel to the original direction of the sight ray 80 that passes through the observable image point 91 C'(u, v). Since each image point 91 corresponds to an image height, and the value is the same at all positions of equal image height on the image plane, the zFL may also be called the "image height focal length".
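The construction reduces to one line of trigonometry: the parallel to the sight ray drawn back through an image point at height ρ crosses the optical axis at a distance ρ/tan(α) from the image plane, and that distance is the zFL. A minimal sketch, using the equidistant relation ρ = f·α only as an example projection:

```python
import math

def zenithal_focal_length(rho, alpha):
    """zFL of one image point: run a line parallel to the sight ray (zenithal
    angle alpha) back through the image point at height rho; it meets the
    optical axis at rho / tan(alpha) from the distortion center 227."""
    return rho / math.tan(alpha)

f = 2.8  # example focal length constant in mm (hypothetical)
for deg in (1, 30, 60, 85):
    a = math.radians(deg)
    print(deg, round(zenithal_focal_length(f * a, a), 4))
# 1 -> 2.7997   30 -> 2.5393   60 -> 1.6929   85 -> 0.3634
# the zFL approaches f near the axis and shortens as the image height grows
```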
It can thus be inferred that, under the Gaussian optical model, each distinct sight ray 80 corresponds to a unique zenithal focal length, whose value shortens as the image height increases; by this unique correspondence, the zFL parameter of each image point can also describe the imaging mechanism, or the distortion mechanism, of the fisheye lens.

If the projection function of a fisheye lens can be described by a closed-form circular function, that function expresses the relation between the image height of the lens and the zenithal angle (α), the angle between an incident ray in space and the optical axis 224. Taking equidistant projection as an example, the image height (ρ) is the product of the zenithal angle (α) and the focal length constant (f), that is, ρ = f·α, so whenever ρ and f are known, α can be inferred.

Referring again to Fig. 4, the relation between the zenithal angle (α) of the outer cone and the base radius of the inner cone (i.e., the image height ρ) can be described by the projection function; conversely, once this relation is measured, the projection function of the camera can be inferred, and the mechanism is not restricted to a single closed-form function (such as a trigonometric one). The present invention calls a lens whose imaging mechanism can be described by a circular geometric function an ideal lens. Logically, as long as the native projection function of the camera is obtained, an ideal lens admits a unique BCP 223 in the model. As for the outer cone, the existence of a unique FCP 222 is understandable, because the sight rays used to describe absolute space come, in the model, from infinity, and it is reasonable to treat the camera as a single fixed point when defining the projection center.

If the camera device is known to have an ideal lens, then, referring again to Fig. 4, once the absolute coordinate position of the FCP 222 is known, the zenithal angle of a physical object in the field of view follows from a simple tangent function, and the FCP 222 can be located as the intersection of the incident sight ray 80 with the optical axis 224; its corresponding unique image height, referred to the image plane 225 through the projection function and the focal length constant (f), yields the apex BCP 223 of the inner cone.

The absolute coordinate position of the FCP 222 and the optical axis 224 express the position and orientation of the camera, that is, the external parameters, while the focal length constant and the projection function of the camera are internal parameters. The present invention presents a measurement system device and an analysis methodology for examining a camera of unknown optical projection model, from which the internal and external parameters discussed above can be derived.

Realizing the above is the goal of the measurement system designed in this invention, whose arrangement is shown in Fig. 7; the moving positions of the target 30 there correspond to Fig. 6. The whole measurement system can be guided by computer software through an automated measurement procedure, performing image capture, computing the feature coordinates of the image blobs mapped from the correction points, and finally deriving the internal and external parameters of the camera.
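Rule two also suggests the direct computation used with the setup of Fig. 6: two object points known to map to the same image point fix a sight ray 80, and its crossing with the collimated optical axis locates the FCP 222. A minimal sketch, assuming the point coordinates are already expressed in a frame whose Z axis is the optical axis, and taking the closest approach to the axis to absorb measurement noise:

```python
def fcp_from_conjugate_points(p1, p2):
    """Estimate the FCP (projection center) on the optical axis.

    p1, p2: two object points, in a frame whose Z axis is the collimated
    optical axis, observed to map to the SAME image point. Their sight
    ray lies in a meridional plane and so crosses the axis; with noisy
    data we return the Z coordinate of the closest approach to the axis.
    """
    x1, y1, z1 = p1
    dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    t = -(x1 * dx + y1 * dy) / (dx * dx + dy * dy)  # parameter of closest approach
    return z1 + t * dz

# Hypothetical conjugate pair: a point at radius 20 mm seen from 100 mm and one
# at radius 40 mm seen from 200 mm share an image point; their ray meets the
# axis at z = 0, where the FCP sits.
print(fcp_from_conjugate_points((20.0, 0.0, 100.0), (40.0, 0.0, 200.0)))  # 0.0
```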
Broadly speaking, the measurement system is any combination that achieves the functions above. The overall system comprises operating and analysis software together with hardware component modules, and its quality matters: environmental factors such as the device arrangement and the specification and placement of the lighting fixtures affect the experimental data and the measurement results.

The analysis mode of "Figure 6" and the system configuration of "Figure 7" constitute the first embodiment of the present invention. In practice, however, moving the target 30 exposes it to different illumination from the light source 24 at different positions, which affects the measurements; it is also convenient to keep the target 30 as the fixed reference of the absolute coordinate system 28 so that the computations remain uniform. The present invention therefore proposes a second embodiment: as shown in "Figure 8", the target 30 is fixed at one absolute coordinate position and the camera 22 is moved instead, in the pattern shown in "Figure 9". The implementation of the method and the device is described below in terms of the second embodiment, but any method or device consistent with the spirit of the present invention should be regarded as an extension of it and must not be excluded from its scope of protection.

The present invention defines four independent but interconnected coordinate systems for the measurement system; for their placement see "Figure 8":
(1) the absolute coordinate system 28, W(X, Y, Z), defined by the test target 30;
(2) the platform coordinate system 29, W'(X', Y', Z'), which drives the direction and position of the camera;
(3) the pixel plane coordinate system 27, I(u, v), shown on the computer screen and corresponding to the image plane 225 of the camera 22;
(4) the camera coordinate system 26, describing the imaging projection geometry of the camera 22, comprising S(α', β', f) and N(α, β, h). The angles α and β were described above; α' and β' are the corresponding angles of the virtual refracted ray referred to the image plane. Referring again to "Figure 4", S defines the refracted light on the inner cone whose vertex is BCP 223, while N defines the zenithal angle and azimuth angle of the outer-cone sight ray 80. Because the refraction relation is irregular, α' is not equal to α, whereas β' is usually equal to β (note: it may also be interpreted as β + π). The functional correspondence between α' of the inner cone and α of the outer cone likewise represents the camera's imaging model, but α' cannot be observed.

On the image plane, C'(x', y') and P'(ρ', β') are the rectangular and polar coordinates, in physical image size, of a point projected onto the image plane 225, with the distortion center as origin. In the pixel coordinate system 27, I(u, v) gives the rectangular position, in pixel units, of the image as presented on the computer screen; C(u, v) and P(ρ, β) are the corresponding rectangular and polar pixel coordinates taken relative to the distortion center, so that the characteristic pixel coordinate I(u_c, v_c) = C(0, 0) = P(0, β) serves as the origin.
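A minimal sketch of the coordinate bookkeeping just defined: re-expressing a screen pixel relative to the distortion center in rectangular and polar form. Treating the aspect ratio as a vertical rescaling is one common convention and is an assumption here, since the text only notes that the ratio must be accounted for:

```python
import math

def center_referenced(u: float, v: float, u_c: float, v_c: float,
                      aspect: float = 1.0):
    """Re-express a screen pixel I(u, v) relative to the distortion center.

    Returns the rectangular C(u, v) and polar P(rho, beta) forms. Rescaling
    the vertical component by `aspect` is one convention for handling a pixel
    aspect ratio other than 1, so that a circle is measured as a circle.
    """
    cu = u - u_c
    cv = (v - v_c) * aspect
    return (cu, cv), (math.hypot(cu, cv), math.atan2(cv, cu))

# Hypothetical pixel, referred to the distortion center reported later on:
print(center_referenced(400.0, 236.1, u_c=318.1, v_c=236.1))
```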
The absolute coordinate system 28 takes the center point of PCP 31 (the centroid position of the center correction point 38) as its origin. Referring again to "Figure 5", the absolute X axis is defined by the feature coordinates of the horizontal correction points 335, 325, 315, 38, 311, 321, and 331, and the absolute Y axis by the feature coordinates of the vertical correction points 333, 323, 313, 38, 317, 327, and 337; hence W38 = W(0, 0, 0). The position of the test target 30 remains fixed throughout the experiment, so the absolute coordinates of the other correction points 311-318, 321-328, and 331-338 on the target 30 are likewise determined. With the absolute coordinates of the target 30 fixed, the camera 22 is moved along a specific sight ray, and from its images the change of the imaging mechanism of the sight ray 80, defined in the camera coordinate system 26 by the zenithal angle (α) and the azimuth angle (β), can be analyzed; the analysis method is described later.

Referring again to "Figure 8", in order to move the camera 22 it is fixed on a six-axis positioning and adjustment platform 23. The platform 23 is built from three mutually orthogonal rigid base axes, the X' base axis 231, the Y' base axis 232, and the Z' base axis 233, represented by the X', Y', and Z' coordinate axes respectively, with the positive Z' direction pointing away from the target 30. Ideally the three axes W'(X', Y', Z') of the platform coordinate system 29 are made parallel to the three axes W(X, Y, Z) of the absolute coordinate system 28; in reality the two coordinate systems differ by a six-dimensional offset. After the camera 22 is fixed on the platform 23, the three base axes 231, 232, and 233 must be able to drive the camera's platform coordinate position freely, and a universal optical mount 70 installed on the Y' axis 232 (or on the base of the camera 22) fine-tunes the camera 22 in its three directions: pan, tilt, and rotate. Such a mechanical structure can align the optical axis 224 to the Z axis; the collimation method is described later.

The pixel plane coordinate system 27 denotes the two-dimensional memory coordinates produced when the image frame grabber 252 digitizes the video signal from the camera 22 and supplies it to the central processing unit 251 or the digital image processor 253. Logically, values in the pixel plane coordinate system 27 represent positions on the camera image plane 225, but the unit sizes correspond only up to a proportional conversion. The width-to-height ratio of a square image in the pixel plane coordinate system 27 may not be 1; this ratio is called the aspect ratio, and a circular image may consequently be displayed as an ellipse. In actual use the image mapped onto the image plane 225 is displayed on the screen, and the user can only represent image size indirectly by pixel counts. The aspect ratio parameter can also be obtained with the present invention; its details are likewise introduced by example later.
Beyond the mechanical embodiment of the coordinate systems above, in functional terms the measurement system is a device for image capture, for estimating the feature coordinate positions of the correction points, and for actively adjusting the coordinate system positions. The specifications of the other main components of the implemented system are as follows:

1. Camera 22: a black-and-white surveillance camera equipped with a 1/2-inch CCD (Charge Coupled Device) and a fisheye lens (nominal focal length 2.8 mm per its specification), able to focus on an infinitely distant field. It outputs a standard NTSC (National Television System Committee) video signal, which is fed to the image frame grabber 252. Besides the CCD camera described here, the camera embodiment may also be a CMOS (Complementary Metal Oxide Semiconductor) camera or a camera equipped with any other image scanning device.

2. Light source 24: a very important component; the lamp type and placement determine the illuminance distribution, and different light sources lead the system to different results. The measurement system uses two desk lamps with high-frequency inverter ballasts as the illumination source 24 for the target 30. Positioning the light source 24 and the target 30 once and then leaving their direction and position untouched keeps the illuminance of the target 30 stable throughout the experiment.

3. Platform controller 21: connected to the adjustment platform 23, it supplies power and, through software commands, controls and limits the motion of the platform 23; the orientation of the camera 22 can be fine-tuned with manual assistance.

4. Computing unit 25: a general computer system that captures, processes, and computes the images from the camera 22 and commands the platform controller 21 to adjust the position of the camera 22. The central processing unit 251 is a general-purpose CPU that runs the operating software, controls system operation, and manages the captured data; a digital image processor 253 is responsible for the image computations; and an image frame grabber 252 converts the analog video signal of the camera 22 into digital form stored in memory, so that the digital image processor 253 and the central processing unit 251 can compute, in real time, the image feature coordinates corresponding to the correction points 38, 311-318, 321-328, and 331-338 on the target 30. Physically, the frame grabber 252, digital image processor 253, and central processing unit 251 are a personal computer running the MS Windows operating system; the software developed for the experimental operating procedure is described later.

5. Target 30: fixed within the camera's field of view, it provides the absolute coordinate positions used to analyze the sight rays 80. A centrally symmetric pattern 31 (PCP) is drawn on the target 30; it defines a center correction point 38 at the center of the pattern and a plurality of graphically definable correction points.
In the embodiment, these geometric figures lie on three concentric circles centered on the center correction point 38. Eight correction points, 311-318, 321-328, and 331-338, are placed symmetrically on each circle, forming a regular octagon. The radii of the three reference circles are 20 mm, 40 mm, and 60 mm respectively. The positions of the correction points start from the radius at plane angle 0°, with an angular displacement of π/4 between neighbors; each is a black square 8 mm wide and 8 mm high, 24 points in all (a coordinate sketch of this layout follows at the end of this passage). In addition, taking the four outermost correction points 331, 333, 335, and 337 as tangent points, the four vertices of the enclosing square serve as test points 341-344. The target 30 is a computer-aided-design (CAD) drawing printed with an inkjet printer on photo paper made of a material with a high scattering coefficient. Alternatively, the correction points may be built from active light-emitting-diode (LED) elements, so that the target 30 illuminates itself and yields better image quality; the measurement system then does not need the light source 24. In the experiment the target 30 is fixed firmly on the bench, so its absolute coordinate position can be defined precisely.

The PCP 31 usable by the present invention is not limited to the regular octagons on concentric circles shown in "Figure 5"; any concentric, centrally symmetric form is a feasible PCP 31 embodiment, so correction points forming the vertices of regular triangles, squares, or regular even-sided polygons are all applicable. Note that choosing a PCP 31 defined by regular polygons with an even number of sides simplifies the computation. The limit of such polygons is the circle, as in the PCP 31 schematic of "Figure 4". A target 30 that is optically symmetric achieves the same effect.

Before describing the detailed implementation and derivation, the problems the present invention solves are summarized as follows:
1. Deduce the distortion center 227 of the image plane and the absolute direction and position of the optical axis 224 in space;
2. Deduce the absolute coordinates of FCP 222 (the projection center);
3. Deduce the zenithal focal length (that is, the image-height-dependent focal length profile);
4. Deduce the projection function mapping the absolute coordinate system 28 to the camera coordinate system 26;
5. Deduce the image distortion and its correction mechanism.

The present invention proposes experimental methods and interpretation methodologies for these subjects, described as follows:

1. Collimate the camera coordinate system 26 and the absolute coordinate system 28 by adjusting the direction and position of the camera so that the image of PCP 31 is an ICP 226, thereby locating the pixel coordinate position I(u_c, v_c) of the distortion center 227 and fixing the optical axis 224 to W(0, 0, z). Because the spatial projection of a fisheye lens is symmetric about the optical axis, the imaging distortion is centrally symmetric, and PCP 31 is itself centrally symmetric, a centrally symmetric ICP 226 is obtained if and only if the optical axis 224 is collimated to the Z axis of the absolute coordinate system 28.
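The layout of the correction points described above can be reproduced with a short sketch; the choice of angular origin and orientation is an assumption consistent with the horizontal and vertical point assignments of "Figure 5":

```python
import math

def target_points(radii_mm=(20.0, 40.0, 60.0), points_per_circle=8):
    """Absolute X-Y coordinates (Z = 0) of the PCP correction points.

    Numbering follows the text: point n = 300 + 10*m + a for circle m
    (inner to outer, radii 20/40/60 mm) and a = 1..8, stepping pi/4 from
    the 0-degree radius; the center correction point 38 is the origin.
    """
    pts = {38: (0.0, 0.0)}
    for m, r in enumerate(radii_mm, start=1):
        for a in range(1, points_per_circle + 1):
            theta = (a - 1) * math.pi / 4
            pts[300 + 10 * m + a] = (r * math.cos(theta), r * math.sin(theta))
    return pts

pts = target_points()
print(pts[311])   # (20.0, 0.0): a horizontal correction point
print(pts[333])   # (~0.0, 60.0): a vertical correction point on the outer circle
```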
The present invention adjusts the spatial arrangement of the measurement system according to the symmetry, in the on-screen pixel coordinate system 27, of each correction point's displayed image (which may be taken as the centroid coordinates of each correction point's image blob). The absolute coordinate position of the camera 22 is adjusted dynamically by a computer program, while the direction of the camera 22 is adjusted manually. With this procedure the camera coordinate system 26 and the absolute coordinate system 28 can be collimated. When collimation is reached, the geometric center of ICP 226 (for the PCP 31 embodiment of "Figure 5", this is the feature coordinate of the mapped image point of the center correction point 38) is the position of the distortion center 227; at that moment the optical axis 224 passes orthogonally through the geometric center of PCP 31, and the feature coordinate position of the center correction point 38 is the distortion center. The detailed steps are as follows:

1. Set the relative positions of the target 30 and the adjustment platform 23 by eye, so that the three base axes 231-233 of the platform 23 are as parallel as possible to the three coordinate axes of the absolute coordinate system 28;

2. Set the light source 24 so that the target is evenly illuminated, and define the target center W(0, 0, 0) (the geometric center of the center correction point 38) as the origin of the absolute coordinate system 28;

3. Mount the camera 22 on the Y' base axis 232 of the platform 23. The base of the camera 22 carries a universal optical mount 70 so that the camera's three axis directions can be adjusted manually, the aim being to make the optical axis S(0, 0, f) of the camera coordinate system 26 coincide with the Z' base axis 233 of the platform coordinate system 29, W'(0, 0, z); then, when the camera 22 moves along the Z' base axis 233, it can be regarded as moving along the optical axis 224. In practice, therefore, try to collimate the Z axis of the absolute coordinate system 28, the Z' axis of the platform coordinate system 29, and the optical axis S(0, 0, f) of the camera coordinate system 26 onto one straight line as nearly as possible;

4. Using the platform 23, change the position of the camera 22 on the X' base axis 231, Y' base axis 232, and Z' base axis 233 so that the image blobs of the four test points 341-344 sit on the four borders of the computer screen, maximizing the range covered by the correction;

5. Run a background symmetry-analysis program that continuously tracks the geometric center of ICP 226 (that is, the feature coordinate of the mapped image point of the center correction point 38). Referencing the pixel coordinate position I(u38, v38) of correction point 38, it computes the "distortion index parameters" and the "horizontal/vertical coordinate deviations" of the mapped image points of the correction points 311-318, 321-328, and 331-338, displays these values on the computer screen, and feeds them back to the computer program, which commands the platform controller 21 to drive the adjustment platform 23 and change the position W'(x', y', z') of the camera 22 in the platform coordinate system 29 while the direction of the camera 22 is adjusted manually.
The aim is to drive the "distortion index parameters" and the "horizontal/vertical coordinate deviations" to their optimum. When the screen shows that these parameters have reached the set thresholds, the symmetry of ICP 226 meets the requirement and the procedure moves to the next step; otherwise this step is repeated;

6. Record the "distortion index parameters", the "horizontal/vertical coordinate deviation" values, and the resulting "object-image conjugate coordinate pair", that is, (W_c'(x', y', z')[0], I_n(u, v)[0]), where W_c'(x', y', z')[0] is the position of the camera in platform coordinates and I_n(u, v)[0] is the pixel coordinate position of each correction point; at this moment the index is k = 0. Here n may be 38, 311-318, 321-328, or 331-338, denoting any correction point of PCP 31 in "Figure 5", and I38(u, v)[0] is the distortion center I(u_c, v_c) located by this experimental procedure. k = 0 denotes the initial alignment of the system, and each move to the next position increments k by 1; "Figure 6" and "Figure 9", for instance, show the p-th, q-th, and r-th measurements. At this point the test procedure has aligned the camera coordinate system 26 with the absolute coordinate system 28.

Before continuing, the "distortion index parameters" and "horizontal/vertical coordinate deviations" are introduced in detail. Throughout the experiment the background symmetry-analysis program runs to guide the adjustment of the system on line: besides showing the PCP 31 image on the screen, it displays, as text or graphics, indices of the image's symmetry (called the symmetry indices), according to which the system automatically adjusts the position of the camera 22 while its direction is adjusted manually. The "distortion index parameters" and "horizontal/vertical deviations" are defined as follows:

a. Distortion index parameters (su[m][k], sv[m][k]): the sums of the differences, in the u component and the v component respectively, between the mapped image point of each correction point and the mapped image point of the center correction point 38 in the pixel coordinate system 27 (I_n(u, v)). With the PCP 31 numbering of "Figure 5", they are computed as

  su[m][k] = Σ_{a=1..8} ( u_(300+10m+a)[k] − u_38[k] )   (1)

  sv[m][k] = Σ_{a=1..8} ( v_(300+10m+a)[k] − v_38[k] )   (2)

where 1 ≤ a ≤ 8 and, at this stage, k = 0; u_(300+10m+a) is u_n, the u component of I_n(u, v), and v_(300+10m+a) is likewise its v component. (su[m][k], sv[m][k]) are the distortion index parameters. Since the correction points on each concentric circle are distributed with central symmetry, the values of (su[m][k], sv[m][k]) should approach zero when the symmetry of ICP 226 is ideal.

b. Horizontal coordinate deviation: the descriptive-statistics standard deviation of the sequence formed by the measured v components (vertical components) of the feature coordinate positions, in the pixel coordinate system 27 (I_n(u_n, v_n)[k]), of the mapped image points of all horizontal correction points on PCP 31. For the PCP 31 embodiment of "Figure 5", n = 335, 325, 315, 38, 311, 321, 331; that is, the standard deviation of the sequence v335[k], v325[k], v315[k], v38[k], v311[k], v321[k], v331[k].

c. Vertical coordinate deviation: the descriptive-statistics standard deviation of the sequence formed by the measured u components of the feature coordinate positions, in the pixel plane coordinate system 27 (I_n(u_n, v_n)[k]), of the mapped image points of all vertical correction points on PCP 31. For the PCP 31 embodiment of "Figure 5", n = 333, 323, 313, 38, 317, 327, 337; that is, the standard deviation of the sequence u333[k], u323[k], u313[k], u38[k], u317[k], u327[k], u337[k].
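A sketch of the three symmetry indices as defined in (1)-(2) and in items b and c above; whether the text intends the sample or the population standard deviation is not specified, so the sample form is assumed here:

```python
import statistics

def distortion_indices(I, m):
    """Equations (1)-(2): (su[m], sv[m]) for concentric circle m in 1..3.

    I maps a correction-point number n to its measured pixel feature
    coordinate (u_n, v_n); point 38 is the center correction point.
    """
    u38, v38 = I[38]
    su = sum(I[300 + 10 * m + a][0] - u38 for a in range(1, 9))
    sv = sum(I[300 + 10 * m + a][1] - v38 for a in range(1, 9))
    return su, sv

def horizontal_deviation(I):
    """Item b: std. deviation of the v components of the horizontal points."""
    return statistics.stdev(I[n][1] for n in (335, 325, 315, 38, 311, 321, 331))

def vertical_deviation(I):
    """Item c: std. deviation of the u components of the vertical points."""
    return statistics.stdev(I[n][0] for n in (333, 323, 313, 38, 317, 327, 337))
```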
Through the pixel coordinate system 27, the horizontal and vertical coordinate deviations above allow the horizontal and vertical directions of the camera 22 to be collimated to the absolute coordinate system 28.

Minimizing these symmetry indices collimates the optical axis 224 S(0, 0, f) to the Z axis of the absolute coordinate system 28, meaning that the Z axis passes orthogonally through the distortion center 227 of the image plane 225, I(u_c, v_c), and that the optical axis 224 can be traced through the known absolute coordinate positions of PCP 31. The absolute position of the camera 22 (that is, the projection center of the camera), however, is still unknown at this point.

The aspect ratio is also one of the camera correction parameters, and the present invention obtains it easily, because the vertical and horizontal components of the reference ICP 226 (I_n(u_n, v_n)[k]) directly reflect the aspect ratio of the camera system. If the aspect ratio equals 1, then ideally, after correction, the images of all vertices of the same regular polygon (or concentric circle) on PCP 31 have the same image height (ρ); in practice this was indeed found to hold.

2. Move the camera so that the images of distinct correction points on the same radius of PCP 31 overlap, deducing the absolute coordinate position of the shared sight ray and locating the camera's projection center

Deducing the imaging mechanism of the camera 22 by resolving the different absolute coordinate positions that correspond to a single image point 91 is a key idea of the present invention. Those positions compose a sight ray such as sight ray 80; in this model, any image point can be resolved into one corresponding sight ray, referred to here as its shared sight ray.

Using the arrangement of the measurement system in "Figure 8", the camera 22 is moved along the optical axis 224, locked to the center normal of the target 30. As the object distance grows, the image of each correction point approaches the distortion center 227, during which distinct correction points map into overlapping image ranges. The relative displacement of the camera 22 (driven actively by the program) is under active control, so this displacement data, the captured feature coordinate positions of the corresponding image points, and the known absolute coordinate positions of the correction points together allow the absolute space coordinates on which a shared sight ray of the camera 22 lies to be derived.

Referring first to "Figure 6" again, the absolute positioning of a shared sight ray is explained with the first embodiment. Here the camera 22 is fixed and the target 30 is moved: if at least two distinct correction points (such as the three vertical correction points 313, 323, and 333 in the figure) are moved through at least two different absolute space coordinate positions (W313[p], W323[q], and W333[r] in the figure) and map jointly onto the same image point I(u, v) 91 of the image plane 225, these define the shared sight ray 80 of I(u, v) 91. Since the straight lines defined by the correction points on each diameter of PCP 31 are always orthogonal to the optical axis 224, driving the target 30 along the optical axis 224 produces the image overlap I313(u, v)[p] = I323(u, v)[q] = I333(u, v)[r].
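Under the idealized, noise-free assumption (whose practical limits the text discusses further below), the projection center follows from a single shared sight ray by elementary geometry; the numbers in this sketch are hypothetical:

```python
def fcp_from_shared_sight_ray(W1, W2):
    """Idealized first-embodiment geometry: two distinct correction points,
    observed at the same image point, lie on one sight ray; after collimation
    the optical axis is the Z axis, so the FCP is where that ray crosses
    X = Y = 0 (W1 and W2 must not share the same X for this parametrization).
    """
    (x1, y1, z1), (x2, y2, z2) = W1, W2
    t = x1 / (x1 - x2)          # solve x = 0 along W1 + t * (W2 - W1)
    return (0.0, y1 + t * (y2 - y1), z1 + t * (z2 - z1))

# Hypothetical numbers: a 20 mm radius seen with the target plane 100 mm away
# and a 40 mm radius seen 200 mm away lie on one ray through the origin:
print(fcp_from_shared_sight_ray((20.0, 0.0, 100.0), (40.0, 0.0, 200.0)))
```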
This overlap condition is the same as Tsai's Radial Alignment Constraint; it is a property of any projection mechanism that is radially symmetric about the optical axis and is, essentially, accepted in the prior art. The intersection of the shared sight ray 80 with the optical axis 224 is FCP 222, the projection center, and it is also the absolute space position of the camera.

Besides marking the coordinate position I(u, v) of the distorted image point 91 produced by the fisheye projection, "Figure 6" also marks the coordinate position I(u°, v°) of the corrected image point 92 under a linear perspective projection mechanism; the difference between the two points is what is traditionally called the distortion of I(u, v).

Considering, in practice, the constancy of the target illuminance and the simplification of the computation, the actual experiments of the present invention adopt the second embodiment, moving the camera 22 while the position of the target 30 stays fixed, which yields a projection mechanism equivalent to that of "Figure 6". Referring to "Figure 9", the camera 22 (represented by its FCP 222) is driven away from the target 30 with its optical axis 224 collimated to the Z' base axis 233, so that the relative displacements of the three distinct correction points 313, 323, and 333 on the target 30 with respect to the camera 22 change. The absolute coordinate positions W313, W323, and W333, at the three test orders p, q, and r (meaning the FCP 222 of the camera 22 sits at W_c'[p], W_c'[q], and W_c'[r] respectively), then all map to the same position on the image plane 225, that is, the pixel coordinate position I313[p] = I323[q] = I333[r].

Let the distance between the position of the camera's FCP 222 at the start of the experiment (W_c'[p]) and the center correction point 38 (W38(0, 0, 0)) be D. Keeping its direction, the camera 22 is driven along the optical axis 224 a number of times in sequence; the image feature coordinate positions mapped by the correction points 38, 311-318, 321-328, and 331-338 are captured and paired with the position of the camera 22 in the platform coordinate system 29, forming the object-image conjugate coordinate pairs (W_c'[k], I_n[k]), where k is the experimental sampling order.

The data capture procedure has two parts: (1) after the camera 22 has moved by dZ' along the Z' base axis 233, the background symmetry-analysis program actively fine-tunes the W'(x', y') position of the camera 22, keeping its direction unchanged, with the aim of keeping the symmetry of ICP 226 optimal; (2) the object-image conjugate coordinate pair W_c'(x', y', z')[k], I_n(u, v)[k] is captured. Executed repeatedly, this builds an "object-image conjugate coordinate pair array". Continuing the program steps of the previous section, in detail:

7. The arrangement of the measurement system continues from the previous step (the optical axis 224 is already collimated to the Z axis of the absolute coordinate system 28, and (W_c'[0], I_n[0]) has been obtained). Let the initial distance between the camera's FCP 222 and the target 30 at this moment be D; this is the quantity the computation aims at. (Note: from here on the direction of the camera 22 generally needs no further adjustment.)
8. Increment the position index k and actively command the platform coordinate position so that the camera 22 moves a distance dZ along the Z' base axis 233;

9. Actively fine-tune the W'(X', Y') position of the camera 22 according to the symmetry indices displayed on screen (the distortion index parameters and the horizontal/vertical deviations); when the image symmetry of ICP 226 reaches the preset standard, record the object-image conjugate coordinate pair (W_c'[k], I_n[k]);

10. If the camera position has not exceeded the preset number of samples, jump back to step 8; otherwise continue;

11. Close the background symmetry-analysis program;

12. Data capture is complete, yielding the "object-image conjugate coordinate pair array" for the parameter computation;

13. Deduce the relevant coefficients of the camera parameters (the deduction method and the computed parameters are explained below).

The data obtained with the actual implemented system are used below to demonstrate the practicality of the methodology of the invention. In the actual measurement, the displacement of the camera 22 along the Z' base axis 233 was set to 10 mm per step, 19 displacements in all, forming the array (W_c'[0..19], I_n[0..19]) of 20 conjugate coordinate pairs. After this measurement procedure, the captured data array is turned into profile curves from which the camera parameters are computed in the subsequent deduction steps:

1. W_c'[0..19], the profile of the position of the camera 22 in the platform coordinate system 29: "Figure 10" shows the sequence of positions at which the camera 22 obtains ICP 226 in the platform coordinate system 29 during the experiment, distributed from W_c'[0] = W'(-7.5 mm, -15 mm, 0 mm) to W_c'[19] = W'(-8 mm, -19 mm, 190 mm); it represents the position and direction of the optical axis 224 in the platform coordinate system 29. W_c'[0] = W'(-7.7 mm, -15.0 mm, 0 mm) expresses the coordinate-system offset at the start of the experiment, -7.7 mm horizontally and -15.0 mm vertically, and Z_c'[0] at this position is set as the platform reference point. Although the plotted X_c'[0..19] and Y_c'[0..19] profiles deviate, they remain linear, showing that the symmetry of the reference ICP 226 image can effectively trace the direction and position of the optical axis 224; it also shows that the platform coordinate system 29 and the absolute coordinate system 28 as arranged are not perfectly collimated, but the deviation is small: relative to Z_c'[0..19], about three parts per thousand in the X direction and two parts per hundred in the Y direction. This result also means that using the Z' displacement directly to represent the displacement of the camera 22 in the absolute coordinate system 28 is reliable, since the resulting error is only about 0.002%. The absolute distance of the camera 22 during the experiment can therefore be taken as Z_c'[k] plus the initial distance D between the camera 22 and the target 30.

2. I38(u, v)[0..19], the profile of the position of the center correction point 38 in the pixel coordinate system 27: "Figure 11" marks the feature coordinates of the image blob of the center correction point 38. By the spatial projection symmetry of the camera 22 in "Figure 4", I38(u, v)[k] should remain fixed at one specific pixel coordinate position and should not vary with the displacement W_c'[0..19] of the camera 22 in the platform coordinate system 29.
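A sketch of how the stability of such a center-point track can be scored; the next passage reports a linear fit for the distortion center, which is replaced here, as a simplification and an assumption, by a plain mean with standard deviations:

```python
import statistics

def distortion_center_estimate(track):
    """Score the stability of the tracked center-point blob I38(u, v)[0..k]:
    the mean estimates I(u_c, v_c) and the standard deviations measure how
    credibly the track stayed fixed."""
    us = [u for u, _ in track]
    vs = [v for _, v in track]
    return ((statistics.fmean(us), statistics.fmean(vs)),
            (statistics.stdev(us), statistics.stdev(vs)))

# Hypothetical track hovering near the position reported below:
print(distortion_center_estimate(
    [(318.1, 236.1), (318.3, 236.0), (317.9, 236.2), (318.2, 236.1)]))
```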
"Figure 11" indicates the measured pixel coordinate position of the image distortion center 227. Using these measurements to deduct the position, the horizontal and vertical standard deviations of the calculation's description statistics are 0 · 25, and 0.18 pixel unit, and the distortion center 227 is located at I (uc, ve) = Ι (318 · 1,23 6 · 1) at pixel. A slight standard deviation 値 indicates that the experimental results are credible. 验证 The hypothesis that the coordinate position of the distortion center 227 is set is verified. 3. (Pm [0 "19]; m = [l " 3]) ICP [0 " 19] Long section of feature radius presented in the pixel coordinate system 27. Please refer to "Fig. 12 A" when the camera 22 is moved to We '[0..19], * PCP 31 The average of the image heights corresponding to the correction points 311-318, 321-328, and 331-338 defined by the three concentric circles (respectively represented by the inner and outer subscripts m = [1..3]) at the pixel coordinate system 27 value. With camera coordinates Z. ’[0..19]’ s change ’is calculated as: 1/2 where [幻 = | 别) 2-(v #] m]) 2] (3) ^ / i = 300fwi * 10fl 38

where 1 ≤ m ≤ 3; k is the experimental order, m denotes the layer of the concentric circles from inner to outer, n denotes the numbering of the correction points 311-318, 321-328, and 331-338 in "Figure 5", and ρ_m[k] is the array of mean image heights of each concentric circle's correction points for each experiment. Replotting the same data as "Figure 12B" shows clearly that the mean image heights of the three concentric circles (ρ1[0..19], ρ2[0..19], and ρ3[0..19]) have overlapping portions; this phenomenon supports the hypothesis set by the present invention: the measured range of image heights already hides the information needed to locate a shared sight ray 80. Ideally, once the ICP reaches image symmetry, the image heights of the correction points on one and the same circle are all equal; in practice, the descriptive-statistics standard deviation of the measured image heights of the correction points of a given circle is 0.22 pixel. This confirms that a system built on a circularly symmetric light projection mechanism is workable in practice.

From the measured data above, the present invention deduces the optical parameters of the camera. Taking first the first embodiment, and referring again to "Figure 6": if W_c is taken as the origin, the zenithal angle α (the angle between the sight ray and the optical axis) can be written as

  α[k] = tan⁻¹(R1 / Z[p]) = tan⁻¹(R2 / Z[q]) = tan⁻¹(R3 / Z[r])   (4)

where Z[p] is the length marked D in the figure, the others following by analogy, and R[1..3] are the radii of the three concentric circles on the target 30 (with reference to "Figure 6", these are the object heights in absolute space). If W[p..r] are known, then W[p], W[q], and W[r] compose a line segment whose extension crosses the known direction of the optical axis 224 and so determines the absolute coordinates of W_c. Likewise, referring to "Figure 9" with the target 30 fixed, the platform coordinate positions W_c[p..r] through which the camera 22 moves are observable and controllable; if the displacement distances between the three positions are known, the constraint that the optical axis 224 is perpendicular to the target 30 can be added, and W_c[p], W_c[q], and W_c[r] are obtained from similar triangles. This is the theoretical mode in which the present invention solves the camera's FCP 222.

In practice, however, limited by the number of samples of the experimental procedure, it is hard to obtain image heights (or image position coordinates of the correction points) that coincide exactly, and random noise in the image signal causes unavoidable errors in the located feature coordinates. This also implies that even if different correction points yielded completely identical image feature coordinate values at distinct absolute coordinate positions, those values could not be applied directly to computing the absolute direction and position of a shared sight ray 80 and so fixing FCP 222.

In view of these practical limits, the present invention proposes another way to analyze the measured data. The experimental data can be collected into three groups: the image heights (ρ_m[0..19]; m = [1..3]), the object heights (R_m; m = [1..3]), and the camera displacements (W_c'[0..19]). The present invention uses these data, which are by now over-determined, to deduce the position of the camera's FCP 222 (the projection center) and its projection mechanism.

First, the mapped image height of a given object height is inversely related to its distance from the camera (the object distance); the phenomenon is visible in "Figure 12A", but it does not by itself accurately tie down the projection mechanism of the camera 22. On the basis of the shared-sight-ray hypothesis, if the object height (that is, the physical radius length of PCP 31) is expressed from another viewpoint, the zenithal angle α, then the common meaning of the three image profiles in "Figure 12A" can be connected: the image heights overlap if and only if the optical angles overlap.
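Equations (3) and (4) in sketch form; the helper names and the trial distance value are assumptions, and the image of the center correction point, I[38], is taken as the distortion center, as the text does:

```python
import math

def mean_image_height(I, m):
    """Equation (3): mean radial distance (image height) of circle m's eight
    correction-point images from the center point's image I[38]."""
    u38, v38 = I[38]
    return sum(math.hypot(I[300 + 10 * m + a][0] - u38,
                          I[300 + 10 * m + a][1] - v38)
               for a in range(1, 9)) / 8.0

def zenithal_angle(object_height_mm, object_distance_mm):
    """Equation (4): alpha = arctan(R / Z), usable once the distance Z from
    the hypothesized projection center to the target plane is fixed."""
    return math.atan2(object_height_mm, object_distance_mm)

# For a trial initial distance D and the 10 mm steps of the experiment,
# the zenithal-angle profile of the inner (R = 20 mm) circle would be:
D_trial = 250.0                                   # hypothetical value, mm
alphas_inner = [zenithal_angle(20.0, D_trial + 10.0 * k) for k in range(20)]
```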
Since any shared sight ray 80 corresponds to a unique zenithal angle, expressing the object height as α allows the image height to be interpreted consistently. To convert an object height into the zenithal angle α, the position of the projection center (FCP 222) must be determined first, so that the object distance is correct. In other words, the overlap of the image heights ρ during the experiment implies that the object heights, expressed as zenithal angles α, overlap as well. Adding the object distance (the distance between the camera 22 and the target 30) into the deduction of the camera's sight rays 80, the same overlap of the zenithal angle α will be obtained over the image-height overlap range of "Figure 12".

Therefore, on the basis of "Figure 9", a trial-and-error test is carried out along every point of the optical axis 224 (note: the absolute coordinates of the optical axis are known at this stage): one assumes, point by point, that the distance between a given point on the optical axis 224 and W_c[p] is D[p]; since the displacements between W_c[p], W_c[q], and W_c[r] are settings of the experimental procedure, D[q] and D[r] follow. At these three different positions one can then reference an equal image height, I313[p], I323[q], and I333[r] as illustrated, and only when the value of D[p] is correct does the tangent function convert the object heights into equal corresponding zenithal angles (α313, α323, and α333).

In the experiment, conjugate coordinate pairs were captured at 20 positions; that is, for each object height R_m an image-height profile ρ_m[0..19] is obtained at the known positions W_c'[0..19]. Assuming the object distance at W_c'[0] to be D[0], the whole experiment yields the 20 object distances D[0..19], and referencing D[0..19] against the object heights (the radii of the test target) gives the zenithal-angle profiles (α_m[0..19], m = [1..3]). The overlap of the trajectories formed by the zenithal-angle profiles (α_m[0..19], m = [1..3]) deduced from (ρ_m[0..19], m = [1..3]), called in the present invention the first overlap index, locates the position of the camera 22; this overlap appears only when the test value D[0] positions FCP 222 correctly. This is the first method proposed by the present invention for resolving the position of the camera 22.

Referring to "Figure 13": once the correct value of D[0] has been obtained, the curve trajectories of the data points of zenithal angle α_m[0..19] against image height ρ_m[0..19] show that the zenithal-angle profiles corresponding to the three concentric-circle radii overlap very well.
The functional relation expressing the image height ρ in terms of the zenithal angle α is the projection function of the camera 22, so the curve presented in "Figure 13" is what the optical-lens field calls the projection curve, or projection function. No method has yet appeared in the prior art for measuring, with a device, a camera of nonlinear perspective projection model (as opposed to the lens alone); the present invention achieves it with simple instrumentation. Moreover, if the value of D[0] is offset by some amount, 50 mm for example, the zenithal-angle trajectories diverge visibly, as shown in "Figure 14".

These results show that the object-image conjugate coordinate pair array can deduce the projection function and locate the position of the camera 22 (that is, locate FCP 222), and the approach proposed by the present invention can be applied broadly to cameras 22 of all projection models. (Note: the projection of this embodiment is close to EDP, but that is only a special case; any projection model can be solved by the method of this invention.)

The projection curve describes the imaging mechanism of the camera 22, but it cannot directly quantify the distortion of the camera system. From the measurement results, the projection mechanism of the lens discussed in the embodiment is close to equidistant projection (EDP), since its projection curve is nearly a straight line. From the viewpoint of linear projection, the distortion of the image and the image height stand in a nonlinear, negatively proportional relation. To make the distortion mechanism of the camera system easier to analyze, the present invention defines another optical parameter, the "zenithal focal length" (hereinafter zFL): as shown in "Figure 6", the distance between BNP 223' and the distortion center 227 in the image plane is the focal length constant in the Gaussian optical mode. This parameter corresponds to the imaging mode of the sight ray 80 at zenithal angle α, and its value is

  zFL_m[0..19] = ρ_m[0..19] · cot(α_m[0..19])   (5)

From the viewpoint that a single image coordinate point I(u, v) corresponds to a unique shared sight ray 80, zFL can be regarded as the focal length constant into which an image coordinate point is converted with reference to the model of linear perspective projection. This focal length varies with the image height ρ, and the larger the variation, the larger the radial negative distortion of the camera system. Further, an image height can be interpreted as a corresponding spatial zenithal angle α, and from the viewpoint of the imaging mechanism it depends in turn on the corresponding zFL, so the function zFL(ρ) exhibits the distortion of the camera system directly. It is referred to below as the "zFL curve" (zenithal focal length curve) or "zFL function".

Expressing the image heights ρ_m[0..19] of "Figure 12A" as zFL_m[0..19] also requires reference to the object distance, and the overlap of the resulting profiles can likewise be used to locate the camera's FCP 222. This is the second method proposed by the present invention for resolving the position of the camera 22: the image-height profiles ρ_m[0..19] shown in "Figure 12A" can be represented by zFL_m[0..19] and interpreted consistently.
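Equation (5) in sketch form, evaluated along one concentric circle for a trial initial distance D; the function names are assumptions:

```python
import math

def zfl(rho_px, alpha_rad):
    """Equation (5): zenithal focal length, zFL = rho * cot(alpha)."""
    return rho_px / math.tan(alpha_rad)

def zfl_profile(rhos, object_height_mm, D_mm, dz_mm=10.0):
    """zFL[k] along one concentric circle: rhos[k] is the measured mean image
    height at sample k, imaged from the trial distance D + k * dz."""
    return [zfl(r, math.atan2(object_height_mm, D_mm + dz_mm * k))
            for k, r in enumerate(rhos)]
```

For a correct D the three circles' zFL(ρ) profiles collapse onto one curve, as in "Figure 15"; a wrong D splits them apart, as in "Figure 16".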
Accordingly, by trial and error one assumes, point by point, that a given point on the optical axis 224 is the FCP 222 of the camera 22; from this the initial measurement distance D[0] is inferred, (ρ_m[0..19], m = [1..3]) is converted into the corresponding (zFL_m[0..19], m = [1..3]), and the overlap of the trajectories so deduced, called in the present invention the second overlap index, locates the position of the camera 22. "Figure 15" shows very good overlap, meaning the experimental result can fix the camera's true FCP 222 at a point on the optical axis 224. Conversely, if the test point is offset by 5 mm, the zFL trajectories diverge conspicuously, as shown in "Figure 16". "Figure 15" can also be read directly as the distortion mechanism of the camera, that is, the degree of distortion of the image.

It is worth noting, on comparing "Figure 14" with "Figure 16", that the divergence of the curves caused by a 50 mm offset of D in "Figure 14" is less pronounced than that caused by a mere 5 mm offset in "Figure 16". It follows that the sensitivity of zFL(ρ) as a test of the camera's projection center position is far higher than that of the zenithal-angle function α(ρ); this also means that, in practical application, using the overlap of the zFL(ρ) curves to locate the camera's FCP 222 is the preferable way. In addition, the focal length of the lens of the camera 22 can be obtained from the position where the image height ρ_m[0..19] approaches zero; an ideal lens generally takes this value as its focal length.

To make the projection-center location procedure of the present invention more general, applicable to any projection function, the invention also proposes a way to score the overlap of the trajectory profiles described above. Since the zFL(ρ) function has the higher discriminating power, it is taken as the example: the three data groups of "Figure 15" and "Figure 16" are re-sorted by the reference image height ρ and presented in "Figure 17", and the "divergence length" (or "characteristic length") of the profile is used to evaluate the overlap of the zFL(ρ) curves. The computation of the "characteristic length", illustrated by "Figure 17", connects the data points of neighboring ρ against zFL and sums the total length of the polyline joining all the points. If the characteristic length after connecting all the points is minimal (the curve marked zFL in the figure), the overlap of the corresponding zFL(ρ) trajectory is the best, and that test point is the projection center (viewpoint, i.e., FCP 222) of the camera 22; otherwise, as with the zFL_shift curve in the figure, a markedly longer divergence length appears.

Furthermore, the present invention can go on to use the image properties of the original PCP 31 to evaluate the arrangement quality of the measurement system, correct the system arrangement accordingly, and predict whether the camera system can be identified at all. Since the distortion mode of a camera 22 cannot be predicted, the projection mechanisms of some cameras 22 may, through defects, deviate severely from the expected mode.
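The characteristic-length score and the resulting search over trial distances, as sketches; whether the ρ and zFL axes are normalized before the polyline length is summed is not specified in the text and is left to the caller here:

```python
import math

def characteristic_length(points):
    """Divergence ("characteristic") length of a pooled zFL(rho) trajectory:
    sort the (rho, zFL) data points by rho and sum the lengths of the
    polyline segments joining neighbours; minimal length = best overlap.
    """
    pts = sorted(points)
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def locate_fcp(candidate_Ds, pooled_profile_for):
    """Pick the trial distance whose pooled zFL(rho) profile is most compact.
    `pooled_profile_for(D)` must return [(rho, zFL), ...] merged over all
    three circles; it is an assumed helper, not defined by the patent.
    """
    return min(candidate_Ds,
               key=lambda D: characteristic_length(pooled_profile_for(D)))
```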
In addition, the present invention can further use the imaging quality of the original PCP 31 to evaluate the layout quality of the measurement system, and thereby correct the system layout and predict whether the camera system can be identified at all. Since the distortion mode of a camera 22 cannot be predicted in advance, the projection mechanism of some cameras 22 may, owing to defects, depart severely from expectation. For example, if the optical axis 224 of the lens group and the image plane 225 of the camera 22 are not orthogonal, a completely symmetrical image cannot be obtained no matter how the alignment is corrected; the present invention, however, makes it possible to screen out such unsuitable cameras 22 early and abandon their calibration. In summary, whether the projection function α(p) or the zFL(p) function of the camera 22 is used, the purpose of identifying the camera specifications can be achieved and the relevant optical projection parameters can be obtained. It can be seen that the method and measurement system proposed by the present invention can be used to analyze the imaging mechanism of the camera 22, that the distribution of the measured data can further guide or correct the arrangement of the measurement system and establish the reliability of the measured parameters, and that the results can finally be used to calibrate the camera or to develop image processing and conversion technology.

[Effects of the invention]

The method and device for obtaining the optical projection parameters of a camera provided by the present invention have the following advantages:

1. The optical axis of the camera can be accurately determined, the absolute position of the camera (that is, the projection center) located, and the projection curve and focal length constant of the camera found.
2. The distortion of image coordinates can be quantified by the "optical axis deflection angle focal length function".
3. The reliability of the measurement system can be identified from the experimental data.
4. The quality of the camera under test can be identified from the experimental data.
5. An image coordinate point can be directly converted into a spatial projection angle.
6. The method can be applied to stereo image metrology.
7. The calibration method is simple and low in cost, and is suitable for any camera with a non-linear projection mechanism.

Although the present invention has been disclosed above by way of a preferred embodiment, this is not intended to limit the invention; anyone skilled in the art may make changes and refinements without departing from the spirit and scope of the present invention, and the scope of protection of the invention shall be determined by the scope of the attached patent claims.

[Brief description of the drawings]

Figures 1A and 1B show the image analysis diagram of a fisheye image correction method based on an ideal plane image, and its corresponding spatial projection diagram;
Figure 2 shows the projection function curves of three known types of fisheye lenses;
Figure 3 shows the imaging optical path diagram of the conventional Gaussian optics (Gaussian model);
Figure 4 shows a perspective view of the projection light path of a target through a fisheye lens in an embodiment of the present invention;
Figure 5 is a schematic view of an embodiment of a target designed according to the spirit of the present invention, an octagonally symmetrical pattern defined by three concentric circles;

Figure 6 is a schematic diagram of the theoretical model of the first embodiment of the method of the present invention, showing the light paths by which different correction points are imaged onto the same image point as the target moves to different absolute positions;
Figure 7 is a schematic diagram of the system layout of the first embodiment of the device of the present invention, together with the coordinate systems it references;
Figure 8 is a schematic diagram of the system layout of the second embodiment of the device of the present invention, together with the coordinate systems it references;
Figure 9 is a schematic diagram of the theoretical model of the second embodiment of the method of the present invention, showing how, with the physical center of the target taken as the origin of the absolute coordinates, a line of sight can be equivalently traced out by varying the camera position;
Figure 10 is a statistical diagram of the movement track of the camera in the platform coordinate system, obtained by experimental measurement in the course of capturing the image center point according to the present invention; it can equally represent the spatial movement track of the optical axis in the platform coordinate system;
Figure 11 is a statistical plot of the variation of the pixel coordinate position of the image center point captured in the experiments of the present invention;
Figure 12A is a statistical plot, made from the image point data captured in the experiments of the present invention, of the average image heights defined by the three concentric circles as they vary with the platform position (refer to "Figure 10");
Figure 12B is a statistical plot, based on the data of "Figure 12A", of the variation range of the average image height defined by the three concentric circles of different physical radii;
Figure 13 is a statistical plot of the overlapping of the trajectories of the optical axis deflection angle (α), corresponding to the object height, against the image height (p) when the projection center position is correctly referenced during the experiments of the present invention;

Figure 14 is a statistical plot of the divergence of the trajectories of the optical axis deflection angle (α), corresponding to the object height, against the image height (p) when the projection center position is incorrectly referenced during the experiments of the present invention;
Figure 15 is a statistical plot of the overlapping of the trajectories of the image height converted into the optical axis deflection angle focal length (zFL), against the image height (p), when the projection center position is correctly referenced during the experiments of the present invention;
Figure 16 is a statistical plot of the divergence of the trajectories of the optical axis deflection angle focal length (zFL) against the image height (p) when the projection center position is incorrectly referenced during the experiments of the present invention; and
Figure 17, taking the curves of "Figure 15" and "Figure 16" as examples, is a schematic diagram showing that the characteristic lengths of several optical axis deflection angle focal length (zFL) trajectories can be used to evaluate how well the trajectories overlap one another.

[Description of the symbols in the drawings]

10: imaging area
11: long axis
12: short axis
13: prime meridian
13', 13'': mappings of the prime meridian
141: first principal plane
142: second principal plane
20: measurement system
21: platform controller
22: camera
221: lens
222: front cardinal point (FCP)
222': front nodal point (FNP)
223: back cardinal point (BCP)
223': back nodal point (BNP)
224: optical axis
225: image plane
226: centrally symmetric image (ICP)
227: distortion center
23: adjustment platform
231: X' base axis
232: Y' base axis
233: Z' base axis
24: light source
25: arithmetic unit
251: central processing unit
252: image capture device
253: digital image processor
26: camera spherical coordinate system
27, 27': pixel plane coordinate systems
28: absolute coordinate system
29: platform coordinate system
30: target
31: centrally symmetric pattern (PCP)
38: center correction point
311-318, 321-328, 331-338: correction points
341-344: test points
70: universal optical base
80: line of sight
91, 92: distorted and corrected image points


Claims (1)

Patent application scope

1. A method for obtaining the optical projection parameters of a camera, which uses the one-to-one characteristic by which a line of sight in the field of view space of the camera is projected onto an image point on an image plane to obtain the optical parameters of the camera, the method comprising: placing a target in the field of view space of the camera, the target bearing a centrally symmetric pattern (PCP) that defines a center correction point at the center of the pattern and at least a first correction point and a second correction point lying on the same radial radius; collimating the camera and the target so that an optical axis of the camera passes orthogonally through the center correction point; recording the pixel coordinate position of the image point onto which the first correction point is mapped on the image plane; moving the target along the optical axis, with the center correction point as guide, so that the second correction point is mapped, overlappingly, onto the pixel coordinate position of that image point; capturing the absolute spatial coordinates of the first correction point and of the second correction point, and computing the line of sight defined by these two absolute coordinate points; and taking the intersection of this line of sight with the optical axis as a projection center (viewpoint/FNP) of the camera.

2. The method for obtaining the optical projection parameters of a camera as described in item 1 of the scope of the patent application, wherein collimating the camera and the target is achieved by locating a distortion center of the image plane, the spatial line of sight passing orthogonally through the distortion center and the center correction point being the optical axis.

3. The method for obtaining the optical projection parameters of a camera as described in item 2 of the scope of the patent application, wherein locating the distortion center comprises: providing the centrally symmetric pattern (PCP) on the target further with a plurality of centrally symmetric geometric figures; placing the target in the field of view space of the camera so that the centrally symmetric pattern (PCP) is imaged on the image plane; adjusting the relative orientation between the target and the camera until the centrally symmetric pattern (PCP) is imaged as a centrally symmetric image (ICP); and testing the centrally symmetric image (ICP) with at least one symmetry index to confirm that the image trajectories of the plurality of geometric figures meet the requirement of central symmetry, whereupon the characteristic coordinate of the image point onto which the center correction point is mapped is the distortion center.

4. The method for obtaining the optical projection parameters of a camera as described in item 3 of the scope of the patent application, wherein the plurality of geometric figures is one selected from combinations of concentric circles, concentric squares, concentric triangles and concentric polygons.

5. The method for obtaining the optical projection parameters of a camera as described in item 3 of the scope of the patent application, wherein the plurality of geometric figures is composed of concentric circles, squares, triangles or polygons.

6. The method for obtaining the optical projection parameters of a camera as described in item 3 of the scope of the patent application, wherein the symmetry index includes a distortion index parameter, a horizontal deviation degree and/or a vertical deviation degree.

7. A method for obtaining the optical projection parameters of a camera, which uses the one-to-one characteristic by which a line of sight in the field of view space of the camera is projected onto an image point on an image plane to obtain the optical parameters of the camera, the method comprising: placing a target in the field of view space of the camera, the target bearing a centrally symmetric pattern (PCP) that defines a center correction point at the center of the pattern and a plurality of correction points defined by a plurality of geometric figures; collimating the camera and the target so that an optical axis of the camera passes orthogonally through the center correction point; varying the relative distance between the camera and the target along the optical axis, and recording the plurality of object-image conjugate coordinate pairs corresponding to the plurality of correction points at each of the different relative distances, the records together forming an array of object-image conjugate coordinate pairs; and searching for a fixed point along the optical axis such that, when the data of the array of object-image conjugate coordinate pairs are resolved with the fixed point as reference, an overlap index exhibits the best degree of trajectory overlap, whereupon the fixed point is a projection center of the camera.

8. The method for obtaining the optical projection parameters of a camera as described in item 7 of the scope of the patent application, wherein collimating the camera and the target is achieved by locating a distortion center of the image plane, the spatial line of sight passing orthogonally through the distortion center and the center correction point being the optical axis.

9. The method for obtaining the optical projection parameters of a camera as described in item 8 of the scope of the patent application, wherein locating the distortion center comprises: placing the target in the field of view space of the camera so that the centrally symmetric pattern (PCP) is imaged on the image plane; adjusting the relative orientation between the target and the camera until the centrally symmetric pattern (PCP) is imaged as a centrally symmetric image (ICP); and testing the centrally symmetric image (ICP) with at least one symmetry index to confirm that the image trajectories of the plurality of geometric figures meet the requirement of central symmetry, whereupon the characteristic coordinate of the image point onto which the center correction point is mapped is the distortion center.

10. The method for obtaining the optical projection parameters of a camera as described in item 9 of the scope of the patent application, wherein the symmetry index includes a distortion index parameter, a horizontal deviation degree and/or a vertical deviation degree.

11. The method for obtaining the optical projection parameters of a camera as described in item 7 of the scope of the patent application, wherein the plurality of geometric figures is one selected from combinations of concentric circles, concentric squares, concentric triangles and concentric polygons.

12. The method for obtaining the optical projection parameters of a camera as described in item 7 of the scope of the patent application, wherein the plurality of geometric figures is composed of concentric circles, squares, triangles or polygons.

13. The method for obtaining the optical projection parameters of a camera as described in item 7 of the scope of the patent application, wherein the object-image conjugate coordinate pairs are coordinate pairs formed by pairing the absolute coordinate positions of the plurality of correction points, or of the camera, with the pixel coordinate positions of their corresponding image points, and the three parameters image height, object height and object distance can be resolved from them.

14. The method for obtaining the optical projection parameters of a camera as described in item 7 of the scope of the patent application, wherein the overlap index is a characteristic length, and the calculation of the characteristic length comprises the following steps: resolving the array of object-image conjugate coordinate pairs to obtain a plurality of data points; and connecting the plurality of data points to form the characteristic length, the smallest characteristic length being the index of the best trajectory overlap.

15. The method for obtaining the optical projection parameters of a camera as described in item 14 of the scope of the patent application, wherein the plurality of data points are data points of the relation of the optical axis deflection angle (α) to the image height (p) corresponding to the plurality of correction points, represent the projection curve of the camera, and are obtained by resolving the data of the array of object-image conjugate coordinate pairs together with the assumed position of the fixed point.

16. The method for obtaining the optical projection parameters of a camera as described in item 14 of the scope of the patent application, wherein the plurality of data points are data points of the relation of the optical axis deflection angle focal length (zFL) to the image height (p) corresponding to the plurality of correction points, represent the degree of distortion of the camera, and are obtained by resolving the data of the array of object-image conjugate coordinate pairs together with the assumed position of the fixed point.

17. The method for obtaining the optical projection parameters of a camera as described in item 16 of the scope of the patent application, wherein the optical axis deflection angle focal length (zFL) is determined by the following formula: zFL = p * cot(α), where p is the image height, that is, the distance between the mapped image point and the distortion center, and α is the optical axis deflection angle, that is, the angle between the incident ray in object space corresponding to the mapped image point and the optical axis.

18. A method for obtaining the optical projection parameters of a camera, which uses the one-to-one characteristic by which a line of sight in the field of view space of the camera is projected onto an image point on an image plane to obtain the optical parameters of the camera, the method comprising: taking the image point as reference, obtaining at least two different absolute coordinate points in the field of view space that map onto that image point, so as to define the line of sight; computing an optical axis deflection angle (α) representing the line of sight, being the angle between the line of sight and an optical axis of the camera; further obtaining the plurality of optical axis deflection angles (α) defined by the plurality of lines of sight corresponding respectively to a plurality of image points; and obtaining, from the correspondence between the plurality of image points and the plurality of optical axis deflection angles (α), a projection function describing the projection behavior of the camera.

19. The method for obtaining the optical projection parameters of a camera as described in item 18 of the scope of the patent application, wherein defining the line of sight further comprises: placing a target in the field of view space of the camera, the target bearing a centrally symmetric pattern (PCP) that defines a center correction point at the center of the pattern and at least a first correction point and a second correction point lying on the same radial radius; collimating the camera and the target so that the optical axis passes orthogonally through the center correction point; recording the pixel coordinate position of the image point onto which the first correction point is mapped on the image plane; moving the target along the optical axis, with the center correction point as guide, so that the second correction point is likewise mapped, overlappingly, onto the pixel coordinate position of that image point; and capturing the absolute spatial coordinates of the first correction point and of the second correction point, these two absolute coordinate points defining the line of sight.

20. The method for obtaining the optical projection parameters of a camera as described in item 19 of the scope of the patent application, wherein the intersection of the line of sight with the optical axis is a projection center (viewpoint/FNP) of the camera.

21. The method for obtaining the optical projection parameters of a camera as described in item 19 of the scope of the patent application, wherein collimating the camera and the target is achieved by locating a distortion center of the image plane, the spatial line of sight passing orthogonally through the distortion center and the center correction point being the optical axis, the steps comprising: providing the centrally symmetric pattern (PCP) on the target further with a plurality of centrally symmetric geometric figures; placing the target in the field of view space of the camera so that the centrally symmetric pattern (PCP) is imaged on the image plane; adjusting the relative orientation between the target and the camera until the centrally symmetric pattern (PCP) is imaged as a centrally symmetric image (ICP); testing the centrally symmetric image (ICP) with at least one symmetry index to confirm that the image trajectories of the plurality of geometric figures meet the requirement of central symmetry, the characteristic coordinate of the image point onto which the center correction point is mapped then being the distortion center; and taking, according to the known orientation of the target, the spatial line of sight passing orthogonally through the distortion center and the center correction point as the optical axis.

22. The method for obtaining the optical projection parameters of a camera as described in item 21 of the scope of the patent application, wherein the symmetry index includes a distortion index parameter, a horizontal deviation degree and/or a vertical deviation degree.

23. The method for obtaining the optical projection parameters of a camera as described in item 21 of the scope of the patent application, wherein the plurality of geometric figures is one selected from combinations of concentric circles, concentric squares, concentric triangles and concentric polygons.

24. The method for obtaining the optical projection parameters of a camera as described in item 21 of the scope of the patent application, wherein the plurality of geometric figures is composed of concentric circles, squares, triangles or polygons.

25. The method for obtaining the optical projection parameters of a camera as described in item 18 of the scope of the patent application, wherein a projection center (viewpoint/FNP) of the camera can further be obtained by resolving the correspondence between the plurality of image points and the plurality of optical axis deflection angles (α).

26. The method for obtaining the optical projection parameters of a camera as described in item 25 of the scope of the patent application, wherein obtaining the projection center further comprises: placing a target in the field of view space of the camera, the target bearing a centrally symmetric pattern (PCP) that defines a center correction point at the center of the pattern and a plurality of correction points defined by a plurality of geometric figures; collimating the camera and the target so that the optical axis passes orthogonally through the center correction point; varying the relative distance between the camera and the target along the optical axis, and recording the plurality of object-image conjugate coordinate pairs corresponding to the plurality of correction points at each of the different relative distances, the records together forming an array of object-image conjugate coordinate pairs; and searching for a fixed point along the optical axis such that, when the data of the array of object-image conjugate coordinate pairs are resolved with the fixed point as reference, an overlap index exhibits the best degree of trajectory overlap, whereupon the fixed point is the projection center.

27. The method for obtaining the optical projection parameters of a camera as described in item 26 of the scope of the patent application, wherein the object-image conjugate coordinate pairs are coordinate pairs formed by pairing the absolute coordinate positions of the plurality of correction points, or of the camera, with the pixel coordinate positions of their corresponding image points, and the three parameters image height, object height and object distance can be resolved from them.

28. The method for obtaining the optical projection parameters of a camera as described in item 26 of the scope of the patent application, wherein the overlap index is a characteristic length, and the calculation of the characteristic length comprises the following steps: resolving the array of object-image conjugate coordinate pairs to obtain a plurality of data points; and connecting the plurality of data points to form the characteristic length, the smallest characteristic length being the index of the best trajectory overlap.

29. The method for obtaining the optical projection parameters of a camera as described in item 28 of the scope of the patent application, wherein the plurality of data points are data points of the relation of the optical axis deflection angle (α) to the image height (p) corresponding to the plurality of correction points, represent the projection curve of the camera, and are obtained by resolving the data of the array of object-image conjugate coordinate pairs together with the assumed position of the fixed point.

30. The method for obtaining the optical projection parameters of a camera as described in item 28 of the scope of the patent application, wherein the plurality of data points are data points of the relation of the optical axis deflection angle focal length (zFL) to the image height (p) corresponding to the plurality of correction points, represent the degree of distortion of the camera, and are obtained by resolving the data of the array of object-image conjugate coordinate pairs together with the assumed position of the fixed point.

31. The method for obtaining the optical projection parameters of a camera as described in item 30 of the scope of the patent application, wherein the optical axis deflection angle focal length (zFL) is determined by the following formula: zFL = p * cot(α), where p is the image height, that is, the distance between the mapped image point and the distortion center, and α is the optical axis deflection angle, that is, the angle between the incident ray in object space corresponding to the mapped image point and the optical axis.

32. A device for obtaining the optical projection parameters of a camera, applied to resolving the correspondence between a plurality of lines of sight in the field of view space of the camera and a plurality of image points on an image plane, the device comprising: a target, on which is drawn a centrally symmetric pattern (PCP) composed of a center correction point and a plurality of centrally symmetric geometric figures, the plurality of geometric figures defining a plurality of correction points; a camera, having a non-linear projection lens for capturing the light coming from the centrally symmetric pattern and forming a corresponding image on the image plane; an adjustment platform, having three mutually orthogonal base axes defining a platform coordinate system, for adjusting the relative absolute position between the target and the camera; a platform controller, connected to the adjustment platform, for supplying power and controlling the range of motion of the adjustment platform; and an arithmetic unit, connected to the camera and to the platform controller, which issues commands to the platform controller according to the image data captured by the camera so as to adjust the positions of the three base axes of the adjustment platform, captures the absolute coordinates of the plurality of correction points together with the pixel coordinates of their mapped image points to form a plurality of conjugate coordinate pairs, and operates on the data of the plurality of conjugate coordinate pairs to obtain a projection function describing how the field of view space of the camera is projected onto the image plane, expressing the imaging mechanism of the camera.

33. The device for obtaining the optical projection parameters of a camera as described in item 32 of the scope of the patent application, further comprising a light source for illuminating the target.

34. The device for obtaining the optical projection parameters of a camera as described in item 32 of the scope of the patent application, wherein the center correction point and the plurality of correction points on the target may be composed of actively luminous elements.

35. The device for obtaining the optical projection parameters of a camera as described in item 34 of the scope of the patent application, wherein the actively luminous element is a light emitting diode (LED).

36. The device for obtaining the optical projection parameters of a camera as described in item 32 of the scope of the patent application, wherein the arithmetic unit further comprises: an image capture device, connected to the camera, for converting the analog signal captured by the camera into a digital signal; a digital image processor, connected to the image capture device, for processing the digital signal to capture the pixel coordinates of the corresponding image; and a central processing unit, for controlling the operation of the image capture device and the digital image processor.

37. The device for obtaining the optical projection parameters of a camera as described in item 32 of the scope of the patent application, wherein the arithmetic unit is a personal computer (PC).

38. The device for obtaining the optical projection parameters of a camera as described in item 32 of the scope of the patent application, wherein the camera is one selected from combinations of a CCD camera, a CMOS camera and a camera fitted with an image scanning device.

39. The device for obtaining the optical projection parameters of a camera as described in item 32 of the scope of the patent application, wherein the plurality of geometric figures is one selected from combinations of concentric circles, concentric squares, concentric triangles and concentric polygons.

40. The device for obtaining the optical projection parameters of a camera as described in item 32 of the scope of the patent application, wherein the plurality of geometric figures is composed of concentric circles, squares, triangles or polygons.
TW92109159A 2003-04-18 2003-04-18 Method for determining the optical parameters of a camera TW565735B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW92109159A TW565735B (en) 2003-04-18 2003-04-18 Method for determining the optical parameters of a camera
PCT/IB2004/001109 WO2004092826A1 (en) 2003-04-18 2004-04-12 Method and system for obtaining optical parameters of camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW92109159A TW565735B (en) 2003-04-18 2003-04-18 Method for determining the optical parameters of a camera

Publications (2)

Publication Number Publication Date
TW565735B TW565735B (en) 2003-12-11
TW200422754A true TW200422754A (en) 2004-11-01

Family

ID=32503978

Family Applications (1)

Application Number Title Priority Date Filing Date
TW92109159A TW565735B (en) 2003-04-18 2003-04-18 Method for determining the optical parameters of a camera

Country Status (2)

Country Link
TW (1) TW565735B (en)
WO (1) WO2004092826A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108931357A (en) * 2017-05-22 2018-12-04 宁波舜宇车载光学技术有限公司 Test target and corresponding camera lens MTF detection system and method

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7893393B2 (en) 2006-04-21 2011-02-22 Mersive Technologies, Inc. System and method for calibrating an image projection system
WO2009140678A2 (en) * 2008-05-16 2009-11-19 Mersive Technologies, Inc. Systems and methods for generating images using radiometric response characterizations
CN104048815B (en) 2014-06-27 2017-03-22 歌尔科技有限公司 Method and system for measuring distortion of lens
RU2635336C2 (en) * 2015-03-30 2017-11-10 Открытое Акционерное Общество "Пеленг" Method of calibrating optical-electronic device and device for its implementation
CN106780617B (en) * 2016-11-24 2023-11-10 北京小鸟看看科技有限公司 Virtual reality system and positioning method thereof
JP2020148700A (en) * 2019-03-15 2020-09-17 オムロン株式会社 Distance image sensor, and angle information acquisition method
TWI738098B (en) * 2019-10-28 2021-09-01 阿丹電子企業股份有限公司 Optical volume-measuring device
CN111105488B (en) * 2019-12-20 2023-09-08 成都纵横自动化技术股份有限公司 Imaging simulation method, imaging simulation device, electronic equipment and storage medium
CN111445522B (en) * 2020-03-11 2023-05-23 上海大学 Passive night vision intelligent lightning detection system and intelligent lightning detection method
CN111432204A (en) * 2020-03-30 2020-07-17 杭州栖金科技有限公司 Camera testing device and system
CN111612710B (en) * 2020-05-14 2022-10-04 中国人民解放军95859部队 Geometric imaging pixel number calculation method for target rectangular projection image
CN113310420B (en) * 2021-04-22 2023-04-07 中国工程物理研究院上海激光等离子体研究所 Method for measuring distance between two targets through image
TWI788838B (en) * 2021-05-07 2023-01-01 宏茂光電股份有限公司 Method for coordinate transformation from spherical to polar
TWI793702B (en) * 2021-08-05 2023-02-21 明志科技大學 Method for obtaining optical projection mechanism of camera
CN116954011B (en) * 2023-09-18 2023-11-21 中国科学院长春光学精密机械与物理研究所 Mounting and adjusting method for high-precision optical reflection system calibration camera

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5185667A (en) * 1991-05-13 1993-02-09 Telerobotics International, Inc. Omniview motionless camera orientation system
AR003043A1 (en) * 1995-07-27 1998-05-27 Sensormatic Electronics Corp AN IMAGE FORMING AND PROCESSING DEVICE TO BE USED WITH A VIDEO CAMERA
JP3126955B2 (en) * 1999-02-12 2001-01-22 株式会社アドバネット Arithmetic unit for image conversion
JP3624288B2 (en) * 2001-09-17 2005-03-02 株式会社日立製作所 Store management system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108931357A (en) * 2017-05-22 2018-12-04 宁波舜宇车载光学技术有限公司 Test target and corresponding camera lens MTF detection system and method
CN108931357B (en) * 2017-05-22 2020-10-23 宁波舜宇车载光学技术有限公司 Test target and corresponding lens MTF detection system and method

Also Published As

Publication number Publication date
TW565735B (en) 2003-12-11
WO2004092826A1 (en) 2004-10-28

Similar Documents

Publication Publication Date Title
US11544874B2 (en) System and method for calibration of machine vision cameras along at least three discrete planes
TW565735B (en) Method for determining the optical parameters of a camera
Luhmann et al. Sensor modelling and camera calibration for close-range photogrammetry
US7321839B2 (en) Method and apparatus for calibration of camera system, and method of manufacturing camera system
US20150116691A1 (en) Indoor surveying apparatus and method
CN103729841B (en) A kind of based on side's target model and the camera distortion bearing calibration of perspective projection
CN108965690A (en) Image processing system, image processing apparatus and computer readable storage medium
CN103106661B (en) Two, space intersecting straight lines linear solution parabolic catadioptric camera intrinsic parameter
CN110312111B (en) Apparatus, system, and method for automatic calibration of image devices
CN108388341B (en) Man-machine interaction system and device based on infrared camera-visible light projector
CN109615664A (en) A kind of scaling method and equipment for optical perspective augmented reality display
US9990739B1 (en) Method and device for fisheye camera automatic calibration
Wilm et al. Accurate and simple calibration of DLP projector systems
CN113298886A (en) Calibration method of projector
Wang et al. Accurate detection and localization of curved checkerboard-like marker based on quadratic form
Yamauchi et al. Calibration of a structured light system by observing planar object from unknown viewpoints
CN107255458B (en) Resolving method of vertical projection grating measurement simulation system
CN107036555B (en) A kind of cross-axis optical grating projection measurement analogue system and its implementation
CN103559710B (en) A kind of scaling method for three-dimensional reconstruction system
Orghidan et al. Omnidirectional depth computation from a single image
TW565736B (en) Method for determining the optical parameters of a camera
JP4382430B2 (en) Head three-dimensional shape measurement system
Gorevoy et al. Optimization of a geometrical calibration procedure for stereoscopic endoscopy systems
Orghidan et al. Calibration of a structured light-based stereo catadioptric sensor
KR102080506B1 (en) 3D optical scanner

Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent
MM4A Annulment or lapse of patent due to non-payment of fees