TWI323425B - Method and system for reconstructing 3-d endoscopic images - Google Patents

Method and system for reconstructing 3-d endoscopic images

Info

Publication number
TWI323425B
TWI323425B TW95127782A
Authority
TW
Taiwan
Prior art keywords
feature
matrix
image
point
dimensional
Prior art date
Application number
TW95127782A
Other languages
Chinese (zh)
Other versions
TW200807309A (en)
Inventor
Yungnien Sun
Chiahsiang Wu
Yichiao Chen
Chienchen Chang
Original Assignee
Univ Nat Cheng Kung
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Cheng Kung filed Critical Univ Nat Cheng Kung
Priority to TW95127782A priority Critical patent/TWI323425B/en
Publication of TW200807309A publication Critical patent/TW200807309A/en
Application granted granted Critical
Publication of TWI323425B publication Critical patent/TWI323425B/en

Links

Description

IX. Description of the Invention

[Technical Field]
The present invention relates to a method for reconstructing three-dimensional images, and more particularly to a method for reconstructing three-dimensional images from an endoscope.

[Prior Art]
During surgery, the physician must know the relative position of the surgical tools and the surrounding tissue to ensure the quality of the operation. In endoscopic surgery, however, the physician can only observe the internal structures through the two-dimensional images acquired by the endoscope and cannot immediately judge the size and distance of a region or its spatial relationships, which adds considerable risk to the procedure. To assist physicians in diagnosis and treatment, related computer-aided systems have been published, for example detection and diagnosis methods for colorectal diseases based on computed tomography (CT) images, or systems that fix a three-dimensional tracker on the endoscope and combine the two-dimensional endoscopic images with an in-body model constructed from CT images, so that the physician can see the three-dimensional structure of the region.

Existing virtual endoscopy systems are mostly based on computed tomography or magnetic resonance images. The data must be acquired by specialists operating expensive equipment, which not only increases the cost in time, manpower and material resources but also raises the probability of radiation injury to the patient. Current surgical assistance systems that use three-dimensional trackers are likewise based on such preoperative medical images and share the drawbacks of high treatment cost and radiation exposure.

Therefore, a method and device for reconstructing three-dimensional images is needed which provides, in real time, the motion of the surgical tools inside the body and a three-dimensional geometric model of the lesion area, so that the physician can navigate and observe the lesion in a virtual environment, serving as an auxiliary tool for clinical diagnosis and treatment.

[Summary of the Invention]
Accordingly, an object of the present invention is to provide a three-dimensional reconstruction method and system for building a three-dimensional model of normal scale, supplying the physician with accurate three-dimensional information, lowering the risk of surgery, saving treatment cost and reducing the probability of radiation injury to the patient. The present invention can also combine the reconstructed model with a number of true three-dimensional coordinates to obtain a three-dimensional model of correct scale, which can be used for evaluation and for education and training in surgery or therapy.

According to a preferred embodiment, the method for reconstructing a three-dimensional image of the present invention is as follows. A two-dimensional image is obtained by an endoscope. A feature tracking and feature extraction step is performed on the two-dimensional image to obtain a plurality of feature points. A step of filtering unreliable trajectories is performed on the feature points to remove points that were tracked incorrectly. A scale measurement matrix (W) composed of the trajectories of the feature points is processed by affine reconstruction to construct a distorted three-dimensional model. The distorted three-dimensional model is corrected by Euclidean reconstruction to build a scale-abnormal three-dimensional model and to obtain a first motion parameter matrix (M) and a first coordinate value (S). A sensor is used to pick an arbitrary sensing point in space to obtain a second coordinate value. The sensor is fixed on a surgical tool, and the sensing point is picked with the surgical tool, so that the relative positions of the tracker host, the sensor and the sensing point are fixed; from this, a first transformation matrix provided by the tracker is obtained. A third coordinate value is computed from the first transformation matrix and the second coordinate value. With the surgical tool, at least four corresponding feature points are picked among the feature points to obtain a plurality of fourth coordinate values and a plurality of second transformation matrices. The fourth coordinate values are converted into fifth coordinate values using the second transformation matrices and the third coordinate value. The corresponding feature points are located in the scale-abnormal three-dimensional model to obtain sixth coordinate values. The sixth coordinate values and the fifth coordinate values are compared and computed to obtain a third transformation matrix (G). Using the third transformation matrix (G), the first coordinate value (S) is converted into a seventh coordinate value (S'), and using the inverse of the third transformation matrix (G), the first motion parameter matrix (M) is converted into a second motion parameter matrix (M'), so that a world-coordinate three-dimensional model is obtained. Using the second motion parameter matrix (M') and the seventh coordinate value (S'), the precision factors in the scale measurement matrix (W) are computed, and the precision factors and the scale measurement matrix (W) are updated. The values of the precision factors before and after the update are compared; if the difference is smaller than a preset threshold, the three-dimensional image of the world-coordinate three-dimensional model is obtained, and if the difference is larger than the preset threshold, the updated scale measurement matrix (W) is fed back to the affine reconstruction step, until the difference between the values of the precision factors before and after the update is smaller than the preset threshold.

According to another preferred embodiment, a plurality of additional feature points are picked with the sensor in the region to be observed, these additional feature points are projected onto the two-dimensional images to obtain a plurality of additional feature point trajectories, and the additional trajectories are merged into the scale measurement matrix (W) so that it becomes a feature point augmented matrix (augW); affine reconstruction is then applied to the feature point augmented matrix (augW) to construct the distorted three-dimensional model.

According to another preferred embodiment, a wide-angle image correction step is added before the feature tracking and extraction step to correct the image deformation produced by the wide-angle lens. The wide-angle image correction step at least comprises: performing intensity normalization on the two-dimensional image; binarizing the intensity-normalized two-dimensional image with a high-pass filter and thresholding to separate foreground from background, and extracting the centroids of the patterns in the image; filling the holes produced by the binarization with the closing operation of morphology in image processing; and correcting the deformed image from the straight lines formed by the centroids, using a method that computes the linearity error.

According to another preferred embodiment, the feature tracking and extraction step is performed with the Kanade-Lucas-Tomasi (KLT) feature tracking method and the edge feature extraction method.

According to another preferred embodiment, the step of filtering unreliable trajectories at least comprises: comparing a feature point in a first, a second and a third two-dimensional image to obtain its first, second and third feature point coordinate values; taking the vector from the first to the second feature point as a first vector and the vector from the second to the third feature point as a second vector; and taking the angle between the second vector and the first vector as a test angle, where the second vector is filtered out if the test angle is larger than a preset threshold.

[Embodiments]
Please refer to FIG. 1, which is a schematic diagram of the system for reconstructing three-dimensional endoscopic images of the present invention. The system of the present invention is composed of a tracker device 20 and an endoscope device 30, where the tracker device 20 comprises a transmitter 21, a tracker host 22, a computer 24 and a sensor 26 mounted on a surgical tool 28, and the endoscope system comprises an endoscope 34 and an endoscope host 32. The tracker device 20 of the present invention uses, for example, the miniBIRD three-dimensional tracker produced by Ascension Technology Corp., VA, USA.

During endoscopic surgery, the physician uses the endoscope 34 to observe the structures and tissue inside the body and uses surgical tools, such as the surgical tool 28, to treat the lesion. Because the tracker host 22 can capture, through its sensor 26, the electromagnetic signal emitted by the transmitter 21 and compute the position and orientation of the sensor 26, the position of the surgical tool 28 in three-dimensional space can be obtained in real time simply by connecting the tracker host 22 directly to the computer 24.

Under this architecture, the endoscope 34 captures an image sequence of the region to be observed. The present invention then combines this image sequence with the three-dimensional coordinates observed by the tracker device 20 to reconstruct the three-dimensional structure of the observed region, while simultaneously providing the orientation and motion of the surgical tool.

To build the three-dimensional model of the cavity, the method for reconstructing three-dimensional endoscopic images of the present invention includes steps such as image deformation correction, feature point extraction and tracking, and a decomposition method that incorporates three-dimensional coordinates. Please refer to FIG. 2A, which is a flow chart of the method for reconstructing three-dimensional endoscopic images according to a preferred embodiment of the present invention.

First, the image deformation correction step 200 is performed. An endoscope is usually equipped with a wide-angle lens, so the two-dimensional images of the observation region captured by the endoscope are deformed: a straight line becomes a curve after being imaged through the endoscope, and the phenomenon is most obvious at the image borders. Therefore, before three-dimensional reconstruction of endoscopic images, the images must first be corrected for the deformation introduced by the wide-angle lens. A calibration board printed with a plurality of equally spaced dots is prepared, and the captured image is pre-processed. Because the endoscope uses only a single light source fixed beside the lens, and because of internal scattering and the light-gathering property of the wide-angle lens, the region near the image center is usually brighter than the image periphery; the captured calibration images must therefore undergo intensity normalization. The intensity normalization is as follows: let I(i,j) be the intensity at point (i,j) and I_avg be the average intensity of the whole image. If a window of size Δw x Δm is defined around each point (i,j), the normalized intensity I'(i,j) of each point is:

    I'(i,j) = I(i,j) · I_avg / [ (1/(Δw·Δm)) · Σ_{u=-Δm/2..Δm/2} Σ_{v=-Δw/2..Δw/2} I(i+u, j+v) ]    (1)
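A minimal sketch of the normalization in equation (1), assuming a square window and a plain box average for the local mean (the window size below is illustrative, not a value from the disclosure):

```python
import numpy as np

def normalize_intensity(img, win=15):
    """Scale each pixel by (global mean) / (local mean), cf. equation (1)."""
    img = np.asarray(img, dtype=np.float64)
    global_mean = img.mean()
    half = win // 2
    padded = np.pad(img, half, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local_mean = padded[i:i + win, j:j + win].mean()
            out[i, j] = img[i, j] * global_mean / max(local_mean, 1e-6)
    return out
```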

Next, the image is binarized with a high-pass filter and thresholding, so that every pixel of the two-dimensional image is classified as foreground or background, and the holes are filled with the closing operation of morphology in image processing. Then the centroid of each dot is used as a correction point, and the straight lines formed by these correction points are used to correct the deformed image by computing the linearity error, following the method proposed by C. Brauer-Burchardt in "Automatic Correction of Radial Lens Distortion in Single Views of Urban Scenes Using Vanishing Points". First, the deformation-correction transform is defined as:

    r = r' / (1 + d2·r'^2 + d4·r'^4)    (2)

where d2 and d4 are unknown parameters, and r and r' are the distances from an image point to the image center after correction and before correction, respectively. Next, correction points lying on the same line are selected; the line connecting these points was originally straight but has become a curve because of the endoscopic imaging. Taking three consecutive correction points as a group, the area of the triangle they form is used as the error function, and the deformation parameters are obtained by the least-squares method. When the sum of the triangle areas over all points is minimal, the best deformation parameters are found.
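The division-model correction of equation (2) can be illustrated for a single point as follows; the image center and the coefficients d2, d4 are placeholders and would come from the least-squares fit described above:

```python
import numpy as np

def undistort_point(x, y, cx, cy, d2, d4):
    """Apply r = r' / (1 + d2*r'^2 + d4*r'^4) about the image center (cx, cy)."""
    dx, dy = x - cx, y - cy
    r_dist = np.hypot(dx, dy)          # distance before correction (r')
    if r_dist == 0.0:
        return x, y
    r_true = r_dist / (1.0 + d2 * r_dist**2 + d4 * r_dist**4)
    scale = r_true / r_dist
    return cx + dx * scale, cy + dy * scale

# Example with hypothetical values: map a border pixel of a 640x480 image
# x_u, y_u = undistort_point(620.0, 20.0, 320.0, 240.0, d2=1e-6, d4=1e-12)
```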

Next, the feature extraction and tracking step 202 is performed to extract and track feature points in the images captured by the endoscope; the result of this step directly affects the accuracy of the reconstructed three-dimensional scene. Please refer to FIG. 2B, which is a flow chart of the feature extraction and tracking step according to a preferred embodiment of the present invention. First, the endoscopic image pre-processing step 202a is performed. Because the endoscope images human organ tissue, whose color and texture show little variation, feature extraction is difficult; the present invention therefore uses histogram equalization to enhance the image contrast and bring out image features. In addition, because of the characteristics of the light source and the lens of the endoscope, the border regions of the image are commonly darker than the center. After histogram equalization this uneven brightness is less severe but still present, so the present invention applies the intensity normalization technique used above for image deformation correction to overcome this drawback. In this way the image features are enhanced and the uneven brightness distribution is removed.

Next, step 202b is performed to track and extract the feature points of the two-dimensional images. The present invention uses the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm to detect corner features; however, endoscopic image sequences show physiological tissue in which obvious corner features are hard to find, so the present invention adds the detection of edge features to increase the number of feature points effectively.

KLT feature tracking algorithm: the KLT algorithm can be divided into two parts, corner extraction and feature tracking, described below.

(1) Corner extraction
First, the first image of the sequence is taken and converted to gray scale, a Gaussian smoothing low-pass filter is applied once to remove noise, and the horizontal and vertical intensity gradient images Ix and Iy are computed. Then a covariance matrix C is built for every image point (x, y):

    C(x,y) = Σ_{q∈W} g(q) · [ Ix(q)^2        Ix(q)·Iy(q) ]
                            [ Ix(q)·Iy(q)    Iy(q)^2     ]    (3)

where W is a sub-window centered at (x, y) and g is a Gaussian filter. Next, the two eigenvalues e1 and e2 of the matrix C are computed. If the smaller eigenvalue is larger than a preset threshold, the point (x, y) is a corner feature. With this test, good initial feature points can be found in the first image.
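A minimal sketch of the corner test built on equation (3); it uses a plain box window instead of the Gaussian weights g(q), and the gradient images Ix, Iy are assumed to be precomputed:

```python
import numpy as np

def corner_mask(Ix, Iy, win=7, threshold=1.0):
    """True where the smaller eigenvalue of the covariance matrix C (equation (3))
    exceeds the threshold, i.e. at KLT-style corner candidates."""
    def window_sum(a):
        pad = win // 2
        p = np.pad(a, pad, mode="edge")
        out = np.empty(a.shape, dtype=np.float64)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = p[i:i + win, j:j + win].sum()
        return out

    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    trace = Sxx + Syy
    det = Sxx * Syy - Sxy * Sxy
    lam_min = trace / 2.0 - np.sqrt(np.maximum(trace**2 / 4.0 - det, 0.0))
    return lam_min > threshold
```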

(2) Feature tracking
Based on the initial feature points found in the previous step, their positions in the next image are computed. Suppose there are two images I and J, where the feature points of image I are known and are used to track their corresponding points in image J. If a feature point X of image I appears in image J after a displacement D, the tracking error ε can be defined as:

    ε = ∫∫_W [ J(X + D) - I(X) ]^2 · ω(X) dX    (4)

where W is the search window, defined as a region of appropriate size centered at the feature point, and ω(X) specifies the weight of each point in the search window, which is usually set to 1 or assigned according to a Gaussian distribution. If the displacement is distributed evenly between the two images, the expression becomes:

    ε = ∫∫_W [ J(X + D/2) - I(X - D/2) ]^2 dX    (5)

This error function is then minimized: the tracking error is smallest where its derivative with respect to the displacement is zero, and the solution is obtained iteratively with Newton's method.
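One Newton-style increment for the displacement D that minimizes the error of equations (4)-(5) can be sketched as follows, assuming a pure translation model, uniform weights ω = 1, and pre-extracted feature windows (window extraction, interpolation and the convergence loop are omitted):

```python
import numpy as np

def lk_increment(I_win, J_win, Ix_win, Iy_win):
    """Return the increment dD added to the current displacement D."""
    e = I_win - J_win                                    # residual between the two windows
    G = np.array([[np.sum(Ix_win * Ix_win), np.sum(Ix_win * Iy_win)],
                  [np.sum(Ix_win * Iy_win), np.sum(Iy_win * Iy_win)]])
    b = np.array([np.sum(Ix_win * e), np.sum(Iy_win * e)])
    return np.linalg.solve(G, b)                         # iterate until the increment is small
```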

The displacement is updated as D(t+1) = D(t) + ΔD until the tracking error converges, where t is the iteration count. At convergence the tracking error is minimal, and the displacement D gives the position of the feature point X in the next image. After the corresponding positions of the feature points of the first image have been computed in the second image, the same procedure is applied to the feature points of the second image to find their corresponding points in the third image, and these steps are repeated until the last image, which yields the tracking trajectories of the feature points over the image sequence.

The detection of edge features in step 202b is performed as follows. After the image pre-processing is finished, the optimal threshold is computed to divide the image into foreground and background; to avoid under-sampling, the threshold actually used by the present invention is the optimal threshold scaled by an empirically defined parameter. In addition, if more than half of the neighborhood of a foreground pixel belongs to the background, this pixel is also marked as background. The set of foreground pixels obtained by these steps defines a feature mask. The image is then convolved once with a high-pass filter, and a pixel whose high-pass response is larger than a predefined threshold and which lies inside the feature mask is marked as an edge candidate. The candidate points are sorted by their high-pass response from large to small and pushed onto a stack, so that the pixel with the largest value is at the top of the stack and the one with the smallest value is at the bottom. Each time a point is popped from the stack, its neighborhood (a region of user-defined size) is checked for existing edge feature points; if there is none, this point becomes an edge feature. The pop-a-pixel, check-the-neighborhood, mark-an-edge-feature steps are repeated until the stack is empty.

After step 202b, the edge feature points are tracked with the feature tracking method of part (2) above (step 202c). Then step 202d is performed to filter out unreliable feature point trajectories. Please refer to FIG. 2C, which is a schematic diagram of the step of filtering unreliable trajectories according to a preferred embodiment of the present invention, in which Pi-s, Pi and Pi+s denote the corresponding points of a feature point under test in the (i-s)-th, i-th and (i+s)-th images, s being an integer; VAB is the vector from Pi-s to Pi, and VBC is the vector from Pi to Pi+s. If the angle θ between VAB and VBC is too large, the trajectory of the point under test through the (i-s)-th, i-th and (i+s)-th images changes too abruptly to be trusted, and it must be removed. The acceptable range of the angle is determined by a predefined threshold.
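The angle test of step 202d can be sketched as follows; the 60-degree limit is only an illustrative value for the predefined threshold:

```python
import numpy as np

def keep_trajectory_segment(p_prev, p_cur, p_next, max_angle_deg=60.0):
    """Accept the segment when the turn between V_AB = Pi - Pi-s and
    V_BC = Pi+s - Pi stays below the angle threshold."""
    v_ab = np.asarray(p_cur, dtype=float) - np.asarray(p_prev, dtype=float)
    v_bc = np.asarray(p_next, dtype=float) - np.asarray(p_cur, dtype=float)
    denom = np.linalg.norm(v_ab) * np.linalg.norm(v_bc)
    if denom == 0.0:
        return False
    cos_theta = np.clip(np.dot(v_ab, v_bc) / denom, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta)) <= max_angle_deg
```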
Please continue to refer to FIG. 2A. After step 202 (202d) is completed, step 204 is performed: the two-dimensional coordinates of the feature tracking trajectories, expressed in the image coordinate system, are converted with the intrinsic calibration parameter matrix of the endoscope into their values in the endoscope device coordinate system, and the scale measurement matrix is built from these values:

    W = [ u_11 - u_1c   u_12 - u_1c   ...   u_1P - u_1c ]
        [    ...                                        ]
        [ u_F1 - u_Fc   u_F2 - u_Fc   ...   u_FP - u_Fc ]
        [ v_11 - v_1c   v_12 - v_1c   ...   v_1P - v_1c ]    (6)
        [    ...                                        ]
        [ v_F1 - v_Fc   v_F2 - v_Fc   ...   v_FP - v_Fc ]   (2F x P)

where (u_ij, v_ij) are the coordinates, in the i-th image, of the j-th feature point obtained by the feature extraction and tracking, and (u_ic, v_ic) is the projection onto the i-th image of the centroid of the set of feature points, which can be computed as the centroid of the feature points of the i-th image; F and P are the numbers of images and of feature points, respectively. Every entry of the matrix (W) has a corresponding precision factor ζ_ij of the reconstructed three-dimensional model, and the true values of ζ are approached by an iterative method; when the scale measurement matrix W is constructed for the first time, the value of ζ is zero.
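Assembling the centered coordinates of equation (6) can be sketched as follows; how the precision factors ζ_ij are folded into the entries is not reproduced here, so the sketch corresponds only to the first construction of W:

```python
import numpy as np

def build_measurement_matrix(tracks):
    """Stack the centered (u, v) coordinates into the 2F x P matrix of equation (6).

    tracks: array of shape (F, P, 2) holding the (u, v) coordinates of the
    P tracked feature points in each of the F images.
    """
    tracks = np.asarray(tracks, dtype=float)
    centroid = tracks.mean(axis=1, keepdims=True)               # centroid projection per image
    centered = tracks - centroid
    return np.vstack([centered[:, :, 0], centered[:, :, 1]])    # shape (2F, P)
```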

The affine reconstruction step 206 is performed: the scale measurement matrix (W) is processed by affine reconstruction to construct the three-dimensional model. The affine reconstruction uses singular value decomposition to factor the scale measurement matrix (W) into the product of two rank-three matrices, W = M̂·Ŝ, where M̂ is the affine motion matrix of the endoscope and Ŝ is the affine shape matrix holding the affine three-dimensional coordinates of the feature points. The three-dimensional model built by affine reconstruction is distorted, so Euclidean reconstruction must follow to correct it.

The Euclidean reconstruction step 208 is performed to correct the distorted three-dimensional model. A matrix H must be computed to convert the affine motion parameters M̂ of the endoscope and the affine three-dimensional coordinates Ŝ of the feature points into the endoscope motion parameters (Euclidean motion matrix) M (the first motion parameter matrix) and the three-dimensional coordinates (Euclidean shape matrix) S (the first coordinate value) of the decomposed coordinate system, that is, M = M̂·H and S = H^-1·Ŝ, where the decomposed coordinate system is a Euclidean coordinate system whose origin is the centroid of the feature points, and the three-dimensional coordinates S are the set of feature point coordinate values in the decomposed coordinate system. The matrix H is computed with the constraints proposed by Aanæs et al. in "Robust Factorization", so that M = M̂·H. These steps constitute the Euclidean reconstruction. The scale of the three-dimensional model constructed by the Euclidean reconstruction is abnormal with respect to the real-world model, so the present invention provides a decomposition method that incorporates three-dimensional coordinates, allowing the scale-abnormal three-dimensional model produced by the Euclidean reconstruction to recover its normal scale.
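The rank-three factorization of step 206 can be sketched with a singular value decomposition as follows; the metric-upgrade matrix H of step 208 (obtained from the constraints of Aanæs et al.) is not shown:

```python
import numpy as np

def affine_factorize(W):
    """Factor the scale measurement matrix as W ~ M_hat @ S_hat,
    with M_hat (2F x 3, affine motion) and S_hat (3 x P, affine shape)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_s = np.sqrt(s[:3])
    M_hat = U[:, :3] * sqrt_s              # distribute the singular values evenly
    S_hat = sqrt_s[:, None] * Vt[:3, :]
    return M_hat, S_hat
```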
Please continue to refer to FIG. 1 and FIG. 2A. After step 202 (202d) is completed, step 210 is performed: among the feature points of the two-dimensional images, the surgical tool 28 carrying the sensor 26 is used to pick, in the observation region, at least four corresponding feature points that correspond to feature points of the two-dimensional images, and the world coordinate values of these corresponding feature points are obtained through a conversion step. The origin of the world coordinate system in which these world coordinate values are expressed is the origin defined by the tracker device 20 (in this embodiment, the transmitter 21 is this origin). Because the sensor 26 is fixed at the handle end of the surgical tool 28, the coordinate picked by the tip of the surgical tool 28 is not the same as the world coordinate sensed by the sensor; to obtain the world coordinates of the corresponding feature points, the conversion between the coordinates picked by the tool tip and the world coordinates sensed by the sensor must therefore be determined. The coordinate system of the coordinates picked by the tip of the surgical tool 28 is defined as the sensor coordinate system, which is the world coordinate system with its origin moved to the sensor 26. Suppose the conversion between the sensor coordinate system and the world coordinate system is q_world = A·q_sensor, where q_sensor is the sensor-coordinate vector of a sensing point picked arbitrarily in space with the surgical tool 28, that is, the vector from the origin of the sensor coordinate system to the sensing point; q_world is the world-coordinate vector of the sensing point, that is, the vector from the origin of the world coordinate system to the sensing point; and the matrix A represents the conversion between the sensor coordinate system and the world coordinate system, named here the sensor coordinate conversion matrix. From the tracker device 20, the three-dimensional position t of the sensor 26 at this sensing point and its rotation matrix R are known, so the sensor coordinate conversion matrix is A = [ R  t ; 0  1 ]. Furthermore, because the sensor 26 is fixed at the handle end of the surgical tool 28, the position of the tool tip relative to the sensor 26 is fixed: for any point in space picked with the tool tip, its sensor coordinate value q_sensor is the same fixed value. It follows that, once this fixed q_sensor is determined, the sensor coordinate conversion matrix A provided by the tracker system allows any sensing point picked with the tip of the surgical tool 28 to be expressed in the world coordinate system.
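A minimal sketch of the sensor coordinate conversion matrix A = [R t; 0 1] of step 210, applied in homogeneous coordinates; R and t stand for the rotation and position reported by the tracker:

```python
import numpy as np

def sensor_to_world(R, t, q_sensor):
    """q_world = A * q_sensor with A = [R t; 0 1]."""
    A = np.eye(4)
    A[:3, :3] = R
    A[:3, 3] = t
    q = np.append(np.asarray(q_sensor, dtype=float), 1.0)   # homogeneous point
    return (A @ q)[:3]

def world_to_sensor(R, t, q_world):
    """Inverse mapping; used to recover the fixed tool-tip offset (the third coordinate value)."""
    d = np.asarray(q_world, dtype=float) - np.asarray(t, dtype=float)
    return np.asarray(R, dtype=float).T @ d                  # R is a rotation, so R^-1 = R^T
```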

In practice, the sensing point picked with the surgical tool 28 is also picked directly with the sensor 26 to obtain its world coordinate value (the second coordinate value); then, from the sensor coordinate conversion matrix A corresponding to this sensing point, provided by the tracker system (the first transformation matrix), the fixed sensor coordinate value of the tool tip (the third coordinate value) is computed. Accordingly, for every corresponding feature point, its sensor coordinate conversion matrix (the second transformation matrix) together with the fixed sensor coordinate value (the third coordinate value) converts the sensor coordinate value of the corresponding feature point picked with the surgical tool 28 (the fourth coordinate value) into a world coordinate value (the fifth coordinate value).

In step 212, the coordinate values of these corresponding feature points in the decomposed coordinate system (the sixth coordinate values) are obtained from the scale-abnormal three-dimensional model.

In step 214, the world coordinate values of the corresponding feature points and their decomposed-coordinate-system values are compared and computed to obtain a three-by-three coordinate system transformation matrix G (the third transformation matrix). The feature points S and the endoscope motion parameter matrix M computed in the Euclidean reconstruction step are then converted from the decomposed coordinate system to the world coordinate system by S' = G·S and M' = M·G^-1, so that the world-coordinate three-dimensional model satisfies W = M'·S', where the product of M' and S' still equals the scale measurement matrix W. S' is the set of feature point coordinate values in the world coordinate system (the seventh coordinate value) and has the same scale as the real-world objects, and M' is the endoscope motion parameter matrix in the world coordinate system (the second motion parameter matrix). To estimate the coordinate system transformation matrix G, it is assumed that a feature point coordinate in the decomposed coordinate system can be converted into its world coordinate by a displacement followed by multiplication with the matrix G, where G is a 3x3 matrix and the displacement is a translation vector. After the Euclidean reconstruction, the world coordinates of the corresponding feature points measured with the tracker and their coordinates estimated in the decomposed coordinate system are substituted into this relation to obtain a set of simultaneous equations, from which the matrix G and the displacement are solved and then applied to the whole reconstruction. Besides the coordinate conversion, this transformation also serves to correct the difference between the reconstructed shape and the actual shape; the affine transformation assumes that the shape correction of every point can be represented by a single global transformation, although the appropriate correction may differ in different regions of space. If the rank of the computed coordinate system transformation matrix is not three, trajectories with tracking errors have not been filtered out, and the step of filtering unreliable trajectories (202d) must be performed again.

In step 216, the matrix S' collecting the world coordinates of the feature points is expressed as

    S'(:,i) = q_wi - T      if q_wi exists
            = G · S(:,i)    otherwise    (7)

where q_wi is the world coordinate of the i-th feature point measured with the surgical tool, T is the translation obtained in step 214, and S(:,i) is the i-th column of S. With the feature points S' and the endoscope motion parameter matrix M' of the world coordinate system, the value of every precision factor ζ_ij in the scale measurement matrix W is updated, and the scale measurement matrix W is updated accordingly. In addition, since W = M·S = M'·S', there are two sets of precision factor values, corresponding to (M, S) and (M', S') respectively; the set with the smaller back-projection error is taken.

In step 218, if the difference between the updated values and the values before the update is smaller than the preset threshold, the accuracy of the world-coordinate three-dimensional model meets the requirement, and the three-dimensional image of the world-coordinate three-dimensional model, which has normal scale, is obtained. If the difference between the precision factors after and before the update is larger than the preset threshold, the procedure returns to step 206, the affine reconstruction is performed again with the updated scale measurement matrix W, and this loop is repeated until the difference between the updated and previous values of the precision factors is smaller than the preset threshold. This repeated updating of the world-coordinate three-dimensional model is called the iterative method; step 210 needs to be performed only once, that is, the world coordinate values of the corresponding feature points required by step 212 are the same in every iteration.
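One common least-squares formulation for the transform of step 214 is sketched below, written as world ≈ G·S + T (which differs from the displacement-then-multiplication ordering in the text only by a reparametrization of the translation); at least four non-coplanar correspondences are assumed:

```python
import numpy as np

def estimate_transform(S_pts, world_pts):
    """Estimate G (3x3) and T (3,) from matching columns of S_pts and world_pts (both 3 x N)."""
    S_pts = np.asarray(S_pts, dtype=float)
    world_pts = np.asarray(world_pts, dtype=float)
    S_mean = S_pts.mean(axis=1, keepdims=True)
    w_mean = world_pts.mean(axis=1, keepdims=True)
    S_c, w_c = S_pts - S_mean, world_pts - w_mean
    # minimize ||w_c - G @ S_c||_F  ->  G = w_c S_c^T (S_c S_c^T)^-1
    G = w_c @ S_c.T @ np.linalg.inv(S_c @ S_c.T)
    T = (w_mean - G @ S_mean).ravel()
    return G, T
```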
During surgery, the lesion may be occluded by other tissue or may fall outside the imaging range, so the feature points of the observation region may be distributed unevenly, and the reconstructed three-dimensional model may lack feature points in important regions for describing their geometry. On the other hand, when the three-dimensional coordinates of feature points are picked with the surgical tool 28, the tool must be placed exactly on the feature points to avoid errors; if points can be picked arbitrarily, without being restricted to the previously extracted feature points, the efficiency of picking increases greatly, geometric detail is added, and the reconstruction accuracy is improved. Such arbitrarily picked additional feature points are incorporated into the reconstruction flow by the following steps. After the three-dimensional reconstruction, the additional feature points are projected back onto the two-dimensional images using the endoscope motion parameter matrix M' of the world coordinate system, forming new trajectories (u'_ij, v'_ij), which are merged with the original feature trajectories (u_ij, v_ij) to obtain an augmented scale measurement matrix

    augW = [ W | W_add ],  of size 2F x (P + N_s)    (8)

in which W_add holds the centered reprojected coordinates of the additional feature points, arranged in the same way as in equation (6).

Here N_s is the number of additional feature points. The augmented scale measurement matrix augW is then processed again by the affine reconstruction, the Euclidean reconstruction and the iterative update described above to reconstruct the three-dimensional model, so that the accuracy of the model is improved.

The embodiments, objects and features of the present invention have been described above. It will be apparent to those skilled in the art that many modifications and variations can be made without departing from the spirit of the invention, and the scope of the invention is defined by the appended claims.

[Brief Description of the Drawings]
To make the above and other objects, features and advantages of the present invention more apparent, a preferred embodiment is described in detail below together with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the system for reconstructing three-dimensional endoscopic images of the present invention;
FIG. 2A is a flow chart of the method for reconstructing three-dimensional endoscopic images according to a preferred embodiment of the present invention;
FIG. 2B is a flow chart of the feature extraction and tracking step according to a preferred embodiment of the present invention; and
FIG. 2C is a schematic diagram of the step of filtering unreliable trajectories according to a preferred embodiment of the present invention.

[Description of Main Reference Numerals]
20: tracker device; 21: transmitter; 22: tracker host; 24: computer; 26: sensor; 28: surgical tool; 30: endoscope device; 32: endoscope host; 34: endoscope; 200: image deformation correction; 202: feature tracking and extraction; 202a: endoscopic image pre-processing; 202b: detection of edge features and corner features; 202c: feature tracking; 202d: filtering unreliable trajectories; 204: building the scale measurement matrix; 206: affine reconstruction; 208: Euclidean reconstruction; 210: obtaining the world coordinate values of the corresponding feature points; 212: obtaining the decomposed-coordinate-system coordinate values of the corresponding feature points; 214: computing the coordinate system transformation matrix, the points in the world coordinate system and the image capture device motion parameter matrix; 216: updating the value of every precision factor in the scale measurement matrix and obtaining the updated scale measurement matrix; 218: checking whether the difference between the updated and previous precision factors is smaller than the preset threshold

Claims (1)

X. Claims

1. A method for reconstructing three-dimensional images, at least comprising:
obtaining a two-dimensional image by an endoscope;
performing a feature tracking and a feature extraction step on the two-dimensional image to obtain a plurality of feature points of the two-dimensional image;
performing a step of filtering unreliable trajectories on the feature points to remove feature points that were tracked incorrectly in the feature tracking and extraction step;
performing a step of affine reconstruction on a scale measurement matrix (W) composed of the trajectories of the feature points, using a method of affine reconstruction, to construct a distorted three-dimensional model;
performing a three-dimensional model correction step on the distorted three-dimensional model, using a method of Euclidean reconstruction, to construct a scale-abnormal three-dimensional model and to obtain a first motion parameter matrix (M) and a first coordinate value (S);
using a sensor to arbitrarily pick a sensing point in space to obtain a second coordinate value;
fixing the sensor on a surgical tool and picking the sensing point with the surgical tool, so as to fix the relative positional relationship among the tracker, the sensor and the sensing point, and obtaining a first transformation matrix provided by the tracker;
computing a third coordinate value using the first transformation matrix and the second coordinate value;
arbitrarily picking, with the surgical tool, at least four corresponding feature points among the feature points to obtain a plurality of fourth coordinate values and a plurality of second transformation matrices;
converting the fourth coordinate values into a plurality of fifth coordinate values using the second transformation matrices and the third coordinate value;
locating the corresponding feature points in the scale-abnormal three-dimensional model to obtain a plurality of sixth coordinate values;
performing a comparison and computation step on the sixth coordinate values and the fifth coordinate values to obtain a third transformation matrix (G);
using the third transformation matrix (G) to convert the first coordinate value (S) into a seventh coordinate value (S'), and using the inverse of the third transformation matrix (G) to convert the first motion parameter matrix (M) into a second motion parameter matrix (M'), so as to obtain a world-coordinate three-dimensional model;
computing a plurality of precision factors in the scale measurement matrix (W) using the second motion parameter matrix (M') and the seventh coordinate value (S'), and updating the precision factors and the scale measurement matrix (W); and
comparing the values of the precision factors before the update and after the update, wherein the three-dimensional image of the world-coordinate three-dimensional model is obtained if the difference is smaller than a preset threshold, and the updated scale measurement matrix (W) is fed back to the affine reconstruction step if the difference is larger than the preset threshold, until the difference between the values of the precision factors before and after the update is smaller than the preset threshold.

2. The method of claim 1, further at least comprising:
arbitrarily picking, with the sensor, a plurality of additional feature points in a region to be observed, and projecting the additional feature points onto the two-dimensional images to obtain a plurality of additional feature point trajectories; and
merging the additional feature point trajectories into the scale measurement matrix (W) so that the scale measurement matrix (W) becomes a feature point augmented matrix (augW), and constructing the distorted three-dimensional model from the feature point augmented matrix (augW) by the method of affine reconstruction.

3. The method of claim 1, further at least comprising:
adding, before performing the feature tracking and extraction step, an image deformation correction step to correct the image deformation produced by a wide-angle lens, the image deformation correction step at least comprising:
performing intensity normalization on the two-dimensional image;
performing binarization on the intensity-normalized two-dimensional image with a high-pass filter and a thresholding method to distinguish foreground from background, and extracting a plurality of centroids of the patterns in the two-dimensional image;
filling the holes caused by the binarization of the binarized two-dimensional image with the closing operation of morphology in image processing; and
correcting the deformed image from a plurality of straight lines formed by the centroids, using a method of computing a linearity error.

4. The method of claim 1, wherein the feature extraction step is performed with the Kanade-Lucas-Tomasi (KLT) feature tracking method and the edge feature extraction method.

5. The method of claim 1, wherein the feature extraction step is performed with one of the Kanade-Lucas-Tomasi (KLT) feature tracking method and the edge feature tracking method.

6. The method of claim 1, wherein the step of filtering unreliable trajectories at least comprises:
comparing a feature point under test among the feature points in a first two-dimensional image, a second two-dimensional image and a third two-dimensional image to obtain a first feature point position of the feature point under test in the first two-dimensional image, a second feature point position and a third feature point position;
taking the vector represented by the first feature point position to the second feature point position as a first vector;
taking the vector represented by the second feature point position to the third feature point position as a second vector; and
taking the angle between the second vector and the first vector as a test angle, wherein the feature point under test is filtered out if the value of the test angle is larger than a preset threshold.

7. The method of claim 3, wherein the method of computing the linearity error is the linearity-error computation for straight lines proposed by C. Brauer-Burchardt.

8. The method of claim 1, wherein the third transformation matrix (G) is a three-by-three matrix.
TW95127782A 2006-07-28 2006-07-28 Method and system for reconstructing 3-d endoscopic images TWI323425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW95127782A TWI323425B (en) 2006-07-28 2006-07-28 Method and system for reconstructing 3-d endoscopic images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW95127782A TWI323425B (en) 2006-07-28 2006-07-28 Method and system for reconstructing 3-d endoscopic images

Publications (2)

Publication Number Publication Date
TW200807309A TW200807309A (en) 2008-02-01
TWI323425B true TWI323425B (en) 2010-04-11

Family

ID=44766620

Family Applications (1)

Application Number Title Priority Date Filing Date
TW95127782A TWI323425B (en) 2006-07-28 2006-07-28 Method and system for reconstructing 3-d endoscopic images

Country Status (1)

Country Link
TW (1) TWI323425B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9066086B2 (en) 2010-12-08 2015-06-23 Industrial Technology Research Institute Methods for generating stereoscopic views from monoscopic endoscope images and systems using the same
TWI711010B (en) * 2019-11-29 2020-11-21 財團法人成大研究發展基金會 Geometric camera calibration system and method
TWI764593B (en) * 2021-02-26 2022-05-11 國立清華大學 Method of constructing three-dimensional model by tomographic reconstruction technology

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI678679B (en) * 2018-07-09 2019-12-01 財團法人資訊工業策進會 Space coordinate converting server and method thereof
TWI811162B (en) * 2022-11-22 2023-08-01 達擎股份有限公司 Video data conversion device

Also Published As

Publication number Publication date
TW200807309A (en) 2008-02-01

Similar Documents

Publication Publication Date Title
CN107920722B (en) Reconstruction by object detection for images captured from a capsule camera
CN102473300B (en) multi-modality breast imaging
Bergen et al. Stitching and surface reconstruction from endoscopic image sequences: a review of applications and methods
CN101133431B (en) Method for registering biomedical images with reduced imaging artifacts caused by object movement
KR101967357B1 (en) Method and apparatus for isolating a potential anomaly in imaging data and its application to medical imagery
CN106991694B (en) Based on marking area area matched heart CT and ultrasound image registration method
CN105246409B (en) Image processing apparatus and image processing method
CN102737395B (en) Image processing method and device in a kind of medical X-ray system
WO2017027638A1 (en) 3d reconstruction and registration of endoscopic data
CN109791692A (en) Computer aided detection is carried out using the multiple images of the different perspectives from area-of-interest to improve accuracy in detection
CN110381841B (en) Clamp for medical imaging and using method thereof
JP2016506260A (en) Markerless tracking of robotic surgical instruments
Lin et al. Simultaneous tracking, 3D reconstruction and deforming point detection for stereoscope guided surgery
US8515006B2 (en) Fiducial systems for mammography
CN103942772A (en) Multimodal multi-dimensional blood vessel fusion method and system
TWI323425B (en) Method and system for reconstructing 3-d endoscopic images
Mwikirize et al. Learning needle tip localization from digital subtraction in 2D ultrasound
Zheng et al. Multi-part left atrium modeling and segmentation in C-arm CT volumes for atrial fibrillation ablation
Phan et al. Optical flow-based structure-from-motion for the reconstruction of epithelial surfaces
Lin et al. Efficient vessel feature detection for endoscopic image analysis
CN110464462B (en) Image navigation registration system for abdominal surgical intervention and related device
Park et al. Recent development of computer vision technology to improve capsule endoscopy
CN108090954A (en) Abdominal cavity environmental map based on characteristics of image rebuilds the method with laparoscope positioning
Gschwandtner et al. Experimental study on the impact of endoscope distortion correction on computer-assisted celiac disease diagnosis
Alam et al. Evaluation of medical image registration techniques based on nature and domain of the transformation

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees