TW201211937A - Human face matching system and method thereof - Google Patents

Human face matching system and method thereof

Info

Publication number
TW201211937A
TW201211937A (application TW99131265A)
Authority
TW
Taiwan
Prior art keywords
image
unit
mobile
image capturing
capturing unit
Prior art date
Application number
TW99131265A
Other languages
Chinese (zh)
Inventor
Shih-Tseng Lee
Jiann-Der Lee
Chung-Hsien Huang
Che-Shiang Huang
Hui-Yuan Hsieh
Original Assignee
Univ Chang Gung
Priority date
Filing date
Publication date
Application filed by Univ Chang Gung filed Critical Univ Chang Gung
Priority to TW99131265A priority Critical patent/TW201211937A/en
Publication of TW201211937A publication Critical patent/TW201211937A/en

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a human face matching (registration) system and a method thereof. The method comprises the following steps: disposing a marker board around an object; providing a dual image capturing module having a left image capturing unit and a right image capturing unit; capturing a left image and a right image of the object with the left and right image capturing units, respectively; capturing a real-time image with a mobile image capturing unit; matching the left and right images to form a 3D image and, with a processing unit, calculating a plurality of transformation matrices between any two of the above elements, a medical image, and the real space; registering the 3D image with the medical image using a registration algorithm; and, once registration is finished, analyzing the position of the marker board in the real-time image with the processing unit to determine the relative position between the mobile image capturing unit and the marker board, compositing the medical image and the real-time image into a composite image according to the transformation matrices and the relative position, and displaying the composite image on a display unit.

Description

[Technical Field]

[0001] The present invention relates to a registration system and a method thereof, and more particularly to a human face registration system and a method thereof.

[Prior Art]

[0002] With the advance of medical surgery, image-guided surgery has become a common surgical aid in recent years. It registers and integrates the medical images taken of a patient before surgery with the patient's physical position in three-dimensional space, for example to localize a tumor, confirm functional brain regions, relate a tumor to the surrounding anatomy, or precisely confirm the position and size of residual tumor. This helps the surgeon guide the instruments to the correct lesion, avoids risky exploratory surgery, and makes the procedure minimally invasive.

[0003] In clinical practice, the comparison between spatial position and image data usually relies on a stereotactic head frame or on markers that can be imaged in the preoperative medical images, which serve as the basis for registering the patient with the image space. However, a stereotactic frame limits the freedom of operation, fixing it to the patient's head for a long time easily causes discomfort and infection, and markers slip off easily and are readily occluded during surgery, hindering the surgical staff. Techniques that replace the frame or the markers have therefore been developed: a spatial sampling device captures surface information of the patient, such as the facial or hand contour, the captured surface information is analyzed to obtain biometric features, and these features are registered with the surface contour extracted from the preoperative medical images, thereby integrating the spatial information. The spatial sampling device commonly used in the past was an electromagnetic instrument, but it was gradually abandoned because the way it acquires biometric features is easily disturbed by the ambient magnetic field. Some studies acquire the patient's surface information in real space with a laser scanner or a projection device, and in recent years real-space point clouds have mainly been acquired by tracking reflective spheres with infrared cameras; such devices are not limited by the environment and can accurately obtain the spatial positions of the spheres, which helps the landmarks to be registered quickly. With the development of digital imaging, however, acquiring surface information with optical cameras has gradually become the mainstream.

[0004] Moreover, in recent years augmented reality (AR) technology has been combined with display technology to integrate the patient with the preoperative data and present them in the real environment, which has become a popular research topic. In view of this, the inventors sought to avoid the high cost of purchasing the aforementioned laser scanner or projection device and to relieve the discomfort, burden and operational inconvenience of attaching a stereotactic head frame or markers to the patient. They therefore designed a human face registration system and method that combines augmented reality technology to remedy the shortcomings of the prior art and to promote its industrial application.

[Summary of the Invention]

[0005] In view of the above problems of the prior art, an object of the present invention is to provide a human face registration system and a method thereof that give the surgeon an augmented-reality surgical environment and achieve precise, minimally invasive surgery.

[0006] According to an object of the present invention, a human face registration system is proposed, which comprises a dual image capturing module, a mobile image capturing unit, a marker board, a processing unit and a display unit. The dual image capturing module comprises a left image capturing unit and a right image capturing unit, which capture a left image and a right image of an object, respectively. The mobile image capturing unit captures a real-time image of the object. The marker board is disposed around the object. The processing unit is connected to the dual image capturing module, the mobile image capturing unit and the display unit. After receiving the left, right and real-time images, the processing unit matches the left image with the right image to form a three-dimensional image, computes a plurality of transformation matrices between corresponding pairs among the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object and the real space, and registers the three-dimensional image with the medical image using a registration algorithm. When registration is finished, the processing unit analyzes the position of the marker board in the real-time image to determine the relative position between the mobile image capturing unit and the marker board, composites the medical image with the real-time image according to the transformation matrices and the relative position to generate a composite image, transmits the composite image to the display unit, and controls the display unit to display it.
[0007] According to an object of the present invention, a human face registration system is further proposed, which comprises a dual image capturing module, a mobile device, a marker board, a processing unit and a display unit. The dual image capturing module comprises a left image capturing unit and a right image capturing unit, which capture a left image and a right image of an object, respectively. The mobile device comprises a mobile image capturing unit, which captures a real-time image of the object, and a mobile projection unit, which projects a medical image of the object. The marker board is disposed around the object. The processing unit is connected to the dual image capturing module, the mobile device and the display unit. After receiving the left, right and real-time images, the processing unit matches the left image with the right image to form a three-dimensional image, computes a plurality of transformation matrices between corresponding pairs among the dual image capturing module, the mobile image capturing unit, the marker board, the medical image and the real space, and registers the three-dimensional image with the medical image using a registration algorithm. When registration is finished, the processing unit analyzes the position of the marker board in the real-time image to determine the relative position between the mobile image capturing unit and the marker board, and the mobile projection unit projects part of the medical image onto the object according to the transformation matrices and the relative position.

[0008] According to an object of the present invention, a human face registration method is further proposed, which is applicable to a human face registration system and comprises: disposing a marker board around an object; providing a dual image capturing module comprising a left image capturing unit and a right image capturing unit; capturing a left image of the object with the left image capturing unit; capturing a right image of the object with the right image capturing unit; matching the left image with the right image by a processing unit to form a three-dimensional image; providing a mobile image capturing unit and capturing a real-time image of the object through it; computing, with the processing unit, a plurality of transformation matrices between corresponding pairs among the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object and the real space; registering the three-dimensional image with the medical image using a registration algorithm; when registration is finished, analyzing the position of the marker board in the real-time image with the processing unit to determine the relative position between the mobile image capturing unit and the marker board; compositing the medical image with the real-time image by the processing unit according to the transformation matrices and the relative position to generate a composite image; and displaying the composite image through a display unit.

[0009] According to an object of the present invention, another human face registration method is proposed, which is applicable to a human face registration system and comprises: disposing a marker board around an object; providing a dual image capturing module comprising a left image capturing unit and a right image capturing unit; capturing a left image of the object with the left image capturing unit; capturing a right image of the object with the right image capturing unit; providing a mobile device comprising a mobile image capturing unit and a mobile projection unit, and capturing a real-time image of the object through the mobile image capturing unit; matching the left image with the right image by a processing unit to form a three-dimensional image; computing, with the processing unit, a plurality of transformation matrices between corresponding pairs among the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object and the real space; registering the three-dimensional image with the medical image using a registration algorithm; when registration is finished, analyzing the position of the marker board in the real-time image with the processing unit to determine the relative position between the mobile image capturing unit and the marker board; and projecting part of the medical image onto the object with the mobile projection unit according to the transformation matrices and the relative position.

[0010] As described above, the human face registration system and method of the present invention may have one or more of the following advantages:

[0011] (1) The system and method composite the medical image with the real-time image by the processing unit to generate a composite image shown on the display unit, giving the surgeon an augmented-reality surgical environment and reducing the need for risky exploratory surgery.

[0012] (2) The system and method can project part of the medical image onto the object with the mobile projection unit according to the transformation matrices and the relative position, fusing the medical image with the real object and giving the surgeon an augmented-reality surgical environment for precise, minimally invasive surgery.
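The "plurality of transformation matrices" above are simply rigid transforms between the coordinate systems introduced in the embodiments below. A minimal bookkeeping sketch follows; it is not part of the patent text, and the 4x4 homogeneous-matrix representation and the T_dst_from_src naming are illustrative assumptions only.

```python
# Minimal sketch (assumed, not the patent's code): each transformation matrix is a
# 4x4 homogeneous transform between two named frames.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert(T):
    """Invert a rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Frames used in the embodiments: IMG (medical image), SCAM (stereo module),
# DCAM (mobile camera), AR (marker board), REF (world / real space).
# In one possible convention, the matrices 31..35 of FIG. 3 then read:
#   T_ref_from_scam (31), T_dcam_from_ar (32), T_ref_from_ar (33), T_scam_from_img (34),
#   T_ref_from_img = T_ref_from_scam @ T_scam_from_img  (35).
```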

[Embodiments]

[0013] Embodiments of the human face registration system according to the present invention are described below with reference to the related drawings. For ease of understanding, identical elements in the following embodiments are labeled with identical reference numerals.

[0014] Please refer to FIG. 1, a block diagram of the first embodiment of the human face registration system of the present invention. The human face registration system 1 comprises a dual image capturing module 10, a mobile image capturing unit 130, a marker board 12, a processing unit 13 and a display unit 14; the processing unit 13 is connected to the dual image capturing module 10, the mobile image capturing unit 130 and the display unit 14. The dual image capturing module 10 comprises a left image capturing unit 110 and a right image capturing unit 120. The dual image capturing module 10 and the mobile image capturing unit 130 may each be a camera, an infrared camera, or a combination thereof, and the invention does not limit the number of mobile image capturing units 130. The marker board 12 is disposed around an object 2, such as a human face. Before the human face registration system 1 is operated, a medical image capturing unit 3, such as a magnetic resonance scanner or a computed tomography scanner, images the object 2 to produce a medical image 30. The medical image 30 may be a computed tomography (CT) image, a nuclear magnetic resonance computed tomography (NMR-CT) image, a magnetic resonance imaging (MRI) image or a nuclear magnetic resonance imaging (NMRI) image.

[0015] For clarity, the medical image capturing unit 3 need not be electrically connected to the processing unit 13; the medical image 30 may also be transmitted to the processing unit 13 over a network, by wireless communication, or by a portable storage device such as a flash drive or a portable hard disk.

[0016] The human face registration system 1 uses the dual image capturing module 10 to imitate human binocular parallax and thereby derive depth. The left image capturing unit 110 and the right image capturing unit 120 simultaneously capture a left image 111 and a right image 121 of the object 2; the distance between the two units imitates the parallax of human eyes and produces a sense of spatial distance, and the processing unit 13 matches the left image 111 with the right image 121 to derive a three-dimensional image 122. The mobile image capturing unit 130 dynamically captures a real-time image 131 of the object 2; it may be a camera worn on the surgeon's head, so that real-time images 131 are captured from whatever angle the surgeon is looking. The processing unit 13 then computes the transformation matrices between corresponding pairs among the dual image capturing module 10, the mobile image capturing unit 130, the marker board 12, the medical image 30 and the real space, analyzes the three-dimensional image 122 and the medical image 30 to extract a first facial point cloud and a second facial point cloud, respectively, and registers the three-dimensional image 122 with the medical image 30 using a registration algorithm. When the registration is finished, the processing unit 13 analyzes the position of the marker board 12 in the real-time image 131: using image processing techniques such as image binarization and feature point detection, it locates the corner points of the marker board 12 in the real-time image 131 and thereby obtains the position and viewing angle of the mobile image capturing unit 130 relative to the marker board 12, that is, the relative position between the mobile image capturing unit 130 and the marker board 12. According to the transformation matrices and this relative position, it composites the medical image 30 with the real-time image 131 to generate a composite image 132, transmits the composite image 132 to the display unit 14, and controls the display unit 14 to display it. The display unit 14 may be any device capable of displaying images, such as a display screen, a touch screen, a video see-through head-mounted display or an optical see-through head-mounted display.

[0017] Please refer to FIG. 2 and FIG. 3, a schematic view of the first embodiment and a schematic view of the relations between its coordinate systems. As shown in FIG. 2, the object is a human face 20; the processing unit may be a computer 23, a central processing unit or a microprocessor; and the left image capturing unit, the right image capturing unit and the mobile image capturing unit are implemented with cameras, named the left camera 210, the right camera 220 and the mobile camera 22, respectively, although the invention does not limit their types or capturing methods. As shown in FIG. 3, the system establishes five coordinate systems: the medical image coordinate system C_IMG, the dual image capturing module coordinate system C_scam, the mobile image capturing unit coordinate system C_dcam, the marker board coordinate system C_AR, and the world coordinate system C_ref, where C_ref is defined on the real space.

[0018] As shown in FIG. 2, a marker board 12 is first placed around the face 20, and the left camera 210 and the right camera 220 capture the left and right images of the face 20. After receiving them, the computer 23 analyzes the left image with a corner detection method to extract a number of representative facial feature points, such as the eyebrow corners, eye corners, pupils, nose tip, nostrils or mouth corners, computes a similarity measure by cross-correlation to search for the corresponding matching points in the right image, and then reconstructs the three-dimensional coordinates of each feature point in space to form a three-dimensional image.
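The corner detection plus cross-correlation matching of [0018] could look roughly like the following sketch. It is not the patent's code: it assumes OpenCV, rectified stereo images, and 3x4 projection matrices P_left and P_right obtained from a prior stereo calibration.

```python
# Rough sketch of stereo feature matching and triangulation (illustrative assumptions only).
import cv2
import numpy as np

def reconstruct_face_points(img_left, img_right, P_left, P_right, patch=15, search=80):
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    # 1) Corner-like feature points in the left image (eye corners, nostrils, ...).
    corners = cv2.goodFeaturesToTrack(gray_l, maxCorners=200, qualityLevel=0.01, minDistance=10)
    if corners is None:
        return np.empty((0, 3))
    pts_l, pts_r = [], []
    half = patch // 2

    for cx, cy in corners.reshape(-1, 2):
        x, y = int(cx), int(cy)
        if y - half < 0 or y + half + 1 > gray_l.shape[0] or x - half < 0 or x + half + 1 > gray_l.shape[1]:
            continue
        tmpl = gray_l[y - half:y + half + 1, x - half:x + half + 1]

        # 2) Normalized cross-correlation search along the same row band of the right image.
        x0 = max(0, x - search)
        band = gray_r[y - half:y + half + 1, x0:x + half + 1]
        if band.shape[0] < patch or band.shape[1] < patch:
            continue
        res = cv2.matchTemplate(band, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score < 0.8:
            continue
        pts_l.append([x, y])
        pts_r.append([x0 + loc[0] + half, y])   # rectified images assumed: same row

    if not pts_l:
        return np.empty((0, 3))

    # 3) Triangulate matched points into 3D coordinates of the calibration frame.
    pts_l = np.float32(pts_l).T                 # 2xN
    pts_r = np.float32(pts_r).T
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)   # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                 # Nx3 facial point cloud
```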
[0019] As mentioned above, the human face registration system of the present invention places the world coordinate system C_ref within the field of view of the dual image capturing module. Camera calibration yields the actual position of each feature point in C_ref and also the transformation between the dual image capturing module coordinate system C_scam and the world coordinate system C_ref, giving a first transformation matrix 31 (see FIG. 3).

[0020] Camera calibration is the process of determining the intrinsic and extrinsic parameters of a camera. The intrinsic parameters describe the transformation between the camera coordinates and the image coordinates; commonly used intrinsic parameters include the image position of the lens projection center (the principal point), the pixel aspect ratio, the focal length, and the lens distortion parameters. If the internal mechanism and the lens of the camera do not change, the intrinsic parameters are fixed and independent of where the camera is placed; with a zoom lens, however, intrinsic parameters such as the focal length change with the zoom setting. The extrinsic parameters describe the transformation between the camera coordinates and the world coordinates; commonly used extrinsic parameters are the position and viewing direction of the camera in three-dimensional space, expressed as a rotation matrix and a translation vector, so the extrinsic parameters must be re-estimated whenever the camera is moved.
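For reference, a hedged sketch of the calibration step described in [0019]-[0020], using a planar checkerboard and OpenCV's calibrateCamera; the board dimensions and square size are illustrative assumptions, not values taken from the patent.

```python
# Illustrative intrinsic/extrinsic calibration sketch (assumed details, not the patent's procedure).
import cv2
import numpy as np

def calibrate(images, board=(9, 6), square=25.0):
    """Return camera matrix K, distortion coefficients, and one extrinsic (R, t) per view."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square   # mm, plane z = 0

    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                       (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)

    # Intrinsics (K, dist) stay fixed for a fixed lens; extrinsics (rvecs, tvecs) change
    # whenever the camera is moved and must then be re-estimated, as noted in [0020].
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, [(cv2.Rodrigues(r)[0], t) for r, t in zip(rvecs, tvecs)]
```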
[0021] In addition, to obtain the position of the mobile camera 22 in the world coordinate system C_ref, the computer 23 uses image binarization and feature point detection to locate the four corner points of the marker board 12 in the real-time image captured by the mobile camera 22, and from them estimates the extrinsic parameters of the mobile camera 22. This gives the position and viewing angle of the mobile camera 22 relative to the marker board 12, that is, the transformation between the marker board coordinate system C_AR and the mobile image capturing unit coordinate system C_dcam, which is a second transformation matrix 32 (see FIG. 3).

[0022] Since the marker board 12 carries the marker board coordinate system C_AR and is placed within the field of view of the dual image capturing module, the computer 23 can also compute the transformation between the marker board coordinate system C_AR and the world coordinate system C_ref, which is a third transformation matrix 33 (see FIG. 3).
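One plausible way to turn the four detected marker-board corners of [0021] into the second transformation matrix (C_AR to C_dcam) is a planar perspective-n-point solve. This sketch assumes OpenCV, a square marker of known edge length, and corners listed in a fixed order matching the model points; none of these details are specified in the patent.

```python
# Illustrative marker-board pose estimation sketch (assumed, not the patent's code).
import cv2
import numpy as np

def marker_pose(corners_px, K, dist, board_size=100.0):
    """corners_px: 4x2 pixel corners in a fixed order; board_size: marker edge length in mm."""
    s = board_size / 2.0
    # Corner coordinates in the marker-board frame C_AR (board plane z = 0), same order as corners_px.
    obj = np.float32([[-s, -s, 0], [s, -s, 0], [s, s, 0], [-s, s, 0]])
    ok, rvec, tvec = cv2.solvePnP(obj, np.float32(corners_px), K, dist)
    if not ok:
        raise RuntimeError("marker pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)                       # T_dcam_from_ar: maps points in C_AR into the mobile-camera frame
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T
```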
[0023] Building on the above, the present invention further brings augmented reality (AR) technology into the surgical environment. First, the computer 23 captures the left and right images of the face through the dual image capturing module, matches them, and reconstructs the three-dimensional image of the face; using feature point detection it computes the first facial point cloud of the three-dimensional image in the world coordinate system C_ref, obtains the second facial point cloud from the medical image, and registers the two point clouds with the Iterative Closest Point (ICP) algorithm. When the registration is finished, the computer 23 can obtain the transformation between the medical image coordinate system C_IMG and the dual image capturing module coordinate system C_scam, giving a fourth transformation matrix 34 (see FIG. 3). Combining it with the first transformation matrix 31 yields the transformation between the medical image coordinate system C_IMG and the world coordinate system C_ref, a fifth transformation matrix 35 (see FIG. 3), from which the position of the medical image in the world coordinate system C_ref is known. As shown in FIG. 2, the computer 23 then composites the medical image with the real-time image according to the above transformation matrices and the relative position, generates a composite image 132, transmits it to the display unit 14, and controls the display unit 14 to display it. In short, the human face registration system 1 first registers the medical image to the patient's position in the world coordinate system C_ref, captures the patient's real-time image with the mobile camera 22, and uses image compositing to merge the medical image with the real-time image, giving the surgeon 21 an augmented-reality surgical environment in which to guide the instruments to the correct lesion and achieve minimally invasive surgery.

[0024] Please refer to FIGS. 4a, 4b and 4c, which are front, top and side views of part of the medical image displayed in the real scene according to the first embodiment of the present invention. Here the display unit is implemented as a display screen 44 showing the result of fusing part of the preoperative medical image 30 with the real-time image 131 captured by the mobile camera, in front, top and side views. The surgeon may wear a head-mounted camera to capture the real-time image 131 of the face 20; as shown in FIGS. 4a, 4b and 4c, the real-time image 131 contains the marker board 12. The processing unit determines the relative position between the head-mounted camera and the marker board 12 by analyzing the position of the marker board 12 in the real-time image 131, composites the real-time image 131 with the medical image 30 according to the transformation matrices and the relative position to generate a composite image 132, and controls the display screen 44 to display it, so that the surgeon can diagnose or operate on the lesion more precisely.
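The ICP registration named in [0023] can be sketched as follows. It assumes NumPy/SciPy, roughly pre-aligned point clouds (for example the stereo facial point cloud and a skin surface extracted from the CT/MRI), and a point-to-point error metric; the patent does not specify these implementation details.

```python
# Compact point-to-point ICP sketch (illustrative assumptions only).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(A, B):
    """Least-squares rigid transform mapping point set A (Nx3) onto B (Nx3), via Kabsch/SVD."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iters=50, tol=1e-6):
    """Register source (Nx3) to target (Mx3); returns the accumulated 4x4 transform."""
    tree = cKDTree(target)
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)             # closest-point correspondences
        R, t = best_rigid(src, target[idx])     # best rigid fit for these correspondences
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:           # stop when the mean residual stops improving
            break
        prev_err = err
    return T
```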
[0025] Although the concept of the human face registration method of the present invention has already been explained while describing the registration system, flowcharts are given below for clarity.

[0026]-[0037] Please refer to FIG. 5, a flowchart of the first embodiment of the human face registration method of the present invention. It comprises the following steps:

S50: dispose a marker board around an object;
S51: provide a dual image capturing module comprising a left image capturing unit and a right image capturing unit;
S52: capture the left/right image of the object with the left/right image capturing unit, respectively;
S53: match the left/right images with a processing unit to form a three-dimensional image;
S54: compute the transformation matrices between corresponding pairs among the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object and the real space;
S55: capture a real-time image of the object through a mobile image capturing unit;
S56: register the three-dimensional image with the medical image using a registration algorithm; when the registration is finished, proceed to S57, otherwise return to S55;
S57: analyze the position of the marker board in the real-time image with the processing unit to determine the relative position between the mobile image capturing unit and the marker board;
S58: composite the medical image with the real-time image by the processing unit according to the transformation matrices and the relative position to generate a composite image; and
S59: display the composite image on the display unit.

[0038] Please refer to FIG. 6, a block diagram of the second embodiment of the human face registration system of the present invention. Its architecture is largely the same as the first embodiment; the difference is that the human face registration system 1 further comprises a mobile device 11, which comprises a mobile image capturing unit 130 and a mobile projection unit 140. The mobile projection unit 140 projects part of the medical image 30 onto the object 2 according to the transformation matrices and the relative position, thereby fusing the medical image 30 with the real object 2. Those skilled in the art may adjust the projection range and the number of mobile projection units 140 as design convenience dictates.
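The compositing of step S58 can be approximated by chaining the transformation matrices of FIG. 3 and projecting registered surface points of the medical image into the mobile camera. The following is only an illustrative sketch: the point-splatting overlay, the OpenCV calls and the parameter names are assumptions, not the patent's implementation.

```python
# Illustrative overlay sketch for step S58 (assumed details; frame names follow FIG. 3,
# using a T_dst_from_src convention for the 4x4 transforms).
import cv2
import numpy as np

def composite(frame, med_pts_img, T_ref_from_img, T_ref_from_ar, T_dcam_from_ar,
              K_dcam, dist, color=(0, 0, 255), alpha=0.5):
    """frame: real-time image 131; med_pts_img: Nx3 surface points in the medical-image frame C_IMG."""
    # Chain: C_IMG -> C_ref (matrix 35) -> C_AR (inverse of matrix 33) -> C_dcam (matrix 32).
    T_dcam_from_img = T_dcam_from_ar @ np.linalg.inv(T_ref_from_ar) @ T_ref_from_img

    pts_h = np.hstack([med_pts_img, np.ones((len(med_pts_img), 1))])        # Nx4 homogeneous
    pts_dcam = (T_dcam_from_img @ pts_h.T).T[:, :3]
    pts_dcam = pts_dcam[pts_dcam[:, 2] > 0]                                 # keep points in front of camera

    px, _ = cv2.projectPoints(pts_dcam.astype(np.float32),
                              np.zeros((3, 1)), np.zeros((3, 1)), K_dcam, dist)  # already in camera frame
    overlay = frame.copy()
    h, w = frame.shape[:2]
    for u, v in px.reshape(-1, 2).astype(int):
        if 0 <= u < w and 0 <= v < h:
            cv2.circle(overlay, (u, v), 1, color, -1)
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)             # composite image 132
```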
[0039] Please refer to FIG. 3 and FIG. 7; FIG. 7 is a schematic view of the second embodiment of the human face registration system of the present invention. As shown in FIG. 7, the mobile projection unit is implemented as an optical see-through head-mounted display 70 and the mobile image capturing unit as a mobile camera 22, although the invention does not limit their types or arrangement. The human face registration system 1 first registers the medical image to the patient's position in the world coordinate system C_ref, captures the real-time image of the patient's face 20 through the optical see-through head-mounted display 70, and, using image compositing, projects part of the medical image 30 onto the face 20 with the projection device 71 built into the optical see-through head-mounted display 70, combining the medical image 30 with the real face 20. The surgeon 21 can therefore view part or all of the medical image 30 projected onto the face 20 from different viewing angles, experiences a visually augmented reality, easily finds the correct position of the lesion, and can operate on, diagnose or treat it accurately.

[0040]-[0050] Please refer to FIG. 8, a flowchart of the second embodiment of the human face registration method of the present invention. It comprises the following steps:

S80: dispose a marker board around an object;
S81: provide a dual image capturing module comprising a left image capturing unit and a right image capturing unit;
S82: capture the left/right image of the object with the left/right image capturing unit, respectively;
S83: match the left/right images with a processing unit to form a three-dimensional image;
S84: compute, with the processing unit, the transformation matrices between corresponding pairs among the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object and the real space;
S85: provide a mobile device comprising a mobile image capturing unit and a mobile projection unit, and capture a real-time image of the object through the mobile image capturing unit;
S86: register the three-dimensional image with the medical image using a registration algorithm; when the registration is finished, proceed to S87, otherwise return to S85;
S87: analyze the position of the marker board in the real-time image with the processing unit to determine the relative position between the mobile image capturing unit and the marker board; and
S88: project part of the medical image onto the object according to the transformation matrices and the relative position.

[0051] The detailed functions and operations of the elements in the above steps are the same as described for the human face registration system of the present invention and are not repeated here.

[0052] In summary, the human face registration system and method of the present invention compute the transformations between the different coordinate systems to obtain a plurality of transformation matrices, and thereby either composite the medical image with the real-time image of the object into a composite image or project the medical image onto the real object. This gives the surgeon an augmented-reality surgical environment, removes the discomfort, burden and operational inconvenience caused by attaching a stereotactic head frame or markers to the patient in the prior art, avoids the high cost of purchasing a laser scanner or projection device, and achieves precise, minimally invasive surgery.

[0053] The above description is exemplary only and not limiting. Any equivalent modification or change that does not depart from the spirit and scope of the present invention shall be included in the scope of the appended claims.

[Brief Description of the Drawings]

[0054]
FIG. 1 is a block diagram of the first embodiment of the human face registration system of the present invention.
FIG. 2 is a schematic view of the first embodiment of the human face registration system of the present invention.
FIG. 3 is a schematic view of the relations between the coordinate systems of the first embodiment of the human face registration system of the present invention.
FIG. 4a is a front view of part of the medical image displayed in the real scene according to the first embodiment of the present invention.
FIG. 4b is a top view of part of the medical image displayed in the real scene according to the first embodiment of the present invention.
FIG. 4c is a side view of part of the medical image displayed in the real scene according to the first embodiment of the present invention.
FIG. 5 is a flowchart of the first embodiment of the human face registration method of the present invention.
FIG. 6 is a block diagram of the second embodiment of the human face registration system of the present invention.
FIG. 7 is a schematic view of the second embodiment of the human face registration system of the present invention.
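For the optical see-through variant of [0039], the projection device 71 can be modeled, purely for illustration, as an inverse pinhole camera with its own intrinsics and pose; the patent gives no projector model, so every name and parameter below is an assumption.

```python
# Illustrative sketch of computing the image fed to projection device 71 so that the
# registered medical surface points land on the real face (assumed projector model).
import cv2
import numpy as np

def projector_image(med_pts_img, T_ref_from_img, T_proj_from_ref, K_proj, size=(1280, 720)):
    """med_pts_img: Nx3 points in C_IMG; K_proj: assumed 3x3 projector intrinsics."""
    w, h = size
    canvas = np.zeros((h, w, 3), np.uint8)
    T_proj_from_img = T_proj_from_ref @ T_ref_from_img          # C_IMG -> projector frame
    pts_h = np.hstack([med_pts_img, np.ones((len(med_pts_img), 1))])
    pts = (T_proj_from_img @ pts_h.T).T[:, :3]
    pts = pts[pts[:, 2] > 0]                                     # keep points in front of the projector
    uv = (K_proj @ (pts / pts[:, 2:3]).T).T[:, :2]               # pinhole projection, no distortion
    for u, v in uv.astype(int):
        if 0 <= u < w and 0 <= v < h:
            cv2.circle(canvas, (u, v), 1, (0, 0, 255), -1)       # light up pixels hitting the surface
    return canvas
```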

FIG. 8 is a flowchart of the second embodiment of the human face registration method of the present invention.

[Description of Reference Numerals]

[0055]
1: human face registration system
10: dual image capturing module
11: mobile device
110: left image capturing unit
111: left image
120: right image capturing unit
121: right image
122: three-dimensional image
12: marker board
13: processing unit
130: mobile image capturing unit
131: real-time image
132: composite image
14: display unit
140: mobile projection unit
2: object
20: human face
21: surgeon
210: left camera
220: right camera
22: mobile camera
23: computer
3: medical image capturing unit
30: medical image
31: first transformation matrix
32: second transformation matrix
33: third transformation matrix
34: fourth transformation matrix
35: fifth transformation matrix
C_IMG: medical image coordinate system
C_scam: dual image capturing module coordinate system
C_dcam: mobile image capturing unit coordinate system
C_AR: marker board coordinate system
C_ref: world coordinate system
44: display screen
70: optical see-through head-mounted display
71: projection device
S50-S59, S80-S88: method steps

Claims (18)

201211937 七、申請專利範圍: 1 . 一種人臉對位系統,其包含: 一雙影像擷取模組,包含一左影像擷取單元與一右影像擷 取單元,該左影像擷取單元係擷取一物件之一左影像,該 右影像擷取單元係擷取該物件之一右影像; 一移動式影像擷取單元,係擷取該物件之一即時影像; 一標記板’設置於該物件周圍; 一處理單元,連接該雙影像擷取模組與該移動式影像擷取 ^ 單元,係於接收該左影像、該右影像與該即時影像後,匹 配該左影像與該右影像而形成一三維影像,計算該雙影像 擷取模組、該移動式影像擷取單元、該樣記板、該物件之 一醫學影像與一真實空間相應兩者之間的複數個轉換矩陣 ,並利用對位演算法進行該三維影像與該醫學影像的對 位田70成對位後,該處理單元即分析該即時影像中該標 記板之位置,以判斷該移動式影像擷取¥元與該標記板之 -相對位置關係,並根據該複數個轉換矩陣與該相對位置 _係將該醫學影像與該即時影像進行合成,產生—合成影 像;以及 "頁示單元,連接該處理單元,係顯示該合成影像。 2 .如申明專利範圍第1項所述之人臉對位系統,其中該左影 像操取單疋、該右影像操取單元及該移動式影像操取單元 係為-攝影機或—紅外線攝影機的其中之—或其組合。 3 .如申"月專利範圍第1項所述之人臉對位系統,其中該對位 廣算法包括"'疊代最近點(Iterative Closest Point ICP)演算法。 ’ 099131265 表單編號ΑΟίοι 第21頁/共31頁 0992054810-0 201211937 4 ·如申請專利範圍第1項所述之人臉對位系統,W亥醫風 J^^-t„^^(c〇mputer Tom〇graph;;;T) 核磁共振電腦斷層掃描(Nuclear Magnetic ance Computer Tomography, NMR-CT)今像 :=:顯影(一lc Res_ce、…二 3核磁共振顯影(Nuclear Magnetic Resonance Imaging,NMRI)影像。 5 .—種人臉對位系統,其包含: ^影像擷取模組,包含—左影像擷取單元與—右影像擁 早元忒左影像禅取單元镍擷取一物件支—左影像該 右知像擷取單元係擷取該物件之一右影像; 5 一=動式裝置’包含—移動式影像娜單元與—移動式投 射早凡’該移動式影像掏取單元係梅取該物件之一即時影 像,該移動式投射單元係投射該物件之一醫學影像; 一標記板,設置於該物件周圍;以及 一處理單元,連接該雙影像擷取輪組與該移動式裝置,係 於接收該絲像、該料像與料_雜,匹配該左影 像與該右影像而形成一三維影像,許算該雙影像擷取模組 、該移動式影像擷取單元、該標記板、該醫學影像與一真 實空間相應兩者之間的複數個轉換矩陣,益利用一對位演 算法進行该二維影像與該醫學影像的對位,當完成對位後 ,該處理單元即分析該即時影像中該標記板之位置,以判 斷該移動式影像操取單元與該標記板之一相對位置關係; 其中該移動式投射單元係根據該複數個轉換矩陣與該相對 位置關係投射部份該醫學影像於該物件上。 6 .如申請專利範圍第5項所述之人臉對位系統,其中該左影 第22頁/共31頁 表單煸號Α0101 099131265 0992054810-0 201211937 像棵取單元、該右影像擷取單元及該移動式影像擷取單元 係為一攝影機或一紅外線攝影機的其中之一或其組合。 .如申S青專利範圍第5項所述之人臉對位系統,其中該對位 演算法包括一疊代最近點(Iterat ive Closest Point, ICP)演算法。 .如申請專利範圍第5項所述之人臉對位系統,其中該醫學 影像包含一電腦斷層掃描(C〇mpUtei· Tomography, CT) ’衫像 核磁共振電腦斷層掃描(Nuclear Magnetic Resonance Computer Tomography,NMR-CT)影像、 Ο ο 099131265 一磁共振顯影(M a g n e t i c R e son an c e I ώ a g i n g,M RI) 影像或一核磁共振顯影(Nuc丨ear Magnet ic Resonance Imaging,NMRI)影像。 .一種人臉對位方法,其包含下列步驟:;: 設置一標記板於一物件周圍; 提供一雙影像擷取模組,其包含一左影像擷取單元及一右 影像擷取單元; 以S亥左景彡像掏取單元擷取該物件之一左影像; 以該右影像擷取革元擷取該物件之一右影像; 透過一移動式影像擷取單元擷取該物件之一即時影像; 利用處理單元匹§&該左影像與該右影像而形成一三維影 像; 以該處理單元計算該雙影像擷取模組、該移動式影像操取 單元、該標記板、該物件之一醫學影像與一真實空間相應 兩者之間的複數個轉換矩陣; 利用-對位演算法進行該三維影像與該醫學影像的對位; 以該處理單元分析該即時影像中該標記板之位置,以判斷 0992054810-0 表單編號A0101 第23頁/共31頁 201211937 Λ移動式影像掏取單元與該標記板之—相對位置關係; 根據4硬數個轉換矩陣與該相對位置關係,以該處理單元 合成該醫學影像與該即時影像,產生-合成影像;以及 透過該顯示單元顯示該合成影像。 1〇 ·如申請專利範圍第9項所述之人臉對位方法,其中利用該 處理單元匹配該左影像與該右影像之步驟中更包含下列 步驟: 藉由-角點仙i法仙該左影像之複數個特徵點;以及 以該處理單元利用一交互相關性(Cross C〇rre丨at ion) 方法計算一相似性關係,搜尋該右影像之複數個匹配點, ^ 進而形成該三維影像。 一 11 ·如申請專利範圍第9項所述之人臉對位方法,其中該左影 像擷取單元'該右影像擷取單元及該移動式影像擷取單元 係為一攝影機或一紅外線攝影機的其中之一或其組合。 12 .如申請專利範圍第9項所述之人臉對位方法,其中該對位 演算法包括一疊代最近點(iterative Clbsest p〇int, I CP)演算法。 13 ·如申請專利範圍第9項所述之人臉對位方法其中該醫學 (I 景夕像包含一電腦斷層掃描(C〇mpUter Tomography,CT) 影像、一核磁共振電腦斷層掃描(Nuciear Magnetic Resonance Computer Tomography,NMR-CT)影像、 一磁共振顯影(Magnetic Resonance Imaging, MRI) 影像或一核磁共振顯影(Nuclear Magnetic Resonance Imaging,NMRI)影像。 14 · 一種人臉對位方法’其包含下列步驟: 099131265 設置一標記板於一物件周圍; 表單編號A0101 第24頁/共31頁 0992054810-0 201211937 提供-雙影像齡模組,其包含_左影賴取單元及一右 影像擷取單元; 以該左影像擷取單元擷取該物件之一左影像; 以該右影像擷取單元擷取該物件之一右影像; 提供-移動式裝置,其包含―移動式影像擷取單元與一移 動式投射單元; 透過該移動式影像擷取單元擷取該物件之一即時影像; 利用-處理單元匹配該左影像與該右影像而形成_三維影 像; ^ '' :::. 組、該移動式影像擷取 單元、該標記板、該物件之—料料與—真實空間相應 兩者之間的複數個轉換矩陣; 利用-對位演算法進行該三_彡像與卿學影像的對位; 以該處理單it分析該即時影像中該標_之位置,藉此判 斷該移動式影像擷取單元與該標記板之二相敕位置關係 以及201211937 VII. Patent application scope: 1. 
1. A human face matching system, comprising:
a dual image capturing module, comprising a left image capturing unit and a right image capturing unit, the left image capturing unit capturing a left image of an object, and the right image capturing unit capturing a right image of the object;
a mobile image capturing unit, capturing a real-time image of the object;
a marker board, disposed around the object;
a processing unit, connected to the dual image capturing module and the mobile image capturing unit, wherein, after receiving the left image, the right image and the real-time image, the processing unit matches the left image and the right image to form a three-dimensional image, calculates a plurality of transformation matrices between any two of the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object, and a real space, and registers the three-dimensional image with the medical image using a registration algorithm; when the registration is completed, the processing unit analyzes the position of the marker board in the real-time image to determine a relative positional relationship between the mobile image capturing unit and the marker board, and composites the medical image and the real-time image according to the plurality of transformation matrices and the relative positional relationship to generate a composite image; and
a display unit, connected to the processing unit, displaying the composite image.

2. The human face matching system of claim 1, wherein the left image capturing unit, the right image capturing unit and the mobile image capturing unit are each a camera, an infrared camera, or a combination thereof.

3. The human face matching system of claim 1, wherein the registration algorithm comprises an Iterative Closest Point (ICP) algorithm.

4. The human face matching system of claim 1, wherein the medical image comprises a Computed Tomography (CT) image, a Nuclear Magnetic Resonance Computed Tomography (NMR-CT) image, a Magnetic Resonance Imaging (MRI) image, or a Nuclear Magnetic Resonance Imaging (NMRI) image.

5. A human face matching system, comprising:
a dual image capturing module, comprising a left image capturing unit and a right image capturing unit, the left image capturing unit capturing a left image of an object, and the right image capturing unit capturing a right image of the object;
a mobile device, comprising a mobile image capturing unit and a mobile projection unit, the mobile image capturing unit capturing a real-time image of the object, and the mobile projection unit projecting a medical image of the object;
a marker board, disposed around the object; and
a processing unit, connected to the dual image capturing module and the mobile device, wherein, after receiving the left image, the right image and the real-time image, the processing unit matches the left image and the right image to form a three-dimensional image, calculates a plurality of transformation matrices between any two of the dual image capturing module, the mobile image capturing unit, the marker board, the medical image, and a real space, and registers the three-dimensional image with the medical image using a registration algorithm; when the registration is completed, the processing unit analyzes the position of the marker board in the real-time image to determine a relative positional relationship between the mobile image capturing unit and the marker board;
wherein the mobile projection unit projects a portion of the medical image onto the object according to the plurality of transformation matrices and the relative positional relationship.

6. The human face matching system of claim 5, wherein the left image capturing unit, the right image capturing unit and the mobile image capturing unit are each a camera, an infrared camera, or a combination thereof.

7. The human face matching system of claim 5, wherein the registration algorithm comprises an Iterative Closest Point (ICP) algorithm.

8. The human face matching system of claim 5, wherein the medical image comprises a Computed Tomography (CT) image, a Nuclear Magnetic Resonance Computed Tomography (NMR-CT) image, a Magnetic Resonance Imaging (MRI) image, or a Nuclear Magnetic Resonance Imaging (NMRI) image.

9. A human face matching method, comprising the following steps:
disposing a marker board around an object;
providing a dual image capturing module comprising a left image capturing unit and a right image capturing unit;
capturing a left image of the object with the left image capturing unit;
capturing a right image of the object with the right image capturing unit;
capturing a real-time image of the object with a mobile image capturing unit;
matching the left image and the right image with a processing unit to form a three-dimensional image;
calculating, with the processing unit, a plurality of transformation matrices between any two of the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object, and a real space;
registering the three-dimensional image with the medical image using a registration algorithm;
analyzing, with the processing unit, the position of the marker board in the real-time image to determine a relative positional relationship between the mobile image capturing unit and the marker board;
compositing, with the processing unit, the medical image and the real-time image according to the plurality of transformation matrices and the relative positional relationship to generate a composite image; and
displaying the composite image on a display unit.

10. The human face matching method of claim 9, wherein the step of matching the left image and the right image with the processing unit further comprises the following steps:
detecting a plurality of feature points of the left image by a corner detection method; and
calculating, with the processing unit, a similarity relationship by a cross-correlation method to search for a plurality of matching points in the right image, thereby forming the three-dimensional image.

11. The human face matching method of claim 9, wherein the left image capturing unit, the right image capturing unit and the mobile image capturing unit are each a camera, an infrared camera, or a combination thereof.

12. The human face matching method of claim 9, wherein the registration algorithm comprises an Iterative Closest Point (ICP) algorithm.

13. The human face matching method of claim 9, wherein the medical image comprises a Computed Tomography (CT) image, a Nuclear Magnetic Resonance Computed Tomography (NMR-CT) image, a Magnetic Resonance Imaging (MRI) image, or a Nuclear Magnetic Resonance Imaging (NMRI) image.

14. A human face matching method, comprising the following steps:
disposing a marker board around an object;
providing a dual image capturing module comprising a left image capturing unit and a right image capturing unit;
capturing a left image of the object with the left image capturing unit;
capturing a right image of the object with the right image capturing unit;
providing a mobile device comprising a mobile image capturing unit and a mobile projection unit, and capturing a real-time image of the object with the mobile image capturing unit;
matching the left image and the right image with a processing unit to form a three-dimensional image;
calculating, with the processing unit, a plurality of transformation matrices between any two of the dual image capturing module, the mobile image capturing unit, the marker board, a medical image of the object, and a real space;
registering the three-dimensional image with the medical image using a registration algorithm;
analyzing, with the processing unit, the position of the marker board in the real-time image to determine a relative positional relationship between the mobile image capturing unit and the marker board; and
projecting, with the mobile projection unit, a portion of the medical image onto the object according to the plurality of transformation matrices and the relative positional relationship.

15. The human face matching method of claim 14, wherein the step of matching the left image and the right image with the processing unit further comprises the following steps:
detecting a plurality of feature points of the left image by a corner detection method; and
calculating, with the processing unit, a similarity relationship by a cross-correlation method to search for a plurality of matching points in the right image, thereby forming the three-dimensional image.

16. The human face matching method of claim 14, wherein the left image capturing unit, the right image capturing unit and the mobile image capturing unit are each a camera, an infrared camera, or a combination thereof.

17. The human face matching method of claim 14, wherein the registration algorithm comprises an Iterative Closest Point (ICP) algorithm.

18. The human face matching method of claim 14, wherein the medical image comprises a Computed Tomography (CT) image, a Nuclear Magnetic Resonance Computed Tomography (NMR-CT) image, a Magnetic Resonance Imaging (MRI) image, or a Nuclear Magnetic Resonance Imaging (NMRI) image.
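The method claims above describe the pipeline only as functional steps: stereo feature matching, ICP registration to the medical image, marker-board pose estimation, and compositing or projection. The sketch below is a hypothetical illustration of how such steps could be wired together with off-the-shelf tools; it is not the patented implementation. It assumes OpenCV, NumPy and SciPy, pre-calibrated left/right/mobile cameras with known projection matrices and intrinsics, and substitutes concrete choices where the claims leave details open: Shi-Tomasi corner detection for the unspecified corner detector, normalized cross-correlation for the similarity measure, and a plain point-to-point ICP. All function names, parameters, and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of the pipeline in claims 9-13 (not the patented
# implementation). Assumes calibrated left/right/mobile cameras with
# OpenCV-style 3x4 projection matrices and intrinsics.
import cv2
import numpy as np
from scipy.spatial import cKDTree


def stereo_face_points(img_l, img_r, P_l, P_r, patch=11, min_score=0.9):
    """Claims 10/15: detect corner features in the left image, match them in
    the right image by normalized cross-correlation, and triangulate."""
    corners = cv2.goodFeaturesToTrack(img_l, maxCorners=500,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return np.empty((0, 3))
    pts_l, pts_r = [], []
    h = patch // 2
    for x, y in corners.reshape(-1, 2).astype(int):
        tmpl = img_l[y - h:y + h + 1, x - h:x + h + 1]
        band = img_r[y - h:y + h + 1, :]           # same-row search band
        if tmpl.shape != (patch, patch) or band.shape[0] != patch:
            continue
        res = cv2.matchTemplate(band, tmpl, cv2.TM_CCORR_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score >= min_score:
            pts_l.append([x, y])
            pts_r.append([loc[0] + h, y])
    if not pts_l:
        return np.empty((0, 3))
    X = cv2.triangulatePoints(P_l, P_r,
                              np.float32(pts_l).T, np.float32(pts_r).T)
    return (X[:3] / X[3]).T                        # N x 3 surface points


def icp(src, dst, iters=50, tol=1e-6):
    """Claims 3/7/12/17: basic point-to-point ICP. Returns a 4x4 transform
    mapping the stereo surface points (src) onto surface points extracted
    from the medical image (dst)."""
    T, cur, prev = np.eye(4), src.copy(), np.inf
    tree = cKDTree(dst)
    for _ in range(iters):
        dist, idx = tree.query(cur)
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                               # accumulate src -> dst
        cur = cur @ R.T + t
        if abs(prev - dist.mean()) < tol:
            break
        prev = dist.mean()
    return T


def composite_frame(frame, med_pts, board_obj, board_img, K, dist_coef,
                    T_med_to_board):
    """Claims 1/9: estimate the marker-board pose in the mobile camera's
    real-time image, then project medical-image points into the frame to
    build the composite image."""
    ok, rvec, tvec = cv2.solvePnP(board_obj, board_img, K, dist_coef)
    if not ok:
        return frame
    pts_h = np.c_[med_pts, np.ones(len(med_pts))]  # to marker-board coords
    pts_board = (T_med_to_board @ pts_h.T).T[:, :3].astype(np.float32)
    proj, _ = cv2.projectPoints(pts_board, rvec, tvec, K, dist_coef)
    out = frame.copy()
    for u, v in proj.reshape(-1, 2).astype(int):
        cv2.circle(out, (int(u), int(v)), 1, (0, 255, 0), -1)
    return out
```

The point-to-point ICP is used here only for clarity; the claims leave the registration variant open beyond requiring an ICP-style algorithm, and a point-to-plane formulation, or a projector model in place of `composite_frame` for the projection claims 14-18, could be substituted without changing the overall flow.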
TW99131265A 2010-09-15 2010-09-15 Human face matching system and method thereof TW201211937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99131265A TW201211937A (en) 2010-09-15 2010-09-15 Human face matching system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99131265A TW201211937A (en) 2010-09-15 2010-09-15 Human face matching system and method thereof

Publications (1)

Publication Number Publication Date
TW201211937A true TW201211937A (en) 2012-03-16

Family

ID=46764477

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99131265A TW201211937A (en) 2010-09-15 2010-09-15 Human face matching system and method thereof

Country Status (1)

Country Link
TW (1) TW201211937A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI489419B (en) * 2012-05-10 2015-06-21 Htc Corp Method, apparatus and computer program product for image registration and display
TWI578269B (en) * 2015-12-14 2017-04-11 財團法人工業技術研究院 Method for suturing 3d coordinate information and the device using the same
CN106886990A (en) * 2015-12-14 2017-06-23 财团法人工业技术研究院 Three-dimensional coordinate stitching method and three-dimensional coordinate information stitching device applying same

Similar Documents

Publication Publication Date Title
JP7455847B2 (en) Aligning the reference frame
TWI678181B (en) Surgical guidance system
JP6463038B2 (en) Image alignment apparatus, method and program
US10881353B2 (en) Machine-guided imaging techniques
US20140022283A1 (en) Augmented reality apparatus
CN103948361B (en) Endoscope's positioning and tracing method of no marks point and system
CN107105972A (en) Model register system and method
CN107049489B (en) A kind of operation piloting method and system
CN108701170A (en) Image processing system and method for three-dimensional (3D) view for generating anatomic part
CN103948432A (en) Algorithm for augmented reality of three-dimensional endoscopic video and ultrasound image during operation
KR20200013984A (en) Device of providing 3d image registration and method thereof
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
CN113197666A (en) Device and system for surgical navigation
JP2023526716A (en) Surgical navigation system and its application
WO2001057805A2 (en) Image data processing method and apparatus
Richey et al. Soft tissue monitoring of the surgical field: detection and tracking of breast surface deformations
US10102638B2 (en) Device and method for image registration, and a nontransitory recording medium
TWI697317B (en) Digital image reality alignment kit and method applied to mixed reality system for surgical navigation
KR20160057024A (en) Markerless 3D Object Tracking Apparatus and Method therefor
TW201211937A (en) Human face matching system and method thereof
WO2014104357A1 (en) Motion information processing system, motion information processing device and medical image diagnosis device
US10049480B2 (en) Image alignment device, method, and program
US20230355319A1 (en) Methods and systems for calibrating instruments within an imaging system, such as a surgical imaging system
CN213758609U (en) Medical projection device
US11670013B2 (en) Methods, systems, and computing platforms for photograph overlaying utilizing anatomic body mapping