TW202222270A - Surgery assistant system and related surgery assistant method - Google Patents


Info

Publication number
TW202222270A
Authority
TW
Taiwan
Prior art keywords: augmented reality, surgical, axis, reality scene, signal
Application number: TW109142490A
Other languages: Chinese (zh)
Other versions: TWI750930B (en)
Inventor
宋開泰
何紫瑄
Original Assignee
國立陽明交通大學
Application filed by 國立陽明交通大學
Priority to TW109142490A (granted as TWI750930B)
Application granted
Publication of TWI750930B
Publication of TW202222270A

Landscapes

  • Manipulator (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A surgery assistant system includes an algorithm module, a monitoring module, a signal conversion unit and a surgical robot unit. The algorithm module generates an augmented reality scene and an augmented reality object according to an image of a physical target, and further displays a three-dimensional path indicator in the augmented reality scene. The monitoring module includes a display device and a control device; the display device displays the augmented reality scene, including the augmented reality object and the three-dimensional path indicator. The control device is operated to control the moving path of the three-dimensional path indicator in the augmented reality scene so as to generate a motion trajectory. The signal conversion unit converts the motion trajectory into a motion signal. The surgical robot unit receives and executes the motion signal to perform surgical actions on the physical target along the motion trajectory.

Description

Surgical assistance system and related surgical assistance method

The present invention relates to a monitoring system and method, and in particular to a surgical assistance system and method.

In general surgery, medical images are usually captured by X-ray in order to formulate the surgical plan; during the operation, these images can be shown on a display near the surgeon. However, highly complex operations require a real-time grasp of the patient's condition on site, for which this practice is clearly insufficient. Because the medical image and the actual surgical site lie in different fields of view, the surgeon must repeatedly compare the position of the surgical instrument in the medical image with its position in the actual operating field in order to cut correctly, and cannot stay focused on a single field of view, which causes great inconvenience. For example, pedicle screw placement is a highly demanding implantation procedure that treats the condition by implanting pedicle screws into the spine, and during implantation even a slight angular error can severely affect the outcome of the operation.

In summary, a novel surgical assistance system and a related surgical assistance method are needed to improve the stability of surgical operations and avoid the above-mentioned human errors.

The purpose of the present invention is to provide a surgical assistance system suitable for surgery, which solves the above-mentioned problem of human error through augmented reality (AR) technology and a surgical robot unit.

To achieve the above purpose, an embodiment of the present invention provides a surgical assistance system including an algorithm module, a monitoring module, a signal conversion unit and a surgical robot unit. The algorithm module generates an augmented reality scene and an augmented reality object according to a physical target; the augmented reality object simulates the target and is displayed in the augmented reality scene, and the algorithm module further displays a three-dimensional path indicator in the augmented reality scene. The monitoring module includes a display device and a control device, wherein the display device displays the augmented reality scene including the augmented reality object and the three-dimensional path indicator. The control device is operated to control the moving path of the three-dimensional path indicator in the augmented reality scene so as to generate a motion trajectory. The signal conversion unit converts the motion trajectory into a first motion signal. The surgical robot unit receives and executes the first motion signal to perform surgical actions on the physical target along the motion trajectory.

In an embodiment of the present invention, the algorithm module generates a user interface, and the display device displays the user interface and the augmented reality scene simultaneously. The user interface is controlled by the control device and is used to adjust the spatial parameters of the three-dimensional path indicator so as to determine the motion trajectory.

In an embodiment of the present invention, the spatial parameters include adjustment fields for the X-axis coordinate, Y-axis coordinate, Z-axis coordinate, rotation angle about the X axis, rotation angle about the Y axis and rotation angle about the Z axis of the three-dimensional path indicator, which respectively control the three-dimensional path indicator to move forward and backward along the X axis, left and right along the Y axis, up and down along the Z axis, and to rotate about the X axis, the Y axis and the Z axis.
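The six adjustment fields above can be modeled as a simple pose record. The following Python sketch is illustrative only; the class, field and method names are assumptions and do not come from the patent:

```python
from dataclasses import dataclass

@dataclass
class IndicatorPose:
    """Spatial parameters of the three-dimensional path indicator."""
    x: float = 0.0      # translation along X (forward/backward)
    y: float = 0.0      # translation along Y (left/right)
    z: float = 0.0      # translation along Z (up/down)
    roll: float = 0.0   # rotation about X, degrees
    pitch: float = 0.0  # rotation about Y, degrees
    yaw: float = 0.0    # rotation about Z, degrees

    def adjust(self, field: str, delta: float) -> None:
        """Apply one increment from the corresponding UI adjustment field."""
        setattr(self, field, getattr(self, field) + delta)

# Example: the operator nudges the indicator 5 mm along X and 2 degrees about Z.
pose = IndicatorPose()
pose.adjust("x", 5.0)
pose.adjust("yaw", 2.0)
```

Each UI field maps to exactly one attribute, so the accumulated pose fully determines the motion trajectory endpoint.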

In an embodiment of the present invention, the control device is further operated to send an interrupt signal to the surgical robot unit to disable the surgical robot unit from executing the first motion signal.

In an embodiment of the present invention, after the motion trajectory is generated, if the surgical robot unit has not yet executed the first motion signal or has been interrupted while executing it, the algorithm module further generates a candidate three-dimensional path indicator, which is displayed in the augmented reality scene together with the three-dimensional path indicator. The control device is further operated to control the moving path of the candidate three-dimensional path indicator in the augmented reality scene so as to generate a candidate motion trajectory; the signal conversion unit receives the candidate motion trajectory and converts it into a second motion signal; and the surgical robot unit executes the second motion signal to perform the surgical action along the candidate motion trajectory.
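The interrupt-then-replan behaviour in this embodiment can be sketched as a small state machine; the class and method names below are illustrative assumptions, not the patent's implementation:

```python
class SurgicalRobotUnit:
    """Minimal sketch of the execute/interrupt behaviour described above."""

    def __init__(self):
        self.enabled = True   # execution is allowed until an interrupt arrives
        self.executed = []    # motion signals actually carried out

    def interrupt(self):
        # Interrupt signal from the control device disables execution.
        self.enabled = False

    def execute(self, motion_signal):
        # A disabled unit refuses the motion signal.
        if not self.enabled:
            return False
        self.executed.append(motion_signal)
        return True

robot = SurgicalRobotUnit()
robot.interrupt()               # operator aborts before the first motion signal runs
first_ok = robot.execute("S1")  # refused: the unit is disabled
robot.enabled = True            # a candidate trajectory has been confirmed
second_ok = robot.execute("S2") # the second motion signal is executed instead
```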

In an embodiment of the present invention, according to the first motion signal, the surgical robot unit approaches or moves away from the physical target, resects at least a part of the physical target, or implants an implant into the physical target.

In an embodiment of the present invention, the augmented reality scene simulates, but does not include, the real scene in which the target is located.

In an embodiment of the present invention, the augmented reality scene includes the real scene in which the target is located.

In an embodiment of the present invention, the control device is further operated to change the viewing angle of the augmented reality scene; when the viewing angle of the augmented reality scene changes, the viewing angles of the augmented reality object and of the three-dimensional path indicator are recomputed by the algorithm module and change correspondingly.

In an embodiment of the present invention, the display device is a head-mounted display, and the control device is a portable controller coupled to the head-mounted display.

In an embodiment of the present invention, the display device is a liquid crystal display, and the control device is an input device coupled to the liquid crystal display.

In addition to the above surgical assistance system, another embodiment of the present invention provides a surgical assistance method, including: the algorithm module generates an augmented reality scene and an augmented reality object according to a physical target, and further displays a three-dimensional path indicator in the augmented reality scene, wherein the augmented reality object simulates the target and is displayed in the augmented reality scene; the augmented reality scene, including the augmented reality object and the three-dimensional path indicator, is displayed on a display device; a control device is operated to control the moving path of the three-dimensional path indicator in the augmented reality scene so as to generate a motion trajectory; a signal conversion unit converts the motion trajectory into a first motion signal; and a surgical robot unit receives and executes the first motion signal to perform surgical actions on the physical target along the motion trajectory.

In an embodiment of the present invention, the algorithm module generates a user interface, and the display device displays the user interface and the augmented reality scene simultaneously. The user interface is controlled by the control device and is used to adjust the spatial parameters of the three-dimensional path indicator so as to determine the motion trajectory.

In an embodiment of the present invention, the spatial parameters include adjustment fields for the X-axis coordinate, Y-axis coordinate, Z-axis coordinate, rotation angle about the X axis, rotation angle about the Y axis and rotation angle about the Z axis of the three-dimensional path indicator, which respectively control the three-dimensional path indicator to move forward and backward along the X axis, left and right along the Y axis, up and down along the Z axis, and to rotate about the X axis, the Y axis and the Z axis.

In an embodiment of the present invention, the method further includes the following step: the control device sends an interrupt signal to the surgical robot unit to disable the surgical robot unit from executing the first motion signal.

In an embodiment of the present invention, after the motion trajectory is generated, if the surgical robot unit has not yet executed the first motion signal or has been interrupted while executing it, the surgical assistance method further executes the following steps: the algorithm module generates a candidate three-dimensional path indicator, which is displayed in the augmented reality scene together with the three-dimensional path indicator; the control device is operated to control the moving path of the candidate three-dimensional path indicator in the augmented reality scene so as to generate a candidate motion trajectory; the signal conversion unit receives the candidate motion trajectory and converts it into a second motion signal; and the surgical robot unit receives and executes the second motion signal to perform the surgical action along the candidate motion trajectory.

In summary, the present invention uses augmented reality technology to resolve the bottlenecks surgeons encounter when performing operations. The surgeon can rotate the field of view of the augmented reality scene to observe the target from every angle, and can therefore grasp the relative position of the target and its surroundings without moving from his or her own position. In addition, by having the robotic arm execute a pre-entered motion trajectory, or one adjusted by the surgeon on site, the mistakes of manual operation can be avoided. The present invention also allows the surgeon to interrupt and modify the robotic arm's motion trajectory, and can therefore cope with unexpected situations in an actual operation.

The disclosure is described in particular through the following examples, which are provided for illustration only; those skilled in the art may make various changes and modifications without departing from the spirit and scope of the present disclosure, and the scope of protection of the present disclosure shall therefore be defined by the appended claims. Throughout the specification and claims, unless the context clearly dictates otherwise, the meanings of "a" and "the" include recitations of "one or at least one" of the element or component. Furthermore, as used in this disclosure, singular articles also include recitations of plural elements or components unless the specific context obviously excludes the plural. Also, as used in this description and in all of the claims below, the meaning of "in" may include "in" and "on" unless the context clearly dictates otherwise. Unless otherwise noted, the terms used throughout the specification and claims generally carry the ordinary meaning of each term as used in this field, in the content disclosed herein and in the specific context. Certain terms used to describe the present disclosure are discussed below, or elsewhere in this specification, to provide practitioners with additional guidance in describing the present disclosure. The use of examples anywhere throughout the specification, including examples of any terms discussed herein, is for illustration only and in no way limits the scope and meaning of the disclosure or of any exemplified term. Likewise, the present disclosure is not limited to the various embodiments set forth in this specification.

As used herein, the terms "substantially", "around", "about" and "approximately" shall generally mean within 20% of a given value or range, preferably within 10%. Furthermore, quantities provided herein may be approximate, meaning that, unless specifically stated otherwise, they may be read with the words "about" or "approximately". When a quantity, concentration or other value or parameter is given as a specified range, a preferred range, or a list of upper and lower ideal values, this shall be taken to specifically disclose all ranges formed by any pair of upper and lower limits or ideal values, regardless of whether those ranges are separately disclosed. For example, if a length is disclosed as ranging from X cm to Y cm, this shall be taken to disclose a length of H cm, where H may be any real number between X and Y.

Furthermore, where the term "electrically coupled" or "electrically connected" is used herein, it covers any direct or indirect means of electrical connection. For example, if the text describes a first device as electrically coupled to a second device, the first device may be connected to the second device directly, or connected to it indirectly through other devices or connecting means. In addition, where the transmission or provision of electrical signals is described, those skilled in the art will understand that the transmission may be accompanied by attenuation or other non-ideal changes; unless otherwise specified, however, the signal at the transmitting source and at the receiving end shall be regarded as substantially the same signal. For example, when an electrical signal S is transmitted (or provided) from terminal A of an electronic circuit to terminal B of the electronic circuit, it may pass through the source and drain terminals of a transistor switch and/or possible stray capacitance, producing a voltage drop; but unless the purpose of the design is deliberately to exploit the attenuation or other non-ideal changes produced during transmission (or provision) to achieve some specific technical effect, the electrical signal S at terminal A and at terminal B of the electronic circuit shall be regarded as substantially the same signal.

It is understood that the terms "comprising", "including", "having", "containing" and "involving", as used herein, are open-ended, meaning including but not limited to. In addition, no single embodiment or claim of the present invention needs to achieve all of the objects, advantages or features disclosed herein. Furthermore, the abstract and the title are provided only to assist patent-document searches and are not intended to limit the scope of the claims of the present invention.

The system architecture of the present invention broadly includes three applications: augmented reality visualization, surgical robot navigation, and real-time monitoring. The augmented reality visualization application uses augmented reality technology to combine virtual images with real images, or to output only the virtual image after it has been matched to the image of the real environment. The real-time monitoring application allows the user to adjust the movement trajectory of the robotic arm while viewing a three-dimensional image of the target through the augmented reality visualization. The surgical robot navigation application converts the user's control signals into the movement trajectory of the robotic arm; the surgeon can adjust the trajectory by translation and rotation through the operating interface of the monitoring system, and the adjusted trajectory is sent back to the robotic arm navigation system for correction.
In the robotic arm navigation application, the robotic arm and the augmented reality environment are first calibrated to the same coordinate system, to ensure that the robotic arm coordinates correctly execute the user's input commands. The user's input command may be a trajectory planned in advance, or a trajectory planned by operating the monitoring module on site. The surgeon can adjust the trajectory on the operating interface of the monitoring module (as shown in Fig. 5); the task of the entire operation includes confirming the position and angle of the target, as well as the position and angle of the implantation point on the target. Through the present invention, the doctor can monitor and adjust these key parameters in real time. After receiving the new trajectory and position adjustments, the surgical robot unit calculates a new execution trajectory and feeds it back to the augmented reality vision system for the doctor to confirm; only after confirmation is it handed over to the robotic arm for execution. This design includes the conversion between the coordinate system of the robotic arm and that of the virtual augmented reality environment, as well as the interface for adjusting the trajectory. Path planning on the augmented reality visualization side can therefore be converted into a robotic arm trajectory, and the user can directly plan the surgical actions of the robotic arm, achieving real-time monitoring.
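The calibration step described above, bringing the robotic arm and the AR environment into one coordinate system, amounts to applying a fixed rigid transform to every point planned in the AR scene. The following is a minimal sketch; the 90-degree rotation and the offsets are made-up example values, not calibration results from the patent:

```python
import numpy as np

# Hypothetical calibration result: a homogeneous transform T that maps points
# expressed in the augmented-reality frame into the robot-arm base frame.
theta = np.deg2rad(90.0)
T_ar_to_robot = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 0.10],
    [np.sin(theta),  np.cos(theta), 0.0, 0.20],
    [0.0,            0.0,           1.0, 0.05],
    [0.0,            0.0,           0.0, 1.0 ],
])

def to_robot_frame(p_ar):
    """Map a 3-D point planned in the AR scene into robot coordinates."""
    p = np.append(np.asarray(p_ar, dtype=float), 1.0)  # homogeneous coordinates
    return (T_ar_to_robot @ p)[:3]

# A drilling entry point planned in the AR scene, expressed in robot coordinates.
entry_point_robot = to_robot_frame([0.3, 0.0, 0.1])
```

Once such a transform is fixed by calibration, any trajectory drawn with the three-dimensional path indicator can be replayed verbatim in the arm's own frame.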

Please refer to Fig. 1A, which is a schematic diagram of a surgical assistance system 100 according to an embodiment of the present invention. The surgical assistance system 100 includes a monitoring module 20, an algorithm module 30, a surgical robot navigation module 40 and a camera unit 60. The camera unit 60 may be independent of the monitoring module 20 as shown in Fig. 1A, or integrated with the monitoring module 20 or the surgical robot navigation module 40. For example, the camera unit 60 may be a camera externally connected to a computer device, or a camera lens integrated into smart glasses. Signals among the monitoring module 20, the algorithm module 30, the surgical robot navigation module 40 and the camera unit 60 may be transmitted by wire or wirelessly; that is, these components may be coupled to one another, or may communicate through wireless transmission technologies such as WiFi or Bluetooth. The surgical robot navigation module 40 may include a signal conversion unit 42 and a surgical robot unit (for example, the robotic arm 44 shown in Fig. 1A); the signal conversion unit 42 converts the received control signals to control the robotic arm 44 to perform the related actions. The surgical robot unit broadly refers to any electrically controlled component capable of producing torque and displacement; it may be, for example, the robotic arm 44, a surgical robot, or a minimally invasive device that can be placed inside a patient to perform surgery, and its material is not limited to metal. For ease of understanding, the examples in the following paragraphs mostly use the robotic arm 44 to represent the surgical robot unit, but the present invention does not exclude the use of other types of devices to perform surgery.

The algorithm module 30 simulates the motion of the robotic arm 44 and controls it through kinematic calculation tools, planning the implantation point targeted by the surgical action as a series of trajectory information for the robotic arm 44. Finally, the algorithm module 30 issues the control information to the drive motors of the robotic arm 44 to complete its control. In Fig. 1A, the algorithm module 30 generates an augmented reality scene corresponding to the target 50 (for example, references 350 and 450 shown in Figs. 3 and 4), and an augmented reality object (for example, reference 55 shown in Figs. 3 and 4) is displayed in that scene. Besides the augmented reality object, a three-dimensional path indicator corresponding to it (reference 56 shown in Fig. 5) also appears in the augmented reality scene. The monitoring module 20 includes a display device 24 and a control device 22; the display device 24 displays the augmented reality scene, and the control device 22 is controlled by the user to transmit control signals. The user 10 may be, for example, a surgeon or other medical personnel, and the target 50 may be the patient's spine, or a spine model (if the surgical assistance system 100 is used for simulation or academic research); the present invention does not limit the type of the target 50.

Where the monitoring module 20 is a computer device, the display device 24 and the control device 22 may be integrated into one machine or externally connected to the monitoring module 20. For example, the display device 24 may be a liquid crystal display, and the control device 22 may be a computer device including an input device coupled to the liquid crystal display, such as a keyboard, a mouse, a button grip, or another device with an input interface. In another example, the display device 24 may be a head-mounted display (for example, smart glasses with a display screen, such as Epson BT-300 AR glasses), and the control device 22 may be a portable controller coupled to the head-mounted display (for example, buttons disposed on the smart glasses, or a button grip connected to them); the display device 24 may be realized as a transparent microdisplay on the lenses of the smart glasses, so it does not block the line of sight of the user 10. In both of the above implementations, the user 10 can see the augmented reality object and the current scene simultaneously on the same screen through the display device 24, and the signals between the display device 24 and the control device 22 may be transmitted by wire or wirelessly.

In detail, the control device 22 is controlled by the user 10 (as shown by input operation A1) to generate a control signal C1 to the algorithm module 30; the algorithm module 30 performs the relevant computations according to the control signal C1 and displays the result as an image on the display device 24 (transmitting image data V1 as shown in Fig. 1A). On the other hand, the algorithm module 30 issues a control signal C2 to the signal conversion unit 42 according to the control signal C1 received from the control device 22; the signal conversion unit 42 converts the control signal C2 into a first motion signal S1 to control the robotic arm 44, so that the robotic arm 44 operates on the target 50 according to the requirements of the user 10 (as shown by surgical action A2).
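The signal chain just described (input A1 → control signal C1 → algorithm module 30 → control signal C2 → signal conversion unit 42 → motion signal S1 → robotic arm 44) can be sketched as a chain of functions. All function names and payload shapes here are illustrative assumptions:

```python
def control_device(user_input):
    """Control device 22: turn the operator's input A1 into control signal C1."""
    return {"signal": "C1", "payload": user_input}

def algorithm_module(c1):
    """Algorithm module 30: compute on C1 and emit control signal C2."""
    return {"signal": "C2", "trajectory": c1["payload"]}

def signal_conversion_unit(c2):
    """Signal conversion unit 42: convert C2 into motion signal S1 for the arm."""
    return {"signal": "S1", "waypoints": c2["trajectory"]}

# The operator drags the 3-D path indicator through two waypoints; the motion
# signal that finally reaches the robotic arm carries the same trajectory.
s1 = signal_conversion_unit(algorithm_module(control_device(["p0", "p1"])))
```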

As for motion planning of the robotic arm 44, the algorithm module 30 can plan, in space, a screw-drilling trajectory for the affected part (which may be a drilling trajectory expressed in Cartesian coordinates), derive the motion parameters of each joint of the robotic arm 44 (such as torque, displacement, velocity and acceleration), and generate commands to the signal conversion unit 42 according to these parameters. The trajectory planning and kinematic calculation of the robotic arm 44 can be applied to orthopedic surgery, for example pedicle screw implantation. For instance, the robotic arm 44 used to perform the surgery may be realized as a six-axis robotic arm system, which can move arbitrarily in six degrees of freedom. The six degrees of freedom refer to the freedom of movement of an object in three-dimensional space: the object can translate along three mutually perpendicular coordinate axes (forward/backward, up/down, left/right), i.e., displacement along the X, Y and Z axes, and can also rotate about the three axes, where rotation about the X axis is called roll, rotation about the Y axis is called pitch, and rotation about the Z axis is called yaw.
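As an illustration of the roll/pitch/yaw convention just described, the following sketch composes a rotation matrix from the three angles. The Rz·Ry·Rx composition order is one common convention, assumed here rather than taken from the patent:

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll (about X), pitch (about Y) and yaw (about Z),
    all in radians, composed as Rz @ Ry @ Rx."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Sanity check: a pure 90-degree yaw maps the X axis onto the Y axis.
R = rpy_to_matrix(0.0, 0.0, np.pi / 2)
x_axis_rotated = R @ np.array([1.0, 0.0, 0.0])
```

Together with the three translations, these three angles give exactly the six degrees of freedom the text enumerates.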

The present invention can be applied to spinal fusion surgery, in which an implant such as a bone screw is implanted into the patient; the trajectory includes a starting position and a straight-line segment. The orientation of the bone screw must be kept constant during drilling, since any deviation risks fracturing the bone. The key to planning the trajectory is therefore to properly determine the screw entry point on the affected part, including the position and angle of the entry point and the length of the drilling path. Before the robotic arm actually moves, the entry point is defined according to the bone model, taking into account factors such as avoiding the spinal cord and the effect of the implant after insertion. The path planned by the present invention is a straight line starting at the drill hole and ending at the final implant position, and during the motion the tool must travel in a straight line without rotating.
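The straight-line requirement above can be sketched as a simple waypoint generator (a hypothetical sketch; the step size and millimeter units are assumptions, and the fixed tool orientation is simply held constant alongside the sampled positions):

```python
import math

def linear_drill_path(entry, target, step_mm):
    # Straight segment from the drill entry point to the final implant
    # position, sampled every step_mm; the tool orientation is kept fixed
    # over the whole segment, so only positions are generated here.
    # entry, target: (x, y, z) in mm. Returns a list of (x, y, z) waypoints.
    dx = [t - e for e, t in zip(entry, target)]
    length = math.sqrt(sum(d * d for d in dx))
    n = max(1, int(math.ceil(length / step_mm)))
    return [tuple(e + d * i / n for e, d in zip(entry, dx))
            for i in range(n + 1)]
```

Each waypoint would then be handed to the inverse kinematics to obtain joint commands, with the heading constraint enforced at every sample.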

Specifically, to establish the trajectory of the robotic arm 44, the model of the robotic arm 44 can be exported from a computer-aided design program for use in motion-planning software (for example, ROS MoveIt). Table 1 lists the parameters of each axis of the robotic arm 44 and constitutes its mathematical model; for example, the transformation of Joint1 relative to the base of the robotic arm 44 is 0.183 meters in the z direction with no change in angle. This model is the essential basis for the inverse kinematics that converts Cartesian coordinates into joint coordinates. In Table 1, Base denotes the base of the robotic arm 44, and Joint1 to Joint6 denote the six joints of the robotic arm 44 (a six-axis robotic arm system).

Table 1 (the per-axis parameters are rendered as an image in the original)

The robotic arm 44 is first calibrated with the camera to obtain the relative relationship between the two coordinate systems. The camera detects a marker that anchors the virtual coordinate system in the real environment, so the trajectory planned in the simulation system can be retrieved while ensuring that the coordinates planned on the virtual side match the coordinates actually executed by the robotic arm 44. A hand-eye calibration method is used to obtain the relative relationship between the camera and the base of the robotic arm 44. By recognizing the position of the tag (TAG) on the bone model through the camera, the relative position of the robotic arm 44 and the bone model can be obtained. In this way, the preoperative trajectory planned on the bone model can be converted into an execution trajectory for the robotic arm 44.

The present invention can use a digital camera (for example a Realsense D435, not shown) as the photographing unit 60 to observe the surroundings and provide this information to the robotic arm 44. The coordinate system of the digital camera is not the same as the robotic arm coordinate system; nevertheless, a fixed transformation exists between the two coordinate systems and can be estimated by hand-eye calibration. A marker (for example an AprilTag) is fixed on the end axis of the robotic arm 44, a variety of postures are captured, and for each posture the color image of the digital camera is recorded together with the pose of the end axis in the coordinate system of the robotic arm 44 (the robotic arm coordinate system), as in the hand-eye calibration and image navigation shown in FIG. 1B. From the camera image, image recognition yields the pose of the marker relative to the camera coordinate system; the relationship between the robotic arm coordinate system and the end axis is known from the robotic arm 44 system; and the marker, once attached to the end of the robotic arm 44, sits at a fixed offset. With these three pieces of information, equation (1.1) below can be formed, and after n iterations the best-fitting relative relationship between the robotic arm 44 and the camera coordinate system is obtained.

$$ {}^{RBCS}T_{marker} = {}^{RBCS}T_{TCS} \cdot {}^{TCS}T_{marker} = {}^{RBCS}T_{cam} \cdot {}^{cam}T_{marker} \tag{1.1} $$

where
${}^{RBCS}T_{TCS}$ is the transformation matrix from the tool coordinate system (TCS) to the robot base coordinate system (RBCS),
${}^{RBCS}T_{marker}$ is the transformation matrix from the marker coordinate system to the robotic arm coordinate system,
${}^{cam}T_{marker}$ is the transformation matrix from the marker coordinate system to the camera coordinate system, and
${}^{RBCS}T_{cam}$ is the transformation matrix from the camera coordinate system to the robotic arm coordinate system (${}^{TCS}T_{marker}$, the offset of the marker relative to the end axis, is constant once the marker is attached).

Much progress has been made in introducing augmented reality technology into medical environments. In the operating room, computed tomography (CT) images taken beforehand can be rendered on the patient in real time using a positioning system; the most difficult part is to overlay the virtual image onto the user's view in real time and align it precisely with objects in the real world. In the augmented reality visualization design, the present invention combines augmented reality technology with a visual interface to present medical information to the user intuitively. The system first builds a simulation environment containing the medical image model and the robotic arm trajectories; the medical image model is registered with the real model to obtain the relative relationship between the real and virtual models. The robotic arm trajectories are of two kinds: the preoperative trajectory planned from the image model, and the current posture of the robotic arm together with the trajectory it is about to execute, the latter communicating with the robotic arm control system in real time. A graphical interface is also provided for the user to adjust the trajectory. The present invention can use Epson BT-300 AR glasses to develop the augmented reality visualization interface and monitoring system.

FIG. 2 is a schematic diagram of a real image captured by the photographing unit 60. The image 200 is displayed on the screen of the monitoring module 20 and shows the target 50 and the physical scene 250 in which it is located. The monitoring module 20 of FIG. 1 can be replaced by the computer device 20A or the smart glasses 20B of FIG. 2. For example, the computer device 20A may be a desktop or notebook computer, and the smart glasses 20B may be smart glasses with a display screen. In addition, the target 50 is a physical entity rather than a computer-generated image, and the physical scene 250 is a real scene (that is, the real scene in which the target 50 is located). In FIG. 2, the target 50 is drawn in the shape of a human spine, indicating that the target 50 may be a patient's spine or a spine model. That is, the subsequent steps of the present invention may be an actual clinical operation, or a simulation test performed by a doctor before surgery. For example, the spine model can be produced by 3D printing.

Please refer to FIG. 1A and FIG. 3. FIG. 3 illustrates an embodiment in which the calculation module 30 generates a picture 300 containing an augmented reality scene and an augmented reality object. The calculation module 30 displays the augmented reality scene 350 and the augmented reality object 55 according to the target 50, where the augmented reality object 55 is generated by simulating the target 50. The target 50 displayed in the picture 300 is physical, while the augmented reality object 55 is virtual. The augmented reality object 55 may be computer-generated imagery (CGI) that matches the target 50 in both appearance and spatial characteristics. The initial position of the augmented reality object 55 in the picture 300 may overlap the target 50 (as shown in FIG. 3), or it may initially not overlap, in which case the user 10 can correct the position of the augmented reality object 55 manually, or the calculation module 30 can correct it automatically. It should be noted that although the augmented reality scene 350 consists mainly of the real scene, the augmented reality object 55 appears within it, so the user 10 sees the physical scene and the virtual object together in the picture 300, which still satisfies the definition of augmented reality.

Please refer to FIG. 1A and FIG. 4. FIG. 4 illustrates another embodiment in which the calculation module 30 generates a picture 400 containing an augmented reality scene and an augmented reality object. The augmented reality scene 450 is simulated by the calculation module 30 from data rather than being a real scene, yet it matches the augmented reality scene 350 in both appearance and spatial characteristics. Since the entire augmented reality scene 450 is virtual, only the augmented reality object 55 appears in it, not the physical target 50. That is, although this embodiment is described in terms of augmented reality, it can in fact encompass various reality technologies, such as virtual reality (VR), augmented reality, substitutional reality (SR) or mixed reality (MR), without limitation.

Please continue to refer to FIGS. 1A, 3 and 4. The user 10 can operate the control device 22 to rotate the viewing angle of the augmented reality scenes 350 and 450 (see, for example, the picture 800 of FIG. 8 and the picture 900 of FIG. 9). The rotation angle can be a full 360 degrees, so the user 10 can, without changing position, grasp information about the target 50 from every viewing angle by viewing the augmented reality object 55 in the pictures 300 and 400, including features blocked along the line of sight and invisible from the current viewing angle (details observable only from other viewing angles, or the internal structure of the target 50). For example, the augmented reality object 55 may be the patient or a part of the patient's body (the spine or another part), and the augmented reality scenes 350 and 450 may be the patient's surroundings (such as the hospital bed on which the patient lies, or the entire room). In another example, the augmented reality scene 450 may be the environment inside the patient's body, the augmented reality object 55 may be an internal organ, the surgical robot navigation module 40 may be a minimally invasive surgical device, and the photographing unit 60 may be a miniature camera or another camera capable of capturing digital images.

The calculation module 30 can display a three-dimensional path indicator 56 in the augmented reality scene 450 (or the augmented reality scene 350), as shown in FIG. 5. FIG. 5 is a schematic diagram of displaying the three-dimensional path indicator in the augmented reality scene. The three-dimensional path indicator 56 is displayed in the augmented reality scene 450 and can be moved within it under the control of the user 10. The three-dimensional path indicator 56 is a virtual augmented reality pattern whose shape may be the arrow shown in FIG. 5, but the present invention is not limited thereto; the arrow may be replaced by any other directional pattern. Preferably, the shape of the three-dimensional path indicator 56 expresses the moving direction of the robotic arm 44 or the direction in which surgery is performed. The user 10 can operate the control device 22 to move the three-dimensional path indicator relative to the augmented reality object in the augmented reality scene, thereby generating the first motion signal S1, and the signal conversion unit 42 of the surgical robot navigation module 40 then controls the robotic arm 44 according to the first motion signal S1, so that the robotic arm 44 moves toward or performs surgical actions on the target 50.

Please refer to FIG. 1A and FIG. 6A. FIG. 6A is a schematic diagram of calibrating the augmented reality scene and the augmented reality object according to the present invention. The picture 600 shows the target 50 placed on a calibration chart 630; the calibration chart 630 bears an irregular geometric pattern that helps the calculation module 30 build the augmented reality scene more accurately. Specifically, before the calculation module 30 visualizes all the information, it obtains the mapping between the virtual and real world coordinate systems: the camera identifies multiple feature points in the geometric pattern of the calibration chart 630 and compares them with the augmented reality object 55 in virtual coordinates, and after matching computations over the feature points, the mapping between the two world coordinate systems is obtained. The augmented reality object 55 can then be projected smoothly into the augmented reality scenes 350 and 450 shown in FIGS. 3 and 4.

In detail, before all the information can be visualized, the relative relationship between the virtual and real world coordinate systems must be found. The photographing unit 60 identifies an augmented reality marker (AR marker) in the environment and compares it with the corresponding object in virtual coordinates; after multi-point matching computations, the correspondence between the two world coordinate systems is obtained. Once the coordinate transformation is known, the present invention can project virtual objects into the image of the photographing unit 60. In this embodiment, the augmented reality marker is exemplified by the calibration chart 630 placed in the environment. The calibration chart 630 is composed of a variety of geometric lines; its many feature points improve image recognition. By using image recognition to analyze the deformation between the marker in the image and the original marker, the transformation between the camera coordinate system (Xc, Yc, Zc) of the photographing unit 60 and the marker coordinate system (Xm, Ym, Zm) of the calibration chart 630 can be obtained. The image in the user's field of view is formed by superimposing virtual objects on the picture captured by the photographing unit 60; with the relative transformation between the calibration chart 630 and the photographing unit 60, the position (x, y) of a virtual object on the screen can be calculated accordingly.
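The last step above — turning a point known relative to the marker into a screen position (x, y) — can be sketched with a simple pinhole camera model (an illustrative assumption; the actual projection is handled by the AR SDK, and the intrinsics fx, fy, cx, cy below are hypothetical):

```python
def project_point(t_marker_to_cam, p, fx, fy, cx, cy):
    # Transform a point p = (Xm, Ym, Zm) from the marker coordinate system
    # into the camera coordinate system (Xc, Yc, Zc) using the 4x4
    # homogeneous transform obtained from marker recognition, then apply
    # a pinhole projection to get the on-screen position (x, y).
    ph = (p[0], p[1], p[2], 1.0)
    xc, yc, zc = (sum(t_marker_to_cam[i][j] * ph[j] for j in range(4))
                  for i in range(3))
    return (fx * xc / zc + cx, fy * yc / zc + cy)
```

Any virtual object anchored to the marker can be drawn by projecting its points this way for every new camera frame.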

The core of augmented reality technology is to project virtual objects into the real environment, integrating the virtual with the real. Broadly speaking, merely overlaying information on the real environment already counts as augmented reality, but in the architecture of the present invention the virtual object must not only be superimposed, it must be superimposed precisely at a designated position in the real environment before trajectory planning can proceed.

The calibration chart 630 (also called an augmented reality marker (AR marker)) can be self-made and serves as the medium between the virtual and real environments. Common tools for implementing augmented reality include Vuforia, ARToolKit and OpenCV. Vuforia performs image recognition based on texture features and is widely used for its simplicity and speed, but it does not support the BT-300 AR glasses used in this embodiment. ARToolKit performs image recognition on markers of a specific format, characterized by a thick black square border, by detecting the deformation of the square's lines, but it currently supports only macOS. OpenCV is known for its rich and diverse image recognition tools, with which the recognition and projection of an augmented reality marker can be accomplished using basic pattern recognition; although OpenCV can be customized for different cases, it is comparatively complex. Since this embodiment takes EPSON's BT-300 AR glasses as an example, the EPSON Moverio AR SDK can be used for image registration of the augmented reality marker. Like Vuforia, the EPSON Moverio AR SDK bases its marker requirements on texture features, so the more geometric-line feature points there are, the better the recognition. The present invention therefore designs the calibration chart 630, composed of multiple geometric lines, as the augmented reality marker. Because the calibration chart 630 has many feature points, its recognition performance is better than that of the existing AprilTag.

To superimpose the projection of the virtual system's objects on the real environment, the present invention implements the projection design based on the Moverio SDK in two steps: recognition of the calibration chart 630, and real-time projection. The first step, image recognition, extracts the feature points of the picture on the calibration chart 630 and records them in the computing system, so that the photographing unit 60 on the BT-300 smart glasses can recognize the calibration chart 630 and, from its position in the image, further deduce its position in the world coordinate system. The second step, real-time projection, integrates the virtual image with the real image, using the calibration chart 630 to connect the real and virtual environments. Since the origin of the virtual coordinate system lies on the calibration chart 630, once the position of the physical marker in the world coordinate system is known, the positions of the other virtual objects in the actual coordinate system can be deduced, and the position of each projected object on the glasses' screen can be computed in reverse, so that the objects are successfully projected into the field of view. During projection, the photographing unit 60 continuously tracks and recognizes the calibration chart 630 and processes the position information through a tracking loader to obtain the coordinate position of each projected object in the real environment, finally superimposing the virtual objects in the user's field of view. In this way, even as the glasses move with the user's head, the virtual objects are projected instantly and correctly.

Regarding coordinate transformation and calibration for augmented reality control of the robotic arm 44, please refer to FIG. 6B. The purpose of coordinate registration is to obtain the transformation matrix ${}^{RBCS}T_{VCS}$ between the virtual coordinate system (VCS) and the robot base coordinate system (RBCS) of the robotic arm 44. FIG. 6B shows the relevant coordinate systems, whose X, Y and Z axes can be drawn in different colors on the display. The robot base coordinate system RBCS is located at the base of the first axis of the robotic arm 44. The tool center point (TCP) at the end of the sixth-axis end effector is the origin of the tool coordinate system (TCS). The origin of the world coordinate system (WCS) is defined on the workspace, whose length and width are m and n respectively. The virtual coordinate system VCS is the coordinate system of the virtual environment of the AR visualization design and contains the defined calibration chart 630. The origin of the VCS is based on the calibration chart 630, so the actual position of the marker yields the corresponding origin of the VCS.

In practical applications, for the trajectory planned in the VCS of the AR visualization system to be executed by the robotic arm 44, the transformation matrix ${}^{RBCS}T_{VCS}$ between the robot base coordinate system RBCS and the virtual coordinate system VCS must be found. The origin of the VCS is defined through the calibration chart 630: placing the calibration chart 630 within the working area on the operating table of the robotic arm 44 and defining that working area as the world coordinate system WCS yields the transformation matrix ${}^{WCS}T_{VCS}$ between the WCS and the VCS. Therefore, to obtain the transformation between the RBCS and the VCS, the present invention needs to find the transformation matrix ${}^{RBCS}T_{WCS}$ between the WCS and the RBCS, and the result is obtained according to equation (2.1).

$$ {}^{RBCS}T_{VCS} = {}^{RBCS}T_{WCS} \cdot {}^{WCS}T_{VCS} \tag{2.1} $$
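Equation (2.1) is a plain composition of homogeneous transforms. A minimal sketch (hypothetical function names; transforms as row-major 4×4 lists) of chaining the two known transforms and mapping a VCS waypoint into the robot base frame:

```python
def compose(a, b):
    # Matrix product of two 4x4 homogeneous transforms,
    # e.g. T_rbcs_vcs = compose(T_rbcs_wcs, T_wcs_vcs) as in (2.1).
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def to_robot_base(t_rbcs_wcs, t_wcs_vcs, p_vcs):
    # Map a trajectory point expressed in the VCS into the RBCS.
    t = compose(t_rbcs_wcs, t_wcs_vcs)
    ph = (p_vcs[0], p_vcs[1], p_vcs[2], 1.0)
    return tuple(sum(t[i][j] * ph[j] for j in range(4)) for i in range(3))
```

Every waypoint of the preoperative trajectory can be converted this way before being sent to the signal conversion unit.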

For a trajectory adjusted on the UI to be executable by the robotic arm 44, the present invention needs the transformation matrix ${}^{RBCS}T_{VCS}$ between the robot base coordinate system RBCS and the virtual coordinate system VCS. Since the transformation from the virtual system to the world coordinate system was already found in the projection work above, what remains is the transformation between the RBCS and the world coordinate system WCS. As shown in FIG. 6C, the present invention first defines a rectangular working area in the world coordinate system, namely the range in which the robotic arm 44 operates; the quadrilateral on the working platform of the robotic arm 44 serves as this area, with length and width m and n respectively (m and n being positive integers), and the upper-left corner, point A, is defined as the origin of the world coordinate system. By jogging the robotic arm 44 to each corner, the present invention obtains the positions of the four points A, B, C and D in the robot base coordinate system. The same quadrilateral corresponds to the four corners of the calibration chart 630, so the positions of points A, B, C and D in the virtual coordinate system are the coordinates of those four corners in that system. From these two point sets, the three-dimensional transformation between the robot base coordinate system and the virtual coordinate system can be computed.

The approach of the present invention is to find a transformation matrix that brings the origin of the virtual coordinate system VCS into coincidence with the robot base coordinate system RBCS in both position and orientation, giving the two coordinate systems a six-degree-of-freedom transformation. Since the Z axis of the virtual coordinate system points out of the plane of the calibration chart 630, and the calibration chart 630 lies on the platform of the robotic arm 44, the Z-axis directions of the VCS and the RBCS coincide. The remaining steps are therefore to find the relative displacement and to rotate about the Z axis until the X and Y axes coincide. The transformation between the RBCS and the VCS can thus be decomposed into position coincidence and orientation coincidence, as in equation (2.2).

$$ {}^{RBCS}T_{VCS} = T_{position} \cdot T_{orientation} \tag{2.2} $$

Position coincidence ($T_{position}$): first find the displacement between the origins of the two coordinate systems. The origin of each coordinate system is point A; the coordinates of point A in the robot base coordinate system RBCS, $A_{RBCS}$, are given by equation (2.3), and the coordinates of point A in the virtual coordinate system VCS, $A_{VCS}$, by equation (2.4). The position coincidence matrix $T_{position}$ then follows as equation (2.5).

$$ A_{RBCS} = (x_A,\; y_A,\; z_A) \tag{2.3} $$

$$ A_{VCS} = (x'_A,\; y'_A,\; z'_A) \tag{2.4} $$

$$ T_{position} = \begin{bmatrix} 1 & 0 & 0 & x_A - x'_A \\ 0 & 1 & 0 & y_A - y'_A \\ 0 & 0 & 1 & z_A - z'_A \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.5} $$

Direction coincidence ($T_2$): Since the Z-axis directions of the robot arm coordinate system RBCS and the virtual coordinate system VCS are the same, the two coordinate systems can be given the same orientation by rotating about the Z axis by an angle $\theta$, which makes the remaining two axes coincide; points B, C and D then also coincide. This gives the direction coincidence matrix $T_2$ of formula (2.6):

$T_2 = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (2.6)
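The direction-coincidence rotation is a homogeneous rotation about the shared Z axis, which can be sketched as:

```python
import numpy as np

def direction_coincidence(theta):
    """Homogeneous rotation by theta radians about the Z axis (formula
    (2.6)); it leaves the Z component unchanged and aligns the X and Y
    axes of the two coordinate systems."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c,  -s,  0.0, 0.0],
        [s,   c,  0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

# A 90-degree rotation sends a point on the X axis onto the Y axis,
# while its Z component is unaffected.
t2 = direction_coincidence(np.pi / 2)
print(np.round(t2 @ np.array([1.0, 0.0, 5.0, 1.0]), 6))  # -> [0. 1. 5. 1.]
```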

In FIG. 6D, the coordinates of the robot arm coordinate system RBCS are shifted so that its origin is aligned with the origin of the virtual coordinate system VCS; since the Z-axis directions are the same, only the x and y coordinates are considered in this figure. Once the origins of the two coordinate systems are aligned, as shown in FIG. 6E, the virtual coordinate system VCS is rotated about the Z axis by $\theta$ degrees so that the two rectangles coincide. After this rotation, point B of the virtual coordinate system VCS coincides with point B' of the robot arm coordinate system RBCS, giving formula (2.7):

$\begin{cases} x_{B'} = x_B\cos\theta - y_B\sin\theta \\ y_{B'} = x_B\sin\theta + y_B\cos\theta \end{cases}$ (2.7)

Likewise, point C of the virtual coordinate system coincides with point C' of the robot arm coordinate system RBCS after the rotation, giving formula (2.8). The angle $\theta$ can then be obtained from formulas (2.7) and (2.8):

$\begin{cases} x_{C'} = x_C\cos\theta - y_C\sin\theta \\ y_{C'} = x_C\sin\theta + y_C\cos\theta \end{cases}$ (2.8)
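The angle θ of formulas (2.7) and (2.8) can be recovered numerically from one pair of corresponding points. The sketch below uses hypothetical coordinates for B and B′ and only their x/y components, since the Z axes already coincide:

```python
import numpy as np

def solve_theta(b_vcs, b_rbcs):
    """Recover the Z-axis rotation angle that maps point B of the
    origin-aligned VCS onto point B' of the RBCS, using only the x and
    y components (the Z axes of the two systems already coincide)."""
    return np.arctan2(b_rbcs[1], b_rbcs[0]) - np.arctan2(b_vcs[1], b_vcs[0])

def rz(theta):
    """2x2 planar rotation, the upper-left block of formula (2.6)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

b_vcs = np.array([100.0, 0.0])   # hypothetical point B in the VCS
theta_true = np.deg2rad(30.0)    # ground-truth angle for the check
b_rbcs = rz(theta_true) @ b_vcs  # corresponding point B' in the RBCS

recovered = solve_theta(b_vcs, b_rbcs)
print(round(np.degrees(recovered), 3))  # -> 30.0
```

In practice the estimate from B/B′ can be cross-checked against the C/C′ pair of formula (2.8), since measurement noise would make the two solutions differ slightly.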

Please refer to FIG. 1A and FIG. 7. FIG. 7 is a schematic diagram of the monitoring module 20 of the present invention simultaneously displaying the three-dimensional path indicator and the adjustment interface corresponding to it. As shown in FIG. 7, in addition to the augmented reality scene 650 and the augmented reality object 55, the screen 700 further displays a parameter adjustment area 710 corresponding to the three-dimensional path indicator 56. The parameter adjustment area 710 includes a plurality of spatial parameter scrollbars 701-706 and an execution key 707; the user 10 can conveniently drag the spatial parameter scrollbars 701-706 in the parameter adjustment area 710 to change the spatial parameters of the three-dimensional path indicator 56.
When the user 10 adjusts any one of the spatial parameter scrollbars 701-706, the three-dimensional path indicator 56 moves or rotates correspondingly, allowing the user 10 to confirm in real time whether its current position and angle meet the requirements. After the user 10 has finished adjusting the parameters (that is, has confirmed that the current position and angle of the three-dimensional path indicator 56 are correct), the user 10 can press the execution key 707 to issue the control signal C1, which controls the robot arm 44 to execute the related action, for example implanting a steel nail into the spine along the direction of the three-dimensional path indicator 56. The above-mentioned spatial parameters may include the six-degrees-of-freedom parameters of the three-dimensional path indicator 56. In detail, the six-degrees-of-freedom parameters correspond to the spatial parameter scrollbars 701-706: the X-axis coordinate, Y-axis coordinate, Z-axis coordinate, roll angle, pitch angle and yaw angle of the three-dimensional path indicator 56, which respectively control the three-dimensional path indicator 56 to move back and forth along the X axis, move left and right along the Y axis, move up and down along the Z axis, rotate about the X axis, rotate about the Y axis and rotate about the Z axis. The robot arm 44 follows the three-dimensional path indicator 56 to approach or move away from the target 50, to resect at least a portion of the target 50, or to implant an implant into the target 50. For example, the spatial parameter scrollbars 701-703 may respectively correspond to controlling the displacement of the robot arm 44 along the three axes (X, Y, Z), and the spatial parameter scrollbars 704-706 may respectively correspond to controlling the rotation angles of the robot arm 44 about the three axes.
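The six scrollbar values can be folded into a single pose for downstream use. The sketch below assumes the common yaw-pitch-roll (Z-Y-X) rotation order, which the text does not fix, and all names are hypothetical:

```python
import numpy as np

def pose_from_sliders(x, y, z, roll, pitch, yaw):
    """Compose a 4x4 pose for the 3D path indicator from the six
    scrollbar values: translation along X/Y/Z plus roll (about X),
    pitch (about Y) and yaw (about Z), angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    pose = np.eye(4)
    pose[:3, :3] = rz @ ry @ rx  # assumed Z-Y-X (yaw-pitch-roll) order
    pose[:3, 3] = [x, y, z]
    return pose

# With all angles at zero, the pose is a pure translation.
p = pose_from_sliders(10.0, 20.0, 30.0, 0.0, 0.0, 0.0)
```

Each scrollbar change would rebuild this pose, which is then used to redraw the indicator in the augmented reality scene.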
The three-dimensional path indicator 56 can be continuously displayed in the augmented reality scene 650 as a predetermined trajectory, so that the user 10 can visually judge whether the subsequent actual motion of the robot arm 44 conforms to the predetermined trajectory, or the calculation module 30 can judge this automatically. In addition, the user 10 can send an interruption signal D1 to the surgical robot navigation module 40 through the control device 22 to disable the surgical robot navigation module 40 from executing the first motion signal S1. For example, if the user 10 has already issued a command to the surgical robot navigation module 40, pressing the execution key 707 again can interrupt its execution, or an additional key can be added to the parameter adjustment area 710 for the purpose of disabling the surgical robot navigation module 40.

FIG. 8 to FIG. 10 are schematic diagrams of different viewing angles of the augmented reality scene of the present invention, illustrating the application of the present invention to spinal fusion surgery, wherein the spine 755 is an augmented reality object representing the patient's spine or a spine model, and the augmented reality scene 750 is an augmented reality scene. In FIG. 8, the screen 800 presents a rear view of the spine 755. Here the surgeon plans to implant a steel nail at the expected implantation point on the spine 755, but that specific position cannot be seen from the current viewing angle, so the three-dimensional path indicator 756 cannot be positioned properly and the augmented reality scene 750 must be rotated.

In FIG. 9, the augmented reality scene 750 has been rotated so that the screen 900 presents a front view of the spine 755; the surgeon can now see the expected implantation point 759 on the spine 755, which helps determine the motion trajectory and surgical action of the three-dimensional path indicator 756. For example, when the monitoring module 20 is a computer device, the rotation of the augmented reality scene 750 can be performed with the scroll wheel of a mouse; when the monitoring module 20 is a pair of smart glasses, a scroll wheel can be provided on the smart glasses, or on a button handle connected to them, so that the rotation of the augmented reality scene 750 can be performed with that scroll wheel. Although the screen 900 already presents decisive reference information, the surgeon may still need other viewing angles to confirm the exact spatial position of the three-dimensional path indicator 756, and may further rotate the augmented reality scene 750 to the side view of the spine 755 presented in the screen 1000 of FIG. 10. Although only the viewing angles of the screens 800, 900 and 1000 are described above, the present invention can in fact rotate the augmented reality scene 750 to any angle, even including the bottom of the spine 755.

Through augmented reality technology, the present invention combines 3D images and data with the actual surgical field of view in augmented reality smart glasses, resolving the hand-eye coordination dilemma in which surgeons previously had to switch back and forth between two fields of view. In addition, in spinal fusion surgery, holes must be drilled into hard bone and bone screws implanted; since the bone surface is smooth, manual implantation is laborious and prone to slipping. Implantation through a surgical robot unit such as a robot arm or robot can execute the predetermined surgical actions accurately and improve the stability of the operation. Moreover, the surgical assistance system 100 of the present invention achieves real-time monitoring: if the motion of the robot arm 44 does not proceed as expected, or the user 10 changes the original plan, the control device 22 can be used to stop, pause or resume the robot arm 44.

Please refer to FIG. 11, which is a schematic diagram of the monitoring module 20 of the present invention simultaneously displaying two three-dimensional path indicators. As shown in FIG. 11, after the motion trajectory has been generated (that is, the user has submitted the three-dimensional path indicator 756 through the execution key 707), if the surgical robot navigation module 40 has not yet executed the first motion signal S1, or the surgical robot unit has been interrupted while executing the first motion signal S1, the calculation module 30 further generates a candidate three-dimensional path indicator 757 and displays it on screen. In the screen 1100, the candidate three-dimensional path indicator 757 and the three-dimensional path indicator 756 are displayed simultaneously in the augmented reality scene 750; the user 10 operates the control device 22 to control the moving path of the candidate three-dimensional path indicator 757 in the augmented reality scene 750 so as to generate a candidate motion trajectory. The signal conversion unit 42 receives the candidate motion trajectory and converts it into a second motion signal S2 (see FIG. 1A), and the surgical robot unit executes the second motion signal S2 to perform surgical actions on the target 50 (for example, the patient's spine) along the candidate motion trajectory.
For example, the above-mentioned visualized image contains three virtual objects: the patient's spine 755, the planned trajectory of the robot arm (the three-dimensional path indicator 756), and the trajectory adjusted by the doctor in real time (the candidate three-dimensional path indicator 757). The medical bone model is converted from CT images of a real lumbar vertebra and is used to plan and simulate surgical actions. In the virtual environment coordinate system, the upper-left corner of the calibration chart 630 serves as the origin; every trajectory and the bone model have corresponding points in this coordinate system.
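Since every trajectory point is expressed relative to the calibration-chart origin, mapping a planned trajectory into robot-arm coordinates amounts to applying the calibrated transformation to each waypoint. A sketch, with hypothetical transformation values:

```python
import numpy as np

def trajectory_to_robot(waypoints_vcs, t_vcs_to_rbcs):
    """Apply the calibrated 4x4 VCS-to-RBCS transformation to a list of
    (x, y, z) waypoints planned in the virtual coordinate system, and
    return the corresponding robot-arm coordinates."""
    pts = np.hstack([np.asarray(waypoints_vcs, dtype=float),
                     np.ones((len(waypoints_vcs), 1))])  # homogeneous
    return (t_vcs_to_rbcs @ pts.T).T[:, :3]

# Hypothetical calibration result: a pure translation of (10, 20, 0) mm.
t = np.eye(4)
t[:3, 3] = [10.0, 20.0, 0.0]
robot_waypoints = trajectory_to_robot([[0, 0, 0], [1, 1, 1]], t)
# robot_waypoints holds (10, 20, 0) and (11, 21, 1)
```

In the full system, `t` would be the composed matrix obtained from the position and direction coincidence steps of formulas (2.2)-(2.8).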

Please refer to FIG. 12, which is a flowchart of a surgical assistance method according to an embodiment of the present invention. Note that these steps need not be performed exactly in the order shown in FIG. 12 if substantially the same result can be obtained. The method shown in FIG. 12 can be employed by the surgical assistance system 100 shown in FIG. 1A and can be briefly summarized as follows:
Step 1202: Start.
Step 1204: Display the augmented reality object and the three-dimensional path indicator in the augmented reality scene.
Step 1206: Control the moving path of the three-dimensional path indicator in the augmented reality scene to generate a motion trajectory.
Step 1208: The signal conversion unit converts the motion trajectory into a motion signal.
Step 1210: The surgical robot unit receives and executes the motion signal to perform surgical actions on the physical target along the motion trajectory.
Step 1212: End.
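The flow of steps 1204-1210 can be sketched as a thin pipeline; all class and method names below are hypothetical stand-ins for the patent's modules, not an actual implementation:

```python
class SignalConverter:
    """Stand-in for the signal conversion unit 42: turns a trajectory
    (a list of waypoints) into a motion-signal payload (step 1208)."""
    def convert(self, trajectory):
        return {"kind": "motion", "waypoints": list(trajectory)}


class SurgicalRobot:
    """Stand-in for the surgical robot unit: records executed signals
    instead of driving hardware (step 1210)."""
    def __init__(self):
        self.executed = []

    def execute(self, motion_signal):
        self.executed.append(motion_signal)


def run_method(trajectory, converter, robot):
    """Steps 1206-1210: take the trajectory drawn with the 3D path
    indicator, convert it to a motion signal, and hand it to the robot
    unit for execution."""
    signal = converter.convert(trajectory)  # step 1208
    robot.execute(signal)                   # step 1210
    return signal


robot = SurgicalRobot()
signal = run_method([(0, 0, 0), (5, 5, 10)], SignalConverter(), robot)
print(len(robot.executed))  # -> 1
```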

Since those skilled in the art can easily understand the details of each step in FIG. 12 after reading the above paragraphs, further description is omitted here for brevity.

In summary, the present invention uses augmented reality technology to resolve the bottlenecks surgeons encounter when performing surgical operations. The surgeon can rotate the view of the augmented reality scene to obtain every viewing angle of the target, and can therefore grasp the relative positional relationship between the target and its surroundings without changing position. In addition, having the robot arm execute a pre-entered motion trajectory, or a trajectory adjusted on the spot by the surgeon, avoids the errors of manual operation. The present invention also allows the surgeon to interrupt the current task of the robot arm and adjust its motion trajectory in real time, and can therefore respond to the various contingencies of actual surgery.

10: user
20: monitoring module
20A: computer device
20B: smart glasses
22: control device
24: display device
30: calculation module
40: surgical robot navigation module
42: signal conversion unit
44: robot arm
50: target
55: augmented reality object
56, 756: three-dimensional path indicator
60: photographing unit
100: surgical assistance system
250: physical scene
200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100: screens
350, 450, 650, 750: augmented reality scenes
630: calibration chart
701~706: spatial parameter scrollbars
707: execution key
710: parameter adjustment area
755: spine
757: candidate three-dimensional path indicator
759: expected implantation point
1202~1212: steps
A1: input operation
A2: surgical action
C1, C2: control signals
D1: interruption signal
S1: first motion signal
S2: second motion signal
V1: image data
VCS: virtual coordinate system
WCS: world coordinate system
RBCS: robot arm coordinate system

(coordinate-system transformation symbols): transformation matrices
A, B, C, D, A', B', C', D': coordinate points
T1: position coincidence matrix
T2: direction coincidence matrix

FIG. 1A is a schematic diagram of a surgical assistance system according to an embodiment of the present invention.
FIG. 1B is a schematic diagram of hand-eye calibration and image navigation according to the present invention.
FIG. 2 is a schematic diagram of a real picture captured by the photographing unit.
FIG. 3 illustrates a screen with an augmented reality scene and an augmented reality object generated by the calculation module according to an embodiment of the present invention.
FIG. 4 illustrates a screen with an augmented reality scene and an augmented reality object generated by the calculation module according to another embodiment of the present invention.
FIG. 5 is a schematic diagram of displaying a three-dimensional path indicator in an augmented reality scene.
FIG. 6A is a schematic diagram of calibrating the augmented reality scene and the augmented reality object according to the present invention.
FIG. 6B is a schematic diagram of coordinate transformation and calibration for augmented reality manipulation of the robot arm according to the present invention.
FIG. 6C is a schematic diagram of defining a rectangular working area in the world coordinate system according to the present invention.
FIG. 6D and FIG. 6E are schematic diagrams of aligning the robot arm coordinate system with the virtual coordinate system.
FIG. 7 is a schematic diagram of the monitoring module of the present invention simultaneously displaying the three-dimensional path indicator and the adjustment interface corresponding to it.
FIG. 8 to FIG. 10 are schematic diagrams of different viewing angles of the augmented reality scene of the present invention.
FIG. 11 is a schematic diagram of the monitoring module of the present invention simultaneously displaying two three-dimensional path indicators.
FIG. 12 is a flowchart of a surgical assistance method according to an embodiment of the present invention.

20: monitoring module
20A: computer device
20B: smart glasses
55: augmented reality object
56: three-dimensional path indicator
400: screen
450: augmented reality scene

Claims (20)

1. A surgical assistance system, comprising: a calculation module for generating an augmented reality object in an augmented reality scene according to an image of a physical target, wherein the augmented reality object simulates the physical target, and the augmented reality object and a three-dimensional path indicator are displayed in the augmented reality scene; a monitoring module comprising a display device and a control device, wherein the display device displays the augmented reality scene including the augmented reality object and the three-dimensional path indicator, and the control device is operated to control the moving path of the three-dimensional path indicator in the augmented reality scene so as to generate a motion trajectory for a surgical robot; a signal conversion unit for converting the motion trajectory into a first motion signal; and a surgical robot unit that receives and executes the first motion signal to perform surgical actions on the physical target along the motion trajectory.

2. The surgical assistance system of claim 1, wherein the calculation module generates a user interface, the display device simultaneously displays the user interface and the augmented reality scene, and the user interface is controlled by the control device and is used to adjust the spatial parameters of the three-dimensional path indicator so as to determine the motion trajectory.

3. The surgical assistance system of claim 2, wherein the spatial parameters comprise a plurality of adjustment fields for the X-axis coordinate, Y-axis coordinate, Z-axis coordinate, rotation angle about the X axis, rotation angle about the Y axis and rotation angle about the Z axis of the three-dimensional path indicator, which are respectively used to control the three-dimensional path indicator to move back and forth along the X axis, move left and right along the Y axis, move up and down along the Z axis, rotate about the X axis, rotate about the Y axis and rotate about the Z axis.

4. The surgical assistance system of claim 2, wherein the user interface is operated to send an interruption signal to the surgical robot unit to disable the surgical robot unit from executing the first motion signal.

5. The surgical assistance system of claim 2, wherein after the motion trajectory is generated, if the surgical robot unit has not yet executed the first motion signal or has been interrupted while executing the first motion signal, the calculation module further generates a candidate three-dimensional path indicator, which is displayed in the augmented reality scene simultaneously with the three-dimensional path indicator; the control device is further operated to control the moving path of the candidate three-dimensional path indicator in the augmented reality scene to generate a candidate motion trajectory; the signal conversion unit receives the candidate motion trajectory and converts it into a second motion signal; and the surgical robot unit executes the second motion signal to perform surgical actions along the candidate motion trajectory.

6. The surgical assistance system of claim 5, wherein the candidate motion trajectory is adjusted in real time through the user interface, and the adjusted candidate motion trajectory is confirmed through the user interface to generate the second motion signal.

7. The surgical assistance system of claim 1, wherein the surgical robot unit, according to the first motion signal, approaches or moves away from the physical target, resects at least a portion of the physical target, or implants an implant into the physical target.

8. The surgical assistance system of claim 1, wherein the augmented reality scene simulates, but does not include, the real scene in which the physical target is located.

9. The surgical assistance system of claim 1, wherein the augmented reality scene includes the real scene in which the physical target is located.

10. The surgical assistance system of claim 1, wherein the control device is further operated to change the viewing angle of the augmented reality scene; when the viewing angle of the augmented reality scene changes, the viewing angles of the augmented reality object and the three-dimensional path indicator are correspondingly changed through computation by the calculation module.

11. The surgical assistance system of claim 1, wherein the display device is a head-mounted display, and the control device is a portable controller coupled to the head-mounted display.

12. The surgical assistance system of claim 1, wherein the display device is a liquid crystal display, and the control device is an input device coupled to the liquid crystal display.

13. A surgical assistance method, comprising: a calculation module generating an augmented reality scene and an augmented reality object according to a physical target, and further displaying a three-dimensional path indicator in the augmented reality scene, wherein the augmented reality object simulates the physical target and is displayed in the augmented reality scene; displaying the augmented reality scene including the augmented reality object and the three-dimensional path indicator on a display device; operating a control device to control the moving path of the three-dimensional path indicator in the augmented reality scene to generate a motion trajectory; a signal conversion unit converting the motion trajectory into a first motion signal; and a surgical robot unit receiving and executing the first motion signal to perform surgical actions on the physical target along the motion trajectory.

14. The surgical assistance method of claim 13, wherein the calculation module generates a user interface, the display device simultaneously displays the user interface and the augmented reality scene, and the user interface is controlled by the control device and is used to adjust the spatial parameters of the three-dimensional path indicator so as to determine the motion trajectory.

15. The surgical assistance method of claim 14, wherein the spatial parameters comprise a plurality of adjustment fields for the X-axis coordinate, Y-axis coordinate, Z-axis coordinate, rotation angle about the X axis, rotation angle about the Y axis and rotation angle about the Z axis of the three-dimensional path indicator, which are respectively used to control the three-dimensional path indicator to move back and forth along the X axis, move left and right along the Y axis, move up and down along the Z axis, rotate about the X axis, rotate about the Y axis and rotate about the Z axis.

16. The surgical assistance method of claim 13, further comprising: operating the user interface to send an interruption signal to the surgical robot unit to disable the surgical robot unit from executing the first motion signal.

17. The surgical assistance method of claim 13, wherein after the motion trajectory is generated, if the surgical robot unit has not yet executed the first motion signal or has been interrupted while executing the first motion signal, the surgical assistance method further comprises: the calculation module generating a candidate three-dimensional path indicator, which is displayed in the augmented reality scene simultaneously with the three-dimensional path indicator; operating the control device to control the moving path of the candidate three-dimensional path indicator in the augmented reality scene to generate a candidate motion trajectory; the signal conversion unit receiving the candidate motion trajectory and converting it into a second motion signal; and the surgical robot unit receiving and executing the second motion signal to perform surgical actions along the candidate motion trajectory.

18. The surgical assistance method of claim 17, wherein the candidate motion trajectory is adjusted in real time through the user interface, and the adjusted candidate motion trajectory is confirmed through the user interface to generate the second motion signal.

19. The surgical assistance method of claim 13, wherein the surgical robot unit, according to the first motion signal, approaches, moves away from, resects, or implants into the physical target.

20. The surgical assistance method of claim 13, further comprising: operating the control device to change the viewing angle of the augmented reality scene, wherein when the viewing angle of the augmented reality scene changes, the viewing angles of the augmented reality object and the three-dimensional path indicator are correspondingly changed through computation by the calculation module.
TW109142490A 2020-12-02 2020-12-02 Surgery assistant system and related surgery assistant method TWI750930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109142490A TWI750930B (en) 2020-12-02 2020-12-02 Surgery assistant system and related surgery assistant method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109142490A TWI750930B (en) 2020-12-02 2020-12-02 Surgery assistant system and related surgery assistant method

Publications (2)

Publication Number Publication Date
TWI750930B TWI750930B (en) 2021-12-21
TW202222270A true TW202222270A (en) 2022-06-16

Family

ID=80681387

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109142490A TWI750930B (en) 2020-12-02 2020-12-02 Surgery assistant system and related surgery assistant method

Country Status (1)

Country Link
TW (1) TWI750930B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI838199B (en) * 2023-03-31 2024-04-01 慧術科技股份有限公司 Medical static picture contrast teaching system and method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110215284B (en) * 2019-06-06 2021-04-02 上海木木聚枞机器人科技有限公司 Visualization system and method
TWI697317B (en) * 2019-08-30 2020-07-01 國立中央大學 Digital image reality alignment kit and method applied to mixed reality system for surgical navigation
