TWI825499B - Surgical robotic arm control system and control method thereof - Google Patents

Surgical robotic arm control system and control method thereof

Info

Publication number
TWI825499B
Authority
TW
Taiwan
Prior art keywords
image
information image
robot arm
processor
surgical
Prior art date
Application number
TW110139147A
Other languages
Chinese (zh)
Other versions
TW202317045A (en)
Inventor
曾建嘉
潘柏瑋
楊昇宏
Original Assignee
財團法人金屬工業研究發展中心
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人金屬工業研究發展中心
Priority to TW110139147A
Publication of TW202317045A
Application granted
Publication of TWI825499B

Landscapes

  • Manipulator (AREA)

Abstract

A surgical robotic arm control system and a control method thereof are provided. The surgical robotic arm control system includes a surgical robotic arm, an image capture unit, and a processor. The surgical robotic arm has multiple joint axes. The image capture unit obtains a first image. The processor executes a spatial environment recognition module to generate a first environment information image, a first direction information image, and a first depth information image based on the first image. The processor executes a spatial environment image processing module to calculate path information based on the first environment information image, the first direction information image, and the first depth information image. The processor executes a robotic arm motion feedback module to operate the surgical robotic arm to move according to the path information.

Description

Surgical robotic arm control system and control method thereof

The present invention relates to an automatic control technology, and in particular to a surgical robotic arm control system and a control method thereof.

With the evolution of medical equipment, automatically controllable medical equipment that can improve the surgical efficiency of medical personnel has become one of the important development directions in this field. In particular, surgical robotic arms used to assist or cooperate with medical personnel (operators) in performing surgical work during an operation are especially important. However, in existing surgical robotic arm designs, in order for the surgical robotic arm to provide automatic control functions, the arm must be equipped with multiple sensors and the user must perform cumbersome manual calibration operations during each operation before the surgical robotic arm can avoid obstacles on its movement path and achieve accurate automatic movement and operation results.

The invention provides a surgical robotic arm control system and a control method thereof, which can effectively control the surgical robotic arm to move automatically.

The surgical robotic arm control system of the invention includes a surgical robotic arm, an image capture unit, and a processor. The surgical robotic arm has multiple joint axes. The image capture unit obtains a first image. The first image includes a robotic arm end image of the surgical robotic arm. The processor is coupled to the surgical robotic arm and the image capture unit. The processor executes a spatial environment recognition module to generate a first environment information image, a first direction information image, and a first depth information image according to the first image. The processor executes a spatial environment image processing module to calculate path information according to the first environment information image, the first direction information image, and the first depth information image. The processor executes a robotic arm motion feedback module to operate the surgical robotic arm to move according to the path information.

The surgical robotic arm control method of the invention includes the following steps: obtaining a first image through an image capture unit, wherein the first image includes a robotic arm end image of a surgical robotic arm; executing a spatial environment recognition module through a processor to generate a first environment information image, a first direction information image, and a first depth information image according to the first image; executing a spatial environment image processing module through the processor to calculate path information according to the first environment information image, the first direction information image, and the first depth information image; and executing a robotic arm motion feedback module through the processor to operate the surgical robotic arm to move according to the path information.

Based on the above, the surgical robotic arm control system and control method of the invention can automatically control the movement of the surgical robotic arm through computer vision imaging technology and can automatically avoid obstacles in the current environment.

In order to make the above-mentioned features and advantages of the invention more comprehensible, embodiments accompanied with drawings are described in detail below.

In order to make the content of the invention easier to understand, the following embodiments are provided as examples by which the present disclosure can indeed be implemented. In addition, wherever possible, elements/components/steps using the same reference numerals in the drawings and embodiments represent the same or similar parts.

FIG. 1 is a schematic circuit diagram of a surgical robotic arm control system according to an embodiment of the invention. Referring to FIG. 1, the surgical robotic arm control system 100 includes a processor 110, a storage unit 120, an image capture unit 130, and a surgical robotic arm 140. The storage unit 120 stores a spatial environment recognition module 121, a spatial environment image processing module 122, and a robotic arm motion feedback module 123. The processor 110 is coupled to the storage unit 120, the image capture unit 130, and the surgical robotic arm 140. The surgical robotic arm 140 has multiple joint axes. In this embodiment, the image capture unit 130 obtains image data and provides it to the processor 110. The processor 110 accesses the storage unit 120 to execute the spatial environment recognition module 121, the spatial environment image processing module 122, and the robotic arm motion feedback module 123. In this embodiment, the processor 110 inputs the relevant image data to the spatial environment recognition module 121 and the spatial environment image processing module 122 to generate path information, and the processor 110 operates the surgical robotic arm 140 to move according to the path information.

In this embodiment, the surgical robotic arm control system 100 can be integrated with the mechanism of a surgical platform. The image capture unit 130 can be disposed above the surgical platform (directly above or above at an offset angle) to capture images toward the surgical platform and the surgical robotic arm 140, and the surgical robotic arm 140 can be disposed on one side of the surgical platform. In this embodiment, the surgical robotic arm control system 100 can control the surgical robotic arm 140 to move from one side of the surgical platform to the other side, and the surgical robotic arm 140 and its arm end can automatically avoid obstacles on the movement path. Therefore, the operator at the other side of the surgical platform can quickly take hold of the surgical robotic arm 140 to perform surgical assistance functions.

In this embodiment, the processor 110 may be, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), image processing unit (IPU), graphics processing unit (GPU), programmable controller, application specific integrated circuit (ASIC), programmable logic device (PLD), another similar processing device, or a combination of these devices.

In this embodiment, the storage unit 120 may be a memory, such as a dynamic random access memory (DRAM), a flash memory, or a non-volatile random access memory (NVRAM), and the invention is not limited thereto. The storage unit 120 can store the spatial environment recognition module 121, the spatial environment image processing module 122, the robotic arm motion feedback module 123, and the related algorithms of the modules mentioned in the embodiments of the invention, and can also store image data, robotic arm control instructions, robotic arm control software, computing software, and other related algorithms, programs, and data used to implement the surgical robotic arm control functions of the invention. In this embodiment, the spatial environment recognition module 121 and the spatial environment image processing module 122 may respectively be neural network modules that implement the corresponding functions.

In this embodiment, the surgical robotic arm 140 may be a robotic arm with six degrees of freedom (6DOF), and the processor 110 may execute a machine learning module applying a Markov decision process to control the surgical robotic arm 140. In this embodiment, the image capture unit 130 may be, for example, a depth camera, and may be used to capture the surgical field to obtain field images and their depth information. In an embodiment, the storage unit 120 may also store a panoramic environment field positioning module. The processor 110 can execute the panoramic environment field positioning module to perform camera calibration operations and implement a coordinate system matching function between the image capture unit 130 and the surgical robotic arm 140. In this embodiment, the image capture unit 130 can obtain a positioning image and reference depth information in advance, wherein the positioning image includes a positioning object. The processor 110 can analyze the positioning coordinate information of the positioning object in the positioning image and the reference depth information through the panoramic environment field positioning module, so that the camera coordinate system of the image capture unit 130 (the depth camera) matches the robotic arm coordinate system of the surgical robotic arm 140.

Specifically, the user may, for example, use a positioning plate with a checkerboard pattern as the positioning object and place it on the surgical platform, so that the image capture unit 130 can capture multiple positioning images, each of which includes the checkerboard pattern. The number of positioning images may be, for example, five. Then, the processor 110 can execute the panoramic environment field positioning module to analyze the positioning coordinate information (multiple spatial coordinates) of the positioning object in each positioning image and the reference depth information, so that the camera coordinate system (spatial coordinate system) of the image capture unit 130 matches the robotic arm coordinate system (spatial coordinate system) of the surgical robotic arm 140. The processor 110 can match the camera coordinate system of the image capture unit 130 and the robotic arm coordinate system of the surgical robotic arm 140 according to the fixed position relationship, the positioning coordinate information, and the reference depth information.
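
As a minimal illustration of how such a checkerboard-based calibration could be realized, the sketch below uses OpenCV to estimate the board pose in camera coordinates and then maps camera points into the arm frame. The board geometry, the intrinsic parameters, and the board-in-arm-frame transform are assumed inputs for illustration only and are not specified by the patent.

```python
# Illustrative sketch only: estimate the pose of a checkerboard positioning
# object so that depth-camera coordinates can be mapped into the robot-arm
# coordinate system. Board geometry and the arm-side transform are assumed.
import cv2
import numpy as np

PATTERN = (7, 6)          # assumed inner-corner count of the checkerboard
SQUARE_SIZE_MM = 20.0     # assumed square size

# 3D corner positions of the board in its own coordinate system (Z = 0 plane)
object_points = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
object_points[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
object_points *= SQUARE_SIZE_MM

def board_pose(image_bgr, camera_matrix, dist_coeffs):
    """Return the 4x4 pose of the board in camera coordinates, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    ok, rvec, tvec = cv2.solvePnP(object_points, corners,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)
    cam_T_board = np.eye(4)
    cam_T_board[:3, :3], cam_T_board[:3, 3] = rot, tvec.ravel()
    return cam_T_board

def camera_to_arm(point_cam, arm_T_board, cam_T_board):
    """Map a 3D point from camera coordinates into arm coordinates, given the
    (assumed known) pose of the positioning board in the arm frame."""
    arm_T_cam = arm_T_board @ np.linalg.inv(cam_T_board)
    return (arm_T_cam @ np.append(np.asarray(point_cam, float), 1.0))[:3]
```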

FIG. 2 is an operation schematic diagram of a surgical robotic arm control system according to an embodiment of the invention. FIG. 3 is a flowchart of a surgical robotic arm control method according to an embodiment of the invention. FIG. 4 is a schematic diagram of image processing and image analysis according to an embodiment of the invention. Referring to FIG. 1 to FIG. 4, the image capture unit 130 may, for example, capture images toward the surgical platform, on which a surgical object 200 may be placed. In this embodiment, the surgical robotic arm 140 may be located on one side of the surgical object 200 as shown in FIG. 2, and the processor 110 may control the surgical robotic arm 140 to move to the other side of the surgical area 201 of the surgical object 200 while automatically avoiding obstacles on the movement path in the surgical area 201, where the obstacles may include, for example, surgical instruments 202 to 204 placed on the surgical object 200.

In this embodiment, the surgical robotic arm control system 100 can perform the following steps S310 to S340. In step S310, the surgical robotic arm control system 100 obtains a first image 401 (the current frame) through the image capture unit 130, where the first image 401 includes a robotic arm end image of the surgical robotic arm 140. In this embodiment, the storage unit 120 can also store a target area confirmation module, and the surgical robotic arm control system 100 can further include an input unit. The input unit may be, for example, a mouse, a touch screen, a user interface, or a system setting module, and can provide a target coordinate to the processor 110. The processor 110 can execute the target area confirmation module to define a target area in the first image 401 according to the target coordinate. The target area is a spatial region (a virtual cube) and may be, for example, located on the other side of the surgical object in the first image 401.
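
The target area described above can be pictured as an axis-aligned cube built around the target coordinate; a minimal sketch is given below, where the half-edge length and the example coordinate are assumed values rather than figures given in the patent.

```python
# Illustrative sketch: build an axis-aligned "virtual cube" target region
# around a user-supplied target coordinate. The half-edge length is assumed.
import numpy as np

def make_cube(center_xyz, half_edge_mm=30.0):
    """Return (min_corner, max_corner) of an axis-aligned cube."""
    c = np.asarray(center_xyz, dtype=float)
    return c - half_edge_mm, c + half_edge_mm

# hypothetical target coordinate provided by the input unit (mm)
target_cube = make_cube([120.0, 85.0, 40.0])
```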

In step S320, the surgical robotic arm control system 100 executes the spatial environment recognition module 121 through the processor 110 to generate a first environment information image 411, a first direction information image 412, and a first depth information image 413 according to the first image 401. In step S330, the surgical robotic arm control system 100 executes the spatial environment image processing module 122 through the processor 110 to calculate path information according to the first environment information image 411, the first direction information image 412, and the first depth information image 413. In this embodiment, the spatial environment image processing module 122 can respectively capture a second environment information image 421, a second depth information image 422, and a second direction information image 423 from the first environment information image 411, the first direction information image 412, and the first depth information image 413 according to the robotic arm end area of the surgical robotic arm 140 (only the arm-end portion of each image is captured for subsequent computation and analysis). Since the second environment information image 421, the second depth information image 422, and the second direction information image 423 are respectively parts of the first environment information image 411, the first direction information image 412, and the first depth information image 413, the surgical robotic arm control system 100 of the invention can perform fast image computation and analysis on the key area of each frame, which effectively saves computing resources and allows fast calculation for moving the surgical robotic arm 140 to the target coordinate.
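
One possible way to capture the arm-end portion of the three first information images is plain array slicing around the arm-end pixel position, as sketched below; the window size and the clipping behaviour at image borders are illustrative assumptions.

```python
# Illustrative sketch: crop a fixed-size window around the arm-end pixel
# position from the first environment / direction / depth information images,
# producing the corresponding "second" images. Window size is an assumption.
import numpy as np

def crop_end_region(image, center_xy, size=54):
    """Crop a size x size patch centered on the arm-end pixel position,
    clipped so the patch stays inside the image."""
    h, w = image.shape[:2]
    half = size // 2
    x0 = int(np.clip(center_xy[0] - half, 0, w - size))
    y0 = int(np.clip(center_xy[1] - half, 0, h - size))
    return image[y0:y0 + size, x0:x0 + size]

# env_img, dir_img, depth_img would be the 224x224 first information images
# and end_xy the arm-end pixel position found for the current frame, e.g.:
# second_env   = crop_end_region(env_img, end_xy)
# second_dir   = crop_end_region(dir_img, end_xy)
# second_depth = crop_end_region(depth_img, end_xy)
```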

For example, the first environment information image 411, the first direction information image 412, and the first depth information image 413 may each have an image resolution of 224×224 pixels, and the second environment information image 421, the second depth information image 422, and the second direction information image 423 may each have an image resolution of 54×54 pixels. Before the spatial environment image processing module 122 inputs the second environment information image 421, the second depth information image 422, and the second direction information image 423 to the fully convolutional network model 122, the spatial environment image processing module 122 first enlarges the second environment information image 421, the second depth information image 422, and the second direction information image 423 respectively, for example through a bilinear interpolation algorithm. The enlarged second environment information image 431, the enlarged second depth information image 432, and the enlarged second direction information image 433 may each have an image resolution of 224×224 pixels. Then, the spatial environment image processing module 122 inputs the enlarged second environment information image 431, the enlarged second depth information image 432, and the enlarged second direction information image 433 to the fully convolutional network model 122, so that the fully convolutional network model 122 outputs a feature image 451.
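
A minimal sketch of the bilinear enlargement step, assuming OpenCV is available as the interpolation backend:

```python
# Illustrative sketch: upscale the cropped 54x54 information images back to
# 224x224 with bilinear interpolation before feeding them to the network.
import cv2

def upscale(patch, size=(224, 224)):
    return cv2.resize(patch, size, interpolation=cv2.INTER_LINEAR)

# enlarged_env   = upscale(second_env)
# enlarged_dir   = upscale(second_dir)
# enlarged_depth = upscale(second_depth)
```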

The fully convolutional network model 122 may include a dense neural network module 122-1 (the upper half of the computation model) and a feature restoration module 122-2 (the lower half of the computation model). The dense neural network module 122-1 first generates multiple feature value data 441-1 to 441-N, 442-1 to 442-N, and 443-1 to 443-N as training results. The feature value data 441-1 to 441-N may be the training results of the enlarged second environment information image 431. The feature value data 442-1 to 442-N may be the training results of the enlarged second depth information image 432. The feature value data 443-1 to 443-N may be the training results of the enlarged second direction information image 433. The fully convolutional network model 122 can then input the feature value data 441-1 to 441-N, 442-1 to 442-N, and 443-1 to 443-N to the feature restoration module 122-2, so that the feature restoration module 122-2 can recombine the feature value data 441-1 to 441-N, 442-1 to 442-N, and 443-1 to 443-N to output a feature image 451. In this embodiment, the spatial environment image processing module 122 can analyze the feature image 451 to calculate the path information. The feature image 451 may, for example, carry weight distribution information (movable weights or obstacle weights) corresponding to each point position in the space or the movement plane. Furthermore, the processor 110 can, for example, calculate information or parameters such as the direction in which the surgical robotic arm 140 can move and the distance it can move in the current frame according to the feature image 451.
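
The patent names the two halves of the fully convolutional network but not their layers; the PyTorch sketch below is one possible, simplified realization — a small convolutional encoder per input, feature concatenation, and a transposed-convolution decoder that restores a 224×224 single-channel weight map — together with a naive read-out of a move direction from that map. Channel counts, layer sizes, and the argmax read-out are assumptions for illustration, not the patented model.

```python
# Illustrative sketch only: a tiny fully convolutional model mapping the three
# enlarged 224x224 information images to a single-channel 224x224 weight map,
# plus a naive read-out of a move direction. Architecture is an assumption.
import torch
import torch.nn as nn

def encoder(in_ch):
    # downsampling conv stack standing in for the "dense" feature extractor
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())

class PathFCN(nn.Module):
    def __init__(self, depth_planes=11):
        super().__init__()
        self.enc_env = encoder(3)              # RGB environment image
        self.enc_dir = encoder(3)              # direction (gradient) image
        self.enc_dep = encoder(depth_planes)   # stacked binary depth planes
        self.decoder = nn.Sequential(          # "feature restoration" half
            nn.ConvTranspose2d(96, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, env, direc, depth):
        feats = torch.cat([self.enc_env(env),
                           self.enc_dir(direc),
                           self.enc_dep(depth)], dim=1)
        return self.decoder(feats)             # (N, 1, 224, 224) weight map

def read_direction(weight_map):
    """Naive read-out: move toward the highest-weight pixel, with the arm end
    assumed to sit at the center of the cropped region."""
    flat = weight_map[0, 0].argmax()
    y, x = divmod(int(flat), weight_map.shape[-1])
    cy = cx = weight_map.shape[-1] // 2
    return (x - cx, y - cy)                    # pixel offset to move toward

# quick shape check with random tensors
model = PathFCN()
wmap = model(torch.randn(1, 3, 224, 224),
             torch.randn(1, 3, 224, 224),
             torch.randn(1, 11, 224, 224))
dx, dy = read_direction(wmap)
```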

In step S340, the surgical robotic arm control system 100 executes the robotic arm motion feedback module 123 through the processor 110 to operate the surgical robotic arm 140 to move to the target area according to the path information. In this embodiment, the image capture unit 130 continuously obtains multiple first images over multiple frames, so that the processor 110 iteratively executes the spatial environment recognition module 121, the spatial environment image processing module 122, and the robotic arm motion feedback module 123 according to these first images to operate the surgical robotic arm 140 to move multiple times until the processor 110 determines that the robotic arm end of the surgical robotic arm 140 has reached the target coordinate. When the processor 110 determines that the robotic arm end area of the surgical robotic arm 140 overlaps the target area (the two virtual cubes intersect), the processor 110 determines that the robotic arm end of the surgical robotic arm 140 has reached the target coordinate. The robotic arm end area may be a cubic region simulated by the processor 110 by extending outward from the center point of the spatial position of the robotic arm end (the center point of the region is the center point of the robotic arm end). Therefore, the surgical robotic arm 140 can automatically avoid the surgical instruments 202 to 204 on the movement path and automatically move to the other side of the surgical object 200.
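
The termination test reduces to an axis-aligned overlap check between the cube around the arm end and the target cube, repeated for every captured frame; a minimal sketch with assumed example coordinates (in millimetres) is shown below.

```python
# Illustrative sketch: axis-aligned overlap test between the virtual cube
# around the arm end and the target cube. Coordinates are assumed examples.
import numpy as np

def cubes_overlap(a_min, a_max, b_min, b_max):
    """True if two axis-aligned cubes, each given as (min corner, max corner),
    intersect."""
    a_min, a_max = np.asarray(a_min, float), np.asarray(a_max, float)
    b_min, b_max = np.asarray(b_min, float), np.asarray(b_max, float)
    return bool(np.all(a_min <= b_max) and np.all(b_min <= a_max))

# hypothetical example values (mm): cube around the arm end vs. target cube
arm_end_cube = ([88.0, 50.0, 8.0], [148.0, 110.0, 68.0])
target_cube  = ([90.0, 55.0, 10.0], [150.0, 115.0, 70.0])
reached = cubes_overlap(*arm_end_cube, *target_cube)   # True for these values
```

In the described system this check would be evaluated once per frame, and modules 121 to 123 would be re-executed until it returns true.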

The following embodiments of FIG. 5 to FIG. 7 respectively describe in detail how the second environment information image 421, the second depth information image 422, and the second direction information image 423 are generated.

FIG. 5 is a schematic diagram of generating the second environment information image according to an embodiment of the invention. Referring to FIG. 1 and FIG. 5, the image capture unit 130 may, for example, capture the surgical field 501 shown in FIG. 5 to obtain a first environment information image 502 (that is, the first image). The processor 110 can define the position corresponding to the robotic arm end of the surgical robotic arm in the first environment information image 502 to determine the extent of the robotic arm end area 511 (the preset analysis range). The horizontal extent of the robotic arm end area 511 corresponds to the range 512 in the first environment information image 502. Then, the processor 110 can crop the first environment information image 502 according to the range 512 to generate the second environment information image 421 (an RGB image).

FIG. 6 is a schematic diagram of generating the second depth information image according to an embodiment of the invention. Referring to FIG. 1 and FIG. 6, the image capture unit 130 may, for example, capture the surgical field 601 shown in FIG. 6 to obtain a first depth information image carrying depth information (that is, the first image with depth information). The processor 110 can define the position corresponding to the robotic arm end of the surgical robotic arm in the first depth information image to determine a reference plane based on the extension axis 611 of the robotic arm end 141. The first depth information image can include multiple first depth plane images 602_1 to 602_N corresponding to different depths, where N is a positive integer. The different depths may, for example, refer to the reference plane and five vertical depths each above and below it, parallel to the reference plane (for example, -5, -4, -3, -2, -1, 0, +1, +2, +3, +4, +5), but the invention does not limit the sampling number of the depth plane images. The horizontal extent of the robotic arm end area of the robotic arm end 141 (like the robotic arm end area 511 of FIG. 5) corresponds to the range 512 at the same position in the first depth plane images 602_1 to 602_N. Then, the processor 110 can convert the first depth plane images 602_1 to 602_N into multiple binarized images 603_1 to 603_N (for example, an obstacle is represented by the value "0" (pure black) and the absence of an obstacle by the value "1" (pure white)), and obtain, from these binarized images 603_1 to 603_N, multiple second depth plane images 422_1 to 422_N corresponding to different depths in the second depth information image according to the robotic arm end area of the surgical robotic arm 140. Accordingly, the surgical robotic arm control system 100 can obtain obstacle distribution information at different depth planes (for example, the distribution of other surgical instruments) from the second depth plane images 422_1 to 422_N, so as to effectively calculate a movement path along which the surgical robotic arm 140 will not hit an obstacle.
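
A sketch of how the depth map could be sliced into planes around the arm-end reference depth and binarized into per-plane obstacle masks is given below; the plane spacing, tolerance, and example depth values are assumptions for illustration.

```python
# Illustrative sketch: slice a depth map into planes above/below the arm-end
# reference depth and binarize each plane into an obstacle mask
# (1 = free / pure white, 0 = occupied / pure black). Spacing is assumed.
import numpy as np

def depth_plane_masks(depth_map, ref_depth_mm, spacing_mm=10.0,
                      planes=5, tol_mm=5.0):
    """Return a (2*planes+1, H, W) stack of binary masks, one per depth plane."""
    offsets = np.arange(-planes, planes + 1) * spacing_mm
    masks = []
    for off in offsets:
        plane_depth = ref_depth_mm + off
        occupied = np.abs(depth_map - plane_depth) < tol_mm
        masks.append(np.where(occupied, 0, 1).astype(np.uint8))
    return np.stack(masks)

# hypothetical example: a 224x224 depth map with the arm end 500 mm away
depth_map = np.full((224, 224), 620.0)
depth_map[100:140, 60:120] = 503.0            # a fake obstacle near the arm plane
masks = depth_plane_masks(depth_map, ref_depth_mm=500.0)   # shape (11, 224, 224)
```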

FIG. 7 is a schematic diagram of generating the second direction information image according to an embodiment of the invention. Referring to FIG. 1 and FIG. 7, the image capture unit 130 may, for example, capture the surgical field 701 shown in FIG. 7 to obtain a first environment information image 702 (that is, the first image). The processor 110 can define the robotic arm end of the surgical robotic arm in the first environment information image 702 to determine the end point of the robotic arm end as the robotic arm end point P1 of the current frame. The processor 110 can also define the robotic arm end of the surgical robotic arm in the first environment information image 702 to determine the extent of the robotic arm end area (the preset analysis range). Furthermore, the processor 110 can obtain the target coordinate according to a target selection signal provided by the input unit (for example, a target position selected by the user) to determine the target point P2. The processor 110 can determine radial gradient color parameters along the path from the robotic arm end point P1 to the target point P2 in the first environment information image 702 according to the distance toward the target point P2, to generate a first direction information image 703. It is worth noting that the color change direction of the first direction information image 703 is parallel to the direction on the first direction information image 703 from the robotic arm end coordinate of the robotic arm end point P1 to the target coordinate of the target point P2. The processor 110 can crop the first direction information image 703 according to the range 512 to generate the second direction information image 423.
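
One way the radial-gradient direction image could be generated is sketched below, encoding the distance toward the target point P2 as a gray-scale radial gradient; the specific gray-scale encoding is an assumption, since the patent only specifies a gradient color parameter whose change direction follows the direction from P1 to P2.

```python
# Illustrative sketch: build a direction-information image as a radial
# gradient centered on the target point P2, so that intensity fades with
# distance from the target; along the line from the arm-end point P1 to P2
# the color change direction is parallel to the P1 -> P2 direction.
# The gray-scale encoding is an assumed stand-in for the gradient color.
import numpy as np

def direction_image(shape_hw, target_xy):
    h, w = shape_hw
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - target_xy[0], ys - target_xy[1])
    img = 255.0 * (1.0 - dist / dist.max())   # brightest at the target point
    return img.astype(np.uint8)

# hypothetical frame: arm end P1 at (40, 180), target P2 at (190, 60)
dir_img = direction_image((224, 224), target_xy=(190, 60))
# the second direction information image would then be cropped around P1,
# e.g. with the crop_end_region sketch shown earlier
```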

To sum up, the surgical robotic arm control system and control method of the invention use computer vision imaging technology, through the image capture unit, to automatically control the surgical robotic arm to move toward and approach a target object, and achieve fast and accurate control of the surgical robotic arm by concentrating computing resources on computing and analyzing the key area of the sensed images provided by the image capture unit. Therefore, the surgical robotic arm control system and control method of the invention can effectively move the surgical robotic arm automatically to, for example, a position adjacent to the operator's hand or the surgical object, so that the operator can quickly and efficiently use the surgical robotic arm to perform its surgical assistance functions.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the relevant technical field may make some changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the invention shall be defined by the appended claims.

100: Surgical robotic arm control system
110: Processor
120: Storage unit
121: Spatial environment recognition module
122: Spatial environment image processing module
122-1: Dense neural network module
122-2: Feature restoration module
123: Robotic arm motion feedback module
130: Image capture unit
140: Surgical robotic arm
200: Surgical object
201: Surgical area
202~204: Surgical instruments
401: First image
411, 502, 702: First environment information image
412, 703: First direction information image
413: First depth information image
421: Second environment information image
422: Second depth information image
422_1~422_N: Second depth plane images
423: Second direction information image
431: Enlarged second environment information image
432: Enlarged second depth information image
433: Enlarged second direction information image
441-1~441-N, 442-1~442-N, 443-1~443-N: Feature value data
451: Feature image
501, 601, 701: Surgical field
511: Robotic arm end area
512: Range
602_1~602_N: First depth plane images
603_1~603_N: Binarized images
611: Extension axis
P1: Robotic arm end point
P2: Target point
S310~S340: Steps

FIG. 1 is a schematic circuit diagram of a surgical robotic arm control system according to an embodiment of the invention.
FIG. 2 is an operation schematic diagram of a surgical robotic arm control system according to an embodiment of the invention.
FIG. 3 is a flowchart of a surgical robotic arm control method according to an embodiment of the invention.
FIG. 4 is a schematic diagram of image processing and image analysis according to an embodiment of the invention.
FIG. 5 is a schematic diagram of generating a second environment information image according to an embodiment of the invention.
FIG. 6 is a schematic diagram of generating a second depth information image according to an embodiment of the invention.
FIG. 7 is a schematic diagram of generating a second direction information image according to an embodiment of the invention.

100: Surgical robotic arm control system
110: Processor
120: Storage unit
121: Spatial environment recognition module
122: Spatial environment image processing module
123: Robotic arm motion feedback module
130: Image capture unit
140: Surgical robotic arm

Claims (20)

1. A surgical robotic arm control system, comprising: a surgical robotic arm, having a plurality of joint axes; an image capture unit, configured to obtain a first image, wherein the first image includes a robotic arm end image of the surgical robotic arm; and a processor, coupled to the surgical robotic arm and the image capture unit, wherein the processor executes a spatial environment recognition module to generate a first environment information image, a first direction information image, and a first depth information image according to the first image, and the processor executes a spatial environment image processing module to calculate path information according to the first environment information image, the first direction information image, and the first depth information image, wherein the processor executes a robotic arm motion feedback module to operate the surgical robotic arm to move according to the path information.

2. The surgical robotic arm control system according to claim 1, wherein the image capture unit is a depth camera, and the image capture unit obtains a positioning image and reference depth information in advance, wherein the positioning image includes a positioning object, and the processor executes a panoramic environment field positioning module to analyze positioning coordinate information of the positioning object in the positioning image and the reference depth information through the panoramic environment field positioning module, so that a camera coordinate system of the depth camera matches a robotic arm coordinate system of the surgical robotic arm.

3. The surgical robotic arm control system according to claim 1, wherein the processor executes a target area confirmation module to define a target area in the first image according to a target coordinate, and the robotic arm motion feedback module operates the surgical robotic arm to move to the target area.

4. The surgical robotic arm control system according to claim 3, wherein the spatial environment image processing module respectively captures a second environment information image, a second direction information image, and a second depth information image from the first environment information image, the first direction information image, and the first depth information image according to a robotic arm end area of the surgical robotic arm, and inputs the second environment information image, the second direction information image, and the second depth information image to a fully convolutional network model, so that the fully convolutional network model outputs a feature image, and the processor generates the path information according to the feature image, wherein the second environment information image, the second direction information image, and the second depth information image are respectively a part of the first environment information image, the first direction information image, and the first depth information image.

5. The surgical robotic arm control system according to claim 4, wherein before the processor inputs the second environment information image, the second direction information image, and the second depth information image to the fully convolutional network model, the processor first enlarges the second environment information image, the second direction information image, and the second depth information image respectively.

6. The surgical robotic arm control system according to claim 4, wherein the first depth information image includes a plurality of first depth plane images corresponding to different depths, and the processor converts the first depth plane images into a plurality of binarized images and obtains, from the binarized images, a plurality of second depth plane images corresponding to different depths in the second depth information image according to the robotic arm end area of the surgical robotic arm.

7. The surgical robotic arm control system according to claim 4, wherein when the processor determines that the robotic arm end area of the surgical robotic arm overlaps the target area, the processor determines that the robotic arm end of the surgical robotic arm has reached the target coordinate.

8. The surgical robotic arm control system according to claim 7, wherein the image capture unit continuously obtains a plurality of first images, so that the processor iteratively executes the spatial environment recognition module, the spatial environment image processing module, and the robotic arm motion feedback module according to the first images to operate the surgical robotic arm to move multiple times until the processor determines that the robotic arm end of the surgical robotic arm has reached the target coordinate.

9. The surgical robotic arm control system according to claim 4, wherein a color change direction of the second direction information image is parallel to a direction on the second direction information image from a robotic arm end coordinate to the target coordinate.

10. The surgical robotic arm control system according to claim 3, further comprising: an input unit, coupled to the processor and providing a target selection signal to the processor, so that the processor generates the target coordinate according to the target selection signal.

11. A surgical robotic arm control method, comprising: obtaining a first image through an image capture unit, wherein the first image includes a robotic arm end image of a surgical robotic arm; executing a spatial environment recognition module through a processor to generate a first environment information image, a first direction information image, and a first depth information image according to the first image; executing a spatial environment image processing module through the processor to calculate path information according to the first environment information image, the first direction information image, and the first depth information image; and executing a robotic arm motion feedback module through the processor to operate the surgical robotic arm to move according to the path information.

12. The surgical robotic arm control method according to claim 11, wherein the image capture unit is a depth camera, and the image capture unit obtains a positioning image and reference depth information in advance, wherein the positioning image includes a positioning object, and the surgical robotic arm control method includes the following step: executing a panoramic environment field positioning module through the processor to analyze positioning coordinate information of the positioning object in the positioning image and the reference depth information through the panoramic environment field positioning module, so that a camera coordinate system of the depth camera matches a robotic arm coordinate system of the surgical robotic arm.

13. The surgical robotic arm control method according to claim 11, further comprising: executing a target area confirmation module through the processor to define a target area in the first image according to a target coordinate, wherein the robotic arm motion feedback module operates the surgical robotic arm to move to the target area.

14. The surgical robotic arm control method according to claim 13, wherein the step of executing the spatial environment image processing module through the processor to calculate the path information according to the first environment information image, the first direction information image, and the first depth information image includes: capturing, by the spatial environment image processing module, a second environment information image, a second direction information image, and a second depth information image respectively from the first environment information image, the first direction information image, and the first depth information image according to a robotic arm end area of the surgical robotic arm; and inputting, by the spatial environment image processing module, the second environment information image, the second direction information image, and the second depth information image to a fully convolutional network model, so that the fully convolutional network model outputs a feature image, and generating, by the processor, the path information according to the feature image, wherein the second environment information image, the second direction information image, and the second depth information image are respectively a part of the first environment information image, the first direction information image, and the first depth information image.

15. The surgical robotic arm control method according to claim 14, wherein before the processor inputs the second environment information image, the second direction information image, and the second depth information image to the fully convolutional network model, the processor first enlarges the second environment information image, the second direction information image, and the second depth information image respectively.

16. The surgical robotic arm control method according to claim 14, wherein the first depth information image includes a plurality of first depth plane images corresponding to different depths, and the step of generating the second depth information image includes: converting, by the processor, the first depth plane images into a plurality of binarized images, and obtaining, from the binarized images, a plurality of second depth plane images corresponding to different depths in the second depth information image according to the robotic arm end area of the surgical robotic arm.

17. The surgical robotic arm control method according to claim 14, further comprising: when the processor determines that the robotic arm end area of the surgical robotic arm overlaps the target area, determining, by the processor, that the robotic arm end of the surgical robotic arm has reached the target coordinate.

18. The surgical robotic arm control method according to claim 17, further comprising: continuously obtaining a plurality of first images through the image capture unit, so that the processor iteratively executes the spatial environment recognition module, the spatial environment image processing module, and the robotic arm motion feedback module according to the first images to operate the surgical robotic arm to move multiple times until the processor determines that the robotic arm end of the surgical robotic arm has reached the target coordinate.

19. The surgical robotic arm control method according to claim 13, wherein a color change direction of the second direction information image is parallel to a direction on the second direction information image from a robotic arm end coordinate to the target coordinate.

20. The surgical robotic arm control method according to claim 13, further comprising: providing a target selection signal to the processor through an input unit, so that the processor generates the target coordinate according to the target selection signal.
TW110139147A 2021-10-21 2021-10-21 Surgical robotic arm control system and control method thereof TWI825499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110139147A TWI825499B (en) 2021-10-21 2021-10-21 Surgical robotic arm control system and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW110139147A TWI825499B (en) 2021-10-21 2021-10-21 Surgical robotic arm control system and control method thereof

Publications (2)

Publication Number Publication Date
TW202317045A TW202317045A (en) 2023-05-01
TWI825499B true TWI825499B (en) 2023-12-11

Family

ID=87378696

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110139147A TWI825499B (en) 2021-10-21 2021-10-21 Surgical robotic arm control system and control method thereof

Country Status (1)

Country Link
TW (1) TWI825499B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763040A (en) * 2008-11-14 2010-06-30 苏州佳世达电通有限公司 Device capable of interacting with audio frequencies and method thereof
CN106607920A (en) * 2015-10-23 2017-05-03 赵士野 Control device for integrated molding machine and mechanical arm
US20180354130A1 (en) * 2015-10-30 2018-12-13 Keba Ag Method, control system and movement setting means for controlling the movements of articulated arms of an industrial robot
US20170182660A1 (en) * 2015-12-29 2017-06-29 Robomotive Laboratories LLC Method of controlling devices with sensation of applied force
US20190224841A1 (en) * 2018-01-24 2019-07-25 Seismic Holdings, Inc. Exosuit systems and methods for monitoring working safety and performance

Also Published As

Publication number Publication date
TW202317045A (en) 2023-05-01

Similar Documents

Publication Publication Date Title
CN110355754B (en) Robot hand-eye system, control method, device and storage medium
JP6198857B2 (en) Method and system for performing three-dimensional image formation
CN103838437B (en) Touch positioning control method based on projection image
CN108921907B (en) Exercise test scoring method, device, equipment and storage medium
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
JP2019028843A (en) Information processing apparatus for estimating person's line of sight and estimation method, and learning device and learning method
JP2018111165A (en) Calibration device of visual sensor, method and program
JP2011175477A (en) Three-dimensional measurement apparatus, processing method and program
US9008442B2 (en) Information processing apparatus, information processing method, and computer program
JP6836561B2 (en) Image processing device and image processing method
WO2021218542A1 (en) Visual perception device based spatial calibration method and apparatus for robot body coordinate system, and storage medium
JP2015035211A (en) Pattern matching method and pattern matching device
WO2022040954A1 (en) Ar spatial visual three-dimensional reconstruction method controlled by means of gestures
JP2010112731A (en) Joining method of coordinate of robot
JP6410411B2 (en) Pattern matching apparatus and pattern matching method
JP5698815B2 (en) Information processing apparatus, information processing apparatus control method, and program
Dalvand et al. High speed vision-based 3D reconstruction of continuum robots
JP6040264B2 (en) Information processing apparatus, information processing apparatus control method, and program
TWI825499B (en) Surgical robotic arm control system and control method thereof
CN114155288A (en) AR space visual three-dimensional reconstruction method controlled through gestures
JP2021128592A5 (en)
US20230149095A1 (en) Surgical robotic arm control system and control method thereof
JP2021026599A (en) Image processing system
KR101438514B1 (en) Robot localization detecting system using a multi-view image and method thereof
Nguyen et al. Real-time obstacle detection for an autonomous wheelchair using stereoscopic cameras