TWI801775B - Automatic control method of mechanical arm and automatic control system

Automatic control method of mechanical arm and automatic control system

Info

Publication number
TWI801775B
TWI801775B (Application TW109140432A)
Authority
TW
Taiwan
Prior art keywords
module
robot arm
image
automatic control
depth images
Prior art date
Application number
TW109140432A
Other languages
Chinese (zh)
Other versions
TW202220623A (en)
Inventor
曾建嘉
楊昇宏
潘柏瑋
Original Assignee
財團法人金屬工業研究發展中心 (Metal Industries Research & Development Centre)
Priority date
Filing date
Publication date
Application filed by 財團法人金屬工業研究發展中心 (Metal Industries Research & Development Centre)
Priority to TW109140432A
Publication of TW202220623A
Application granted
Publication of TWI801775B

Landscapes

  • Numerical Control (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

An automatic control method and an automatic control system for a robotic arm are provided. The automatic control method includes the following steps: obtaining a color image and depth information corresponding to the color image through a depth camera; performing image space cutting processing and image rotation processing based on the color image and the depth information to generate multiple depth images; inputting the depth images to an environmental image recognition module, so that the environmental image recognition module outputs a displacement coordinate parameter; and outputting the displacement coordinate parameter to a robotic arm control module, so that the robotic arm control module controls the movement of the robotic arm according to the displacement coordinate parameter.

Description

Automatic control method and automatic control system of mechanical arm

The present invention relates to a control method and system, and particularly to an automatic control method and an automatic control system for a robotic arm.

With the evolution of medical equipment, automatically controllable medical devices that help improve the efficiency and precision of surgery have become one of the important development directions in this field. In particular, robotic arms used to assist or cooperate with medical personnel (operators) in performing surgical tasks are especially important. However, in existing robotic arm designs, realizing automatic control requires that the robotic arm be equipped with multiple sensors and that the user perform cumbersome manual calibration before each operation, so that the robotic arm can avoid obstacles along its path and achieve accurate automatic movement and operation. In view of this, a new automatic control system design is proposed below.

The present invention provides an automatic control method and an automatic control system for a robotic arm, which can operate the robotic arm to move in space while effectively avoiding obstacles.

The automatic control method of the robotic arm of the present invention includes the following steps: obtaining a color image and depth information corresponding to the color image through a depth camera; performing image space cutting processing and image rotation processing according to the color image and the depth information to generate multiple depth images; inputting the depth images to an environmental image recognition module, so that the environmental image recognition module outputs a displacement coordinate parameter; and outputting the displacement coordinate parameter to a robotic arm control module, so that the robotic arm control module controls the movement of the robotic arm according to the displacement coordinate parameter.

The automatic control system of the robotic arm of the present invention includes a depth camera and a processor. The depth camera is used to obtain a color image and depth information corresponding to the color image. The processor is coupled to the robotic arm and the depth camera. The processor performs image space cutting processing and image rotation processing according to the color image and the depth information to generate multiple depth images. The processor inputs the depth images to an environmental image recognition module, so that the environmental image recognition module outputs a displacement coordinate parameter. The processor outputs the displacement coordinate parameter to a robotic arm control module, so that the robotic arm control module controls the movement of the robotic arm according to the displacement coordinate parameter.

Based on the above, the automatic control method and automatic control system of the robotic arm of the present invention can, through visual training, automatically identify obstacles in the current environment and effectively operate the robotic arm to move within it.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

100: automatic control system
110: processor
120: memory
121: environmental image recognition module
122: robotic arm control module
130: depth camera
140: robotic arm
141: gripper
150: display device
200: surgical subject
210: surgical site
410: control module
411: image processing module
412: trajectory recording module
413: feedback module
414: output module
600: color image
601: range
610: color image
620: environment image
630: target position
700_1~700_N: depth images
S310~S340, S510~S570, S810, S820: steps
800_1~800_N: effective safe space images
900_1~900_N: movement trajectories

FIG. 1 is a schematic block diagram of an automatic control system according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of the operation of an automatic control system according to an embodiment of the present invention.
FIG. 3 is a flowchart of an automatic control method according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of multiple modules of a neural network operation according to an embodiment of the present invention.
FIG. 5 is a flowchart of an automatic control method according to another embodiment of the present invention.
FIG. 6 is a schematic diagram of a color image according to an embodiment of the present invention.
FIG. 7 is a schematic diagram of multiple depth images according to an embodiment of the present invention.
FIG. 8 is a training flowchart of a neural network module according to an embodiment of the present invention.
FIG. 9 is a schematic diagram of multiple effective safe space images according to an embodiment of the present invention.

To make the content of the present invention more comprehensible, the following embodiments are given as examples according to which the present invention can indeed be implemented. In addition, wherever possible, elements/components/steps with the same reference numerals in the drawings and embodiments represent the same or similar parts.

FIG. 1 is a schematic block diagram of an automatic control system according to an embodiment of the present invention. Referring to FIG. 1, the automatic control system 100 includes a processor 110, a memory 120, and a depth camera 130. The processor 110 is coupled to the memory 120, the depth camera 130, and the robotic arm 140. The robotic arm 140 may be a multi-axis robotic arm (for example, six-axis). In this embodiment, the memory 120 stores the environmental image recognition module 121 and the robotic arm control module 122. The processor 110 can access the memory 120 and execute the environmental image recognition module 121 and the robotic arm control module 122 to control the movement and related operations of the robotic arm 140. In this embodiment, the processor 110 and the memory 120 may, for example, be integrated into a host computer and communicate with the depth camera 130 and the robotic arm 140 in a wired or wireless manner. In another embodiment, the processor 110 and the memory 120 may instead be integrated into a cloud server system, but the invention is not limited thereto.

In this embodiment, the processor 110 first obtains, through the depth camera 130, a color image corresponding to the target position and the robotic arm, together with the depth information corresponding to that color image, and then executes the environmental image recognition module 121 based on the color image and its depth information, so as to recognize the environment of the target position by means of computer-vision image processing. The processor 110 executes the robotic arm control module 122 according to the displacement or path parameters that the environmental image recognition module 121 outputs for the environment recognition result, so that the robotic arm control module 122 generates corresponding control signals to the robotic arm 140. In this embodiment, the robotic arm control module 122 may include an input interface (for example, a socket or an API) for controlling the robotic arm 140, and the robotic arm control module 122 can perform the forward and inverse kinematics calculations of the robotic arm 140. Therefore, the robotic arm control module 122 can control the robotic arm 140 to move automatically to the target position in space while effectively avoiding obstacles in the environment.
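
For illustration only (not part of the patent disclosure), a minimal sketch of such a socket-based control interface is given below; the JSON command names, the port number, and the GET_TCP query are hypothetical assumptions, since the patent only states that a socket or API is used and that the controller handles the kinematics.

```python
# Minimal sketch of a socket-based arm-control interface. The wire format
# ("MOVE", "GET_TCP") and the port are hypothetical, not from the patent.
import json
import socket

class ArmControlModule:
    def __init__(self, host: str = "127.0.0.1", port: int = 30002):
        self.sock = socket.create_connection((host, port))
        self.reader = self.sock.makefile()

    def move_to(self, x: float, y: float, z: float) -> None:
        # The controller is assumed to run inverse kinematics internally,
        # mapping the Cartesian target to joint angles.
        cmd = json.dumps({"cmd": "MOVE", "target": [x, y, z]}) + "\n"
        self.sock.sendall(cmd.encode())

    def end_effector_position(self) -> list[float]:
        # Query the current end (gripper) coordinates, cf. step S560 below.
        self.sock.sendall(b'{"cmd": "GET_TCP"}\n')
        return json.loads(self.reader.readline())["tcp"]
```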

In this embodiment, the processor 110 may include a central processing unit (CPU), a programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), other similar devices, or a combination of the above, and may be used to implement the related functional circuits of the present invention.

In this embodiment, the memory 120 may include, for example, random-access memory (RAM), read-only memory (ROM), an optical disc, a magnetic disk, a hard drive, a solid-state drive, a flash drive, a Secure Digital (SD) card, a memory stick, a CompactFlash (CF) card, or any other type of storage device. In this embodiment, the memory 120 may be used to store the related modules, image data, and parameters mentioned in the embodiments of the present invention, so that the processor 110 can access the memory 120 to perform the related data processing and calculations.

FIG. 2 is a schematic diagram of the operation of the automatic control system according to an embodiment of the present invention. Referring to FIG. 1 and FIG. 2, the automatic control system 100 of FIG. 1 may, for example, be applied to the medical surgery scenario shown in FIG. 2, in which the robotic arm 140 is a surgical robotic arm. As shown in FIG. 2, the processor 110 of the automatic control system 100 may further be coupled to a display device 150. Specifically, the depth camera 130 captures images of the surgical site 210 of the surgical subject 200, continuously obtaining multiple color images of the surgical site 210 and multiple pieces of depth information corresponding to those color images. The depth camera 130 provides the color images and the corresponding depth information to the processor 110. The processor 110 may display related real-time information on the display device 150 according to the capture results of the depth camera 130, for medical personnel to judge or monitor; the invention does not limit the display content of the display device 150. In this embodiment, the processor 110 performs image processing on each color image and its corresponding depth information to generate corresponding control signals to the robotic arm 140, so that the robotic arm 140 can automatically move toward the target position within the surgical site 210.

While the depth camera 130 performs continuous capture, the processor 110 likewise performs continuous visual image processing to continuously output control signals to the robotic arm 140. In other words, the processor 110 controls the movement of the robotic arm 140 in response to the current environment and to environmental changes. For example, the processor 110 can control and operate the gripper 141 of the robotic arm 140 to move toward a specific medical instrument located in the surgical site 210, and during the movement the robotic arm 140 automatically avoids obstacles in the surgical environment (including the body parts of the surgical subject 200), so that the gripper 141 of the robotic arm 140 can smoothly grip the specific medical instrument at the target position. Therefore, the automatic control system 100 of this embodiment can control the robotic arm 140 to effectively assist or cooperate with medical personnel in performing related surgical actions.

FIG. 3 is a flowchart of an automatic control method according to an embodiment of the present invention. Referring to FIG. 1 to FIG. 3, the automatic control system 100 controls the robotic arm 140 to move automatically by executing steps S310 to S340. In step S310, the processor 110 obtains a color image and depth information corresponding to the color image through the depth camera 130. In step S320, the processor 110 performs image space cutting processing and image rotation processing according to the color image and the depth information to generate multiple depth images. In step S330, the processor 110 inputs the depth images to the environmental image recognition module 121, so that the environmental image recognition module 121 outputs a displacement coordinate parameter. In this embodiment, the environmental image recognition module 121 may include a neural network computing model and may, for example, be pre-trained to recognize obstacles in the depth images and to generate the displacement coordinate parameter according to the recognition result. In step S340, the environmental image recognition module 121 outputs the displacement coordinate parameter to the robotic arm control module 122, so that the robotic arm control module 122 controls the movement of the robotic arm 140 according to the displacement coordinate parameter. Therefore, the automatic control system 100 of this embodiment can effectively control the robotic arm 140. The detailed implementation of each step is described in the following embodiments.
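
Read as pseudocode, one iteration of steps S310 to S340 could be sketched as follows. All helper names are illustrative rather than taken from the patent; cut_and_rotate is elaborated in the sketch after the slicing description further below.

```python
# Sketch of one control iteration (steps S310-S340), assuming hypothetical
# helper objects; names and signatures are illustrative, not from the patent.
def control_step(depth_camera, recognizer, arm_ctrl) -> None:
    color, depth = depth_camera.capture()            # S310: color + depth
    depth_images = cut_and_rotate(color, depth)      # S320: cut + rotate
    displacement = recognizer.predict(depth_images)  # S330: (x, y, z) output
    arm_ctrl.move_to(*displacement)                  # S340: command the arm
```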

FIG. 4 is a schematic diagram of multiple modules of a neural network operation according to an embodiment of the present invention. FIG. 5 is a flowchart of an automatic control method according to another embodiment of the present invention. Referring to FIG. 1, FIG. 4, and FIG. 5, the automatic control system 100 executes steps S510 to S570 to operate the robotic arm 140. In this embodiment, the memory 120 further stores an image processing module 411, a trajectory recording module 412, a feedback module 413, and an output module 414. The processor 110 can execute the environmental image recognition module 121, the robotic arm control module 122, the image processing module 411, the trajectory recording module 412, the feedback module 413, and the output module 414. Notably, the automatic control system 100 executes the image processing module 411, the environmental image recognition module 121, the output module 414, and the robotic arm control module 122 to control the robotic arm 140; these may belong to, or be integrated as, the control module 410. Furthermore, in an embodiment, the automatic control system 100 can enter a training mode to train the environmental image recognition module 121. In that case, the automatic control system 100 executes the image processing module 411, the environmental image recognition module 121, the trajectory recording module 412, the feedback module 413, the output module 414, and the robotic arm control module 122 to train the environmental image recognition module 121; these may belong to, or be integrated as, the training module 420.

FIG. 6 is a schematic diagram of a color image according to an embodiment of the present invention. FIG. 7 is a schematic diagram of multiple depth images according to an embodiment of the present invention. The following description refers to FIG. 6 and FIG. 7. In this embodiment, the automatic control system 100 may further include an input module, for example input devices such as a mouse and a keyboard, and may receive input data (or setting parameters) from the user. In step S510, the processor 110 sets a starting position parameter and a target position parameter according to the received input data. In step S520, the processor 110 obtains, through the depth camera 130, the color image 600 of the current frame shown in FIG. 6 and the depth information corresponding to the color image 600. The color image 600 may include a surgical site image 610 and an environment image 620. The processor 110 defines the target position 630, and the range 601 of the target position 630, in the color image 600 according to the target position parameter. In step S530, the processor 110 executes the image processing module 411 to perform image space cutting processing and image rotation processing according to the color image 600 and the depth information, so as to generate the multiple depth images 700_1 to 700_16 shown in FIG. 7.

For example, the image processing module 411 may first perform RGB digital image space cutting on the color image 600 to increase the distinctiveness of environmental features, generating depth images 700_1, 700_5, 700_9, and 700_13, which correspond to different depths. Next, the image processing module 411 may rotate the depth images 700_1, 700_5, 700_9, and 700_13, for example by 90, 180, and 270 degrees, to further generate the depth images 700_2 to 700_4, 700_6 to 700_8, 700_10 to 700_12, and 700_14 to 700_16, thereby increasing the sample data. Viewed another way, each pixel position in the image contributes 16 samples. In other words, the automatic control system 100 can process the single color image that the depth camera 130 obtains in each frame to generate multiple corresponding depth images 700_1 to 700_16, allowing the current spatial environment to be analyzed three-dimensionally at every moment. The number of depth images of the present invention is not limited to FIG. 7.
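
A minimal sketch of this preprocessing is given below, assuming four depth slices and four rotations (0/90/180/270 degrees) so that each frame yields 16 depth images. The slice boundaries in meters are an assumption; the patent does not specify them.

```python
# Sketch of the cut-and-rotate preprocessing: 4 depth slices x 4 rotations
# = 16 depth images per frame. Slice bounds (in meters) are illustrative.
import numpy as np

def cut_and_rotate(color: np.ndarray, depth: np.ndarray,
                   bounds=(0.0, 0.5, 1.0, 1.5, 2.0)) -> list[np.ndarray]:
    images = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        # Keep only pixels whose depth falls in [lo, hi); zero out the rest.
        mask = (depth >= lo) & (depth < hi)
        layer = np.where(mask, depth, 0.0)
        for k in range(4):                      # 0, 90, 180, 270 degrees
            images.append(np.rot90(layer, k))
    return images                               # 16 depth images
```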

In step S540, the processor 110 inputs the depth images to the environmental image recognition module 121, so that the environmental image recognition module 121 outputs a displacement coordinate parameter to the output module 414. In this embodiment, the environmental image recognition module 121 performs a neural network operation and identifies an effective safe space image among the depth images 700_1 to 700_16, selecting one of them as the effective safe space image and further determining the displacement coordinate parameter according to that image. In other words, the environmental image recognition module 121 can three-dimensionally determine a safer movement path within the space of the current frame, so as to provide the corresponding displacement coordinate parameter to the output module 414.

In more detail, after each of the depth images 700_1 to 700_16 is input to the neural network computing model of the environmental image recognition module 121, the model performs a feature value analysis (of environmentally effective space features) on the pixels of each depth image to obtain a feature value weight for each pixel, where the feature value analysis serves to evaluate the object at each pixel. The environmental image recognition module 121 thereby generates, for example, multiple spatial weight matrix data corresponding to the respective depth images 700_1 to 700_16. The environmental image recognition module 121 then performs the neural network operation on the spatial weight matrix data corresponding to the depth images 700_1 to 700_16 to determine the effective safe space image.

For example, the neural network computing model of the environmental image recognition module 121 can determine the direction and position to which the arm may safely move in the next frame according to the current position of the robotic arm 140 in each of the depth images 700_1 to 700_16. For example, pixels in the depth images 700_1 to 700_16 that belong to objects (obstacles or the surgical site) may carry higher weights. The environmental image recognition module 121 selects, among the pixels within one unit movement distance around the current position of the robotic arm 140 in each of the depth images 700_1 to 700_16, the one with the lowest weight value (while still moving toward the target position) as the position of the robotic arm 140 in the next frame, and takes the corresponding depth image as the effective safe space image. The automatic control system 100 then drives the robotic arm 140 toward this position, effectively preventing the robotic arm 140 from contacting or colliding with objects (obstacles or the surgical site).
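
As one concrete reading of this selection rule, the sketch below greedily picks, among the grid cells within one unit move of the arm's current cell, the lowest-weight cell that still reduces the distance to the target. The 8-neighborhood, the Manhattan distance, and the tie-breaking are assumptions, not specified by the patent.

```python
# Sketch of greedy next-step selection over one spatial weight matrix:
# among neighbors one unit move away, pick the lowest-weight cell that
# still brings the arm closer to the target.
import numpy as np

def next_position(weights: np.ndarray, current: tuple[int, int],
                  target: tuple[int, int]) -> tuple[int, int]:
    r, c = current
    best, best_w = current, np.inf
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if (dr, dc) == (0, 0) or not (0 <= nr < weights.shape[0]
                                          and 0 <= nc < weights.shape[1]):
                continue
            closer = (abs(nr - target[0]) + abs(nc - target[1])
                      < abs(r - target[0]) + abs(c - target[1]))
            if closer and weights[nr, nc] < best_w:
                best, best_w = (nr, nc), weights[nr, nc]
    return best
```

Running the same comparison across all 16 weight matrices and keeping the matrix whose chosen cell has the overall lowest weight would then identify the corresponding depth image as the effective safe space image.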

In step S550, the output module 414 outputs the displacement coordinate parameter to the robotic arm control module 122, so that the robotic arm control module 122 controls the movement of the robotic arm 140 according to the displacement coordinate parameter. In this embodiment, the output module 414 may, for example, further output movable-direction information and movable-position information for the robotic arm 140 to the robotic arm control module 122 according to the analysis and computation results of the environmental image recognition module 121. In step S560, the processor 110 receives, returned through the robotic arm control module 122, the current end coordinate parameter of the robotic arm 140 (for example, the coordinates of the gripper 141 of the robotic arm 140 shown in FIG. 2). In step S570, the processor 110 determines whether the robotic arm 140 has reached the target position. If so, the automatic control system 100 ends the current control task. If not, the automatic control system 100 re-executes steps S510 to S570 to determine the next displacement direction and position of the robotic arm 140 based on the color image and depth information of the next frame provided by the depth camera 130. Therefore, the automatic control system 100 and the automatic control method of this embodiment can effectively control the robotic arm 140 to move in space through visual image control, preventing the robotic arm 140 from contacting or colliding with objects (obstacles or the surgical site) while it moves.
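
Putting the pieces together, the closed loop of steps S510 to S570 could be sketched as follows, reusing the hypothetical helpers from the earlier sketches; the reach tolerance is illustrative.

```python
# Sketch of the closed-loop operation (S510-S570): iterate per frame until
# the end effector reaches the target. Tolerance is an assumed value.
import numpy as np

def run_until_target(depth_camera, recognizer, arm: ArmControlModule,
                     target: np.ndarray, tol: float = 5e-3) -> None:
    while True:
        control_step(depth_camera, recognizer, arm)     # S510-S550
        tcp = np.asarray(arm.end_effector_position())   # S560: query end pose
        if np.linalg.norm(tcp - target) < tol:          # S570: reached?
            break  # target reached: end the current control task
```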

FIG. 8 is a flowchart of training the environmental image recognition module according to an embodiment of the present invention. FIG. 9 is a schematic diagram of multiple effective safe space images according to an embodiment of the present invention. Referring to FIG. 1, FIG. 4, FIG. 5, FIG. 8, and FIG. 9, the automatic control system 100 executes steps S510 to S570, S810, and S820 to train the neural network computing model in the environmental image recognition module 121. The automatic control system 100 enters the training mode to execute the training module 420. In this embodiment, the automatic control system 100 performs steps S510 to S570 in sequence, as in the embodiment of FIG. 5 above, and after the processor 110 receives the current end coordinate parameter of the robotic arm 140 through the robotic arm control module 122 (after step S560 above), the automatic control system 100 executes step S810. In step S810, the processor 110 executes the trajectory recording module 412 to record the displacement direction, based on the current end coordinate parameter and the previous end coordinate parameter, into the effective safe space image 800_1 shown in FIG. 9, where the effective safe space image 800_1 includes the movement trajectory 900_1 of the robotic arm 140.

Next, in step S820, the processor 110 executes the feedback module 413 to calculate a distance parameter between the current end coordinate parameter and the target position, and trains the environmental image recognition module 121 according to the distance parameter. For example, the processor 110 determines whether the current movement of the robotic arm 140 brought it closer to the target position (whether the distance between the robotic arm 140 and the target position was shortened), thereby defining whether the current movement was appropriate, and feeds this back to train the neural network computing model in the environmental image recognition module 121. Finally, the processor 110 continues with step S570. The automatic control system 100 may, for example, repeatedly analyze the capture results of multiple frames of the depth camera 130 over a continuous period, generating multiple consecutive effective safe space images 800_1 to 800_N as shown in FIG. 9, where N is a positive integer greater than 1. Notably, the effective safe space images 800_1 to 800_N respectively include the movement trajectories 900_1 to 900_N formed by the displacement positions of the robotic arm 140 accumulated and recorded in chronological order over the previous frames. In other words, the neural network computing model in the environmental image recognition module 121 not only effectively recognizes objects in the images; the automatic control system 100 also effectively trains the model so that its outputs drive the robotic arm 140 toward the target position along an optimized path, rather than merely avoiding obstacles in the environment with each movement.
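
A minimal sketch of this distance-based feedback is shown below. Treating it as a reinforcement-style plus/minus-one signal is an assumption; the patent only states that the module is trained according to the distance parameter.

```python
# Sketch of the feedback signal of step S820: reward a step that shortens
# the end-effector-to-target distance, penalize one that does not.
import numpy as np

def feedback_signal(prev_end: np.ndarray, curr_end: np.ndarray,
                    target: np.ndarray) -> float:
    prev_d = np.linalg.norm(target - prev_end)
    curr_d = np.linalg.norm(target - curr_end)
    return 1.0 if curr_d < prev_d else -1.0  # did this move help?
```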

In summary, the automatic control method and automatic control system of the robotic arm of the present invention use visual training so that the neural network computing model in the environmental image recognition module learns to recognize objects in the images and learns to output, in each neural network operation, displacement coordinate parameters that move the robotic arm toward the target position. Therefore, the automatic control method and automatic control system of the present invention can effectively control the robotic arm to move to the target position while effectively avoiding obstacles in the environment during the movement.


Claims (16)

1. An automatic control method of a robotic arm, comprising: obtaining a color image and depth information corresponding to the color image through a depth camera; performing image space cutting processing and image rotation processing according to the color image and the depth information to generate a plurality of depth images, comprising: performing digital image space cutting on the color image to generate a plurality of cut depth images; and rotating the cut depth images respectively to generate the depth images; inputting the depth images to an environmental image recognition module so that the environmental image recognition module outputs a displacement coordinate parameter according to the depth images, comprising: performing feature value analysis on each of the depth images to obtain corresponding spatial weight matrix data; and determining the spatial weight matrix data having the lowest value as the position of the robotic arm in the next frame, and taking the corresponding depth image as an effective safe space image; and outputting the displacement coordinate parameter to a robotic arm control module so that the robotic arm control module controls the robotic arm, according to the displacement coordinate parameter, to move toward the spatial weight matrix data having the lowest value.

2. The automatic control method according to claim 1, further comprising: setting a starting position parameter and a target position parameter, wherein the starting position is an end position parameter corresponding to the robotic arm.

3. The automatic control method according to claim 2, wherein the image space cutting processing and the image rotation processing are performed according to the target position parameter or a position parameter of an obstacle.

4. The automatic control method according to claim 2, wherein the environmental image recognition module is configured to perform a neural network operation, and the environmental image recognition module is configured to recognize the effective safe space image from the depth images and determine the displacement coordinate parameter according to the effective safe space image.

5. The automatic control method according to claim 4, further comprising: after the robotic arm moves according to the displacement coordinate parameter, returning a current end coordinate parameter of the robotic arm through the robotic arm control module.
6. The automatic control method according to claim 5, wherein the step of inputting the depth images to the environmental image recognition module further comprises: executing a trajectory recording module to record a displacement direction into the effective safe space image according to the current end coordinate parameter and a previous end coordinate parameter; and executing a feedback module to calculate a distance parameter between the current end coordinate parameter and the target position, and training the environmental image recognition module according to the distance parameter.

7. The automatic control method according to claim 4, wherein the step of inputting the depth images to the environmental image recognition module comprises: analyzing the depth images through the environmental image recognition module to generate a plurality of spatial weight matrix data corresponding to the depth images; and performing, through the environmental image recognition module, the neural network operation according to the spatial weight matrix data corresponding to the depth images to determine the effective safe space image.

8. The automatic control method according to claim 7, wherein the step of outputting the displacement coordinate parameter to the robotic arm control module comprises: executing an output module to further output, according to the analysis and computation results of the environmental image recognition module, movable-direction information and movable-position information for the robotic arm to the robotic arm control module.
9. An automatic control system of a robotic arm, comprising: a depth camera configured to obtain a color image and depth information corresponding to the color image; and a processor coupled to the robotic arm and the depth camera and configured to perform image space cutting processing and image rotation processing according to the color image and the depth information to generate a plurality of depth images, wherein the processor performs digital image space cutting on the color image to generate a plurality of cut depth images and rotates the cut depth images respectively to generate the depth images, wherein the processor inputs the depth images to an environmental image recognition module so that the environmental image recognition module outputs a displacement coordinate parameter according to the depth images, the processor performs feature value analysis on each of the depth images to obtain corresponding spatial weight matrix data, the processor determines the spatial weight matrix data having the lowest value as the position of the robotic arm in the next frame and takes the corresponding depth image as an effective safe space image, and the processor outputs the displacement coordinate parameter to a robotic arm control module so that the robotic arm control module controls the robotic arm, according to the displacement coordinate parameter, to move toward the spatial weight matrix data having the lowest value.

10. The automatic control system according to claim 9, wherein the processor sets a starting position parameter and a target position parameter, wherein the starting position is an end position parameter corresponding to the robotic arm.

11. The automatic control system according to claim 10, wherein the processor performs the image space cutting processing and the image rotation processing according to the target position parameter or a position parameter of an obstacle.

12. The automatic control system according to claim 10, wherein the environmental image recognition module is configured to perform a neural network operation, and the processor executes the environmental image recognition module to recognize the effective safe space image from the depth images and determine the displacement coordinate parameter according to the effective safe space image.

13. The automatic control system according to claim 12, wherein after the robotic arm moves according to the displacement coordinate parameter, the robotic arm control module returns a current end coordinate parameter of the robotic arm.
14. The automatic control system according to claim 13, wherein the processor executes a trajectory recording module to record a displacement direction into the effective safe space image according to the current end coordinate parameter and a previous end coordinate parameter, and the processor executes a feedback module to calculate a distance parameter between the current end coordinate parameter and the target position and trains the environmental image recognition module according to the distance parameter.

15. The automatic control system according to claim 12, wherein the processor analyzes the depth images through the environmental image recognition module to generate a plurality of spatial weight matrix data corresponding to the depth images, and the processor performs, through the environmental image recognition module, the neural network operation according to the spatial weight matrix data corresponding to the depth images to determine the effective safe space image.

16. The automatic control system according to claim 15, wherein the processor executes an output module to further output, according to the analysis and computation results of the environmental image recognition module, movable-direction information and movable-position information for the robotic arm to the robotic arm control module.
TW109140432A 2020-11-19 2020-11-19 Automatic control method of mechanical arm and automatic control system TWI801775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109140432A TWI801775B (en) Automatic control method of mechanical arm and automatic control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109140432A TWI801775B (en) Automatic control method of mechanical arm and automatic control system

Publications (2)

Publication Number Publication Date
TW202220623A TW202220623A (en) 2022-06-01
TWI801775B true TWI801775B (en) 2023-05-11

Family

ID=83062227

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109140432A TWI801775B (en) Automatic control method of mechanical arm and automatic control system

Country Status (1)

Country Link
TW (1) TWI801775B (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200147804A1 (en) * 2018-11-08 2020-05-14 Kabushiki Kaisha Toshiba Operating system, control device, and computer program product

Also Published As

Publication number Publication date
TW202220623A (en) 2022-06-01
