TW201617789A - Control apparatus, system and method - Google Patents

Control apparatus, system and method

Info

Publication number
TW201617789A
Authority
TW
Taiwan
Prior art keywords
target image
image
data
control
module
Application number
TW103143445A
Other languages
Chinese (zh)
Other versions
TWI570594B
Inventor
鍾永國
魏毅
Original Assignee
寧波弘訊科技股份有限公司
弘訊科技股份有限公司
Application filed by 寧波弘訊科技股份有限公司 and 弘訊科技股份有限公司
Publication of TW201617789A
Application granted
Publication of TWI570594B

Landscapes

  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A control apparatus includes a detecting module that captures a dynamic image as a target image and displays the target image on a webpage, a processing module that receives the target image and converts it into a data instruction, and a transmission module that receives the data instruction and transmits it via a network to a machine. The control apparatus thus enables a user to control the machine remotely and greatly improves the convenience, timeliness and accuracy of remote control. The present invention further provides a control system and a control method.

Description

Control device, system and method

The present invention relates to a control system, and more particularly to a control device, system and method that can be used to control the movement of mechanical equipment.

A "manipulator" or "robot" is an automated machine that executes programmed instructions to mimic the motions of a human hand or limb in order to grasp and carry objects or operate tools. Taking the manipulator as an example, it executes fixed or non-fixed program control commands according to parameters such as coordinates and speed to complete the various required actions. Therefore, before the manipulator executes a control command, positioning control of parameters such as coordinates and speed must first be performed.

As shown in Fig. 1, positioning control of the manipulator is mainly performed by the operator 10 in manual mode through the operation interface of the control machine 1: repeatedly adjusting the advance and retreat along the X and Y axes to adjust the manipulator's position, setting and recording coordinate parameters, checking the manipulator's position, and comparing the coordinate parameters against the manipulator's position, so as to obtain reasonable coordinate parameters.

However, in this manual mode, even a skilled operator working with a user-friendly interface must still operate physical devices (e.g., button 11, a keyboard or a touch screen) and follow cumbersome steps (such as repeatedly pressing button 11 for fine adjustment) to input and calibrate the positioning operation; moreover, having different operators take turns pressing the same buttons may also raise hygiene concerns.

Furthermore, the operator must be physically present at the site where the manipulator is located to perform positioning control; if the operator cannot visit the site for any reason, subsequent work may be delayed.

Recently, somatosensory (motion-sensing) control techniques adopting a client-server separation model have been published. However, data transmission under this model is inefficient, and data packets are easily lost, resulting in inaccurate data.

In addition, known somatosensory control techniques require additional sensing devices (e.g., wearable accessories or patches) besides the operation interface; such devices increase cost on the one hand and, on the other, do not suit most people's daily wearing habits.

Therefore, the purpose of developing the present invention is to solve the various problems of current automated machine control and to provide a precise and convenient control technique.

In view of the problems of the prior art, the present invention provides a control device comprising a detection module, a processing module and a transmission module. The detection module captures a dynamic image as a target image and presents the target image in a webpage; the processing module receives the target image and converts it into a data command; and the transmission module receives the data command from the processing module and transmits it via a network to a machine device.

The present invention further provides a control method comprising: capturing a dynamic image as a target image and presenting the target image in a webpage; converting the target image into a data command; and transmitting the data command via a network to a machine device.

In one embodiment, an initialization module is further included for logging into the webpage to obtain initialization data for controlling the machine device. For example, the initialization data for controlling the machine device is obtained before the dynamic image is captured.

In one embodiment, the control device further includes a display unit for displaying the webpage. For example, after the data command is transmitted via the network to the machine device, motion information of the machine device corresponding to the data command is received from the network, so that a motion image of the machine device is displayed on the display unit.

In one embodiment, the modes by which the detection module captures the dynamic image as the target image include a skin color detection mode, a convex hull detection mode and/or a convexity defect detection mode.

The present invention further provides a control system comprising a control device and a machine device. The control device includes: a detection module for capturing a dynamic image as a target image and presenting the target image in a webpage; a processing module for receiving the target image and converting it into a data command; and a transmission module for receiving the data command from the processing module. The machine device receives the data command transmitted from the transmission module via a network, executes the data command, and generates motion information corresponding to the data command.

In one embodiment, the transmission module can further receive the motion information via the network.

In one embodiment, the machine device includes a main controller and a manipulator; the main controller executes the data command to control the manipulator and transmits the motion information via the network to the control device.

The present invention implements a client-based control technique in which somatosensory detection and data computation are performed on the client. When the control device and method of the present invention are applied to controlling a manipulator, somatosensory detection can be performed without operating buttons or additional sensing devices, and no data packets need to be uploaded to a server for processing. This not only solves the problems of cumbersome manipulator positioning, packet loss and time-consuming transmission; separate windows can also synchronously display the somatosensory detection, the control data and live images for the operator to compare directly, greatly improving the cleanliness, convenience, immediacy and accuracy of manipulator control.

1‧‧‧control machine
10‧‧‧operator
100‧‧‧network
11‧‧‧button
2, 61‧‧‧control device
20, 62‧‧‧machine device
201, 202, 203, 204, 701, 702, 703, 704, 705‧‧‧steps
21, 611‧‧‧initialization module
22, 612‧‧‧detection module
23, 613‧‧‧processing module
24, 614‧‧‧transmission module
25, 616‧‧‧display unit
30, 40, 50‧‧‧dynamic images
301, 401, 501, 82‧‧‧target images
302, 402, 502‧‧‧contrast images
303, 403‧‧‧surrounding images
6‧‧‧control system
615‧‧‧camera module
617‧‧‧storage unit
620‧‧‧main control machine
621‧‧‧main controller
622‧‧‧manipulator
623‧‧‧monitoring module
81‧‧‧simulated image
811‧‧‧center point
821‧‧‧predetermined movement position
83‧‧‧motion image
L1‧‧‧convex hull
P1‧‧‧center point
P2‧‧‧concave point

Fig. 1 is a perspective view of a conventional control machine; Fig. 2 is a perspective view of the control device of the present invention; Fig. 3 is a functional block diagram of the control device of the present invention; Fig. 4 is a flow chart of the steps of a first embodiment of the control method of the present invention; Fig. 5A is a schematic diagram of capturing a target image in the skin color detection mode; Fig. 5B is a schematic diagram of capturing a target image with the skin color detection mode combined with the convex hull detection mode; Fig. 5C is a schematic diagram of capturing a target image with the skin color detection mode, the convex hull detection mode and the convexity defect detection mode combined; Figs. 6A and 6B are block diagrams of different embodiments of the control system of the present invention; Fig. 7 is a flow chart of the steps of a second embodiment of the control method of the present invention; and Figs. 8A to 8C are schematic diagrams of the process by which the motion image of the manipulator is superimposed on the operator's target image.

The following describes embodiments of the present invention by way of specific examples; those skilled in the art will readily understand other advantages and effects of the present invention from the disclosure of this specification. The present invention may also be implemented or applied through other different specific embodiments, and the details in this specification may be modified and changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention.

Unless otherwise indicated herein, the singular forms "a" and "the" as used in the specification and the appended claims include plural referents, and the term "or" includes the meaning of "and/or".

In the present invention, the term "node-webkit" refers to a runtime environment that integrates node.js and WebKit: WebKit provides the document object model (DOM), node.js provides localization services, and users can write native applications in HTML5, CSS3 and Javascript. The term "node.js" refers to a platform built on Chrome's Javascript runtime; its event-driven I/O server-side Javascript environment lets users write scalable network programs and is suited to real-time, data-intensive distributed applications. The term "WebRTC" (Web Real-Time Communication) is an application programming interface (API) that enables web browsers to conduct real-time voice or video conversations; through Javascript it provides web-based real-time communication, including audio/video capture, encoding/decoding, network transmission and display, and it supports Windows, Linux, Mac and Android across platforms. The term "canvas" is part of HTML5; it uses Javascript to draw images on a web page, can control every pixel within the canvas area, and provides functions for drawing paths, shapes and characters and for adding images. The term "UDP" (User Datagram Protocol) is a connectionless transport-layer protocol of the Open Systems Interconnection reference model (OSI model).
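Since the patent names UDP as the transport for data commands, the following sketch illustrates its connectionless, one-datagram-per-command character using Python's standard socket library. The command format (`b"X:+10"`), host and port are invented for the example and are not the patent's actual data format.

```python
import socket

def send_command(command: bytes, host: str, port: int) -> None:
    """Send one data command as a single UDP datagram (no handshake, no retransmission)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(command, (host, port))

def receive_command(port: int, timeout: float = 2.0) -> bytes:
    """Receiver side, e.g. the machine's main controller listening for commands."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", port))
        sock.settimeout(timeout)
        data, _addr = sock.recvfrom(4096)
        return data
```

Because UDP itself gives no delivery guarantee, keeping the computation on the client and sending only small, frequent commands, as the patent proposes, limits how much a lost datagram can cost.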

The present invention provides a control device and method based on remote operation by the user: data need not be transmitted to a server for computation, which reduces both the probability of packet loss and the packet transmission time, and the device can be used across platforms. "Remote" here means not at the site where the machine is located; for example, the operator and the machine may be in different rooms, different regions or different countries.

As shown in Figs. 2, 3 and 4, the control device 2 includes a display unit 25, an initialization module 21, a detection module 22, a processing module 23 and a transmission module 24.

Remote control by the control device 2, once triggered by an event, includes the following steps. In step 201, the operator 10 uses the initialization module 21 to log into the webpage of the present invention (displayed on the display unit 25) via the network 100, so as to obtain initialization data for controlling a machine device 20. In step 202, the detection module 22 captures a dynamic image of the operator 10 as the target image and parses the successive pixel data corresponding to the target image. In step 203, the processing module 23 receives the initialization data and the successive pixel data from the initialization module 21 and the detection module 22, and converts the successive pixel data into a data command according to the initialization data. In step 204, the transmission module 24 receives the data command from the processing module 23 and transmits it via the network 100 to the machine device 20. The order of steps 201 and 202 may be set as required, and they may also be performed simultaneously.
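The detect-process-transmit steps above can be sketched as three small client-side functions. The function names, the screen-to-machine scale factor and the command dictionary are illustrative assumptions, not the patent's actual interfaces.

```python
def hand_center(feature_points):
    """Step 202 (detection): the centre of the detected feature points,
    analogous to centre point P1 in the convex hull mode."""
    xs, ys = zip(*feature_points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def to_data_command(prev_center, new_center, scale=0.1):
    """Step 203 (processing): convert on-screen hand motion into a machine
    displacement command; `scale` is the configurable screen-to-machine ratio."""
    dx = (new_center[0] - prev_center[0]) * scale
    dy = (new_center[1] - prev_center[1]) * scale
    return {"dx": round(dx, 3), "dy": round(dy, 3)}

def transmit(command, send):
    """Step 204 (transmission): hand the data command to a network transport
    (in the patent, a send over network 100)."""
    send(command)
```

With `scale=0.1`, a 10-unit motion on screen becomes a 1-unit machine displacement, matching the configurable conversion described later in the specification.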

In this embodiment, the machine device 20 is a main control machine 620 for an injection molding machine (provided with the main controller 621 of Fig. 6A) and a manipulator 622, and the operator 10 can use the control device 2 of the present invention to capture the operator's hand gestures for remote positioning control of the manipulator 622.

Specifically, the control method of the present invention runs on a web-browser architecture and can be used across platforms: any communication electronic device with a display unit 25 and web-browsing capability, such as a personal computer, tablet or smartphone, suffices. The remotely operated, user-side control device 2 can therefore be realized without additional sensing devices and without uploading data packets to a server for processing.

Furthermore, when the operator 10 uses the control device 2 to begin remote control, the design of the initialization module 21 allows the initialization data to be obtained by logging into the webpage of the present invention; the initialization data is a cross-platform application based on the node-webkit architecture.

In addition, the detection module 22 receives the dynamic image of the operator 10 via an external or built-in camera module and extracts the operator's dynamic image from the background image as the target image; the target image may include, but is not limited to, the operator's gestures, limb movements and facial expressions. In the present invention, one or more recognition techniques, such as the skin color detection mode, the convex hull detection mode and the convexity defect detection mode, may be applied to separate the background from the dynamic image of the operator 10 and extract the gesture image.

Fig. 5A is a schematic diagram of capturing a target image in the skin color detection mode. As shown in Fig. 5A, the display unit 25 displaying the webpage shows the dynamic image 30 of the operator 10 and the surroundings. The detection module 22 detects the pixel region of the dynamic image 30 whose color approximates skin color; after confirmation, it extracts that region as the target image 301 and parses the successive pixel data corresponding to the target image 301. According to the successive pixel data, the display unit 25 can also synchronously display, alongside the dynamic image 30, a contrast image 302 corresponding to the target image 301.

In detail, the skin color detection mode includes an image grayscale algorithm, a skin color probability histogram, threshold filtering and RGB staining. The image grayscale algorithm takes the maximum of the R, G and B components of each pixel in the dynamic image as that pixel's grayscale value, converting color pixel data into grayscale pixel data. The skin color probability histogram describes the proportion of each color in the whole image without regard to where each color is located; color histograms are particularly suitable for describing images that are difficult to segment automatically. Threshold filtering is mainly used to segment pixels according to the histogram. RGB staining changes a pixel's color according to its RGB values.
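The grayscale, histogram and threshold steps just described can be shown on a toy list of RGB pixels. The bin count and the skin-tone band are assumptions for illustration; a practical skin model would be learned from data rather than hard-coded.

```python
def gray_max(pixel):
    """Image grayscaling as the patent describes it: grayscale value = max(R, G, B)."""
    r, g, b = pixel
    return max(r, g, b)

def gray_histogram(pixels, bins=8):
    """Proportion of pixels per intensity bin, ignoring each pixel's position."""
    hist = [0] * bins
    for p in pixels:
        hist[min(gray_max(p) * bins // 256, bins - 1)] += 1
    return [count / len(pixels) for count in hist]

def threshold_filter(pixels, lo, hi):
    """Keep only pixels whose grayscale value falls inside the [lo, hi] band."""
    return [p for p in pixels if lo <= gray_max(p) <= hi]
```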

In this embodiment, since the dynamic image 30 contains surrounding images 303 of various conditions (shown blank in the figure, but in practice the environment in which the operator 10 is located), a contrast-image 302 region is placed at the lower right of the webpage to present the extracted target image 301 on its own.

Furthermore, the contrast image 302 can be used to confirm the accuracy with which the skin color detection mode extracts the target image 301. Since objects of approximately skin color may appear in the surrounding image 303, the contrast image 302 is needed to confirm that the target image 301 has been extracted correctly.

Fig. 5B is a schematic diagram of capturing a target image with the skin color detection mode combined with the convex hull detection mode. As shown in Fig. 5B, the detection module 22 can detect the target image 401 in the dynamic image 40 by combining the two modes. After determining, via image grayscaling and the skin color probability histogram, the pixel region of the dynamic image 40 whose color is close to skin color, the detection module 22 further selects a plurality of feature points from the dynamic image 40, feeds their pixel data into a line-drawing function to draw a convex hull L1 covering the target contour, computes the pixel coordinates of the center of those feature points and feeds them into a rectangle-drawing function to produce a 3x3 center point P1, and extracts the pixel region of the hull L1 as the target image 401; the hull L1 and the center point P1 move with the target image 401. The display unit 25 synchronously displays the contrast image 402 corresponding to the target image 401.

In detail, besides image grayscaling and the skin color probability histogram, the convex hull detection mode includes the Codebook image segmentation algorithm, the IPAN contour algorithm, the Camshift algorithm, and rectangle- and line-drawing functions. The basic idea of the Codebook image segmentation algorithm is to obtain a time-series model for each pixel; such a model handles temporal fluctuation well. It builds a CodeBook (CB) structure for every pixel of the current image, each composed of multiple CodeWords (CW). The IPAN contour algorithm describes a point p by a triangle abp. The basic idea of the Camshift algorithm is to apply the MeanShift operation to every frame of the video, using the result of the previous frame (i.e., the center and size of the search window) as the initial search window of the MeanShift run on the next frame, and iterating in this way.

This embodiment separates the target image 401 and its surrounding image 403 in the dynamic image 40 with the Codebook image segmentation algorithm; describes each point of the convex hull polygon with the IPAN contour algorithm; tracks the moving points of the target image 401 with the Camshift algorithm; and displays the target points with the rectangle- and line-drawing functions.
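The hull-and-center computation can be illustrated with a standard convex hull routine. Andrew's monotone chain is used here as a stand-in; the patent does not specify which hull algorithm its drawing functions rely on.

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Positive if o->a->b turns counter-clockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def center_point(points):
    """Centre point P1 taken as the mean of the feature-point coordinates."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```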

Fig. 5C is a schematic diagram of capturing a target image with the skin color detection mode, the convex hull detection mode and the convexity defect detection mode combined. As shown in Fig. 5C, the convexity defect detection mode marks a plurality of concave points P2 of the convex hull L1 in the dynamic image 50, feeds the pixel coordinates of those concave points P2 into a rectangle-drawing function to produce concave pixel regions, and then extracts the concave pixel region as the target image 501.

In detail, besides the image grayscale algorithm, the skin color probability histogram, the CodeBook image segmentation algorithm, the IPAN contour algorithm and the Camshift algorithm, the convexity defect detection mode uses Delaunay triangulation to mark the concave points P2 at the angles of the convex depressions, feeds the pixel coordinates of those points into a rectangle-drawing function to produce 6x6 square concave points P2, and extracts the target image 501 from the changes of the hull L1, the center point P1 and the concave points P2 between consecutive frames. The display unit 25 can synchronously display the contrast image 502 corresponding to the target image 501.
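A simplified geometric notion of a convexity defect is a contour point lying deeper inside the hull than a depth threshold; the valleys between extended fingers behave this way. This is only a sketch: the patent's mode additionally uses Delaunay triangulation and frame-to-frame changes, which are omitted here, and the `min_depth` value is an assumption.

```python
def _dist_point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def convexity_defects(contour, hull, min_depth=0.5):
    """Concave points P2: contour points deeper than min_depth inside the hull."""
    hull_edges = list(zip(hull, hull[1:] + hull[:1]))
    defects = []
    for p in contour:
        if p in hull:
            continue
        depth = min(_dist_point_to_segment(p, a, b) for a, b in hull_edges)
        if depth >= min_depth:
            defects.append((p, depth))
    return defects
```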

It is worth mentioning that the operator may use any one (Fig. 5A), two (Fig. 5B) or all three (Fig. 5C) of the above detection modes to capture the target image 301, 401, 501; the modes do not interfere with one another. Moreover, the control device 2 of the present invention can use separate windows to synchronously display the dynamic images 30, 40, 50 of the operator 10 and the surroundings together with the contrast images 302, 402, 502 corresponding to the operator's movements, so that the operator 10 can easily confirm the detection effect.

In this embodiment, after the target images 301, 401, 501 are extracted from the dynamic images 30, 40, 50, the detection module 22 parses the successive pixel data corresponding to the target images 301, 401, 501 following the steps of Fig. 4. The processing module 23 then receives the initialization data and the successive pixel data from the initialization module 21 and the detection module 22 respectively, and converts the successive pixel data into a data command for controlling the machine device 20 according to the initialization data. Finally, the transmission module 24 receives the data command from the processing module 23 and transmits it via the network 100 to the machine device 20 for remote control of the machine device 20.

According to the above description, in practice, for example, when the operator 10 moves a finger 10 centimeters in front of the display unit 25, the detection module 22 passes this movement distance to the processing module 23; after computation by the processing module 23, the transmission module 24 transmits a data command via the network 100 to the machine device 20 so that the manipulator 622 of the machine device 20 is displaced by 1 centimeter (cm) or 1 millimeter (mm) (the conversion result can be set as required), thereby adjusting the position of the manipulator of the machine device 20 and its positioning along the X, Y and Z axes.

The present invention further provides a control system 6 which, as shown in Fig. 6A, includes a control device 61 and a machine device 62.

The control device 61 includes a detection module 612, a processing module 613, a transmission module 614 and a display unit 616. The detection module 612 captures a dynamic image of the operator as the target image (e.g., the operator's gesture) and parses the successive pixel data corresponding to the target image. The processing module 613 receives the initialization data and the successive pixel data, and converts the successive pixel data into a data command according to the initialization data. The transmission module 614 transmits the data command via the network 100 to the machine device 62.

The machine device 62 includes a main controller 621 and a manipulator 622. The main controller 621 logs into the webpage of the present invention via the network 100, obtains an initialization data package based on the node-webkit architecture, and converts it into corresponding commands. After the manipulator 622 receives the corresponding commands from the main controller 621 and completes initialization, it returns an initialization-complete signal to the main controller 621, which relays the signal to the webpage of the present invention to notify the operator of the control device 61 that operation of the machine device 62 may begin.

It should be noted that after initialization of the machine device 62 is complete, the webpage of the present invention actively sends a confirmation command to the main controller 621 at regular intervals to ensure the connection between the control device 61 and the machine device 62.
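The periodic confirmation can be sketched as a simple heartbeat loop. The interval, the message content and the stop condition are all assumptions, since the patent only states that a confirmation command is sent at intervals.

```python
import time

def heartbeat_loop(send_confirm, is_connected, interval=1.0, max_beats=None):
    """Send a confirmation command repeatedly while the link should stay up.

    send_confirm: callable that transmits one confirmation message.
    is_connected: callable returning False once the session ends.
    max_beats:    optional cap on iterations, mainly useful for testing.
    """
    beats = 0
    while is_connected() and (max_beats is None or beats < max_beats):
        send_confirm(b"CONFIRM")
        beats += 1
        time.sleep(interval)
    return beats
```

A real implementation would also treat a missing acknowledgment from the main controller 621 as a broken connection, but the patent does not detail that side of the protocol.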

In this embodiment, the control device 61 further includes an initialization module 611, a camera module 615 and a storage unit 617: the initialization module 611 obtains the initialization data for controlling the machine device 62 from the network 100, the camera module 615 transmits the operator's dynamic image to the detection module 612, and the storage unit 617 stores the initialization data.

As shown in FIG. 6B, the machine device 62 further includes a monitoring module 623 coupled to the main controller 621 for obtaining the motion information of the robot 622 in response to the data command transmitted by the control device 61. The monitoring module 623 may be a camera, a photoelectric sensing element, or an electromagnetic sensing element, but is not limited thereto.

After the transmission module 614 of the control device 61 transmits the data command to the machine device 62 via the network 100, the main controller 621 of the machine device 62 executes the data command to control the robot 622. The monitoring module 623 obtains the motion information of the robot 622 corresponding to the data command and transmits the motion information to the main controller 621, which returns the motion information of the robot 622 to the control device 61 via the network 100. Upon receiving the motion information of the robot 622, the control device 61 converts the motion information into a motion image of the robot 622 corresponding to the data command, and the display unit 616 displays that motion image.

FIG. 7 is a flowchart of the control method of the control system 6 of the present invention. The operator logs into the webpage and begins to remotely control a machine device. The remote control method includes: step 701: obtaining the initialization data for controlling the machine device; step 702: capturing a dynamic image of the operator as the target image; step 703: converting the target image into a data command according to the initialization data; step 704: transmitting the data command to the machine device via the network and receiving the motion information of the machine device corresponding to the data command (see above); and step 705: displaying the target image and the motion image of the machine device corresponding to the data command. The order of steps 701 and 702 may be chosen as required, and the two steps may also be performed simultaneously.

In this embodiment, step 701 includes determining whether the control device 61 and the machine device 62 have already stored the initialization data; if not, the initialization data is obtained by logging into the webpage of the present invention, so that the control device 61 and the machine device 62 complete the initialization step. The initialization data is a cross-platform application based on the node-webkit framework.

Step 702 includes: the control device 61 captures the target image from the dynamic image in a skin color detection mode, a convex hull detection mode, and/or a convexity defect detection mode.
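The patent names these detection modes but does not disclose their implementations. As an illustrative sketch only (the RGB skin rule below is a commonly cited heuristic, not the patent's method; a production system built on a vision library such as OpenCV would use tuned thresholds and contour analysis), the two core steps could look like this: a skin-color mask, followed by a convex hull of the detected points.

```python
import numpy as np

def skin_mask(rgb):
    """Rough skin-colour classifier on an RGB image of shape (H, W, 3),
    using a widely quoted rule: R > 95, G > 40, B > 20, R > G, R > B,
    and a max-min channel spread above 15."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (spread > 15) & (r > g) & (r > b)

def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Demo: a tiny frame with one skin-toned patch.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (200, 120, 90)            # skin-toned pixels
mask = skin_mask(img)
hull = convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)])  # (1, 1) is interior
```

Convexity defects (the gaps between the hand contour and its hull, used to count fingers) would then be found by measuring each contour point's distance from the hull edges.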

Step 703 includes: the control device 61 parses the continuous pixel data corresponding to the target image, and converts the continuous pixel data into UDP packets. Specifically, each UDP packet contains head and tail markers, a basic protocol, and the data command content. Classified by the function of the data commands, the UDP packets include the following types: real-time data, motor parameter setting data, alarms, curve data, and motion commands. A motion-command UDP packet contains the data commands by which the main controller drives the robot to move along the X and Y axes.
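The exact byte layout of these packets is not disclosed in the patent. The following is a hypothetical encoding (the marker values, field order, and `MOVE` payload are all assumptions) of a datagram carrying head/tail markers, a type field standing in for the "basic protocol," and the command content:

```python
import struct

# Hypothetical constants -- the patent only states that each datagram has
# head/tail markers, a basic protocol, and the data command content.
HEAD, TAIL = 0xAA55, 0x55AA
TYPE_REQUEST, TYPE_FEEDBACK = 0x01, 0x02   # "request" / "feedback" headers

def pack_command(pkt_type, payload):
    # Layout: head marker (2B), type (1B), payload length (2B),
    # payload, tail marker (2B); all big-endian.
    return (struct.pack(">HBH", HEAD, pkt_type, len(payload))
            + payload
            + struct.pack(">H", TAIL))

def unpack_command(datagram):
    head, pkt_type, length = struct.unpack_from(">HBH", datagram)
    payload = datagram[5:5 + length]
    (tail,) = struct.unpack_from(">H", datagram, 5 + length)
    if head != HEAD or tail != TAIL:
        raise ValueError("malformed packet")
    return pkt_type, payload

# A motion command driving the robot along the X and Y axes.
move = pack_command(TYPE_REQUEST, b"MOVE X+10 Y-5")
```

The type field could equally distinguish the real-time data, motor-parameter, alarm, and curve-data packet classes listed above.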

Step 704 includes: the control device 61 transmits, via the network 100, a UDP packet whose header is marked "request" to the machine device 62, wherein the "request" UDP packet contains the data command for controlling the robot 622. After the main controller 621 of the machine device 62 executes the data command to control the robot 622, the main controller 621 transmits a UDP packet whose header is marked "feedback" to the control device 61, wherein the "feedback" UDP packet contains the motion information of the robot 622 corresponding to the data command.
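This request/feedback exchange can be sketched with plain UDP sockets. The `REQ:`/`FB:` prefixes and the one-shot controller below are illustrative stand-ins for the patent's "request" and "feedback" packet headers, not its actual protocol:

```python
import socket
import threading

def machine_controller(sock):
    """Minimal stand-in for the main controller: serves one 'request'
    datagram and answers with a 'feedback' datagram."""
    data, addr = sock.recvfrom(1024)
    if data.startswith(b"REQ:"):
        command = data[4:]
        # ...here the controller would drive the robot, then collect
        # its motion information to report back...
        sock.sendto(b"FB:done " + command, addr)

# Machine side: bind an ephemeral local port and serve one request.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=machine_controller, args=(server,), daemon=True).start()

# Control-device side: send a request, wait for the feedback.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"REQ:MOVE X+10", server.getsockname())
reply, _ = client.recvfrom(1024)
client.close()
server.close()
```

Since UDP itself gives no delivery guarantee, the periodic confirmation command described earlier is what lets each side notice a lost link.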

Step 705 includes: the control device 61 converts the motion information into a motion image of the robot 622 corresponding to the data command, and superimposes the motion image on the target image by aligning the center point of the robot 622 with the center point of the operator's target image.
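Aligning the two images by center point amounts to translating one image by the difference of the two centroids before blending. A minimal numpy sketch, with binary masks standing in for the two images (the helper names are illustrative, not from the patent):

```python
import numpy as np

def center_of(mask):
    """Centre (row, col) of the non-zero pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def overlay_by_center(target, motion):
    """Shift `motion` so its centre matches `target`'s, then superimpose.
    Both are same-shape binary masks; pixels shifted out of frame drop."""
    dy = int(round(center_of(target)[0] - center_of(motion)[0]))
    dx = int(round(center_of(target)[1] - center_of(motion)[1]))
    h, w = motion.shape
    shifted = np.zeros_like(motion)
    src = motion[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    shifted[max(0, dy):max(0, dy) + src.shape[0],
            max(0, dx):max(0, dx) + src.shape[1]] = src
    return target | shifted

# Demo: the operator's hand sits bottom-right, the robot image top-left.
target = np.zeros((8, 8), dtype=bool); target[5:7, 5:7] = True
motion = np.zeros((8, 8), dtype=bool); motion[0:2, 0:2] = True
both = overlay_by_center(target, motion)   # the two regions now coincide
```

With real frames, the final blend would typically be an alpha composite rather than a logical OR, but the centering arithmetic is the same.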

The display unit 616 may selectively display the operator's target image, the motion image of the robot, or the superimposed image of the motion image and the target image, so as to give the operator an on-site sense of control. Specifically, FIGS. 8A to 8C are schematic diagrams of the operator's target image presented on the webpage.

As shown in FIG. 8A, during remote control, the image of the robot 622 is transmitted to the display unit 616 by a camera, so that the display unit 616 presents a simulated image 81 of the robot 622. The simulated image 81 may be set with a center point 811, so that the operator's target image 82 can be aligned with the center point 811. As shown in FIG. 8B, the target image 82 is superimposed on the simulated image 81 to form a synchronously moving motion image 83. Accordingly, the alignment enabled by the target image 82 and the simulated image 81 lets the operator clearly perceive the synchronization between the robot 622 and the operator's hand. Moreover, coordinates or alignment status (not shown) may be displayed on the webpage presented by the display unit 616 as required, so as to provide the operator with alignment information.

In addition, the machine device 62 may also transmit the position information of a predetermined movement of the robot 622 (e.g., the relative coordinate data of an object to be grasped) to the control device 61. As shown in FIG. 8C, the display unit 616 may display on the webpage the relative distance between a predetermined movement position 821 and the operator's target image 82, so as to give the operator a more intuitive and accurate sense of control. For example, if the robot 622 needs to move 5 cm or move to a certain position, the distance between the predetermined movement position 821 and the center point 811 represents the distance or position the robot 622 needs to move. The operator therefore only needs to move his or her hand (the motion image 83 on the webpage moves accordingly) so that the center point 811 moves to overlap the predetermined movement position 821, at which point the robot 622 completes the 5 cm movement or the movement to the predetermined position.
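The remaining-distance readout described here reduces to the Euclidean distance between the manipulator's center point and the predetermined position; a small sketch with assumed coordinates and an assumed arrival tolerance:

```python
import math

def remaining_motion(center, goal, tolerance=0.5):
    """Distance between the manipulator's centre point and the
    predetermined position, plus a flag for 'overlap' within tolerance.
    The tolerance value is an assumption for illustration."""
    d = math.hypot(goal[0] - center[0], goal[1] - center[1])
    return d, d <= tolerance

# Hypothetical numbers: the centre must still travel 5 cm along X.
d, arrived = remaining_motion((0.0, 0.0), (5.0, 0.0))
```

On the webpage this distance would shrink as the operator's hand moves, reaching the arrival condition when the center point 811 overlaps the predetermined movement position 821.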

In summary, the present invention realizes a control technique based on the user's remote manipulation, in which somatosensory detection and data computation are performed on the user end. When the control device, system, and method of the present invention are applied to remotely controlling a robot, somatosensory detection can be performed without operating any buttons and without any additional sensing device, and no data packets need to be uploaded to a server for computation. This not only solves the problems of cumbersome robot positioning, data packet loss, and time-consuming transmission of current systems, but the separate windows can also simultaneously display the somatosensory detection results, control data, and real-time images for the operator to compare directly, thereby greatly improving the cleanliness, convenience, timeliness, and accuracy of robot control.

The above embodiments are illustrative only and are not intended to limit the present invention. Any person skilled in the art may modify and alter the above embodiments without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be as set forth in the appended claims.

2‧‧‧control device

20‧‧‧machine device

21‧‧‧initialization module

22‧‧‧detection module

23‧‧‧processing module

24‧‧‧transmission module

100‧‧‧network

Claims (11)

1. A control apparatus, comprising: a detection module for capturing a dynamic image as a target image so as to present the target image in a webpage; a processing module for receiving the target image and converting the target image into a data command; and a transmission module for receiving the data command from the processing module so as to transmit the data command to a machine device via a network.
2. The control apparatus of claim 1, further comprising an initialization module for logging into the webpage to obtain initialization data for controlling the machine device.
3. The control apparatus of claim 1, further comprising a display unit for displaying the webpage.
4. The control apparatus of claim 1, wherein the modes in which the detection module captures the dynamic image as the target image comprise a skin color detection mode, a convex hull detection mode, and/or a convexity defect detection mode.
5. A control system, comprising: a control apparatus comprising: a detection module for capturing a dynamic image as a target image so as to present the target image in a webpage; a processing module for receiving the target image and converting the target image into a data command; and a transmission module for receiving the data command from the processing module; and a machine device for receiving the data command transmitted by the transmission module via a network, such that the machine device executes the data command to generate motion information corresponding to the data command.
6. The control system of claim 5, wherein the transmission module is further capable of receiving the motion information via the network.
7. The control system of claim 5, wherein the machine device comprises a main controller and a robot, the main controller being configured to execute the data command to control the robot and to transmit the motion information to the control apparatus via the network.
8. A control method, comprising: capturing a dynamic image as a target image so as to present the target image in a webpage; converting the target image into a data command; and transmitting the data command to a machine device via a network.
9. The control method of claim 8, further comprising obtaining initialization data for controlling the machine device before capturing the dynamic image.
10. The control method of claim 8, further comprising, after transmitting the data command to the machine device via the network, receiving from the network the motion information of the machine device corresponding to the data command, so as to display a motion image of the machine device on a display unit.
11. The control method of claim 8, wherein the modes of capturing the dynamic image as the target image comprise a skin color detection mode, a convex hull detection mode, and/or a convexity defect detection mode.
TW103143445A 2014-11-04 2014-12-12 Control apparatus, system and method TWI570594B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410613220.1A CN105538307B (en) 2014-11-04 2014-11-04 Control device, system and method

Publications (2)

Publication Number Publication Date
TW201617789A true TW201617789A (en) 2016-05-16
TWI570594B TWI570594B (en) 2017-02-11

Family

ID=55818098

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103143445A TWI570594B (en) 2014-11-04 2014-12-12 Control apparatus, system and method

Country Status (2)

Country Link
CN (1) CN105538307B (en)
TW (1) TWI570594B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112682025B (en) * 2020-08-12 2023-05-26 山西天地煤机装备有限公司 Drilling and anchoring equipment control method, device and system
CN114415829B (en) * 2021-12-29 2022-08-19 广州市影擎电子科技有限公司 Cross-platform equipment universal interface implementation method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6501981B1 (en) * 1999-03-16 2002-12-31 Accuray, Inc. Apparatus and method for compensating for respiratory and patient motions during treatment
TWI395145B (en) * 2009-02-02 2013-05-01 Ind Tech Res Inst Hand gesture recognition system and method
EP3320875A1 (en) * 2009-11-13 2018-05-16 Intuitive Surgical Operations Inc. Apparatus for hand gesture control in a minimally invasive surgical system
TW201310339A (en) * 2011-08-25 2013-03-01 Hon Hai Prec Ind Co Ltd System and method for controlling a robot
TWI454246B (en) * 2011-09-30 2014-10-01 Mackay Memorial Hospital Immediate monitoring of the target location of the radiotherapy system
TWM438671U (en) * 2012-05-23 2012-10-01 Tlj Intertech Inc Hand gesture manipulation electronic apparatus control system
CN103006332B (en) * 2012-12-27 2015-05-27 广东圣洋信息科技实业有限公司 Scalpel tracking method and device and digital stereoscopic microscope system
CN203092551U (en) * 2013-03-15 2013-07-31 西北师范大学 Domestic service robot based on Kinect and FPGA (Field-programmable Gate Array)
CN103302668B (en) * 2013-05-22 2016-03-16 东南大学 Based on control system and the method thereof of the Space teleoperation robot of Kinect
TWM486114U (en) * 2014-04-01 2014-09-11 Univ Minghsin Sci & Tech Automatic care device

Also Published As

Publication number Publication date
TWI570594B (en) 2017-02-11
CN105538307A (en) 2016-05-04
CN105538307B (en) 2018-08-07
