TWI435280B - Gesture recognition interaction system - Google Patents

Gesture recognition interaction system

Info

Publication number
TWI435280B
TWI435280B
Authority
TW
Taiwan
Prior art keywords
image
gesture
processor
virtual object
gesture recognition
Prior art date
Application number
TW98137391A
Other languages
Chinese (zh)
Other versions
TW201117109A (en)
Original Assignee
Univ Ishou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Ishou filed Critical Univ Ishou
Priority to TW98137391A priority Critical patent/TWI435280B/en
Publication of TW201117109A publication Critical patent/TW201117109A/en
Application granted granted Critical
Publication of TWI435280B publication Critical patent/TWI435280B/en

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Description

Gesture recognition interaction system

The present invention relates to an interactive system, and more particularly to a gesture recognition interaction system.

Current hand-controlled interactive systems require data to be input through various peripheral hardware devices, such as gloves, keyboards, mice, and touch screens. Their drawbacks are that careless use can damage these peripherals and thereby increase hardware costs, and that the interaction feels less natural and less in keeping with human habits.

Accordingly, an object of the present invention is to provide a gesture recognition interaction system that avoids the above drawbacks and makes the interaction feel more natural.

The gesture recognition interaction system comprises: a database storing a plurality of virtual object images and including a gesture correspondence table, the gesture correspondence table storing a control instruction corresponding to each gesture action; an image capture device that continuously photographs a user's gesture action to obtain a dynamic image sequence having a plurality of captured images; a processor that performs image separation on each captured image of the image sequence to obtain each hand region image of the image sequence, performs feature extraction on each hand region image to obtain feature information, and compares the feature information against the gesture correspondence table to find the corresponding control instruction; and a screen for displaying each hand region image in real time and presenting a virtual object image on the hand region image; wherein the processor further operates the virtual object image on the screen according to the control instruction.

The above and other technical contents, features, and effects of the present invention will be clearly presented in the following detailed description of a preferred embodiment with reference to the drawings.

As shown in FIG. 1, the preferred embodiment of the gesture recognition interaction system of the present invention recognizes a user's gesture actions as input signals for interaction, and comprises: an image capture device 2 (such as a webcam or video camera), a database 5, a processor 4, and a screen 3.

The processor 4 is electrically connected to the image capture device 2, the database 5, and the screen 3.

The database 5 stores a plurality of three-dimensional virtual object images and an animation corresponding to each object image, and includes a gesture correspondence table for storing a control instruction corresponding to each gesture action.
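The gesture correspondence table can be sketched as a simple mapping from feature information to control instructions. The patent does not specify its layout; the tuple-shaped feature keys and instruction names below are illustrative assumptions only.

```python
# Minimal sketch of the gesture correspondence table (hypothetical keys
# and instruction names). Each entry maps extracted feature information
# -- here reduced to a (region, motion) label -- to a control instruction.

GESTURE_TABLE = {
    ("palm", "right"): "move",          # palm sweeping right -> move object
    ("palm", "left"): "move",
    ("fingers", "apart"): "zoom_in",    # fingers spreading -> zoom in
    ("fingers", "together"): "zoom_out",
    ("finger", "circle"): "rotate",
}

def lookup_instruction(feature):
    """Return the control instruction for a feature tuple, or None if untrained."""
    return GESTURE_TABLE.get(feature)
```

A lookup with an untrained feature simply returns `None`, which an application-mode loop would treat as "no operation".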

The gesture recognition interaction system executes a gesture recognition interaction method according to a training mode and an application mode, respectively. In the training mode, the method includes the following steps, as shown in FIG. 2:

<Training Mode>

Step 11: The image capture device 2 photographs a predetermined range of the region where a hand is located to obtain a captured image having a hand region and a background region.

Step 12: The processor 4 performs image separation on the captured image to obtain a hand region image.

In this embodiment, the detailed image separation method is as follows: the processor 4 filters out the background region according to a boundary value related to the grayscale values to obtain the hand region image. The invention is not limited thereto; other image separation techniques may also be used.
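The thresholding step can be sketched as follows. This is a minimal pure-Python illustration over a nested list standing in for a grayscale image; the concrete threshold value of 128 is an assumption, since the patent only states that a grayscale-related boundary value is used, and a practical system would apply an image library such as OpenCV instead.

```python
def separate_hand(gray_image, threshold=128):
    """Filter out the background of a grayscale image by thresholding.

    Pixels at or above `threshold` are kept as hand-region pixels;
    background pixels are zeroed. `gray_image` is a list of rows of
    0-255 intensity values. The default threshold is an assumption.
    """
    return [[p if p >= threshold else 0 for p in row] for row in gray_image]
```

For example, `separate_hand([[200, 50], [130, 127]])` keeps the two bright pixels and zeroes the two dark ones.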

Step 13: The screen 3 is controlled by the processor 4 to display the hand region image, and further presents at least one virtual object image on the hand region image.

Step 14: The user is informed, via the screen 3, to operate the virtual object image with a dynamic gesture action.

The operations include selecting, holding, moving, zooming, rotating, and triggering an associated animation.

Step 15: The image capture device 2 continuously photographs the gesture action to obtain a dynamic image sequence having a plurality of captured images, and stores the image sequence in the database 5.

Step 16: The processor 4 performs image separation on each captured image of the image sequence to obtain each hand region image of the image sequence, and displays each hand region image on the screen in real time.

Step 17: The processor 4 performs feature extraction on the image sequence to obtain feature information, which includes finger movement trajectories and a palm movement trajectory.

The detailed feature extraction method is as follows: the processor 4 performs an edge operation on each hand region image to obtain an image with a hand contour, distinguishes finger positions from the palm position based on the fingers presenting long, narrow contours and the palm presenting a compact, blob-like contour, and then derives a movement trajectory from the frame-to-frame changes of the finger and palm positions across the image sequence. The invention is not limited thereto; other feature extraction techniques may also be used.
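The narrow-versus-blob heuristic and the trajectory derivation above can be sketched like this. The bounding-box aspect-ratio cutoff of 2 is an assumed value, and the contour-point representation is a simplification of a real edge-detected image; the patent does not specify either.

```python
def classify_region(points):
    """Classify a set of (x, y) contour points as 'finger' or 'palm'.

    Sketch of the patent's heuristic: fingers present long, narrow
    outlines and the palm a compact blob, so an elongated bounding box
    (aspect ratio above 2, an assumed cutoff) is labelled a finger.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    ratio = max(width, height) / min(width, height)
    return "finger" if ratio > 2 else "palm"

def trajectory(centroids):
    """Derive a movement track from the frame-to-frame displacement
    of a region's centre position across the image sequence."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(centroids, centroids[1:])]
```

A tall 1x10 point cluster classifies as a finger, a square 5x5 cluster as a palm, and `trajectory` turns successive centre positions into displacement vectors.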

Step 18: The processor 4 establishes a control instruction according to the feature information, and stores the control instruction and the feature information in a gesture correspondence table.
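Step 18 amounts to binding the newly extracted feature information to an instruction and recording the pair in the table. A minimal sketch, with an illustrative feature key and instruction name:

```python
def establish_instruction(gesture_table, feature_info, instruction):
    """Step 18 sketch: record the control instruction established for
    the given feature information in the gesture correspondence table.
    The tuple-shaped key and instruction string are assumptions."""
    gesture_table[feature_info] = instruction
    return gesture_table

# After training, the table can resolve this gesture in application mode.
trained = establish_instruction({}, ("palm", "right"), "move")
```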

<Application Mode>

As shown in FIG. 3, when the gesture recognition interaction method runs in the application mode, steps 11 to 17 are the same as in the training mode, step 18 is omitted, and the following steps are added:

Step 21: The processor 4 compares the feature information against the gesture correspondence table to find the corresponding control instruction.

Step 22: The processor 4 operates the virtual object image on the screen 3 according to the control instruction.
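Steps 21 and 22 together can be sketched as a lookup followed by an operation on the virtual object's on-screen state. The dictionary object representation and the concrete instruction names (`move_right`, `zoom_in`, `rotate`) are illustrative assumptions, not part of the patent.

```python
def apply_instruction(obj, instruction):
    """Step 22 sketch: apply a control instruction to a virtual object
    state dict with hypothetical 'x', 'scale', and 'angle' fields."""
    if instruction == "move_right":
        obj["x"] += 10                       # translate the object
    elif instruction == "zoom_in":
        obj["scale"] *= 1.5                  # scale the object up
    elif instruction == "rotate":
        obj["angle"] = (obj["angle"] + 90) % 360  # quarter-turn rotation
    return obj

def interact(gesture_table, feature_info, obj):
    """Step 21: look up the instruction; step 22: operate on the object.
    An untrained feature leaves the object unchanged."""
    instruction = gesture_table.get(feature_info)
    if instruction is not None:
        obj = apply_instruction(obj, instruction)
    return obj
```

For example, with a trained entry `{("palm", "right"): "move_right"}`, a rightward palm sweep moves the object, while an unrecognized feature is a no-op.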

In summary, the preferred embodiment of the present invention has the following advantages: by recognizing gesture actions in place of peripheral input devices, the system simplifies the human-computer interaction interface, makes the interaction feel more natural, and avoids damage to peripherals during use, thereby reducing hardware costs.

The above description covers only a preferred embodiment of the present invention and shall not limit the scope of the invention; all simple equivalent changes and modifications made according to the claims and the description of the invention remain within the scope covered by this patent.

2 ... Image capture device
3 ... Screen
4 ... Processor
5 ... Database
11 ... Photographing step
12 ... Image separation step
13 ... Image display step
14 ... Operation notification step
15 ... Dynamic photographing step
16 ... Image separation step
17 ... Feature extraction step
18 ... Control instruction establishment step
21 ... Control instruction lookup step
22 ... Operation step

FIG. 1 is a schematic view of a preferred embodiment of the present invention;

FIG. 2 is a flow chart of the training mode executed by the preferred embodiment; and

FIG. 3 is a flow chart of the application mode executed by the preferred embodiment.


Claims (5)

1. A gesture recognition interaction system, comprising: a database storing a plurality of three-dimensional virtual object images and an animation corresponding to each object image, and including a gesture correspondence table for storing a control instruction corresponding to each gesture action; an image capture device that continuously photographs a user's gesture action to obtain a dynamic image sequence having a plurality of captured images; a processor that performs image separation on each captured image of the image sequence to obtain each hand region image of the image sequence, performs feature extraction on each hand region image of the image sequence to obtain feature information, and compares the feature information against the gesture correspondence table to find the corresponding control instruction; and a screen for displaying each hand region image in real time and presenting a three-dimensional virtual object image on the hand region image; wherein the processor further operates the three-dimensional virtual object image on the screen according to the control instruction.

2. The gesture recognition interaction system of claim 1, wherein the processor first executes a training mode in which the processor establishes the control instruction according to the feature information and stores the control instruction and the feature information in the gesture correspondence table.

3. The gesture recognition interaction system of claim 1, wherein the animation corresponding to a three-dimensional virtual object image stored in the database is displayed on the screen when the control instruction triggers the associated animation.

4. The gesture recognition interaction system of claim 1, wherein the operations include selecting, holding, moving, zooming, and rotating.

5. The gesture recognition interaction system of claim 1, wherein the processor first executes a training mode in which the screen is controlled by the processor to display at least one virtual object image and the user is informed to operate the three-dimensional virtual object image with a dynamic gesture action.
TW98137391A 2009-11-04 2009-11-04 Gesture recognition interaction system TWI435280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98137391A TWI435280B (en) 2009-11-04 2009-11-04 Gesture recognition interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98137391A TWI435280B (en) 2009-11-04 2009-11-04 Gesture recognition interaction system

Publications (2)

Publication Number Publication Date
TW201117109A TW201117109A (en) 2011-05-16
TWI435280B true TWI435280B (en) 2014-04-21

Family

ID=44935134

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98137391A TWI435280B (en) 2009-11-04 2009-11-04 Gesture recognition interaction system

Country Status (1)

Country Link
TW (1) TWI435280B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164995A (en) * 2013-04-03 2013-06-19 湖南第一师范学院 Children somatic sense interactive learning system and method

Also Published As

Publication number Publication date
TW201117109A (en) 2011-05-16

Similar Documents

Publication Publication Date Title
US11048333B2 (en) System and method for close-range movement tracking
US9910498B2 (en) System and method for close-range movement tracking
US10394334B2 (en) Gesture-based control system
US20190250714A1 (en) Systems and methods for triggering actions based on touch-free gesture detection
CN107665042B (en) Enhanced virtual touchpad and touchscreen
KR101292467B1 (en) Virtual controller for visual displays
US8659548B2 (en) Enhanced camera-based input
EP2631739B1 (en) Contactless gesture-based control method and apparatus
US8933970B2 (en) Controlling an augmented reality object
AU2010366331B2 (en) User interface, apparatus and method for gesture recognition
JP2013037675A5 (en)
WO2015026569A1 (en) System and method for creating an interacting with a surface display
TWI435280B (en) Gesture recognition interaction system
CN111901518A (en) Display method and device and electronic equipment
CN109144235B (en) Man-machine interaction method and system based on head-hand cooperative action
Feng et al. An HCI paradigm fusing flexible object selection and AOM-based animation
CN110727345B (en) Method and system for realizing man-machine interaction through finger intersection movement
AlAgha et al. An Exploratory Study of 3D Interaction Techniques in Augmented Reality Environments.
Sen et al. Novel Human Machine Interface via Robust Hand Gesture Recognition System using Channel Pruned YOLOv5s Model
TWI499937B (en) Remote control method and remote control device using gestures and fingers
CN112667087A (en) Unity-based gesture recognition operation interaction realization method and system
WO2019152013A1 (en) Operating user interfaces