TW201137671A - Vision based hand posture recognition method and system thereof - Google Patents

Vision based hand posture recognition method and system thereof

Info

Publication number
TW201137671A
TW201137671A TW99114011A
Authority
TW
Taiwan
Prior art keywords
gesture
image
contour
hand
gesture recognition
Prior art date
Application number
TW99114011A
Other languages
Chinese (zh)
Inventor
Chung-Cheng Lou
Jing-Wei Wang
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to TW99114011A priority Critical patent/TW201137671A/en
Publication of TW201137671A publication Critical patent/TW201137671A/en

Abstract

A vision-based hand posture recognition method and a system thereof are disclosed. The method comprises the following steps: receiving an image frame; extracting a contoured hand image from the image frame; calculating a gravity center of the contoured hand image; obtaining multiple contour points on a contour of the contoured hand image; calculating distances between the gravity center and the multiple contour points; and recognizing a hand posture according to a first characteristic function of the distances. In an embodiment, the finger number and hand direction of the hand posture can be determined according to the number and location of at least one peak of the first characteristic function.

Description

201137671

VI. Description of the Invention:

[Technical Field]
[0001] The present invention relates to a gesture recognition method, and more particularly to a method that computes a characteristic function of a hand contour from a visual image in order to recognize hand postures.

[Prior Art]
[0002] For today's rapidly developing entertainment systems, and game systems in particular, making the interface between the user and the computer friendlier is an increasingly important issue. Executing commands by having the computer analyze the user's motions has become one of the most promising interaction methods. However, conventional solutions often require a sensor to be mounted on the user's fingers; this improves the accuracy of hand detection but also burdens the user. A better approach is to treat the user's hand itself as the command-issuing device: image processing analyzes how the hand moves, and the result is used to control the computer's operating system or its peripheral devices. Conventional image analysis methods of this kind, however, are too complex and insufficiently stable.

[0003] For example, U.S. Patent No. 6,002,808 discloses a method for quickly analyzing gestures to control a computer, using image-vector computations to determine the position, orientation, and size of the user's hand. The gesture is then determined by image processing; for instance, a hole in the confirmed hand image indicates that the thumb and index finger touch, forming an "OK" gesture. The patent also discloses using gestures to control an on-screen display (OSD) interface. This prior art is computationally heavy and prone to misjudgment when the user changes motion, so its stability is poor.

[0004] As another example, U.S. Patent No. 7,129,927 discloses a gesture recognition system in which a plurality of markers are placed on the user's hand and a sensor detects the positions of the markers. The markers are divided into a first marker group, used as a reference, and a second marker group; the sensor detects the motion of the second group relative to the first to recognize the user's gesture.

[0005] This prior art requires the user to wear markers and cannot be operated bare-handed. How to let users interact with an interface through bare-hand gestures or motion trajectories is therefore a problem awaiting a solution.

[Summary of the Invention]
[0006] In view of the above problems of the prior art, one object of the present invention is to provide a three-dimensional gesture recognition system, and a vision-based gesture recognition system and method, that reduce the computational complexity of vision-based recognition and thereby achieve real-time operation.

[0007] According to one aspect of the invention, a vision-based gesture recognition method comprises the following steps: receiving an image frame; extracting a hand contour image from the image frame; calculating a gravity center of the hand contour image; obtaining a plurality of contour points on the hand contour of the hand contour image; calculating a plurality of distance values between the gravity center and the contour points; and recognizing a gesture according to a first characteristic function of the distance values.

[0008] The step of recognizing the gesture may further comprise: setting a reference point; computing a first line segment connecting the gravity center and the reference point; computing a plurality of second line segments, each connecting the gravity center and one of the contour points; computing the angles between the first line segment and the second line segments; and defining the first characteristic function as a function of the angles and the distance values.

[0009] The step of recognizing the gesture may further comprise providing a database that records second characteristic functions of a plurality of preset gestures, computing cost values between the first characteristic function and the second characteristic functions, and selecting one of the preset gestures as the recognized gesture according to the cost values.

[0010] The step of recognizing the gesture may further comprise determining whether the first characteristic function has at least one peak and, if so, recognizing the gesture according to the number and positions of the peaks.

[0011] If the first characteristic function has no peak, the gesture is judged to be a fist gesture.

[0012] The number of fingers of the gesture may be judged from the number of peaks.

[0013] The hand direction of the gesture may be judged from the positions of the peaks.

[0014] According to another aspect, a vision-based gesture recognition system comprises an image capture unit, an image processing unit, a data processing unit, and a gesture recognition unit. The image capture unit receives an image frame. The image processing unit extracts a hand contour image from the image frame and calculates its gravity center. The data processing unit obtains a plurality of contour points on the hand contour and calculates the distance values between the gravity center and the contour points. The gesture recognition unit recognizes a gesture according to a first characteristic function of the distance values.

[0015] The data processing unit may further calculate the angles between a first line segment and a plurality of second line segments, the first characteristic function being defined as a function of the angles and the distance values, where the first line segment connects the gravity center and a reference point and each second line segment connects the gravity center and one contour point.

[0016] The system may further comprise a database recording second characteristic functions of a plurality of preset gestures; the gesture recognition unit computes cost values between the first characteristic function and the second characteristic functions and, according to the cost values, selects one of the preset gestures as the recognized gesture.

[0017] The gesture recognition unit may determine whether the first characteristic function has at least one peak and recognize the gesture according to the number and positions of the peaks.

[0018] When the first characteristic function has no peak, the gesture recognition unit judges the gesture to be a fist gesture.

[0019] The gesture recognition unit may judge the number of fingers of the gesture from the number of peaks and the hand direction of the gesture from the positions of the peaks.

[0020] According to a further aspect, a three-dimensional gesture recognition system comprises a first image capture unit, a second image capture unit, an image processing unit, a data processing unit, and a gesture recognition unit. The first and second image capture units receive a first image frame and a second image frame, respectively. The image processing unit extracts a first hand contour image from the first image frame and calculates its first gravity center, and extracts a second hand contour image from the second image frame and calculates its second gravity center. The data processing unit obtains first contour points on the contour of the first hand contour image and calculates first distance values between the first gravity center and the first contour points, and likewise obtains second contour points and second distance values for the second hand contour image. The gesture recognition unit recognizes a first gesture according to a first characteristic function of the first distance values and a second gesture according to a second characteristic function of the second distance values, and then judges a three-dimensional gesture from the first and second gestures.

[0021] The gesture recognition unit may recognize the first gesture according to the number and positions of the peaks of the first characteristic function, and the second gesture according to the number and positions of the peaks of the second characteristic function.

[Embodiment]
[0022] Please refer to FIG. 1, a flowchart of the vision-based gesture recognition method of the present invention. The embodiment comprises the following steps. In step 10, an image frame is received. In step 11, it is determined whether the frame contains a hand image, such as hand image 21 shown in FIG. 2. If not, step 10 is repeated; if so, a hand contour image is extracted from the frame in step 12.
In practice, edge detection can be applied to hand image 21 to obtain the hand contour 22 shown in FIG. 2, and the image region enclosed by hand contour 22 is taken as the hand contour image described above.

[0023] Next, in step 13, a gravity center of the hand contour image is calculated. In practice, a palm orientation computation can be performed to obtain the gravity center of hand contour image 23. For example, a moment function I(x, y) can be chosen according to the common two-dimensional shape of a palm, and the first-order and second-order moments M00, M10, M01, M11, M20, and M02 computed from it as follows:

M00 = Σx Σy I(x, y)
M10 = Σx Σy x·I(x, y)
M01 = Σx Σy y·I(x, y)
M11 = Σx Σy x·y·I(x, y)
M20 = Σx Σy x²·I(x, y)
M02 = Σx Σy y²·I(x, y)

[0026] The gravity center (xc, yc) is then calculated from M00, M10, and M01:

xc = M10 / M00
yc = M01 / M00

[0027] The gravity center (xc, yc) is shown as gravity center 24 in FIG. 3. From xc, yc, M00, M11, M20, and M02, the length L1 and width L2 of the hand rectangle are calculated via the second-order central moments a = M20/M00 − xc², b = 2(M11/M00 − xc·yc), and c = M02/M00 − yc²:

L1 = √((a + c + √(b² + (a − c)²)) / 2)
L2 = √((a + c − √(b² + (a − c)²)) / 2)

[0029] Next, in step 14, a plurality of contour points of the hand contour of the hand contour image are obtained, such as the contour points 26 arranged along hand contour 22 in FIG. 3. In step 15, the distance between the gravity center and each contour point is calculated, such as distance d in FIG. 3. In step 16, a gesture is recognized according to a first characteristic function of the distance values. In practice, the first characteristic function can be formed from the distance values and the angle defined by three points: the gravity center, the contour point, and a reference point. As shown in FIG. 3, this angle is formed between a first line segment 271 connecting gravity center 24 and reference point 25 and a second line segment 272 connecting the gravity center and each contour point 26, such as angle θ in FIG. 3.

[0030] Please refer to FIG. 4, which shows the waveform of the characteristic function of distance value versus angle.
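The centroid and characteristic-function steps described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `hand_centroid` and `characteristic_function` are invented names, the binary mask and contour arrays are assumed to come from an earlier segmentation and edge-detection stage, and the distances are normalized to remove image-size effects as suggested for the waveform of FIG. 4.

```python
import numpy as np

def hand_centroid(mask):
    """Gravity center (xc, yc) of a binary hand mask via first-order moments,
    with I(x, y) = 1 inside the hand and 0 elsewhere."""
    ys, xs = np.nonzero(mask)      # pixel coordinates inside the contour
    m00 = xs.size                  # M00: total mass
    xc = xs.sum() / m00            # xc = M10 / M00
    yc = ys.sum() / m00            # yc = M01 / M00
    return xc, yc

def characteristic_function(contour, xc, yc, ref=(0.0, 0.0)):
    """Distance d and angle theta (measured against the segment from the
    gravity center to the reference point) for every contour point,
    returned in angular order; distances are normalized for scale."""
    dx = contour[:, 0] - xc
    dy = contour[:, 1] - yc
    d = np.hypot(dx, dy)
    d = d / d.max()                                   # normalize for image size
    theta_ref = np.arctan2(ref[1] - yc, ref[0] - xc)  # first line segment
    theta = (np.arctan2(dy, dx) - theta_ref) % (2 * np.pi)
    order = np.argsort(theta)                         # sweep 0..360 degrees
    return theta[order], d[order]
```

Plotting `d` against `theta` reproduces the kind of waveform described for FIG. 4 through FIG. 6.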
In the waveform, the horizontal axis is the angle θ and the vertical axis is the distance value d; the distance value and angle of each contour point are plotted in order from 0 to 360 degrees to form the waveform of the characteristic function. Normalized distance values can be used in the waveform to eliminate the influence of different image sizes.

[0031] Because the area of a finger is smaller than that of the palm, the gravity center of the hand contour image usually lies in the central region of the palm. When the user makes a gesture with an extended finger, a relatively long distance forms between the fingertip and the gravity center. Based on this, whether the hand contour image is an extended-finger gesture can be judged from whether distinct steep peaks appear in the waveform of the characteristic function, and the number of fingers in the hand contour image can be judged from the number of peaks. In practice, an angular-width value and a distance threshold can be preset, and the waveform scanned for local maxima within the angular width whose distance variation exceeds the threshold. Where such a maximum exists, a peak is confirmed, as in the waveforms of FIG. 4 and FIG. 6; conversely, a local maximum whose distance variation is smaller than the threshold, as in FIG. 5, does not count as a peak.

[0032] Furthermore, the pointing direction of the hand contour image can be judged from the position of the reference point and the angular positions of the peaks. For example, if the reference point is set at the right edge of the frame and a peak's angular position falls between 140 and 220 degrees, the gesture points west. The waveform in FIG. 4 has one peak, its angular position lies between 150 and 200 degrees, and the reference point is at the right side of the frame, so the hand contour image represents a gesture with one finger pointing west. The waveform in FIG. 5 has no peak, so the hand contour image represents a fist gesture with no extended finger. The waveform in FIG. 6 has five peaks, and the reference point is at the bottom of the frame, so the hand contour image represents a gesture with five fingers pointing north.

[0033] Please refer to FIG. 7, a block diagram of an embodiment of the vision-based gesture recognition system of the present invention. The gesture recognition system comprises an image capture unit 41, an image processing unit 42, a data processing unit 43, a gesture recognition unit 44, and a database 45. Image capture unit 41 receives an image frame 411. Image processing unit 42 extracts a hand contour image 421 from image frame 411 and calculates a gravity center 422 of hand contour image 421. Data processing unit 43 obtains a plurality of contour points 431 of the hand contour 423 of hand contour image 421 and calculates a plurality of distance values 432 between gravity center 422 and contour points 431. Image capture unit 41 is preferably a camera or a webcam.
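The peak test and direction rule described in the preceding paragraphs can be sketched as below. The 15-degree window and 0.6 threshold are illustrative stand-ins for the preset angular-width and distance-threshold values, which the description leaves unspecified, and only the "west" range from the example above is encoded; averaging the peak angles is likewise one plausible reading, not the patent's stated rule.

```python
import numpy as np

def find_peaks(theta_deg, d, window=15.0, thresh=0.6):
    """Indices where d(theta) is a local maximum within +/- `window` degrees
    and rises above `thresh` (assumed example values)."""
    peaks = []
    for i in range(len(d)):
        # circular angular distance to every sample
        near = np.abs((theta_deg - theta_deg[i] + 180.0) % 360.0 - 180.0) <= window
        if d[i] >= d[near].max() and d[i] > thresh:
            if not peaks or abs(theta_deg[i] - theta_deg[peaks[-1]]) > window:
                peaks.append(i)
    return peaks

def classify(theta_deg, d):
    """No peak -> fist; otherwise the peak count gives the finger number and
    the peak positions give the pointing direction."""
    peaks = find_peaks(theta_deg, d)
    if not peaks:
        return "fist", 0, None
    fingers = len(peaks)
    mean_angle = float(np.mean(theta_deg[peaks]))
    # Reference point on the right edge: peaks around 140-220 degrees
    # indicate a hand pointing west, per the example in the description.
    direction = "west" if 140.0 <= mean_angle <= 220.0 else "other"
    return "fingers", fingers, direction
```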
[0034] Data processing unit 43 can further calculate the angles 433 formed between gravity center 422, contour points 431, and a reference point, such as angle θ shown in FIG. 3.

[0035] Gesture recognition unit 44 recognizes a gesture 441 according to a first characteristic function 442 of the distance values 432. Database 45 stores second characteristic functions 452 of a plurality of preset gestures. Gesture recognition unit 44 computes cost values 443 between first characteristic function 442 and the second characteristic functions 452 and, according to the cost values 443, selects one of the preset gestures as gesture 441. For example, if first characteristic function 442 and the second characteristic functions 452 are functions of the distance values 432 versus the angles 433, they can be plotted as waveforms such as those of FIG. 4, FIG. 5, and FIG. 6. Gesture recognition unit 44 computes the difference between the waveform of first characteristic function 442 and that of each second characteristic function 452; this difference is the cost value 443. Gesture recognition unit 44 can then select, as gesture 441, the preset gesture whose second characteristic function 452 differs least from first characteristic function 442. In addition, gesture recognition unit 44 can judge the gesture 441 represented by hand contour image 421 from the number and positions of the peaks of the waveform of first characteristic function 442.
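One way to realize the cost-value comparison between a query characteristic function and the stored second characteristic functions is sketched below. The fixed resampling grid, the mean-absolute-difference cost, and the function names are assumptions for illustration; the description only requires some waveform difference to serve as the cost value.

```python
import numpy as np

def resample(theta_deg, d, bins=72):
    """Resample d(theta) onto a fixed angular grid so that two waveforms
    can be compared point by point (theta wraps at 360 degrees)."""
    grid = np.arange(bins) * (360.0 / bins)
    return np.interp(grid, theta_deg, d, period=360.0)

def match_gesture(query, database, bins=72):
    """Pick the preset gesture whose stored waveform has the smallest cost,
    here taken as the mean absolute difference between resampled waveforms."""
    q = resample(*query, bins)
    best_name, best_cost = None, np.inf
    for name, (theta, d) in database.items():
        cost = np.mean(np.abs(q - resample(theta, d, bins)))
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name, best_cost
```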
For example, whether hand contour image 421 is an extended-finger gesture can be judged from whether the waveform of first characteristic function 442 has a peak; the number of peaks can be used to judge the number of fingers of gesture 441, and the reference-point position together with the peak positions can be used to judge the orientation of gesture 441. The criteria based on peak number and peak position are as disclosed in the preceding paragraphs and are not repeated here.

[0036] Please refer to FIG. 8, a block diagram of an embodiment of the three-dimensional gesture recognition system of the present invention. This embodiment comprises a first image capture unit 501, a second image capture unit 502, an image processing unit 52, a data processing unit 53, and a gesture recognition unit 54. First image capture unit 501 and second image capture unit 502 receive a first image frame 511 and a second image frame 512, respectively. Image processing unit 52 extracts a first hand contour image 5211 from first image frame 511 and calculates its first gravity center 5221, and extracts a second hand contour image 5212 from second image frame 512 and calculates its second gravity center 5222.
Data processing unit 53 obtains a plurality of first contour points 5311 on the contour 5231 of first hand contour image 5211 and calculates the first distance values 5321 and first angles 5331 between first gravity center 5221 and the first contour points, and obtains a plurality of second contour points 5312 on the contour 5232 of second hand contour image 5212 and calculates the second distance values 5322 and second angles 5332 between second gravity center 5222 and the second contour points.

[0037] Gesture recognition unit 54 recognizes a first gesture 541 according to a first characteristic function of the first distance values 5321 and first angles 5331, recognizes a second gesture 542 according to a second characteristic function of the second distance values 5322 and second angles 5332, and then judges a three-dimensional gesture 543 from first gesture 541 and second gesture 542. Gesture recognition unit 54 preferably recognizes first gesture 541 and second gesture 542 according to the number and positions of the peaks of the respective characteristic functions.

[0038] The above description is illustrative only and not restrictive. Any equivalent modification or alteration that does not depart from the spirit and scope of the present invention shall be included in the scope of the appended claims.

[Brief Description of the Drawings]
[0039] FIG. 1 is a flowchart of the vision-based gesture recognition method of the present invention; FIG. 2 is a schematic view of a hand image of the present invention; FIG. 3 is a schematic view of a hand contour image of the present invention; FIG. 4 is a first example waveform of the characteristic function of distance value versus angle for the contour points of the present invention;
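The two-camera flow of FIG. 8 can be sketched as a thin wrapper that runs the same single-view pipeline on each frame and pairs the results. How the pair of two-dimensional gestures maps onto a three-dimensional gesture is left open by the description, so `recognize_2d` and the returned pairing are placeholders.

```python
def recognize_3d(frame1, frame2, recognize_2d):
    """Apply the single-view pipeline (contour extraction -> gravity center ->
    characteristic function -> peak analysis) to each camera's frame, then
    combine the two per-view results into one 3-D judgment (here simply
    the pair itself)."""
    gesture1 = recognize_2d(frame1)   # first gesture 541, from frame 511
    gesture2 = recognize_2d(frame2)   # second gesture 542, from frame 512
    return {"first": gesture1, "second": gesture2,
            "gesture_3d": (gesture1, gesture2)}
```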
FIG. 5 is a second example waveform of the characteristic function of distance value versus angle for the contour points of the present invention; FIG. 6 is a third example waveform of the characteristic function of distance value versus angle for the contour points of the present invention; FIG. 7 is a block diagram of an embodiment of the vision-based gesture recognition system of the present invention; and FIG. 8 is a block diagram of an embodiment of the three-dimensional gesture recognition system of the present invention.

[Description of Reference Numerals]
[0040]
10-16: steps of the method flow
21: hand image
22: hand contour
23: image region
24, 281, 291, 422: gravity center
5221: first gravity center
5222: second gravity center
25, 282, 292: reference point
26, 431: contour points
5311: first contour points
5312: second contour points
271: first line segment
272: second line segment
41: image capture unit
501: first image capture unit
502: second image capture unit
411: image frame
511: first image frame
512: second image frame
42, 52: image processing unit
421: hand contour image
5211: first hand contour image
5212: second hand contour image
423, 5231, 5232: hand contour
43, 53: data processing unit
432: distance values
5321: first distance values
5322: second distance values
433: angle
5331: first angle
5332: second angle
44, 54: gesture recognition unit
441: gesture
541: first gesture
542: second gesture
442: first characteristic function
443: cost value
45: database
452: second characteristic function
543: three-dimensional gesture
L1: hand rectangle length
L2: hand rectangle width

Claims (1)

201137671 VII. Claims:

1. A vision-based gesture recognition method, comprising the steps of: receiving an image frame; extracting a hand contour image from the image frame; calculating a center of gravity point of the hand contour image; obtaining a plurality of contour points on a hand contour of the hand contour image; calculating a plurality of distance values between the center of gravity point and the plurality of contour points; and recognizing a gesture according to a first characteristic function of the plurality of distance values.

2. The gesture recognition method of claim 1, wherein the step of recognizing the gesture further comprises: setting a reference point; calculating a first line segment connecting the center of gravity point and the reference point; calculating a plurality of second line segments connecting the center of gravity point and each of the plurality of contour points; calculating a plurality of included angles between the first line segment and the plurality of second line segments; and defining the function formed by the plurality of included angles and the plurality of distance values as the first characteristic function.

3. The gesture recognition method of claim 2, wherein the step of recognizing the gesture further comprises: providing a database recording a plurality of second characteristic functions of a plurality of preset gestures; calculating a plurality of cost values between the first characteristic function and the plurality of second characteristic functions; and selecting one of the plurality of preset gestures as the gesture according to the plurality of cost values.

4. The gesture recognition method of claim 2, wherein the step of recognizing the gesture further comprises: determining whether the first characteristic function has at least one peak; and if the first characteristic function has the at least one peak, recognizing the gesture according to the number and the positions of the at least one peak.

5. The gesture recognition method of claim 4, wherein the step of recognizing the gesture further comprises: determining that the gesture is a fist gesture if the first characteristic function has no peak.

6. The gesture recognition method of claim 4, wherein the step of recognizing the gesture further comprises: determining the number of fingers of the gesture according to the number of the at least one peak.

7. The gesture recognition method of claim 4, wherein the step of recognizing the gesture further comprises: determining the hand direction of the gesture according to the positions of the at least one peak.

8. A vision-based gesture recognition system, comprising: an image capturing unit that receives an image frame; an image processing unit that obtains a hand contour image from the image frame and calculates a center of gravity point of the hand contour image; a data processing unit that obtains a plurality of contour points on a hand contour of the hand contour image and calculates a plurality of distance values between the center of gravity point and the plurality of contour points; and a gesture recognition unit that recognizes a gesture according to a first characteristic function of the plurality of distance values.

9. The gesture recognition system of claim 8, wherein the data processing unit further calculates a plurality of included angles between a first line segment and a plurality of second line segments, and defines the first characteristic function as a function of the plurality of included angles and the plurality of distance values, the first line segment connecting the center of gravity point and a reference point, and each of the plurality of second line segments connecting the center of gravity point and one of the plurality of contour points.

10. The gesture recognition system of claim 9, further comprising a database recording a plurality of second characteristic functions of a plurality of preset gestures, wherein the gesture recognition unit calculates a plurality of cost values between the first characteristic function and the plurality of second characteristic functions and, according to the plurality of cost values, selects one of the plurality of preset gestures as the gesture.

11. The gesture recognition system of claim 9, wherein the gesture recognition unit determines whether the first characteristic function has at least one peak, and recognizes the gesture according to the number and the positions of the at least one peak.

12. The gesture recognition system of claim 11, wherein the gesture recognition unit determines that the gesture is a fist gesture when the first characteristic function has no peak.

13. The gesture recognition system of claim 11, wherein the gesture recognition unit determines the number of fingers of the gesture according to the number of the at least one peak, and determines the hand direction of the gesture according to the positions of the at least one peak.

14. A three-dimensional gesture recognition system, comprising: a first image capturing unit that receives a first image frame; a second image capturing unit that receives a second image frame; an image processing unit that obtains a first hand contour image from the first image frame and calculates a first center of gravity point of the first hand contour image, and obtains a second hand contour image from the second image frame and calculates a second center of gravity point of the second hand contour image; a data processing unit that obtains a plurality of first contour points on the contour of the first hand contour image and calculates a plurality of first distance values between the first center of gravity point and the plurality of first contour points, and obtains a plurality of second contour points on the contour of the second hand contour image and calculates a plurality of second distance values between the second center of gravity point and the plurality of second contour points; and a gesture recognition unit that recognizes a first gesture according to a first characteristic function of the plurality of first distance values, recognizes a second gesture according to a second characteristic function of the plurality of second distance values, and determines a three-dimensional gesture according to the first gesture and the second gesture.

15. The three-dimensional gesture recognition system of claim 14, wherein the gesture recognition unit recognizes the first gesture according to the number and the positions of at least one peak of the first characteristic function, and recognizes the second gesture according to the number and the positions of at least one peak of the second characteristic function.
TW99114011A 2010-04-30 2010-04-30 Vision based hand posture recognition method and system thereof TW201137671A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99114011A TW201137671A (en) 2010-04-30 2010-04-30 Vision based hand posture recognition method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99114011A TW201137671A (en) 2010-04-30 2010-04-30 Vision based hand posture recognition method and system thereof

Publications (1)

Publication Number Publication Date
TW201137671A true TW201137671A (en) 2011-11-01

Family

ID=46759614

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99114011A TW201137671A (en) 2010-04-30 2010-04-30 Vision based hand posture recognition method and system thereof

Country Status (1)

Country Link
TW (1) TW201137671A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105528061A (en) * 2014-09-30 2016-04-27 财团法人成大研究发展基金会 Gesture recognition system
TWI563818B (en) * 2013-05-24 2016-12-21 Univ Central Taiwan Sci & Tech Three dimension contactless controllable glasses-like cell phone


Similar Documents

Publication Publication Date Title
US20190250714A1 (en) Systems and methods for triggering actions based on touch-free gesture detection
CN107077197B (en) 3D visualization map
US9529527B2 (en) Information processing apparatus and control method, and recording medium
US9069386B2 (en) Gesture recognition device, method, program, and computer-readable medium upon which program is stored
US20130335324A1 (en) Computer vision based two hand control of content
TWI546711B (en) Method and computing device for determining angular contact geometry
TWI431538B (en) Image based motion gesture recognition method and system thereof
US20110268365A1 (en) 3d hand posture recognition system and vision based hand posture recognition method thereof
JP5604279B2 (en) Gesture recognition apparatus, method, program, and computer-readable medium storing the program
US9916043B2 (en) Information processing apparatus for recognizing user operation based on an image
JP6004716B2 (en) Information processing apparatus, control method therefor, and computer program
JP6349800B2 (en) Gesture recognition device and method for controlling gesture recognition device
WO2011146070A1 (en) System and method for reporting data in a computer vision system
US9218060B2 (en) Virtual mouse driving apparatus and virtual mouse simulation method
Choi et al. Bare-hand-based augmented reality interface on mobile phone
CN104978018B (en) Touch system and touch method
CN109101127A (en) Palm touch detection in touch panel device with floating earth or thin touch panel
JP6033061B2 (en) Input device and program
TW201137671A (en) Vision based hand posture recognition method and system thereof
Edwin et al. Hand detection for virtual touchpad
JP5675196B2 (en) Information processing apparatus and control method thereof
US10175825B2 (en) Information processing apparatus, information processing method, and program for determining contact on the basis of a change in color of an image
KR20140086805A (en) Electronic apparatus, method for controlling the same and computer-readable recording medium
US20150268734A1 (en) Gesture recognition method for motion sensing detector
Hamamatsu et al. Detection of pinching gestures using a depth sensor and its application to 3D modeling