201137671 六、發明說明: 【發明所屬之技術領域】 [0001] 本發明是有關於一種手勢辨識方法,特別是有關於一種 根據視覺影像計算輪廓特徵函數以進行辨識之技術領域 〇 【先前技術】 [0002] 目前,對於快速發展的娛樂系統而言,尤其是遊戲系統 ,如何讓使用者與電腦之間的互動介面更友善是一項曰 漸重要的課題。其中,透過電腦分析使用者之動作來執 行指令已成為未來最具可能性的互動方法。然而,傳統 的解決方案往往需要在使用者手指上配置一感應器,此 舉雖然可以增加手部偵測的準確性,但是亦增加使用者 的負擔。另一較佳的方式為直接將使用者的手部視為一 指令下達器具,以影像處理的方式分析使用者的手部移 動方式來輸入指令,控制電腦的作業系統或是週邊裝置 。但是,此種傳統的影像分析方法過於複雜且不夠穩定 〇 [0003] 例如,已知一美國專利,其專利號6, 002, 808,便揭露 一種用以快速分析手勢以控制電腦的方法,其使用影像 向量計算來決定使用者手部的位置,方位以及大小。接 著,透過影像處理的方式來決定手勢,例如如果確認過 的手部影像中有洞,表示使用者的拇指與食指相碰觸擺 出一OK的手勢。此外,此專利亦揭露可利用手勢來控制 電腦顯示的屏幕顯示介面(OSD)。此習知技術的運算量過 於龐大,且容易在使用者改變動作時產生誤判,穩定度 099114011 表單編號A0101 第4頁/共27頁 0992024732-0 201137671 不佳。 [0004] 例如,另一已知美國專利,其專利號7, 129, 927,揭露 一手勢辨識系統,其特徵在於使用者手上配置複數個標 記物(marker ),藉著此系統透過一感應器偵測此些標記 物的位置。其中,複數個標記物中分成第一標記物組及 一第二標記物組,第一標記物組係作為參考之用,而感 應器偵測第二標記物組相對於第一標記物組的移動以進 一步辨識出使用者手勢。此習知技術要求使用者佩帶標 Ο [0005] 記物,無法僅以徒手進行操作。 因此,如何讓使用者可以徒手手勢或是移動軌跡與操作 介面進行互動,是一項及待解決的問題。 [0006] 【發明内容】 有鑑於上述習知技藝之問題,本發明之其中一目的就是 在提供一種三維手勢辨識系統、及其基於視覺之手勢辨 識系統及方法,以達到降低基於視覺辨識的運算複雜度 ,進一步達到即時運算之功效。 〇 U [0007] 根據本發明之目的,提出一種基於視覺之手勢辨識方法 ,其包含下列步驟。首先接收一影像畫面;從該影像畫 面擷取一手部輪廓影像;計算該手部輪廓影像之一重心 點;取得該手部輪廓影像之一手部輪廓之複數個輪廓點 :計算該重心點與該複數個輪廓點之間的複數個距離值 ;根據該複數個距離值之一第一特徵函數來辨識一手勢 0 [0008] 其中,上述辨識該手勢之步驟更包含下列步驟。設定一 099114011 表單編號A0101 第5頁/共27頁 0992024732-0 201137671 [0009] [0010] [0011] [0012] [0013] [0014] 099114011 參考點;計算連接該重心點與該參考點之—第一線段; /算連接該重心點與每—該複數個輪廓點之間的複數個 第一·線段,計算該第一線段與該複數個第二線段之間的 複數個夾角,以及以該複數個央角與該複數個距離值形 成的函數定義為該第一特徵函數。 其中’辨識該手勢之步驟更包含提供—資料庫,該資料 庫係紀錄複數個預設手勢之複數鮮二特徵函數;並計 算該第-特徵函數與該複數個第二特徵函數之間的複數 個成本值t),再根據騎數個成本值 個預設手勢之其-偶該㈣。 释亥複數 其中’辨識該手勢之步驟更包含先判斷該第 中是否有至少-峰值,若該第—特徵函數中有至^數 值’根據該至少-峰值之數目以及位置辨識出^一峰 其中’辨識該手勢之步驟更包含若該第 該峰贫’貝·]判斷該手勢係為一握拳手勢。^數中無 其中,辨識該手勢之步驟更包含根據該至少 數目來判斷該手勢之手指數目。 峰值之該 其中,辨識該手勢之步驟更包含根據該至少 位置來判斷該手勢之手部方向。 峰值之該 根據本發明之目的,更提出―種基於視覺之丰 統’包含-影像操取單元、-影像處理單-勢辨識系 理單元及—手勢_單元。影像麻單^資料處 像畫面,影像處理單元用以從該影像畫^枚-影 廓影像,並計算該手部輪廓影 侍一手部輪 表單編號A〇m 第6頁/共27頁 w點。資料處理 201137671 [0015] Ο [0016] [0017] Ο [0018] [0019] 單元用以取得該手部輪廓影像之一手部輪廓之複數個輪 廓點,並計算該重心點與該複數個輪廓點之間的複數個 距離值。手勢辨識單元係根據該複數個距離值之一第一 特徵函數來辨識一手勢。 其中,上述資料處理單元更計算一第一線段與複數個第 二線段之間的夾角值,並定義該第一特徵函數為該複數 個夾角值以及複數個距離值的函數,其中該第一線段係 連接該重心點及一參考點,而每一該複數個第二線段係 連接該重心點及每一該複數個輪廓點。 其中,上述手勢辨識系統更包含一資料庫,該資料庫係 紀錄複數個預設手勢之複數個第二特徵函數,其中該手 勢辨識單元係計算該第一特徵函數與該複數個第二特徵 函數之間的複數個成本值,並根據該複數個成本值,選 擇該複數個預設手勢之其一作為該手勢。 其中,該手勢辨識單元係判斷該第一特徵函數中是否有 至少一峰值,並根據該至少一峰值之數目以及位置辨識 出該手勢。 其中,該第一特徵函數中無該峰值時,該手勢辨識單元 係判斷該手勢係為一握拳手勢。 其中,該手勢辨識單元根據該至少一峰值之該數目來判 斷該手勢之手指數目,並根據該至少一峰值之該位置來 判斷該手勢之手部方向。 根據本發明之目的,再提出一種三維手勢辨識系統,包 含一第一影像擷取單元、一第二影像擷取單元、一影像 099114011 表單編號Α0101 第7頁/共27頁 0992024732-0 [0020] 201137671 處理單元、一資料處理單元及一手勢辨識單元。第一影 像擷取單元及第二影像擷取單元係分別接收一第一影像 晝面及一第二影像畫面。影像處理單元係從該第一影像 畫面取得一第一手部輪廓影像,並計算該第一手部輪廓 影像之一第一重心點,以及從該第二影像畫面取得一第 二手部輪廓影像,並計算該第二手部輪廓影像之一第二 重心點。資料處理單元係取得該第一手部輪廓影像之輪 廓上的複數個第一輪廓點,並計算該第一重心點與該複 數個第一輪廓點之間的複數個第一距離值,以及取得該 第二手部輪廓影像之輪廓上的複數個第二輪廓點,並計 算該第二重心點與該複數個第二輪廓點之間的複數個第 二距離值。手勢辨識單元係根據該複數個第一距離值之 一第一特徵函數來辨識一第一手勢,及根據該複數個第 二距離值之一第二特徵函數來辨識一第二手勢,再根據 該第一手勢及該第二手勢判斷出一三維手勢。 [00211 其中,手勢辨識單元係根據該第一特徵函數之至少一峰 值之數目及位置來辨識該第一手勢,以及根據該第二特 徵函數之至少一峰值之數目及位置來辨識該第二手勢。 【實施方式】 [0022] 請參閱第1圖,其係為本發明之基於視覺之手勢辨識方法 之實施流程圖。圖中,此實施例包含下列步驟。在步驟 10,接收一影像畫面。在步驟11,判斷影像畫面内是否 有一手部影像,如第2圖所示之手部影像21。若無,則執 行步驟10 ;若有,則在步驟12,從影像畫面中擷取一手 部輪廓影像。實施上,可對手部影像21進行邊緣偵測處 099114011 表單編號A0101 第8頁/共27頁 0992024732-0 201137671 [0023] Ο [0024] Ο [0025] 理以取得一如第2圖所示之手部輪廓線22。接著以手部 輪廓線22與+部影像21之邊緣所目之影像區域。作為上 述之手部輪廓影像。 接者在步驟13,計算此手部輪廓影像之一重心點。實施 上’可執行一手掌方位計算以取得手部輪廓影像23之一 重心點。例如,可根據手掌的常見的二維形狀選擇一力 矩函式I(x,y) ’接著根據此〗(xy)計算一階力矩以及二 階力矩M〇〇、M1G、Mqi、Mu、M2Q及、,如以下列方程式 所示: ^〇〇 = ^ y 财 i〇 = ΣΣχ%,少) ^ y ^οι = Λ > 从丨丨=ΣΣχγ%,) ^ y Μ2〇 - ^ y· ^ y 接著,可根據、〇、Μι〇及μ〇1計算出重心點(xc,yc),如 下列公式所示:201137671 VI. Description of the Invention: [Technical Field] [0001] The present invention relates to a gesture recognition method, and more particularly to a technical field for calculating a contour feature function based on a visual image for identification. [Prior Art] 0002] At present, for fast-developing entertainment systems, especially game systems, how to make the interaction interface between users and computers more friendly is an increasingly important issue. Among them, the implementation of instructions through computer analysis of user actions has become the most promising interactive method in the future. However, conventional solutions often require a sensor to be placed on the user's finger. This can increase the accuracy of hand detection, but it also increases the burden on the user. Another preferred method is to directly treat the user's hand as an instruction-issuing device, analyze the user's hand movement mode by image processing, input commands, and control the computer's operating system or peripheral devices. However, such conventional image analysis methods are too complicated and not sufficiently stable. [0003] For example, a U.S. Patent No. 6,002,808 discloses a method for quickly analyzing gestures to control a computer. Image vector calculations are used to determine the position, orientation, and size of the user's hand. Then, the gesture is determined by image processing. For example, if there is a hole in the confirmed hand image, the user's thumb and the index finger are touched to make an OK gesture. In addition, this patent also discloses an on-screen display interface (OSD) that can be used to control the display of a computer. The computational complexity of this prior art is too large, and it is easy to cause misjudgment when the user changes the action. Stability 099114011 Form No. A0101 Page 4 of 27 0992024732-0 201137671 Poor. [0004] For example, another known U.S. Patent No. 7,129,927 discloses a gesture recognition system in which a user is provided with a plurality of markers on which a sensor transmits a sensor. The device detects the location of these markers. Wherein the plurality of markers are divided into a first marker group and a second marker group, the first marker group is used as a reference, and the sensor detects the second marker group relative to the first marker group. Move to further identify the user gesture. This prior art requires the user to wear the mark [0005] and cannot operate with bare hands. Therefore, how to let the user interact with the gesture or the movement track and the operation interface is a problem to be solved. SUMMARY OF THE INVENTION In view of the above-mentioned problems of the prior art, one of the objects of the present invention is to provide a three-dimensional gesture recognition system and a vision-based gesture recognition system and method thereof, so as to reduce the operation based on visual recognition. Complexity, further achieving the power of instant computing. 〇 U [0007] In accordance with the purpose of the present invention, a vision-based gesture recognition method is provided that includes the following steps. First receiving an image frame; extracting a hand contour image from the image frame; calculating a center of gravity point of the hand contour image; and obtaining a plurality of contour points of the hand contour of the hand contour image: calculating the center of gravity point and the a plurality of distance values between the plurality of contour points; identifying a gesture 0 according to the first characteristic function of the plurality of distance values. [0008] wherein the step of recognizing the gesture further comprises the following steps. Set a 099114011 Form No. A0101 Page 5 / Total 27 Page 0992024732-0 201137671 [0009] [0011] [0012] [0014] [0014] 099114011 Reference point; calculate the connection of the center of gravity point and the reference point - a first line segment; calculating a plurality of first line segments between the center of gravity point and each of the plurality of contour points, and calculating a plurality of angles between the first line segment and the plurality of second line segments, and A function formed by the plurality of central corners and the plurality of distance values is defined as the first characteristic function. Wherein the step of recognizing the gesture further comprises providing a database, the database is a complex two-characteristic function for recording a plurality of preset gestures; and calculating a complex number between the first-featured function and the plurality of second characteristic functions The cost value is t), and then according to the cost value of the ride, the preset gesture is - (even). The step of recognizing the gesture further includes determining whether the middle has at least a peak, and if the first feature has a value of ^, the number is determined according to the number and position of the at least one peak. The step of recognizing the gesture further includes determining that the gesture is a fist gesture if the first peak is poor. In the number of none, the step of recognizing the gesture further comprises determining the number of fingers of the gesture based on the at least the number. The step of identifying the gesture further includes determining the hand direction of the gesture based on the at least position. The peak value According to the object of the present invention, a "vision-based abundance" inclusion-image manipulation unit, an image processing single-potential recognition system unit, and a gesture_unit are further proposed. The image is used to image the image from the image, and the image processing unit is used to draw the image from the image, and calculate the hand contour to capture the hand wheel form number A〇m page 6 / total 27 pages w point . [0015] [0019] The unit is configured to obtain a plurality of contour points of a hand contour of the hand contour image, and calculate the gravity center point and the plurality of contour points A plurality of distance values between. The gesture recognition unit identifies a gesture based on the first feature function of the plurality of distance values. The data processing unit further calculates an angle value between a first line segment and a plurality of second line segments, and defines the first characteristic function as a function of the plurality of angle values and a plurality of distance values, wherein the first The line segment connects the center of gravity point and a reference point, and each of the plurality of second line segments connects the center of gravity point and each of the plurality of contour points. The gesture recognition system further includes a database, wherein the database records a plurality of second feature functions of the plurality of preset gestures, wherein the gesture recognition unit calculates the first feature function and the plurality of second feature functions. A plurality of cost values are selected, and one of the plurality of preset gestures is selected as the gesture according to the plurality of cost values. The gesture recognition unit determines whether there is at least one peak in the first feature function, and recognizes the gesture according to the number and location of the at least one peak. Wherein, when the peak is absent in the first feature function, the gesture recognition unit determines that the gesture is a fist gesture. The gesture recognition unit determines the number of fingers of the gesture according to the number of the at least one peak, and determines the hand direction of the gesture according to the position of the at least one peak. According to the purpose of the present invention, a three-dimensional gesture recognition system is provided, including a first image capturing unit, a second image capturing unit, and an image 099114011. Form number Α0101, page 7 / total 27 pages 0992024732-0 [0020] 201137671 Processing unit, a data processing unit and a gesture recognition unit. The first image capturing unit and the second image capturing unit respectively receive a first image frame and a second image frame. The image processing unit acquires a first hand contour image from the first image frame, calculates a first center of gravity point of the first hand contour image, and obtains a second hand contour image from the second image frame. And calculating a second center of gravity point of one of the second hand contour images. The data processing unit obtains a plurality of first contour points on the contour of the first hand contour image, and calculates a plurality of first distance values between the first center of gravity point and the plurality of first contour points, and obtains a plurality of second contour points on the contour of the second hand contour image, and calculating a plurality of second distance values between the second center of gravity point and the plurality of second contour points. The gesture recognition unit identifies a first gesture according to the first feature function of the plurality of first distance values, and identifies a second gesture according to the second feature function of the plurality of second distance values, and then according to the second feature function The first gesture and the second gesture determine a three-dimensional gesture. [00211] The gesture recognition unit identifies the first gesture according to the number and position of at least one peak of the first feature function, and identifies the second hand according to the number and position of at least one peak of the second feature function. Potential. [Embodiment] [0022] Please refer to FIG. 1 , which is a flowchart of an implementation of a vision-based gesture recognition method according to the present invention. In the figure, this embodiment includes the following steps. In step 10, an image frame is received. In step 11, it is determined whether there is a hand image in the image frame, such as the hand image 21 shown in Fig. 2. If not, go to step 10; if so, in step 12, grab a hand contour image from the image screen. In practice, the edge detection can be performed on the hand image 21. 099114011 Form No. A0101 Page 8 / Total 27 Page 0992024732-0 201137671 [0023] Ο [0025] Ο [0025] Take the same as shown in FIG. 2 Hand outline 22. The image area of the hand contour 22 and the edge of the + image 21 is then displayed. As the hand contour image described above. In step 13, the receiver calculates a center of gravity of one of the hand contour images. The implementation can perform a palm orientation calculation to obtain a center of gravity of one of the hand contour images 23. For example, a torque function I(x, y) can be selected according to the common two-dimensional shape of the palm. Then, the first-order moment and the second-order moments M〇〇, M1G, Mqi, Mu, M2Q, and , as shown in the following equation: ^〇〇= ^ y 财 i〇= ΣΣχ%, less) ^ y ^οι = Λ > From 丨丨=ΣΣχγ%,) ^ y Μ2〇- ^ y· ^ y Next The center of gravity (xc, yc) can be calculated from , 〇, Μι〇 and μ〇1 as shown in the following formula:
[0026] Μ[0026] Μ
10 财οι 财00 [0027] 重“點(Xc,yc)如第3圖所示之重心點24。再根據xc、 099114011 表單編號A0101 第9頁/共27頁 0992024732-0 201137671 、Mnn、Μ”、M9n及Mn9計算出手部矩型的長L』及寬L9, c 00 11 20 02 1 2 如下列公式所示: [0028]10 ο ο ο 00 [0027] Re-point (Xc, yc) as shown in Figure 3, the center of gravity point 24. According to xc, 099114011 Form No. A0101 Page 9 / Total 27 Page 0992024732-0 201137671 , Mnn, Μ ", M9n and Mn9 calculate the length L of the hand rectangle" and the width L9, c 00 11 20 02 1 2 as shown in the following formula: [0028]
财02 yl [0029] 接著在步驟14,取得手部輪廓影像之一手部輪廓之複數 個輪廓點,如第3圖所示之沿著手部輪廓線22配置的輪廓 點26。在步驟15,計算此重心點與每一輪廓點之間的距 離值,如第3圖所示之距離d。接著在步驟16根據複數個 距離值之一第一特徵函數來辨識一手勢。實施上,此第 一特徵函數可為複數個距離值,以及重心點、輪廓點及 一參考點等三個位置點之間的夾角角度所形成的特徵函 數。如第3圖所示,上述夾角角度係由連接重心點24與參 考點25之一第一線段271,與連接重心點與每一輪廓點26 之間的一第二線段272所形成的夾角,如第3圖所示之角 度0。 [0030] 請續參閱第4圖,其繪示距離值與夾角角度之特徵函數的 波形圖。圖中,橫軸為上述炎角角度0而縱軸為距離值d ,將每一輪廓點其對應的距離值以及夾角角度從0度到 360度依序繪製,而形成特徵函數之波形圖。其中,波形 圖中可使用正規化(normalized)的距離值,以消除不同 影像大小的影響。 [0031] 由於手指的面積較手掌為小,因此手部輪廓影像的重心 099114011 表單編號 A0101 第 10 頁/共 27 頁 0992024732-0 201137671 點大多位在手掌的中心區域。若使用者擺出伸出手指的 手勢&端與重心點之間會形成較長距離,根據 此現象’我們可以從特徵函數之波形圖中是否有出現明 顯的陡俩波峰來騎手部輪廓影像是否為伸 出手指的 手勢’此外,亦可從波峰的數目來判斷手部輪廓影像中 的手和數目。實施上’可預設—角度幅度值以及一距離 門植值,逐-在波形中檢查是否有在角度幅度值内有局 部較大值出現且距離值的變化大於距離門檻值,若有, Ο [0032] 則可確遇有波峰的存在,如第*圖及第6圖所示波形圖; 反之,若有局部較大值出現租距離值的變化 小於距離門 權值,如第5圖所示,則不算是波峰。 再者,可由參考點的位置以及波峰的角度位置來判斷手 部輪廓影像的指向方位。例如,如果參考點設置於畫面 的右邊邊、缘,而波峰的角度位置出現於14Q度至22〇度之 間’表示此手勢的指向方位係為指 ❹ 向西方θ如第4圖所示 ’圖中之波形有-波峰存在且其角度位置於15()度至2〇〇 度之間而參考點技於畫面之右侧,因此可判斷此手部 輪廓影像代表伸出_隻手指指向西方的手勢 ;請續參閱 第5圖®中之波形無波峰存在,表示因此可判斷此手部 輪麻影像代表無伸出手指的握拳手勢;請續參閲第6圖, 圖中之波t有5個波峰存在,且其角度位置於15g度至 度之間’而參考點位於晝面之底侧,因此可判斷此手部 輪麻影像代㈣4五隻手糾向北方的手勢。 [0033] .月參閱第7圖其綠林發明之基於視覺之手勢辨識系統 之實知方塊圖。圖中,手勢辨識系統包含—影像操取單 099114011 表單編號A0101 第11頁/共27頁 0992024732-0 201137671 元41、一影像處理單元42、一資料處理單元43、一手勢 辨識單元44及一資料庫45。影像擷取單元41用以接收一 影像畫面411。影像處理單元42用以從影像畫面4丨丨取得 一手部輪廓影像421,並計算手部輪廓影像421之一重心 點422。貧料處理單元43用以取得手部輪廓影像421之一 手部輪廓423之複數個輪廓點431,並計算重心點422與 複數個輪廓點431之間的複數個距離值432。其中,影像 . 擷取單疋41較佳為一攝影機或一網路攝影機(webcam)。 _ 此外,資料處理單元43更可計算重心點422、複數個輪腐 點431及一參考點之間所形成的夾角433,如第3圖所示之 ◎ 夾角61。 [0034] [0035] 手勢辨識單元44係根據複數個距離值432之-第-特徵函 = 442來辨識一手勢441。其中’資料祕儲存複數個預 。又手勢之複數個第二特徵函數452。手勢辨識單元以可計 算第一特徵函數442與該複數個第二特徵函數452之間的 複數個成本值443,再根據該複數個成本值⑷,選擇該 複數個預-X手勢之其—作為手勢44卜例如,若第—特徵《 函數442及第二特徵函數452較佳為複數個距離值432以 - 一夾角433之函數’其可緣示如第4圖第5圖第6圖所 ^之波形圖。手勢辨識單元以可計算第-特徵函數442及 母一第二特徵函數452的波形之間的差異值,此差異值即 為上述之成本值443。手勢辨識單元44可選擇與第一特徵 函數442之間差異最小的第二特徵函數奶所對應的預設 手勢作為手勢441。 此外’手勢辨識單元44亦可根據第一特徵函數442之波形 099114011 表單編號A0101 第12頁/共27頁 0992024732-0 201137671 之波峰數目以及波峰位置來判斷出手部輪廓影像421所表 示的手勢441 »例如,可根據第一特徵函數442之波形中 疋否有波峰來判斷手部輪廓影像421是否為一伸出手指的 手勢’波峰數目可用以判斷手勢441的手指數目;此外, 參考點位置以及波形中波峰位置可用來判斷手勢441的方 位。其中,以波峰數目以及波峰位置來判斷的方式以於 月’J段内容所揭露,在此不再贅述。 [0036] ❹ ❹ 請參閱第8圖,其繪示本發明之三維手勢辨識系統之實施 方塊圖。圖中,此三維手勢辨識系統之實施例包含一第 —影像擷取單元501、一第二影像擷取單元502、一影像 處理單元52、一資料處理單元53:及一手勢辨識單元54。 第一影像擷取單元5〇1及第二影像擷取單元502係分別接 收一第一影像畫面511及一第二影像畫面512。影像處理 單元52係從第一影像畫面511取得一第一手部輪廓影像 5211 ’並計算第一手部輪廓影像5211之一第一重心點 5221,以及從第二影像畫面512取得一第二手部輪廓影像 5212,並計算第二手部輪廓影像5212之一第二重心點 5222。資料處理單元53係取得第一手部輪廓影像5211之 輪廓5231上的複數個第一輪廓點5311 ’並計算第一重心 點與複數個第一輪廓點之間的複數個第一距離值5321及 第一夾角5331,以及取得第二手部輪廓影像5212之輪廓 5232上的複數個第二輪廓點5312,並計算第二重心點 5222與複數個第二輪廓點531 2之間的複數個第二距離值 及第二夾角5332。 [0037] 099114011 手勢辨識單元54係根據複數個第一距離值5321與第一夾 表單編號A0101 第13頁/共27頁 0992024732-0 201137671 角5331之一第一特徵函數來辨識一第一手勢541,及根據 複數個第一距離值5322與第二夾角5332之一第二特徵函 數來辨識一第一手勢542,再根據第一手勢541及第二手 勢542判斷出一二維手勢543。其中,手勢辨識單元54較 佳的是根據特徵函數的波峰數目以及波峰位置來辨識第 一手勢541及第二手勢542。 [0038] 以上所述僅為舉例性’㈣為限制性者。任何未脫離本 發明之精神與範_,而對其進行之等致修改或變更,均 應包含於後附之申請專利範圍中。 [0039] 【圖式簡單說明】 第1圖係為本發明之基於視覺之手勢辨識方法之 圖; 實施流程 第2圖係為本發明之手部影像之_咅 不惠圖; 第3圖係為本發明之手部輪廓影像之示专圖. 第4圖係為本發明之對應輪摩點之距離:與失角 數之第一波形圖範例; 第5圖係為本發明之對應輪扇 *之距離值與夾角之特徵函 數之第二波形圖範例; ^ 第6圖係為本發明之對應輪靡點 數之第三波形圖範例; 之特徵函 之距離值與夾角之特徵函 第7圖係為本發明之基於視誉+ 1 m „ 手勢辨識系統之實施方塊 [0040] 圖;以及 第8圖係為本發明之三維手勢辨 【主要元件符號說明】 10〜16 :步驟流程 識系統之實施方塊 圖 099114011 表單編號A0101 第14頁/共27 0992024732-0 201137671 21 :手部影像 22 :手部輪廓線 23 :影像區域 24、281、291、422 :重心點 5221 :第一重心點 5 2 2 2 :第二重心點 25 ' 282 ' 292 :參考點 26、431 :輪廓點 5311 :第一輪廓點 〇 5312 :第二輪廓點 271 :第一線段 272 :第二線段 41 :影像擷取單元 501 :第一影像擷取單元 502 :第二影像擷取單元 411 :影像畫面 511 :第一影像畫面 〇 512 :第二影像畫面 42、 52 :影像處理單元 421 :手部輪廓影像 5211 :第一手部輪廓影像 5212 :第二手部輪廓影像 423、5231、5232 :手部輪廓 43、 53 :資料處理單元 432 :距離值 5321 :第一距離值 099114011 表單編號 A0101 第 15 頁/共 27 頁 0992024732-0 201137671 5322 :第二距離值 433 :夾角 5331 :第一夾角 5332 :第二夾角 44、54 :手勢辨識單元 441 :手勢 541 :第一手勢 542 :第二手勢 442 :第一特徵函數 443 :成本值 45 :資料庫 452 :第二特徵函數 543 :三維手勢 L1 :手部矩型長度 L2 :手部矩型寬度 099114011 表單編號A0101 第16頁/共27頁 0992024732-0Next, in step 14, a plurality of contour points of the hand contour of one of the hand contour images are obtained, as shown in Fig. 3, contour points 26 disposed along the hand contour 22. At step 15, the distance between the center of gravity point and each contour point is calculated, as shown by the distance d shown in FIG. A gesture is then identified in step 16 based on the first feature function of one of a plurality of distance values. In practice, the first eigenfunction can be a plurality of distance values, and a feature function formed by the angle between the three points of the center point, the contour point, and a reference point. As shown in FIG. 3, the angle of the above angle is formed by a first line segment 271 connecting the center of gravity point 24 and the reference point 25, and an angle formed by a second line segment 272 connecting the center of gravity point and each contour point 26. , as shown in Figure 3, the angle 0. [0030] Please continue to refer to FIG. 4, which is a waveform diagram showing a characteristic function of the distance value and the angle of the angle. In the figure, the horizontal axis is the above-mentioned flaming angle angle 0 and the vertical axis is the distance value d, and the corresponding distance value and the angle angle of each contour point are sequentially drawn from 0 degrees to 360 degrees, and a waveform diagram of the characteristic function is formed. Among them, the normalized distance value can be used in the waveform diagram to eliminate the influence of different image sizes. [0031] Since the area of the finger is smaller than the palm, the center of gravity of the hand contour image is 099114011. Form number A0101 Page 10 of 27 0992024732-0 201137671 Most of the points are in the center of the palm. If the user poses a gesture of extending the finger & a longer distance between the end and the center of gravity, according to this phenomenon, we can get the contour image of the rider from the waveform of the characteristic function. Whether it is a gesture of extending a finger' In addition, the number and the number of hands in the contour image of the hand can also be judged from the number of peaks. Implementing the 'presettable-angle amplitude value and a distance gate value, check in the waveform whether there is a local large value within the angular amplitude value and the distance value changes greater than the distance threshold, if any, Ο [0032] The presence of a peak can be confirmed, such as the waveform diagrams shown in FIG. 6 and FIG. 6; conversely, if there is a local large value, the change in the rental distance value is smaller than the distance gate weight, as shown in FIG. Show, it is not a peak. Furthermore, the pointing orientation of the hand contour image can be determined from the position of the reference point and the angular position of the peak. For example, if the reference point is set to the right edge and edge of the screen, and the angular position of the peak appears between 14Q and 22 degrees, 'the pointing position of the gesture is ❹ to the west θ as shown in Figure 4' The waveform in the figure has - the peak exists and its angular position is between 15 () degrees and 2 degrees and the reference point is on the right side of the picture, so it can be judged that the contour image of the hand represents the extension _ only the finger points to the west Gesture; please continue to see the waveform without peaks in Figure 5, which means that this hand numb image represents a fist gesture without protruding fingers; please continue to see Figure 6, the wave in the figure Five peaks exist, and their angular position is between 15g degrees and degrees', and the reference point is located at the bottom side of the facet. Therefore, it can be judged that the hand of the hand is four (five) hands to the north. [0033] The month refers to the block diagram of the visual-based gesture recognition system of the Greenwood invention in FIG. In the figure, the gesture recognition system includes - image manipulation order 099114011 form number A0101 page 11 / 27 pages 0992024732-0 201137671 element 41, an image processing unit 42, a data processing unit 43, a gesture recognition unit 44 and a data Library 45. The image capturing unit 41 is configured to receive an image frame 411. The image processing unit 42 is configured to obtain a hand contour image 421 from the image frame 4 and calculate a center of gravity 422 of the hand contour image 421. The poor material processing unit 43 is configured to acquire a plurality of contour points 431 of the hand contour 423 of one of the hand contour images 421, and calculate a plurality of distance values 432 between the center of gravity point 422 and the plurality of contour points 431. Wherein, the image capturing unit 41 is preferably a camera or a webcam. Further, the data processing unit 43 can further calculate an angle 433 formed between the center of gravity point 422, the plurality of round rotatable points 431, and a reference point, such as the angle 611 shown in FIG. [0035] The gesture recognition unit 44 recognizes a gesture 441 based on a plurality of distance values 432 - the first feature letter = 442. Among them, the information secret storage is plural. A plurality of second feature functions 452 are also gestured. The gesture recognition unit selects the plurality of cost values 443 between the first feature function 442 and the plurality of second feature functions 452, and selects the plurality of pre-X gestures according to the plurality of cost values (4). For example, if the first feature "function 442 and second feature function 452 are preferably a plurality of distance values 432 by a function of angle 433", the relationship can be as shown in Fig. 4, Fig. 5, Fig. 6 Waveform diagram. The gesture recognition unit calculates the difference value between the waveforms of the first-feature function 442 and the parent-second feature function 452, and the difference value is the above-mentioned cost value 443. The gesture recognition unit 44 may select a preset gesture corresponding to the second feature function milk having the smallest difference from the first feature function 442 as the gesture 441. In addition, the gesture recognition unit 44 can also determine the gesture 441 represented by the hand contour image 421 according to the number of peaks and the peak position of the waveform 099114011 of the first characteristic function 442, the form number A0101, the 12th page, and the 27th page of the 0992024732-0 201137671. For example, according to whether there is a peak in the waveform of the first characteristic function 442 to determine whether the hand contour image 421 is a gesture of extending a finger, the number of peaks can be used to determine the number of fingers of the gesture 441; in addition, the position of the reference point and the waveform The peak position can be used to determine the orientation of the gesture 441. Among them, the method of judging the number of peaks and the position of the peaks is disclosed in the content of the paragraph of the month, and will not be described here. 00 ❹ Please refer to FIG. 8 , which is a block diagram showing an implementation of the three-dimensional gesture recognition system of the present invention. The embodiment of the three-dimensional gesture recognition system includes a first image capturing unit 501, a second image capturing unit 502, an image processing unit 52, a data processing unit 53 and a gesture recognition unit 54. The first image capturing unit 5〇1 and the second image capturing unit 502 respectively receive a first image frame 511 and a second image frame 512. The image processing unit 52 acquires a first hand contour image 5211 ′ from the first image frame 511 and calculates a first center of gravity point 5221 of the first hand contour image 5211 and a second hand from the second image frame 512 . The contour image 5212 is calculated and a second center of gravity point 5222 of the second hand contour image 5212 is calculated. The data processing unit 53 obtains a plurality of first contour points 5311 ′ on the contour 5231 of the first hand contour image 5211 and calculates a plurality of first distance values 5321 between the first centroid point and the plurality of first contour points and a first angle 5331, and a plurality of second contour points 5312 on the contour 5232 of the second hand contour image 5212, and calculating a plurality of second points between the second center of gravity point 5222 and the plurality of second contour points 531 2 The distance value and the second angle 5332. [0037] 099114011 The gesture recognition unit 54 identifies a first gesture 541 according to a first feature value of one of the first distance value 5321 and the first folder form number A0101 page 13 / 27 pages 0992024732-0 201137671 angle 5331. And identifying a first gesture 542 according to a second characteristic function of the plurality of first distance values 5322 and the second angle 5332, and determining a two-dimensional gesture 543 according to the first gesture 541 and the second gesture 542. Preferably, the gesture recognition unit 54 recognizes the first gesture 541 and the second gesture 542 according to the number of peaks of the feature function and the peak position. [0038] The above is merely illustrative of the '(d)). Any changes or modifications that are made without departing from the spirit and scope of the invention are intended to be included in the scope of the appended claims. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram of a gesture-based gesture recognition method according to the present invention; FIG. 2 is a diagram of a hand image of the present invention; FIG. The figure is a special image of the hand contour image of the present invention. Fig. 4 is an example of the distance of the corresponding wheel point of the present invention: the first waveform diagram of the number of corners; FIG. 5 is the corresponding wheel fan of the present invention *Example of the second waveform diagram of the characteristic function of the distance value and the angle; ^ Figure 6 is an example of the third waveform diagram of the corresponding rim point of the invention; the characteristic value of the distance value and the angle of the characteristic letter 7 The figure is the implementation block of the invention based on the reputation + 1 m „ gesture recognition system [0040]; and the figure 8 is the three-dimensional gesture recognition of the present invention [main component symbol description] 10~16: step process identification system Implementation Block Diagram 099114011 Form No. A0101 Page 14 of 27 0992024732-0 201137671 21 : Hand Image 22: Hand Outline 23: Image Area 24, 281, 291, 422: Center of Gravity Point 5221: First Center of Gravity Point 5 2 2 2 : Second center of gravity 25 ' 282 ' 292 : Reference point 26, 4 31: contour point 5311: first contour point 〇 5312: second contour point 271: first line segment 272: second line segment 41: image capturing unit 501: first image capturing unit 502: second image capturing unit 411 : image screen 511 : first image frame 〇 512 : second image screen 42 , 52 : image processing unit 421 : hand contour image 5211 : first hand contour image 5212 : second hand contour image 423 , 5231 5232: hand contour 43, 53: data processing unit 432: distance value 5321: first distance value 099114011 form number A0101 page 15 / total 27 page 0992024732-0 201137671 5322: second distance value 433: angle 5331: first Angle 5332: second angle 44, 54: gesture recognition unit 441: gesture 541: first gesture 542: second gesture 442: first feature function 443: cost value 45: database 452: second feature function 543: three-dimensional Gesture L1: Hand Moment Length L2: Hand Moment Width 099114011 Form No. A0101 Page 16 / Total 27 Page 0992024732-0