TWI539329B - Gesture recognition system - Google Patents


Info

Publication number
TWI539329B
TWI539329B
Authority
TW
Taiwan
Prior art keywords
gesture recognition
recognition system
hand
momentum
tracking
Prior art date
Application number
TW103133272A
Other languages
Chinese (zh)
Other versions
TW201612698A (en)
Inventor
謝明得
甘家銘
楊得煒
王宗仁
Original Assignee
財團法人成大研究發展基金會
奇景光電股份有限公司
Priority date
Filing date
Publication date
Application filed by 財團法人成大研究發展基金會 and 奇景光電股份有限公司
Priority to TW103133272A priority Critical patent/TWI539329B/en
Publication of TW201612698A publication Critical patent/TW201612698A/en
Application granted granted Critical
Publication of TWI539329B publication Critical patent/TWI539329B/en

Landscapes

  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Description

Gesture recognition system

The present invention relates to a gesture recognition system, and more particularly to a gesture recognition system that can operate in complex scenes.

A natural user interface (NUI) is a user interface that is effectively invisible and requires no artificial control devices such as keyboards and mice. Interaction between humans and machines can be achieved through static or dynamic gestures. Microsoft's Kinect motion-sensing device is a vision-based gesture recognition system that uses static or dynamic gestures for interaction between the user and the computer.

Traditional vision-based gesture recognition systems are prone to erroneous object-recognition results caused by ambient light and background objects. After features are extracted from the recognized object (here, the hand), training data are used to perform classification and thereby identify the gesture. The drawbacks of traditional classification methods are the huge amount of training data required and misjudgments caused by unknown features.

In view of the above, there is a need for a novel gesture recognition system that recognizes static and/or dynamic gestures more accurately and quickly.

In view of the above, one feature of the embodiments of the present invention is to provide a robust gesture recognition system that operates correctly in complex scenes and reduces the complexity of gesture classification.

According to an embodiment of the invention, the gesture recognition system includes a candidate point detection unit, a static gesture recognition unit, a multi-hand tracking unit, and a dynamic gesture recognition unit. The candidate point detection unit receives an input image to generate candidate points. The static gesture recognition unit recognizes a static gesture according to the candidate points. The multi-hand tracking unit tracks multiple hands by pairing between consecutive input images. The dynamic gesture recognition unit obtains accumulated momentum values according to the tracking paths of the multi-hand tracking unit, thereby recognizing dynamic gestures.

The first figure shows a block diagram of a gesture recognition system 100 according to an embodiment of the present invention. In this embodiment, the gesture recognition system 100 mainly includes a candidate point detection unit 11, a static gesture recognition unit 12, a multi-hand tracking unit 13, and a dynamic gesture recognition unit 14, whose details are described below. The gesture recognition system 100 may be executed by a processor, such as a digital image processor.

The second figure shows a flowchart of the steps performed by the candidate point detection unit 11 of the first figure. In step 111 (interactive feature extraction), features are extracted according to color, depth, and momentum to generate a color reliability map, a depth reliability map, and a momentum reliability map.

The color reliability map is generated according to the skin color of the input image. In the color reliability map, pixels closer to skin color are assigned higher values.

The depth reliability map is generated according to the depth of the hand in the input image. In the depth reliability map, pixels within the depth range of the hand are assigned higher values. In one embodiment, a face is first identified by a face recognition technique, and the depth range of the hand is then determined relative to the depth of the recognized face.

The momentum reliability map is generated according to the momentum of a series of input images. In the momentum reliability map, pixels with larger momentum are assigned higher values; the momentum may be measured using the sum of absolute differences (SAD) between two input images.
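
As a rough illustration, the per-pixel momentum measure described above can be sketched as a sum of absolute differences over a small neighborhood between two consecutive frames. This is a minimal sketch, not the patented implementation; the neighborhood size and border handling are assumptions.

```python
def sad_momentum_map(prev_frame, curr_frame, block=3):
    """Momentum reliability sketch: per-pixel SAD between two frames.

    Frames are 2-D lists of grayscale intensities. Each pixel's momentum
    is the sum of absolute differences over a small neighborhood, so
    pixels near a change between frames receive higher values.
    """
    h, w = len(curr_frame), len(curr_frame[0])
    r = block // 2
    momentum = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sad = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    sad += abs(curr_frame[yy][xx] - prev_frame[yy][xx])
            momentum[y][x] = sad
    return momentum
```

A static region yields zero momentum, while any pixel whose neighborhood changed between frames receives a positive value.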

In step 112 (natural user scene analysis), the weights of the extracted color, depth, and momentum are determined according to the operating state. The operating state can involve an initial statement, the momentum, and whether the hand is close to the face. Table 1 lists some example weights:

Table 1

  Operating state                                      Weights
  Initial statement   Momentum   Hand close to face    Color   Depth   Momentum
  No                  Strong     No                    0.286   0.286   0.429
  No                  Strong     Yes                   0.25    0.375   0.375
  No                  Weak       No                    0.5     0.5     0
  No                  Weak       Yes                   0.4     0.6     0
  Yes                 Strong     Don't care            0       0.4     0.6
  Yes                 Weak       Don't care            0       1       0

Finally, in step 113 (hybrid reliability map generation), the color, depth, and momentum weights provided by step 112 are used to combine the color reliability map, the depth reliability map, and the momentum reliability map, thereby generating a hybrid reliability map from which the candidate points can be detected.
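
The weighted combination in step 113 amounts to a per-pixel linear blend of the three maps. A minimal sketch, assuming the maps share one resolution, hold values in [0, 1], and the weights come from a table such as Table 1; the candidate threshold is an assumption:

```python
def hybrid_reliability_map(color_map, depth_map, momentum_map, weights):
    """Combine the three reliability maps into one hybrid map.

    weights is (w_color, w_depth, w_momentum), e.g. (0.286, 0.286, 0.429)
    from Table 1; each map is a 2-D list of per-pixel reliability values.
    """
    wc, wd, wm = weights
    return [
        [wc * c + wd * d + wm * m
         for c, d, m in zip(crow, drow, mrow)]
        for crow, drow, mrow in zip(color_map, depth_map, momentum_map)
    ]

def detect_candidates(hybrid, threshold=0.5):
    """A candidate point is any pixel whose hybrid reliability exceeds
    a threshold (the threshold value here is a hypothetical choice)."""
    return [(y, x)
            for y, row in enumerate(hybrid)
            for x, v in enumerate(row) if v > threshold]
```

For example, a pixel that scores highly in all three maps under the first row of Table 1 blends to roughly 1.0 and is kept as a candidate.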

The third figure shows a flowchart of the steps performed by the static gesture recognition unit 12 of the first figure. In step 121 (dynamic palm segmentation), the detected hand (from the candidate point detection unit 11) is segmented into a palm and an arm; the palm is used in subsequent steps, while the arm is discarded.

In step 122 (high-accuracy finger detection), a distance curve is generated by recording the distances between the center of the segmented palm and the edge of the segmented palm. The fourth figure illustrates a distance curve with five peaks, indicating that five extended fingers are recognized.
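
Counting extended fingers then reduces to counting peaks in the distance curve. A minimal sketch, assuming the curve is a 1-D list of center-to-edge distances sampled along the palm contour and that a simple local-maximum test above a threshold suffices:

```python
def count_fingers(distance_curve, threshold):
    """Count peaks in the palm-center-to-edge distance curve.

    A sample is a peak when it exceeds the threshold and is not smaller
    than its immediate neighbors; each peak corresponds to one extended
    fingertip (a simplification of step 122).
    """
    peaks = 0
    for i in range(1, len(distance_curve) - 1):
        d = distance_curve[i]
        if d > threshold and d > distance_curve[i - 1] and d >= distance_curve[i + 1]:
            peaks += 1
    return peaks
```

A flat curve (a fist) yields zero peaks, while an open hand yields five, matching the fourth figure.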

In step 123 (hierarchical gesture recognition), various gestures are classified to facilitate subsequent processing. The fifth figure illustrates the gesture classification obtained according to the number of recognized extended fingers. When a hierarchical method is used to recognize a gesture, the number of extended fingers is determined first. The number of merged fingers can be obtained by calculating finger widths. The gaps and their widths then determine the number of bent fingers between the extended fingers.

In the multi-hand tracking unit 13 of the first figure, multiple hands are tracked by matching between consecutive frames, as illustrated in the sixth figure. A tracking path exists between two matched tracked hands. If a tracked hand is unmatched because the object has left, the tracking path is deleted. If a tracked hand is unmatched because the object is occluded, an extrapolation technique may be used to generate an expected tracked hand. If a tracked hand is unmatched because a new object has entered, a new gesture must be recognized and a new path tracked. When an unmatched tracked hand is found, feedback is sent to the candidate point detection unit 11 (first figure) to discard the corresponding candidate point.
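
The frame-to-frame pairing above can be illustrated with a greedy nearest-neighbor matcher plus a linear extrapolation for occluded hands. This is a hedged sketch, not the patent's method; the distance threshold and the matching strategy are assumptions:

```python
def match_hands(prev_hands, curr_hands, max_dist=50.0):
    """Greedily pair hand positions across two consecutive frames.

    prev_hands / curr_hands are lists of (x, y) centers. Returns a dict
    mapping prev index -> curr index. A prev hand left unmatched models
    an object that left or was occluded; a curr hand left unmatched
    models a newly entered object needing a new tracking path.
    """
    matches, used = {}, set()
    for i, (px, py) in enumerate(prev_hands):
        best, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_hands):
            if j in used:
                continue
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

def extrapolate(path):
    """Predict an occluded hand's position from its last two positions."""
    (x1, y1), (x2, y2) = path[-2], path[-1]
    return (2 * x2 - x1, 2 * y2 - y1)
```

For robust multi-object matching, an optimal assignment (e.g. the Hungarian algorithm) could replace the greedy loop; the greedy version keeps the sketch short.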

In the dynamic gesture recognition unit 14 of the first figure, the tracking paths are monitored to obtain accumulated momentum values along each axis of three-dimensional space, thereby recognizing dynamic gestures. The recognized dynamic gestures are fed to a natural user interface to perform predefined tasks.
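
Accumulating per-frame displacement along each axis and thresholding the result is one way to turn a tracking path into a dynamic gesture such as a swipe. A minimal sketch; the threshold value and the gesture labels are hypothetical:

```python
def accumulate_momentum(path):
    """Sum per-frame displacement along each 3-D axis of a tracking path."""
    ax = ay = az = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(path, path[1:]):
        ax += x2 - x1
        ay += y2 - y1
        az += z2 - z1
    return ax, ay, az

def classify_dynamic_gesture(path, threshold=30.0):
    """Label the dominant axis of accumulated momentum.

    The labels and threshold are illustrative only; a real system would
    map them to the predefined tasks of the natural user interface.
    """
    ax, ay, az = accumulate_momentum(path)
    mags = {'swipe-x': ax, 'swipe-y': ay, 'push-z': az}
    name, value = max(mags.items(), key=lambda kv: abs(kv[1]))
    if abs(value) < threshold:
        return 'none'
    return name
```

A path drifting steadily along x accumulates a large x value and is labeled a horizontal swipe, while jitter below the threshold is ignored.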

Figure 7A shows a natural user interface for drawing with one hand on a captured image. As illustrated in Figure 7B, after using static gesture No. 1 (not shown in Figure 7B), the user can continuously use a dynamic gesture composed of static gesture No. 2 to draw lines, during which the user can use static gesture No. 3 or No. 4 to change the color.

The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the claims of the present invention; all other equivalent changes or modifications that do not depart from the spirit disclosed by the invention shall be included within the scope of the following claims.

100  gesture recognition system
11   candidate point detection unit
111  interactive feature extraction
112  natural user scene analysis
113  hybrid reliability map generation
12   static gesture recognition unit
121  dynamic palm segmentation
122  high-accuracy finger detection
123  hierarchical gesture recognition
13   multi-hand tracking unit
14   dynamic gesture recognition unit

The first figure shows a block diagram of a gesture recognition system according to an embodiment of the present invention.
The second figure shows a flowchart of the steps performed by the candidate point detection unit of the first figure.
The third figure shows a flowchart of the steps performed by the static gesture recognition unit of the first figure.
The fourth figure illustrates a distance curve.
The fifth figure illustrates the gesture classification obtained according to the number of recognized extended fingers.
The sixth figure illustrates tracking multiple hands by matching between consecutive frames.
Figure 7A shows a natural user interface for drawing with one hand on a captured image.
Figure 7B shows the static gestures of Figure 7A being used to compose a dynamic gesture.

100  gesture recognition system
11   candidate point detection unit
12   static gesture recognition unit
13   multi-hand tracking unit
14   dynamic gesture recognition unit

Claims (15)

1. A gesture recognition system, comprising: a candidate point detection unit that receives an input image to generate a candidate point; a static gesture recognition unit that recognizes a static gesture according to the candidate point; a multi-hand tracking unit that tracks multiple hands by pairing between consecutive input images; and a dynamic gesture recognition unit that obtains accumulated momentum values according to tracking paths of the multi-hand tracking unit, thereby recognizing a dynamic gesture; wherein the candidate point detection unit performs the following steps: extracting features according to color, depth, and momentum to respectively generate a color reliability map, a depth reliability map, and a momentum reliability map; determining weights of the color, depth, and momentum according to an operating state; and using the weights to combine the color reliability map, the depth reliability map, and the momentum reliability map, thereby generating a hybrid reliability map to provide the candidate point.

2. The gesture recognition system of claim 1, wherein the color reliability map is generated according to the skin color of the input image.

3. The gesture recognition system of claim 1, wherein the depth reliability map is generated according to the depth of a hand in the input image.

4. The gesture recognition system of claim 3, wherein, in the depth reliability map, pixels within the depth range of the hand are assigned higher values.

5. The gesture recognition system of claim 1, wherein the momentum reliability map is generated according to the momentum of a series of input images.

6. The gesture recognition system of claim 5, wherein the momentum in the momentum reliability map is measured using the sum of absolute differences (SAD) between two input images.

7. The gesture recognition system of claim 1, wherein the operating state comprises an initial statement, momentum, whether a hand is close to a face, or a combination thereof.

8. The gesture recognition system of claim 1, wherein the static gesture recognition unit performs the following steps: segmenting a palm from the hand corresponding to the candidate point; recording the distances between the center of the palm and the edge of the palm to generate a distance curve, thereby recognizing a static gesture; and classifying a plurality of static gestures.

9. The gesture recognition system of claim 8, wherein the static gestures are classified according to the number of recognized extended fingers.

10. The gesture recognition system of claim 1, wherein, in the multi-hand tracking unit, if a tracked hand is unmatched because an object has left, the corresponding tracking path is deleted.

11. The gesture recognition system of claim 1, wherein, in the multi-hand tracking unit, if a tracked hand is unmatched because an object is occluded, an extrapolation technique is used to generate an expected tracked hand.

12. The gesture recognition system of claim 1, wherein, in the multi-hand tracking unit, if a tracked hand is unmatched because an object has entered, a new tracking path is generated.

13. The gesture recognition system of claim 1, wherein, when an unmatched tracked hand is found, feedback is sent from the multi-hand tracking unit to the candidate point detection unit.

14. The gesture recognition system of claim 1, wherein the recognized dynamic gesture is fed to a natural user interface to perform a predefined task.

15. The gesture recognition system of claim 14, wherein a user uses the recognized dynamic gesture to draw lines via the natural user interface.
TW103133272A 2014-09-25 2014-09-25 Gesture recognition system TWI539329B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW103133272A TWI539329B (en) 2014-09-25 2014-09-25 Gesture recognition system


Publications (2)

Publication Number Publication Date
TW201612698A TW201612698A (en) 2016-04-01
TWI539329B true TWI539329B (en) 2016-06-21

Family

ID=56360873


Country Status (1)

Country Link
TW (1) TWI539329B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI663575B (en) * 2018-03-26 2019-06-21 奇景光電股份有限公司 Method and electrical device for image motion detection



Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees