TWM617136U - Gesture control device - Google Patents

Gesture control device

Info

Publication number
TWM617136U
Authority
TW
Taiwan
Prior art keywords
gesture
image
unit
dimensional image
module
Prior art date
Application number
TW109210485U
Other languages
Chinese (zh)
Inventor
蔡明勳
Original Assignee
蔡明勳
雅匠科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 蔡明勳, 雅匠科技股份有限公司 filed Critical 蔡明勳
Priority to TW109210485U priority Critical patent/TWM617136U/en
Publication of TWM617136U publication Critical patent/TWM617136U/en


Abstract

A gesture control device includes an image processing module electrically connected to a camera unit. The image processing module receives the two-dimensional image captured by the camera unit for subsequent processing, including extracting a two-dimensional image of the hand, converting it to grayscale, and computing the feature points of the image. The image processing module is also electrically connected to an image analysis module, which receives the computed two-dimensional image information for further analysis, including labeling the computed image and drawing nodes at the labeled feature positions or computing additional nodes. Finally, a gesture judgment unit interprets the gesture and outputs the interpretation result.

Description

Gesture control device

The present utility model relates to a control device, and more particularly to a gesture control device with human-machine interaction functionality.

Traditional human-machine interfaces are the mouse, keyboard, and joystick. As technology advances rapidly, motion-sensing control has emerged as a new input method that makes human-machine interaction more convenient; its most common form is gesture recognition.

Because gestures are a common, intuitive, and convenient means of everyday communication, gesture recognition has gradually been applied in many fields, including human-machine interface design, medical rehabilitation, virtual reality, and game design.

Gesture information is recognized in two main ways: dynamic gestures and static gestures. Dynamic gesture information includes the hand's movement trajectory, position, and timing; static gesture information consists mainly of hand-shape changes. By analyzing the gesture information, different gestures trigger different human-machine interaction functions.

Common gesture recognition techniques use a depth camera to obtain three-dimensional images. Each three-dimensional image must be preprocessed, for example by binarization, background removal, and noise elimination, before the user's hand position and gesture can be extracted and analyzed from a sequence of images; the image coordinates of the hand position then control the movement of the on-screen cursor. However, preprocessing three-dimensional images takes considerable time, making cursor movement slower and less accurate than a mouse, and depth cameras are expensive, so current gesture recognition devices and techniques have difficulty lowering their cost threshold.

In view of these shortcomings, the main purpose of the present utility model is to provide a gesture control device and its control method that determine gesture changes by pairing a camera that captures two-dimensional images with an algorithm, thereby achieving human-machine interaction while reducing the cost of the gesture control device.

To achieve the above purpose, the present utility model mainly provides a gesture control device comprising an image processing module electrically connected to a camera unit. The image processing module receives the two-dimensional image captured by the camera unit for subsequent processing and further comprises an image processing unit and a feature calculation unit. After receiving the captured two-dimensional image, the image processing unit performs post-processing, including extracting a two-dimensional image of the hand and converting it to grayscale; the processed image is then input to the feature calculation unit, which computes the feature points of the image. The image processing module is also electrically connected to an image analysis module, which receives the computed two-dimensional image information for further analysis and comprises an image analysis unit and a gesture judgment unit. The image analysis unit receives the two-dimensional image information for analysis, labels the computed image, and draws nodes at the labeled feature positions or computes additional nodes; finally, the gesture judgment unit interprets the gesture and outputs the interpretation result.

To make the above and other purposes, features, and advantages of the present utility model clearer and easier to understand, preferred embodiments are described in detail below in conjunction with the accompanying drawings.

1: Image processing module

11: Image processing unit

12: Feature calculation unit

2: Camera unit

3: Image analysis module

31: Image analysis unit

32: Gesture judgment unit

4: Gesture training module

41: Gesture recording unit

42: Gesture training unit

Figure 1 is a system block diagram of the present utility model.

Figure 2 is a flowchart of the present utility model.

Figure 3 is a feature calculation flowchart of the present utility model.

Figure 4 is an embodiment diagram (1) of the present utility model.

Figure 5 is an embodiment diagram (2) of the present utility model.

Figure 6 is an embodiment diagram (3) of the present utility model.

Figure 7 is an embodiment diagram (4) of the present utility model.

First, the "feature points" described in this embodiment are points obtained through image detection and analysis; they may be joint positions, skin texture at the joints, fingernails, palm lines, or the contour around the wrist. The acquired feature points are compared with the joint points set during prior training to label the image key points at the joints. The pattern formed by the lines connecting the key points is then compared against the shapes of previously defined gestures to trigger the corresponding function.

Please refer to Figure 1, the system block diagram of the present utility model. The gesture control device mainly includes an image processing module 1 electrically connected to a camera unit 2. The image processing module 1 receives the two-dimensional image captured by the camera unit 2 for subsequent processing; the camera unit 2 is a camera capable of capturing two-dimensional images. The image processing module 1 further includes an image processing unit 11 and a feature calculation unit 12, which are electrically connected. After receiving the captured two-dimensional image, the image processing unit 11 performs post-processing, including extracting a two-dimensional image of the hand and converting it to grayscale. The processed image is then input to the feature calculation unit 12, which computes the feature points of the image (scale-invariant feature transform descriptors). The descriptor is computed as follows: for each keypoint, a 16×16 pixel patch is extracted and divided evenly into 4×4 cells; the gradient magnitudes and orientations in each cell are accumulated into an 8-bin histogram, so the 16 cells yield 16 histograms that can be merged into a 16×8 = 128-dimensional vector; finally, the vector is L2-normalized to obtain the feature representing that keypoint. The position of a feature point may be a two-dimensional coordinate (X, Y); after computational conversion, a keypoint's position becomes a three-dimensional coordinate (X, Y, Z). The conversion may compare a reference point in the image (for example, the face or an object in the background) and infer the Z value of each keypoint from the size ratio between the reference point and that keypoint. When acquiring the reference point, the Z value of each keypoint may also be obtained or corrected through hand movement.
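The size-ratio depth inference described above can be sketched as follows. This is an illustrative assumption of how the conversion might work under a pinhole-camera model; the function name and the use of relative depth units are not taken from the specification.

```python
def estimate_z(ref_apparent_size, ref_known_size, kp_apparent_size, kp_known_size):
    """Infer a keypoint's relative depth from size ratios against a reference object.

    Under a pinhole model, apparent_size is proportional to known_size / Z,
    so Z is proportional to known_size / apparent_size. Returning the ratio of
    the keypoint's depth to the reference's depth avoids needing the focal length.
    (Hypothetical sketch; not the specification's exact algorithm.)
    """
    z_ref = ref_known_size / ref_apparent_size   # proportional to reference depth
    z_kp = kp_known_size / kp_apparent_size      # proportional to keypoint depth
    return z_kp / z_ref                          # keypoint depth relative to reference
```

For example, a knuckle whose apparent size is twice as large relative to its physical size as the face's is inferred to be half as far away.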

Continuing with Figure 1, the image processing module 1 is also electrically connected to an image analysis module 3, which receives the computed two-dimensional image information for further analysis. The image analysis module 3 further includes an image analysis unit 31 and a gesture judgment unit 32, which are electrically connected. The image analysis unit 31 receives the two-dimensional image information for analysis, labels the computed image, and draws nodes at the labeled feature positions or computes additional nodes; finally, the gesture judgment unit 32 interprets the gesture and outputs the interpretation result.

Continuing with Figure 1, the image analysis module 3 is also electrically connected to a gesture training module 4, which further includes a gesture recording unit 41 and a gesture training unit 42, electrically connected to each other. The gesture recording unit 41 records gesture images and loads them into the gesture training unit 42, which performs gesture training and produces gesture training files; these files are then passed to the gesture judgment unit 32 as the basis for gesture judgment.
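What the recording and training units produce can be sketched minimally as follows, under the assumption that a "gesture training file" is simply a labeled set of averaged feature vectors serialized to JSON; the storage format and averaging step are hypothetical stand-ins, since the specification does not define them.

```python
import json
from statistics import fmean

def train_gestures(recordings):
    """Summarize recorded gestures into a training-file payload.

    recordings: {gesture_name: [feature_vector, ...]} where each feature
    vector is a list of floats (e.g. the 128-D descriptors). Each gesture is
    represented by the element-wise mean of its recorded vectors -- a
    hypothetical stand-in for the training step described in the text.
    """
    model = {}
    for name, vectors in recordings.items():
        model[name] = [fmean(dim) for dim in zip(*vectors)]  # element-wise mean
    return json.dumps(model)  # the "gesture training file" passed to unit 32
```

The judgment unit would then load this file and compare incoming feature vectors against the stored templates.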

Please refer to Figure 2, the flowchart of the gesture control method of the present utility model. The steps of the control method include capturing a two-dimensional hand image (S1) with a camera capable of capturing two-dimensional images; then computing the feature points of the two-dimensional hand image (S2), which includes converting the hand image to grayscale and performing feature point calculation. As shown in the feature calculation flowchart of Figure 3, the calculation further includes extracting a 16×16 pixel patch around each keypoint (S21), dividing the patch evenly into 4×4 cells (S22), accumulating the gradient magnitudes and orientations of each cell into an 8-bin histogram (S23), merging the 16 resulting histograms into 16×8 = 128-dimensional data (S24), and finally L2-normalizing the dimensional data (S25) to obtain the feature representing the keypoint.
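Steps S21 through S25 can be sketched with NumPy as below. This is a minimal illustration of the 128-dimensional descriptor layout only, not a full SIFT implementation (it omits Gaussian weighting, trilinear interpolation, and rotation normalization), and the function name is ours.

```python
import numpy as np

def descriptor(patch):
    """Build a 128-D SIFT-like descriptor from a 16x16 grayscale patch (S21-S25)."""
    assert patch.shape == (16, 16)                     # S21: 16x16 patch per keypoint
    gy, gx = np.gradient(patch.astype(float))          # per-pixel image gradients
    mag = np.hypot(gx, gy)                             # gradient magnitudes
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)        # orientations in [0, 2*pi)
    hist = np.zeros((4, 4, 8))                         # S22-S23: 4x4 cells, 8 bins each
    for i in range(16):
        for j in range(16):
            b = int(ang[i, j] / (2 * np.pi) * 8) % 8   # orientation bin index
            hist[i // 4, j // 4, b] += mag[i, j]       # accumulate into the cell's histogram
    vec = hist.ravel()                                 # S24: 16 cells x 8 bins = 128 dims
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec             # S25: L2 normalization
```

Each cell covers a 4×4 block of pixels, so the 16 cells tile the patch exactly, and the final vector has unit Euclidean length whenever the patch is not flat.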

Continuing with Figure 3, the steps of the control method further include labeling the feature points of the two-dimensional hand image (S3), in which the computed two-dimensional image is labeled and nodes are drawn at the labeled feature positions or other nodes are computed. Finally, the gesture of the two-dimensional hand image is judged from the labeled images (S4) by interpreting them against the gesture files already stored in the system, and the final interpretation result is output. As shown in the embodiment diagrams of Figures 4 to 7, when the system interprets different gestures, each keypoint produces a corresponding movement or establishes a corresponding gesture.
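Step S4's comparison against the stored gesture files could be sketched as a nearest-neighbor match over the feature vectors; the specification does not name the matching rule, so the Euclidean metric and the rejection threshold here are illustrative assumptions.

```python
import math

def judge_gesture(feature, gesture_files, threshold=0.8):
    """Return the stored gesture nearest to `feature`, or None if none is close (S4).

    feature: the observed feature vector; gesture_files: {name: template_vector},
    as loaded from the gesture training files. (Hypothetical matching rule.)
    """
    best_name, best_dist = None, float("inf")
    for name, template in gesture_files.items():
        d = math.dist(feature, template)          # Euclidean distance to the template
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```

A feature far from every template returns None, which the system could treat as "no recognized gesture" rather than triggering a function.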

The embodiments described above are preferred examples and should not limit the scope of the present utility model; any equivalent changes or modifications made in accordance with the claims and the contents of the specification shall fall within the scope covered by this patent.


Claims (2)

A gesture control device, comprising: an image processing module for receiving captured images for subsequent processing, the image processing module further comprising: an image processing unit that, after receiving the captured two-dimensional image, performs post-processing, including extracting a two-dimensional image of the hand and converting it to grayscale; and a feature calculation unit electrically connected to the image processing unit, wherein the processed two-dimensional image is input to the feature calculation unit, which computes the feature points of the image; a camera unit electrically connected to the image processing module and capable of capturing two-dimensional images; and an image analysis module electrically connected to the image processing module for receiving the computed two-dimensional image information for further analysis, the image analysis module further comprising: an image analysis unit for receiving the two-dimensional image information for analysis, labeling the computed image, and drawing nodes at the labeled feature positions or computing other nodes; and a gesture judgment unit electrically connected to the image analysis unit for interpreting the gesture image and outputting the interpretation result.
The gesture control device of claim 1, further comprising a gesture training module electrically connected to the image analysis module, the gesture training module further comprising a gesture recording unit and a gesture training unit electrically connected to each other, wherein the gesture recording unit records gesture images and loads them into the gesture training unit, which performs gesture training and produces gesture training files; these files are then passed to the gesture judgment unit as the basis for gesture judgment.
TW109210485U 2020-08-13 2020-08-13 Gesture control device TWM617136U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109210485U TWM617136U (en) 2020-08-13 2020-08-13 Gesture control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109210485U TWM617136U (en) 2020-08-13 2020-08-13 Gesture control device

Publications (1)

Publication Number Publication Date
TWM617136U true TWM617136U (en) 2021-09-21

Family

ID=78779317

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109210485U TWM617136U (en) 2020-08-13 2020-08-13 Gesture control device

Country Status (1)

Country Link
TW (1) TWM617136U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI775128B (en) * 2020-08-13 2022-08-21 蔡明勳 Gesture control device and control method thereof

Similar Documents

Publication Publication Date Title
CN110991319B (en) Hand key point detection method, gesture recognition method and related device
Nair et al. Hand gesture recognition system for physically challenged people using IOT
US20130335318A1 (en) Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers
Wen et al. A robust method of detecting hand gestures using depth sensors
TWI471815B (en) Gesture recognition device and method
Qiang et al. SqueezeNet and fusion network-based accurate fast fully convolutional network for hand detection and gesture recognition
TW201120681A (en) Method and system for operating electric apparatus
TWI571772B (en) Virtual mouse driving apparatus and virtual mouse simulation method
TW201017557A (en) Video based handwritten character input device and method thereof
WO2021098147A1 (en) Vr motion sensing data detection method and apparatus, computer device, and storage medium
CN107832736A (en) The recognition methods of real-time body's action and the identification device of real-time body's action
TWM617136U (en) Gesture control device
CA2806149A1 (en) Method and system for gesture-based human-machine interaction and computer-readable medium thereof
TWI775128B (en) Gesture control device and control method thereof
Sahoo et al. A user independent hand gesture recognition system using deep CNN feature fusion and machine learning technique
Ghodichor et al. Virtual mouse using hand gesture and color detection
Khan et al. Computer vision based mouse control using object detection and marker motion tracking
Dehankar et al. Using AEPI method for hand gesture recognition in varying background and blurred images
CN110991307B (en) Face recognition method, device, equipment and storage medium
TW202141349A (en) Image recognition method and device thereof and ai model training method and device thereof
JP6467994B2 (en) Image processing program, image processing apparatus, and image processing method
CN113961067A (en) Non-contact graffiti drawing method and recognition interaction system based on deep learning
TWI246662B (en) Face recognition system
TW201203131A (en) System and method for hand image recognition
Meng et al. Building smart cameras on mobile tablets for hand gesture recognition

Legal Events

Date Code Title Description
MM4K Annulment or lapse of a utility model due to non-payment of fees