CN112347951B - Gesture recognition method and device, storage medium and data glove


Info

Publication number
CN112347951B
Authority
CN
China
Prior art keywords
data
gesture
gesture recognition
input data
inputting
Prior art date
Legal status
Active
Application number
CN202011253584.5A
Other languages
Chinese (zh)
Other versions
CN112347951A (en)
Inventor
王勃然
姜京池
刘劼
Current Assignee
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202011253584.5A priority Critical patent/CN112347951B/en
Publication of CN112347951A publication Critical patent/CN112347951A/en
Application granted granted Critical
Publication of CN112347951B publication Critical patent/CN112347951B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a gesture recognition method, a gesture recognition device, a storage medium and a data glove. The gesture recognition method comprises the following steps: acquiring sensor data collected by each sensor of the data glove when the data glove completes the current action, wherein all the sensor data form one input data; performing feature extraction on the input data by a principal component analysis method to obtain second feature data; inputting the second feature data into a trained multi-class SVM classifier, and determining the gesture corresponding to the current action; when the trained multi-class SVM classifier cannot recognize the current action, preprocessing the input data to obtain preprocessed data; and inputting the preprocessed data into a trained gesture recognition model, and outputting the gesture corresponding to the current action, wherein the gesture recognition model is built based on a convolutional neural network and a long short-term memory recurrent network. The technical scheme of the invention can guarantee the accuracy of gesture recognition while improving the gesture recognition speed.

Description

Gesture recognition method and device, storage medium and data glove
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a gesture recognition method, a gesture recognition device, a storage medium and a data glove.
Background
Sign language is a hand-based language in which gestures that imitate images or syllables are combined, according to changes of the hand shape, into meanings or words; it is the means by which people who are hearing-impaired or unable to speak interact and exchange ideas. Recognizing gestures is therefore important for communicating with sign language users, and the following two methods are currently used to recognize gestures.
One method uses a camera to capture gesture actions and recognizes them by analyzing the captured images. However, the camera places high demands on lighting while shooting the gestures, and poor lighting conditions degrade the accuracy of gesture recognition.
The other method acquires electromyograms of the hand surface while the hand completes a gesture and recognizes the gesture by analyzing and processing the electromyograms. However, existing algorithms that recognize gestures from electromyograms are complex and inefficient.
Disclosure of Invention
The invention addresses the problem of how to balance the efficiency and the accuracy of gesture recognition.
In order to solve the above problems, the present invention provides a gesture recognition method, a gesture recognition device, a storage medium and a data glove.
In a first aspect, the present invention provides a gesture recognition method, including:
Acquiring sensor data acquired by each sensor of the data glove when the data glove finishes the current action, wherein all the sensor data form one input data;
performing feature extraction on the input data by adopting a principal component analysis method to obtain second feature data;
inputting the second characteristic data into a trained multi-class SVM classifier, and determining a gesture corresponding to the current action;
when the trained multi-class SVM classifier cannot recognize the current action, preprocessing the input data to obtain preprocessed data;
and inputting the preprocessed data into a trained gesture recognition model, and outputting a gesture corresponding to the current action, wherein the gesture recognition model is established based on a convolutional neural network and a long short-term memory recurrent network.
Further, before inputting the second feature data into the trained multi-class SVM classifier, the method includes:
respectively acquiring the input data of the data glove when different calibration actions are completed;
amplifying and filtering each input data respectively to obtain filtered input data;
performing feature extraction on all the filtered input data by adopting a principal component analysis method to obtain first feature data;
And training the multi-class SVM classifier by adopting the first characteristic data to obtain the trained multi-class SVM classifier.
Further, the feature extraction of all the filtered input data by using the principal component analysis method includes:
calculating the average value of all the filtered input data;
respectively determining differences between the filtered input data and the average value, and determining a covariance matrix according to all the differences;
calculating a characteristic value and a characteristic vector according to the covariance matrix, and determining a principal component matrix according to the characteristic vector;
and determining the first characteristic data according to the principal component matrix and the difference value.
Further, the calibrating actions are in one-to-one correspondence with the gesture templates, and the training the multi-class SVM classifier by using the first characteristic data comprises:
for any gesture template, taking the first characteristic data corresponding to the gesture template as a positive set, taking the first characteristic data except the positive set as a negative set, and taking the corresponding positive set and negative set as a training set;
inputting the training set into the multi-class SVM classifier, wherein the multi-class SVM classifier comprises a plurality of classification functions, each classification function respectively processes the training set and respectively outputs a first classification value;
Determining a maximum value in the first classification value and the classification function corresponding to the maximum value, and corresponding the classification function to the gesture template;
and processing the first characteristic data in sequence, and enabling the gesture templates to correspond to the classification functions one by one.
Further, inputting the second feature data into a trained multi-class SVM classifier, and determining the gesture corresponding to the current action includes:
inputting the second characteristic data into the trained multi-class SVM classifier, wherein each classification function respectively processes the second characteristic data and respectively outputs a second classification value;
and determining the maximum value and the next largest value in all the second classification values, comparing the maximum value and the next largest value in the second classification values with a preset threshold value respectively, and determining the gesture template corresponding to the classification function outputting the maximum value as the gesture corresponding to the current action when the maximum value is greater than or equal to the preset threshold value and the next largest value is smaller than the preset threshold value.
Further, the trained multi-class SVM classifier being unable to recognize the current action includes the case where both the maximum value and the next-largest value of the second classification values are greater than or equal to the preset threshold.
Further, before inputting the preprocessed data into the trained gesture recognition model, the method includes:
respectively acquiring the input data of the data glove when different calibration actions are completed, wherein each input data comprises all sensor data corresponding to one gesture template;
preprocessing all the input data respectively to obtain preprocessed data;
and constructing a gesture recognition model based on a convolutional neural network and a long short-term memory recurrent network, and training the gesture recognition model by adopting the preprocessed data to obtain the trained gesture recognition model.
Further, each input data includes all the sensor data corresponding to one calibration action, and preprocessing all the input data to obtain the preprocessed data includes:
for any input data, synchronizing all sensor data corresponding to the input data by adopting a time synchronization mechanism based on time slot channel hopping to obtain synchronized sensor data;
filtering the synchronized sensor data by using a Butterworth band-pass filter to obtain filtered sensor data;
And intercepting the filtered sensor data by adopting a sliding window to obtain a plurality of data segments, wherein all the data segments form the preprocessed data.
Further, training the gesture recognition model using the preprocessed data includes a forward propagation step that includes:
inputting the preprocessed data into the gesture recognition model, and outputting the probability that the calibration action is each gesture template;
and determining the gesture template with the highest probability as a predicted gesture.
Further, the gesture recognition model includes two one-dimensional convolution layers, a maximum pooling layer, a flattening layer, an LSTM layer, two full connection layers, a Softmax layer and an output layer which are sequentially connected, inputting the preprocessed data into the gesture recognition model, and outputting the probability that the calibration action is each gesture template includes:
inputting the preprocessed data into a first one-dimensional convolution layer, and carrying out feature extraction on the preprocessed data by two one-dimensional convolution layers to obtain third feature data, wherein all the third feature data form a feature map;
inputting the feature map into the maximum pooling layer, and extracting features of each sub-region of the feature map to obtain fourth feature data;
Inputting the fourth characteristic data into the flattening layer, shaping the fourth characteristic data into a one-dimensional vector, inputting the one-dimensional vector into the LSTM layer for processing, and outputting processed data;
inputting the processed data into the two full-connection layers and the Softmax layer in sequence, and outputting the probability that the calibration action is each gesture template;
and the output layer outputs the gesture template with the highest probability, and the gesture template with the highest probability is the predicted gesture.
Further, the gesture recognition model further comprises a plurality of dropout layers, wherein two dropout layers are arranged between the second one-dimensional convolution layer and the max pooling layer, and two dropout layers are arranged between the LSTM layer and the first fully connected layer.
Further, training the gesture recognition model using the preprocessed data further includes:
a back propagation step, comprising: calculating a cross-entropy loss according to the calibration action and the predicted gesture, and optimizing the gesture recognition model according to the loss;
and cyclically repeating the forward propagation step and the back propagation step until the loss no longer decreases, obtaining a stable gesture recognition model.
In a second aspect, the present invention provides a gesture recognition apparatus, comprising:
the acquisition module is used for acquiring sensor data acquired by each sensor of the data glove when the data glove finishes the current action, and all the sensor data form one input data;
the feature extraction module is used for carrying out feature extraction on the input data by adopting a principal component analysis method to obtain second feature data;
the first processing module is used for inputting the second characteristic data into a trained multi-class SVM classifier and determining a gesture corresponding to the current action;
the preprocessing module is used for preprocessing the input data to obtain preprocessed data when the trained multi-class SVM classifier cannot recognize the current action;
and the second processing module is used for inputting the preprocessed data into a trained gesture recognition model and outputting a gesture corresponding to the current action, wherein the gesture recognition model is established based on a convolutional neural network and a long short-term memory recurrent network.
In a third aspect, the present invention provides a gesture recognition apparatus comprising a memory and a processor;
the memory is used for storing a computer program;
The processor is configured to implement the gesture recognition method as described above when executing the computer program.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a gesture recognition method as described above.
In a fifth aspect, the present invention provides a data glove comprising a glove, a plurality of sensors and a gesture recognition device as described above, wherein a plurality of the sensors are respectively electrically connected with the gesture recognition device, and the plurality of sensors are respectively arranged on the glove and are suitable for detecting movement data of each finger.
Further, the sensor comprises a plurality of fully flexible capacitive sensors and/or a plurality of piezoresistive sensors;
the plurality of fully flexible capacitive sensors are respectively arranged at the back of each finger of the glove and at the back of the thumb web (the area between the thumb and the index finger), and are respectively used for measuring movement data during flexion or extension of the fingers and movement data during lateral movement of the thumb;
the piezoresistive sensors are respectively arranged at joints on the inner sides of the fingers of the glove and between two adjacent fingers and are respectively used for measuring motion data of each finger joint and motion data of abduction or adduction of the fingers.
The gesture recognition method, the gesture recognition device, the storage medium and the data glove have the following beneficial effects: sensor data collected by each sensor when the data glove completes the current action are acquired, and all the sensor data corresponding to the current action form one input data. Feature extraction is performed on the input data by a principal component analysis method to obtain feature data, which are input into a trained multi-class SVM classifier to recognize the gesture corresponding to the current action; the multi-class SVM classifier is simple, efficient and fast in recognition. When the multi-class SVM classifier has difficulty recognizing the current action, the input data corresponding to the current action are preprocessed, the preprocessed data are input into a trained gesture recognition model, and the gesture corresponding to the current action is output; the gesture recognition model is constructed based on a deep learning model and can therefore recognize the gesture accurately. In this technical scheme, the multi-class SVM classifier recognizes gestures at high speed, and when it has difficulty, the gesture recognition model takes over, ensuring recognition accuracy while improving recognition speed.
Drawings
FIG. 1 is a schematic view of the back structure of a data glove according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a front structure of a pair of data glove according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a circuit connection of a data glove according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of sensor signals under different gestures according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a training method of a multi-class SVM classifier according to an embodiment of the invention;
FIG. 6 is a flowchart of a gesture recognition model training method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a gesture recognition model according to an embodiment of the present invention;
FIG. 8 is a flow chart of a gesture recognition method according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a gesture recognition device according to an embodiment of the present invention.
Reference numerals illustrate:
10 - glove; 20 - fully flexible capacitive sensor; 30 - piezoresistive sensor; 40 - base.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein.
As shown in fig. 1 and 2, the data glove provided by the present invention includes a glove 10, a plurality of sensors and a gesture recognition device as described below, wherein the plurality of sensors are respectively electrically connected with the gesture recognition device, and the plurality of sensors are respectively disposed on the glove 10 and are suitable for detecting movement data of each finger.
Preferably, the sensor comprises a plurality of fully flexible capacitive sensors 20 and/or a plurality of piezoresistive sensors 30;
the plurality of fully flexible capacitive sensors 20 are respectively arranged at the back of each finger of the glove 10 and at the back of the thumb web, and are respectively used for measuring movement data during flexion or extension of the fingers and movement data during lateral movement of the thumb;
a plurality of piezoresistive sensors 30 are respectively disposed at joints inside respective fingers of the glove 10 and between two adjacent fingers for measuring movement data of each finger joint and movement data of abduction or adduction of the fingers, respectively.
Specifically, as shown in fig. 3, the output end of the piezoresistive sensor 30 or the fully flexible capacitive sensor 20 at each finger stall on the data glove is electrically connected with the input end of the amplifier, the output end of the amplifier is electrically connected with the input end of the filter, the output end of the filter is electrically connected with the input end of the AD conversion processor, and the output end of the AD conversion processor is electrically connected with the communication device. The communication device is used for transmitting the converted digital signals to the upper computer and the like for processing, and recognizing gestures.
The amplifier, filter, AD conversion processor, communication device, etc. may be integrated on a single circuit board, which is mounted via the base 40 at any location convenient for wearing and activity, for example on the back of the wrist, the back of the palm, or the forearm of the data glove; the base 40 may be manufactured by 3D printing.
The voltage signal changes when the piezoresistive sensor 30 detects pressure, or when the fully flexible capacitive sensor 20 detects stretching. As shown in fig. 4, when fully flexible capacitive sensors 20 are adopted on the data glove and a plurality of fingers move simultaneously, the fully flexible capacitive sensor 20 on each fingerstall records the movement data of its finger; the movement data of all fingers under one gesture are combined into one input data, and the data acquisition channels of the five fingers are mutually independent. When a finger moves, the voltage signal detected by the sensor changes and produces a voltage peak. For example, in fig. 4, the thumb does not move for gesture "a", so no voltage peak is generated, while the other four fingers bend and each generates a voltage peak; for gesture "3", the thumb, index finger and middle finger do not move and generate no voltage peaks, while the ring finger and little finger bend and each generates a voltage peak.
As shown in fig. 5, the training method for a multi-class SVM (Support Vector Machine) classifier provided by the embodiment of the invention includes:
step 110, respectively acquiring the input data of the data glove when different calibration actions are completed;
step 120, amplifying and filtering each input data to obtain filtered input data;
step 130, performing feature extraction on all the filtered input data by adopting a principal component analysis method to obtain the first feature data;
and 140, training the multi-class SVM classifier by using the first characteristic data to obtain the trained multi-class SVM classifier.
In this embodiment, the input data is amplified and filtered, so that noise data can be reduced. The feature data in the input data corresponding to each gesture template is extracted by adopting a principal component analysis method and is used for training the multi-class SVM classifier, so that the training speed can be improved. The trained multi-class SVM classifier can classify gestures and recognize the gestures.
Preferably, the feature extraction of all the filtered input data by using a principal component analysis method includes:
and calculating the average value of all the filtered input data.
Specifically, the filtered input data corresponding to the gesture templates form a set $S = \{s_1, s_2, s_3, \dots, s_n\}$, where $s_i$ is the filtered input data corresponding to the $i$-th gesture template. The average value of all the filtered input data is calculated using a first formula:

$$S_{avg} = \frac{1}{n}\sum_{i=1}^{n} s_i,$$

where $n$ is the number of gesture templates and $S_{avg}$ is the average of all the filtered input data.
And respectively determining the difference value between each filtered input data and the average value, and determining a covariance matrix according to all the difference values.
Specifically, the difference between each filtered input data and the average value is determined using a second formula:

$$\delta_i = s_i - S_{avg},$$

where $\delta_i$ is the difference between the filtered input data corresponding to the $i$-th gesture template and the average value.
The covariance matrix is calculated using a third formula:

$$M = \frac{1}{n}\sum_{i=1}^{n} \delta_i \delta_i^{T},$$

where $M$ is the covariance matrix and $\delta_i^{T}$ is the transpose of the difference vector $\delta_i$.
Calculating a characteristic value and a characteristic vector according to the covariance matrix, and determining a principal component matrix according to the characteristic vector;
specifically, the eigenvalues and eigenvectors are calculated using a fourth formula comprising:
M×v i =λ i ×v i (i=1,2,3...k),
Wherein (lambda) 123 ...λ k ) Respectively the characteristic values, (v) 1 ,v 2 ,v 3 ...v k ) As a feature vector, the principal component matrix is v= { V 1 ,v 2 ,v 3 ...v k }。
And determining the first characteristic data according to the principal component matrix and the difference value.
Specifically, the input data is projected onto the principal component matrix using a fifth formula:

$$y_i = V^{T} \delta_i,$$

where $y_i$ is the feature data corresponding to the difference vector $\delta_i$.
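As a concrete illustration of the five formulas above, here is a minimal NumPy sketch of the principal component extraction. The function name, the stacking of the filtered inputs as rows of `S`, and the choice to keep the `k` components with the largest eigenvalues are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np

def pca_features(S, k):
    # S: (n, d) array, one filtered input-data vector per gesture template.
    # k: number of principal components to keep.
    n = S.shape[0]
    S_avg = S.mean(axis=0)              # first formula: average of all inputs
    delta = S - S_avg                   # second formula: differences delta_i
    M = (delta.T @ delta) / n           # third formula: covariance matrix
    lam, vecs = np.linalg.eigh(M)       # fourth formula: eigen-decomposition
    order = np.argsort(lam)[::-1][:k]   # keep the k largest eigenvalues
    V = vecs[:, order]                  # principal component matrix V
    Y = delta @ V                       # fifth formula: rows are y_i = V^T delta_i
    return Y, S_avg, V
```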
Preferably, the calibrating actions are in one-to-one correspondence with the gesture templates, and the training the multi-class SVM classifier by using the first feature data includes:
for any gesture template, taking the first characteristic data corresponding to the gesture template as a positive set, taking the first characteristic data except the positive set as a negative set, and taking the corresponding positive set and negative set as a training set;
inputting the training set into the multi-class SVM classifier, wherein the multi-class SVM classifier comprises a plurality of classification functions, each classification function respectively processes the training set and respectively outputs a first classification value;
determining a maximum value in the first classification value and the classification function corresponding to the maximum value, and corresponding the classification function to the gesture template;
and processing the first characteristic data in sequence, and enabling the gesture templates to correspond to the classification functions one by one.
Specifically, the multi-class SVM classifier adopts a one-against-all support vector machine scheme: input data corresponding to the gesture templates are acquired to train the classifier, which comprises a plurality of classification functions; one classification function is constructed for each gesture template, so there are as many classification functions as gesture templates. Each classification function of the multi-class SVM classifier can distinguish its corresponding gesture template from the other gesture templates, and the classification functions are in one-to-one correspondence with the gesture templates. When the trained multi-class SVM classifier is used to recognize gestures, once the classification function corresponding to the hand action is determined, the gesture template can be rapidly determined and the gesture corresponding to the hand action recognized; the method is simple, efficient and fast.
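A minimal sketch of such a one-against-all classifier follows, assuming scikit-learn's `SVC` as the underlying binary support vector machine; the class and method names are illustrative, and the signed distance returned by `decision_function` stands in for the classification value.

```python
import numpy as np
from sklearn.svm import SVC

class OneVsAllGestureSVM:
    # One binary classification function per gesture template.
    def fit(self, Y, labels):
        # Y: (n_samples, k) first feature data; labels: gesture-template ids.
        self.classifiers = {}
        for t in np.unique(labels):
            positive = (labels == t).astype(int)  # positive set vs. negative set
            clf = SVC(kernel="linear")
            clf.fit(Y, positive)
            self.classifiers[t] = clf
        return self

    def classification_values(self, y):
        # One classification value (signed distance) per gesture template.
        return {t: clf.decision_function(y.reshape(1, -1))[0]
                for t, clf in self.classifiers.items()}
```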
As shown in fig. 6, a training method for a gesture recognition model provided by an embodiment of the present invention includes:
step 210, respectively acquiring the input data of the data glove when different calibration actions are completed, wherein each input data comprises all sensor data corresponding to one gesture template;
step 220, preprocessing all the input data respectively to obtain preprocessed data;
and 230, constructing a gesture recognition model based on a convolutional neural network and a long short-term memory recurrent network, and training the gesture recognition model by adopting the preprocessed data to obtain the trained gesture recognition model.
In this embodiment, the input data of the data glove when the calibration actions corresponding to the different gesture templates are completed are acquired, and the gesture recognition model is trained with these input data. Because the gesture recognition model is built on a convolutional neural network and a long short-term memory recurrent network, relatively little training data is required, and recognizing gestures with this deep-learning-based model yields high accuracy.
Preferably, each input data includes all sensor data corresponding to one calibration action, and the preprocessing all the input data includes:
for any input data, synchronizing all sensor data corresponding to the input data by adopting a time synchronization mechanism based on time-slotted channel hopping (TSCH) to obtain synchronized sensor data;
and filtering the synchronized sensor data by adopting a Butterworth band-pass filter to obtain filtered sensor data.
Specifically, since human hand motion frequencies are generally below 10 Hz, after the data glove acquires the sensor data, the data are filtered with a Butterworth band-pass filter whose pass band is 0.5 Hz to 10 Hz, which removes uncorrelated energy. Moreover, because of strong noise in the biometric measurements and errors from loose wearing and folding of the data glove, the acquired sensor data for the same activity may show low similarity, with an average Pearson correlation coefficient between sensor data of the same activity of about 0.2-0.4. The Butterworth band-pass filter increases the similarity by removing out-of-band noise, and the Pearson correlation for the same activity can rise to 0.7-0.8.
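A minimal sketch of this filtering step, assuming SciPy's `butter` and `filtfilt` and the 128 Hz sampling rate used later in this embodiment; the filter order of 4 is an assumption, since the patent fixes only the 0.5 Hz to 10 Hz pass band.

```python
from scipy.signal import butter, filtfilt

def bandpass_filter(x, fs=128.0, low=0.5, high=10.0, order=4):
    # x: 1-D array of synchronized samples from one sensor channel.
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    return filtfilt(b, a, x)  # zero-phase filtering avoids phase distortion
```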
And intercepting the filtered sensor data by adopting a sliding window to obtain a plurality of data segments, wherein all the data segments form the preprocessed data.
Specifically, the filtered sensor data may be intercepted by a sliding window with a fixed length of 4 seconds, with a 50% overlap between two adjacent sliding windows. If the sampling frequency of the acquired sensor data is 128 Hz, each window contains 4 s × 128 Hz = 512 samples; combined with this window design, the acquired time-series data are reshaped into a 16×32 (= 512 samples) two-dimensional matrix, which is used as the input of the gesture recognition model.
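The windowing step can be sketched as follows; the helper name and the row-major 16×32 reshape are assumptions consistent with the numbers above.

```python
import numpy as np

def sliding_windows(x, fs=128, win_seconds=4.0, overlap=0.5):
    # Cut one filtered channel into 4 s windows with 50% overlap,
    # reshaping each 512-sample window to 16x32 for the model input.
    win = int(win_seconds * fs)          # 4 s * 128 Hz = 512 samples
    step = int(win * (1 - overlap))      # 256-sample hop for 50% overlap
    segments = [x[i:i + win].reshape(16, 32)
                for i in range(0, len(x) - win + 1, step)]
    return np.stack(segments)            # (n_windows, 16, 32)
```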
Preferably, the training the gesture recognition model using the preprocessed data includes a forward propagation step, the forward propagation step including:
inputting the preprocessed data into the gesture recognition model, and outputting the probability that the calibration action is each gesture template;
and determining the gesture template with the highest probability as a predicted gesture.
Preferably, as shown in fig. 7, the gesture recognition model includes two one-dimensional convolution layers (Conv1D), a max pooling layer (Max Pooling), a flattening layer (Flatten Layer), an LSTM (Long Short-Term Memory) layer, two fully connected layers (Fully Connected Layer), a Softmax layer and an output layer (Output Layer) which are sequentially connected. Inputting the preprocessed data into the gesture recognition model and outputting the probability that the calibration action is each gesture template includes:
Inputting the preprocessed data into a first one of the one-dimensional convolution layers, and extracting features of the preprocessed data by the two one-dimensional convolution layers to obtain third feature data, wherein all the third feature data form a feature map.
Specifically, the preprocessed data is reshaped from one-dimensional time-series data into a two-dimensional matrix to meet the input size requirement of a one-dimensional convolution layer under the TensorFlow Keras (machine learning framework) computing framework, where one dimension is the time step and the other dimension is the features at each time step. Adopting two one-dimensional convolution layers improves the robustness of the extracted features. A one-dimensional convolution layer is a variant of the CNN (Convolutional Neural Network) dedicated to processing sequence and time-series data. In a one-dimensional convolution layer, the convolution filter is shifted only along the time direction of the data, so the layer can derive features from fixed-length segments of data. When applied to gesture recognition, CNNs have two advantages over other models: local dependency, meaning that nearby signals are likely to be correlated, and scale invariance, meaning that features are invariant to the scale of steps or frequencies.
And inputting the feature map into the maximum pooling layer, and extracting features of each sub-region of the feature map to obtain fourth feature data.
Specifically, after the convolution is completed, the maximum pooling layer is applied to extract the most important features from each region in the feature map output by the one-dimensional convolution layer, so that the number of features can be reduced, and the training process is quickened.
And inputting the fourth characteristic data into the flattening layer, shaping the fourth characteristic data into a one-dimensional vector, inputting the one-dimensional vector into the LSTM layer for processing, and outputting the processed data.
Specifically, since the output processed by the one-dimensional convolution layers and the max pooling layer is a two-dimensional matrix while the LSTM layer requires a one-dimensional vector as input, the flattening layer is adopted to flatten the fourth feature data output by the max pooling layer into a one-dimensional vector, which is input into the LSTM layer as one-dimensional time-series data. The flattening layer can be regarded as a bridge between the CNN layers and the LSTM layer, converting two-dimensional data into one-dimensional data and feeding the LSTM layer without losing information.
And sequentially inputting the processed data into the two full-connection layers and the Softmax layer, and outputting the probability that the calibration action is each gesture template.
Specifically, a rectified linear unit (ReLU) activation is adopted in the first fully connected layer and a Softmax activation is adopted in the second fully connected layer, which outputs a category label, i.e., a gesture. ReLU is used as an activation function in deep learning models because it converges quickly and mitigates the vanishing-gradient problem: its gradient does not saturate, which greatly accelerates the convergence of gradient descent, and its gradient of 0 or 1 alleviates gradient vanishing.
And the output layer outputs the gesture template with the highest probability, and the gesture template with the highest probability is the predicted gesture.
Specifically, the Softmax layer is usually the last layer of a model and is a commonly used activation in classification problems. It outputs, for each class label, the probability that the action corresponds to that gesture template, and the probabilities of all class labels sum to 1; the class label with the highest probability is the model's predicted class label, i.e., the predicted gesture.
Preferably, the gesture recognition model further comprises a plurality of dropout layers (Dropout Layers), wherein two dropout layers are arranged between the second one-dimensional convolution layer and the max pooling layer, and two dropout layers are arranged between the LSTM layer and the first fully connected layer.
In particular, training a neural network with a relatively small data set can cause overfitting to the training data, because the model may learn the statistical noise in the training data, which leads to poor performance when the model is tested or evaluated on new data. Therefore, to prevent overfitting and reduce the generalization error, dropout layers are introduced into the deep learning framework so that the model learns robust features. The dropout rate of the dropout layers may be set to 0.5, meaning that 50% of the input units are randomly selected and set to zero.
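Under the TensorFlow Keras framework mentioned above, the FIG. 7 architecture can be sketched as follows. The filter counts, kernel sizes, LSTM units, dense-layer width, and the use of a single dropout layer at each of the two positions are illustrative assumptions; the patent fixes the layer sequence, not these hyperparameters. The flattened vector is fed to the LSTM here by reshaping it into a one-feature sequence.

```python
from tensorflow.keras import layers, models

def build_gesture_model(n_templates, time_steps=16, features=32):
    # Minimal sketch of the FIG. 7 CNN-LSTM gesture recognition model.
    return models.Sequential([
        layers.Input(shape=(time_steps, features)),
        layers.Conv1D(64, 3, activation="relu"),  # first one-dimensional convolution
        layers.Conv1D(64, 3, activation="relu"),  # second one-dimensional convolution
        layers.Dropout(0.5),                      # dropout between Conv1D and pooling
        layers.MaxPooling1D(pool_size=2),         # max pooling layer
        layers.Flatten(),                         # flattening layer
        layers.Reshape((-1, 1)),                  # flat vector as a 1-feature sequence
        layers.LSTM(64),                          # LSTM layer
        layers.Dropout(0.5),                      # dropout before the dense layers
        layers.Dense(128, activation="relu"),     # first fully connected layer (ReLU)
        layers.Dense(n_templates, activation="softmax"),  # second FC layer + Softmax
    ])
```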
Preferably, the training the gesture recognition model using the preprocessed data further includes:
a back propagation step, comprising: calculating a cross-entropy loss according to the calibration action and the predicted gesture, and optimizing the gesture recognition model according to the loss;
and cyclically repeating the forward propagation step and the back propagation step until the loss no longer decreases, obtaining a stable gesture recognition model.
Specifically, the loss drives the optimization of the gesture recognition model's parameters; after repeated iterative optimization, when the loss no longer decreases, the gesture recognition model has reached a stable state.
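A sketch of this training procedure with Keras, which performs the forward and back propagation steps internally. The placeholder data, the Adam optimizer, and the early-stopping patience are assumptions; the patent specifies only the cross-entropy loss and the stop-when-the-loss-no-longer-decreases criterion.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: replace with the preprocessed windows and their
# calibration-action labels from the steps above.
windows = np.random.rand(200, 16, 32).astype("float32")
template_ids = np.random.randint(0, 10, size=200)

model = build_gesture_model(n_templates=10)            # from the sketch above
model.compile(optimizer="adam",                        # optimizer is an assumption
              loss="sparse_categorical_crossentropy",  # cross-entropy loss
              metrics=["accuracy"])

# Forward and back propagation repeat until the loss no longer decreases.
stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=5,
                                        restore_best_weights=True)
model.fit(windows, template_ids, epochs=200, batch_size=32, callbacks=[stop])
```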
As shown in fig. 8, a gesture recognition method provided by an embodiment of the present invention includes:
step 310, acquiring sensor data acquired by each sensor of the data glove when the data glove completes the current action, wherein all the sensor data form one input data;
step 320, performing feature extraction on the input data by adopting a principal component analysis method to obtain second feature data;
and 330, inputting the second characteristic data into a trained multi-class SVM classifier, and determining the gesture corresponding to the current action.
Specifically, the second characteristic data is input into the trained multi-class SVM classifier, which comprises a plurality of classification functions; each classification function processes the second characteristic data respectively and outputs a second classification value, and the classification functions are in one-to-one correspondence with the gesture templates.
And determining the maximum value and the next maximum value in all the second classification values, comparing the maximum value and the next maximum value with a preset threshold respectively, and determining the gesture template corresponding to the classification function outputting the maximum value as the gesture corresponding to the current action when the maximum value is greater than or equal to the preset threshold and the next maximum value is smaller than the preset threshold.
And 340, preprocessing the input data to obtain preprocessed data when the trained multi-class SVM classifier cannot recognize the current action.
Specifically, when both the maximum value and the next-maximum value are greater than or equal to the preset threshold, preprocessing the input data to obtain preprocessed data.
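The two-stage decision can be sketched as follows; the function name is illustrative, and returning `None` signals the fall-back to preprocessing and the CNN-LSTM model. The patent explicitly defines only the two cases shown (unambiguous acceptance, and both values at or above the threshold); any other case is treated here as unrecognized as well.

```python
import numpy as np

def svm_stage_decision(values, threshold):
    # values: dict of second classification values, one per gesture template.
    ids = list(values)
    v = np.array([values[t] for t in ids])
    best, second = np.argsort(v)[::-1][:2]
    if v[best] >= threshold and v[second] < threshold:
        return ids[best]   # gesture template of the maximum classification value
    return None            # ambiguous or unrecognized: use the deep model instead
```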
And 350, inputting the preprocessed data into a trained gesture recognition model, and outputting a gesture corresponding to the current action, wherein the gesture recognition model is established based on a convolutional neural network and a long short-term memory recurrent network.
In this embodiment, sensor data collected by each sensor when the data glove completes the current action are acquired, and all the sensor data corresponding to the current action form one input data. Feature extraction is performed on the input data by a principal component analysis method to obtain feature data, which are input into a trained multi-class SVM classifier to recognize the gesture corresponding to the current action; the multi-class SVM classifier is simple, efficient and fast in recognition. When the multi-class SVM classifier has difficulty recognizing the current action, the input data corresponding to the current action are preprocessed, the preprocessed data are input into a trained gesture recognition model, and the gesture corresponding to the current action is output; the gesture recognition model is constructed based on a deep learning model and can therefore recognize the gesture accurately. In this technical scheme, the multi-class SVM classifier recognizes gestures at high speed, and when it has difficulty, the gesture recognition model takes over, ensuring recognition accuracy while improving recognition speed.
As shown in fig. 9, a gesture recognition apparatus provided in an embodiment of the present invention includes:
the acquisition module is used for acquiring sensor data acquired by each sensor of the data glove when the data glove finishes the current action, and all the sensor data form one input data;
the feature extraction module is used for carrying out feature extraction on the input data by adopting a principal component analysis method to obtain second feature data;
the first processing module is used for inputting the second characteristic data into a trained multi-class SVM classifier and determining a gesture corresponding to the current action;
the preprocessing module is used for preprocessing the input data to obtain preprocessed data when the trained multi-class SVM classifier cannot recognize the current action;
and the second processing module is used for inputting the preprocessed data into a trained gesture recognition model and outputting a gesture corresponding to the current action, wherein the gesture recognition model is established based on a convolutional neural network and a long short-term memory recurrent network.
Another embodiment of the present invention provides a gesture recognition apparatus including a memory and a processor; the memory is used for storing a computer program; the processor is configured to implement the gesture recognition method as described above when executing the computer program. The device may be a computer, a server, etc.
A further embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a gesture recognition method as described above.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like. In this application, the units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Although the present disclosure is disclosed above, the scope of the present disclosure is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the disclosure, and these changes and modifications will fall within the scope of the disclosure.

Claims (13)

1. A method of gesture recognition, comprising:
acquiring sensor data acquired by each sensor of the data glove when the data glove finishes the current action, wherein all the sensor data form one input data;
performing feature extraction on the input data by adopting a principal component analysis method to obtain second feature data;
inputting the second characteristic data into a trained multi-class SVM classifier, and determining a gesture corresponding to the current action;
when the trained multi-class SVM classifier cannot recognize the current action, preprocessing the input data to obtain preprocessed data;
inputting the preprocessed data into a trained gesture recognition model, and outputting a gesture corresponding to the current action, wherein the gesture recognition model is established based on a convolutional neural network and a long short-term memory recurrent network;
Before the second characteristic data is input into the trained multi-class SVM classifier, the method comprises the following steps: respectively acquiring the input data of the data glove when different calibration actions are completed; amplifying and filtering each input data respectively to obtain filtered input data; performing feature extraction on all the filtered input data by adopting a principal component analysis method to obtain first feature data; training a multi-class SVM classifier by adopting the first characteristic data to obtain the trained multi-class SVM classifier;
the feature extraction of all the filtered input data by adopting the principal component analysis method comprises the following steps: calculating the average value of all the filtered input data; respectively determining differences between the filtered input data and the average value, and determining a covariance matrix according to all the differences; calculating a characteristic value and a characteristic vector according to the covariance matrix, and determining a principal component matrix according to the characteristic vector; determining the first feature data from the principal component matrix and the difference value;
the calibrating actions are in one-to-one correspondence with the gesture templates, and the training the multi-class SVM classifier by adopting the first characteristic data comprises the following steps: for any gesture template, taking the first characteristic data corresponding to the gesture template as a positive set, taking the first characteristic data except the positive set as a negative set, and taking the corresponding positive set and negative set as a training set; inputting the training set into the multi-class SVM classifier, wherein the multi-class SVM classifier comprises a plurality of classification functions, each classification function respectively processes the training set and respectively outputs a first classification value; determining a maximum value in the first classification value and the classification function corresponding to the maximum value, and corresponding the classification function to the gesture template; processing the first characteristic data in sequence, and enabling the gesture templates to correspond to the classification functions one by one;
Inputting the second characteristic data into a trained multi-class SVM classifier, wherein determining the gesture corresponding to the current action comprises: inputting the second characteristic data into the trained multi-class SVM classifier, wherein each classification function respectively processes the second characteristic data and respectively outputs a second classification value; and determining the maximum value and the next largest value in all the second classification values, comparing the maximum value and the next largest value in the second classification values with a preset threshold value respectively, and determining the gesture template corresponding to the classification function outputting the maximum value as the gesture corresponding to the current action when the maximum value is greater than or equal to the preset threshold value and the next largest value is smaller than the preset threshold value.
2. The gesture recognition method of claim 1, wherein the trained multi-class SVM classifier being unable to recognize the current action includes both the maximum value and the next-largest value of the second classification values being greater than or equal to the preset threshold.
3. The gesture recognition method according to claim 1 or 2, wherein before inputting the preprocessed data into the trained gesture recognition model, comprising:
Respectively acquiring the input data of the data glove when different calibration actions are completed;
preprocessing all the input data respectively to obtain preprocessed data;
and constructing a gesture recognition model based on a convolutional neural network and a long short-term memory recurrent network, and training the gesture recognition model by adopting the preprocessed data to obtain the trained gesture recognition model.
4. A gesture recognition method according to claim 3, wherein each of the input data includes all of the sensor data corresponding to one of the calibration actions, the preprocessing of all of the input data, respectively, includes:
for any input data, synchronizing all sensor data corresponding to the input data by adopting a time synchronization mechanism based on time slot channel hopping to obtain synchronized sensor data;
filtering the synchronized sensor data by using a Butterworth band-pass filter to obtain filtered sensor data;
and intercepting the filtered sensor data by adopting a sliding window to obtain a plurality of data segments, wherein all the data segments form the preprocessed data.
5. The method of claim 4, wherein training the gesture recognition model using the preprocessed data comprises a forward propagating step that includes:
inputting the preprocessed data into the gesture recognition model, and outputting the probability that the calibration action is each gesture template;
and determining the gesture template with the highest probability as a predicted gesture.
6. The gesture recognition method of claim 5, wherein the gesture recognition model comprises two one-dimensional convolution layers, a max pooling layer, a flattening layer, an LSTM layer, two full connection layers, a Softmax layer, and an output layer connected in sequence, the inputting the preprocessed data into the gesture recognition model, and outputting the probability that the calibration action is each gesture template comprises:
inputting the preprocessed data into a first one-dimensional convolution layer, and carrying out feature extraction on the preprocessed data by two one-dimensional convolution layers to obtain third feature data, wherein all the third feature data form a feature map;
inputting the feature map into the maximum pooling layer, and extracting features of each sub-region of the feature map to obtain fourth feature data;
Inputting the fourth characteristic data into the flattening layer, shaping the fourth characteristic data into a one-dimensional vector, inputting the one-dimensional vector into the LSTM layer for processing, and outputting processed data;
inputting the processed data into the two full-connection layers and the Softmax layer in sequence, and outputting the probability that the calibration action is each gesture template;
and the output layer outputs the gesture template with the highest probability, and the gesture template with the highest probability is the predicted gesture.
7. The method of claim 6, wherein the gesture recognition model further comprises a plurality of dropout layers, wherein two of the dropout layers are disposed between a second one of the one-dimensional convolution layers and the max pooling layer, and two of the dropout layers are disposed between the LSTM layer and a first one of the fully-connected layers.
8. The method of claim 7, wherein training the gesture recognition model using the preprocessed data further comprises:
a back propagation step, comprising cross entropy loss according to the calibration action and the predicted gesture, and optimizing the gesture recognition model according to the loss;
And repeating the forward propagation step and the backward propagation step in a circulating way until the loss is not reduced any more, and obtaining a stable gesture recognition model.
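Illustrative only, not part of the claims: a minimal sketch of this training loop using the build_model() sketch after claim 7. The optimizer, epoch budget, and the patience used to decide that the loss no longer decreases are assumptions.

import tensorflow as tf

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # cross-entropy loss of claim 8
              metrics=["accuracy"])
# stop once the training loss stops decreasing (assumed patience of 5 epochs)
stop_when_flat = tf.keras.callbacks.EarlyStopping(
    monitor="loss", patience=5, restore_best_weights=True)
# segments: (n, win_len, n_sensors); labels: calibration-action indices
# model.fit(segments, labels, epochs=200, callbacks=[stop_when_flat])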
9. A gesture recognition apparatus, comprising:
the acquisition module is used for acquiring the sensor data collected by each sensor of the data glove when the data glove completes the current action, all of the sensor data forming one input data;
the feature extraction module is used for carrying out feature extraction on the input data by adopting a principal component analysis method to obtain second feature data;
the first processing module is used for inputting the second feature data into a trained multi-class SVM classifier and determining a gesture corresponding to the current action;
the preprocessing module is used for preprocessing the input data to obtain preprocessed data when the trained multi-class SVM classifier cannot recognize the current action;
the second processing module is used for inputting the preprocessed data into a trained gesture recognition model and outputting a gesture corresponding to the current action, wherein the gesture recognition model is built based on a convolutional neural network and a long short-term memory (LSTM) recurrent network;
the first processing module is specifically configured to: respectively acquire the input data of the data glove when different calibration actions are completed; amplify and filter each input data respectively to obtain filtered input data; perform feature extraction on all the filtered input data by means of principal component analysis to obtain first feature data; and train a multi-class SVM classifier with the first feature data to obtain the trained multi-class SVM classifier;
the feature extraction module is specifically configured to: calculate the average value of all the filtered input data; respectively determine the differences between the filtered input data and the average value, and determine a covariance matrix from all the differences; calculate eigenvalues and eigenvectors from the covariance matrix, and determine a principal component matrix from the eigenvectors; and determine the first feature data from the principal component matrix and the differences;
the calibration actions are in one-to-one correspondence with the gesture templates, and the first processing module is specifically further configured to: for any gesture template, take the first feature data corresponding to the gesture template as a positive set, take the first feature data other than the positive set as a negative set, and take the corresponding positive set and negative set as a training set; input the training set into the multi-class SVM classifier, wherein the multi-class SVM classifier comprises a plurality of classification functions, each of which processes the training set and outputs a first classification value; determine the maximum of the first classification values and the classification function corresponding to that maximum, and associate that classification function with the gesture template; and process the first feature data in sequence so that the gesture templates correspond to the classification functions one by one;
the first processing module is specifically further configured to: input the second feature data into the trained multi-class SVM classifier, wherein each classification function processes the second feature data and outputs a second classification value; determine the maximum and the second-largest of all the second classification values, compare each with a preset threshold, and when the maximum is greater than or equal to the preset threshold and the second-largest value is smaller than the preset threshold, determine the gesture template corresponding to the classification function that output the maximum as the gesture corresponding to the current action.
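Illustrative only, not part of the claims: a minimal Python sketch of the SVM branch described by the modules above — the PCA steps (mean, differences, covariance, eigendecomposition, projection), one-versus-rest training of one classification function per gesture template, and the maximum / second-largest threshold test. scikit-learn's LinearSVC stands in for the classification functions; the number of retained components and the preset threshold are hypothetical.

import numpy as np
from sklearn.svm import LinearSVC

def pca_features(X, n_components=8):
    # X: (n_samples, n_channels) filtered input data
    mean = X.mean(axis=0)                      # average of the filtered data
    diffs = X - mean                           # differences from the average
    cov = np.cov(diffs, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues / eigenvectors
    order = np.argsort(eigvals)[::-1]          # sort by decreasing variance
    components = eigvecs[:, order[:n_components]]  # principal-component matrix
    return diffs @ components                  # projected feature data

def train_ovr(features, labels, n_templates):
    # one classification function per gesture template (one-vs-rest)
    classifiers = []
    for t in range(n_templates):
        clf = LinearSVC()                      # stands in for one classification function
        clf.fit(features, labels == t)         # template t positive, the rest negative
        classifiers.append(clf)
    return classifiers

THRESH = 0.0  # hypothetical preset threshold on the decision values

def classify(classifiers, feature_vec):
    scores = np.array([c.decision_function(feature_vec.reshape(1, -1))[0]
                       for c in classifiers])  # the second classification values
    best, second = np.argsort(scores)[::-1][:2]
    if scores[best] >= THRESH and scores[second] < THRESH:
        return int(best)                       # recognized gesture template
    return None                                # unrecognized: defer to the CNN-LSTM model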
10. A gesture recognition apparatus comprising a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to implement the gesture recognition method of any one of claims 1 to 8 when executing the computer program.
11. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the gesture recognition method according to any of claims 1 to 8.
12. A data glove comprising a glove, a plurality of sensors and a gesture recognition device according to claim 10, wherein the plurality of sensors are respectively electrically connected with the gesture recognition device, and the plurality of sensors are respectively arranged on the glove and are suitable for detecting movement data of each finger.
13. The data glove of claim 12, wherein the sensors comprise a plurality of fully flexible capacitive sensors and/or a plurality of piezoresistive sensors;
the plurality of fully flexible capacitive sensors are respectively arranged on the back of each finger of the glove and on the back of the thumb web (between the thumb and the index finger), and are respectively used for measuring movement data during flexion or extension of the fingers and movement data during lateral movement of the thumb;
the piezoresistive sensors are respectively arranged at joints on the inner sides of the fingers of the glove and between two adjacent fingers and are respectively used for measuring motion data of each finger joint and motion data of abduction or adduction of the fingers.
CN202011253584.5A 2020-11-11 2020-11-11 Gesture recognition method and device, storage medium and data glove Active CN112347951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011253584.5A CN112347951B (en) 2020-11-11 2020-11-11 Gesture recognition method and device, storage medium and data glove

Publications (2)

Publication Number Publication Date
CN112347951A CN112347951A (en) 2021-02-09
CN112347951B (en) 2023-07-11

Family

ID=74363342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011253584.5A Active CN112347951B (en) 2020-11-11 2020-11-11 Gesture recognition method and device, storage medium and data glove

Country Status (1)

Country Link
CN (1) CN112347951B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111902077B * 2018-01-25 2023-08-04 Meta Platforms Technologies, LLC Calibration technique for hand state representation modeling using neuromuscular signals
EP3743901A4 (en) * 2018-01-25 2021-03-31 Facebook Technologies, Inc. Real-time processing of handstate representation model estimates

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214250A * 2017-07-05 2019-01-15 Central South University A static gesture recognition method based on multi-scale convolutional neural networks
CN110262653A * 2018-03-12 2019-09-20 Southeast University A millimeter-wave sensor gesture recognition method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Human action recognition and real-time interaction technology based on skeleton information; Zhang Jikai; Gu Lanjun; Journal of Inner Mongolia University of Science and Technology, (03), pp. 66-72 *

Also Published As

Publication number Publication date
CN112347951A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
US8935195B2 (en) Method of identification and devices thereof
Chung et al. Real-time hand gesture recognition model using deep learning techniques and EMG signals
Alrubayi et al. A pattern recognition model for static gestures in malaysian sign language based on machine learning techniques
CN112148128B (en) Real-time gesture recognition method and device and man-machine interaction system
Nurwanto et al. Light sport exercise detection based on smartwatch and smartphone using k-Nearest Neighbor and Dynamic Time Warping algorithm
Benalcázar et al. Real-time hand gesture recognition based on artificial feed-forward neural networks and EMG
KR20120052610A (en) Apparatus and method for recognizing motion using neural network learning algorithm
CN110458235B (en) Motion posture similarity comparison method in video
CN108985157A (en) A kind of gesture identification method and device
CN109993116B (en) Pedestrian re-identification method based on mutual learning of human bones
Zheng et al. L-sign: Large-vocabulary sign gestures recognition system
Antony et al. Sign language recognition using sensor and vision based approach
Badhe et al. Artificial neural network based indian sign language recognition using hand crafted features
Gu et al. Locomotion activity recognition: A deep learning approach
Alhersh et al. Learning human activity from visual data using deep learning
Anwar et al. Feature extraction for indonesian sign language (SIBI) using leap motion controller
CN112347951B (en) Gesture recognition method and device, storage medium and data glove
Enikeev et al. Recognition of sign language using leap motion controller data
KR20140073294A (en) Apparatus and method for real-time emotion recognition using pulse rate change
Mendes et al. Subvocal speech recognition based on EMG signal using independent component analysis and neural network MLP
Shintani et al. Digital pen for handwritten alphabet recognition
Surekha et al. Hand Gesture Recognition and voice, text conversion using
CN115937910A (en) Palm print image identification method based on small sample measurement network
Safdar et al. A novel similar character discrimination method for online handwritten Urdu character recognition in half forms
KR100852630B1 (en) Biometric method using probabillistic access in video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant