CN104484644A - Gesture identification method and device - Google Patents

Gesture identification method and device

Info

Publication number
CN104484644A
Authority
CN
China
Prior art keywords
data sequence
gesture
sorter
user
sample
Prior art date
Legal status
Granted
Application number
CN201410621500.7A
Other languages
Chinese (zh)
Other versions
CN104484644B (en)
Inventor
陈涛
蒋文明
李敏
李力
范炜
Current Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201410621500.7A priority Critical patent/CN104484644B/en
Publication of CN104484644A publication Critical patent/CN104484644A/en
Application granted granted Critical
Publication of CN104484644B publication Critical patent/CN104484644B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a gesture identification method and device. The method comprises: acquiring data sequences of a user gesture, the data sequences comprising an acceleration data sequence and an angular velocity data sequence; preprocessing the data sequences to obtain a feature vector; and identifying the feature vector with a preset predefined classifier to obtain the user gesture corresponding to the feature vector. The method achieves a relatively high recognition rate and can identify both gestures based on wrist motion and gestures based on hand rotation.

Description

Gesture identification method and device
Technical field
The present invention relates to the fields of pattern recognition and artificial intelligence, and in particular to a gesture identification method and device.
Background technology
Traditional gesture recognition technologies fall mainly into two categories: computer-vision-based recognition and data-glove-based recognition. Vision-based approaches generally have high algorithmic complexity, are easily affected by the environment, and achieve unsatisfactory recognition rates. Data-glove-based approaches achieve better recognition rates, but the equipment is expensive, inconvenient to carry, and offers a poor user experience, so researchers have gradually abandoned them.
With the development of computer hardware, more and more engineers have shifted gesture recognition research toward sensors; in particular, gesture recognition based on acceleration sensor data has become an active topic in pattern recognition and artificial intelligence in recent years.
Gesture recognition based on acceleration data mainly uses template matching, neural networks, and probabilistic-statistical methods. Template matching has the advantage of a simple processing flow, but its main drawback is high computational complexity, which makes it hard to apply where real-time requirements are strict; for example, Sven Kratz et al. combined acceleration and angular velocity data to recognize 6 simple gestures by dynamic time warping, and their results showed that dynamic time warping had the worst real-time performance. Probabilistic-statistical methods are currently popular; for example, Jiangfeng Liu et al. used hidden Markov models to recognize 8 gestures from hand-motion acceleration data with a good recognition rate. Compressed sensing is another currently popular method: it reduces the dimensionality of the gesture data and simplifies the computational complexity of recognition; for example, Ahmad Akl combined dynamic time warping with compressed sensing to recognize 18 gestures.
The deficiencies of the prior art are as follows:
Current gesture recognition technologies achieve high recognition rates only when the number of gesture classes is small; when more gestures are to be recognized, the recognition rate drops significantly. Moreover, they mainly recognize hand movement trajectories and do not support gestures without an obvious trajectory, such as hand rotation.
Summary of the invention
The present invention provides a gesture identification method with a relatively high recognition rate that can identify both gestures based on wrist motion and hand-rotation gestures.
The present invention also provides a gesture identifying device with a relatively high recognition rate that can identify both gestures based on wrist motion and hand-rotation gestures.
The technical solution of the present invention is achieved as follows:
A gesture identification method, comprising:
acquiring a data sequence of a user gesture, the data sequence comprising an acceleration data sequence and an angular velocity data sequence;
preprocessing the data sequence to obtain a feature vector;
identifying the feature vector with a preset predefined classifier to obtain the user gesture corresponding to the feature vector.
A gesture identifying device, comprising:
a preprocessing module for acquiring a data sequence of a user gesture, the data sequence comprising an acceleration data sequence and an angular velocity data sequence, and for preprocessing the data sequence to obtain a feature vector;
a predefined classifier for identifying the feature vector to obtain the user gesture corresponding to the feature vector.
It can be seen that the gesture identification method and device proposed by the present invention use both the acceleration data sequence and the angular velocity data sequence of a user gesture as identification features, and can therefore accurately identify a larger number of gestures, including hand-rotation gestures that have no obvious movement trajectory.
Brief description of the drawings
Fig. 1 is a flowchart of the gesture identification method proposed by the present invention;
Fig. 2 is a schematic diagram of the system architecture of embodiment one;
Fig. 3 is a flowchart of embodiment two;
Fig. 4 is a schematic diagram of dimension standardization;
Fig. 5 is a flowchart of embodiment three;
Fig. 6 is a flowchart of embodiment four;
Fig. 7 is a schematic diagram of the test results of the proposed method on the 6DMG database.
Detailed description
The present invention proposes a gesture identification method; Fig. 1 is a flowchart of the method, which comprises:
Step 101: acquire a data sequence of a user gesture, the data sequence comprising an acceleration data sequence and an angular velocity data sequence;
Step 102: preprocess the data sequence to obtain a feature vector;
Step 103: identify the feature vector with a preset predefined classifier to obtain the user gesture corresponding to the feature vector.
Because the acquired user gesture data sequence includes an angular velocity data sequence, the gesture identification method proposed by the present invention can identify gestures based on wrist motion, hand rotation, and the like.
A concrete implementation of step 102 may be:
perform denoising/smoothing, dimension normalization, and dimension standardization in turn on each data sequence, and assemble the results into the feature vector of the user gesture.
In step 103, when the predefined classifier cannot identify the feature vector, the method may further comprise: identifying the feature vector with a user-set custom classifier to obtain the user gesture corresponding to the feature vector.
The present invention can also provide a way for a user to define gestures of his or her own preference that are not in the gesture set of the predefined classifier, which may be as follows:
acquire the data sequences of a user gesture that the user inputs two or more times;
preprocess the data sequences to obtain a feature vector for each input of the gesture;
when none of the feature vectors can be identified by the predefined classifier, judge whether the similarity distance between the data sequences of every pair of inputs is no greater than a preset threshold, and if so, take the data sequence of each input as an initial sample;
apply weighted noise addition to the initial samples, and take the initial samples together with the samples obtained by weighted noise addition as the positive samples of the custom classifier; choose other data sequences as the negative samples of the custom classifier; and train the custom classifier with the positive and negative samples as training data.
The weighted noise addition may be performed by multiplying each datum in a data sequence by a random number within a preset range.
Applying weighted noise addition to the initial samples and taking the initial samples together with the resulting samples as the positive samples of the custom classifier may be done as follows:
apply weighted noise addition to the initial samples to obtain new data sequences;
apply weighted noise addition again to the initial samples and the new data sequences, obtaining further new data sequences;
repeat until the number of data sequences meets a preset requirement, then take the new data sequences together with the initial samples as the positive samples.
Choosing other data sequences as the negative samples of the custom classifier may be done as follows:
choose the data sequences of some or all of the user gestures that the predefined classifier can identify, or one or more random sequences, as the negative samples.
Specific embodiments are described in detail below with reference to the accompanying drawings.
Embodiment one:
This embodiment introduces an exemplary system that runs the gesture identification method proposed by the present invention. Fig. 2 is a schematic diagram of the system architecture; the system implements an example in which a third-party application uses the present invention for human-computer interaction.
The system acquires the acceleration data sequence and angular velocity data sequence of the user's gesture operation through a device with built-in acceleration and angular-rate sensors (i.e., data acquisition unit 210), and transmits the acceleration data sequence and angular velocity data sequence to the preprocessing module 220 via Bluetooth or another connection;
the preprocessing module 220 denoises and smooths the acquired acceleration data sequence and angular velocity data sequence, normalizes the smoothed result, then performs dimension standardization on the normalized result, and finally extracts the data mean of each dimension as a feature to build the feature vector, which it sends to the identification module 230;
the identification module 230 may comprise a preset predefined classifier 231, and may further comprise a custom classifier 232 set by the user. The identification module 230 identifies the received feature vector, determines the corresponding user gesture, and returns the recognition result to the third-party application 240, which maps the result to a specific machine operation, thereby realizing human-computer interaction. In addition, the system allows the third-party application to provide a custom-gesture interface for the user, so that a new gesture defined by the user is identified by the identification module and mapped to an operation of the third-party application.
This embodiment has described the functions of the units as a whole; the concrete processing flow of each unit is introduced in the following embodiments.
Embodiment two:
This embodiment introduces how the preprocessing module 220 preprocesses the raw acceleration data sequence and angular velocity data sequence. In this embodiment, the raw data sequences of a user gesture number six, denoted AccSeq_x, AccSeq_y, AccSeq_z, AngSeq_x, AngSeq_y, AngSeq_z, where AccSeq denotes an acceleration data sequence, AngSeq denotes an angular velocity data sequence, and the subscripts x, y, z denote the three orientation axes of the sensor.
Fig. 3 is a flowchart of this embodiment, which comprises the following steps:
Step 301: denoising/smoothing. This embodiment uses a mean filter: a moving window of width 5 is centered on each point, and the mean of the data in the window replaces the value of the center point. The formula is:
Value'_i = (Σ_{j=−2}^{2} Value_{i+j}) / 5,  i = 2, 3, …, L−2, where
Value_{i+j} denotes the (i+j)-th value in the raw data sequence (acceleration data sequence or angular velocity data sequence);
Value'_i denotes the i-th value in the data sequence obtained after denoising/smoothing;
L denotes the length of the raw data sequence.
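As an illustration, the mean filter of step 301 can be sketched as follows; the function name, default parameters, and the handling of border samples (left unchanged) are our assumptions, not specified by the patent.

```python
import numpy as np

def mean_filter(seq, width=5):
    """Centered moving-average smoothing with a window of the given width.

    Border points without a full window are left unchanged, mirroring the
    index range i = 2, 3, ..., L-2 for width 5.
    """
    half = width // 2
    seq = np.asarray(seq, dtype=float)
    out = seq.copy()
    for i in range(half, len(seq) - half):
        # Replace the center point by the mean of its window
        out[i] = seq[i - half:i + half + 1].mean()
    return out
```

A single spike is spread evenly over its neighborhood: `mean_filter([0, 0, 10, 0, 0, 0, 0])` yields `[0, 0, 2, 2, 2, 0, 0]`.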
Step 302: dimension normalization.
Because users differ in operating habits and writing styles, the raw acceleration and angular velocity data they produce show similar variation trends but fluctuating value ranges. To address this, this embodiment normalizes the data sequences obtained after denoising/smoothing. Specifically:
for each data sequence after denoising/smoothing, extract its maximum and minimum values; subtract the minimum of the sequence from each datum, and divide by the value range of the sequence, obtaining the normalized sequence.
The formula is:
Value'_i = (Value_i − min) / (max − min), where
Value_i denotes the i-th value in the data sequence obtained after denoising/smoothing;
max and min denote the maximum and minimum values in the data sequence obtained after denoising/smoothing;
Value'_i denotes the i-th value in the data sequence obtained after dimension normalization.
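The min-max normalization of step 302 can be sketched as follows; the guard for a constant sequence is our own addition, as the patent does not discuss that case.

```python
import numpy as np

def minmax_normalize(seq):
    """Min-max normalization to [0, 1]: Value'_i = (Value_i - min) / (max - min)."""
    seq = np.asarray(seq, dtype=float)
    lo, hi = seq.min(), seq.max()
    if hi == lo:
        # Constant sequence: avoid division by zero (our assumption)
        return np.zeros_like(seq)
    return (seq - lo) / (hi - lo)
```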
Step 303: dimension standardization.
Because the extracted raw data sequences can have different lengths, this embodiment performs dimension standardization on the data sequences after dimension normalization to unify their lengths. Specifically:
a data sequence of length L is divided evenly into N segments; all the data of each segment are treated as one dimension, each dimension containing L/N data, and the mean of the data in the dimension is extracted as a single dimensional feature, denoted f. The formula is:
f_i = (Σ_{j=0}^{L/N − 1} Value'_{(i−1)·(L/N)+j}) / (L/N),  i = 1, 2, …, N, where
Value'_{(i−1)·(L/N)+j} denotes the ((i−1)·(L/N)+j)-th value in the data sequence obtained after dimension normalization;
f_i denotes the i-th feature obtained by dimension standardization.
Fig. 4 is a schematic diagram of dimension standardization with N = 8: each data sequence is divided into 8 dimensions, the mean of the data within each dimension is taken as a single dimensional feature, and thus 8 dimensional features are extracted from the whole sequence.
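The segment-mean feature extraction of step 303 can be sketched as follows; the sketch assumes L is divisible by N, since the patent does not state how a remainder is handled.

```python
import numpy as np

def segment_features(seq, n_segments=8):
    """Dimension standardization: split a length-L sequence into N equal
    segments and take each segment's mean as one dimensional feature."""
    seq = np.asarray(seq, dtype=float)
    seg_len = len(seq) // n_segments
    return np.array([seq[i * seg_len:(i + 1) * seg_len].mean()
                     for i in range(n_segments)])
```

For example, `segment_features([1, 1, 2, 2, 3, 3, 4, 4], 4)` produces the four segment means `[1, 2, 3, 4]`.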
Step 304: build the feature vector F.
Through steps 301-303, the 6 raw data sequences (AccSeq_x, AccSeq_y, AccSeq_z, AngSeq_x, AngSeq_y, AngSeq_z) are processed into 6 new data sequences, which this step assembles into the feature vector F. Specifically:

F = | f_1^{AccSeq_x}  f_2^{AccSeq_x}  …  f_N^{AccSeq_x} |
    | f_1^{AccSeq_y}  f_2^{AccSeq_y}  …  f_N^{AccSeq_y} |
    | f_1^{AccSeq_z}  f_2^{AccSeq_z}  …  f_N^{AccSeq_z} |
    | f_1^{AngSeq_x}  f_2^{AngSeq_x}  …  f_N^{AngSeq_x} |
    | f_1^{AngSeq_y}  f_2^{AngSeq_y}  …  f_N^{AngSeq_y} |
    | f_1^{AngSeq_z}  f_2^{AngSeq_z}  …  f_N^{AngSeq_z} |

where rows 1 to 6 of F are the data sequences obtained from AccSeq_x, AccSeq_y, AccSeq_z, AngSeq_x, AngSeq_y and AngSeq_z by the processing above, each containing N data.
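The whole preprocessing pipeline of steps 301-304 can be chained as in the following condensed sketch; N = 8 and the helper structure are our choices, not the patent's code.

```python
import numpy as np

def preprocess(seq, n_segments=8):
    """Chain steps 301-303 on one raw sequence: width-5 mean filter,
    min-max normalization, then N segment means."""
    seq = np.asarray(seq, dtype=float)
    sm = seq.copy()
    for i in range(2, len(seq) - 2):              # step 301: mean filter
        sm[i] = seq[i - 2:i + 3].mean()
    lo, hi = sm.min(), sm.max()                   # step 302: normalization
    norm = (sm - lo) / (hi - lo) if hi > lo else np.zeros_like(sm)
    seg = len(norm) // n_segments                 # step 303: segment means
    return np.array([norm[k * seg:(k + 1) * seg].mean()
                     for k in range(n_segments)])

def build_feature_matrix(six_sequences, n_segments=8):
    """Step 304: stack the six preprocessed sequences (AccSeq_x..z,
    AngSeq_x..z) row-wise into the 6 x N feature matrix F."""
    return np.vstack([preprocess(s, n_segments) for s in six_sequences])
```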
During implementation, 30 participants were invited to collect 150 sets of gestures, each set containing raw acceleration and angular-rate sensor samples for 38 gestures. These data samples were preprocessed, the feature vector of each sample was extracted and used as a training sample, and cross-validation during training selected optimal parameters to obtain the built-in predefined classifier for the 38 predefined gestures.
Embodiment three:
This embodiment introduces how the identification module 230 performs gesture identification on the feature vector obtained by preprocessing. The identification module 230 comprises the predefined classifier 231 and the custom classifier 232 set by the user. Gesture identification is first performed by the predefined classifier 231; when the input gesture is identified as one of the 38 predefined gestures, the module exits and returns the recognition result to the third-party application or user. When the predefined classifier 231 cannot identify the input, or identifies it as a negative sample, the custom classifier 232 continues by checking whether the current input gesture is a user-defined gesture. The detailed operation is shown in Fig. 5, with the following steps.
Step 501: the predefined classifier 231 identifies the feature vector of the gesture input by the user. For a better user experience, this embodiment may use a support vector machine with probability estimates, and filter the recognition result by probability: the support vector machine outputs both the estimated class of the feature vector and its estimated probability.
Step 502: filter the result by probability. Specifically, if the largest estimated probability exceeds a threshold Threshold_1, and the difference between the largest and the second-largest estimated probabilities exceeds a threshold Threshold_2, accept the current recognition result and exit this module. Otherwise treat the input as a negative sample, i.e., the current input gesture is none of the 38 built-in gestures, and perform step 503. The following formula is used:

res = { res,  if P_max > Threshold_1 and P_max − P_sec > Threshold_2
      { −1,   otherwise

where res denotes the recognition result, P_max denotes the largest estimated probability over the gesture classes, and P_sec denotes the second-largest estimated probability.
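The probability filtering of step 502 can be sketched as follows; the threshold values are placeholders, since the patent does not disclose concrete numbers, and the dictionary-based interface is our assumption.

```python
def filter_result(probs, threshold1=0.5, threshold2=0.2):
    """Accept the top class only when P_max > Threshold_1 and
    P_max - P_sec > Threshold_2; otherwise return -1 (negative sample).

    probs maps gesture label -> estimated probability, e.g. from an SVM
    with probability estimates.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    (top, p_max), (_, p_sec) = ranked[0], ranked[1]
    if p_max > threshold1 and p_max - p_sec > threshold2:
        return top
    return -1
```

A confident prediction passes, while an ambiguous one is rejected: `filter_result({"circle": 0.8, "flick": 0.1, "twist": 0.1})` returns `"circle"`, whereas `filter_result({"circle": 0.4, "flick": 0.35, "twist": 0.25})` returns `-1`.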
Step 503: the custom classifier 232 identifies the feature vector of the gesture input by the user, further determining whether the current input is a user-defined gesture.
The custom classifier 232 gives the user a way to define gestures of his or her own preference that are not in the predefined gesture set; the following embodiment introduces how the custom classifier 232 is trained.
Embodiment four:
This embodiment introduces how the custom classifier 232 is trained. For the sake of user experience, a custom classifier generally should not require the user to input a gesture many times (more than 3) to obtain sufficient training samples, yet too few training samples inevitably weaken the classifier's discriminative power. To solve this problem, this embodiment uses the following steps. The description below assumes the user inputs the gesture twice in succession to provide the initial samples; the user may also input the gesture three or more times, and the present invention places no restriction on the number of inputs.
Fig. 6 is a flowchart of this embodiment, which comprises:
Step 601: after the user-defined mode is opened, the user inputs his or her own gesture twice in succession as prompted; the acceleration data sequence and angular velocity data sequence of both inputs are acquired and preprocessed by the preprocessing module 220 to obtain the feature vectors of the gesture.
Step 602: the identification module 230 identifies the feature vectors. If they can be identified, the input gesture is already included in the predefined or custom gesture set, and the user is asked to re-enter; if they cannot be identified, perform step 603.
Step 603: compare the similarity of the two input gesture data sequences. Dynamic time warping (DTW) may be used for the comparison; DTW is a nonlinear warping technique that combines time alignment and distance measurement, and is mainly used to measure the similarity of two time series.
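The DTW comparison of step 603 can be sketched with the textbook dynamic-programming formulation below; this is a minimal illustration, not the patent's implementation.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    computed with an O(len(a) * len(b)) table."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because of the warping, sequences with the same shape but different lengths can still have zero distance: `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is `0.0`.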
Step 604: judge whether the comparison result exceeds a preset threshold. If it does, the two input gestures are considered dissimilar and the user is asked to re-enter; if it does not, perform step 605.
Step 605: take the data sequences of the user's two inputs as initial samples, and expand them to obtain the positive samples.
Iterative weighted noise addition may be used: each datum in a data sequence is weighted by its own value, being multiplied by a random number within a preset range (e.g., between 0.5 and 1.5) as noise to rebuild a sample. This method expands the sample set while preserving the variation trend of the data sequence. The current expanded samples together with the initial samples are then used as the basis for further expansion, until the number of positive samples meets a preset requirement, whereupon step 606 is performed.
The weighted noise addition may use the following formula:
Value'_i = Value_i × Random(0.5 ~ 1.5), where
Value_i denotes the i-th value in the data sequence being processed;
Value'_i denotes the i-th value in the processed data sequence.
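The iterative expansion of step 605 can be sketched as follows; the RNG seeding and the choice of which pooled sample to perturb next are our assumptions, not specified by the patent.

```python
import random

def add_weighted_noise(seq, rng):
    """Value'_i = Value_i * Random(0.5 ~ 1.5): scale each value by a
    random factor, preserving the sequence's variation trend."""
    return [v * rng.uniform(0.5, 1.5) for v in seq]

def expand_positive_samples(initial, target_count, seed=0):
    """Iteratively grow the positive-sample pool from the initial samples
    until the preset target count is reached."""
    rng = random.Random(seed)
    pool = [list(s) for s in initial]
    while len(pool) < target_count:
        # Perturb a sample drawn from the current pool (initial + expanded)
        pool.append(add_weighted_noise(rng.choice(pool), rng))
    return pool
```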
Step 606: choose the negative samples.
Specifically, the data sequences of some or all of the user gestures that the predefined classifier can identify, and/or random data sequences, may be chosen as negative samples.
Step 607: train the custom classifier 232 with the above positive and negative samples as training data. In this way the classifier both identifies the positive samples effectively and reduces the false recognition rate caused by negative or invalid gestures.
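The assembly of the training data in step 607 can be sketched as follows; only the labeled training set is shown, since the downstream learner (a support vector machine in this description) is a standard component.

```python
import numpy as np

def build_training_set(pos_samples, neg_samples):
    """Assemble the custom classifier's training data: label 1 for the
    expanded positive samples, 0 for the negative samples (data from
    predefined gestures and/or random sequences)."""
    X = np.vstack([pos_samples, neg_samples])
    y = np.concatenate([np.ones(len(pos_samples)),
                        np.zeros(len(neg_samples))])
    return X, y
```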
The gesture identification method proposed by the present invention has been described above. The present invention also proposes a gesture identifying device, comprising:
a preprocessing module for acquiring a data sequence of a user gesture, the data sequence comprising an acceleration data sequence and an angular velocity data sequence, and for preprocessing the data sequence to obtain a feature vector;
a predefined classifier for identifying the feature vector to obtain the user gesture corresponding to the feature vector.
In the above device, the preprocessing module may obtain the feature vector by performing denoising/smoothing, dimension normalization, and dimension standardization in turn on each data sequence, and assembling the results into the feature vector of the user gesture.
The above device may further comprise:
a custom classifier for identifying the feature vector when the predefined classifier cannot identify it, obtaining the user gesture corresponding to the feature vector.
The above custom classifier may also be used to:
acquire the data sequences of a user gesture that the user inputs two or more times; when none of the feature vectors of the inputs can be identified by the predefined classifier, judge whether the similarity distance between the data sequences of every pair of inputs is no greater than a preset threshold, and if so, take the data sequence of each input as an initial sample;
apply weighted noise addition to the initial samples, and take the initial samples together with the samples obtained by weighted noise addition as the positive samples of the custom classifier; choose other data sequences as the negative samples of the custom classifier; and train with the positive and negative samples as training data.
The custom classifier may perform the weighted noise addition by multiplying each datum in a data sequence by a random number within a preset range, and may obtain the positive samples as follows: apply weighted noise addition to the initial samples to obtain new data sequences; apply weighted noise addition again to the initial samples and the new data sequences to obtain further new data sequences; and repeat until the number of data sequences meets a preset requirement, then take the new data sequences together with the initial samples as the positive samples.
The custom classifier may choose the other data sequences as follows: choose the data sequences of some or all of the user gestures that the predefined classifier can identify, or one or more random sequences, as the negative samples.
In summary, the gesture identification method proposed by the present invention can effectively identify not only gestures based on arm motion but also gestures based on wrist motion, meeting the needs of users with different operating habits. The present invention uses a support vector machine as the classifier and builds feature vectors from single-dimension segment features, achieving a high recognition rate and good generalization ability. In addition, a notable distinction from other gesture recognition technologies is that the present invention offers custom gesture identification and supports an online learning mode, bringing a better and more engaging user experience. To verify the validity of the algorithm, 20 tests were run on the open-source 6DMG acceleration and angular velocity data, with a random 80% of the samples as training data and the remaining 20% as test data; the results show an average recognition rate of 97.92%, demonstrating the effectiveness and accuracy of the invention, with details as shown in Fig. 7.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (12)

1. A gesture identification method, characterized in that the method comprises:
acquiring a data sequence of a user gesture, the data sequence comprising an acceleration data sequence and an angular velocity data sequence;
preprocessing the data sequence to obtain a feature vector;
identifying the feature vector with a preset predefined classifier to obtain the user gesture corresponding to the feature vector.
2. The method according to claim 1, characterized in that the data sequence is preprocessed to obtain the feature vector by:
performing denoising/smoothing, dimension normalization, and dimension standardization in turn on each data sequence;
assembling the results into the feature vector of the user gesture.
3. The method according to claim 1 or 2, characterized in that when the predefined classifier cannot identify the feature vector, the method further comprises:
identifying the feature vector with a user-set custom classifier to obtain the user gesture corresponding to the feature vector.
4. method according to claim 3, is characterized in that, described method comprises further:
Obtain the data sequence of user's gesture of the double above input of user;
Pre-service is carried out to described data sequence, obtains the proper vector for each user's gesture;
When the proper vector for each user's gesture all cannot by described predefine sorter identification time, judge whether the similarity of the data sequence of any two user's gestures is all not more than the threshold value preset, if so, then using the data sequence of each user's gesture as initial sample;
Be weighted described initial sample and add process of making an uproar, the sample obtained after initial sample and weighting being added process of making an uproar is as the positive sample of self-defined sorter; Choose the negative sample of other data sequences as self-defined sorter; Described positive sample and negative sample are trained described self-defined sorter as training data.
5. The method according to claim 4, characterized in that the weighted noising is performed by multiplying each datum in a data sequence by a random number within a preset range;
and in that performing weighted noising on the initial samples, and taking the initial samples together with the samples obtained by the weighted noising as positive samples of the user-defined classifier, comprises:
Performing weighted noising on the initial samples to obtain new data sequences;
Performing weighted noising again on the initial samples and the new data sequences, to obtain further new data sequences;
When the number of data sequences meets a preset requirement, taking the new data sequences thus obtained and the initial samples as the positive samples.
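The iterative augmentation of claim 5, noising the initial samples, then noising the enlarged pool again until a preset quantity is reached, can be sketched like this. The 0.9 to 1.1 multiplier range is an assumed stand-in for the preset range.

```python
import numpy as np

def weighted_noise_augment(initial_samples, required_count, lo=0.9, hi=1.1, rng=None):
    """Grow the sample pool by repeated weighted noising until it holds
    required_count data sequences (claim 5)."""
    rng = np.random.default_rng(rng)
    pool = [np.asarray(s, dtype=float) for s in initial_samples]
    while len(pool) < required_count:
        # Each round noises every sequence currently in the pool, i.e.
        # the initial samples plus all previously generated ones.
        for seq in list(pool):
            if len(pool) >= required_count:
                break
            # Multiply each datum by a random number in the preset range.
            pool.append(seq * rng.uniform(lo, hi, size=seq.shape))
    return pool  # new sequences plus initial samples = positive samples
```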
6. The method according to claim 4, characterized in that other data sequences are chosen as negative samples of the user-defined classifier by:
Choosing, as negative samples, the data sequences of some or all of the user gestures that the predefined classifier can identify, or one or more random sequences.
7. A gesture identification device, characterized in that the device comprises:
A preprocessing module for obtaining a data sequence of a user gesture, the data sequence comprising an acceleration data sequence and an angular velocity data sequence, and for preprocessing the data sequence to obtain a feature vector;
A predefined classifier for identifying the feature vector, to obtain the user gesture corresponding to the feature vector.
8. The device according to claim 7, characterized in that the preprocessing module preprocesses the data sequence to obtain the feature vector by:
Performing denoising smoothing, dimension normalization and scale normalization operations on each data sequence in turn;
Forming the feature vector of the user gesture from the results of these operations.
9. The device according to claim 7 or 8, characterized in that the device further comprises:
A user-defined classifier for identifying the feature vector when the predefined classifier cannot identify it, to obtain the user gesture corresponding to the feature vector.
10. The device according to claim 9, characterized in that the user-defined classifier is further configured to:
Obtain the data sequences of a user gesture that the user has input two or more times; when none of the feature vectors can be identified by the predefined classifier, judge whether the similarity between the data sequences of every two inputs does not exceed a preset threshold, and if so, take the data sequence of each input as an initial sample;
Perform weighted noising on the initial samples, and take the initial samples together with the samples obtained by the weighted noising as positive samples of the user-defined classifier; choose other data sequences as negative samples of the user-defined classifier; and train with the positive and negative samples as training data.
11. The device according to claim 10, characterized in that the user-defined classifier performs the weighted noising by multiplying each datum in a data sequence by a random number within a preset range;
and in that the user-defined classifier performs weighted noising on the initial samples, and takes the initial samples together with the samples obtained by the weighted noising as positive samples, by:
Performing weighted noising on the initial samples to obtain new data sequences;
Performing weighted noising again on the initial samples and the new data sequences, to obtain further new data sequences;
When the number of data sequences meets a preset requirement, taking the new data sequences thus obtained and the initial samples as the positive samples.
12. The device according to claim 10, characterized in that the user-defined classifier chooses other data sequences as its negative samples by:
Choosing, as negative samples, the data sequences of some or all of the user gestures that the predefined classifier can identify, or one or more random sequences.
CN201410621500.7A 2014-11-06 2014-11-06 Gesture identification method and device Active CN104484644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410621500.7A CN104484644B (en) 2014-11-06 2014-11-06 Gesture identification method and device

Publications (2)

Publication Number Publication Date
CN104484644A true CN104484644A (en) 2015-04-01
CN104484644B CN104484644B (en) 2018-10-16

Family

ID=52759185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410621500.7A Active CN104484644B (en) Gesture identification method and device

Country Status (1)

Country Link
CN (1) CN104484644B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110291926A1 (en) * 2002-02-15 2011-12-01 Canesta, Inc. Gesture recognition system using depth perceptive sensors
US20130058565A1 (en) * 2002-02-15 2013-03-07 Microsoft Corporation Gesture recognition system using depth perceptive sensors
CN103809748A (en) * 2013-12-16 2014-05-21 天津三星通信技术研究有限公司 Portable terminal and gesture recognition method thereof
CN103995592A (en) * 2014-05-21 2014-08-20 上海华勤通讯技术有限公司 Wearable equipment and terminal information interaction method and terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
严焰 et al., "Research on Gesture Recognition Based on HMM", Journal of Central China Normal University (Natural Sciences) *
肖茜 et al., "A Gesture Recognition Method Based on MEMS Inertial Sensors", Chinese Journal of Sensors and Actuators *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107533371A (en) * 2015-04-29 2018-01-02 三星电子株式会社 User interface control using impact gestures
US10339371B2 (en) 2015-09-23 2019-07-02 Goertek Inc. Method for recognizing a human motion, method for recognizing a user action and smart terminal
WO2017050140A1 (en) * 2015-09-23 2017-03-30 歌尔股份有限公司 Method for recognizing a human motion, method for recognizing a user action and smart terminal
CN105912910A (en) * 2016-04-21 2016-08-31 武汉理工大学 Cellphone sensing based online signature identity authentication method and system
CN107638165A (en) * 2016-07-20 2018-01-30 平安科技(深圳)有限公司 Sleep detection method and device
CN106950927A (en) * 2017-02-17 2017-07-14 深圳大学 Method for controlling a smart home, and intelligent wearable device
CN106950927B (en) * 2017-02-17 2019-05-17 深圳大学 Method for controlling a smart home, and intelligent wearable device
CN108108015A (en) * 2017-11-20 2018-06-01 电子科技大学 Action gesture recognition method based on mobile phone gyroscope and dynamic time warping
CN109491507A (en) * 2018-11-14 2019-03-19 南京邮电大学 Gesture identifying device based on FDC2214
CN109902644A (en) * 2019-03-07 2019-06-18 北京海益同展信息科技有限公司 Face identification method, device, equipment and computer-readable medium
CN110618754A (en) * 2019-08-30 2019-12-27 电子科技大学 Surface electromyogram signal-based gesture recognition method and gesture recognition armband
CN110618754B (en) * 2019-08-30 2021-09-14 电子科技大学 Surface electromyogram signal-based gesture recognition method and gesture recognition armband
CN112614263A (en) * 2020-12-30 2021-04-06 浙江大华技术股份有限公司 Method and device for controlling gate, computer equipment and storage medium

Also Published As

Publication number Publication date
CN104484644B (en) 2018-10-16

Similar Documents

Publication Publication Date Title
CN104484644A (en) Gesture identification method and device
Li et al. Deep Fisher discriminant learning for mobile hand gesture recognition
Bhattacharya et al. From smart to deep: Robust activity recognition on smartwatches using deep learning
Duffner et al. 3D gesture classification with convolutional neural networks
CN107316067B (en) Aerial handwritten character recognition method based on inertial sensors
CN109886068B (en) Motion data-based action behavior identification method
CN107219924B (en) Aerial gesture identification method based on inertial sensors
CN110245718A (en) Human behavior recognition method based on joint time-domain and frequency-domain features
Tian et al. MEMS-based human activity recognition using smartphone
CN105787434A (en) Method for identifying human body motion patterns based on inertia sensor
CN105184325A (en) Human body action recognition method and mobile intelligent terminal
Su et al. HDL: Hierarchical deep learning model based human activity recognition using smartphone sensors
CN115294658B (en) Personalized gesture recognition system and gesture recognition method for multiple application scenes
CN107742095A (en) Chinese sign language recognition method based on convolutional neural networks
CN107273726B (en) Real-time device-owner identity identification method and device based on acceleration cycle variation law
CN110399846A (en) Gesture identification method based on multichannel electromyography signal correlation
He Accelerometer Based Gesture Recognition Using Fusion Features and SVM.
CN107582077A (en) Human mental state analysis method based on mobile phone touch behavior
CN108108015A (en) Action gesture recognition method based on mobile phone gyroscope and dynamic time warping
CN105447506A (en) Gesture recognition method based on interval distribution probability characteristics
CN105354532A (en) Hand motion frame data based gesture identification method
Xu et al. A long term memory recognition framework on multi-complexity motion gestures
Wang et al. A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu
Dehkordi et al. Feature extraction and feature selection in smartphone-based activity recognition
Li et al. Hand gesture recognition and real-time game control based on a wearable band with 6-axis sensors

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant