CN106682585A - Dynamic gesture identifying method based on kinect 2 - Google Patents

Dynamic gesture identifying method based on kinect 2

Info

Publication number
CN106682585A
CN106682585A (application CN201611096405.5A)
Authority
CN
China
Prior art keywords: hand, HMM, gesture, type, track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611096405.5A
Other languages
Chinese (zh)
Inventor
凌晨
王清华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201611096405.5A
Publication of CN106682585A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155: Bayesian classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06V40/113: Recognition of static hand signs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dynamic gesture identification method based on Kinect 2. Hidden Markov models are established separately for the trajectory feature and the hand-shape feature of a dynamic gesture, and naive Bayes classification then performs the gesture recognition with the hand-shape recognition result and the trajectory recognition result as input features. The method decomposes the complex dynamic gesture process into a hand-shape change and a trajectory motion change, avoids describing the gesture with a high-dimensional feature, and reduces computational complexity. Because the hand-shape feature is added, more gestures can be recognized, and the accuracy of gesture recognition is further improved.

Description

A dynamic gesture identification method based on Kinect 2
Technical field
The present invention relates to a dynamic gesture identification method based on Kinect 2.
Background technology
With the development of information technology, the modes of human-computer interaction are also changing, and gesture, a natural form of interaction in people's daily lives, is increasingly applied in human-computer interaction. In recent years, vision-based gesture recognition has become a research hotspot in the field of human-computer interaction. The Microsoft Kinect 2 somatosensory device can acquire the depth information of the environment at the same time as the two-dimensional image, which greatly facilitates research on gesture recognition. At present, however, most research on dynamic gestures recognizes only the trajectory of the target and ignores the change of hand shape during the gesture motion.
Accordingly, it is desirable to provide a new dynamic gesture identification method that solves the above problems.
Content of the invention
To remedy the deficiencies of the prior art, the object of the present invention is to provide a dynamic gesture identification method based on Kinect 2.
To achieve the above object, the present invention adopts the following technical scheme:
A dynamic gesture identification method based on Kinect 2: Hidden Markov Models are established separately for the trajectory feature and the hand-shape feature of a dynamic gesture, and gesture recognition is performed by naive Bayes classification with the hand-shape recognition result and the trajectory recognition result as input features.
Further, the method comprises the following steps:
Step 1: obtain the three-dimensional position of the palm centre using the skeleton tracking technique of Kinect 2 and map it into the depth image to obtain a gesture depth image;
Step 2: preprocess the gesture depth image and the motion trajectory respectively, and extract the hand-shape feature and the trajectory direction-angle feature;
Step 3: establish separate Hidden Markov Models for the hand-shape feature and the trajectory direction-angle feature, obtaining a hand-shape Hidden Markov Model (HMM) and a trajectory HMM;
Step 4: take the outputs of the hand-shape HMM and the trajectory HMM obtained in step 3 as features, and perform gesture recognition with a naive Bayes classifier.
Further, obtaining the three-dimensional position of the palm centre using the skeleton tracking technique of Kinect 2 and mapping it into the depth image to obtain the gesture depth image in step 1 comprises the following steps:
Step S11: acquire the depth data, body index data, and body data with Kinect 2;
Step S12: according to the body index data, if there are several human bodies, select, according to the depth information, the body nearest to the Kinect 2 as the target and extract the space coordinates of its right palm-centre skeleton point and right wrist skeleton point;
Step S13: use the MapCameraPointToDepthSpace function of the Kinect 2 SDK to transform the palm-centre point and wrist point from the camera coordinate system into depth space, obtaining the positions of the palm-centre and wrist skeleton points in the depth image;
Step S14: with the palm-centre point as the centre, draw a circle whose radius is 1.5 times the palm-to-wrist distance to segment the gesture and obtain the hand image; then, taking the depth of the wrist point as a threshold, remove pixels deeper than this value to obtain the complete gesture depth image.
Further, preprocessing the gesture depth image and the motion trajectory respectively in step 2 and extracting the hand-shape feature and the trajectory direction-angle feature comprises the following steps:
Step S21: binarize the gesture depth image obtained in step 1, apply median filtering to the resulting binary image to remove salt-and-pepper noise, and apply morphological erosion and dilation to remove holes and stray points;
Step S22: extract the Hu invariant moments of the gesture image as the hand-shape feature; apply Kalman-filter tracking to the three-dimensional motion of the palm centre to obtain the processed palm-centre motion trajectory, and extract the direction-angle feature of that trajectory as the trajectory direction-angle feature. The present invention adds the quantized hand-shape feature to the dynamic gesture feature extraction and establishes Hidden Markov Models (HMMs) for the hand shape and the trajectory separately.
Further, the Hu invariant moments of the gesture image extracted as the hand-shape feature in step S22 comprise the following seven invariant moments:

$M_1 = \eta_{20} + \eta_{02}$

$M_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$

$M_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$

$M_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$

$M_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$

$M_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$

$M_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$

where the normalized central moment is $\eta_{pq} = \mu_{pq}/\mu_{00}^{\rho}$ with $\rho = (p+q)/2 + 1$ and $\mu_{pq} = \sum_{x=1}^{N}\sum_{y=1}^{M}(x-\bar{x})^p(y-\bar{y})^q f(x,y)$; N and M are respectively the height and width of the image, and $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$ are the coordinates of the image centroid.
Further, in step S22, Kalman-filter tracking is applied to the three-dimensional motion of the palm centre to obtain the processed palm-centre motion trajectory, and the direction-angle feature of the palm-centre trajectory is extracted as the trajectory direction-angle feature, where the direction angle is calculated by

$\theta = \arctan\dfrac{y_{i+1} - y_i}{x_{i+1} - x_i}$

θ represents the angle between adjacent trajectory points, and $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ are the coordinates of adjacent trajectory points.
Further, the obtained direction angles are quantization-encoded into 12 direction vectors as follows. When θ ≥ 15°:

f = 13 - k

When θ < 15°:

f = (k + 7) % 12

where k is the index of the 30° sector in which θ falls. The value of f is the quantized feature of the angle, an integer from 1 to 12.
Further, establishing separate Hidden Markov Models for the hand-shape feature and the trajectory direction-angle feature in step 3, obtaining the hand-shape HMM and the trajectory HMM, comprises the following steps:
Step S31: for the extracted Hu-moment hand-shape features, use k-means vector quantization to convert the feature vectors into a discrete feature-label sequence as the input of the HMM;
Step S32: take one part of the training samples and initialize the parameters of the hand-shape HMM and the trajectory HMM respectively;
Step S33: train the parameters of the hand-shape HMM and the trajectory HMM respectively with the Baum-Welch algorithm; repeating steps S32 and S33 for every kind of gesture yields a hand-shape HMM and a trajectory HMM for each gesture.
Further, taking the outputs of the hand-shape HMM and the trajectory HMM obtained in step 3 as features and performing gesture recognition with a naive Bayes classifier in step 4 comprises the following steps:
Step S41: input another part of the training samples into the trained hand-shape HMM and trajectory HMM, obtaining the output of the hand-shape HMM and the output of the trajectory HMM respectively;
Step S42: take the outputs obtained in step S41 as a 2-dimensional feature vector X = {x1, x2}, where x1 and x2 are the numbers of the recognition results of the hand shape and the trajectory respectively; the gestures to be recognized form n classes, and for each class compute

$P(x_k \mid C_i) = s_k / s_i$

where $s_k$ is the number of training samples of class $C_i$ that take the given value on attribute $A_k$, and $s_i$ is the total number of samples of $C_i$;
Step S43: for the input feature vector X, according to Bayes' theorem,

$P(C_i \mid X) \propto P(X \mid C_i)P(C_i)$, with $P(X \mid C_i) = \prod_k P(x_k \mid C_i)$

and the gesture class with the maximum probability value is selected as the recognition result.
Beneficial effects: the dynamic gesture identification method based on Kinect 2 of the present invention decomposes the complex dynamic gesture process into a hand-shape change and a trajectory motion change, avoiding describing the gesture with a high-dimensional feature and reducing computational complexity. Because the hand-shape feature is added, more gestures can be recognized, and the accuracy of gesture recognition is further improved.
Description of the drawings
Fig. 1 is a flow chart of the dynamic gesture identification method based on Kinect 2 of the present invention;
Fig. 2 is a schematic diagram of the direction-angle quantization encoding of the present invention;
Fig. 3 is a structural diagram of the HMM used by the present invention.
Specific embodiment
The present invention is described in detail below with reference to a specific embodiment.
Embodiment 1:
Referring to Fig. 1, the dynamic gesture identification method based on Kinect 2 tracks a person's hand with the Kinect 2 depth camera, establishes Hidden Markov Models (HMMs) separately for the hand-shape feature and the motion-trajectory feature of the dynamic gesture, and performs gesture recognition with a naive Bayes classifier. The method comprises the following steps:
Step 1: obtain the three-dimensional position of the palm centre using the skeleton tracking technique of Kinect 2 and map it into the depth image to obtain the gesture depth image; specifically:
Step S11: acquire the depth data, body index data, and body data with Kinect 2;
Step S12: according to the body index data, if there are several human bodies, select, according to the depth information, the body nearest to the Kinect 2 as the target and extract the space coordinates of its right palm-centre skeleton point and right wrist skeleton point;
Step S13: use the MapCameraPointToDepthSpace function of the Kinect 2 SDK to transform the palm-centre point and wrist point from the camera coordinate system into depth space, obtaining the positions of the palm-centre and wrist skeleton points in the depth image;
Step S14: with the palm-centre point as the centre, draw a circle whose radius is 1.5 times the palm-to-wrist distance to segment the gesture and obtain the hand image; then, taking the depth of the wrist point as a threshold, remove pixels deeper than this value, so that the superfluous arm portion is removed and the complete gesture image is obtained.
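As an illustration of step S14, the sketch below shows how such a segmentation could be written in Python with NumPy. The depth frame and the palm and wrist coordinates in depth space (from the MapCameraPointToDepthSpace mapping of step S13) are assumed to be given; the function name and signature are illustrative, not part of the patent.

```python
import numpy as np

def segment_hand(depth_frame, palm_xy, wrist_xy, wrist_depth):
    """Minimal sketch of step S14: keep a circle of radius 1.5x the
    palm-to-wrist distance around the palm, then drop pixels that lie
    deeper than the wrist."""
    h, w = depth_frame.shape
    radius = 1.5 * np.hypot(palm_xy[0] - wrist_xy[0], palm_xy[1] - wrist_xy[1])
    ys, xs = np.ogrid[:h, :w]
    inside = (xs - palm_xy[0]) ** 2 + (ys - palm_xy[1]) ** 2 <= radius ** 2
    hand = np.where(inside, depth_frame, 0)
    hand[hand > wrist_depth] = 0   # larger depth value = farther from the camera
    return hand
```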
Step 2: preprocess the gesture depth image and the motion trajectory, and extract the hand-shape feature and the trajectory direction-angle feature; specifically:
Step S21: binarize the obtained gesture image, apply median filtering to the resulting binary image to remove salt-and-pepper noise, and apply morphological erosion and dilation to remove holes and stray points;
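A minimal OpenCV sketch of this preprocessing step is given below; the 5x5 kernel and filter aperture are illustrative choices, since the patent does not fix them.

```python
import cv2
import numpy as np

def preprocess(hand_depth):
    """Sketch of step S21: binarise, median-filter, then open/close morphologically."""
    binary = (hand_depth > 0).astype(np.uint8) * 255
    binary = cv2.medianBlur(binary, 5)                 # removes salt-and-pepper noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erosion then dilation: stray points
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilation then erosion: holes
    return binary
```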
Step S22: extract the Hu invariant moments of the gesture image as the hand-shape feature. For an image f(x, y), the geometric moment (standard moment) of order p + q is defined as

$m_{pq} = \iint x^p y^q f(x,y)\,dx\,dy$

and the central moment of order p + q is defined as

$\mu_{pq} = \iint (x-\bar{x})^p (y-\bar{y})^q f(x,y)\,dx\,dy$

where $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$ are the coordinates of the image centroid. For a discrete digital image, summation replaces integration:

$m_{pq} = \sum_{x=1}^{N}\sum_{y=1}^{M} x^p y^q f(x,y), \qquad \mu_{pq} = \sum_{x=1}^{N}\sum_{y=1}^{M} (x-\bar{x})^p (y-\bar{y})^q f(x,y)$

where N and M are respectively the height and width of the image. The normalized central moment is defined as

$\eta_{pq} = \mu_{pq}/\mu_{00}^{\rho}$ (7)

where $\rho = (p+q)/2 + 1$.
Seven invariant moments can be defined from the second- and third-order central moments; they remain constant when the image is translated, scaled, or rotated, and are defined as:

$M_1 = \eta_{20} + \eta_{02}$ (8)

$M_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$ (9)

$M_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$ (10)

$M_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$ (11)

$M_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$ (12)

$M_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$ (13)

$M_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$ (14)

The Hu moments of the gesture image can thus be calculated, and these seven invariant moments serve as the feature values of the gesture image.
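OpenCV computes exactly these seven invariants, so a sketch of the per-frame hand-shape feature can be very short; the log-scaling at the end is a common practice for taming the moments' dynamic range, not something the patent prescribes.

```python
import cv2
import numpy as np

def hu_features(binary_hand):
    """Seven Hu invariant moments (equations (8)-(14)) of the binary hand image."""
    m = cv2.moments(binary_hand, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()                     # M1..M7
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # optional log scaling
```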
Step S23: apply Kalman-filter tracking to the three-dimensional motion of the palm centre to obtain the processed palm-centre motion trajectory, and extract and save the coordinates of one point every 10 mm along the trajectory;
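One way to realise this tracking is OpenCV's cv2.KalmanFilter with a constant-velocity state model over (x, y, z); the patent does not specify the state model or noise covariances, so the values below are illustrative.

```python
import numpy as np
import cv2

def make_palm_tracker(dt=1.0 / 30.0):
    """Constant-velocity Kalman filter for the 3-D palm centre (step S23)."""
    kf = cv2.KalmanFilter(6, 3)              # state (x, y, z, vx, vy, vz), measurement (x, y, z)
    kf.transitionMatrix = np.eye(6, dtype=np.float32)
    for i in range(3):
        kf.transitionMatrix[i, i + 3] = dt   # x += vx * dt, and likewise for y, z
    kf.measurementMatrix = np.eye(3, 6, dtype=np.float32)
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-4
    kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 1e-2
    return kf

# Per frame: kf.predict(), then kf.correct(np.float32([[x], [y], [z]])),
# keeping a trajectory point whenever the palm has moved another 10 mm.
```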
Step S24: extract the direction-angle feature of the trajectory as the feature of the motion trajectory. The direction of motion between adjacent trajectory points is represented by the direction vector (r, θ), where r is the magnitude of the direction vector, i.e. the distance between adjacent trajectory points, and θ is the angle between adjacent trajectory points. The angle is taken as the motion feature of the trajectory and is computed as

$\theta = \arctan\dfrac{y_{i+1} - y_i}{x_{i+1} - x_i}$
Referring to Fig. 2, the obtained angle values are quantization-encoded into 12 direction vectors. When θ ≥ 15°:

f = 13 - k

When θ < 15°:

f = (k + 7) % 12

where k is the index of the 30° sector in which θ falls. The value of f is the quantized feature of the angle, an integer from 1 to 12;
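The sketch below implements one plausible reading of this encoding; the exact sector formula for k appears only as a figure in the original filing, so the binning shown here (12 sectors, each 30° wide and centred on 0°, 30°, ..., 330°, with boundaries at 15°, 45°, ... as in the text) is an assumption.

```python
import math

def direction_code(p0, p1):
    """Quantise the direction from trajectory point p0 to p1 into a code 1..12,
    using 30-degree sectors centred on 0, 30, ..., 330 degrees (an assumption)."""
    theta = math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0])) % 360.0
    return int(((theta + 15.0) % 360.0) // 30.0) + 1
```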
Step 3: establish separate Hidden Markov Models for the hand-shape feature and the direction-angle feature. The detailed process is:
Step S31: for the extracted Hu-moment hand-shape features, use k-means vector quantization to convert the feature vectors into a discrete feature-label sequence as the input of the HMM;
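Step S31 can be sketched with scikit-learn's KMeans; the codebook size is illustrative, since the patent does not fix it.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(hu_vectors, n_codes=16):
    """Fit a k-means codebook over per-frame Hu feature vectors (step S31);
    n_codes = 16 is an illustrative choice."""
    return KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit(hu_vectors)

def to_label_sequence(codebook, hu_sequence):
    """Turn one gesture's sequence of Hu vectors into the discrete label
    sequence fed to the hand-shape HMM."""
    return codebook.predict(np.asarray(hu_sequence))
```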
Step S32: a complete parameter set of an HMM can be represented by a five-tuple λ = (N, M, A, B, π), where N is the number of hidden states of the HMM, M is the number of observation values, A = {a_ij} is the N × N state-transition probability matrix, B = {b_j(k)} is the N × M observation probability matrix, and π = {π_1, π_2, ..., π_N} is the initial state distribution. In the model initialization stage, N can be chosen freely; for the hand-shape HMM, M is the number of hand shapes to be recognized, and for the trajectory HMM it is the number of quantized direction vectors, i.e. 12. The transition matrix A is initialized as a left-right model whose self-transition probability a_ii depends on the average duration d of each hidden state:

$a_{ii} = 1 - \dfrac{1}{d}$ (18)

$d = \bar{T}/N$ (19)

where $\bar{T}$ is the average sample length. The observation probability matrix B is initialized uniformly:

$b_j(k) = 1/M$ (20)

The initial state is the first state, so it is determined as:

$\pi = [1\ 0\ \ldots\ 0]^T$ (21)

This completes the determination of each initial parameter of the model;
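A sketch of this initialisation in NumPy, assuming the usual left-right reading of the text (self-transition $a_{ii} = 1 - 1/d$ with $d = \bar{T}/N$ and the remaining mass on the next state):

```python
import numpy as np

def init_left_right_hmm(n_states, n_symbols, mean_seq_len):
    """Initial HMM parameters as in step S32 (left-right structure assumed)."""
    d = mean_seq_len / n_states            # average duration of each hidden state
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = 1.0 - 1.0 / d            # eq. (18)
        A[i, i + 1] = 1.0 / d
    A[-1, -1] = 1.0                        # final state is absorbing
    B = np.full((n_states, n_symbols), 1.0 / n_symbols)   # uniform observations, eq. (20)
    pi = np.zeros(n_states); pi[0] = 1.0   # start in the first state, eq. (21)
    return A, B, pi
```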
Step S33: train the parameters of the hand-shape HMM and the trajectory HMM respectively with the Baum-Welch algorithm. The detailed Baum-Welch process is as follows. Using the posterior probability functions, one first obtains

$\gamma_t(i) = P(q_t = s_i \mid O, \lambda)$ (22)

$\xi_t(i, j) = P(q_t = s_i,\, q_{t+1} = s_j \mid O, \lambda)$ (23)

and the parameter re-estimation formulas are

$\bar{\pi}_i = P(q_1 = s_i \mid O, \lambda) = \gamma_1(i)$ (24)

$\bar{a}_{ij} = \dfrac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad \bar{b}_j(k) = \dfrac{\sum_{t:\,o_t = v_k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$ (25)(26)

from which the new model parameters are obtained.
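For concreteness, a compact single-sequence version of this re-estimation is sketched below (scaled forward-backward passes, then the updates (24)-(26)); a real trainer would accumulate these statistics over all training sequences of a gesture class, which the sketch omits.

```python
import numpy as np

def baum_welch(obs, A, B, pi, n_iter=20):
    """Single-sequence Baum-Welch sketch (step S33). obs holds discrete labels."""
    obs = np.asarray(obs)
    A, B, pi = A.copy(), B.copy(), pi.copy()
    N, T = A.shape[0], len(obs)
    for _ in range(n_iter):
        # scaled forward pass
        alpha, c = np.zeros((T, N)), np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # scaled backward pass
        beta = np.zeros((T, N)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        # posteriors: gamma_t(i), eq. (22), and xi_t(i, j), eq. (23)
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((T - 1, N, N))
        for t in range(T - 1):
            xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]
            xi[t] /= xi[t].sum()
        # re-estimation, eqs. (24)-(26)
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(B.shape[1]):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return A, B, pi
```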
Step 4: take the outputs of the hand-shape HMM and the trajectory-feature HMM as features and perform gesture recognition with a naive Bayes classifier; specifically:
Step S41: input another part of the training samples into the trained hand-shape HMM and trajectory HMM, obtaining the output of the hand-shape HMM and the output of the trajectory HMM respectively;
Step S42: take the outputs obtained in step S41 as a 2-dimensional feature vector X = {x1, x2}, where x1 and x2 are the numbers of the recognition results of the hand shape and the trajectory respectively; the gestures to be recognized form n classes $C_1, \ldots, C_n$, and for each class compute

$P(x_k \mid C_i) = s_k / s_i$

where $s_k$ is the number of training samples of class $C_i$ that take the given value on attribute $A_k$ and $s_i$ is the total number of samples of $C_i$;
Step S43: for a new input feature vector X, according to Bayes' theorem,

$P(C_i \mid X) \propto P(X \mid C_i)P(C_i)$, with $P(X \mid C_i) = \prod_k P(x_k \mid C_i)$

and the gesture class with the maximum probability value is selected as the recognition result.
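The fusion stage then reduces to counting, as sketched below; the add-one (Laplace) smoothing is an assumption to keep unseen combinations from zeroing out a class, and is not stated in the patent.

```python
import numpy as np

def train_fusion(hmm_outputs, labels, n_hand, n_traj, n_classes):
    """Estimate P(x_k | C_i) from counts (step S42), with Laplace smoothing.
    hmm_outputs[s] = (x1, x2): winning hand-shape and trajectory HMM indices."""
    p_hand = np.ones((n_classes, n_hand))    # +1 smoothing (assumption)
    p_traj = np.ones((n_classes, n_traj))
    prior = np.zeros(n_classes)
    for (x1, x2), c in zip(hmm_outputs, labels):
        p_hand[c, x1] += 1; p_traj[c, x2] += 1; prior[c] += 1
    p_hand /= p_hand.sum(axis=1, keepdims=True)
    p_traj /= p_traj.sum(axis=1, keepdims=True)
    return p_hand, p_traj, prior / prior.sum()

def classify(x1, x2, p_hand, p_traj, prior):
    """Step S43: argmax over classes of P(x1 | Ci) * P(x2 | Ci) * P(Ci)."""
    return int(np.argmax(p_hand[:, x1] * p_traj[:, x2] * prior))
```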

Claims (9)

1. A dynamic gesture identification method based on Kinect 2, characterised in that Hidden Markov Models are established separately for the trajectory feature and the hand-shape feature of a dynamic gesture, and gesture recognition is performed by naive Bayes classification with the hand-shape recognition result and the trajectory recognition result as input features.
2. The dynamic gesture identification method based on Kinect 2 as claimed in claim 1, characterised by comprising the following steps:
Step 1: obtaining the three-dimensional position of the palm centre using the skeleton tracking technique of Kinect 2 and mapping it into the depth image to obtain a gesture depth image;
Step 2: preprocessing the gesture depth image and the motion trajectory respectively, and extracting the hand-shape feature and the trajectory direction-angle feature;
Step 3: establishing separate Hidden Markov Models for the hand-shape feature and the trajectory direction-angle feature, obtaining a hand-shape HMM and a trajectory HMM;
Step 4: taking the outputs of the hand-shape HMM and the trajectory HMM obtained in step 3 as features, and performing gesture recognition with a naive Bayes classifier.
3. The dynamic gesture identification method based on Kinect 2 as claimed in claim 2, characterised in that obtaining the three-dimensional position of the palm centre using the skeleton tracking technique of Kinect 2 and mapping it into the depth image to obtain the gesture depth image in step 1 comprises the following steps:
Step S11: acquiring the depth data, body index data, and body data with Kinect 2;
Step S12: according to the body index data, if there are several human bodies, selecting, according to the depth information, the body nearest to the Kinect 2 as the target and extracting the space coordinates of its right palm-centre skeleton point and right wrist skeleton point;
Step S13: using the MapCameraPointToDepthSpace function of the Kinect 2 SDK to transform the palm-centre point and wrist point from the camera coordinate system into depth space, obtaining the positions of the palm-centre and wrist skeleton points in the depth image;
Step S14: with the palm-centre point as the centre, drawing a circle whose radius is 1.5 times the palm-to-wrist distance to segment the gesture and obtain the hand image; then, taking the depth of the wrist point as a threshold, removing pixels deeper than this value to obtain the complete gesture depth image.
4. The dynamic gesture identification method based on Kinect 2 according to claim 2, characterised in that preprocessing the gesture depth image and the motion trajectory respectively in step 2 and extracting the hand-shape feature and the trajectory direction-angle feature comprises the following steps:
Step S21: binarizing the gesture depth image obtained in step 1, applying median filtering to the resulting binary image to remove salt-and-pepper noise, and applying morphological erosion and dilation to remove holes and stray points;
Step S22: extracting the Hu invariant moments of the gesture image as the hand-shape feature; applying Kalman-filter tracking to the three-dimensional motion of the palm centre to obtain the processed palm-centre motion trajectory, and extracting the direction-angle feature of the palm-centre trajectory as the trajectory direction-angle feature.
5. The dynamic gesture identification method based on Kinect 2 according to claim 4, characterised in that the Hu invariant moments of the gesture image extracted as the hand-shape feature in step S22 comprise the following seven invariant moments:

$M_1 = \eta_{20} + \eta_{02}$

$M_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$

$M_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$

$M_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$

$M_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$

$M_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$

$M_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$

where the normalized central moment is $\eta_{pq} = \mu_{pq}/\mu_{00}^{\rho}$ with $\rho = (p+q)/2 + 1$ and $\mu_{pq} = \sum_{x=1}^{N}\sum_{y=1}^{M}(x-\bar{x})^p(y-\bar{y})^q f(x,y)$; N and M are respectively the height and width of the image, and $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$ are the coordinates of the image centroid.
6. The dynamic gesture identification method based on Kinect 2 according to claim 4, characterised in that Kalman-filter tracking is applied to the three-dimensional motion of the palm centre in step S22 to obtain the processed palm-centre motion trajectory, and the direction-angle feature of the palm-centre trajectory is extracted as the trajectory direction-angle feature, where the direction angle is calculated by

$\theta = \arctan\dfrac{y_{i+1} - y_i}{x_{i+1} - x_i}$

θ represents the angle between adjacent trajectory points, and $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ are the coordinates of adjacent trajectory points.
7. The dynamic gesture identification method based on Kinect 2 according to claim 6, characterised in that the obtained direction angles are quantization-encoded into 12 direction vectors as follows:
When θ ≥ 15°: f = 13 - k
When θ < 15°: f = (k + 7) % 12
where k is the index of the 30° sector in which θ falls, and the value of f is the quantized feature of the angle, an integer from 1 to 12.
8. The dynamic gesture identification method based on Kinect 2 according to claim 2, characterised in that establishing separate Hidden Markov Models for the hand-shape feature and the trajectory direction-angle feature in step 3, obtaining the hand-shape HMM and the trajectory HMM, comprises the following steps:
Step S31: for the extracted Hu-moment hand-shape features, using k-means vector quantization to convert the feature vectors into a discrete feature-label sequence as the input of the HMM;
Step S32: taking one part of the training samples and initializing the parameters of the hand-shape HMM and the trajectory HMM respectively;
Step S33: training the parameters of the hand-shape HMM and the trajectory HMM respectively with the Baum-Welch algorithm, and repeating steps S32 and S33 for every kind of gesture to obtain the hand-shape HMM and the trajectory HMM of each gesture.
9. The dynamic gesture identification method based on Kinect 2 according to claim 2, characterised in that taking the outputs of the hand-shape HMM and the trajectory HMM obtained in step 3 as features and performing gesture recognition with a naive Bayes classifier in step 4 comprises the following steps:
Step S41: inputting another part of the training samples into the trained hand-shape HMM and trajectory HMM, obtaining the output of the hand-shape HMM and the output of the trajectory HMM respectively;
Step S42: taking the outputs obtained in step S41 as a 2-dimensional feature vector X = {x1, x2}, where x1 and x2 are the numbers of the recognition results of the hand shape and the trajectory respectively; the gestures to be recognized form n classes, and for each class computing

$P(x_k \mid C_i) = s_k / s_i$

where $s_k$ is the number of training samples of class $C_i$ that take the given value on attribute $A_k$ and $s_i$ is the total number of samples of $C_i$;
Step S43: for the input feature vector X, according to Bayes' theorem, selecting the gesture class that maximizes $P(X \mid C_i)P(C_i)$, with $P(X \mid C_i) = \prod_k P(x_k \mid C_i)$, as the recognition result.
CN201611096405.5A 2016-12-02 2016-12-02 Dynamic gesture identifying method based on kinect 2 Pending CN106682585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611096405.5A CN106682585A (en) 2016-12-02 2016-12-02 Dynamic gesture identifying method based on kinect 2

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611096405.5A CN106682585A (en) 2016-12-02 2016-12-02 Dynamic gesture identifying method based on kinect 2

Publications (1)

Publication Number Publication Date
CN106682585A 2017-05-17

Family

ID=58866330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611096405.5A Pending CN106682585A (en) 2016-12-02 2016-12-02 Dynamic gesture identifying method based on kinect 2

Country Status (1)

Country Link
CN (1) CN106682585A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909003A (en) * 2017-10-16 2018-04-13 华南理工大学 A kind of gesture identification method for large vocabulary
CN109196438A (en) * 2018-01-23 2019-01-11 深圳市大疆创新科技有限公司 A kind of flight control method, equipment, aircraft, system and storage medium
CN109461203A (en) * 2018-09-17 2019-03-12 百度在线网络技术(北京)有限公司 Gesture three-dimensional image generating method, device, computer equipment and storage medium
CN110232321A (en) * 2019-05-10 2019-09-13 深圳奥比中光科技有限公司 Detection method, device, terminal and the computer storage medium of finger tip click location
CN111062312A (en) * 2019-12-13 2020-04-24 RealMe重庆移动通信有限公司 Gesture recognition method, gesture control method, device, medium and terminal device
CN111897433A (en) * 2020-08-04 2020-11-06 吉林大学 Method for realizing dynamic gesture recognition and control in integrated imaging display system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345626A (en) * 2013-07-18 2013-10-09 重庆邮电大学 Intelligent wheelchair static gesture identification method
CN103353935A (en) * 2013-07-19 2013-10-16 电子科技大学 3D dynamic gesture identification method for intelligent home system
CN103390168A (en) * 2013-07-18 2013-11-13 重庆邮电大学 Intelligent wheelchair dynamic gesture recognition method based on Kinect depth information
CN103472916A (en) * 2013-09-06 2013-12-25 东华大学 Man-machine interaction method based on human body gesture recognition
CN103593680A (en) * 2013-11-19 2014-02-19 南京大学 Dynamic hand gesture recognition method based on self incremental learning of hidden Markov model
CN103941866A (en) * 2014-04-08 2014-07-23 河海大学常州校区 Three-dimensional gesture recognizing method based on Kinect depth image
KR20140134803A (en) * 2013-05-14 2014-11-25 중앙대학교 산학협력단 Apparatus and method for gesture recognition using multiclass Support Vector Machine and tree classification
CN105005769A (en) * 2015-07-08 2015-10-28 山东大学 Deep information based sign language recognition method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140134803A (en) * 2013-05-14 2014-11-25 중앙대학교 산학협력단 Apparatus and method for gesture recognition using multiclass Support Vector Machine and tree classification
CN103345626A (en) * 2013-07-18 2013-10-09 重庆邮电大学 Intelligent wheelchair static gesture identification method
CN103390168A (en) * 2013-07-18 2013-11-13 重庆邮电大学 Intelligent wheelchair dynamic gesture recognition method based on Kinect depth information
CN103353935A (en) * 2013-07-19 2013-10-16 电子科技大学 3D dynamic gesture identification method for intelligent home system
CN103472916A (en) * 2013-09-06 2013-12-25 东华大学 Man-machine interaction method based on human body gesture recognition
CN103593680A (en) * 2013-11-19 2014-02-19 南京大学 Dynamic hand gesture recognition method based on self incremental learning of hidden Markov model
CN103941866A (en) * 2014-04-08 2014-07-23 河海大学常州校区 Three-dimensional gesture recognizing method based on Kinect depth image
CN105005769A (en) * 2015-07-08 2015-10-28 山东大学 Deep information based sign language recognition method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
RAJAT SHRIVASTAVA: "A Hidden Markov Model based Dynamic Hand Gesture Recognition System using OpenCV", 2013 3rd IEEE International Advance Computing Conference (IACC) *
宓超 et al.: "Machine Vision for Cargo Handling and Its Applications", 31 January 2016 *
李国臻: "Research on Multi-channel Sign-Language Information Fusion Methods", China Masters' Theses Full-text Database, Information Science and Technology *
金欢: "Research on Vision-Based Gesture Recognition Technology", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909003A (en) * 2017-10-16 2018-04-13 华南理工大学 A kind of gesture identification method for large vocabulary
CN107909003B (en) * 2017-10-16 2019-12-10 华南理工大学 gesture recognition method for large vocabulary
CN109196438A (en) * 2018-01-23 2019-01-11 深圳市大疆创新科技有限公司 A kind of flight control method, equipment, aircraft, system and storage medium
CN109461203A (en) * 2018-09-17 2019-03-12 百度在线网络技术(北京)有限公司 Gesture three-dimensional image generating method, device, computer equipment and storage medium
CN110232321A (en) * 2019-05-10 2019-09-13 深圳奥比中光科技有限公司 Detection method, device, terminal and the computer storage medium of finger tip click location
CN111062312A (en) * 2019-12-13 2020-04-24 RealMe重庆移动通信有限公司 Gesture recognition method, gesture control method, device, medium and terminal device
WO2021115181A1 (en) * 2019-12-13 2021-06-17 RealMe重庆移动通信有限公司 Gesture recognition method, gesture control method, apparatuses, medium and terminal device
CN111062312B (en) * 2019-12-13 2023-10-27 RealMe重庆移动通信有限公司 Gesture recognition method, gesture control device, medium and terminal equipment
CN111897433A (en) * 2020-08-04 2020-11-06 吉林大学 Method for realizing dynamic gesture recognition and control in integrated imaging display system

Similar Documents

Publication Publication Date Title
Zimmermann et al. Learning to estimate 3d hand pose from single rgb images
CN106682585A (en) Dynamic gesture identifying method based on kinect 2
CN107609459B (en) A kind of face identification method and device based on deep learning
CN110852182B (en) Depth video human body behavior recognition method based on three-dimensional space time sequence modeling
Yoon et al. Hand gesture recognition using combined features of location, angle and velocity
CN109086706B (en) Motion recognition method based on segmentation human body model applied to human-computer cooperation
Megavannan et al. Human action recognition using depth maps
US10068131B2 (en) Method and apparatus for recognising expression using expression-gesture dictionary
CN108256421A (en) A kind of dynamic gesture sequence real-time identification method, system and device
Feng et al. Depth-projection-map-based bag of contour fragments for robust hand gesture recognition
CN106097381B (en) A kind of method for tracking target differentiating Non-negative Matrix Factorization based on manifold
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN106687989A (en) Method and system of facial expression recognition using linear relationships within landmark subsets
JP2009514109A (en) Discriminant motion modeling for tracking human body motion
Hobson et al. HEp-2 staining pattern recognition at cell and specimen levels: datasets, algorithms and results
Yao et al. Real-time hand pose estimation from RGB-D sensor
Rao et al. Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera.
CN105893942B (en) A kind of sign Language Recognition Method of the adaptive H MM based on eSC and HOG
CN112379779B (en) Dynamic gesture recognition virtual interaction system based on transfer learning
Chen et al. Silhouette-based object phenotype recognition using 3D shape priors
CN109558855B (en) A kind of space gesture recognition methods combined based on palm contour feature with stencil matching method
CN103985143A (en) Discriminative online target tracking method based on videos in dictionary learning
Rao et al. Neural network classifier for continuous sign language recognition with selfie video
CN104077742A (en) GABOR characteristic based face sketch synthetic method and system
Neverova Deep learning for human motion analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170517)