CN112084898A - Assembling operation action recognition method based on static and dynamic separation

Assembling operation action recognition method based on static and dynamic separation

Info

Publication number
CN112084898A
Authority
CN
China
Prior art keywords
gesture
finger
action
value
gestures
Prior art date
Legal status
Granted
Application number
CN202010863071.XA
Other languages
Chinese (zh)
Other versions
CN112084898B (en)
Inventor
刘永
杨明顺
高新勤
万鹏
李斌鹏
史晟睿
乔琦
王祥
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN202010863071.XA
Publication of CN112084898A
Application granted
Publication of CN112084898B
Legal status: Active (granted)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/29 Graphical models, e.g. Bayesian networks
    • G06F 18/295 Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models

Abstract

The invention discloses an assembly operation action recognition method based on static and dynamic separation, which specifically comprises the following steps: acquiring action gestures and dividing the corresponding raw data into training samples and recognition samples; extracting effective data segments from the collected raw data of the action gestures; calculating a feature threshold from the action gestures in the training samples and dividing the action gestures in the recognition samples according to this threshold; and feeding the feature values of the gesture-invariant and gesture-variant gestures in the divided recognition samples into a KNN recognition model and a GMM-HMM recognition model, respectively, for training, to obtain a recognition model for each of the two gesture types. By analysing the feature values of gesture-invariant and gesture-variant gestures, the method extracts a threshold, uses it to distinguish the gesture types effectively, obtains a separate recognition model for each type, and thereby improves recognition accuracy and recognition speed.

Description

Assembling operation action recognition method based on static and dynamic separation
Technical Field
The invention belongs to the technical field of gesture motion recognition methods, and relates to an assembly operation motion recognition method based on static and dynamic separation.
Background
As a key technology for realizing novel intelligent human-computer interaction systems and virtual reality systems, gesture recognition has long been a research topic attracting wide attention at home and abroad, both in theory and in methods.
Whether in gesture recognition research based on wearable devices or in sign language recognition based on computer vision, the recognition process comprises five steps: gesture data acquisition, gesture feature recognition, gesture tracking, gesture classification and gesture instruction mapping. Among these, research on methods such as gesture feature recognition, gesture tracking and gesture classification is key to solving limb or gesture recognition and has received close attention from scholars. Algorithms such as hidden Markov models, support vector machines, neural networks and deep learning have become the main means of gesture recognition; with certain improvements, some of these algorithms reach recognition accuracies above 95%, laying a solid foundation for further research on human-computer cooperation.
In general, current research on gesture recognition is quite rich and comprehensive, but some deficiencies remain. In data preprocessing and feature extraction, most studies directly set the features used for training, learning and recognition without analysing their feature values, even though differences in feature values can lead to different recognition performance. In terms of gesture recognition methods, most studies adopt a single recognition scheme, and the effectiveness of that scheme is rarely verified.
Disclosure of Invention
The invention aims to provide an assembly operation action recognition method based on static and dynamic separation, which extracts a threshold by analysing the feature values of gesture-invariant and gesture-variant gestures, effectively distinguishes the gesture types according to this threshold, obtains a separate recognition model for each of the two gesture types, and improves recognition accuracy and recognition speed.
The technical scheme adopted by the invention is that the assembling operation action recognition method based on static and dynamic separation is implemented according to the following steps:
step 1, dividing a complete operation action into n action gestures, collecting M groups of original data for each action gesture, and dividing the original data corresponding to the action gesture into a training sample and an identification sample;
step 2, extracting effective data segments of the original data of each action gesture collected in the step 1;
step 3, calculating a characteristic threshold according to the motion gestures in the training sample, and then dividing the motion gestures in the recognition sample into gesture-changing gestures and gesture-invariant gestures according to the characteristic threshold;
step 4, inputting the characteristic values of the gesture invariant gestures in the recognition samples divided in the step 3 into a KNN recognition model for training to obtain a recognition model of the gesture invariant gestures;
and 5, inputting the characteristic value of the gesture change type gesture in the recognition sample divided in the step 3 into a GMM-HMM recognition model for training to obtain the recognition model of the gesture change type gesture.
The present invention is also characterized in that,
the original data of each action gesture in the step 1 comprises five finger tip coordinates of the human hand: fti(i ═ 1,2,3,4,5), the five finger-heel joint point coordinates of the human hand: fbi(i ═ 1,2,3,4,5), finger length: l isi(i ═ 1,2,3,4,5), palm center point coordinates: p, palm vector: h, pointing to the inner side of the palm, with the palm facing: f, the direction of the palm center pointing to the fingers, the speed of the five fingertips: vti(i ═ 1,2,3,4,5), palm center velocity: vz
The extraction of the effective data segment in the step 2 specifically comprises the following steps:
setting a speed threshold value V, then respectively taking out an active segment interval of which the speed values in continuous N frames of the palm center speed, the thumb finger tip speed, the index finger tip speed, the middle finger tip speed, the ring finger tip speed and the little finger tip speed are not less than the threshold value V, then taking the active segment interval in the original data as effective data to finish the extraction of the effective segment data, and correspondingly extracting M groups of effective segment data from M groups of original data of the same action gesture.
The step 3 specifically comprises the following steps:
step 3.1, manually dividing each action gesture in the training sample into a gesture variation type and a gesture invariant according to the gesture of each action gesture;
step 3.2, according to the effective segment data extracted in step 2, calculating, for each frame of each effective data segment of each action gesture, the elevation angle of each finger plane, α_i, the finger opening degree, β_i, and the finger curvature, μ_i, as feature values, with i = 1,2,3,4,5; the j-th frame pose of each action gesture is then described by a 15-dimensional feature vector:
O_j = f_j(α_1, …, α_5, β_1, …, β_5, μ_1, …, μ_5);
step 3.2, for each feature value, taking the difference between its maximum and minimum over the N frames of the same effective data segment of the same action gesture in the training sample to obtain a range value, and taking the range values as gesture state change amount feature values, giving the gesture state change amount feature vector:
C_m = (α_1c, …, α_5c, β_1c, …, β_5c, μ_1c, …, μ_5c)
wherein m = 1,2, …, M, and C_m denotes the gesture state change amount feature vector of the m-th effective data segment of the same action gesture; α_1c, …, α_5c denote the range values of the finger-plane elevation of each finger over the N frames of the m-th effective data segment; β_1c, …, β_5c denote the range values of each finger opening degree over those N frames; μ_1c, …, μ_5c denote the range values of each finger curvature over those N frames;
step 3.3, comparing, feature by feature, the M gesture state change amount feature vectors corresponding to each action gesture, to obtain the maximum and minimum of the elevation range value, the opening-degree range value and the curvature range value of each finger for that action gesture;
step 3.4, assuming that the training samples divided in step 3.1 contain a posture change type action gestures and b posture invariant action gestures: for each feature (the elevation range value, opening-degree range value and curvature range value of each finger), comparing the maxima of the b posture invariant action gestures and taking the largest of these b maxima; likewise comparing the minima of the a posture change type action gestures and taking the smallest of these a minima; then, for each feature, subtracting the posture change type minimum from the posture invariant maximum; the features whose difference is less than 0 are taken as threshold-distinguishing features, and the posture change type gesture range value corresponding to such a feature is taken as the feature threshold;
and step 3.5, selecting the feature threshold corresponding to any threshold-distinguishing feature and calculating the range value of that feature for an action gesture in the recognition sample; if the range value is larger than the feature threshold, the action gesture of that sample is identified as a posture change type gesture, and if it is smaller than the feature threshold, the action gesture of that sample is identified as a posture invariant gesture.
In step 3.2, α_1c, …, α_5c are calculated as:
α_ic = α_maxc - α_minc
wherein i = 1,2,3,4,5, α_maxc denotes the maximum finger-plane elevation of finger i over the N frames of action gestures in the same effective data segment, and α_minc denotes the corresponding minimum;
β_1c, …, β_5c are calculated as:
β_ic = β_maxc - β_minc
wherein β_maxc denotes the maximum opening degree of finger i over the N frames of action gestures in the same effective data segment, and β_minc denotes the corresponding minimum;
μ_1c, …, μ_5c are calculated as:
μ_ic = μ_maxc - μ_minc
wherein μ_maxc denotes the maximum curvature of finger i over the N frames of action gestures in the same effective data segment, and μ_minc denotes the corresponding minimum.
The elevation of the finger plane is calculated as:
[formula given as an image in the original]
The finger opening degree is calculated as:
[formula given as an image in the original]
when i = 5, i + 1 is taken as 1, so that β_5 is the angle between the thumb and the little finger;
The finger curvature is calculated as:
[formula given as an image in the original]
where L_i is the finger length, i.e. the total length of the three bone segments from the fingertip to the finger-root joint.
The step 4 specifically comprises the following steps:
adopting the KNN algorithm as the static gesture recognition algorithm, establishing a KNN recognition model, and extracting the 15-dimensional feature vectors of the gesture invariant gestures in the recognition samples divided in step 3, i.e. the j-th frame pose feature vector of the action gesture:
O_j = f_j(α_1, …, α_5, β_1, …, β_5, μ_1, …, μ_5)
these feature vectors are then fed into the KNN model for training to obtain the recognition model of gesture invariant gestures.
The step 5 specifically comprises the following steps:
establishing a GMM-HMM recognition model, and extracting the 19-dimensional feature vectors of the gesture change type gestures in the recognition samples divided in step 3, i.e. the j-th frame pose feature vector of the action gesture:
g_j = f_j(α_1, …, α_5, β_1, …, β_5, μ_1, …, μ_5, wh_j, wf_j, a_j, b_j)
these feature vectors are then fed into the GMM-HMM recognition model for training to obtain the recognition model of gesture change type gestures, wherein wh_j denotes the angle between the palm normal direction h_j of the current frame and the palm normal direction h_{j+1} of the previous frame, wf_j denotes the angle between the palm facing direction f_j of the current frame and the palm facing direction f_{j+1} of the previous frame, and a_j, b_j are the two components of the two-dimensional displacement feature.
wh_j is calculated as:
[formula given as an image in the original]
wf_j is calculated as:
[formula given as an image in the original]
the method for determining the two-dimensional displacement characteristics comprises the following steps:
removing Y-axis data in the three-dimensional data of the palm center coordinates P, taking the palm center coordinates of the previous frame of continuous data as the origin, and taking the palm center coordinates P of the current frame as the originxzjObtaining P by projecting XOZ plane code discxzjThe region number where the projection is located is used as the chain code value a of the current framej
Removing Z-axis data in three-dimensional data of palm center coordinates P, taking palm center coordinates of a previous frame of continuous data as an origin, and processing next frame of data PxyjProjecting to XOY plane chain code disc to obtain PxyjThe region number where the projection is located is used as the chain code value b of the current framejForming a two-dimensional displacement feature [ aj,bj];
The specific calculation is as follows:
[formulas for a_j and b_j given as images in the original]
where j is 1,2, …, N.
The invention has the following beneficial effects: the hand description features are divided into hand displacement features, rotation features and posture features, and the raw data are divided into posture-invariant and posture-variant types, so that the current hand state and the change of the hand state over a continuous sequence can be completely described; a speed threshold is used to segment the motion data effectively and complete the extraction of effective segment data; by analysing the feature values of the two gesture types, the feature types and thresholds suitable for distinguishing them are obtained, the state change amount threshold is extracted, the test data are correctly separated, and the gesture types are effectively distinguished; a single-frame gesture recognition method (KNN) is used to model the state-invariant data and a GMM-HMM method is used to model the state-variant data, completing the construction of a static and dynamic separation recognition model and improving recognition accuracy and recognition speed.
Drawings
FIG. 1 is a flowchart of the overall assembly operation action recognition method based on static and dynamic separation according to the present invention;
FIG. 2 is a diagram illustrating the direction of a palm in the static and dynamic separation-based assembly operation recognition method of the present invention;
FIG. 3 is a diagram of the chain-code disc in the assembling operation motion recognition method based on static and dynamic separation according to the present invention;
FIG. 4 is a schematic diagram of GMM-HMM algorithm recognition in the static and dynamic separation-based assembly operation motion recognition method of the present invention;
FIG. 5 is a line diagram of x, y, z coordinates of data points of a human hand in a static state in an example of the assembling operation action recognition method based on static and dynamic separation according to the present invention;
FIG. 6 is a line drawing of x, y, z coordinates of data points of a human hand processed by a window moving average method in an example of an assembly work motion recognition method based on static and dynamic separation according to the present invention;
FIG. 7 is a principal component analysis diagram of PCA dimensionality reduction data in an example of the assembly work action recognition method based on static and dynamic separation;
FIG. 8 is a diagram of effective segment data extraction in an example of the assembling operation motion recognition method based on static and dynamic separation according to the present invention;
FIG. 9 is a comparison graph of finger raising degree and finger tip angle characteristic values of nine gestures in an example of the assembling operation motion recognition method based on static and dynamic separation according to the present invention;
FIG. 10 is a comparison graph of the feature values of the curvature of fingers of nine gestures in the example of the assembling operation action recognition method based on static and dynamic separation according to the present invention;
FIG. 11 is a diagram of characteristic values of changes in hand orientations of nine gestures in an example of an assembly work action recognition method based on static and dynamic separation according to the present invention;
FIG. 12 is a diagram of quantized feature values of nine direction angles in an example of the assembling work motion recognition method based on static and dynamic separation according to the present invention;
FIG. 13 is a comparison graph of the difference values of nine gesture single features in the example of the assembling work action recognition method based on static and dynamic separation according to the present invention;
FIG. 14 is a test data classification diagram in an example of the assembling operation action recognition method based on static and dynamic separation according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention discloses an assembly operation action recognition method based on static and dynamic separation, the flow of which is shown in figure 1 and is implemented according to the following steps:
step 1, dividing a complete operation action into n action gestures, collecting M groups of original data for each action gesture, and dividing the original data corresponding to the action gesture into a training sample and an identification sample; wherein the raw data of each action gesture comprises five finger-tip coordinates of the human hand: fti(i ═ 1,2,3,4,5), the five finger-heel joint point coordinates of the human hand: fbi(i ═ 1,2,3,4,5), finger length: l isi(i ═ 1,2,3,4,5), palm center point coordinates: p, palm vector: h, pointing to the inner side of the palm, with the palm facing: f, the palm center points to the direction of the fingers, as shown in fig. 2, the finger tip speed of the five fingers: vti(i ═ 1,2,3,4,5), palm center velocity: vz
Step 2, extracting effective data segments of the original data of each action gesture collected in the step 1, specifically:
setting a speed threshold value V, then respectively taking out an active segment interval of which the speed values in continuous N frames of the palm center speed, the thumb finger tip speed, the index finger tip speed, the middle finger tip speed, the ring finger tip speed and the little finger tip speed are not less than the threshold value V, then taking the active segment interval in the original data as effective data to finish the extraction of the effective segment data, and correspondingly extracting M groups of effective segment data from M groups of original data of the same action gesture;
step 3, calculating a characteristic threshold according to the motion gestures in the training sample, and then dividing the motion gestures in the recognition sample into gesture-changing gestures and gesture-invariant gestures according to the characteristic threshold;
the method specifically comprises the following steps:
step 3.1, manually dividing each action gesture in the training sample into a gesture variation type and a gesture invariant according to the gesture of each action gesture;
step 3.2, according to the effective segment data extracted in step 2, calculating, for each frame of each effective data segment of each action gesture, the elevation angle of each finger plane, α_i, the finger opening degree, β_i, and the finger curvature, μ_i, as feature values, with i = 1,2,3,4,5; the j-th frame pose of each action gesture is then described by a 15-dimensional feature vector:
O_j = f_j(α_1, …, α_5, β_1, …, β_5, μ_1, …, μ_5);
step 3.2, for each feature value, taking the difference between its maximum and minimum over the N frames of the same effective data segment of the same action gesture in the training sample to obtain a range value, and taking the range values as gesture state change amount feature values, giving the gesture state change amount feature vector:
C_m = (α_1c, …, α_5c, β_1c, …, β_5c, μ_1c, …, μ_5c)
wherein m = 1,2, …, M, and C_m denotes the gesture state change amount feature vector of the m-th effective data segment of the same action gesture; α_1c, …, α_5c denote the range values of the finger-plane elevation of each finger over the N frames of the m-th effective data segment; β_1c, …, β_5c denote the range values of each finger opening degree over those N frames; μ_1c, …, μ_5c denote the range values of each finger curvature over those N frames, where α_1c, …, α_5c are calculated as:
α_ic = α_maxc - α_minc
wherein i = 1,2,3,4,5, α_maxc denotes the maximum finger-plane elevation of finger i over the N frames of action gestures in the same effective data segment, and α_minc denotes the corresponding minimum;
β_1c, …, β_5c are calculated as:
β_ic = β_maxc - β_minc
wherein β_maxc denotes the maximum opening degree of finger i over the N frames of action gestures in the same effective data segment, and β_minc denotes the corresponding minimum;
μ_1c, …, μ_5c are calculated as:
μ_ic = μ_maxc - μ_minc
wherein μ_maxc denotes the maximum curvature of finger i over the N frames of action gestures in the same effective data segment, and μ_minc denotes the corresponding minimum;
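The range computation itself reduces each effective segment to one 15-dimensional state change amount vector C_m; a minimal sketch, assuming the per-frame pose vectors O_j of one segment are stacked row-wise:

```python
import numpy as np

def state_change_vector(pose_frames):
    """pose_frames: array of shape (N, 15) holding O_j for the N frames of one
    effective data segment. Returns the 15-dim range vector C_m (per-feature
    maximum minus minimum), i.e. the gesture state change amount feature vector."""
    return pose_frames.max(axis=0) - pose_frames.min(axis=0)
```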
the elevation of the finger plane is calculated as:
[formula given as an image in the original]
the finger opening degree is calculated as:
[formula given as an image in the original]
when i = 5, i + 1 is taken as 1, so that β_5 is the angle between the thumb and the little finger;
the finger curvature is calculated as:
[formula given as an image in the original]
where L_i is the finger length, i.e. the total length of the three bone segments from the fingertip to the finger-root joint;
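The exact formulas for α_i, β_i and μ_i are given only as images in the original document, so the sketch below uses one plausible reading as an assumption: the elevation is the angle of the finger vector (Ft_i - Fb_i) above the palm plane, the opening degree is the angle between adjacent finger vectors (with β_5 between the little finger and the thumb), and the curvature is the straight-line fingertip-to-root distance divided by the finger length L_i.

```python
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 3-D vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def pose_features(ft, fb, lengths, palm_normal):
    """ft, fb: (5, 3) fingertip and finger-root coordinates Ft_i, Fb_i;
    lengths: (5,) finger lengths L_i; palm_normal: palm normal vector h.
    Returns an assumed 15-dim vector (alpha_1..5, beta_1..5, mu_1..5)."""
    fingers = ft - fb                                  # finger direction vectors
    # elevation: 90 deg minus the angle to the palm normal, i.e. the angle of
    # the finger vector above the palm plane (assumed reading)
    alpha = np.array([90.0 - angle_between(f, palm_normal) for f in fingers])
    # opening: angle between adjacent fingers, beta_5 between little finger and thumb
    beta = np.array([angle_between(fingers[i], fingers[(i + 1) % 5]) for i in range(5)])
    # curvature: fingertip-to-root distance over the total finger length
    mu = np.linalg.norm(fingers, axis=1) / lengths
    return np.concatenate([alpha, beta, mu])
```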
step 3.3, comparing, feature by feature, the M gesture state change amount feature vectors corresponding to each action gesture, to obtain the maximum and minimum of the elevation range value, the opening-degree range value and the curvature range value of each finger for that action gesture;
step 3.4, assuming that the training samples divided in step 3.1 contain a posture change type action gestures and b posture invariant action gestures: for each feature (the elevation range value, opening-degree range value and curvature range value of each finger), comparing the maxima of the b posture invariant action gestures and taking the largest of these b maxima; likewise comparing the minima of the a posture change type action gestures and taking the smallest of these a minima; then, for each feature, subtracting the posture change type minimum from the posture invariant maximum; the features whose difference is less than 0 are taken as threshold-distinguishing features, and the posture change type gesture range value corresponding to such a feature is taken as the feature threshold;
and step 3.5, selecting the feature threshold corresponding to any threshold-distinguishing feature and calculating the range value of that feature for an action gesture in the recognition sample; if the range value is larger than the feature threshold, the action gesture of that sample is identified as a posture change type gesture, and if it is smaller than the feature threshold, the action gesture of that sample is identified as a posture invariant gesture.
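Steps 3.3 to 3.5 can be sketched as follows, assuming the range vectors C_m of all training segments are grouped per action gesture together with the manual posture-variant / posture-invariant labels of step 3.1; combining several distinguishing features for classification follows the worked example later in the description and is an implementation choice, not prescribed wording.

```python
import numpy as np

def select_feature_thresholds(range_vectors, is_variant):
    """range_vectors: dict gesture -> (M, 15) array of per-segment range vectors C_m.
    is_variant: dict gesture -> bool, the manual label from step 3.1.
    Returns {feature index: threshold} for every feature whose largest
    posture-invariant range stays below the smallest posture-variant range."""
    inv = np.vstack([v for g, v in range_vectors.items() if not is_variant[g]])
    var = np.vstack([v for g, v in range_vectors.items() if is_variant[g]])
    inv_max = inv.max(axis=0)                 # largest range among invariant gestures
    var_min = var.min(axis=0)                 # smallest range among variant gestures
    return {k: var_min[k]                     # variant-side range value as threshold (step 3.4)
            for k in range(inv_max.size) if inv_max[k] - var_min[k] < 0}

def is_posture_variant(range_vector, thresholds):
    """Step 3.5: the segment is posture-variant if its range exceeds the
    threshold of any selected distinguishing feature."""
    return any(range_vector[k] > t for k, t in thresholds.items())
```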
Step 4, inputting the feature values of the gesture invariant gestures in the recognition samples divided in step 3 into a KNN recognition model for training to obtain the recognition model of gesture invariant gestures; specifically: adopting the KNN algorithm as the static gesture recognition algorithm, establishing a KNN recognition model, and extracting the 15-dimensional feature vectors of the gesture invariant gestures in the recognition samples divided in step 3, i.e. the j-th frame pose feature vector of the action gesture:
O_j = f_j(α_1, …, α_5, β_1, …, β_5, μ_1, …, μ_5)
these feature vectors are then fed into the KNN model for training to obtain the recognition model of gesture invariant gestures;
step 5, inputting the feature values of the gesture change type gestures in the recognition samples divided in the step 3 into a GMM-HMM recognition model for training to obtain a recognition model of the gesture change type gestures, wherein a GMM-HMM algorithm recognition schematic diagram is shown in fig. 4, and specifically includes:
establishing a GMM-HMM recognition model, and extracting the 19-dimensional feature vectors of the gesture change type gestures in the recognition samples divided in step 3, i.e. the j-th frame pose feature vector of the action gesture:
g_j = f_j(α_1, …, α_5, β_1, …, β_5, μ_1, …, μ_5, wh_j, wf_j, a_j, b_j)
these feature vectors are then fed into the GMM-HMM recognition model for training to obtain the recognition model of gesture change type gestures, wherein wh_j denotes the angle between the palm normal direction h_j of the current frame and the palm normal direction h_{j+1} of the previous frame, wf_j denotes the angle between the palm facing direction f_j of the current frame and the palm facing direction f_{j+1} of the previous frame, and a_j, b_j are the two components of the two-dimensional displacement feature, where wh_j is calculated as:
[formula given as an image in the original]
and wf_j is calculated as:
[formula given as an image in the original]
the method for determining the two-dimensional displacement characteristics comprises the following steps:
removing Y-axis data in the three-dimensional data of the palm center coordinates P, taking the palm center coordinates of the previous frame of continuous data as the origin, and taking the palm center coordinates P of the current frame as the originxzjThe projected XOZ planar code wheel, as shown in FIG. 3, is 16 fan-shaped disks equally divided on the disk to obtain PxzjThe area number where the projection is located is used as the chain code value a of the current framej
Removing Z-axis data in three-dimensional data of palm center coordinates P, taking palm center coordinates of a previous frame of continuous data as an origin, and processing next frame of data PxyjProjecting to XOY plane chain code disc to obtain PxyjThe region number where the projection is located is used as the chain code value b of the current framejForming a two-dimensional displacement feature [ aj,bj];
The specific calculation is as follows:
[formulas for a_j and b_j given as images in the original]
where j is 1,2, …, N.
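The formulas for wh_j, wf_j, a_j and b_j appear only as images in the original; the sketch below assumes wh_j and wf_j are inter-frame angles between the palm normal and palm facing vectors, and that the chain code maps the projected palm-center displacement to one of 16 equal sectors numbered from the positive axis. The 19-dimensional frame feature g_j is then the 15 pose features plus (wh_j, wf_j, a_j, b_j).

```python
import numpy as np

def frame_angle(v_curr, v_prev):
    """Angle in degrees between the same palm vector in two consecutive frames
    (gives wh_j from the palm normal h and wf_j from the palm facing f)."""
    c = np.dot(v_curr, v_prev) / (np.linalg.norm(v_curr) * np.linalg.norm(v_prev))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def chain_code(p_curr, p_prev, drop_axis):
    """Project the palm-center displacement onto a coordinate plane and return
    the number (0..15) of the 16-sector chain-code disc it falls into.
    drop_axis=1 drops Y (XOZ disc, code a_j); drop_axis=2 drops Z (XOY disc, b_j).
    The angle-to-sector mapping is an assumed convention."""
    d2 = np.delete(p_curr - p_prev, drop_axis)   # 2-D displacement in the chosen plane
    theta = np.arctan2(d2[1], d2[0]) % (2 * np.pi)
    return int(theta // (2 * np.pi / 16))
```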
Examples
In an assembly operation scenario, a common bolt assembly case and a more complex ECU assembly case from a manufacturing enterprise are first analysed to obtain a library of nine gesture actions, as shown in Table 1.
TABLE 1 hand type action library
[Table 1 is given as an image in the original]
1. Action classification
The nine summarized hand-type actions are classified into posture-variant and posture-invariant types according to the three proposed description features: hand displacement features, rotation features and posture features. The specific classification is shown in Table 2.
TABLE 2 action Classification
[Table 2 is given as an image in the original]
2. Data acquisition
Data collection is carried out according to the defined hand-type library of operation action gestures. Gesture data of 4 subjects are collected in the experiment; each subject performs the 9 gestures defined herein, with 50 samples per gesture. After collection, all sample data are integrated and stored in a CSV file; of the 450 samples collected per subject, 360 are used to train the model and the remaining 90 are used to test its accuracy.
Twenty-six parameters, covering the velocity, position, time and other information of the palm center and the five fingers, are collected through the Leap Motion controller. The names and meanings of the acquisition parameters are listed in Table 3.
TABLE 3 acquisition parameter Annotation Table
[Table 3 is given as an image in the original]
3. Data pre-processing
Firstly, extracting the characteristics of original data, then adopting a certain data dimension reduction method to solve the redundancy problem of multi-characteristic data, reducing the data magnitude, and finally using a data standardization method to improve the comparability of the data, wherein the mainly adopted methods comprise data normalization, PCA dimension reduction and Z-SCORE standardization.
(1) Data denoising
As shown in fig. 5, panels a-c are line graphs of the three-dimensional x, y and z coordinate values of the thumb root over 1000 consecutive frames in a static state. The values acquired by the device fluctuate considerably, are discontinuous and contain many small spurious peaks, which does not match the actual coordinates of hand movement.
The raw values are therefore smoothed with a distance-based K-nearest-neighbour approach: the raw data are filtered with a moving-average window, and each raw value is replaced by the mean of the surrounding consecutive data frames to complete the data preprocessing.
With the K-nearest-neighbour parameter set to 20, the coordinate curves processed by the window moving average are shown in fig. 6; the processed X, Y and Z coordinate data are closer to the actual situation and more continuous.
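A minimal sketch of the window moving average used here, with the 20-frame window mentioned above; the exact filtering details of the distance-based K-nearest-neighbour step are not specified in the text and are omitted.

```python
import numpy as np

def moving_average(series, window=20):
    """Smooth a 1-D coordinate series by replacing each value with the mean
    of a sliding window of frames (window = 20 as in this example)."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

# Applied separately to the x, y and z coordinate series of each joint.
```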
(2) PCA dimension reduction
For the 15-dimensional posture-invariant hand posture data of the embodiment, the principal component analysis after dimension reduction is as shown in fig. 7, and the feature dimension is reduced from 15 dimensions to 5 dimensions.
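A sketch of the dimensionality reduction, assuming scikit-learn's PCA and a placeholder feature matrix; the example reduces the 15-dimensional pose features to 5 components.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(200, 15)            # placeholder for the 15-dim pose feature matrix
pca = PCA(n_components=5)              # keep 5 principal components, as in this example
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)   # variance explained by each retained component
```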
(3) Z-Score normalization
Some parameters in the hand representation model are angle values. When a new sample is encountered, the magnitude of the acquired data differs because of individual differences between hands; the Z-Score method is used to reduce the recognition error caused by these individual differences.
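A sketch of the Z-Score step using scikit-learn's StandardScaler (subtract the per-feature mean, divide by the standard deviation); the feature matrix here is a placeholder.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 15)            # placeholder for the pose feature matrix
scaler = StandardScaler()              # Z-Score: subtract the mean, divide by the std
X_std = scaler.fit_transform(X)
```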
4. Analysis and acquisition of hand state thresholds
(1) Efficient work segment segmentation and acquisition
In this example, an interval of the operation sequence is treated as valid when the speed exceeds the threshold of 20000 speed units for 20 consecutive frames.
Taking a screwing action formed by six actions of stretching, grabbing, moving, sleeving, rotating and releasing hands as an example, the example firstly obtains speed values of a palm center and a finger tip of five fingers, takes out an activity section interval with 6 speed values which are all greater than 20000 speed units in continuous 20 frames, and then takes data of the section interval in original data as effective data to finish extraction of the effective section data.
As shown in fig. 8, (a) in fig. 8 is the speed of five fingertips and palm centers of the original data of the screwing operation, (b) is the length of each active segment interval after the active segment is extracted, and (c) is the speed of the active segment after the extraction. As can be seen from fig. 8, the motion data can be effectively segmented by using the speed threshold, and the task of extracting the valid segment data is completed.
(2) Motion threshold extraction based on the amount of morphological feature change: on the basis of the proposed hand posture, hand displacement and hand rotation features, the example obtains, through experiments, feature-parameter plots suitable for distinguishing posture-invariant gestures from posture-variant gestures, and determines the feature values best suited as threshold-distinguishing feature types by comparing and computing the state-change feature values of the two gesture classes in the training samples. The 15-dimensional posture features of the nine summarized actions are classified, extracted and plotted according to feature type. The inter-finger angles of the nine actions are shown as f_11 to f_15 in FIG. 9 and the finger elevations as f_31 to f_35; the abscissa is the time frame and the ordinate is the angle value. In FIG. 9, a-f are the angle feature values of the six gestures stretch, move, sleeve, press-in, pull-out and pry, and g-i are the angle feature values of the three gestures grab, rotate and release; the first six are the posture-invariant gestures defined in this example and the last three are the posture-variant gestures. In FIG. 9 a-f the single-feature angle variation is small, indicating that the angle information changes little while the action gesture moves, whereas in FIG. 9 g-i the single-feature angle variation is large, indicating that the angle information changes markedly during the motion. As shown in FIG. 10, compared with the angle information in FIG. 9, the finger-curvature change information of the first six plots and of the last three plots differs less, the single-feature range of variation being about 0.4. Nevertheless, comparing the two groups, the range of variation of certain curvature features of grab, rotate and release is still clearly larger than the curvature range of the six gestures stretch, move, sleeve, press-in, pull-out and pry. In addition to the hand posture features, the example compares the hand-orientation change features and the quantized direction-angle features of the nine actions. The hand-orientation change features are shown in FIG. 11, where the abscissa is the time frame and the ordinate is the angle value; the quantized direction-angle features are shown in FIG. 12, where the abscissa is the time frame and the ordinate is the chain code value. Compared with the 15-dimensional hand-state features, the hand-orientation features and the quantized direction-angle features differ little between posture-variant and posture-invariant gestures, so the two gesture classes are difficult to distinguish with them. From the above analysis it can be preliminarily determined that the 15 hand-posture features are suitable as the feature class for threshold-based separation of the two data types. The example applies range-value processing to the feature values of the collected 40 × 5 groups of training sample data, obtaining the gesture state change amount feature values by computing the difference between the maximum and minimum within the effective data segment.
At this point, the range values of the posture features of the posture-invariant and posture-variant gestures across the multiple data groups can be compared; the comparison results are shown in Table 4.
TABLE 4 gesture characteristic range comparison table for gesture invariant and gesture variant gestures
[Table 4 is given as an image in the original]
Features 1, 4, 7, 10 and 13 compared in the table are the range values of the elevation features of the thumb, index finger, middle finger, ring finger and little finger, respectively; features 2, 5, 8, 11 and 14 are the range values of the bending-ratio features of those fingers; and features 3, 6, 9, 12 and 15 are the range values of the three-dimensional inter-finger angle features between thumb and index finger, index and middle finger, middle and ring finger, ring and little finger, and little finger and thumb. The maximum, average and minimum values in the table refer to the maximum, average and minimum of the feature range values of the gesture samples over the 4 × 40 = 160 groups of training samples. The features therefore differ considerably between gestures and can effectively characterize them.
To compare the above features more clearly, the maximum range value of the first six posture-invariant gestures is used as a boundary, and the maxima of the first six gestures' features and the minima of the last three gestures' features are selected for plotting, giving the single-feature comparison shown in FIG. 13.
Over all data sets, the example takes the difference between the maximum value of the posture-invariant gestures and the minimum value of the posture-variant gestures and uses the features whose difference is less than 0 as the threshold-distinguishing features. The differences are shown in Table 5.
TABLE 5 gesture invariant gesture and gesture variant gesture single feature difference table
[Table 5 is given as an image in the original]
From the above analysis, when the maximum single-feature range value of the posture-invariant gestures is subtracted from the minimum single-feature range value of the posture-variant gestures, the posture-variant range of features 1, 4 and 13 is found to be larger than the corresponding posture-invariant range. Features 1, 4 and 13 are therefore used to distinguish posture-variant from posture-invariant gestures, and the three feature thresholds are the gesture range values 38.15, 30.50 and 14.6, respectively.
The 40 groups of test data were tested using features 1, 4 and 13, and the results are shown in fig. 14. The nine action data sets (stretching, moving, nesting, pressing in, pulling out, prying, grabbing, rotating and releasing) are spliced in order and then subjected to the threshold test; fig. 14(a) and 14(b) show the classification of the data by feature 1 and feature 4, and fig. 14(c) the classification by feature 13. From panels (a) and (b) it can be concluded that features 1 and 4 distinguish the grab and release gestures well from the posture-invariant gestures, and from fig. 14(c) that feature 13 distinguishes the posture-variant rotation gesture well from the posture-invariant gestures.
The test results show that the adopted thresholds achieve 100% correct discrimination of the test data over the 4 × 10 × 9 = 360 tested samples. By combining these three feature thresholds, the gesture types can therefore be distinguished effectively.
5. Identification experiment and identification result analysis
5.1 gesture invariant gesture experiment
In the case experiment, the gesture data of the six posture-invariant gestures obtained above are processed first: the training data are preprocessed, features are extracted and reduced in dimension, and a single frame taken from any position within the effective data segment is used as input data. For posture-invariant gesture recognition, the total number of recognition samples is 6 × 40 × 4 = 960, and the remaining 6 × 10 × 4 = 240 samples are used to test the accuracy of the obtained model.
The processed data are fed into a KNN model; Z-Score processing is selected when constructing the KNN model, and the recognition model is built with the parameter K = 5. The results show that the model reaches a recognition rate of 100% with a time cost of 0.62 ms, a clear improvement in both recognition accuracy and speed.
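A sketch of the posture-invariant recognizer described above, using scikit-learn's KNeighborsClassifier with K = 5 and Z-Score preprocessing; the feature matrices and labels are placeholders for the single-frame (optionally PCA-reduced) pose features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Placeholder data: one single-frame pose feature vector per sample (here 5-dim
# after PCA) and a label for each of the six posture-invariant gestures.
X_train = np.random.rand(960, 5)
y_train = np.random.randint(0, 6, size=960)
X_test = np.random.rand(240, 5)

scaler = StandardScaler()                       # Z-Score preprocessing, as in the example
knn = KNeighborsClassifier(n_neighbors=5)       # K = 5, as in the example
knn.fit(scaler.fit_transform(X_train), y_train)
pred = knn.predict(scaler.transform(X_test))
```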
5.2 continuous operation gesture recognition experiment
This experiment models the data of the three posture-variant gestures: a 3-channel GMM-HMM recognition model is established and the recognition rate is measured on the test sample data. The training samples for the GMM-HMM model differ from those of the KNN model used for posture-invariant gesture recognition: for posture-variant operation gestures, the hand orientation angle features and the hand motion chain-code features (four features in total) are added to the operation gesture features, so the hand-state features grow from the original 15 dimensions to a 19-dimensional dynamic feature sequence.
When training the GMM-HMM model, one complete motion sample is one data source, so the posture-variant 3-channel GMM-HMM model is trained on 480 samples and tested on 3 × 10 × 4 = 120 samples. A 3-channel GMM-HMM recognition model is built on the obtained data; the experimental results are shown in Table 6, with a recognition rate of 83.3% and a time cost of 2.85 s.
TABLE 6 Complex actions Using GMM-HMM recognition Rate and recognition schedules
Algorithm type | Recognized object | Data processing | Recognition rate | Run time
GMM-HMM | Three posture-variant gestures | Normalization | 83.3% | 2.85 s
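A sketch of a per-class ("3-channel") GMM-HMM recognizer using the hmmlearn library: one GMM-HMM is trained per posture-variant gesture on its 19-dimensional frame sequences, and recognition picks the class with the highest log-likelihood. The numbers of hidden states and mixture components are assumptions, as the patent does not specify them.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

def train_gmm_hmm_models(sequences_by_class, n_states=4, n_mix=2):
    """sequences_by_class: dict gesture name -> list of (N_frames, 19) arrays.
    Trains one GMM-HMM per posture-variant gesture (three models here);
    n_states and n_mix are assumed settings."""
    models = {}
    for name, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                      # concatenated frames of all sequences
        lengths = [len(s) for s in seqs]         # per-sequence frame counts
        m = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(models, sequence):
    """Classify one 19-dim frame sequence by the highest model log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))
```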
A comprehensive analysis of the static and dynamic separation recognition model gives the composite recognition rate:
P = P_i × (P_j × 6 + P_k × 3) / 9
in the formula:
P_i is the threshold discrimination accuracy;
P_j is the posture-invariant recognition rate;
P_k is the posture-variant recognition rate.
The composite recognition speed is:
T = (T_i × 6 + T_j × 3) / 9
in the formula:
T_i is the recognition time of the six posture-invariant gestures;
T_j is the recognition time of the three posture-variant gestures.
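Plugging the experimental values into these two formulas approximately reproduces the composite figures reported below (the composite time matches 950 ms; the composite rate comes out near the reported 94.33%):

```python
P_i, P_j, P_k = 1.00, 1.00, 0.833      # threshold discrimination, KNN and GMM-HMM rates
T_i, T_j = 0.62, 2850.0                # recognition times in milliseconds

P = P_i * (P_j * 6 + P_k * 3) / 9      # composite recognition rate
T = (T_i * 6 + T_j * 3) / 9            # composite recognition time (ms)
print(f"{P:.2%}, {T:.0f} ms")          # roughly 94.4 %, 950 ms
```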
The comprehensive identification rate and speed of the static and dynamic separation identification model obtained by calculating the identification speed and the identification rate are shown in table 7.
TABLE 7 static and dynamic separation recognition model comprehensive recognition rate and speedometer
[Table 7 is given as an image in the original]
As can be seen from the table, in the experiment, the comprehensive recognition rate of the gesture of the KNN-GMM-HMM static and dynamic separation recognition model for recognizing nine gestures is 94.33%, and the comprehensive recognition speed is only 950ms, so that it can be seen that the recognition model based on the static and dynamic separation is effective.
The invention can be used in the process of recognizing the assembly operation action gesture with time sequence, and has the following advantages: 1) according to the invention, the hand description characteristics are divided into hand displacement characteristics, rotation characteristics and posture characteristics, and the original data are divided into posture invariant and posture variant types, so that the state of the current hand and the change of the hand state in a continuous sequence can be completely described; 2) the invention effectively segments the motion data by adopting a speed threshold value to finish the task of extracting the effective segment data; 3) the invention analyzes the characteristic values of the two gestures to obtain the characteristic types and the threshold values suitable for distinguishing the two different types of gestures, completes the extraction of the state change quantity threshold value, realizes the correct distinguishing of the test data and effectively distinguishes the gesture types; 4) when the gesture is recognized, a gesture recognition method (KNN) facing a single frame is adopted for modeling aiming at data with a constant state type, and a GMM-HMM modeling method is adopted for modeling aiming at data with a variable state type, so that the construction of a static and dynamic separation recognition model is completed, and the recognition accuracy and the recognition speed are improved.

Claims (10)

1. The assembling operation action recognition method based on static and dynamic separation is characterized by comprising the following steps:
step 1, dividing a complete operation action into n action gestures, collecting M groups of original data for each action gesture, and dividing the original data corresponding to the action gesture into a training sample and an identification sample;
step 2, extracting effective data segments of the original data of each action gesture collected in the step 1;
step 3, calculating a characteristic threshold according to the motion gestures in the training sample, and then dividing the motion gestures in the recognition sample into gesture-changing gestures and gesture-invariant gestures according to the characteristic threshold;
step 4, inputting the characteristic values of the gesture invariant gestures in the recognition samples divided in the step 3 into a KNN recognition model for training to obtain a recognition model of the gesture invariant gestures;
and 5, inputting the characteristic value of the gesture change type gesture in the recognition sample divided in the step 3 into a GMM-HMM recognition model for training to obtain the recognition model of the gesture change type gesture.
2. The assembly work motion recognition method based on static-dynamic separation according to claim 1, wherein the raw data of each action gesture in step 1 comprise: the five fingertip coordinates of the human hand, Ft_i (i = 1,2,3,4,5); the five finger-root joint coordinates, Fb_i (i = 1,2,3,4,5); the finger lengths, L_i (i = 1,2,3,4,5); the palm center coordinate, P; the palm normal vector, h, pointing to the inner side of the palm; the palm facing vector, f, the direction from the palm center toward the fingers; the fingertip velocities of the five fingers, Vt_i (i = 1,2,3,4,5); and the palm center velocity, V_z.
3. The assembling work action recognition method based on static and dynamic separation as claimed in claim 2, wherein the extracting of the valid data segment in the step 2 is specifically as follows:
setting a speed threshold value V, then respectively taking out an active segment interval of which the speed values in continuous N frames of the palm center speed, the thumb finger tip speed, the index finger tip speed, the middle finger tip speed, the ring finger tip speed and the little finger tip speed are not less than the threshold value V, then taking the active segment interval in the original data as effective data to finish the extraction of the effective segment data, and correspondingly extracting M groups of effective segment data from M groups of original data of the same action gesture.
4. The assembling work action recognition method based on static-dynamic separation according to claim 3, wherein the step 3 is specifically as follows:
step 3.1, manually dividing each action gesture in the training sample into a gesture variation type and a gesture invariant according to the gesture of each action gesture;
step 3.2, according to the effective segment data extracted in step 2, calculating, for each frame of each effective data segment of each action gesture, the elevation angle of each finger plane, α_i, the finger opening degree, β_i, and the finger curvature, μ_i, as feature values, with i = 1,2,3,4,5; the j-th frame pose of each action gesture is then described by a 15-dimensional feature vector:
O_j = f_j(α_1, ..., α_5, β_1, ..., β_5, μ_1, ..., μ_5);
step 3.2, for each feature value, taking the difference between its maximum and minimum over the N frames of the same effective data segment of the same action gesture in the training sample to obtain a range value, and taking the range values as gesture state change amount feature values, giving the gesture state change amount feature vector:
C_m = (α_1c, ..., α_5c, β_1c, ..., β_5c, μ_1c, ..., μ_5c)
wherein m = 1,2, ..., M, and C_m denotes the gesture state change amount feature vector of the m-th effective data segment of the same action gesture; α_1c, ..., α_5c denote the range values of the finger-plane elevation of each finger over the N frames of the m-th effective data segment; β_1c, ..., β_5c denote the range values of each finger opening degree over those N frames; μ_1c, ..., μ_5c denote the range values of each finger curvature over those N frames;
step 3.3, comparing, feature by feature, the M gesture state change amount feature vectors corresponding to each action gesture, to obtain the maximum and minimum of the elevation range value, the opening-degree range value and the curvature range value of each finger for that action gesture;
step 3.4, assuming that the training samples divided in step 3.1 contain a posture change type action gestures and b posture invariant action gestures: for each feature (the elevation range value, opening-degree range value and curvature range value of each finger), comparing the maxima of the b posture invariant action gestures and taking the largest of these b maxima; likewise comparing the minima of the a posture change type action gestures and taking the smallest of these a minima; then, for each feature, subtracting the posture change type minimum from the posture invariant maximum; the features whose difference is less than 0 are taken as threshold-distinguishing features, and the posture change type gesture range value corresponding to such a feature is taken as the feature threshold;
step 3.5, selecting the feature threshold of any threshold-distinguishing feature and calculating the range value of that feature for an action gesture in the recognition sample; if the range value is greater than the feature threshold, the action gesture of that sample is identified as a gesture-variant gesture, and if it is smaller than the feature threshold, it is identified as a gesture-invariant gesture.
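To make the range-based split concrete, the following is a minimal Python sketch of the range computation and threshold test in steps 3.2–3.5. It assumes each effective data segment is already an (N, 15) NumPy array of per-frame feature vectors O_j and that the training segments are grouped per action gesture; all function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def segment_range(segment):
    """Range (max - min) of each of the 15 features over the N frames of one segment."""
    return segment.max(axis=0) - segment.min(axis=0)

def feature_thresholds(variant_segments, invariant_segments):
    """Steps 3.3-3.4: find threshold-distinguishing features and their thresholds.

    variant_segments / invariant_segments: lists (one entry per action gesture)
    of lists of (N, 15) arrays (one array per effective data segment).
    """
    # Largest range reached by any gesture-invariant gesture, per feature.
    inv_max = np.max([np.max([segment_range(s) for s in segs], axis=0)
                      for segs in invariant_segments], axis=0)
    # Smallest range reached by any gesture-variant gesture, per feature.
    var_min = np.min([np.min([segment_range(s) for s in segs], axis=0)
                      for segs in variant_segments], axis=0)
    # A feature separates the two classes when its invariant maximum stays below
    # its variant minimum (difference < 0); that variant minimum is the threshold.
    distinguishing = np.where(inv_max - var_min < 0)[0]
    return distinguishing, var_min

def is_variant(segment, feature_index, threshold):
    """Step 3.5: route one recognition segment by comparing its range to the threshold."""
    return segment_range(segment)[feature_index] > threshold
```

In this reading, a recognition segment whose range on the chosen feature exceeds the threshold is sent to the dynamic (GMM-HMM) branch, otherwise to the static (KNN) branch.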
5. The assembly operation action recognition method based on static and dynamic separation according to claim 4, wherein α_1c, ..., α_5c in step 3.2 are calculated as:
α_ic = α_maxc − α_minc
wherein i = 1, 2, 3, 4, 5; α_maxc denotes the maximum of the finger-plane elevation over the N frames of action gestures in the same effective data segment, and α_minc denotes the minimum of the finger-plane elevation over the N frames of action gestures in the same effective data segment;
β_1c, ..., β_5c are calculated as:
β_ic = β_maxc − β_minc
wherein β_maxc denotes the maximum of the finger opening degree over the N frames of action gestures in the same effective data segment, and β_minc denotes the minimum of the finger opening degree over the N frames of action gestures in the same effective data segment;
μ_1c, ..., μ_5c are calculated as:
μ_ic = μ_maxc − μ_minc
wherein μ_maxc denotes the maximum of the finger curvature over the N frames of action gestures in the same effective data segment, and μ_minc denotes the minimum of the finger curvature over the N frames of action gestures in the same effective data segment.
6. The assembly operation action recognition method based on static and dynamic separation according to claim 5, wherein the finger-plane elevation is calculated as:
[formula image FDA0002648823100000041 — not reproduced in the text]
the finger opening degree is calculated as:
[formula image FDA0002648823100000042 — not reproduced in the text]
where, when i = 5, i + 1 is taken as 1, so that β_5 is the angle between the thumb and the little finger;
the finger curvature is calculated as:
[formula image FDA0002648823100000043 — not reproduced in the text]
wherein L_i is the finger length, i.e. the total length of the three bone segments from the fingertip to the base of the finger.
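The three formulas of claim 6 survive only as image references, so the exact definitions cannot be reproduced here. The Python sketch below is one plausible reading consistent with the surrounding text: elevation as the angle of a finger direction above the palm plane, opening degree as the angle between adjacent finger directions with the thumb–little-finger pair closing the loop, and curvature as the fingertip-to-base distance relative to the summed bone length L_i. These definitions are assumptions, not the patented formulas.

```python
import numpy as np

def _unit(v):
    """Normalise a 3-D vector."""
    return np.asarray(v, dtype=float) / np.linalg.norm(v)

def finger_elevation(finger_dir, palm_normal):
    """Assumed alpha_i: angle (degrees) of the finger direction above the palm plane."""
    s = np.clip(np.dot(_unit(finger_dir), _unit(palm_normal)), -1.0, 1.0)
    return np.degrees(np.arcsin(s))

def finger_opening(finger_dirs):
    """Assumed beta_1..beta_5: angle (degrees) between finger i and finger i+1,
    with the little finger paired back to the thumb (i = 5 -> i + 1 = 1)."""
    beta = []
    for i in range(5):
        a, b = _unit(finger_dirs[i]), _unit(finger_dirs[(i + 1) % 5])
        beta.append(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))
    return beta

def finger_curvature(fingertip, finger_base, bone_lengths):
    """Assumed mu_i: straight-line tip-to-base distance divided by the total bone length L_i."""
    d = np.linalg.norm(np.asarray(fingertip) - np.asarray(finger_base))
    return d / sum(bone_lengths)
```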
7. The assembly operation action recognition method based on static and dynamic separation according to claim 5, wherein step 4 is specifically as follows:
adopting the KNN algorithm as the static gesture recognition algorithm, establishing a KNN recognition model, and extracting the 15-dimensional feature vectors of the gesture-invariant gestures in the recognition sample divided in step 3, namely the 15-dimensional feature vector describing the posture feature of the j-th frame of an action gesture:
O_j = f_j(α_1, ..., α_5, β_1, ..., β_5, μ_1, ..., μ_5)
and then feeding these feature vectors into the KNN model for training to obtain the gesture-invariant gesture recognition model.
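As a sketch of this step, the 15-dimensional per-frame vectors of the gesture-invariant gestures can be handled by an off-the-shelf k-nearest-neighbour classifier; the use of scikit-learn, the value k = 5 and the file names are illustrative assumptions, not details from the patent.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical inputs: per-frame 15-D vectors O_j of gesture-invariant gestures and their labels.
X_static = np.load("static_features.npy")   # shape (n_frames, 15)
y_static = np.load("static_labels.npy")     # shape (n_frames,)

knn = KNeighborsClassifier(n_neighbors=5)   # k is a tunable assumption
knn.fit(X_static, y_static)

def recognize_static(segment):
    """Label a static segment by majority vote over its per-frame KNN predictions."""
    frame_labels = knn.predict(segment)      # segment: (N, 15)
    values, counts = np.unique(frame_labels, return_counts=True)
    return values[np.argmax(counts)]
```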
8. The assembly operation action recognition method based on static and dynamic separation according to claim 5, wherein step 5 is specifically as follows:
establishing a GMM-HMM recognition model, and extracting the 19-dimensional feature vectors of the gesture-variant gestures in the recognition samples divided in step 3, namely the posture feature of the j-th frame of an action gesture:
g_j = f_j(α_1, ..., α_5, β_1, ..., β_5, μ_1, ..., μ_5, w_hj, w_fj, a_j, b_j)
and then feeding these feature vectors into the GMM-HMM recognition model for training to obtain the gesture-variant gesture recognition model, wherein w_hj denotes the angle between the palm normal direction h_j of the current frame and the palm normal direction h_{j+1} of the previous frame, w_fj denotes the angle between the palm direction f_j of the current frame and the palm direction f_{j+1} of the previous frame, and a_j, b_j are the two feature values of the two-dimensional displacement feature.
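A sketch of this step using the hmmlearn library: one GMM-HMM is trained per gesture-variant action gesture on sequences of the 19-dimensional vectors g_j, and recognition picks the model with the highest log-likelihood. The library choice, the state and mixture counts and all names are assumptions, not details from the patent.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

def train_gmm_hmm(sequences, n_states=4, n_mix=3):
    """Train one GMM-HMM on a list of (N_i, 19) feature sequences of a single gesture."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

def recognize_dynamic(segment, models):
    """Score one (N, 19) segment against every gesture model and return the best label."""
    scores = {label: m.score(segment) for label, m in models.items()}
    return max(scores, key=scores.get)

# Usage sketch: models = {label: train_gmm_hmm(seqs) for label, seqs in train_seqs.items()}
```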
9. The assembly operation action recognition method based on static and dynamic separation according to claim 8, wherein w_hj is calculated as:
[formula image FDA0002648823100000051 — not reproduced in the text]
and w_fj is calculated as:
[formula image FDA0002648823100000052 — not reproduced in the text]
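The two formulas of claim 9 are likewise only image references. Since w_hj and w_fj are defined above simply as the angles between the palm normals and palm directions of consecutive frames, a straightforward realisation is the usual normalised-dot-product angle; the exact form used in the patent is assumed, not reproduced.

```python
import numpy as np

def vector_angle(v1, v2):
    """Angle (degrees) between two 3-D vectors via the normalised dot product."""
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# w_hj = vector_angle(h_current, h_previous)   # palm normal directions
# w_fj = vector_angle(f_current, f_previous)   # palm (pointing) directions
```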
10. The assembly operation action recognition method based on static and dynamic separation according to claim 8, wherein the two-dimensional displacement feature is determined as follows:
removing the Y-axis data from the three-dimensional palm center coordinate P, taking the palm center coordinate of the previous frame of the continuous data as the origin, projecting the current-frame palm center coordinate P_xzj onto the XOZ-plane chain-code disc, and taking the number of the region in which the projection of P_xzj falls as the chain code value a_j of the current frame;
removing the Z-axis data from the three-dimensional palm center coordinate P, taking the palm center coordinate of the previous frame of the continuous data as the origin, projecting the current-frame palm center coordinate P_xyj onto the XOY-plane chain-code disc, and taking the number of the region in which the projection of P_xyj falls as the chain code value b_j of the current frame, thereby forming the two-dimensional displacement feature [a_j, b_j];
the specific calculation is as follows:
[formula image FDA0002648823100000061 — not reproduced in the text]
[formula image FDA0002648823100000062 — not reproduced in the text]
wherein j = 1, 2, ….
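The chain-code formulas themselves are only image references. A common way to realise the "chain-code disc" described above is to quantise the direction of the frame-to-frame palm displacement into equal angular sectors on each projection plane; the sketch below assumes eight sectors and is an interpretation consistent with the description, not the patented formula.

```python
import numpy as np

N_REGIONS = 8  # assumed number of sectors on the chain-code disc

def chain_code(dx, dy, n_regions=N_REGIONS):
    """Sector number (1..n_regions) of a 2-D displacement on a chain-code disc."""
    angle = np.arctan2(dy, dx) % (2 * np.pi)           # displacement direction in [0, 2*pi)
    return int(angle // (2 * np.pi / n_regions)) + 1

def displacement_feature(p_prev, p_curr):
    """Two-dimensional displacement feature [a_j, b_j] from consecutive palm centers."""
    d = np.asarray(p_curr, dtype=float) - np.asarray(p_prev, dtype=float)  # (dx, dy, dz)
    a_j = chain_code(d[0], d[2])   # Y removed: projection onto the XOZ plane
    b_j = chain_code(d[0], d[1])   # Z removed: projection onto the XOY plane
    return [a_j, b_j]
```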
CN202010863071.XA 2020-08-25 2020-08-25 Assembly operation action recognition method based on static and dynamic separation Active CN112084898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010863071.XA CN112084898B (en) 2020-08-25 2020-08-25 Assembly operation action recognition method based on static and dynamic separation

Publications (2)

Publication Number Publication Date
CN112084898A true CN112084898A (en) 2020-12-15
CN112084898B CN112084898B (en) 2024-02-09

Family

ID=73728585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010863071.XA Active CN112084898B (en) 2020-08-25 2020-08-25 Assembly operation action recognition method based on static and dynamic separation

Country Status (1)

Country Link
CN (1) CN112084898B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120068917A1 (en) * 2010-09-17 2012-03-22 Sony Corporation System and method for dynamic gesture recognition using geometric classification
US20180307319A1 (en) * 2017-04-20 2018-10-25 Microsoft Technology Licensing, Llc Gesture recognition
CN109634415A (en) * 2018-12-11 2019-04-16 哈尔滨拓博科技有限公司 It is a kind of for controlling the gesture identification control method of analog quantity
CN109993073A (en) * 2019-03-14 2019-07-09 北京工业大学 A kind of complicated dynamic gesture identification method based on Leap Motion
CN110837792A (en) * 2019-11-04 2020-02-25 东南大学 Three-dimensional gesture recognition method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yang Wenlu; Qiao Haili; Xie Hong; Xia Bin: "Gesture recognition based on Leap Motion and support vector machine", Transducer and Microsystem Technologies (传感器与微系统), no. 05 *
Chen Guoliang; Ge Kaikai; Li Conghao: "Complex dynamic gesture recognition based on multi-feature HMM fusion", Journal of Huazhong University of Science and Technology (Natural Science Edition) (华中科技大学学报(自然科学版)), no. 12 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111844A (en) * 2021-04-28 2021-07-13 中德(珠海)人工智能研究院有限公司 Operation posture evaluation method and device, local terminal and readable storage medium
CN113111844B (en) * 2021-04-28 2022-02-15 中德(珠海)人工智能研究院有限公司 Operation posture evaluation method and device, local terminal and readable storage medium
CN113282167A (en) * 2021-05-08 2021-08-20 青岛小鸟看看科技有限公司 Interaction method and device of head-mounted display equipment and head-mounted display equipment
CN113282167B (en) * 2021-05-08 2023-06-27 青岛小鸟看看科技有限公司 Interaction method and device of head-mounted display equipment and head-mounted display equipment

Also Published As

Publication number Publication date
CN112084898B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
De Smedt et al. Skeleton-based dynamic hand gesture recognition
Ibraheem et al. Survey on various gesture recognition technologies and techniques
Jiang et al. Multi-layered gesture recognition with Kinect.
Hasan et al. RETRACTED ARTICLE: Static hand gesture recognition using neural networks
Just et al. Hand posture classification and recognition using the modified census transform
Li Gesture recognition based on fuzzy c-means clustering algorithm
Lee et al. Kinect-based Taiwanese sign-language recognition system
CN110232308B (en) Robot-following gesture track recognition method based on hand speed and track distribution
Bhuyan et al. Fingertip detection for hand pose recognition
CN111046731B (en) Transfer learning method and recognition method for gesture recognition based on surface electromyographic signals
Shin et al. Skeleton-based dynamic hand gesture recognition using a part-based GRU-RNN for gesture-based interface
Hemayed et al. Edge-based recognizer for Arabic sign language alphabet (ArS2V-Arabic sign to voice)
Jambhale et al. Gesture recognition using DTW & piecewise DTW
De Smedt et al. 3d hand gesture recognition by analysing set-of-joints trajectories
CN112084898B (en) Assembly operation action recognition method based on static and dynamic separation
CN111368762A (en) Robot gesture recognition method based on improved K-means clustering algorithm
Zinnen et al. Multi activity recognition based on bodymodel-derived primitives
CN107346207B (en) Dynamic gesture segmentation recognition method based on hidden Markov model
Pradhan et al. A hand gesture recognition using feature extraction
Nakkach et al. Hybrid approach to features extraction for online Arabic character recognition
CN112101293A (en) Facial expression recognition method, device, equipment and storage medium
Shitole et al. Dynamic hand gesture recognition using PCA, Pruning and ANN
Nömm et al. Interpretable Quantitative Description of the Digital Clock Drawing Test for Parkinson's Disease Modelling
Gurav et al. Vision based hand gesture recognition with haar classifier and AdaBoost algorithm
Liu et al. Keyframe Extraction and Process Recognition Method for Assembly Operation Based on Density Clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant