CN111831122B - Gesture recognition system and method based on multi-joint data fusion - Google Patents

Info

Publication number
CN111831122B
Authority
CN
China
Prior art keywords: unit, inertial sensor, gesture, sensor unit, data
Prior art date: 2020-07-20
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010698344.XA
Other languages
Chinese (zh)
Other versions
CN111831122A (en)
Inventor
汪凝
洪榛
洪淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Finance and Economics
Original Assignee
Zhejiang University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2020-07-20
Filing date: 2020-07-20
Publication date: 2023-05-16
Application filed by Zhejiang University of Finance and Economics
Priority to CN202010698344.XA
Publication of CN111831122A
Application granted
Publication of CN111831122B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00: Computing arrangements based on specific mathematical models
    • G06N 7/02: Computing arrangements based on specific mathematical models using fuzzy logic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a gesture recognition system and method based on multi-joint data fusion. The system comprises: a glove; a battery unit, a control unit, a first inertial sensor unit, a communication unit and an interaction unit located at the back of the hand of the glove; and a second inertial sensor unit located at the knuckles of the glove and electrically connected with the control unit. The glove can be worn conveniently and comfortably without impeding hand movement. By fusing finger-joint data, the system extracts finger bending-degree, splay-degree, motion-state and palm-posture features and recognizes gestures in real time, addressing the large differences between the gestures of different people and the low compatibility of existing gesture recognition systems.

Description

Gesture recognition system and method based on multi-joint data fusion
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a gesture recognition system and method based on multi-joint data fusion.
Background
With the development of technologies such as the Internet of Things, gesture recognition plays an increasingly important role in fields such as human-computer interaction and smart homes. Because of communication barriers, deaf-mute people find it difficult to contribute economically to their families and to society, which causes substantial direct and indirect economic losses. Deaf-mute people rely mainly on gestures to communicate, but hearing people generally cannot understand those gestures. Existing gesture recognition systems have several shortcomings: 1) methods based on visual image analysis raise privacy and security concerns and are susceptible to environmental interference; 2) methods that measure finger bending with multiple film-type bend sensors suffer because the sensors recover poorly after prolonged bending, so their service life is short; 3) existing systems are strongly user-dependent and have low compatibility.
Disclosure of Invention
In order to solve the prior-art problems of short sensor service life, large differences between the gestures of different people, and low compatibility of gesture recognition systems, the invention provides a gesture recognition system based on multi-joint data fusion.
To achieve the above object, the invention is constituted as follows:
the invention provides a gesture recognition system based on multi-joint data fusion, which comprises:
a glove;
a battery unit, a control unit, a first inertial sensor unit, a communication unit and an interaction unit located at the back of the hand of the glove, wherein the control unit is electrically connected with the first inertial sensor unit, the communication unit and the interaction unit respectively, and the battery unit supplies power to the control unit, the first inertial sensor unit, the communication unit and the interaction unit; and
a second inertial sensor unit located at a knuckle of the glove, the second inertial sensor unit being electrically connected to the control unit, the battery unit also providing electrical energy to the second inertial sensor unit;
the control unit respectively acquires angle and acceleration data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit, and performs gesture recognition.
In some embodiments, the glove body is made of a flexible material, and the glove at least partially covers the back of the wearer's hand and fingers when worn.
In some embodiments, the back of the hand and the backs of the fingers of the glove are each provided with buckles; the buckles on the back of the hand fix the battery unit, the control unit, the first inertial sensor unit, the communication unit and the interaction unit, and the buckles on the backs of the fingers fix the second inertial sensor unit.
In some embodiments, the first inertial sensor unit is located at a center of the back of the hand of the glove, and the second inertial sensor unit includes multiple inertial sensors located at respective joints of the finger.
In some embodiments, the communication unit is configured to communicate with a server or other gesture recognition systems according to a preset data format, and the interaction unit is configured to display the recognized gesture, play the corresponding voice, and receive input information from the user.
An embodiment of the invention also provides a gesture recognition method based on multi-joint data fusion, which uses the above gesture recognition system and comprises the following steps:
the control unit captures three-axis angle and acceleration raw data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit;
the control unit normalizes and digitally filters the three-axis angle and acceleration raw data to obtain preprocessed data;
the control unit extracts bending-degree and splay-degree features through the fuzzy classifiers, calculates motion-state features from the variance, and takes these features together with the palm three-axis angle and acceleration data as the feature data;
the control unit predicts the feature data with a single-gesture recognition model; the predicted label is the target gesture, which is output through the interaction unit. A minimal sketch of this pipeline is given below.
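A minimal Python sketch of the prediction pipeline, assuming NumPy arrays of shape (samples, channels) and scikit-learn-style classifier objects with fit/predict; the filter length, column indices and helper names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def preprocess(raw: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Min-max normalization followed by a moving-average digital filter."""
    low, high = raw.min(axis=0), raw.max(axis=0)
    norm = (raw - low) / np.where(high - low == 0.0, 1.0, high - low)
    window = np.ones(kernel) / kernel
    return np.apply_along_axis(lambda col: np.convolve(col, window, "same"), 0, norm)

def motion_state(accel: np.ndarray) -> np.ndarray:
    """Per-axis variance of the acceleration channels: the motion-state feature."""
    return accel.var(axis=0)

def predict_gesture(model, fuzzy_bend, fuzzy_splay, raw: np.ndarray,
                    angle_cols, accel_cols, palm_cols):
    pre = preprocess(raw)
    # The fuzzy classifiers are assumed to map a window of joint angles to
    # digitized (numeric) bending-degree / splay-degree labels.
    bend = fuzzy_bend.predict(pre[:, angle_cols])
    splay = fuzzy_splay.predict(pre[:, angle_cols])
    palm = pre[:, palm_cols].mean(axis=0)            # palm three-axis angle
    feats = np.concatenate([bend, splay, motion_state(pre[:, accel_cols]), palm])
    return model.predict(feats.reshape(1, -1))[0]    # predicted label = gesture
```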
In some embodiments, before the control unit captures the three-axis angle and acceleration raw data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit, the method further comprises the user selecting a training mode or a prediction mode through the interaction unit;
after the control unit normalizes and digitally filters the three-axis angle and acceleration raw data to obtain the preprocessed data, the method further comprises the following steps:
in training mode, training the bending-degree fuzzy classifier and the splay-degree fuzzy classifier with calibrated preprocessed data respectively; otherwise, skipping this step;
after extracting the bending-degree and splay-degree features through the fuzzy classifiers, calculating the motion-state features from the variance, and taking these features together with the palm three-axis angle and acceleration data as the feature data, the method further comprises the following steps:
in training mode, training the single-gesture recognition model with the calibrated feature data, and updating and saving the model;
in prediction mode, predicting the feature data with the single-gesture recognition model; the predicted label is the target gesture, which is output through the interaction unit.
In some embodiments, training the bending-degree fuzzy classifier and the splay-degree fuzzy classifier with calibrated preprocessed data comprises the following steps:
setting the category of the fuzzy classifier to bending degree or splay degree;
in training mode, selecting reference gestures of different classes and extracting the finger and palm three-axis angle data corresponding to the calibrated reference gestures as the training set of the fuzzy classifier;
updating and saving the fuzzy classifier;
extracting the uncalibrated data of the next gesture and predicting it with the fuzzy classifier; calibrating the gesture data with the label that holds the largest share of the prediction results; taking the calibrated gesture data together with the previous input of the fuzzy classifier as a new training set; and repeating the previous step until no gestures remain. A sketch of this self-labeling loop is given below.
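A Python sketch of the self-labeling loop, assuming a scikit-learn-style classifier; reference_X and reference_y hold the calibrated reference gestures, and gestures is the queue of uncalibrated ones (all names illustrative).

```python
import numpy as np
from collections import Counter

def self_train(classifier, reference_X, reference_y, gestures):
    """Iteratively label and absorb uncalibrated gestures, as described above."""
    X, y = list(reference_X), list(reference_y)
    classifier.fit(np.vstack(X), np.hstack(y))          # train on calibrated data
    while gestures:                                     # repeat until empty
        data = gestures.pop(0)                          # next uncalibrated gesture
        pred = classifier.predict(data)
        top = Counter(pred).most_common(1)[0][0]        # label with the largest share
        X.append(data)
        y.append(np.full(len(data), top))               # calibrate with that label
        classifier.fit(np.vstack(X), np.hstack(y))      # update and save the model
    return classifier
```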
An embodiment of the invention also provides a gesture recognition method based on multi-joint data fusion, which uses the above gesture recognition system and comprises the following steps:
establishing a compound gesture dictionary and a key dictionary;
setting the window length and the sliding-window interval;
normalizing and digitally filtering the raw data to obtain preprocessed data;
extracting bending-degree and splay-degree features with the bending-degree and splay-degree fuzzy classifiers, calculating motion-state features from the variance, and taking these features together with the palm three-axis angle and acceleration data as the feature data;
predicting the feature data with the single-gesture recognition model to obtain single gestures;
cyclically reading the single gestures to obtain window labels according to the window length and sliding-window interval, as sketched below;
traversing the keys of the compound gesture dictionary and converting each key into a regular expression;
converting the window labels into a character string and matching it against the regular expressions; the key value of the successfully matched key is the target gesture.
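A Python sketch of the window-label reading step, assuming the single-gesture model emits one label string per prediction; length and stride stand for the window length and sliding-window interval.

```python
from typing import Iterable, Iterator, List

def window_labels(stream: Iterable[str], length: int, stride: int) -> Iterator[List[str]]:
    """Cyclically read single-gesture labels into fixed-length windows."""
    buffer: List[str] = []
    for label in stream:
        buffer.append(label)
        if len(buffer) == length:
            yield list(buffer)     # one window of single-gesture labels
            del buffer[:stride]    # slide forward by the sliding-window interval
```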
In some embodiments, the keys in the compound gesture dictionary are label character strings uniquely corresponding to compound gestures, and the key values are the gestures themselves; the keys in the key dictionary are label character strings, and the key values are the tag lists of the corresponding gestures;
the conversion flow of the regular expression is as follows:
initializing the regular expression to the wildcard pattern "(.*)";
obtaining a key of the compound gesture dictionary;
obtaining the corresponding key value from the key dictionary;
traversing the key value and appending, in turn, each tag character string followed by a metacharacter string to the end of the regular expression;
obtaining the required regular expression once the traversal is finished.
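A Python sketch of this conversion, under the assumption that both the initialization expression and the metacharacter string inserted after each tag are the wildcard "(.*)"; the dictionary shapes follow the description above.

```python
import re

def build_pattern(compound_key: str, key_dict: dict) -> re.Pattern:
    """Convert one compound-gesture key into a regular expression."""
    expr = "(.*)"                         # initialization expression (assumed)
    for tag in key_dict[compound_key]:    # traverse the key's tag list
        expr += re.escape(tag) + "(.*)"   # tag string, then a metacharacter string
    return re.compile(expr)
```

For a tag list ["A", "B"] this yields the pattern (.*)A(.*)B(.*), which matches any window string that contains A and then B in order.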
In summary, compared with the prior art, the gesture recognition system based on multi-joint data fusion provided by the invention can be worn conveniently and comfortably on the human hand without impeding hand movements; by fusing finger-joint data it extracts finger bending-degree, splay-degree, motion-state and palm-posture features, recognizes gestures in real time, and addresses the large differences between the gestures of different people and the low compatibility of existing gesture recognition systems.
Drawings
FIG. 1 is a schematic diagram of a gesture recognition system based on multi-joint data fusion in embodiment 1 of the present invention;
FIG. 2 is a diagram showing the construction of a glove according to example 1 of the present invention;
FIG. 3 is a flow chart of single-type gesture recognition in embodiment 1 of the present invention;
FIG. 4 is a flow chart of the design of a fuzzy classifier in accordance with embodiment 1 of the present invention;
FIG. 5 is a flowchart of compound gesture recognition in embodiment 2 of the present invention;
FIG. 6 is a flowchart of regular expression conversion in embodiment 2 of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. It will be appreciated, however, by one skilled in the art that the inventive aspects may be practiced without one or more of the specific details, or with other methods, components, materials, etc. In some instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring the invention.
As shown in FIG. 1, in order to solve the technical problems of the prior art, the present invention provides a gesture recognition system based on multi-joint data fusion, which comprises: a glove; a battery unit, a control unit, a first inertial sensor unit, a communication unit and an interaction unit located at the back of the hand of the glove, wherein the control unit is electrically connected with the first inertial sensor unit, the communication unit and the interaction unit respectively, and the battery unit supplies power to these units; and a second inertial sensor unit located at the knuckles of the glove, electrically connected to the control unit and likewise powered by the battery unit. The control unit acquires angle and acceleration data of the palm and finger joints through the first and second inertial sensor units and performs gesture recognition. The communication unit communicates with a server or other gesture recognition systems according to a preset data format, and the interaction unit displays the recognized gesture, plays the corresponding voice, and receives input information from the user.
Embodiments of the gesture recognition system and method based on multi-joint data fusion of the present invention are specifically described below in conjunction with two examples.
Example 1
A gesture recognition system based on multi-joint data fusion in this embodiment is configured as shown in fig. 1, and includes a glove 1 for wearing and fixing devices, a battery unit 2 located at the back of the hand of the glove 1, a control unit 3, a first inertial sensor unit 4, a communication unit 5, and an interaction unit 6, and a second inertial sensor unit 7 located at the knuckles of the glove 1. The control unit 3 is electrically connected with the first inertial sensor unit 4, the second inertial sensor unit 7, the communication unit 5 and the interaction unit 6 respectively, and the battery unit 2 is used for supplying power for the operation of each unit.
Further, as shown in FIG. 2, the glove 1 is made of a soft, low-hardness material and covers the back of the hand and the backs of the fingers in a half-wrapping structure; the back of the hand and the backs of the fingers of the glove 1 carry buckles for fixing the other units.
Referring to fig. 1 and 2, the control unit 3 collects angle and acceleration data of the palm and the finger joints through the first inertial sensor unit 4 and the second inertial sensor unit 7, and recognizes gestures after processing the data.
Further, the first inertial sensor unit 4 is located at the center of the back of the hand; the second inertial sensor unit 7 comprises multiple inertial sensors located at the respective finger joints.
Furthermore, the communication unit 5 is used by the control unit 3 to communicate with a server or other gesture recognition systems according to a preset data format; the interaction unit 6 displays the recognized gesture, plays the corresponding voice, and can also receive input information from the user.
FIG. 3 is the flowchart of single-gesture recognition in this embodiment; the specific flow comprises:
1. the user selects a training mode or a prediction mode through the interaction unit;
2. the control unit acquires three-axis angle and acceleration raw data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit;
3. the raw data are normalized and digitally filtered to obtain preprocessed data;
4. in training mode, the bending-degree and splay-degree fuzzy classifiers are each trained with calibrated preprocessed data; otherwise this step is skipped;
5. bending-degree and splay-degree features are extracted by the fuzzy classifiers, and motion-state features are calculated from the variance; these features together with the palm three-axis angle and acceleration data form the feature data;
6. in training mode, the single-gesture recognition model is trained with the calibrated feature data, and the model is updated and saved;
7. in prediction mode, the feature data are predicted with the single-gesture model; the predicted label is the target gesture, which is output through the interaction unit.
Further, the bending degree describes the bending state of a finger and is divided into three classes: no bending, slight bending and heavy bending. The splay degree describes the splay state of the index and middle fingers and is divided into four classes: splayed, not splayed, crossed and not straightened. An illustrative membership-function sketch is given below.
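The patent gives no numeric boundaries for these classes; the following sketch shows one hypothetical fuzzy-membership design for the three bending-degree classes over a joint angle in degrees, with all breakpoints invented for illustration.

```python
def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Standard triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

def bending_memberships(angle: float) -> dict:
    """Membership grades for the three bending-degree classes (breakpoints invented)."""
    return {
        "no_bending": triangular(angle, -30.0, 0.0, 30.0),
        "slight_bending": triangular(angle, 15.0, 45.0, 75.0),
        "heavy_bending": triangular(angle, 60.0, 90.0, 180.0),
    }

def classify_bending(angle: float) -> str:
    grades = bending_memberships(angle)
    return max(grades, key=grades.get)   # class with the highest membership
```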
As shown in FIG. 4, the flowchart of the fuzzy classifier design in this embodiment comprises the following steps:
1. set the category of the fuzzy classifier to bending degree or splay degree;
2. in training mode, select reference gestures of different classes and extract the finger and palm three-axis angle data corresponding to the calibrated reference gestures as the training set of the fuzzy classifier;
3. update and save the model;
4. extract the uncalibrated data of the next gesture and predict it with the fuzzy classifier; calibrate the gesture data with the label that holds the largest share of the prediction results; take the calibrated data together with the previous input of the fuzzy classifier as a new training set; repeat the previous step until no gestures remain;
5. in prediction mode, select the corresponding data set and fuzzy classifier according to the category, and digitize the predicted label.
Example 2
The gesture recognition system based on multi-joint data fusion in this embodiment is the same as that of embodiment 1.
As shown in FIG. 5, the flow of compound gesture recognition in this embodiment comprises:
1. establish a compound gesture dictionary and a key dictionary;
2. set the window length and the sliding-window interval;
3. normalize and digitally filter the raw data to obtain preprocessed data;
4. extract bending-degree and splay-degree features with the bending-degree and splay-degree fuzzy classifiers, and calculate motion-state features from the variance; these features together with the palm three-axis angle and acceleration data form the feature data;
5. predict the feature data with the single-gesture recognition model to obtain single gestures;
6. cyclically read the single gestures to obtain window labels according to the window length and sliding-window interval;
7. traverse the keys of the compound gesture dictionary and convert each key into a regular expression;
8. convert the window labels into a character string and match it against the regular expressions; the key value of the successfully matched key is the target gesture, as sketched below.
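A Python sketch of steps 6 to 8 as a single matching routine, reusing the hypothetical build_pattern helper from the earlier regular-expression sketch; the dictionary contents and the label-to-string concatenation are assumptions.

```python
def match_compound(window, compound_dict: dict, key_dict: dict):
    """Match one window of single-gesture labels against every compound pattern."""
    text = "".join(window)                          # window labels as one string
    for key, gesture in compound_dict.items():      # traverse compound keys
        if build_pattern(key, key_dict).fullmatch(text):
            return gesture                          # matched key value = target gesture
    return None                                     # no compound gesture in this window
```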
Further, the keys in the compound gesture dictionary are label character strings uniquely corresponding to compound gestures, and the key values are the gestures themselves; the keys in the key dictionary are label character strings, and the key values are the tag lists of the corresponding gestures.
Still further, as shown in FIG. 6, the flow of regular expression conversion in this embodiment comprises:
1. initialize the regular expression to the wildcard pattern "(.*)";
2. obtain a key of the compound gesture dictionary;
3. obtain the corresponding key value from the key dictionary;
4. traverse the key value and append, in turn, each tag character string followed by a metacharacter string to the end of the regular expression;
5. the required regular expression is obtained once the traversal is finished.
In summary, compared with the prior art, the gesture recognition system based on multi-joint data fusion provided by the invention can be worn conveniently and comfortably on the human hand without impeding hand movements; by fusing finger-joint data it extracts finger bending-degree, splay-degree, motion-state and palm-posture features, recognizes gestures in real time, and addresses the large differences between the gestures of different people and the low compatibility of existing gesture recognition systems.
In this specification, the invention has been described with reference to specific embodiments thereof. It will be apparent, however, that various modifications and changes may be made without departing from the spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (8)

1. A gesture recognition method based on multi-joint data fusion, characterized in that a gesture recognition system based on multi-joint data fusion is adopted, the system comprising:
a glove;
a battery unit, a control unit, a first inertial sensor unit, a communication unit and an interaction unit located at the back of the hand of the glove, wherein the control unit is electrically connected with the first inertial sensor unit, the communication unit and the interaction unit respectively, and the battery unit supplies power to the control unit, the first inertial sensor unit, the communication unit and the interaction unit; and
a second inertial sensor unit located at a knuckle of the glove, the second inertial sensor unit being electrically connected to the control unit, the battery unit also providing electrical energy to the second inertial sensor unit;
the control unit respectively acquires angle and acceleration data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit, and performs gesture recognition;
the back of the hand and the backs of the fingers of the glove are each provided with buckles; the buckles on the back of the hand of the glove fix the battery unit, the control unit, the first inertial sensor unit, the communication unit and the interaction unit, and the buckles on the backs of the fingers of the glove fix the second inertial sensor unit;
the first inertial sensor unit is located at the center of the back of the hand of the glove, and the second inertial sensor unit comprises multiple inertial sensors located at the respective finger joints;
the method comprises the following steps:
the control unit captures three-axis angle and acceleration raw data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit;
the control unit normalizes and digitally filters the three-axis angle and acceleration raw data to obtain preprocessed data;
the control unit extracts bending-degree and splay-degree features through the fuzzy classifiers, calculates motion-state features from the variance, and takes these features together with the palm three-axis angle and acceleration data as the feature data;
the control unit predicts the feature data with a single-gesture recognition model; the predicted label is the target gesture, which is output through the interaction unit;
before the control unit captures the three-axis angle and acceleration raw data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit, the method further comprises the user selecting a training mode or a prediction mode through the interaction unit;
after the control unit normalizes and digitally filters the three-axis angle and acceleration raw data to obtain the preprocessed data, the method further comprises the following steps:
in training mode, training the bending-degree fuzzy classifier and the splay-degree fuzzy classifier with calibrated preprocessed data respectively; otherwise, skipping this step;
after extracting the bending-degree and splay-degree features through the fuzzy classifiers, calculating the motion-state features from the variance, and taking these features together with the palm three-axis angle and acceleration data as the feature data, the method further comprises the following steps:
in training mode, training the single-gesture recognition model with the calibrated feature data, and updating and saving the model;
in prediction mode, predicting the feature data with the single-gesture recognition model; the predicted label is the target gesture, which is output through the interaction unit.
2. The gesture recognition method based on multi-joint data fusion according to claim 1, wherein training the bending-degree fuzzy classifier and the splay-degree fuzzy classifier with calibrated preprocessed data comprises the following steps:
setting the category of the fuzzy classifier to bending degree or splay degree;
in training mode, selecting reference gestures of different classes and extracting the finger and palm three-axis angle data corresponding to the calibrated reference gestures as the training set of the fuzzy classifier;
updating and saving the fuzzy classifier;
extracting the uncalibrated data of the next gesture and predicting it with the fuzzy classifier; calibrating the gesture data with the label that holds the largest share of the prediction results; taking the calibrated gesture data together with the previous input of the fuzzy classifier as a new training set; and repeating the previous step until no gestures remain.
3. The method of claim 1, wherein the glove body of the glove is made of a flexible material and the glove at least partially covers the back of the hand and the backs of the fingers of the wearer when worn.
4. The gesture recognition method based on multi-joint data fusion according to claim 1, wherein the communication unit is configured to communicate with a server or other gesture recognition systems according to a preset data format, and the interaction unit is configured to display the recognized gesture, play the corresponding voice, and receive input information from the user.
5. A gesture recognition method based on multi-joint data fusion, characterized in that a gesture recognition system based on multi-joint data fusion is adopted, the system comprising:
a glove;
a battery unit, a control unit, a first inertial sensor unit, a communication unit and an interaction unit located at the back of the hand of the glove, wherein the control unit is electrically connected with the first inertial sensor unit, the communication unit and the interaction unit respectively, and the battery unit supplies power to the control unit, the first inertial sensor unit, the communication unit and the interaction unit; and
a second inertial sensor unit located at a knuckle of the glove, the second inertial sensor unit being electrically connected to the control unit, the battery unit also providing electrical energy to the second inertial sensor unit;
the control unit respectively acquires angle and acceleration data of the palm and finger joints through the first inertial sensor unit and the second inertial sensor unit, and performs gesture recognition;
the back of the hand and the backs of the fingers of the glove are each provided with buckles; the buckles on the back of the hand of the glove fix the battery unit, the control unit, the first inertial sensor unit, the communication unit and the interaction unit, and the buckles on the backs of the fingers of the glove fix the second inertial sensor unit;
the first inertial sensor unit is located at the center of the back of the hand of the glove, and the second inertial sensor unit comprises multiple inertial sensors located at the respective finger joints;
the method comprises the following steps:
establishing a compound gesture dictionary and a key dictionary;
setting the window length and the sliding-window interval;
normalizing and digitally filtering the raw data to obtain preprocessed data;
extracting bending-degree and splay-degree features with the bending-degree and splay-degree fuzzy classifiers, calculating motion-state features from the variance, and taking these features together with the palm three-axis angle and acceleration data as the feature data;
predicting the feature data with the single-gesture recognition model to obtain single gestures;
cyclically reading the single gestures to obtain window labels according to the window length and sliding-window interval;
traversing the keys of the compound gesture dictionary and converting each key into a regular expression;
converting the window labels into a character string and matching it against the regular expressions; the key value of the successfully matched key is the target gesture.
6. The gesture recognition method based on multi-joint data fusion according to claim 5, wherein the keys in the compound gesture dictionary are label character strings uniquely corresponding to compound gestures, and the key values are the gestures themselves; the keys in the key dictionary are label character strings, and the key values are the tag lists of the corresponding gestures;
the conversion flow of the regular expression is as follows:
initializing the regular expression to the wildcard pattern "(.*)";
obtaining a key of the compound gesture dictionary;
obtaining the corresponding key value from the key dictionary;
traversing the key value and appending, in turn, each tag character string followed by a metacharacter string to the end of the regular expression;
obtaining the required regular expression once the traversal is finished.
7. The method of claim 5, wherein the glove body of the glove is made of a flexible material and the glove at least partially covers the back of the hand and the backs of the fingers of the wearer when worn.
8. The gesture recognition method based on multi-joint data fusion according to claim 5, wherein the communication unit is configured to communicate with a server or other gesture recognition systems according to a preset data format, and the interaction unit is configured to display the recognized gesture, play the corresponding voice, and receive input information from the user.
CN202010698344.XA, priority date 2020-07-20, filing date 2020-07-20: Gesture recognition system and method based on multi-joint data fusion. Granted as CN111831122B, status Active.

Priority Applications (1)

Application Number: CN202010698344.XA; Priority Date: 2020-07-20; Filing Date: 2020-07-20; Title: Gesture recognition system and method based on multi-joint data fusion; Granted as: CN111831122B

Applications Claiming Priority (1)

Application Number: CN202010698344.XA; Priority Date: 2020-07-20; Filing Date: 2020-07-20; Title: Gesture recognition system and method based on multi-joint data fusion; Granted as: CN111831122B

Publications (2)

Publication Number Publication Date
CN111831122A CN111831122A (en) 2020-10-27
CN111831122B true CN111831122B (en) 2023-05-16

Family

Family ID: 72923109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010698344.XA (filed 2020-07-20): Gesture recognition system and method based on multi-joint data fusion; granted as CN111831122B, status Active

Country Status (1)

Country Link
CN (1) CN111831122B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115525141A (en) * 2021-06-25 2022-12-27 中国科学院深圳先进技术研究院 Intelligent interactive glove with AI chip, interactive method and storage medium thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204378001U (en) * 2015-01-05 2015-06-10 刘述亮 A kind of sign language communication gloves
KR102046707B1 (en) * 2018-02-27 2019-11-19 세종대학교산학협력단 Techniques of performing convolutional neural network-based gesture recognition using inertial measurement unit
CN110472506B (en) * 2019-07-11 2023-05-26 广东工业大学 Gesture recognition method based on support vector machine and neural network optimization

Also Published As

Publication number Publication date
CN111831122A (en) 2020-10-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: No.18 Xueyuan street, Xiasha Higher Education Park, Hangzhou, Zhejiang 310000
Applicant after: ZHEJIANG University OF FINANCE & ECONOMICS
Address before: No.83, Wenyi West Road, Xihu District, Hangzhou, Zhejiang 310000
Applicant before: ZHEJIANG University OF FINANCE & ECONOMICS
GR01 Patent grant