CN110390281B - Sign language recognition system based on sensing equipment and working method thereof


Info

Publication number
CN110390281B
CN110390281B
Authority
CN
China
Prior art keywords
gesture
finger
data
axis
joint point
Prior art date
Legal status
Active
Application number
CN201910623439.2A
Other languages
Chinese (zh)
Other versions
CN110390281A (en)
Inventor
谢磊
徐岩
陆桑璐
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201910623439.2A
Publication of CN110390281A
Application granted
Publication of CN110390281B

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a sign language recognition system based on a sensing device and a working method thereof. The working method comprises the following steps: collecting joint point data of the user's hand in real time; denoising the joint point data; segmenting the denoised joint point data to complete gesture action segmentation; calculating gesture features used for gesture classification; classifying the gesture features corresponding to each gesture action to determine its category; and matching the sign language corresponding to the gesture action category and displaying the sign language result. On the basis of gesture action modeling and gesture classification, the invention designs and implements an online sign language recognition system, providing a solution to the communication problem between hearing people and deaf-mute people.

Description

Sign language recognition system based on sensing equipment and working method thereof
Technical Field
The invention belongs to the technical field of gesture perception and sign language recognition, and particularly relates to a sign language recognition system based on a perception device and a working method thereof.
Background
In society there is a large group of people who cannot hear or speak and live in a silent world: the deaf-mute people of China. They cannot communicate the way hearing people do, which causes great difficulty in their daily lives, including working, studying, and seeking medical care. They do have a way to communicate, sign language, but most hearing people cannot understand its meaning. For this reason, gesture sensing technologies have been widely studied. Current gesture sensing technologies can be divided into four categories according to the sensing means: those based on sound signals, those based on body-worn sensors, those based on radio-frequency signals, and those based on vision. Sound-signal-based gesture sensing is easily affected by environmental noise, and the recognizable gesture actions are limited; body-sensor-based gesture sensing usually requires the user to wear related sensors, which is unfriendly to the user; radio-frequency-based gesture sensing is sensitive to the environment, and deploying such a system in a real scene is troublesome. Vision-based gesture sensing can overcome these problems. The most representative vision-based sensing devices are Kinect and LeapMotion: the Kinect can obtain human skeleton point data and is mostly applied to coarse-grained gesture sensing, but cannot perform finger-level gesture sensing, whereas the LeapMotion can obtain finger joint point position data and can be used for fine-grained finger gesture sensing.
Therefore, there is a need for a sign language recognition system based on a sensing device that recognizes sign language gesture actions through gesture sensing and is designed to be user-friendly, providing a solution for communication between deaf-mute people and hearing people.
Disclosure of Invention
In view of the above shortcomings of the prior art, an object of the present invention is to provide a sign language recognition system based on a sensing device and a working method thereof, so as to realize real-time sign language recognition of the user's gesture actions and provide a solution for communication between deaf-mute people and hearing people.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the invention relates to a sign language recognition system based on a perception device, which comprises:
the gesture motion acquisition module is used for acquiring joint point data of a hand of a user in real time and performing gesture motion segmentation and feature extraction on the acquired data;
the gesture action model library is used for storing gesture action characteristics and gesture action classification models, wherein the gesture action characteristics are obtained by modeling gesture actions;
and the calculation module is used for classifying the gesture actions, and classifying the action characteristics of the collected hand joint point data through a gesture action classification model in a gesture action model library so as to obtain a gesture action classification result.
Further, the system comprises a calibration module for calibrating whether the user's hands are within the valid detection range.
Further, the system comprises a display module for displaying the gesture action recognition result, the sensing device state, the height of the user's hands, and real-time images of the hands.
Further, the gesture motion modeling is divided into static gesture modeling and dynamic gesture modeling; dynamic gesture modeling is divided into global dynamic gesture modeling of hand motion and local dynamic gesture modeling of finger motion.
Further, the gesture action features in the gesture action model library are specifically: finger bending angle features, fingertip joint point distance features, finger pointing features, and the displacement features, rotation angle features and rotation direction features of dynamic gestures.
Further, the gesture action classification model in the gesture action model library is specifically: the Support Vector Machine (SVM) algorithm is improved by solving the problems of gesture multi-classification, gesture action initialization and classification of unlabeled gesture categories, and collected gesture actions are used to train the improved SVM algorithm to obtain the gesture action classification model.
Further, the solution to the gesture multi-classification problem in the improved Support Vector Machine (SVM) algorithm is specifically: a support vector machine classifier is constructed between each pair of gesture action classes to handle the multi-classification problem; when a gesture to be classified arrives, every pairwise classifier casts a vote, and the class receiving the most votes is taken as the class of the unknown sample.
Further, the problem of initializing the gesture action in the improved Support Vector Machine (SVM) algorithm is specifically: the problem of gesture action initialization is solved by defining an initial state for the gesture action, so that the classification errors caused by different initial states are avoided.
Further, the solution to the classification problem of unlabeled gesture categories in the improved Support Vector Machine (SVM) algorithm is specifically: unlabeled gestures are treated as a new gesture action category, thereby improving the accuracy of gesture classification (a classifier sketch follows below).
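The following is a minimal sketch of such a classifier, assuming scikit-learn; the synthetic feature matrix, label scheme and all names are illustrative rather than taken from the patent. scikit-learn's SVC is trained one-vs-one internally, which matches the pairwise-voting scheme described above, and reserving a label for unlabeled gestures implements the extra category.

```python
# A minimal sketch of the improved SVM classifier, assuming scikit-learn.
# The synthetic data, label scheme, and names are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# One feature vector per segmented gesture (bending angles, fingertip
# distances, pointing vectors, displacement/rotation features, ...).
# Label 0 is reserved for the "unlabeled gesture" category, so movements
# outside the sign vocabulary are trained as a class of their own.
X_train = rng.normal(size=(300, 40))
y_train = rng.integers(0, 6, size=300)      # 0 = unlabeled, 1..5 = signs

# SVC trains one binary SVM per pair of classes (one-vs-one); prediction
# is by majority vote among the pairwise classifiers.
clf = SVC(kernel="rbf", decision_function_shape="ovo")
clf.fit(X_train, y_train)

def classify_gesture(features: np.ndarray):
    """Return the gesture class with the most pairwise votes, or None
    when the vote goes to the unlabeled-gesture category."""
    label = int(clf.predict(features.reshape(1, -1))[0])
    return None if label == 0 else label
```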
The invention discloses a working method of a sign language recognition system of a sensing device, which comprises the following steps:
1) Collecting joint point data of the user's hand in real time;
2) Denoising the joint point data to obtain processed joint point data;
3) Segmenting the processed joint point data to complete gesture action segmentation, determining the starting position and ending position of the gesture action, and extracting the data of the complete gesture action;
4) Calculating the gesture features used for gesture classification;
5) Classifying the gesture features corresponding to the gesture action, and determining the category of the gesture action;
6) Matching the sign language corresponding to the gesture action category, and displaying the gesture action classification result.
Further, the joint data in step 1) is in the form of frame data.
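As a concrete illustration of what one such frame might look like, here is an assumed layout (not specified by the patent) that the later sketches build on.

```python
# An assumed per-frame layout for the joint point data; the patent does
# not fix a structure, so this is illustrative only.
import numpy as np

N_FINGERS = 10   # right thumb..little finger (0..4), then left (5..9)

def make_frame(fingertips) -> np.ndarray:
    """fingertips: iterable of 10 (x, y, z) tuples -> (10, 3) array."""
    frame = np.asarray(fingertips, dtype=float)
    assert frame.shape == (N_FINGERS, 3)
    return frame
```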
Further, the data noise reduction method in step 2) is as follows:
21) The length of the sliding window is set to 20 sampling points, and each joint point is represented as three-dimensional coordinate data;
22) A median filter is applied to each axis of the joint point coordinate data; denoting the current joint point sample X_i, after median filtering X_i is replaced by the median of the window (X_{i−19}, …, X_{i−2}, X_{i−1}, X_i), as illustrated in the sketch below.
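The following is a minimal sketch of this sliding-window median filter, assuming numpy and one (n_frames, 3) coordinate stream per joint point; placing the window over the current sample and the 19 preceding ones is an assumption consistent with the 20-point window above.

```python
# A minimal sketch of the step 2) noise reduction, assuming numpy.
import numpy as np

WINDOW = 20  # sliding-window length in sampling points, per step 21)

def denoise(stream: np.ndarray) -> np.ndarray:
    """Median-filter one joint point's coordinate stream.

    stream: array of shape (n_frames, 3) holding (x, y, z) per frame.
    Each sample X_i is replaced by the per-axis median of the window
    (X_{i-19}, ..., X_{i-1}, X_i); each axis is filtered independently.
    """
    out = stream.copy()
    for i in range(WINDOW - 1, len(stream)):
        out[i] = np.median(stream[i - WINDOW + 1 : i + 1], axis=0)
    return out
```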
Further, the gesture motion segmentation method in step 3) includes:
31) The length L of the gesture data segmentation sliding window is set to 40 sampling points, and the gesture data uses fingertip joint point data;
32) The X-axis, Y-axis and Z-axis data of fingertip joint point i within the window are recorded as (X(i)_1, X(i)_2, …, X(i)_{L-1}, X(i)_L), (Y(i)_1, Y(i)_2, …, Y(i)_{L-1}, Y(i)_L) and (Z(i)_1, Z(i)_2, …, Z(i)_{L-1}, Z(i)_L), and the variances of the corresponding three-axis coordinate data are recorded as Var_X(i), Var_Y(i) and Var_Z(i), where Var_X(i) = (1/L)·Σ_{j=1..L}(X(i)_j − mean(X(i)))² and likewise for the Y and Z axes;
where i takes values 0 to 9, corresponding respectively to the thumb, index finger, middle finger, ring finger and little finger of the right hand, and the thumb, index finger, middle finger, ring finger and little finger of the left hand;
33) If the variance of any axis of any finger within the sliding window exceeds the given threshold, that window is the starting window of a gesture action, and the starting position of the sliding window is taken as the starting position of the gesture action;
34) After the starting position is detected, the sliding window continues to move; as long as the variance of some axis of some finger still exceeds the threshold, the window lies within the gesture action; once the variance of no axis of any finger exceeds the threshold, that window is the ending window of the gesture action, and the starting position of that sliding window is taken as the ending position of the gesture action;
35) Once the starting position and the ending position of the gesture action are determined, the gesture action is successfully segmented (see the sketch after this list).
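The following is a minimal sketch of this variance-based segmentation, assuming numpy and fingertip data arranged as (n_frames, 10, 3) for the ten fingers and three axes; THRESH stands in for the variance threshold, which the patent leaves unspecified.

```python
# A minimal sketch of the step 3) gesture action segmentation.
import numpy as np

L = 40        # segmentation sliding-window length, per step 31)
THRESH = 1.0  # illustrative variance threshold; tuned per device in practice

def window_active(window: np.ndarray) -> bool:
    """True if any axis of any fingertip exceeds the variance threshold.

    window: (L, 10, 3) slice of fingertip coordinates; np.var over the
    time axis yields the per-finger, per-axis variances Var_X(i), ...
    """
    return bool(np.any(np.var(window, axis=0) > THRESH))

def segment(frames: np.ndarray):
    """Yield (start, end) frame indices of complete gesture actions.

    frames: (n_frames, 10, 3) fingertip joint point data. The window start
    position serves as both the gesture start and the gesture end, as in
    steps 33) and 34).
    """
    start = None
    for t in range(len(frames) - L + 1):
        if window_active(frames[t : t + L]):
            if start is None:
                start = t            # starting window found
        elif start is not None:
            yield (start, t)         # ending window found
            start = None
```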
Further, the gesture features in step 4) are: finger bending angle features, namely the included angles between adjacent joints of each finger, 3 bending angles per finger; fingertip joint point distance features, namely the pairwise distances between the fingertip joint points of the same hand, 10 distances per hand; finger pointing features, namely the pointing vector of each finger, a three-dimensional vector; displacement features of the dynamic gesture, namely the distance differences between two frames of data along the X, Y and Z axes; rotation angle features, namely the rotation angle between two frames of data; and rotation direction features, namely the rotation direction between two frames of data, represented by the normal vector of the rotation plane, a three-dimensional vector. A sketch of two of these features follows below.
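The following sketch computes two of the static features listed above, the three bending angles per finger and the ten pairwise fingertip distances per hand; the five-joint-per-finger layout is an assumption about the device's output, not something the patent mandates.

```python
# A minimal sketch of two static gesture features from step 4).
import numpy as np
from itertools import combinations

def bend_angles(finger_joints: np.ndarray) -> np.ndarray:
    """finger_joints: (5, 3) joint positions of one finger, base to tip
    (assumed layout). Returns the 3 included angles, in degrees, between
    consecutive bone segments: the finger bending angle features."""
    bones = np.diff(finger_joints, axis=0)          # 4 bone vectors
    angles = []
    for a, b in zip(bones[:-1], bones[1:]):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return np.asarray(angles)                       # 3 angles per finger

def fingertip_distances(tips: np.ndarray) -> np.ndarray:
    """tips: (5, 3) fingertip joint positions of one hand. Returns the
    10 pairwise distances (5 choose 2): the fingertip distance features."""
    return np.asarray([np.linalg.norm(tips[i] - tips[j])
                       for i, j in combinations(range(5), 2)])
```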
The invention has the following beneficial effects:
1. Comprehensive and stable gesture action features: the gestures are comprehensively analyzed and modeled to obtain comprehensive gesture action features, which are verified to be stable across users and positions;
2. High-accuracy sign language recognition: experiments show that the sign language recognition achieves high accuracy;
3. Real-time sign language recognition: the system recognizes the user's gesture actions in real time and displays the sign language recognition result on the system interface;
4. User-friendliness: the system feeds back hand position information and device state information to the user in real time, making it convenient for the user to adjust the hand position; the system also includes a real-time camera output module so that users can conveniently check their own gesture actions.
Drawings
FIG. 1 is a diagram of a gesture modeling framework;
FIG. 2 is a system flow diagram;
FIG. 3a is a system initialization diagram;
FIG. 3b is a diagram of the recognition result of sign language "very";
FIG. 3c is a diagram of sign language "happy" recognition result;
FIG. 3d is a diagram of the sign language "recognize" recognition result;
FIG. 3e is a diagram of the sign language "recognition" recognition result;
fig. 3f is a diagram of sign language "you" recognition result.
Detailed Description
In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.
The invention relates to a sign language recognition system based on a perception device, which comprises:
the gesture motion acquisition module is used for acquiring joint point data of a hand of a user in real time and performing gesture motion segmentation and feature extraction on the acquired data;
the gesture motion model library is used for storing gesture motion characteristics and gesture motion classification models, wherein the gesture motion characteristics are obtained by modeling gesture motions, and the gesture motion modeling is divided into static gesture modeling and dynamic gesture modeling; the dynamic gesture modeling is divided into global dynamic gesture modeling of hand motion and local dynamic gesture modeling of finger motion;
and the calculation module is used for classifying the gesture actions, and classifying the action characteristics of the collected hand joint point data through a gesture action classification model in a gesture action model library so as to obtain a gesture action classification result.
The system further comprises a calibration unit for calibrating whether the user's hands are within the valid detection range.
The system further comprises a display unit for displaying the sign language recognition result, the sensing device state, the height of the user's hands, and real-time images of the hands.
Referring to fig. 1, the gesture action features in the gesture action model library are specifically: finger bending angle features, fingertip joint point distance features, finger pointing features, and the displacement features, rotation angle features and rotation direction features of dynamic gestures.
The gesture action classification model in the gesture action model library is specifically: the Support Vector Machine (SVM) algorithm is improved by solving the problems of gesture multi-classification, gesture action initialization and classification of unlabeled gesture categories, and collected gesture actions are used to train the improved SVM algorithm to obtain the gesture action classification model.
In the improved Support Vector Machine (SVM) algorithm, the gesture multi-classification problem is solved as follows: sign language classification involves many classes and is therefore a multi-classification problem; an SVM classifier is constructed between each pair of gesture action classes, and when a gesture to be classified arrives, the class receiving the most votes is taken as the class of the unknown sample.
In the improved Support Vector Machine (SVM) algorithm, the gesture action initialization problem is solved as follows: gesture actions need to share the same initial state, so an initial state is defined for gesture actions, which effectively avoids classification errors caused by differing initial states.
In the improved Support Vector Machine (SVM) algorithm, the classification problem of unlabeled gesture categories is solved as follows: sign language classification is real-time and continuous, so gesture actions outside the labeled set inevitably occur and would reduce the gesture classification accuracy; treating unlabeled gestures as a new gesture action category preserves the accuracy of gesture classification.
Referring to fig. 2, the working method of the sign language recognition system based on the sensing device of the present invention includes the following steps:
1) Collecting joint point data of the hand in real time through the sensing device;
2) Denoising the raw joint point data to obtain processed joint point data;
3) Segmenting the processed joint point data to complete gesture action segmentation, determining the starting position and ending position of the gesture action, and extracting the data of the complete gesture action;
4) Calculating the gesture features used for gesture classification;
5) Classifying the gesture features corresponding to the gesture action, and determining the category of the gesture action;
6) Matching the sign language corresponding to the gesture action category, and displaying the gesture action classification result (an end-to-end sketch of these steps follows below).
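The following is a hedged end-to-end sketch composing the earlier sketches (denoise, segment, classify_gesture) into the six steps above; extract_features and sign_dict are assumed placeholders, the former standing for the full step 4) feature computation and the latter for the gesture-to-sign-language mapping of step 6).

```python
# A hedged end-to-end sketch of steps 1)-6); extract_features and
# sign_dict are assumed placeholders, not APIs defined by the patent.
import numpy as np

def recognize(frames: np.ndarray, extract_features, sign_dict: dict):
    """frames: (n_frames, 10, 3) fingertip data captured by the device."""
    # Step 2): median-filter every fingertip's coordinate stream.
    clean = np.stack([denoise(frames[:, f]) for f in range(10)], axis=1)
    # Step 3): variance-based segmentation into complete gesture actions.
    for start, end in segment(clean):
        feats = extract_features(clean[start:end])   # step 4): features
        label = classify_gesture(feats)              # step 5): SVM voting
        if label is not None:                        # skip unlabeled gestures
            print(sign_dict.get(label, "?"))         # step 6): display sign
```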
The data noise reduction method in step 2) is as follows:
21) The length of the sliding window is set to 20 sampling points, and each joint point is represented as three-dimensional coordinate data;
22) A median filter is applied to each axis of the joint point coordinate data; denoting the current joint point sample X_i, after median filtering X_i is replaced by the median of the window (X_{i−19}, …, X_{i−2}, X_{i−1}, X_i).
The joint data in step 1) is in the form of frame data.
The gesture motion segmentation method in the step 3) comprises the following steps:
31) The length L of the gesture data segmentation sliding window is set to 40 sampling points, and the gesture data uses fingertip joint point data;
32) The X-axis, Y-axis and Z-axis data of fingertip joint point i within the window are recorded as (X(i)_1, X(i)_2, …, X(i)_{L-1}, X(i)_L), (Y(i)_1, Y(i)_2, …, Y(i)_{L-1}, Y(i)_L) and (Z(i)_1, Z(i)_2, …, Z(i)_{L-1}, Z(i)_L), and the variances of the corresponding three-axis coordinate data are recorded as Var_X(i), Var_Y(i) and Var_Z(i), where Var_X(i) = (1/L)·Σ_{j=1..L}(X(i)_j − mean(X(i)))² and likewise for the Y and Z axes;
where i takes values 0 to 9, corresponding respectively to the thumb, index finger, middle finger, ring finger and little finger of the right hand, and the thumb, index finger, middle finger, ring finger and little finger of the left hand;
33) If the variance of any axis of any finger within the sliding window exceeds the given threshold, that window is the starting window of a gesture action, and the starting position of the sliding window is taken as the starting position of the gesture action;
34) After the starting position is detected, the sliding window continues to move; as long as the variance of some axis of some finger still exceeds the threshold, the window lies within the gesture action; once the variance of no axis of any finger exceeds the threshold, that window is the ending window of the gesture action, and the starting position of that sliding window is taken as the ending position of the gesture action;
35) Once the starting position and the ending position of the gesture action are determined, the gesture action is successfully segmented.
The gesture features in step 4) are: finger bending angle features, namely the included angles between adjacent joints of each finger, 3 bending angles per finger; fingertip joint point distance features, namely the pairwise distances between the fingertip joint points of the same hand, 10 distances per hand; finger pointing features, namely the pointing vector of each finger, a three-dimensional vector; displacement features of the dynamic gesture, namely the distance differences between two frames of data along the X, Y and Z axes; rotation angle features, namely the rotation angle between two frames of data; and rotation direction features, namely the rotation direction between two frames of data, represented by the normal vector of the rotation plane, a three-dimensional vector.
Figs. 3a to 3f are operation diagrams of the sign language recognition system, illustrating the process by which the system recognizes a sign language sentence; the recognized content is "very happy to know you". As shown in fig. 3a, after the system is started, it checks and displays the state of the sensing device, outputs the height information of both hands, and displays the camera's real-time image in the real-time image display area; as shown in fig. 3b, the system recognizes the gesture action corresponding to the sign "very" and displays the recognition result; as shown in fig. 3c, the system recognizes the gesture action corresponding to the sign "happy" and displays the recognition result; as shown in fig. 3d, the system recognizes the gesture action corresponding to the sign "recognize" and displays the recognition result; as shown in fig. 3e, the system recognizes the gesture action corresponding to the sign "recognition" and displays the recognition result; as shown in fig. 3f, the system recognizes the gesture action corresponding to the sign "you" and displays the recognition result. When the user clicks the "clear" button in the system interface, the contents of the sign language meaning display area are cleared.
While the invention has been described in terms of its preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (7)

1. A sign language recognition system based on a sensing device, comprising:
the gesture motion acquisition module is used for acquiring joint point data of a hand of a user in real time and performing gesture motion segmentation and feature extraction on the acquired data;
the gesture action model library is used for storing gesture action characteristics and gesture action classification models;
the calculation module is used for classifying the gesture actions, and classifying the action characteristics of the collected hand joint point data through a gesture action classification model in a gesture action model library so as to obtain a gesture action classification result;
the gesture motion segmentation method comprises the following steps:
setting the length L of the gesture data segmentation sliding window to 40 sampling points, the gesture data using fingertip joint point data;
recording the X-axis, Y-axis and Z-axis data of fingertip joint point i within the window as (X(i)_1, X(i)_2, …, X(i)_{L-1}, X(i)_L), (Y(i)_1, Y(i)_2, …, Y(i)_{L-1}, Y(i)_L) and (Z(i)_1, Z(i)_2, …, Z(i)_{L-1}, Z(i)_L), and recording the variances of the corresponding three-axis coordinate data as Var_X(i), Var_Y(i) and Var_Z(i), where Var_X(i) = (1/L)·Σ_{j=1..L}(X(i)_j − mean(X(i)))² and likewise for the Y and Z axes;
wherein i takes values 0 to 9, corresponding respectively to the thumb, index finger, middle finger, ring finger and little finger of the right hand, and the thumb, index finger, middle finger, ring finger and little finger of the left hand;
if the variance of any axis of any finger within the sliding window exceeds the given threshold, that window is the starting window of a gesture action, and the starting position of the sliding window is taken as the starting position of the gesture action;
after the starting position is detected, the sliding window continues to move; as long as the variance of some axis of some finger still exceeds the threshold, the window lies within the gesture action; once the variance of no axis of any finger exceeds the threshold, that window is the ending window of the gesture action, and the starting position of that sliding window is taken as the ending position of the gesture action;
once the starting position and the ending position of the gesture action are determined, the gesture action is successfully segmented;
the gesture features are: finger bending angle features, namely the included angles between adjacent joints of each finger, 3 bending angles per finger; fingertip joint point distance features, namely the pairwise distances between the fingertip joint points of the same hand, 10 distances per hand; finger pointing features, namely the pointing vector of each finger, a three-dimensional vector; displacement features of the dynamic gesture, namely the distance differences between two frames of data along the X, Y and Z axes; rotation angle features, namely the rotation angle between two frames of data; and rotation direction features, namely the rotation direction between two frames of data, represented by the normal vector of the rotation plane, a three-dimensional vector.
2. The perception-device based sign language recognition system of claim 1, further comprising a calibration module to calibrate whether the user's hands are within a valid detection range.
3. The perceptual device-based sign language recognition system of claim 2 further comprising a display module configured to display the gesture motion recognition result, the perceptual device state, the user's hand height, and a real-time image of the hands.
4. The perceptual-device-based sign language recognition system of claim 1, wherein the gesture action features in the gesture action model library are specifically: finger bending angle features, fingertip joint point distance features, finger pointing features, and the displacement features, rotation angle features and rotation direction features of dynamic gestures.
5. A working method of a sign language recognition system of a perception device is characterized by comprising the following steps:
1) Collecting joint point data of the user's hand in real time;
2) Denoising the joint point data to obtain processed joint point data;
3) Segmenting the processed joint point data to complete gesture action segmentation, determining the starting position and ending position of the gesture action, and extracting the data of the complete gesture action;
4) Calculating the gesture features used for gesture classification;
5) Classifying the gesture features corresponding to the gesture action, and determining the category of the gesture action;
6) Matching the sign language corresponding to the gesture action category, and displaying the gesture action classification result;
the gesture motion segmentation method in the step 3) comprises the following steps:
31) Setting the length L of the gesture data segmentation sliding window to 40 sampling points, the gesture data using fingertip joint point data;
32) Recording the X-axis, Y-axis and Z-axis data of fingertip joint point i within the window as (X(i)_1, X(i)_2, …, X(i)_{L-1}, X(i)_L), (Y(i)_1, Y(i)_2, …, Y(i)_{L-1}, Y(i)_L) and (Z(i)_1, Z(i)_2, …, Z(i)_{L-1}, Z(i)_L), and recording the variances of the corresponding three-axis coordinate data as Var_X(i), Var_Y(i) and Var_Z(i), where Var_X(i) = (1/L)·Σ_{j=1..L}(X(i)_j − mean(X(i)))² and likewise for the Y and Z axes;
wherein i takes values 0 to 9, corresponding respectively to the thumb, index finger, middle finger, ring finger and little finger of the right hand, and the thumb, index finger, middle finger, ring finger and little finger of the left hand;
33) If the variance of any axis of any finger within the sliding window exceeds the given threshold, that window is the starting window of a gesture action, and the starting position of the sliding window is taken as the starting position of the gesture action;
34) After the starting position is detected, the sliding window continues to move; as long as the variance of some axis of some finger still exceeds the threshold, the window lies within the gesture action; once the variance of no axis of any finger exceeds the threshold, that window is the ending window of the gesture action, and the starting position of that sliding window is taken as the ending position of the gesture action;
35) Once the starting position and the ending position of the gesture action are determined, the gesture action is successfully segmented;
the gesture features in step 4) are: finger bending angle features, namely the included angles between adjacent joints of each finger, 3 bending angles per finger; fingertip joint point distance features, namely the pairwise distances between the fingertip joint points of the same hand, 10 distances per hand; finger pointing features, namely the pointing vector of each finger, a three-dimensional vector; displacement features of the dynamic gesture, namely the distance differences between two frames of data along the X, Y and Z axes; rotation angle features, namely the rotation angle between two frames of data; and rotation direction features, namely the rotation direction between two frames of data, represented by the normal vector of the rotation plane, a three-dimensional vector.
6. The method of claim 5, wherein the joint data in step 1) is in the form of frame data.
7. The working method of sign language recognition system of sensing device as claimed in claim 5, wherein the method of data noise reduction in step 2) is:
21) Setting the length of the sliding window to 20 sampling points, each joint point being represented as three-dimensional coordinate data;
22) Applying a median filter to each axis of the joint point coordinate data; denoting the current joint point sample X_i, after median filtering X_i is replaced by the median of the window (X_{i−19}, …, X_{i−2}, X_{i−1}, X_i).
CN201910623439.2A 2019-07-11 2019-07-11 Sign language recognition system based on sensing equipment and working method thereof Active CN110390281B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910623439.2A CN110390281B (en) 2019-07-11 2019-07-11 Sign language recognition system based on sensing equipment and working method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910623439.2A CN110390281B (en) 2019-07-11 2019-07-11 Sign language recognition system based on sensing equipment and working method thereof

Publications (2)

Publication Number Publication Date
CN110390281A CN110390281A (en) 2019-10-29
CN110390281B (en) 2023-03-24

Family

ID=68286491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910623439.2A Active CN110390281B (en) 2019-07-11 2019-07-11 Sign language recognition system based on sensing equipment and working method thereof

Country Status (1)

Country Link
CN (1) CN110390281B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546824B (en) * 2022-04-18 2023-11-28 荣耀终端有限公司 Taboo picture identification method, apparatus and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130335318A1 (en) * 2012-06-15 2013-12-19 Cognimem Technologies, Inc. Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers
CN107678550A (en) * 2017-10-17 2018-02-09 哈尔滨理工大学 A kind of sign language gesture recognition system based on data glove
CN109597485B (en) * 2018-12-04 2021-05-07 山东大学 Gesture interaction system based on double-fingered-area features and working method thereof

Also Published As

Publication number Publication date
CN110390281A (en) 2019-10-29

Similar Documents

Publication Publication Date Title
Kumar et al. A position and rotation invariant framework for sign language recognition (SLR) using Kinect
Zhou et al. A novel finger and hand pose estimation technique for real-time hand gesture recognition
JP4625074B2 (en) Sign-based human-machine interaction
Hasan et al. RETRACTED ARTICLE: Static hand gesture recognition using neural networks
CN102231093B (en) Screen locating control method and device
Trindade et al. Hand gesture recognition using color and depth images enhanced with hand angular pose data
KR20200111617A (en) Gesture recognition method, device, electronic device, and storage medium
JP2001056861A (en) Device and method for recognizing shape and attitude of hand and recording medium where program implementing the method is recorded
Kalsh et al. Sign language recognition system
KR20150127381A (en) Method for extracting face feature and apparatus for perforimg the method
Dinh et al. Hand number gesture recognition using recognized hand parts in depth images
Joshi et al. American sign language translation using edge detection and cross correlation
Aggarwal et al. Online handwriting recognition using depth sensors
CN112749646A (en) Interactive point-reading system based on gesture recognition
Shinde et al. Real time two way communication approach for hearing impaired and dumb person based on image processing
Francis et al. Significance of hand gesture recognition systems in vehicular automation-a survey
CN107346207B (en) Dynamic gesture segmentation recognition method based on hidden Markov model
CN110390281B (en) Sign language recognition system based on sensing equipment and working method thereof
Caplier et al. Comparison of 2D and 3D analysis for automated cued speech gesture recognition
Robert et al. A review on computational methods based automated sign language recognition system for hearing and speech impaired community
KR101141936B1 (en) Method of tracking the region of a hand based on the optical flow field and recognizing gesture by the tracking method
Dhamanskar et al. Human computer interaction using hand gestures and voice
Yeom et al. [POSTER] Haptic Ring Interface Enabling Air-Writing in Virtual Reality Environment
Nguyen et al. Vietnamese sign language reader using Intel Creative Senz3D
Pradeep et al. Advancement of sign language recognition through technology using python and OpenCV

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant