CN210402266U - Sign language translation system and sign language translation gloves - Google Patents

Sign language translation system and sign language translation gloves

Info

Publication number
CN210402266U
CN210402266U (application CN201920633509.8U)
Authority
CN
China
Prior art keywords
sign language
user
motion
action
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201920633509.8U
Other languages
Chinese (zh)
Inventor
黎冰
李达峰
庄锐鸿
林伟斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Application granted granted Critical
Publication of CN210402266U publication Critical patent/CN210402266U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00: Teaching, or communicating with, the blind, deaf or mute
    • G09B 21/04: Devices for conversing with the deaf-blind

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A sign language translation system and sign language translation gloves. The sign language translation system comprises an acquisition module and a data recognition module. The acquisition module acquires hand motion signals of a user. The data recognition module, connected with the acquisition module, performs normalization processing on the motion signals so as to map them to a preset range, extracts a feature vector group, and recognizes the feature vector group to obtain a sign language result. The sign language translation system of the embodiments of the utility model can accurately obtain, in real time, the thought content matching the motion signals from the user's hand motion information, thereby realizing two-way information exchange between users. Machine learning on the motion signals achieves high accuracy, improving the efficiency of the user's sign language translation, so that users can communicate more conveniently and flexibly through sign language with an excellent user experience.

Description

Sign language translation system and sign language translation gloves
Technical Field
The utility model belongs to the field of electronic technology, and in particular relates to a sign language translation system and sign language translation gloves.
Background
Sign language is a unique mode of communication commonly used by people with hearing and speech impairments. Through changes of gesture it simulates images or syllables to form meanings or words equivalent to spoken language, and it is of great significance in promoting communication among deaf-mute people; sign language can help deaf-mute people reintegrate into society, and thus plays a positive role in harmonious social development. For example, China has the largest disabled population in the world, and deaf-mute people account for about 33 percent of that total. Sign language has become the main mode of communication between deaf-mute people and hearing people in daily life, yet most people do not know sign language, and communication by pen and paper is very limited, so communication barriers exist between deaf-mute people and hearing people. Realizing sign language translation can therefore help deaf-mute people overcome social obstacles and bring them real assistance.
However, in practice sign language still faces great limitations in popularization among deaf-mute people, mainly for the following reasons. The conventional technology cannot translate the sign language of deaf-mute people accurately in real time. Conventional sign language translation relies on computer vision, so the translation process is constrained by the external environment and space: because human language is diverse and changeable, when a person expresses thoughts in sign language the arms move within the external environment, and the conventional technology must capture the motion amplitude and swing position of the arms with a computer and other equipment. Arm-position detection is therefore limited by external light and the background, errors easily occur when capturing the motion amplitude of the arms, the two parties in a conversation cannot convey their ideas correctly through sign language, and communication efficiency is reduced. Moreover, analyzing sign language by computer vision requires equipment such as cameras, which increases the cost of sign language translation and is inconvenient for the user to carry; a camera must also transmit and process a large amount of data during operation, so its efficiency is extremely low.
In summary, the sign language translation devices of the conventional technology are limited by the external environment, so their translation precision is low and sign language communication efficiency is reduced. In addition, conventional sign language translation is complex to operate and extremely costly, bringing great inconvenience to users, so conventional sign language translation equipment cannot be generally applied.
SUMMARY OF THE UTILITY MODEL
In view of this, the embodiments of the present utility model provide a sign language translation system and sign language translation gloves, aiming to solve the problems of the conventional technical solutions: excessive sign language translation error, susceptibility to interference from the external environment, low stability and flexibility, low translation efficiency for the user's sign language, and excessively high application cost.
A first aspect of the embodiments of the present utility model provides a sign language translation system, comprising:
an acquisition module, configured to acquire hand motion signals of a user; and
a data recognition module, connected with the acquisition module and configured to perform normalization processing on the motion signals so as to map them to a preset range, extract a feature vector group, and recognize the feature vector group to obtain a sign language result.
In one embodiment thereof, the acquisition module comprises:
a six-axis sensor, connected with the data recognition module and configured to acquire the motion angular velocity and motion acceleration of the user's hand; and
a bending sensor, connected with the data recognition module and configured to collect the bending angles of the user's fingers.
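The two sensors above can be thought of as producing one combined sample per instant. The following Python sketch is purely illustrative: the field names, units, and five-finger layout are assumptions for demonstration, not details taken from the utility model.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MotionSample:
    """One hand-motion sample from the acquisition module.

    gyro: angular velocity about x/y/z from the six-axis sensor (deg/s)
    accel: linear acceleration along x/y/z from the six-axis sensor (g)
    flex: bending angle of each of the five fingers (degrees)
    """
    gyro: Tuple[float, float, float]
    accel: Tuple[float, float, float]
    flex: Tuple[float, float, float, float, float]

    def as_vector(self):
        """Flatten the sample into one list for later normalization."""
        return list(self.gyro) + list(self.accel) + list(self.flex)

sample = MotionSample(gyro=(1.5, -0.2, 3.0),
                      accel=(0.0, 0.98, 0.1),
                      flex=(10.0, 45.0, 60.0, 30.0, 5.0))
print(len(sample.as_vector()))  # 3 + 3 + 5 = 11 values per sample
```

A stream of such flattened vectors is what the data recognition module would then normalize and classify.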
In one embodiment, the sign language translation system further includes:
a filtering module, connected between the six-axis sensor and the data recognition module and configured to perform Kalman filtering on the motion angular velocity and the motion acceleration.
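The Kalman filtering step above can be sketched as a minimal one-dimensional filter applied to a single sensor channel. The constant-state model and the noise variances `q` and `r` below are illustrative assumptions; the utility model does not specify them.

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter for one sensor channel.

    q: process-noise variance, r: measurement-noise variance
    (both would be tuned for the actual six-axis sensor).
    """
    def __init__(self, q=1e-3, r=1e-1, x0=0.0, p0=1.0):
        self.q, self.r = q, r
        self.x, self.p = x0, p0  # state estimate and its variance

    def update(self, z):
        # Predict: the state is modeled as constant, so only the
        # estimate's variance grows by the process noise.
        self.p += self.q
        # Correct: blend prediction with measurement z via the gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman()
noisy = [5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7]
smoothed = [kf.update(z) for z in noisy]  # converges toward ~5.0
```

In practice one such filter would run per axis of angular velocity and acceleration before the values reach the data recognition module.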
In one embodiment thereof, the bending sensor comprises:
a pressure sensor, configured to receive the stress deformation of the user's finger so that its resistance value changes; and
an A/D converter, configured to obtain a voltage value responsive to the change of the pressure sensor's resistance value, and to derive the bending angle of the user's finger from the voltage value.
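The resistance-to-voltage-to-angle chain above can be sketched as follows, assuming the flex sensor forms the upper leg of a simple voltage divider read by a 10-bit A/D converter. All component values, the linear resistance model, and the 0 to 90 degree range are illustrative assumptions, not figures from the utility model.

```python
def adc_to_angle(adc_count, adc_max=1023, vcc=3.3,
                 r_fixed=10_000.0, r_flat=25_000.0, r_bent=100_000.0):
    """Convert a raw A/D reading from a flex-sensor voltage divider
    into an approximate finger bend angle (0..90 degrees).

    Assumes the flex sensor is the upper leg of a divider with a fixed
    resistor r_fixed, and that its resistance rises linearly from
    r_flat (finger straight) to r_bent (finger fully bent).
    """
    v_out = vcc * adc_count / adc_max          # ADC count -> voltage
    v_out = min(max(v_out, 1e-6), vcc - 1e-6)  # guard the division below
    r_flex = r_fixed * (vcc - v_out) / v_out   # voltage -> resistance
    frac = (r_flex - r_flat) / (r_bent - r_flat)
    return 90.0 * min(max(frac, 0.0), 1.0)     # resistance -> angle

straight = adc_to_angle(292)  # near 0 degrees with these values
bent = adc_to_angle(93)       # near 90 degrees with these values
```

A firmware implementation would typically also calibrate `r_flat` and `r_bent` per finger, since flex sensors vary between units.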
In one embodiment, the sign language translation system further includes:
a wireless sending module, connected between the data recognition module and a server terminal and configured to wirelessly send the normalized motion signals to the server terminal, so that the server terminal can display the normalized motion signals.
In one embodiment, the sign language translation system further includes:
a recording module, connected with the data recognition module and configured, after the data recognition module obtains the sign language result, to record the normalized motion signals and the sign language result into a sign language database.
A second aspect of the embodiments of the present invention provides a sign language translation glove, including the sign language translation system as described above.
Through the acquisition module, the sign language translation system can acquire the user's hand motion signals in real time. When the user expresses ideas in sign language, the word meaning matching the hand motion information is analyzed from the motion signals; after the normalized motion signals are recognized, the essential ideas of the user can be summarized from the historical motion information of the user's hand. When another user obtains the sign language result, the two communicating parties complete the information interaction, sign language substituting for speech to realize the exchange of ideas between them. Therefore, after the user's hand motion information is intelligently analyzed and summarized, the user's sign language result can be recognized accurately. The sign language translation system obtains the corresponding sign language result after self-learning and self-training on the hand motion signals, thereby eliminating the interference of external environment information with the sign language recognition process, and it can accurately recognize the user in a variety of external environments. The system has a simplified structure, processes the hand motion signals extremely quickly, translates sign language efficiently, offers strong functionality, and is compact. The sign language translation system of the embodiments of the utility model has high compatibility and provides a good user experience, solving the problems of low sign language translation precision, excessive circuit application cost, low translation efficiency, and difficulty of general application found in conventional sign language translation systems.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present utility model, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present utility model, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a sign language translation system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a data recognition module according to an embodiment of the present invention;
fig. 3 is another schematic structural diagram of a data recognition module according to an embodiment of the present invention;
fig. 4 is a circuit diagram of a sign language translation system according to an embodiment of the present invention;
fig. 5 is another schematic structural diagram of a sign language translation system according to an embodiment of the present invention;
fig. 6 is yet another schematic structural diagram of a sign language translation system according to an embodiment of the present invention;
fig. 7 is a schematic structural view of sign language translation gloves according to an embodiment of the present invention;
fig. 8 is a specific flowchart of a sign language translation method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, which is a schematic structural diagram of the sign language translation system 10 provided by an embodiment of the present utility model: the sign language translation system 10 can analyze in real time the substantive meaning represented by the user's hand motions, with small error, bringing great convenience to the two communicating parties. For convenience of explanation, only the parts related to this embodiment are shown, detailed as follows:
as shown in fig. 1, the sign language translation system 10 includes: an acquisition module 101 and a data recognition module 102.
The acquisition module 101 is configured to acquire a hand motion signal of a user.
Here the hand refers to the arm of the human body, clothing on the arm, and the like. The human body completes most functions in life through the hands; for deaf-mute people in particular, the hands are the main means of daily communication. In this embodiment, the user completes corresponding actions by waving the arm according to the agreed rules of sign language, conveying his or her thoughts to the outside. The hand motion signals therefore contain the user's thought information, and the actual idea the user wishes to express can be obtained by further analysis of the motion signals. The acquisition module 101 accurately acquires the spatial information and motion trajectory of the hand, guaranteeing the efficiency and precision of the user's sign language communication; the user's substantive idea can thus be reflected by the hand motion signals, and the sign language translation system 10 translates sign language with high flexibility.
The data recognition module 102 is connected to the acquisition module 101 and is configured to perform normalization processing on the motion signal, so that the motion signal is mapped to a preset range and a feature vector group is extracted; and to recognize the feature vector group to obtain a sign language result.
As an optional implementation, the preset range is set in advance; for example, if the preset range is 0 to 1, the data recognition module 102 normalizes the motion signal and then maps it to the range 0 to 1.
In this embodiment, the acquisition module 101 outputs the motion signal to the data recognition module 102, realizing information interaction between the two modules. Because the user's hand motions contain certain movement errors while the arm expresses sign language, the motion signal also fluctuates and suffers interference. For example, when a user expresses the same language meaning in sign language, the amplitude of the hand swing differs slightly each time, so the same meaning corresponds to several different motion signals; meanwhile, the motion signals in this embodiment are of various types, and different types of data cannot be directly compared. Therefore, through the normalization of the motion signal by the data recognition module 102, on the one hand the signal error caused by fine fluctuations each time the user signs can be reduced; on the other hand, normalization eliminates the dimensional differences between different types of data, so the user's idea can be analyzed more comprehensively from the hand motion signal. The normalized motion signal is thus more standardized, which improves the precision of intelligent detection of the user's hand motion information and benefits the comprehensive evaluation precision of the sign language translation system 10 for the hand motion signals.
Specifically, there are various normalization methods for data; preferably, the normalization of the motion signal in this embodiment adopts the following formula:
x* = (x − μ) / σ   (1)
In the above formula (1), x is the amplitude of the motion signal output by the acquisition module 101, μ is the mean of the amplitudes of the motion signals output by the acquisition module 101, σ is the standard deviation of those amplitudes, and x* is the amplitude of the motion signal after normalization. In this embodiment, after the motion signal is normalized according to formula (1), the normalized motion signal more accurately reflects the user's intended language, reducing the interference caused by changes in the external environment to the user's sign language recognition process and ensuring the accuracy and reliability of the motion signal.
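Formula (1) is the standard z-score normalization, which can be sketched directly; the constant-signal fallback below is an added safeguard, not part of the utility model.

```python
import statistics

def normalize(amplitudes):
    """Z-score normalization per formula (1): x* = (x - mu) / sigma.

    amplitudes: raw motion-signal amplitudes from the acquisition
    module. Returns the normalized amplitudes; if the signal is
    constant (sigma == 0), the values are simply centered at zero.
    """
    mu = statistics.fmean(amplitudes)
    sigma = statistics.pstdev(amplitudes)  # population std deviation
    if sigma == 0:
        return [0.0 for _ in amplitudes]
    return [(x - mu) / sigma for x in amplitudes]

raw = [2.0, 4.0, 6.0, 8.0]
norm = normalize(raw)  # mean 5.0, population std ~2.236
```

After this step every channel has zero mean and unit variance, which is what removes the dimensional differences between sensor types discussed above.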
When another user obtains the sign language result, information interaction and communication between the two users are realized; the actual thought content of the user can be obtained accurately through the sign language result, completing real-time information exchange between the user and others, and the operation is simple and convenient.
In this embodiment, the data recognition module 102 has data analysis and processing capability, and the thought content contained in the motion signal can be obtained accurately through it. After acquiring the normalized motion signal, the data recognition module 102 performs self-learning and self-heuristic processing on the signal to obtain the feature vector group, which contains the feature information of the motion signal; the user's hand motion state can be derived from the feature vector group, realizing deep analysis of the motion signal. Further, the feature vector group undergoes operations such as learning and training; after machine learning processing, the motion signals are classified and recognized according to the data distribution rules within the feature vector group. When the sign language translation system 10 acquires hand motion signals, the data recognition module 102 deeply mines and judges the feature information in the feature vector group and then obtains, in real time, the sign language result matching the motion signals, accurately recognizing the user's sign language information. The sign language translation system 10 in this embodiment therefore achieves high sign language translation accuracy and efficiency. By recognizing the corresponding sign language result from the variation rules of the hand motion signals, it attains a high level of intelligent recognition of the user's sign language; variations in the external environment cannot interfere with the sign language recognition of the data recognition module 102, and the sign language information obtained by the data recognition module 102 has high authenticity, so the sign language translation system 10 can be applied in different industrial environments and has extremely strong compatibility.
In the sign language translation system 10 shown in fig. 1, the user's hand motion information, which contains the language content the user needs to express, can be obtained in real time through the acquisition module 101. Standardizing the motion signal and then extracting the feature vector group facilitates comprehensive processing and recognition of the standardized data to obtain a more accurate sign language result; the operation is simple and convenient, and the efficiency and accuracy of the user's sign language recognition are high. The sign language translation system 10 in this embodiment processes the user's sign language information with only two modules (the acquisition module 101 and the data recognition module 102), so it has a simple structure and high flexibility and effectively reduces the cost of detecting and recognizing the user's sign language. Meanwhile, the embodiment of the utility model can perform self-learning and self-heuristic operations on the feature vector group, accurately obtaining the sign language result from the user's sign language information and expression habits while preventing external environmental disturbances from interfering with the sign language recognition process. The detected sign language result has high stability and reliability, the user's sign language communication efficiency is improved, and the user experience is good; the sign language translation system 10 has high compatibility, is suitable for various external environments, and brings great convenience to users. This effectively solves the problems of the conventional technology: large sign language translation error, susceptibility of the translation process to variation in the external environment, and high cost of sign language translation.
As an optional implementation, fig. 2 shows a specific structure of the data recognition module 102 provided in this embodiment. Referring to fig. 2, the data recognition module 102 includes: a feature extraction unit 1021 and a recognition unit 1022.
The feature extraction unit 1021 is connected with the acquisition module 101 and is configured to perform normalization processing on the motion signal, so that the motion signal is mapped to a preset range and a feature vector group is extracted.
Specifically, the motion signal includes the user's hand motion data. The feature vector group is a set of feature vectors: by summarizing and classifying the feature information of the normalized motion signals, feature vectors for different intervals can be obtained, each representing a certain category of data information. For example, this embodiment can manually select an optimal feature vector according to the influence weight of each factor in the sign language information, so that the feature vector accurately captures the user's sign language information. Extracting the feature vector group completely filters out the useless information in the motion signal; the feature vectors are obtained according to the action rules of sign language and constitute effective data information, so processing and analyzing them yields the corresponding sign language result more effectively and accurately while saving processing and analysis time. The motion information in the feature vectors speeds up recognition of the user's sign language and guarantees the precision of sign language action recognition.
The recognition unit 1022 is connected to the feature extraction unit 1021 and is configured to input the feature vector group into a pre-established gesture template library, and to perform one-by-one matching recognition against pre-stored gesture data to obtain the sign language result.
Specifically, the gesture template library establishes matching rules between gesture actions and the user's thought content, and the recognition result of a hand action can be obtained quickly and accurately from this comparison relation. When the feature vectors in the feature vector group are transmitted to the gesture template library, they are matched one by one against the gesture data in the library; if a feature vector is successfully matched with a gesture datum, that gesture datum is taken as the sign language result. The gesture data represent the thought content the user actually wishes to express, and the corresponding thought content can be conveyed to the outside according to the sign language result, realizing real-time information interaction between the two communicating parties.
The recognition unit 1022 in this embodiment implements data classification and matching. The gesture template library contains empirical information on the user's hand motions; the feature extraction unit 1021 outputs the feature vector group, whose feature vectors represent the user's hand motion information, and the recognition unit matches the feature vector group one by one against the gesture data pre-stored in the gesture template library. If the feature vector group is successfully matched with a gesture datum, sign language recognition for the user has succeeded; otherwise, matching continues with the other gesture data until the feature vector group can be identified through the gesture template library. This embodiment can therefore recognize the user's hand motion information accurately and in real time through the recognition unit 1022, and the sign language result has high accuracy and feasibility. Because the gesture template library establishes a one-to-one correspondence between the user's hand motions and the user's language content, the recognition unit 1022 can intelligently recognize the user's hand motions according to the library and obtain the user's thought content from the empirical values of the motion signals, improving the recognition rate and accuracy of the user's sign language and avoiding interference from external environment information in the recognition process. Sign language recognition using the recognition unit 1022 thus has high stability and reliability, and the sign language translation system 10 has high compatibility.
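The one-by-one matching described above can be sketched as a nearest-template search. The Euclidean distance metric, the `threshold` value, and the example meanings and vectors below are illustrative assumptions; the utility model does not specify a distance measure.

```python
import math

def match_sign(feature_vec, template_library, threshold=0.5):
    """Match a feature vector against a gesture template library,
    one template at a time, as the recognition unit does.

    template_library: dict mapping a sign-language meaning to its
    stored template vector. A candidate matches when its Euclidean
    distance falls below `threshold`; the closest such candidate
    wins. Returns None when no template matches.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best = None
    for meaning, template in template_library.items():
        d = dist(feature_vec, template)
        if d < threshold and (best is None or d < best[1]):
            best = (meaning, d)
    return best[0] if best else None

library = {"hello": [0.9, 0.1, 0.0], "thanks": [0.0, 0.8, 0.6]}
result = match_sign([0.85, 0.15, 0.05], library)  # closest to "hello"
```

Returning `None` on a failed match corresponds to the case where the feature vector group cannot yet be identified through the template library.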
As a specific implementation, fig. 3 shows another specific structure of the data recognition module 102 provided in this embodiment. Compared with the structure in fig. 2, the data recognition module 102 in fig. 3 further includes a training unit 1023, which establishes the gesture template library according to the feature vector group.
Specifically, referring to fig. 3, the training unit 1023 is connected between the feature extraction unit 1021 and the recognition unit 1022; the training unit 1023 is configured to construct a training set and to train on the feature vector group samples with the training set to obtain the gesture template library.
In this embodiment, when the feature extraction unit 1021 outputs the feature vector group to the training unit 1023, the training unit 1023 can analyze the feature information contained in each feature vector of the group. Specifically, a training set can be constructed according to the differences between motion signals under different semantic conditions; the training set can fit a virtual data function model, and when data are input into this mathematical function model, the model performs machine learning according to the relations between the data and their variation rules to obtain a set for each category of data. In the training unit 1023, the feature vector group serves as the training samples, and the variation characteristics of the feature vectors are trained and classified through the training set; taking the training samples as center points, a hand motion information set of each category is established, forming the gesture template library. Each category of hand motion information set contains the motion information of several sign language actions, and each category of sign language motion information represents a specific sign language meaning. The gesture template library thus includes the thought content matched to each category of sign language action information; from its gesture data, the user information represented by each category of hand motion can be judged, so the gesture template library acts as a standard sign language translation integrator. In this embodiment, the training unit 1023 can access the feature vectors in real time and perform machine learning on them with the training set to extract the user's motion information and read the empirical information in the hand motions; the gesture data in the gesture template library can thus fully conform to the user's sign language expression habits, and the gesture template library can express the user's hand motion information more comprehensively. Through deep mining and self-heuristic learning on the feature vector group samples, the training set can accurately classify the feature attributes of hand actions, making the hand action recognition of the sign language translation system 10 more accurate, allowing the sign language translation system 10 to perform sign language translation according to each user's specific sign language expression habits, and giving users a good experience.
Therefore, in this embodiment, the data recognition module 102 performs self-heuristic learning on the hand motion signals through the training unit 1023. Based on the pre-established gesture template library, which conforms to the user's sign language expression habits, it can obtain language expression information more comprehensively and realize a machine learning function for hand motion information; the data recognition module 102 thus has high compatibility and excludes the interference of external environmental factors on the hand action recognition process. Through the gesture data in the gesture template library, this embodiment can accurately determine the semantic content represented by the user's sign language actions, improving the precision and accuracy of sign language translation.
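The patent does not disclose the concrete learning algorithm behind the training unit 1023. One minimal way to illustrate the template-library idea — class centers built from labelled feature-vector samples, then recognition by matching against the nearest center — is sketched below; all labels, data and function names are hypothetical.

```python
import math

def build_template_library(samples):
    """Build one centroid per sign from labelled feature vectors.
    samples: dict mapping a sign label -> list of feature vectors."""
    library = {}
    for label, vectors in samples.items():
        n = len(vectors)
        library[label] = [sum(v[i] for v in vectors) / n
                          for i in range(len(vectors[0]))]
    return library

def recognize(library, feature_vector):
    """Return the sign whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda label: dist(library[label], feature_vector))

# Hypothetical training samples: two signs, 3-D feature vectors
samples = {
    "hello":  [[0.9, 0.1, 0.2], [1.0, 0.0, 0.3]],
    "thanks": [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]],
}
library = build_template_library(samples)
result = recognize(library, [0.95, 0.05, 0.25])  # near the "hello" centroid
```

A real system would use many more samples per sign and a richer classifier, but the centroid-plus-matching structure mirrors the "sample as center point, set per class" description above.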
As an alternative embodiment, the motion signal of the hand includes: the motion angular velocity, the motion acceleration, and the bending angle of the finger. Collecting these motion signals comprehensively reflects the motion information of the user's hand in space, where the semantic content of the sign language corresponds one-to-one with the motion amplitude of the hand. The more motion parameters of the hand are detected, the more comprehensively the user's hand motion information can be analyzed from them, and the smaller the detection error of the user's sign language; the sign language translation system 10 in this embodiment therefore recognizes the user's sign language with higher accuracy.
As an alternative implementation, fig. 4 shows a circuit structure of the sign language translation system 10 provided in this embodiment; the acquisition module 101 includes a six-axis sensor and at least one bending sensor.
The six-axis sensor is connected to the data recognition module 102, and is used for acquiring the motion angular velocity and the motion acceleration of the hand of the user.
From the motion angular velocity, the moving direction and rotational speed of the user's hand can be obtained, so when the user expresses sign language with hand movements, the angular velocity indicates the hand's motion trend; from the motion acceleration, the amplitude of the hand's movement and of its swing in space can be obtained. In this embodiment the six-axis sensor accurately captures the motion track of the user's hand in space, and combining angular velocity with acceleration allows the hand's motion variation to be monitored comprehensively, giving the sign language translation system 10 high monitoring efficiency and precision for the user's hand motion and avoiding data detection errors.
Referring to fig. 4, the six-axis sensor in this embodiment can be implemented with an MPU6050 sensor chip; the communication pins (I/O) of the MPU6050 are connected to the data identification module 102 to realize data transmission between them. The MPU6050 acquires the user's hand motion information and performs the corresponding signal processing. Because the MPU6050 offers spatial motion detection, accurately detects small changes in motion amount, and has extremely high sensitivity, it can detect the spatial position and motion state of the hand in real time, ensuring the response speed and precision of the sign language translation system 10 for hand motion signals; the chip is full-featured and highly sensitive, and effectively reduces the cost of recognizing hand motions in the sign language translation system 10.
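Per its datasheet, the MPU6050 reports each axis as a 16-bit big-endian two's-complement value (accelerometer registers start at 0x3B, gyroscope at 0x43). The I2C transfer itself is hardware-specific and omitted here; the sketch below only shows decoding raw register bytes into physical units, with the ±2 g scale factor (16384 LSB/g) taken from the datasheet. The byte layout of `sample` is illustrative.

```python
def to_int16(hi, lo):
    """Combine two register bytes into a signed 16-bit value."""
    value = (hi << 8) | lo
    return value - 65536 if value & 0x8000 else value

def decode_accel(raw6, full_scale_g=2.0):
    """Decode 6 accelerometer bytes into (x, y, z) in g.
    At +/-2 g full scale, sensitivity is 32768 / 2 = 16384 LSB per g."""
    lsb_per_g = 32768.0 / full_scale_g
    return tuple(to_int16(raw6[i], raw6[i + 1]) / lsb_per_g
                 for i in (0, 2, 4))

# Example raw bytes: 0x4000 = 16384 LSB -> +1.0 g; 0xC000 -> -1.0 g
sample = [0x40, 0x00, 0x00, 0x00, 0xC0, 0x00]
ax, ay, az = decode_accel(sample)  # (1.0, 0.0, -1.0)
```

The gyroscope bytes decode the same way, with an LSB-per-(deg/s) factor that depends on the configured full-scale range.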
The bending sensor is connected to the data recognition module 102, and the bending sensor is used for collecting a bending angle of a finger of a user.
The user conveys sign language information through the bending of the fingers; different sign language contents correspond to different finger bending angles, so by monitoring the bending angles of the user's fingers, the swing amplitude and bending amplitude of the hand can be accurately determined, and the motion information of the fingers also carries the user's sign language variation information. Optionally, the bending sensor is attached to a finger joint; as the finger bends to different angles, the sensor senses the degree of bending and converts it into corresponding data from which the user's language content is identified. This embodiment can therefore accurately detect fine motion changes of the fingers with the bending sensor and analyze the user's underlying language information from the bending angle, realizing comprehensive monitoring of the user's hand actions and avoiding sign language recognition errors; meanwhile, the number of bending sensors can be chosen according to the measurement accuracy required for the sign language actions, improving the compatibility and user experience of the sign language translation system 10. Illustratively, referring to fig. 4, the acquisition module 101 includes 5 bending sensors Flex 1-Flex 5; when the sign language translation system 10 detects finger bending, one sensor is attached to each finger, and each sensor detects the bending angle of its finger and outputs it to the data recognition module 102. The acquisition module 101 can thus detect the bending angles of all 5 fingers synchronously, with high detection precision for the motion information of each finger, so the user's sign language information can be obtained more accurately from the finger bending angles.
As a specific embodiment, the bending sensor includes a pressure sensor and an A/D converter.
The pressure sensor is used for receiving the stress of the finger of the user and deforming so as to change the resistance value.
When the user communicates in sign language, the force the finger exerts on an external object changes as the finger moves, and the pressure sensor converts this motion signal into a resistance signal through the piezoresistive effect: when external pressure is applied, the sensor deforms mechanically and its electrical parameters change, so the magnitude of the external pressure can be obtained by measuring those electrical parameters. The resistance of the pressure sensor in this embodiment therefore varies with the applied force: different finger bending angles produce different resistance values, realizing the conversion into an electrical signal and facilitating the detection of the finger's motion state in this embodiment.
For example, when the pressure sensor receives a greater stress from a user's finger, the deformation amplitude of the pressure sensor is greater, and the resistance value of the pressure sensor is greater; the present embodiment can accurately detect the finely varied state of the user's finger by the pressure sensor.
The A/D converter is used for obtaining a voltage value which is in response to the change of the resistance value of the pressure sensor, and obtaining the bending angle of the finger of the user according to the voltage value.
The A/D converter provides analog-to-digital conversion. As described above, when power is supplied to the pressure sensor, by Ohm's law the voltage across the sensor varies with its resistance, so the sensor's voltage value responds to, and corresponds one-to-one with, its resistance value. Compared with measuring resistance directly, measuring voltage is simpler and cheaper. In this embodiment the changing voltage of the pressure sensor is acquired in real time through A/D conversion to obtain the sensor's deformation amplitude; further processing of the voltage value then yields the corresponding finger bending angle with high identification precision. The A/D converter enables fast signal conversion, improving the efficiency and accuracy of detecting the user's finger motion; once the A/D converter obtains the bending angle, it transmits it to the data recognition module 102, which can then normalize and machine-learn the relevant motion signals in real time.
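The resistance-to-voltage-to-angle path described above can be illustrated with voltage-divider arithmetic: the sensor sits in series with a fixed resistor, the ADC samples the divider midpoint, the sensor resistance is recovered by Ohm's law, and the bend angle is interpolated from a calibration. Every component value and calibration point below is hypothetical.

```python
def sensor_resistance(v_out, v_cc=3.3, r_fixed=10_000.0):
    """Recover sensor resistance from the divider midpoint voltage.
    Divider: Vcc -- R_fixed -- (ADC tap) -- R_sensor -- GND, so
    V_out = Vcc * R_sensor / (R_fixed + R_sensor)."""
    return r_fixed * v_out / (v_cc - v_out)

def bend_angle(v_out, r_flat=10_000.0, r_bent=20_000.0, angle_max=90.0):
    """Linearly interpolate bend angle from resistance (hypothetical
    calibration: 10 kOhm when flat, 20 kOhm at 90 degrees of bend)."""
    r = sensor_resistance(v_out)
    fraction = (r - r_flat) / (r_bent - r_flat)
    return max(0.0, min(angle_max, fraction * angle_max))

# Midpoint at 1.65 V -> R_sensor = 10 kOhm -> flat (0 degrees)
flat = bend_angle(1.65)
# Midpoint at 2.2 V -> R_sensor = 20 kOhm -> fully bent (90 degrees)
bent = bend_angle(2.2)
```

Real flex sensors are not perfectly linear, so a lookup table or per-finger calibration curve would typically replace the linear interpolation.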
As an alternative embodiment, the six-axis sensor comprises: a three-axis gyroscope, a three-axis accelerometer, and a digital motion processing engine.
Please refer to fig. 4, the three circuit functions of the three-axis gyroscope, the three-axis accelerometer, and the digital motion processing engine can be implemented by the MPU6050 sensor chip.
The three-axis gyroscope measures the position, movement track and acceleration of the user's hand in six directions, so it can comprehensively detect the motion variation of the hand in every direction of three-dimensional space, with high detection precision and efficiency for the hand's spatial position. The three-axis accelerometer measures the velocity change of the hand in each dimension as it moves, comprehensively and accurately detecting the hand's motion trend, such as acceleration or deceleration, so the movement tendency of the user's hand can be detected accurately and the sign language actions can be mined in depth. The digital motion processing engine stores and identifies the hand's motion angular velocity and motion acceleration: a user generates a large amount of angular-velocity and acceleration data while signing, and the engine keeps the data in the six-axis sensor up to date so that the current latest values are always held. Illustratively, the digital motion processing engine includes registers, and the motion angular velocity and acceleration can be read by accessing those registers, realizing data transmission between the acquisition module 101 and the data recognition module 102. The engine therefore safeguards the angular-velocity and acceleration data, avoids data loss, and allows the motion changes of the user's hand to be judged more accurately from those values.
Therefore, the six-axis sensor in this embodiment has a compact structure: the spatial position data of the user's hand can be acquired from multiple aspects through the three-axis gyroscope, the three-axis accelerometer and the digital motion processing engine, the motion angular velocity and acceleration of the hand can be analyzed more comprehensively from those position data, and the volume the six-axis sensor occupies in the sign language translation system 10 is reduced.
As an alternative implementation, fig. 5 shows another schematic structure of the sign language translation system 10 provided in this embodiment, please refer to fig. 5, wherein the six-axis sensor 1011 and the bending sensor 1012 in fig. 5 have been discussed in detail in the above embodiments, and will not be described again here; wherein the sign language translation system 10 in fig. 5 further comprises a filtering module 401.
The filtering module 401 is connected between the six-axis sensor 1011 and the data identification module 102, and the filtering module 401 is configured to perform kalman filtering processing on the motion angular velocity and the motion acceleration.
It should be noted that Kalman filtering realizes an optimal estimation function: according to the overall variation pattern of the data, it filters out abnormal values that clearly deviate from the trend, so removing these outliers makes the action signal as a whole more stable while retaining the original information in the data. Kalman filtering achieves an optimal estimate of the data's position and improves the precision of data conversion processing; it has long been widely used in the data processing field.
Specifically, when the six-axis sensor 1011 detects the motion state of the user's hand, the sensor has inherent detection characteristics, and unintentional shaking of the hand during sign language communication introduces large random errors, so the hand's spatial position exhibits a certain error fluctuation; since the user's semantic expression is unique, the motion state of the hand in space strongly affects the measured angular velocity and acceleration. If the data recognition module 102 received erroneous angular-velocity and acceleration values, the sign language translation system 10 would make larger errors in recognizing the user's sign language actions. In this embodiment the filtering module 401 overcomes the data errors caused by small hand disturbances: Kalman filtering of the motion angular velocity and acceleration removes the noise in the data, so the values output by the filtering module 401 are more stable and reliable, faithfully represent the user's intended meaning, and give the data received by the data identification module 102 higher stationarity. The sign language translation system 10 can therefore obtain more accurate angular-velocity and acceleration values and monitor the user's hand motion state more precisely.
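The patent does not give the filter parameters, but the filtering step can be sketched as a scalar Kalman filter smoothing successive angular-velocity samples; the process-noise and measurement-noise variances below are illustrative, and the readings are synthetic.

```python
class Kalman1D:
    """Scalar Kalman filter for a slowly varying signal.
    q: process-noise variance, r: measurement-noise variance."""
    def __init__(self, q=1e-3, r=0.25, x0=0.0, p0=1.0):
        self.q, self.r = q, r
        self.x, self.p = x0, p0   # state estimate and its variance

    def update(self, z):
        # Predict: the state model is "constant", so only variance grows.
        self.p += self.q
        # Correct: blend prediction and measurement by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Noisy angular-velocity readings around a true value of 10 deg/s
readings = [10.4, 9.7, 10.2, 9.9, 10.6, 9.5, 10.1, 10.0]
kf = Kalman1D()
estimates = [kf.update(z) for z in readings]
```

In a full implementation, the angular velocity and acceleration would each be filtered (or combined in a multi-dimensional state), and `q` and `r` would be tuned to the sensor's actual noise characteristics.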
As an optional implementation, the filtering module 401 may be implemented with a conventional Kalman filtering circuit; for example, a technician may set the parameters of the Kalman filtering circuit to realize different filtering behaviors of the filtering module 401 for the motion angular velocity and acceleration. The sign language translation system 10 in this embodiment thus has higher data acquisition sensitivity and data processing accuracy, a better user experience, and a wider range of application.
As an optional implementation, referring to fig. 4, the data recognition module 102 in this embodiment may be implemented with a main control chip of model STM32F103. A communication pin of the main control chip is connected to the acquisition module 101; when the acquisition module 101 outputs an action signal to the main control chip, the chip intelligently processes the data and performs operations such as learning and training on the stored action signals. The main control chip can autonomously learn and explore the feature information of the action signals to find the user's habitual forms of language expression, and can thus learn the user's sign language actions from their motion information and complete the sign language recognition function, giving the sign language translation system 10 more flexible control and a wider range of application. The sign language translation system 10 in this embodiment can therefore judge and recognize sign language through the main control chip, with low circuit manufacturing cost, a simple circuit structure and strong compatibility, greatly reducing the cost of sign language translation.
As an alternative implementation, fig. 6 shows another schematic structure of the sign language translation system 10 provided in this embodiment, and compared with the structure of the sign language translation system 10 in fig. 1, the sign language translation system 10 in fig. 6 further includes: a wireless transmitting module 501 and a recording module 502.
The wireless transmission module 501 is connected between the data identification module 102 and the server terminal 20, and is configured to wirelessly transmit the normalized action signal to the server terminal 20, so that the server terminal 20 displays the normalized action signal.
In this embodiment, the data identification module 102 outputs the standardized motion signal to the wireless sending module 501, where the wireless sending module 501 has a function of wireless data transmission, and the motion signal can retain high data integrity in the wireless sending module 501, so that the gesture language motion information of the user can be completely obtained through the motion signal output by the wireless sending module 501; the wireless sending module 501 is in wireless communication with the server terminal 20, the server terminal 20 has data storage and data display functions, and can display the action signal in real time through a display picture of the server terminal, so that a user can more intuitively acquire the hand action information of the user in the server terminal 20 and judge whether the action information of the user is in a safe state in real time according to the action signal; therefore, in the embodiment, the sign language recognition state of the user can be displayed in real time through the server terminal 20, so that the human-computer interaction performance of the sign language translation system 10 is improved; the sign language translation system 10 can realize wireless communication interaction with an external server terminal 20, so that the compatibility and flexibility of the sign language translation system 10 are improved, the sign language translation system 10 can realize accurate sign language identification in different industrial technical fields, and great convenience is brought to the use of users.
It should be noted that the server terminal 20 may be any of various devices, such as a mobile phone or a tablet computer, which is not limited here. Optionally, the wireless sending module 501 includes a wireless communication chip of model NRF24L01; this chip has low manufacturing and application costs and strong compatibility, greatly reducing the cost of transmitting action signals. The sign language translation system 10 in this embodiment is thus more capable, more compatible in communication, and provides a better user experience.
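The NRF24L01 carries at most 32 bytes per packet, so the normalized action signal must be serialized compactly before transmission. A hypothetical frame layout — six inertial axes plus five finger bend angles, each scaled to a signed 16-bit integer — is sketched below; the scale factor and field order are assumptions, not part of the patent.

```python
import struct

# Hypothetical frame: 3 gyro + 3 accel + 5 bend values as int16
# -> 22 bytes, under the NRF24L01's 32-byte payload limit.
FRAME_FMT = "<6h5h"

def pack_frame(gyro, accel, bends, scale=100):
    """Scale floats by `scale` and pack little-endian for transmission."""
    values = [int(round(v * scale)) for v in (*gyro, *accel, *bends)]
    return struct.pack(FRAME_FMT, *values)

def unpack_frame(payload, scale=100):
    """Inverse of pack_frame: recover the scaled float values."""
    return tuple(v / scale for v in struct.unpack(FRAME_FMT, payload))

frame = pack_frame(gyro=(1.5, -0.25, 0.0),
                   accel=(0.98, 0.01, -0.02),
                   bends=(10.0, 45.5, 90.0, 0.0, 30.25))
```

The receiving side (server terminal 20) would call `unpack_frame` on each received payload to recover the normalized signal for display.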
The recording module 502 is connected to the data recognition module 102, and the recording module 502 is configured to record the normalized action signal and the sign language result to a sign language database after the sign language result is obtained by the data recognition module 102.
When the data recognition module 102 completes machine-learning recognition of an action signal, the two communicating parties exchange information through the sign language result; at that moment the normalized data and the sign language result correspond one to one, and the user's sign language has been successfully recognized. The recording module 502 promptly records the action signal and the sign language result, storing these recognition results as historical information in the sign language database. The sign language database contains a large number of sign language action signals and the recognition results matched to them; its data serve as empirical guidance for the next recognition, and machine learning over this historical data captures each user's habitual way of expressing signs. The next time the sign language translation system 10 judges and recognizes the user's sign language, the current action signal can be analyzed against those expression habits, so the training result is more reasonable and intelligent, the recognition result reflects the user's true meaning more accurately, and the data error of the sign language result is reduced. Therefore, by collecting the historical motion information of the user's recognition results through the recording module 502, the sign language translation system 10 can gradually adapt to the user's hand actions and use the historical information to guide subsequent recognition, enhancing the autonomous learning and deep search capability
of the sign language translation system 10 in this embodiment, improving the accuracy and precision of its dynamic recognition of the user's sign language, and improving the flexibility and stability of its data processing.
Illustratively, the recording module 502 can be implemented by using a circuit structure in the conventional technology, for example, the recording module 502 is a ROM (Read Only Memory) or a RAM (Random Access Memory); the recording module 502 can store large-capacity data in real time, so that the machine learning performance and the autonomous learning capacity of the sign language translation system 10 for action signals are guaranteed, the sign language identification stabilizing process of a user can be guaranteed by reading historical data in the recording module 502 in real time, and guide information can be provided for the next sign language identification process through the historical data, so that the sign language translation system 10 has higher stability and compatibility, and higher use experience is brought to the user.
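The recording module's history log can be sketched as a small database table pairing each normalized action signal with its recognized sign; the schema, JSON serialization and in-memory SQLite backend below are all hypothetical choices for illustration.

```python
import json
import sqlite3

def open_sign_db(path=":memory:"):
    """Open (or create) the sign language history database."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS sign_history (
                      id INTEGER PRIMARY KEY AUTOINCREMENT,
                      action_signal TEXT NOT NULL,   -- JSON feature vector
                      sign_result   TEXT NOT NULL)""")
    return db

def record(db, action_signal, sign_result):
    """Store one (normalized signal, recognized sign) pair."""
    db.execute("INSERT INTO sign_history (action_signal, sign_result) "
               "VALUES (?, ?)", (json.dumps(action_signal), sign_result))
    db.commit()

def history(db):
    """Return all recorded pairs, oldest first."""
    rows = db.execute("SELECT action_signal, sign_result "
                      "FROM sign_history ORDER BY id").fetchall()
    return [(json.loads(sig), res) for sig, res in rows]

db = open_sign_db()
record(db, [0.9, 0.1, 0.2], "hello")
record(db, [0.1, 0.8, 0.9], "thanks")
past = history(db)  # history used to guide the next recognition pass
```

On an embedded target the same idea would more likely be a fixed-size record format in flash, but the pairing of signal and result is the essential point.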
Fig. 7 shows a structural schematic diagram of sign language translation gloves 60 provided in an embodiment of the present invention, please refer to fig. 7, the sign language translation gloves 60 include the sign language translation system 10 described above.
Referring to the embodiments of fig. 1 to 6, when the sign language translation glove 60 is worn on the user's hand, the sign language translation system 10 can acquire the hand's motion signals in real time, process them, convert them into a standard data format, and perform machine learning and deep search on them to obtain the unified pattern of the user's hand motion changes; information matching against this pattern then identifies the semantic content that matches the hand action, realizing real-time, accurate detection and recognition of the user's sign language. Applied in the sign language translation glove 60, the system allows the glove to autonomously recognize and translate the user's sign language actions and to convey the user's meaning with the help of historical experience, overcoming the problems of action-signal detection errors caused by external environmental interference and of low sign language recognition accuracy. The sign language translation glove 60 in this embodiment has a simple module structure, high recognition efficiency and precision for the user's sign language, suitability for various external environments, and high compatibility and stability, and is convenient to use. When worn, the glove can accurately convey language information according to the user's sign language actions, greatly facilitating communication between the two parties and reducing the cost of their sign language communication; it thus effectively overcomes the problems of sign language translation gloves in the conventional
technical scheme, namely large sign language translation errors, difficulty of general application, low reliability and stability, high application cost, and poor user experience.
Fig. 8 shows a specific flow of the sign language translation method provided in this embodiment, and as shown in fig. 8, the sign language translation method includes the following steps:
s701: collecting hand motion signals of a user.
The gesture language action information of the user can be monitored in real time through the hand action signal of the user, wherein the action signal contains the natural language thought of the user, and the content which the user actually wants to express through the hand action can be obtained through analysis and processing of the action signal.
S702: and carrying out normalization processing on the action signals so as to map the action signals to a preset range and extract a characteristic vector group.
After the action signals are normalized, different action signals can be compared and analyzed, dimensional differences between data types are avoided, and data recognition differences caused by the user's small motion errors are eliminated; under the same category of semantics, a given type of action is set to express a specific meaning, which guarantees the accuracy and precision of recognizing the user's hand actions and makes sign language expression more convenient for the user. The feature vector group contains the feature information of the action signals; from this feature information the user's empirical historical data can be obtained, and the variation pattern of the user's hand actions can be analyzed directly from the feature vector group, improving the translation precision of the sign language.
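A common way to map raw action signals into a preset range, as step S702 describes, is per-channel min-max normalization; the target range of [0, 1] and the calibration bounds below are assumptions, since the patent does not specify the exact mapping.

```python
def normalize(signal, lo, hi):
    """Map each channel of an action signal into [0, 1] using
    per-channel calibration bounds (lo, hi); values are clamped."""
    out = []
    for value, low, high in zip(signal, lo, hi):
        scaled = (value - low) / (high - low)
        out.append(min(1.0, max(0.0, scaled)))
    return out

# Hypothetical channels: angular velocity (deg/s), acceleration (g),
# finger bend angle (degrees), with hypothetical sensor ranges.
lo = [-250.0, -2.0, 0.0]
hi = [250.0, 2.0, 90.0]
normalized = normalize([0.0, 1.0, 45.0], lo, hi)  # [0.5, 0.75, 0.5]
```

After this step every channel is dimensionless and comparable, which is what allows the feature vectors from angular velocity, acceleration and bend angles to be combined in one feature vector group.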
S703: and identifying the feature vector group to obtain a sign language result.
The normalized action signals contain the user's language information. The variation features of the action signals and the one-to-one correspondence between hand actions and the user's intended meaning are summarized from the user's expression habits and action amplitude; the feature vectors then undergo machine learning and deep exploration, and the action signals are matched one by one against the pre-established template gesture data, so the sign language result can be obtained accurately and quickly, improving the recognition rate of the user's hand actions.
It should be noted that the sign language translation method in fig. 8 corresponds to the sign language translation system 10 in fig. 1 one to one, so that reference may be made to the embodiment in fig. 1 for a specific implementation of the sign language translation method in this embodiment, and details will not be described here again.
In the sign language translation method shown in fig. 8, the user's action information can be obtained from the hand's motion signals; the motion signals are autonomously learned and explored by machine learning and trained according to the variation features of the user's hand actions, yielding a reasonable matching rule between the user's hand actions and language information. The feature vectors in the feature vector group are matched against this rule, and if the match succeeds the corresponding sign language result is output, allowing the two communicating parties to exchange information. The sign language recognition method in this embodiment is therefore simple to operate, convenient and flexible, and realizes real-time, dynamic translation of the user's sign language. In addition, the method learns the action signals autonomously from the empirical values of the user's hand actions and judges the language information they contain from the user's hand motion patterns; it has a high degree of intelligence and eliminates the interference of external environmental changes on sign language recognition. The method can provide accurate sign language translation for different users, with low application cost and high practical value, effectively overcoming the problems of conventional sign language translation methods: susceptibility to external environmental changes, low sign language recognition accuracy, high application cost, low efficiency in processing the user's sign language data, and slow translation speed, which bring great inconvenience to users and make those methods difficult to apply generally.
To sum up, the sign language translation system of the utility model can eliminate the interference of external environmental factors and applies machine learning to the sign language actions to identify the user language information matched to the action signals. It is simple and convenient to operate, recognizes the user's sign language with very high accuracy, greatly reduces the cost of sign language recognition, is powerful in function, makes the user's sign language communication more convenient and reliable, and is compatible with and applicable to different users. The sign language translation system provided by the utility model will greatly promote the popularization of sign language in society, especially among deaf and mute people, and will bring great practical value.
Various embodiments are described herein for various devices, circuits, apparatuses, systems, and/or methods. Numerous specific details are set forth in order to provide a thorough understanding of the overall structure, function, manufacture, and use of the embodiments as described in the specification and illustrated in the accompanying drawings. However, it will be understood by those skilled in the art that the embodiments may be practiced without such specific details. In other instances, well-known operations, components and elements have not been described in detail so as not to obscure the embodiments in the description. It will be appreciated by those of ordinary skill in the art that the embodiments described and shown herein are non-limiting examples, and thus it can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments.
Reference throughout the specification to "various embodiments," "in an embodiment," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment," or the like, in places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with features, structures, or characteristics of one or more other embodiments without presuming that such combination is not an illogical or functional limitation. Any directional references (e.g., plus, minus, upper, lower, upward, downward, left, right, leftward, rightward, top, bottom, above, below, vertical, horizontal, clockwise, and counterclockwise) are used for identification purposes to aid the reader's understanding of the present disclosure, and do not create limitations, particularly as to the position, orientation, or use of the embodiments.
Although certain embodiments have been described above with a certain degree of particularity, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this disclosure. Joinder references (e.g., attached, coupled, connected, and the like) are to be construed broadly and may include intermediate members between a connection of elements and relative movement between elements. Thus, connection references do not necessarily imply that two elements are directly connected/coupled and in a fixed relationship to each other. The use of "for example" throughout this specification should be interpreted broadly and used to provide non-limiting examples of embodiments of the disclosure, and the disclosure is not limited to such examples. It is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative only and not limiting. Changes in detail or structure may be made without departing from the disclosure.
The above description is only exemplary of the present invention and should not be construed as limiting the present invention, and any modifications, equivalents and improvements made within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (5)

1. A sign language translation system, comprising:
the acquisition module is used for acquiring hand motion signals of a user; and
the data identification module is connected with the acquisition module and used for carrying out normalization processing on the action signals so as to map the action signals into a preset range, extracting a characteristic vector group and identifying the characteristic vector group to obtain a sign language result;
wherein, the collection module includes:
the six-axis sensor is connected with the data identification module and is used for acquiring the motion angular velocity and the motion acceleration of the hand of the user; and
the bending sensor is connected with the data identification module and is used for acquiring the bending angle of the finger of the user;
wherein the bending sensor comprises:
the pressure sensor is used for receiving the stress-induced deformation of the finger of the user so as to change its resistance value; and
the A/D converter is used for obtaining a voltage value in response to the change in the resistance value of the pressure sensor and acquiring the bending angle of the finger of the user according to the voltage value;
wherein the six-axis sensor comprises an MPU6050 sensor chip.
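Claim 1's bending sensor reads the flex-induced resistance change through an A/D converter. One common way to realize this chain (a voltage divider feeding an ADC, then a linear map from resistance to bend angle) might look like the sketch below; the reference voltage, ADC resolution, divider resistance and the straight/bent calibration resistances are assumptions for illustration, not values from the claim.

```python
V_REF = 3.3         # ADC reference voltage in volts (assumed)
ADC_MAX = 4095      # 12-bit A/D converter full-scale count (assumed)
R_FIXED = 10_000.0  # fixed divider resistor in ohms (assumed)

def adc_to_voltage(adc_count):
    """Convert a raw A/D count to the voltage at the divider tap."""
    return adc_count / ADC_MAX * V_REF

def voltage_to_resistance(v_out):
    """Invert the divider equation V_out = V_REF * R_flex / (R_flex + R_FIXED)
    to recover the flex sensor's resistance."""
    return R_FIXED * v_out / (V_REF - v_out)

def resistance_to_angle(r_flex, r_straight=25_000.0, r_bent=100_000.0):
    """Linearly map the sensor resistance to a 0-90 degree bend angle;
    the endpoint resistances are illustrative calibration values."""
    frac = (r_flex - r_straight) / (r_bent - r_straight)
    return max(0.0, min(90.0, frac * 90.0))
```

A real glove would calibrate `r_straight` and `r_bent` per finger, since flex sensors vary from part to part.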
2. The sign language translation system according to claim 1, further comprising:
and the filtering module is connected between the six-axis sensor and the data identification module and is used for performing Kalman filtering processing on the motion angular velocity and the motion acceleration.
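Claim 2's Kalman filtering of the motion angular velocity and motion acceleration could, in its simplest scalar form, be sketched as below. The process and measurement noise variances are assumed placeholder values, and a real glove would typically run one such filter per sensor axis (or a multivariate filter fusing gyro and accelerometer).

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter with a constant-value process model,
    suitable for smoothing one noisy sensor axis."""

    def __init__(self, q=0.01, r=0.5, x0=0.0, p0=1.0):
        self.q = q    # process noise variance (assumed)
        self.r = r    # measurement noise variance (assumed)
        self.x = x0   # state estimate
        self.p = p0   # estimate variance

    def update(self, z):
        # Predict: the state model is constant, so only uncertainty grows.
        self.p += self.q
        # Correct: blend prediction and measurement z via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Fed a noisy reading that hovers around a true value, the estimate converges toward that value while the estimate variance shrinks, which is exactly the smoothing role the filtering module plays between the six-axis sensor and the data identification module.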
3. The sign language translation system according to claim 1, further comprising:
and the wireless sending module is connected between the data identification module and the server terminal and is used for wirelessly sending the action signal after the normalization processing to the server terminal so as to enable the server terminal to display the action signal after the normalization processing.
4. The sign language translation system according to claim 1, further comprising:
and the recording module is connected with the data identification module and is used for recording the action signals after the normalization processing and the sign language results into a sign language database after the sign language results are obtained by the data identification module.
5. A sign language interpretation glove comprising the sign language interpretation system according to any one of claims 1 to 4.
CN201920633509.8U 2019-01-23 2019-05-05 Sign language translation system and sign language translation gloves Active CN210402266U (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910061948.0A CN109696963A (en) 2019-01-23 2019-01-23 Sign language interpretation system, glove for sign language translation and sign language interpretation method
CN2019100619480 2019-01-23

Publications (1)

Publication Number Publication Date
CN210402266U true CN210402266U (en) 2020-04-24

Family

ID=66234265

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201910061948.0A Pending CN109696963A (en) 2019-01-23 2019-01-23 Sign language interpretation system, glove for sign language translation and sign language interpretation method
CN201910369271.7A Pending CN110096153A (en) 2019-01-23 2019-05-05 A kind of sign language interpretation system, glove for sign language translation and sign language interpretation method
CN201920633509.8U Active CN210402266U (en) 2019-01-23 2019-05-05 Sign language translation system and sign language translation gloves

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201910061948.0A Pending CN109696963A (en) 2019-01-23 2019-01-23 Sign language interpretation system, glove for sign language translation and sign language interpretation method
CN201910369271.7A Pending CN110096153A (en) 2019-01-23 2019-05-05 A kind of sign language interpretation system, glove for sign language translation and sign language interpretation method

Country Status (1)

Country Link
CN (3) CN109696963A (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189590A (en) * 2019-06-18 2019-08-30 合肥工业大学 A kind of adaptively correcting formula sign language mutual translation system and method
CN111428802B (en) * 2020-03-31 2023-02-07 上海市计量测试技术研究院 Sign language translation method based on support vector machine
CN114120770A (en) * 2021-03-24 2022-03-01 张银合 Barrier-free communication method for hearing-impaired people
CN113407034B (en) * 2021-07-09 2023-05-26 呜啦啦(广州)科技有限公司 Sign language inter-translation method and system
WO2023033725A2 (en) * 2021-09-02 2023-03-09 National University Of Singapore Sensory glove system and method for sign gesture sentence recognition
CN115643485B (en) * 2021-11-25 2023-10-24 荣耀终端有限公司 Shooting method and electronic equipment
CN118607268A (en) * 2024-08-09 2024-09-06 长春职业技术学院 Visual interaction system for virtual simulation training of electric power operation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10061389B2 (en) * 2014-06-03 2018-08-28 Beijing TransBorder Information Technology Co., Ltd. Gesture recognition system and gesture recognition method
US10685219B2 (en) * 2016-01-27 2020-06-16 University Industry Foundation, Yonsei University Wonju Campus Sign language recognition system and method
CN107678550A (en) * 2017-10-17 2018-02-09 哈尔滨理工大学 A kind of sign language gesture recognition system based on data glove

Also Published As

Publication number Publication date
CN110096153A (en) 2019-08-06
CN109696963A (en) 2019-04-30

Similar Documents

Publication Publication Date Title
CN210402266U (en) Sign language translation system and sign language translation gloves
Shukor et al. A new data glove approach for Malaysian sign language detection
Bukhari et al. American sign language translation through sensory glove; signspeak
CN104780217B (en) Detect method, system and the client of user job efficiency
CN110262664B (en) Intelligent interactive glove with cognitive ability
CN205721628U (en) A kind of quick three-dimensional dynamic hand gesture recognition system and gesture data collecting device
CN108196668B (en) Portable gesture recognition system and method
CN107092882B (en) Behavior recognition system based on sub-action perception and working method thereof
CN111722713A (en) Multi-mode fused gesture keyboard input method, device, system and storage medium
CN105068657B (en) The recognition methods of gesture and device
CN109885166A (en) Intelligent sign language translation gloves and its gesture identification method
CN113029153B (en) Multi-scene PDR positioning method based on intelligent mobile phone multi-sensor fusion and SVM classification
Chen et al. A fusion recognition method based on multifeature hidden markov model for dynamic hand gesture
CN110236560A (en) Six axis attitude detecting methods of intelligent wearable device, system
Dhamanskar et al. Human computer interaction using hand gestures and voice
Patel et al. Hand Gesture based Home Control Device using IoT.
Naosekpam et al. Machine learning in 3D space gesture recognition
Bulugu Real-time Complex Hand Gestures Recognition Based on Multi-Dimensional Features.
Feng et al. Design and implementation of gesture recognition system based on flex sensors
Cheng et al. Finger-worn device based hand gesture recognition using long short-term memory
Mahajan et al. Digital pen for handwritten digit and gesture recognition using trajectory recognition algorithm based on triaxial accelerometer
Khan et al. Electromyography based Gesture Recognition: An Implementation of Hand Gesture Analysis Using Sensors
CN220730766U (en) Sign language acquisition and recognition intelligent glove and conversion system
Mali et al. Hand gestures recognition using inertial sensors through deep learning
Wang et al. A Gesture Recognition System Based On MEMS Accelerometer

Legal Events

Date Code Title Description
GR01 Patent grant