CN103279734A - Novel intelligent sign language translation and man-machine interaction system and use method thereof - Google Patents

Novel intelligent sign language translation and man-machine interaction system and use method thereof Download PDF

Info

Publication number
CN103279734A
Authority
CN
China
Prior art keywords
semantic
electromyographic signal
sign language
gesture
man
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013101012976A
Other languages
Chinese (zh)
Inventor
朱向阳
张定国
李成璋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN2013101012976A priority Critical patent/CN103279734A/en
Publication of CN103279734A publication Critical patent/CN103279734A/en
Pending legal-status Critical Current

Classifications

  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a novel intelligent sign language translation and man-machine interaction system comprising a gesture recognition system and a semantic vocalization system. The gesture recognition system is attached to a user with a language disorder and receives the raw electromyographic (EMG) signal; it is connected to the semantic vocalization system, to which it transmits basic semantic information; and the semantic vocalization system emits speech consistent with the semantics of the sign language. The gesture recognition system comprises an EMG signal acquisition device, a wireless transceiver, a filter, a feature extraction unit and a classifier. The semantic vocalization system comprises a semantic analyzer, a voice controller and a portable speaker. By adopting pattern recognition based on EMG signals, the system improves gesture recognition accuracy, and by coupling it with a voice system it makes communication between users with language disorders and healthy people more efficient. After a simple adjustment, the system and method can be used by language-disordered users of different body shapes. The EMG signal acquisition device and the semantic vocalization system are worn on a wrist band and a waist band respectively, so the user's normal life is not affected.

Description

Novel intelligent sign language translation and man-machine interaction system and method of use thereof
Technical field
The present invention relates to a translation and man-machine interaction system, and in particular to an assistive system that, based on electromyographic (EMG) signal pattern recognition, accurately recognizes the sign language of users with language disorders and vocalizes it online.
Background technology
People with language disorders include not only the deaf and the mute, but also those whose verbal communication is impaired for other reasons. Taking hearing impairment as an example, China has nearly 20.75 million hearing-impaired people, including the hard of hearing and those with age-related hearing loss. Sign language expresses meanings or words through gestures whose changing shapes model images or syllables; it is the language with which the language-disordered community communicates by hand, and it is their main communicative instrument. Statistically, in countries such as the U.S. the prevalence of sign language among citizens is around 5%, whereas in China — in Shanghai, for example — fewer than 10,000 people can use sign language, a prevalence below 0.1%. This low prevalence of sign language greatly hinders effective communication between hearing-impaired people and the able-bodied.
Man-machine interaction based on gesture recognition has applications in many fields. One of them uses a computer to provide an effective, accurate mechanism for translating sign language into text or speech, bridging the two heterogeneous language modes of natural language and sign language, so that communication between language-disordered and able-bodied people becomes more convenient and fast and language-disordered people can better integrate into society. At present, the main technique for realizing sign language recognition is pattern recognition using kinematic parameters — such as joint angles and angular accelerations — extracted by a data glove as features; but wearing a data glove restricts other uses of the wearer's hands and inconveniences normal life (see, for example, "A dynamic sign language recognition method based on data gloves", Chinese patent publication No. CN102193633A). Camera-based gesture recognition, which matches gestures by extracting image information, has also made some technical progress and found practical application, but it places high demands on the shooting scene and the capture equipment, so popularizing this technique also remains difficult (see, for example, "Gesture recognition system and method", Chinese patent publication No. CN102467657A).
A review of the existing literature shows that current sign language recognition is mostly based on the two techniques above, which lack feasibility in practical applications and offer only limited support for normal communication by language-disordered people. The present invention differs in that it collects EMG signals and extracts feature information from them, with the sensors worn on the forearm so that no hand-mounted hardware is needed. This not only avoids the inconvenience that a data glove brings to daily self-care, but also eliminates the interference that environmental factors cause in feature extraction. At the same time, the accompanying intelligent semantic vocalization system effectively enables communication between language-disordered and healthy people.
Summary of the invention
The object of the present invention is to remedy the defects of the sign language recognition techniques and applications in the prior art described above by providing a voice assistance system based on EMG pattern recognition, together with a method of using it. The EMG-based pattern recognition improves gesture recognition accuracy while remaining natural and easy to use in practice. The accompanying semantic vocalization system ensures real-time, effective communication between language-disordered and able-bodied people.
According to one aspect of the present invention, a novel intelligent sign language translation and man-machine interaction system is provided, comprising a gesture recognition system and a semantic vocalization system connected to each other. The gesture recognition system is attached to the user's forearm to receive EMG signals, accurately recognizes and judges the user's gestures, and generates basic semantic information. The semantic vocalization system processes and analyzes the basic semantic information to generate the sign language semantics, and emits speech corresponding to those semantics.
Preferably, the gesture recognition system comprises, connected in sequence, an EMG signal acquisition device, a wireless transmitter, a wireless receiver, a filter, a feature extraction unit and a classifier. The EMG acquisition device is attached to the user's forearm muscle groups to receive the raw EMG signal corresponding to each gesture and forwards it through the wireless transmitter and receiver to the filter; the filter passes the denoised EMG signal to the feature extraction unit; the feature extraction unit passes the signal's feature information to the classifier; and the classifier generates basic semantic information from the feature information and transmits it to the semantic vocalization system.
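The patent does not specify which features the feature extraction unit computes. A common choice in the surface-EMG literature is the Hudgins time-domain set (mean absolute value, waveform length, zero crossings, slope-sign changes); the sketch below is an assumption in that spirit, not the patented method, and the function name and threshold are illustrative:

```python
import numpy as np

def td_features(window, threshold=0.01):
    """Hudgins-style time-domain features for one sEMG window (1-D array)."""
    mav = np.mean(np.abs(window))           # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))    # waveform length
    # zero crossings, counted only when the jump exceeds a noise threshold
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(window[:-1] - window[1:]) > threshold))
    d = np.diff(window)                     # slope-sign changes, same guard
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 (np.maximum(np.abs(d[:-1]), np.abs(d[1:])) > threshold))
    return np.array([mav, wl, zc, ssc], dtype=float)
```

A feature vector like this, computed per window on each electrode channel and concatenated, is the kind of input such a classifier would consume.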
Preferably, the semantic vocalization system comprises, connected in sequence, a semantic analyzer, a voice controller, an amplifier and a portable speaker. The semantic analyzer receives basic semantic information from the gesture recognition system and passes coherent semantic information to the voice controller; the voice controller sends electrical pulses carrying the voice information to the amplifier; the amplifier passes the amplified pulses to the portable speaker; and the portable speaker produces speech consistent with the sign language semantics of the semantic information.
Preferably, the EMG signal acquisition device comprises a left-arm EMG acquisition device and a right-arm EMG acquisition device.
According to another aspect of the present invention, there is a specific correspondence between each class of gesture and the EMG signals produced by the forearm muscle groups: different gestures are realized by the contraction and relaxation of individual forearm muscles, and each muscle group simultaneously produces an EMG signal with distinct characteristics. A method of using the novel intelligent sign language translation and man-machine interaction system of any of claims 1 to 4 is also provided, comprising the following steps:
Step 1: in the learning-mode stage, the gesture recognition system analyzes the EMG signal produced by each gesture and generates an EMG signal template;
Step 2: in the use-mode stage, the gesture recognition system matches received EMG signals against the templates and determines which gesture the user has made.
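The two steps above can be sketched as a minimal nearest-template classifier. This is an assumption for illustration — the patent does not disclose its matching rule, and the class and method names are mine:

```python
import numpy as np

class TemplateClassifier:
    """Learning mode: average each gesture's feature vectors into a template.
    Use mode: label a new feature vector by its nearest template."""

    def __init__(self):
        self.templates = {}

    def learn(self, gesture, feature_vectors):
        # one template per gesture, averaged over repeated demonstrations
        self.templates[gesture] = np.mean(feature_vectors, axis=0)

    def classify(self, feature_vector):
        # Euclidean nearest-template match
        return min(self.templates,
                   key=lambda g: np.linalg.norm(self.templates[g] - feature_vector))
```

After `learn` has been called once per gesture in learning mode, `classify` can run online on every incoming feature vector in use mode.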
Compared with the prior art, the beneficial effects of the invention are:
(1) EMG-based pattern recognition judges the user's gestures and expressions more accurately. Compared with other gesture recognition techniques, it is convenient and easy to use in signal input and feature extraction, and it does not hinder the normal daily life of language-disordered users.
(2) The semantic vocalization system included in the invention can voice the semantics expressed by gestures in real time, helping language-disordered users communicate effectively with able-bodied people who do not know sign language and realizing efficient communication between natural language and sign language. In particular, the semantic analyzer's processing of the basic semantic information makes the portable speaker's pronunciation more coherent and fluent, matching the listening habits of able-bodied people.
Description of drawings
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is a schematic block diagram of the system;
Fig. 2 is a schematic diagram of the system as worn;
Fig. 3 is a schematic diagram of the gesture recognition system;
Fig. 4 is a schematic diagram of the semantic vocalization system.
In the figures:
201 denotes the right-arm EMG acquisition device and wireless transmitter;
202 denotes the left-arm EMG acquisition device and wireless transmitter;
203 denotes the gesture recognition system;
204 denotes the semantic vocalization system.
Embodiment
The present invention is described in detail below with reference to a specific embodiment. The following embodiment will help those skilled in the art understand the invention further, but does not limit it in any form. It should be pointed out that those skilled in the art can make various variations and improvements without departing from the concept of the invention, all of which fall within the scope of protection of the invention.
Embodiment
This embodiment produces corresponding speech in real time through the semantic vocalization system after a language-disordered user makes sign language or fingerspelling gestures. As shown in Fig. 1, the intelligent sign language translation and man-machine interaction system provided by the invention comprises a gesture recognition system and a semantic vocalization system. The gesture recognition system is connected to the user's forearm to pick up the muscle electrical signals and accurately recognizes the user's gestures; it transmits basic semantic information to the semantic vocalization system, which processes and analyzes that information and emits speech corresponding to the sign language semantics.
The gesture recognition system comprises an EMG signal acquisition device, a wireless transmitter, a wireless receiver, a filter, a feature extraction unit and a classifier. The EMG acquisition device is attached to the user's forearm muscle groups and receives the EMG signal corresponding to each gesture; the wireless transmitter, connected to the acquisition device, sends the raw EMG signal; the wireless receiver passes the raw signal to the filter; the filter passes the denoised signal to the feature extraction unit; the feature extraction unit passes the signal's feature information to the classifier; and the classifier transmits basic semantic information to the semantic vocalization system.
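The patent leaves the filter unspecified. Surface EMG is usually conditioned with a roughly 20–450 Hz band-pass plus a mains-frequency notch; the sketch below assumes that convention — the cut-offs, the 50 Hz mains frequency and the sampling rate are illustrative, not taken from the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def clean_emg(raw, fs=1000.0):
    """Zero-phase 20-450 Hz band-pass, then a 50 Hz mains notch."""
    b, a = butter(4, [20.0 / (fs / 2), 450.0 / (fs / 2)], btype='band')
    x = filtfilt(b, a, raw)                 # forward-backward: no phase lag
    bn, an = iirnotch(50.0, Q=30.0, fs=fs)  # narrow notch at mains frequency
    return filtfilt(bn, an, x)
```

The denoised output would then be segmented into windows for the feature extraction unit.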
In the "learning mode" stage, the gesture recognition system analyzes the raw EMG signals and generates the EMG signal template corresponding to each gesture. In the "use mode" stage, it analyzes the raw EMG signals online and determines the corresponding gesture.
The semantic vocalization system comprises a semantic analyzer, a voice controller, an amplifier and a portable speaker. The semantic analyzer receives basic semantic information from the gesture recognition system and passes coherent semantic information to the voice controller; the voice controller sends electrical pulses carrying the semantic information to the amplifier; the amplifier passes the amplified pulses to the portable speaker; and the portable speaker produces speech consistent with the sign language semantics.
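What "making the semantics coherent" means computationally is not disclosed. One plausible minimal reading — offered purely as an assumption, with invented names — is that the analyzer fuses consecutive fingerspelled letters into one spelled word while keeping sign words separate, then hands the resulting utterance string to the voice controller:

```python
def to_utterance(units):
    """units: ('letter', ch) or ('word', text) basic-semantic pairs, in order.
    Consecutive letters fuse into one spelled name; words stay separate."""
    out, spelled = [], []
    for kind, value in units:
        if kind == 'letter':
            spelled.append(value)
        else:
            if spelled:                      # close any pending spelled name
                out.append(''.join(spelled))
                spelled = []
            out.append(value)
    if spelled:
        out.append(''.join(spelled))
    return ' '.join(out)
```

For example, `to_utterance([('letter', 'a'), ('letter', 'b'), ('word', 'one day')])` yields the single utterance "ab one day", which a text-to-speech back end could then speak through the amplifier and speaker.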
The working process of the system described in this embodiment comprises the following steps:
1) As shown in Fig. 2, the user puts on the gesture recognition system and the semantic vocalization system. The two EMG acquisition devices are worn on the middle of each forearm, with the electrode patches on the inner side of the straps held against the forearm muscle groups to pick up the EMG signals. The semantic vocalization system is worn at the waist, where it receives the basic semantic information transmitted by the gesture recognition system and performs the semantic vocalization function.
2) "Learning mode" is selected and the assistance system is given a simple adjustment. Sign language comprises fingerspelling and signs. During the adjustment, the user makes in turn the 30 gestures expressing the letters of the Chinese phonetic alphabet, and the EMG acquisition device records the EMG signal corresponding to each of the 30 pinyin letters. The user then makes the common signs in turn, and the acquisition device records the EMG signal corresponding to each gesture.
3) The EMG acquisition device transmits the raw EMG signals to the pattern recognition processor, which processes and analyzes them, generates the EMG signal template corresponding to each gesture, and records the templates.
4) "Use mode" is selected, and the user makes any defined fingerspelling gesture according to what he or she intends to express. For example, when the user makes the gestures expressing "a" and "b", the EMG acquisition device picks up the forearm EMG signals corresponding to "a" and "b". The recognition processor processes the raw signals, determines the matching EMG templates and the speech the gestures express, and transmits the basic semantic information for "a" and "b" to the semantic analyzer.
5) After receiving the basic semantic signals for "a" and "b", the semantic analyzer processes them; the voice controller then sends electrical pulses containing the semantic information of "a" and "b" to the amplifier, and the portable speaker receives the amplified pulses and produces the speech for "a" and "b". Fingerspelling is generally used for spelling names, so the pinyin speech can simply be output directly.
6) "Use mode" is selected, and the user makes any defined sign according to what he or she intends to express. For example, when the user makes the gesture expressing "one day", the EMG acquisition device picks up the corresponding forearm EMG signal. The recognition processor processes the raw signal, determines the matching EMG template and the speech the gesture expresses, and transmits the basic semantic information for "one day" to the semantic analyzer.
7) After receiving the basic semantic signal for "one day", the semantic analyzer processes it; the voice controller then sends electrical pulses containing the semantic information of "one day" to the amplifier, and the portable speaker receives the amplified pulses and produces the speech for "one day".
With only the simple adjustment performed in "learning mode", this voice assistance system can recognize many gestures quickly and effectively and pronounce their semantics in real time. It can voice pinyin-based fingerspelling as well as the more common signs, and it is convenient, easy to use, real-time and effective.
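Tying the learning-mode and use-mode steps together, here is a self-contained toy run with synthetic stand-ins for the EMG windows and a single mean-absolute-value feature. Everything here — the amplitudes, window length, feature choice and gesture set — is invented for illustration and is not the patent's actual signal processing:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def mav(window):
    """Single illustrative feature: mean absolute value."""
    return float(np.mean(np.abs(window)))

def fake_emg(amplitude, n=200):
    """Synthetic stand-in for one raw EMG window."""
    return amplitude * rng.standard_normal(n)

# learning mode: ten demonstrations per gesture -> one scalar template each
templates = {gesture: float(np.mean([mav(fake_emg(amp)) for _ in range(10)]))
             for gesture, amp in [('a', 0.5), ('one day', 2.0)]}

# use mode: the nearest template wins and its label would be vocalized
def recognize(window):
    feature = mav(window)
    return min(templates, key=lambda g: abs(templates[g] - feature))
```

In this toy setup, a strongly-activated window is recognized as "one day" and a weakly-activated one as "a"; a real system would use multi-channel windows and richer features, but the learn-then-match shape is the same.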
The specific embodiments of the invention have been described above. It should be understood that the invention is not limited to these particular embodiments; those skilled in the art can make various variations or modifications within the scope of the claims, without affecting the substance of the invention.

Claims (5)

1. A novel intelligent sign language translation and man-machine interaction system, characterized in that it comprises a gesture recognition system and a semantic vocalization system connected to each other; the gesture recognition system is attached to the user's forearm to receive electromyographic signals, accurately recognizes and judges the user's gestures, and generates basic semantic information; and the semantic vocalization system processes and analyzes the basic semantic information to generate the sign language semantics and emits speech corresponding to those semantics.
2. The novel intelligent sign language translation and man-machine interaction system according to claim 1, characterized in that the gesture recognition system comprises, connected in sequence, an EMG signal acquisition device, a wireless transmitter, a wireless receiver, a filter, a feature extraction unit and a classifier; the EMG acquisition device is attached to the user's forearm muscle groups to receive the raw EMG signal corresponding to each gesture and forwards it through the wireless transmitter and receiver to the filter; the filter passes the denoised EMG signal to the feature extraction unit; the feature extraction unit passes the signal's feature information to the classifier; and the classifier generates basic semantic information from the feature information and transmits it to the semantic vocalization system.
3. The novel intelligent sign language translation and man-machine interaction system according to claim 1, characterized in that the semantic vocalization system comprises, connected in sequence, a semantic analyzer, a voice controller, an amplifier and a portable speaker; the semantic analyzer receives basic semantic information from the gesture recognition system and passes coherent semantic information to the voice controller; the voice controller sends electrical pulses carrying the voice information to the amplifier; the amplifier passes the amplified pulses to the portable speaker; and the portable speaker produces speech consistent with the sign language semantics of the semantic information.
4. The novel intelligent sign language translation and man-machine interaction system according to claim 2, characterized in that the EMG signal acquisition device comprises a left-arm EMG acquisition device and a right-arm EMG acquisition device.
5. A method of using the novel intelligent sign language translation and man-machine interaction system of any of claims 1 to 4, characterized in that it comprises the steps of:
Step 1: in the learning-mode stage, the gesture recognition system analyzes the EMG signal produced by each gesture and generates an EMG signal template;
Step 2: in the use-mode stage, the gesture recognition system matches received EMG signals against the templates and determines which gesture the user has made.
CN2013101012976A 2013-03-26 2013-03-26 Novel intelligent sign language translation and man-machine interaction system and use method thereof Pending CN103279734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013101012976A CN103279734A (en) 2013-03-26 2013-03-26 Novel intelligent sign language translation and man-machine interaction system and use method thereof


Publications (1)

Publication Number Publication Date
CN103279734A (en) 2013-09-04

Family

ID=49062249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013101012976A Pending CN103279734A (en) 2013-03-26 2013-03-26 Novel intelligent sign language translation and man-machine interaction system and use method thereof

Country Status (1)

Country Link
CN (1) CN103279734A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101156811A (en) * 2007-11-21 2008-04-09 哈尔滨工业大学 Multiple freedom degrees hand-prosthesis voice controlling apparatus based on blue tooth wireless communication
US20120188158A1 (en) * 2008-06-26 2012-07-26 Microsoft Corporation Wearable electromyography-based human-computer interface

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Xu (张旭): "Human Action Recognition and Interaction Based on Surface Electromyographic Signals" (基于表面肌电信号的人体动作识别与交互), Wanfang Dissertation Database (《万方学位论文》), 29 December 2010 (2010-12-29) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104134060B (en) * 2014-08-03 2018-01-05 上海威璞电子科技有限公司 Sign language interpreter and display sonification system based on electromyographic signal and motion sensor
CN104134060A (en) * 2014-08-03 2014-11-05 上海威璞电子科技有限公司 Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors
CN105522986A (en) * 2014-10-15 2016-04-27 现代摩比斯株式会社 Apparatus and method for controlling a vehicle using electromyographic signal
CN105522986B (en) * 2014-10-15 2018-03-06 现代摩比斯株式会社 Utilize the controller of vehicle and method of electromyographic signal
CN104599553A (en) * 2014-12-29 2015-05-06 闽南师范大学 Barcode recognition-based sign language teaching system and method
CN104599553B (en) * 2014-12-29 2017-01-25 闽南师范大学 Barcode recognition-based sign language teaching system and method
CN104599554B (en) * 2014-12-29 2017-01-25 闽南师范大学 Two-dimensional code recognition-based sign language teaching system and method
TWI602164B (en) * 2015-09-03 2017-10-11 國立臺北科技大學 An electromyography sensor and inertia sensor-based posture recognition device for real-time sign language translation system
CN105243312A (en) * 2015-11-13 2016-01-13 上海傲意信息科技有限公司 Password system and encryption and decryption method
CN105243312B (en) * 2015-11-13 2019-03-01 上海傲意信息科技有限公司 A kind of cryptographic system and encipher-decipher method
CN105654037B (en) * 2015-12-21 2019-05-21 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and characteristic image
CN105654037A (en) * 2015-12-21 2016-06-08 浙江大学 Myoelectric signal gesture recognition method based on depth learning and feature images
CN108877344A (en) * 2018-07-20 2018-11-23 荆明明 A kind of Multifunctional English learning system based on augmented reality
CN111475206A (en) * 2019-01-04 2020-07-31 优奈柯恩(北京)科技有限公司 Method and apparatus for waking up wearable device
CN111475206B (en) * 2019-01-04 2023-04-11 优奈柯恩(北京)科技有限公司 Method and apparatus for waking up wearable device
CN111026268A (en) * 2019-12-02 2020-04-17 清华大学 Gesture recognition device and method
CN111783056A (en) * 2020-07-06 2020-10-16 诺百爱(杭州)科技有限责任公司 Method and device for identifying user identity based on electromyographic signal and electronic equipment
CN111783056B (en) * 2020-07-06 2024-05-14 诺百爱(杭州)科技有限责任公司 Method and device for identifying user identity based on electromyographic signals and electronic equipment
CN113111156A (en) * 2021-03-15 2021-07-13 天津理工大学 System for intelligent hearing-impaired people and healthy people to perform man-machine interaction and working method thereof
CN113111156B (en) * 2021-03-15 2022-05-13 天津理工大学 System for intelligent hearing-impaired people and healthy people to perform man-machine interaction and working method thereof

Similar Documents

Publication Publication Date Title
CN103279734A (en) Novel intelligent sign language translation and man-machine interaction system and use method thereof
CN104134060B (en) Sign language interpreter and display sonification system based on electromyographic signal and motion sensor
Savur et al. American Sign Language Recognition system by using surface EMG signal
Li et al. Automatic recognition of sign language subwords based on portable accelerometer and EMG sensors
US10878818B2 (en) Methods and apparatus for silent speech interface
CN109271901A (en) A kind of sign Language Recognition Method based on Multi-source Information Fusion
CN107945625A (en) A kind of pronunciation of English test and evaluation system
Madhuri et al. Vision-based sign language translation device
CN104983511A (en) Voice-helping intelligent glasses system aiming at totally-blind visual handicapped
CN103853071B (en) Man-machine facial expression interactive system based on bio signal
CN109508687A (en) Man-machine interaction control method, device, storage medium and smart machine
CN103116576A (en) Voice and gesture interactive translation device and control method thereof
CN105919591A (en) Surface myoelectrical signal based sign language recognition vocal system and method
CN206003392U (en) A kind of deaf-mute's social activity gloves
CN110443113A (en) A kind of virtual reality Writing method, system and storage medium
CN106097835A (en) A kind of deaf mute exchanges the method for intelligent assistance system and exchange
CN203149569U (en) Voice and gesture interactive translation device
CN110286774A (en) A kind of sign Language Recognition Method based on Wrist-sport sensor
CN110442233A (en) A kind of augmented reality key mouse system based on gesture interaction
CN103426342B (en) A kind of voice communication method and voice communicating device
US20170024380A1 (en) System and method for the translation of sign languages into synthetic voices
CN107785017A (en) A kind of interactive system based on Sign Language Recognition
CN104980599A (en) Sign language-voice call method and sign language-voice call system
CN206210144U (en) Gesture language-voice converts cap
CN108877409A (en) The deaf-mute's auxiliary tool and its implementation shown based on gesture identification and VR

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20130904