CN104134060A - Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors - Google Patents
- Publication number
- CN104134060A, CN104134060B (application CN201410376205.XA)
- Authority
- CN
- China
- Prior art keywords
- sign language
- user
- subsystem
- arm
- semantic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a sign language interpreting, displaying, and sound-producing system based on electromyographic (EMG) signals and motion sensors. The system comprises a gesture recognition subsystem and a semantic display and sound-production subsystem. The gesture recognition subsystem, worn on the user's left and right arms, comprises multi-axis motion sensors and a multi-channel muscle-current acquisition and analysis module; it captures the user's raw surface EMG signals and arm motion information, and distinguishes gestures by jointly processing the EMG signals and the motion-sensor data. The display and sound-production subsystem comprises a semantic analyzer, a voice control module, a loudspeaker, a display module, a storage module, a communication module, and the like. By applying pattern recognition to the combined two-arm EMG and motion-sensor data, the system raises gesture recognition accuracy; combined with the semantic display and sound-production subsystem, it translates common sign language into speech or text, improving the efficiency of direct communication between people with speech impairments and hearing people.
Description
Technical field
The present invention relates to a system for real-time translation of conventional sign language into natural-language text or speech. Its working principle is to detect multi-channel surface muscle-current signals from the user's left and right arms together with the movement and rotation attitude of the arms, to segment and recognize sign language actions from these signals, to adjust the semantics in context through a display and sound-production system, and finally to present the result as text or speech.
Background technology
The population with hearing and speech impairments is very large. Sign language is this group's main way of communicating with the outside world, but because very few hearing people have had sign language training, communication between people with speech impairments and the general population is very difficult. For real-time sign language recognition and translation serving the hearing- and speech-impaired, gesture recognition remains the bottleneck, and no mature technical solution is yet available on the market.
The basis of sign language recognition and translation is continuous gesture recognition. Machine-vision-based gesture recognition is the current mainstream direction, but it is constrained by background lighting, occlusion, limited mobility, system power consumption, and recognition rate, which make vision-based sign language recognition and real-time translation difficult. The other approach is the data glove, which detects motion quantities such as joint flexion angle and acceleration and recognizes gestures well; however, its complex electronic and mechanical construction brings high cost and excessive bulk, and once the gloves are worn the user's hands are impeded in other tasks, greatly reducing practicality.
A search of the published literature finds Chinese patent application 201310101297.6, "Novel intelligent sign language translation and human-computer interaction system and use method thereof," which proposes detecting sign language from arm EMG signals. With a device worn on the arm, that scheme can monitor arm EMG in real time, but by its principle it can only detect the relative posture of the hand; it cannot detect the three-dimensional motion attitude of the arm and hand. Since sign language actions inseparably combine both kinds of attitude, that method faces great difficulty in sign language translation and has a substantial technical deficiency.
The present invention is worn on both of the user's arms. It detects finger, palm, and wrist posture by continuously tracking multi-channel surface muscle currents on the arms, and tracks the arms' three-dimensional motion attitude with multi-axis motion sensors. Combining the two kinds of sensor data, it continuously extracts gesture feature parameters, recognizes gestures with a classifier, and thus translates sign language into natural language. Because the device is worn on the arms, it does not interfere with other uses of the hands and gives a good user experience; at the same time, gesture recognition based on surface EMG and motion sensors offers low power consumption, high mobility, unrestricted application scenarios, low cost, and a high recognition rate, making it suitable for wide adoption. The system is equipped with a semantic display and sound-production subsystem providing context analysis, error correction, and prediction, which effectively improves the efficiency of communication between people with speech impairments and the general population.
Summary of the invention
The technical scheme proposed by the present invention improves on the deficiencies of existing sign language translation technologies in practical use. It tracks the activity of the main arm muscles with surface EMG to detect finger, palm, and wrist posture, and detects and tracks the arms' three-dimensional motion attitude with multi-axis motion sensors. Combining these two key parameters of gesture motion yields accurate gesture feature extraction, so the classifier distinguishes gestures more accurately. The semantic display and sound-production subsystem of the invention provides context semantic analysis, such as error correction and prediction, giving a better translation from gesture to natural language.
The present invention is mainly divided into a gesture recognition subsystem and a semantic display and sound-production subsystem. The gesture recognition subsystem consists of master and slave devices worn on the user's two arms. Multi-channel surface muscle-current detection electrodes contact the skin of the user's arms directly to pick up the EMG of both arms, while multi-axis motion sensors detect the motion attitude of the arms; both kinds of data are indispensable to the invention.
The gesture recognition subsystem of the present invention comprises left-arm and right-arm devices connected by wire or wirelessly. Each device includes multi-channel surface muscle-current pickup electrodes, signal filtering, signal amplification, analog-to-digital conversion, multi-axis motion sensors, a wireless communication module, data and instruction storage, a data and signal processing and control unit, an active-segment detection program, a feature extraction program, a classifier program, and power supply circuitry. The subsystem gathers and jointly processes the EMG data and motion-sensor data from the two devices, extracts the feature parameters of each gesture with the feature extraction program, classifies the gesture with the classifier to generate basic semantic information, and sends that information by wire or wirelessly to the semantic display and sound-production subsystem for further processing.
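The feature-extraction step above can be sketched as follows. The patent does not name an exact feature set; the features used here (mean absolute value, RMS, waveform length, zero crossings) are common time-domain sEMG features chosen purely for illustration, and the function names are hypothetical.

```python
import numpy as np

def emg_features(window):
    """Common time-domain sEMG features for one analysis window of one channel
    (illustrative choice; the patent does not specify the feature set)."""
    w = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(w))                  # mean absolute value
    rms = np.sqrt(np.mean(w ** 2))            # root mean square
    wl = np.sum(np.abs(np.diff(w)))           # waveform length
    zc = np.sum(np.diff(np.signbit(w)) != 0)  # zero-crossing count
    return np.array([mav, rms, wl, zc])

def feature_vector(emg_windows, imu_sample):
    """Concatenate per-channel sEMG features with an IMU reading
    (accel/gyro/mag) into one gesture feature vector for the classifier."""
    feats = [emg_features(ch) for ch in emg_windows]
    return np.concatenate(feats + [np.asarray(imu_sample, dtype=float)])
```

A classifier (nearest template, SVM, etc.) would then map each such vector to a gesture number.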
The semantic display and sound-production subsystem of the present invention includes basic components such as a communication module, processor, storage module, display module, loudspeaker module, user input control module, and power supply module. This subsystem may be replaced directly by a terminal with a communication module, such as a smartphone or computer, running a matching semantic display and sound-production program; for cost control it may also be realized with dedicated hardware and software. It processes in real time the basic semantic information sent by the gesture recognition subsystem, performs context semantic analysis in several stages such as error correction, prediction, and gap filling to map it to more natural language, and as required displays text or synthesizes and plays natural speech.
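The context error-correction and prediction stage could, at its simplest, be a bigram re-ranker over sign glosses. The patent does not specify the language model, so the class below is only a hedged sketch under that assumption:

```python
from collections import defaultdict

class ContextCorrector:
    """Toy context model: bigram counts over sign-gloss sequences,
    used to re-rank ambiguous classifier outputs (illustrative only)."""
    def __init__(self):
        self.bigrams = defaultdict(int)

    def train(self, sentences):
        # Count adjacent gloss pairs in a small training corpus.
        for s in sentences:
            for a, b in zip(s, s[1:]):
                self.bigrams[(a, b)] += 1

    def pick(self, prev_gloss, candidates):
        """Among candidate glosses for the current gesture, prefer the one
        most often seen after prev_gloss in the corpus."""
        return max(candidates, key=lambda c: self.bigrams[(prev_gloss, c)])
```

A production system would use a richer model, but the re-ranking role in the pipeline is the same.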
The present invention includes a training mode and a use mode; training the user improves the sign language recognition rate. In the training step, a companion training program guides the user through the sign language items provided by the system, and during training the user's unique sign language feature-value templates are extracted and later applied in use mode as auxiliary templates to improve recognition of sign language actions. In use mode, the system relies primarily on the embedded recognition templates, with the user feature templates as an aid, to distinguish the sign language actions the user performs.
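One plausible reading of "embedded templates primary, user templates auxiliary" is a weighted nearest-template classifier. The blending weight below is an assumption for illustration; the patent does not specify how the two template sets are combined.

```python
import numpy as np

def classify(feature_vec, embedded_templates, user_templates, user_weight=0.3):
    """Nearest-template classification: embedded templates are primary;
    when a user template exists for a gesture, blend its distance in with
    an assumed auxiliary weight (user_weight is a guess, not from the patent)."""
    best_id, best_dist = None, float("inf")
    for gesture_id, tmpl in embedded_templates.items():
        d = np.linalg.norm(feature_vec - tmpl)
        if gesture_id in user_templates:
            d_user = np.linalg.norm(feature_vec - user_templates[gesture_id])
            d = (1 - user_weight) * d + user_weight * d_user
        if d < best_dist:
            best_id, best_dist = gesture_id, d
    return best_id
```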
Compared with other prior art and proposals, the present invention offers a comprehensive solution. First, it is worn on the user's arms and does not interfere with everyday use of the hands. Second, it accurately identifies the posture of the fingers, palms, and wrists of both hands from arm surface muscle-current signals. Third, it uses multi-axis motion-sensor data to identify the arms' three-dimensional motion attitude. Combining the second and third points, the two key parameters of sign language actions, makes sign language feature extraction more accurate. The semantic display and sound-production subsystem may be an existing smart device such as a smartphone or computer running semantic analysis, display, and sound-production software, or a dedicated hardware and software design to reduce cost; with context analysis, including error correction and prediction, it assembles clearer natural language.
Brief description of the drawings
Fig. 1: Block diagram of the sign language interpreting and display/sound-production system.
Fig. 2: Schematic of how the sign language interpreting and display/sound-production system is worn.
Fig. 3: Schematic of the gesture recognition subsystem.
Fig. 4: Schematic of the semantic display and sound-production subsystem.
Fig. 5: Schematic flow of the training and use modes.
Embodiment
The core of the present invention is a new natural gesture recognition technique, based on surface muscle-current signals from both of the user's arms together with motion-sensor data, applied to the real-time translation of sign language. Context semantic analysis then assembles the optimal natural-language combination, and the meaning of the sign language is finally presented as displayed text or played speech.
In a concrete embodiment, the invention takes the form of two stretchable armbands worn on the user's left and right arms to perform gesture recognition. The semantic display and sound-production subsystem may be integrated into one of the two armbands, or may be an independent external device, for example a smartphone or computer running a software package. On the inside of each armband, against the skin, multi-channel surface muscle-current sensors are placed according to the positions of the main forearm muscles that control the fingers and wrist; the sensors may, without limitation, use single-differential or double-differential pickup. The picked-up signal passes through analog filtering, amplification, and analog-to-digital conversion circuits and is quantized into a continuous digital stream that a signal processor or central processing unit can handle. Besides EMG, each armband also carries multi-axis motion sensors, for example a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer. The left and right armbands may be divided into master and slave: the slave synchronizes its raw or pre-processed EMG and motion-sensor data to the master, where gesture feature values are further extracted to obtain a sign language number; that number is then sent to the semantic display and sound-production device for further processing and is finally presented as text or speech.
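The per-channel conditioning chain (offset removal, range limiting, A/D conversion) might be modeled in simplified form as below. The 12-bit, 3.3 V ADC figures are illustrative assumptions, not values from the patent, and real hardware would do the filtering in analog circuitry before the ADC.

```python
import numpy as np

def condition_emg(raw, adc_bits=12, vref=3.3):
    """Sketch of one channel's conditioning: remove the DC offset (a crude
    stand-in for analog high-pass filtering), shift into the ADC's 0..vref
    window, clip, and quantize to integer ADC codes."""
    x = np.asarray(raw, dtype=float)
    x = x - np.mean(x)                  # baseline / DC removal
    x = np.clip(x + vref / 2, 0, vref)  # center at mid-scale, limit to range
    codes = np.round(x / vref * (2 ** adc_bits - 1)).astype(int)
    return codes
```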
Fig. 1 is the block diagram of the whole system of the present invention. A sign language action (101) is quantized by the gesture recognition system (102) worn on the left and right arms into multi-channel EMG signals (102a) and motion-sensor data (102b). The EMG and motion-sensor data pass through the gesture recognition algorithm's sequence of operations: gesture active-segment detection (102c), feature extraction (102d), and classifier recognition (102e), yielding a gesture number (102f). The gesture number is sent to the semantic display and sound-production system (103) for further processing, including but not limited to semantic analysis and optimization (103a), which performs context prediction and error correction, and language and voice control (103b), which performs speech synthesis and volume adjustment; finally the meaning of the sign language is played through the loudspeaker (103c) or presented in text display (103d) mode.
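Active-segment detection (102c) is commonly implemented by thresholding short-window signal energy. The sketch below uses RMS energy with an assumed window size and threshold; the patent does not specify the detection method, so treat this as one plausible realization.

```python
import numpy as np

def detect_active_segments(emg, fs=1000, win_ms=50, thresh=0.1):
    """Mark (start, end) sample ranges where short-window RMS energy of the
    EMG exceeds a threshold, i.e. where a gesture is likely in progress."""
    win = int(fs * win_ms / 1000)
    x = np.asarray(emg, dtype=float)
    n = len(x) // win
    segments, start = [], None
    for i in range(n):
        rms = np.sqrt(np.mean(x[i * win:(i + 1) * win] ** 2))
        if rms > thresh and start is None:
            start = i * win                      # segment begins
        elif rms <= thresh and start is not None:
            segments.append((start, i * win))    # segment ends
            start = None
    if start is not None:
        segments.append((start, n * win))
    return segments
```

Feature extraction (102d) would then run only on the returned ranges.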
Fig. 2 shows where the pair of gesture recognition armbands (201, 202) of the system is worn on the user's left and right arms. Recognized sign language is presented as text or pictures by the display module (203a) of the semantic display and sound-production system (203), or played as speech by the loudspeaker (203b). Note that the semantic display and sound-production system (203) can be a standalone component or can be integrated into an armband (201 or 202).
Fig. 3 is the block diagram of the gesture recognition subsystem. In the example embodiment, the subsystem is divided into a slave device (301a) worn on the left arm and a master device (301b) worn on the right arm. Master and slave each include a motion sensor module (302); multi-channel surface muscle-current differential pickup electrodes (303); a signal filter circuit (304) and a signal amplification circuit (305) for pre-processing the EMG; an ADC module (306) for digitizing the analog EMG; a power supply module (309) including a battery and related parts; a storage module (310) for programs and data; and a feedback module (311), built from LEDs or a vibration motor, to indicate whether a gesture was recognized. The processor (307a) of the slave (301a) can pre-process the left arm's motion-sensor data and EMG, or, after synchronizing with the master, recognize left-hand gestures in parallel; the pre-processed data or recognized left-hand gesture information (312) is delivered to the master (301b) on the right arm through the communication module (308). The processor (307b) of the master (301b) processes the right arm's motion-sensor data (302) and its EMG (303), after filtering (304), amplification (305), and digitization (306), together with the data (312) sent from the slave (301a), to recognize two-handed sign language. The finally recognized sign language (313) is sent through the communication module (308) to the semantic display and sound-production system for processing.
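The slave-to-master data transfer (312) implies some framed wire format. The layout below (a millisecond timestamp, 8 EMG channel codes, 9 IMU floats, little-endian) is purely illustrative; the patent specifies no frame format, channel count, or field sizes.

```python
import struct
from dataclasses import dataclass

@dataclass
class SlaveFrame:
    """Hypothetical frame the left-arm slave (301a) might send to the
    master (301b). All field choices here are assumptions."""
    timestamp_ms: int
    emg: list   # e.g. 8 channels of ADC codes
    imu: list   # 9 floats: accel xyz, gyro xyz, mag xyz

    def pack(self) -> bytes:
        # '<' = little-endian; I = u32 timestamp; H = u16 per EMG code; f = float
        return struct.pack(f"<I{len(self.emg)}H{len(self.imu)}f",
                           self.timestamp_ms, *self.emg, *self.imu)

    @classmethod
    def unpack(cls, data: bytes, n_emg=8, n_imu=9):
        vals = struct.unpack(f"<I{n_emg}H{n_imu}f", data)
        return cls(vals[0], list(vals[1:1 + n_emg]), list(vals[1 + n_emg:]))
```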
Fig. 4 is the block diagram of the semantic display and sound-production subsystem. In the example, the subsystem comprises a control input module (401) for user instructions; a display output module (402) for system information and sign language text; a processor (403) for system control and data processing such as semantic context optimization; a communication module (404) for receiving the recognized sign language information (313) from the gesture recognition subsystem; a voice driver module (405) for controls such as volume; a loudspeaker (406) for playing the natural speech corresponding to the sign language; a storage module (408) for the sign language semantic analysis program run by the processor and for user data; and a power supply module (407), including a battery and related circuits, to power the system.
Fig. 5 is the flow diagram of the system's training and use modes. In the example, in training mode, guided by the sign language action instruction program (503), the user's sign language actions are quantized into EMG data (501) and motion-sensor data (502); the recognition algorithm (504) processes these data against the sign language feature templates (505), extracts feature values, and generates user feature templates (506) from qualifying sign language actions. In use mode, the user's sign language actions are quantized in the same way into EMG data (501) and motion-sensor data (502); the recognition algorithm (504) processes them against both the sign language feature templates (505) and the user feature templates (506), extracts feature values, distinguishes the sign language, and finally presents qualifying sign language as text or speech (507).
Finally, it should be noted that the above embodiments only illustrate possible implementations of the present invention and are not restrictive. Depending on the practical application scenario, the product may vary in appearance, algorithm implementation, and so on. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical scheme of the invention may be modified or equivalently substituted without departing from its spirit and scope.
Claims (10)
1. A sign language interpreting, display, and sound-production system based on muscle-current signals and motion sensors, characterized in that: a gesture recognition subsystem is connected by wire or wirelessly to a gesture-to-semantics display and sound-production system; the gesture recognition subsystem is further divided into master and slave devices worn respectively on the user's two arms, with multi-channel non-invasive differential electrodes in direct contact with the skin of the user's arms to receive raw surface muscle-current signals; combined with the multi-axis motion-sensor data in each of the master and slave devices, the user's sign language actions are recognized and translated into basic semantics, which are sent to the gesture-to-semantics display and sound-production system; after context analysis and organization of the basic semantics, the corresponding speech is played or text is displayed.
2. The sign language interpreting, display, and sound-production system based on muscle-current signals and motion sensors as claimed in claim 1, characterized in that: the gesture recognition subsystem comprises master and slave devices worn respectively on the user's left and right arms, configured with, but not limited to, multi-channel surface muscle-current acquisition electrodes in direct contact with the skin of the user's arms; filtering and amplification circuits; analog-to-digital conversion circuits; multi-axis motion sensors; signal processing and control circuits running the active-segment detection, feature extraction, and classifier programs; data and command communication modules; data and program storage circuits; and battery and power management circuits.
3. The gesture recognition subsystem as claimed in claim 2, characterized in that: the surface muscle-current acquisition electrodes may use double-differential or single-differential pickup, with multiple electrodes arranged on the left and right arms to monitor multi-channel surface muscle currents in real time; the EMG signals are processed by the signal processing and computation circuits to extract motion-state feature values of the fingers, palms, and wrists of both hands.
4. The gesture recognition subsystem as claimed in claims 2 and 3, characterized in that: the multi-axis motion sensors comprise a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer, or a combination of one or two of them, for detecting the three-dimensional movement and rotation of the left and right arms so as to detect arm attitude; combined with the motion-state feature values of the fingers, palms, and wrists extracted as in claim 3, the classification and detection of sign language is realized.
5. The gesture recognition subsystem as claimed in claims 2 to 4, characterized in that: the two devices are ring-shaped with a stretchable mechanical structure, adapting to arms of different thickness and fitting closely to the user's arm so as to obtain a good surface muscle-current signal.
6. The gesture recognition subsystem as claimed in claims 2 to 5, characterized in that: an adaptive algorithm built into the system allows the user to adjust the device's wearing position and angle on the arm within a certain range while keeping gesture recognition stable and consistent.
7. The gesture recognition subsystem as claimed in claims 2 to 6, characterized in that: the gesture recognition subsystem can recognize both one-handed and two-handed sign language, and can combine a number of consecutive sign language actions into a sentence.
8. The sign language interpreting, display, and sound-production system based on muscle-current signals and motion sensors as claimed in claim 1, characterized in that: the gesture-to-semantics display and sound-production subsystem is configured with, but not limited to, signal processing and control circuits, a semantic analyzer, voice control, a loudspeaker, a communication module, data and program storage circuits, battery and power management circuits, and a display module.
9. The display and sound-production subsystem as claimed in claim 8, characterized in that: besides being integrated with the gesture recognition subsystem or being a matching standalone accessory, it may also be an independent terminal with a communication module, such as a smart watch, smartphone, tablet, computer, or smart TV, which pairs with the gesture recognition subsystem and runs a real-time sign language handling program to perform semantic translation and display text or play speech.
10. A method of using the sign language interpreting, display, and sound-production system based on muscle-current signals and motion sensors as claimed in claim 1, characterized in that: in training mode, guided by the training program, the user learns and practices the recognizable sign language items provided by the system, and during training the user's sign language characteristic-parameter recognition templates are generated; in use mode, the generated characteristic-parameter recognition templates aid recognition of the user's sign language actions to improve the recognition rate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410376205.XA CN104134060B (en) | 2014-08-03 | 2014-08-03 | Sign language interpreter and display sonification system based on electromyographic signal and motion sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104134060A true CN104134060A (en) | 2014-11-05 |
CN104134060B CN104134060B (en) | 2018-01-05 |
Family
ID=51806734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410376205.XA Active CN104134060B (en) | 2014-08-03 | 2014-08-03 | Sign language interpreter and display sonification system based on electromyographic signal and motion sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104134060B (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105138133A (en) * | 2015-09-14 | 2015-12-09 | 李玮琛 | Biological signal gesture recognition device and method |
CN105893959A (en) * | 2016-03-30 | 2016-08-24 | 北京奇艺世纪科技有限公司 | Gesture identifying method and device |
CN105919591A (en) * | 2016-04-12 | 2016-09-07 | 东北大学 | Surface myoelectrical signal based sign language recognition vocal system and method |
CN106155277A (en) * | 2015-03-26 | 2016-11-23 | 联想(北京)有限公司 | Electronic equipment and information processing method |
CN106200988A (en) * | 2016-08-30 | 2016-12-07 | 上海交通大学 | A kind of wearable hand language recognition device and sign language interpretation method |
CN106648068A (en) * | 2016-11-11 | 2017-05-10 | 哈尔滨工业大学深圳研究生院 | Method for recognizing three-dimensional dynamic gesture by two hands |
TWI602164B (en) * | 2015-09-03 | 2017-10-11 | 國立臺北科技大學 | An electromyography sensor and inertia sensor-based posture recognition device for real-time sign language translation system |
CN107302548A (en) * | 2016-04-14 | 2017-10-27 | 中国电信股份有限公司 | Method, terminal device, server and the system of aid musical instruments playing practice |
CN107480697A (en) * | 2017-07-12 | 2017-12-15 | 中国科学院计算技术研究所 | A kind of myoelectricity gesture identification method and system |
CN107492287A (en) * | 2017-10-16 | 2017-12-19 | 重庆师范大学 | Mute speaks instrument |
CN108491077A (en) * | 2018-03-19 | 2018-09-04 | 浙江大学 | A kind of surface electromyogram signal gesture identification method for convolutional neural networks of being divided and ruled based on multithread |
CN108564105A (en) * | 2018-02-28 | 2018-09-21 | 浙江工业大学 | Online gesture recognition method for myoelectric individual difference problem |
CN108766434A (en) * | 2018-05-11 | 2018-11-06 | 东北大学 | A kind of Sign Language Recognition translation system and method |
CN108829252A (en) * | 2018-06-14 | 2018-11-16 | 吉林大学 | Gesture input computer character device and method based on electromyography signal |
CN109154864A (en) * | 2016-05-31 | 2019-01-04 | 索尼公司 | Program, information processing system, information processing method and read/write device equipment |
CN109192007A (en) * | 2018-09-21 | 2019-01-11 | 杭州电子科技大学 | A kind of AR sign Language Recognition Method and teaching method based on myoelectricity motion perception |
CN109508088A (en) * | 2018-10-23 | 2019-03-22 | 诺百爱(杭州)科技有限责任公司 | One kind is based on electromyography signal Sign Language Recognition translation armlet and sign Language Recognition Method |
CN109656358A (en) * | 2018-11-23 | 2019-04-19 | 南京麦丝特精密仪器有限公司 | A kind of multidimensional sign Language Recognition Method |
CN111462594A (en) * | 2020-04-23 | 2020-07-28 | 重庆电力高等专科学校 | Wearable sign language translation device based on natural spelling |
CN111475092A (en) * | 2020-03-23 | 2020-07-31 | 深圳市多亲科技有限公司 | Remote control operation method and device of smart phone and mobile terminal |
CN112349182A (en) * | 2020-11-10 | 2021-02-09 | 中国人民解放军海军航空大学 | Deaf-mute conversation auxiliary system |
EP3717991A4 (en) * | 2017-11-30 | 2021-04-28 | Facebook Technologies, Inc. | Methods and apparatus for simultaneous detection of discrete and continuous gestures |
CN114115531A (en) * | 2021-11-11 | 2022-03-01 | 合肥工业大学 | End-to-end sign language identification method based on attention mechanism |
CN114442798A (en) * | 2020-11-06 | 2022-05-06 | 复旦大学附属妇产科医院 | Portable control system and control method |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109009718A (en) * | 2018-08-10 | 2018-12-18 | 中国科学院合肥物质科学研究院 | A method of based on electrical impedance technology combination gesture control wheelchair |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101609618A (en) * | 2008-12-23 | 2009-12-23 | 浙江工业大学 | Real-time hand language AC system based on space encoding |
CN102193633A (en) * | 2011-05-25 | 2011-09-21 | 广州畅途软件有限公司 | dynamic sign language recognition method for data glove |
US20130158998A1 (en) * | 2001-07-12 | 2013-06-20 | At&T Intellectual Property Ii, L.P. | Systems and Methods for Extracting Meaning from Multimodal Inputs Using Finite-State Devices |
CN103279734A (en) * | 2013-03-26 | 2013-09-04 | 上海交通大学 | Novel intelligent sign language translation and man-machine interaction system and use method thereof |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106155277A (en) * | 2015-03-26 | 2016-11-23 | 联想(北京)有限公司 | Electronic equipment and information processing method |
CN106155277B (en) * | 2015-03-26 | 2019-03-08 | 联想(北京)有限公司 | Electronic equipment and information processing method |
TWI602164B (en) * | 2015-09-03 | 2017-10-11 | 國立臺北科技大學 | An electromyography sensor and inertia sensor-based posture recognition device for real-time sign language translation system |
CN105138133A (en) * | 2015-09-14 | 2015-12-09 | 李玮琛 | Biological signal gesture recognition device and method |
CN105893959A (en) * | 2016-03-30 | 2016-08-24 | 北京奇艺世纪科技有限公司 | Gesture recognition method and device |
CN105893959B (en) * | 2016-03-30 | 2019-04-12 | 北京奇艺世纪科技有限公司 | Gesture recognition method and device |
CN105919591A (en) * | 2016-04-12 | 2016-09-07 | 东北大学 | Sign language recognition and vocalization system and method based on surface electromyographic signals |
CN107302548A (en) * | 2016-04-14 | 2017-10-27 | 中国电信股份有限公司 | Method, terminal device, server and system for assisted musical instrument playing practice |
CN109154864A (en) * | 2016-05-31 | 2019-01-04 | 索尼公司 | Program, information processing system, information processing method and read/write device equipment |
CN106200988A (en) * | 2016-08-30 | 2016-12-07 | 上海交通大学 | Wearable sign language recognition device and sign language translation method |
CN106648068A (en) * | 2016-11-11 | 2017-05-10 | 哈尔滨工业大学深圳研究生院 | Method for recognizing three-dimensional dynamic two-handed gestures |
CN107480697A (en) * | 2017-07-12 | 2017-12-15 | 中国科学院计算技术研究所 | Myoelectric gesture recognition method and system |
CN107480697B (en) * | 2017-07-12 | 2020-04-03 | 中国科学院计算技术研究所 | Myoelectric gesture recognition method and system |
CN107492287A (en) * | 2017-10-16 | 2017-12-19 | 重庆师范大学 | Speaking device for the mute |
EP3717991A4 (en) * | 2017-11-30 | 2021-04-28 | Facebook Technologies, Inc. | Methods and apparatus for simultaneous detection of discrete and continuous gestures |
EP3951564A1 (en) * | 2017-11-30 | 2022-02-09 | Facebook Technologies, LLC | Methods and apparatus for simultaneous detection of discrete and continuous gestures |
CN108564105A (en) * | 2018-02-28 | 2018-09-21 | 浙江工业大学 | Online gesture recognition method addressing individual differences in myoelectric signals |
CN108491077A (en) * | 2018-03-19 | 2018-09-04 | 浙江大学 | Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network |
CN108491077B (en) * | 2018-03-19 | 2020-06-16 | 浙江大学 | Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network |
CN108766434A (en) * | 2018-05-11 | 2018-11-06 | 东北大学 | Sign language recognition and translation system and method |
CN108766434B (en) * | 2018-05-11 | 2022-01-04 | 东北大学 | Sign language recognition and translation system and method |
CN108829252A (en) * | 2018-06-14 | 2018-11-16 | 吉林大学 | Gesture input computer character device and method based on electromyography signal |
CN109192007A (en) * | 2018-09-21 | 2019-01-11 | 杭州电子科技大学 | AR sign language recognition and teaching method based on myoelectric motion perception |
CN109508088A (en) * | 2018-10-23 | 2019-03-22 | 诺百爱(杭州)科技有限责任公司 | Sign language recognition and translation armband based on electromyographic signals, and sign language recognition method |
CN109656358A (en) * | 2018-11-23 | 2019-04-19 | 南京麦丝特精密仪器有限公司 | Multidimensional sign language recognition method |
CN111475092A (en) * | 2020-03-23 | 2020-07-31 | 深圳市多亲科技有限公司 | Remote control operation method and device of smart phone and mobile terminal |
CN111462594A (en) * | 2020-04-23 | 2020-07-28 | 重庆电力高等专科学校 | Wearable sign language translation device based on natural spelling |
CN114442798A (en) * | 2020-11-06 | 2022-05-06 | 复旦大学附属妇产科医院 | Portable control system and control method |
CN114442798B (en) * | 2020-11-06 | 2024-05-07 | 复旦大学附属妇产科医院 | Portable control system and control method |
CN112349182A (en) * | 2020-11-10 | 2021-02-09 | 中国人民解放军海军航空大学 | Deaf-mute conversation auxiliary system |
CN114115531A (en) * | 2021-11-11 | 2022-03-01 | 合肥工业大学 | End-to-end sign language identification method based on attention mechanism |
Also Published As
Publication number | Publication date |
---|---|
CN104134060B (en) | 2018-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104134060A (en) | Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors | |
Lu et al. | A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices | |
Li et al. | Automatic recognition of sign language subwords based on portable accelerometer and EMG sensors | |
CN103777752A (en) | Gesture recognition device based on arm muscle current detection and motion sensor | |
WO2017092225A1 (en) | Emg-based wearable text input system and method | |
CN104536558B (en) | Smart ring and method for controlling smart devices | |
CN103794106B (en) | Instrument playing assisted teaching system | |
CN203300127U (en) | Children teaching and monitoring robot | |
CN107678550A (en) | Sign language gesture recognition system based on a data glove | |
CN102789313A (en) | User interaction system and method | |
CN104777775A (en) | Two-wheeled self-balancing robot control method based on Kinect device | |
CN103513770A (en) | Man-machine interface equipment and man-machine interaction method based on three-axis gyroscope | |
CN104007844A (en) | Electronic instrument and wearable type input device for same | |
CN110443113A (en) | Virtual reality writing method, system and storage medium | |
CN103279734A (en) | Novel intelligent sign language translation and man-machine interaction system and use method thereof | |
CN204155477U (en) | Electronic oral-cavity teaching-aid toothbrush with gesture recognition function | |
CN106200988A (en) | Wearable sign language recognition device and sign language translation method | |
CN104825256B (en) | Prosthetic limb system with sensory feedback function | |
CN110442233A (en) | Augmented reality keyboard-and-mouse system based on gesture interaction | |
CN109498375B (en) | Human motion intention recognition control device and control method | |
Zhang et al. | Multimodal fusion framework based on statistical attention and contrastive attention for sign language recognition | |
CN105530581A (en) | Smart wearable device based on voice recognition and control method thereof | |
CN113849068A (en) | Method and system for multimodal gesture information fusion, understanding and interaction | |
Li et al. | Hand gesture recognition and real-time game control based on a wearable band with 6-axis sensors | |
CN109542220A (en) | Sign language glove, system and implementation method with calibration and learning functions | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant ||
TR01 | Transfer of patent right |
Effective date of registration: 2018-01-08
Address after: 3rd floor, Building 1, No. 400 Fanchun Road, China (Shanghai) Free Trade Zone, Shanghai 201203
Patentee after: Shanghai Ao Yi Information Technology Co., Ltd.
Address before: Room 1905, No. 2043 Longming Road, Minhang District, Shanghai 201101
Patentee before: Shanghai Weipu Electron Technology Co., Ltd.