CN109214347A - Cross-lingual sign language translation method, device and mobile device - Google Patents

Cross-lingual sign language translation method, device and mobile device

Info

Publication number
CN109214347A
CN109214347A (application CN201811092150.4A)
Authority
CN
China
Prior art keywords
sign language
language
text
sign
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811092150.4A
Other languages
Chinese (zh)
Inventor
蔡颖鹏
陈希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Time Robot Technology Co Ltd
Original Assignee
Beijing Time Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Time Robot Technology Co Ltd
Priority to CN201811092150.4A
Publication of CN109214347A
Pending legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/40: Processing or translation of natural language
    • G06F40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation

Abstract

This application discloses a cross-lingual sign language translation method, device and mobile device. The method comprises: obtaining a first sign language video of a person who uses a first language; recognizing the sign language video with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video; and translating the first-language text with a pre-trained text translation model to obtain a second-language text. Because the second-language text matches the first sign language video, the sign language gestures made by the person who uses the first language are converted into text in the second language, so that a person who uses one language can understand the sign language expression made by a person who uses another language, thereby facilitating communication between deaf-mute people.

Description

Cross-lingual sign language translation method, device and mobile device
Technical field
This application relates to the field of artificial intelligence, and more specifically to a cross-lingual sign language translation method, device and mobile device.
Background technique
Sign language is an important means of communication for deaf-mute people. However, because of regional and cultural differences, the sign languages of different countries or regions differ from one another, much as natural languages do, and these differences create obstacles to communication between deaf-mute people of different countries or regions.
Summary of the invention
In view of this, the present application provides a cross-lingual sign language translation method, device and mobile device for translating between the sign languages of regions with different languages, so as to facilitate communication between deaf-mute people.
To achieve the above goal, the proposed solutions are as follows:
A cross-lingual sign language translation method, comprising the steps of:
obtaining a first sign language video of a person who uses a first language;
recognizing the sign language video with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video;
translating the first-language text with a pre-trained text translation model to obtain a second-language text.
Optionally, the method further comprises the steps of:
obtaining a sign language video set of people who use the first language and a sign language text sequence that matches the sign language video set;
estimating the hand poses and upper-limb poses in the sign language video set to obtain a sign language action sequence;
performing model training with the sign language text sequence and the sign language action sequence to obtain the sign-language-to-text translation model.
Optionally, the hand pose includes the angle of each joint of the hand;
the upper-limb pose includes the angle of each joint of the upper limb.
Optionally, the sign-language-to-text translation model is a bidirectional model.
Optionally, the method further comprises the step of:
translating the second-language text with a pre-trained text-to-sign-language translation model to obtain a second sign language video corresponding to the second-language text.
A cross-lingual sign language translation device, comprising:
a first acquisition module for obtaining a first sign language video of a person who uses a first language;
a first translation module for recognizing the sign language video with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video;
a second translation module for translating the first-language text with a pre-trained text translation model to obtain a second-language text.
Optionally, the device further comprises:
a second acquisition module for obtaining a sign language video set of people who use the first language and a sign language text sequence that matches the sign language video set;
a pose estimation module for estimating the hand poses and upper-limb poses in the sign language video set to obtain a sign language action sequence;
a model training module for performing model training with the sign language text sequence and the sign language action sequence to obtain the sign-language-to-text translation model.
Optionally, the hand pose includes the angle of each joint of the hand;
the upper-limb pose includes the angle of each joint of the upper limb.
Optionally, the sign-language-to-text translation model is a bidirectional model.
Optionally, the device further comprises:
a third translation module for translating the second-language text with a pre-trained text-to-sign-language translation model to obtain a second sign language video corresponding to the second-language text.
A mobile device, characterized in that it is provided with any one of the sign language translation devices described above.
It can be seen from the above technical solutions that this application discloses a cross-lingual sign language translation method, device and mobile device, which specifically obtain a first sign language video of a person who uses a first language; recognize the sign language video with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video; and translate the first-language text with a pre-trained text translation model to obtain a second-language text. Because the second-language text matches the first sign language video, the sign language gestures made by the person who uses the first language are converted into text in the second language, so that a person who uses one language can understand the sign language expression made by a person who uses another language, thereby facilitating communication between deaf-mute people.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a cross-lingual sign language translation method provided by an embodiment of the present application;
Fig. 2 is a flowchart of a model training method provided by an embodiment of the present application;
Fig. 3 is a flowchart of another cross-lingual sign language translation method provided by an embodiment of the present application;
Fig. 4 is a block diagram of a cross-lingual sign language translation device provided by an embodiment of the present application;
Fig. 5 is a block diagram of another cross-lingual sign language translation device provided by an embodiment of the present application;
Fig. 6 is a block diagram of yet another cross-lingual sign language translation device provided by an embodiment of the present application;
Fig. 7 is a block diagram of a mobile device provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
Embodiment one
Fig. 1 is a flowchart of a cross-lingual sign language translation method provided by an embodiment of the present application.
As shown in Fig. 1, the sign language translation method provided in this embodiment is applied to electronic devices such as mobile devices and computers, and can provide users with an on-site or remote means of sign language communication. The sign language translation method specifically comprises the following steps:
S1. Obtain a first sign language video of a person who uses a first language.
That is, the first sign language video is obtained through a shooting apparatus that is connected to a computer or that belongs to the mobile device itself. The first sign language video captures sign language gestures made by a person who uses the first language, for example sign language gestures made by a person who uses Chinese, and is obtained by recording those gestures.
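For illustration only (this sketch is not part of the claimed subject matter), such a capture step might be implemented on a device with a camera as follows, assuming OpenCV is available; the device index, clip length and output path are hypothetical choices.

```python
import cv2

def record_sign_video(output_path="first_sign.avi", camera_index=0, seconds=5, fps=20):
    """Record a short clip from the device camera to serve as the first sign language video."""
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"XVID"), fps, (width, height))
    for _ in range(int(seconds * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    cap.release()
    writer.release()
    return output_path
```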
S2. Translate the first sign language video with the sign-language-to-text translation model.
The translation yields a first-language text that matches the first sign language video, that is, the intended meaning of the signer in the first sign language video, expressed outwardly in the form of the first-language text.
Here, the sign-language-to-text translation model used to translate the first sign language video is the result of training a deep neural network on sign language samples. As shown in Fig. 2, the specific training process comprises the following steps:
S21. Obtain a sign language video set and a sign language text sequence of people who use the first language.
The sign language video set is obtained by recording the sign language gestures of people who use the first language, and the sentences in the sign language text sequence correspond one-to-one to the sign language gestures in the sign language video set.
S22. Estimate the hand poses and upper-limb poses in the sign language video set.
The estimation is performed with a pre-trained pose estimation model and yields the sign language action sequence contained in the sign language video set. The sign language action sequence includes the hand poses and upper-limb poses of the people making the corresponding sign language gestures, where the hand pose refers to the angle of each joint of the person's hand and the upper-limb pose refers to the angle of each joint of the person's upper limb.
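As an illustrative sketch (not specified by the patent), a joint angle can be computed from the keypoints output by a pose estimation model; the keypoint layout, the example coordinates and the use of 2D points below are hypothetical.

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle in degrees at `joint`, formed by the segments joint->parent and joint->child.

    Each argument is an (x, y) or (x, y, z) keypoint estimated by a pose model.
    """
    v1 = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Example: elbow angle from shoulder, elbow and wrist keypoints (made-up coordinates).
elbow_angle = joint_angle(parent=(0.32, 0.41), joint=(0.45, 0.52), child=(0.58, 0.47))
```

Concatenating the angles of all hand and upper-limb joints for each frame would then give one element of the sign language action sequence.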
S23. Train the sign-language-to-text translation model with the sign language text sequence and the sign language action sequence.
After the corresponding sign language text sequence and sign language action sequence are obtained, the deep neural network is trained with the sign language action sequence as its input and the sign language text sequence as its output, thereby obtaining the sign-language-to-text translation model.
The deep neural network here may be a recurrent neural network or another type of neural network.
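As a non-authoritative sketch of such a recurrent network, the following assumes that each frame has already been reduced to a fixed-length vector of joint angles and that each target sentence has been tokenised; the encoder-decoder GRU, the dimensions and the training step are illustrative choices, not details given by the patent.

```python
import torch
import torch.nn as nn

class SignToTextModel(nn.Module):
    """Encoder-decoder over joint-angle sequences; all sizes are illustrative."""

    def __init__(self, angle_dim=60, hidden=256, vocab_size=5000):
        super().__init__()
        self.encoder = nn.GRU(angle_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, angles, target_tokens):
        # angles: (batch, frames, angle_dim); target_tokens: (batch, text_len)
        _, state = self.encoder(angles)            # summarise the sign language action sequence
        dec_in = self.embed(target_tokens)         # teacher forcing with the sign language text sequence
        dec_out, _ = self.decoder(dec_in, state)
        return self.out(dec_out)                   # (batch, text_len, vocab_size)

# One hypothetical training step on a batch of (action sequence, text sequence) pairs.
model = SignToTextModel()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
angles = torch.randn(8, 120, 60)                   # 8 clips, 120 frames, 60 joint angles each
tokens = torch.randint(0, 5000, (8, 15))           # matching tokenised sentences
logits = model(angles, tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 5000), tokens[:, 1:].reshape(-1))
loss.backward()
optimiser.step()
```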
The sign-language-to-text translation model may be a bidirectional model: it can not only take a sign language video as input and output a language text, but can also take a language text as input and output the corresponding sign language video.
S3. Translate the first-language text with a pre-trained text translation model.
After the first-language text is obtained, it is translated with the pre-trained text translation model, that is, the first-language text is input into the text translation model to obtain a second-language text corresponding to the first-language text, for example an English text or a French text.
The text translation model is obtained by model training with text of one language as the input and text of another language as the output.
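Putting S1 to S3 together, a schematic pipeline might look like the sketch below; the two model objects and their `recognize`/`translate` methods are placeholders for whatever sign-language-to-text and text translation models are actually trained, not an API defined by the patent.

```python
def translate_sign_across_languages(first_sign_video, sign_to_text_model, text_translation_model):
    """S1-S3: sign language video in the first language -> text in the second language."""
    # S2: recognise the first sign language video into first-language text.
    first_language_text = sign_to_text_model.recognize(first_sign_video)
    # S3: translate the first-language text into second-language text.
    second_language_text = text_translation_model.translate(first_language_text)
    return second_language_text
```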
It can be seen from the above technical solution that this embodiment provides a cross-lingual sign language translation method: a first sign language video of a person who uses a first language is obtained; the sign language video is recognized with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video; and the first-language text is translated with a pre-trained text translation model to obtain a second-language text. Because the second-language text matches the first sign language video, the sign language gestures made by the person who uses the first language are converted into text in the second language, so that a person who uses one language can understand the sign language expression made by a person who uses another language, thereby facilitating communication between deaf-mute people.
In addition, as shown in Fig. 3, this embodiment further comprises the following step:
S4. Translate the second-language text into a second sign language video.
After the second-language text is obtained, it is translated with the corresponding text-to-sign-language translation model, thereby obtaining a second sign language video corresponding to the second-language text. In this way, even if the person who uses the second language is likewise deaf-mute, he or she can still understand the sign language gestures made by a deaf-mute person of another language, which further facilitates communication between deaf-mute people.
The text-to-sign-language translation model is obtained by training with the text of the corresponding language as input and the sign language gestures corresponding to that language as output. This model is likewise a bidirectional model: it can not only take text as input and output a sign language action video, but can also take a sign language action video as input and output the corresponding text.
The sign language action video may also be abstracted into action commands, and the action commands may be used to drive a humanoid robot to make the corresponding sign language gestures.
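A sketch of this optional reverse step under the same assumptions: `text_to_sign_model`, its `synthesize`/`to_action_commands` methods and the robot command interface are purely hypothetical placeholders.

```python
def render_second_sign(second_language_text, text_to_sign_model, robot=None):
    """S4: second-language text -> second sign language video (and, optionally, robot commands)."""
    second_sign_video = text_to_sign_model.synthesize(second_language_text)
    if robot is not None:
        # Abstract the video into per-frame joint-angle commands and replay them on the robot.
        for command in text_to_sign_model.to_action_commands(second_sign_video):
            robot.execute(command)
    return second_sign_video
```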
Embodiment two
Fig. 4 is a block diagram of a cross-lingual sign language translation device provided by an embodiment of the present application.
As shown in Fig. 4, the sign language translation device provided in this embodiment is applied to electronic devices such as mobile devices and computers, and can provide users with an on-site or remote means of sign language communication. The sign language translation device specifically comprises a first acquisition module 10, a first translation module 20 and a second translation module 30.
The first acquisition module is used to obtain a first sign language video of a person who uses a first language.
That is, the first sign language video is obtained through a shooting apparatus that is connected to a computer or that belongs to the mobile device itself. The first sign language video captures sign language gestures made by a person who uses the first language, for example sign language gestures made by a person who uses Chinese, and is obtained by recording those gestures.
The first translation module is used to translate the first sign language video with the sign-language-to-text translation model.
The translation yields a first-language text that matches the first sign language video, that is, the intended meaning of the signer in the first sign language video, expressed outwardly in the form of the first-language text.
Here, the sign-language-to-text translation model used to translate the first sign language video is the result of training a deep neural network on sign language samples. As shown in Fig. 5, this embodiment further comprises a second acquisition module 40, a pose estimation module 50 and a model training module 60, so that the sign-language-to-text translation model can be obtained through training.
The second acquisition module is used to obtain a sign language video set and a sign language text sequence of people who use the first language.
The sign language video set is obtained by recording the sign language gestures of people who use the first language, and the sentences in the sign language text sequence correspond one-to-one to the sign language gestures in the sign language video set.
The pose estimation module is used to estimate the hand poses and upper-limb poses in the sign language video set.
The estimation is performed with a pre-trained pose estimation model and yields the sign language action sequence contained in the sign language video set. The sign language action sequence includes the hand poses and upper-limb poses of the people making the corresponding sign language gestures, where the hand pose refers to the angle of each joint of the person's hand and the upper-limb pose refers to the angle of each joint of the person's upper limb.
The model training module is used to train the sign-language-to-text translation model with the sign language text sequence and the sign language action sequence.
After the corresponding sign language text sequence and sign language action sequence are obtained, the deep neural network is trained with the sign language action sequence as its input and the sign language text sequence as its output, thereby obtaining the sign-language-to-text translation model.
The deep neural network here may be a recurrent neural network or another type of neural network.
The sign-language-to-text translation model may be a bidirectional model: it can not only take a sign language video as input and output a language text, but can also take a language text as input and output the corresponding sign language video.
The second translation module is used to translate the first-language text with a pre-trained text translation model.
After the first-language text is obtained, it is translated with the pre-trained text translation model, that is, the first-language text is input into the text translation model to obtain a second-language text corresponding to the first-language text, for example an English text or a French text.
The text translation model is obtained by model training with text of one language as the input and text of another language as the output.
It can be seen from the above technical solution that this embodiment provides cross-lingual sign language translation: a first sign language video of a person who uses a first language is obtained; the sign language video is recognized with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video; and the first-language text is translated with a pre-trained text translation model to obtain a second-language text. Because the second-language text matches the first sign language video, the sign language gestures made by the person who uses the first language are converted into text in the second language, so that a person who uses one language can understand the sign language expression made by a person who uses another language, thereby facilitating communication between deaf-mute people.
In addition, as shown in Fig. 6, this embodiment further comprises a third translation module 70.
The third translation module is used to translate the second-language text into a second sign language video.
After the second-language text is obtained, this module translates the second-language text with the corresponding text-to-sign-language translation model, thereby obtaining a second sign language video corresponding to the second-language text. In this way, even if the person who uses the second language is likewise deaf-mute, he or she can still understand the sign language gestures made by a deaf-mute person of another language, which further facilitates communication between deaf-mute people.
The text-to-sign-language translation model is obtained by training with the text of the corresponding language as input and the sign language gestures corresponding to that language as output. This model is likewise a bidirectional model: it can not only take text as input and output a sign language action video, but can also take a sign language action video as input and output the corresponding text.
The sign language action video may also be abstracted into action commands, and the action commands may be used to drive a humanoid robot to make the corresponding sign language gestures.
Embodiment three
Fig. 7 is a block diagram of a mobile device provided by an embodiment of the present application.
This embodiment provides a mobile device, which is a mobile phone, a tablet computer or a dedicated translation device. The mobile device comprises a processor 101 and a memory 102, which are connected by a data bus 103.
A corresponding computer program or instructions are stored in the memory, and the processor is used to obtain and execute the computer program or instructions, so that the mobile device is able to perform the following steps:
obtaining a first sign language video of a person who uses a first language;
recognizing the sign language video with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video;
translating the first-language text with a pre-trained text translation model to obtain a second-language text.
The mobile device is also used to perform the following steps:
obtaining a sign language video set of people who use the first language and a sign language text sequence that matches the sign language video set;
estimating the hand poses and upper-limb poses in the sign language video set to obtain a sign language action sequence;
performing model training with the sign language text sequence and the sign language action sequence to obtain the sign-language-to-text translation model.
It is also used to perform the step of:
translating the second-language text with a pre-trained text-to-sign-language translation model to obtain a second sign language video corresponding to the second-language text.
The embodiments in this specification are described in a progressive manner. Each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments may be referred to one another.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a device or a computer program product. Therefore, the embodiments of the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) that contain computer-usable program code.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the method, terminal device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can guide a computer or another programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps is executed on the computer or other programmable terminal device to produce computer-implemented processing; the instructions executed on the computer or other programmable terminal device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the embodiments of the present application have been described, persons skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present application.
Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or terminal device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device that includes the element.
The technical solutions provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (11)

1. A cross-lingual sign language translation method, characterized in that it comprises the steps of:
obtaining a first sign language video of a person who uses a first language;
recognizing the sign language video with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video;
translating the first-language text with a pre-trained text translation model to obtain a second-language text.
2. The sign language translation method according to claim 1, characterized in that it comprises the steps of:
obtaining a sign language video set of people who use the first language and a sign language text sequence that matches the sign language video set;
estimating the hand poses and upper-limb poses in the sign language video set to obtain a sign language action sequence;
performing model training with the sign language text sequence and the sign language action sequence to obtain the sign-language-to-text translation model.
3. The sign language translation method according to claim 2, characterized in that the hand pose comprises the angle of each joint of the hand;
and the upper-limb pose comprises the angle of each joint of the upper limb.
4. The sign language translation method according to claim 1, characterized in that the sign-language-to-text translation model is a bidirectional model.
5. The sign language translation method according to claim 1, characterized in that it further comprises the step of:
translating the second-language text with a pre-trained text-to-sign-language translation model to obtain a second sign language video corresponding to the second-language text.
6. A cross-lingual sign language translation device, characterized in that it comprises:
a first acquisition module for obtaining a first sign language video of a person who uses a first language;
a first translation module for recognizing the sign language video with a sign-language-to-text translation model based on the first language to obtain a first-language text that matches the first sign language video;
a second translation module for translating the first-language text with a pre-trained text translation model to obtain a second-language text.
7. The sign language translation device according to claim 6, characterized in that it comprises:
a second acquisition module for obtaining a sign language video set of people who use the first language and a sign language text sequence that matches the sign language video set;
a pose estimation module for estimating the hand poses and upper-limb poses in the sign language video set to obtain a sign language action sequence;
a model training module for performing model training with the sign language text sequence and the sign language action sequence to obtain the sign-language-to-text translation model.
8. The sign language translation device according to claim 7, characterized in that the hand pose comprises the angle of each joint of the hand;
and the upper-limb pose comprises the angle of each joint of the upper limb.
9. The sign language translation device according to claim 6, characterized in that the sign-language-to-text translation model is a bidirectional model.
10. The sign language translation device according to claim 6, characterized in that it further comprises:
a third translation module for translating the second-language text with a pre-trained text-to-sign-language translation model to obtain a second sign language video corresponding to the second-language text.
11. A mobile device, characterized in that it is provided with the sign language translation device according to any one of claims 6 to 10.
CN201811092150.4A 2018-09-19 2018-09-19 A kind of sign language interpretation method across languages, device and mobile device Pending CN109214347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811092150.4A CN109214347A (en) 2018-09-19 2018-09-19 A kind of sign language interpretation method across languages, device and mobile device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811092150.4A CN109214347A (en) 2018-09-19 2018-09-19 A kind of sign language interpretation method across languages, device and mobile device

Publications (1)

Publication Number Publication Date
CN109214347A true CN109214347A (en) 2019-01-15

Family

ID=64984521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811092150.4A Pending CN109214347A (en) 2018-09-19 2018-09-19 A kind of sign language interpretation method across languages, device and mobile device

Country Status (1)

Country Link
CN (1) CN109214347A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130289970A1 (en) * 2003-11-19 2013-10-31 Raanan Liebermann Global Touch Language as Cross Translation Between Languages
CN101089918A (en) * 2006-06-13 2007-12-19 上海市莘格高级中学 Sign language translator
CN101504803A (en) * 2009-03-11 2009-08-12 李海元 Sign language translating method and apparatus
CN101539994A (en) * 2009-04-16 2009-09-23 西安交通大学 Mutually translating system and method of sign language and speech
CN102104670A (en) * 2009-12-17 2011-06-22 深圳富泰宏精密工业有限公司 Sign language identification system and method
CN101794528A (en) * 2010-04-02 2010-08-04 北京大学软件与微电子学院无锡产学研合作教育基地 Gesture language-voice bidirectional translation system
CN102682644A (en) * 2012-03-26 2012-09-19 中山大学 Chinese sign language synthesis system driven by markup language
CN104538025A (en) * 2014-12-23 2015-04-22 西北师范大学 Method and device for converting gestures to Chinese and Tibetan bilingual voices
CN105096696A (en) * 2015-07-31 2015-11-25 努比亚技术有限公司 Sign language translation apparatus and method based on intelligent bracelet
CN106570473A (en) * 2016-11-03 2017-04-19 深圳量旌科技有限公司 Deaf-mute sign language identification interaction system based on robot
CN106898197A (en) * 2017-03-28 2017-06-27 西安电子科技大学 A kind of deaf-mute and the equipment of normal person's two-way exchange
CN207624216U (en) * 2017-08-17 2018-07-17 山东师范大学 A kind of voice and the two-way mutual translation system of sign language
CN108256458A (en) * 2018-01-04 2018-07-06 东北大学 A kind of two-way real-time translation system and method for deaf person's nature sign language
CN107995374A (en) * 2018-01-17 2018-05-04 张权伟 It is applicable to the translation mobile phone that multiple languages mutually communicate
CN108647603A (en) * 2018-04-28 2018-10-12 清华大学 Semi-supervised continuous sign language interpretation method based on attention mechanism and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728203A (en) * 2019-09-23 2020-01-24 清华大学 Sign language translation video generation method and system based on deep learning
CN110728203B (en) * 2019-09-23 2022-04-12 清华大学 Sign language translation video generation method and system based on deep learning
CN110931042A (en) * 2019-11-14 2020-03-27 北京欧珀通信有限公司 Simultaneous interpretation method and device, electronic equipment and storage medium
CN110931042B (en) * 2019-11-14 2022-08-16 北京欧珀通信有限公司 Simultaneous interpretation method and device, electronic equipment and storage medium
CN111354246A (en) * 2020-01-16 2020-06-30 浙江工业大学 System and method for helping deaf-mute to communicate
CN112256827A (en) * 2020-10-20 2021-01-22 平安科技(深圳)有限公司 Sign language translation method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109214347A (en) A kind of sign language interpretation method across languages, device and mobile device
CN104992709B (en) A kind of the execution method and speech recognition apparatus of phonetic order
CN107609572B (en) Multi-modal emotion recognition method and system based on neural network and transfer learning
CN106205611B (en) Man-machine interaction method and system based on multi-mode historical response result
CN105843381A (en) Data processing method for realizing multi-modal interaction and multi-modal interaction system
Pi et al. Detgpt: Detect what you need via reasoning
CN108983979A (en) A kind of gesture tracking recognition methods, device and smart machine
CN105912530A (en) Intelligent robot-oriented information processing method and system
CN108491808B (en) Method and device for acquiring information
CN109766881A (en) A kind of character identifying method and device of vertical text image
CN109660865A (en) Make method and device, medium and the electronic equipment of video tab automatically for video
CN113903067A (en) Virtual object video generation method, device, equipment and medium
CN109409255A (en) A kind of sign language scene generating method and device
CN116797695A (en) Interaction method, system and storage medium of digital person and virtual whiteboard
CN108198559A (en) A kind of voice control robot system for learning action
CN107910006A (en) Audio recognition method, device and multiple source speech differentiation identifying system
CN207718803U (en) Multiple source speech differentiation identifying system
CN110378428A (en) A kind of domestic robot and its Emotion identification method and apparatus
CN110211236A (en) A kind of customized implementation method of virtual portrait based on intelligent sound box
CN112329593A (en) Gesture generation method and gesture generation system based on stylization
Park et al. Providing tablets as collaborative-task workspace for human-robot interaction
CN111046674A (en) Semantic understanding method and device, electronic equipment and storage medium
Cho et al. Implementation of human-robot VQA interaction system with dynamic memory networks
Preventis et al. Interact: Gesture recognition in the cloud
Zhong et al. Bridging the Gap between Robotic Applications and Computational Intelligence in Domestic Robotics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190115)