CN112506410A - Deaf-mute barrier-free online video interaction device - Google Patents

Deaf-mute barrier-free online video interaction device

Info

Publication number
CN112506410A
CN112506410A (application CN202011430724.1A)
Authority
CN
China
Prior art keywords
video
audio
sign language
deaf
mute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011430724.1A
Other languages
Chinese (zh)
Inventor
张永爱
张承强
周雄图
郭太良
吴朝兴
李欣晔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Mindu Innovation Laboratory
Original Assignee
Fuzhou University
Mindu Innovation Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University, Mindu Innovation Laboratory filed Critical Fuzhou University
Priority to CN202011430724.1A priority Critical patent/CN112506410A/en
Publication of CN112506410A publication Critical patent/CN112506410A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10 Transforming into visible information
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L2021/065 Aids for the handicapped in understanding

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention relates to a barrier-free online video interaction device for deaf-mute users. The device comprises a mobile terminal and a server end. The mobile terminal comprises an audio and video acquisition module for capturing audio and video, a display playing module for playing audio and video, and an internet link module for connecting to the network. The server end comprises an audio/video-sign language gesture module, which converts audio/video into sign language gesture video and sign language gestures into audio/video, and an audio/video transmitting and receiving module for sending and receiving audio/video. The mobile terminal can be an ordinary mobile phone, and the whole device can be implemented almost entirely in software; it therefore has a simple structure and low cost, and allows deaf-mute users and normal people to hold barrier-free video interaction anytime and anywhere.

Description

Deaf-mute barrier-free online video interaction device
Technical Field
The invention relates to deep learning fields such as video image generation, sign language gesture recognition and speech recognition, and in particular to a barrier-free online video interaction device for deaf-mute users.
Background
Deaf-mute people cannot communicate with others as conveniently as normal people can; communicating with normal people in particular causes them great trouble and inconvenience. Expecting most normal people to master sign language so that they could communicate with deaf-mute people directly is obviously unrealistic.
Prior inventions each have advantages but also various shortcomings, and none supports online video interaction. Wearable devices, for example, make the arm sore when worn on the arm or hand for too long and cannot be used for work in which the hands touch water. Image sensing approaches capture gesture images of sign language and analyze them with image processing and recognition techniques, which improves on wearables, but existing inventions of this kind still cannot achieve barrier-free online video interaction between deaf-mute and normal people anytime and anywhere. A patent for a sign language communication device based on computer vision, for instance, allows communication only after several devices are assembled; the assembly process is cumbersome, video interaction anytime and anywhere is impossible, and the SVM classifier it uses performs only single static gesture recognition, lacking the flexibility and diversity of the convolutional neural network used by the present invention. A patent for a sign language conference method and system based on gesture recognition mainly solves communication obstacles during conferences, but likewise cannot provide video interaction anytime and anywhere, and its flexibility of use and diversity of display are insufficient. A patent for dual-mode deaf-mute communication equipment achieves face-to-face barrier-free communication by carrying several devices, but carrying multiple devices is troublesome, and only face-to-face communication is possible, not online video interaction anytime and anywhere.
In the current internet era, people can interact by video with others anytime and anywhere, for example through video telephony and video messages, but such interaction is still essentially a mode of communication between normal people; deaf-mute and normal people cannot conveniently hold barrier-free online video interaction. With the rapid development of deep learning, however, gesture recognition and text-to-sign-language-gesture-video techniques have emerged. These techniques can translate sign language gesture information into text information and translate text information into sign language gesture video.
Disclosure of Invention
The invention aims to provide a barrier-free online video interaction device for deaf-mute users that lets them hold barrier-free video interaction with normal people anytime and anywhere, simply and conveniently. As long as the user carries a mobile device, which can be the mobile phone used in daily life, he or she can interact by video with normal people or other deaf-mute users anytime and anywhere, just as normal people do. The device can be developed almost entirely in software at low cost: the mobile device's functions are realized as an application integrating modules written in a programming language, and the server end's functions are realized as an integrated system of functional modules likewise written in a programming language.
In order to achieve the purpose, the technical scheme of the invention is as follows: a deaf-mute barrier-free online video interaction device comprises a mobile terminal and a server end. The mobile terminal comprises an audio and video acquisition module for acquiring audio and video, a display playing module for playing audio and video, and an internet link module for connecting to the network; the server end comprises an audio/video-sign language gesture module for converting audio/video into sign language gesture video and sign language gestures into audio/video, and an audio/video transmitting and receiving module for transmitting and receiving audio/video. The audio and video acquisition module, the display playing module and the internet link module are integrated on a single mobile terminal, so that one mobile terminal provides the functions of audio and video acquisition, display playing and internet linking.
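The two-tier decomposition above can be pictured as a thin software skeleton. The following sketch is illustrative only; all class and method names are assumptions for exposition, not identifiers from the patent:

```python
# Hypothetical skeleton of the architecture described above: the mobile
# terminal bundles the acquisition, display playing and internet link
# modules; the server end bundles the audio/video-sign language gesture
# converter and the transmit/receive module. All names are illustrative.

class MobileTerminal:
    def acquire(self):
        # audio/video acquisition module: camera + microphone capture
        return {"video": "frames", "audio": "samples"}

    def play(self, av):
        # display playing module: screen + loudspeaker output
        return f"displaying {sorted(av)}"

    def link_send(self, av):
        # internet link module: would push the stream to the server
        return av


class ServerEnd:
    def convert(self, av, to_sign_language):
        # audio/video-sign language gesture module: the direction depends
        # on whether the receiver is a deaf-mute or a normal-person user
        key = "sign_video" if to_sign_language else "speech_audio"
        return {**av, key: "generated"}

    def transceive(self, av):
        # audio/video transmitting and receiving module
        return av


mobile, server = MobileTerminal(), ServerEnd()
processed = server.convert(server.transceive(mobile.link_send(mobile.acquire())),
                           to_sign_language=True)
```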
In an embodiment of the invention, the mobile terminal's audio/video display and play function can show the normal person's audio/video and the sign language gesture video in split screen, or show the received audio/video information of the deaf-mute or the normal person on a single screen.
In an embodiment of the present invention, the audio/video-sign language gesture module includes an audio/video voice to text function, a text to sign language gesture video function, a sign language gesture video to text function, and a text to audio/video voice function.
In an embodiment of the present invention, the audio/video transmitting/receiving module is configured to receive an audio/video transmitted from the mobile terminal and transmit the processed audio/video to the mobile terminal.
In an embodiment of the present invention, the audio/video voice to text function of the audio/video-sign language gesture module is to convert information in audio into text information by using a deep learning voice recognition technology.
In an embodiment of the invention, the text-to-sign language gesture video function of the audio/video-sign language gesture module converts text content information into a sign language gesture video by using a deep learning technique.
In an embodiment of the invention, the sign language gesture video-to-text function of the audio/video-sign language gesture module converts the information contained in the sign language gestures into text content information by using a deep learning technique.
In an embodiment of the invention, the text-to-audio video voice function of the audio/video-sign language gesture module is to convert text contents into voice information by using a deep learning technology, and generate realistic voice by using a neural network voice synthesis function based on a voice synthesis technology.
In an embodiment of the invention, receiving the audio and video sent by the mobile terminal means that the mobile terminal collects in real time the audio/video of a normal person speaking or of a deaf-mute's sign language gestures and sends it online to the server end, which receives it.
In an embodiment of the invention, sending the processed audio and video to the mobile terminal means that, when sending to the deaf-mute's mobile terminal, the normal person's audio/video is converted into sign language gesture video and sent together with the normal person's original audio/video, so that both can be displayed in split screen; when sending to a normal person's mobile terminal, the deaf-mute's sign language gesture audio/video is first converted from sign language gestures into ordinary voice audio/video and then sent.
Compared with the prior art, the invention has the following beneficial effects: it lets deaf-mute and normal people interact by video anytime and anywhere; the mobile device can be the mobile phone people use in daily life, and the whole device can be developed almost entirely in software, making it very convenient and low in cost.
Drawings
Fig. 1 is a schematic structural diagram of an unobstructed online video interaction device for deaf-mutes according to the embodiment.
Fig. 2 is a working logic diagram of a specific implementation of the audio/video to sign language gesture module of the present invention.
FIG. 3 is a working logic diagram of a specific implementation of the sign language gesture-to-audio/video conversion module according to the present invention.
FIG. 4 is a schematic diagram of the text to sign language gesture video of the present invention.
FIG. 5 is a schematic diagram of the sign language gesture video-to-text conversion principle according to the present invention.
Reference numerals: 1. whole mobile device; 2. image acquisition device; 3. internet link device; 4. voice playing device; 5. voice acquisition device; 6. video display device; 7. audio/video-sign language gesture device; 8. whole server end; 9. audio/video transmitting and receiving device.
Detailed Description
The technical scheme of the invention is specifically explained below with reference to the accompanying drawings.
As shown in fig. 1, the deaf-mute barrier-free online video interaction device of this embodiment includes a whole mobile device 1 and a whole server end 8 linked to the mobile device through the internet. The whole mobile device 1 includes an image acquisition device 2, an internet link device 3, a voice playing device 4, a voice acquisition device 5 and a video display device 6; the whole server end 8 includes an audio/video-sign language gesture device 7 and an audio/video transmitting and receiving device 9.
The whole mobile device 1 of this embodiment is a mobile phone; the devices and modules it contains form one integrated application, written in Java on the Android system, while the devices of the whole server end 8 are written and integrated in Python.
The whole mobile device of this embodiment collects the audio and video information of normal people or deaf-mute users through the image acquisition device and the voice acquisition device, and then sends it out through the internet link device.
The image acquisition device and the voice acquisition device capture in real time. A deaf-mute user taps the video chat control on the screen of the mobile device and selects "deaf-mute" as the current user type. After the call is connected, the mobile device, such as a mobile phone or tablet, only needs to be fixed, for example stood on a desk; the user then expresses the intended meaning with sign language gestures in front of the camera, and the device captures the video in real time and sends it out online in real time through the internet link.
The server end receives the audio and video sent by the mobile device and first judges whether the sender is of the normal person type or the deaf-mute type. If it is the deaf-mute type, the video image undergoes a video-content-to-voice-content conversion. The principle is shown in fig. 3, the working logic of converting the deaf-mute's sign language gesture video into voice content: video image information is obtained from the sign language gesture video, the sign language gesture image information is converted into text information, the text is converted into audio, the audio is combined with the deaf-mute's original sign language gesture video, and the processed audio and video are handed to the server end's sending device.
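The fig. 3 pipeline can be sketched as follows. `recognize_signs` and `synthesize_speech` are mock stand-ins for the trained gesture recognition and speech synthesis networks the embodiment describes; all names are illustrative assumptions:

```python
# Sketch of the server-side deaf-mute direction (fig. 3): sign language
# gesture video -> text -> synthesized audio, remuxed with the original
# video before sending.

def recognize_signs(frames):
    # Placeholder gesture recognition: pretend each frame carries a label.
    return " ".join(label for _, label in frames)

def synthesize_speech(text):
    # Placeholder neural speech synthesis: return a tagged fake payload.
    return f"<tts:{text}>"

def process_deaf_mute_stream(frames):
    text = recognize_signs(frames)      # sign language gesture video -> text
    audio = synthesize_speech(text)     # text -> audio
    # Combine the generated audio with the original sign language video,
    # then hand the result to the server end's sending device.
    return {"video": frames, "audio": audio, "text": text}

packet = process_deaf_mute_stream([(0, "hello"), (1, "friend")])
```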
After the mobile device used by the normal person selects the normal person type and the call is connected, it receives the deaf-mute's audio and video sent by the server end, on which the sign language gestures have already been translated; in the mobile device the normal person hears realistic machine-generated audio translated from the deaf-mute's sign language gestures.
When replying, the normal person speaks to the camera of the mobile device; the image acquisition device and the voice acquisition device capture the normal person's images and audio in real time, and the mobile device transmits the audio and video online in real time through the internet link device.
The server end receives the audio and video sent by the mobile device and first judges whether it is of the normal person type or the deaf-mute type. If it is the normal person type, the audio undergoes an audio-content-to-sign-language-gesture-content conversion. The principle is shown in fig. 2, the working logic of converting a normal person's audio and video into sign language gesture video: audio information is obtained from the normal person's audio and video, the audio is converted into text information using speech recognition, and the text is converted into sign language gesture video. The sign language gesture video is combined with the normal person's audio and video and handed to the sending device, and the server end's sending device sends it out.
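The mirror direction of fig. 2 can be sketched the same way. `transcribe` and `generate_sign_video` are illustrative stand-ins for the speech recognition and text2video models; they are not real APIs:

```python
# Sketch of the normal-person direction (fig. 2): speech audio -> text
# (speech recognition) -> sign language gesture video (text2video), then
# both streams are combined so the deaf-mute client can show them in
# split screen.

def transcribe(audio_chunk):
    # Placeholder speech recognition: assume the chunk carries its transcript.
    return audio_chunk["transcript"]

def generate_sign_video(text):
    # Placeholder text2video generator: one synthetic frame per word.
    return [f"sign_frame({w})" for w in text.split()]

def process_normal_stream(audio_chunk, video):
    text = transcribe(audio_chunk)
    sign_video = generate_sign_video(text)
    # The original audio/video and the generated sign language video are
    # sent together for split-screen display on the deaf-mute's device.
    return {"original": video, "sign_video": sign_video, "text": text}

out = process_normal_stream({"transcript": "good morning"}, video="cam_stream")
```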
The deaf-mute's mobile device receives the audio and video replied by the normal person, in which the audio has been converted into sign language gesture video at the server end; the above steps then repeat, realizing barrier-free online video interaction for the deaf-mute.
As shown in fig. 4, the text-to-sign-language-gesture-video principle of this embodiment uses an improved generative adversarial network based on the relatively advanced text2image (text-to-picture) computer vision technique; it belongs to the text2video (text-to-short-video) family, that is, an extremely short video is generated from a short piece of text, a currently advanced deep learning technique in computer vision. The adversarial network model is obtained by training on a large dataset of various sign language short videos paired with the corresponding short texts. The purpose of using a generative adversarial network is that the generated pictures and videos are diverse, vivid and clear, eliminating the monotony of fixed pictures and the inflexibility of conventional approaches that cannot generate video.
As shown in fig. 5, the sign-language-gesture-video-to-text principle of this embodiment uses a deep learning gesture recognition technique: the trained deep learning model translates the sign language gestures in a short video into text content according to the context of the video images.
The picture the mobile device shows on screen and the sound it plays through the loudspeaker are as follows: if the mobile device is used by a deaf-mute, the deaf-mute sees split screen content, with one half of the screen showing the normal person's conversation video and the other half showing the translated sign language gesture video, while the loudspeaker plays the normal person's conversation voice.
The normal person type and the deaf-mute type of this embodiment can be switched during video interaction: when the mobile end switches types, the server end automatically changes the type of video it sends to the mobile end, so that the mobile device can serve a different type of user mid-use.
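One way to realize the runtime type switch described above is a per-session flag that the server consults on every packet; the names below are illustrative only, not taken from the patent:

```python
# Sketch of in-call user-type switching: the server keeps the current
# type per session and picks the conversion direction from it, so a
# switch from the mobile end changes the output type automatically.

DEAF_MUTE, NORMAL = "deaf-mute", "normal"

class Session:
    def __init__(self, user_type):
        self.user_type = user_type

    def switch_type(self, new_type):
        # Called when the mobile end changes its declared user type.
        self.user_type = new_type

    def route(self, payload):
        if self.user_type == DEAF_MUTE:
            return ("sign_to_speech", payload)   # translate gestures out
        return ("speech_to_sign", payload)       # translate speech out

session = Session(NORMAL)
first = session.route("pkt1")
session.switch_type(DEAF_MUTE)
second = session.route("pkt2")
```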
The sending and receiving device and the internet link device of this embodiment are implemented in software; both the mobile end and the server end are realized by writing application layer code that calls the underlying hardware. If the mobile end uses the Android system, the internet link device is a method class or interface packaged in the Android system, to be called and overridden as actually needed; a protocol must be defined to keep the data types unified, distinguishing whether the data is a character string or a video image stream. If the server end uses a Python-based network framework, the sending and receiving device consists of methods or abstract classes packaged by that framework, which can likewise be called and implemented as needed.
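A minimal length-prefixed framing scheme could meet the protocol requirement above (distinguishing a character string from a video image stream). This wire format is an illustrative assumption, not the patent's actual protocol:

```python
# Minimal framing sketch: a 1-byte type tag tells the receiver whether
# the payload is a text string or a chunk of the video image stream,
# and a 4-byte big-endian length delimits the payload.

import struct

TYPE_TEXT, TYPE_VIDEO = 0, 1

def pack_frame(kind, payload):
    # ">BI" = big-endian unsigned byte (type) + unsigned int (length)
    return struct.pack(">BI", kind, len(payload)) + payload

def unpack_frame(data):
    kind, length = struct.unpack(">BI", data[:5])
    return kind, data[5:5 + length]

frame = pack_frame(TYPE_TEXT, "你好".encode("utf-8"))
kind, payload = unpack_frame(frame)
```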
The image acquisition device, the voice playing device and the video display device of the mobile device are likewise realized in software: application layer code calls the underlying hardware to acquire audio and video information in real time.
The above are preferred embodiments of the present invention; all changes made according to the technical scheme of the present invention that produce equivalent functional effects without exceeding the scope of the technical scheme belong to the protection scope of the present invention.

Claims (10)

1. A deaf-mute barrier-free online video interaction device, characterized by comprising a mobile terminal and a server end; the mobile terminal comprises an audio and video acquisition module for acquiring audio and video, a display playing module for playing audio and video, and an internet link module for connecting to the network; the server end comprises an audio/video-sign language gesture module for converting audio/video into sign language gesture video and sign language gestures into audio/video, and an audio/video transmitting and receiving module for transmitting and receiving audio/video; the audio and video acquisition module, the display playing module and the internet link module are integrated on a single mobile terminal, so that the mobile terminal provides the functions of audio and video acquisition, display playing and internet linking.
2. The barrier-free online video interaction device for the deaf-mute as claimed in claim 1, wherein the mobile terminal audio/video display playing function can display the audio/video of the normal person and the sign language gesture video in a split screen manner, or can display the received audio/video information of the deaf-mute and the normal person in a single screen manner.
3. The deaf-mute unobstructed online video interaction device according to claim 1, wherein said audio-video-sign language gesture module includes an audio-video voice to text function, a text to sign language gesture video function, a sign language gesture video to text function, and a text to audio-video voice function.
4. The barrier-free online video interaction device for the deaf-mute according to claim 1, wherein the audio/video transmitting/receiving module is configured to receive an audio/video from the mobile terminal and transmit the processed audio/video to the mobile terminal.
5. The barrier-free online video interaction device for the deaf-mute as claimed in claim 3, wherein the audio-video voice to text function of the audio-video-sign language gesture module is to convert information in audio into text information by using a deep learning voice recognition technology.
6. The deaf-mute barrier-free online video interaction device according to claim 3, wherein the text-to-sign language gesture video function of the audio-video-sign language gesture module is to convert text content information into a video of sign language gestures by using a deep learning technique.
7. The deaf-mute barrier-free online video interaction device according to claim 3, wherein the sign language gesture video-to-text function of the audio/video-sign language gesture module converts the information contained in the sign language gestures into text content information by using a deep learning technique.
8. The deaf-mute barrier-free online video interaction device according to claim 3, wherein the text-to-audio video voice function of the audio/video-sign language gesture module is to convert text contents into voice information by using a deep learning technology and generate realistic voice by using a neural network voice synthesis function based on a voice synthesis technology.
9. The deaf-mute barrier-free online video interaction device according to claim 4, wherein receiving the audio and video sent by the mobile terminal means that the mobile terminal collects in real time the audio/video of a normal person speaking or of a deaf-mute's sign language gestures and sends it online to the server end, which then receives it.
10. The deaf-mute barrier-free online video interaction device according to claim 4, wherein sending the processed audio and video to the mobile terminal means that, when sending to the deaf-mute's mobile terminal, the normal person's audio/video is converted into sign language gesture video and sent together with the normal person's original audio/video, so that both can be displayed in split screen; and when sending to a normal person's mobile terminal, the deaf-mute's sign language gesture audio/video is first converted from sign language gestures into ordinary voice audio/video and then sent.
CN202011430724.1A 2020-12-09 2020-12-09 Deaf-mute barrier-free online video interaction device Pending CN112506410A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011430724.1A CN112506410A (en) 2020-12-09 2020-12-09 Deaf-mute barrier-free online video interaction device


Publications (1)

Publication Number Publication Date
CN112506410A true CN112506410A (en) 2021-03-16

Family

ID=74970134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011430724.1A Pending CN112506410A (en) 2020-12-09 2020-12-09 Deaf-mute barrier-free online video interaction device

Country Status (1)

Country Link
CN (1) CN112506410A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070065A (en) * 2019-04-30 2019-07-30 李冠津 The sign language systems and the means of communication of view-based access control model and speech-sound intelligent
CN110456906A (en) * 2019-07-23 2019-11-15 艾祎璠 A kind of intelligent interactive system for deaf-mute
CN110598576A (en) * 2019-08-21 2019-12-20 腾讯科技(深圳)有限公司 Sign language interaction method and device and computer medium
CN111223369A (en) * 2020-01-15 2020-06-02 上海馨予信息技术有限公司 Deaf language translator and communication method thereof
CN111768786A (en) * 2020-06-24 2020-10-13 重庆蓝岸通讯技术有限公司 Deaf-mute conversation intelligent terminal platform and conversation method thereof

Similar Documents

Publication Publication Date Title
US5982853A (en) Telephone for the deaf and method of using same
CN110609620B (en) Human-computer interaction method and device based on virtual image and electronic equipment
WO2020204000A1 (en) Communication assistance system, communication assistance method, communication assistance program, and image control program
EP2574220B1 (en) Hand-held communication aid for individuals with auditory, speech and visual impairments
CN101502094B (en) Methods and systems for a sign language graphical interpreter
US20030038884A1 (en) Method and apparatus for producing communication data, method and apparatus for reproducing communication data, and program storage medium
CN100358358C (en) Sign language video presentation device, sign language video i/o device, and sign language interpretation system
KR101587115B1 (en) System for avatar messenger service
TWI276357B (en) Image input apparatus for sign language talk, image input/output apparatus for sign language talk, and system for sign language translation
CN110931042A (en) Simultaneous interpretation method and device, electronic equipment and storage medium
CN113592985A (en) Method and device for outputting mixed deformation value, storage medium and electronic device
CN108877410A (en) A kind of deaf-mute's sign language exchange method and deaf-mute's sign language interactive device
WO2021006538A1 (en) Avatar visual transformation device expressing text message as v-moji and message transformation method
CN116524791A (en) Lip language learning auxiliary training system based on meta universe and application thereof
CN111221495A (en) Visual interaction method and device and terminal equipment
CN110717344A (en) Auxiliary communication system based on intelligent wearable equipment
CN113850898A (en) Scene rendering method and device, storage medium and electronic equipment
CN113438300A (en) Network-based accessible communication online communication system and method for hearing-impaired people and normal people
CN107783650A (en) A kind of man-machine interaction method and device based on virtual robot
CN112506410A (en) Deaf-mute barrier-free online video interaction device
CN110133872A (en) A kind of intelligent glasses can be realized multilingual intertranslation
CN109977427A (en) A kind of miniature wearable real time translator
JP2932027B2 (en) Videophone equipment
KR20170127354A (en) Apparatus and method for providing video conversation using face conversion based on facial motion capture
CN113157241A (en) Interaction equipment, interaction device and interaction system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210316