AU2020103854A4 - Method and system for assisting communication for deaf persons - Google Patents

Method and system for assisting communication for deaf persons

Info

Publication number
AU2020103854A4
AU2020103854A4
Authority
AU
Australia
Prior art keywords
user
display
deaf
speech
enabled input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020103854A
Inventor
Prajakta Bharat Mane
Yogini Dilip Borole
Balu Ashok Phugate
Wagh Pratiksha Hiralal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pratiksha Hiralal Wagh Ms
Original Assignee
Pratiksha Hiralal Wagh Ms
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pratiksha Hiralal Wagh Ms filed Critical Pratiksha Hiralal Wagh Ms
Priority to AU2020103854A priority Critical patent/AU2020103854A4/en
Application granted granted Critical
Publication of AU2020103854A4 publication Critical patent/AU2020103854A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00Teaching, or communicating with, the blind, deaf or mute
    • G09B21/009Teaching or communicating with deaf persons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10Transforming into visible information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/42391Systems providing special services or facilities to subscribers where the subscribers are hearing-impaired persons, e.g. telephone devices for the deaf

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Data Mining & Analysis (AREA)
  • Social Psychology (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Acoustics & Sound (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention provides a method and system for assisting communication for deaf persons by providing a real-time electronic translating system for converting body actions of deaf persons and visually displaying them at desired locations in real-time. Another embodiment of the present invention states that the system further comprises: a web-based interface wirelessly and communicatively coupled to the visually readable display interface and the display user interface unit, wherein said web-based interface is configured to detect a real-time location of the display user interface unit of the second user with respect to said visually readable display interface in order to transmit digitized framed images to the second user.

Description

Method and system for assisting communication for deaf persons
FIELD OF INVENTION
The present invention generally relates to the field of electronics and communication engineering, and particularly to the processing of communications between a speaking person and a deaf or hard-of-hearing individual, in particular to enhanced communication services for deaf or hard-of-hearing individuals.
BACKGROUND OF THE INVENTION
Persons who are deaf or hearing-impaired and cannot hear well enough to use the telephone commonly make use of communication terminals specifically constructed and designed to enable such persons to converse over telephone lines. Such devices are referred to as telecommunication devices for the deaf (TDD) and include both a keyboard and a display connected to the telephone through a modem (modulator/demodulator). The modem is typically built into the TDD and either directly wired to a telephone line or coupled through an acoustic coupler to a normal telephone handset. The TDD is capable of transmitting information over a telephone line by means of coded tones to another similar TDD connected at the opposite end of the telephone line through another modem.
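The coded-tone transmission described above can be illustrated with a minimal sketch: each character is mapped to a five-bit code that a modem would render as tones on the line. The code table below is a hypothetical subset for illustration only, not the actual Baudot/TTY code used by commercial TDDs.

```python
# Minimal sketch of TDD-style text transmission. The 5-bit code table
# here is illustrative only, not the real Baudot character set.
CODE_TABLE = {"H": 0b10100, "E": 0b00001, "L": 0b10010, "O": 0b11000}
REVERSE_TABLE = {v: k for k, v in CODE_TABLE.items()}

def encode(message: str) -> list[int]:
    """Convert text into a stream of 5-bit codes for tone modulation."""
    return [CODE_TABLE[ch] for ch in message]

def decode(codes: list[int]) -> str:
    """Reverse the mapping at the receiving TDD to recover the text."""
    return "".join(REVERSE_TABLE[c] for c in codes)

codes = encode("HELLO")
assert decode(codes) == "HELLO"
```

A real TDD additionally handles letter/figure shift codes and line framing; the sketch shows only the character-coding idea.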
Deaf people are employed in almost every occupational field. They drive cars, get married, buy homes, and have children, much like everyone else. Because of many inherent communication difficulties, most deaf people are more comfortable when associating with other deaf people. They tend to marry deaf people whom they have met at schools for the deaf or through deaf clubs. Most deaf couples have hearing children who learn sign language early in life to communicate with their parents. Many deaf people tend to have special electronics and telecommunications equipment in their homes. Captioning decoders may be on their television, and electrical hook-ups may flash lights to indicate when the baby is crying, the doorbell is ringing, or the alarm clock is going off. However, deaf persons have substantial difficulties in communicating with persons at remote locations. One technique which is employed utilizes a teletype machine for use by the deaf person to transmit his message and also to receive messages, and the person with whom the deaf person is communicating also has such teletype machine so that there is an effective connection directly between them. In another method, the deaf person utilizes a teletype machine, but the person who is communicating with the deaf person is in contact with a communications center where a person reads the transmission to the hearing person over the telephone and receives the telephone message from the hearing person and transmits that information on the teletype machine to the deaf person. Obviously, this teletype-based system is limited and requires the deaf person to be able to manipulate a teletype machine and to understand effectively the written information which he or she receives on the teletype machine. Processing rapidly received written information is not always effective with those who have been profoundly deaf for extended periods of time. 
Moreover, a system based upon such teletype transmissions is relatively slow.
The widespread availability of personal computers and modems has enabled direct communication with and between deaf persons having such computers. However, it is still required that the deaf person be able to type effectively and to readily comprehend the written message being received. Deaf persons generally are well schooled in the use of finger and hand signing to express themselves, and this signing may be coupled with facial expression and/or body motion to modify the words and phrases which are being signed by the hands and to convey emotion. As used herein, "signing motions" include finger and hand motions, body motions, and facial motions and expressions to convey emotions or to modify expressions generated by finger and hand motions. A written message being received on a teletype machine or computer may not convey any emotional content that may have been present in the voice of the person conveying the message.
USRE41002E1 discloses an electronic communications system for the deaf includes a video apparatus for observing and digitizing the facial, body and hand and finger signing motions of a deaf person, an electronic translator for translating the digitized signing motions into words and phrases, and an electronic output for the words and phrases. The video apparatus desirably includes both a video camera and a video display which will display signing motions provided by translating spoken words of a hearing person into digitized images.
US20060026001A1 discloses a system that enables a deaf party to communicate in a sign language by way of a video computing device to a relay center having a sign language interpreter. A relay system receives a sign language input from the deaf party. Then, a spoken message corresponding to the received sign language input is relayed to the hearing party. The relay system may also receive a spoken message from the hearing party. Then a sign language message corresponding to the spoken message is relayed to the deaf party via a relay communication link.
However, with the application of existing and conventional systems and methods for communicating with such persons having difficulty in hearing, it sometimes becomes complicated to understand the situations of said persons in real-time. The present invention overcomes these existing limitations by disclosing technical advancements over existing technologies.
SUMMARY OF THE INVENTION
The present invention relates to a communication system for deaf persons by providing a real-time electronic translating system for converting body actions of deaf persons and visually displaying them on desired locations in real-time.
In an embodiment, the present invention discloses a communication system for deaf comprising: a recording module coupled to a main server and configured to record a speech enabled input from a first user, wherein the recording module comprises an audio/video recording device in order to record the speech enabled input when the first user stands in front of said recording device; an electronic translator communicatively coupled to the recording module, wherein said translator is configured to receive the speech enabled input recorded by the recording module and convert the recorded speech into a plurality of suitable digitized images, wherein said digitized images include movements of body parts with respect to a particular part of said recorded speech; and a visually readable display interface configured to receive the plurality of suitable digitized images translated from the speech enabled input recorded by the first user, wherein the visually readable display interface comprises a display user interface unit configured to display said digitized images, translated from the speech enabled input, to a second user through a communication channel in real-time.
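The three-stage pipeline of this embodiment — record, translate, display — can be sketched as follows. The class and method names are illustrative assumptions, not part of the disclosed system; the "translation" here simply frames one image per spoken word as a stand-in for a real speech-to-sign converter.

```python
from dataclasses import dataclass, field

@dataclass
class DigitizedImage:
    """One framed image showing a body movement for part of the speech."""
    frame_id: int
    speech_segment: str

class RecordingModule:
    """Records a speech enabled input from the first user (102)."""
    def record(self, speech: str) -> str:
        return speech

class ElectronicTranslator:
    """Converts recorded speech into a plurality of framed digitized images (108)."""
    def translate(self, speech: str) -> list[DigitizedImage]:
        return [DigitizedImage(i, word) for i, word in enumerate(speech.split())]

@dataclass
class DisplayInterface:
    """Displays the digitized images to the second user in real time (112)."""
    shown: list = field(default_factory=list)
    def show(self, images: list[DigitizedImage]) -> None:
        self.shown.extend(images)

recorder, translator, display = RecordingModule(), ElectronicTranslator(), DisplayInterface()
display.show(translator.translate(recorder.record("hello how are you")))
assert len(display.shown) == 4
```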
Another embodiment of the present invention states that the system further comprises: a web-based interface wirelessly and communicatively coupled to the visually readable display interface and the display user interface unit, wherein said web-based interface is configured to detect a real-time location of the display user interface unit of the second user with respect to said visually readable display interface in order to transmit said digitized framed images to the second user.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
BRIEF DESCRIPTION OF FIGURES
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates a block diagram of components installed in a communication system for deaf.
Figure 2 illustrates a flow diagram of a method of operating a communication system for deaf.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to "an aspect", "another aspect" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises...a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
Embodiments of the present invention will be described below in detail with reference to the accompanying drawings.
Figure 1 illustrates a block diagram of components installed in a communication system for deaf. The communication system for deaf mainly includes the following components. A recording module (102) is coupled to a main server (104) and configured to record a speech enabled input from a first user (106), wherein the recording module (102) comprises an audio/video recording device in order to record the speech enabled input when the first user (106) stands in front of said recording device. An electronic translator (108) is communicatively coupled to the recording module (102), wherein said translator (108) is configured to receive the speech enabled input recorded by the recording module (102) and convert the recorded speech into a plurality of framed digitized images, wherein said digitized images include movements of body parts with respect to a particular part of said recorded speech.
A visually readable display interface (112) is provided and is configured to receive the plurality of suitable digitized images translated from the speech enabled input recorded by the first user (106), wherein the visually readable display (112) interface is connected to a display user interface unit (114) of a second user (110) configured to display said digitized images translated from the speech enabled input, to the second user (110) through a communication channel (116) in real-time. A web-based interface (118) is wirelessly and communicatively coupled to the visually readable display interface (112) and the display user interface unit (114), wherein said web-based interface (118) is configured to detect a real-time location of the display user interface unit (114) of the second user (110) with respect to said visually readable display interface (112) in order to transmit said digitized framed images to the second user (110).
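One way the web-based interface (118) could use the detected real-time location is a simple proximity gate: transmit the framed images only when the second user's display unit is close enough to the display interface. The coordinates, range threshold, and function name below are hypothetical assumptions for illustration.

```python
import math

def within_range(display_xy: tuple, unit_xy: tuple, max_distance: float = 10.0) -> bool:
    """Hypothetical proximity check: the web-based interface transmits the
    digitized framed images only when the display user interface unit is
    within max_distance (in metres) of the visually readable display."""
    dx = display_xy[0] - unit_xy[0]
    dy = display_xy[1] - unit_xy[1]
    return math.hypot(dx, dy) <= max_distance

assert within_range((0, 0), (3, 4))         # 5 m away -> transmit
assert not within_range((0, 0), (30, 40))   # 50 m away -> do not transmit
```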
In an embodiment, the system further comprises a video apparatus for visually observing the images of facial, hand and finger signing motions of the second user and converting the observed signing motions into digital identifiers; a means for translating said digital identifiers of said observed signing motions into words and phrases; and a means for outputting said words and phrases generated by the visual observation of said signing motions in a comprehensible form to a third user. The video apparatus includes a display screen to provide an output of said spoken words and phrases as signing motions on said display screen for viewing by the second user, and said video apparatus includes a microphone and speaker whereby the second user is configured to communicate with another person in the immediate vicinity. Said video apparatus provides an output of said spoken words and phrases as signing motions on said display screen for viewing by the deaf person.
The communication system for deaf can also include a receiver for receiving spoken words and phrases of another person and transmitting them, and a means for translating said spoken words and phrases into a visual form which may be observed by the second user.
In an embodiment, said electronic translator is located at a central station with which said video apparatus and said receiver and outputting means are in communication. Said electronic translator of the system can also include an artificial intelligence (AI) module for interpreting and converting the translated signing motions into words and phrases and into coherent sentences.
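The AI module's role — turning digital identifiers of observed signing motions into a coherent sentence — can be sketched as a lookup followed by sentence assembly. The identifier values and vocabulary below are invented for illustration; a real module would use a trained gesture-recognition model rather than a fixed table.

```python
# Hypothetical mapping from digital identifiers of signing motions to words.
# A production AI module would classify motions with a learned model instead.
GESTURE_VOCAB = {17: "I", 42: "need", 63: "help"}

def motions_to_sentence(identifiers: list[int]) -> str:
    """Translate a sequence of motion identifiers into a coherent sentence,
    flagging identifiers outside the known vocabulary."""
    words = [GESTURE_VOCAB.get(i, "<unknown>") for i in identifiers]
    return " ".join(words) + "."

assert motions_to_sentence([17, 42, 63]) == "I need help."
```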
In an embodiment, the system further comprises a headset module and a communication module configured to communicate with said headset module using wireless two-way handshaking communication, wherein said communication module is configured to use data from one or more first microphones in said headset module to receive sounds in the vicinity of the deaf user and to provide classification of sounds, to provide warnings such that the deaf user is alerted to warning sounds in his or her vicinity, and to provide a display of speech to text such that speech from a person talking to the deaf user is translated into text for the deaf user.
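The sound-classification and warning behaviour described above can be sketched as a filter over labelled microphone events. The label set and function name are assumptions; a real communication module would classify raw audio with a trained model before this alerting step.

```python
# Hypothetical set of sound labels the headset should surface as warnings.
WARNING_SOUNDS = {"siren", "car_horn", "fire_alarm", "dog_bark"}

def classify_and_alert(detected_sounds: list[str]) -> list[str]:
    """Return the subset of detected sound labels that the communication
    module should flag as warnings for the deaf user, sorted for stable output."""
    return sorted(s for s in detected_sounds if s in WARNING_SOUNDS)

assert classify_and_alert(["speech", "siren", "music"]) == ["siren"]
```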
The system can also include a GPS receiver which can be communicatively coupled to the visually readable display interface and the display user interface unit in order to obtain location information from one or more locations of the second user.
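Location information from the GPS receiver is typically a latitude/longitude pair; the distance between two such fixes is the standard haversine great-circle formula. This sketch only shows the distance computation; how the system uses the result is not specified in the disclosure.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two GPS fixes,
    using the haversine formula with a mean Earth radius of 6371 km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude along the equator is roughly 111.2 km.
assert abs(haversine_km(0, 0, 0, 1) - 111.19) < 0.5
```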
Figure 2 illustrates a flow diagram of a method of operating a communication system for deaf. The method mainly includes the following steps.
Step (202) states recording a speech enabled input from a first user to a recording module coupled to a main server, wherein the recording module comprises an audio/video recording device in order to record the speech enabled input when the first user stands in front of said recording device. Step (204) states receiving said recorded speech enabled input from the recording module to an electronic translator communicatively coupled to the recording module. Step (206) states converting the recorded speech into a plurality of framed digitized images, wherein said digitized images include movements of body parts with respect to a particular part of said recorded speech. Step (208) involves receiving said plurality of suitable digitized images, translated from the speech enabled input recorded by the first user, at a visually readable display interface.
Step (210) states detecting a real-time location of a display user interface unit of a second user with respect to the visually readable display, through a web-based interface. Step (212) involves displaying said digitized images translated from the speech enabled input, to the display user interface unit of the second user through a communication channel communicating with said web-based interface in real-time.
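The six steps of the method can be summarized as one sequential function. The function name, coordinate representation, and range threshold are hypothetical; the sketch returns the framed images only when the location check of step (210) succeeds, standing in for the display of step (212).

```python
def assist_communication(speech: str, unit_location: tuple,
                         display_location: tuple, max_range: float = 10.0):
    """Sketch of the Figure 2 flow: record -> translate -> locate -> display.
    Returns the framed images shown to the second user, or None when the
    display user interface unit is out of range of the display interface."""
    recorded = speech                                           # step (202): record input
    frames = [(i, w) for i, w in enumerate(recorded.split())]   # steps (204)-(206): translate
    dx = display_location[0] - unit_location[0]                 # step (210): locate unit
    dy = display_location[1] - unit_location[1]
    if (dx * dx + dy * dy) ** 0.5 > max_range:
        return None
    return frames                                               # steps (208)/(212): display

assert assist_communication("good morning", (0, 0), (1, 1)) == [(0, "good"), (1, "morning")]
```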
The present invention can be beneficial to different fields and departments, such as schools and universities, for enhancing the development of disabled persons (deaf persons); police departments, for tracking and understanding deaf persons while investigating or locating them in strained situations; and different governmental and non-governmental departments and organizations, for developing programs for deaf persons.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.

Claims (5)

WE CLAIM
1. A method of assisting communication between deaf people, the method comprising the steps of:
recording a speech enabled input from a first user to a recording module coupled to a main server, wherein the recording module comprises an audio/video recording device in order to record the speech enabled input when the first user stands in front of said recording device;
receiving said recorded speech enabled input from the recording module to an electronic translator communicatively coupled to the recording module;
converting the recorded speech into a plurality of framed digitized images, wherein said digitized images include movements of body parts with respect to a particular part of said recorded speech;
receiving said plurality of suitable digitized images translated from the speech enabled input recorded by the first user, to a visually readable display interface;
detecting a real-time location of a display user interface unit of a second user with respect to the visually readable display, through a web-based interface;
displaying said digitized images translated from the speech enabled input, to the display user interface unit of the second user through a communication channel communicating with said web-based interface in real-time.
2. A system for assisting communication between deaf people, said system comprising: a recording module (102) coupled to a main server (104) and configured to record a speech enabled input from a first user (106), wherein the recording module (102) comprises an audio/video recording device in order to record the speech enabled input when the first user (106) stands in front of said recording device; an electronic translator (108) communicatively coupled to the recording module (102), wherein said translator (108) is configured to receive the speech enabled input recorded by the recording module (102) and convert the recorded speech into a plurality of framed digitized images, wherein said digitized images include movements of body parts with respect to a particular part of said recorded speech; and a visually readable display interface (112) configured to receive the plurality of suitable digitized images translated from the speech enabled input recorded by the first user (106), wherein the visually readable display interface (112) is connected to a display user interface unit (114) of a second user (110) configured to display said digitized images translated from the speech enabled input, to the second user (110) through a communication channel (116) in real time, wherein the system further comprises: a web-based interface (118) wirelessly and communicatively coupled to the visually readable display interface (112) and the display user interface unit (114), wherein said web-based interface (118) is configured to detect a real-time location of the display user interface unit (114) of the second user (110) with respect to said visually readable display interface (112) in order to transmit said digitized framed images to the second user (110), wherein the system further comprises: a video apparatus for visually observing the images of facial, hand and finger signing motions of the second user and converting the observed signing motions into digital identifiers; a means for translating said digital identifiers of said observed signing motions into words and phrases; and a means for outputting said words and phrases generated by the visual observation of said signing motions in a comprehensible form to a third user.
3. The system for assisting communication between deaf people as claimed in claim 2, wherein the system further comprises:
a receiver for receiving spoken words and phrases of another person and transmitting them, and a means for translating said spoken words and phrases into a visual form which may be observed by the second user; and
a GPS receiver communicatively coupled to the visually readable display interface and the display user interface unit in order to obtain location information from one or more locations of the second user.
4. The communication system for deaf as claimed in claim 2, wherein said electronic translator also includes an artificial intelligence (AI) module for interpreting and converting the translated signing motions into words and phrases and into coherent sentences, wherein the system further comprises: a headset module; and a communication module configured to communicate with said headset module using wireless two-way handshaking communication, wherein said communication module is configured to use data from one or more first microphones in said headset module to receive sounds in the vicinity of the deaf user and to provide classification of sounds, to provide warnings such that the deaf user is alerted to warning sounds in his or her vicinity, and to provide a display of speech to text such that speech from a person talking to the deaf user is translated into text for the deaf user.
5. The communication system for deaf as claimed in claim 2, wherein said video apparatus includes a display screen to provide an output of said spoken words and phrases as signing motions on said display screen for viewing by the second user, and wherein said video apparatus includes a microphone and speaker whereby the second user is configured to communicate with another person in the immediate vicinity, wherein said electronic translator is located at a central station with which said video apparatus and said receiver and outputting means are in communication, and wherein said video apparatus provides an output of said spoken words and phrases as signing motions on said display screen for viewing by the deaf person.
FIG. 1

recording a speech enabled input from a first user to a recording module coupled to a main server (202)
receiving said recorded speech enabled input from the recording module to an electronic translator communicatively coupled to the recording module (204)
converting the recorded speech into a plurality of framed digitized images, wherein said digitized images include movements of body parts with respect to a particular part of said recorded speech (206)
receiving said plurality of suitable digitized images, translated from the speech enabled input recorded by the first user, at a visually readable display interface (208)
detecting a real-time location of a display user interface unit of a second user with respect to the visually readable display, through a web-based interface (210)
displaying said digitized images translated from the speech enabled input to the display user interface unit of the second user through a communication channel communicating with said web-based interface in real-time (212)

FIG. 2
AU2020103854A 2020-12-03 2020-12-03 Method and system for assisting communication for deaf persons Ceased AU2020103854A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020103854A AU2020103854A4 (en) 2020-12-03 2020-12-03 Method and system for assisting communication for deaf persons

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020103854A AU2020103854A4 (en) 2020-12-03 2020-12-03 Method and system for assisting communication for deaf persons

Publications (1)

Publication Number Publication Date
AU2020103854A4 true AU2020103854A4 (en) 2021-02-11

Family

ID=74502346

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020103854A Ceased AU2020103854A4 (en) 2020-12-03 2020-12-03 Method and system for assisting communication for deaf persons

Country Status (1)

Country Link
AU (1) AU2020103854A4 (en)

Similar Documents

Publication Publication Date Title
EP2574220B1 (en) Hand-held communication aid for individuals with auditory, speech and visual impairments
US6240392B1 (en) Communication device and method for deaf and mute persons
USRE41002E1 (en) Telephone for the deaf and method of using same
US20040143430A1 (en) Universal processing system and methods for production of outputs accessible by people with disabilities
US20140171036A1 (en) Method of communication
JP2003345379A6 (en) Audio-video conversion apparatus and method, audio-video conversion program
JP2003345379A (en) Audio video conversion apparatus and method, and audio video conversion program
CN101123630A (en) Communication method and system for voice and text conversion
WO2021006538A1 (en) Avatar visual transformation device expressing text message as v-moji and message transformation method
US20070003025A1 (en) Rybena: an asl-based communication method and system for deaf, mute and hearing impaired persons
AU2020103854A4 (en) Method and system for assisting communication for deaf persons
CN113438300A (en) Network-based accessible communication online communication system and method for hearing-impaired people and normal people
US20230247131A1 (en) Presentation of communications
Ladner Communication technologies for people with sensory disabilities
RU2312646C2 (en) Apparatus for partial substitution of speaking and hearing functions
KR20010107877A (en) Voice Recognized 3D Animation Sign Language Display System
JP2003234842A (en) Real-time handwritten communication system
JP2932027B2 (en) Videophone equipment
KR20140006198A (en) System for providing wireless captioned conversation service
JP2000004304A (en) Speech communication device enabling communication with different means
JP2006139138A (en) Information terminal and base station
KR101778548B1 (en) Conference management method and system of voice understanding and hearing aid supporting for hearing-impaired person
JP2000134301A (en) Speech communication system
JPH09116648A (en) Portable communication equipment
JP2007272260A (en) Automatic translation device

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry