KR20160142079A - Language interpreter, speech synthesis server, speech recognition server, alarm device, lecture local server, and voice call support application for deaf auxiliaries based on the local area wireless communication network - Google Patents
- Publication number
- KR20160142079A (application KR1020150077983A)
- Authority
- KR
- South Korea
- Prior art keywords
- text
- voice
- smart device
- signal
- speech
- Prior art date
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/009—Teaching or communicating with deaf persons
Abstract
Description
The present invention relates to a language interpretation assistance device for the hearing impaired, a speech synthesis server, a speech recognition server, an alarm device for the hearing impaired, a lecture-hall local server for the hearing impaired, and a voice call application for the hearing impaired. Specifically, the present invention provides language interpretation between the text of a hearing-impaired person and the voice of a hearing person through an assistance device connected to a smart device over a short-range wireless communication network.
Many smart devices such as smart phones supporting Bluetooth low energy (BLE) are being launched with low-power, near-field data communication technology. Smart devices enable the execution of various applications using wireless Internet communication. An application installed in a smart device can perform a function of processing a lot of information through communication with a cloud-based server, and a user can receive various application services through the communication.
In a local data communication environment, a separate device separate from the smart device can provide various application services performed through the application installed in the smart device.
As an example of an application, there is an application that converts text and voice input through a smart device into voice and text. This can be provided to users who have difficulty communicating in daily life, and can support voice conversation with the other party. As an example of another application, there is an application that identifies a detected ambient sound and provides an alarm function to the user.
Since the microphones and loudspeakers provided in a smart device are limited in their placement and specifications, voice and sound input is constrained by noise, and an application running on the smart device may produce low-quality converted voice due to echo. In addition, because the services supported by smart device applications are diverse and complex functions run simultaneously on one device, providing only a voice-and-text conversion service or only an alarm service through the smart device is limiting, and inconvenience in portability may result.
Accordingly, it is necessary to develop an auxiliary device that can operate conveniently in conjunction with a smart device while providing portable and high quality language interpretation and alarm services for the hearing impaired, separate from the smart device.
Lectures for the hearing impaired are typically provided through sign language. A system that converts the lecturer's voice to text in real time, and the hearing-impaired person's text to speech in real time, would allow the lecturer and the hearing-impaired person to communicate directly.
Because hearing-impaired people face real difficulty when they must speak with a hearing person over a mobile phone, there is a need for an application that solves this problem and enables both parties to communicate without inconvenience.
According to an embodiment of the present invention, a separate device carried by a hearing-impaired person enables conversation between the hearing-impaired person and a hearing person.
SUMMARY OF THE INVENTION It is an object of the present invention, according to an embodiment, to provide alarms for ambient sounds to a hearing-impaired person through a separate device distinct from a smart device.
According to an embodiment of the present invention, a speech lecture of a lecturer in a lecture hall is provided as text to a hearing-impaired person so that the hearing-impaired person can follow the lecture.
According to an embodiment of the present invention, an application installed in a smart device of a hearing-impaired person enables communication between the hearing-impaired person and a hearing person.
According to an embodiment of the present invention, it is possible to express the emotions of a hearing-impaired person in a conversation between the hearing-impaired person and a hearing person, and to convey the characteristics of the speaker's voice through text.
According to an embodiment of the present invention, a language interpretation assistance device for a hearing-impaired person, which is connected to a smart device through a local area wireless communication network to provide language interpretation for the hearing-impaired person, includes: a microphone; a speaker; a communication module; and a processor coupled to the microphone, the speaker, and the communication module. The processor performs: receiving, from the smart device via the communication module, a first voice signal converted based on a first text input through an application installed in the smart device; upon receiving the first voice signal, deactivating the microphone and activating the speaker; and outputting a first voice through the speaker based on the first voice signal. The first text is input from the user via the smart device, a first text signal based on the first text is transmitted from the smart device to a speech synthesis server by the application, and the first text signal is converted into the first voice signal by the speech synthesis server and transmitted back to the smart device.
According to an embodiment of the present invention, a speech synthesis method of a speech synthesis server includes the steps of: maintaining a database for recording a plurality of emotional expression inputs and a plurality of speech output patterns respectively corresponding to the plurality of emotional expression inputs; Extracting the emotional expression input from user input including emotional expression input and text input; Querying the speech output pattern corresponding to the extracted emotional expression input through the database; And synthesizing the text input into an output speech based on the speech output pattern.
According to an embodiment of the present invention, a text conversion method of a speech recognition server includes the steps of: maintaining a database for recording a plurality of input speech features and a plurality of text output patterns respectively corresponding to the plurality of input speech features; Analyzing an input speech characteristic of the input speech based on the input speech; Querying a text output pattern corresponding to the input speech feature through the database; And converting the input speech into output text based on the text output pattern.
According to an embodiment of the present invention, an alarm device for a hearing-impaired person connected to a smart device through a local area wireless communication network includes: a sensor for sensing a sound; an output unit; a communication module; and a processor coupled to the sensor, the output unit, and the communication module. The processor performs: comparing the magnitude of a first sound signal, based on a first sound sensed by the sensor, with a predetermined magnitude; transmitting the first sound signal to the smart device through the communication module when the magnitude of the first sound signal is larger than the predetermined magnitude; receiving a first alarm signal transmitted from the smart device by an application installed in the smart device based on the first sound signal; and performing a first alarm through the output unit based on the first alarm signal. The application generates a first pattern corresponding to the first sound signal, inquires a first pattern sound corresponding to the first pattern through the memory of the smart device, and transmits the first alarm signal based on the first alarm corresponding to the first pattern sound.
In order to provide a speech lecture to a plurality of hearing-impaired persons within a lecture hall, a lecture-hall local server for the hearing impaired, installed in the lecture hall according to an embodiment of the present invention, comprises: a communication module; and a processor coupled to the communication module. The processor performs: receiving a lecturer voice through a microphone installed at the lecturer's position; transmitting a lecturer speech signal to a speech recognition server through the communication module based on the lecturer voice; receiving a lecture text signal from the speech recognition server through the communication module; and transmitting the lecture text signal to a plurality of smart devices of the plurality of hearing-impaired persons via the communication module. The lecturer speech signal is converted into lecture text by the speech recognition server, the plurality of smart devices are connected to the local server through a local area network, and applications installed in the plurality of smart devices display the lecture text through the displays of the smart devices based on the lecture text signal.
According to an embodiment of the present invention, a voice call support application for a hearing-impaired person providing a call service for the hearing impaired is stored in a memory of a smart device and executed by a processor of the smart device, and performs: receiving a first text from a user via the smart device; transmitting a first text signal from the smart device to a speech synthesis server based on the first text; receiving, through the smart device, a first voice signal converted from the first text signal by the speech synthesis server; and transmitting the first voice signal to a recipient smart device via the smart device. A first voice is output via the recipient smart device based on the first voice signal; the speech synthesis server converts the first text signal into the first voice through a database and transmits the first voice signal based on the first voice.
According to the present invention, it is possible to provide a language interpretation assistance device for a hearing impaired person, which can operate in cooperation with a smart device and is compact and lightweight, and can be easily carried and easy to use.
According to the present invention, it is possible to provide an alarm device for a hearing impaired person who senses a sound around the hearing impaired person through short-range wireless communication with a smart device and provides an alarm for a car horn sound or the like.
According to the present invention, a speech lecture provided through a microphone from a speaker can be converted into text, and the converted text can be provided to a hearing-impaired person through a server installed in the lecture hall.
According to the present invention, an application installed in a smart device of a hearing-impaired person can enable communication between the hearing-impaired person and a hearing person.
FIG. 1 is a schematic view for explaining a process in which a conversation between a hearing-impaired person and a hearing person is performed through a language interpretation assistance device for the hearing impaired according to an embodiment of the present invention.
2 is a diagram showing an example of the configuration of a language interpretation assistance device for a hearing impaired person according to an embodiment of the present invention.
FIG. 3A is a diagram for explaining an example of a process in which a text input from a hearing impaired person is outputted as a voice through a language interpreting aid for a hearing impaired person according to an embodiment of the present invention.
FIG. 3B is a diagram illustrating an example of a process in which a voice input from a hearing person through a language interpretation assistance device for the hearing impaired is displayed as text through the smart device of the hearing-impaired person according to an embodiment of the present invention.
3C is a flowchart for explaining a speech synthesis method of the speech synthesis server according to an embodiment of the present invention.
FIG. 3D is a flowchart for explaining a text conversion method of a speech recognition server according to an embodiment of the present invention.
FIG. 4 is a schematic view for explaining a process of providing an alarm to a hearing-impaired person through an alarm device for a hearing impaired person according to an embodiment of the present invention.
5A is a diagram showing an example of the configuration of an alarm device for a hearing-impaired person according to an embodiment of the present invention.
FIG. 5B is a diagram illustrating an example of a process in which an alarm for the ambient noise is provided to the hearing-impaired person through the alarm device for a hearing impaired person according to an embodiment of the present invention.
FIG. 6 is a schematic diagram for explaining a process in which a speech lecture of a lecturer is provided to a hearing-impaired person by a lecture-hall local server for the hearing impaired according to an embodiment of the present invention.
7A is a diagram showing an example of the configuration of a local server for a hearing impaired person according to an embodiment of the present invention.
FIG. 7B is a diagram for explaining an example of a process in which a voice of a speaker is converted into text and displayed on a smart device of a hearing impaired person according to an embodiment of the present invention.
7C is a diagram for explaining an example of a process in which a text input from a hearing impaired person is converted into speech and output through a speaker according to an embodiment of the present invention.
8 is a schematic view illustrating a voice call support application for a hearing impaired person providing a call service for a hearing impaired person according to an embodiment of the present invention.
FIG. 9A is a diagram illustrating a process in which a text input from a hearing-impaired person is provided to a calling party as a voice through a voice call application for the hearing impaired according to an embodiment of the present invention.
FIG. 9B is a diagram illustrating a process in which a voice input from a hearing person is converted into text and displayed on the smart device of a hearing-impaired person by the voice call application for the hearing impaired according to an embodiment of the present invention.
Various modifications may be made to the embodiments described below. The embodiments are not limited to the specific forms described; they include all modifications, equivalents, and alternatives to them.
The terms used in the examples serve only to illustrate specific embodiments and are not intended to limit them. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" indicate the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which these embodiments belong. Terms defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in the present application.
In the following description with reference to the accompanying drawings, the same components are denoted by the same reference numerals regardless of the figure numbers, and redundant explanations are omitted. In describing the embodiments, detailed descriptions of related art are omitted where they would unnecessarily obscure the gist of the embodiments.
FIG. 1 is a schematic view for explaining a process in which a conversation between a hearing-impaired person and a hearing person is performed through a language interpretation assistance device for the hearing impaired according to an embodiment of the present invention.
Hearing-impaired people use communication methods based on sign language, spoken language, and assistive devices, but they face many inconveniences in everyday life and often have accompanying language or behavioral difficulties. Hearing-impaired persons who have difficulty recognizing sounds due to hearing loss can communicate through sign language, but communicating with people who do not know sign language is very difficult. There is a need for a device that provides real-time communication through speech recognition and conversion techniques, enabling hearing-impaired people to communicate in everyday life on the same level as hearing people.
Referring to FIG. 1, a hearing-
Referring to FIG. 1, the hearing impaired
Referring to FIG. 1, the
As shown in FIG. 1, the
According to one embodiment, the
According to one embodiment, the language interpreting
According to one embodiment, the
The
2 is a diagram showing an example of the configuration of a language interpretation assistance device for a hearing impaired person according to an embodiment of the present invention.
The language
According to one embodiment, the
Although not shown in the figure, according to one embodiment, the
According to one embodiment, the first microphone and the second microphone coincide with the traveling direction of the voice, and the size of the voice signal input through the first microphone and the size of the voice signal input through the second microphone If the difference is greater than the predefined value, the voice input via the
The difference between the magnitude of the first audio signal and the magnitude of the second audio signal may be less than a predefined value if the audio input to the
According to another embodiment, the first microphone and the second microphone may be spaced a predefined distance apart in a predefined direction, and the predefined direction may be orthogonal to the traveling direction of the voice of a person speaking to the hearing-impaired person. In such a case, the first microphone and the second microphone are arranged perpendicular to the direction in which the voice travels toward the hearing-impaired person's language interpretation assistance device.
According to one embodiment, the first microphone and the second microphone are orthogonal to the progress direction of the voice, and the size of the voice signal input through the first microphone and the size of the voice signal input through the second microphone If the difference is smaller than the predefined value, the voice input via the
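The two-microphone decision rule described above can be sketched as follows. This is an illustrative reading of the embodiments, not the patented implementation: the threshold value and the `mics_parallel_to_voice` flag are assumptions introduced for the example.

```python
def voice_directed_at_device(mag_first: float, mag_second: float,
                             threshold: float,
                             mics_parallel_to_voice: bool) -> bool:
    """Decide whether a voice is aimed at the device from two microphone levels.

    mag_first / mag_second: signal magnitudes measured at the two microphones.
    mics_parallel_to_voice: True when the microphones are spaced along the
    voice's direction of travel; False when they are spaced orthogonally.
    """
    diff = abs(mag_first - mag_second)
    if mics_parallel_to_voice:
        # Mics in line with the voice: a large level difference suggests the
        # speaker is facing the device, so accept the input.
        return diff > threshold
    # Mics orthogonal to the voice: near-equal levels suggest the speaker is
    # facing the device, so accept the input.
    return diff < threshold
```

Either arrangement yields the same accept/reject decision; which comparison applies simply depends on how the two microphones are mounted relative to the expected direction of speech.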
Hereinafter, with reference to FIG. 3A, a description will be given of a process in which a text input from the hearing-
FIG. 3A is a diagram for explaining an example of a process in which a text input from a hearing impaired person is outputted as a voice through a language interpreting aid for a hearing impaired person according to an embodiment of the present invention.
Although not shown in FIG. 3A, the
The
The application installed in the
The application may send 304 the first text signal to the
The first text signal transmitted from the
The
The application installed in the
When the
The
FIG. 3B is a diagram illustrating an example of a process in which a voice input from a hearing person through a language interpretation assistance device for the hearing impaired is displayed as text through the smart device of the hearing-impaired person according to an embodiment of the present invention.
The
The
The
The application installed in the
The
The
An application installed in the
Although not shown in FIGS. 3A and 3B, the
According to one embodiment, the boilerplate text may be input via the
The
The boilerplate speech signal transmitted from the
The
3C is a flowchart for explaining a speech synthesis method of the speech synthesis server according to an embodiment of the present invention.
Referring to FIG. 3C, in accordance with one embodiment, the speech synthesis server may maintain a database that records a plurality of emotional expression inputs and a plurality of speech output patterns respectively corresponding to the emotional expression inputs.
The speech synthesis server may extract the emotional expression input from the user input including the emotional expression input and the text input (319). According to one embodiment, user input may be entered from a user via a smart device. According to one embodiment, the language interpretation aid for the hearing impaired may be connected to the smart device via a short-range wireless communication network, and the output voice may be output through a language interpreting aid for the hearing impaired.
The speech synthesis server may inquire the speech output pattern corresponding to the extracted emotional expression input through the database (320).
The speech synthesis server may synthesize the text input into the output speech based on the speech output pattern (321).
According to one embodiment, the hearing-impaired person may enter user input via the smart device, in which case emoticons or special characters may be entered before or after the text. For example, when a user input of "Why do you keep breaking our appointments? (emoticon meaning an angry state)" is entered via the smart device, the output voice is synthesized based on the emoticon in the user input: the pitch, speed, and strength of the sound are adjusted to express the user's feeling of anger.
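The steps above (extract the emotional marker, query its output pattern, synthesize) can be sketched as follows. The `:angry:`-style marker syntax, the pattern table, and the returned dictionary are all assumptions made for illustration; a real system would drive a TTS engine with the queried prosody settings.

```python
import re

# Hypothetical pattern table: emotional marker -> prosody settings.
PATTERN_DB = {
    ":angry:": {"pitch": "high", "speed": "fast", "strength": "strong"},
    ":sad:":   {"pitch": "low",  "speed": "slow", "strength": "soft"},
}
DEFAULT_PATTERN = {"pitch": "neutral", "speed": "normal", "strength": "normal"}

def extract_emotion(user_input: str):
    """Split the user input into plain text and an optional emotion marker."""
    match = re.search(r":\w+:", user_input)
    marker = match.group(0) if match else None
    text = re.sub(r":\w+:", "", user_input).strip()
    return text, marker

def synthesize(user_input: str) -> dict:
    """Query the speech output pattern for the marker and pair it with the text."""
    text, marker = extract_emotion(user_input)
    pattern = PATTERN_DB.get(marker, DEFAULT_PATTERN)
    return {"text": text, "prosody": pattern}  # stand-in for the actual TTS call
```

Input with no marker falls through to the neutral default pattern, matching the flow in FIG. 3C where the pattern lookup is driven entirely by the extracted emotional expression.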
According to one embodiment, the speech synthesis method described with reference to FIG. 3C may be implemented through an application of a smart device. The speech synthesizing method operating through the application is the same as the embodiment described with reference to FIG. 3C, and thus the detailed description thereof will be omitted. According to one embodiment, the speech synthesis server may include a database, a processor, and a communication module, and the speech synthesis method described with reference to FIG. 3C may be performed through a processor.
FIG. 3D is a flowchart for explaining a text conversion method of a speech recognition server according to an embodiment of the present invention.
Referring to FIG. 3D, in accordance with an embodiment, the speech recognition server may maintain (322) a database that records a plurality of input speech features and a plurality of text output patterns respectively corresponding to the input speech features. The input voice characteristics may include the loudness of the input voice, the speed of the input voice, and the gender of the speaker. The text output pattern may include the size of the output text, the spacing of the output text, and the color of the output text.
The speech recognition server may analyze the input speech characteristics based on the input speech (323). According to one embodiment, the input voice may be entered directly through a smart device, or through a language interpretation assistance device for the hearing impaired connected to the smart device; in the latter case the input voice is transmitted to the smart device and from the smart device to the speech recognition server. According to one embodiment, the loudness, speed, and gender of the input voice can be analyzed.
The speech recognition server may query the text output pattern corresponding to the input speech characteristic through the database (324).
The speech recognition server may convert the input speech to output text based on the text output pattern (325).
According to one embodiment, when the input voice is loud, it can be converted into output text with a large font size. When the input voice is fast, the output text can be given narrow spacing; when it is slow, wide spacing. According to one embodiment, the gender of the input voice can be analyzed and the color of the output text changed accordingly. When a plurality of input voices are received, the genders of the voices are analyzed so that the color of the output text distinguishes the speakers (for example, blue for male and red for female), and the output text is displayed via the smart device's display.
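The feature-to-pattern mapping described above can be sketched as a simple lookup. The numeric cut-offs (70 dB, 160 words per minute) and the color choices are illustrative assumptions; the patent specifies only the kinds of features and pattern attributes involved.

```python
def text_output_pattern(volume_db: float, words_per_minute: float,
                        gender: str) -> dict:
    """Map measured voice features to a display pattern for the converted text."""
    return {
        # Loud speech -> larger text.
        "size": "large" if volume_db > 70 else "normal",
        # Fast speech -> narrow spacing; slow speech -> wide spacing.
        "spacing": "narrow" if words_per_minute > 160 else "wide",
        # Distinct colors per analyzed gender so multiple speakers are
        # distinguishable on the display.
        "color": {"male": "blue", "female": "red"}.get(gender, "black"),
    }
```

The returned pattern would then style the output text rendered on the smart device's display, giving the hearing-impaired reader cues about how the words were spoken.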
According to one embodiment, the text conversion method described with reference to FIG. 3D may be implemented through an application of a smart device. The text conversion method operating through the application is the same as the embodiment described with reference to FIG. 3D, so its detailed description is omitted. According to one embodiment, the speech recognition server may include a database, a processor, and a communication module, and the text conversion method described with reference to FIG. 3D may be performed through the processor.
FIG. 4 is a schematic view for explaining a process of providing an alarm to a hearing-impaired person through an alarm device for a hearing impaired person according to an embodiment of the present invention.
Referring to FIG. 4, the hearing impaired
As shown in FIG. 4, the
According to one embodiment, the hearing
According to one embodiment, the application of the
According to one embodiment, the hearing
According to one embodiment, the
The
5A is a diagram showing an example of the configuration of an alarm device for a hearing-impaired person according to an embodiment of the present invention.
The
According to one embodiment, the
FIG. 5B is a diagram illustrating an example of a process in which an alarm for the ambient noise is provided to the hearing-impaired person through the alarm device for a hearing impaired person according to an embodiment of the present invention.
Referring to FIG. 5B, the
The
When the size of the first sound signal is larger than a predetermined size, the
Although not shown, according to one embodiment, the
The application installed in the
The application can query the first pattern sound corresponding to the first pattern through the memory of the smart device 402 (510). The memory of the
The application can inquire the first alarm corresponding to the first pattern sound (car horn sound) through the memory of the smart device 402 (511). In the memory of the
The application may send 512 a first alarm signal from the
The
The
The generated update information may be transmitted to the smart device 402 (516). According to one embodiment, the update information may be transmitted in response to an update request by the application of the
The application may receive the update information from the
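The alarm flow described above — a device-side threshold check followed by an application-side pattern and alarm lookup — can be sketched as follows. The threshold value, pattern names, and alarm types are hypothetical placeholders; the patent leaves these unspecified and keeps the two lookup tables in the smart device's memory.

```python
# Hypothetical values: the embodiments do not fix a threshold or pattern table.
THRESHOLD_DB = 60.0

# Application-side tables (in the patent, stored in the smart device's memory):
PATTERN_SOUNDS = {"short-burst": "car horn", "continuous-tone": "fire alarm"}
ALARMS = {"car horn": "strong vibration", "fire alarm": "vibration + LED flash"}

def handle_sensed_sound(magnitude_db: float, pattern: str):
    """Threshold check (alarm device), then pattern/alarm lookup (application)."""
    if magnitude_db <= THRESHOLD_DB:
        return None  # too quiet: the device does not forward the sound signal
    sound = PATTERN_SOUNDS.get(pattern)
    if sound is None:
        return "generic alert"  # unknown pattern: fall back to a default alarm
    return ALARMS[sound]
```

An update mechanism like the one in steps 515–517 would amount to replacing entries in `PATTERN_SOUNDS` and `ALARMS` when the server pushes new pattern data to the application.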
FIG. 6 is a schematic diagram for explaining a process in which a speech lecture of a lecturer is provided to a hearing-impaired person by a lecture-hall local server for the hearing impaired according to an embodiment of the present invention.
Referring to FIG. 6, a
According to another embodiment, text input via the
7A is a diagram showing an example of the configuration of a local server for a hearing impaired person according to an embodiment of the present invention.
Referring to FIG. 7A, the lecture
FIG. 7B is a diagram for explaining an example of a process in which a voice of a speaker is converted into text and displayed on a smart device of a hearing impaired person according to an embodiment of the present invention.
The
The
The
The
The
An application installed in a plurality of
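The lecture-hall relay described above can be sketched as a minimal server loop: the lecturer's voice is handed to a recognizer, and the resulting lecture text is pushed to every connected smart device. The recognizer callable and the list-based device "displays" are stand-ins invented for the example, not the patented implementation.

```python
class LectureLocalServer:
    """Minimal sketch: relay the lecturer's voice as text to connected devices."""

    def __init__(self, recognize):
        self.recognize = recognize  # stand-in for the speech recognition server
        self.devices = []           # smart devices joined over the local network

    def connect(self, device):
        """Register a hearing-impaired person's smart device (here: a list
        standing in for the device's text display)."""
        self.devices.append(device)

    def on_lecturer_voice(self, voice_signal) -> str:
        text = self.recognize(voice_signal)  # lecturer voice -> lecture text
        for device in self.devices:
            device.append(text)              # each app displays the text
        return text
```

The reverse path in FIG. 7C — question text from a smart device synthesized to speech and played through the hall's loudspeaker — would be the mirror image of this loop, with the speech synthesis server in place of the recognizer.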
7C is a diagram for explaining an example of a process in which a text input from a hearing impaired person is converted into speech and output through a speaker according to an embodiment of the present invention.
Question text can be input by the application from a hearing-impaired person through any of the plurality of smart devices 605 (709). The question text is the string that the hearing-impaired person enters through the smart device via the application.
The application may send 710 the question text signal from the smart device where the question text is entered based on the question text to the lecture
The
The
The
8 is a schematic view illustrating a voice call support application for a hearing impaired person providing a call service for a hearing impaired person according to an embodiment of the present invention.
It is difficult for hearing-impaired people to make voice calls smoothly. A hearing-impaired person communicates by entering a brief sentence into the communication terminal and delivering it to the other party, or by conveying their intention in sign language through a video call. When a person with a hearing impairment makes a call, a video sign-language service or a sign-language interpretation call center is used; in such cases there are many restrictions on use because communication protocols differ between providers.
The voice
Referring to FIG. 8, the text input from the hearing-impaired person via the smart device is transmitted to the
Referring to FIG. 8, the voice "Where is the address" input from the hearing person 805 to the
According to one embodiment, a voice
FIG. 9A is a diagram illustrating a process in which a text input from a hearing-impaired person is provided to a calling party as a voice through a voice call application for the hearing impaired according to an embodiment of the present invention.
Referring to FIG. 9A, a first text can be received from a hearing-impaired person through a smart device by a voice
The voice
The
The
The hearing impaired voice
Based on the first voice signal transmitted from the voice
FIG. 9B is a diagram illustrating a process in which a voice input from a hearing person is converted into text and displayed on the smart device of a hearing-impaired person by the voice call application for the hearing impaired according to an embodiment of the present invention.
Referring to FIG. 9B, a second voice may be input via the recipient smart device 804 (907). The recipient
The hearing impaired voice
The voice
The
The voice
The voice
Although not shown, the voice
According to one embodiment, the voice
The voice
The method according to an embodiment of the present invention may be implemented in the form of program instructions executable through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments. For example, appropriate results may be achieved even if the described techniques are performed in a different order than the described methods, and/or if components of the described systems, structures, devices, and circuits are combined in a different form, or are replaced or substituted by other components or equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.
102: Language Interpreter for the Deaf
103: Smart Devices
104:
Claims (21)
A microphone;
A speaker;
A communication module; And
A processor coupled to the microphone, the speaker, and the communication module,
The processor comprising:
Receiving a first voice signal, which is converted based on a first text input by an application installed in the smart device, from the smart device via the communication module;
Upon receiving the first voice signal, deactivating the microphone and activating the speaker; And
Outputting a first voice through the speaker based on the first voice signal;
Wherein the first text is input from the user via the smart device by the application, the first text signal is sent from the smart device to the speech synthesis server by the application based on the first text, and the first text signal is converted into the first voice signal by the speech synthesis server and transmitted to the smart device,
Language Interpreting Device for the Deaf.
Further comprising an action button,
Wherein the processor is coupled to the action button,
The processor comprising:
Receiving, from a user, a first input for driving the application via the action button; And
Transmitting a first command to the smart device via the communication module based on the first input
Wherein the application is activated by the first instruction,
Language Interpreting Device for the Deaf.
The processor comprising:
Activating the microphone and deactivating the speaker;
Receiving a second voice through the microphone; And
Transmitting a second voice signal to the smart device via the communication module based on the second voice
Wherein the second voice signal is transmitted to the voice recognition server by the application, the second voice signal is converted into a second text signal by the voice recognition server, the second text signal is transmitted to the smart device, and the second text is displayed on the display of the smart device by the application based on the second text signal,
Language Interpreting Device for the Deaf.
The processor comprising:
Receiving, via the communication module from the smart device, a boilerplate voice signal that is converted based on the boilerplate text input by the application;
Disabling the microphone and activating the speaker upon receiving the boilerplate voice signal; And
Outputting a boilerplate voice through the speaker based on the boilerplate voice signal;
Wherein the boilerplate text is input from the user via the smart device and the boilerplate text signal is sent from the smart device to the speech synthesis server by the application based on the boilerplate text,
The voice synthesizing server retrieves the boilerplate voice corresponding to the boilerplate text through a database and transmits the boilerplate voice signal to the smart device based on the retrieved boilerplate voice.
Language Interpreting Device for the Deaf.
A plurality of boilerplate texts and a plurality of boilerplate voices corresponding respectively to the plurality of boilerplate texts are recorded and held in the database,
Wherein the boilerplate voice is searched from among the plurality of boilerplate voices,
Language Interpreting Device for the Deaf.
Extracting the emotional expression input from user input including emotional expression input and text input;
Querying the speech output pattern corresponding to the extracted emotional expression input through the database; And
Synthesizing the text input into an output speech based on the speech output pattern
A speech synthesis method of a speech synthesis server.
Wherein the user input is input from a user via a smart device and the output voice is output through a language interpretation assistance device for a hearing impaired person connected to the smart device via a local area wireless communication network,
Wherein the emotional expression input is input before or after the text input and includes an emoticon, a special character, or a boilerplate phrase,
Wherein the speech output pattern includes a pitch, a length, and an intensity of the output speech,
A speech synthesis method of a speech synthesis server.
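The emotion-aware synthesis method above can be sketched as extraction plus table lookup. This is a hedged illustration: the marker set, pattern table, and field values are assumptions; only the pattern fields (pitch, length, intensity) come from the claim.

```python
EMOTION_PATTERNS = {
    ":)": {"pitch": "high", "length": "short", "intensity": "soft"},
    "!!": {"pitch": "high", "length": "short", "intensity": "strong"},
}
DEFAULT_PATTERN = {"pitch": "mid", "length": "normal", "intensity": "normal"}

def split_emotion(user_input: str):
    """Extract an emotional-expression marker placed before or after the text."""
    for marker in EMOTION_PATTERNS:
        if user_input.startswith(marker):
            return marker, user_input[len(marker):].strip()
        if user_input.endswith(marker):
            return marker, user_input[:-len(marker)].strip()
    return None, user_input

def synthesize_with_emotion(user_input: str) -> dict:
    marker, text = split_emotion(user_input)
    # Query the output pattern for the extracted marker; fall back to default.
    pattern = EMOTION_PATTERNS.get(marker, DEFAULT_PATTERN)
    return {"text": text, "pattern": pattern}

result = synthesize_with_emotion("Good morning !!")
```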
Analyzing an input speech characteristic of the input speech based on the input speech;
Querying a text output pattern corresponding to the input speech feature through the database; And
Converting the input speech into output text based on the text output pattern
Text conversion method of speech recognition server.
Wherein the input voice is input through a language interpretation assistance device for a hearing-impaired person connected to a smart device via a local area wireless communication network and transmitted to the smart device,
Wherein the output text is displayed through a display of the smart device,
Wherein the input voice feature includes a loudness of the input voice, a speed of the input voice, and a gender of the input voice,
Wherein the text output pattern includes a size of the output text, an interval of the output text, and a color of the output text.
Text conversion method of speech recognition server.
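The feature-to-pattern mapping in this method can be sketched as below. The feature names (loudness, speed, gender) and pattern fields (size, spacing, color) come from the claim; the thresholds and the specific gender-to-color mapping are illustrative assumptions.

```python
def analyze_features(samples: list[float], words_per_sec: float,
                     gender: str) -> dict:
    # Peak amplitude as a crude loudness proxy; speed and gender given directly.
    return {
        "loudness": max(abs(s) for s in samples),
        "speed": words_per_sec,
        "gender": gender,
    }

def text_output_pattern(f: dict) -> dict:
    # Map each input-voice feature to one text-output attribute.
    return {
        "size": "large" if f["loudness"] > 0.5 else "normal",
        "spacing": "wide" if f["speed"] > 3.0 else "normal",
        "color": "blue" if f["gender"] == "male" else "red",
    }

pattern = text_output_pattern(analyze_features([0.1, -0.8, 0.3], 2.0, "female"))
```

The idea is that the displayed text conveys not just the words but how they were spoken, e.g. shouting renders as large text.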
A sensor for sensing sound;
An output section;
Communication module; And
And a processor coupled to the sensor, the output section, and the communication module,
The processor comprising:
Comparing a magnitude of the first sound signal with a predetermined magnitude based on a first sound sensed through the sensor;
Transmitting the first sound signal to the smart device through the communication module when the size of the first sound signal is larger than the predetermined size;
Receiving a first alarm signal transmitted from the smart device by an application installed in the smart device based on the first sound signal; And
Performing a first alarm through the output based on the first alarm signal
The application generates a first pattern corresponding to the first sound signal based on the first sound signal, queries a first pattern sound corresponding to the first pattern through the memory of the smart device, and transmits the first alarm signal based on the first alarm corresponding to the first pattern sound,
Alarm device for the hearing impaired.
The output comprising a vibrator, an LED, and an LCD,
Alarm device for the hearing impaired.
Wherein the application outputs a description of the first pattern sound through a display of the smart device,
Alarm device for the hearing impaired.
Wherein the memory stores a plurality of pattern sounds and a plurality of patterns respectively corresponding to the plurality of pattern sounds,
Wherein the first pattern sound is searched among the plurality of pattern sounds recorded in the memory,
Wherein the plurality of pattern sounds include at least one of a car horn sound, an animal sound, a baby sound, a beep sound, a bell sound, and a siren sound.
Alarm device for the hearing impaired.
Wherein the plurality of alarms corresponding to the plurality of pattern sounds are recorded in the memory,
Wherein the first alarm is one of the plurality of alarms recorded in the memory,
Alarm device for the hearing impaired.
The plurality of alarms including at least one of a vibration and a blink of a lamp,
Wherein the period, length, intensity, and frequency of the vibration and the flicker correspond to the plurality of alarms, respectively,
Alarm device for the hearing impaired.
The smart device is connected to an update server through a communication network,
The application comprises:
Receiving update information from the update server, updating the plurality of pattern sounds and the plurality of patterns based on the update information,
Wherein the update server maintains a database for recording a plurality of pattern sounds and a plurality of patterns, generates and transmits the update information through the database,
Wherein the update information is transmitted in response to an update request by the application, or is transmitted in real time or periodically,
Alarm device for the hearing impaired.
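The update path in the claim above amounts to a request-and-merge over the pattern tables. The sketch below is an assumption-laden illustration: the delta-only update policy and all names are hypothetical, not specified by the patent.

```python
def request_update(server_db: dict, local_db: dict) -> dict:
    # The update server sends only entries the device does not yet hold.
    return {k: v for k, v in server_db.items() if k not in local_db}

def apply_update(local_db: dict, update_info: dict) -> dict:
    # Merge new pattern sounds and patterns into the local tables.
    merged = dict(local_db)
    merged.update(update_info)
    return merged

server_db = {"siren": (1, 1, 1), "doorbell": (1, 0, 1)}
local_db = {"siren": (1, 1, 1)}
update_info = request_update(server_db, local_db)
local_db = apply_update(local_db, update_info)
```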
Communication module; And
A processor coupled to the communication module,
The processor comprising:
Receiving a speaker voice through a microphone installed in the lecture hall;
Transmitting a speaker speech signal to the speech recognition server through the communication module based on the speaker speech;
Receiving a lecture text signal from the speech recognition server through the communication module; And
Transmitting the lecture text signal through the communication module to a plurality of smart devices of the plurality of hearing impaired persons
Wherein the speaker speech signal is converted into a lecture text by the speech recognition server, and the lecture text signal is transmitted from the speech recognition server based on the lecture text,
Wherein the plurality of smart devices are connected to the local server via a local area network and a lecture text is displayed on the display of the plurality of smart devices based on the lecture text signal by an application installed in the plurality of smart devices,
Local server for the hearing impaired.
The processor comprising:
Receiving a question text signal from any one of the plurality of smart devices through the communication module;
Transmitting the question text signal to the speech synthesis server through the communication module;
Receiving a question voice signal from the speech synthesis server through the communication module; And
Outputting a question voice through a speaker installed in the lecture hall based on the question voice signal
Lt; / RTI >
Wherein the question text is input via any one of the plurality of smart devices by the application and the question text signal is transmitted by the application from any one of the plurality of smart devices based on the question text,
Wherein the question text signal is converted into the question voice by the voice synthesis server and the question voice signal is transmitted from the voice synthesis server based on the question voice,
Local server for the hearing impaired.
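The two lecture-hall flows in these claims (lecture broadcast out, question voice back) can be sketched as one relay class. This is a hedged mock: the STT/TTS servers are stubbed as callables and all names are illustrative.

```python
class LectureLocalServer:
    def __init__(self, stt, tts, device_displays: list):
        self.stt = stt                    # speech-recognition server stub
        self.tts = tts                    # speech-synthesis server stub
        self.device_displays = device_displays  # connected smart devices

    def relay_lecture(self, speaker_voice: bytes) -> str:
        # Speaker voice -> recognition server -> lecture text,
        # broadcast to every connected smart device's display.
        lecture_text = self.stt(speaker_voice)
        for display in self.device_displays:
            display.append(lecture_text)
        return lecture_text

    def relay_question(self, question_text: str) -> bytes:
        # Question text -> synthesis server -> question voice,
        # to be played over the lecture hall speaker.
        return self.tts(question_text)

server = LectureLocalServer(
    stt=lambda v: v.decode("utf-8"),
    tts=lambda t: t.encode("utf-8"),
    device_displays=[[], []],
)
text = server.relay_lecture(b"Today we cover BLE.")
audio = server.relay_question("Could you repeat that?")
```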
The application is stored in a memory of the smart device and executed by a processor of the smart device,
The application comprises:
Receiving a first text from a user via the smart device;
Transmitting a first text signal from the smart device to a speech synthesis server based on the first text;
Receiving, through the smart device, the first voice signal converted from the first text signal by the speech synthesis server; And
Transmitting the first voice signal to the recipient smart device via the smart device
Based on the first voice signal, a first voice is output via the recipient smart device,
Wherein the speech synthesis server converts the first text signal into the first speech through a database and transmits the first speech signal based on the first speech,
Voice call support application for the hearing impaired.
The application comprises:
Receiving a second voice signal from the recipient smart device;
Transmitting the second voice signal to the voice recognition server through the smart device;
Receiving, through the smart device, the second text signal converted from the second voice signal by the speech recognition server; And
Displaying a second text on the display of the smart device based on the second text signal,
Wherein the speech recognition server converts the second voice signal into the second text via a database and transmits the second text signal based on the second text,
Voice call support application for the hearing impaired.
The application comprises:
Receiving a boilerplate text from a user via the smart device;
Transmitting, from the smart device, a boilerplate text signal to the speech synthesis server based on the boilerplate text;
Receiving, through the smart device, the second voice signal retrieved and converted by the speech synthesis server based on the boilerplate text signal; And
Transmitting the second voice signal to the recipient smart device via the smart device
A second voice is output via the recipient smart device based on the second voice signal,
Wherein the database stores a plurality of boilerplate texts and a plurality of boilerplate voices corresponding respectively to the plurality of boilerplate texts, and the voice synthesis server retrieves the second voice corresponding to the boilerplate text from among the plurality of boilerplate voices and transmits the second voice signal based on the second voice,
Voice call support application for the hearing impaired.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150077983A KR101846218B1 (en) | 2015-06-02 | 2015-06-02 | Language interpreter, speech synthesis server, speech recognition server, alarm device, lecture local server, and voice call support application for deaf auxiliaries based on the local area wireless communication network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150077983A KR101846218B1 (en) | 2015-06-02 | 2015-06-02 | Language interpreter, speech synthesis server, speech recognition server, alarm device, lecture local server, and voice call support application for deaf auxiliaries based on the local area wireless communication network |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20160142079A true KR20160142079A (en) | 2016-12-12 |
KR101846218B1 KR101846218B1 (en) | 2018-05-18 |
Family
ID=57574073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150077983A KR101846218B1 (en) | 2015-06-02 | 2015-06-02 | Language interpreter, speech synthesis server, speech recognition server, alarm device, lecture local server, and voice call support application for deaf auxiliaries based on the local area wireless communication network |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101846218B1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200049404A (en) * | 2018-10-31 | 2020-05-08 | 강병진 | System and Method for Providing Simultaneous Interpretation Service for Disabled Person |
US11580985B2 (en) | 2020-06-19 | 2023-02-14 | Sorenson Ip Holdings, Llc | Transcription of communications |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200083905A (en) | 2019-01-01 | 2020-07-09 | 보리 주식회사 | System and method to interpret and transmit speech information |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006133433A (en) * | 2004-11-05 | 2006-05-25 | Fuji Photo Film Co Ltd | Voice-to-character conversion system, and portable terminal device, and conversion server and control methods of them |
JP2007257562A (en) * | 2006-03-27 | 2007-10-04 | Pentax Corp | Sound file upload system |
JP2015100054A (en) * | 2013-11-20 | 2015-05-28 | 日本電信電話株式会社 | Voice communication system, voice communication method and program |
- 2015-06-02: KR1020150077983A (patent KR101846218B1), active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
KR101846218B1 (en) | 2018-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2842055B1 (en) | Instant translation system | |
KR101585793B1 (en) | Smart Hearing Aid Device | |
US20170243582A1 (en) | Hearing assistance with automated speech transcription | |
US8082152B2 (en) | Device for communication for persons with speech and/or hearing handicap | |
US20100250253A1 (en) | Context aware, speech-controlled interface and system | |
CN104604250A (en) | Smart notification tool for headphones | |
US11893997B2 (en) | Audio signal processing for automatic transcription using ear-wearable device | |
US20180374483A1 (en) | Interpreting assistant system | |
Dhanjal et al. | Tools and techniques of assistive technology for hearing impaired people | |
KR101846218B1 (en) | Language interpreter, speech synthesis server, speech recognition server, alarm device, lecture local server, and voice call support application for deaf auxiliaries based on the local area wireless communication network | |
JP2014204429A (en) | Voice dialogue method and apparatus using wired/wireless communication network | |
KR101017421B1 (en) | communication system for deaf person | |
WO2019228329A1 (en) | Personal hearing device, external sound processing device, and related computer program product | |
WO2022001170A1 (en) | Call prompting method, call device, readable storage medium and system on chip | |
US20230260534A1 (en) | Smart glass interface for impaired users or users with disabilities | |
KR101609585B1 (en) | Mobile terminal for hearing impaired person | |
JP3165585U (en) | Speech synthesizer | |
KR102000282B1 (en) | Conversation support device for performing auditory function assistance | |
US10936830B2 (en) | Interpreting assistant system | |
CN106125922A (en) | A kind of sign language and spoken voice image information AC system | |
US20050129250A1 (en) | Virtual assistant and method for providing audible information to a user | |
KR20150059460A (en) | Lip Reading Method in Smart Phone | |
KR101522291B1 (en) | Auxiliary Aid Apparatus of Hearing for Coping to with External Environmental Situation and Method for Controlling Operation of the Same Associated with Multimedia Device | |
KR102496398B1 (en) | A voice-to-text conversion device paired with a user device and method therefor | |
JP2000184077A (en) | Intercom system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |