KR20160142079A - Language interpreter, speech synthesis server, speech recognition server, alarm device, lecture local server, and voice call support application for deaf auxiliaries based on the local area wireless communication network - Google Patents


Info

Publication number
KR20160142079A
KR20160142079A
Authority
KR
South Korea
Prior art keywords
text
voice
smart device
signal
speech
Prior art date
Application number
KR1020150077983A
Other languages
Korean (ko)
Other versions
KR101846218B1 (en
Inventor
김병규
Original Assignee
(주)에스앤아이스퀘어
Priority date
Filing date
Publication date
Application filed by (주)에스앤아이스퀘어 filed Critical (주)에스앤아이스퀘어
Priority to KR1020150077983A priority Critical patent/KR101846218B1/en
Publication of KR20160142079A publication Critical patent/KR20160142079A/en
Application granted granted Critical
Publication of KR101846218B1 publication Critical patent/KR101846218B1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00: Teaching, or communicating with, the blind, deaf or mute
    • G09B 21/009: Teaching or communicating with deaf persons

Abstract

The present invention relates to a language interpretation assist device for the deaf, a speech synthesis server, a speech recognition server, an alarm device, a lecture-hall local server, and a voice call support application, based on a local area wireless communication network, which can ease the communication difficulties of the deaf using a smart device. According to the present invention, a language interpretation assist device for the deaf can be provided which operates in connection with a smart device, is compact and lightweight for easy carrying, and improves user convenience. An alarm device for the deaf can be provided which senses sound around the deaf person through local wireless communication with a smart device and provides an alarm for sounds such as a vehicle's horn. A voice lecture delivered through a microphone by a lecturer can be converted into text, and the text can be provided to the deaf through a server installed in the lecture hall. A call between a deaf person and a hearing person can be supported through an application installed in the smart device of the deaf person, and emotion recognition and expression between the deaf person and the hearing person can be enabled.

Description

{LANGUAGE INTERPRETER, SPEECH SYNTHESIS SERVER, SPEECH RECOGNITION SERVER, ALARM DEVICE, LECTURE LOCAL SERVER, AND VOICE CALL SUPPORT APPLICATION FOR DEAF AUXILIARIES BASED ON THE LOCAL AREA WIRELESS COMMUNICATION NETWORK}

The present invention relates to a language interpretation assist device for the hearing impaired, a speech synthesis server, a speech recognition server, an alarm device for the hearing impaired, a lecture-hall local server for the hearing impaired, and a voice call application for the hearing impaired. Specifically, the present invention provides language interpretation between the text of a hearing-impaired person and the voice of a hearing person through an assist device or the like connected to a smart device over a short-range wireless communication network.

Many smart devices, such as smartphones supporting Bluetooth Low Energy (BLE), a low-power short-range data communication technology, are being launched. Smart devices enable the execution of various applications using wireless Internet communication. An application installed in a smart device can process large amounts of information through communication with a cloud-based server, and a user can receive various application services through this communication.

In a local data communication environment, a device separate from the smart device can provide the various application services performed through an application installed in the smart device.

As one example, there is an application that converts text input through a smart device into voice, and voice into text. This can support voice conversation for users who have difficulty communicating in daily life. As another example, there is an application that identifies a detected ambient sound and provides an alarm to the user.

Since the microphone and loudspeaker built into a smart device are limited in their location and specifications, voice and sound input is hampered by noise, and an application running on the smart device may produce converted voice of low quality due to echo. In addition, because the services supported by smart device applications are diverse and complex functions run simultaneously on a single device, providing only a voice-and-text conversion service or an alarm service through the smart device alone is limiting, and portability can suffer.

Accordingly, it is necessary to develop an auxiliary device that can operate conveniently in conjunction with a smart device while providing portable and high quality language interpretation and alarm services for the hearing impaired, separate from the smart device.

Lectures for the hearing impaired are currently provided through sign language. A system is needed that can convert a lecturer's voice to text in real time, and a hearing-impaired person's text to speech in real time, so that the lecturer and the hearing-impaired person can communicate.

Because hearing-impaired people have real difficulty talking with a hearing person through a mobile phone, there is a need for an application that solves this problem and enables both parties to communicate without inconvenience.

According to an embodiment of the present invention, a separate device carried by a hearing-impaired person enables conversation between the hearing-impaired person and a hearing person.

SUMMARY OF THE INVENTION According to an embodiment of the present invention, it is an object of the present invention to provide alarms for ambient sounds to a hearing-impaired person through a device separate from the smart device.

According to an embodiment of the present invention, a lecturer's voice lecture in a lecture hall is provided as text to a hearing-impaired person so that the hearing-impaired person can attend the lecture.

According to an embodiment of the present invention, an application installed in the smart device of a deaf person enables communication between the hearing-impaired person and a hearing person.

According to an embodiment of the present invention, it is possible to express the emotions of a hearing-impaired person in a conversation between the hearing-impaired person and a hearing person, and to identify the characteristics of the hearing person's voice through text.

According to an embodiment of the present invention, a language interpretation assist device for a hearing-impaired person, which is connected to a smart device through a local area wireless communication network to provide language interpretation for the hearing impaired, includes: a microphone; a speaker; a communication module; and a processor coupled to the microphone, the speaker, and the communication module, wherein the processor is configured to: receive, via the communication module, a first voice signal converted based on a first text input through an application installed in the smart device; deactivate the microphone and activate the speaker upon receiving the first voice signal; and output a first voice through the speaker based on the first voice signal, wherein the first text is input from the user via the smart device, a first text signal based on the first text is transmitted from the smart device to a speech synthesis server by the application, and the first text signal is converted into the first voice signal by the speech synthesis server and transmitted to the smart device.

According to an embodiment of the present invention, a speech synthesis method of a speech synthesis server includes the steps of: maintaining a database for recording a plurality of emotional expression inputs and a plurality of speech output patterns respectively corresponding to the plurality of emotional expression inputs; Extracting the emotional expression input from user input including emotional expression input and text input; Querying the speech output pattern corresponding to the extracted emotional expression input through the database; And synthesizing the text input into an output speech based on the speech output pattern.
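The emotion-aware synthesis steps above (maintain a database, extract the emotional-expression input, query the matching output pattern, synthesize) can be sketched as follows. This is an illustrative sketch only; the names `EMOTION_PATTERNS`, `parse_user_input`, and `synthesize`, the `[emotion] text` input convention, and the pitch/rate/volume fields are assumptions, not taken from the patent.

```python
import re

# Database: emotional-expression inputs -> speech output patterns (illustrative)
EMOTION_PATTERNS = {
    "happy": {"pitch": 1.2, "rate": 1.1, "volume": 1.0},
    "angry": {"pitch": 0.9, "rate": 1.3, "volume": 1.4},
    "sad":   {"pitch": 0.8, "rate": 0.8, "volume": 0.8},
}
DEFAULT_PATTERN = {"pitch": 1.0, "rate": 1.0, "volume": 1.0}

def parse_user_input(user_input: str):
    """Extract the emotional-expression input from '[emotion] text'; emotion may be absent."""
    m = re.match(r"^\[(\w+)\]\s*(.*)$", user_input)
    if m:
        return m.group(1), m.group(2)
    return None, user_input

def synthesize(user_input: str):
    """Query the output pattern for the extracted emotion; a TTS engine would render (text, pattern)."""
    emotion, text = parse_user_input(user_input)
    pattern = EMOTION_PATTERNS.get(emotion, DEFAULT_PATTERN)
    return text, pattern

print(synthesize("[happy] came to be repaired"))
```

A real implementation would pass the queried pattern to a speech synthesis engine; here the pattern is simply returned alongside the text.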

According to an embodiment of the present invention, a text conversion method of a speech recognition server includes the steps of: maintaining a database for recording a plurality of input speech features and a plurality of text output patterns respectively corresponding to the plurality of input speech features; Analyzing an input speech characteristic of the input speech based on the input speech; Querying a text output pattern corresponding to the input speech feature through the database; And converting the input speech into output text based on the text output pattern.
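The recognition-side lookup above (maintain a feature database, analyze the input-speech features, query the matching text output pattern, convert) can be sketched in the same spirit. The feature vectors, `FEATURE_DB`, and the nearest-neighbor query are illustrative assumptions; the patent does not specify how features are matched.

```python
import math

# Database: input-speech feature vectors -> text output patterns (illustrative)
FEATURE_DB = [
    ((120.0, 0.2), {"style": "calm",   "suffix": "."}),
    ((220.0, 0.8), {"style": "urgent", "suffix": "!"}),
]

def query_text_pattern(features):
    """Query the text output pattern whose stored feature vector is closest."""
    return min(FEATURE_DB, key=lambda entry: math.dist(entry[0], features))[1]

def convert(recognized_words, features):
    """Convert recognized speech into output text styled by the queried pattern."""
    pattern = query_text_pattern(features)
    return " ".join(recognized_words) + pattern["suffix"]

print(convert(["what", "is", "broken"], (210.0, 0.7)))  # -> what is broken!
```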

According to an embodiment of the present invention, an alarm device for a hearing-impaired person connected to a smart device through a local area wireless communication network includes: a sensor for sensing a sound; an output unit; a communication module; and a processor coupled to the sensor, the output unit, and the communication module, wherein the processor is configured to: compare the magnitude of a first sound signal, based on a first sound sensed by the sensor, with a predetermined magnitude; transmit the first sound signal to the smart device through the communication module when the magnitude of the first sound signal is greater than the predetermined magnitude; receive a first alarm signal transmitted from the smart device by an application installed in the smart device based on the first sound signal; and perform a first alarm through the output unit based on the first alarm signal, wherein the application generates a first pattern corresponding to the first sound signal, inquires a first pattern sound corresponding to the first pattern through the memory of the smart device, and transmits the first alarm signal based on a first alarm corresponding to the first pattern sound.
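The two-sided flow above (device thresholds and forwards the sound; the smart-device application matches a pattern and returns an alarm) can be sketched as below. The threshold value, the pattern names, and the `PATTERN_SOUNDS` table are hypothetical placeholders, not values from the patent.

```python
THRESHOLD = 0.5  # the "predetermined magnitude" (illustrative value)

# Smart-device memory: sound pattern -> (known pattern sound, alarm to perform)
PATTERN_SOUNDS = {
    "long-steady": ("car horn",   "vibrate-strong"),
    "short-burst": ("door knock", "vibrate-short"),
}

def device_on_sound(magnitude, pattern, send_to_smart_device):
    """Alarm-device side: forward only sounds louder than the threshold."""
    if magnitude > THRESHOLD:
        return send_to_smart_device(pattern)
    return None  # below threshold: no alarm

def smart_device_app(pattern):
    """Application side: inquire the pattern sound and build the alarm signal."""
    sound, alarm = PATTERN_SOUNDS.get(pattern, ("unknown sound", "vibrate-short"))
    return {"alarm": alarm, "label": sound}

print(device_on_sound(0.9, "long-steady", smart_device_app))
```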

In order to provide a voice lecture to a plurality of hearing-impaired persons within a lecture hall, a lecture-hall local server for the hearing impaired installed in the lecture hall according to an embodiment of the present invention comprises: a communication module; and a processor coupled to the communication module, wherein the processor is configured to: receive a lecturer's voice through a microphone used by the lecturer; transmit a lecturer voice signal to a speech recognition server through the communication module based on the lecturer's voice; receive a lecture text signal from the speech recognition server through the communication module; and transmit the lecture text signal to a plurality of smart devices of the plurality of hearing-impaired persons via the communication module, wherein the lecturer voice signal is converted into lecture text by the speech recognition server, the plurality of smart devices are connected to the local server through a local area network, and the applications installed in the plurality of smart devices display the lecture text through the displays of the plurality of smart devices based on the lecture text signal.
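The lecture-hall pipeline above (lecturer speech, recognition server, broadcast of the lecture text to every connected smart device) can be sketched as a minimal stand-in. The `recognize` stub and the `LectureLocalServer` class are illustrative assumptions, not the patent's implementation.

```python
def recognize(speech_signal: str) -> str:
    """Stand-in for the speech recognition server; here the 'signal' is the words."""
    return speech_signal.upper()

class LectureLocalServer:
    def __init__(self):
        self.connected_devices = []  # smart devices joined over the local network

    def on_lecturer_speech(self, speech_signal: str):
        text = recognize(speech_signal)        # send signal to recognition server
        for device in self.connected_devices:  # broadcast the lecture text signal
            device.append(text)                # each device's app displays it

server = LectureLocalServer()
device_a, device_b = [], []                    # stand-ins for device displays
server.connected_devices += [device_a, device_b]
server.on_lecturer_speech("today we cover chapter one")
print(device_a)  # every connected device receives the same lecture text
```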

According to an embodiment of the present invention, a voice call support application for the hearing impaired providing a call service for the hearing impaired is stored in a memory of a smart device and executed by the processor of the smart device to perform: receiving a first text from a user via the smart device; transmitting a first text signal from the smart device to a speech synthesis server based on the first text; receiving, through the smart device, a first voice signal converted from the first text signal by the speech synthesis server; and transmitting the first voice signal to a recipient smart device via the smart device, wherein a first voice is output via the recipient smart device based on the first voice signal, and the speech synthesis server converts the first text signal into the first voice through a database and transmits the first voice signal based on the first voice.
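The call flow above (app sends the text signal to the synthesis server, then relays the returned voice signal to the recipient's device, which outputs the voice) can be sketched end to end. `speech_synthesis_server` and `VoiceCallApp` are hypothetical names; the string "waveform" is a stand-in for a real synthesized voice signal.

```python
def speech_synthesis_server(text_signal: str) -> str:
    """Stand-in for the synthesis server: text signal -> 'voice signal'."""
    return f"<voice:{text_signal}>"

class VoiceCallApp:
    def __init__(self, recipient_inbox: list):
        self.recipient_inbox = recipient_inbox  # stand-in for the recipient device

    def send_text(self, first_text: str):
        # App transmits the text signal and receives the converted voice signal
        voice_signal = speech_synthesis_server(first_text)
        # App relays the voice signal; the recipient device outputs the voice
        self.recipient_inbox.append(voice_signal)

inbox = []
app = VoiceCallApp(inbox)
app.send_text("I will be late")
print(inbox)  # ['<voice:I will be late>']
```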

According to the present invention, it is possible to provide a language interpretation assistance device for a hearing impaired person, which can operate in cooperation with a smart device and is compact and lightweight, and can be easily carried and easy to use.

According to the present invention, it is possible to provide an alarm device for a hearing impaired person who senses a sound around the hearing impaired person through short-range wireless communication with a smart device and provides an alarm for a car horn sound or the like.

According to the present invention, a speech lecture provided through a microphone from a speaker can be converted into text, and the converted text can be provided to a hearing-impaired person through a server installed in the lecture hall.

According to the present invention, an application installed in the smart device of a hearing-impaired person can enable communication between the hearing-impaired person and a hearing person.

FIG. 1 is a schematic view for explaining a process in which a conversation between a hearing impaired person and a hearing impaired person is performed through a language interpretation assistance device for a hearing impaired person according to an embodiment of the present invention.
2 is a diagram showing an example of the configuration of a language interpretation assistance device for a hearing impaired person according to an embodiment of the present invention.
FIG. 3A is a diagram for explaining an example of a process in which a text input from a hearing impaired person is outputted as a voice through a language interpreting aid for a hearing impaired person according to an embodiment of the present invention.
FIG. 3B is a diagram illustrating an example of a process in which a voice input from a hearing person through the language interpretation assist device for the hearing impaired is displayed as text through the smart device of the hearing-impaired person, according to an embodiment of the present invention.
3C is a flowchart for explaining a speech synthesis method of the speech synthesis server according to an embodiment of the present invention.
FIG. 3D is a flowchart for explaining a text conversion method of a speech recognition server according to an embodiment of the present invention.
FIG. 4 is a schematic view for explaining a process of providing an alarm to a hearing-impaired person through an alarm device for a hearing impaired person according to an embodiment of the present invention.
5A is a diagram showing an example of the configuration of an alarm device 401 for the hearing impaired person according to an embodiment of the present invention.
FIG. 5B is a diagram illustrating an example of a process in which an alarm for the ambient noise is provided to the hearing-impaired person through the alarm device for a hearing impaired person according to an embodiment of the present invention.
FIG. 6 is a schematic diagram for explaining a process in which a lecturer's voice lecture is provided to a hearing-impaired person by a lecture-hall local server for the hearing impaired, according to an embodiment of the present invention.
7A is a diagram showing an example of the configuration of a local server for a hearing impaired person according to an embodiment of the present invention.
FIG. 7B is a diagram for explaining an example of a process in which a voice of a speaker is converted into text and displayed on a smart device of a hearing impaired person according to an embodiment of the present invention.
7C is a diagram for explaining an example of a process in which a text input from a hearing impaired person is converted into speech and output through a speaker according to an embodiment of the present invention.
8 is a schematic view illustrating a voice call support application for a hearing impaired person providing a call service for a hearing impaired person according to an embodiment of the present invention.
FIG. 9A is a diagram illustrating a process in which a text input from a hearing-impaired person is provided to a calling party as a voice through a voice call application for the hearing impaired according to an embodiment of the present invention.
FIG. 9B is a diagram illustrating a process in which a voice input from a hearing person is converted into text and displayed on the smart device of a hearing-impaired person by the voice call application for the hearing impaired, according to an embodiment of the present invention.

Various modifications may be made to the embodiments described below. The embodiments described below are not intended to be limiting; they include all modifications, equivalents, and alternatives thereto.

The terms used in the examples are used only to illustrate specific embodiments and are not intended to limit the embodiments. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" indicate the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this embodiment belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their contextual meaning in the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in the present application.

In the following description of the present invention with reference to the accompanying drawings, the same components are denoted by the same reference numerals regardless of the figure, and redundant explanations thereof are omitted. In describing the embodiments, detailed descriptions of related art are omitted where they would unnecessarily obscure the gist of the embodiments.

FIG. 1 is a schematic view for explaining a process in which a conversation between a hearing impaired person and a hearing impaired person is performed through a language interpretation assistance device for a hearing impaired person according to an embodiment of the present invention.

Hearing-impaired people communicate using sign language, spoken language, and assistive devices, but they face many inconveniences in everyday life and often have accompanying language or behavioral difficulties. Hearing-impaired persons who have difficulty recognizing sounds due to hearing loss can communicate through sign language, but they have great difficulty communicating with people who do not know sign language. There is a need for a device that provides real-time communication through speech recognition and conversion techniques, enabling hearing-impaired people to communicate in everyday life at the same level as hearing people.

Referring to FIG. 1, a hearing-impaired person 100 carrying a language interpretation assist device 102 for the hearing impaired can converse with a hearing person 101. Text input from the hearing-impaired person 100 via the smart device 103 is converted into speech through the speech synthesis server 104, and the converted speech is output through the speaker of the language interpretation assist device 102. The voice input from the hearing person 101 through the language interpretation assist device 102 is converted into text through the speech recognition server 105, and the converted text is transmitted to and displayed on the smart device 103 of the hearing-impaired person. Accordingly, the hearing-impaired person 100 can communicate with the hearing person 101 through text, and the hearing person 101 can communicate with the hearing-impaired person 100 through voice.

Referring to FIG. 1, the hearing-impaired person 100 may enter "came to be repaired" through the smart device 103, as an example of text. The input text is converted into speech through the speech synthesis server 104, and the converted speech is transmitted to the language interpretation assist device 102 via the smart device 103. Based on the transmitted voice signal, the language interpretation assist device 102 can output the converted voice, "came to be repaired," through its speaker. The hearing person 101 can thus confirm the text of the deaf person 100 through the voice output from the speaker of the language interpretation assist device 102.

Referring to FIG. 1, the language interpretation assist device 102 for the hearing impaired can receive the voice "What is broken?" from the hearing person 101 through its microphone. The input voice is transmitted to the voice recognition server 105 via the smart device 103. The voice "What is broken?" can be converted into text by the voice recognition server 105, and the converted text is transmitted from the speech recognition server 105 to the smart device 103. Although not shown in FIG. 1, the text "What is broken?" received by the smart device 103 may be displayed via the display of the smart device 103. The hearing-impaired person 100 can confirm the voice of the hearing person 101 through the text displayed on the display of the smart device 103.

As shown in FIG. 1, the language interpretation assist device 102 for the hearing impaired according to the present invention is implemented as small and lightweight with only a minimal hardware configuration, and can be used separately from the smart device 103. The smart device 103 may operate normally as long as it is within range to connect to the language interpretation assist device 102 via the local area wireless communication network.

According to one embodiment, the language interpretation assist device 102 for the hearing impaired may be in a form that can be attached to the body of the hearing-impaired person 100, separate from the smart device 103 that the hearing-impaired person 100 carries to enter or confirm text.

According to one embodiment, the language interpretation assist device 102 for the hearing impaired may take the form of an accessory or peripheral device connected to the smart device 103 through a local area network such as Bluetooth. The device 102 may be a separate miniature accessory that can be combined with the smart device 103 to provide functionality in addition to that of the smart device 103 itself. For example, it may be a lightweight and compact device such as a wearable device worn on the body, a necklace, a headset, a portable bag attachment, a bracelet, an arm band, or a neck band. In one embodiment, the device 102, connected to the smart device 103 via Bluetooth yet physically separate from it, is provided in the portable form of an accessory, which alleviates the mobility and portability constraints of outdoor activities such as mountain climbing and camping.

According to one embodiment, the language interpretation assist device 102 for the hearing impaired can exchange data with the smart device 103 via the short-range wireless communication network. According to an exemplary embodiment, the Bluetooth Low Energy communication standard may be applied to the short-range wireless communication network. The smart device 103 includes smartphones, tablet PCs, notebooks, wearable devices, and other portable devices that can access a server over a wireless communication network and execute various application programs through communication with the server. The smart device 103 includes a memory for storing an application program, and the application program recorded in the memory can be executed via wireless communication with the server.

The smart device 103 in which the application is installed can access the servers 104 and 105 via a wireless communication network, and communication standards such as Wi-Fi, 2G, 3G, 4G, 5G, and LTE can be applied to this network. The servers 104 and 105 provide converted voice or text through processes such as recognizing, converting, and synthesizing the text or voice supplied by the application, and are referred to as the voice synthesis server 104 and the voice recognition server 105, respectively. The voice synthesis server 104 and the voice recognition server 105 may include databases recording the data required for converting the received text and voice into voice and text.

2 is a diagram showing an example of the configuration of a language interpretation assistance device for a hearing impaired person according to an embodiment of the present invention.

The language interpretation assist device 102 for the hearing impaired of the present invention includes a microphone 201, a speaker 202, an operation button 203, a processor 205, and a communication module 206. The processor 205 may be selectively connected to the microphone 201, the speaker 202, the operation button 203, and the communication module 206. The configuration shown in FIG. 2 is merely an example for performing the voice input and output functions, and does not limit the embodiments of the present invention.

According to one embodiment, the processor 205 may activate an application installed in the smart device 103 of the hearing-impaired person 100 based on an input received via the operation button 203. The processor 205 may receive a voice signal transmitted from the smart device 103 of the deaf person 100 via the communication module 206 and output the voice through the speaker 202. The voice output through the speaker 202 is the converted form of the text that the deaf person 100 input into the smart device 103, so that the hearing person 101 can recognize it. The processor 205 can receive the voice of the hearing person 101 via the microphone 201. The input voice is converted into text and displayed on the display of the smart device 103 of the deaf person 100 so that the deaf person 100 can read it.

Although not shown in the figure, according to one embodiment, the microphone 201 to which a voice is input may include a first microphone and a second microphone. The first microphone and the second microphone may be spaced a predefined distance apart in a predefined direction, and the predefined direction may correspond to the direction in which speech travels toward the language interpretation assist device 102 for the hearing impaired. When a voice directed at the device 102 is input to the microphone 201, the voice has a traveling direction toward the device 102, and the first and second microphones may be arranged to have directivity along that direction. The predefined distance may be such that the first microphone and the second microphone are placed at opposite edges of the front side of the device 102, but it may be any distance.

According to one embodiment, when the first microphone and the second microphone are aligned with the traveling direction of the voice, and the difference between the magnitude of the voice signal input through the first microphone and the magnitude of the voice signal input through the second microphone is greater than a predefined value, the voice input via the microphone 201 is converted into a digital signal by the processor 205 and transmitted to the smart device 103. Calling the audio signal input by the first microphone the first audio signal and the audio signal input by the second microphone the second audio signal, the processor 205 determines whether to transmit the voice to the smart device 103 based on the difference between the magnitude of the first audio signal and the magnitude of the second audio signal.

If the voice input to the microphone 201 arrives from another direction, the difference between the magnitude of the first audio signal and the magnitude of the second audio signal may be less than the predefined value. In that case, the processor 205 determines that the voice input to the microphone 201 is noise, and conversion and transmission to the smart device 103 are not performed.
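The two-microphone directivity gate described above, for the aligned arrangement, can be sketched as follows: a large level difference between the two microphones indicates speech arriving from the facing direction, while a small difference is treated as ambient noise and discarded. The decibel units and the 6 dB threshold are illustrative assumptions; the patent only specifies "a predefined value".

```python
DIFF_THRESHOLD = 6.0  # dB; illustrative stand-in for the predefined value

def should_transmit(first_level_db: float, second_level_db: float) -> bool:
    """Return True when the first/second microphone level difference
    exceeds the threshold, i.e. the sound is directed speech, not noise."""
    return abs(first_level_db - second_level_db) > DIFF_THRESHOLD

assert should_transmit(70.0, 55.0) is True   # speech from the facing direction
assert should_transmit(62.0, 60.0) is False  # diffuse noise: do not transmit
```

In the orthogonal arrangement described below, the same comparison is simply inverted: a small difference indicates speech from the front, and a large difference indicates noise.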

According to another embodiment, the first microphone and the second microphone may be spaced a predefined distance apart in a predefined direction that is orthogonal to the traveling direction of the voice spoken to the device 102 by the person conversing with the hearing-impaired person. In such a case, the first and second microphones are arranged to have directivity perpendicular to the direction in which the voice travels toward the language interpretation assist device 102. The predefined distance may be such that the first microphone and the second microphone are placed on one side of the front of the device 102, but any distance that yields directivity orthogonal to the voice's traveling direction may be used.

According to one embodiment, when the first microphone and the second microphone are orthogonal to the traveling direction of the voice, and the difference between the magnitude of the voice signal input through the first microphone and the magnitude of the voice signal input through the second microphone is smaller than the predefined value, the voice input via the microphone 201 is converted into a digital signal by the processor 205 and transmitted to the smart device 103. As before, the processor 205 determines whether to transmit the voice to the smart device 103 based on the difference between the magnitudes of the first and second audio signals; in this arrangement, a small difference indicates voice arriving from the front. If the difference between the magnitude of the first voice signal and the magnitude of the second voice signal is greater than the predefined value, the processor 205 determines that the voice input to the microphone 201 is noise, and conversion and transmission to the smart device 103 are not performed.

Hereinafter, with reference to FIG. 3A, a description will be given of a process in which text input by the hearing-impaired person 100 is converted into speech and output as a voice recognizable by the hearing person 101. Referring to FIG. 3B, a description will be given of a process in which the voice of the hearing person 101 is converted into text and displayed through the display of the smart device 103 so that the hearing-impaired person 100 can confirm it.

FIG. 3A is a diagram for explaining an example of a process in which a text input from a hearing impaired person is outputted as a voice through a language interpreting aid for a hearing impaired person according to an embodiment of the present invention.

Although not shown in FIG. 3A, the processor 205 may receive a first input via the action button 203 to drive an application installed in the smart device 103.

The processor 205 may send a first command to the smart device 103 via the communication module 206 based on the received first input. According to one embodiment, the smart device 103 and the language interpretation aid 102 for the hearing impaired are connected through the short-range wireless communication network, and the application installed in the smart device 103 can be activated through the first command. The application installed in the smart device 103 can transmit and receive data through the communication network between the smart device 103 and the language interpretation aid 102 for the hearing impaired, and can likewise perform data transmission and reception through the communication network between the smart device 103 and the speech synthesis server 104 or the speech recognition server 105.

The application installed in the smart device 103 receiving the first command is activated, and the first text may be input (303) through the smart device 103 by the application from the hearing-impaired person 100. The first text may refer to a character string ("came to be repaired" in FIG. 1) input by the hearing-impaired person 100 through the application of the smart device 103.

The application may send (304) the first text signal from the smart device 103 to the speech synthesis server 104 based on the entered first text.

The first text signal transmitted from the smart device 103 may be converted (305) into a first voice by the speech synthesis server 104 through speech synthesis.

The speech synthesis server 104 may transmit the first speech signal to the smart device 103 based on the converted first speech (306).

The application installed in the smart device 103 may transmit the first speech signal received from the speech synthesis server 104 to the language interpreting assistant 102 for the hearing impaired (307).

When the processor 205 receives the first voice signal from the smart device 103, it may deactivate the microphone 201 and activate the speaker 202 (308). This is to prevent the sound output through the speaker 202 from being input to the microphone 201 and overlapping.

The processor 205 may output the first voice through the speaker 202 based on the received first voice signal (309).
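The text-to-speech round trip of steps 303 through 309 can be summarized with the following sketch. The stub functions stand in for the real servers and devices; all names and payload formats are illustrative, not the patent's API.

```python
# Minimal sketch of the text-to-speech round trip (steps 303-309).

def speech_synthesis_server(text: str) -> bytes:
    # Stand-in for step 305: synthesize the first text into a first voice.
    return f"VOICE[{text}]".encode()

def aid_device_output(voice_signal: bytes) -> str:
    # Steps 308-309: the device deactivates the microphone, activates the
    # speaker, and outputs the voice; here we simply decode the payload.
    return voice_signal.decode()

def round_trip(first_text: str) -> str:
    voice = speech_synthesis_server(first_text)  # steps 304-306
    return aid_device_output(voice)              # steps 307-309
```

The key point the sketch preserves is the ordering: synthesis happens on the server, and speaker activation (with the microphone muted) happens only after the voice signal reaches the aid device.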

FIG. 3B is a diagram illustrating an example of a process in which a voice input from a hearing person through the language interpretation aid for the hearing impaired is displayed as text through the smart device of the hearing-impaired person, according to an embodiment of the present invention.

The processor 205 may activate the microphone 201 and deactivate the speaker 202 (311). This is to prevent the voice input through the microphone 201 from overlapping with the voice output through the speaker 202.

The processor 205 may receive the second voice through the microphone 201 (312). According to one embodiment, the second voice may refer to the voice ("What's wrong?" in FIG. 1) delivered as a sound wave when the hearing person 101 speaks.

The processor 205 may transmit the second voice signal to the smart device 103 via the communication module 206 based on the inputted second voice (313).

The application installed in the smart device 103 may transmit (314) the second voice signal received from the language interpretation aid 102 for the hearing impaired to the speech recognition server 105.

The speech recognition server 105 may recognize the second voice signal received from the smart device 103 and convert it into a second text (315).

The speech recognition server 105 may transmit the second text signal to the smart device 103 based on the converted second text (316).

An application installed in the smart device 103 may display (317) the second text through the display of the smart device 103 based on the second text signal received from the speech recognition server 105. The hearing-impaired person 100 can recognize the voice of the hearing person 101 as text through the displayed second text.
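The speech-to-text direction of steps 311 through 317 admits a similar sketch. The recognition server is a stub that simply unwraps the illustrative payload format; none of the names are the patent's API.

```python
# Minimal sketch of the speech-to-text direction (steps 311-317).

def speech_recognition_server(voice_signal: bytes) -> str:
    # Step 315: recognize the second voice signal and produce second text.
    return voice_signal.decode().removeprefix("VOICE[").removesuffix("]")

def display_text(voice_signal: bytes) -> str:
    second_text = speech_recognition_server(voice_signal)  # steps 314-316
    return f"DISPLAY: {second_text}"                       # step 317
```

As in FIG. 3B, the aid device only captures and forwards audio; recognition runs on the server, and the resulting text is rendered on the smart device's display.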

Although not shown in FIGS. 3A and 3B, the language interpretation aid 102 for the hearing impaired can output a boilerplate voice through the speaker 202 based on boilerplate text input by the hearing-impaired person 100 through the smart device 103. By outputting a pre-stored boilerplate voice based on an abbreviated boilerplate input rather than a complete sentence, the language interpretation aid 102 can simplify text entry for the hearing-impaired person 100 and provide a quick communication function.

According to one embodiment, the boilerplate text may be input via the smart device 103 from the hearing-impaired person 100 by the application. As described with reference to FIG. 3A, the application installed in the smart device 103 may be activated based on the first input received via the action button 203. The application may send the boilerplate text signal from the smart device 103 to the speech synthesis server 104 based on the entered boilerplate text.

The speech synthesis server 104 can look up the boilerplate voice corresponding to the boilerplate text through its database based on the received boilerplate text signal. The database of the speech synthesis server 104 can record and maintain a plurality of boilerplate texts and a plurality of boilerplate voices respectively corresponding to them. The speech synthesis server 104 can search the database for the boilerplate voice corresponding to the boilerplate text received from the smart device 103 from among the plurality of boilerplate voices, and can transmit a boilerplate voice signal to the smart device 103 based on the retrieved boilerplate voice.

The boilerplate speech signal transmitted from the speech synthesis server 104 may be transmitted from the smart device 103 to the language interpretation aid 102 for the hearing impaired by the application installed in the smart device 103.

The processor 205 may receive a boilerplate speech signal from the smart device 103 via the communication module 206. The processor 205 may deactivate the microphone 201 and activate the speaker 202 upon receiving the boilerplate speech signal. The processor 205 can output the boilerplate voice through the speaker 202 based on the boilerplate voice signal.
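The boilerplate lookup described above is essentially a table from boilerplate texts to pre-stored voices. The following sketch illustrates that mapping; the table entries and file names are invented examples, not the patent's data.

```python
# Sketch of the speech synthesis server's boilerplate database: a mapping
# from boilerplate texts to pre-stored boilerplate voices.
from typing import Optional

BOILERPLATE_DB = {
    "hello": "voice_hello.wav",
    "thank you": "voice_thanks.wav",
}

def lookup_boilerplate_voice(boilerplate_text: str) -> Optional[str]:
    """Return the pre-stored boilerplate voice for a text, if recorded."""
    return BOILERPLATE_DB.get(boilerplate_text)
```

Because the voices are pre-stored rather than synthesized from scratch, the server can answer a boilerplate request with a single lookup, which is what makes the quick-communication function possible.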

FIG. 3C is a flowchart for explaining a speech synthesis method of the speech synthesis server according to an embodiment of the present invention.

Referring to FIG. 3C, in accordance with one embodiment, the speech synthesis server may maintain (318) a database that records a plurality of emotional expression inputs and a plurality of voice output patterns respectively corresponding to the plurality of emotional expression inputs. According to one embodiment, the emotional expression input may be entered before or after the text input, and may include emoticons, special characters, and boilerplate expressions. The voice output pattern may include the pitch, speed, length, and intensity of the output voice.

The speech synthesis server may extract the emotional expression input from the user input including the emotional expression input and the text input (319). According to one embodiment, user input may be entered from a user via a smart device. According to one embodiment, the language interpretation aid for the hearing impaired may be connected to the smart device via a short-range wireless communication network, and the output voice may be output through a language interpreting aid for the hearing impaired.

The speech synthesis server may inquire the speech output pattern corresponding to the extracted emotional expression input through the database (320).

The speech synthesis server may synthesize the text input into the output speech based on the speech output pattern (321).

According to one embodiment, the hearing-impaired person may enter the user input via a smart device, in which case an emoticon or special character may be entered before or after the text. For example, when a user input of "Why do you keep breaking our appointments (emoticon indicating an angry state)" is entered via the smart device, the output voice is synthesized based on the emoticon in the user input: the pitch, speed, and intensity of the sound can be adjusted to express the user's feeling of anger.
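Steps 318 through 321 can be sketched as follows. The emoticon tokens and the pattern attributes are assumed for illustration; the patent does not specify concrete tokens or pattern values.

```python
# Sketch of steps 318-321: extract an emotional-expression input (here,
# simple emoticon tokens) from the user input and look up the voice
# output pattern recorded for it.

EMOTION_PATTERNS = {
    ":-(": {"pitch": "high", "speed": "fast", "intensity": "strong"},  # anger
    ":-)": {"pitch": "mid", "speed": "normal", "intensity": "soft"},   # joy
}

def extract_emotion(user_input: str):
    """Step 319: return (text, emotion_token); the token may precede or
    follow the text."""
    for token in EMOTION_PATTERNS:
        if token in user_input:
            return user_input.replace(token, "").strip(), token
    return user_input.strip(), None

def output_pattern(user_input: str):
    """Step 320: query the voice output pattern for the extracted input."""
    text, emotion = extract_emotion(user_input)
    return text, EMOTION_PATTERNS.get(emotion)
```

The synthesis step (321) would then pass the text and the looked-up pattern to the synthesizer, so the same sentence can be rendered with different prosody depending on the attached emoticon.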

According to one embodiment, the speech synthesis method described with reference to FIG. 3C may be implemented through an application of a smart device. The speech synthesis method operating through the application is the same as the embodiment described with reference to FIG. 3C, so a detailed description thereof is omitted. According to one embodiment, the speech synthesis server may include a database, a processor, and a communication module, and the speech synthesis method described with reference to FIG. 3C may be performed through the processor.

FIG. 3D is a flowchart for explaining a text conversion method of a speech recognition server according to an embodiment of the present invention.

Referring to FIG. 3D, in accordance with an embodiment, the speech recognition server may maintain (322) a database that records a plurality of input voice characteristics and a plurality of text output patterns respectively corresponding to the plurality of input voice characteristics. The input voice characteristics may include the volume of the input voice, the speed of the input voice, and the gender of the input voice. The text output pattern may include the size of the output text, the spacing of the output text, and the color of the output text.

The speech recognition server may analyze the input voice characteristics of the input voice based on the input voice (323). According to one embodiment, the input voice may be entered through a smart device, or through a language interpretation aid for the hearing impaired connected to the smart device. An input voice entered through the language interpretation aid for the hearing impaired can be transmitted to the smart device and from the smart device to the speech recognition server. According to one embodiment, the volume, speed, and gender of the input voice can be analyzed.

The speech recognition server may query the text output pattern corresponding to the input speech characteristic through the database (324).

The speech recognition server may convert the input speech to output text based on the text output pattern (325).

According to one embodiment, when the volume of the input voice is large, the input voice can be converted into output text having a large text size. When the speed of the input voice is fast, the input voice can be converted into output text having narrow spacing between characters, and when the speed of the input voice is slow, into output text having wide spacing. According to one embodiment, the gender of the input voice can be analyzed, and the color of the output text can be changed according to the determined gender. According to one embodiment, when a plurality of input voices are input, the genders of the plurality of input voices are analyzed so that the color of the output text can be varied per speaker (for example, blue for a male voice and red for a female voice), and the output text can be displayed via the smart device's display.
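The mapping from analyzed voice characteristics to text output attributes can be sketched as one function. The numeric thresholds and attribute names here are assumptions for illustration; the patent only specifies the qualitative rules (loud → large, fast → narrow spacing, gender → color).

```python
# Sketch of steps 322-325: map input-voice characteristics (volume,
# speed, gender) onto text output attributes (size, spacing, color).
# Thresholds (70 dB, 160 wpm) are invented for the example.

def text_output_pattern(volume_db: float, words_per_min: float,
                        gender: str) -> dict:
    """Return the text output pattern for one analyzed input voice."""
    return {
        "font_size": "large" if volume_db > 70 else "normal",
        "spacing": "narrow" if words_per_min > 160 else "wide",
        "color": "blue" if gender == "male" else "red",
    }
```

With multiple speakers, calling this once per analyzed voice yields a distinct color per speaker, which is how the display can separate a conversation visually.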

According to one embodiment, the text conversion method described with reference to FIG. 3D may be implemented through an application of a smart device. The text conversion method operating through the application is the same as the embodiment described with reference to FIG. 3D, so a detailed description thereof is omitted. According to one embodiment, the speech recognition server may include a database, a processor, and a communication module, and the text conversion method described with reference to FIG. 3D may be performed through the processor.

FIG. 4 is a schematic view for explaining a process of providing an alarm to a hearing-impaired person through an alarm device for a hearing impaired person according to an embodiment of the present invention.

Referring to FIG. 4, the hearing-impaired person 400 can recognize an emergency, a dangerous situation, or a sound requiring attention through the alarm device 401 for the hearing impaired, which detects surrounding sounds and provides an alarm. For example, when a horn sound is generated by a vehicle in the vicinity of the hearing-impaired person 400, the horn sound can be sensed through the sensor of the alarm device 401. The signal sensed by the sensor may be transmitted from the alarm device 401 to the smart device 402, which is connected to the alarm device 401 via a short-range communication network, and analyzed through the smart device 402. Based on the result analyzed by the smart device 402, the alarm device 401 can alert the hearing-impaired person 400 to the horn sound of the nearby vehicle through vibration or flickering of a light. Further, a description of the vehicle horn can be displayed through the display of the smart device 402.

As shown in FIG. 4, the alarm device 401 for the hearing impaired according to the present invention has a minimal hardware configuration to realize small size and light weight, and is manufactured as an accessory type for easy carrying. The smart device 402 may operate normally as long as it is in a position that allows connection to the alarm device 401 through the short-range wireless communication network.

According to one embodiment, the alarm device 401 for the hearing impaired may be provided separately from the smart device 402, which the hearing-impaired person 400 carries in order to check a description of the ambient sound for which an alarm has been performed.

According to one embodiment, the application of the smart device 402 may analyze the pattern of a sound based on the sound signal transmitted from the alarm device 401 for the hearing impaired. The analysis of the ambient sound around the hearing-impaired person 400 by the application may be performed through communication with a server, or locally through the memory of the smart device 402. However, since an alarm for an ambient sound often concerns an emergency or a situation requiring attention, the analysis is preferably performed through the memory of the smart device 402, and the data recorded in the memory can be updated through the server.

According to one embodiment, the alarm device 401 for the hearing impaired may take the form of an accessory or peripheral connected to the smart device 402 through a short-range network such as Bluetooth. The alarm device 401 may be a separate small accessory that can be combined with the smart device 402 to provide functionality in addition to that of the smart device 402 itself. For example, it may be a lightweight, compact device such as a wearable device that can be worn on the body, a necklace, a headset, a portable bag, a bracelet, an armband, or a neckband. In one embodiment, the alarm device 401, which is connected to the smart device 402 via Bluetooth and carried separately from it, is provided as a portable accessory, so that problems of mobility and portability in outdoor activities such as sightseeing, traveling, jogging, and camping can be improved.

According to one embodiment, the alarm device 401 for the hearing impaired can perform data transmission / reception with the smart device 402 via the short-range wireless communication network. According to an exemplary embodiment, a communication standard of Bluetooth low energy may be applied to a short range wireless communication network. The smart device 402 includes a smart phone, a tablet PC, a notebook, a wearable device, and a portable device capable of accessing a server based on a wireless communication network and executing various application programs through communication with the server. The smart device 402 includes a memory for storing an application program, and the application program recorded in the memory can be executed via wireless communication with the server.

The smart device 402 in which the application is installed can access the server through a wireless communication network, to which communication standards of mobile communication devices such as Wi-Fi, 2G, 3G, 4G, 5G, and LTE are applicable. The application installed in the smart device 402 can be updated through communication with the server so that the range of sounds provided to the hearing-impaired person 400 can be supplemented.

FIG. 5A is a diagram showing an example of the configuration of the alarm device 401 for the hearing impaired according to an embodiment of the present invention.

The alarm device 401 for the hearing impaired of the present invention includes a sensor 501, a communication module 502, a processor 503, and an output unit 504. The processor 503 may be selectively connected to the sensor 501, the communication module 502, and the output unit 504. The alarm device 401 shown in FIG. 5A does not limit the embodiments of the present invention and is merely one example of an apparatus for providing an alarm of ambient sound to the hearing-impaired person 400.

According to one embodiment, the sensor 501 may sense sound within a predetermined range around the alarm device 401 for the hearing impaired. The processor 503 may perform data transmission and reception with the smart device 402 of the hearing-impaired person 400 through the communication module 502. The processor 503 may receive, through the communication module 502, information analyzed by the application installed in the smart device 402 based on the sound around the hearing-impaired person 400 sensed by the sensor 501. When the sound around the hearing-impaired person 400 is urgent, requires attention, or otherwise calls for an alarm, the processor 503 may perform an alarm for the hearing-impaired person 400 through vibration or flickering of a light. Hereinafter, a process of providing an alarm to the hearing-impaired person 400 based on the sound detected by the alarm device 401 will be described in detail with reference to FIG. 5B.

FIG. 5B is a diagram illustrating an example of a process in which an alarm for the ambient noise is provided to the hearing-impaired person through the alarm device for a hearing impaired person according to an embodiment of the present invention.

Referring to FIG. 5B, the processor 503 may sense the first sound through the sensor 501 (506). The sensor 501 can sense sound within a predetermined range. The first sound means a sound sensed by the sensor 501 within a predetermined range. The sound sensed by the sensor 501 may be in the form of a sound wave having a certain waveform.

The processor 503 may compare the magnitude of the first sound signal to a predetermined magnitude based on the first sound (507).

When the size of the first sound signal is larger than a predetermined size, the processor 503 may transmit the first sound signal to the smart device through the communication module 502 (508).

Although not shown, according to one embodiment, the processor 503 may transmit the first sound signal to the smart device via the communication module 502 if the first sound signal is similar to a specific pattern. Although not shown in FIG. 5A, specific patterns may be recorded in the memory of the alarm device 401 for the hearing impaired, and the processor 503 may compare a plurality of specific patterns recorded in the memory with the first sound signal. The processor 503 may transmit the first sound signal to the smart device via the communication module 502 when it determines that the degree of similarity between a specific pattern and the first sound signal is within a predetermined range.

The application installed in the smart device 402 may generate a first pattern corresponding to the first sound signal based on the first sound signal received from the alarm device 401 for the hearing impaired person (509). The first pattern is a pattern specifying the first sound, and the application can classify the first sound through the pattern.

The application can query (510) the first pattern sound corresponding to the first pattern through the memory of the smart device 402. The memory of the smart device 402 records a plurality of pattern sounds and a plurality of patterns respectively corresponding to the plurality of pattern sounds. The application can query the first pattern sound corresponding to the first pattern from among the plurality of pattern sounds recorded in the memory. According to one embodiment, the plurality of pattern sounds include a car horn sound, an animal cry, a baby cry, a notification broadcast sound, a beep sound, a bell sound, and a siren sound. In the example of FIG. 4, the application generates the first pattern corresponding to the first sound signal received from the alarm device 401 for the hearing impaired and queries the car horn sound corresponding to the first pattern from among the plurality of pattern sounds.

The application can query (511) the first alarm corresponding to the first pattern sound (the car horn sound) through the memory of the smart device 402. In the memory of the smart device 402, a plurality of alarms respectively corresponding to the plurality of pattern sounds are recorded. The application can query, through the memory, the first alarm corresponding to the first pattern sound from among the plurality of alarms recorded in the memory. The processes described in steps 509 through 511 of FIG. 5B may instead be performed through communication with the server, and the present invention is not limited to the embodiments described above. However, in order to provide a quick alarm to the hearing-impaired person 400, it is preferable that the alarm corresponding to the first sound signal be queried locally through the memory of the smart device 402 and transmitted from the smart device 402 to the alarm device 401 for the hearing impaired. Each of the plurality of alarms may be a vibration or a flickering of a light, and the period, length, intensity, and frequency of the vibration or flickering may be recorded and stored in the memory of the smart device 402 in correspondence with the plurality of alarms.

The application may send (512) a first alarm signal from the smart device 402 to the alarm device 401 for the hearing impaired based on the queried first alarm. The application may also output (514) a description of the first pattern sound (the car horn sound) through the display of the smart device 402.

The processor 503 may receive the first alarm signal from the smart device 402 via the communication module 502 and may perform (513) the first alarm via the output unit 504 based on the received first alarm signal. The first alarm may be an alarm identifying a car horn sound, specified by a predefined period, length, intensity, and frequency of vibration or flickering of a light. According to one embodiment, the output unit 504 may include a vibrator, an LED, and an LCD. The processor 503 can perform a vibration alarm through the vibrator, and a light-flickering alarm through the LED and the LCD.
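The two lookups of steps 510 and 511 amount to chained table queries: pattern → pattern sound → alarm. The following sketch illustrates that chain; the table contents and alarm parameters are invented examples of what the smart device's memory might record.

```python
# Sketch of steps 509-513: classify a pattern into a pattern sound, then
# look up the alarm (vibration/flash parameters) recorded for that sound.

PATTERN_SOUNDS = {
    "pattern_horn": "car horn sound",
    "pattern_siren": "siren sound",
}

ALARMS = {
    "car horn sound": {"type": "vibration", "period_ms": 200, "count": 5},
    "siren sound": {"type": "flash", "period_ms": 100, "count": 10},
}

def alarm_for(pattern_id: str):
    """Query the pattern sound for a generated pattern, then its alarm."""
    pattern_sound = PATTERN_SOUNDS.get(pattern_id)
    return pattern_sound, ALARMS.get(pattern_sound)
```

Keeping both tables in the smart device's memory is what allows the alarm to be resolved locally, without the round trip to a server that the text notes would be too slow for an emergency.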

The smart device 402 may be connected to the update server 505 via a communication network. The update server 505 may generate the update information (515). The update server 505 can maintain a database for recording a plurality of patterns corresponding to a plurality of pattern sounds and a plurality of pattern sounds, respectively. The update server 505 can generate update information through the database.

The generated update information may be transmitted to the smart device 402 (516). According to one embodiment, the update information may be transmitted in response to an update request by the application of the smart device 402, or transmitted in real time or periodically.

The application may receive the update information from the update server 505 and update the plurality of pattern sounds and the plurality of patterns recorded in the memory based on the received update information (517).
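Step 517 is a merge of the server's update information into the tables held in the smart device's memory. A minimal sketch, assuming the tables are simple key-value mappings:

```python
# Sketch of steps 515-517: merge update information from the update
# server into the pattern tables in the smart device's memory.

def apply_update(memory_patterns: dict, update_info: dict) -> dict:
    """Return the memory table with new/changed entries merged in;
    entries from the server overwrite stale local entries."""
    merged = dict(memory_patterns)
    merged.update(update_info)
    return merged
```

This keeps local lookups fast while still letting the server extend the range of recognizable sounds over time, as described above.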

FIG. 6 is a schematic diagram for explaining a process in which a speech lecturer of a lecturer is provided to a hearing-impaired person by a local server for a hearing impaired person according to an embodiment of the present invention.

Referring to FIG. 6, a lecture local server 603 for the hearing impaired, installed in a lecture hall, can provide a voice lecture to a plurality of hearing-impaired persons in the lecture hall. The voice input from the lecturer 601 through the microphone 602 installed at the lecture hall can be transmitted to the speech recognition server 604 through the lecture local server 603. The text converted by the speech recognition server 604 may be transmitted to the smart devices 605 of the plurality of hearing-impaired persons via the lecture local server 603, and displayed on their displays through the application installed in the smart devices 605. In this way, the lecture provided through voice from the lecturer 601 is converted into text and displayed through the displays of the smart devices of the hearing impaired. Thus, by converting the voice of the lecturer 601 into text in real time, the lecture local server 603 overcomes the disadvantage that the voice lecture of the lecturer 601 must be provided to the hearing impaired indirectly through sign language, and can serve a plurality of hearing-impaired persons at once.

According to another embodiment, text input via the smart devices 605 of the plurality of hearing-impaired persons may be transmitted to the speech synthesis server 606 via the lecture local server 603. The speech synthesis server 606 can convert the text into speech and transmit the converted speech to the lecture local server 603, which can output the received voice through the speaker 607 installed in the lecture hall. The lecture local server 603 thus receives feedback or questions from the hearing impaired regarding the lecture of the lecturer 601 as text through the smart devices, converts the input text into speech in real time, and outputs it through the speaker 607. Instead of delivering the voice lecture of the lecturer 601 to the hearing impaired unilaterally, questions or feedback on the lecture can be output in real time through the speaker, so that the hearing impaired can participate in the lecture and the constraints on bilateral communication between the lecturer and a hearing-impaired audience can be improved.

FIG. 7A is a diagram showing an example of the configuration of the lecture local server for the hearing impaired according to an embodiment of the present invention.

Referring to FIG. 7A, the lecture local server 603 for the hearing impaired includes a communication module 701 and a processor 702. The lecture local server 603 can be installed in a lecture hall to provide a voice lecture to a plurality of hearing-impaired persons in the hall. The processor 702 is connected to the communication module 701 and can perform data transmission and reception with the speech recognition server 604, the speech synthesis server 606, and the smart devices 605 of the plurality of hearing-impaired persons via the communication module 701. Hereinafter, referring to FIG. 7B, an embodiment will be described in which the voice of the lecturer 601 is converted into text and displayed on the smart devices 605 of the plurality of hearing-impaired persons through the lecture local server 603. Referring to FIG. 7C, an embodiment will be described in which question text input from a hearing-impaired person via a smart device 605 is converted into speech by the lecture local server 603 and output through the speaker 607.

FIG. 7B is a diagram for explaining an example of a process in which the voice of a lecturer is converted into text and displayed on a smart device of a hearing-impaired person according to an embodiment of the present invention.

The processor 702 can receive (703) the speech of the speaker 601 through the microphone 602 installed at the lecture hall. The speech of the speaker 601 input via the microphone 602 is referred to as the speaker speech.

The processor 702 may transmit the speaker speech signal to the speech recognition server 604 via the communication module 701 based on the speaker speech (704).

The speech recognition server 604 can recognize (705) the received speaker speech signal and convert it into lecture text. The lecture text means the character string into which the speech is converted by the speech recognition server 604.

The speech recognition server 604 may send (706) the lecture text signal to the lecture local server 603 for the hearing impaired based on the lecture text. The processor 702 may receive the lecture text signal from the speech recognition server 604 via the communication module 701.

The processor 702 may transmit the received lecture text signal to the plurality of smart devices 605 of the plurality of hearing impaired persons through the communication module 701 (707). A plurality of smart devices 605 of a plurality of hearing impaired persons may be connected to the lecture local server 603 of the present invention via a local area network. According to one embodiment, the communication standard in which the above-described language interpretation assistance device 102 for the hearing impaired person is connected to the smart device may be applied to the local area network.

An application installed in the plurality of smart devices 605 of the plurality of hearing-impaired persons may display (708) the lecture text through the displays of the plurality of smart devices 605 based on the lecture text signal received from the lecture local server 603 for the hearing impaired.
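The delivery step 707 is a one-to-many broadcast of the same lecture text over the local area network. A minimal sketch, with the device list and payload pairing as stand-ins for the real short-range transport:

```python
# Sketch of step 707: the lecture local server forwards the lecture text
# converted by the speech recognition server to every connected smart
# device.

def broadcast_lecture_text(lecture_text: str, devices: list) -> list:
    """Pair the lecture text signal with each connected smart device and
    return the per-device payloads to be displayed (step 708)."""
    return [(device, lecture_text) for device in devices]
```

The point of the local server is exactly this fan-out: recognition happens once per utterance on the server, and the result is replicated to every hearing-impaired attendee at once.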

FIG. 7C is a diagram for explaining an example of a process in which text input from a hearing-impaired person is converted into speech and output through a speaker according to an embodiment of the present invention.

The question text can be input (709) by the application from a hearing-impaired person through any one of the plurality of smart devices 605. The question text is the character string that the hearing-impaired person inputs through the smart device by the application.

The application may send the question text signal, based on the question text, from the smart device where the question text was entered to the lecture local server 603 for the hearing impaired (710). The processor 702 may receive the question text signal from that smart device via the communication module 701.

The processor 702 may send the question text signal to the speech synthesis server 606 via the communication module 701 (711).

The speech synthesis server 606 may convert the question text signal into a question speech (712). The question voice refers to the converted voice corresponding to the question text.

The speech synthesis server 606 may transmit the question voice signal to the lecture local server 603 for the hearing impaired based on the converted question voice (713). The processor 702 may receive the question voice signal from the speech synthesis server 606 via the communication module 701.

The processor 702 can output the question voice through the speaker 607 installed in the lecture hall based on the question voice signal (714).
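The question path of steps 709 through 714 can be sketched in the same illustrative style. All names are hypothetical, and the synthesis server is faked with a marker tuple rather than real audio, so this is a model of the message flow only.

```python
class SpeechSynthesisServer:
    """Stands in for the speech synthesis server 606."""
    def synthesize(self, question_text):
        # Step 712: convert the question text into a question voice signal.
        return ("voice", question_text)


class HallSpeaker:
    """Stands in for the speaker 607 installed in the lecture hall."""
    def __init__(self):
        self.played = []

    def play(self, voice_signal):
        # Step 714: output the question voice in the lecture hall.
        self.played.append(voice_signal)


class LectureLocalServer:
    """Stands in for the lecture local server 603 (question direction only)."""
    def __init__(self, synthesis_server, hall_speaker):
        self.synthesis_server = synthesis_server
        self.hall_speaker = hall_speaker

    def on_question_text(self, question_text):
        # Steps 710-711: the question text signal arrives from a smart device
        # and is forwarded to the synthesis server.
        voice_signal = self.synthesis_server.synthesize(question_text)
        # Step 713: receive the question voice and hand it to the hall speaker.
        self.hall_speaker.play(voice_signal)


speaker = HallSpeaker()
LectureLocalServer(SpeechSynthesisServer(), speaker).on_question_text(
    "Could you repeat the last slide?"
)
```

Note how the local server only relays signals; the actual text-to-speech conversion stays on the synthesis server, matching the division of roles described above.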

FIG. 8 is a schematic view illustrating a voice call support application for a hearing impaired person providing a call service for the hearing impaired according to an embodiment of the present invention.

It is difficult for hearing impaired people to carry on voice calls smoothly. A hearing impaired person typically communicates by entering a short sentence into a communication terminal and showing it to the other party, or by conveying his or her intention in sign language through a video call. When a hearing impaired person needs to make a call, a video sign language call or a sign language interpretation call center is used; in such cases, video sign language calls are subject to many restrictions because communication protocols differ from carrier to carrier.

When a hearing impaired person talks with a hearing person 805, the voice call support application 801 for the hearing impaired of the present invention displays the voice spoken by the other party as text, and outputs the text input by the hearing impaired person as voice through the call terminal 804 of the hearing person 805, thereby enabling smooth communication between the hearing impaired person and the hearing person 805.

Referring to FIG. 8, the text input by the hearing impaired person via the smart device is transmitted to the speech synthesis server 802 by the voice call support application 801 for the hearing impaired. The text is converted into voice by the speech synthesis server 802, and the converted voice is transmitted from the smart device of the hearing impaired person to the call terminal 804 of the hearing person 805 by the voice call support application 801 for the hearing impaired. Through the voice output from the call terminal 804 of the hearing person 805, the hearing person 805 can hear the text input by the hearing impaired person as voice. The call terminal 804 includes a smart device and refers to any device capable of a call through communication with a base station or a server.

Referring to FIG. 8, the voice "Where is the address?" input by the hearing person 805 into the call terminal 804 is transmitted to the smart device of the hearing impaired person. The voice "Where is the address?" is transmitted to the speech recognition server 803 by the voice call support application 801 for the hearing impaired of the present invention. The voice "Where is the address?" is converted into text by the speech recognition server 803 and transmitted to the smart device. The text "Where is the address?" is displayed on the display of the smart device by the voice call support application 801 for the hearing impaired. The hearing impaired person can thus read, as a character string on the display of the smart device, the voice delivered by the hearing person 805 who is the communication partner. According to an embodiment of the present invention, the voice call support application 801 for the hearing impaired is installed and executed on the smart device of the hearing impaired person, and the calling party on the other side of the call can converse through ordinary voice conversation. According to one embodiment, when a hearing impaired person places a call through the voice call support application 801 installed on his or her smart device, an announcement may be provided to the calling party explaining that the voice call support application 801 for the hearing impaired is being used.

According to one embodiment, the voice call support application 801 for the hearing impaired providing a call service for the hearing impaired can be stored in the memory of the smart device and executed by the processor of the smart device. Hereinafter, with reference to FIG. 9A, the process of converting the text input by the hearing impaired person into speech and providing it to the hearing person 805 is described in detail. With reference to FIG. 9B, the process of converting the voice input by the hearing person 805, who is the communication partner, into text and displaying it on the smart device of the hearing impaired person is described in detail.

FIG. 9A is a diagram illustrating a process in which a text input from a hearing-impaired person is provided to a calling party as a voice through a voice call application for the hearing impaired according to an embodiment of the present invention.

Referring to FIG. 9A, a first text can be received from the hearing impaired person through the smart device by the voice call support application 801 for the hearing impaired (901). The first text means a character string input by the hearing impaired person through the smart device.

The voice call support application 801 for a hearing impaired user may transmit the first text signal from the smart device to the voice synthesis server 802 based on the first text (902).

The speech synthesis server 802 may receive the first text signal from the smart device and convert it to the first speech through the database based on the received first text signal (903). The first voice means a sound for pronouncing a string of the first text.

The speech synthesis server 802 may transmit the first speech signal to the speech call support application 801 for the hearing impaired based on the first speech (904).

The voice call support application 801 for the hearing impaired may receive the first voice signal from the speech synthesis server 802 and transmit the received first voice signal to the recipient smart device 804 (905). According to one embodiment, the recipient smart device 804 is the communication terminal through which the calling party takes the call; because it is not required to execute the application, it need not be a smartphone and may be any terminal capable of a telephone call.

Based on the first voice signal transmitted from the voice call support application 801 for the hearing impaired, the recipient smart device 804 can output the first voice.
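The FIG. 9A direction (steps 901 through 905) can be sketched as below. This is a hypothetical model, not the application's actual code: the synthesis server is faked with a marker tuple, and the cellular leg to the recipient terminal is modeled as a method call.

```python
class SpeechSynthesisServer:
    """Stands in for server 802: first text signal -> first voice signal."""
    def synthesize(self, first_text):
        # Steps 903-904: convert the first text and return the first voice signal.
        return ("voice", first_text)


class RecipientTerminal:
    """Stands in for recipient terminal 804; any call-capable device suffices."""
    def __init__(self):
        self.output = []

    def receive_voice(self, voice_signal):
        # The terminal outputs the first voice to the hearing person.
        self.output.append(voice_signal)


class VoiceCallSupportApp:
    """Stands in for application 801 on the hearing impaired person's smart device."""
    def __init__(self, synthesis_server, recipient):
        self.synthesis_server = synthesis_server
        self.recipient = recipient

    def on_text_entered(self, first_text):
        # Steps 901-902: take the first text and send it for synthesis.
        first_voice = self.synthesis_server.synthesize(first_text)
        # Step 905: forward the first voice signal to the recipient terminal.
        self.recipient.receive_voice(first_voice)


terminal = RecipientTerminal()
app = VoiceCallSupportApp(SpeechSynthesisServer(), terminal)
app.on_text_entered("I will arrive at three o'clock.")
```

Because only ordinary voice reaches `RecipientTerminal`, the design keeps all special processing on the hearing impaired person's side, which is why the recipient needs no application.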

FIG. 9B is a diagram illustrating a process in which a voice input from the hearing person is converted into text and displayed on the smart device of the hearing impaired person by the voice call application for the hearing impaired according to an embodiment of the present invention.

Referring to FIG. 9B, a second voice may be input via the recipient smart device 804 (907). The recipient smart device 804 may transmit the second voice signal to the voice call support application 801 for the hearing impaired based on the second voice.

The voice call support application 801 for the hearing impaired may receive the second voice signal from the recipient smart device 804 through the smart device of the hearing impaired person (908).

The voice call support application 801 for the hearing impaired user may transmit the second voice signal to the voice recognition server 803 through the smart device of the hearing impaired person (909).

The speech recognition server 803 may convert the received second voice signal into the second text through the database (910). The second text means the character string corresponding to the second voice. The speech recognition server 803 may transmit the second text signal to the smart device of the hearing impaired person based on the converted second text.

The voice call support application 801 for the hearing impaired user may receive the second text signal from the voice recognition server 803 through the smart device of the hearing impaired person (911).

The voice call support application 801 for the hearing impaired user may display the second text through the display of the smart device of the hearing impaired based on the second text signal (912).
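The reverse, FIG. 9B direction (steps 907 through 912) can be sketched the same way. Again all names are hypothetical and the recognition server is faked with a fixed transcript table; only the message flow is modeled.

```python
class SpeechRecognitionServer:
    """Stands in for server 803: second voice signal -> second text signal."""
    def __init__(self, transcripts):
        self.transcripts = transcripts  # assumed mapping: voice signal -> text

    def recognize(self, voice_signal):
        # Step 910: convert the second voice signal into the second text.
        return self.transcripts[voice_signal]


class VoiceCallSupportApp:
    """Stands in for application 801 on the hearing impaired person's smart device."""
    def __init__(self, recognition_server):
        self.recognition_server = recognition_server
        self.display = []

    def on_voice_received(self, second_voice_signal):
        # Steps 908-909: receive the second voice signal from the recipient
        # terminal and forward it to the recognition server.
        second_text = self.recognition_server.recognize(second_voice_signal)
        # Steps 911-912: receive the second text signal and show it on screen.
        self.display.append(second_text)


app = VoiceCallSupportApp(SpeechRecognitionServer({"sig-2": "Where is the address?"}))
app.on_voice_received("sig-2")
```

Together with the previous sketch, this closes the loop: outbound text becomes voice, inbound voice becomes on-screen text.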

Although not shown, the voice call support application 801 for a hearing impaired person according to an embodiment may transmit a voice corresponding to the boilerplate text input from the hearing impaired person to the communication terminal 804 of the communication counterpart.

According to one embodiment, the voice call support application 801 for the hearing impaired can receive a boilerplate text from the hearing impaired person through the smart device of the hearing impaired person. Based on the entered boilerplate text, the voice call support application 801 for the hearing impaired can transmit the boilerplate text signal from the smart device of the hearing impaired person to the speech synthesis server 802. The speech synthesis server 802 includes a database in which a plurality of boilerplate texts and a plurality of boilerplate voices respectively corresponding to the plurality of boilerplate texts can be recorded and maintained. The speech synthesis server 802 may look up, through the database, the second voice corresponding to the received boilerplate text from among the plurality of boilerplate voices and transmit the second voice signal to the smart device of the hearing impaired person based on the second voice. According to another embodiment, the database described above is included in the smart device of the hearing impaired person, and the voice call support application 801 for the hearing impaired can look up the second voice corresponding to the entered boilerplate text locally through the database of the smart device.

The voice call support application 801 for the hearing impaired can receive the second voice signal from the speech synthesis server 802 through the smart device of the hearing impaired person. The voice call support application 801 for the hearing impaired can transmit the second voice signal from the smart device of the hearing impaired person to the communication terminal 804 of the communication partner. The communication terminal 804 of the communication partner can output the second voice based on the received second voice signal. According to one embodiment, by allowing the hearing impaired person to select a boilerplate text with a simple input, the voice call support application 801 for the hearing impaired can simplify the string-entry procedure and thus make text input more convenient.
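The boilerplate shortcut reduces to a database lookup of a pre-recorded voice keyed by the exact boilerplate text, whether that database lives on the synthesis server or, per the alternative embodiment, locally on the smart device. The sketch below is illustrative only; the dictionary contents, names, and sample phrases are hypothetical.

```python
# Hypothetical database: boilerplate text -> pre-synthesized boilerplate voice.
BOILERPLATE_DB = {
    "Hello.": ("voice", "Hello."),
    "Please hold on a moment.": ("voice", "Please hold on a moment."),
}


def lookup_boilerplate_voice(boilerplate_text, database=BOILERPLATE_DB):
    """Return the pre-synthesized voice for a boilerplate text, or None when
    the phrase is not registered and full synthesis would be needed instead."""
    return database.get(boilerplate_text)


voice = lookup_boilerplate_voice("Please hold on a moment.")
```

Keeping such a table on the device avoids a round trip to the synthesis server for frequently used phrases, which is the convenience the embodiment describes.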

The method according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and configured for the embodiments or may be those known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described methods, and/or the components of the described systems, structures, devices, circuits, and the like are combined or coupled in a form different from the described methods, or are replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

102: Language Interpreter for the Deaf
103: Smart Devices
104:

Claims (21)

A language interpretation assistance device for providing language interpretation for a hearing impaired person through a smart device and a short-range wireless communication network, the device comprising:
A microphone;
A speaker;
A communication module; And
And a processor coupled to the microphone, the speaker, and the communication module,
The processor comprising:
Receiving a first voice signal, which is converted based on a first text input by an application installed in the smart device, from the smart device via the communication module;
Upon receiving the first voice signal, deactivating the microphone and activating the speaker; And
Outputting a first voice through the speaker based on the first voice signal;
Wherein the first text is input by the user via the smart device through the application, the first text signal is sent from the smart device to the speech synthesis server by the application based on the first text, and the first text signal is converted into the first voice signal by the speech synthesis server and transmitted to the smart device,
Language Interpreting Device for the Deaf.
The device according to claim 1,
Further comprising an action button,
Wherein the processor is coupled to the action button,
The processor comprising:
Receiving, from a user, a first input for driving the application via the action button; And
Transmitting a first command to the smart device via the communication module based on the first input
Wherein the application is activated by the first instruction,
Language Interpreting Device for the Deaf.
The device according to claim 1,
The processor comprising:
Activating the microphone and deactivating the speaker;
Receiving a second voice through the microphone; And
Transmitting a second voice signal to the smart device via the communication module based on the second voice
Wherein the second voice signal is transmitted to the voice recognition server by the application, the second voice signal is converted into a second text signal by the voice recognition server, the second text signal is transmitted to the smart device, and the second text is displayed on the display of the smart device by the application based on the second text signal,
Language Interpreting Device for the Deaf.
The device according to claim 1,
The processor comprising:
Receiving, via the communication module from the smart device, a boilerplate voice signal that is converted based on the boilerplate text input by the application;
Disabling the microphone and activating the speaker upon receiving the boilerplate voice signal; And
Outputting a voice of the boilerplate through the speaker based on the boilerplate voice signal;
Wherein the boilerplate text is input from the user via the smart device and the boilerplate text signal is sent from the smart device to the speech synthesis server by the application based on the boilerplate text,
The voice synthesizing server retrieves the boilerplate voice corresponding to the boilerplate text through a database and transmits the boilerplate voice signal to the smart device based on the retrieved boilerplate voice.
Language Interpreting Device for the Deaf.
5. The device of claim 4,
A plurality of boilerplate texts and a plurality of boilerplate voices corresponding respectively to the plurality of boilerplate texts are recorded and held in the database,
Wherein the boilerplate voice is retrieved from among the plurality of boilerplate voices,
Language Interpreting Device for the Deaf.
Maintaining a database for recording a plurality of emotional expression inputs and a plurality of audio output patterns respectively corresponding to the plurality of emotional expression inputs;
Extracting the emotional expression input from user input including emotional expression input and text input;
Querying the speech output pattern corresponding to the extracted emotional expression input through the database; And
Synthesizing the text input into an output speech based on the speech output pattern
A speech synthesis method of a speech synthesis server.
The method according to claim 6,
Wherein the user input is input from a user via a smart device and the output voice is output through a language interpretation assistance device for a hearing impaired person connected to the smart device via a local area wireless communication network,
Wherein the emotional expression input is input before or after the text input and includes emoticons, special characters, and boilerplate,
Wherein the speech output pattern includes a pitch, a length, and an intensity of the output speech,
A speech synthesis method of a speech synthesis server.
Maintaining a database that records a plurality of input speech features and a plurality of text output patterns respectively corresponding to the plurality of input speech features;
Analyzing an input speech characteristic of the input speech based on the input speech;
Querying a text output pattern corresponding to the input speech feature through the database; And
Converting the input speech into output text based on the text output pattern
Text conversion method of speech recognition server.
9. The method of claim 8,
Wherein the input voice is input through a language interpretation assistance device for a hearing impaired person connected to a smart device via a local area wireless network and is transmitted to the smart device,
Wherein the output text is displayed through a display of the smart device,
Wherein the input voice feature includes a size of the input voice, a speed of the input voice, and a gender of the input voice,
Wherein the text output pattern includes a size of the output text, an interval of the output text, and a color of the output text.
Text conversion method of speech recognition server.
1. An alarm device for a hearing impaired person, which is connected to a smart device via a short-range wireless communication network,
A sensor for sensing sound;
An output section;
Communication module; And
And a processor coupled to the sensor, the output, and the communication module,
The processor comprising:
Comparing a magnitude of the first sound signal with a predetermined magnitude based on a first sound sensed through the sensor;
Transmitting the first sound signal to the smart device through the communication module when the size of the first sound signal is larger than the predetermined size;
Receiving a first alarm signal transmitted from the smart device by an application installed in the smart device based on the first sound signal; And
Performing a first alarm through the output based on the first alarm signal
The application generates a first pattern corresponding to the first sound signal based on the first sound signal, looks up a first pattern sound corresponding to the first pattern in the memory of the smart device, and transmits the first alarm signal based on the first alarm corresponding to the first pattern sound,
Alarm device for the hearing impaired.
11. The device of claim 10,
The output comprising a vibrator, an LED, and an LCD,
Alarm device for the hearing impaired.
12. The device of claim 10,
Wherein the application outputs a description of the first pattern sound through a display of the smart device,
Alarm device for the hearing impaired.
13. The device of claim 10,
Wherein the memory stores a plurality of pattern sounds and a plurality of patterns respectively corresponding to the plurality of pattern sounds,
Wherein the first pattern sound is searched among the plurality of pattern sounds recorded in the memory,
Wherein the plurality of pattern sounds include at least one of a car horn sound, an animal sound, a baby sound, a beep sound, a bell sound, and a siren sound.
Alarm device for the hearing impaired.
14. The device of claim 13,
Wherein the plurality of alarms corresponding to the plurality of pattern sounds are recorded in the memory,
Wherein the first alarm is one of the plurality of alarms recorded in the memory,
Alarm device for the hearing impaired.
15. The device of claim 14,
The plurality of alarms including at least one of a vibration and a blink of a lamp,
Wherein the period, length, intensity, and frequency of the vibration and the flicker correspond to the plurality of alarms, respectively,
Alarm device for the hearing impaired.
16. The device of claim 10,
The smart device is connected to an update server through a communication network,
The application comprises:
Receiving update information from the update server, updating the plurality of pattern sounds and the plurality of patterns based on the update information,
Wherein the update server maintains a database for recording a plurality of pattern sounds and a plurality of patterns, generates and transmits the update information through the database,
Wherein the update information is transmitted in response to an update request by the application, transmitted in real time or periodically,
Alarm device for the hearing impaired.
A lecture local server for the hearing impaired installed in a lecture hall for providing a lecture to a plurality of hearing impaired persons in the lecture hall, the server comprising:
A communication module; And
A processor coupled to the communication module,
The processor comprising:
Receiving a speaker voice through a microphone provided to the speaker;
Transmitting a speaker speech signal to the speech recognition server through the communication module based on the speaker speech;
Receiving a lecture text signal from the speech recognition server through the communication module; And
Transmitting the lecture text signal through the communication module to a plurality of smart devices of the plurality of hearing impaired persons
The speaker speech signal is converted into a lecture text by the speech recognition server, the lecture text signal is transmitted from the speech recognition server based on the lecture text,
Wherein the plurality of smart devices are connected to the local server via a local area network and a lecture text is displayed on the display of the plurality of smart devices based on the lecture text signal by an application installed in the plurality of smart devices,
Local server for the hearing impaired.
18. The server of claim 17,
The processor comprising:
Receiving a question text signal from any one of the plurality of smart devices through the communication module;
Transmitting the question text signal to the speech synthesis server through the communication module;
Receiving a question voice signal from the speech synthesis server through the communication module; And
Outputting a question voice through a speaker installed in the lecture hall based on the question voice signal
Wherein the question text is input via any one of the plurality of smart devices by the application and the question text signal is transmitted by the application from any one of the plurality of smart devices based on the question text,
Wherein the question text signal is converted into the question voice by the voice synthesis server and the question voice signal is transmitted from the voice synthesis server based on the question voice,
Local server for the hearing impaired.
A voice call support application for a hearing impaired person providing a call service for a hearing impaired person,
The application being stored in a memory of the smart device, being executed by the processor of the smart device,
The application comprises:
Receiving a first text from a user via the smart device;
Transmitting a first text signal from the smart device to a speech synthesis server based on the first text;
Receiving, through the smart device, the first voice signal converted from the first text signal by the speech synthesis server; And
Transmitting the first voice signal to the recipient smart device via the smart device
Based on the first voice signal, a first voice is output via the recipient smart device,
Wherein the speech synthesis server converts the first text signal into the first speech through a database and transmits the first speech signal based on the first speech,
Voice call support application for the hearing impaired.
20. The application of claim 19,
The application comprises:
Receiving a second voice signal from the recipient smart device;
Transmitting the second voice signal to the voice recognition server through the smart device;
Receiving, through the smart device, the second text signal converted by the speech recognition server; And
Displaying a second text on the display of the smart device based on the second text signal,
Wherein the speech recognition server converts the second voice signal into the second text via a database and transmits the second text signal based on the second text,
Voice call support application for the hearing impaired.
21. The application of claim 20,
The application comprises:
Receiving a boilerplate text from a user via the smart device;
Transmitting, from the smart device, a boilerplate text signal to the speech synthesis server based on the boilerplate text;
Receiving, through the smart device, the second voice signal retrieved and converted by the speech synthesis server; And
Transmitting the second voice signal to the recipient smart device via the smart device
A second voice is output via the recipient smart device based on the second voice signal,
Wherein the database stores a plurality of boilerplate texts and a plurality of boilerplate voices respectively corresponding to the plurality of boilerplate texts, and the speech synthesis server retrieves, through the database, the second voice corresponding to the received boilerplate text from among the plurality of boilerplate voices and transmits the second voice signal based on the second voice,
Voice call support application for the hearing impaired.
KR1020150077983A 2015-06-02 2015-06-02 Language interpreter, speech synthesis server, speech recognition server, alarm device, lecture local server, and voice call support application for deaf auxiliaries based on the local area wireless communication network KR101846218B1 (en)
