WO2021080362A1 - Language processing system using earset - Google Patents

Language processing system using earset

Info

Publication number
WO2021080362A1
WO2021080362A1
Authority
WO
WIPO (PCT)
Prior art keywords
language
function
interpretation
voice
earset
Prior art date
Application number
PCT/KR2020/014544
Other languages
French (fr)
Korean (ko)
Inventor
정승규
김경일
Original Assignee
주식회사 이엠텍
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020190133599A external-priority patent/KR102285877B1/en
Priority claimed from KR1020190133598A external-priority patent/KR102219494B1/en
Application filed by 주식회사 이엠텍 filed Critical 주식회사 이엠텍
Publication of WO2021080362A1 publication Critical patent/WO2021080362A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/04 Electrically-operated educational appliances with audible presentation of the material to be studied
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones

Definitions

  • the present invention relates to a language processing system using an earset, and in particular to an earset that blocks the inflow of external noise to obtain a clearer voice of the user and performs interpretation of the acquired voice, or translation and learning functions for the acquired voice.
  • the best practice for a foreign language is repetitive and accurate writing correction and hearing and speaking practice with experts, but this can also be supplemented by using a learning device.
  • it is an object of the present invention to provide a language processing system using an earset, and in particular an earset that blocks the inflow of external noise to obtain a clearer voice of the user and performs interpretation of the acquired voice, or translation and learning functions for the acquired voice.
  • the language processing system using the earset of the present invention comprises a first earset or a second earset; a wearable sound device performing wireless communication with the first earset or wired communication with the second earset; and an electronic communication device including a display unit and a communication unit that wirelessly communicates with the first earset or the wearable sound device and communicates with a translation server. The electronic communication device generates processing target information, including a voice signal in the processing target language received from the first earset or the wearable sound device, and transmits it to the translation server; it then receives processing information including the translated text corresponding to the transmitted processing target information, or a voice signal converted from the translated text, and performs a language processing function that expresses it visually through the display unit or aurally.
  • the language processing function includes an interpretation function
  • the processing information corresponds to the interpretation information
  • when the wearable sound device and the electronic communication device are in a communication-enabled state, the wearable sound device compares the voice signal from the first earset or the second earset with a reference voice signal; if the voice signal corresponds to the reference voice signal, an interpretation function control command corresponding to that reference voice signal is generated and transmitted to the electronic communication device, and the electronic communication device receives the interpretation function control command and preferably starts or ends the interpretation function in response.
  • when the electronic communication device receives an interpretation function control command including the start of the interpretation function while the application for the interpretation function is operating as a foreground service in the standby state, it wakes up the application for the interpretation function.
  • when the electronic communication device receives an interpretation function control command including termination of the interpretation function while performing the interpretation function, it is preferable to terminate the interpretation function while the application for the interpretation function operates as a foreground service.
  • the electronic communication device performs an interpretation function by opening a voice communication channel with the wearable sound device.
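The application life cycle described in the bullets above can be sketched as a small state machine: the app idles as a foreground service, wakes on a start command without any button or touch input, and returns to standby on an end command. This is a hypothetical illustration; the class and command names (`InterpretationApp`, `ControlCommand`) are assumptions, not names from the patent.

```python
from enum import Enum


class ControlCommand(Enum):
    START_INTERPRETATION = "start"
    END_INTERPRETATION = "end"


class InterpretationApp:
    """Toy model of the interpretation application's standby/active states."""

    def __init__(self):
        self.state = "standby"  # foreground-service standby, waiting for a command

    def on_control_command(self, cmd: ControlCommand) -> str:
        if cmd is ControlCommand.START_INTERPRETATION and self.state == "standby":
            self.state = "interpreting"  # wake up: would open the voice channel
        elif cmd is ControlCommand.END_INTERPRETATION and self.state == "interpreting":
            self.state = "standby"       # back to foreground-service standby
        return self.state


app = InterpretationApp()
print(app.on_control_command(ControlCommand.START_INTERPRETATION))  # interpreting
print(app.on_control_command(ControlCommand.END_INTERPRETATION))    # standby
```

Keeping the app resident as a foreground service is what allows the wake-up to happen without user interaction when the control command arrives.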
  • the language processing function includes a language learning function
  • the processing information corresponds to the translation information
  • the electronic communication device includes an input unit, and after displaying the translation information, it preferably acquires and stores the user's selection of success or failure in learning the first language through the input unit.
  • the electronic communication device preferably receives translation target text in the second language through the input unit, transmits the translation target text information to the translation server, receives the translated text in the first language corresponding to the transmitted translation target text information, and displays it through the display unit.
  • the first and second earsets preferably include a housing that forms the exterior, defines an installation space in which parts are installed, and has a soundproofing hole; a sound reproduction unit installed in the installation space and emitting sound; a voice microphone installed in the installation space; and a conduit, installed in the housing, that transmits the voice passing through the soundproofing hole to the voice microphone.
  • the housing includes an insertion tube inserted into the user's external ear canal, and it is preferable that the insertion tube serves as a soundproofing device.
  • it preferably further includes a chamber forming a closed space surrounding the voice microphone, and the conduit is preferably formed in the chamber to transmit the voice passing through the insertion tube to the voice microphone.
  • the chamber preferably has an upper bracket that fixes the voice microphone in the installation space and a lower bracket that engages with the upper bracket to form a space; the sound reproduction unit is installed between the upper bracket and the lower bracket, and the closed space surrounding the sound reproduction unit is preferably separated from the installation space of the voice microphone.
  • the housing has a back hole communicating with the rear surface of the sound reproduction unit.
  • preferably, at least one bracket capable of tuning acoustic characteristics is installed between the sound reproduction unit and the housing.
  • the conduit extends into the soundproofing hole.
  • a clearer voice of the user can be obtained, and interpretation for the obtained voice can be performed.
  • the interpretation function is performed by executing an application for an interpretation function by the user's voice.
  • the present invention has an effect of blocking the inflow of external noise to obtain a clearer voice of the wearer, and helping the wearer learn language by performing translation on the acquired voice.
  • the voice transmitted from the user's ear can be input more clearly.
  • the present invention has an advantage in that it is possible to prevent external noise from flowing into the voice microphone by forming a chamber that closes the rear of the voice microphone.
  • FIG. 1 is a control configuration diagram of a language processing system using an ear set according to the present invention.
  • FIG. 2 is a cross-sectional view of an earset according to a first embodiment of the present invention.
  • FIG. 3 is an exploded view of an earset according to a second embodiment of the present invention.
  • FIG. 4 is a perspective view of an earset according to a second embodiment of the present invention.
  • FIG. 5 is a cross-sectional view of an earset according to a second embodiment of the present invention.
  • FIG. 6 is a cross-sectional view of an earset according to a third embodiment of the present invention.
  • expressions such as “A or B”, “at least one of A or/and B”, or “one or more of A or/and B” may include all possible combinations of the items listed together.
  • “A or B”, “at least one of A and B”, or “at least one of A or B” may refer to (1) including at least one A, (2) including at least one B, or (3) including both at least one A and at least one B.
  • expressions such as “first”, “second”, “the first”, or “the second” used in this document may modify various elements regardless of their order and/or importance, and are used to distinguish one element from another, but do not limit the elements.
  • a first user device and a second user device may represent different user devices regardless of order or importance.
  • a first component may be referred to as a second component, and similarly, a second component may be renamed to a first component.
  • when some component (e.g., a first component) is referred to as being (operatively or communicatively) coupled with or connected to another component (e.g., a second component), the component may be directly connected to the other component or may be connected through yet another component (e.g., a third component).
  • when a component (e.g., a first component) is referred to as being “directly connected” to another component (e.g., a second component), it may be understood that no other component (e.g., a third component) exists between the two components.
  • the expression “a device configured to” may mean that the device is “capable of” operating together with other devices or parts.
  • “a processor configured (or set) to perform A, B, and C” may mean a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a generic-purpose processor (e.g., a CPU or an application processor) capable of performing the corresponding operations by executing one or more software programs stored in a memory device.
  • the language processing system in the present invention performs a language processing function, but it should be recognized that the language processing function includes an interpretation function and a translation function, and additionally includes a language learning function.
  • the interpretation target language corresponds to the language spoken by the user, i.e., the language to be translated or interpreted, and the interpretation destination language corresponds to the language that is finally conveyed, visually or aurally, to the conversation partner.
  • for example, when the interpretation destination language is English, the interpretation target language may be Korean.
  • in the language learning function, the first language corresponds to the language to be learned (the processing target language), and the second language corresponds to the language used by the user. For example, when a user who speaks Korean wants to learn English, the first language is English and the second language is Korean.
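The language-role assignment above can be made concrete with a toy mapping. This is purely illustrative; the function name and the BCP 47-style language tags are assumptions, not part of the patent.

```python
def language_roles(user_lang: str, learned_lang: str) -> dict:
    """Assign first/second language roles for the learning function:
    the first language is the one being studied (the processing target),
    the second is the user's own language."""
    return {"first": learned_lang, "second": user_lang}


# A Korean speaker learning English:
roles = language_roles(user_lang="ko", learned_lang="en")
print(roles)  # {'first': 'en', 'second': 'ko'}
```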
  • FIG. 1 is a control configuration diagram of a language processing system using an ear set according to the present invention.
  • the language processing system includes a first earset 10 that performs wireless communication with the wearable sound device 30 or the electronic communication device 40; a second earset 20 electrically connected to the wearable sound device 30 through wired communication; a wearable sound device 30 that performs wireless communication with the electronic communication device 40 and the first earset 10 and performs wired communication with the second earset 20; an electronic communication device 40 that performs wireless communication with the first earset 10 and the wearable sound device 30 and communicates with the translation server 50 through the network 60; and a translation (or interpretation) server 50 that communicates with the electronic communication device 40, receives the interpretation target information (or processing target information), translates the voice included in the voice signal, and provides interpretation information (or processing information) including a voice signal converted from the translated text.
  • the first earset 10 includes a microphone 11 that acquires the user's voice, a speaker 13 (or receiver) that emits sound by receiving an electrical signal, and a communication module 15 (for example, a wireless communication module such as a Bluetooth module) for performing wireless communication with the wearable sound device 30.
  • the power supply unit (not shown) that supplies power, and the configuration or function of the microphone 11 and the speaker 13, correspond to technology naturally recognized by a person skilled in the art to which the present invention belongs, and their description is omitted.
  • in FIGS. 2 to 6, the mechanical structure of the first earset 10 is described in detail.
  • the communication module 15 performs a phone call function and a sound reproduction function, as already known to those skilled in the art to which the present invention pertains, and additionally performs the interpretation function according to the present invention.
  • in a communication connection state with the wearable sound device 30, the communication module 15 transmits the user's voice signal obtained from the microphone 11 to the wearable sound device 30 or the electronic communication device 40, receives an electrical signal including an audio signal from the wearable sound device 30 or the electronic communication device 40, and emits sound through the speaker 13.
  • the second earset 20 includes a microphone 21 that acquires the user's voice, a speaker 23 (or receiver) that emits sound by receiving an electrical signal, and a connection cable 24 for performing wired communication with the wearable sound device 30.
  • the configuration or function of the microphone 21, the speaker 23, and the connection cable 24 corresponds to a technology that is naturally recognized by a person skilled in the art to which the present invention belongs, and the description thereof will be omitted.
  • in FIGS. 2 to 6, the mechanical structure of the second earset 20 is described in detail.
  • the wearable sound device 30 is a device that includes a wireless communication function, such as a neckband type sound conversion device, and performs a phone call function, a sound reproduction function, and the like.
  • the wearable sound device 30 includes a microphone 31 that acquires external sound; a speaker 33 that emits sound by receiving an electrical signal; a communication unit 35 for performing wireless communication (for example, Bluetooth communication) with the first earset 10 and the electronic communication device 40; an input unit 37 for acquiring input from the user; and a data processor 39 that selectively performs a phone call function, a sound reproduction function, and an interpretation function by controlling the microphone 31, the speaker 33, the communication unit 35, and the input unit 37.
  • the power supply unit (not shown) that supplies power, and the configuration or function of the microphone 31, the speaker 33, the communication unit 35, and the input unit 37, correspond to technology naturally recognized by those skilled in the art to which the present invention belongs, and their description is omitted.
  • the data processor 39 includes a processor (for example, a CPU, MCU, or microprocessor) that performs the phone call function and the sound reproduction function, as already known to those of ordinary skill in the art to which the present invention pertains, and additionally the interpretation function according to the present invention, together with a storage space (e.g., memory) for storing voice signals, interpretation performance information (processing performance information), and the like.
  • the interpretation performance information includes at least reference voice signals for identifying a voice signal for starting the interpretation function (e.g., a voice signal for 'interpretation start') and a voice signal for ending the interpretation function (e.g., a voice signal for 'interpretation end').
  • the wearable sound device 30 may maintain a state capable of communicating with at least one of the first and second earsets 10 and 20 (e.g., a wireless communication connection state or a wired communication connection state).
  • in a communication connection state with the first earset 10, the data processor 39 causes the communication module 15 to transmit the voice signal acquired through the microphone 11 to the communication unit 35.
  • the data processor 39 acquires a voice signal from the microphone 11 or 21 inserted into the user's ear and a voice signal from the microphone 31 facing away from the user.
  • the voice signal obtained from the microphone 11 or 21 (the first voice signal) is treated as the user's voice, and the voice signal obtained from the microphone 31 (the second voice signal) is treated as the voice of the conversation partner.
  • when transmitting the interpretation target information, the data processor 39 may also transmit speaker identification information capable of identifying each of the first and second voice signals.
  • while performing a mode or function other than the phone call function and the sound reproduction function (for example, a standby mode or the interpretation function), the data processor 39 compares the voice signal from the first earset 10 or the second earset 20 with the reference voice signals, and determines whether the voice signal includes or corresponds to the reference voice signal for starting or ending the interpretation function. If it does, the data processor 39 generates an interpretation function control command corresponding to the start or end of the interpretation function and applies it to the electronic communication device 40.
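The comparison step above — checking an incoming voice signal against stored reference signals and emitting the matching control command — can be sketched as follows. Real devices would compare audio features rather than text; this text-level version, with assumed phrase and command names, is purely illustrative.

```python
# Reference phrases the wearable device's data processor listens for, mapped
# to the control command it would send to the electronic communication device.
REFERENCE_PHRASES = {
    "interpretation start": "START_INTERPRETATION",
    "interpretation end": "END_INTERPRETATION",
}


def to_control_command(utterance: str):
    """Return a control command if the utterance contains a reference phrase,
    otherwise None (ordinary speech triggers no command)."""
    text = utterance.lower()
    for phrase, command in REFERENCE_PHRASES.items():
        if phrase in text:
            return command
    return None


print(to_control_command("Okay, interpretation start please"))  # START_INTERPRETATION
print(to_control_command("How is the weather today?"))          # None
```

Only a match produces a command; everything else is passed through as ordinary voice data, which is why the check runs continuously in standby and interpretation modes.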
  • for the interpretation function started by the electronic communication device 40, the data processor 39 transmits the interpretation target information and receives the interpretation information through a voice communication channel (for example, SCO: Synchronous Connection-Oriented) opened between the communication unit 35 and the communication unit 45. The interpretation function is described in more detail below.
  • the electronic communication device 40 corresponds to an information communication device with a communication function, such as a smartphone or tablet, and includes an input unit 41 that acquires input from the user (e.g., selection of the start or end of the application for the interpretation function, selection of the start or end of the interpretation function, and selection of the interpretation target and destination languages) and applies it to the data processor 49; a display unit 43 that presents the user interface for the interpretation function visually or aurally; a communication unit 45 for performing communication; a microphone 46 for acquiring voice or sound; and a data processor 49 that performs the phone call function, the sound reproduction function, and the interpretation function according to the present invention.
  • the power supply unit (not shown) that supplies power, and the configuration and function of the input unit 41, the display unit 43, the microphone 46, and the communication unit 45, correspond to technology naturally recognized by those skilled in the art to which the present invention belongs, and their description is omitted.
  • the data processor 49 includes a processor (e.g., a CPU, MCU, or microprocessor) that performs the phone call function, the sound reproduction function, and the interpretation function, together with a storage space (e.g., memory) for storing the application for the interpretation function, the user interface, the interpretation information, and the like.
  • the data processor 49 executes an application for an interpretation function.
  • the application for the interpretation function includes a process of selecting and setting the interpretation target language and/or the interpretation destination language; a process of generating interpretation target information, including the voice information of the user in the interpretation target language, and transmitting it to the translation server 50; and a process of receiving interpretation information, including voice information in the interpretation destination language, from the translation server 50 and transmitting it to the wearable sound device 30.
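The client-side round trip in the bullet above — packaging the captured voice into interpretation target information, sending it to the server, and relaying the returned interpretation — can be sketched as below. `send_to_server` is a stub standing in for the network call; all field and function names are assumptions for illustration.

```python
def send_to_server(target_info: dict) -> dict:
    """Stub for the translation server: echoes a fake interpreted payload."""
    return {
        "speaker_id": target_info["speaker_id"],
        "translated_text": f"<{target_info['target_lang']} translation>",
        "tts_audio": b"\x00\x01",  # placeholder for synthesized speech bytes
    }


def interpret_once(voice: bytes, speaker_id: str, src: str, dst: str) -> dict:
    """One interpretation cycle as performed by the electronic communication device."""
    target_info = {
        "voice": voice,
        "speaker_id": speaker_id,  # lets the receiver route the returned audio
        "source_lang": src,
        "target_lang": dst,
    }
    interpretation = send_to_server(target_info)
    return interpretation          # would then be relayed to the wearable device


result = interpret_once(b"...", "user", "ko", "en")
print(result["translated_text"])   # <en translation>
```

Carrying the speaker identifier through the round trip is what later lets the wearable device decide which transducer should play the converted voice.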
  • the data processor 49 activates or executes the application for the interpretation function as a foreground service according to an execution input from the input unit 41, and maintains a standby state in which the phone call function and the sound reproduction function are not performed.
  • when an interpretation function control command (processing function control command; for example, an interpretation function start command) is received from the wearable sound device 30, the application for the interpretation function wakes up without an additional button or touch input from the user.
  • when the data processor 49 receives an interpretation function control command (for example, an interpretation function termination command) from the wearable sound device 30 while the interpretation function is being performed, the application for the interpretation function is terminated, or the interpretation function is terminated while the application continues to operate as a foreground service, without an additional button or touch input from the user; the device then operates in the standby state or performs the phone call function or the sound reproduction function.
  • while performing the interpretation function, the data processor 49 controls the communication unit 45 to transmit the interpretation target information, including the voice information received from the first earset 10 or the wearable sound device 30, to the translation server 50 through the network 60, and to receive interpretation information in the interpretation destination language from the translation server 50 through the network 60. The interpretation function is described in detail below.
  • the translation server 50 is a server including an STT (Speech to Text) function (extracting the voice information included in the interpretation target information, recognizing it, and converting it to text), and/or a function of translating the text to generate translated text, and/or a TTS (Text to Speech) function (synthesizing the text into speech). Such a translation server 50 corresponds to technology naturally recognized by a person skilled in the art to which the present invention belongs, and its detailed description is omitted.
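The server-side pipeline described above chains three stages: STT recognizes the received voice, machine translation converts the text, and TTS synthesizes the result back to speech. The stage functions below are placeholders (a toy dictionary and byte-encoding), not real engine APIs, so the sketch only shows the data flow.

```python
def stt(voice: bytes, lang: str) -> str:
    """Placeholder speech recognition: always 'hears' the same word."""
    return "hello"


def translate(text: str, src: str, dst: str) -> str:
    """Placeholder machine translation using a one-entry toy dictionary."""
    return {"hello": "annyeonghaseyo"}.get(text, text)


def tts(text: str, lang: str) -> bytes:
    """Placeholder speech synthesis: returns the text as bytes."""
    return text.encode("utf-8")


def handle_interpretation_request(voice: bytes, src: str, dst: str) -> dict:
    """STT -> translate -> TTS, as the translation server would chain them."""
    text = stt(voice, src)
    translated = translate(text, src, dst)
    return {"translated_text": translated, "audio": tts(translated, dst)}


out = handle_interpretation_request(b"...", "en", "ko")
print(out["translated_text"])  # annyeonghaseyo
```

Returning both the translated text and the synthesized audio matches the patent's processing information, which may be displayed visually or emitted aurally.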
  • STT Seech to Text
  • TTS Text to Speech
  • the network 60 corresponds to a system for performing wired communication and/or wireless communication, and corresponds to a technology that is naturally recognized by a person skilled in the art to which the present invention pertains, and a detailed description thereof is omitted.
  • the language processing system performs the interpretation function through the following process.
  • the wearable sound device 30 is in a state capable of communicating with the first earset 10 and/or the second earset 20, and the electronic communication device 40 is capable of communicating with the first earset 10 or the wearable sound device 30.
  • the data processor 49 operates an application for an interpretation function in a foreground service state.
  • when the data processor 49 obtains an interpretation function start command from the wearable sound device 30 through the communication unit 45 while maintaining the standby state, the application for the interpretation function wakes up and starts the interpretation function.
  • the data processor 49 enables the user to set the interpretation target language and/or the interpretation destination language by voice or by input through the input unit 41.
  • the data processor 49 stores information on the interpretation target language (e.g., the type of language) and information on the interpretation destination language (e.g., the type of language).
  • the data processor 49 controls the communication unit 45 to open a voice communication channel with the communication unit 35 to enable transmission and reception of interpretation target information and interpretation information.
  • while performing the interpretation function, the data processor 49 receives the voice signal (the first or second voice signal) and/or the speaker identification information from the first earset 10 or the wearable sound device 30, and controls the communication unit 45 to transmit the interpretation target information, including the voice signal and/or the speaker identification information, to the translation server 50 through the network 60.
  • the translation server 50 converts the voice signal included in the interpretation target information into text, translates the text, converts the translated text into a voice signal, and transmits interpretation information, including the converted voice signal and/or the speaker identification information, to the electronic communication device 40.
  • the data processor 49 receives interpretation information through the communication unit 45 and transmits it to the wearable sound device 30.
  • the data processor 39 receives the interpretation information through the communication unit 35 and, according to the speaker identification information included in the interpretation information, applies the converted voice signal to the speaker 33 or to the first or second earset 10, 20. That is, if the speaker identification information indicates the user's voice signal, the conversation partner must hear the converted voice signal, so it is emitted through the speaker 33; if the speaker identification information indicates the conversation partner's voice signal, the user must hear the converted voice signal, so it is transmitted or applied to the first or second earset 10, 20.
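The routing rule above can be stated in a few lines: audio interpreted from the user's speech plays on the outward-facing speaker 33 for the conversation partner, while audio interpreted from the partner's speech goes to the user's earset. The device labels and identifiers below are illustrative stand-ins.

```python
def route_interpretation(speaker_id: str) -> str:
    """Decide which transducer plays the converted voice signal,
    based on whose speech the interpretation came from."""
    if speaker_id == "user":
        return "speaker_33"  # partner listens on the wearable device's speaker
    elif speaker_id == "partner":
        return "earset"      # user listens through the first or second earset
    raise ValueError(f"unknown speaker id: {speaker_id!r}")


print(route_interpretation("user"))     # speaker_33
print(route_interpretation("partner"))  # earset
```

The speaker identification information carried through the whole pipeline exists precisely so that this one branch can be taken on the wearable device.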
  • while sound from the interpretation information corresponding to the interpretation target information transmitted immediately before is being emitted, the data processor 39 does not transmit new interpretation target information to the electronic communication device 40. This prevents a voice signal that has already been interpreted from being included in the interpretation target information again.
  • when the data processor 49 receives an interpretation function termination command from the wearable sound device 30 while performing the interpretation function, it terminates the application for the interpretation function, or operates the application in a foreground service state to terminate the interpretation function, and then operates in a standby state or performs a phone call function or a sound reproduction function.
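The application lifecycle described above (standby as a foreground service, wake-up on a start command, return to the foreground-service state on an end command) can be sketched as a small state machine. The state names are illustrative assumptions, not patent terminology.

```python
class InterpretationApp:
    """Minimal sketch of the interpretation application's lifecycle on
    the electronic communication device (40)."""

    def __init__(self):
        # Standby: the app idles as a foreground service.
        self.state = "foreground_service"

    def handle_command(self, command: str) -> str:
        if command == "start" and self.state == "foreground_service":
            # Wake the application up and begin interpreting.
            self.state = "interpreting"
        elif command == "end" and self.state == "interpreting":
            # Return to the foreground-service (standby) state.
            self.state = "foreground_service"
        return self.state
```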
  • the language processing system includes a first earset 10 that performs wired or wireless communication with the wearable sound device 30 or the electronic communication device 40; a second earset 20 that is electrically connected to the wearable sound device 30 through wired communication; a wearable sound device 30 that performs wireless communication with the electronic communication device 40 and wired communication with the second earset 20; an electronic communication device 40 that performs wireless communication with each of the first earset 10 and/or the wearable sound device 30 and communicates with the translation server 50 through the network 60; and a translation server 50 that receives voice information from the electronic communication device 40 through the network 60, converts the voice included in the voice information into text, translates it, and provides translation information (or processing information) corresponding to the voice information.
  • the first earset 10 includes a microphone 11 for acquiring a voice, a speaker 13 (or a receiver) that emits sound by receiving an electric signal, and a communication module 15 for communicating with the electronic communication device 40 (for example, a wireless communication module such as a Bluetooth module, or a wired cable).
  • the configuration and function of the power supply unit (not shown) that supplies power, the microphone 11, and the speaker 13 correspond to technology naturally recognized by a person skilled in the art to which the present invention belongs, and their description is omitted.
  • with reference to FIGS. 2 to 6, the mechanical structure of the first earset 10 will be described in detail.
  • the communication module 15 performs a phone call function and a sound reproduction function, as already known to those of ordinary skill in the art to which the present invention pertains, and also performs the language learning function according to the present invention.
  • the communication module 15 transmits the voice information including the user's voice signal acquired from the microphone 11 to the electronic communication device 40 when performing the language learning function.
  • the second earset 20 includes a microphone 21 for acquiring a voice, a speaker 23 (or a receiver) that emits sound by receiving an electric signal, and a connection cable 24 (for example, a wired cable, etc.) for performing wired communication with the wearable sound device 30.
  • the configuration or function of the microphone 21, the speaker 23, and the connection cable 24 corresponds to a technology that is naturally recognized by a person skilled in the art to which the present invention belongs, and the description thereof will be omitted.
  • with reference to FIGS. 2 to 6, the mechanical structure of the second earset 20 will be described in detail.
  • the wearable sound device 30 is a device that includes a wireless communication function, such as a neckband type sound conversion device, and performs a phone call function, a sound reproduction function, and the like.
  • the wearable sound device 30 includes a microphone 31 that acquires external sound, a speaker 33 that emits sound by receiving an electric signal, a communication unit 35 that performs wireless communication (for example, Bluetooth communication, etc.) with the electronic communication device 40, an input unit 37 that acquires input from the user, and a data processor 39 that controls the microphone 31, the speaker 33, the communication unit 35, and the input unit 37 to selectively perform the phone call function, the sound reproduction function, and the language learning function.
  • the configuration and function of the power supply unit (not shown) that supplies power, the microphone 31, the speaker 33, the communication unit 35, and the input unit 37 correspond to technology naturally recognized by a person skilled in the art to which the present invention belongs, and their description is omitted.
  • the data processor 39 is a processor (e.g., CPU, MCU, microprocessor, etc.) that performs a phone call function and a sound reproduction function, as already known to a person skilled in the art, and also performs the language learning function according to the present invention.
  • the data processor 39 transmits the voice information including the user's voice signal obtained from the microphone 21 to the electronic communication device 40 when performing the language learning function.
  • the language learning functions performed by the data processor 39 are described in detail below.
  • the electronic communication device 40 corresponds to, for example, an information communication device such as a smartphone or tablet having a communication function, and is configured to include: an input unit 41 that acquires input from a user (for example, selection of the start or end of the language learning function, selection of a language to be learned, evaluation selection for translation information (learning success, learning failure), input of words or sentences composed of the language to be learned, etc.) and applies it to the data processor 49; a display unit 43 that visually displays a user interface for the language learning function; a communication unit 45 that performs wireless communication (for example, Bluetooth communication, etc.) with the first earset 10 or the wearable sound device 30 and communicates with the translation server 50 through the network 60; a microphone 46 that acquires voice or sound; and a data processor 49 that performs a phone call function, a sound reproduction function, and the language learning function according to the present invention.
  • the configuration and function of the power supply unit (not shown) that supplies power, the input unit 41, the display unit 43, the microphone 46, and the communication unit 45 correspond to technology naturally recognized by a person skilled in the art to which the present invention belongs, and their description is omitted.
  • the data processor 49 is configured to include a processor (e.g., CPU, MCU, microprocessor, etc.) that performs a phone call function, a sound reproduction function, and the language learning function, and a storage space (e.g., memory, etc.) for storing the application and user interface for the language learning function, translation information, translated text, and the like.
  • the data processor 49 performs language learning functions such as controlling the communication unit 45 to transmit translation target information, including the voice information received from the first earset 10 or the wearable sound device 30, to the translation server 50 through the network 60, and controlling the communication unit 45 to receive translation information from the translation server 50 through the network 60; these functions are described in detail below.
  • the translation server 50 is a server that performs an STT (Speech-to-Text) function (extracting the voice signal included in the translation target information, recognizing it, and converting it into text), translates the text to generate translation information including the translated text, and/or performs a TTS (Text-to-Speech) function (synthesizing text into speech); such a translation server 50 corresponds to technology naturally recognized by a person skilled in the art to which the present invention belongs, and its detailed description is omitted.
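The server-side flow (STT, text translation, then TTS) can be sketched as a pipeline over pluggable engines. The `stt`, `translate`, and `tts` callables below are stand-ins for real engines; their names and signatures are assumptions for this sketch, not part of the patent.

```python
def process_translation_target(target: dict, stt, translate, tts) -> dict:
    """Sketch of the translation server (50) pipeline: speech
    recognition on the received voice signal, text translation between
    the language type codes carried in the target information, and
    speech synthesis of the translated text."""
    text = stt(target["voice_signal"], target["first_language"])
    translated = translate(text, target["first_language"],
                           target["second_language"])
    return {
        "translated_text": translated,
        "translated_voice": tts(translated, target["second_language"]),
    }
```

With stub engines substituted for the callables, the flow can be exercised end to end without any external service.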
  • the network 60 corresponds to a system for performing wired communication and/or wireless communication, and corresponds to a technology that is naturally recognized by a person skilled in the art to which the present invention pertains, and a detailed description thereof is omitted.
  • the language processing system performs the language learning function by performing the following process.
  • the data processor 49 controls the communication unit 45 to perform a pairing operation with the first ear set 10 or the wearable sound device 30 to enter a communication enabled state. That is, the communication unit 45 performs wireless communication with the communication module 15 or the communication unit 35.
  • the data processor 49 executes an application for the language learning function according to the start selection input of the language learning function from the input unit 41 and displays a user interface on the display unit 43.
  • the data processor 39 obtains a start selection input of the language learning function from the input unit 37 and transmits it to the electronic communication device 40 through the communication unit 35.
  • the data processor 49 executes an application for the language learning function according to the received start selection input of the language learning function, and displays the user interface on the display unit 43.
  • the data processor 49 may start an application for a language learning function in various other ways.
  • the data processor 49 displays selectable languages to be learned in the user interface on the display unit 43, and stores the language to be learned (i.e., the first language) selected by a language selection input from the input unit 41.
  • the data processor 49 may also accept a user's voice command; that is, it identifies the language named in the voice input through the microphone 46 and stores the identified language (e.g., English, Chinese, etc.) as the language to be learned (first language).
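The voice-command selection above can be sketched as a lookup from a recognized language name to a stored language code. The names and codes in the mapping are illustrative assumptions; the patent does not specify a code set.

```python
# Illustrative mapping from a spoken language name (recognized from the
# microphone 46 input) to a language code stored as the language to learn.
SPOKEN_NAME_TO_CODE = {"english": "en", "chinese": "zh", "korean": "ko"}

def select_learning_language(voice_command_text: str):
    """Return the code of the first known language mentioned in the
    recognized command text, or None if no known language is named."""
    lowered = voice_command_text.lower()
    for name, code in SPOKEN_NAME_TO_CODE.items():
        if name in lowered:
            return code
    return None
```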
  • the data processor 49 may visually and/or audibly display the selection of the language to be learned to the user through the display unit 43.
  • the data processor 49 displays the language to be learned on the display unit 43, and prompts the user to speak a word or sentence to be learned through the display unit 43 visually and/or aurally.
  • the second language may be preset or the data processor 49 may allow the user to set it in the same manner as above.
  • the data processor 49 communicates with the communication module 15 through the communication unit 45 to perform a language learning function, or communicates with the communication unit 35 to allow the data processor 39 to perform a language learning function.
  • the communication module 15 of the first earset 10 transmits first voice information, including a voice signal (first voice signal) in the user's first language from which external noise acquired by the microphone 11 has been reduced, to the electronic communication device 40; or the data processor 39 transmits second voice information, including a voice signal (second voice signal) in the user's first language from which external noise acquired by the microphone 21 of the second earset 20 has been reduced, to the electronic communication device 40 through the communication unit 35.
  • the data processor 49 receives and stores the first or second voice information through the communication unit 45, generates translation target information (processing target information) including the first or second voice information, and controls the communication unit 45 to transmit it to the translation server 50 through the network 60.
  • the translation target information includes a first or second voice signal, a first language type code, and a second language type code.
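The translation target information just enumerated can be sketched as a simple record. The field names below are illustrative; the patent does not fix a wire format, so serialization (e.g., to JSON before transmission) is an assumption.

```python
from dataclasses import dataclass, asdict

@dataclass
class TranslationTargetInfo:
    """Sketch of the translation target information: a voice signal
    plus the first and second language type codes."""
    voice_signal: bytes
    first_language: str   # language the user spoke (e.g., "ko")
    second_language: str  # language to translate into (e.g., "en")

# Example: build the record and flatten it for transmission.
info = TranslationTargetInfo(b"\x00\x01", "ko", "en")
payload = asdict(info)
```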
  • the translation server 50 receives and stores the translation target information, extracts the first or second voice signal included therein, recognizes it by referring to the language type code included in the translation target information, converts it into text, and stores the text.
  • the translation server 50 translates the text into the second language, generates and stores the translated text in the second language, generates translation information (processing information) including the translated text, and transmits it to the electronic communication device 40 through the network 60.
  • the data processor 49 receives and stores the translation information through the communication unit 45, and displays the translated text included in the translation information through the display unit 43, so that the user can compare and learn the content spoken in the first language and the displayed translated text in the second language.
  • the data processor 49 allows the user to select whether the content intended by the user is included in the translated text. If the user determines that the translated text includes or matches the content spoken in the first language, the user inputs a learning success selection for the first language through the input unit 41, and the data processor 49 stores the learning success selection. Otherwise, if the content spoken in the first language is not included in the translated text, there is an error in the content spoken by the user in the first language, so the user inputs a learning failure selection for the first language through the input unit 41, and the data processor 49 stores the learning failure selection.
  • when the learning failure selection is obtained from the input unit 41, the data processor 49 displays, in the user interface of the display unit 43, an input window for entering the content spoken in the first language as text in the second language.
  • the data processor 49 receives and stores the translation target text entered in the input window through the input unit 41, and controls the communication unit 45 to transmit translation target text information, including the translation target text, the type of the translation target language (i.e., the second language), and the type of the translation language (i.e., the first language), to the translation server 50 through the network 60.
  • the translation server 50 receives the translation target text information, refers to the type of the translation target language and the type of the translation language, and translates the included translation target text into a translation language to generate the translated text.
  • the translation server 50 transmits the generated translated text to the electronic communication device 40 through the network 60.
  • the data processor 49 receives and stores the translated text through the communication unit 45, and displays the translated text through the display unit 43, so that the user can check and learn the translated text.
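The success/failure branch of the learning flow can be sketched as follows. In the patent the match judgment is made by the user; here an automatic substring check stands in for that judgment, and `retranslate` is a stand-in for the second-language to first-language request to the translation server. Both are assumptions for the sketch.

```python
def evaluate_learning(translated_text: str, intended_text: str,
                      retranslate) -> dict:
    """Sketch of the learning-evaluation step: record a success
    selection when the translation of the user's speech contains the
    intended content; otherwise record a failure selection and fetch a
    reference translation via `retranslate`."""
    if intended_text.lower() in translated_text.lower():
        return {"selection": "success"}
    return {"selection": "failure",
            "reference_text": retranslate(intended_text)}
```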
  • in addition to the translated text, the translation server 50 may generate translated voice information including a voice signal synthesized from the translated text and transmit it to the electronic communication device 40, and the data processor 49 may audibly express the voice signal included in the translated voice information received through the communication unit 45.
  • the language processing system enables the user to learn the first language.
  • FIG. 2 is a cross-sectional view of an earset equipped with a voice microphone according to a first embodiment of the present invention.
  • the earsets in FIGS. 2 to 6 may be applied to the first earset 10 and the second earset 20; the voice microphone corresponds to the microphone 11 and the microphone 21 described above, and the sound reproduction unit corresponds to the speaker 13 and the speaker 23.
  • the housing 100 has an installation space 110 in which parts can be installed and an insertion tube 120 structure that can be inserted into the ear canal, and an earbud of an elastic member can be attached to and detached from the insertion tube 120 so that it gently comes into close contact with the user's ear.
  • Components such as a sound reproduction unit 200 such as a microspeaker, a voice microphone 300 to which a user's voice signal is input, and a communication module 15 for controlling them may be installed in the installation space 110.
  • the sound reproducing unit 200 emits sound toward the insertion tube 120 to direct sound to the user's ear, and the voice microphone 300 receives the user's voice from the insertion tube.
  • the sound reproduction unit 200 and the voice microphone 300 are assembled on the upper bracket 410 to facilitate installation, and the lower bracket 430 is coupled to the upper bracket 410 to form a closed space 400 for the back volume. Meanwhile, a microphone bracket 420 is additionally installed between the upper bracket 410 and the lower bracket 430, and the voice microphone 300 is installed in the microphone bracket 420.
  • in the upper bracket 410, a first conduit 412 communicating with the insertion tube 120 and leading to the sound reproduction unit 200, and a second conduit 414 leading to the microphone bracket 420, are formed.
  • in the microphone bracket 420, a conduit 422 connecting the second conduit 414 and the voice microphone 300 is formed.
  • a terminal capable of transmitting an electrical signal to the sound reproduction unit 200 and the voice microphone 300 may be additionally provided.
  • the terminal (not shown) may be connected to a communication module 15 such as a PCB or a connection cable 24.
  • after the assembly of the upper bracket 410, the sound reproduction unit 200, the microphone bracket 420, the voice microphone 300, the terminal (not shown), and the lower bracket 430 is completed, the assembly is inserted and fixed in the installation space 110 of the housing 100.
  • the upper bracket 410 has a shape corresponding to the installation space 110 of the housing 100.
  • with this structure, the size of the back hole of the sound reproduction unit 200 can be increased to approximately 1.0 mm, and accordingly, the sound pressure can be increased by 6 dB or more in the low-frequency band.
  • FIG. 3 is an exploded view of an earset according to a second embodiment of the present invention
  • FIG. 4 is a perspective view of an earset according to a second embodiment of the present invention
  • FIG. 5 is a cross-sectional view of an earset according to a second embodiment of the present invention.
  • the earset according to the second embodiment of the present invention applies the conduit structure for transmitting the voice coming in through the soundproofing hole, which is a technical feature of the present invention, to an open-type earset in which a back hole 122a is formed in the rear housing 120a.
  • the earset according to the second embodiment of the present invention has a front housing 110a facing the user's ear and a rear housing 120a facing away from the user's ear, and components are installed in the installation space formed by combining the front housing 110a and the rear housing 120a.
  • Components such as a sound reproducing unit 200 such as a microspeaker, a voice microphone 300 to which a user's voice signal is input, and a communication module 15 for controlling them may be installed in the housings 110a and 120a.
  • the front housing 110a includes one or more soundproofing holes 112a and 114a, and the earset according to the second embodiment of the present invention has two soundproofing holes 112a and 114a formed at a predetermined angle to each other.
  • the soundproofing holes 112a and 114a may be divided into a first soundproofing hole 112a having a relatively large size and a second soundproofing hole 114a having a relatively small size.
  • the first soundproofing hole 112a outputs the sound of the sound conversion device 200 to the ear canal, and the second soundproofing hole 114a is a structure for the overall balance of the SPL, flatly tuning the sound pressure in the mid-range and raising the high-frequency sound pressure. It is preferable that the angle between the acoustic radiation directions of the first soundproofing hole 112a and the second soundproofing hole 114a be 90 degrees or more.
  • the voice microphone 300 receives the user's voice from the first soundproofing hole 112a.
  • a conduit 420a communicating with the first soundproofing hole 112a and transmitting the user's voice to the voice microphone 300 is provided.
  • the conduit 420a is coupled to the front housing 110a. Accordingly, when the user speaks, the voice coming into the first soundproofing hole 112a through the Eustachian tube can be transmitted to the voice microphone 300.
  • the rear housing 120a includes a back hole 122a through which the inside of the housing communicates with the outside so as to maintain a constant sound pressure inside the ear.
  • a bracket 430a may be installed between the rear housing 120a and the acoustic conversion device 200.
  • the bracket 430a covers the back hole 122a to form a pipeline.
  • the bracket 430a is formed at a position spaced apart from the back hole 122a of the rear housing 120a and includes a communication hole 432a for communicating the inside of the housing and the pipeline. That is, the conduit connects the communication hole 432a and the back hole 122a.
  • the pipe structure formed by the bracket 430a serves to enhance low-frequency sound by generating internal resonance within the housings 110a and 120a.
  • the back hole 122a serves to cancel a dip occurring in the 2 kHz band.
  • FIG. 6 is a cross-sectional view of an earset according to a third embodiment of the present invention.
  • the earset according to the third embodiment of the present invention is the same as that of the second embodiment, except that the conduit 420b connecting the voice microphone 300 and the first soundproofing hole 112a is bent so as to extend into the first soundproofing hole 112a. As the conduit 420b is bent into the first soundproofing hole, there is an advantage that howling can be suppressed during a voice call.
  • the earset provided by the present invention can suppress external noise by forming a voice microphone channel in the soundproofing hole so that the voice coming from the Eustachian tube is input to the microphone during a call.
  • although howling may occur during a voice call, it can be suppressed by separately manufacturing and installing the conduit 420b that extends into the soundproofing hole to guide the voice.
  • the structure of forming the voice microphone channel in the soundproofing hole may be applied to a kernel-type earset or an open-type earset. It can also be applied to wireless earsets and TWS earsets.
  • at least a part of a device (e.g., a processor or its functions) or a method (e.g., operations) according to the present invention may be implemented as a command stored in a computer-readable storage medium in the form of, for example, a program module.
  • the one or more processors may perform a function corresponding to the command.
  • the computer-readable storage medium may be, for example, a memory.

Abstract

The present invention relates to a language processing system using an earset, and particularly to one that more clearly acquires the speech of a user by blocking the inflow of external noise, and interprets the acquired speech or performs a translation and learning function for it. A language processing system using an earset according to the present invention comprises: a first earset or a second earset; a wearable acoustic device which performs wireless communication with the first earset or wired communication with the second earset; and an electronic communication device including a communication unit, which performs wireless communication with the first earset or the wearable acoustic device and communicates with a translation server, and a display unit. The electronic communication device performs a language processing function of: generating information to be processed, including a speech signal in a source language received from the first earset or the wearable acoustic device, and transmitting it to the translation server; receiving processed information transmitted from the translation server, including translation text in a target language corresponding to the information to be processed, or a speech signal converted from the translation text; and visually displaying the processed information through the display unit, or acoustically expressing it.

Description

Language processing system using an earset
The present invention relates to a language processing system using an earset, and in particular to a language processing system using an earset that blocks the inflow of external noise to obtain a clearer voice of the user, and performs interpretation of the acquired voice or translation and learning functions for the acquired voice.
Recently, the number of foreign visitors to Korea and of Koreans traveling abroad has been steadily increasing year after year. In particular, as transactions with China are made across industry as a whole, Chinese visits to Korea are increasing rapidly. In addition, it is easy to predict that many visitors from around the world, including Japan, will visit. The number of people visiting Korea for business purposes is also on the rise. Therefore, communication between the numerous visitors from around the world and the Korean people is becoming very important.
Such foreign visitors and overseas travelers usually use hotels with thorough service. In general, when a visitor wants to communicate using his or her own language, or with a person who uses a different language, the hotel enables communication through an interpreter residing in the hotel, or through e-mail or facsimile using the Internet. However, it is practically difficult to station interpreters in a hotel who can speak the languages of every country; an interpreter must be accompanied at all times; one or two interpreters cannot provide satisfactory service to a large number of visitors; and interpretation service may not be available at the desired time.
Accordingly, the technical field requires the development of technology for real-time simultaneous interpretation when talking with a foreigner using one's own communication terminal while traveling.
In addition, in order to converse in different languages, hearing and speaking in a foreign language different from one's mother tongue are required, and hearing and speaking a foreign language require numerous repeated exercises in hearing, writing, and speaking.
In other words, the most effective foreign language practice is repetitive and accurate writing correction, hearing, and speaking practice guided by an expert, but it can also be supplemented by using a learning device.
Accordingly, some learning devices, such as recorders, pronunciation correctors, and simultaneous interpreters, have been developed and used.
However, most learning devices focus on the mechanical selection of sentences or words and on hearing and pronunciation, and there has been no learning device with a verification or correction function for writing.
An object of the present invention is to provide a language processing system using an earset that blocks the inflow of external noise to obtain a clearer voice of the user, and performs interpretation of the acquired voice or translation and learning functions for the acquired voice.
The language processing system using an earset according to the present invention includes: a first earset or a second earset; a wearable sound device that performs wireless communication with the first earset or wired communication with the second earset; and an electronic communication device including a communication unit, which performs wireless communication with the first earset or the wearable sound device and communicates with a translation server, and a display unit. The electronic communication device performs a language processing function of generating processing target information including a voice signal in a processing source language received from the first earset or the wearable sound device and transmitting it to the translation server; receiving processing information transmitted from the translation server, including translated text in a processing target language corresponding to the processing target information, or a voice signal converted from the translated text; and visually displaying the processing information through the display unit or expressing it audibly.
In addition, it is preferable that the language processing function includes an interpretation function, the processing information corresponds to interpretation information, the wearable sound device and the electronic communication device are in a communication-enabled state, and, when a reference voice signal for controlling the interpretation function is included in the voice signal from the first or second earset, the wearable sound device generates an interpretation function control command corresponding to the reference voice signal and transmits it to the electronic communication device; the electronic communication device receives the interpretation function control command and starts or ends the interpretation function in response to the command.
In addition, it is preferable that, when the electronic communication device receives an interpretation function control command indicating the start of the interpretation function while operating the application for the interpretation function as a foreground service in the standby state, it wakes up the application and starts the interpretation function, and that, when it receives an interpretation function control command indicating the end of the interpretation function while the interpretation function is being performed, it returns the application to the foreground service state and thereby ends the interpretation function.
In addition, it is preferable that the electronic communication device performs the interpretation function by opening a voice communication channel with the wearable sound device.
In addition, it is preferable that the language processing function includes a language learning function, the processing information corresponds to translation information, and the electronic communication device includes an input unit and, after displaying the translation information, obtains and stores a learning success or learning failure selection for the first language through the input unit.
In addition, it is preferable that, in the case of a learning failure selection, the electronic communication device receives translation target text in the second language through the input unit, transmits the translation target text information to the translation server, receives from the translation server the translated text in the first language corresponding to the translation target text information, and displays it through the display unit.
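The language learning flow described above (display a translation, record a learning success or failure selection, and on failure translate user-entered second-language text back into the first language) can be sketched roughly as follows. This is an illustrative sketch only, not the disclosed implementation; the class and method names are hypothetical, and the translation server is stood in for by a plain callable.

```python
class LearningSession:
    def __init__(self, translate):
        # translate(text, src, dst) -> translated text; stands in for the
        # translation server reached over the network
        self.translate = translate
        self.results = []  # stored (item, success/failure) selections

    def record_selection(self, item, success):
        # Store the learning success (True) or failure (False) selection
        # obtained through the input unit after the translation is displayed.
        self.results.append((item, success))

    def on_failure(self, second_language_text, first_lang="en", second_lang="ko"):
        # On a failure selection, the user enters text in the second language
        # and receives the corresponding first-language translation to study.
        return self.translate(second_language_text, src=second_lang, dst=first_lang)
```

With a stub translator substituted for the server, `on_failure` simply returns that stub's first-language rendering of the second-language input.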
In addition, it is preferable that the first and second earsets each include a housing that defines an installation space for components, forms the exterior, and has a sound emission hole; a sound reproduction unit installed in the installation space to emit sound; a voice microphone installed in the installation space; and a conduit installed in the housing to deliver the voice transmitted through the sound emission hole to the voice microphone.
In addition, it is preferable that the housing includes an insertion tube to be inserted into the user's external auditory canal, with the insertion tube serving as the sound emission hole.
In addition, it is preferable that the earset further includes a chamber forming a closed space surrounding the voice microphone, with the conduit formed in the chamber to deliver the voice transmitted through the insertion tube to the voice microphone.
In addition, it is preferable that the chamber includes an upper bracket that fixes the voice microphone within the installation space and a lower bracket that engages with the upper bracket to form the space, that the sound reproduction unit is installed between the upper bracket and the lower bracket, and that the closed space surrounding the sound reproduction unit and the installation space of the voice microphone are separated from each other.
In addition, it is preferable that the housing includes a back hole communicating with the rear surface of the sound reproduction unit.
In addition, it is preferable that the earset includes one or more brackets installed between the sound reproduction unit and the housing and capable of tuning the acoustic characteristics.
In addition, it is preferable that the conduit extends into the sound emission hole.
The present invention blocks the inflow of external noise to obtain a clearer voice from the user and can perform interpretation on the obtained voice. In particular, since the interpretation function is carried out by executing the application for the interpretation function in response to the user's voice, the invention has the effect of enabling convenient communication with foreigners without any input by the user's hand motion.
The present invention has the effect of blocking the inflow of external noise to obtain a clearer voice from the wearer and of helping the wearer learn a language by translating the obtained voice.
The present invention can prevent external noise from flowing into the sound reproduction unit, so the voice transmitted from the user's ear can be input more clearly.
In addition, by employing an open structure in the housing of the earset, the present invention can relieve the clogged feeling in the ear caused by pressure differences.
In addition, by forming a chamber that closes the rear of the voice microphone, the present invention has the advantage of preventing external noise from flowing into the inner microphone.
Fig. 1 is a control configuration diagram of a language processing system using an earset according to the present invention.
Fig. 2 is a cross-sectional view of an earset according to a first embodiment of the present invention.
Fig. 3 is an exploded view of an earset according to a second embodiment of the present invention.
Fig. 4 is a perspective view of an earset according to the second embodiment of the present invention.
Fig. 5 is a cross-sectional view of an earset according to the second embodiment of the present invention.
Fig. 6 is a cross-sectional view of an earset according to a third embodiment of the present invention.
In the following, the present invention is described in detail through embodiments and drawings. However, this is not intended to limit the present invention to specific embodiments, and the description should be understood to include various modifications, equivalents, and/or alternatives of the embodiments of the present invention. In connection with the description of the drawings, similar reference numerals may be used for similar elements.
In this document, expressions such as "have", "may have", "include", or "may include" indicate the presence of the corresponding feature (e.g., an element such as a numerical value, function, operation, or component) and do not exclude the presence of additional features.
In this document, expressions such as "A or B", "at least one of A or/and B", or "one or more of A or/and B" may include all possible combinations of the items listed together. For example, "A or B", "at least one of A and B", or "at least one of A or B" may refer to any of the following cases: (1) including at least one A, (2) including at least one B, or (3) including both at least one A and at least one B.
Expressions such as "first", "second", "1st", or "2nd" used in this document may modify various elements regardless of order and/or importance, and are used only to distinguish one element from another without limiting those elements. For example, a first user device and a second user device may represent different user devices regardless of order or importance. For example, without departing from the scope of rights described in this document, a first element may be referred to as a second element, and similarly, a second element may be renamed a first element.
When an element (e.g., a first element) is referred to as being "(operatively or communicatively) coupled with/to" or "connected to" another element (e.g., a second element), it should be understood that the element may be directly connected to the other element or may be connected through yet another element (e.g., a third element). In contrast, when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (e.g., a second element), it may be understood that no other element (e.g., a third element) exists between them.
The expression "configured to" used in this document may be used interchangeably with, for example, "suitable for", "having the capacity to", "designed to", "adapted to", "made to", or "capable of", depending on the situation. The term "configured (or set) to" does not necessarily mean only "specifically designed to" in hardware. Instead, in some situations, the expression "a device configured to" may mean that the device "can" do something together with other devices or components. For example, the phrase "a processor configured (or set) to perform A, B, and C" may mean a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a generic-purpose processor (e.g., a CPU or application processor) capable of performing the corresponding operations by executing one or more software programs stored in a memory device.
The terms used in this document are used only to describe specific embodiments and may not be intended to limit the scope of other embodiments. Singular expressions may include plural expressions unless the context clearly indicates otherwise. Terms used herein, including technical or scientific terms, may have the same meanings as those commonly understood by a person of ordinary skill in the technical field described in this document. Among the terms used in this document, terms defined in general dictionaries may be interpreted as having meanings identical or similar to their meanings in the context of the related technology and, unless explicitly defined in this document, are not to be interpreted in an ideal or excessively formal sense. In some cases, even terms defined in this document cannot be interpreted to exclude embodiments of this document.
The language processing system of the present invention performs a language processing function, and it should be understood that the language processing function includes an interpretation function and a translation function, and additionally includes a language learning function.
In the interpretation function of the language processing function, the interpretation target language (processing target language) is the language spoken by the user, that is, the language to be translated or interpreted, and the interpretation goal language (processing goal language) is the language in which the user's speech, after translation or interpretation, is finally conveyed to the conversation partner visually or audibly. For example, when a user who speaks Korean wants interpretation into English, the interpretation goal language is English and the interpretation target language is Korean.
In addition, in the language learning function of the language processing function, the first language in this specification is the language the user wants to learn (the processing target language), that is, the language to be translated, and the second language is the language the user uses, that is, the language of the translation result (the processing goal language). For example, when a user who speaks Korean wants to learn English, the first language is English and the second language is Korean.
Fig. 1 is a control configuration diagram of a language processing system using an earset according to the present invention.
First, as a first embodiment, a language processing system that performs the interpretation function among the language processing functions is described.
The language processing system comprises a first earset 10 that performs wireless communication with the wearable sound device 30 or the electronic communication device 40; a second earset 20 electrically connected to the wearable sound device 30 through wired communication; a wearable sound device 30 that performs wireless communication with the electronic communication device 40, wireless communication with the first earset 10, and wired communication with the second earset 20; an electronic communication device 40 that performs wireless communication with the first earset 10 or the wearable sound device 30 and communicates with the translation server 50 through the network 60; and a translation (or interpretation) server 50 that communicates with the electronic communication device 40 through the network 60, receives interpretation target information (or processing target information), translates the speech contained in the included voice signal into text, and provides interpretation information (or processing information) containing a voice signal converted from that text.
The first earset 10 includes a microphone 11 that acquires the user's voice, a speaker 13 (or receiver) that receives an electrical signal and emits sound, and a communication module 15 (e.g., a wireless communication module such as a Bluetooth module) that performs wireless communication with the wearable sound device 30. The configuration and functions of the power supply unit (not shown) that supplies power, the microphone 11, and the speaker 13 correspond to technology readily recognized by a person of ordinary skill in the art to which the present invention belongs, and their description is omitted. The mechanical structure of the first earset 10 is described in detail with reference to Figs. 2 to 6 below.
As is already known to a person of ordinary skill in the art to which the present invention pertains, the communication module 15 performs a phone call function and a sound reproduction function, and performs the interpretation function according to the present invention. In a communication connection state with the wearable sound device 30, the communication module 15 transmits the user's voice signal acquired from the microphone 11 to the wearable sound device 30 or the electronic communication device 40, and receives an electrical signal containing a voice signal from the wearable sound device 30 or the electronic communication device 40 and emits sound through the speaker 13.
The second earset 20 includes a microphone 21 that acquires the user's voice, a speaker 23 (or receiver) that receives an electrical signal and emits sound, and a connection cable 24 (e.g., a wired cable) that performs wired communication with the wearable sound device 30. The configuration and functions of the microphone 21, the speaker 23, and the connection cable 24 correspond to technology readily recognized by a person of ordinary skill in the art to which the present invention belongs, and their description is omitted. The mechanical structure of the second earset 20 is described in detail with reference to Figs. 2 to 6 below.
The wearable sound device 30 is a device that includes a wireless communication function, such as a neckband-type sound conversion device, and performs a phone call function, a sound reproduction function, and the like. The wearable sound device 30 includes a microphone 31 that acquires external sound, a speaker 33 that receives an electrical signal and emits sound, a communication unit 35 that performs wireless communication (e.g., Bluetooth communication) with the first earset 10 and the electronic communication device 40, an input unit 37 that obtains input from the user, and a data processor 39 that controls the microphone 31, the speaker 33, the communication unit 35, and the input unit 37 to selectively perform the phone call function, the sound reproduction function, and the interpretation function. The configuration and functions of the power supply unit (not shown) that supplies power, the microphone 31, the speaker 33, the communication unit 35, and the input unit 37 correspond to technology readily recognized by a person of ordinary skill in the art to which the present invention belongs, and their description is omitted.
The data processor 39 includes a processor (e.g., a CPU, MCU, or microprocessor) that performs the phone call function and the sound reproduction function, as already known to a person of ordinary skill in the art, and performs the interpretation function according to the present invention, together with a storage space (e.g., memory) that stores voice signals, interpretation performance information (processing performance information), and the like.
The interpretation performance information includes at least reference voice signals for identifying a voice signal for starting the interpretation function (e.g., the spoken phrase 'start interpretation') and a voice signal for ending the interpretation function (e.g., the spoken phrase 'end interpretation').
The wearable sound device 30 may maintain a communicable state (e.g., a wireless communication connection state or a wired communication connection state) with at least one of the first and second earsets 10 and 20.
In a communication connection state with the first earset 10, the data processor 39 causes the communication module 15 to transmit the voice signal acquired through the microphone 11 to the communication unit 35. The data processor 39 also acquires a voice signal from the microphone 21 inserted into the user's ear and a voice signal from the microphone 31 facing outward from the user. The data processor 39 treats the voice signals acquired from the microphone 11 and the microphone 21 (first voice signal) as the user's voice, and the voice signal acquired from the microphone 31 (second voice signal) as the voice of the user's conversation partner. When transmitting the interpretation target information, the data processor 39 may also transmit speaker identification information capable of identifying each of the first and second voice signals.
While performing a mode or function other than the phone call function and the sound reproduction function (e.g., standby mode or the interpretation function), the data processor 39 compares the voice signal from the first earset 10 or the second earset 20 with the reference voice signals to determine whether the voice signal contains or corresponds to the reference voice signal for starting or ending the interpretation function. If the voice signal contains or corresponds to one of these reference voice signals, the data processor 39 generates an interpretation function control command corresponding to the matched reference voice signal (interpretation start or interpretation end) and applies it to the electronic communication device 40. If the voice signal neither contains nor corresponds to either reference voice signal, the data processor 39 continues the currently executing mode or function (e.g., standby mode or the interpretation function) unchanged.
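The comparison against reference voice signals described above can be illustrated with a simplified sketch. The patent matches stored reference voice signals against incoming audio; for brevity this sketch assumes the utterance has already been recognized as text, and the phrases and command names are hypothetical.

```python
# Hypothetical reference phrases mapped to control commands; the actual
# system stores reference voice signals, not text.
REFERENCE_COMMANDS = {
    "start interpretation": "INTERPRET_START",
    "end interpretation": "INTERPRET_END",
}

def to_control_command(recognized_text):
    """Return the control command for an utterance containing a reference
    phrase, or None so that the current mode or function continues unchanged."""
    text = recognized_text.lower()
    for phrase, command in REFERENCE_COMMANDS.items():
        if phrase in text:
            return command
    return None
```

A command returned here would then be transmitted to the electronic communication device; a None result leaves the current mode untouched, mirroring the branch described above.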
In addition, through the voice communication channel (e.g., SCO: Synchronous Connection-Oriented) opened between the communication unit 35 and the communication unit 45 by the interpretation function started by the electronic communication device 40, the data processor 39 transmits the interpretation target information and receives the interpretation information. The interpretation function is described in more detail below.
The electronic communication device 40 corresponds to an information communication device with a communication function, such as a smartphone or tablet, and includes an input unit 41 that obtains input from the user (e.g., selection of starting or ending the application for the interpretation function, selection of starting or ending the interpretation function, or selection input of the interpretation goal language and/or the interpretation target language) and applies it to the data processor 49; a display unit 43 that visually displays or audibly expresses a user interface for the interpretation function; a communication unit 45 that performs wireless communication (e.g., Bluetooth communication) with the first earset 10 or the wearable sound device 30 and communicates with the translation server 50 through the network 60; a microphone 46 that acquires voice or sound; and a data processor 49 that performs the phone call function and the sound reproduction function and performs the interpretation function according to the present invention. The configuration and functions of the power supply unit (not shown) that supplies power, the input unit 41, the display unit 43, the microphone 46, and the communication unit 45 correspond to technology readily recognized by a person of ordinary skill in the art to which the present invention belongs, and their description is omitted.
The data processor 49 includes a processor (e.g., a CPU, MCU, or microprocessor) that performs the phone call function, the sound reproduction function, and the interpretation function, together with a storage space (e.g., memory) that stores the application for the interpretation function, its user interface, interpretation information, and the like.
The data processor 49 executes the application for the interpretation function. The application for the interpretation function comprises a process of selecting and setting the interpretation target language and/or the interpretation goal language, a process of generating interpretation target information containing the user's voice information in the interpretation target language and transmitting it to the interpretation server 50, and a process of receiving interpretation information containing voice information in the interpretation goal language from the interpretation server 50 and transmitting it to the wearable sound device 30.
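The packaging of the interpretation target information handled by this application can be sketched as follows. The field names and encoding are assumptions made purely for illustration; the patent does not specify a payload format.

```python
def build_target_info(voice_bytes, src_lang, dst_lang, speaker_id):
    # Package the captured voice signal with the selected languages and the
    # speaker identification described earlier. All field names are
    # hypothetical; the wire format is not disclosed in the patent.
    return {
        "speaker": speaker_id,        # e.g. "user" or "partner"
        "source_language": src_lang,  # interpretation target language
        "target_language": dst_lang,  # interpretation goal language
        "audio": voice_bytes.hex(),   # audio payload, hex-encoded for transport
    }
```

Serializing such a dictionary (for example with `json.dumps`) before transmission over the network would be one option among many; the disclosure leaves this choice open.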
The data processor 49 activates or executes the application for the interpretation function in a foreground service state according to an execution input from the input unit 41. While in a standby state in which neither the phone call function nor the sound reproduction function is being performed, in response to an interpretation function control command (processing function control command) from the wearable sound device 30 (e.g., an interpretation function start command), it wakes up the application for the interpretation function and starts the interpretation function without any additional button or touch input from the user.
In addition, while the interpretation function is being performed, the data processor 49 terminates the application for the interpretation function or returns it to the foreground service state in response to an interpretation function control command (processing function control command) (for example, an interpretation function end command) from the wearable sound device 30, again without any additional button or touch input from the user; the interpretation function ends, and the device returns to the standby state or operates in the phone call function or sound reproduction function.
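The lifecycle described in the two paragraphs above can be sketched as a small state machine. This is an illustrative sketch only: the command names and state labels are assumptions for the example, not part of the patent's actual implementation.

```python
class InterpretationApp:
    """Models the application lifecycle managed by the data processor 49."""

    def __init__(self):
        # The application is loaded as a foreground service but idle.
        self.state = "foreground_service"

    def handle_command(self, command):
        # A start command from the wearable sound device wakes the
        # application with no extra button or touch input from the user.
        if command == "interpretation_start" and self.state == "foreground_service":
            self.state = "interpreting"
        # An end command returns it to the foreground service state.
        elif command == "interpretation_end" and self.state == "interpreting":
            self.state = "foreground_service"
        return self.state
```

The key design point the patent emphasizes is that both transitions are driven by control commands arriving over the communication link, not by touch input on the handset.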
While the interpretation function is being performed, the data processor 49 controls the communication unit 45 to transmit interpretation target information, including the voice information received from the first earset 10 or the wearable sound device 30, to the translation server 50 through the network 60, and controls the communication unit 45 to receive interpretation information, containing speech in the target language, from the translation server 50 through the network 60; this interpretation function is described in detail below.
The translation server 50 is a server that provides an STT (Speech-to-Text) function (extracting the voice information contained in the interpretation target information, recognizing it, and converting it into text) and/or a function of translating the text to generate translated text and/or a TTS (Text-to-Speech) function (synthesizing text into speech). Such a translation server 50 corresponds to technology readily recognized by a person of ordinary skill in the art to which the present invention pertains, and its detailed description is therefore omitted.
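The server-side chain just described (STT, then translation, then TTS) can be illustrated as a simple pipeline. The stub functions below are stand-ins for real speech-recognition, machine-translation, and speech-synthesis engines, which the patent deliberately leaves unspecified; all names and data shapes are assumptions for the sketch.

```python
def stt(voice_signal):
    # Stand-in for speech recognition: a real server would decode audio here.
    return voice_signal["spoken_text"]

def translate(text, source_lang, target_lang):
    # Stand-in for a machine translation engine.
    return f"[{source_lang}->{target_lang}] {text}"

def tts(text):
    # Stand-in for speech synthesis; returns a synthetic "voice signal".
    return {"spoken_text": text}

def interpret(interpretation_target, source_lang, target_lang):
    """Chain the three functions the translation server 50 is said to provide."""
    text = stt(interpretation_target)
    translated = translate(text, source_lang, target_lang)
    return tts(translated)
```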
The network 60 corresponds to a system for performing wired and/or wireless communication; it likewise corresponds to technology readily recognized by a person of ordinary skill in the art, and its detailed description is omitted.
The language processing system performs the interpretation function through the following process.
First is the communication connection step.
The wearable sound device 30 is in a state in which it can communicate with the first earset 10 and/or the second earset 20, and the electronic communication device 40 is in a state in which it can communicate with the first earset 10 or the wearable sound device 30. At this time, the data processor 49 operates the application for the interpretation function in the foreground service state.
Next is the start step of the interpretation function.
While maintaining the standby state, when the data processor 49 obtains an interpretation function start command from the wearable sound device 30 through the communication unit 45, it wakes up the application for the interpretation function and starts the interpretation function.
In addition, the data processor 49 allows the user to set the source language and/or the target language for interpretation, either by voice or by input through the input unit 41. The data processor 49 stores information on the source language (for example, the kind of language) and information on the target language (for example, the kind of language).
The data processor 49 controls the communication unit 45 to open a voice communication channel with the communication unit 35, enabling the transmission and reception of the interpretation target information and the interpretation information.
Next is the operation step of the interpretation function.
While the interpretation function is being performed, the data processor 49 receives the voice signal (the first or second voice signal) and/or speaker identification information from the first earset 10 or the wearable sound device 30, and controls the communication unit 45 to transmit interpretation target information containing the voice signal and/or the speaker identification information to the translation server 50 through the network 60.
The translation server 50 translates the voice signal contained in the interpretation target information into text, converts the text into a voice signal, and transmits interpretation information containing the converted voice signal and/or the speaker identification information to the electronic communication device 40.
The data processor 49 receives the interpretation information through the communication unit 45 and transmits it to the wearable sound device 30.
The data processor 39 receives the interpretation information through the communication unit 35 and, according to the speaker identification information contained in the interpretation information, applies the converted voice signal either to the speaker 33 or to the first or second earset 10, 20. That is, when the speaker identification information indicates the user's voice signal, the conversation partner must hear the converted voice signal, so it is applied to the speaker 33; when the speaker identification information indicates the conversation partner's voice signal, the user must hear the converted voice signal, so the converted voice is transmitted or applied to the first or second earset 10, 20.
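The routing rule above reduces to a single decision on the speaker identification. A minimal sketch follows; the labels "user" and "partner" are illustrative assumptions standing in for whatever encoding the speaker identification information actually uses.

```python
def route_converted_voice(speaker_id):
    """Decide where the data processor 39 sends the converted voice signal."""
    # The user's interpreted speech must be heard by the conversation
    # partner through the external speaker 33; the partner's interpreted
    # speech must be heard by the user through the earset.
    if speaker_id == "user":
        return "speaker_33"
    elif speaker_id == "partner":
        return "earset"
    raise ValueError("unknown speaker identification")
```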
In addition, after transmitting interpretation target information, the data processor 39 does not transmit new interpretation target information to the electronic communication device 40 while it is receiving and emitting the interpretation information corresponding to the interpretation target information transmitted immediately before, thereby preventing an already interpreted voice signal from being included in the interpretation target information and transmitted again.
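This feedback-prevention rule can be sketched as a simple capture gate, assuming (as an illustration, not from the patent text) that the device tracks a single playback-in-progress flag.

```python
class CaptureGate:
    """Blocks forwarding of microphone input while interpreted audio plays."""

    def __init__(self):
        self.emitting = False  # True while interpretation audio is played back

    def start_emission(self):
        self.emitting = True

    def end_emission(self):
        self.emitting = False

    def may_forward(self):
        # Forward microphone input only when no playback is in progress,
        # so already-interpreted speech is never re-sent for interpretation.
        return not self.emitting
```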
Next is the end step of the interpretation function.
When the data processor 49 receives an interpretation function end command from the wearable sound device 30 while the interpretation function is being performed, it terminates the application for the interpretation function or returns it to the foreground service state, ending the interpretation function and operating in the standby state or in the phone call function or sound reproduction function.
Next, as a second embodiment, a language processing system that performs a language learning function among the language processing functions is described.
The language processing system comprises: a first earset 10 that performs wired or wireless communication with the wearable sound device 30 or the electronic communication device 40; a second earset 20 electrically connected to the wearable sound device 30 by wired communication; a wearable sound device 30 that performs wireless communication with the electronic communication device 40 and wired communication with the second earset 20; an electronic communication device 40 that performs wireless communication with each of the first earset 10 and/or the wearable sound device 30 and communicates with the translation server 50 through the network 60; and a translation server 50 that communicates with the electronic communication device 40 through the network 60, receives voice information, translates the speech contained in the voice information into text, and provides translation information (or processing information) corresponding to the voice information.
The first earset 10 comprises a microphone 11 that acquires a voice, a speaker 13 (or receiver) that receives an electric signal and emits sound, and a communication module 15 (for example, a wireless communication module such as a Bluetooth module, or a wired cable) that communicates with the electronic communication device 40. The configurations and functions of the power supply unit (not shown) that supplies power, the microphone 11, and the speaker 13 correspond to technology readily recognized by a person of ordinary skill in the art, and their description is omitted. The mechanical structure of the first earset 10 is described in detail with reference to FIGS. 2 to 6 below.
As is already known to a person of ordinary skill in the art, the communication module 15 serves the phone call function and the sound reproduction function, and it also serves the language learning function according to the present invention. When the language learning function is performed, the communication module 15 transmits voice information containing the user's voice signal acquired by the microphone 11 to the electronic communication device 40.
The second earset 20 comprises a microphone 21 that acquires a voice, a speaker 23 (or receiver) that receives an electric signal and emits sound, and a connection cable 24 (for example, a wired cable) that performs wired communication with the wearable sound device 30. The configurations and functions of the microphone 21, the speaker 23, and the connection cable 24 correspond to technology readily recognized by a person of ordinary skill in the art, and their description is omitted. The mechanical structure of the second earset 20 is described in detail with reference to FIGS. 2 to 6 below.
The wearable sound device 30 is a device, such as a neckband-type sound conversion device, that includes a wireless communication function and performs the phone call function, the sound reproduction function, and the like. The wearable sound device 30 comprises a microphone 31 that acquires external sound, a speaker 33 that receives an electric signal and emits sound, a communication unit 35 that performs wireless communication (for example, Bluetooth communication) with the electronic communication device 40, an input unit 37 that obtains input from the user, and a data processor 39 that controls the microphone 31, the speaker 33, the communication unit 35, and the input unit 37 to selectively perform the phone call function, the sound reproduction function, and the language learning function. The configurations and functions of the power supply unit (not shown) that supplies power, the microphone 31, the speaker 33, the communication unit 35, and the input unit 37 correspond to technology readily recognized by a person of ordinary skill in the art, and their description is omitted.
As is already known to a person of ordinary skill in the art, the data processor 39 comprises a processor (for example, a CPU, MCU, microprocessor, etc.) that performs the phone call function and the sound reproduction function as well as the language learning function according to the present invention. When the language learning function is performed, the data processor 39 transmits voice information containing the user's voice signal acquired by the microphone 21 to the electronic communication device 40. The language learning function performed by the data processor 39 is described in detail below.
The electronic communication device 40 corresponds to an information communication device with a communication function, such as a smartphone or tablet, and comprises: an input unit 41 that obtains input from the user (for example, selecting the start or end of the language learning function, selecting the language to be learned, selecting an evaluation of the translation information (learning success or learning failure), entering words or sentences in the language to be learned, etc.) and applies it to the data processor 49; a display unit 43 that visually or audibly displays or expresses the user interface for the language learning function; a communication unit 45 that performs wireless communication (for example, Bluetooth communication) with the first earset 10 and/or the wearable sound device 30 and communicates with the translation server 50 through the network 60; a microphone 46 that acquires voice or sound; and a data processor 49 that performs the phone call function and the sound reproduction function as well as the language learning function according to the present invention. The configurations and functions of the power supply unit (not shown) that supplies power, the input unit 41, the display unit 43, the microphone 46, and the communication unit 45 correspond to technology readily recognized by a person of ordinary skill in the art, and their description is omitted.
The data processor 49 comprises a processor (for example, a CPU, MCU, microprocessor, etc.) that performs the phone call function, the sound reproduction function, and the language learning function, and a storage space (for example, memory, etc.) that stores the application and user interface for the language learning function, translation information, translated text, and the like. The language learning function, in which the data processor 49 controls the communication unit 45 to transmit translation target information, including the voice information received from the first earset 10 or the wearable sound device 30, to the translation server 50 through the network 60 and to receive translation information from the translation server 50 through the network 60, is described in detail below.
The translation server 50 is a server that provides an STT (Speech-to-Text) function (extracting the voice information contained in the translation target information, recognizing it, and converting it into text) and/or a function of translating the text to generate translation information containing the translated text and/or a TTS (Text-to-Speech) function (synthesizing text into speech). Such a translation server 50 corresponds to technology readily recognized by a person of ordinary skill in the art, and its detailed description is omitted.
The network 60 corresponds to a system for performing wired and/or wireless communication; it corresponds to technology readily recognized by a person of ordinary skill in the art, and its detailed description is omitted.
The language processing system performs the language learning function through the following process.
First, the data processor 49 controls the communication unit 45 to perform a pairing operation with the first earset 10 or the wearable sound device 30 and enter a communication-enabled state. That is, the communication unit 45 performs wireless communication with the communication module 15 or the communication unit 35.
The data processor 49 executes the application for the language learning function in response to a start selection input for the language learning function from the input unit 41 and displays the user interface on the display unit 43. Alternatively, the data processor 39 obtains a start selection input for the language learning function from the input unit 37 and transmits it through the communication unit 35 to the electronic communication device 40, which receives it through the communication unit 45; the data processor 49 then executes the application for the language learning function in response to the received start selection input and displays the user interface on the display unit 43. The data processor 49 may also start the application for the language learning function in various other ways.
Next is the step of selecting the language to be learned.
The data processor 49 displays the selectable languages to be learned in the user interface shown on the display unit 43 and stores the language to be learned (that is, the first language) according to a language selection input from the input unit 41. Alternatively, the data processor 49 enables a voice command from the user: it identifies the language contained in the voice input through the microphone 46 and stores the identified language (for example, English, Chinese, etc.) as the language to be learned (the first language). During this selection, the data processor 49 may visually and/or audibly prompt the user through the display unit 43 to select the language to be learned. When the selection is completed, the data processor 49 displays the language to be learned on the display unit 43 and visually and/or audibly prompts the user through the display unit 43 to speak a word or sentence to be learned. At this time, the second language may be preset, or the data processor 49 may let the user set it in the same manner as above.
Next is the language translation step.
The data processor 49 communicates with the communication module 15 through the communication unit 45 so that the language learning function is performed, or communicates with the communication unit 35 so that the data processor 39 performs the language learning function.
The communication module 15 of the first earset 10 transmits first voice information, containing a noise-reduced voice signal in the user's first language acquired by the microphone 11 (the first voice signal), to the electronic communication device 40; or the data processor 39 transmits second voice information, containing a noise-reduced voice signal in the user's first language acquired by the microphone 21 of the second earset 20 (the second voice signal), to the electronic communication device 40 through the communication unit 35.
The data processor 49 receives and stores the first or second voice information through the communication unit 45, generates translation target information (processing target information) containing the first or second voice information, and controls the communication unit 45 to transmit it to the translation server 50 through the network 60. Here, the translation target information contains the first or second voice signal, a language-type code for the first language, and a language-type code for the second language.
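The shape of the translation target information described above can be illustrated as follows. The field names are assumptions made for the sketch; the patent specifies only that the payload carries the voice signal plus the two language-type codes.

```python
def build_translation_target(voice_signal, first_lang_code, second_lang_code):
    """Assemble the translation target information sent to the translation server 50."""
    return {
        "voice": voice_signal,            # the first or second voice signal
        "source_lang": first_lang_code,   # language being learned (first language)
        "target_lang": second_lang_code,  # language of the translated text (second language)
    }
```

The server uses the source-language code to steer speech recognition and the target-language code to steer translation, so both must travel with the audio.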
The translation server 50 receives and stores the translation target information, extracts the first or second voice signal contained in it, recognizes the signal, and converts it into text with reference to the language-type code contained in the translation target information, storing the result. The translation server 50 also translates the text into the second language to generate and store translated text in the second language, generates translation information (processing information) containing the translated text, and transmits it to the electronic communication device 40 through the network 60.
The data processor 49 receives and stores the translation information through the communication unit 45 and displays the translated text contained in the translation information on the display unit 43, so that the user can compare what he or she said in the first language with the displayed translated text in the second language and learn from the comparison.
Next is the evaluation and additional information provision step.
In addition to displaying the translated text on the display unit 43, the data processor 49 lets the user indicate whether the translated text contains what the user intended. If the user judges that the content of the translated text contains or matches what was said in the first language, the attempt is judged a success: the user enters a learning success selection for the first language through the input unit 41, and the data processor 49 stores the learning success selection. Otherwise, if what was said in the first language is not contained in the translated text, there was an error in what the user said in the first language: the user enters a learning failure selection for the first language through the input unit 41, and the data processor 49 stores the learning failure selection.
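The evaluation step above can be sketched as a single judgment recorded by the data processor. In the patent the judgment is made by the user; the substring check below is a deliberate simplification standing in for that human decision, and the return labels are illustrative assumptions.

```python
def evaluate_attempt(intended_meaning, translated_text):
    """Record the user's judgment of a learning attempt (simplified)."""
    # In the described system the user decides whether the translated text
    # carries the intended content; here that decision is approximated by
    # a containment test for illustration only.
    if intended_meaning in translated_text:
        return "learning_success"
    return "learning_failure"
```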
In addition, when the data processor 49 obtains a learning failure selection from the input unit 41, it displays in the user interface of the display unit 43 an input window in which the user can enter, as text in the second language, what he or she said in the first language. The data processor 49 receives and stores the translation target text entered in the input window through the input unit 41, and controls the communication unit 45 to transmit translation target text information, containing the translation target text, the kind of source language (that is, the second language), and the kind of translation language (that is, the first language), to the translation server 50 through the network 60.
The translation server 50 receives the translation target text information and, with reference to the kind of source language and the kind of translation language, translates the contained translation target text into the translation language to generate translated text. The translation server 50 transmits the generated translated text to the electronic communication device 40 through the network 60.
The data processor 49 receives and stores the translated text through the communication unit 45 and displays it on the display unit 43, so that the user can check and learn from the translated text.
In addition to the translated text, the translation server 50 may synthesize the translated text into speech, generate translated voice information containing a voice signal of the translated text, and transmit it to the electronic communication device 40; the data processor 49 may then audibly express the voice signal contained in the translated voice information, received through the communication unit 45, through the display unit 43.
Through the process described above, the language processing system enables the user to learn the first language.
FIG. 2 is a cross-sectional view of an earset with a voice microphone according to the first embodiment of the present invention. The earsets in FIGS. 2 to 6 may serve as the first earset 10 and the second earset 20; the voice microphone corresponds to the microphone 11 and the microphone 21 described above, and the sound reproduction unit corresponds to the speaker 13 and the speaker 23.
The housing 100 has an installation space 110 in which components can be installed and an insertion tube 120 that can be inserted into the ear canal; an ear bud of elastic material can be attached to and detached from the insertion tube 120 so that it fits softly and closely in the user's ear. Components such as a sound reproduction unit 200 such as a microspeaker, a voice microphone 300 to which the user's voice signal is input, and the communication module 15 that controls them can be installed in the installation space 110. The sound reproduction unit 200 emits sound toward the insertion tube 120 to direct the sound to the user's ear, and the voice microphone 300 receives the user's voice from the insertion tube. To ease installation, the sound reproduction unit 200 and the voice microphone 300 are assembled on an upper bracket 410, and a lower bracket 430 is coupled to the upper bracket 410 to form a closed space 400 for the back volume of the sound reproduction unit 200. Meanwhile, a microphone bracket 420 is additionally installed between the upper bracket 410 and the lower bracket 430, and the voice microphone 300 is installed on the microphone bracket 420.
The upper bracket 410 has a first duct 412 that communicates with the insertion tube 120 and leads to the sound reproduction unit 200, and a second duct 414 that leads to the microphone bracket 420. The microphone bracket 420 likewise has a duct 422 connecting the second duct 414 to the voice microphone 300.
A terminal (not shown) capable of transmitting electrical signals to the sound reproduction unit 200 and the voice microphone 300 may additionally be provided between the upper bracket 410 and the lower bracket 430. The terminal (not shown) may be connected to the communication module 15, such as a PCB, or to the connection cable 24.
After assembly of the upper bracket 410, the sound reproduction unit 200, the microphone bracket 420, the voice microphone 300, the terminal (not shown), and the lower bracket 430 is completed, the assembly is inserted into and fixed in the installation space 110 of the housing 100.
For ease of installation, the upper bracket 410 is preferably shaped to correspond to the installation space 110 of the housing 100.
Thanks to the closed space 400 formed by the upper bracket 410 and the lower bracket 430, external noise is shielded rather than entering through the back hole (not shown) of the sound reproduction unit 200. The back-hole size of the sound reproduction unit 200 can therefore be increased to approximately 1.0 mm, which in turn raises the sound pressure in the low band by 6 dB or more.
FIG. 3 is an exploded view of an earset according to a second embodiment of the present invention, FIG. 4 is a perspective view of the earset according to the second embodiment, and FIG. 5 is a cross-sectional view of the earset according to the second embodiment.
The earset according to the second embodiment applies the duct structure that delivers the voice transmitted through the sound-emitting hole to the voice microphone, which is the technical feature of the present invention, to an open-type earset in which a back hole 122a is formed in the rear housing 120a.
The earset according to the second embodiment has a front housing 110a facing the user's ear and a rear housing 120a facing away from the user's ear, and components are installed in the installation space formed by coupling the front housing 110a and the rear housing 120a.
Components such as a sound reproduction unit 200 such as a microspeaker, a voice microphone 300 to which the user's voice signal is input, and a communication module 15 controlling them may be installed in the housings 110a and 120a.
The front housing 110a has one or more sound-emitting holes 112a and 114a; in the second embodiment, two sound-emitting holes 112a and 114a are formed at a predetermined angle to each other. They can be divided into a relatively large first sound-emitting hole 112a and a relatively small second sound-emitting hole 114a. The first sound-emitting hole 112a outputs the sound of the acoustic transducer 200 toward the ear canal, while the second sound-emitting hole 114a balances the overall SPL, flattening the mid-band sound pressure and raising the high-band sound pressure. The angle between the acoustic radiation directions of the first sound-emitting hole 112a and the second sound-emitting hole 114a is preferably 90 degrees or more.
The voice microphone 300 receives the user's voice from the first sound-emitting hole 112a. For this purpose, a duct 420a is provided that communicates with the first sound-emitting hole 112a and delivers the user's voice to the voice microphone 300. The duct 420a is coupled to the front housing 110a. Accordingly, when the user speaks, the voice entering the first sound-emitting hole 112a through the Eustachian tube can be delivered to the voice microphone 300.
As described above, the rear housing 120a has a back hole 122a that connects the interior of the housing to the outside so as to keep the sound pressure inside the ear constant. A bracket 430a may be installed between the rear housing 120a and the acoustic transducer 200. The bracket 430a covers the back hole 122a to form a duct. The bracket 430a has a communication hole 432a, formed at a position spaced apart from the back hole 122a of the rear housing 120a, that connects the housing interior with the duct; that is, the duct connects the communication hole 432a and the back hole 122a. The duct structure formed by the bracket 430a generates an internal resonance inside the housings 110a and 120a, reinforcing the low band. The back hole 122a also serves to cancel the dip occurring in the 2 kHz band.
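The low-band reinforcement produced by such a vented duct can be understood as a Helmholtz resonance set by the port cross-section, the effective port length, and the enclosed housing volume. The following sketch estimates that resonance; the dimensions used are assumed purely for illustration and are not taken from this disclosure.

```python
import math

def helmholtz_resonance(port_area_m2, port_length_m, cavity_volume_m3, c=343.0):
    """Estimate the Helmholtz resonance f = (c / 2*pi) * sqrt(A / (V * L_eff)).

    A simple end correction of 0.85 * sqrt(A / pi) per open end is added
    to the physical port length to obtain the effective length L_eff.
    """
    end_correction = 2 * 0.85 * math.sqrt(port_area_m2 / math.pi)
    l_eff = port_length_m + end_correction
    return (c / (2 * math.pi)) * math.sqrt(port_area_m2 / (cavity_volume_m3 * l_eff))

# Illustrative (assumed) dimensions: 1.0 mm diameter port, 3 mm long,
# 0.3 cm^3 of enclosed housing volume.
area = math.pi * (0.5e-3) ** 2
f = helmholtz_resonance(area, 3e-3, 0.3e-6)  # on the order of 1.4 kHz here
```

Enlarging the cavity volume or lengthening the duct lowers the resonance, which is consistent with using the bracket-formed duct to shift reinforcement toward the low band.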
FIG. 6 is a cross-sectional view of an earset according to a third embodiment of the present invention.
The earset according to the third embodiment of the present invention is identical to the second embodiment, except that the duct 420b connecting the voice microphone 300 and the first sound-emitting hole 112a is bent and extends into the first sound-emitting hole 112a. Because the duct 420b is bent into the first sound-emitting hole, howling during a voice call can be suppressed.
By forming the voice microphone duct inside the sound-emitting hole so that, during a call, the voice arriving through the Eustachian tube is fed into the microphone, the earset provided by the present invention can suppress external noise.
Meanwhile, although howling may occur during a voice call, it can be suppressed by separately manufacturing and installing the duct 420b that extends into the sound-emitting hole to guide the voice.
The structure of forming a voice microphone duct inside the sound-emitting hole may be applied to a canal-type earset or an open-type earset. It can also be applied to wireless earsets and TWS earsets.
At least a part of the apparatus (e.g., a processor or its functions) or methods (e.g., operations) according to various embodiments may be implemented as instructions stored in a computer-readable storage medium, for example in the form of a program module. When the instructions are executed by a processor, the one or more processors may perform the functions corresponding to the instructions. The computer-readable storage medium may be, for example, a memory.
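As a concrete illustration of such program-module instructions, the round trip the system performs, namely sending the processing-target voice signal to the translation server and receiving the processing information (translated text and/or a synthesized voice signal) for display or audio output, could be sketched as below. The server URL, JSON field names, and the stub server are hypothetical placeholders, not part of the disclosed system.

```python
import json
import urllib.request

TRANSLATION_SERVER_URL = "https://translation.example.com/translate"  # hypothetical

def process_language(voice_payload: bytes, source_lang: str, target_lang: str,
                     post=None) -> dict:
    """Send a processing-target voice signal to the translation server and
    return the processing information for the display unit or speaker.

    `post` may be injected for testing; by default an HTTP POST is performed.
    """
    request_body = {
        "source_language": source_lang,
        "target_language": target_lang,
        "voice": voice_payload.hex(),  # placeholder encoding of the voice signal
    }
    if post is None:
        def post(url, body):
            req = urllib.request.Request(
                url, data=json.dumps(body).encode("utf-8"),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read().decode("utf-8"))
    return post(TRANSLATION_SERVER_URL, request_body)

# A stub standing in for the (hypothetical) translation server:
def fake_server(url, body):
    return {"translated_text": "hello", "voice": None}

result = process_language(b"\x00\x01", "ko", "en", post=fake_server)
```

The returned `translated_text` would be rendered on the display unit, while a non-empty `voice` field would be played back through the earset's sound reproduction unit.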
As described above, the present invention is not limited to the specific preferred embodiments set forth herein; various modifications can be made by anyone of ordinary skill in the art to which the invention pertains without departing from the gist of the invention claimed in the claims, and such modifications fall within the scope of the claims.

Claims (13)

  1. A language processing system using an earset, comprising:
    a first earset or a second earset;
    a wearable sound device performing wireless communication with the first earset or wired communication with the second earset; and
    an electronic communication device including a display unit and a communication unit that performs wireless communication with the first earset or the wearable sound device and communicates with a translation server,
    wherein the electronic communication device performs a language processing function of generating processing-target information including a voice signal in a processing-target language received from the first earset or the wearable sound device, transmitting the processing-target information to the translation server, receiving from the translation server processing information corresponding to the processing-target information, the processing information including translated text in a processing-goal language or a voice signal converted from the translated text, and displaying the processing information visually through the display unit or outputting it audibly.
  2. The language processing system using an earset according to claim 1,
    wherein the language processing function includes an interpretation function and the processing information corresponds to interpretation information,
    wherein, when the wearable sound device and the electronic communication device are in a communicable state and a voice signal from the first earset or the second earset contains a reference voice signal for controlling the interpretation function, the wearable sound device generates an interpretation function control command corresponding to the reference voice signal and transmits it to the electronic communication device, and
    the electronic communication device receives the interpretation function control command and starts or ends the interpretation function in response to the interpretation function control command.
  3. The language processing system using an earset according to claim 2,
    wherein, while operating an application for the interpretation function in a foreground service state during standby, the electronic communication device, upon receiving an interpretation function control command including a start of the interpretation function, wakes up the application for the interpretation function and starts the interpretation function, and
    while performing the interpretation function, upon receiving an interpretation function control command including an end of the interpretation function, the electronic communication device returns the application for the interpretation function to the foreground service state to end the interpretation function.
  4. The language processing system using an earset according to claim 3,
    wherein the electronic communication device performs the interpretation function by opening a voice communication channel with the wearable sound device.
  5. The language processing system using an earset according to claim 1,
    wherein the language processing function includes a language learning function and the processing information corresponds to translation information,
    the electronic communication device includes an input unit, and
    after the translation information is displayed, the electronic communication device obtains and stores a learning-success selection or a learning-failure selection for a first language through the input unit.
  6. The language processing system using an earset according to claim 5,
    wherein, in the case of a learning-failure selection, the electronic communication device receives translation-target text in a second language through the input unit, transmits the translation-target text information to the translation server, receives from the translation server translated text in the first language corresponding to the translation-target text information, and displays it through the display unit.
  7. The language processing system using an earset according to claim 1,
    wherein each of the first and second earsets comprises: a housing that defines an installation space in which components are installed, forms the exterior, and has a sound-emitting hole; a sound reproduction unit installed in the installation space and emitting sound; a voice microphone installed in the installation space; and a duct, installed in the housing, that delivers the voice transmitted through the sound-emitting hole to the voice microphone.
  8. The language processing system using an earset according to claim 7,
    wherein the housing has an insertion tube inserted into the user's ear canal, and the insertion tube serves as the sound-emitting hole.
  9. The language processing system using an earset according to claim 8, further comprising a chamber forming a closed space surrounding the voice microphone,
    wherein the duct is formed in the chamber and delivers the voice transmitted through the insertion tube to the voice microphone.
  10. The language processing system using an earset according to claim 9,
    wherein the chamber comprises an upper bracket fixing the voice microphone within the installation space and a lower bracket engaging the upper bracket to form the space, and
    the sound reproduction unit is installed between the upper bracket and the lower bracket, with the closed space surrounding the sound reproduction unit and the installation space of the voice microphone separated from each other.
  11. The language processing system using an earset according to claim 7,
    wherein the housing has a back hole communicating with the rear surface of the sound reproduction unit.
  12. The language processing system using an earset according to claim 11, comprising one or more brackets installed between the sound reproduction unit and the housing and capable of tuning acoustic characteristics.
  13. The language processing system using an earset according to claim 7,
    wherein the duct extends into the sound-emitting hole.
PCT/KR2020/014544 2019-10-25 2020-10-23 Language processing system using earset WO2021080362A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2019-0133599 2019-10-25
KR1020190133599A KR102285877B1 (en) 2019-10-25 2019-10-25 Translation system using ear set
KR1020190133598A KR102219494B1 (en) 2019-10-25 2019-10-25 Ear set and language learning system using thereof
KR10-2019-0133598 2019-10-25

Publications (1)

Publication Number Publication Date
WO2021080362A1 true WO2021080362A1 (en) 2021-04-29

Family

ID=75619982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/014544 WO2021080362A1 (en) 2019-10-25 2020-10-23 Language processing system using earset

Country Status (1)

Country Link
WO (1) WO2021080362A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150081157A (en) * 2014-01-03 2015-07-13 엘지전자 주식회사 Neck badn type terminal mobile
KR101693268B1 (en) * 2015-04-10 2017-01-05 해보라 주식회사 Earset
KR101767467B1 (en) * 2016-04-19 2017-08-11 주식회사 오르페오사운드웍스 Noise shielding earset and method for manufacturing the earset
KR101834546B1 (en) * 2013-08-28 2018-04-13 한국전자통신연구원 Terminal and handsfree device for servicing handsfree automatic interpretation, and method thereof
JP2019175426A (en) * 2018-12-20 2019-10-10 株式会社フォルテ Translation system, translation method, translation device, and voice input/output device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411995A (en) * 2021-05-27 2021-09-17 王强 Language translator for identifying multiple languages
CN113411995B (en) * 2021-05-27 2023-05-23 德州学院 Language translator for identifying multiple languages


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20878291; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: DE)

122 Ep: PCT application non-entry in European phase (Ref document number: 20878291; Country of ref document: EP; Kind code of ref document: A1)