KR20150136583A - Apparatus and method for multi-language dialogue - Google Patents
Apparatus and method for multi-language dialogue
- Publication number
- KR20150136583A
- Authority
- KR
- South Korea
- Prior art keywords
- language
- answer
- sentence
- user
- database
- Prior art date
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/06—Foreign languages
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
- G06F17/2809
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Entrepreneurship & Innovation (AREA)
- Machine Translation (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
The present invention relates to an apparatus and method for carrying on a multilingual conversation with a user, and more particularly, to an apparatus and method for answering a user's question in a language designated by the user.
BACKGROUND ART [0002] Many products, such as toys and robots, that can exchange questions and answers with users have recently been released. However, these devices are configured to answer in the same language in which the question was asked. They therefore cannot meet the demand of users who want to learn a foreign language naturally by experiencing conversation in various languages.
SUMMARY OF THE INVENTION The present invention has been made to solve the above-mentioned problems. It is an object of the present invention to implement, in a toy or the like, a function of answering a question asked in one language with an answer in another language designated by the user, so that the user can practice foreign languages through natural conversation and play rather than through rigid language learning methods.
It further aims to expose young children to various languages from an early age through toys equipped with these functions, so that they can acquire foreign languages naturally.
In order to achieve the above object, a method for performing a multilingual conversation according to the present invention comprises the steps of: (a) setting a language to be used for questions and a language to be used for answers according to an input from a user (hereinafter, the language set for questions is referred to as the 'first language' and the language set for answers as the 'second language'); (b) receiving a question voice in the first language from the user; (c) performing sentence analysis on the input question voice; (d) searching the first language database of a multilingual query response database for the question sentence closest in meaning to the question identified by the sentence analysis in step (c), and identifying the answer sentence linked to the retrieved question sentence in the first language database; (e) retrieving, from a second language database of the multilingual query response database, the answer sentence corresponding to the answer sentence identified in step (d); and (f) outputting the answer sentence composed in the second language retrieved in step (e) to the user.
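Purely as an illustration of steps (a) to (f) — a minimal sketch, not the patent's implementation; every identifier here (run_dialogue, recognize_speech, synthesize_speech, and the db object, which is sketched further below) is hypothetical — the claimed flow can be modeled as:

```python
# Illustrative sketch of the claimed flow, steps (a)-(f).
# All identifiers are hypothetical; `db` is a question/answer store
# exposing closest_question() and answer() (sketched further below).

def run_dialogue(db, recognize_speech, synthesize_speech):
    # (a) set the question language ('first language') and the
    #     answer language ('second language') from user input
    first_lang = input("Question language: ").strip().lower()   # e.g. "korean"
    second_lang = input("Answer language: ").strip().lower()    # e.g. "japanese"

    # (b) receive a question voice in the first language
    question_text = recognize_speech(language=first_lang)

    # (c)+(d) analyze the sentence, find the closest stored question
    #         in the first-language database, and follow its link to
    #         the answer entry
    answer_id = db.closest_question(first_lang, question_text)

    # (e) retrieve the linked answer sentence in the second language
    answer_text = db.answer(second_lang, answer_id)

    # (f) output the answer to the user as voice and/or text
    synthesize_speech(answer_text, language=second_lang)
    print(answer_text)
```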
In step (a), the input for setting the languages may be a voice input or an input from the user via a button or a touch screen interface.
In step (f), the output of the answer may include at least one of a speech output in the second language and a text output in the second language.
In step (a), two answer languages may be input from the user: the second language is set as the language used for the voice-output answer, and another of the input languages (hereinafter referred to as the 'third language') is set as the language used for the text-output answer. In this case the answer sentence corresponding to the identified answer sentence is also searched for in a third language database of the multilingual query response database, and in step (f) the output of the answer may comprise a speech output in the second language and a text output in the third language.
The method may further comprise: (g1) searching a facial expression database for a facial expression image corresponding to the retrieved answer sentence; and (g2) displaying the retrieved facial expression image through a display unit.
According to another aspect of the present invention, there is provided an apparatus for conducting a dialogue by giving multilingual answers to a user's questions, the apparatus comprising: a language setting unit for setting a language to be used for questions and a language to be used for answers according to an input from a user (hereinafter, the language set for questions is referred to as the 'first language' and the language set for answers as the 'second language'); a voice input unit for receiving a question voice in the first language from the user; a speech recognition unit for performing sentence analysis on the input question voice; a multilingual query response database storing questions and their corresponding answers in multiple languages; an answer search unit for searching the first language database of the multilingual query response database for the question sentence closest in meaning to the question identified by the sentence analysis of the speech recognition unit, identifying in the first language database the answer sentence linked to the retrieved question sentence, and searching a second language database of the multilingual query response database for the answer sentence corresponding to the identified answer sentence; an output unit for outputting the answer sentence retrieved by the answer search unit to the user; and a control unit for controlling the respective components of the multilingual dialogue apparatus to perform the series of processes related to multilingual conversation.
The input for setting the languages may be a user voice input received through the voice input unit, or an input from the user through a button or touch screen interface provided by a user interface providing unit (hereinafter collectively referred to as the 'user interface').
The output of the answer may include at least one of a speech output in the second language through a voice output unit and a text output in the second language through a display unit.
Two answer languages may be input from the user: the second language is set as the language used for the voice-output answer, and another of the input languages (hereinafter referred to as the 'third language') is set as the language used for the text-output answer. The answer search unit then also searches the third language database of the multilingual query response database for the answer sentence corresponding to the identified answer sentence, and the output of the answer comprises a speech output in the second language through the voice output unit and a text output in the third language through the display unit.
The apparatus for conducting a dialogue by giving multilingual answers to a user's questions may further include a facial expression image database storing a facial expression image corresponding to each answer sentence, in which case the output unit outputs, together with the retrieved answer sentence, the facial expression image corresponding to that sentence.
According to the present invention, a function of answering a user's question asked in one language with an answer in another specific language designated by the user is implemented in a toy or the like, allowing the user to practice foreign languages through natural conversation and play rather than through existing rigid language learning methods. In addition, young children can be exposed to various languages from an early age through toys equipped with these functions, so that they can acquire foreign languages naturally.
FIG. 1 is a block diagram of a database for implementing a multilingual dialogue method according to the present invention, in which a question and its answer are matched for various languages.
FIG. 2 is a diagram illustrating an embodiment of a method for performing multilingual conversation according to the present invention.
FIG. 3 is a diagram illustrating another embodiment of a method for performing multilingual conversation according to the present invention.
FIG. 4 is a diagram illustrating yet another embodiment of a multilingual conversation method according to the present invention.
FIG. 5 is a diagram showing the configuration of a multilingual dialogue apparatus according to the present invention.
FIG. 6 is a diagram illustrating an example of a screen on which a text answer appears in the multilingual dialogue apparatus according to the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The terms and words used in this specification and the claims should not be construed as limited to their ordinary or dictionary meanings; rather, based on the principle that an inventor may appropriately define terms to best describe his or her own invention, they should be interpreted with meanings and concepts consistent with the technical idea of the present invention. The embodiments described in this specification and the configurations shown in the drawings are therefore merely the most preferred embodiments of the present invention and do not represent all of its technical ideas, and it is to be understood that various equivalents and modifications are possible.
FIG. 1 is a block diagram of a database for implementing the multilingual dialogue method according to the present invention, in which questions and their answers are matched across various languages; FIG. 2 is a diagram illustrating an embodiment of the method; and FIG. 3 is a diagram illustrating another embodiment of the method. Hereinafter, the multilingual dialogue method according to the present invention will be described with reference to FIGS. 1 to 3. In particular, FIG. 1 illustrates the structure of the multilingual query response database 180 of FIG. 5 in greater detail.
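Purely as an illustration of the database structure that FIG. 1 describes — a hypothetical sketch in which the class name, language keys, and sample sentences are all invented — the question sentences of each language database can be linked to language-independent answer IDs, which in turn index the answer sentence in every language:

```python
import difflib

class MultilingualQADB:
    """Hypothetical sketch of the multilingual query response database 180.

    Each language's question table maps question sentences to shared,
    language-independent answer IDs; each language's answer table maps
    those IDs to that language's answer sentence.
    """

    def __init__(self):
        self.questions = {
            "korean":  {"고마워": "A1"},          # "thank you" -> answer A1
            "english": {"thank you": "A1"},
        }
        self.answers = {
            "korean":   {"A1": "천만에요"},
            "japanese": {"A1": "どういたしまして"},
            "english":  {"A1": "you're welcome"},
        }

    def closest_question(self, lang, text):
        # Stand-in for real sentence-meaning analysis: pick the stored
        # question most similar in surface form and return its answer ID.
        stored = self.questions[lang]
        best = difflib.get_close_matches(text, stored, n=1, cutoff=0.0)[0]
        return stored[best]

    def answer(self, lang, answer_id):
        # Look up the answer sentence for this ID in the given language.
        return self.answers[lang][answer_id]
```

Because the question-to-answer link is carried by a language-independent ID in this sketch, supporting an additional answer language only requires adding one more answer table.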
First, referring to FIG. 2, a language to be used for questions and a language to be used for answers are set according to an input from the user (S210). If the language set for questions is called the 'first language' and the language set for answers the 'second language', the first and second languages may be the same language or different languages. For example, the first language may be set to Korean and the second language to Japanese.
The languages may be set by voice or by input from the user via a button or touch screen interface. That is, when the user speaks a specific phrase such as "language setting", the multilingual dialogue apparatus 100 (see FIG. 5) according to the present invention recognizes it as the start of language setting, and the user may then speak, for example, "Question: Korean, Answer: Japanese". Alternatively, the multilingual dialogue apparatus 100 may include an input device with buttons for setting the languages, or may provide a user interface through a touch screen so that the user can set the languages.
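For instance, a minimal sketch of handling such a setting phrase — assuming speech recognition has already produced text, and assuming the illustrative "Question: X, Answer: Y" format above (neither assumption comes from the patent) — could be:

```python
import re

def parse_language_setting(utterance):
    # Hypothetical parser for a spoken setting phrase such as
    # "Question: Korean, Answer: Japanese" (format assumed for illustration).
    match = re.match(r"question:\s*(\w+)\s*,\s*answer:\s*(\w+)",
                     utterance.strip(), re.IGNORECASE)
    if match is None:
        raise ValueError("not a language-setting phrase")
    return match.group(1).lower(), match.group(2).lower()

# parse_language_setting("Question: Korean, Answer: Japanese")
# -> ("korean", "japanese")
```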
Thereafter, the user inputs a question voice in the first language, the language set for questions (S220). The multilingual dialogue apparatus 100 performs sentence analysis, that is, analysis of the meaning of the sentence, on the input question voice (S230). The apparatus then searches the first language database of the multilingual query response database 180 (see FIG. 5) for the question sentence closest in meaning to the identified question (S240), and identifies the answer sentence linked to the retrieved question sentence in the first language database (S250). The second language database of the multilingual query response database 180 is then searched for the answer sentence corresponding to the identified answer sentence (S260), and the answer sentence composed in the second language is output to the user (S270). At this time, the answer may be output to the user as a voice in the second language or as text through the display unit 172 (see FIG. 5) of the multilingual dialogue apparatus 100.
For example, referring to FIG. 1, when Korean is set as the question language and Japanese as the answer language, the user may say "thank you" in Korean. After analyzing the sentence, the multilingual dialogue apparatus 100 finds the stored question most similar to it in the Korean database of the multilingual query response database 180, identifies the answer sentence linked to that question, searches the Japanese database for the Japanese sentence corresponding to that answer, and outputs it to the user.
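Continuing the hypothetical database sketch above, this Korean-to-Japanese exchange corresponds to a lookup like the following (the sentences are placeholder entries, not the patent's actual database contents):

```python
db = MultilingualQADB()
answer_id = db.closest_question("korean", "고마워")  # closest stored Korean question
print(db.answer("japanese", answer_id))             # -> どういたしまして
```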
In addition, the facial expression image corresponding to the retrieved answer sentence may be retrieved from the facial expression database and displayed through the display unit (S280).
FIG. 3 is substantially the same as FIG. 2, except that in the final output the answer in the second language is output to the user in both voice and text form (S370, S380).
FIG. 4 illustrates yet another embodiment of the multilingual conversation method according to the present invention. It is largely the same as FIGS. 2 and 3, except that when the answer is output to the user, the voice output and the text output use different languages (S480, S490).
In this embodiment, two answer languages are input from the user (S420): the second language among them is set as the language used for the voice-output answer, and another of the input languages (the 'third language') is set as the language used for the text-output answer. The third language database of the multilingual query response database is also searched for the answer sentence corresponding to the identified answer sentence (S470); the answer sentence composed in the second language is then voice-output (S480), and the answer sentence composed in the third language is text-output (S490).
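A minimal sketch of this dual-language output step, reusing the hypothetical database object from above (synthesize_speech is again an assumed stand-in for a text-to-speech routine):

```python
def output_answer(db, answer_id, second_lang, third_lang, synthesize_speech):
    # S470: retrieve the answer sentence in both output languages
    voice_sentence = db.answer(second_lang, answer_id)
    text_sentence = db.answer(third_lang, answer_id)
    # S480: voice output in the second language
    synthesize_speech(voice_sentence, language=second_lang)
    # S490: text output in the third language
    print(text_sentence)
```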
FIG. 5 is a diagram showing the configuration of the multilingual dialogue apparatus 100 according to the present invention.
The multilingual dialogue method performed by the multilingual dialogue apparatus 100 has already been described in detail with reference to FIGS. 1 to 4 and 6. Hereinafter, the functions of each component of the multilingual dialogue apparatus 100 will be briefly described.
The control unit 110 controls the following components of the multilingual dialogue apparatus 100 to perform the series of processes related to multilingual conversation.
The language setting unit 140 sets a language to be used for questions and a language to be used for answers according to an input from the user. Hereinafter, the language set for questions is referred to as the 'first language' and the language set for answers as the 'second language'. The input for setting the languages can be made by voice or through a button or touch screen interface (collectively referred to as the 'user interface') provided by the user interface providing unit 120. In the case of voice input, the voice input unit 130 recognizes that the user's utterance is an input for setting a language, and the language setting unit 140 sets the languages spoken by the user; when the input is made through the interface provided by the user interface providing unit 120, the language setting unit 140 sets the languages selected there.
The voice input unit 130 receives the question voice in the first language from the user, and the speech recognition unit 150 performs sentence analysis on the input question voice, that is, analyzes the meaning of the sentence.
The multilingual query response database 180 stores questions and corresponding answers configured in multiple languages, and the structure thereof is shown in detail in FIG.
The answer search unit 160 searches the first language database of the multilingual query response database 180 for the question sentence closest in meaning to the question identified by the sentence analysis of the speech recognition unit 150, identifies the answer sentence linked to it in the first language database, and searches the second language database of the multilingual query response database for the answer sentence corresponding to the identified answer sentence.
The output unit 170 outputs the answer sentence in the second language retrieved by the answer search unit 160 to the user. The output of the answer may comprise one or both of a speech output in the second language through the voice output unit 171 and a text output in the second language through the display unit 172.
When the voice output and the text output are in different languages, two answer languages are input from the user: the second language among them is used for the voice-output answer, and another of the input languages (the 'third language') is set as the language used for the text-output answer. Accordingly, the answer sentence corresponding to the identified answer sentence is also searched for in the third language database of the multilingual query response database 180, and the output of the answer comprises a speech output in the second language through the voice output unit and a text output in the third language through the display unit.
In addition, the apparatus may further include a facial expression database 190 storing a facial expression image corresponding to each answer sentence; in this case, the output unit 170 also outputs the facial expression image corresponding to the retrieved answer sentence.
FIG. 6 is a diagram illustrating an example of a screen on which a text answer appears in the multilingual dialogue apparatus according to the present invention.
100: Multilingual dialogue apparatus
110: Control unit
120: User interface providing unit
130: Voice input unit
140: Language setting unit
150: Speech recognition unit
160: Answer search unit
170: Output unit
171: Voice output unit
172: Display unit
180: Multilingual query response database
190: Facial expression image database
Claims (10)
(a) setting a language to be used for questions and a language to be used for answers according to an input from a user (hereinafter, the language set for questions is referred to as the 'first language', and the language set for answers as the 'second language');
(b) receiving a question voice in the first language from a user;
(c) performing sentence analysis on the input question voice;
(d) searching the first language database of a multilingual query response database for the question sentence closest in meaning to the question identified by the sentence analysis in step (c), and identifying the answer sentence linked to the retrieved question sentence in the first language database;
(e) retrieving, from a second language database of the multilingual query response database, the answer sentence corresponding to the answer sentence identified in step (d); and
(f) outputting to the user the answer sentence composed in the second language retrieved in step (e);
a method for performing a multilingual conversation comprising the steps (a) to (f) above.
The method above, wherein in step (a) the input for setting the languages is
an input by voice, or an input from the user via a button or a touch screen interface.
The method above, wherein in step (f) the output of the answer includes at least one of
a speech output in the second language and a text output in the second language.
The method above, wherein in step (a)
two answer languages are input from the user,
the second language among the input languages is set as the language used for the voice-output answer,
another of the input languages (hereinafter referred to as the 'third language') is set as the language used for the text-output answer, and
the answer sentence corresponding to the identified answer sentence is also searched for in a third language database of the multilingual query response database,
and wherein in step (f) the output of the answer comprises
a voice output in the second language and a text output in the third language.
A language setting unit for setting a language to be used for questions and a language to be used for answers (hereinafter, the language set for questions is referred to as the 'first language', and the language set for answers as the 'second language');
A voice input unit for receiving a question voice in the first language from a user;
A speech recognition unit for performing sentence analysis on the input question voice;
A multi-lingual query response database configured to store data of questions and their corresponding answers in multiple languages;
An answer search unit for searching the first language database of the multilingual query response database for the question sentence closest in meaning to the question identified by the sentence analysis of the speech recognition unit, identifying in the first language database the answer sentence linked to the retrieved question sentence, and searching a second language database of the multilingual query response database for the answer sentence corresponding to the identified answer sentence;
An output unit for outputting a response sentence retrieved by the answer retrieving unit to a user; And
A control unit for controlling the respective components of the apparatus to perform the series of processes related to multilingual conversation;
the above units together constituting an apparatus for performing a multilingual dialogue.
The apparatus above, wherein the input for setting the languages is
a user voice input received through the voice input unit, or
an input from the user through a button or touch screen interface provided by the user interface providing unit (hereinafter collectively referred to as the 'user interface').
The apparatus above, wherein the output of the answer includes at least one of:
a voice output in the second language through the voice output unit; and
a text output in the second language through the display unit.
The apparatus above, wherein
two answer languages are input from the user,
the second language among the input languages is set as the language used for the voice-output answer,
another of the input languages (hereinafter referred to as the 'third language') is set as the language used for the text-output answer, and
the answer search unit also searches the third language database of the multilingual query response database for the answer sentence corresponding to the identified answer sentence,
and wherein the output of the answer comprises:
a voice output in the second language through the voice output unit; and
a text output in the third language through the display unit.
The method above, further comprising the steps of:
(g1) searching a facial expression database for the facial expression image corresponding to the retrieved answer sentence; and
(g2) displaying the retrieved facial expression image through the display unit.
The apparatus above, further comprising
a facial expression database storing a facial expression image corresponding to each answer sentence,
wherein the output unit outputs, together with the retrieved answer sentence, the facial expression image corresponding to that sentence.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140063370 | 2014-05-27 | ||
KR20140063370 | 2014-05-27 | ||
KR20140130051 | 2014-09-29 | ||
KR1020140130051 | 2014-09-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20150136583A (en) | 2015-12-07 |
Family
ID=54872428
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150074135A KR20150136583A (en) | 2014-05-27 | 2015-05-27 | Apparatus and method for multi-language dialogue |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20150136583A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190004486A (en) * | 2017-07-04 | 2019-01-14 | 조희정 | Method for training conversation using dubbing/AR |
KR20190039079A (en) * | 2016-08-16 | 2019-04-10 | 코쿠리츠켄큐카이하츠호진 죠호츠신켄큐키코 | Dialog system and computer program for it |
KR20200098302A (en) * | 2019-02-12 | 2020-08-20 | 조희경 | Educational Robots, Foreign Language Learning Method Using Educational Robots, and Media Recorded with Program Executing Foreign Language Learning Method |
- 2015-05-27: Application KR1020150074135A filed in KR, published as KR20150136583A (status: active, Search and Examination)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20160171387A1 (en) | Digital companions for human users | |
Michael | Automated Speech Recognition in language learning: Potential models, benefits and impact | |
KR102292477B1 (en) | Server and method for automatic assessment of oral language proficiency | |
KR20150136583A (en) | Apparatus and method for multi-language dialogue | |
US20200042604A1 (en) | Translation device | |
JP6656529B2 (en) | Foreign language conversation training system | |
KR100593589B1 (en) | Multilingual Interpretation / Learning System Using Speech Recognition | |
JP2019061189A (en) | Teaching material authoring system | |
KR20190041890A (en) | Device and Server System for Learning Foreign Language | |
JP6231510B2 (en) | Foreign language learning system | |
Ali et al. | A bilingual interactive human avatar dialogue system | |
CN114170856B (en) | Machine-implemented hearing training method, apparatus, and readable storage medium | |
JP6383748B2 (en) | Speech translation device, speech translation method, and speech translation program | |
KR102272567B1 (en) | Speech recognition correction system | |
KR20140075994A (en) | Apparatus and method for language education by using native speaker's pronunciation data and thought unit | |
KR102098377B1 (en) | Method for providing foreign language education service learning grammar using puzzle game | |
US11961413B2 (en) | Method, system and non-transitory computer-readable recording medium for supporting listening | |
JP2006301967A (en) | Conversation support device | |
Kasrani et al. | A Mobile Cloud Computing Based Independent Language Learning System with Automatic Intelligibility Assessment and Instant Feedback. | |
Shivakumar et al. | AI-ENABLED LANGUAGE SPEAKING COACHING FOR DUAL LANGUAGE LEARNERS. | |
Shukla | Development of a human-AI teaming based mobile language learning solution for dual language learners in early and special educations | |
KR20090087855A (en) | Method and system for learning sentences provided with a step thinking a corresponding sentence | |
KR20140073768A (en) | Apparatus and method for language education by using native speaker's pronunciation data and thoughtunit | |
JP2006078802A (en) | Device and method for supporting learning, and program | |
KR20150054045A (en) | Chain Dialog Pattern-based Dialog System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
E601 | Decision to refuse application | ||
AMND | Amendment |