WO2023152942A1 - Language learning system

Info

Publication number: WO2023152942A1
Authority: WO (WIPO (PCT))
Prior art keywords: language, sentence, data, unit, speech data
Application number: PCT/JP2022/005560
Other languages: French (fr), Japanese (ja)
Inventor: 伸雄 椿
Original Assignee: 株式会社ムゴン
Application filed by 株式会社ムゴン
Priority to PCT/JP2022/005560
Publication of WO2023152942A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/06: Foreign languages

Definitions

  • The present invention relates to a system for learning languages.
  • Patent Literature 1 listed below discloses a system for learning a language (language learning system).
  • An object of the present invention is to provide a language learning system capable of efficiently improving a user's foreign language listening and speaking ability.
  • The language learning system comprises: a translation unit that translates a sentence in a first language into a sentence in a second language; a speech data generation unit that generates second-language sentence speech data, i.e., speech data of the second-language sentence; and a determination unit that compares the second-language sentence speech data with pronunciation data, i.e., speech data of the user's pronunciation.
  • Preferably, the speech data generation unit generates second-language sentence-similar speech data, i.e., speech data whose pronunciation is similar to the second-language sentence, and the determination unit determines the accuracy of the pronunciation data by comparing the second-language sentence speech data and the second-language sentence-similar speech data with the pronunciation data.
  • Preferably, the system comprises a division unit that divides the second-language sentence to generate sentence fragments; the speech data generation unit generates sentence fragment speech data, i.e., speech data of each sentence fragment, and the determination unit compares the sentence fragment speech data with the pronunciation data.
  • Preferably, the speech data generation unit generates sentence-fragment-similar speech data, i.e., speech data whose pronunciation is similar to a sentence fragment, and the determination unit determines the accuracy of the pronunciation data by comparing the sentence fragment speech data and the sentence-fragment-similar speech data with the pronunciation data.
  • Preferably, the system comprises a speech data output instruction unit that instructs the terminal to output the second-language sentence speech data as audio.
  • Alternatively, the speech data output instruction unit instructs the terminal to output the second-language sentence speech data as audio and to output the sentence fragment speech data as audio.
  • The system may further comprise: a question sentence database in which a plurality of first-language sentences are stored; a question selection unit that selects one first-language sentence from the question sentence database; a screen generation unit that generates a display screen showing the first-language sentence selected by the question selection unit; and a timer that sends a timer signal (a first timer signal) to the question selection unit every time a predetermined time (a first predetermined time) elapses, the question selection unit performing the selection when the timer signal is input.
  • According to the language learning system of the present invention, it is possible to efficiently improve the user's foreign-language listening and speaking abilities.
  • FIG. 1 is a block diagram illustrating the configuration of a language learning system according to Example 1 of the present invention.
  • FIG. 2 is a schematic diagram showing an example of a first display screen in Example 1.
  • FIG. 3 is a schematic diagram showing an example of a second display screen in Example 1.
  • FIG. 4 is a flowchart illustrating an example of the operation of the language learning system according to Example 1 (during user learning).
  • FIG. 5 is a flowchart illustrating an example of the operation of the language learning system according to Example 1 (when linking similar words).
  • FIG. 6 is a block diagram illustrating the configuration of a language learning system according to Example 2.
  • FIG. 7 is a block diagram illustrating the configuration of a language learning system according to Example 3.
  • FIG. 8 is a conceptual diagram illustrating the timing of timer signal transmission in Example 3.
  • A language learning system according to the present invention is a system used by a user to learn a foreign language.
  • Hereinafter, the language learning system according to the present invention will be described by way of examples with reference to the drawings.
  • FIG. 1 is a block diagram illustrating the configuration of the language learning system according to Example 1.
  • The language learning system (language learning system 1) according to this example is provided in a server 3 that can communicate with each terminal 2 via a network line L such as the Internet.
  • Here, a terminal 2 is any device capable of communicating over the network line L, such as a PC, smartphone, or tablet.
  • The language learning system 1 includes a speech data generation unit 11, a communication control unit 12, a question sentence database 13, a question selection unit 14, a screen generation unit 15, a word database 16, a translation unit 17, a division unit 18, a storage unit 19, a speech data output instruction unit 20, a pronunciation similarity database 21, a determination unit 22, and a transmission instruction unit 23.
  • The communication control unit 12 transmits and receives information to and from the terminal 2 via the network line L; explicit mention of this is omitted in the description below.
  • The question sentence database 13 stores one or more first-language sentences (for example, everyday conversational sentences such as "I feel like something is wrong"); the more sentences, the better.
  • The first language is mainly assumed to be the native language of the user.
  • In this example, the first language is described as a single language, but the present invention is not limited to this; one or more sentences may be stored for each of a plurality of languages.
  • The question selection unit 14 selects one sentence from the sentences stored in the question sentence database 13.
  • The screen generation unit 15 generates at least a first display screen D1 and a second display screen D2, depending on the situation. These display screens are transmitted to each terminal 2, and each terminal 2 displays the screens received from the server 3.
  • On the second display screen D2, the system administrator or the user can enter words (parts of speech) whose pronunciations are similar, associating them with one another. Whether two words sound similar is left to the discretion of the system administrator or the user.
  • The word database 16 is a database in which words (parts of speech) of a plurality of languages, including the first language and the second language, are stored grouped by shared meaning.
  • The word database 16 also stores speech data for each word. The second language is mainly assumed to be a language foreign to the user.
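  • As a rough illustration, such a word database could be organized as follows. This is a minimal Python sketch; the meaning IDs, field names, and audio paths are illustrative assumptions, not details taken from the patent.

        WORD_DATABASE = {
            "meaning_great": {  # hypothetical meaning ID shared across languages
                "en": {"word": "great", "audio_path": "audio/en/great.wav"},
                "ja": {"word": "凄い", "audio_path": "audio/ja/sugoi.wav"},
            },
        }

        def lookup_translation(word, src, dst):
            """Return the dst-language word sharing a meaning ID with `word`, if any."""
            for entry in WORD_DATABASE.values():
                if entry.get(src, {}).get("word") == word:
                    return entry.get(dst, {}).get("word")
            return None

        print(lookup_translation("great", "en", "ja"))  # -> 凄い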
  • Based on the information stored in the word database 16, the translation unit 17 translates the first-language sentence selected by the question selection unit 14 into a second-language sentence (if the first language is English and the second language is Japanese, for example, "I feel like something is wrong" → 「なんか違う気がする」).
  • Preferably, the user can designate both the first language and the second language on the first display screen D1 displayed on the terminal 2 (or on a separate third display screen).
  • For example, if the first language is English and the second language is Japanese, the translation unit 17 translates English sentences into Japanese sentences.
  • The division unit 18 divides the second-language sentence translated by the translation unit 17.
  • The sentence is divided between words, preferably at every phrase boundary (for example, 「なんか違う気がする」 becomes 「なんか」「違う」「気がする」, or 「なんか違う」「気がする」; it is not divided mid-phrase into pieces such as 「な」「んか違」「う」「気がす」「る」).
  • Each divided portion is hereinafter referred to as a "sentence fragment". That is, the division unit 18 divides the second-language sentence to generate sentence fragments, as sketched below.
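  • The division into fragments might look like the following minimal Python sketch. Real phrase (bunsetsu) segmentation of Japanese would need a morphological analyzer; here the phrase boundaries are assumed to be already known, which is a simplification.

        def divide_into_fragments(phrases, group_size=1):
            """Join consecutive phrases into sentence fragments of `group_size` phrases."""
            return ["".join(phrases[i:i + group_size])
                    for i in range(0, len(phrases), group_size)]

        phrases = ["なんか", "違う", "気がする"]     # "I feel like something is wrong"
        print(divide_into_fragments(phrases))      # ['なんか', '違う', '気がする']
        print(divide_into_fragments(phrases, 2))   # ['なんか違う', '気がする']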
  • The storage unit 19 stores the second-language sentence translated by the translation unit 17 and the sentence fragments produced by the division unit 18, each together with its speech data (the speech data is generated by the speech data generation unit 11 described later).
  • The first play button P1 displayed on the first display screen D1 is a button assigned to the second-language sentence translated by the translation unit 17, and each of the second play buttons P2 is assigned to one of the sentence fragments produced by the division unit 18 (the number of second play buttons P2 equals the number of sentence fragments and therefore changes each time). However, the second-language sentence and the sentence fragments themselves are not displayed on the first display screen D1.
  • If the sentence translated by the translation unit 17 is too short to be divided (for example, a single word), the division unit 18 does not perform division, and the first display screen D1 generated by the screen generation unit 15 has no second play button P2 (the first play button P1 is still provided).
  • The pronunciation similarity database 21 stores, for each of a plurality of languages including the second language, words with similar pronunciations associated with one another. Three or more words may be associated with one another.
  • The information in the pronunciation similarity database 21 can be entered from the second display screen D2.
  • The second display screen D2 may be viewable and editable only from a terminal owned by the system administrator, or from terminals owned by the system administrator and by general users.
  • The speech data generation unit 11 extracts from the word database 16 the speech data of the words constituting the second-language sentence translated by the translation unit 17 (hereinafter "second-language sentence words") and combines them to generate the speech data of the second-language sentence (hereinafter "second-language sentence speech data").
  • The speech data generation unit 11 also extracts from the pronunciation similarity database 21 the speech data of words that sound similar to the second-language sentence words (hereinafter "second-language sentence-similar words") and combines them to generate speech data whose pronunciation is similar to the second-language sentence (hereinafter "second-language sentence-similar speech data").
  • Further, the speech data generation unit 11 extracts from the word database 16 the speech data of the words constituting each sentence fragment produced by the division unit 18 (hereinafter "sentence fragment words") and combines them to generate the speech data of that fragment (hereinafter "sentence fragment speech data"); this is done for each of the fragments.
  • Likewise, the speech data generation unit 11 extracts from the pronunciation similarity database 21 the speech data of words that sound similar to the sentence fragment words (hereinafter "sentence-fragment-similar words") and combines them to generate speech data whose pronunciation is similar to the fragment (hereinafter "sentence-fragment-similar speech data"); this too is done for each of the fragments.
  • As for the (phonetic) relationship between a second-language sentence word and a second-language sentence-similar word: if the second-language sentence word is 「凄い」 (sugoi), for example, a second-language sentence-similar word would be 「素顔」 (sugao).
  • The second-language sentence-similar speech data also includes speech data of sentences generated by combining the speech data of second-language sentence words with that of second-language sentence-similar words (for example, "second-language sentence word / second-language sentence-similar word / second-language sentence word").
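  • A minimal Python sketch of this generation step is shown below. Audio clips are stood in for by strings rather than waveform data, and the similarity table holds only the patent's own 「凄い」/「素顔」 example pair; both are simplifying assumptions.

        import itertools

        PRONUNCIATION_SIMILARITY_DB = {"凄い": ["素顔"]}  # similar-sounding words (database 21)

        def word_audio(word):
            return f"<audio:{word}>"  # placeholder for a clip from word database 16

        def sentence_speech_data(words):
            # Second-language sentence speech data: the word clips joined in order.
            return "".join(word_audio(w) for w in words)

        def similar_speech_data(words):
            # Every combination of each word with its sound-alikes, minus the
            # exact original, yields one similar-speech variant.
            choices = [[w] + PRONUNCIATION_SIMILARITY_DB.get(w, []) for w in words]
            variants = ("".join(word_audio(w) for w in combo)
                        for combo in itertools.product(*choices))
            original = sentence_speech_data(words)
            return [v for v in variants if v != original]

        words = ["凄い", "気がする"]
        print(sentence_speech_data(words))  # <audio:凄い><audio:気がする>
        print(similar_speech_data(words))   # ['<audio:素顔><audio:気がする>']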
  • When the speech data output instruction unit 20 receives information that the user has clicked the first play button P1 on the first display screen D1 displayed on the terminal 2, it generates instruction information for outputting the second-language sentence speech data. The instruction information is transmitted to the terminal 2 together with the second-language sentence speech data, and the terminal 2 that receives it outputs the second-language sentence speech data from its sound output unit (a speaker, or earphones connected to the terminal 2; not shown).
  • When the speech data output instruction unit 20 receives information that the user has clicked one of the second play buttons P2 on the first display screen D1 displayed on the terminal 2, it generates instruction information for outputting the sentence fragment speech data of the fragment assigned to the clicked second play button P2. The instruction information is transmitted to the terminal 2 together with the sentence fragment speech data, and the terminal 2 that receives it outputs the sentence fragment speech data from its sound output unit.
  • In this way, the speech data output instruction unit 20 instructs the terminal 2 to output the second-language sentence speech data as audio and to output the sentence fragment speech data as audio.
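  • The patent does not specify a wire format for this instruction information; one plausible shape, sketched here purely as an assumption, is a small message sent alongside the audio payload:

        def make_output_instruction(kind, fragment_index, audio_bytes):
            """Build the message sent to terminal 2 together with the speech data."""
            assert kind in ("sentence", "fragment")
            return {
                "action": "play_audio",            # terminal outputs this on its speaker
                "target": kind,                    # whole sentence (P1) or one fragment (P2)
                "fragment_index": fragment_index,  # None for a whole sentence
                "audio": audio_bytes,
            }

        msg = make_output_instruction("fragment", 1, b"...")  # a second play button P2 click
        print(msg["target"], msg["fragment_index"])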
  • The sound input unit (a microphone; not shown) of the terminal 2 can receive the user's pronunciation.
  • The user pronounces along with the sound output from the sound output unit of the terminal 2 (imitating the output sound).
  • This pronunciation is input to the sound input unit, converted into speech data (hereinafter "pronunciation data") in the terminal 2, and the speech data is transmitted to the server 3.
  • When the first play button P1 has been clicked, the determination unit 22 compares the second-language sentence speech data with the pronunciation data received from the terminal 2. If they do not match, it compares the second-language sentence-similar speech data with the pronunciation data. From the results of these comparisons, the determination unit 22 determines the accuracy of the pronunciation data.
  • When a second play button P2 has been clicked, the determination unit 22 compares the sentence fragment speech data of the fragment assigned to that button with the pronunciation data. If they do not match, it compares the sentence-fragment-similar speech data of that fragment with the pronunciation data. From the results of these comparisons, the determination unit 22 determines the accuracy of the pronunciation data.
  • The accuracy determined by the determination unit 22 is preferably one of the following three levels: the pronunciation data matches the second-language sentence speech data (or the sentence fragment speech data) → high accuracy; the pronunciation data matches the second-language sentence-similar speech data (or the sentence-fragment-similar speech data) → medium accuracy; the pronunciation data matches neither → low accuracy.
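  • This three-level judgment can be sketched as follows. The `matches` comparison is a stand-in; the patent does not say how the speech data are compared, so plain equality of the placeholder strings is assumed here.

        from enum import Enum

        class Accuracy(Enum):
            HIGH = "high"      # matched the sentence/fragment speech data
            MEDIUM = "medium"  # matched a similar-speech variant
            LOW = "low"        # matched neither

        def judge(pronunciation, speech, similar_variants, matches=lambda a, b: a == b):
            """Three-level accuracy judgment of determination unit 22."""
            if matches(pronunciation, speech):
                return Accuracy.HIGH
            if any(matches(pronunciation, v) for v in similar_variants):
                return Accuracy.MEDIUM
            return Accuracy.LOW

        print(judge("<audio:素顔>", "<audio:凄い>", ["<audio:素顔>"]))  # Accuracy.MEDIUM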
  • The transmission instruction unit 23 generates instruction information for outputting the accuracy determined by the determination unit 22. More specifically, it instructs the terminal 2 to reflect the accuracy on the display screen, to output a specific sound, or both. The terminal 2 that receives the instruction information notifies the user of the accuracy by producing output according to it.
  • The reflection on the display screen may, for example, change the display of the first play button P1 or the second play button P2 linked to the relevant sentence or fragment on the first display screen (changing one or more properties such as color, shape, or size according to whether the accuracy is high, medium, or low, or erasing the button itself). Alternatively, the screen generation unit 15 may generate and display a new display screen (a fourth display screen) representing the accuracy determined by the determination unit 22.
  • If the determination unit 22 determines the accuracy to be high or medium, the pronunciation of the sentence or sentence fragment can be regarded as acceptable (that is, as posing no problem in everyday conversation with native speakers of the second language).
  • The accuracy is not limited to the three levels described above and may, for example, be calculated as a percentage.
  • In that case, the determination unit 22 is provided with a threshold and regards the sentence or sentence fragment as acceptable if the accuracy is equal to or higher than the threshold.
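  • A percentage score with a threshold might look like the following sketch; comparing recognized text with difflib is only one possible stand-in for real acoustic scoring, and the 80% threshold is an arbitrary assumption.

        import difflib

        def accuracy_percent(pronounced_text, expected_text):
            """Percentage similarity between recognized and expected text."""
            return 100 * difflib.SequenceMatcher(None, pronounced_text, expected_text).ratio()

        THRESHOLD = 80.0
        score = accuracy_percent("なんか違う気もする", "なんか違う気がする")
        print(f"{score:.0f}%", "no problem" if score >= THRESHOLD else "needs practice")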
  • As noted above, the sentence translated by the translation unit 17 and the sentence fragments produced by the division unit 18 are not displayed as text on the first display screen D1.
  • When the next question change button C1 is clicked, the question selection unit 14 selects a different sentence from the sentences stored in the question sentence database 13.
  • The translation unit 17, the division unit 18, the speech data generation unit 11, the determination unit 22, the speech data output instruction unit 20, and the transmission instruction unit 23 then perform the same processing as already described.
  • When the previous question change button C2 is clicked, the question selection unit 14 selects the previously selected sentence again from the question sentence database 13, after which the translation unit 17, the division unit 18, the speech data generation unit 11, the determination unit 22, the speech data output instruction unit 20, and the transmission instruction unit 23 perform the same processing as already described.
  • Steps SA1 to SA12 in FIG. 4 are operations of the language learning system 1 provided in the server 3, and steps SB1 to SB12 are operations of the terminal 2.
  • In step SA1, the question selection unit 14 selects one sentence from the first-language sentences stored in the question sentence database 13.
  • The selected sentence is incorporated into the first display screen D1 generated by the screen generation unit 15, and this first display screen D1 is transmitted from the server 3 to the terminal 2.
  • In step SA2, the translation unit 17 translates the first-language sentence selected by the question selection unit 14 into a second-language sentence, and the translated sentence is stored in the storage unit 19.
  • In step SA3, the division unit 18 divides the second-language sentence, and the resulting sentence fragments are stored in the storage unit 19.
  • In step SA4, the speech data generation unit 11 extracts the speech data of the second-language sentence words from the word database 16 and combines them to generate the second-language sentence speech data, and extracts the speech data of the second-language sentence-similar words from the pronunciation similarity database 21 and combines them to generate the second-language sentence-similar speech data.
  • Likewise, the speech data generation unit 11 extracts and combines the speech data of the sentence fragment words from the word database 16 to generate the sentence fragment speech data, and extracts the speech data of the sentence-fragment-similar words from the pronunciation similarity database 21 to generate the sentence-fragment-similar speech data.
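  • Chained together, server-side steps SA1 to SA4 amount to the following sketch, with translation and audio synthesis reduced to stubs; the one-entry translation table and the choice of one fragment per phrase are assumptions made only for illustration.

        import random

        TRANSLATIONS = {"I feel like something is wrong": ["なんか", "違う", "気がする"]}

        def translate_to_phrases(sentence_l1):
            return TRANSLATIONS[sentence_l1]               # stand-in for translation unit 17

        def synthesize(words):
            return "".join(f"<audio:{w}>" for w in words)  # stand-in for speech data unit 11

        def prepare_question(question_db):
            sentence_l1 = random.choice(question_db)       # SA1: question selection unit 14
            phrases = translate_to_phrases(sentence_l1)    # SA2: translation unit 17
            fragments = phrases                            # SA3: division unit 18 (one phrase per fragment)
            return {                                       # SA4: speech data generation unit 11
                "question": sentence_l1,
                "sentence_speech": synthesize(phrases),
                "fragment_speech": [synthesize([f]) for f in fragments],
            }

        print(prepare_question(["I feel like something is wrong"]))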
  • In step SB1, the user checks the first display screen D1 transmitted to the terminal 2 in step SA1 and clicks either the first play button P1 or one of the second play buttons P2 on that screen.
  • If the first play button P1 is clicked, the process proceeds to step SB2; if a second play button P2 is clicked, the process proceeds to step SB3.
  • In step SB2, information indicating that the first play button P1 has been clicked is sent to the server 3.
  • In step SB3, information indicating that the second play button P2 has been clicked is sent to the server 3.
  • In step SA5, information indicating that a play button has been clicked is received from the terminal 2.
  • If the information transmitted in step SB2 is received, the process proceeds to step SA6; if the information transmitted in step SB3 is received, the process proceeds to step SA7.
  • In step SA6, the speech data output instruction unit 20 generates instruction information for outputting the second-language sentence speech data, and the instruction information is transmitted to the terminal 2 together with the second-language sentence speech data generated in step SA4.
  • In step SA7, the speech data output instruction unit 20 generates instruction information for outputting the sentence fragment speech data of the fragment assigned to the clicked second play button P2, and the instruction information is transmitted to the terminal 2 together with the sentence fragment speech data generated in step SA4.
  • In step SB4, when the terminal 2 receives the instruction information transmitted in step SA6, the sound output unit (not shown) of the terminal 2 outputs the second-language sentence speech data.
  • In step SB5, when the terminal 2 receives the instruction information transmitted in step SA7, the sound output unit of the terminal 2 outputs the sentence fragment speech data.
  • In step SB6, the user clicks the voice input start button V displayed on the first display screen D1, and the terminal 2 becomes ready to accept voice input.
  • In step SB7, the terminal 2, now able to accept voice input, receives the user's pronunciation.
  • In step SA8, the determination unit 22 determines the accuracy of the pronunciation data by comparing the speech data generated in step SA4 with the pronunciation data received from the terminal 2.
  • In step SA9, the transmission instruction unit 23 generates instruction information for outputting the accuracy determined by the determination unit 22, and the instruction information is transmitted from the server 3 to the terminal 2.
  • In step SB8, the user is notified of the accuracy by output produced according to the instruction information generated by the transmission instruction unit 23.
  • In step SB9, the user clicks a button on the first display screen D1. If either of the play buttons P1 and P2 is clicked again (typically a play button different from the one clicked in step SB1, though it may be the same one), the process proceeds to step SB10; if the next question change button C1 is clicked, it proceeds to step SB11; and if the previous question change button C2 is clicked, it proceeds to step SB12.
  • In step SB10, information indicating that one of the play buttons P1 and P2 has been clicked is sent to the server 3.
  • In step SB11, information indicating that the next question change button C1 has been clicked is sent to the server 3.
  • In step SB12, information indicating that the previous question change button C2 has been clicked is sent to the server 3.
  • When the language learning system 1 receives the information transmitted from the terminal 2 in step SB10, it resumes processing from step SA5.
  • In step SA10, information indicating that the next question change button C1 or the previous question change button C2 has been clicked is received from the terminal 2; the process then proceeds to step SA11 or step SA12, respectively.
  • In step SA11, the question selection unit 14 selects a different sentence from the sentences stored in the question sentence database 13, and the process then proceeds to step SA2.
  • In step SA12, the question selection unit 14 again selects the previously selected sentence from the question sentence database 13, and the process then proceeds to step SA2.
  • Steps SA21 and SA22 in FIG. 5 are operations of the language learning system 1 provided in the server 3, and step SB21 is an operation of the terminal 2.
  • In step SA21, the screen generation unit 15 generates the second display screen D2, which is transmitted from the server 3 to the terminal 2.
  • In step SB21, the system administrator or the user enters words with similar pronunciations on the second display screen D2 displayed on the terminal 2, associating them with one another, and the entered information is transmitted from the terminal 2 to the server 3.
  • In step SA22, the information received from the terminal 2 associating words with similar pronunciations is stored in the pronunciation similarity database 21.
  • As described above, the language learning system 1 includes the translation unit 17 that translates first-language sentences into second-language sentences, the speech data generation unit 11 that generates the second-language sentence speech data, and the determination unit 22 that compares the second-language sentence speech data with the pronunciation data; the user's pronunciation can therefore be checked against the second-language sentence speech data, improving the user's ability to speak the second language (that is, the foreign language).
  • In addition, since the speech data generation unit 11 generates the second-language sentence-similar speech data and the determination unit 22 determines the accuracy of the pronunciation data by comparing the second-language sentence speech data and the second-language sentence-similar speech data with the pronunciation data, the accuracy of the pronunciation can be grasped, further improving the user's second-language speaking ability.
  • The language learning system 1 further comprises the division unit 18 that generates sentence fragments, the speech data generation unit 11 generates the sentence fragment speech data, and the determination unit 22 compares the sentence fragment speech data with the pronunciation data; the second-language sentence can therefore be learned clause by clause, which improves convenience during learning.
  • Similarly, since the speech data generation unit 11 generates the sentence-fragment-similar speech data and the determination unit 22 determines the accuracy of the pronunciation data by comparing the sentence fragment speech data and the sentence-fragment-similar speech data with the pronunciation data, the second-language sentence can be learned clause by clause, which improves convenience during learning.
  • Since the system is provided with the speech data output instruction unit 20 that instructs the terminal 2 to output the second-language sentence speech data, the user can listen to the second-language sentence audio output from the terminal 2 and then pronounce it in imitation, so the user's ability to hear the second language can also be improved.
  • The same applies when the speech data output instruction unit 20 instructs the terminal 2 to output both the second-language sentence speech data and the sentence fragment speech data as audio.
  • Furthermore, since the sentences are not displayed on the first display screen D1 and only the audio output from the sound output unit of the terminal 2 reaches the user, the user can concentrate on the sound and effectively improve foreign-language speaking ability.
  • In the flow described above, after the question selection unit 14 selects a sentence in step SA1, the translation unit 17 translates the selected sentence into a second-language sentence in step SA2, the division unit 18 divides the second-language sentence in step SA3, and the speech data generation unit 11 generates the second-language sentence speech data, the second-language sentence-similar speech data, the sentence fragment speech data, and the sentence-fragment-similar speech data in step SA4.
  • However, steps SA1 to SA4 may instead be completed in advance, before step SB1 starts, with the data derived in each step stored in the storage unit 19.
  • In this example, the language learning system 1 is provided in the server 3, but the present invention is not limited to such a form. This point also applies to Example 2.
  • Example 2: The language learning system according to this example partially modifies the configuration of the language learning system 1 according to Example 1.
  • Descriptions of configurations that are the same as in Example 1 are omitted where possible; the differences are mainly described.
  • FIG. 6 is a block diagram illustrating the configuration of the language learning system according to this example.
  • The language learning system according to this example (language learning system 1A) replaces the screen generation unit 15 in the configuration of the language learning system 1 of Example 1 with a screen generation unit 15A whose functions are partially changed, and adds an extraction unit 24.
  • The screen generation unit 15A differs from the screen generation unit 15 in that it does not generate the second display screen D2; it is otherwise the same as the screen generation unit 15.
  • Based on the speech data of each word stored in the word database 16, the extraction unit 24 extracts words (parts of speech) with similar pronunciations from the words stored in the word database 16 and associates them with one another. The extracted information is stored in the pronunciation similarity database 21.
  • Consequently, the system administrator and the user do not need to make entries on a second display screen D2, and the screen generation unit 15A does not need to generate one.
  • Example 3: The language learning system according to this example partially modifies the configuration of the language learning system 1 according to Example 1 (or the language learning system 1A according to Example 2).
  • Descriptions of configurations that are the same as in Examples 1 and 2 are omitted where possible; the differences are mainly described.
  • FIG. 7 is a block diagram illustrating the configuration of the language learning system according to this example.
  • The language learning system according to this example (language learning system 1B) replaces the screen generation unit 15 (15A) with a screen generation unit 15B whose functions are partially changed.
  • The language learning system 1B does not include the division unit 18 and the speech data output instruction unit 20 described in Example 1; a timer 25 is provided instead.
  • The screen generation unit 15B generates the display screens described in Example 1 (Example 2), except that the first display screen D1 is not provided with the play buttons or the question change buttons (the sentence W selected by the question selection unit 14 is still displayed).
  • Accordingly, steps SB1 to SB3, SB6, and SB9 to SB12 described with reference to FIG. 4 in Example 1 are omitted, as are steps SA5 to SA7, SA11, and SA12.
  • After the question selection unit 14 first selects one sentence from the first-language sentences stored in the question sentence database 13, the timer 25 sends a first timer signal to the question selection unit 14 once a first predetermined time has elapsed, and thereafter sends the first timer signal to the question selection unit 14 every time the first predetermined time elapses. The first predetermined time is preferably set to several seconds.
  • Likewise, after the question selection unit 14 first selects one sentence from the question sentence database 13, the timer 25 sends a second timer signal to the terminal 2 once a second predetermined time (second predetermined time < first predetermined time) has elapsed. Thereafter, each time the first timer signal is sent to the question selection unit 14, the second timer signal is sent to the terminal 2 after the second predetermined time has elapsed (see FIG. 8).
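  • The cycle can be sketched as below; the concrete durations are assumptions, since the patent only says the first predetermined time should be several seconds.

        import time

        FIRST_PERIOD = 5.0    # first predetermined time (assumed value, "several seconds")
        SECOND_PERIOD = 3.0   # second predetermined time (< first predetermined time)

        def run_cycles(num_cycles):
            for _ in range(num_cycles):
                print("question selection unit 14 picks a sentence; terminal accepts pronunciation")
                time.sleep(SECOND_PERIOD)
                print("second timer signal -> terminal stops accepting; determination unit 22 judges")
                time.sleep(FIRST_PERIOD - SECOND_PERIOD)
                print("first timer signal -> next question")

        run_cycles(2)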
  • As in Example 1, the question selection unit 14 selects one sentence from the first-language sentences stored in the question sentence database 13. In this example, it also performs a selection whenever the first timer signal is input.
  • The first-language sentence selected by the question selection unit 14 is incorporated (displayed) in the first display screen D1 generated by the screen generation unit 15B.
  • This first display screen D1 is then transmitted from the server 3 to the terminal 2, as in Example 1 (or Example 2).
  • Although the first display screen D1 generated by the screen generation unit 15B is provided with a voice input start button V, the user cannot operate it from the terminal 2; instead, the sound input unit of the terminal 2 is switched automatically between a state in which it accepts pronunciation and a state in which it does not.
  • Specifically, the sound input unit of the terminal 2 accepts pronunciation from the time the terminal 2 displays the first display screen D1 generated by the screen generation unit 15B until the terminal 2 receives the second timer signal; after receiving the signal, it does not accept pronunciation until the terminal 2 newly displays the first display screen D1 generated by the screen generation unit 15B.
  • The screen generation unit 15B may change properties of the voice input start button V, such as its color, according to the state of the sound input unit of the terminal 2, so that the user can easily tell which state the sound input unit is in.
  • The accepted pronunciation is converted into pronunciation data and transmitted to the determination unit 22.
  • The determination unit 22 then determines the accuracy (if the terminal 2 is in the accepting state and the user does not pronounce anything, the accuracy takes the lowest value).
  • The determined accuracy is output from the terminal 2 and communicated to the user according to the instruction information generated by the transmission instruction unit 23 (information instructing the terminal to reflect the accuracy on the display screen, to output a specific sound, or both).
  • In this example, the screen generation unit 15B generates the fourth display screen showing the accuracy determined by the determination unit 22.
  • To summarize the flow: the question selection unit 14 first selects a sentence, and the screen generation unit 15B generates the first display screen D1 incorporating it.
  • The user checks the first display screen D1 on the terminal 2, thinks of a translation of the sentence W displayed there, and pronounces it before the second predetermined time elapses.
  • The user's pronunciation is transmitted from the terminal 2 to the determination unit 22 as pronunciation data.
  • Meanwhile, the first-language sentence selected by the question selection unit 14 is translated into a second-language sentence by the translation unit 17, and the speech data generation unit 11 generates the second-language sentence speech data and the second-language sentence-similar speech data.
  • After the second predetermined time has elapsed, the determination unit 22 determines the accuracy based on the second-language sentence speech data and the second-language sentence-similar speech data.
  • The transmission instruction unit 23 instructs the terminal 2 to output the determined accuracy (an instruction to generate the fourth display screen, to output a specific sound, or both).
  • The user is notified of the accuracy output by the terminal 2 in response to this instruction.
  • During this period, the sound input unit of the terminal 2 does not accept pronunciation.
  • After that (once the first predetermined time has passed since the question selection unit 14 first selected a sentence), the question selection unit 14 newly selects a sentence, and the same operation is repeated.
  • From the user's point of view, the user pronounces while the first display screen is shown on the terminal 2, the accuracy is then displayed on the terminal 2 as the fourth display screen, and the display automatically switches to a new first display screen.
  • Alternatively, the accuracy may be output as a specific sound from the sound output unit of the terminal 2, after which the display automatically switches to a new first display screen (the terminal 2 may also both display the fourth display screen and output the specific sound).
  • As described above, the language learning system 1B includes: the question sentence database 13 that stores one or more first-language sentences; the question selection unit 14 that selects one first-language sentence from the question sentence database 13; the screen generation unit 15B that generates the first display screen D1 displaying the first-language sentence selected by the question selection unit 14; and the timer 25 that sends the first timer signal to the question selection unit 14 every time the first predetermined time elapses, the question selection unit 14 performing a selection when this first timer signal is input (as in Example 1 (Example 2), the translation unit 17 translates the first-language sentence selected by the question selection unit 14 into a second-language sentence).
  • Sentences are therefore presented on the terminal 2 automatically, one after another, so the user does not have to press a play button.
  • Moreover, the second-language sentence speech data is not output from the terminal 2, so rather than imitating it, the user must come up with the second-language translation and pronounce it unaided (there is no concept of division as in Examples 1 and 2). By setting each of the predetermined times short, the user is required to think of the translation instantly and pronounce it, like flash mental arithmetic, which can further improve speaking ability.
  • The present invention is suitable as a language learning system.

Abstract

[Problem] To provide a language learning system which makes it possible to efficiently improve the foreign language listening and speaking abilities of a user. [Solution] This language learning system makes it possible to efficiently improve the foreign language listening and speaking abilities of a user by comprising: a translation unit 17 which translates text of a first language into text of a second language; a voice data generation unit 11 which generates second language text voice data that is voice data of the text of the second language; and a determination unit 22 which compares the second language text voice data and pronunciation data that is voice data of the user's pronunciation.

Description

Language learning system
The present invention relates to a system for learning languages.
For example, Patent Literature 1 below discloses a system for learning a language (a language learning system).
Patent Literature 1: JP 2017-191151 A
With conventional language learning systems, it is difficult to improve a user's foreign-language listening and speaking abilities.
An object of the present invention is to provide a language learning system capable of efficiently improving a user's foreign-language listening and speaking abilities.
According to a first aspect of the present invention, a language learning system comprises:
a translation unit that translates a sentence in a first language into a sentence in a second language;
a speech data generation unit that generates second-language sentence speech data, i.e., speech data of the second-language sentence; and
a determination unit that compares the second-language sentence speech data with pronunciation data, i.e., speech data of the user's pronunciation.
Preferably, the speech data generation unit generates second-language sentence-similar speech data, i.e., speech data whose pronunciation is similar to the second-language sentence, and the determination unit determines the accuracy of the pronunciation data by comparing the second-language sentence speech data and the second-language sentence-similar speech data with the pronunciation data.
Preferably, the system comprises a division unit that divides the second-language sentence to generate sentence fragments; the speech data generation unit generates sentence fragment speech data, i.e., speech data of each sentence fragment, and the determination unit compares the sentence fragment speech data with the pronunciation data.
Preferably, the speech data generation unit generates sentence-fragment-similar speech data, i.e., speech data whose pronunciation is similar to a sentence fragment, and the determination unit determines the accuracy of the pronunciation data by comparing the sentence fragment speech data and the sentence-fragment-similar speech data with the pronunciation data.
Preferably, the system comprises a speech data output instruction unit that instructs a terminal to output the second-language sentence speech data as audio.
Alternatively, the speech data output instruction unit instructs the terminal to output the second-language sentence speech data as audio and to output the sentence fragment speech data as audio.
The system may further comprise: a question sentence database in which a plurality of first-language sentences are stored; a question selection unit that selects one first-language sentence from the question sentence database; a screen generation unit that generates a display screen showing the first-language sentence selected by the question selection unit; and a timer that sends a timer signal (a first timer signal) to the question selection unit every time a predetermined time (a first predetermined time) elapses, the question selection unit performing the selection when the timer signal is input.
According to the language learning system of the present invention, it is possible to efficiently improve the user's foreign-language listening and speaking abilities.
FIG. 1 is a block diagram illustrating the configuration of a language learning system according to Example 1 of the present invention.
FIG. 2 is a schematic diagram showing an example of a first display screen in Example 1.
FIG. 3 is a schematic diagram showing an example of a second display screen in Example 1.
FIG. 4 is a flowchart illustrating an example of the operation of the language learning system according to Example 1 (during user learning).
FIG. 5 is a flowchart illustrating an example of the operation of the language learning system according to Example 1 (when linking similar words).
FIG. 6 is a block diagram illustrating the configuration of a language learning system according to Example 2.
FIG. 7 is a block diagram illustrating the configuration of a language learning system according to Example 3.
FIG. 8 is a conceptual diagram illustrating the timing of timer signal transmission in Example 3.
A language learning system according to the present invention is a system used by a user to learn a foreign language.
Hereinafter, the language learning system according to the present invention will be described by way of examples with reference to the drawings.
[Example 1]
FIG. 1 is a block diagram illustrating the configuration of the language learning system according to this example. As shown in FIG. 1, the language learning system (language learning system 1) according to this example is provided in a server 3 that can communicate with each terminal 2 via a network line L such as the Internet. Here, a terminal 2 is any device capable of communicating over the network line L, such as a PC, smartphone, or tablet.
The language learning system 1 includes a speech data generation unit 11, a communication control unit 12, a question sentence database 13, a question selection unit 14, a screen generation unit 15, a word database 16, a translation unit 17, a division unit 18, a storage unit 19, a speech data output instruction unit 20, a pronunciation similarity database 21, and a transmission instruction unit 23, as well as a determination unit 22.
Of these, the communication control unit 12 transmits and receives information to and from the terminal 2 via the network line L; explicit mention of this is omitted in the description below.
The question sentence database 13 stores one or more first-language sentences (for example, everyday conversational sentences such as "I feel like something is wrong"); the more sentences, the better. The first language is mainly assumed to be the native language of the user.
In this example, the first language is described as a single language, but the present invention is not limited to this; one or more sentences may be stored for each of a plurality of languages.
The question selection unit 14 selects one sentence from the sentences stored in the question sentence database 13.
The screen generation unit 15 generates at least a first display screen D1 and a second display screen D2, depending on the situation. These display screens are transmitted to each terminal 2, and each terminal 2 displays the screens received from the server 3.
On the first display screen D1, as shown for example in FIG. 2, the sentence W selected by the question selection unit 14, one first play button P1, a plurality of second play buttons P2, one voice input start button V, one next question change button C1, and one previous question change button C2 are displayed. The functions of these buttons are described later.
On the second display screen D2, as shown for example in FIG. 3, the system administrator or the user can enter words (parts of speech) whose pronunciations are similar, associating them with one another. Whether two words sound similar is left to the discretion of the system administrator or the user.
The word database 16 is a database in which words (parts of speech) of a plurality of languages, including the first language and the second language, are stored grouped by shared meaning. The word database 16 also stores speech data for each word. The second language is mainly assumed to be a language foreign to the user.
Based on the information stored in the word database 16, the translation unit 17 translates the first-language sentence selected by the question selection unit 14 into a second-language sentence (if the first language is English and the second language is Japanese, for example, "I feel like something is wrong" → 「なんか違う気がする」).
Preferably, the user can designate both the first language and the second language on the first display screen D1 displayed on the terminal 2 (or on a separate third display screen); for example, if the first language is English and the second language is Japanese, the translation unit 17 translates English sentences into Japanese sentences.
The division unit 18 divides the second-language sentence translated by the translation unit 17. The sentence is divided between words, preferably at every phrase boundary (for example, 「なんか違う気がする」 becomes 「なんか」「違う」「気がする」, or 「なんか違う」「気がする」; it is not divided mid-phrase into pieces such as 「な」「んか違」「う」「気がす」「る」). Each divided portion is hereinafter referred to as a "sentence fragment". That is, the division unit 18 divides the second-language sentence to generate sentence fragments.
The storage unit 19 stores the second-language sentence translated by the translation unit 17 and the sentence fragments produced by the division unit 18, each together with its speech data (the speech data is generated by the speech data generation unit 11 described later).
The first play button P1 displayed on the first display screen D1 is a button assigned to the second-language sentence translated by the translation unit 17, and each of the second play buttons P2 is assigned to one of the sentence fragments produced by the division unit 18 (the number of second play buttons P2 equals the number of sentence fragments and therefore changes each time). However, the second-language sentence and the sentence fragments themselves are not displayed on the first display screen D1.
If the sentence translated by the translation unit 17 is too short to be divided (for example, a single word), the division unit 18 does not perform division, and the first display screen D1 generated by the screen generation unit 15 has no second play button P2 (the first play button P1 is still provided).
The pronunciation similarity database 21 stores, for each of a plurality of languages including the second language, words with similar pronunciations associated with one another. Three or more words may be associated with one another.
The information in the pronunciation similarity database 21 can be entered from the second display screen D2. The second display screen D2 may be viewable and editable only from a terminal owned by the system administrator, or from terminals owned by the system administrator and by general users.
The speech data generation unit 11 extracts from the word database 16 the speech data of the words constituting the second-language sentence translated by the translation unit 17 (hereinafter "second-language sentence words") and combines them to generate speech data of the second-language sentence (hereinafter "second-language sentence speech data").
The speech data generation unit 11 also extracts from the pronunciation similarity database 21 the speech data of words whose pronunciation is similar to the second-language sentence words (hereinafter "second-language sentence similar words") and combines them to generate speech data whose pronunciation is similar to the second-language sentence (hereinafter "second-language sentence-similar speech data").
Further, the speech data generation unit 11 extracts from the word database 16 the speech data of the words constituting each sentence fragment produced by the division unit 18 (hereinafter "sentence fragment words") and combines them to generate speech data of that sentence fragment (hereinafter "sentence fragment speech data"); this is performed for each of the plurality of sentence fragments.
The speech data generation unit 11 then extracts from the pronunciation similarity database 21 the speech data of words whose pronunciation is similar to the sentence fragment words (hereinafter "sentence fragment similar words") and combines them to generate speech data whose pronunciation is similar to the sentence fragment (hereinafter "sentence fragment-similar speech data"); this, too, is performed for each of the plurality of sentence fragments.
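The four kinds of speech data described above might, purely as an illustration, be generated along the lines of the following sketch; the lookups word_db and similarity_db are hypothetical stand-ins for the word database 16 and the pronunciation similarity database 21, and it is assumed for simplicity that the clips of similar words are also available in word_db.

```python
# Illustrative sketch of the speech data generation unit (11): sentence
# speech data is produced by concatenating per-word speech clips, and
# "similar" variants by substituting clips of similar-sounding words.

from itertools import product

def sentence_speech(words: list[str], word_db: dict[str, bytes]) -> bytes:
    """Concatenate the stored speech clip of each word."""
    return b"".join(word_db[w] for w in words)

def similar_speech_variants(words: list[str],
                            word_db: dict[str, bytes],
                            similarity_db: dict[str, set[str]]) -> list[bytes]:
    """All variants where each slot is the word itself or a similar word."""
    choices = [[w, *similarity_db.get(w, set())] for w in words]
    variants = []
    for combo in product(*choices):
        if list(combo) != words:  # skip the exact original sentence
            variants.append(b"".join(word_db[w] for w in combo))
    return variants
```

The same two helpers would apply unchanged to a sentence fragment, since a fragment is simply a shorter list of words.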
As an illustration of the (phonetic-similarity) relationship between a second-language sentence word and a second-language sentence similar word: if the second-language sentence word is 「凄い」 (sugoi), a second-language sentence similar word would be something like 「素顔」 (sugao).
The second-language sentence-similar speech data also includes sentence speech data generated by combining speech data of second-language sentence words with speech data of second-language sentence similar words (for example, "sentence word / similar word / sentence word").
Furthermore, more than one second-language sentence similar word may exist for a single second-language sentence word. Consequently, plural kinds of second-language sentence-similar speech data will often be generated for a single piece of second-language sentence speech data.
The above explanations of the relationship between the second-language sentence speech data and the second-language sentence-similar speech data, and between the second-language sentence words and the second-language sentence similar words, apply equally to the relationship between the sentence fragment speech data and the sentence fragment-similar speech data, and between the sentence fragment words and the sentence fragment similar words.
When the speech data output instruction unit 20 receives information that the first playback button P1 has been clicked (by the user) on the first display screen D1 displayed on the terminal 2, it generates instruction information for outputting the second-language sentence speech data as audio. This instruction information is transmitted to the terminal 2 together with the second-language sentence speech data, and the terminal 2, on receiving them, outputs the second-language sentence speech data as audio from its sound output unit (a speaker, or earphones connected to the terminal 2; not illustrated).
Likewise, when the speech data output instruction unit 20 receives information that one of the plurality of second playback buttons P2 has been clicked (by the user) on the first display screen D1 displayed on the terminal 2, it generates instruction information for outputting the sentence fragment speech data of the sentence fragment to which the clicked second playback button P2 is assigned. This instruction information is transmitted to the terminal 2 together with that sentence fragment speech data, and the terminal 2, on receiving them, outputs the sentence fragment speech data as audio from the sound output unit.
That is, the speech data output instruction unit 20 instructs the terminal 2 both to output the second-language sentence speech data as audio and to output the sentence fragment speech data as audio.
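As an illustrative sketch only, the dispatch performed by the speech data output instruction unit 20 might look like the following; the button identifiers are assumptions introduced for this example.

```python
# Illustrative sketch of the speech data output instruction unit (20):
# mapping a clicked playback button to the speech data sent to the terminal.

def handle_playback_click(button_id: str,
                          sentence_audio: bytes,
                          fragment_audio: list[bytes]) -> tuple[str, bytes]:
    """Return (instruction, audio) for the terminal to play."""
    if button_id == "P1":
        return ("play", sentence_audio)            # whole-sentence audio
    if button_id.startswith("P2-"):
        index = int(button_id.split("-")[1])       # e.g. "P2-0", "P2-1", ...
        return ("play", fragment_audio[index])     # that fragment's audio
    raise ValueError(f"unknown button: {button_id}")
```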
When the voice input start button V displayed on the first display screen D1 is clicked, the sound input unit (a microphone; not illustrated) of the terminal 2 becomes able to accept the user's pronunciation. In this state, the user pronounces in accordance with (imitating) the audio output from the sound output unit of the terminal 2. This pronunciation is input to the sound input unit, converted within the terminal 2 into speech data (hereinafter "pronunciation data"), and the pronunciation data is transmitted to the server 3.
When the first playback button P1 has been clicked, the determination unit 22 compares the second-language sentence speech data with the pronunciation data received from the terminal 2. If, as a result of this comparison, the two do not match, it then compares the second-language sentence-similar speech data with the pronunciation data. From the results of these comparisons, the determination unit 22 determines the accuracy of the pronunciation data.
When one of the second playback buttons P2 has been clicked, the determination unit 22 compares the sentence fragment speech data of the sentence fragment assigned to that second playback button P2 with the pronunciation data. If, as a result of this comparison, the two do not match, it then compares the sentence fragment-similar speech data of that sentence fragment with the pronunciation data. From the results of these comparisons, the determination unit 22 determines the accuracy of the pronunciation data.
The accuracy determined by the determination unit 22 is preferably expressed in the following three levels:
- The second-language sentence speech data (or sentence fragment speech data) matches the pronunciation data: high accuracy
- The second-language sentence-similar speech data (or sentence fragment-similar speech data) matches the pronunciation data: medium accuracy
- Neither the second-language sentence speech data nor the second-language sentence-similar speech data (or neither the sentence fragment speech data nor the sentence fragment-similar speech data) matches the pronunciation data: low accuracy
The transmission instruction unit 23 generates instruction information for outputting the accuracy determined by the determination unit 22. More specifically, it instructs the terminal 2 to reflect the accuracy on the display screen, to output a specific sound, or both. The terminal 2, on receiving this instruction information, notifies the user of the accuracy by producing output in accordance with it.
The above reflection may, for example, be applied on the first display screen to the display of the first playback button P1 or second playback button P2 linked to the sentence or sentence fragment in question (for high and medium accuracy, change one or more properties such as color, shape, and size, or erase the button display itself; for low accuracy, make no such change). Alternatively, the screen generation unit 15 may generate and display a new display screen (fourth display screen) representing the accuracy determined by the determination unit 22.
When the accuracy is at or above a threshold (in the three-level scheme above, when it is high or medium), the determination unit 22 regards the pronunciation of that sentence or sentence fragment as acceptable (for everyday conversation with native speakers of the second language).
The accuracy need not be limited to the three levels described above; it may, for example, be calculated as a percentage. In any case, the determination unit 22 is provided with a threshold, and it regards the sentence or sentence fragment as acceptable if the accuracy is at or above that threshold.
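The three-level determination and threshold test described above might, purely as an illustration, be expressed as in the following sketch; the matches predicate stands in for an unspecified acoustic comparison and is an assumption.

```python
# Illustrative sketch of the determination unit (22): compare the user's
# pronunciation data first against the target speech data, then against the
# similar-pronunciation variants, and map the outcome to three levels.

def judge_accuracy(pronunciation: bytes,
                   target: bytes,
                   similar_variants: list[bytes],
                   matches) -> str:
    if matches(pronunciation, target):
        return "high"       # target speech data matches
    if any(matches(pronunciation, v) for v in similar_variants):
        return "medium"     # a similar-pronunciation variant matches
    return "low"            # nothing matches

ACCEPTABLE = {"high", "medium"}   # at or above the threshold:
                                  # fine for everyday conversation

def is_acceptable(level: str) -> bool:
    return level in ACCEPTABLE
```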
In conventional language learning, a learner's pronunciation was judged incorrect unless it matched native pronunciation exactly; repeated failures often sapped the learner's motivation and made it difficult to continue studying at all. In this embodiment, therefore, even if the learner's pronunciation does not exactly match native pronunciation, it is judged acceptable as long as it is at a level at which everyday conversation can be carried on.
Note that in this embodiment, neither the sentence translated by the translation unit 17 nor the sentence fragments produced by the division unit 18 are displayed as text on the first display screen D1.
When the next-question change button C1 on the first display screen D1 is clicked and that information is transmitted from the terminal 2 to the server 3, the question selection unit 14 selects, from the sentences stored in the question sentence database 13, a sentence different from the current one. Thereafter, the translation unit 17, division unit 18, speech data generation unit 11, determination unit 22, speech data output instruction unit 20, and transmission instruction unit 23 perform the same processing as already described.
When the previous-question change button C2 on the first display screen D1 is clicked and that information is transmitted from the terminal 2 to the server 3, the question selection unit 14 selects again the previously selected sentence from the sentences stored in the question sentence database 13; thereafter, the translation unit 17, division unit 18, speech data generation unit 11, determination unit 22, speech data output instruction unit 20, and transmission instruction unit 23 perform the same processing as already described.
This concludes the description of the configuration of the language learning system 1. An example of the operation of the language learning system 1 when a user learns with it will now be described with reference to the flowchart of FIG. 4. Steps SA1 to SA12 in FIG. 4 describe operations by the language learning system 1 provided in the server 3, and steps SB1 to SB12 describe operations by the terminal 2.
In step SA1, the question selection unit 14 selects one sentence from the first-language sentences stored in the question sentence database 13. The selected sentence is incorporated into the first display screen D1 generated by the screen generation unit 15. This first display screen D1 is transmitted from the server 3 to the terminal 2.
In step SA2, the translation unit 17 translates the first-language sentence selected by the question selection unit 14 into a second-language sentence on the basis of the information stored in the word database 16. The translated second-language sentence is stored in the storage unit 19.
In step SA3, the division unit 18 divides the second-language sentence. The resulting sentence fragments are stored in the storage unit 19.
In step SA4, the speech data generation unit 11 extracts the speech data of the second-language sentence words from the word database 16 and combines them to generate the second-language sentence speech data, and also extracts the speech data of the second-language sentence similar words from the pronunciation similarity database 21 and combines them to generate the second-language sentence-similar speech data.
Further in step SA4, the speech data generation unit 11 extracts the speech data of the sentence fragment words from the word database 16 and combines them to generate the sentence fragment speech data, and also extracts the speech data of the sentence fragment similar words from the pronunciation similarity database 21 to generate the sentence fragment-similar speech data.
In step SB1, the user views the first display screen D1 transmitted to the terminal 2 in step SA1 and clicks either the first playback button P1 or a second playback button P2 on it. If the first playback button P1 is clicked, the process proceeds to step SB2; if a second playback button P2 is clicked, it proceeds to step SB3.
In step SB2, information that the first playback button P1 has been clicked is transmitted to the server 3. In step SB3, information that a second playback button P2 has been clicked is transmitted to the server 3.
In step SA5, information that a playback button has been clicked is received from the terminal 2. If the information transmitted from the terminal 2 in step SB2 is received, the process proceeds to step SA6; if the information transmitted in step SB3 is received, it proceeds to step SA7.
In step SA6, the speech data output instruction unit 20 generates instruction information for outputting the second-language sentence speech data. This instruction information and the second-language sentence speech data generated in step SA4 are transmitted to the terminal 2.
In step SA7, the speech data output instruction unit 20 generates instruction information for outputting the sentence fragment speech data of the sentence fragment to which the clicked second playback button P2 is assigned. This instruction information and the sentence fragment speech data generated in step SA4 are transmitted to the terminal 2.
In step SB4, when the terminal 2 receives the instruction information transmitted from the language learning system 1 in step SA6, the sound output unit (not illustrated) of the terminal 2 outputs the second-language sentence speech data as audio. In step SB5, when the terminal 2 receives the instruction information transmitted from the language learning system 1 in step SA7, the sound output unit (not illustrated) of the terminal 2 outputs the sentence fragment speech data.
In step SB6, the user clicks the voice input start button V displayed on the first display screen D1, and the terminal 2 becomes ready to accept speech.
In step SB7, when the user pronounces, the terminal 2, now in the speech-acceptable state, accepts the pronunciation, converts it within the terminal 2 into speech data, that is, pronunciation data, and transmits it to the server 3.
In step SA8, the determination unit 22 determines the accuracy of the pronunciation data by comparing each piece of speech data generated in step SA4 with the pronunciation data received from the terminal 2.
In step SA9, the transmission instruction unit 23 generates instruction information for outputting the accuracy determined by the determination unit 22. This instruction information is transmitted from the server 3 to the terminal 2.
In step SB8, the terminal notifies the user of the accuracy by producing output in accordance with the instruction information generated by the transmission instruction unit 23.
In step SB9, the user clicks one of the buttons on the first display screen D1. If one of the playback buttons P1, P2 is clicked again (a playback button different from the one clicked in step SB1 is assumed, but it may be the same one), the process proceeds to step SB10; if the next-question change button C1 is clicked, it proceeds to step SB11; and if the previous-question change button C2 is clicked, it proceeds to step SB12.
In step SB10, information that one of the playback buttons P1, P2 has been clicked is transmitted to the server 3. In step SB11, information that the next-question change button C1 has been clicked is transmitted to the server 3. In step SB12, information that the previous-question change button C2 has been clicked is transmitted to the server 3.
When the language learning system 1 receives from the terminal 2 the information transmitted in step SB10, it resumes processing from step SA5.
In step SA10, information that the next-question change button C1 or the previous-question change button C2 has been clicked is received from the terminal 2. If the information transmitted from the terminal 2 in step SB11 is received, the process proceeds to step SA11; if the information transmitted in step SB12 is received, it proceeds to step SA12.
In step SA11, the question selection unit 14 selects, from the sentences stored in the question sentence database 13, a sentence different from the current one, and the process then proceeds to step SA2.
In step SA12, the question selection unit 14 selects again the previously selected sentence from the sentences stored in the question sentence database 13, and the process then proceeds to step SA2.
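Purely as an illustration, the server-side preparation steps SA1 to SA4 of this flow might be chained as in the following sketch; every helper name is a hypothetical stand-in for the corresponding unit described above, not an actual interface of the system.

```python
# Illustrative sketch of steps SA1-SA4 as one pipeline. Each callable
# stands in for a unit of the system (question selection, translation,
# division, speech data generation).

def prepare_question(question_db, translate, split, make_speech, make_similar):
    sentence_l1 = question_db.pick()                 # SA1: select a sentence
    sentence_l2 = translate(sentence_l1)             # SA2: translate to the second language
    fragments = split(sentence_l2)                   # SA3: divide into fragments
    data = {                                         # SA4: generate the four kinds of speech data
        "sentence_audio": make_speech(sentence_l2),
        "sentence_similar": make_similar(sentence_l2),
        "fragment_audio": [make_speech(f) for f in fragments],
        "fragment_similar": [make_similar(f) for f in fragments],
    }
    return sentence_l1, data
```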
This concludes the example of the operation of the language learning system 1 when a user uses it. An example of the operation of the language learning system 1 when a system administrator or user links mutually similar words will now be described with reference to the flowchart of FIG. 5. Steps SA21 and SA22 in FIG. 5 describe operations by the language learning system 1 provided in the server 3, and step SB21 describes an operation by the terminal 2.
In step SA21, the screen generation unit 15 generates the second display screen D2, which is transmitted from the server 3 to the terminal 2.
In step SB21, the system administrator or user enters, from the second display screen D2 displayed on the terminal 2, words whose pronunciations are similar to one another so as to link them together. The entered information is transmitted from the terminal 2 to the server 3.
In step SA22, the information received from the terminal 2 linking words of similar pronunciation is stored in the pronunciation similarity database 21.
This concludes the example of the operation of the language learning system 1 when the system administrator or user operates its second display screen D2.
According to this embodiment, the language learning system 1 comprises the translation unit 17, which translates first-language sentences into second-language sentences, the speech data generation unit 11, which generates the second-language sentence speech data, and the determination unit 22, which compares the second-language sentence speech data with the pronunciation data; the second-language sentence speech data and the pronunciation data can therefore be compared, and the user's speaking ability in the second language (that is, the foreign language) can be improved.
According to this embodiment, the speech data generation unit 11 generates the second-language sentence-similar speech data, and the determination unit 22 determines the accuracy of the pronunciation data by comparing the second-language sentence speech data and the second-language sentence-similar speech data with the pronunciation data; the accuracy of the pronunciation data can therefore be grasped, and the user's speaking ability in the second language can be improved further.
According to this embodiment, the language learning system 1 further comprises the division unit 18, which generates sentence fragments; the speech data generation unit 11 generates the sentence fragment speech data, and the determination unit 22 compares the sentence fragment speech data with the pronunciation data; second-language sentences can therefore be studied, for example, divided phrase by phrase, improving convenience during learning.
According to this embodiment, the speech data generation unit 11 generates the sentence fragment-similar speech data, and the determination unit 22 determines the accuracy of the pronunciation data by comparing the sentence fragment speech data and the sentence fragment-similar speech data with the pronunciation data; second-language sentences can therefore be studied, for example, divided phrase by phrase, improving convenience during learning.
According to this embodiment, the system comprises the speech data output instruction unit 20, which instructs the terminal 2 to output the second-language sentence speech data as audio; the user can therefore listen to the audio of the second-language sentence output from the terminal 2 and imitate it to produce the pronunciation data, so the user's listening ability in the second language can also be improved.
According to this embodiment, the system comprises the speech data output instruction unit 20, which instructs the terminal 2 to output the second-language sentence speech data as audio and to output the sentence fragment speech data as audio; the user can therefore listen to the audio of second-language sentences and sentence fragments output from the terminal 2 and imitate it to produce the pronunciation data, so the user's listening ability in the second language can also be improved.
According to this embodiment, the sentence is deliberately not displayed on the first display screen D1 and is conveyed to the user only by the audio output from the sound output unit of the terminal 2; the user can therefore concentrate purely on the audio input, which effectively improves foreign-language speaking ability.
In this embodiment, after the question selection unit 14 selects a sentence in step SA1, the translation unit 17 translates the selected sentence into a second-language sentence in step SA2, the division unit 18 divides the second-language sentence in step SA3, and the speech data generation unit 11 generates the second-language sentence speech data, the second-language sentence-similar speech data, the sentence fragment speech data, and the sentence fragment-similar speech data in step SA4. The present invention is not limited to this form; for example, the operations of steps SA1 to SA4 may be completed in advance, before step SB1 starts, with the data derived in each step stored in the storage unit 19.
Also, although in this embodiment the language learning system 1 is provided on the server 3, the present invention is not limited to such a form; it may, for example, be provided as application software installed on the terminal 2. The same applies to Embodiment 2.
[Embodiment 2]
The language learning system according to this embodiment is the language learning system 1 of Embodiment 1 with part of its configuration changed. In the following, description of the configurations shared with Embodiment 1 is omitted as far as possible, and the description focuses on the differences.
FIG. 6 is a block diagram illustrating the configuration of the language learning system according to this embodiment. As shown in FIG. 6, the language learning system according to this embodiment (language learning system 1A) replaces the screen generation unit 15 in the configuration of the language learning system 1 of Embodiment 1 with a screen generation unit 15A whose functions are partly changed, and further adds an extraction unit 24.
The screen generation unit 15A in this embodiment differs from the screen generation unit 15 in that it does not generate the second display screen D2; it is the same as the screen generation unit 15 in that it generates the first display screen D1 (and the third and fourth display screens as needed).
Based on the speech data of each word stored in the word database 16, the extraction unit 24 extracts, from among the words stored in the word database 16, words (parts of speech) whose pronunciations are similar to one another, linking them together. The extracted information is stored in the pronunciation similarity database 21.
In this way, in this embodiment, the system administrator and users no longer need to make entries from the second display screen D2, and the screen generation unit 15A no longer needs to generate the second display screen D2.
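As an illustration only, the automatic linking performed by the extraction unit 24 might be sketched as follows; the similarity scoring function and the threshold value are assumptions, since the text does not specify the comparison method.

```python
# Illustrative sketch of the extraction unit (24): automatically linking
# words whose pronunciations are similar, based on the speech data in the
# word database (16); the resulting pairs go into the pronunciation
# similarity database (21).

from itertools import combinations

def extract_similar_pairs(word_audio: dict[str, bytes],
                          similarity,
                          threshold: float = 0.8) -> list[tuple[str, str]]:
    pairs = []
    for (w1, a1), (w2, a2) in combinations(word_audio.items(), 2):
        if similarity(a1, a2) >= threshold:
            pairs.append((w1, w2))   # store this link in database 21
    return pairs
```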
[Embodiment 3]
The language learning system according to this embodiment is the language learning system 1 of Embodiment 1 (or the language learning system 1A of Embodiment 2) with part of its configuration changed. In the following, description of the configurations shared with Embodiments 1 and 2 is omitted as far as possible, and the description focuses on the differences.
FIG. 7 is a block diagram illustrating the configuration of the language learning system according to this embodiment. As shown in FIG. 7, in the language learning system according to this embodiment (language learning system 1B), the screen generation unit 15 (15A) in the configuration of the language learning system 1 of Embodiment 1 (or the language learning system 1A of Embodiment 2) is replaced by a screen generation unit 15B whose functions are partly changed.
The language learning system 1B is not provided with the division unit 18 and the speech data output instruction unit 20 described in Embodiment 1; a timer 25 is provided instead.
The screen generation unit 15B generates each of the display screens described in Embodiment 1 (Embodiment 2), except that the first display screen D1 is provided with neither the playback buttons P1, P2 nor the question change buttons C1, C2 (the sentence W selected by the question selection unit 14 is displayed).
Accordingly, steps SB1 to SB3, SB6, and SB9 to SB12 described with reference to FIG. 4 in Embodiment 1 are omitted. Steps SA5 to SA7, SA11, and SA12 are also omitted.
The timer 25 sends a first timer signal to the question selection unit 14 once a first predetermined time has elapsed after the question selection unit 14 selects one sentence from the first-language sentences stored in the question sentence database 13. Thereafter, it sends the first timer signal to the question selection unit 14 each time the first predetermined time elapses. This first predetermined time is preferably set to several seconds.
Further, the timer 25 transmits a second timer signal to the terminal 2 once a second predetermined time (second predetermined time < first predetermined time) has elapsed after the question selection unit 14 first selects one sentence from the first-language sentences stored in the question sentence database 13. Thereafter, each time it transmits the first timer signal to the question selection unit 14, it transmits the second timer signal to the terminal 2 after a further second predetermined time has elapsed (see FIG. 8).
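Purely as an illustration, the interplay of the first and second timer signals might be sketched as the following loop; a real implementation would be event-driven rather than blocking, and the callback names are assumptions.

```python
# Illustrative sketch of the timer (25): after each question is selected,
# the second timer signal reaches the terminal after the shorter interval
# (closing the input window), and the first timer signal triggers the next
# selection after the longer interval.

import time

def run_timer(select_next_question, send_second_signal_to_terminal,
              first_interval: float, second_interval: float, rounds: int):
    assert second_interval < first_interval
    for _ in range(rounds):
        select_next_question()                # question appears on screen
        time.sleep(second_interval)
        send_second_signal_to_terminal()      # pronunciation window closes
        time.sleep(first_interval - second_interval)
        # next loop iteration corresponds to the first timer signal,
        # which triggers the next selection
```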
As in Embodiment 1, the question selection unit 14 selects one sentence from the first-language sentences stored in the question sentence database 13. In this embodiment, it additionally selects one sentence from the first-language sentences stored in the question sentence database 13 whenever the first timer signal is input.
The first-language sentence selected by the question selection unit 14 is incorporated into (displayed on) the first display screen D1 generated by the screen generation unit 15B. This first display screen D1 is then transmitted from the server 3 to the terminal 2. This point is the same as in Embodiment 1 (or Embodiment 2).
Although the first display screen D1 generated by the screen generation unit 15B is provided with the voice input start button V, the user cannot operate it from the terminal 2; the sound input unit of the terminal 2 switches automatically between a state in which pronunciation can be accepted and a state in which it cannot.
The sound input unit of the terminal 2 is in the pronunciation-acceptable state from when the terminal 2 displays the first display screen D1 generated by the screen generation unit 15B until it receives the second timer signal; after receiving the second timer signal, it is in the pronunciation-unacceptable state until the terminal 2 displays a newly generated first display screen D1 from the screen generation unit 15B.
The screen generation unit 15B may change properties such as the color of the voice input start button V in accordance with the state of the sound input unit of the terminal 2, so that the user can easily tell which state the sound input unit is in.
As in Embodiment 1, when the user pronounces while the terminal 2 is in the pronunciation-acceptable state, the pronunciation data is transmitted to the determination unit 22.
The determination unit 22, on receiving the pronunciation data, determines the accuracy (if the user does not pronounce while the terminal 2 is in the pronunciation-acceptable state, the accuracy takes the lowest value). The determined accuracy is output by the terminal 2 and communicated to the user through the transmission to the terminal 2 of the instruction information generated by the transmission instruction unit 23 (information instructing the terminal to reflect the accuracy on the display screen, to output a specific sound, or both).
When the instruction information is set to display the fourth display screen, the screen generation unit 15B generates the fourth display screen representing the accuracy determined by the determination unit 22.
With the language learning system 1B configured in this way, in this embodiment the question selection unit 14 first selects a sentence, and the screen generation unit 15B generates the first display screen D1 incorporating it. The user views the first display screen D1 on the terminal 2 and, before the second predetermined time elapses, works out a translation of the displayed sentence W unaided and pronounces it. The user's pronunciation is transmitted from the terminal 2 to the determination unit 22 as pronunciation data.
Meanwhile, the first-language sentence selected by the question selection unit 14 is translated into a second-language sentence by the translation unit 17, and the speech data generation unit 11 generates the second-language sentence speech data and the second-language sentence-similar speech data.
After receiving the pronunciation data from the terminal 2, the determination unit 22 determines the accuracy on the basis of the second-language sentence speech data and the second-language sentence-similar speech data once the second predetermined time has elapsed. The transmission instruction unit 23 instructs the terminal 2 to output the determined accuracy (by generating the fourth display screen, by outputting a specific sound, or both).
The accuracy output by the terminal 2 in response to this output instruction is conveyed to the user. During this output by the terminal 2, the sound input unit of the terminal 2 is in the pronunciation-unacceptable state.
Thereafter (once the first predetermined time has elapsed since the first sentence selection by the question selection unit 14), the question selection unit 14 selects a new sentence, and the same operation is repeated.
From the user's point of view, therefore, the cycle is: pronounce while the first display screen is shown on the terminal 2, have the accuracy displayed on the terminal 2 as the fourth display screen, and then have the screen switch automatically to a new first display screen; or pronounce while the first display screen is shown, have the accuracy output as a specific sound from the sound output unit of the terminal 2, and then have the screen switch automatically to a new first display screen (the terminal 2 may, however, both display the fourth display screen and output the specific sound).
The language learning system 1B according to this embodiment comprises the question sentence database 13, in which one or more first-language sentences are stored; the question selection unit 14, which selects one first-language sentence from the question sentence database 13; the screen generation unit 15B, which generates the first display screen D1 displaying the first-language sentence selected by the question selection unit 14; and the timer 25, which sends the first timer signal to the question selection unit 14 once the first predetermined time has elapsed after the selection by the question selection unit 14. The question selection unit 14 performs the selection when this first timer signal is input (the translation by the translation unit 17 of the first-language sentence selected by the question selection unit 14 into a second-language sentence, and so on, are the same as in Embodiment 1 (Embodiment 2)).
In this embodiment, therefore, sentences are presented to the terminal 2 automatically one after another, so the user does not need to press a playback button. Moreover, unlike Embodiments 1 and 2, the second-language sentence speech data is not output as audio from the terminal 2, so the user does not imitate it but must think up the second-language translation and pronounce it unaided (there is likewise no concept of division as in Embodiments 1 and 2). By setting each of the above predetermined times short, the user is required to think of the translation and pronounce it instantly, much like flash mental arithmetic, which further improves speaking ability.
The present invention is suitable as a language learning system.
1, 1A, 1B Language learning system
2 Terminal
3 Server
11 Speech data generation unit
12 Communication control unit
13 Question sentence database
14 Question selection unit
15, 15A, 15B Screen generation unit
16 Word database
17 Translation unit
18 Division unit
19 Storage unit
20 Speech data output instruction unit
21 Pronunciation similarity database
22 Determination unit
23 Transmission instruction unit
24 Extraction unit
25 Timer

Claims (7)

1. A language learning system comprising:
a translation unit that translates a sentence in a first language into a sentence in a second language;
a speech data generation unit that generates second-language sentence speech data, which is speech data of the sentence in the second language; and
a determination unit that compares the second-language sentence speech data with pronunciation data, which is speech data of a user's pronunciation.

2. The language learning system according to claim 1, wherein:
the speech data generation unit generates second-language sentence-similar speech data, which is speech data whose pronunciation is similar to the sentence in the second language; and
the determination unit determines the accuracy of the pronunciation data by comparing the second-language sentence speech data and the second-language sentence-similar speech data with the pronunciation data.

3. The language learning system according to claim 1 or 2, comprising a division unit that divides the sentence in the second language to generate sentence fragments, wherein:
the speech data generation unit generates sentence fragment speech data, which is speech data of the sentence fragments; and
the determination unit compares the sentence fragment speech data with the pronunciation data.

4. The language learning system according to claim 3, wherein:
the speech data generation unit generates sentence fragment-similar speech data, which is speech data whose pronunciation is similar to the sentence fragments; and
the determination unit determines the accuracy of the pronunciation data by comparing the sentence fragment speech data and the sentence fragment-similar speech data with the pronunciation data.

5. The language learning system according to claim 1 or 2, comprising a speech data output instruction unit that instructs a terminal to output the second-language sentence speech data as audio.

6. The language learning system according to claim 3 or 4, comprising a speech data output instruction unit that instructs a terminal to output the second-language sentence speech data as audio and to output the sentence fragment speech data as audio.

7. The language learning system according to claim 1 or 2, comprising:
a question sentence database in which one or more sentences in the first language are stored;
a question selection unit that selects one sentence in the first language from the question sentence database;
a screen generation unit that generates a display screen displaying the sentence in the first language selected by the question selection unit; and
a timer that sends a timer signal to the question selection unit each time a predetermined time elapses, wherein:
the question selection unit performs the selection when the timer signal is input; and
the translation unit translates the sentence in the first language selected by the question selection unit into the sentence in the second language.