CN115904172A - Electronic device, learning support system, learning processing method, and program - Google Patents


Info

Publication number: CN115904172A
Application number: CN202211150391.6A
Authority: CN (China)
Prior art keywords: word, learning, electronic device, control unit, voice
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Inventors: 增茂良纪, 沟上大志
Current Assignee: Casio Computer Co Ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Casio Computer Co Ltd
Application filed by Casio Computer Co Ltd
Publication of CN115904172A

Abstract

The invention relates to an electronic device, a learning support system, a learning processing method, and a program. An electronic device according to the present invention includes a control unit that, for a word whose pronunciation is determined to be erroneous among the words contained in the voice of a user reading a sentence aloud, outputs a learning method list tabulating the learning methods executable for that word, and that executes a learning function based on a learning method selected from the learning method list.

Description

Electronic device, learning support system, learning processing method, and program
Technical Field
The invention relates to an electronic device, a learning support system, a learning processing method, and a program.
Background
Conventionally, electronic devices such as electronic dictionaries and personal computers have been equipped with learning functions for studying a foreign language. For language learning, as described for example in Japanese Patent Application Laid-Open No. 2021-047769, such devices provide a function for learning words using a dictionary, a function for learning grammar using example sentences given in the dictionary, a listening learning function for hearing sentences read aloud in the foreign language, a spoken-language learning function for practicing accurate pronunciation of the foreign language, and the like.
Documents of the prior art
Patent document
Patent document 1: Japanese Patent Application Laid-Open No. 2021-047769
Disclosure of Invention
Problems to be solved by the invention
Such a conventional electronic device provides a plurality of learning functions for language study, but learning the same item may require content from different learning methods depending on the purpose. The user therefore has to pick, from among the plurality of learning functions, the function whose content suits the purpose of learning, specify the learning target (for example, a word), and then execute that function. As a result, language learning tends to proceed on each content individually, and the plurality of learning functions cannot be used efficiently in combination.
The present invention has been made in view of the above-described problems, and an object thereof is to provide an electronic device, a learning support system, a learning processing method, and a program that can perform language learning by effectively using a plurality of learning functions.
Means for solving the problems
In order to solve the above-described problem, an electronic device according to an embodiment of the present invention includes a control unit that, for a word whose pronunciation is determined to be erroneous among the words contained in the voice of a user reading a sentence aloud, outputs a learning method list tabulating the learning methods executable for that word, and that executes a learning function based on a learning method selected from the learning method list.
Effects of the invention
According to the present invention, it is possible to provide an electronic device, a learning support system, a learning processing method, and a program that can efficiently use a plurality of learning functions to learn a language.
Drawings
Fig. 1 is a schematic diagram of a learning support system according to an embodiment of the present invention.
Fig. 2 is a front view showing an external configuration of the electronic dictionary in the present embodiment.
Fig. 3 is a functional block diagram showing a configuration of an electronic circuit of another electronic device used by being connected to an electronic dictionary via communication according to the present embodiment.
Fig. 4 is a flowchart showing the operation of the electronic dictionary in the present embodiment.
Fig. 5 is a flowchart showing the operation of the tablet PC (digital textbook) according to the present embodiment.
Fig. 6A is a diagram showing an example of a textbook screen displayed on the touch panel display unit of the tablet PC in the present embodiment.
Fig. 6B is a diagram showing an example of a textbook screen displayed on the touch panel display unit of the tablet PC in the present embodiment.
Fig. 7 is a diagram showing an example of an extracted word list displayed in the electronic dictionary in the present embodiment.
Fig. 8A is a diagram showing an example of a textbook screen displayed on the touch panel display unit of the tablet PC in the present embodiment.
Fig. 8B is a diagram showing an example of a textbook screen displayed on the touch panel display unit of the tablet PC in the present embodiment.
Fig. 9 is a diagram showing an example of a learning method list displayed in the electronic dictionary in the present embodiment.
Fig. 10 is a diagram showing an example of a learning function execution screen in the case where "word learning" is selected from the learning method list in the present embodiment.
Fig. 11 is a diagram showing an execution screen of a word book entry function executed by the learning method "word learning" according to the present embodiment.
Fig. 12 is a diagram showing an example of a screen for executing a learning function in a case where "pronunciation training" is selected from the learning method list in the present embodiment.
Fig. 13 is a diagram showing an example of an execution screen of the example sentence search function in the case where "example sentence learning" is selected from the learning method list in the present embodiment.
Fig. 14 is a schematic diagram showing a learning support system according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
Fig. 1 is a schematic diagram of a learning support system according to an embodiment of the present invention. The learning support system 1 includes an electronic dictionary 10 as an electronic device (first electronic device) and a tablet PC20 as an electronic device (second electronic device). The learning support system 1 may also include a server 30. The server 30 provides image and text data, including digital textbook data, to the tablet PC20 and the like via the network N. Fig. 1 also shows a functional block diagram of the configuration of the electronic circuit of the electronic dictionary 10.
In the present embodiment, the electronic device is configured as, for example, the electronic dictionary 10. Besides the electronic dictionary 10, the electronic device may be realized by various electronic devices such as a personal computer, a smartphone, a tablet PC, or a game device.
The electronic dictionary 10 stores a plurality of dictionary contents as dictionary data. In each dictionary content, at least one piece of word-sense information is registered in association with each of a plurality of entries such as words. The dictionary contents installed in the electronic dictionary 10 are generally produced by publishing companies and the like, and include content also published on paper media, so their reliability is high. Accurate and effective learning results can therefore be expected by making effective use of these highly reliable dictionary contents.
The dictionary contents are not limited to dictionaries relating to languages such as foreign languages such as english and national languages, and may include contents such as encyclopedias, learning reference books, problem sets, literary works, and interpretation books.
The dictionary content can be used not only by the dictionary retrieval function but also as learning content for language learning.
The electronic dictionary 10 has a computer structure for reading programs stored in various storage media or programs transferred thereto and controlling operations by the read programs, and the electronic circuit includes a Central Processing Unit (CPU) 11.
The CPU11 functions as a control unit that controls the whole electronic dictionary 10. The CPU11 controls the operations of the respective circuit portions based on a control program stored in advance in the memory 12, a control program read from a storage medium 13 such as a ROM card to the memory 12 via a storage medium reading unit 14, or a control program downloaded from an external device (such as a server) via a network N such as the internet and read to the memory 12.
The control program stored in the memory 12 is activated in accordance with an input signal corresponding to a user operation from the key input unit 16, an input signal corresponding to a user operation from the touch panel display unit 17 as a display unit, or a connection communication signal with the external storage medium 13 such as an EEPROM (registered trademark), a RAM, or a ROM connected via the storage medium reading unit 14.
The CPU11 is connected to a memory 12, a storage medium reading unit 14, a communication unit 15, a key input unit 16, a touch panel display unit 17 (display), an audio input unit (microphone) 18, an audio output unit (speaker) 19, and the like. In one embodiment, the display unit may not have a function as a touch panel. For example, instead of the touch panel Display unit 17, an LCD (Liquid Crystal Display) may be used as the Display unit.
The control programs stored in the memory 12 include a dictionary control program 12a, a voice recognition program 12b, a learning method processing program 12c, and the like. The dictionary control program 12a is a program for controlling the overall operation of the electronic dictionary 10. The dictionary control program 12a also realizes a dictionary search function for searching dictionary contents and displaying information based on a character string input by the input unit (the key input unit 16, the touch panel display unit 17, and the voice input unit 18).
The dictionary control program 12a realizes not only a dictionary search function for searching dictionary contents and displaying information but also a plurality of different learning functions usable for language learning of foreign languages. A plurality of languages, for example English, Chinese, Korean, German, and French, can be targeted. The plurality of learning functions include, for example, a word search function effective for word learning, an example sentence search function suitable for example sentence learning, a listening learning function for hearing sentences or words read aloud in the foreign language, a spoken-language learning function (pronunciation training function) for practicing correct pronunciation of sentences or words in the foreign language, and a word book entry function for registering specific words needed for learning. Functions for learning grammar and long texts, a function for reading reference books for foreign-language qualification examinations, a function for presenting and answering practice examination questions, and the like can also be provided. The learning functions described above are examples, and other functions usable for foreign-language learning can be provided.
Each learning function can execute processing on a plurality of contents. For example, the word search function may take a plurality of dictionary contents as search targets, searching either all of them or a selected one. The spoken-language learning function can selectively use sentence contents for practicing the pronunciation of sentences, word contents for practicing the pronunciation of words, and so on. The examination question function includes a plurality of question contents differing in examination type and level, and can be executed for a selected one of them.
The voice recognition program 12b realizes a voice recognition function that recognizes the voice input from the voice input unit 18 when a learner reads a sentence or the like aloud, and outputs the sentence or the like that was read. The voice recognition program 12b can perform voice recognition for a plurality of languages such as English, Chinese, Korean, German, and French. In the voice recognition by the voice recognition program 12b, for example, acoustic analysis is performed on the voice data of the learner's utterance, and the spoken sentence or the like is detected by comparing the analysis result with a voice recognition dictionary (including an acoustic model, a language model, a pronunciation dictionary, and the like) prepared in advance for each language.
In the present embodiment, when data such as the sentence to be read aloud is available to the voice recognition program 12b, it is possible to judge whether any word contained in the read-aloud sentence was mispronounced by comparing the voice recognition result of the read-aloud voice with the data of the target sentence. When no such target data is available, if a word obtained from the voice recognition result is not included among the words registered in advance as recognition targets (for example, in a database of recognizable words), it can be determined that a word was mispronounced.
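The second case above — flagging recognized words that match no pre-registered recognition target — can be sketched as follows. This is a minimal illustration; the function and data names are hypothetical and not taken from the patent.

```python
def find_unregistered_words(recognized_text, registered_vocabulary):
    """Flag words in the recognition result that match no registered entry.

    Fallback check for when no target sentence is available: any word in
    the recognition result not found in the registered vocabulary is
    treated as a likely mispronunciation.
    """
    suspect = []
    for word in recognized_text.lower().split():
        token = word.strip(".,?!")  # drop sentence punctuation
        if token and token not in registered_vocabulary:
            suspect.append(token)
    return suspect
```

A real registered vocabulary would come from the recognizer's pronunciation dictionary; here it is just a set of strings.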
The learning method processing program 12c uses the result of the voice recognition processing performed by the voice recognition program 12b to realize a function that, for a word with an incorrect pronunciation, identifies the learning functions (learning methods) usable for learning that word from among the plurality of learning functions provided by the dictionary control program 12a, and provides a learning method list tabulating them.
The memory 12 stores dictionary data 12d, extracted word data 12e, learning method search data 12f, learning content data 12g, and the like.
The dictionary data 12d includes databases recording dictionary contents such as an English-Japanese dictionary, a Japanese-English dictionary, an English-Chinese dictionary, and a national-language dictionary, as well as various encyclopedias. The dictionary contents include not only English-related contents but also contents for other languages. In the dictionary data 12d, word-sense information indicating the meaning (word sense) corresponding to each entry is associated with each dictionary. Note that the dictionary data 12d may be stored in an external device (such as a server) accessible via the network N instead of being built into the main body of the electronic dictionary 10.
The extracted word data 12e is data indicating the words determined to be mispronounced by the pronunciation correctness judgment, extracted from the voice recognition result produced by the voice recognition program 12b. The extracted word data 12e may also be data of all words obtained by voice recognition of the read-aloud voice, extracted regardless of the error judgment.
The learning method search data 12f is data indicating the learning functions executable for a word, indicated by the extracted word data 12e, whose pronunciation was determined to be erroneous. The learning functions indicated by the learning method search data 12f are tabulated as learning methods usable for word learning by the processing of the learning method processing program 12c, and the resulting list is output, for example, on the touch panel display unit 17.
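How learning method search data might map a misread word to its executable learning functions can be sketched as follows. The table contents and predicates are illustrative assumptions, not the actual data of the embodiment.

```python
# Hypothetical learning-method search table: each entry names a learning
# function and a predicate deciding whether it applies to a given word.
LEARNING_METHODS = [
    ("word learning",
     lambda word, dictionary: word in dictionary),
    ("pronunciation training",
     lambda word, dictionary: True),  # always executable in this sketch
    ("example sentence learning",
     lambda word, dictionary: bool(dictionary.get(word, {}).get("examples"))),
]

def build_learning_method_list(word, dictionary):
    """Tabulate the learning methods executable for a misread word."""
    return [name for name, applies in LEARNING_METHODS
            if applies(word, dictionary)]
```

The resulting list corresponds to the learning method list that the control unit would display for selection.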
The learning content data 12g is data related to contents utilized by various learning functions realized by the dictionary control program 12 a.
The communication unit 15 executes communication control for performing communication with another electronic device via a Network N such as the internet or a Local Area Network (LAN), or communication control for performing short-range wireless communication such as Bluetooth (registered trademark) or Wi-Fi (registered trademark) with another electronic device in a short range (for example, the tablet PC 20).
The electronic dictionary 10 is connected to another electronic device (for example, a tablet PC 20) via communication by the communication unit 15, and can execute processing cooperating with the other electronic device.
The electronic dictionary 10 configured in this manner controls the operations of the respective circuit portions by the CPU11 in accordance with commands written in various programs such as the dictionary control program 12a, the voice recognition program 12b, and the learning method processing program 12c, and the functions described in the following operation description are realized by the software and hardware operating in cooperation.
Fig. 2 is a front view showing an external configuration of the electronic dictionary 10 in the present embodiment.
The electronic dictionary 10 in fig. 2 includes a CPU11, a memory 12, a storage medium reading unit 14, a communication unit 15, an audio input unit 18, and an audio output unit 19, which are built in a lower stage of an openable and closable apparatus main body, and is provided with a key input unit 16 and a touch panel display unit 17 on an upper stage.
The key input unit 16 includes a character input key 16a, a dictionary selection key 16b for selecting various dictionaries and various functions, a [ translate/specify ] key 16c, a [ return ] key 16d, a cursor key (up, down, left, and right keys) 16e, a delete key 16f, a power button, various other function keys, and the like.
Various menus, buttons 17a, and the like are displayed on the touch panel display unit 17 in accordance with the execution of various functions. The touch panel display unit 17 can perform, for example, a touch operation of selecting various menus and buttons 17a with a pen, and a handwritten character input for inputting characters.
Fig. 3 is a functional block diagram showing the configuration of the electronic circuit of another electronic device (second electronic device) used in communication connection with the electronic dictionary 10 according to the present embodiment. In the present embodiment, the tablet PC20 is taken as an example of the second electronic device; it is a device used, for example, as a digital textbook. The second electronic device is not limited to a tablet PC and may be realized by various electronic devices such as a personal computer, a smartphone, or a game device.
The tablet PC20 can access the server 30 connected through the network N including the internet or the like, and display a Web page including digital textbook data, a text file, and the like provided by the server 30 for the user to view.
The tablet PC20 has a computer configuration for reading programs stored in various storage media or programs transferred thereto and controlling operations by the read programs, and the electronic circuit includes a CPU (central processing unit) 21.
The CPU21 functions as a control unit that controls the entire tablet PC20. The CPU21 controls the operations of the circuit components based on a control program stored in advance in the memory 22, a control program read from a storage medium 23 such as a ROM card to the memory 22 via a storage medium reading unit 24, or a control program downloaded from an external device (such as a server) via a network N such as the internet and read into the memory 22.
The control program stored in the memory 22 is activated in accordance with an input signal corresponding to a user operation from the button input unit 26, an input signal corresponding to a user operation from the touch panel display unit 27, or a connection communication signal with an external storage medium 23 such as an EEPROM (registered trademark), a RAM, or a ROM connected via the storage medium reading unit 24.
The CPU21 is connected to a memory 22, a storage medium reading unit 24, a communication unit 25, a button input unit 26, a touch panel display unit 27 as a display unit, an audio input unit (microphone) 28, an audio output unit (speaker) 29, and the like.
Examples of the control program stored in the memory 22 include a basic program 22a (Operating System) for controlling the overall operation of the tablet PC20, and a textbook processing program 22b for displaying a textbook screen (including text, images, and the like) based on textbook data.
The tablet PC20 executes the basic program 22a and the textbook processing program 22b, thereby transmitting, for example, article data (text data) included in textbook data set as a display object in the tablet PC20 to the electronic dictionary 10 via communication.
The memory 22 stores textbook data 22c, extracted word data 22d, and the like to be displayed in the textbook processing program 22 b.
The textbook data 22c includes text data and image data representing various textbooks. The textbook data 22c is displayed on the touch panel display unit 27 under the control of the textbook processing program 22 b. When the tablet PC20 is connected to the electronic dictionary 10 and executes the cooperation process, data (e.g., article data) to be processed displayed on the touch panel display unit 27 is transmitted to the electronic dictionary 10.
The extracted word data 22d is data received from the electronic dictionary 10 when the tablet PC20 is connected to it and executes the cooperation process. When the sentence data to be processed has been transmitted to the electronic dictionary 10, the extracted word data 22d indicates the words in that sentence data that were determined to be mispronounced by the pronunciation correctness judgment, as extracted from the voice recognition result in the electronic dictionary 10. Based on the extracted word data 22d and under the control of the textbook processing program 22b, the words indicated by the extracted word data 22d are displayed in a different display form within the sentence shown on the touch panel display unit 27.
The communication unit 25 executes communication control for performing communication with another information processing apparatus (for example, the server 30) via a Network N such as the internet or a Local Area Network (LAN), or communication control for performing short-range wireless communication such as Bluetooth (registered trademark) or Wi-Fi (registered trademark) with another electronic device (for example, the electronic dictionary 10) in a short range.
The voice input unit 28 inputs the voice of a learner reading a sentence or the like aloud. The audio output unit 29 outputs various sounds in accordance with the processing of the CPU21.
Next, the operations of the electronic dictionary 10 and the tablet PC20 (digital textbook) as the second electronic device in the present embodiment will be described.
First, a case where the electronic dictionary 10 operates in cooperation with the tablet PC20 will be described.
Fig. 4 is a flowchart showing the operation of the electronic dictionary 10 in the present embodiment. Fig. 5 is a flowchart showing the operation of the tablet PC20 (digital textbook) in the present embodiment.
First, the user starts the connection process (link setting) for wireless communication between the electronic dictionary 10 and the tablet PC20. When the link setting is instructed (yes in step A1), the CPU11 of the electronic dictionary 10 executes the communication program and puts the communication unit 15 into a state of wireless communication with the tablet PC20. When the link setting is instructed (yes in step B1), the CPU21 of the tablet PC20 executes the communication program and puts the communication unit 25 into a state of wireless communication with the electronic dictionary 10. The electronic dictionary 10 and the tablet PC20 then each execute the process of setting up a communication link (steps A2 and B2) and enter a state in which data communication is possible.
When the tablet PC20 instructs the execution of the textbook processing program 22B in response to the user operation, the textbook data 22c is read from the memory 22 (step B3), and a textbook screen (text, image, etc.) is displayed on the touch panel display unit 27 based on the textbook data 22c (step B4). The tablet PC20 changes the display content of the textbook screen in response to an operation on the touch panel display unit 27 or the button input unit 26.
Fig. 6A and 6B show examples of textbook screens displayed on the touch panel display unit 27 of the tablet PC20 according to the present embodiment. On the textbook screen shown in fig. 6A, for example, the English sentence "Did you enjoy your homestay in London?" is displayed.
When a sentence such as that in fig. 6A is displayed, the tablet PC20 can have the voice of the user reading the sentence data aloud judged by using the voice recognition function of the electronic dictionary 10.
For example, when a text range to be read is designated for a sentence displayed on the touch panel display unit 27 by a touch operation or the like (yes in step B5), the CPU21 of the tablet PC20 transmits sentence data (text data) to the electronic dictionary 10 via the communication unit 25 (step B6).
On the other hand, when the CPU11 of the electronic dictionary 10 receives text data to be read aloud from the tablet PC20 via the communication unit 15 (yes in step A3), the text data is temporarily stored in the memory 12 (step A4).
The CPU11 starts the voice recognition program 12b in response to the reception of the text data from the tablet PC20, and shifts to a state in which voice recognition based on the voice recognition function is possible for the voice input from the voice input unit 18.
Here, the user instructs the electronic dictionary 10 to start reading aloud by operating the key input unit 16, for example, and starts reading aloud of an article (text data transmitted from the tablet PC 20) selected as a reading aloud target in the tablet PC20.
When an instruction to start reading aloud is input (step A5), the CPU11 executes voice recognition processing on the voice data input from the voice input unit 18 (the voice of the user reading the sentence aloud) (step A6). In the voice recognition processing, processing corresponding to the language being read aloud is executed. For example, when an English sentence is read aloud, the language may be designated in advance as English, and voice recognition processing using an English voice recognition dictionary may be executed. Similarly, by designating a language such as Chinese, Korean, German, or French, voice recognition processing using the voice recognition dictionary for that language can be executed.
It is also possible to perform the voice recognition processing without designating the language in advance. For example, acoustic analysis is performed on the voice data, the analysis result is compared with the voice recognition dictionaries (including an acoustic model, a language model, a pronunciation dictionary, and the like) prepared in advance for each language to determine the language, and voice recognition processing using the voice recognition dictionary of the determined language is performed. Alternatively, a voice recognition function built with AI (Artificial Intelligence) techniques and trained on a plurality of languages can perform voice recognition without the language being designated in advance.
The CPU11 outputs text data as the result of the voice recognition processing on the input voice data. The CPU11 then judges the correctness of the pronunciation of each word contained in the sentence by comparing this text data with the sentence data received from the tablet PC20. That is, the voice recognition result and the text of the sentence are compared in units of words: if a word matches, it is determined to have been pronounced correctly, and if not, it is determined to have been pronounced incorrectly.
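The word-by-word comparison described above can be sketched as follows. This is a simplified illustration that assumes both texts tokenize to the same number of words; a real system would need sequence alignment when the recognizer inserts or drops words.

```python
def judge_pronunciation(target_sentence, recognized_sentence):
    """Compare the target text and the recognition result word by word.

    Returns the words of the target sentence whose recognized counterpart
    differs, i.e. the words judged to have been pronounced incorrectly.
    """
    def tokenize(text):
        return [w.strip(".,?!").lower() for w in text.split()]

    misread = []
    for original, spoken in zip(tokenize(target_sentence),
                                tokenize(recognized_sentence)):
        if original != spoken:
            misread.append(original)  # keep the original word, not the misheard one
    return misread
```

With the example of the embodiment, comparing "Did you enjoy your homestay in London?" against a recognition result of "Did you enjoying your hamster in London?" would flag "enjoy" and "homestay".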
The CPU11 extracts each word determined to be mispronounced as a misread word and stores it in the memory 12 as extracted word data 12e in association with the original word in the sentence data (step A7). The CPU11 transmits the extracted word data 12e indicating the words determined to be mispronounced to the tablet PC20 as the voice recognition result for the sentence reading (step A8).
When receiving the extracted word data 12e from the electronic dictionary 10 (yes in step B7), the CPU21 of the tablet PC20 stores it in the memory 22 (extracted word data 22d). Based on the extracted word data 22d, the CPU21 causes the sentence selected for reading aloud to be displayed on the touch panel display unit 27 with each word corresponding to a misread word indicated by the extracted word data 22d, that is, each word that could not be pronounced correctly, shown in a display form different from the rest of the sentence (step B8).
The textbook screen shown in fig. 6B gives an example in which the display form is changed for the words "enjoy" and "homestay". As display forms that distinguish a word from the rest of the text, for example, a change of font color, highlighting, or underlining can be used; other display forms are also possible.
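Changing the display form of a misread word could, for instance, be implemented by wrapping it in markup before rendering. This is an illustrative sketch; the markup, tag, and function name are assumptions, not part of the embodiment.

```python
def mark_misread_words(sentence, misread_words, tag="u"):
    """Wrap misread words in markup so they render in a distinct form.

    The underline tag is illustrative; a font-color change or highlight
    could be produced the same way with different markup.
    """
    out = []
    for token in sentence.split():
        core = token.strip(".,?!")  # keep trailing punctuation outside the tag
        if core.lower() in misread_words:
            token = token.replace(core, f"<{tag}>{core}</{tag}>")
        out.append(token)
    return " ".join(out)
```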
In this way, the tablet PC20 cooperates with the electronic dictionary 10 and uses its voice recognition function to perform the read-aloud judgment and to change the display form of the words that could not be pronounced correctly (the words determined to be mispronounced) in the sentence displayed on the touch panel display unit 27. This lets the user visually confirm which words were judged to be mispronounced.
When an instruction to change the display content of the textbook screen is input in response to an operation on the touch panel display unit 27 (no at step B9), the CPU21 reads the textbook data 22c from the memory 22 (step B3), and displays the textbook screen changed on the touch panel display unit 27 based on the textbook data 22c (step B4).
When the end of the display of the textbook screen is instructed (yes in step B9), the CPU21 ends the process.
On the other hand, the electronic dictionary 10 stores the extracted word data 12e in the memory 12, tabulates the misread words indicated by the extracted word data 12e (the extracted word list), and displays the list on the touch panel display unit 17 (step A9).
In the present embodiment, the extracted word list is created and displayed, but the words may instead be registered in the word book entry function implemented by the dictionary control program 12a, and a screen listing the registered words of the word book entry function may be displayed.
Fig. 7 is a diagram showing an example of the extracted word list displayed on the electronic dictionary 10 in the present embodiment. The extracted word list shown in fig. 7 contains the two words "enjoy (enjoyment)" and "homestay (family lodging life)".
In the example shown in fig. 7, the word "enjoy" (see fig. 6A) in the sentence data received from the tablet PC20 was recognized by voice as, for example, "enjoying", and similarly the word "homestay" (see fig. 6A) was recognized by voice as, for example, "hamster"; both were therefore determined to be pronunciation errors.
In the examples shown in figs. 6A, 6B, and 7, the words are listed in the extracted word list exactly as they are expressed in the sentence data ("enjoy (enjoyment)", "homestay (family lodging life)"). As described below, however, the words may also be listed after their expression in the sentence data has been changed.
Fig. 8A shows an example of a textbook screen displayed on the touch panel display unit 27 of the tablet PC20 in the present embodiment. The textbook screen shown in fig. 8A displays, for example, the sentence "Are you enjoying your homestay in London? (Are you enjoying your family lodging life in London?)".
As described above, the sentence shown in fig. 8A is read aloud, and the read-aloud voice is judged by the voice recognition function of the electronic dictionary 10. As a result, the word "enjoying" in the sentence data is recognized by voice as, for example, "join", and is determined to be a pronunciation error (the word "homestay" is pronounced correctly). As shown in fig. 8B, the textbook screen then displays the word "enjoying", as a word that could not be pronounced correctly, in a changed display form.
In this case, the word "enjoying" is not listed in the form in which it appears in the sentence data; instead, it is first changed to its original form. That is, as in fig. 7, the original form "enjoy" of the word "enjoying" is tabulated and displayed in the extracted word list.
Because neither identical words nor different expressions of the same word (for example, "enjoying" and "enjoy") are listed twice in the extracted word list, the operation of selecting a word from the extracted word list becomes easier.
Note that although the sentence shown in fig. 8A uses the word "enjoying", i.e. the present participle form, the target is not limited to present participles; any word whose expression differs from its original form may be handled this way. For example, the third-person singular form, the past tense, the past participle, the plural, and expressions with prefixes or suffixes attached are all expressions that can be converted back to the original form when the extracted word list is created.
If the relationship between the original form of a word and its other expressions is included in the dictionary data 12d, the expression of the word in the sentence data is changed to the original form by referring to the dictionary data 12d. Alternatively, table data indicating this relationship may be prepared separately from the dictionary data 12d, and the expression of the word in the sentence data may be changed to the original form by referring to that table data.
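A minimal sketch of this original-form conversion, assuming a small hand-made lemma table; in the device, the relation would come from the dictionary data 12d or from the separately prepared table data, and the table contents below are invented for illustration.

```python
# Hypothetical lemma table; in the device this relation would come from
# the dictionary data 12d or from separately prepared table data.
LEMMA_TABLE = {
    "enjoying": "enjoy", "enjoys": "enjoy", "enjoyed": "enjoy",
    "homestays": "homestay",
}

def to_original_form(word):
    """Map an inflected expression (present participle, past tense,
    third-person singular, plural, ...) back to its original form."""
    return LEMMA_TABLE.get(word.lower(), word.lower())

def build_extracted_word_list(misread_words):
    """List each misread word in its original form, skipping duplicates
    so the same word never appears twice in the extracted word list."""
    seen, result = set(), []
    for w in misread_words:
        lemma = to_original_form(w)
        if lemma not in seen:
            seen.add(lemma)
            result.append(lemma)
    return result

build_extracted_word_list(["enjoying", "enjoy"])  # ["enjoy"]
```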
When an instruction to select one word from the extracted word list displayed on the touch panel display unit 17 is input (for example, a touch operation on the check button corresponding to the word) (step A10), the CPU11 searches for the learning methods corresponding to that word and displays the learning method list on the touch panel display unit 17 (step A11).
That is, the CPU11 searches, among the plurality of learning functions provided by the dictionary control program 12a, for the learning functions with which the word selected from the extracted word list can be studied. For example, when the selected word is contained in the content processed by a learning function, that learning function can be judged to be usable (executable) for the word.
Alternatively, an index registering the executable learning functions (learning methods) for each word may be created in advance; searching this index with the word selected from the extracted word list then identifies the executable learning functions corresponding to that word.
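The index-based lookup of step A11 might look like the following sketch; the index contents and method names are invented for illustration and are not taken from the patent.

```python
# Hypothetical pre-built index mapping each word to the learning
# functions (learning methods) that can be executed for it.
LEARNING_INDEX = {
    "enjoy": ["word learning", "example sentence learning",
              "pronunciation training", "grammar learning"],
    "homestay": ["word learning", "pronunciation training"],
}

def learning_method_list(word):
    """Return only the learning methods actually executable for the
    selected word; non-executable methods are simply not listed."""
    return LEARNING_INDEX.get(word.lower(), [])
```

Because non-executable methods are absent from the returned list, the user never has to try a learning method only to find it does not apply to the word.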
Fig. 9 is a diagram showing an example of a learning method list displayed on the electronic dictionary 10 in the present embodiment. The learning method list shown in fig. 9 exemplifies a case where the word "enjoy" is selected from the extracted word list, for example.
As shown in fig. 9, the learning method list makes it easy to see that "word learning", "example sentence learning", "pronunciation training", and "grammar learning" can be executed for the word "enjoy (enjoyment)". Learning methods that cannot be used for the word "enjoy" are not included in the list. The learning method list therefore allows the user to efficiently pick a usable learning method from among the many available ones. Moreover, by selecting and executing the learning methods included in the list one after another, the user can learn efficiently about the word selected from the extracted word list (i.e., the word determined to be mispronounced).
When a learning method is selected from the learning method list (yes in step A12), the CPU11 executes the selected learning function with the word selected from the extracted word list as the processing target (step A13), and displays the execution screen of that learning function on the touch panel display unit 17 (step A14). In one embodiment, the execution screen includes at least part of the content processed by the learning function, for example the part concerning the target word (such as the word's description in dictionary content, or, when the example sentence retrieval function described later is executed, a sentence containing the target word).
Fig. 10 is a diagram showing an example of a learning function execution screen in the case where "word learning" is selected from the learning method list in the present embodiment.
Fig. 10 shows the execution screen of the word retrieval function executed for the learning method "word learning". When the word retrieval function is started normally, the entry input region 40 is blank; when the function is started by selection from the learning method list, however, a screen showing the result of a word search with the word selected from the extracted word list as the entry is displayed.
This eliminates the need to start the learning function and then separately specify the word to be learned (by entering it as an entry), and thus enables efficient learning.
The execution screen of the word search function shown in fig. 10 shows search results from a plurality of different dictionaries included in the dictionary data 12d (the "G English dictionary", the "W English dictionary", the "O English dictionary", and so on). That is, the word search function processes a plurality of contents. The word retrieval function may be executed not only for a plurality of contents (dictionaries) at once but also for a single dictionary (content) selected from among them.
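A sketch of running one entry search across several dictionary contents and keeping only the dictionaries that contain the entry; the dictionary names and definitions below are placeholders, not the actual contents of the dictionary data 12d.

```python
# Placeholder dictionary contents keyed by (invented) dictionary names.
DICTIONARIES = {
    "G English dictionary": {"enjoy": "to take pleasure in something"},
    "W English dictionary": {"enjoy": "to experience with joy"},
    "O English dictionary": {"homestay": "a stay with a host family"},
}

def search_all(entry, dictionaries=DICTIONARIES):
    """Run the word search over every dictionary content and keep only
    the dictionaries in which the entry was found."""
    return {name: content[entry]
            for name, content in dictionaries.items()
            if entry in content}

search_all("enjoy")  # hits in the G and W dictionaries; O is skipped
```

Restricting the search to one chosen dictionary simply means passing a single-entry `dictionaries` mapping.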
In the above description, the word search function is executed when "word learning" is selected from the learning method list, but a plurality of different learning functions may be associated with the single learning method "word learning".
For example, the learning method "word learning" may be associated with the word book entry function in addition to the word search function. This allows the user to choose, for one learning method, one of a plurality of learning functions that process different contents, according to the learning content the user needs.
Fig. 11 shows the execution screen of the word book entry function executed for the learning method "word learning" in the present embodiment. As shown in fig. 11, the execution screen of the word book entry function shows that registration of the word "enjoy (enjoyment)" selected from the extracted word list has been completed.
Fig. 12 is a diagram showing an example of a screen for executing a learning function in a case where "pronunciation training" is selected from the learning method list in the present embodiment.
The execution screen shown in fig. 12 belongs to the spoken language learning function executed for the learning method "pronunciation training", here a function for practicing pronunciation word by word. The spoken language learning function can execute not only word-by-word pronunciation practice as shown in fig. 12 but also pronunciation practice in units of sentences containing the word "enjoy (enjoyment)" selected from the extracted word list.
Fig. 13 is a diagram showing an example of the execution screen of the example sentence retrieval function in the case where "example sentence learning" is selected from the learning method list in the present embodiment. As shown in fig. 13, the execution screen of the example sentence retrieval function lists the example sentences obtained by searching a dictionary (for example, the "G dictionary") included in the dictionary data 12d for the word "enjoy (enjoyment)" selected from the extracted word list.
After a learning method has been selected from the learning method list and one of the learning functions has been executed, operating, for example, the [back] key 16d of the key input unit 16 (no in step A15) returns the CPU11 to the state in which the learning method list was displayed on the touch panel display unit 17 before the learning function was executed (in this case, the learning methods need not be searched for again). The CPU11 then executes the same processing as described above (steps A11 to A14).
In this way, for one word selected from the extracted word list, different learning functions can be selected from the learning method list and executed one after another, enabling efficient learning. Furthermore, the learning support system of the present embodiment lets the user learn efficiently about the words the user could not pronounce correctly, that is, the words most in need of study.
When the end of the processing is instructed (yes in step A15), the CPU11 ends the learning processing that uses the result of the voice recognition processing.
Next, a case where the electronic dictionary 10 does not cooperate with the tablet PC20 will be described.
In this case, when the execution of the textbook processing program 22B is instructed by the user operation, the tablet PC20 reads the textbook data 22c from the memory 22 (step B10), and displays a textbook screen (text, image, etc.) on the touch panel display unit 27 based on the textbook data 22c (step B11). The tablet PC20 changes the display content of the textbook screen in accordance with the operation of the touch panel display unit 27.
When the user uses the voice recognition function of the electronic dictionary 10, the user instructs the electronic dictionary 10 to start reading aloud by operating the key input unit 16, for example, and starts reading aloud of a sentence or the like displayed on the touch panel display unit 27 of the tablet PC20.
When the instruction to start reading aloud is input (step A5), the CPU11 captures the voice from the voice input unit 18 (step A6) and executes the voice recognition processing on the voice data (step A7). In the voice recognition processing, processing corresponding to the language being read aloud is executed in the same manner as described above.
However, because there is no cooperation with the tablet PC20, the text data to be read aloud is not received. The CPU11 therefore refers to, for example, a database in which the words subject to voice recognition are registered, and determines that a word is mispronounced when the text data obtained as the voice recognition result contains a word that does not exist in the database: when a word is not pronounced as registered, the word obtained as the voice recognition result may be one that does not exist in the database.
In this case, the CPU11 may compare the word obtained as the voice recognition result with the plurality of pre-registered words subject to voice recognition, and determine that a pre-registered word whose degree of matching with the recognized word (the recognition accuracy, i.e., the recognition certainty) is equal to or less than a preset reference value was mispronounced. In one embodiment, the CPU11 may select, as the mispronounced word, the pre-registered word with the highest degree of matching among these.
The matching degree is calculated, for example, by converting the input voice data of the user's speech into pronunciation data (phonetic symbols), comparing the sequence of phonetic symbols of the voice data with the sequence of phonetic symbols of a pre-registered word, and taking the ratio of the number of correctly pronounced symbols to the total number of symbols of the compared word. The pronunciation data of each pre-registered word is stored in the database in advance in association with that word. This matching degree can also be used for the pronunciation correctness determination described above.
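The matching-degree calculation described above can be sketched as a position-wise comparison of phonetic symbol sequences. The phoneme strings and the 0.8 reference value below are illustrative assumptions; the patent does not fix a particular phonetic alphabet or threshold.

```python
def matching_degree(expected_phonemes, actual_phonemes):
    """Ratio of correctly pronounced symbols to the total number of
    symbols of the pre-registered word (position-wise comparison)."""
    if not expected_phonemes:
        return 0.0
    hits = sum(1 for e, a in zip(expected_phonemes, actual_phonemes) if e == a)
    return hits / len(expected_phonemes)

def is_mispronounced(expected_phonemes, actual_phonemes, threshold=0.8):
    """Mispronounced when the matching degree is equal to or less than
    the preset reference value (here an assumed 0.8)."""
    return matching_degree(expected_phonemes, actual_phonemes) <= threshold

# "enjoy" with its last phonemes spoken wrongly or dropped:
matching_degree(list("Indzoi"), list("Indzu"))  # 4/6 ≈ 0.67
```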
The CPU11 may also display all the words extracted by the voice recognition processing of the spoken voice on the touch panel display unit 17 and generate the extracted word data 12e from the words the user selects. That is, the user designates the misread (misrecognized) words, as words with pronunciation errors, from among the words displayed on the touch panel display unit 17.
Thereafter, the CPU11 stores the extracted word data 12e indicating the words determined to be mispronounced in the memory 12, and executes the same processing as described above (steps A9 to A15).
In this way, even when the electronic dictionary 10 does not cooperate with the tablet PC20, the user can have the read-aloud sentence judged by the voice recognition function of the electronic dictionary 10 and have the learning method list displayed for the words determined to be mispronounced as a result. This provides the same effect as when the electronic dictionary 10 cooperates with the tablet PC20.
In the above description, the electronic dictionary 10 and the tablet PC20 are independent electronic devices, but the functions of both the electronic dictionary 10 and the tablet PC20 (digital textbook) can also be realized in a single electronic device.
In the above description, the electronic dictionary 10 compares the text data obtained as the voice recognition result with the sentence data received from the tablet PC20 to judge the correctness of pronunciation of the words contained in the sentence, but the tablet PC20 may perform this pronunciation correctness determination instead. In that case, the tablet PC20 has a voice recognition function (not shown), and transmits to the electronic dictionary 10 the result of the correctness determination for the voice input from the voice input unit 28 together with word data indicating the words determined to be mispronounced. The electronic dictionary 10 then tabulates the misread words based on the word data received from the tablet PC20 (creates the extracted word list). In this case, the tablet PC20 need not transmit the text data to be read aloud to the electronic dictionary 10.
In the above description, the learning support system connects the tablet PC20 to the electronic dictionary 10 to execute the cooperation processing, but the electronic dictionary 10 may execute the same processing in cooperation with a server, or the cooperation processing may be executed with a server alone instead of the electronic dictionary 10.
Fig. 14 is a schematic diagram of a learning support system according to another embodiment of the present invention. The learning support system 1 of the present embodiment further includes a server 40. Fig. 14 shows a functional block diagram showing a configuration of an electronic circuit of the server 40. The configuration and operation of the present embodiment, which are not described in particular, are the same as those of the above-described embodiment, and therefore, redundant description thereof is omitted.
The server 40 is an electronic device (an external electronic device) provided in the same manner as the electronic dictionary 10, and either cooperates with the electronic dictionary 10 or is used in its place. By giving the server 40 a voice recognition function (voice recognition program) like that of the electronic dictionary 10, the read-aloud determination (including the pronunciation correctness determination for words) can be performed by the voice recognition function of the electronic dictionary 10 in cooperation with the server 40, or by the voice recognition function of the server 40 alone.
The server 40 has the structure of a computer that reads programs stored in various storage media or transferred to it and whose operation is controlled by the read programs; its electronic circuit includes a CPU (central processing unit) 41.
The CPU41 functions as a control unit that controls the entire server 40. The CPU41 controls the operations of the respective circuit portions based on a control program stored in advance in the memory 42, a control program read from a storage medium 43 such as a ROM card to the memory 42 via a storage medium reading unit 44, or a control program downloaded from an external device (such as a server) via a network N such as the internet and read to the memory 42.
The CPU41 is connected with a memory 42, a storage medium reading unit 44, a communication unit 45, and the like. The control programs stored in the memory 42 include a dictionary control program 42a, a voice recognition program 42b, a learning method processing program 42c, and the like. The program and data stored in the memory 42 shown in fig. 14 are substantially the same as the program and data having the same names as those shown in fig. 1, and thus the description thereof is omitted.
Note that each part of the server 40 has basically the same function as each part of the electronic dictionary 10 shown in fig. 1 with the same name, and description thereof is omitted.
First, a case where the electronic dictionary 10 cooperates with the server 40 will be described. That is, the CPU11 (control unit) of the electronic dictionary 10 and the CPU41 (control unit) of the server 40 cooperate with each other based on the respective programs to function as a control unit of the learning support system.
For example, although the voice recognition described above uses the voice recognition function of the electronic dictionary 10 (the voice recognition program 12b), the voice recognition function of the server 40 (the voice recognition program 42b) may be used instead. In this case, as before, the learner's voice subject to voice recognition is input from the voice input unit 18 of the electronic dictionary 10, and the electronic dictionary 10 transmits the voice data together with the read-aloud text data received from the tablet PC20 to the server 40, which can then execute the voice recognition processing (including the pronunciation correctness determination for words). Alternatively, the tablet PC20 may input the voice of the read-aloud text from the voice input unit 28 together with the specification of the text range to be read aloud (fig. 5, steps B5 and B6). In this case, the voice data input from the voice input unit 28 is transmitted to the server 40 via the electronic dictionary 10 together with the text data to be read aloud, and the voice recognition processing can be executed there.
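The data the electronic dictionary forwards to the server could be bundled roughly as below. The field names and JSON encoding are invented for illustration; the text does not specify the actual transfer format.

```python
import json

def build_recognition_request(voice_data_b64, sentence_text):
    """Bundle the captured voice data with the text to be read aloud,
    as forwarded from the electronic dictionary to the server for the
    voice recognition processing (including the pronunciation check)."""
    return json.dumps({
        "voice_data": voice_data_b64,    # base64-encoded recorded audio
        "sentence_data": sentence_text,  # the range-specified read-aloud text
    })
```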
Since the server 40 generally has higher processing capability than electronic devices such as the electronic dictionary 10, having the server 40 execute the computationally heavy voice recognition processing greatly reduces the load on the electronic dictionary 10 and improves the efficiency of the overall processing.
The same applies beyond the voice recognition processing: for example, the function of providing the learning method list in the electronic dictionary 10 (the processing of the learning method processing program 12c) may instead be executed by the learning method processing program 42c of the server 40. In this case, the result of the processing executed by the learning method processing program 42c of the server 40 is transmitted to the tablet PC20 via the electronic dictionary 10, and the learning method list is displayed as the processing result, as before.
Next, the case where the server 40 cooperates with the tablet PC20 alone, in place of the electronic dictionary 10, will be described. In this case, the tablet PC20 cooperates with an external electronic device 50 (for example, a personal computer) and establishes a link with the server 40 via the electronic device 50 and the network N. The electronic device 50 plays the same role as the electronic dictionary 10 cooperating with the server 40 described above, and displays screens corresponding to the results of the processing executed by the server 40.
The tablet PC20 inputs the voice of the learner subject to voice recognition from the voice input unit 28 together with the specification of the text range to be read aloud (fig. 5, steps B5 and B6), and transmits the sentence data of the range-specified text and the voice data to the server 40. The server 40 can thereby execute the voice recognition processing on the voice uttered by the learner.
The remaining processing performed by the server 40 (CPU41) is executed in the same manner as in the flowchart shown in fig. 4, and its description is omitted.
Although the above description assumes that the tablet PC20 inputs the learner's voice, if the electronic device 50 is provided with a voice input unit (microphone), the voice for the text designated on the tablet PC20 may be input from the electronic device 50, as with the electronic dictionary 10. In this case, the electronic device 50 transmits the input voice data to the server 40 together with the text data received from the tablet PC20.
In the above description, the tablet PC20 establishes the link with the server 40 via the external electronic device 50, but the tablet PC20 may establish the link with the server 40 directly, without the electronic device 50. In this case, the server 40 executes the processing in the same manner as the electronic dictionary 10 described above, and the screen resulting from the processing is displayed on the display unit of the tablet PC20 (the touch panel display unit 27).
As described above, in the learning support system according to the present embodiment, language learning can be performed efficiently using the learning functions, whether the electronic dictionary 10 is used alone, the electronic dictionary 10 and the server 40 are linked, or the server 40 performs the support processing in place of the electronic dictionary 10.
The methods described in the embodiments, that is, the processing shown in the flowcharts and the like, may be stored and distributed as computer-executable programs in storage media such as memory cards (ROM cards, RAM cards, etc.), magnetic disks (flexible disks, hard disks, etc.), optical disks (CD-ROMs, DVDs, etc.), or semiconductor memories. A computer reads a program stored in such an external storage medium, and its operation is controlled by the program, thereby realizing the same processing as the functions described in the embodiments.
Furthermore, program code implementing each method can be transmitted over a network (the internet), and the program data can be acquired by a computer connected to the network, whereby the same functions as those of the above-described embodiments can be realized.
The invention of the present application is not limited to the embodiments, and various modifications can be made at the implementation stage without departing from its gist. The embodiments also contain inventions at various stages, and various inventions can be extracted by appropriately combining the disclosed constituent elements. For example, even if several constituent elements are deleted from all those shown in the embodiments, or several are combined, as long as the problems described in the section on the problems to be solved by the invention can be solved and the effects described in the section on the effects of the invention can be obtained, the configuration after deleting or combining those constituent elements can be extracted as an invention.

Claims (14)

1. An electronic device that outputs a learning method list in which executable learning methods are tabulated, targeting a word whose pronunciation is determined to be wrong among the words contained in the voice of a user reading a sentence aloud,
the electronic device having a control unit that executes a learning function based on a learning method selected from the learning method list.
2. The electronic device of claim 1,
the control unit executes voice recognition processing on the voice and performs a pronunciation correctness determination for the words contained in the sentence,
the control unit extracts a word whose pronunciation is determined to be erroneous,
the control unit outputs the learning method list with the extracted word as a target.
3. The electronic device of claim 1,
the control unit receives, from another device, sentence data of a text to be read aloud,
the control unit compares the sentence data with the result of voice recognition of the voice reading the text to perform a pronunciation correctness determination for the words,
the control unit transmits the word determined to be mispronounced to the other device.
4. The electronic device of claim 1,
the control unit causes an external electronic device to execute voice recognition processing including a pronunciation correctness determination for the words contained in the voice,
the control section outputs the learning method list based on a determination result of the external electronic device.
5. The electronic device of claim 1,
the control unit executes a learning function based on a learning method selected from the learning method list, with the word whose pronunciation is determined to be erroneous as a processing target.
6. The electronic device of any of claims 1-5,
the control unit executes the learning function for one content among a plurality of contents processed by the learning method selected from the learning method list.
7. The electronic device of claim 6,
the control unit, when executing the learning function, causes a display device to display a portion of the content that includes the word whose pronunciation is determined to be wrong.
8. The electronic device of claim 1,
wherein, when a word obtained by voice recognition of the voice reading the sentence aloud is not contained among the pre-registered words subject to voice recognition,
a pre-registered word whose degree of matching with the word obtained by the voice recognition is equal to or less than a preset reference value is determined to be a mispronounced word.
9. The electronic device of claim 2 or claim 4,
the voice recognition processing includes processing for performing voice recognition for each language on voices uttered in a plurality of different languages.
10. An electronic device has a control section,
the control unit causes a display device to display a text to be read aloud,
the control unit transmits sentence data of the text to another device,
the control unit receives, from the other device, word data indicating a word in the text whose pronunciation is determined to be wrong,
the control unit displays the word whose pronunciation is determined to be wrong in a display form different from that of the other words, based on the word data.
11. A learning support system that outputs a learning method list in which executable learning methods are tabulated, targeting a word whose pronunciation is determined to be wrong among the words contained in the voice of a user reading a sentence aloud,
the learning support system includes a control unit that executes a learning function based on a learning method selected from the learning method list.
12. A learning support system comprising:
a first electronic device having a first control unit; and
a second electronic device having a second control unit, wherein
the first control unit outputs a learning method list in which executable learning methods are tabulated, targeting a word whose pronunciation is determined to be wrong among the words contained in the voice of a user reading a text aloud,
the first control unit executes a learning function based on a learning method selected from the learning method list,
the second control unit causes a display device to display the text to be read aloud,
the second control unit transmits sentence data of the text to the first electronic device,
the second control unit receives, from the first electronic device, word data indicating a word in the text whose pronunciation is determined to be wrong, and
the second control unit displays the word whose pronunciation is determined to be wrong in a display form different from that of the other words, based on the word data.
13. A learning processing method for an electronic device having a control unit,
the control unit outputs a learning method list in which executable learning methods are tabulated, targeting a word whose pronunciation is determined to be wrong among the words contained in the voice of the user reading the text aloud,
the control section executes a learning function based on a learning method selected from the learning method list.
14. A non-transitory computer-readable storage medium storing a program executable by a computer of an electronic device,
the program causing the computer to:
output a learning method list in which executable learning methods are tabulated, targeting a word whose pronunciation is determined to be wrong among the words contained in the voice of a user reading a text aloud; and
execute a learning function based on a learning method selected from the learning method list.
CN202211150391.6A 2021-09-22 2022-09-21 Electronic device, learning support system, learning processing method, and program Pending CN115904172A (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2021-154457 2021-09-22
JP2021154457 2021-09-22
JP2022047166 2022-03-23
JP2022-047166 2022-03-23
JP2022-084542 2022-05-24
JP2022084542A JP2023046232A (en) 2021-09-22 2022-05-24 Electronic equipment, learning support system, learning processing method, and program

Publications (1)

Publication Number Publication Date
CN115904172A true CN115904172A (en) 2023-04-04

Family

ID=85776579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211150391.6A Pending CN115904172A (en) 2021-09-22 2022-09-21 Electronic device, learning support system, learning processing method, and program

Country Status (2)

Country Link
JP (1) JP2023046232A (en)
CN (1) CN115904172A (en)

Also Published As

Publication number Publication date
JP2023046232A (en) 2023-04-03

Similar Documents

Publication Publication Date Title
KR102043419B1 (en) Speech recognition based training system and method for child language learning
US20140220518A1 (en) Electronic Reading Device
KR101819458B1 (en) Voice recognition apparatus and system
US20160180741A1 (en) Pronunciation learning device, pronunciation learning method and recording medium storing control program for pronunciation learning
KR102078626B1 (en) Hangul learning method and device
JP6197706B2 (en) Electronic device, problem output method and program
JP2010198241A (en) Chinese input device and program
KR20140071070A (en) Method and apparatus for learning pronunciation of foreign language using phonetic symbol
JP4914808B2 (en) Word learning device, interactive learning system, and word learning program
KR20170009486A (en) Database generating method for chunk-based language learning and electronic device performing the same
US8489389B2 (en) Electronic apparatus with dictionary function and computer-readable medium
JP2019061189A (en) Teaching material authoring system
KR20170041642A (en) Foreign language learning device
KR20080100857A (en) Service system for word repetition study using round type
KR20090035346A (en) Language stydy method which accomplishes a vocabulary analysis
KR20130058840A (en) Foreign language learnning method
CN115904172A (en) Electronic device, learning support system, learning processing method, and program
KR20160106363A (en) Smart lecture system and method
JPS634206B2 (en)
KR101554619B1 (en) System and method for learning language using touch screen
KR100459030B1 (en) Method and Apparatus for English study using touch screen
KR20130128172A (en) Mobile terminal and inputting keying method for the disabled
JPH0344690A (en) Conversation manner education system
JP4677869B2 (en) Information display control device with voice output function and control program thereof
CN112307748A (en) Method and device for processing text

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination