CN115840554A - Electronic device, information processing method, and storage medium

Info

Publication number: CN115840554A
Application number: CN202211092177.XA
Authority: CN (China)
Legal status: Pending
Prior art keywords: sound, learning, text, displayed, mode
Other languages: Chinese (zh)
Inventor: 横田悠希
Assignee (current and original): Casio Computer Co Ltd
Priority claimed from: JP2022047169A (published as JP2023046222A)
Application filed by: Casio Computer Co Ltd

Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention relates to an electronic device, an information processing method, and a storage medium. A control unit of the electronic device executes processing based on the following modes: a text mark mode, in which the demonstration sound corresponding to the text selected as the processing target is not output from the sound output unit, and a mark indicating the playback position of the demonstration sound is displayed on the text corresponding to the demonstration sound shown on the display unit, in accordance with the playback timing of the demonstration sound; and an evaluation mode, in which the text corresponding to the demonstration sound is displayed on the display unit, a reading sound of the text being read aloud is input from a sound input unit, the reading sound is evaluated against the demonstration sound, and the evaluation result is displayed on the display unit.

Description

Electronic device, information processing method, and storage medium
Technical Field
The present disclosure relates to an electronic device, an information processing method, and a storage medium suitable for language learning.
Background
Learning functions incorporated into electronic devices such as electronic dictionaries and personal computers include functions for learning foreign languages. Examples of such language learning include listening practice, in which the learner listens to a passage read aloud in the foreign language, and speaking practice, which aims to enable the learner to utter the foreign language correctly.
In listening practice, for example, a passage (text) is displayed while a recording of a native speaker reading that passage is output as sound. By listening repeatedly to the native speaker's voice, learners can acquire the native speaker's pronunciation.
In speaking practice, for example, the text to be spoken by the learner is displayed, the learner's speech for the displayed text is input, and an evaluation result produced by a pronunciation assessment function is output for that speech. By checking this evaluation result, the learner obtains feedback on reading speed, pronunciation, and the like.
Documents of the prior art
Patent document
Patent document 1: JP 2007-094055A
As described above, conventional electronic devices provide learning functions for language learning, such as listening practice and speaking practice, but they cannot provide language learning that is effective for the individual learner.
Disclosure of Invention
The present disclosure has been made in view of the above problem, and an object of the present disclosure is to provide an electronic device, an information processing method, and a storage medium that enable language learning to be performed efficiently.
In order to solve the above problem, an electronic device according to an embodiment of the present invention includes a control unit that executes processing based on the following modes: a text mark mode, in which the demonstration sound corresponding to the text selected as the processing target is not output from the sound output unit, and a mark indicating the playback position of the demonstration sound is displayed on the text corresponding to the demonstration sound shown on the display unit, in accordance with the playback timing of the demonstration sound; and an evaluation mode, in which the text corresponding to the demonstration sound is displayed on the display unit, a reading sound of the text being read aloud is input from a sound input unit, the reading sound is evaluated against the demonstration sound, and the evaluation result is displayed on the display unit.
Drawings
Fig. 1 is a functional block diagram showing a configuration of an electronic circuit of an electronic device according to an embodiment of the present invention.
Fig. 2 is a front view showing an external configuration of the electronic dictionary in the present embodiment.
Fig. 3 is a diagram showing an example of learning management data in the present embodiment.
Fig. 4 is a flowchart showing the learning setting process in the present embodiment.
Fig. 5 is a diagram showing an example of the setting content of the learning setting data in the present embodiment.
Fig. 6 is a flowchart for explaining the learning process based on the learning processing program of the electronic dictionary in the present embodiment.
Fig. 7 is a flowchart showing the process of learning step 1 (1st mode) in the learning process shown in fig. 6.
Fig. 8 is a flowchart showing the process of learning step 2 (2nd mode) in the learning process shown in fig. 6.
Fig. 9 is a flowchart showing the process of learning step 3 (3rd mode) in the learning process shown in fig. 6.
Fig. 10 is a diagram showing an example of the learning screen DS1 for the 1st mode in the present embodiment.
Fig. 11 is a diagram showing an example of the mark that changes with playback of the demonstration sound on the learning screen in the present embodiment.
Fig. 12 is a diagram showing an example of the learning screen DS2 for the 2nd mode in the present embodiment.
Fig. 13 is a diagram showing an example of the learning screen DS3 for the 3rd mode in the present embodiment.
Fig. 14 is a diagram showing an example of the 1st mark and the 2nd mark that change on the learning screen DS3 in the present embodiment.
Fig. 15 is a diagram showing an example of the evaluation result screen DS4 in the present embodiment.
Fig. 16 is a schematic diagram of a learning support system according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described below with reference to the drawings.
Fig. 1 is a functional block diagram showing a configuration of an electronic circuit of an electronic device according to an embodiment of the present invention.
In the present embodiment, an example is shown in which the electronic device is configured as the electronic dictionary 10. The electronic device can also be realized by various other devices such as a personal computer, a smartphone, a tablet PC, or a game device. The functions realized by the electronic dictionary 10 described later may instead be realized by a system in which an electronic device and a server connected via a network, such as the internet, operate in cooperation, or by a server alone in place of the electronic dictionary 10, with the processing results output by an electronic device connected via the network. The details will be described later.
The electronic dictionary 10 records dictionary contents of various categories as dictionary data. In the dictionary contents, information on at least one sense is registered in association with each of a plurality of entries, such as words. The dictionary contents loaded into the electronic dictionary 10 are generally created by publishers and the like, include contents also published on paper media, and are highly reliable. Therefore, by making effective use of these highly reliable dictionary contents in learning, accurate and effective learning can be expected.
The dictionary contents are not limited to dictionaries for languages such as English or the user's native language, but also include contents such as encyclopedias, study guides, problem collections, literary works, and commentaries.
The dictionary contents can be used not only by the dictionary search function but also as learning contents for language learning.
The electronic dictionary 10 has a configuration of a computer that reads programs recorded on various recording media or programs transferred thereto and controls operations by the read programs, and includes a Central Processing Unit (CPU) 11 in an electronic circuit thereof.
The CPU11 functions as a control unit that controls the whole electronic dictionary 10. The CPU11 controls the operations of the circuit elements in accordance with a control program stored in advance in the memory 12, a control program read from a recording medium 13 such as a ROM card into the memory 12 via a recording medium reading unit 14, or a control program downloaded from an external device (such as a server) via a network such as the internet (not shown) and read into the memory 12.
The control program stored in the memory 12 is activated in response to an input signal corresponding to a user operation from the key input unit 16, an input signal corresponding to a user operation from the touch panel display unit 17 as a display unit, or a connection communication signal with the external recording medium 13 such as an EEPROM (registered trademark), a RAM, or a ROM connected via the recording medium reading unit 14.
The CPU11 is connected to a memory 12, a recording medium reading unit 14, a communication unit 15, a key input unit 16, a touch panel display unit 17 (display), a voice input unit (microphone) 18, a voice output unit (speaker) 19, and the like. In one embodiment, the display unit may not function as a touch panel. For example, an LCD (Liquid Crystal Display) may be used as the Display unit instead of the touch panel Display unit 17.
The control programs stored in the memory 12 include a dictionary control program 12a, a learning processing program 12b, and the like. The dictionary control program 12a is a program for controlling the operation of the entire electronic dictionary 10. The dictionary control program 12a also realizes a dictionary search function for searching for dictionary contents and displaying information based on a character string input by the input unit (the key input unit 16, the touch panel display unit 17, and the voice input unit 18). The dictionary control program 12a includes a handwritten character recognition program for recognizing handwritten characters on the touch panel display unit 17.
The learning processing program 12b is a program that realizes a learning function for learning a foreign language. The learning processing program 12b includes a learning setting processing program and a reading evaluation program. The learning setting processing program executes processing for setting, in accordance with the learner's operations, the learning setting data that controls display and sound playback while the learning function is executed. The reading evaluation program executes the following processing: recognizing the learner's voice input from the voice input unit 18 and detecting the uttered content (text, sentences, words, etc.); and performing an evaluation based on a comparison between the learner's reading sound for a text and the demonstration sound for that text spoken by a native speaker (a speaker whose native language is the foreign language) or the like.
The electronic dictionary 10 according to the present embodiment executes, based on the learning processing program 12b (learning function), processing in the following modes: a 1st mode (demonstration sound mode), in which the demonstration sound corresponding to the learning content (text) selected as the target of the learning process is played back and output from the sound output unit 19, and a mark indicating the playback position of the demonstration sound is displayed on the text corresponding to the demonstration sound shown on the touch panel display unit 17, in accordance with the playback speed (playback timing) of the demonstration sound; a 2nd mode (text mark mode), in which the demonstration sound is not output from the sound output unit 19, and a mark indicating the playback position of the demonstration sound is displayed on the text shown on the touch panel display unit 17, in accordance with the playback speed (playback timing) of the demonstration sound; and a 3rd mode (evaluation mode), in which the text corresponding to the demonstration sound is displayed on the touch panel display unit 17, a reading sound of the text being read aloud is input from the sound input unit 18, the reading sound is evaluated against the demonstration sound, and the evaluation result is displayed on the touch panel display unit 17.
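As an illustration only (the patent describes behavior, not code; all names below are assumptions), this three-mode structure could be sketched in Python as follows:

```python
from enum import Enum, auto

class LearningMode(Enum):
    DEMO_SOUND = auto()  # 1st mode: play demonstration sound, move the guide mark
    TEXT_MARK = auto()   # 2nd mode: move the guide mark only, sound stays muted
    EVALUATION = auto()  # 3rd mode: record the reading sound and evaluate it

def process(mode: LearningMode, text: str) -> None:
    """Dispatch one learning session for the text selected as the target."""
    if mode is LearningMode.DEMO_SOUND:
        print(f"playing demonstration sound while marking: {text!r}")
    elif mode is LearningMode.TEXT_MARK:
        print(f"moving the mark at demonstration-sound timing, no audio: {text!r}")
    else:
        print(f"recording a reading of {text!r} and scoring it against the demo")

process(LearningMode.TEXT_MARK, "A certain man had several daughters ...")
```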
The memory 12 stores dictionary data 12c, learning content data 12d, learning management data 12e, learning setting data 12f, reading evaluation data 12g, and the like.
The dictionary data 12c includes a database in which dictionary contents of various categories, such as an English-Japanese dictionary, a Japanese-English dictionary, an English-Chinese dictionary, and a Chinese dictionary, are recorded. In the dictionary data 12c, sense information describing the meanings (senses) of each entry is associated with each dictionary. Note that the dictionary data 12c may be stored in an external device (such as a server) accessible via a network, instead of being built into the electronic dictionary 10.
The learning content data 12d is data used by the learning function based on the learning processing program 12b. The learning content data 12d includes texts, such as passages and words, each consisting of a plurality of sentences, a demonstration sound for each text spoken by a native speaker, evaluation criterion data for evaluating the learner's reading sound against the demonstration sound, and the like. The learning content data 12d contains a plurality of learning contents (texts) from which the learner can freely choose what to study.
The learning management data 12e is data for managing the progress of learning for the plurality of learning contents included in the learning content data 12d. In the learning management data 12e, learning progress information indicating which of the 1st to 3rd modes of the learning processing program 12b (learning function) have been executed is registered for each learning content.
The learning setting data 12f is data for controlling display and sound playback during execution of the learning processing program 12b (learning function). The learning setting data 12f is set by processing based on the learning setting processing program.
The reading evaluation data 12g is data representing the result of evaluating, against the demonstration sound, the learner's reading of a learning content text, obtained by executing the 3rd mode of the learning processing program 12b.
The communication unit 15 executes communication control for communicating with other electronic devices via a network such as the internet or a local area network (LAN), or for performing short-range wireless communication such as Bluetooth (registered trademark) or Wi-Fi (registered trademark) with nearby electronic devices.
The electronic dictionary 10 configured in this way realizes the functions described in the following description of its operation: the CPU11 controls the operation of each part of the circuit in accordance with the instructions described in the various programs, such as the dictionary control program 12a and the learning processing program 12b, with software and hardware operating in cooperation.
Fig. 2 is a front view showing an external configuration of the electronic dictionary 10 according to the present embodiment.
In the electronic dictionary 10 shown in fig. 2, the CPU11, the memory 12, the recording medium reading unit 14, the communication unit 15, the voice input unit 18, and the voice output unit 19 are built into the lower half of the foldable device main body, on which the key input unit 16 is provided, and the touch panel display unit 17 is provided on the upper half.
The key input unit 16 includes character input keys 16a, a dictionary selection key 16b for selecting the various dictionaries and functions, a [translate/decide] key 16c, a [return] key 16d, cursor keys (up, down, left, and right) 16e, a delete key 16f, a power button, and various other function keys.
Various menus, buttons 17a, and the like are displayed on the touch panel display unit 17 in accordance with the execution of various functions. The touch panel display unit 17 can perform, for example, a touch operation for selecting various menus and buttons 17a, and a handwritten character input for inputting characters, using a pen.
In the handwritten character input, when a pattern representing a character is handwritten with a pen in the handwritten character input area displayed on the touch panel display unit 17, character recognition processing is performed on the pattern. The characters for the pattern obtained by the character recognition processing are displayed in the character input area of the touch panel display unit 17 in the same manner as the characters input by the operation of the character input keys 16a of the key input unit 16. Therefore, a character string for dictionary search can be input by handwriting a character on the touch panel display unit 17.
In addition, the electronic dictionary 10 can input characters by voice. The electronic dictionary 10 inputs a voice uttered by a learner from the voice input unit 18, executes voice recognition processing on the input voice, and inputs a character string corresponding to the uttered voice.
Fig. 3 is a diagram showing an example of the learning management data 12e in the present embodiment.
In the learning management data 12e, learning progress information indicating whether execution of each of the 1st to 3rd modes of the learning processing program 12b (learning function) has been completed is registered for each learning content (learning content 1, 2, ...). In the learning progress information shown in fig. 3, learning steps 1, 2, and 3 correspond to the 1st, 2nd, and 3rd modes, respectively. In the learning management data 12e shown in fig. 3, learning progress information indicating completed execution of the 1st and 2nd modes (learning steps 1 and 2) is registered for learning content 3. For content that has not yet been processed by the learning function (for example, learning content 2), data indicating that it is unlearned is registered.
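A minimal sketch of how such per-content progress data might be held (the patent does not specify a format; field names and values here are hypothetical):

```python
# Hypothetical shape of the learning management data 12e: one record per
# learning content, with completion flags for learning steps 1-3.
learning_management = {
    "learning content 1": {"step1": True, "step2": False, "step3": False},
    "learning content 2": None,  # not yet processed by the learning function
    "learning content 3": {"step1": True, "step2": True, "step3": False},
}

def next_step(content_id: str) -> int:
    """Return the next learning step (1-3) to run for a content item."""
    progress = learning_management.get(content_id)
    if progress is None:  # unlearned content starts from learning step 1
        return 1
    for step in (1, 2, 3):
        if not progress[f"step{step}"]:
            return step
    return 1  # all steps done: relearning starts over from step 1
```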
Next, the operation of the electronic dictionary 10 in the present embodiment will be described.
First, the learning setting process in the present embodiment will be described. Fig. 4 is a flowchart showing the learning setting process in the present embodiment.
When execution of the learning function by the learning processing program 12b is instructed, for example by a menu operation, the CPU11 causes the touch panel display unit 17 to display a learning setting screen (step S1).
The learning setting screen displays setting items for controlling display and audio playback during execution of the learning function. For example, the learning setting screen provides setting items that, for the 3rd mode of the learning function, independently set whether to display (on/off) on the learning content text the 1st mark (guide mark), which indicates the playback position of the demonstration sound, and the 2nd mark (user mark), which indicates the reading position of the reading sound.
The CPU11 accepts instructions (learning settings) for the setting items through operation of the cursor keys 16e or touch operations on the touch panel display unit 17 (step S2). When confirmation of the settings entered on the learning setting screen is instructed, for example by operating the [translate/decide] key 16c or touching a confirm button provided on the learning setting screen (YES in step S3), the CPU11 stores in the memory 12 the learning setting data 12f representing the contents set on the learning setting screen, that is, whether the 1st mark is displayed (on) or hidden (off) and whether the 2nd mark is displayed (on) or hidden (off) (step S4). When executing the 3rd mode of the learning function, the CPU11 refers to the learning setting data 12f and controls the display of the 1st mark and the 2nd mark according to these settings.
Fig. 5 is a diagram showing an example of the contents of the learning setting data 12f in the present embodiment. In the example shown in fig. 5, "on" (displayed) is set for both the 1st mark (guide mark) and the 2nd mark (user mark). Therefore, in the 3rd mode, both the 1st mark and the 2nd mark are displayed on the learning content text.
As described above, the learning setting process allows the learner to choose, according to his or her learning style, whether to display both the 1st and 2nd marks, only one of them, or neither.
Although the learning setting screen described here provides setting items for controlling the display of the 1st and 2nd marks, other learning settings may also be made.
For example, the playback speed used when playing the demonstration sound of the learning content during execution of the learning function may be set. The learning setting screen may display a number of selectable speed levels (for example, 5 levels), any one of which can be chosen. The CPU11 stores data indicating the playback speed selected on the learning setting screen as part of the learning setting data 12f. When executing the 1st to 3rd modes of the learning function, the CPU11 refers to this playback speed data to control the playback timing of the demonstration sound and the moving speed of the 1st mark.
In this way, the learning setting process can set the display of the marks and the playback speed of the demonstration sound used while the learning function is executed. Display and sound playback suited to the learner are then used when the learning function runs, so effective learning can be expected.
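As a rough sketch of what the learning setting data 12f might hold (field names and the five-level speed scale are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class LearningSettings:
    show_guide_mark: bool = True  # 1st mark (guide mark) on/off
    show_user_mark: bool = True   # 2nd mark (user mark) on/off
    playback_speed: int = 3       # one of, e.g., 5 selectable levels

# Example: the configuration of fig. 5 with a slightly slower demonstration sound.
settings = LearningSettings(show_guide_mark=True, show_user_mark=True,
                            playback_speed=2)
```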
Next, a learning process based on the learning function of the electronic dictionary 10 in the present embodiment will be described.
Fig. 6 is a flowchart for explaining the learning process performed by the learning processing program 12b of the electronic dictionary 10 in the present embodiment. Fig. 7 is a flowchart showing the process of learning step 1 (1st mode (demonstration sound mode)) in the learning process shown in fig. 6. Fig. 8 is a flowchart showing the process of learning step 2 (2nd mode (text mark mode)) in the learning process shown in fig. 6. Fig. 9 is a flowchart showing the process of learning step 3 (3rd mode (evaluation mode)) in the learning process shown in fig. 6.
First, when the execution of the learning function is instructed by, for example, an operation on a menu, the CPU11 causes the touch panel display unit 17 to display a selection screen of a learning content (text) to be learned. In the learning content selection screen, for example, a plurality of learning contents that can be learned are displayed in a list and can be arbitrarily designated by the operation of the learner.
When an arbitrary learning content is specified on the learning content selection screen (step A1), the CPU11 acquires learning progress information of the specified learning content from the learning management data 12e (step A2).
If the learning progress information shows that all learning steps are unlearned (YES in step A3), the CPU11 proceeds to learning step 1 (1st mode) (step A4). In the electronic dictionary 10 of the present embodiment, listening practice for the learning content is performed in learning step 1 (1st mode).
If not all learning steps are unlearned (NO in step A3), the CPU11 determines whether learning step 1 is unlearned. If learning step 1 is unlearned (YES in step A6), the CPU11 proceeds to learning step 1 (1st mode) (step A4). If the learning progress information indicates that learning step 1 is completed (NO in step A6), the CPU11 determines whether learning step 2 is unlearned. If learning step 2 is unlearned (YES in step A9), the CPU11 proceeds to learning step 2 (2nd mode) (step A7). In the electronic dictionary 10 of the present embodiment, speaking practice is performed in learning step 2 (2nd mode) on the same learning content for which listening practice was performed in learning step 1.
If the learning progress information indicates that learning step 2 is also completed (NO in step A9), the CPU11 proceeds to learning step 3 (3rd mode) (step A10). In the electronic dictionary 10 of the present embodiment, in learning step 3 (3rd mode), to confirm the results of learning steps 1 and 2, the learner reads aloud the same learning content practiced in those steps, and an evaluation result based on comparison with the demonstration sound is output.
In short, the CPU11 basically proceeds to learning step 2 (2nd mode) when execution of learning step 1 (1st mode) is completed, and to learning step 3 (3rd mode) when execution of learning step 2 (2nd mode) is completed.
That is, in the electronic dictionary 10 of the present embodiment, learning a foreign language can be performed effectively by executing learning steps 1 to 3 (1st to 3rd modes), which have different learning forms, in stages.
After learning step 1 ends, the CPU11 displays, for example, a confirmation screen for the transition to learning step 2. When the learner inputs an instruction confirming the transition on this screen (YES in step A5), the CPU11 proceeds to learning step 2 (step A7). If no confirming instruction is input (NO in step A5), the CPU11 executes learning step 1 again. That is, learning step 1 can be repeated at the learner's request.
Similarly, after learning step 2 ends, the CPU11 displays a confirmation screen for the transition to learning step 3. When the learner inputs an instruction confirming the transition (YES in step A8), the CPU11 proceeds to learning step 3 (step A10). If no confirming instruction is input (NO in step A8), the CPU11 executes learning step 2 again. Alternatively, when the learner inputs an instruction to return to learning step 1 on this confirmation screen, the CPU11 may execute learning step 1 again.
That is, when a learning content is selected and learning is started (when execution of the learning program begins), if the selected learning content is being studied for the first time (is unlearned), learning proceeds in the order of learning step 1 (processing in the demonstration sound mode), learning step 2 (processing in the text mark mode), and learning step 3 (processing in the evaluation mode).
Even when the selected learning content is being studied for the second or subsequent time rather than the first (when it is not unlearned), learning step 1 (processing in the demonstration sound mode) is executed before learning step 2 (processing in the text mark mode), and learning step 2 is executed before learning step 3 (processing in the evaluation mode).
Thus, even without the learner being particularly aware of it, learning steps 1 to 3 are carried out in stages in the correct order.
As described above, the electronic dictionary 10 according to the present embodiment basically executes learning steps 1 to 3 (1st to 3rd modes), which have different learning forms, in stages, but the same learning step can also be executed repeatedly at the learner's request, allowing solid learning in each of learning steps 1 and 2.
In the above description, learning steps 1 to 3 (1st to 3rd modes) are performed in stages, but any of learning steps 1 to 3 may be performed individually, or any combination of learning steps (modes) may be performed, according to a setting operation by the learner. For example, the 1st mode (steps A3, A4, A6, and A9) shown in fig. 6 may be omitted so that only the 2nd and 3rd modes are executed, or the 3rd mode (steps A8 and A10) may be omitted so that only the 1st and 2nd modes are executed. A sketch of this staged flow is shown below.
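The staged flow of fig. 6, with its repeatable steps and transition confirmations, could be sketched as follows (a simplification; the confirmation callback stands in for the transition confirmation screen, and the step handler is a stub):

```python
def execute_step(step: int, content_id: str) -> None:
    print(f"running learning step {step} for {content_id!r}")  # stub handler

def run_learning(content_id: str, confirm) -> None:
    """Run learning steps 1-3 in order, repeating a step until the learner
    confirms the transition to the next one."""
    step = 1
    while step <= 3:
        execute_step(step, content_id)
        if step == 3 or confirm(f"move on to learning step {step + 1}?"):
            step += 1  # advance only on the learner's confirmation
        # otherwise the same step is executed again, as the flowchart allows

run_learning("learning content 3", confirm=lambda message: True)
```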
Next, specific examples of learning steps 1 to 3 (1st to 3rd modes) are described.
First, learning step 1 (1st mode) is described with reference to the flowchart shown in fig. 7.
When learning step 1 starts, the CPU11 causes the touch panel display unit 17 to display the foreign-language passage on the 1st-mode learning screen, based on the text of the learning content selected for study.
Fig. 10 is a diagram showing an example of the learning screen DS1 for the 1st mode in the present embodiment. As shown in fig. 10, the foreign-language passage (text) to be studied is displayed.
The CPU11 starts outputting the demonstration sound GS from the sound output unit 19, based on the demonstration sound data, spoken by a native speaker, included in the learning content corresponding to the displayed passage (text), and starts displaying a mark (guide mark) on the text in accordance with the playback speed (playback timing) of the demonstration sound (step B1).
When a demonstration sound speed is set in the learning setting data 12f, the playback timing of the demonstration sound is controlled according to that playback speed. In this case, the same playback speed is used throughout learning steps 1 to 3 (1st to 3rd modes).
Thereafter, until playback of the demonstration sound is completed (NO in step B2), the CPU11 continues playing the demonstration sound GS and outputting it from the sound output unit 19 while controlling the display so that the display position of the mark changes in accordance with the playback timing, indicating the position in the text currently being read by the demonstration sound (step B1).
On the learning screen DS1 shown in fig. 10, the mark GM is produced by displaying the range of text covered by the demonstration sound so far in a form different from the rest of the text. In the example shown in fig. 10, the background color of that text range is displayed in a color different from the rest, which makes the mark GM easy to identify.
Fig. 11 shows an example of the mark GM changing as the demonstration sound is played back on the learning screen DS1 in the present embodiment. As shown in fig. 11 (A), when the demonstration sound has been played as far as "A certain man", the mark GM1 is displayed up to "A certain man".
Next, when the demonstration sound has been played as far as "…several daughters", the mark GM2 is displayed up to "…several daughters", as shown in fig. 11 (B), and when it has been played as far as "…were always", the mark GM3 is displayed up to "…were always", as shown in fig. 11 (C).
When playback of the demonstration sound finishes (YES in step B2), the CPU11 updates the learning progress information in the learning management data 12e to indicate completion of learning step 1 for the learning content being studied (step B3).
As described above, in learning step 1 (1st mode) of the electronic dictionary 10 in the present embodiment, the learner can confirm, via the mark GM on the text, the current reading position of the demonstration sound while listening to a native speaker's demonstration sound for the learning content text. The learner can therefore practice listening to the demonstration sound read by a native speaker, and can effectively learn what is needed to read the learning content text aloud like a native speaker in terms of reading time (reading speed), intonation, smoothness, and so on.
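A minimal sketch of the mark-update loop of step B1, assuming the demonstration sound comes with per-word end timestamps (the patent does not specify how the playback position is obtained):

```python
import time

# Hypothetical timing data: (word, end time in seconds within the audio).
demo_timing = [("A", 0.20), ("certain", 0.55), ("man", 0.90)]

def run_guide_mark(timing) -> None:
    start = time.monotonic()
    words = [word for word, _ in timing]
    for index, (_, end_time) in enumerate(timing):
        # Wait until the demonstration sound reaches the end of this word.
        time.sleep(max(0.0, end_time - (time.monotonic() - start)))
        print("mark GM covers:", " ".join(words[: index + 1]))  # highlight stand-in

run_guide_mark(demo_timing)
```

Learning step 2 (2nd mode) would run the same loop with the audio output muted, which is why the mark still moves at the demonstration sound's playback timing.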
Next, learning step 2 (2nd mode) is described with reference to the flowchart shown in fig. 8.
When learning step 2 starts, the CPU11 causes the touch panel display unit 17 to display the foreign-language passage on the 2nd-mode learning screen, based on the text of the learning content selected for study.
Fig. 12 is a diagram showing an example of the learning screen DS2 for the 2nd mode in the present embodiment. As shown in fig. 12, the foreign-language passage (text) to be studied is displayed in the same manner as on the learning screen DS1.
The CPU11 starts displaying the mark GM (guide mark) on the text in accordance with the playback timing of the demonstration sound, based on the demonstration sound data, spoken by a native speaker, included in the learning content corresponding to the displayed passage (text) (step C1). That is, in learning step 2 the demonstration sound is not output from the sound output unit 19; instead, the mark GM indicates the position a native speaker would have reached when reading the text.
Thereafter, until the display of the mark GM corresponding to the playback timing of the demonstration sound reaches the end of the text (NO in step C2), the CPU11 controls the display so that the display position of the mark GM changes in accordance with that playback timing, indicating the reading position a native speaker would occupy in the text (step C1). The mark GM is displayed in the same manner as in learning step 1 (1st mode), so the description is omitted.
By reading aloud in time with the mark GM as its position shifts across the learning content text, the learner can practice reading the text in the same reading time (at the same reading speed) as a native speaker. Building on the results of learning step 1 (1st mode), the learner can effectively practice speaking like a native speaker not only in reading time (reading speed) but also in intonation and smoothness.
When the display of the mark GM corresponding to the playback timing of the demonstration sound reaches the end of the text (YES in step C2), the CPU11 updates the learning progress information in the learning management data 12e to indicate completion of learning step 2 for the learning content being studied (step C3).
As described above, in learning step 2 (2nd mode) of the electronic dictionary 10 in the present embodiment, the reading position corresponding to the playback timing of the native speaker's demonstration sound can be confirmed via the mark GM on the learning content text. The learner can therefore effectively practice reading the learning content text in the same way as a native speaker.
Next, learning step 3 (3rd mode) is described with reference to the flowchart shown in fig. 9.
When learning step 3 starts, the CPU11 causes the touch panel display unit 17 to display the foreign-language passage on the 3rd-mode learning screen, based on the text of the learning content selected for study.
Fig. 13 is a diagram showing an example of the learning screen DS3 for the 3rd mode in the present embodiment. As shown in fig. 13, the foreign-language passage (text) to be studied is displayed in the same manner as on the learning screen DS1.
First, the CPU11 refers to the learning setting data 12f to determine whether to display (on/off) the 1st mark (guide mark), which indicates the playback position of the demonstration sound, and the 2nd mark (user mark), which indicates the reading position of the reading sound (step D1). The following description covers both the case where the 1st mark (guide mark) is displayed and the case where it is not.
When the learning setting data 12f specifies that the 1st mark GM (guide mark) is to be displayed (YES in step D2), the CPU11 starts voice input from the voice input unit 18 in order to capture the learner's reading of the text (step D3), and starts displaying the 1st mark GM on the text in accordance with the playback timing of the demonstration sound, based on the demonstration sound data (step D4). The 1st mark GM is displayed in the same manner as in learning step 2 (2nd mode). As in learning step 2 (2nd mode), the demonstration sound is not output.
The CPU11 records the learner's reading sound input from the voice input unit 18 for the evaluation described later, and detects the uttered content to determine the current reading position in the text (step D5). The CPU11 then starts displaying the 2nd mark UM (user mark) on the text at the reading position determined from the reading sound (step D6).
Thereafter, until the display of the 1st mark GM corresponding to the playback timing of the demonstration sound reaches the end of the text (that is, until the playback time of the demonstration sound elapses) (NO in step D7), the CPU11 controls the display so that the display position of the 1st mark GM changes in accordance with that playback timing, indicating the reading position a native speaker would occupy in the text (step D4).
The CPU11 also controls the display according to the learner's reading sound input from the voice input unit 18, changing the display position of the 2nd mark UM so that the current reading position in the text is clearly shown (steps D5 and D6).
On the learning screen DS3 shown in fig. 13, the 1st mark GM, displayed in accordance with playback of the demonstration sound, and the 2nd mark UM, displayed in accordance with the learner's reading sound US, are shown on the learning content text. The 1st mark GM and the 2nd mark UM use different display forms so that the learner can easily tell them apart. For example, the 1st mark GM is displayed in the same form as in the 1st and 2nd modes, so the learner recognizes it as the mark corresponding to the demonstration sound, while the 2nd mark UM is displayed by underlining the text so that it does not overlap the 1st mark GM.
Fig. 14 shows an example of the 1st mark GM and the 2nd mark UM changing on the learning screen DS3 in the present embodiment. In fig. 14 (A), when the 1st mark GM1 corresponding to the demonstration sound is displayed up to "A certain man", the position to which the learner's reading sound has progressed is determined to be "A certain man", and the display is changed so that the 2nd mark UM1 shows that reading has reached "A certain man".
Next, when the display of the 1st mark GM2 is changed to the position "…several daughters" in accordance with the playback timing of the demonstration sound, the position to which the learner's reading sound has progressed is determined to be "…who were", and the display is changed so that the 2nd mark UM2 shows that reading has reached "…who were", as shown in fig. 14 (B). When the display of the 1st mark GM3 is changed to the position "…were always", the position to which the learner's reading sound has progressed is determined to be "…always quarreling", and the display is changed so that the 2nd mark UM3 shows that reading has reached "…always quarreling", as shown in fig. 14 (C).
In this way, by displaying the 2nd mark UM, which shows how far the learner has read the text, at the same time as the 1st mark GM, which is displayed in accordance with the playback timing of the demonstration sound, the learner can easily see, by comparing the display positions of the two marks, whether his or her reading is ahead of, in time with, or behind the demonstration sound. The learner can therefore adjust his or her reading so as to read the text at the same speed as the demonstration sound.
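The comparison the learner makes visually could be expressed as follows, treating both marks as word indices into the text (a sketch only; in the patent the 2nd mark is derived from speech recognition of the reading sound):

```python
def compare_marks(gm_index: int, um_index: int) -> str:
    """Relate the user mark (UM) position to the guide mark (GM) position."""
    if um_index > gm_index:
        return "ahead of the demonstration sound"
    if um_index < gm_index:
        return "behind the demonstration sound"
    return "in time with the demonstration sound"

print(compare_marks(gm_index=4, um_index=3))  # behind the demonstration sound
```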
When the display of the mark UM corresponding to the input reading sound US reaches the end of the text (the learner has read the text to the end) (YES in step D7), the CPU11 evaluates the learner's reading sound by comparing it with the demonstration sound (step D12). The evaluation of the reading sound is described later.
On the other hand, when the learning setting data 12f specifies that the 1st mark GM (guide mark) is not to be displayed (NO in step D2), the CPU11 starts voice input from the voice input unit 18 in order to capture the learner's reading of the text (step D8).
The CPU11 likewise records the learner's reading sound input from the voice input unit 18, and detects the uttered content to determine the current reading position in the text (step D9). The CPU11 then starts displaying the 2nd mark UM (user mark) on the text at the reading position determined from the reading sound (step D10).
Then, until input of the learner's reading sound US ends (the learner has read the text to the end) (NO in step D11), the CPU11 controls the display according to the learner's reading sound input from the voice input unit 18, changing the display position of the 2nd mark UM so that the current reading position in the text is clearly shown (steps D9 and D10).
In this case, the learning screen DS3 shown in fig. 13 displays only the 2nd mark UM, in the same manner as described above, without displaying the 1st mark GM. A detailed description is omitted.
As described above, not displaying the 1st mark GM on the learning content text makes reading aloud in time with the demonstration sound more difficult, and an evaluation result for reading under these conditions can be obtained. In other words, an evaluation of reading aloud under harder conditions is obtained, allowing the learning results to be verified.
When input of the reading sound US ends (YES in step D11), the CPU11 evaluates the learner's reading sound by comparing it with the demonstration sound (step D12). The evaluation of the reading sound is described later.
When the learning setting data 12f specifies that the 2nd mark UM is not to be displayed, the processing for displaying the 2nd mark UM is simply omitted from the description above.
In the above description, the 1st mark GM is shown by giving the background a different color and the 2nd mark UM by underlining, but any other display form may be used as long as it clearly indicates a position in the text. For example, the character color may be changed, the font may be changed, or characters may be erased. It is also possible, for instance, to use a 1st color on the upper half of the text (characters or background) to represent the 1st mark GM and a 2nd color, different from the 1st, on the lower half to represent the 2nd mark UM.
Although the 1st mark GM and the 2nd mark UM indicate ranges of the text, a mark (or graphic) indicating only the position of the current word or character in the demonstration sound (or reading sound) may be displayed instead.
In fig. 11 and 14, the display positions of the 1st mark GM and the 2nd mark UM change in units of words, but they may instead change in units of characters.
Next, evaluation of the reading sound of the learner will be described.
The learning content includes a demonstration sound of a native speaker reading the text, together with evaluation criterion data for evaluating the learner's reading sound against that demonstration sound. The evaluation criterion data is created as follows. For example, the demonstration sound is digitized, physical quantities such as power and frequency are calculated for each frame obtained by dividing the sound at fixed intervals, and the time transitions of these quantities are used to determine the boundaries between uttered words or sentences, consonant/vowel segments, the intensities of consonants and vowels and their changes, and so on. Based on these determination results, the native speaker's demonstration sound is quantified as evaluation criterion data.
In learning step 3 (3rd mode), the CPU11 applies the same processing as for the demonstration sound to the recorded reading sound produced by the learner reading the text, quantifying the reading sound as evaluation target data.
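In the spirit of the frame-wise analysis described above (the exact features and frame length are not specified in the patent; this sketch assumes 25 ms frames, frame power, and a crude peak frequency):

```python
import numpy as np

def frame_features(samples: np.ndarray, rate: int, frame_ms: int = 25):
    """Per-frame power and dominant frequency for a mono audio signal."""
    frame_len = int(rate * frame_ms / 1000)
    features = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start : start + frame_len]
        power = float(np.mean(frame ** 2))               # frame energy
        spectrum = np.abs(np.fft.rfft(frame))
        peak_hz = float(np.argmax(spectrum) * rate / frame_len)
        features.append((power, peak_hz))
    return features

# Example: one second of a synthetic 440 Hz tone sampled at 16 kHz.
rate = 16000
t = np.arange(rate) / rate
print(frame_features(np.sin(2 * np.pi * 440 * t), rate)[:2])
```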
The CPU11 compares the evaluation criterion data of the demonstration sound with the evaluation target data of the reading sound to score a number of parameters, and then combines the scores of these parameters into a comprehensive evaluation of the reading sound. The parameters include, for example, reading time, consonant determination (consonant intensity), intonation, and smoothness.
The CPU11 basically awards a higher score the closer the evaluation target data is to the evaluation criterion data. For reading time, the score may be higher the closer the reading times of the demonstration sound and the reading sound are to each other, or alternatively higher when the reading sound is shorter than the demonstration sound.
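A sketch of how such parameter scores might be combined (the patent states only that closer values score higher; the 0-100 scale and equal weights are assumptions):

```python
def closeness_score(reference: float, observed: float) -> float:
    """Score 0-100, higher the closer the observed value is to the reference."""
    if reference == 0:
        return 0.0
    return max(0.0, 100.0 * (1.0 - abs(observed - reference) / reference))

def comprehensive_score(reference: dict, observed: dict) -> float:
    scores = [closeness_score(reference[p], observed[p]) for p in reference]
    return sum(scores) / len(scores)  # equal weights assumed

reference = {"reading_time": 9.0, "consonant": 0.8, "intonation": 0.6, "smoothness": 0.7}
observed = {"reading_time": 10.2, "consonant": 0.7, "intonation": 0.5, "smoothness": 0.7}
print(round(comprehensive_score(reference, observed), 1))  # e.g. 89.4
```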
Further, the reading sound may be evaluated not only for the text as a whole but also in units of sentences or words, with an evaluation value obtained for each. In this case, the learner may designate a range of the text to be evaluated, and an evaluation value for that range is output. Likewise, instead of reading the entire text, the learner may read only part of it and have that part evaluated. For example, with the 1st mark GM displayed, the learner starts reading at the moment the 1st mark GM reaches the sentence to be evaluated, and the CPU11 evaluates the input reading sound sentence by sentence. Thus, even for a long learning content text, an evaluation result can be output when reading is broken off partway through, or for selected sentences the learner wants to focus on, so learning can proceed efficiently.
When the evaluation of the reading sound is complete, the CPU11 updates the learning progress information in the learning management data 12e to indicate completion of learning step 3 for the learning content being studied (step D13), and displays an evaluation result screen showing the evaluation of the reading sound on the touch panel display unit 17 (step D14).
Fig. 15 is a diagram showing an example of the evaluation result screen DS4 in the present embodiment.
The evaluation result screen DS4 shown in fig. 15 displays, together with the comprehensive evaluation D41, a parameter score D42 for each of a plurality of parameters (reading time, consonant determination (consonant intensity), intonation, and smoothness) and a parameter score table D43 in which these scores are graphed.
By checking the comprehensive evaluation D41 and the parameter scores D42 on the evaluation result screen DS4, the learner can easily grasp the results of studying one learning content through learning steps 1 to 3 (1st to 3rd modes).
On the evaluation result screen DS4, for example, a "sound reproduction" button D44, a "relearning" button D45, and an "end" button D46 are provided.
When the CPU11 detects selection of the "sound reproduction" button D44 (YES in step D15), it executes sound playback processing for the demonstration sound or the recorded reading sound (step D16). For example, the learner chooses either the demonstration sound or the reading sound, and the CPU11 outputs the chosen sound from the sound output unit 19. At this time, the CPU11 displays the learning screen DS3 shown in fig. 13 so that the changes of the 1st mark GM and the 2nd mark UM can be followed, just as when the reading sound was input. The learner can thus check his or her reading (reading speed and the like) while listening to the recorded reading sound, and can visually confirm the evaluation of smoothness by comparing how the 1st mark GM and the 2nd mark UM change.
When the CPU11 detects selection of the "relearning" button D45 (YES in step D17), it returns to step D1 and executes the same processing as described above.
When the CPU11 detects selection of the "end" button D46 (YES in step D18), it ends the processing of learning step 3 (3rd mode).
As described above, in learning step 3 (3rd mode) of the electronic dictionary 10 in the present embodiment, the learner's reading sound is evaluated against the demonstration sound for the text, so the learner can objectively grasp the reading speed of the text and the correctness of pronunciation, and can easily feed the results back into language learning.
In the electronic dictionary 10 of the present embodiment, a learning process comprising learning steps 1 to 3 (1st to 3rd modes) can be executed, so language learning can be performed efficiently.
Furthermore, by performing learning steps 2 and 3 (2nd and 3rd modes), or learning steps 1 and 2 (1st and 2nd modes), in an order suited to the learner, language learning that is effective for the individual learner can be carried out.
Fig. 16 is a schematic diagram of a learning support system according to an embodiment of the present invention. The learning support system 1 of the present embodiment further includes a server 20. Fig. 16 shows a functional block diagram showing a configuration of an electronic circuit of the server 20.
The server 20 has a computer configuration that reads programs recorded on various recording media or programs transferred thereto and controls operations by the read programs, and the electronic circuit includes a Central Processing Unit (CPU) 21.
The CPU21 functions as a control unit for controlling the entire server 20. The CPU21 controls the operations of the circuit elements in accordance with a control program stored in advance in the memory 22, a control program read from a recording medium 23 such as a ROM card into the memory 22 via a recording medium reading unit 24, or a control program downloaded from an external device (such as a server) via a network such as the internet (not shown) and read into the memory 22.
The control programs stored in the memory 22 include a dictionary control program 22a, a learning processing program 22b, and the like. The programs and data stored in the memory 22 shown in fig. 16 are substantially the same as the programs and data shown in fig. 1 and named the same, and description thereof is omitted.
In the above description, the learning process by the learning processing program 12b is executed in the electronic dictionary 10, but the learning process may instead be executed in the server 20 by the learning processing program 22b. In this case, the CPU21 (control unit) of the server 20 executes the same processing as the learning process by the learning processing program 12b described above, and transmits the processing results to the electronic device (the electronic dictionary 10 or the like) via the network N so that the screens (DS1 to DS4 and the like) are displayed and the sound is output as described above.
In general, the server 20 has higher processing capability than electronic devices such as the electronic dictionary 10, so having the server 20 execute the learning process greatly reduces the load on the electronic dictionary 10 and improves the efficiency of the process as a whole. In addition, a large amount of learning content data can be accumulated in the server 20, so many kinds of learning content can be provided to a large number of learners.
Alternatively, the same processing as described above may be executed by a system in which the electronic device (the electronic dictionary 10 or the like) and the server 20 operate in cooperation. In this case, the CPU11 (control unit) of the electronic device and the CPU21 (control unit) of the server 20 together function as the control unit of the system by operating on the basis of the learning programs. The electronic device and the server 20 each execute the part of the learning process assigned to them, and the processing results are output on the electronic device.
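The device/server division of labor could look roughly like this (endpoint, payload, and field names are all assumptions; the patent defines no protocol):

```python
import json

def server_side(request_json: str) -> str:
    """Server 20: evaluate the uploaded reading sound (evaluation stubbed)."""
    request = json.loads(request_json)
    audio = bytes.fromhex(request["audio"])  # decode the recorded reading sound
    # ...build evaluation target data from `audio` and compare it with the
    # evaluation criterion data stored with the learning content...
    return json.dumps({"comprehensive": 89.4, "reading_time": 86.7})

def device_side(audio_bytes: bytes, content_id: str) -> dict:
    """Electronic dictionary 10: record audio, delegate evaluation, show DS4."""
    request = {"content": content_id, "audio": audio_bytes.hex()}
    response = server_side(json.dumps(request))  # stands in for a network call
    return json.loads(response)                  # rendered as evaluation screen DS4

print(device_side(b"\x00\x01", "learning content 3")["comprehensive"])
```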
The methods described in the embodiments, that is, the methods such as the processing shown in the flowcharts, can be stored and distributed as a program to be executed by a computer on a recording medium such as a memory card (ROM card, RAM card, etc.), a magnetic disk (floppy disk, hard disk, etc.), an optical disk (CD-ROM, DVD, etc.), a semiconductor memory, or the like. The computer reads a program recorded in an external recording medium and controls the operation using the program, thereby realizing the same processing as the functions described in the embodiments.
Further, data of a program for realizing each method may be transmitted as program codes over a network (internet), or program data may be acquired from a computer connected to the network, thereby realizing the same functions as those of the above-described embodiments.
The invention of the present application is not limited to the embodiments, and various modifications can be made in the implementation stage without departing from the gist thereof. Further, the embodiments include inventions in various stages, and various inventions can be extracted by appropriate combinations of a plurality of disclosed constituent elements. For example, even if some of the constituent elements are deleted from the whole constituent elements shown in the embodiments or some of the constituent elements are combined, if the problems described in the section of the problem to be solved by the invention can be solved and the effects described in the section of the effect of the invention can be obtained, the structure in which the constituent elements are deleted or combined can be extracted as the invention.

Claims (13)

1. An electronic device, characterized in that,
the electronic device is provided with a control unit that executes processing based on the following modes:
a text flag mode in which a demonstration sound corresponding to a text selected as a processing target is not output from a sound output unit, and a mark indicating a playback position of the demonstration sound is displayed for the text corresponding to the demonstration sound displayed on a display unit in accordance with a playback timing of the demonstration sound; and
an evaluation mode in which the text corresponding to the demonstration sound is displayed on the display unit, a reading sound of the text being read aloud is input from a sound input unit, the reading sound is evaluated based on the demonstration sound, and an evaluation result is displayed on the display unit.
2. The electronic device of claim 1,
the control unit executes the following processing:
when the evaluation mode is executed, the demonstration sound is not output from the sound output unit, and the text corresponding to the demonstration sound is displayed on the display unit.
3. The electronic device of claim 1 or 2,
the control unit executes the following processing:
when the user has not yet performed learning by executing the processing in the text flag mode, the processing in the text flag mode is executed before the user performs learning by executing the processing in the evaluation mode.
4. The electronic device according to any one of claims 1 to 3,
the control unit executes the following processing:
executing a demonstration sound mode in which the demonstration sound is output from the sound output unit, and a mark representing the playback position of the demonstration sound is displayed for the text corresponding to the demonstration sound displayed on the display unit in accordance with the playback timing of the demonstration sound.
5. The electronic device of claim 4,
the control unit executes the following processing:
when the user has not yet performed learning by executing the processing in the demonstration sound mode, the processing in the demonstration sound mode is executed before the user performs learning by executing the processing in the text flag mode.
6. The electronic device of claim 4 or 5,
the control unit executes the following processing:
storing progress information indicating the execution status of the demonstration sound mode, the text flag mode, and the evaluation mode for each text,
any one of the demonstration sound mode, the text flag mode, and the evaluation mode is executed based on the progress information of the text selected as the processing target.
7. The electronic device of claim 6,
the control unit executes the following processing:
the processing in the text flag mode is executed after the processing in the demonstration sound mode, and the processing in the evaluation mode is executed after the processing in the text flag mode.
8. The electronic device according to any one of claims 1 to 7,
the control unit executes the following processing:
when the evaluation mode is executed, both a 1st mark representing the playback position of the demonstration sound and a 2nd mark representing a reading position of the reading sound can be displayed on the text.
9. The electronic device of claim 8,
the control unit executes the following processing:
when both the 1st mark and the 2nd mark are displayed, the 1st mark and the 2nd mark are displayed in different display forms.
10. The electronic device of claim 8 or 9,
the control unit executes the following processing:
in the evaluation mode, whether or not to display the 1st mark and whether or not to display the 2nd mark can be set independently of each other,
and the display or non-display of the 1st mark and the 2nd mark during execution of the evaluation mode is controlled in accordance with the settings.
11. The electronic device according to any one of claims 8 to 10,
the control unit executes the following processing:
displaying the mark displayed in the text flag mode and the 1st mark displayed in the evaluation mode in the same display form,
and displaying the 1st mark displayed in the evaluation mode and the 2nd mark displayed in the evaluation mode in different display forms.
12. An information processing method in an electronic device having a control unit, characterized in that,
the control unit executes processing based on the following modes:
a text flag mode in which a demonstration sound corresponding to a text selected as a processing target is not output from a sound output unit, and a mark indicating a playback position of the demonstration sound is displayed for the text corresponding to the demonstration sound displayed on a display unit in accordance with a playback timing of the demonstration sound; and
an evaluation mode in which the text corresponding to the demonstration sound is displayed on the display unit, a reading sound of the text being read aloud is input from a sound input unit, the reading sound is evaluated based on the demonstration sound, and an evaluation result is displayed on the display unit.
13. A recording medium having recorded thereon a program that causes a computer to execute processing based on the following modes:
a text flag mode in which a demonstration sound corresponding to a text selected as a processing target is not output from a sound output unit, and a mark indicating a playback position of the demonstration sound is displayed for the text corresponding to the demonstration sound displayed on a display unit in accordance with a playback timing of the demonstration sound; and
an evaluation mode in which the text corresponding to the demonstration sound is displayed on the display unit, a reading sound of the text being read aloud is input from a sound input unit, the reading sound is evaluated based on the demonstration sound, and an evaluation result is displayed on the display unit.
CN202211092177.XA 2021-09-21 2022-09-07 Electronic device, information processing method, and storage medium Pending CN115840554A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2021-153015 2021-09-21
JP2021153015 2021-09-21
JP2022-047169 2022-03-23
JP2022047169A JP2023046222A (en) 2021-09-21 2022-03-23 Electronic apparatus, information processing method and program

Publications (1)

Publication Number Publication Date
CN115840554A true CN115840554A (en) 2023-03-24

Family

ID=85574908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211092177.XA Pending CN115840554A (en) 2021-09-21 2022-09-07 Electronic device, information processing method, and storage medium

Country Status (1)

Country Link
CN (1) CN115840554A (en)

Similar Documents

Publication Publication Date Title
US8826137B2 (en) Screen reader having concurrent communication of non-textual information
US20150170648A1 (en) Ebook interaction using speech recognition
JP6535998B2 (en) Voice learning device and control program
JP2002503353A (en) Reading aloud and pronunciation guidance device
JP2010198241A (en) Chinese input device and program
JP7166580B2 (en) language learning methods
US20220036759A1 (en) Augmentative and alternative communication (aac) reading system
JP4914808B2 (en) Word learning device, interactive learning system, and word learning program
US10825357B2 (en) Systems and methods for variably paced real time translation between the written and spoken forms of a word
CN115840554A (en) Electronic device, information processing method, and storage medium
JP5485050B2 (en) Electronic device, control method thereof, and control program
US20150127352A1 (en) Methods, Systems, and Tools for Promoting Literacy
JP2023046222A (en) Electronic apparatus, information processing method and program
JP2001005809A (en) Device and method for preparing document and recording medium recording document preparation program
JP6056938B2 (en) Program, processing method, and information processing apparatus
KR100620735B1 (en) Mobile communication terminal having function of writing study and method thereof
JP4534557B2 (en) Information display control device and information display control processing program
CN113452871A (en) System and method for automatically generating lessons from videos
CN112951013A (en) Learning interaction method and device, electronic equipment and storage medium
JP3762300B2 (en) Text input processing apparatus and method, and program
KR20010000156A (en) Method for studying english by constituent using internet
JP4677869B2 (en) Information display control device with voice output function and control program thereof
JP7379968B2 (en) Learning support devices, learning support methods and programs
KR20200083335A (en) Interactive education platform and device
CN115904172A (en) Electronic device, learning support system, learning processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination