CN111443890A - Reading assisting method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111443890A
Authority
CN
China
Prior art keywords
user
pronunciation
reading
sentence
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010245115.2A
Other languages
Chinese (zh)
Inventor
徐利民
陈宇飞
张姣姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Topronin Beijing Education Technology Co ltd
Original Assignee
Topronin Beijing Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Topronin Beijing Education Technology Co ltd
Publication of CN111443890A
Priority to PCT/CN2021/083833 (published as WO2021197296A1)
Priority to TW110111739A (published as TW202139180A)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; sound output
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51: Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination
    • G10L 25/60: Speech or voice analysis techniques specially adapted for measuring the quality of voice signals
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 7/00: Arrangements for writing information into, or reading information out from, a digital store
    • G11C 7/16: Storage of analogue signals in digital stores using an arrangement comprising analogue/digital [A/D] converters, digital memories and digital/analogue [D/A] converters

Abstract

The invention provides a reading assisting method and device, a storage medium, and an electronic device. A text to be read is displayed and played sentence by sentence, and the current sentence is hidden after its playback finishes; the user then repeats the current sentence and the user's rereading pronunciation is recorded. The rereading pronunciation is evaluated by comparing it with a demonstration pronunciation. Playing the text sentence by sentence and having the user repeat it improves the user's engagement and learning effect; hiding the current sentence after playback prevents the user from referring to the displayed text while repeating, which improves listening and speaking ability; and evaluating the user's rereading pronunciation helps the user learn and master the content in a more targeted way.

Description

Reading assisting method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of language learning, and in particular to a reading assisting method and device, a computer-readable storage medium, and an electronic device.
Background
With the widespread use of mobile terminals such as mobile phones, more and more applications run on them, so many activities, such as reading books, news, and study materials, can now be done on a mobile terminal, and many products support reading and learning online. However, these products usually only present text for the user to read, or play the text aloud. This is clearly not enough for language learners: a user learning a foreign language may be unable to read a foreign-language text unaided, and merely listening to the text played aloud is passive learning, which dampens the user's enthusiasm and may lead to giving up.
Disclosure of Invention
In view of this, embodiments of the present invention provide a reading assisting method and device, a storage medium, and an electronic device. A text to be read is displayed and played sentence by sentence, and the current sentence is hidden after its playback finishes; the user repeats the current sentence, and the user's rereading pronunciation is recorded and evaluated by comparison with a demonstration pronunciation. Playing the text sentence by sentence and having the user repeat it improves the user's engagement and learning effect; hiding the current sentence after playback prevents the user from referring to the displayed text while repeating, which improves listening and speaking ability; and evaluating the user's rereading pronunciation helps the user learn and master the content in a more targeted way.
According to one aspect of the present invention, an embodiment provides a reading assisting method including a rereading mode. The rereading mode includes: displaying and playing the text to be read sentence by sentence; hiding the current sentence after its playback finishes; recording the user's rereading pronunciation, i.e., the voice of the user repeating the current sentence; and comparing a demonstration pronunciation with the user's rereading pronunciation to obtain a rereading evaluation result.
According to another aspect of the present invention, an embodiment provides a reading assisting device including a rereading module. The rereading module includes: a first display unit for displaying and playing the text to be read sentence by sentence; a hiding unit for hiding the current sentence after its playback finishes; a first recording unit for recording the user's rereading pronunciation, i.e., the voice of the user repeating the current sentence; and a first comparison unit for comparing a demonstration pronunciation with the user's rereading pronunciation to obtain a rereading evaluation result.
According to another aspect of the present invention, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program for executing any one of the methods described above.
According to another aspect of the present invention, an embodiment of the present invention provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor, the processor being configured to perform any one of the methods described above.
In the reading assisting method and device, storage medium, and electronic device provided by the embodiments of the invention, the text to be read is displayed and played sentence by sentence, the current sentence is hidden after its playback finishes, the user repeats the current sentence, and the user's rereading pronunciation is recorded and evaluated by comparison with a demonstration pronunciation. Playing the text sentence by sentence and having the user repeat it improves the user's engagement and learning effect; hiding the current sentence after playback prevents the user from referring to the displayed text while repeating, which improves listening and speaking ability; and evaluating the user's rereading pronunciation helps the user learn and master the content in a more targeted way.
Drawings
Fig. 1 is a flowchart illustrating a method for reading assistance according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application.
Fig. 3 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application.
Fig. 4 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application.
Fig. 5 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application.
Fig. 6 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application.
Fig. 7 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application.
Fig. 8 is a flowchart illustrating a method for creating an evaluation criterion according to an embodiment of the present application.
Fig. 9 is a flowchart illustrating a text recommendation method according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a device for assisting reading according to an embodiment of the present application.
Fig. 11 is a block diagram of an electronic device provided in an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Further, since the same reference numerals denote the same components or the same method steps throughout the exemplary embodiments, once an embodiment has been described, the other exemplary embodiments describe only the structures or methods that differ from it.
Throughout the specification and claims, when one element is described as being "connected" to another element, it may be "directly connected" to the other element or "electrically connected" to it through a third element. Furthermore, unless explicitly stated otherwise, the term "comprising" should be understood as including the stated features without excluding any other features.
Currently, application software products offered for reading only provide text for the user to read, or deliver the text by voice playback, which is clearly not enough for language learners. Many users learn a foreign language by reading foreign-language texts, and they may well be unable to finish such texts unaided; if a foreign-language text is merely played to the user as audio, it is hard to absorb fully. Such learning is passive, reduces the user's enthusiasm, and may lead to giving up.
In order to solve the above problems, embodiments of the present invention provide a reading assisting method and device, a storage medium, and an electronic device. A text to be read is displayed and played sentence by sentence, and the current sentence is hidden after its playback finishes; the user repeats the current sentence, and the user's rereading pronunciation is recorded and evaluated by comparison with a demonstration pronunciation. Playing the text sentence by sentence and having the user repeat it improves the user's engagement and learning effect; hiding the current sentence after playback prevents the user from referring to the displayed text while repeating, which improves listening and speaking ability; and evaluating the user's rereading pronunciation helps the user learn and master the content in a more targeted way.
The following detailed description of the embodiments of the present application refers to the accompanying drawings:
Fig. 1 is a flowchart illustrating a method for reading assistance according to an embodiment of the present application. As shown in fig. 1, the reading assisting method includes a rereading mode, which includes the following steps:
step 110: and displaying and playing the text to be read sentence by sentence.
When a user starts reading and the current reading mode is a repeated reading mode, the text to be read is displayed and played sentence by sentence on the display interface of the application software installed on the terminal equipment such as the mobile phone, namely, only the current sentence is displayed and played on the display interface of the application software, and other sentences in the text to be read are not displayed, so that the interference of other sentences on the learning of the current sentence is avoided, and the concentration degree of the user is improved.
Step 120: Hiding the current sentence after its playback finishes.
While the current sentence is being played, it is displayed on the display interface; after its playback finishes, it is hidden, i.e., no sentence of the text to be read is displayed. This prevents the user from merely reading along with the displayed text, which would hinder listening and memorization, and so improves the user's retention of the current sentence.
Step 130: Recording the user's rereading pronunciation; the user's rereading pronunciation is the voice of the user repeating the current sentence.
A recording device is started to capture the user's rereading pronunciation: after the current sentence has been played, it is hidden on the display interface, and at the same time the recording device is started to record the user's voice as the user repeats the sentence. The recording device may include a microphone or similar device built into the terminal device.
Step 140: Comparing the demonstration pronunciation with the user's rereading pronunciation to obtain a rereading evaluation result.
The recorded rereading pronunciation is compared with the demonstration pronunciation, which is the standard pronunciation of the current sentence, to obtain a rereading evaluation result. The result is given according to the similarity between the user's rereading pronunciation and the demonstration pronunciation; it may include pass and fail, and a passing result may be further graded, e.g., fine, good, or excellent.
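The patent does not specify how the similarity comparison of step 140 is computed. As one hedged illustration only, a common approach is dynamic time warping (DTW) over acoustic feature sequences, with the resulting similarity score mapped onto the pass/fail and fine/good/excellent grades; all thresholds below are assumptions, not taken from the patent:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences
    (lists of equal-length feature vectors), tolerant of tempo differences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames' feature vectors
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def similarity(user_frames, demo_frames):
    """Map DTW distance to a similarity score in (0, 1]; identical input gives 1.0."""
    return 1.0 / (1.0 + dtw_distance(user_frames, demo_frames))

def grade(score, pass_threshold=0.6):
    """Map a similarity score in [0, 1] to an evaluation result.
    The tier cut-offs are illustrative assumptions."""
    if score < pass_threshold:
        return "fail"
    if score >= 0.9:
        return "excellent"
    if score >= 0.75:
        return "good"
    return "fine"
```

In practice the feature sequences would be something like MFCC frames extracted from the recorded and demonstration audio; here they are plain lists of vectors so the sketch stays self-contained.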
An embodiment of the invention thus provides a reading assisting method in which the text to be read is displayed and played sentence by sentence, the current sentence is hidden after its playback finishes, the user repeats the current sentence, and the user's rereading pronunciation is recorded and evaluated against a demonstration pronunciation. Playing the text sentence by sentence and having the user repeat it improves the user's engagement and learning effect; hiding the current sentence after playback prevents the user from referring to the displayed text while repeating, which improves listening and speaking ability; and evaluating the user's rereading pronunciation helps the user learn and master the content in a more targeted way.
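As an illustration only, the rereading-mode flow of steps 110 to 140 can be sketched as follows; `show`, `play`, `hide`, `record`, and `evaluate` are hypothetical callbacks standing in for the app's display, audio playback, recording, and scoring facilities, and the sentence splitter is a simplistic placeholder:

```python
import re

def split_sentences(text):
    """Split the text to be read into sentences on ., ! or ? boundaries."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def reread_mode(text, show, play, hide, record, evaluate):
    """Steps 110-140: show and play each sentence, hide it after
    playback, record the user's repetition, and evaluate it."""
    results = []
    for sentence in split_sentences(text):
        show(sentence)           # step 110: display only the current sentence
        play(sentence)           # step 110: play the demonstration pronunciation
        hide()                   # step 120: hide the text before the user repeats
        user_audio = record()    # step 130: record the rereading pronunciation
        results.append(evaluate(sentence, user_audio))  # step 140: evaluate
    return results
```

A caller would wire these callbacks to the terminal device's UI, speaker, and microphone; the sketch captures only the ordering of the steps.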
Fig. 2 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application. As shown in fig. 2, after step 140, the method may further include:
step 150: and when the repeated reading evaluation result indicates that the repeated reading pronunciation of the user does not pass, prompting the user.
When the rereading evaluation result is failure, the user is prompted to fail, in one embodiment, the prompting mode can be that encouraging or comforting words such as a prompt tone 'refuel' or 'continue refuel' are played, or the encouraging or comforting words are displayed through characters.
In an embodiment, as shown in fig. 2, after step 140, the above-mentioned rereading mode may further include the following steps:
step 160: and when the rereading evaluation result indicates that the rereading pronunciation of the user does not pass, rereading the pronunciation of the user again to obtain the rereading pronunciation.
When the repeated pronunciation of the user does not pass, the user can repeat the pronunciation again and record the repeated pronunciation of the user again to obtain the repeated pronunciation. The mastery degree of the user can be improved through multiple exercises for the sentences which are not mastered or are not high in mastery degree by the client, and in one embodiment, the current sentence can be displayed and played again before the pronunciation of the user is recorded again, so that the user can learn the current sentence again.
Step 170: and comparing the demonstration pronunciation with the repeated pronunciation to obtain a repeated pronunciation evaluation result of the repeated pronunciation.
And after recording the rereading pronunciation of the user, comparing the demonstration pronunciation with the rereading pronunciation to obtain a rereading evaluation result of the rereading pronunciation.
In an embodiment, as shown in fig. 2, when the number of recordings does not exceed a preset first count threshold and the last rereading evaluation result is a fail, step 160 is executed again; when the number of recordings exceeds the first count threshold and the last rereading evaluation result is still a fail, step 180 is executed.
Step 180: Provide a selection popup for the user. The selection popup includes options to skip the current sentence and to record again. It should be understood that the options of the selection popup may be set according to the requirements of the actual application, and the application is not limited in this respect.
While the user's rereading pronunciation keeps failing and the number of recordings does not exceed the preset first count threshold (for example, 3), the user can try again, so that a failure caused by external noise interfering with the recording is not final; when the rereading pronunciation still fails and the number of recordings exceeds the first count threshold, a selection popup can be provided so that the user may choose to continue practising or skip the current sentence.
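The retry logic of steps 160 to 180 (and its counterparts in the other modes) can be sketched as below; `evaluate` records and grades one attempt and `prompt_user` stands in for the selection popup, both hypothetical callbacks rather than APIs from the patent:

```python
def practice_sentence(evaluate, prompt_user, max_attempts=3):
    """Re-record while the evaluation fails; after max_attempts
    consecutive fails, show the selection popup and let the user
    skip the sentence or keep recording (steps 160-180)."""
    attempts = 0
    while True:
        result = evaluate()          # record one attempt and grade it
        if result != "fail":
            return result            # passed: move on to the next sentence
        attempts += 1
        if attempts >= max_attempts:
            if prompt_user() == "skip":   # selection popup (step 180)
                return "skipped"
            attempts = 0             # user chose to record again
```

The default of 3 attempts mirrors the "for example, 3" threshold mentioned in the description; a real app would make it configurable per mode.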
Fig. 3 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application. As shown in fig. 3, the reading-assisting method may further include a reading mode, wherein the reading mode includes the following steps:
step 210: the text to be read is displayed and the current sentence is highlighted sentence by sentence.
Besides the rereading mode, the reading assisting method provided by the application may further include a reading mode. When the user starts reading and the current reading mode is the reading mode, the full text to be read is displayed on the display interface and the current sentence is highlighted sentence by sentence, so that the user can concentrate on the current sentence while reading aloud, which reduces interference from the other sentences and improves the user's concentration.
Step 220: Recording the user's reading pronunciation; the user's reading pronunciation is the voice of the user reading the current sentence aloud.
A recording device is started to capture the user's reading pronunciation: after reading starts, the text to be read is displayed on the display interface with the current sentence highlighted sentence by sentence, and at the same time the recording device is started to record the user's voice as the user reads the current sentence aloud. The recording device may include a microphone or similar device built into the terminal device.
Step 230: Comparing the demonstration pronunciation with the user's reading pronunciation to obtain a reading evaluation result.
The recorded reading pronunciation is compared with the demonstration pronunciation, which is the standard pronunciation of the current sentence, to obtain a reading evaluation result. The result is given according to the similarity between the user's reading pronunciation and the demonstration pronunciation; it may include pass and fail, and a passing result may be further graded, e.g., fine, good, or excellent.
In one embodiment, when the reading evaluation result indicates that the user's reading pronunciation does not pass, the user may be prompted. The prompt may be an encouraging or comforting phrase such as "keep going" or "keep it up", played as audio or displayed as text; the specific manner and form of the prompt are not limited in this application.
In an embodiment, as shown in fig. 3, after step 230, the reading mode may further include the following steps:
step 240: and when the reading evaluation result indicates that the reading pronunciation of the user does not pass, recording the reading pronunciation of the user again to obtain the reading pronunciation again.
When the reading pronunciation of the user does not pass, the user can read again and record the reading pronunciation of the user again to obtain the reading pronunciation again. The mastery degree of the user can be improved through multiple exercises for the sentences which are not mastered or are not high in mastery degree by the client, and in one embodiment, the current sentence can be played before the pronunciation of the user is recorded again, so that the user can learn the current sentence.
Step 250: and comparing the demonstration pronunciation with the re-speaking pronunciation to obtain a re-speaking evaluation result of the re-speaking pronunciation.
After recording the pronunciation read again of the user, comparing the demonstration pronunciation with the pronunciation read again to obtain the evaluation result of the pronunciation read again.
In an embodiment, as shown in fig. 3, when the number of recordings does not exceed a preset second count threshold and the last reading evaluation result is a fail, step 240 is executed again; when the number of recordings exceeds the second count threshold and the last reading evaluation result is still a fail, step 260 is executed.
Step 260: Provide a selection popup for the user. The selection popup includes options to skip the current sentence and to record again. It should be understood that the options of the selection popup may be set according to the requirements of the actual application, and the application is not limited in this respect.
While the user's reading pronunciation keeps failing and the number of recordings does not exceed the preset second count threshold (for example, 3), the user can try again, so that a failure caused by external noise interfering with the recording is not final; when the reading pronunciation still fails and the number of recordings exceeds the second count threshold, a selection popup can be provided so that the user may choose to continue practising or skip the current sentence.
Fig. 4 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application. As shown in fig. 4, the reading assisting method may further include a listening and reading mode, which includes the following steps:
step 310: and displaying the text to be read, playing sentence by sentence and highlighting the current sentence.
Besides the rereading mode and the reading mode, the reading assisting method provided by the application may further include a listening and reading mode. When the user starts reading and the current reading mode is the listening and reading mode, the full text to be read is displayed on the display interface and the current sentence is played and highlighted sentence by sentence, so that the user can concentrate on the current sentence while listening and reading, which reduces interference from the other sentences and improves the user's concentration.
Step 320: Recording the user's listening and reading pronunciation; the user's listening and reading pronunciation is the voice of the user reading the current sentence while listening to it.
A recording device is started to capture the user's listening and reading pronunciation: after reading starts, the text to be read is displayed on the display interface and the current sentence is played and highlighted sentence by sentence, and at the same time the recording device is started to record the user's voice as the user reads the current sentence. The recording device may include a microphone or similar device built into the terminal device.
Step 330: Comparing the demonstration pronunciation with the user's listening and reading pronunciation to obtain a listening and reading evaluation result.
The recorded listening and reading pronunciation is compared with the demonstration pronunciation, which is the standard pronunciation of the current sentence, to obtain a listening and reading evaluation result. The result is given according to the similarity between the user's listening and reading pronunciation and the demonstration pronunciation; it may include pass and fail, and a passing result may be further graded, e.g., fine, good, or excellent.
In one embodiment, when the listening and reading evaluation result indicates that the user's listening and reading pronunciation does not pass, the user may be prompted. The prompt may be an encouraging or comforting phrase such as "keep going" or "keep it up", played as audio or displayed as text; the specific manner and form of the prompt are not limited in this application.
In an embodiment, as shown in fig. 4, after step 330, the listening and reading mode may further include the following steps:
step 340: and when the listening and reading evaluation result indicates that the user cannot listen and read the pronunciation, recording the listening and reading pronunciation again to obtain the listening and reading pronunciation again.
When the listening and reading pronunciation of the user does not pass, the user can listen and read again and record the listening and reading pronunciation of the user again so as to obtain the listening and reading pronunciation again. The user's mastery degree can be improved by practicing for many times for sentences which are not mastered or are not high in mastery degree by the client, and in an embodiment, before the user listens and reads pronunciation, the user can play the current sentence again and then rerecord the user listen and read pronunciation, so that the user can further learn the current sentence.
Step 350: and comparing the demonstration pronunciation with the re-listening and reading pronunciation to obtain a re-listening and reading evaluation result of the re-listening and reading pronunciation.
After recording the pronunciation of the user for listening and reading again, comparing the demonstration pronunciation with the pronunciation of listening and reading again to obtain the result of evaluation of listening and reading again.
In an embodiment, as shown in fig. 4, when the number of recordings does not exceed the preset third time threshold and the last evaluation result indicates that the corresponding listening and reading pronunciation fails, step 340 is executed again; when the number of recordings exceeds the preset third time threshold and the last evaluation result indicates that the corresponding listening and reading pronunciation fails, step 360 is executed.
Step 360: providing a selection popup for the user to choose from, where the popup includes options to skip the current sentence and to record again. It should be understood that the options of the popup may be set according to the requirements of the actual application, and the present application is not limited thereto.
When the user's listening and reading pronunciation keeps failing but the number of recordings does not exceed the preset third time threshold (for example, 3), the user may try again, so that a failure caused by external noise interfering with the recording does not end the exercise; when the pronunciation keeps failing and the number of recordings exceeds the third time threshold, a selection popup may be provided so that the user can choose to continue practicing or to skip the current sentence.
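The retry flow of steps 340 to 360 can be sketched as a loop bounded by the third time threshold. Here `record` and `evaluate` are hypothetical callbacks, and 3 is only the example threshold value from the text.

```python
def listen_read_practice(record, evaluate, threshold=3):
    """Bounded retry loop for the listen-and-read mode (steps 340-360).

    `record()` captures one take of the user's pronunciation and
    `evaluate(take)` returns True when it passes; both are hypothetical
    callbacks. `threshold` plays the role of the "third time threshold".
    Returns "pass", or "popup" when the skip/record-again popup is shown.
    """
    for _ in range(threshold):
        take = record()             # re-record the user's pronunciation
        if evaluate(take):          # compare against the demonstration
            return "pass"
    return "popup"                  # still failing: let the user choose
```

The same loop shape applies to the repeating mode (steps 640 to 660), with the fourth time threshold in place of the third.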
Fig. 5 is a flowchart illustrating a reading-assisting method according to another embodiment of the present application. As shown in fig. 5, the reading-assisting method may further include a silent-reading mode, where the silent-reading mode includes the following steps:
step 410: displaying the text to be read and highlighting the current sentence.
In addition to the rereading mode, the reading mode, and the listening mode, the reading-assisting method provided by the present application may further include a silent-reading mode. When the user starts reading and the current reading mode is the silent-reading mode, the full text to be read is displayed on the display interface and the current sentence is highlighted, so that the user can focus on the current sentence when reading silently, which reduces interference from other sentences and improves the user's concentration.
In one embodiment, as shown in FIG. 5, the silent-reading mode may further include the following steps:
step 420: and playing the text to be read.
Step 430: and pausing the playing of the text to be read.
A pause/play button is provided so that the user can choose to play the text to be read aloud or to turn playback off and read silently. When playback is started, it may begin from the current sentence or from the first sentence of the text to be read.
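A minimal sketch of the pause/play toggle might look as follows; the class and its names are illustrative assumptions, and it implements the resume-from-current-sentence option mentioned above.

```python
class SilentReadingPlayer:
    """Minimal sketch of the pause/play toggle in the silent-reading mode.

    Playback resumes from the highlighted (current) sentence, one of the
    two start options mentioned in the text; all names are illustrative.
    """

    def __init__(self, sentences):
        self.sentences = sentences
        self.current = 0            # index of the highlighted sentence
        self.playing = False        # silent reading: playback starts off

    def toggle(self):
        """Flip between open-play mode and silent-read mode."""
        self.playing = not self.playing
        return self.playing

    def next_to_play(self):
        """Sentence to play next, or None while reading silently."""
        return self.sentences[self.current] if self.playing else None
```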
Fig. 6 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application. As shown in fig. 6, the reading-aid method may further include a listening mode, wherein the listening mode includes the following steps:
step 510: and displaying the text to be read, playing the text to be read sentence by sentence and highlighting the current sentence.
In addition to the rereading mode, the reading mode, and the silent-reading mode, the reading-assisting method provided by the present application may further include a listening mode. When the user starts reading and the current reading mode is the listening mode, the full text to be read is displayed on the display interface and played sentence by sentence with the current sentence highlighted, so that the user can focus on the current sentence when listening, which reduces interference from other sentences and improves the user's concentration.
In one embodiment, as shown in fig. 6, the listening mode may further include the steps of:
step 520: and displaying the playing progress bar according to the content duration of the text to be read.
The playing progress bar is displayed according to the content duration of the text to be read, so that the user can follow the playing progress in real time.
Fig. 7 is a flowchart illustrating a method for reading assistance according to another embodiment of the present application. As shown in fig. 7, the reading-assisting method includes a repeating mode, wherein the repeating mode includes the following steps:
step 610: and playing the text to be read sentence by sentence.
In addition to the rereading mode, the reading mode, the silent-reading mode, and the listening mode, the reading-assisting method provided by the present application may further include a repeating mode. When the user starts reading and the current reading mode is the repeating mode, the content of the text to be read is not displayed on the display interface and only the current sentence is played, so that the user concentrates on the current sentence when listening, which improves the user's concentration and mastery.
Step 620: after the current sentence is played, recording the user's repeat pronunciation, where the repeat pronunciation is the voice of the user repeating the current sentence.
The recording device is started to record the user's repeat pronunciation; that is, after the current sentence is played, the recording device records the voice of the user repeating the current sentence. The recording device may include a microphone or another device built into the terminal device.
Step 630: and comparing the demonstration pronunciation with the user repeat pronunciation to obtain a repeat evaluation result of the user repeat pronunciation.
The recorded repeat pronunciation of the user is compared with the demonstration pronunciation, which is the standard pronunciation of the current sentence, to obtain a repeat evaluation result. The result is given according to the similarity between the repeat pronunciation and the demonstration pronunciation; it may be pass or fail, and a passing result may be further graded, for example as good, fine, or excellent.
In one embodiment, when the repeat evaluation result indicates that the user's repeat pronunciation fails, the user may be prompted. In an embodiment, the prompt may be an encouraging or comforting phrase, such as a played prompt sound of "keep going" or "keep it up", or the same words displayed as text.
In an embodiment, as shown in fig. 7, after step 630, the aforementioned repeating mode may further include the following steps:
step 640: when the repeat evaluation result indicates that the user's repeat pronunciation fails, recording the user's repeat pronunciation again to obtain the re-recorded repeat pronunciation.
When the user's repeat pronunciation fails, the user may repeat the sentence again, and the pronunciation is re-recorded to obtain the re-recorded repeat pronunciation. Practicing a sentence the user has not mastered, or has not mastered well, multiple times improves the user's mastery. In an embodiment, before the pronunciation is re-recorded, the current sentence may be played again so that the user can further learn it.
Step 650: and comparing the demonstration pronunciation with the repeated pronunciation to obtain a repeated pronunciation evaluation result.
After the user's repeat pronunciation is re-recorded, it is compared with the demonstration pronunciation to obtain the re-recorded repeat evaluation result.
In an embodiment, as shown in fig. 7, when the number of recordings does not exceed the preset fourth time threshold and the last repeat evaluation result indicates that the corresponding repeat pronunciation fails, step 640 is executed again; when the number of recordings exceeds the preset fourth time threshold and the last repeat evaluation result indicates that the corresponding repeat pronunciation fails, step 660 is executed.
Step 660: providing a selection popup for the user to choose from, where the popup includes options to skip the current sentence and to record again. It should be understood that the options of the popup may be set according to the requirements of the actual application, and the present application is not limited thereto.
When the user's repeat pronunciation keeps failing but the number of recordings does not exceed the preset fourth time threshold (for example, 3), the user may try again, so that a failure caused by external noise interfering with the recording does not end the exercise; when the repeat pronunciation keeps failing and the number of recordings exceeds the fourth time threshold, a selection popup may be provided so that the user can choose to continue practicing or to skip the current sentence.
In one embodiment, the current mode can be switched to another mode according to an instruction of the user. For example, when the current mode is the silent-reading mode, the user may manually select the listening mode and switch the current reading mode to it. Mode switching gives the user more reading options, so that the user can select the reading mode suited to actual needs and improve the reading effect in a more targeted manner. After the mode is switched, the reading progress of the previous mode may not be preserved; that is, the text to be read is read again from the beginning in the new reading mode. In one embodiment, a default mode may be set as the current mode used when the user starts reading.
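Mode switching that discards the previous mode's progress can be sketched as follows; the mode identifiers are illustrative assumptions, not names from the application.

```python
# Illustrative mode names; the application does not define identifiers.
MODES = {"rereading", "reading", "silent_reading", "listening", "repeating"}

class ReadingSession:
    """Sketch of user-driven mode switching. A switch discards the
    previous mode's reading progress, as the text describes."""

    def __init__(self, default_mode="silent_reading"):
        self.mode = default_mode    # default mode used when reading starts
        self.sentence_index = 0

    def switch_mode(self, new_mode):
        if new_mode not in MODES:
            raise ValueError("unknown mode: " + new_mode)
        self.mode = new_mode
        self.sentence_index = 0     # progress is not preserved across modes
```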
An embodiment of the present application may adopt any one or a combination of the above modes according to the actual application scenario; preferably, an embodiment includes all of the above modes at the same time. With all modes available, a user may first select the lower-difficulty listening and silent-reading modes to become familiar with the content of the text, then select the reading modes to train reading ability, and finally select the rereading and repeating modes to retell the text from memory. It should be understood that the user may select any of the modes according to the user's own needs and does not have to follow the above progressive order, which is not limited in the embodiments of the present application.
Fig. 8 is a flowchart illustrating a method for formulating an evaluation criterion according to an embodiment of the present application. As shown in fig. 8, formulating the evaluation criteria used to obtain the rereading, repeat, reading, and listening and reading evaluation results in the above embodiments may include the following steps:
step 710: and acquiring the user level grade according to the user information.
Different users have different level grades; for example, the reading levels of adults and children differ, so the user's level grade can be obtained from the user information when the evaluation criterion is determined. In an embodiment, the user information may include the user's age or nationality. Users of different ages and nationalities differ greatly in reading level; for example, a native Chinese speaker reads a Chinese text far more easily than a foreign learner, so in reading Chinese texts the level of native speakers is generally significantly higher.
Step 720: and (4) establishing an evaluation standard according to the user level grade.
Different evaluation criteria are formulated according to the obtained user level grade; that is, for a text to be read of a specific type, the criterion is formulated according to the user's reading level grade, so that the evaluation is fair.
In an embodiment, as shown in fig. 8, after step 720, the method for establishing the evaluation criterion may further include:
step 730: and adjusting the evaluation standard according to the comprehensive evaluation result of the user in the preset time period.
Because the user information provided at registration cannot fully reflect the user's reading level, the user may be comprehensively evaluated after a period of learning, and the evaluation criterion adjusted according to the comprehensive evaluation result. For example, if the user's reading level has improved considerably after a period of learning, the user's evaluation criterion may be raised appropriately and texts of higher reading difficulty may be recommended, so as to encourage the user to keep progressing.
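A sketch of criteria formulation (step 720) and adjustment (step 730) is shown below. The application only states that the criterion depends on the user's level grade (for example, age and nationality) and on recent results, so every number and rule here is an assumption.

```python
def base_pass_threshold(age, native_speaker):
    """Step 720: derive a pass threshold from user information.

    The mapping and all numbers are illustrative assumptions; the
    application does not specify concrete values.
    """
    threshold = 0.7 if native_speaker else 0.5
    if age < 12:                    # grade children more leniently
        threshold -= 0.1
    return threshold

def adjust_threshold(threshold, recent_avg_score):
    """Step 730: raise the criterion when the user's comprehensive
    results over a period are well above it (assumed margins)."""
    if recent_avg_score > threshold + 0.2:
        threshold = min(threshold + 0.05, 0.95)
    return threshold
```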
In one embodiment, after the user finishes the current text to be read, the user is prompted that reading is finished, and other texts recommended to the user are displayed. Recommending texts to the user may be implemented as follows: a recommended text set associated with the current reading text is obtained based on the candidate search question set.
When a user reads the current text, a need arises to read texts related to it; at this time, a related recommended text set can be obtained based on the correlation dimensions reflected by the candidate search question set. The recommended text set prompts the user to think more deeply and upgrades the user's cognition. In addition, it offers the user choices from multiple dimensions, helping the user break out of a fixed thinking pattern and develop dialectical thinking. In an embodiment, all historical search questions may be sorted in descending or ascending order according to a preset rule (e.g., number of searches, number of clicks, usefulness ratings), and a candidate search question set satisfying a first preset condition is screened out. The first preset condition may include one or both of the following: the number of inputs is greater than or equal to a first threshold, and the question belongs to the first preset number of historical search questions with the largest input counts. The more often a question has been input, the more attention it receives, the more typical it is, and the more relevant it is to the current article, so it better matches the question the user would ask; this effectively shortens the time the user spends formulating a query and answers the user's question. Through the first preset condition, search questions that span multiple dimensions and are sufficiently logically related to the currently read text can be screened out.
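The first preset condition described above can be sketched as a filter-then-rank over historical search questions. The data shape, a mapping from question to input count, is a hypothetical assumption.

```python
def screen_candidate_questions(history, first_threshold, first_preset_number):
    """Screen historical search questions by the first preset condition:
    keep questions input at least `first_threshold` times, then keep the
    `first_preset_number` with the largest input counts, in descending
    order. `history` is a hypothetical mapping question -> input count."""
    frequent = [(q, c) for q, c in history.items() if c >= first_threshold]
    frequent.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in frequent[:first_preset_number]]
```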
Fig. 9 is a flowchart illustrating a text recommendation method according to an embodiment of the present application. As shown in fig. 9, the recommendation method specifically includes:
step 810: and acquiring a search result text list corresponding to each search problem to be selected in the search problem set to be selected.
And searching by taking each to-be-selected search question in the to-be-selected search question set as a search input condition, and obtaining a search result text list corresponding to each to-be-selected search question.
Step 820: according to the number of candidate search questions in the candidate search question set, screening a second preset number of top-ranked search result texts from the search result text list corresponding to each candidate search question, and adding them to the recommended text set.
That is, according to the number of candidate search questions, a second preset number of top-ranked search result texts are screened from the search result text list corresponding to each candidate search question and added to the recommended text set.
For example, when the number of screened candidate search questions is 10, each of the 10 candidate search questions is searched to obtain its corresponding search result text list, the top-1 search result text is screened from each list to obtain 10 search result texts, and the 10 texts are added to the recommended text set as recommended texts. It should be understood that there is a correspondence between the number of candidate search questions and the second preset number, both of which are preset by developers in advance.
In one embodiment, when the number of candidate search questions in the candidate search question set is 1, the second preset number is 3: the single candidate search question is searched to obtain its search result text list, the top-3 search result texts are screened from that list to obtain 3 search result texts, and the 3 texts are added to the recommended text set as recommended texts.
In one embodiment, when the number of candidate search questions in the candidate search question set is 2 or 3, the second preset number is 2. For example, when 2 candidate search questions are screened out, each of the 2 questions is searched to obtain its search result text list, the top-2 search result texts are screened from each list to obtain 4 search result texts, and the 4 texts are added to the recommended text set as recommended texts.
In one embodiment, when the number of candidate search questions in the candidate search question set is greater than or equal to 4, the second preset number is 1. When N (N is greater than or equal to 4) candidate search questions are screened out, each of the N questions is searched to obtain its search result text list, the top-1 search result text is screened from each list to obtain N search result texts, and the N texts are added to the recommended text set as recommended texts.
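The correspondence between the number of candidate search questions and the second preset number, together with step 820, can be sketched as follows. The quota values follow the worked examples above; the data shape is a hypothetical assumption.

```python
def second_preset_number(candidate_count):
    """Per-question quota, following the worked examples in the text:
    1 question -> 3 texts, 2 or 3 questions -> 2 texts, 4+ -> 1 text."""
    if candidate_count == 1:
        return 3
    if candidate_count in (2, 3):
        return 2
    return 1

def build_recommendations(result_lists):
    """Step 820: take the top-quota texts from each candidate question's
    ranked search result list. `result_lists` holds one ranked list per
    candidate search question (a hypothetical data shape)."""
    quota = second_preset_number(len(result_lists))
    recommended = []
    for ranked in result_lists:
        recommended.extend(ranked[:quota])
    return recommended
```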
In an embodiment, as shown in fig. 9, the text recommendation method may further include:
step 830: and when the number of the recommended texts in the recommended text set is less than a third preset number, acquiring a plurality of label categories corresponding to the current reading text, wherein each label category comprises at least one label.
Specifically, this applies when the total number of search result texts screened according to the number of candidate search questions is less than a third preset number. For example, when the number of candidate search questions is 2, 4 search result texts are obtained and the current recommended text set contains 4 recommended texts, while the target size of the recommended text set is 10 (the third preset number is 10). The recommended text set is then considered unsaturated, and recommended texts need to be obtained from other dimensions and added to it. To this end, a plurality of label categories corresponding to the current reading text are obtained, where each label category includes at least one label.
It should be understood that labels are preset for each text, and the embodiments of the present application do not limit the specific means of obtaining them. The labels are divided into different label categories according to different dimensions, and each label category includes at least one label. For example, the label categories may include a main viewpoint category, which includes: same viewpoint, similar viewpoint, opposite viewpoint, and refuting viewpoint.
Step 840: according to the number of labels in the label category, screening the top fourth preset number of search result texts from the search result text list corresponding to each label, and adding them to the recommended text set.
Specifically, the number of labels included in each label category differs, and the top fourth preset number of search result texts screened from the search result text list corresponding to each label are added to the recommended text set.
For example, the label category is the main viewpoint category, which includes 4 labels: same viewpoint, similar viewpoint, opposite viewpoint, and refuting viewpoint. A search is performed with each of the 4 labels to obtain the search result text list corresponding to each label. The top-1 search result text is screened from each of the 4 lists to obtain 4 search result texts, which are added to the recommended text set as recommended texts. It should be understood that there is a correspondence between the number of labels in a label category and the fourth preset number, both of which are preset by developers in advance.
In an embodiment, as shown in fig. 9, the text recommendation method may further include:
step 850: when the number of recommended texts in the recommended text set is still less than the third preset number, selecting label categories in descending order of priority, and, according to the number of labels in each selected category, adding the top fourth preset number of search result texts screened from the search result text list corresponding to each of its labels to the recommended text set.
Specifically, there is a priority among label categories. For example, the label categories include 4 categories: the main viewpoint label category, the keyword label category, the main character label category, and the main character personality label category. The main viewpoint label category has the highest priority; the keyword, main character, and main character personality label categories have the same priority, all lower than the main viewpoint label category. When the number of recommended texts in the recommended text set is less than the third preset number, the top fourth preset number of search result texts screened from the search result text list corresponding to each main viewpoint label are first added, according to the number of labels in the main viewpoint category. If, after these are added, the recommended text set reaches the third preset number, screening stops; if the set is still smaller than the third preset number, recommended texts are further screened from the search result text lists corresponding to the keyword labels, the main character labels, and the main character personality labels, according to the numbers of labels in those categories.
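The priority-ordered top-up of steps 830 to 850 can be sketched as follows. Here `search` is a hypothetical callback returning a ranked result list, and the per-category quota mirrors the 1 -> 3, 2 or 3 -> 2, 4+ -> 1 rule used for labels; the shape of `categories` is an assumption.

```python
def fill_from_label_categories(recommended, categories, target, search):
    """Steps 830-850: top up the recommended set from label categories in
    priority order. `categories` lists the label lists of each category,
    highest priority first; `search(label)` is a hypothetical callback
    returning a ranked search result text list."""
    recommended = list(recommended)          # do not mutate the caller's list
    for labels in categories:
        if len(recommended) >= target:
            break                            # the set is already saturated
        quota = 3 if len(labels) == 1 else (2 if len(labels) <= 3 else 1)
        for label in labels:
            recommended.extend(search(label)[:quota])
    return recommended[:target]
```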
In one embodiment, the plurality of label categories includes a main viewpoint category, which includes at least one main viewpoint label; step 840 may specifically include: according to the number of main viewpoint labels in the main viewpoint category, screening a fourth preset number of search result texts from the search result text list corresponding to each main viewpoint label, and adding them to the recommended text set.
Specifically, the label category is the main viewpoint category, which includes 4 labels: same viewpoint, similar viewpoint, opposite viewpoint, and refuting viewpoint. A search is performed with each of the 4 labels to obtain the search result text list corresponding to each label. The top-1 search result text is screened from each of the 4 lists to obtain 4 search result texts, which are added to the recommended text set as recommended texts. It should be understood that there is a correspondence between the number of labels in a label category and the fourth preset number, both of which are preset by developers in advance.
In one embodiment, when the number of main viewpoint labels in the main viewpoint category is 1, the fourth preset number is 3: the search result text list corresponding to the 1 main viewpoint is obtained, the top-3 search result texts are screened from it to obtain 3 search result texts, and the 3 texts are added to the recommended text set as recommended texts.
In one embodiment, when the number of main viewpoint labels in the main viewpoint category is 2 or 3, the fourth preset number is 2: the search result text lists corresponding to the 2 or 3 main viewpoints are obtained, the top-2 search result texts are screened from each list to obtain 4 or 6 search result texts, and these texts are added to the recommended text set as recommended texts.
In one embodiment, when the number of main viewpoint labels in the main viewpoint category is M (M is greater than or equal to 4), the fourth preset number is 1: the search result text lists corresponding to the M main viewpoints are obtained, the top-1 search result text is screened from each list to obtain M search result texts, and the M texts are added to the recommended text set as recommended texts.
In one embodiment, the tag categories include keyword categories, each keyword category including at least one keyword tag; step 840 specifically includes: and respectively screening a fourth preset number of search result texts from the search result text lists corresponding to the keyword labels according to the number of the keyword labels in the keyword category, and adding the fourth preset number of search result texts into the recommended text set.
In one embodiment, when the number of keyword tags in the keyword category is 1, the fourth preset number is 3; when the number of the keyword labels in the keyword category is 2 or 3, the fourth preset number is 2; when the number of keyword tags in the keyword category is greater than or equal to 4, the fourth preset number is 1.
In one embodiment, the plurality of label categories includes a primary role category, each primary role category including at least one primary role label; step 840 specifically includes: and respectively screening a fourth preset number of search result texts from the search result text lists corresponding to the main role labels according to the number of the main role labels in the main role category, and adding the fourth preset number of search result texts into the recommended text set.
In one embodiment, when the number of primary character labels in the primary character category is 1, the fourth preset number is 3; when the number of primary character labels in the primary character category is 2 or 3, the fourth preset number is 2; when the number of primary character labels in the primary character category is greater than or equal to 4, the fourth preset number is 1.
In one embodiment, the plurality of label categories includes a primary role character category, each primary role character category including at least one primary role character label; step 840 specifically includes: and respectively screening a fourth preset number of search result texts from the search result text lists corresponding to the main role character labels according to the number of the main role character labels in the main role character category, and adding the fourth preset number of search result texts into the recommendation text set.
In one embodiment, when the number of primary character personality labels in the primary character personality category is 1, the fourth preset number is 3; when it is 2 or 3, the fourth preset number is 2; when it is greater than or equal to 4, the fourth preset number is 1.
Fig. 10 is a schematic structural diagram of a device for assisting reading according to an embodiment of the present application. As shown in fig. 10, the reading aid 7 includes: a rereading module 71, wherein the rereading module 71 comprises: a first display unit 711 for displaying and playing a text to be read sentence by sentence; a hiding unit 712, configured to hide the current sentence after the playing of the current sentence is finished; a first recording unit 713, configured to record a user rereading utterance; the repeated reading pronunciation of the user is the voice of the repeated reading current sentence of the user; the first comparing unit 714 is used for comparing the exemplary pronunciation with the user rereaded pronunciation to obtain a rereaded evaluation result of the user rereaded pronunciation.
In the reading assisting device provided by the present application, the first display unit 711 displays the text to be read and plays it sentence by sentence, the hiding unit 712 hides the current sentence after it has been played, the user rereads the current sentence while the first recording unit 713 records the user rereading pronunciation, and the first comparing unit 714 compares the demonstration pronunciation with the user rereading pronunciation to evaluate it. Playing the text sentence by sentence and having the user reread it increases the user's participation and improves the learning effect; hiding the text after the current sentence is played prevents the user from referring to the displayed sentence while rereading, which improves the user's listening and speaking ability; and evaluating the user rereading pronunciation helps the user learn and master the content in a more targeted manner.
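The sentence-by-sentence rereading flow described above can be sketched as follows. This is a minimal illustration rather than the patented implementation: the `play`, `record`, and `score` callbacks and the pass mark of 60 are invented placeholders for whatever display, recording, and pronunciation-comparison components a real device would use.

```python
def reread_session(sentences, play, record, score, pass_mark=60):
    """Play each sentence, hide it, record the user's rereading, and evaluate.

    `play` displays and plays a sentence (which is then hidden), `record`
    captures the user rereading pronunciation, and `score` compares it with
    the demonstration pronunciation, returning a numeric mark.
    """
    results = []
    for sentence in sentences:
        play(sentence)                       # display and play the current sentence
        # ... the sentence is hidden here so the user cannot read along ...
        user_audio = record()                # record the user rereading pronunciation
        mark = score(sentence, user_audio)   # compare with the demonstration pronunciation
        results.append((sentence, mark >= pass_mark))
    return results
```

A session over two sentences would then yield one pass/fail evaluation per sentence, which the device can use to decide whether to prompt the user or move on.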
In one embodiment, the rereading module 71 is further configured to: when the rereading evaluation result indicates that the user rereading pronunciation does not pass, record the user rereading pronunciation again, compare the demonstration pronunciation with the re-recorded rereading pronunciation to obtain a new rereading evaluation result, and prompt the user again if it still does not pass; when the number of recordings does not exceed a preset first time threshold and the last rereading evaluation result indicates that the corresponding rereading pronunciation does not pass, record again; and when the number of recordings exceeds the preset first time threshold and the last rereading evaluation result indicates that the corresponding rereading pronunciation does not pass, provide a selection popup for the user, wherein the selection popup comprises options to skip the current sentence and to record again.
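The retry behavior around the first time threshold might be organized as in the following sketch; the `record_and_score` and `ask_user` callbacks and the default threshold of 3 are assumptions for illustration only.

```python
def retry_until_pass(record_and_score, ask_user, first_threshold=3):
    """Re-record a failed rereading until it passes or the user skips.

    `record_and_score` records once and returns True if the evaluation
    passes; once the number of recordings exceeds the threshold without a
    pass, `ask_user` shows a popup with the two options from the text.
    """
    attempts = 0
    while True:
        attempts += 1
        if record_and_score():               # record again and re-evaluate
            return "passed"
        if attempts >= first_threshold:
            # threshold reached without a pass: offer skip / record again
            choice = ask_user(["skip current sentence", "record again"])
            if choice == "skip current sentence":
                return "skipped"
        # otherwise (or if the user chose to record again) loop and re-record
```

The same loop shape applies to the reading, listening-and-reading, and repeating modules below, each with its own threshold.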
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include a reading module 72, wherein the reading module 72 comprises: a second display unit 721, configured to display the text to be read and highlight the current sentence sentence by sentence; a second recording unit 722, configured to record the user reading pronunciation, wherein the user reading pronunciation is the speech of the user reading the current sentence aloud; and a second comparing unit 723, configured to compare the demonstration pronunciation with the user reading pronunciation to obtain a reading evaluation result of the user reading pronunciation.
In one embodiment, the reading module 72 is further configured to: when the reading evaluation result indicates that the user reading pronunciation does not pass, record the user reading pronunciation again, compare the demonstration pronunciation with the re-recorded reading pronunciation to obtain a new reading evaluation result, and prompt the user again if it still does not pass; when the number of recordings does not exceed a preset second time threshold and the last reading evaluation result indicates that the corresponding reading pronunciation does not pass, record again; and when the number of recordings exceeds the preset second time threshold and the last reading evaluation result indicates that the corresponding reading pronunciation does not pass, provide a selection popup for the user, wherein the selection popup comprises options to skip the current sentence and to record again.
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include a listening and reading module 73, wherein the listening and reading module 73 comprises: a third display unit 731, configured to display the text to be read, play it sentence by sentence, and highlight the current sentence; a third recording unit 732, configured to record the user listening and reading pronunciation, wherein the user listening and reading pronunciation is the speech of the user reading the current sentence aloud after listening to it; and a third comparing unit 733, configured to compare the demonstration pronunciation with the user listening and reading pronunciation to obtain a listening and reading evaluation result of the user listening and reading pronunciation.
In one embodiment, the listening and reading module 73 is further configured to: when the listening and reading evaluation result indicates that the user listening and reading pronunciation does not pass, record the user listening and reading pronunciation again, compare the demonstration pronunciation with the re-recorded listening and reading pronunciation to obtain a new listening and reading evaluation result, and prompt the user again if it still does not pass; when the number of recordings does not exceed a preset third time threshold and the last listening and reading evaluation result indicates that the corresponding listening and reading pronunciation does not pass, record again; and when the number of recordings exceeds the preset third time threshold and the last listening and reading evaluation result indicates that the corresponding listening and reading pronunciation does not pass, provide a selection popup for the user, wherein the selection popup comprises options to skip the current sentence and to record again.
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include a silent reading module 74, wherein the silent reading module 74 comprises: a fourth display unit 741, configured to display the text to be read and highlight the current sentence.
In one embodiment, as shown in fig. 10, the silent reading module 74 may further include: a playing unit 742, configured to play the text to be read; and a pause unit 743, configured to pause the playing of the text to be read.
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include a listening module 75, wherein the listening module 75 comprises: a fifth display unit 751, configured to display the text to be read, play it sentence by sentence, and highlight the current sentence.
In one embodiment, as shown in fig. 10, the listening module 75 may further include: a progress display unit 752, configured to display a playing progress bar according to the content duration of the text to be read.
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include a repeating module 76, wherein the repeating module 76 comprises: a playing unit 761, configured to play the text to be read sentence by sentence; a sixth recording unit 762, configured to record the user repeat pronunciation after the current sentence is played, wherein the user repeat pronunciation is the speech of the user repeating the current sentence; and a sixth comparing unit 763, configured to compare the demonstration pronunciation with the user repeat pronunciation to obtain a repeat evaluation result of the user repeat pronunciation.
In one embodiment, the repeating module 76 is further configured to: when the repeat evaluation result indicates that the user repeat pronunciation does not pass, record the user repeat pronunciation again, compare the demonstration pronunciation with the re-recorded repeat pronunciation to obtain a new repeat evaluation result, and prompt the user again if it still does not pass; when the number of recordings does not exceed a preset fourth time threshold and the last repeat evaluation result indicates that the corresponding repeat pronunciation does not pass, record again; and when the number of recordings exceeds the preset fourth time threshold and the last repeat evaluation result indicates that the corresponding repeat pronunciation does not pass, provide a selection popup for the user, wherein the selection popup comprises options to skip the current sentence and to record again.
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include: a switching module 77, configured to switch the current module to another module.
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include an evaluation criterion making module 78, wherein the evaluation criterion making module 78 comprises: a level determining unit 781, configured to obtain a user level grade according to user information, where the user information may include the user's age, nationality, or the like; a standard making unit 782, configured to establish an evaluation standard according to the user level grade; and a standard adjusting unit 783, configured to adjust the evaluation standard according to the comprehensive evaluation result of the user within a preset time period.
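As a rough illustration of how such an evaluation-standard module could work, the sketch below derives a level grade from the user's age, maps it to a pass mark, and adjusts the mark using a recent comprehensive result. Every threshold, level name, and adjustment step here is invented for the example and is not specified by the application.

```python
def user_level(info):
    """Hypothetical level-determining unit: map user information to a grade."""
    age = info.get("age", 18)
    return "beginner" if age < 8 else "intermediate" if age < 12 else "advanced"

# Hypothetical standard-making unit: one pass mark per level grade.
PASS_MARKS = {"beginner": 50, "intermediate": 65, "advanced": 80}

def evaluation_standard(info, recent_average=None):
    """Establish the evaluation standard, then adjust it by recent results."""
    mark = PASS_MARKS[user_level(info)]
    if recent_average is not None:           # comprehensive result over a period
        if recent_average >= mark + 20:
            mark += 5                        # user does well: raise the bar
        elif recent_average < mark:
            mark -= 5                        # user struggles: lower the bar
    return mark
```

For example, a six-year-old starts from the beginner mark, while an advanced user who has been scoring far above the standard gets a slightly stricter one.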
In one embodiment, as shown in fig. 10, the reading assisting device 7 may further include a recommending module 79, wherein the recommending module 79 comprises: a search result obtaining unit 791, configured to obtain a search result text list corresponding to each to-be-selected search question in the to-be-selected search question set; and a recommended text acquisition unit 792, configured to screen, according to the number of to-be-selected search questions in the to-be-selected search question set, a second preset number of search result texts from the search result text list corresponding to each to-be-selected search question, and add them to the recommended text set.
In one embodiment, the recommended text acquisition unit 792 may be further configured such that: when the number of to-be-selected search questions in the to-be-selected search question set is 1, the second preset number is 3; when the number is 2 or 3, the second preset number is 2; and when the number is greater than or equal to 4, the second preset number is 1.
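The quota rule repeated throughout these embodiments (1 candidate gives 3 texts each, 2 or 3 candidates give 2 each, 4 or more give 1 each) reduces to a small helper; the list-of-lists data shape is an assumption for illustration.

```python
def preset_number(item_count):
    """The count-to-quota rule stated in the surrounding embodiments."""
    if item_count == 1:
        return 3
    if item_count in (2, 3):
        return 2
    return 1  # four or more items

def recommend(result_lists):
    """Screen `preset_number` texts from each search result text list."""
    n = preset_number(len(result_lists))
    recommended = []
    for texts in result_lists:
        recommended.extend(texts[:n])   # take the top-ranked texts per list
    return recommended
```

The same mapping applies whether the items are to-be-selected search questions, main viewpoint labels, keyword labels, or role labels.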
In one embodiment, the recommended text acquisition unit 792 may be further configured to: when the number of recommended texts in the recommended text set is smaller than a third preset number, obtain a plurality of label categories corresponding to the current reading text, wherein each label category comprises at least one label; and a recommended text set constructing unit 794 is configured to screen, according to the number of labels in a label category, a fourth preset number of search result texts from the search result text list corresponding to each label, and add them to the recommended text set.
In one embodiment, the recommended text acquisition unit 792 may be further configured to: when the number of recommended texts in the recommended text set is smaller than the third preset number, select the label category with the highest priority, screen a fourth preset number of search result texts from the search result text lists corresponding to the labels in the selected category according to the number of labels in that category, and add them to the recommended text set.
In one embodiment, the plurality of label categories includes a main viewpoint category, each main viewpoint category including at least one main viewpoint label; the recommended text acquisition unit 792 may be further configured to: screen, according to the number of main viewpoint labels in the main viewpoint category, a fourth preset number of search result texts from the search result text list corresponding to each main viewpoint label, and add them to the recommended text set.
In one embodiment, the recommended text acquisition unit 792 may be further configured such that: when the number of main viewpoint labels in the main viewpoint category is 1, the fourth preset number is 3, and the search result text list corresponding to that 1 main viewpoint label is obtained; when the number of main viewpoint labels is 2 or 3, the fourth preset number is 2, and the search result text lists corresponding to the 2 or 3 main viewpoint labels are obtained; and when the number of main viewpoint labels is M (M is greater than or equal to 4), the fourth preset number is 1, and the search result text lists corresponding to the M main viewpoint labels are obtained.
In one embodiment, the plurality of label categories includes a keyword category, each keyword category including at least one keyword label; the recommended text acquisition unit 792 may be further configured to: screen, according to the number of keyword labels in the keyword category, a fourth preset number of search result texts from the search result text list corresponding to each keyword label, and add them to the recommended text set. In one embodiment, when the number of keyword labels in the keyword category is 1, the fourth preset number is 3; when the number is 2 or 3, the fourth preset number is 2; and when the number is greater than or equal to 4, the fourth preset number is 1.
In one embodiment, the plurality of label categories includes a primary role category, each primary role category including at least one primary role label; the recommended text acquisition unit 792 may be further configured to: screen, according to the number of primary role labels in the primary role category, a fourth preset number of search result texts from the search result text list corresponding to each primary role label, and add them to the recommended text set. In one embodiment, when the number of primary role labels in the primary role category is 1, the fourth preset number is 3; when the number is 2 or 3, the fourth preset number is 2; and when the number is greater than or equal to 4, the fourth preset number is 1.
In one embodiment, the plurality of label categories includes a primary role character category, each primary role character category including at least one primary role character label; the recommended text acquisition unit 792 may be further configured to: screen, according to the number of primary role character labels in the primary role character category, a fourth preset number of search result texts from the search result text list corresponding to each primary role character label, and add them to the recommended text set. In one embodiment, when the number of primary role character labels in the primary role character category is 1, the fourth preset number is 3; when the number is 2 or 3, the fourth preset number is 2; and when the number is greater than or equal to 4, the fourth preset number is 1.
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 11. The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them; the stand-alone device may communicate with the first device and the second device to receive the acquired input signals from them.
FIG. 11 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 11, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium and executed by the processor 11 to implement the reading-assist method of the various embodiments of the present application described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the first device or the second device, the input device 13 may be a microphone or a microphone array for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input device 13 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
The input device 13 may also include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for the sake of simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 11, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the reading-assist method according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method of assisting reading according to various embodiments of the present application described in the "exemplary methods" section above of the specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (31)

1. A method for assisting reading, comprising a rereading mode, wherein the rereading mode comprises:
displaying and playing the text to be read sentence by sentence;
hiding the current sentence after the current sentence is played;
recording the user rereading pronunciation; wherein the user rereading pronunciation is the speech of the user rereading the current sentence; and
comparing a demonstration pronunciation with the user rereading pronunciation to obtain a rereading evaluation result of the user rereading pronunciation.
2. The method of claim 1, wherein after obtaining the rereading evaluation result of the user rereading pronunciation, the rereading mode further comprises:
when the rereading evaluation result indicates that the user rereading pronunciation does not pass, prompting the user.
3. The method of claim 1, wherein after obtaining the rereading evaluation result of the user rereading pronunciation, the rereading mode further comprises:
when the rereading evaluation result indicates that the user rereading pronunciation does not pass, recording the user rereading pronunciation again to obtain a re-recorded rereading pronunciation; and
comparing the demonstration pronunciation with the re-recorded rereading pronunciation to obtain a rereading evaluation result of the re-recorded rereading pronunciation.
4. The method of claim 3, wherein the rereading mode further comprises:
when the number of recordings exceeds a preset first time threshold and the last rereading evaluation result indicates that the corresponding rereading pronunciation does not pass, providing a selection popup for the user to select; wherein the selection popup comprises options to skip the current sentence and to record again.
5. The method of claim 1, further comprising a speakable mode, wherein the speakable mode comprises:
displaying the text to be read, and highlighting the current sentence by sentence;
recording the user reading pronunciation; wherein the user reading pronunciation is the speech of the user reading the current sentence aloud; and
comparing the demonstration pronunciation with the user reading pronunciation to obtain a reading evaluation result of the user reading pronunciation.
6. The method of claim 5, wherein after said obtaining a speakable evaluation of the user speakable utterance, the speakable mode further comprises:
when the reading evaluation result indicates that the user reading pronunciation does not pass, recording the user reading pronunciation again to obtain a re-recorded reading pronunciation; and
comparing the demonstration pronunciation with the re-recorded reading pronunciation to obtain a reading evaluation result of the re-recorded reading pronunciation.
7. The method of claim 6, wherein the speakable mode further comprises:
when the number of recordings exceeds a preset second time threshold and the last reading evaluation result indicates that the corresponding reading pronunciation does not pass, providing a selection popup for the user to select; wherein the selection popup comprises options to skip the current sentence and to record again.
8. The method of claim 1, further comprising an listen-and-read mode, wherein the listen-and-read mode comprises:
displaying the text to be read, playing it sentence by sentence, and highlighting the current sentence;
recording the user listening and reading pronunciation; wherein the user listening and reading pronunciation is the speech of the user reading the current sentence aloud after listening to it; and
comparing the demonstration pronunciation with the user listening and reading pronunciation to obtain a listening and reading evaluation result of the user listening and reading pronunciation.
9. The method of claim 8, wherein after obtaining the listening and reading evaluation result of the user listening and reading pronunciation, the listening and reading mode further comprises:
when the listening and reading evaluation result indicates that the user listening and reading pronunciation does not pass, recording the user listening and reading pronunciation again to obtain a re-recorded listening and reading pronunciation; and
comparing the demonstration pronunciation with the re-recorded listening and reading pronunciation to obtain a listening and reading evaluation result of the re-recorded listening and reading pronunciation.
10. The method of claim 9, wherein recording the user listening and reading pronunciation again comprises:
playing the current sentence again; and
re-recording the user listening and reading pronunciation.
11. The method of claim 9, wherein the listen-and-read mode further comprises:
when the number of recordings exceeds a preset third time threshold and the last listening and reading evaluation result indicates that the corresponding listening and reading pronunciation does not pass, providing a selection popup for the user to select; wherein the selection popup comprises options to skip the current sentence and to record again.
12. The method of claim 1, further comprising a silent reading mode, wherein the silent reading mode comprises:
displaying the text to be read and highlighting the current sentence.
13. The method of claim 12, wherein the silent reading mode further comprises:
playing the text to be read; and
pausing the playing of the text to be read.
14. The method of claim 1, further comprising a listening mode, wherein the listening mode comprises:
displaying the text to be read, playing it sentence by sentence, and highlighting the current sentence.
15. The method of claim 14, wherein the listening mode further comprises:
displaying a playing progress bar according to the content duration of the text to be read.
16. The method of claim 1, further comprising a rephrasing mode, wherein the rephrasing mode comprises:
playing the text to be read sentence by sentence;
recording the user repeat pronunciation after the current sentence is played; wherein the user repeat pronunciation is the speech of the user repeating the current sentence; and
comparing the demonstration pronunciation with the user repeat pronunciation to obtain a repeat evaluation result of the user repeat pronunciation.
17. The method of claim 16, wherein after obtaining the repeat evaluation result of the user repeat pronunciation, the rephrasing mode further comprises:
when the repeat evaluation result indicates that the user repeat pronunciation does not pass, recording the user repeat pronunciation again to obtain a re-recorded repeat pronunciation; and
comparing the demonstration pronunciation with the re-recorded repeat pronunciation to obtain a repeat evaluation result of the re-recorded repeat pronunciation.
18. The method of claim 17, wherein recording the user repeat pronunciation again comprises:
playing the current sentence again; and
re-recording the user repeat pronunciation.
19. The method of claim 17, wherein the rephrasing mode further comprises:
when the number of recordings exceeds a preset fourth time threshold and the last repeat evaluation result indicates that the corresponding repeat pronunciation does not pass, providing a selection popup for the user to select; wherein the selection popup comprises options to skip the current sentence and to record again.
20. The method of any one of claims 5-19, further comprising:
switching the current mode to another mode.
21. The method of any one of claims 1-19, further comprising:
acquiring a user level grade according to user information; and
formulating an evaluation standard according to the user level grade.
22. The method of claim 21, wherein the user information comprises the user's age or the user's nationality.
23. The method of claim 21, further comprising, after formulating the evaluation standard:
adjusting the evaluation standard according to the user's comprehensive evaluation result within a preset time period.
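Claims 21 to 23 describe deriving a level grade from user information, formulating an evaluation standard from that grade, and periodically adjusting the standard based on the user's comprehensive results. One way this could look in code is sketched below; the age bands, thresholds, and adjustment step sizes are all invented for illustration and are not recited in the claims.

```python
# Illustrative sketch of claims 21-23: user information -> level grade ->
# evaluation standard, with periodic adjustment from recent results.
# Every numeric value here is hypothetical.
def user_level(age):
    """Map user age (one kind of user information in claim 22) to a
    coarse level grade."""
    if age < 8:
        return "beginner"
    if age < 14:
        return "intermediate"
    return "advanced"

def evaluation_standard(level):
    """Pass threshold formulated per level grade."""
    return {"beginner": 0.6, "intermediate": 0.75, "advanced": 0.85}[level]

def adjust_standard(threshold, recent_scores):
    """Tighten the standard when the user's recent average is well above
    it, relax it when well below (claim 23's periodic adjustment)."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg > threshold + 0.1:
        return min(threshold + 0.05, 0.95)
    if avg < threshold - 0.1:
        return max(threshold - 0.05, 0.5)
    return threshold

level = user_level(10)                 # "intermediate"
base = evaluation_standard(level)      # 0.75
print(adjust_standard(base, [0.9, 0.92, 0.88]))  # tightened from 0.75
```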
24. An apparatus for assisting reading, comprising a rereading module, wherein the rereading module comprises:
a first display unit for displaying the text to be read and playing it sentence by sentence;
a hiding unit for hiding the current sentence after it is played;
a first recording unit for recording the user's rereading pronunciation, wherein the user's rereading pronunciation is the voice of the user rereading the current sentence; and
a first comparison unit for comparing the demonstration pronunciation with the user's rereading pronunciation to obtain a rereading evaluation result of the user's rereading pronunciation.
25. The apparatus of claim 24, further comprising a speaking module, wherein the speaking module comprises:
a second display unit for displaying the text to be read and highlighting the current sentence sentence by sentence;
a second recording unit for recording the user's speaking pronunciation, wherein the user's speaking pronunciation is the voice of the user speaking the current sentence; and
a second comparison unit for comparing the demonstration pronunciation with the user's speaking pronunciation to obtain a speaking evaluation result of the user's speaking pronunciation.
26. The apparatus of claim 24, further comprising an audio-visual module, wherein the audio-visual module comprises:
a third display unit for displaying the text to be read, playing it sentence by sentence, and highlighting the current sentence;
a third recording unit for recording the user's listening-and-reading pronunciation, wherein the user's listening-and-reading pronunciation is the voice of the user reading the current sentence while listening; and
a third comparison unit for comparing the demonstration pronunciation with the user's listening-and-reading pronunciation to obtain a listening-and-reading evaluation result of the user's listening-and-reading pronunciation.
27. The apparatus of claim 24, further comprising a silent-reading module, wherein the silent-reading module comprises:
a fourth display unit for displaying the text to be read and highlighting the current sentence.
28. The apparatus of claim 24, further comprising a listening module, wherein the listening module comprises:
a fifth display unit for displaying the text to be read, playing it sentence by sentence, and highlighting the current sentence.
29. The apparatus of claim 24, further comprising a rephrasing module, wherein the rephrasing module comprises:
a playing unit for playing the text to be read sentence by sentence;
a sixth recording unit for recording the user's rephrasing pronunciation after the current sentence is played, wherein the user's rephrasing pronunciation is the voice of the user rephrasing the current sentence; and
a sixth comparison unit for comparing the demonstration pronunciation with the user's rephrasing pronunciation to obtain a rephrasing evaluation result of the user's rephrasing pronunciation.
30. A computer-readable storage medium storing a computer program for performing the method of any one of claims 1-23.
31. An electronic device, comprising:
a processor; and
a memory for storing processor-executable instructions,
wherein the processor is configured to perform the method of any one of claims 1-23.
CN202010245115.2A 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment Pending CN111443890A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/083833 WO2021197296A1 (en) 2020-01-19 2021-03-30 Assisted reading method and apparatus, and storage medium and electronic device
TW110111739A TW202139180A (en) 2020-01-19 2021-03-31 Assisted reading method and apparatus, and storage medium and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010060808 2020-01-19
CN2020100608084 2020-01-19

Publications (1)

Publication Number Publication Date
CN111443890A true CN111443890A (en) 2020-07-24

Family

ID=71649361

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202010246952.7A Pending CN111459453A (en) 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment
CN202010245101.0A Pending CN111459449A (en) 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment
CN202010245115.2A Pending CN111443890A (en) 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment
CN202010245088.9A Pending CN111459448A (en) 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202010246952.7A Pending CN111459453A (en) 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment
CN202010245101.0A Pending CN111459449A (en) 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010245088.9A Pending CN111459448A (en) 2020-01-19 2020-03-31 Reading assisting method and device, storage medium and electronic equipment

Country Status (3)

Country Link
CN (4) CN111459453A (en)
TW (2) TWI817101B (en)
WO (4) WO2021197296A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021197296A1 (en) * 2020-01-19 2021-10-07 Topronin (Beijing) Education Technology Co., Ltd. Assisted reading method and apparatus, and storage medium and electronic device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927566B (en) * 2021-01-27 2023-01-03 Readboy Education Technology Co., Ltd. System and method for students to rephrase story content
CN113781272A (en) * 2021-08-13 2021-12-10 Hongen Perfect (Beijing) Education Technology Development Co., Ltd. Reading training method, device and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010117529A (en) * 2008-11-12 2010-05-27 Fujitsu Ltd Device, method and program for generating voice reading sentence
CN106952513A (en) * 2017-03-30 2017-07-14 河南工学院 A kind of system and method that immersion English study is carried out using free time
CN108039180A (en) * 2017-12-11 2018-05-15 广东小天才科技有限公司 A kind of achievement of childrenese expression practice learns method and microphone apparatus
CN108053839A (en) * 2017-12-11 2018-05-18 广东小天才科技有限公司 A kind of methods of exhibiting and microphone apparatus of language exercise achievement

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI238379B (en) * 2001-11-16 2005-08-21 Inventec Besta Co Ltd System and method for language reiterating and correcting pronunciation in a portable electronic device
CN1909016A (en) * 2005-08-07 2007-02-07 黄金富 Portable data processing device with rereading function and its rereading method
JP4840051B2 (en) * 2006-09-28 2011-12-21 カシオ計算機株式会社 Speech learning support apparatus and speech learning support program
KR100900081B1 (en) * 2008-06-18 2009-05-28 윤창훈 Language learning control method
CN101630448B (en) * 2008-07-15 2011-07-27 上海启态网络科技有限公司 Language learning client and system
US9679496B2 (en) * 2011-12-01 2017-06-13 Arkady Zilberman Reverse language resonance systems and methods for foreign language acquisition
CN103942990A (en) * 2013-01-23 2014-07-23 郭毓斌 Language learning device
KR101487005B1 (en) * 2013-11-13 2015-01-29 (주)위버스마인드 Learning method and learning apparatus of correction of pronunciation by input sentence
CN106896985B (en) * 2017-02-24 2020-06-05 百度在线网络技术(北京)有限公司 Method and device for switching reading information and reading information
US20190005030A1 (en) * 2017-06-30 2019-01-03 EverMem, Inc. System and method for providing an intelligent language learning platform
US20190114938A1 (en) * 2017-10-12 2019-04-18 Krisann Pergande Sound Symbols Speaking and Reading Approach
CN108231090A (en) * 2018-01-02 2018-06-29 深圳市酷开网络科技有限公司 Text reading level appraisal procedure, device and computer readable storage medium
CN108257615A (en) * 2018-01-15 2018-07-06 北京物灵智能科技有限公司 A kind of user language appraisal procedure and system
CN109756770A (en) * 2018-12-10 2019-05-14 华为技术有限公司 Video display process realizes word or the re-reading method and electronic equipment of sentence
CN109410664B (en) * 2018-12-12 2021-01-26 广东小天才科技有限公司 Pronunciation correction method and electronic equipment
CN109712443A (en) * 2019-01-02 2019-05-03 北京儒博科技有限公司 A kind of content is with reading method, apparatus, storage medium and electronic equipment
CN110136747A (en) * 2019-05-16 2019-08-16 上海流利说信息技术有限公司 A kind of method, apparatus, equipment and storage medium for evaluating phoneme of speech sound correctness
CN111459453A (en) * 2020-01-19 2020-07-28 托普朗宁(北京)教育科技有限公司 Reading assisting method and device, storage medium and electronic equipment


Also Published As

Publication number Publication date
TWI817101B (en) 2023-10-01
WO2021197296A1 (en) 2021-10-07
WO2021197301A1 (en) 2021-10-07
CN111459453A (en) 2020-07-28
WO2021197300A1 (en) 2021-10-07
WO2021197299A1 (en) 2021-10-07
TW202139180A (en) 2021-10-16
TW202139153A (en) 2021-10-16
CN111459449A (en) 2020-07-28
CN111459448A (en) 2020-07-28
TW202139155A (en) 2021-10-16
TW202139154A (en) 2021-10-16

Similar Documents

Publication Publication Date Title
CN111443890A (en) Reading assisting method and device, storage medium and electronic equipment
Griol et al. An architecture to develop multimodal educative applications with chatbots
CN108874935B (en) Review content recommendation method based on voice search and electronic equipment
CN109801527B (en) Method and apparatus for outputting information
Kafle et al. Predicting the understandability of imperfect english captions for people who are deaf or hard of hearing
Nurmukhamedov et al. Corpus-based vocabulary analysis of English podcasts
Newton et al. Novel accent perception in typically-developing school-aged children
Yoshino et al. Japanese dialogue corpus of information navigation and attentive listening annotated with extended iso-24617-2 dialogue act tags
CN114170856B (en) Machine-implemented hearing training method, apparatus, and readable storage medium
Asadi et al. Quester: A Speech-Based Question Answering Support System for Oral Presentations
Walsh et al. Speech enabled e-learning for adult literacy tutoring
JP6656529B2 (en) Foreign language conversation training system
KR20190070682A (en) System and method for constructing and providing lecture contents
KR100687441B1 (en) Method and system for evaluation of foring language voice
KR20140075994A (en) Apparatus and method for language education by using native speaker's pronunciation data and thought unit
McRoberts et al. Exploring Interactions with Voice-Controlled TV
TWI839603B (en) Method, device, storage medium and electronic equipment for assisting reading
Abreu et al. Voice Interaction on TV: Analysis of natural language interaction models
TWI839604B (en) Method, device, storage medium and electronic equipment for assisting reading
KR20140073768A (en) Apparatus and method for language education by using native speaker's pronunciation data and thoughtunit
RU2807436C1 (en) Interactive speech simulation system
CN113781854B (en) Group discussion method and system for automatic remote teaching
CN114155479B (en) Language interaction processing method and device and electronic equipment
Hirsch et al. RehaLingo-towards a speech training system for aphasia
KR101478912B1 (en) Language Acquisition System and Operating Method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination