CN111741162B - Recitation prompting method, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111741162B
CN111741162B
Authority
CN
China
Prior art keywords
information
recited
recitation
brightness
scene image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010488043.4A
Other languages
Chinese (zh)
Other versions
CN111741162A (en)
Inventor
彭婕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN202010488043.4A
Publication of CN111741162A
Application granted
Publication of CN111741162B



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use

Abstract

The application relates to the technical field of electronic devices, and discloses a recitation prompting method, an electronic device and a computer readable storage medium. The method includes: acquiring a scene image corresponding to information to be recited; controlling a display screen of the electronic device to output the scene image at a first brightness, and capturing recitation voice input by a user through an audio capture device of the electronic device; and when the recitation voice is detected to be the same as pre-stored voice corresponding to the information to be recited, outputting the information to be recited and adjusting the brightness of the scene image to a second brightness, where the second brightness is greater than the first brightness. By implementing the embodiments of the application, the scene image corresponding to the sentence to be recited can be output so that the user of the electronic device can recall the information to be recited from the scene image, and when a correct recitation is detected the user receives timely feedback through the output of the recited information and the brightening of the scene image, thereby improving the efficiency of reciting articles.

Description

Recitation prompting method, electronic equipment and computer readable storage medium
Technical Field
The application relates to the technical field of electronic equipment, in particular to a recitation prompting method, electronic equipment and a computer readable storage medium.
Background
At present, students usually need to recite articles assigned in their textbooks, and when doing so they typically rely on rote memorization of the article's content. In practice, however, rote memorization is difficult for students, which makes their recitation of articles inefficient.
Disclosure of Invention
The embodiment of the application discloses a recitation prompting method, electronic equipment and a computer readable storage medium, which can improve article recitation efficiency.
A first aspect of an embodiment of the present application discloses a recitation prompting method, including:
acquiring a scene image corresponding to the information to be recited;
controlling a display screen of the electronic equipment to output the scene image at a first brightness, and acquiring recitation voice input by a user through audio acquisition equipment of the electronic equipment;
and when the reciting voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, outputting the information to be recited, and adjusting the brightness of the scene image to be second brightness, wherein the second brightness is greater than the first brightness.
As an alternative implementation manner, in the first aspect of the embodiment of the present application, after the controlling the display screen of the electronic device to output the scene image with the first brightness and capturing the recitation voice input by the user through the audio capturing device of the electronic device, the method further includes:
acquiring the current time length of the scene image output by the display screen of the electronic equipment at the first brightness;
when the current time length is detected to reach a first preset time length, detecting the language type corresponding to the information to be recited;
when the language type is detected to be a foreign language type, acquiring translation information corresponding to the information to be recited;
outputting a recitation prompt comprising the translation information, the recitation prompt being used for prompting a user of the electronic equipment to recite the information to be recited.
As an optional implementation manner, in the first aspect of the embodiment of the present application, after acquiring the current time length of the scene image output by the display screen of the electronic device at the first brightness, the method further includes:
when the current time length is detected to reach a second preset time length, the information to be recited is associated with the unsuccessful recitation identification, wherein the second preset time length is longer than the first preset time length;
when the end of the user's recitation is detected, analyzing the information to be recited that is associated with the unsuccessful recitation identifier, to obtain a recitation analysis result, where the recitation analysis result includes at least a recitation error rate and recitation suggestions;
and outputting the recitation analysis result.
As an optional implementation manner, in the first aspect of the embodiment of the present application, the acquiring a scene image corresponding to information to be recited includes:
acquiring information to be recited in an article;
identifying at least one semantic information from the information to be recited;
obtaining a pre-stored semantic image corresponding to each semantic information;
and generating a scene image containing each semantic image.
As an optional implementation manner, in the first aspect of the embodiments of the present application, when the number of identified pieces of semantic information is plural, after the identifying of at least one piece of semantic information from the information to be recited, the method further includes:
performing semantic recognition on the information to be recited to obtain a semantic relation between any two semantic information;
the generating of the scene image containing each semantic image comprises:
and generating a scene image containing each semantic image according to the semantic relation between any two semantic information.
As an optional implementation manner, in the first aspect of the embodiments of the present application, the information to be recited includes at least one unit to be recited, the scene image includes an image area corresponding to each unit to be recited, and when it is detected that the recitation voice is the same as the pre-stored voice corresponding to the information to be recited, the outputting the information to be recited and adjusting the brightness of the scene image to the second brightness includes:
when it is detected that the pre-stored voice corresponding to any unit to be recited matches the recitation voice, determining a target image area corresponding to that unit to be recited from the scene image;
and outputting any one to-be-recited unit in the to-be-recited information, and adjusting the brightness of the target image area in the scene image to be a second brightness.
As an optional implementation manner, in the first aspect of the embodiments of the present application, after the capturing of the recitation voice input by the user through the audio capture device of the electronic device, and before the detecting that the recitation voice is the same as the pre-stored voice corresponding to the information to be recited, the outputting of the information to be recited, and the adjusting of the brightness of the scene image to the second brightness, the method further includes:
acquiring native-place information of a user of the electronic device;
acquiring pre-stored accent correction information corresponding to the native-place information;
and performing accent correction on the recitation speech through the accent correction information to obtain the corrected recitation speech.
A second aspect of an embodiment of the present application discloses an electronic device, including:
the acquiring unit is used for acquiring a scene image corresponding to the information to be recited;
the output unit is used for controlling a display screen of the electronic equipment to output the scene image with first brightness, and acquiring recitation voice input by a user through audio acquisition equipment of the electronic equipment;
and the adjusting unit is used for outputting the information to be recited when the recitation voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, and adjusting the brightness of the scene image to a second brightness, wherein the second brightness is greater than the first brightness.
A third aspect of the embodiments of the present application discloses another electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect.
A fourth aspect of embodiments of the present application discloses a computer-readable storage medium storing program code, where the program code includes instructions for performing some or all of the steps of any one of the methods of the first aspect.
A fifth aspect of embodiments of the present application discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present application discloses an application publishing platform configured to publish a computer program product, where, when the computer program product runs on a computer, the computer is caused to perform part or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
in the embodiment of the application, a scene image corresponding to the information to be recited is acquired; a display screen of the electronic device is controlled to output the scene image at a first brightness, and recitation voice input by a user is captured through an audio capture device of the electronic device; and when the recitation voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, the information to be recited is output and the brightness of the scene image is adjusted to a second brightness, the second brightness being greater than the first brightness. It can be seen that, by implementing the embodiments of the application, the scene image corresponding to the sentence to be recited can be output so that the user of the electronic device can recall the information to be recited from the scene image, and when it is detected that the user has recited correctly, timely feedback can be given by outputting the information to be recited and brightening the scene image, improving the efficiency of reciting the article.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart diagram of a recitation prompting method disclosed in an embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario in which a recitation prompting method disclosed in an embodiment of the present application is applicable;
FIG. 3-a is a schematic diagram of an application scenario in which another recitation prompting method disclosed in the embodiments of the present application is applicable;
FIG. 3-b is a schematic diagram of an application scenario in which another recitation prompting method disclosed in the embodiments of the present application is applicable;
FIG. 3-c is a schematic diagram of an application scenario in which another recitation prompting method disclosed in the embodiments of the present application is applicable;
FIG. 4 is a schematic flow chart diagram of another recitation prompting method disclosed in embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of another recitation prompting method disclosed in embodiments of the present application;
fig. 6 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 7 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiments of the application disclose a recitation prompting method, an electronic device and a computer readable storage medium, which can improve the efficiency of reciting articles by outputting the recited sentence and brightening the scene image to give the user timely feedback when a correct recitation is detected. Each is described in detail below.
Referring to fig. 1, fig. 1 is a schematic flow chart of a recitation prompting method disclosed in an embodiment of the present application. The recitation prompting method can be applied to an electronic device. For a better understanding of the method described in fig. 1, an application scenario to which it is applicable is introduced first.
Referring to fig. 2, fig. 2 is a schematic view of an application scenario to which the recitation prompting method shown in fig. 1 is applicable. In the scenario shown in fig. 2, the user recites the information to be recited using the electronic device; the electronic device can output the scene image corresponding to the information to be recited through its display screen and use the scene image to prompt the user's recitation.
As shown in fig. 1, the recitation prompting method may include the following steps:
101. and acquiring a scene image corresponding to the information to be recited.
In the embodiment of the application, the scene image may be an image matched to the semantics of the information to be recited, or an image of the text of the information to be recited itself. The information to be recited may be one or more characters, one or more words, or any sentence selected from a passage of text; the embodiment of the application is not limited in this respect. The language type corresponding to the information to be recited may include at least a Chinese type and/or a foreign language type. When the information to be recited is one or more characters, the scene image may contain an image corresponding to each character; when it is one or more words, the scene image may contain an image corresponding to each word; and when it is a sentence selected from a passage, the nouns contained in the sentence can be identified, the scene image may contain an image corresponding to each noun, and the scene image may also show the relationship between those images. For example, when the information to be recited is "Butterfly is on the wall", the scene image may include a butterfly image corresponding to "Butterfly" and a wall image corresponding to "wall", and the butterfly image may be placed within the wall image to show the relationship that the "Butterfly" is on the "wall".
102. And controlling a display screen of the electronic equipment to output the scene image with first brightness, and acquiring recitation voice input by a user through audio acquisition equipment of the electronic equipment.
In the embodiment of the application, the electronic device may be any device with a display screen, such as a smartphone, a learning tablet or a notebook computer. The scene image can be output and displayed on the display screen, and the part (or all) of the information to be recited that the user has recited correctly can also be output and displayed on the display screen.
In the embodiment of the application, the audio capture device may be a device, such as a microphone, capable of capturing sound in the environment of the electronic device. It may be built into the electronic device or may be a standalone audio capture device; a standalone device can establish a communication connection with the electronic device, capture sound in the environment of the electronic device, and transmit the captured sound to the electronic device over that connection, so that the electronic device can identify the recitation voice input by the user from the received sound.
Furthermore, the sound captured in the environment of the electronic device usually includes, in addition to the recitation voice input by the user, noise in the environment (such as traffic noise, noise from running appliances, or speech input by people other than the user of the electronic device). To ensure the accuracy of the identified recitation voice, the electronic device can therefore perform noise reduction processing on the captured sound.
Optionally, the electronic device may reduce noise in the captured sound as follows: it performs human-voice recognition on the captured sound to isolate the vocal component; it acquires voiceprint information pre-stored by the user of the electronic device; and it identifies, from the isolated vocal component, the recitation voice matching that voiceprint information. In this way the non-vocal noise is removed first, and the recitation voice matching the voiceprint of the user of the electronic device is then identified from the remaining vocal sound, ensuring that the obtained recitation voice was input by the user of the electronic device and improving its accuracy.
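For illustration only, the following is a minimal Python sketch of this denoise-then-voiceprint check. The energy-floor denoiser, the FFT-based voiceprint embedding and the 0.8 similarity threshold are assumptions introduced for the example; the disclosure does not specify any of them.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; the disclosure names no value

def reduce_noise(audio: np.ndarray) -> np.ndarray:
    """Stand-in denoiser: zero out samples below a simple energy floor."""
    if audio.size == 0:
        return audio
    floor = 0.1 * np.max(np.abs(audio))
    return np.where(np.abs(audio) > floor, audio, 0.0)

def extract_voiceprint(audio: np.ndarray, dims: int = 16) -> np.ndarray:
    """Stand-in voiceprint: low-frequency spectral magnitudes, unit-normalized."""
    spectrum = np.abs(np.fft.rfft(audio, n=2 * dims))[:dims]
    norm = np.linalg.norm(spectrum)
    return spectrum / norm if norm > 0 else spectrum

def isolate_recitation_speech(raw_audio: np.ndarray,
                              stored_voiceprint: np.ndarray):
    """Denoise the captured sound, then keep it only if it matches the
    pre-stored voiceprint of the device's user (otherwise discard it)."""
    vocal = reduce_noise(raw_audio)
    score = float(np.dot(extract_voiceprint(vocal), stored_voiceprint))
    return vocal if score >= SIMILARITY_THRESHOLD else None
```

The two-stage order mirrors the text above: ambient noise is stripped first, and only then is the remaining vocal sound tested against the user's voiceprint.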
In the embodiment of the application, the first brightness may be a preset, relatively low brightness. Outputting the scene image at the first brightness lets the user of the electronic device see the scene image corresponding to the current information to be recited, and also signals to the user that a scene image output at the first brightness corresponds to information to be recited that has not yet been recited correctly.
As an optional implementation manner, after step 102, the following steps may also be performed:
acquiring the current time length of a display screen of the electronic equipment for outputting a scene image at a first brightness;
when the current time length is detected to reach a first preset time length, detecting the language type corresponding to the information to be recited;
when the language type is detected to be a foreign language type, acquiring translation information corresponding to the information to be recited;
and outputting a recitation prompt containing the translation information, wherein the recitation prompt is used for prompting a user of the electronic equipment to recite the information to be recited.
By implementing this implementation, the duration for which the scene image is output at the first brightness can be timed to obtain the current duration. If the electronic device keeps outputting the scene image at the first brightness, it can be assumed that the user of the electronic device has not recalled the information to be recited corresponding to the scene image, which is why the brightness of the scene image has not been adjusted to the second brightness. When the duration of unsuccessful recitation is detected to reach the first preset duration, the device can therefore prompt the user according to the language type of the information to be recited, helping the user complete the recitation more quickly.
In the embodiment of the application, the language type corresponding to the information to be recited may be a Chinese type or a foreign language type. The Chinese type may include at least a vernacular Chinese (baihua) type and a classical Chinese (wenyan) type, and the foreign language type may correspond to any language other than the language commonly used by the user of the electronic device. For example, when the user's common language is Chinese, the foreign language type may be English, Russian, French, and so on; when the user's common language is English, it may be Chinese, Russian, French, and so on. Accordingly, the language of the translation information corresponding to the information to be recited can be the user's common language: when the common language of the user of the electronic device is Chinese, the translation information can also be in Chinese; when it is English, the translation information can also be in English. This makes the translation information contained in the prompt easier for the user to understand.
Optionally, when the language type is detected to be the classical Chinese type, if the user needs a prompt for the information to be recited, the electronic device may acquire vernacular translation information corresponding to the classical Chinese information to be recited. The vernacular translation information may be obtained by the electronic device translating the classical Chinese text, the resulting translation being the vernacular translation information. Further, a recitation prompt containing the vernacular translation can be output, so that the user can recite on the basis of a translation rendered in everyday language.
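The timing logic above can be sketched as follows; the 30-second threshold and the language-type labels are assumptions for illustration, since the disclosure speaks only of a "first preset duration".

```python
FIRST_PRESET_SECONDS = 30  # assumed value for the first preset duration

def maybe_output_prompt(elapsed_s: float, language_type: str,
                        translation: str | None) -> str | None:
    """Return a recitation prompt once the scene image has stayed dim too long."""
    if elapsed_s < FIRST_PRESET_SECONDS:
        return None  # the user may still recall the sentence unaided
    if language_type == "foreign" and translation:
        # hint rendered in the user's everyday language
        return f"Recitation prompt: the sentence means '{translation}'"
    if language_type == "classical_chinese" and translation:
        # hint rendered as a vernacular (baihua) translation
        return f"Recitation prompt: in vernacular terms, '{translation}'"
    return None
```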
Further, after obtaining the current duration of the scene image output by the display screen of the electronic device at the first brightness, the following steps may be further performed:
when the current time length is detected to reach a second preset time length, the information to be recited is associated with the unsuccessful recitation identification, and the second preset time length is longer than the first preset time length;
when the end of the user's recitation is detected, analyzing the information to be recited that is associated with the unsuccessful recitation identifier, to obtain a recitation analysis result, where the recitation analysis result includes at least a recitation error rate and recitation suggestions;
and outputting recitation analysis results.
By implementing this implementation, when the duration of unsuccessful recitation is detected to reach the second preset duration, it can be assumed that the user has failed to memorize the information to be recited; a recitation analysis result can therefore be generated from the user's recitation performance, enabling the user to deepen his or her memory of the information to be recited according to that result.
In the embodiment of the application, if the user recites a piece of information to be recited successfully, it can be associated with a successful recitation identifier; if not, it can be associated with an unsuccessful recitation identifier. After all the information to be recited has been attempted, the recitation error rate can be calculated from the successful and unsuccessful recitation identifiers, and the information associated with the unsuccessful identifiers can be analyzed to obtain recitation suggestions, so that the user can successfully recite the previously unsuccessful information again according to those suggestions, which improves the user's recitation efficiency.
In the embodiment of the application, when it is detected that the current duration reaches the second preset duration, the electronic device may further detect whether any information to be recited has not yet been attempted. If so, the electronic device may return to step 101; if not, it may regard the user's recitation as finished and analyze the information associated with the unsuccessful recitation identifiers to obtain the recitation analysis result, so that the electronic device outputs every piece of information the user needs to recite.
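A minimal sketch of how the recitation analysis result might be assembled from the success and failure identifiers; the suggestion wording is an assumption, since the disclosure only states that the result contains at least an error rate and recitation suggestions.

```python
def analyze_recitation(session: list[tuple[str, bool]]) -> dict:
    """session: (information_to_recite, recited_successfully) pairs,
    one per sentence attempted in this recitation session."""
    failed = [text for text, ok in session if not ok]
    error_rate = len(failed) / len(session) if session else 0.0
    # assumed suggestion policy: re-drill every unsuccessfully recited sentence
    suggestions = [f"Review and recite again: {text}" for text in failed]
    return {"error_rate": error_rate, "suggestions": suggestions}

# e.g. analyze_recitation([("Butterfly is on the wall", False),
#                          ("The cat sat on the mat", True)])
# -> {'error_rate': 0.5,
#     'suggestions': ['Review and recite again: Butterfly is on the wall']}
```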
103. And when the reciting voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, outputting the information to be recited, and adjusting the brightness of the scene image to be second brightness, wherein the second brightness is greater than the first brightness.
In the embodiment of the application, the electronic device can pre-store the voice corresponding to the information to be recited and compare the captured recitation voice with it. If the two are the same, the user can be regarded as having recited the information to be recited successfully; the information to be recited can then be output and the brightness of the entire scene image adjusted to the second brightness. Because the second brightness is greater than the first, the scene image output by the electronic device visibly changes from dim to bright, so the user of the electronic device perceives intuitively that the current information to be recited has been completed successfully.
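The core of step 103 can be sketched as follows; the brightness values and the text-equality comparison (standing in for the voice comparison after speech recognition) are illustrative assumptions.

```python
FIRST_BRIGHTNESS = 0.3   # assumed dim level for the unrecited scene image
SECOND_BRIGHTNESS = 0.9  # assumed bright level; must exceed FIRST_BRIGHTNESS

class Display:
    """Minimal stand-in for the device's display screen."""
    def __init__(self) -> None:
        self.scene_brightness = FIRST_BRIGHTNESS
        self.shown_text = ""

    def show_text(self, text: str) -> None:
        self.shown_text = text

    def set_scene_brightness(self, level: float) -> None:
        self.scene_brightness = level

def on_recitation(recognized_text: str, target_text: str,
                  display: Display) -> bool:
    """If the recognized recitation matches the stored sentence, output the
    sentence and brighten the whole scene image from dim to bright."""
    if recognized_text.strip().lower() == target_text.strip().lower():
        display.show_text(target_text)
        display.set_scene_brightness(SECOND_BRIGHTNESS)
        return True
    return False
```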
To better understand the recitation prompting method described in fig. 1, a further application scenario to which it is applicable is introduced below.
Referring to fig. 3-a, fig. 3-b and fig. 3-c together, these figures are schematic views of application scenarios for the recitation prompting method shown in fig. 1 at different moments in time. The scenarios shown in fig. 3-a, fig. 3-b and fig. 3-c may include an electronic device a, a scene image b and an output area c for the information to be recited, and the information to be recited may be preset as "Butterfly is on the wall".
In the application scenario shown in fig. 3-a, the content output by the electronic device a is what needs to be output before the user begins reciting. The electronic device a can recognize the acquired information to be recited and the number of words it contains, and output, through its display screen, the output area c for the information to be recited; area c may contain underlines matching the number of words, prompting the user how many words the information to be recited contains. The electronic device a can also identify the semantic information contained in the information to be recited: here it identifies the semantic information "butterfly" and "wall", acquires the butterfly semantic image b1 corresponding to "butterfly" and the wall semantic image b2 corresponding to "wall", and outputs the scene image b containing both b1 and b2, the scene image b being output at the first brightness.
In the application scenario shown in fig. 3-b, the content output by the electronic device a is what needs to be output when the user has recited part of the information to be recited correctly. The electronic device a acquires the recitation voice captured by the audio capture device; if it identifies speech matching some of the words in the information to be recited, it can output the corresponding part of the information and adjust the brightness of the image corresponding to that part to the second brightness. That is, if the electronic device a identifies, in the captured recitation voice, speech matching the word "Butterfly", it can display "Butterfly" in the output area c and adjust the brightness of the butterfly semantic image b1 corresponding to "Butterfly" in the scene image b to the second brightness, which is greater than the first brightness. By outputting "Butterfly" and brightening the butterfly semantic image b1, the user can be shown that "Butterfly" has been recited correctly.
In the application scenario shown in fig. 3-c, the content output by the electronic device a is what needs to be output when the user has recited all of the information to be recited correctly. If the electronic device a identifies, in the captured recitation voice, speech matching all of the information to be recited, it can display the complete sentence "Butterfly is on the wall" in the output area c and adjust the brightness of both the butterfly semantic image b1 corresponding to "Butterfly" and the wall semantic image b2 corresponding to "wall" to the second brightness, which is greater than the first brightness. By outputting the complete sentence and brightening both images, the user can be shown that the whole of the information to be recited has been recited correctly, so the user learns the current recitation status in time.
In the embodiment of the application, when it is detected that the user has recited correctly, timely feedback can be given by outputting the recited sentence and brightening the scene image, thereby improving the efficiency of reciting articles. In addition, the method described in this embodiment improves the accuracy of the captured recitation voice, can assist the user in completing the recitation more quickly, and enables the user to deepen the memory of the information to be recited according to the recitation analysis result.
Referring to fig. 4, fig. 4 is a flow chart illustrating another recitation prompting method disclosed in the embodiment of the present application. As shown in fig. 4, the recitation prompting method may include the following steps:
401. and acquiring information to be recited in the article.
In the embodiment of the application, the information to be recited can be any sentence of any article to be recited. When acquiring the information to be recited in an article, the electronic device can take each sentence of the article in turn as the information to be recited, following the order of the sentences in the article.
402. At least one semantic information is identified from the information to be recited.
In the embodiment of the present application, a piece of semantic information may be a noun contained in the information to be recited, so the information to be recited may contain one or more pieces of semantic information. For example, when the information to be recited is "Butterfly is on the wall", the nouns in the information to be recited are "Butterfly" and "wall", so two pieces of semantic information can be determined from it: "Butterfly" and "wall".
403. And acquiring a pre-stored semantic image corresponding to each semantic information.
In the embodiment of the application, semantic images corresponding to each piece of semantic information may be pre-stored in the electronic device. When the electronic device identifies one or more pieces of semantic information in the information to be recited, it can obtain the pre-stored semantic image corresponding to each piece. Referring to fig. 3-a, the semantic information "Butterfly" may correspond to the pre-stored butterfly semantic image b1, and the semantic information "wall" may correspond to the pre-stored wall semantic image b2.
404. A scene image is generated that contains the respective semantic images.
In this embodiment of the application, by implementing the above steps 401 to 404, one or more pieces of semantic information can be identified from the information to be recited, the semantic image corresponding to each piece can be acquired, and a scene image can be generated from those semantic images. The generated scene image can therefore contain all the semantic information in the information to be recited and present all of it to the user, ensuring the comprehensiveness of the semantic information contained in the scene image.
As an alternative implementation, when the number of identified pieces of semantic information is plural, after step 402 the following steps may further be performed:
performing semantic identification on the information to be recited to obtain a semantic relation between any two semantic information;
and, the manner of generating the scene image including each semantic image may specifically be:
and generating a scene image containing each semantic image according to the semantic relation between any two semantic information.
By implementing this implementation, the semantic relationship between any two pieces of semantic information can be acquired and embodied in the scene image, which increases the amount of information carried by the scene image and lets the user memorize the information to be recited more quickly by viewing it.
For a better understanding of the above implementation, refer again to fig. 3-a. The electronic device a can recognize that the information to be recited is "Butterfly is on the wall" and that the semantic relationship between the semantic information "Butterfly" and the semantic information "wall" is that the "Butterfly" is on the "wall". In the generated scene image containing each semantic image, the butterfly semantic image b1 can therefore be placed within the wall semantic image b2 to show this relationship, so that the user understands the meaning of the scene image more intuitively and memorizes the information to be recited more quickly.
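Steps 401 to 404, together with the relation-aware layout just described, might look like the following sketch; the image file names, the noun list and the "is on the" pattern check are assumptions for illustration.

```python
SEMANTIC_IMAGES = {  # assumed pre-stored library of semantic images
    "butterfly": "butterfly.png",  # cf. image b1 in fig. 3-a
    "wall": "wall.png",            # cf. image b2 in fig. 3-a
}

def identify_semantics(sentence: str) -> list[str]:
    """Keep the nouns for which a pre-stored semantic image exists."""
    words = sentence.lower().rstrip(".").split()
    return [w for w in words if w in SEMANTIC_IMAGES]

def compose_scene(sentence: str) -> dict:
    """Build a scene description from the semantic images; when the sentence
    expresses 'X is on the Y', nest X's image inside Y's image."""
    nouns = identify_semantics(sentence)
    scene = {"images": [SEMANTIC_IMAGES[n] for n in nouns], "placement": {}}
    if len(nouns) == 2 and " is on the " in sentence.lower():
        scene["placement"][nouns[0]] = f"inside {nouns[1]}"
    return scene

# compose_scene("Butterfly is on the wall")
# -> {'images': ['butterfly.png', 'wall.png'],
#     'placement': {'butterfly': 'inside wall'}}
```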
405. And controlling a display screen of the electronic equipment to output the scene image with first brightness, and acquiring recitation voice input by a user through audio acquisition equipment of the electronic equipment.
406. Native-place information of the user of the electronic device is acquired.
In the embodiment of the application, the electronic device can acquire the user's native-place information when the user logs in for the first time. Because users usually speak with a local accent, the electronic device can use the native-place information to acquire accent correction information corresponding to the user's accent.
407. Pre-stored accent correction information corresponding to the native-place information is acquired.
In the embodiment of the application, the electronic device can collect a large amount of speech from any given region and compare it with standard speech to generate accent correction information for that region; the generated accent correction information can correct speech input by users from that region toward a more standard pronunciation.
408. And performing accent correction on the recitation speech through the accent correction information to obtain the corrected recitation speech.
In the embodiment of the present application, by implementing the above steps 406 to 408, the native-place information of the user can be acquired and the captured recitation speech can be corrected for accent using the accent correction information corresponding to that native place, so that the corrected recitation speech conforms better to standard pronunciation, which further improves the accuracy of speech recognition.
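A sketch of the look-up-and-correct idea, using invented example regions and syllable substitutions; the actual correction tables would be built from the regional speech corpora described above, and every entry below is an assumption.

```python
ACCENT_CORRECTIONS = {  # assumed per-region tables; entries are illustrative
    "region_a": {"si": "shi", "ci": "chi"},  # e.g. restoring retroflex initials
    "region_b": {"lan": "nan"},              # e.g. restoring an n/l distinction
}

def correct_accent(syllables: list[str], native_place: str) -> list[str]:
    """Map region-specific pronunciations back toward standard ones."""
    table = ACCENT_CORRECTIONS.get(native_place, {})
    return [table.get(s, s) for s in syllables]

# correct_accent(["si", "jian"], "region_a") -> ["shi", "jian"]
```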
409. And when the reciting voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, outputting the information to be recited, and adjusting the brightness of the scene image to be second brightness, wherein the second brightness is greater than the first brightness.
In the embodiment of the application, when the fact that the user recites correctly is detected, the user can be fed back timely by brightening the sentence to be output and the scene image, and therefore the article reciting efficiency is improved. In addition, by implementing the method described in the embodiment of the application, the comprehensiveness of semantic information contained in the scene image is ensured. In addition, by implementing the method described in the embodiment of the application, the information to be recited can be recalled more quickly by viewing the scene image. In addition, the method described in the embodiment of the application can improve the accuracy of voice recognition.
Referring to fig. 5, fig. 5 is a flow chart illustrating another recitation prompting method disclosed in the embodiment of the present application. As shown in fig. 5, the recitation prompting method may include the following steps:
501. and acquiring a scene image corresponding to the information to be recited, wherein the information to be recited comprises at least one unit to be recited, and the scene image comprises image areas corresponding to the units to be recited.
In the embodiment of the application, a unit to be recited may be a unit of the information to be recited containing one or more words, and an image area may contain the image corresponding to its unit to be recited. When a unit to be recited contains only one word, its image area may contain the image corresponding to that word; when it contains several words, its image area may contain the images corresponding to each of those words.
502. And controlling a display screen of the electronic equipment to output the scene image with first brightness, and acquiring recitation voice input by a user through audio acquisition equipment of the electronic equipment.
503. When it is detected that the pre-stored voice corresponding to any one unit to be recited matches the recitation voice, the target image area corresponding to that unit is determined from the scene image.
In the embodiment of the application, because the user of the electronic device may recite the information to be recited only partly correctly, the recitation voice captured by the electronic device may match only some of the units to be recited. When the captured recitation voice matches only part of the units in the information to be recited, only those units can be output and only the target image areas corresponding to them brightened, so that the user learns in time which part of the information to be recited was wrong.
504. And outputting any one to-be-recited unit in the to-be-recited information, and adjusting the brightness of the target image area in the scene image to be a second brightness, wherein the second brightness is greater than the first brightness.
In the embodiment of the present application, by implementing the above steps 503 to 504, speech matching any one unit to be recited in the information to be recited can be recognized from the recitation voice input by the user; the electronic device can then output and display that unit and adjust the brightness of the target image area corresponding to it in the scene image. Prompting the user in this way about the part (or all) of the content recited correctly lets the user learn the recited content in time and improves the interactivity between the electronic device and the user.
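Steps 503 and 504 can be sketched as follows; the unit split, the image-area names and the substring match standing in for voice matching are assumptions for illustration.

```python
UNITS_TO_RECITE = ["Butterfly", "is on the wall"]  # assumed unit split
TARGET_AREAS = {                                   # unit -> image area
    "Butterfly": "area_b1",       # cf. butterfly image b1 in fig. 3
    "is on the wall": "area_b2",  # cf. wall image b2 in fig. 3
}

def reveal_matched_units(recognized: str, revealed: set[str]) -> set[str]:
    """Output each unit, and brighten its target image area, as soon as the
    recognized recitation speech matches it."""
    for unit in UNITS_TO_RECITE:
        if unit not in revealed and unit.lower() in recognized.lower():
            revealed.add(unit)
            print(f"show unit: {unit}")               # output the unit
            print(f"brighten: {TARGET_AREAS[unit]}")  # area -> second brightness
    return revealed

# reveal_matched_units("butterfly", set())
# -> shows 'Butterfly' and brightens area_b1, leaving area_b2 dim
```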
In the embodiment of the application, when it is detected that the user has recited correctly, timely feedback can be given by outputting the recited sentence and brightening the scene image, thereby improving the efficiency of reciting articles. In addition, the method described in this embodiment improves the interactivity between the electronic device and the user.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device may include an acquisition unit 601, an output unit 602, and an adjustment unit 603.
An acquiring unit 601, configured to acquire a scene image corresponding to the information to be recited.
The output unit 602 is configured to control a display screen of the electronic device to output the scene image acquired by the acquisition unit 601 with the first brightness, and acquire recitation voice input by the user through an audio acquisition device of the electronic device.
An adjusting unit 603, configured to output the information to be recited when it is detected that the recitation voice captured by the output unit 602 is the same as the pre-stored voice corresponding to the information to be recited acquired by the acquiring unit 601, and to adjust the brightness of the scene image to a second brightness, where the second brightness is greater than the first brightness.
In the embodiment of the application, when it is detected that the user has recited correctly, timely feedback can be given by outputting the recited sentence and brightening the scene image, thereby improving the efficiency of reciting articles.
Referring to fig. 7, fig. 7 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application. The electronic device shown in fig. 7 is optimized from the electronic device shown in fig. 6. Compared to the electronic device shown in fig. 6, the electronic device shown in fig. 7 may further include:
the duration obtaining unit 604 is configured to obtain a current duration of the scene image output by the display screen of the electronic device at the first brightness after the output unit 602 controls the display screen of the electronic device to output the scene image at the first brightness and the reciting voice input by the user is collected by the audio collecting device of the electronic device.
A detecting unit 605, configured to detect the language type corresponding to the information to be recited when it is detected that the current time length acquired by the time length acquiring unit 604 reaches a first preset time length.
A translation acquiring unit 606, configured to acquire translation information corresponding to the information to be recited when the detecting unit 605 detects that the language type is the foreign language type.
A prompt output unit 607 for outputting a recitation prompt containing the translation information acquired by the translation acquisition unit 606, the recitation prompt being used for prompting the user of the electronic device to recite the information to be recited.
In the embodiment of the application, the duration of outputting the scene image with the first brightness can be timed to obtain the current duration, if the electronic device continuously outputs the scene image with the first brightness, it can be considered that the user of the electronic device does not recite the information to be recited corresponding to the scene image, and therefore the electronic device does not adjust the brightness of the scene image to the second brightness, and further can prompt the user according to the language type of the information to be recited when the duration of detecting that the user fails to recite reaches the first preset duration, so as to assist the user to recite more quickly.
As an alternative implementation, the electronic device shown in fig. 7 may further include:
an associating unit 608, configured to associate the to-be-recited information with the unsuccessful recitation flag after the duration acquiring unit 604 acquires the current duration of the scene image output by the display screen of the electronic device at the first brightness, and when it is detected that the current duration reaches a second preset duration, where the second preset duration is greater than the first preset duration;
the analysis unit 609 is used for analyzing the information to be recited related to the unsuccessful recitation identification related to the association unit 608 when the user recitation end is detected, and obtaining recitation analysis results, wherein the recitation analysis results at least comprise the recitation error rate and recitation suggestion;
and a result output unit 610, configured to output the recitation analysis result obtained by the analysis unit 609.
By implementing this implementation, when the duration of unsuccessful recitation is detected to reach the second preset duration, it can be assumed that the user has failed to memorize the information to be recited; a recitation analysis result can therefore be generated from the user's recitation performance, enabling the user to deepen his or her memory of the information to be recited according to that result.
As an alternative implementation, the obtaining unit 601 of the electronic device shown in fig. 7 may include:
an acquiring subunit 6011, configured to acquire information to be recited in an article;
an identifying subunit 6012, configured to identify at least one semantic information from the information to be recited acquired by the acquiring subunit 6011;
an obtaining subunit 6011, configured to obtain a pre-stored semantic image corresponding to each piece of semantic information identified by the identifying subunit 6012;
a generating subunit 6013, configured to generate a scene image including each semantic image acquired by the acquiring subunit 6011.
By implementing this implementation, one or more pieces of semantic information can be identified from the information to be recited, the semantic image corresponding to each piece can be acquired, and a scene image can be generated from those semantic images, so that the generated scene image contains all the semantic information in the information to be recited and presents all of it to the user, ensuring the comprehensiveness of the semantic information contained in the scene image.
As an alternative implementation, the electronic device shown in fig. 7 may further include:
the identifying unit 611, configured to perform semantic identification on the information to be recited when the number of pieces of semantic information identified by the identifying subunit 6012 is multiple and after at least one piece of semantic information is identified from the information to be recited, to obtain a semantic relationship between any two pieces of semantic information;
the manner of generating sub-unit 6013 to generate a scene image including each semantic image may specifically be:
the scene image including each semantic image is generated based on the semantic relationship between any two pieces of semantic information obtained by the identification unit 611.
By implementing this implementation, the semantic relationship between any two pieces of semantic information can be acquired and embodied in the scene image, which increases the amount of information carried by the scene image and lets the user memorize the information to be recited more quickly by viewing it.
As an optional implementation manner, the information to be recited includes at least one unit to be recited, the scene image includes an image area corresponding to each unit to be recited, and the adjusting unit 603 of the electronic device shown in fig. 7 may include:
a determination sub-unit 6031 that determines a target image area corresponding to any one unit to be recited from the scene image when it is detected that the prestored voice corresponding to any one unit to be recited matches the recitation voice;
an adjusting sub-unit 6032 for outputting any one of the to-be-recited units of the to-be-recited information and adjusting the luminance of the target image area determined by the determining sub-unit 6031 in the scene image to the second luminance.
By implementing this implementation, speech matching any one unit to be recited in the information to be recited can be recognized from the recitation voice input by the user; the electronic device can then output and display that unit and adjust the brightness of the target image area corresponding to it in the scene image. Prompting the user in this way about the part (or all) of the content recited correctly lets the user learn the recited content in time and improves the interactivity between the electronic device and the user.
As an alternative implementation, the electronic device shown in fig. 7 may further include:
an information acquiring unit 612, configured to acquire the native-place information of the user of the electronic device after the output unit 602 captures the recitation voice input by the user through the audio capture device of the electronic device and before the adjusting unit 603 detects that the recitation voice is the same as the pre-stored voice corresponding to the information to be recited, outputs the information to be recited, and adjusts the brightness of the scene image to the second brightness;
the information acquiring unit 612 is further configured to acquire pre-stored accent correction information corresponding to the native-place information;
and a correcting unit 613, configured to perform accent correction on the recitation voice using the accent correction information acquired by the information acquiring unit 612, to obtain the corrected recitation voice.
By implementing this implementation, the native-place information of the user can be acquired and the captured recitation voice corrected for accent using the accent correction information corresponding to that native place, so that the corrected recitation voice conforms better to standard pronunciation and the accuracy of speech recognition is improved.
In the embodiment of the application, when it is detected that the user has recited correctly, timely feedback can be given by outputting the recited sentence and brightening the scene image, thereby improving the efficiency of reciting articles. In addition, the electronic device shown in the embodiments of the application can assist the user in completing the recitation more quickly, enables the user to deepen the memory of the information to be recited according to the recitation analysis result, ensures the comprehensiveness of the semantic information contained in the scene image, lets the user recall the information to be recited more quickly by viewing the scene image, improves the interactivity between the electronic device and the user, and improves the accuracy of speech recognition.
Referring to fig. 8, fig. 8 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application. As shown in fig. 8, the electronic device may include:
a memory 801 in which executable program code is stored;
a processor 802 coupled with the memory 801;
wherein the processor 802 calls the executable program code stored in the memory 801 to perform some or all of the steps of the methods in the above method embodiments.
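For concreteness, the sketch below shows, in Python and for illustration only, the overall flow the processor 802 would drive when executing the method steps; show_image, capture_audio, and recognize are assumed stand-ins for the device's display and audio-acquisition interfaces, and the brightness values are arbitrary.

```python
FIRST_BRIGHTNESS = 0.3   # assumed dimmed level for the scene image
SECOND_BRIGHTNESS = 1.0  # assumed full level after a correct recitation

def recitation_prompt(info_to_recite: str, scene_image, prestored_text: str,
                      show_image, capture_audio, recognize) -> None:
    # Output the scene image at the (lower) first brightness.
    show_image(scene_image, brightness=FIRST_BRIGHTNESS)
    # Acquire the user's recitation voice and convert it to text.
    recited_text = recognize(capture_audio())
    # On a match, output the information to be recited and raise the
    # scene image to the (higher) second brightness.
    if recited_text == prestored_text:
        print(info_to_recite)
        show_image(scene_image, brightness=SECOND_BRIGHTNESS)
```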
The embodiments of the present application also disclose a computer-readable storage medium that stores program code, where the program code comprises instructions for executing some or all of the steps of the methods in the above method embodiments.
The embodiments of the present application also disclose a computer program product which, when run on a computer, causes the computer to execute some or all of the steps of the methods in the above method embodiments.
The embodiments of the present application also disclose an application publishing platform for publishing a computer program product which, when run on a computer, causes the computer to execute some or all of the steps of the methods in the above method embodiments.
It should be appreciated that reference throughout this specification to "an embodiment of the present application" means that a particular feature, structure, or characteristic described in connection with that embodiment is included in at least one embodiment of the present application. Thus, appearances of the phrase "in the embodiments of the present application" in various places throughout the specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required by this application.
In the various embodiments of the present application, it should be understood that the magnitude of the sequence numbers of the processes described above does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In addition, the terms "system" and "network" are often used interchangeably herein. It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A and can be determined from A. It should also be understood, however, that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If implemented as a software functional unit and sold or used as a stand-alone product, the integrated unit may be stored in a computer-accessible memory. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute some or all of the steps of the above-described methods of the embodiments of the present application.
The recitation prompting method, the electronic device, and the computer-readable storage medium disclosed in the embodiments of the present application are described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the descriptions of the above embodiments are intended only to help understand the method and its core ideas. Meanwhile, those skilled in the art may make variations to the specific implementations and the scope of application according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A recitation prompting method, comprising:
acquiring a scene image corresponding to the information to be recited;
controlling a display screen of an electronic device to output the scene image at a first brightness, and acquiring recitation voice input by a user through an audio acquisition device of the electronic device;
and when the recitation voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, outputting the information to be recited, and adjusting the brightness of the scene image to a second brightness, wherein the second brightness is greater than the first brightness.
2. The method according to claim 1, wherein after controlling the display screen of the electronic device to output the scene image at the first brightness and acquiring the recitation voice input by the user through the audio acquisition device of the electronic device, the method further comprises:
acquiring the current time length for which the display screen of the electronic device has output the scene image at the first brightness;
when the current time length is detected to reach a first preset time length, detecting the language type corresponding to the information to be recited;
when the language type is detected to be a foreign language type, acquiring translation information corresponding to the information to be recited;
outputting a recitation prompt comprising the translation information, the recitation prompt being used for prompting the user of the electronic device to recite the information to be recited.
3. The method according to claim 2, wherein after the acquiring of the current time length for which the display screen of the electronic device has output the scene image at the first brightness, the method further comprises:
when it is detected that the current time length reaches a second preset time length, associating the information to be recited with an unsuccessful-recitation identifier, wherein the second preset time length is longer than the first preset time length;
when the end of the user's recitation is detected, analyzing the information to be recited associated with the unsuccessful-recitation identifier to obtain a recitation analysis result, wherein the recitation analysis result at least comprises a recitation error rate and recitation suggestions;
and outputting the recitation analysis result.
4. The method according to any one of claims 1 to 3, wherein the acquiring of the scene image corresponding to the information to be recited comprises:
acquiring the information to be recited from an article;
identifying at least one item of semantic information from the information to be recited;
obtaining a pre-stored semantic image corresponding to each item of semantic information;
and generating a scene image containing each semantic image.
5. The method according to claim 4, wherein, when the number of identified items of semantic information is plural, after the identifying of the at least one item of semantic information from the information to be recited, the method further comprises:
performing semantic recognition on the information to be recited to obtain a semantic relation between any two items of semantic information;
the generating of the scene image containing each semantic image comprises:
and generating the scene image containing each semantic image according to the semantic relation between any two items of semantic information.
6. The method according to claim 1, wherein the information to be recited comprises at least one unit to be recited, the scene image comprises an image area corresponding to each unit to be recited, and the outputting of the information to be recited and the adjusting of the brightness of the scene image to the second brightness when the recitation voice is detected to be the same as the pre-stored voice corresponding to the information to be recited comprise:
when it is detected that the pre-stored voice corresponding to any unit to be recited matches the recitation voice, determining a target image area corresponding to that unit from the scene image;
and outputting that unit of the information to be recited, and adjusting the brightness of the target image area in the scene image to the second brightness.
7. The method according to any one of claims 1 to 3 and 6, wherein after the acquiring of the recitation voice input by the user through the audio acquisition device of the electronic device, and before the outputting of the information to be recited and the adjusting of the brightness of the scene image to the second brightness when the recitation voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, the method further comprises:
acquiring native place information of the user of the electronic device;
acquiring pre-stored accent correction information corresponding to the native place information;
and performing accent correction on the recitation voice using the accent correction information to obtain a corrected recitation voice.
8. An electronic device, comprising:
the acquiring unit is used for acquiring a scene image corresponding to the information to be recited;
the output unit is used for controlling a display screen of the electronic device to output the scene image at a first brightness, and acquiring recitation voice input by a user through an audio acquisition device of the electronic device;
and the adjusting unit is used for outputting the information to be recited when the recitation voice is detected to be the same as the pre-stored voice corresponding to the information to be recited, and adjusting the brightness of the scene image to a second brightness, wherein the second brightness is greater than the first brightness.
9. An electronic device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the recitation prompting method of any of claims 1-7.
10. A computer-readable storage medium storing a computer program that causes a computer to execute the recitation prompting method of any of claims 1-7.
CN202010488043.4A 2020-06-01 2020-06-01 Recitation prompting method, electronic equipment and computer readable storage medium Active CN111741162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010488043.4A CN111741162B (en) 2020-06-01 2020-06-01 Recitation prompting method, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111741162A (en) 2020-10-02
CN111741162B (en) 2021-08-20

Family

ID=72648103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010488043.4A Active CN111741162B (en) 2020-06-01 2020-06-01 Recitation prompting method, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111741162B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102280048A (en) * 2011-08-13 2011-12-14 德州学院 Auxiliary English word memorizing device and using method thereof
CN103123750A (en) * 2011-11-18 2013-05-29 英业达股份有限公司 Digital picture frame combining language studying function and display method thereof
CN103824481A (en) * 2014-02-28 2014-05-28 广东小天才科技有限公司 Method and device for detecting user recitation
CN105426511A (en) * 2015-11-30 2016-03-23 广东小天才科技有限公司 Recitation assistance method and apparatus
CN107808556A (en) * 2017-10-31 2018-03-16 上海市格致中学 A kind of hidden text recites accessory system
CN108777083A (en) * 2018-06-25 2018-11-09 南阳理工学院 A kind of wear-type English study equipment based on augmented reality
CN109976534A (en) * 2019-04-15 2019-07-05 北京猎户星空科技有限公司 Learn the generation method and device of scene

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2009235990B2 (en) * 2009-02-19 2016-07-21 Unicus Investments Pty Ltd Teaching Aid
US10249205B2 (en) * 2015-06-08 2019-04-02 Novel Effect, Inc. System and method for integrating special effects with a text source
CN106341549A (en) * 2016-10-14 2017-01-18 努比亚技术有限公司 Mobile terminal audio reading apparatus and method
CN108234735A (en) * 2016-12-14 2018-06-29 中兴通讯股份有限公司 A kind of media display methods and terminal
CN110007768A (en) * 2019-04-15 2019-07-12 北京猎户星空科技有限公司 Learn the processing method and processing device of scene

Also Published As

Publication number Publication date
CN111741162A (en) 2020-10-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant