CN111901665B - Teaching resource playing method and device and storage medium - Google Patents


Info

Publication number
CN111901665B
CN111901665B (application CN202010889606.0A)
Authority
CN
China
Prior art keywords
word
displaying
user
teaching
teaching video
Prior art date
Legal status
Active
Application number
CN202010889606.0A
Other languages
Chinese (zh)
Other versions
CN111901665A (en
Inventor
孔丽莉
张欣雪
张彩云
Current Assignee
Perfect World Holding Group Ltd
Original Assignee
Perfect World Holding Group Ltd
Priority date
Filing date
Publication date
Application filed by Perfect World Holding Group Ltd
Priority to CN202010889606.0A
Publication of CN111901665A
Application granted
Publication of CN111901665B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/06Electrically-operated teaching apparatus or devices working with questions and answers of the multiple-choice answer-type, i.e. where a given question is provided with a series of answers and a choice has to be made from the answers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Embodiments of the present application provide a teaching resource playing method, a teaching resource playing device, and a storage medium. The method is suited to word-learning scenarios: in response to a learning request for a first word, it can display a teaching video corresponding to the first word together with a plurality of operation nodes on the video, where the operation nodes correspond to a plurality of different teaching interaction modes. When one of the operation nodes is triggered, the teaching video clip corresponding to that node is displayed, so that the first word is learned in the teaching interaction mode corresponding to the selected node. By playing a teaching video for each word and providing multiple teaching interaction modes, words can be taught to the user flexibly, dynamically, and comprehensively, which helps improve learning efficiency.

Description

Teaching resource playing method and device and storage medium
Technical Field
The present application relates to the field of online education technologies, and in particular, to a method, device, and storage medium for playing teaching resources.
Background
Learning words is the basis for learning a language. The traditional word learning method relies on paper books, and a user can learn words by memorizing words and paraphrases thereof printed on the paper books. In recent years, with the development of intelligent terminals, online word learning based on intelligent terminals is becoming popular.
Existing online word learning uses intelligent software installed on a terminal device as its carrier: the words to be learned are presented through the terminal device, so a user can learn words anytime and anywhere during fragmented time. However, this online word learning method still suffers from poor teaching flexibility, and learning efficiency is hard to improve. A new solution is therefore needed.
Disclosure of Invention
Aspects of the present application provide a teaching resource playing method, device and storage medium, so as to improve flexibility of word teaching and improve learning efficiency.
An embodiment of the present application provides a teaching resource playing method, including: in response to a learning operation for a first word, playing a teaching video corresponding to the first word; displaying a plurality of operation nodes on the teaching video, where the operation nodes are used to switch among a plurality of teaching video clips of the first word, and the clips correspond to different teaching interaction modes for the first word; and in response to a trigger operation on any one of the operation nodes, displaying the teaching video clip corresponding to that node.
Further optionally, in response to a trigger operation on any one of the plurality of operation nodes, displaying the teaching video clip corresponding to the triggered node includes: in response to a trigger operation on a first operation node of the plurality of operation nodes, displaying at least one application context of the first word based on a first teaching video clip.
Further optionally, in response to a trigger operation on any one of the plurality of operation nodes, displaying the teaching video clip corresponding to the triggered node includes: in response to a trigger operation on a second operation node of the plurality of operation nodes, displaying at least one application context of the first word based on a second teaching video clip, and displaying a subtitle corresponding to the first word in each application context.
Further optionally, in response to a trigger operation on any one of the plurality of operation nodes, displaying the teaching video clip corresponding to the triggered node includes: paraphrasing the first word based on a third teaching video clip in response to a trigger operation on a third operation node of the plurality of operation nodes; after the paraphrasing is completed, displaying a first interactive question corresponding to the first word and a plurality of alternative answers; in response to the user's selection among the alternative answers, determining the target answer selected by the user; and displaying the judgment result of the first interactive question according to the correctness of the target answer.
Further optionally, in response to a trigger operation on any one of the plurality of operation nodes, displaying the teaching video clip corresponding to the triggered node includes: in response to a trigger operation on a fourth operation node of the plurality of operation nodes, performing pronunciation teaching of the first word based on a fourth teaching video clip; and after the pronunciation teaching is finished, displaying a read-after control for the user to perform read-after evaluation.
Further optionally, after displaying the read-after control for the user to perform read-after evaluation, the method further includes: in response to a trigger operation on the read-after control, displaying a pronunciation evaluation page and displaying the remaining read-after duration on the read-after control; displaying a bubble-shaped page element on the pronunciation evaluation page, the first word being located in the page element; and displaying a descending effect of the page element on the pronunciation evaluation page according to the remaining read-after duration.
Further optionally, the method further includes: collecting a read-after voice signal, and computing a read-after score from the collected signal; if the read-after score is greater than a set threshold, displaying the read-after score in the page element; and if the read-after score is less than or equal to the set threshold, displaying a bursting effect of the page element.
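The threshold rule in this claim can be sketched as follows; the function name and the returned strings are illustrative placeholders for the page effects described above, not part of the patent:

```python
def read_after_feedback(score: float, threshold: float) -> str:
    """Scores above the set threshold are shown inside the bubble-shaped
    page element; otherwise the element plays its bursting effect."""
    if score > threshold:
        return f"show score {score:g} in page element"
    return "burst page element"
```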
Further optionally, in response to a trigger operation on any one of the plurality of operation nodes, displaying the teaching video clip corresponding to the triggered node includes: in response to a trigger operation on a fifth operation node of the plurality of operation nodes, playing a second interactive question about the first word based on a fifth teaching video clip; displaying a voice input control after the second interactive question is played; and in response to a trigger operation on the voice input control, acquiring a voice signal input by the user and displaying the voice signal as an answer to the second interactive question.
Further optionally, the method further includes: recognizing the voice signal input by the user to obtain a recognition result; generating the user's interactive comment on the first word according to the recognition result; and displaying the user's interactive comment in a comment area corresponding to the first word.
Further optionally, in response to a trigger operation on any one of the plurality of operation nodes, displaying the teaching video clip corresponding to the triggered node includes: in response to a trigger operation on a fifth operation node of the plurality of operation nodes, displaying at least one application scene of the first word and a dubbing operation control based on a fifth teaching video clip; in response to the user's trigger operation on the dubbing operation control, acquiring dubbing data input by the user; and synthesizing the dubbing data with at least one application scene of the first word to generate the user's dubbing work for the first word.
Further optionally, after generating the user's dubbing work for the first word, the method includes at least one of: displaying the user's dubbing score for the first word, the score being calculated from the dubbing data input by the user; displaying a preview of the dubbing work for the first word; displaying a dubbing control for the user to redo the dubbing operation; displaying a sharing control for the user to share the dubbing work for the first word to a specified platform; and displaying a continue-learning control for the user to learn the next word.
Further optionally, the method further includes: while the teaching video of the first word is playing, in response to a sliding operation in a first direction, displaying the word-meaning card of the first word and recording the playing progress of the teaching video; or, while the word-meaning card of the first word is displayed, in response to a sliding operation in a second direction, playing the teaching video of the first word from the recorded playing progress; or, while the teaching video of the first word is playing, in response to a sliding operation in a third direction, playing the teaching video of a second word.
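The three sliding operations in this claim can be sketched as a small dispatcher; the direction names and return strings are illustrative assumptions, not identifiers from the patent:

```python
def on_swipe(direction: str, state: dict) -> str:
    """Dispatch the three sliding operations: the first direction shows
    the word-meaning card (recording playback progress), the second
    resumes the video from the recorded progress, and the third moves
    on to the next word's teaching video."""
    if direction == "first":
        state["saved_progress"] = state.get("position", 0.0)
        return "show word-meaning card"
    if direction == "second":
        return f"resume video at {state.get('saved_progress', 0.0):g}s"
    if direction == "third":
        return "play next word's teaching video"
    raise ValueError(f"unknown direction: {direction}")
```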
Further optionally, displaying the teaching video clip corresponding to the operation node further includes: displaying the playing progress of that teaching video clip.
An embodiment of the present application further provides an electronic device, including a memory, a processor, and a display component. The memory is configured to store one or more computer instructions; the processor is configured to execute the one or more computer instructions to perform the steps of the methods provided by the embodiments of the present application.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed, implements the steps of the methods provided by the embodiments of the present application.
According to the teaching resource playing method provided by the embodiments of the present application, in response to a learning request for a first word, a teaching video corresponding to the first word can be displayed together with a plurality of operation nodes on the video, where the operation nodes correspond to a plurality of different teaching interaction modes. When an operation node is triggered, the teaching video clip corresponding to it is displayed, so that the first word is learned in the teaching interaction mode corresponding to the selected node. By playing a teaching video for each word and providing multiple teaching interaction modes, words can be taught to the user flexibly, dynamically, and comprehensively, which helps improve learning efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic flowchart of a method for playing a teaching resource according to an exemplary embodiment of the present application;
fig. 1b is a schematic diagram of an operation node and a play progress prompt provided in an exemplary embodiment of the present application;
FIG. 2a is a schematic diagram of various instructional interaction modes provided in an exemplary embodiment of the present application;
FIG. 2b is a schematic diagram of interactive instruction in a context-dip mode according to an exemplary embodiment of the present application;
fig. 2c is a schematic view of interactive teaching in a subtitle assistance mode according to an exemplary embodiment of the present application;
FIG. 3a, FIG. 3b and FIG. 3c are schematic diagrams of interactive teaching in a thought mode according to an exemplary embodiment of the present application;
fig. 4a and 4b are schematic diagrams of interactive teaching in a pure pronunciation mode according to an exemplary embodiment of the present application;
FIGS. 5a, 5b, and 5c are schematic views of a read-after teaching provided by an exemplary embodiment of the present application;
FIGS. 6a and 6b are schematic diagrams of an interactive problem provided by an exemplary embodiment of the present application;
fig. 7a, 7b, and 7c are schematic diagrams illustrating interactive instruction in a scene reproduction mode according to an exemplary embodiment of the present application;
FIG. 8a is a schematic diagram of a video tutoring and word sense card switching mode as provided by an exemplary embodiment of the present application;
FIG. 8b is a diagram illustrating switching adjacent words according to an exemplary embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In some embodiments of the present application, a solution is provided to solve the technical problem in the prior art that the online word learning manner is relatively inflexible, and the technical solutions provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic flowchart of a teaching resource playing method according to an exemplary embodiment of the present application, where as shown in fig. 1a, the method includes:
Step 101: in response to a learning operation for a first word, play a teaching video corresponding to the first word.
Step 102: display a plurality of operation nodes on the teaching video, where the operation nodes are used to switch among a plurality of teaching video clips of the first word, and the clips correspond to different teaching interaction modes for the first word.
Step 103: in response to a trigger operation on any one of the operation nodes, display the teaching video clip corresponding to that node.
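The three steps above can be sketched as a minimal player model. All class and method names here (VideoSegment, TeachingPlayer, and so on) are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass
class VideoSegment:
    mode: str      # teaching interaction mode of this clip
    start: float   # start offset within the full teaching video, seconds
    end: float     # end offset, seconds

class TeachingPlayer:
    def __init__(self, word: str, segments: list):
        self.word = word
        self.segments = segments   # one clip per operation node
        self.position = 0.0        # current playback position, seconds

    def on_learning_operation(self) -> str:
        """Step 101: start playing the word's teaching video."""
        self.position = 0.0
        return f"playing teaching video for '{self.word}'"

    def operation_nodes(self) -> list:
        """Step 102: the operation nodes shown on the video, one per clip."""
        return [seg.mode for seg in self.segments]

    def on_node_triggered(self, index: int) -> VideoSegment:
        """Step 103: jump to the clip of the triggered operation node."""
        seg = self.segments[index]
        self.position = seg.start
        return seg
```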
This embodiment can be executed by a terminal device on which online learning software runs; a user can learn words online through the software. The terminal device may include, but is not limited to, a mobile phone, a tablet computer, a computer device, a smart wearable device, and the like.
This embodiment can be applied to word learning scenarios, such as a Chinese, English, German, or other language learning scenario. The first word may be any word to be learned; the qualifier "first" is used only for convenience of description and distinction, and places no limit on the order of the words.
In this embodiment, the learning operation for the first word may be user initiated. In some scenarios, the terminal device may present a list of words from which a user may select a first word to initiate a learning operation for the first word. In other scenarios, the user may issue a voice instruction to the terminal device to learn the first word, and after capturing the voice signal of the user, the terminal device may consider that the learning operation for the first word is detected.
After the learning operation for the first word is detected, the terminal device can play the teaching video of the first word. The teaching video is a learning video for a word; it preferably comprises a plurality of video clips, each with a specific teaching concept, forming a new teaching mode in which the same word is learned repeatedly across the clips.
When the terminal device plays the teaching video of the first word, it may play from the start of the video; alternatively, it may prompt the user with the historical playing progress and ask whether to jump to it, and if the user chooses to jump, play the teaching video of the first word from the historical progress. One way of prompting the historical playing progress is shown in fig. 1b: the terminal device may show, on the progress bar of the teaching video, a floating window with prompt information such as "learn this last time" and "jump play", and the user may trigger the floating window to jump.
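The resume decision described above can be sketched as a small helper; the function name and signature are assumptions for illustration:

```python
from typing import Optional

def resolve_start_position(history: Optional[float], accept_jump: bool) -> float:
    """Return the position to start playing from: the saved historical
    progress if one exists and the user triggers the 'jump play' floating
    window, otherwise the start of the teaching video."""
    if history is not None and accept_jump:
        return history
    return 0.0
```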
While the teaching video of the first word plays, the terminal device can display a plurality of operation nodes on it; in fig. 1b, six operation nodes are shown on the progress bar. The operation nodes displayed on the teaching video are used to switch among the plurality of teaching video clips corresponding to the first word, and these clips correspond to a plurality of different teaching interaction modes. That is, by triggering the operation nodes, the user can trigger the different teaching interaction modes. Different modes can use different interaction content to interact with the user, improving the efficiency with which the user learns words.
For example, the teaching video of the first word may include six video segments, each with a specific teaching concept, through which the first word is learned repeatedly in sequence: a context-immersion mode, a subtitle-assistance mode, a thinking-of-words mode, a pure-pronunciation mode, a dialogue-with-you mode, and a scene-reproduction mode. Each operation node corresponds to one teaching video clip, and the video content contained in each clip is associated with the clip's teaching interaction mode. One operation node may trigger one corresponding teaching interaction mode, or several. Details are given in the following embodiments and are not repeated here.
During playback of the teaching video corresponding to the first word, the terminal device can monitor trigger operations on the operation nodes in real time. In response to a trigger operation on any one of the operation nodes, the teaching video clip corresponding to that node is displayed, so that the first word is learned in the teaching interaction mode corresponding to that node.
In some embodiments, the trigger operation on an operation node may come from a system event, such as the playing progress reaching a specified value. For example, when the playing progress is 0, the first operation node may be triggered automatically to play its teaching video clip; when the playing progress reaches the end time of the clip corresponding to the first operation node, the second operation node may be triggered automatically to play its clip; when the progress reaches the end time of the second node's clip, the third operation node may be triggered automatically, and so on. The effect of this system-event triggering is that the teaching video clips contained in the teaching video are played automatically, one after another, in sequence.
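The system-event triggering described above amounts to mapping the playing progress to an operation-node index; a sketch, with an assumed representation of clip boundaries as a list of end times:

```python
def node_for_position(clip_ends: list, position: float) -> int:
    """Given the end time of each teaching video clip (in playback order),
    return the index of the operation node whose clip should be active at
    `position`. Crossing a clip's end time automatically activates the next
    node, so the clips appear to play back-to-back."""
    for i, end in enumerate(clip_ends):
        if position < end:
            return i
    return len(clip_ends) - 1   # clamp to the last clip
```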
In other embodiments, the trigger operation on an operation node may come from a user event. While the teaching video of the first word is playing, the user can trigger any of the operation nodes as needed: for example, a node before the current playing progress, to relearn earlier content, or a node after the current progress, to jump ahead to a later teaching video clip.
When each teaching video clip is played, the terminal device may also display the playing progress of the currently playing clip, as shown in fig. 1b. Each teaching video clip contained in the teaching video can have its own section of progress bar, with the sections of adjacent clips connected in order; as each clip plays, its progress can be shown, so that the user knows in real time how long learning the word will take and can flexibly control the learning pace.
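Rendering the chained per-clip progress bars requires splitting the overall position into a clip index and a fraction of that clip played; a sketch under the same assumed clip-boundary representation (a list of end times):

```python
def segment_progress(clip_ends: list, position: float):
    """Map an overall playback position to (clip index, fraction of that
    clip already played), for drawing one progress-bar section per clip."""
    start = 0.0
    for i, end in enumerate(clip_ends):
        if position < end:
            return i, (position - start) / (end - start)
        start = end
    return len(clip_ends) - 1, 1.0   # past the end: last clip, complete
```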
In this embodiment, in response to a learning request for the first word, a teaching video corresponding to the first word can be displayed together with a plurality of operation nodes on the video, where the operation nodes correspond to a plurality of different teaching interaction modes. When an operation node is triggered, the teaching video clip corresponding to it is displayed, so that the first word is learned in the teaching interaction mode corresponding to the selected node. By playing a teaching video for each word and providing multiple teaching interaction modes, words can be taught to the user flexibly, dynamically, and comprehensively, which helps improve learning efficiency.
In the above and following embodiments of the present application, the teaching video may provide a plurality of different teaching interaction modes, as shown in fig. 2a, including: a context-immersion mode, a subtitle-assistance mode, a thinking-of-words mode, a pure-pronunciation mode, a dialogue-with-you mode, a scene-reproduction mode, and the like.
Optionally, each operation node displayed on the teaching video may correspond to one teaching interaction mode. When displaying the operation nodes, the terminal device can show, in a floating window near each node, prompt information about the node's teaching interaction mode, making it easy for the user to choose which mode to switch to.
In some exemplary embodiments, in response to a trigger operation on a first operation node of the plurality of operation nodes on the teaching video of the first word, the terminal device may present at least one application context of the first word based on the first teaching video clip, as shown in fig. 2b, enabling interactive teaching in the context-immersion mode. Language is the art of application, and the purpose of learning a language is to express oneself in a suitable context. The context-immersion mode provided by this embodiment shows the user the application contexts of a word, so the user learns in which scenarios the learned word can be applied.
In some exemplary embodiments, in response to a trigger operation on a second operation node of the plurality of operation nodes on the teaching video of the first word, the terminal device may present at least one application context of the first word based on the second teaching video clip, and present a subtitle corresponding to the first word in each application context, as shown in fig. 2c. This mode combines the images and the subtitle information corresponding to the word, helping the user build associative memories.
In some exemplary embodiments, in response to a triggering operation on a third operation node of the plurality of operation nodes on the instructional video of the first word, the terminal device may paraphrase the first word based on the third instructional video segment, as shown in fig. 3 a. The first word may be paraphrased in a plurality of different languages, for example, for an english word, the first word may be paraphrased in english, or the first word may be paraphrased in chinese, which is not limited in this embodiment.
After the paraphrasing is completed, the terminal device may present a first interactive question corresponding to the first word and a plurality of alternative answers, as shown in fig. 3b. After thinking, the user may select a target answer from the alternative answers. In response to the user's selection among the alternative answers, the target answer selected by the user is determined, and the judgment result of the first interactive question is displayed according to the correctness of the target answer. For example, if the target answer selected by the user matches the preset standard answer, a reward effect can be displayed on the page, as shown in fig. 3c. Otherwise, if the target answer does not match the preset standard answer, an error prompt can be displayed on the page together with the standard answer; details are not repeated here.
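The judgment logic above reduces to comparing the selected target answer against the preset standard answer; a minimal sketch, where the returned strings are placeholders for the page effects described above:

```python
def judge_answer(target: str, standard: str) -> str:
    """Return the feedback to show after the user selects an answer:
    a reward effect on a correct choice, otherwise an error prompt
    that also reveals the standard answer."""
    if target == standard:
        return "reward effect"
    return f"error prompt; standard answer: {standard}"
```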
In some exemplary embodiments, in response to a triggering operation on a fourth operation node of the plurality of operation nodes on the teaching video of the first word, the terminal device may perform pronunciation teaching on the first word based on the fourth teaching video segment, as shown in fig. 4 a. During pronunciation teaching, the pronunciation rules of the first word may be displayed; for example, phonetic symbols may be displayed for an English word, and pinyin or phonetic notation may be displayed for a Chinese word, which is not illustrated here. After the pronunciation teaching is completed, the terminal device may display a read-after control for the user to perform read-after evaluation. As shown in fig. 4 b, the read-after control may be a microphone icon displayed on the page, and the read-after mode may be prompted below the microphone icon.
Optionally, in response to a triggering operation on the read-after control, the terminal device may display a pronunciation evaluation page. The triggering operation of the user on the read-after control may be implemented as a long-press operation or a single-click operation, which is not limited in this embodiment. A read-after duration may be preset for each word according to the length of the word. After the user triggers the read-after control, the remaining read-after duration may be displayed in the read-after control. As shown in fig. 5 a, the terminal device may display the remaining read-after duration as a circular progress bar. Of course, in some other optional embodiments, the terminal device may display a read-after countdown, or display the remaining read-after duration in another visual manner, which is not limited in this embodiment.
A bubble-shaped page element is displayed on the pronunciation evaluation page, and the first word is located within the page element. The terminal device may display a descending effect of the page element on the pronunciation evaluation page according to the remaining read-after duration, as shown in fig. 5 a. For example, the page element may be located at the top of the pronunciation evaluation page when reading-after begins, and may descend on the page as the remaining read-after duration decreases.
It should be understood that the page element for displaying the first word may be bubble-shaped, or may take other shapes, such as a bird shape or a fallen-leaf shape, which is not limited in this embodiment.
Optionally, in the read-after teaching mode, the terminal device may further collect a read-after voice signal of the user through an audio component (e.g., a microphone), and determine a read-after score of the user according to the collected voice signal. When determining the read-after score, similarity matching may be performed between the collected voice signal and the voice signal corresponding to the standard pronunciation of the first word, and the read-after score may be determined according to the similarity matching result.
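The patent does not specify a similarity-matching algorithm. Purely as a hedged sketch, one common approach is to compare acoustic feature vectors (e.g., averaged spectral features) of the two signals with cosine similarity and map the result to a 0-100 score; the feature extraction step is assumed and omitted here:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def read_after_score(user_features, standard_features):
    """Map similarity in [-1, 1] to a 0-100 read-after score (assumed scale)."""
    return round(50 * (1 + cosine_similarity(user_features, standard_features)))
```

Production speech evaluation would more likely use dynamic time warping or a phoneme-level acoustic model, but any scheme producing a comparable score fits the described flow.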
Optionally, if the read-after score is greater than a set threshold, the read-after score may be displayed in the page element, as shown in fig. 5 b. Otherwise, if the read-after score is less than or equal to the set threshold, a bursting effect of the page element is displayed, as shown in fig. 5 c. The set threshold may be 60 points, 70 points, or 80 points, which is not limited in this embodiment. The cases in which the read-after score is less than or equal to the set threshold may include: no voice signal is detected within the set read-after duration; a voice signal is detected within the set read-after duration, but the read-after accuracy is poor; or reading-after is not completed within the set read-after duration; and so on, which will not be described in detail.
Optionally, if the read-after score is greater than a set threshold, an ascending effect of the page element is displayed on the pronunciation evaluation page; otherwise, if the read-after score is less than or equal to the set threshold, a descending effect of the page element may be displayed on the pronunciation evaluation page.
Optionally, the set threshold may include a first upper threshold and a second upper threshold. If the read-after score is greater than the first upper threshold, an ascending effect of the page element may be displayed on the pronunciation evaluation page; if the read-after score is greater than the second upper threshold, an accelerated ascending effect of the page element may be displayed on the pronunciation evaluation page, or a bursting effect of the page element during its ascent may be displayed.
Optionally, the set threshold may include a first lower threshold and a second lower threshold. If the read-after score is less than the first lower threshold, a descending effect of the page element may be displayed on the pronunciation evaluation page; if the read-after score is less than the second lower threshold, an accelerated descending effect of the page element may be displayed on the pronunciation evaluation page, or a bursting effect of the page element during its descent may be displayed.
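The tiered-threshold logic of the two preceding paragraphs can be sketched as a single selection function. The concrete threshold values (and the "hold" behavior between the thresholds) are assumptions for illustration, since the embodiment leaves them open:

```python
def page_element_effect(score, first_upper=80, second_upper=90,
                        first_lower=60, second_lower=40):
    """Select the page-element animation for a read-after score (0-100)."""
    if score > second_upper:
        return "ascend-accelerated"   # or burst while ascending
    if score > first_upper:
        return "ascend"
    if score < second_lower:
        return "descend-accelerated"  # or burst while descending
    if score < first_lower:
        return "descend"
    return "hold"                     # between the thresholds: no effect change
```

Testing the stricter threshold first keeps each branch mutually exclusive without overlapping range checks.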
In some exemplary embodiments, in response to a triggering operation on a fifth operation node of the plurality of operation nodes, the terminal device may play a second interactive question of the first word based on a fifth teaching video segment. The second interactive question may be set as an open question, to which different users may give a variety of different answers. After the second interactive question is played, the terminal device may display a voice input control for the user to input a voice signal answering the second interactive question, such as the "speak your idea" control shown in fig. 6 a. In response to a triggering operation on the voice input control, the terminal device may obtain the voice signal input by the user and display the voice signal as an answer to the second interactive question, as shown in fig. 6 b.
Optionally, the terminal device may further recognize the voice signal input by the user to obtain a recognition result, and generate the user's interactive comment on the first word according to the recognition result. In this embodiment, a user comment area may be set for each word; after the user's interactive comment on the first word is generated, it may be displayed in the comment area corresponding to the first word. In this manner, communication among different users is promoted, and the users' enthusiasm for learning is improved.
In some exemplary embodiments, in response to a triggering operation on a fifth operation node of the plurality of operation nodes on the teaching video of the first word, the terminal device may display at least one application scene of the first word and a dubbing operation control based on the fifth teaching video segment, such as the "immersive pass" control shown in fig. 7 a. In response to the user's triggering operation on the dubbing operation control, the terminal device may obtain dubbing data input by the user, synthesize the dubbing data with the at least one application scene of the first word, and generate the user's dubbing work for the first word, as shown in fig. 7 b.
It should be noted that, during the user's dubbing, a dubbing progress prompt may be displayed, and the user may determine the pronunciation timing according to the progress prompt, such as the dubbing countdown prompt and the dot countdown prompt above the first word shown in fig. 7 a.
In some alternative embodiments, during the user's dubbing, the terminal device may display pronunciation cue markers, such as accent markers, plosive markers, and tone markers. Meanwhile, a standard pronunciation curve may be displayed. After the user's dubbing data is obtained, the user's pronunciation curve may be computed in real time, and the degree of matching between the user's pronunciation curve and the standard pronunciation curve may be displayed.
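The embodiment does not define how the matching degree between the two pronunciation curves is computed. One plausible sketch, assuming both curves have been resampled to the same length, scores the mean absolute deviation between corresponding samples on a 0-100 scale; the function name and scaling are hypothetical:

```python
def curve_match_degree(user_curve, standard_curve):
    """Matching degree between a user's pronunciation curve and the standard one.

    Assumes both curves are equal-length sequences of comparable values
    (e.g., normalized pitch samples); returns a value clamped to [0, 100].
    """
    assert len(user_curve) == len(standard_curve) and user_curve
    mean_abs_dev = sum(abs(u - s)
                       for u, s in zip(user_curve, standard_curve)) / len(user_curve)
    return max(0.0, 100.0 - mean_abs_dev)
```

A deployed system might instead use dynamic time warping so small timing offsets are not penalized, but that refinement is beyond what the text specifies.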
In some exemplary embodiments, after generating the user's dubbing work for the first word, the terminal device may further display a dubbing end page. As shown in fig. 7 c, on the dubbing end page, the terminal device may display the user's dubbing score for the first word, such as the two-star score in fig. 7 c.
The dubbing score may be calculated from the dubbing data input by the user. When calculating the dubbing score, speech recognition may be performed on the user's dubbing data to obtain the user's dubbing text, which is compared with a standard text to obtain a text comparison score. Meanwhile, the user's pronunciation characteristics, such as accent, plosives, and intonation, may be identified from the dubbing data to obtain a pronunciation score. The text comparison score, the pronunciation score, and the degree of matching between the user's pronunciation curve and the standard pronunciation curve are then combined by weighted calculation to obtain the user's dubbing score.
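The weighted calculation above can be sketched as follows. The specific weights are assumptions for illustration; the embodiment only states that the three sub-scores are combined by weighting:

```python
def dubbing_score(text_score, pronunciation_score, curve_match,
                  weights=(0.4, 0.4, 0.2)):
    """Weighted combination of the three sub-scores (each on a 0-100 scale).

    weights: (text comparison, pronunciation, curve matching); assumed values.
    """
    w_text, w_pron, w_curve = weights
    return round(w_text * text_score
                 + w_pron * pronunciation_score
                 + w_curve * curve_match)
```

The rounded score could then be mapped to a star rating (e.g., the two-star result in fig. 7 c) by simple bucketing.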
Optionally, on the dubbing end page, the terminal device may display a preview of the dubbing work for the first word, such as the screen shown in fig. 7 c previewing the user's dubbing of the word "reproduction".

Optionally, on the dubbing end page, the terminal device may display a dubbing control for the user to perform the dubbing operation again, such as the "re-recording" control shown in fig. 7 c.

Optionally, on the dubbing end page, the terminal device may display a sharing control for the user to share the dubbing work of the first word to a designated platform, such as the "sharing" control shown in fig. 7 c.

Optionally, on the dubbing end page, the terminal device may display a continue-learning control for the user to learn the next word, such as the "enter next word" control shown in fig. 7 c.
In some exemplary embodiments, while playing the teaching video of the first word, the terminal device may detect a sliding operation on the screen and display interactive teaching content in different modes according to the direction of the sliding operation. Optionally, in response to a sliding operation in a first direction, the terminal device displays a word meaning card of the first word, as shown in fig. 8 a. Meanwhile, the terminal device may record the playing progress of the teaching video.
Optionally, while the word meaning card of the first word is displayed, in response to a sliding operation in a second direction, the terminal device may resume playing the teaching video of the first word from the recorded playing progress, which is not illustrated here.
The first direction may be bottom-to-top and the second direction top-to-bottom; alternatively, the first direction may be top-to-bottom and the second direction bottom-to-top. In this manner, the user can flexibly switch between the video teaching mode and the word-meaning-card learning mode without time constraints, which is more convenient.
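The mode switching with recorded playing progress described above can be sketched as a small state machine. The class and method names are hypothetical, and the first/second directions are assumed to be "up"/"down" (the embodiment allows the opposite assignment):

```python
class WordLearningPage:
    """Sketch of switching between video mode and word-meaning-card mode."""

    def __init__(self):
        self.mode = "video"
        self.saved_progress = 0.0  # seconds into the teaching video

    def on_swipe(self, direction: str, current_position: float) -> str:
        if self.mode == "video" and direction == "up":
            self.saved_progress = current_position  # record playing progress
            self.mode = "card"                      # show the word meaning card
        elif self.mode == "card" and direction == "down":
            self.mode = "video"  # resume playback from self.saved_progress
        return self.mode
```

On returning to video mode, the player would seek to `saved_progress` so playback continues where the user left off.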
In some exemplary embodiments, while playing the teaching video of the first word, the terminal device may play the teaching video of a second word in response to a sliding operation in a third direction.
The third direction may be left-to-right or right-to-left. If a left-to-right sliding operation is detected, the terminal device may play the teaching video of the word preceding the first word; if a right-to-left sliding operation is detected, the terminal device may play the teaching video of the word following the first word. In this manner, the user can flexibly switch between adjacent words, which is very convenient.
Further optionally, in this embodiment, in order to prevent misoperation by the user, when a sliding operation in the third direction is detected for the first time, a function prompt corresponding to the sliding operation may be displayed. When a sliding operation in the third direction is detected for the second time within a set duration, the user may be considered to have initiated a word switching operation, and the teaching video of the second word may then be played. As shown in fig. 8 b, when a right-swipe is first detected, a prompt may indicate that swiping right is not recommended during video learning, and that swiping right again will immediately switch to the next word.
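The two-swipe confirmation described above can be sketched as a simple debounce, where the first swipe only shows the prompt and a second swipe within the set duration performs the switch. The window length and names are assumptions:

```python
class SwipeSwitcher:
    """Sketch of the two-swipe confirmation that guards against misoperation."""

    def __init__(self, confirm_window: float = 3.0):
        self.confirm_window = confirm_window  # assumed "set duration", seconds
        self.last_swipe_time = None

    def on_third_direction_swipe(self, now: float) -> str:
        if (self.last_swipe_time is not None
                and now - self.last_swipe_time <= self.confirm_window):
            self.last_swipe_time = None
            return "switch-word"   # second swipe within the window: switch word
        self.last_swipe_time = now
        return "show-prompt"       # first swipe: show the function prompt
```

A swipe arriving after the window expires is treated as a fresh first swipe, so the prompt is shown again.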
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 101 to 103 may be device a; for another example, the execution subject of steps 101 and 102 may be device a, and the execution subject of step 103 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 101, 102, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application, where the electronic device is configured to execute the teaching resource playing method described in the foregoing embodiments. As shown in fig. 9, the electronic device includes: a memory 901, a processor 902, and a display component 903.
The memory 901 is configured to store a computer program, and may be configured to store various other data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, first resources, and so forth.
The processor 902, coupled to the memory 901, is configured to execute the computer program in the memory 901 to: play a teaching video corresponding to a first word in response to a learning operation for the first word; display a plurality of operation nodes on the teaching video, the plurality of operation nodes being used for switching among a plurality of teaching video segments of the first word, where the plurality of teaching video segments correspond to different teaching interaction modes for the first word; and in response to a triggering operation on any operation node of the plurality of operation nodes, display the teaching video segment corresponding to that operation node.
Further optionally, when displaying, in response to a triggering operation on any operation node of the plurality of operation nodes, the teaching video segment corresponding to the operation node, the processor 902 is specifically configured to: in response to a triggering operation on a first operation node of the plurality of operation nodes, display at least one application context of the first word based on the first teaching video segment.

Further optionally, when displaying, in response to a triggering operation on any operation node of the plurality of operation nodes, the teaching video segment corresponding to the operation node, the processor 902 is specifically configured to: in response to a triggering operation on a second operation node of the plurality of operation nodes, display at least one application context of the first word based on the second teaching video segment, and display a subtitle corresponding to the first word in each application context.

Further optionally, when displaying, in response to a triggering operation on any operation node of the plurality of operation nodes, the teaching video segment corresponding to the operation node, the processor 902 is specifically configured to: paraphrase the first word based on the third teaching video segment in response to a triggering operation on a third operation node of the plurality of operation nodes; after paraphrasing is completed, display a first interactive question corresponding to the first word and a plurality of alternative answers; in response to the user's selection operation on the plurality of alternative answers, determine the target answer selected by the user; and display the judgment result of the first interactive question according to the correctness of the target answer.

Further optionally, when displaying, in response to a triggering operation on any operation node of the plurality of operation nodes, the teaching video segment corresponding to the operation node, the processor 902 is specifically configured to: in response to a triggering operation on a fourth operation node of the plurality of operation nodes, perform pronunciation teaching on the first word based on the fourth teaching video segment; and after the pronunciation teaching is completed, display a read-after control for the user to perform read-after evaluation.
Further optionally, when displaying the read-after control for the user to perform read-after evaluation, the processor 902 is further configured to: in response to a triggering operation on the read-after control, display a pronunciation evaluation page and display the remaining read-after duration on the read-after control; display a bubble-shaped page element on the pronunciation evaluation page, the first word being located within the page element; and display a descending effect of the page element on the pronunciation evaluation page according to the remaining read-after duration.
Further optionally, the processor 902 is further configured to: collect a read-after voice signal and determine a read-after score according to the collected voice signal; if the read-after score is greater than a set threshold, display the read-after score in the page element; and if the read-after score is less than or equal to the set threshold, display a bursting effect of the page element.
Further optionally, when displaying, in response to a triggering operation on any operation node of the plurality of operation nodes, the teaching video segment corresponding to the operation node, the processor 902 is specifically configured to: in response to a triggering operation on a fifth operation node of the plurality of operation nodes, play the second interactive question of the first word based on the fifth teaching video segment; display the voice input control after the second interactive question is played; and in response to a triggering operation on the voice input control, obtain a voice signal input by the user and display the voice signal as an answer to the second interactive question.
Further optionally, the processor 902 is further configured to: recognizing the voice signal input by the user to obtain a recognition result; generating an interactive comment of the user on the first word according to the recognition result; and displaying the interactive comment of the user on the first word in a comment area corresponding to the first word.
Further optionally, when displaying, in response to a triggering operation on any operation node of the plurality of operation nodes, the teaching video segment corresponding to the operation node, the processor 902 is specifically configured to: in response to a triggering operation on a fifth operation node of the plurality of operation nodes, display at least one application scene of the first word and a dubbing operation control based on the fifth teaching video segment; in response to the user's triggering operation on the dubbing operation control, obtain dubbing data input by the user; and synthesize the dubbing data with the at least one application scene of the first word to generate the user's dubbing work for the first word.
Further optionally, after generating the user's dubbing work for the first word, the processor 902 is further configured to perform at least one of the following: displaying the user's dubbing score for the first word, where the dubbing score is calculated from the dubbing data input by the user; displaying a preview of the dubbing work for the first word; displaying a dubbing control for the user to perform the dubbing operation again; displaying a sharing control for the user to share the dubbing work of the first word to a designated platform; and displaying a continue-learning control for the user to learn the next word.
Further optionally, the processor 902 is further configured to: while the teaching video of the first word is played, display a word meaning card of the first word in response to a sliding operation in a first direction, and record the playing progress of the teaching video; or, while the word meaning card of the first word is displayed, play the teaching video of the first word according to the recorded playing progress in response to a sliding operation in a second direction; or, while the teaching video of the first word is played, play the teaching video of a second word in response to a sliding operation in a third direction.
Further optionally, when the teaching video segment corresponding to the operation node is displayed, the processor 902 is further configured to: and displaying the playing progress of the teaching video clip corresponding to the operation node.
Further, as shown in fig. 9, the electronic device further includes: a communication component 904, a power component 905, an audio component 906, and the like. Fig. 9 schematically shows only some of the components, which does not mean that the electronic device includes only the components shown in fig. 9.
The memory 901 may be implemented by any type or combination of volatile and non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The display component 903 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The communication component 904 is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power component 905 provides power to the various components of the device in which it is located. The power component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In this embodiment, according to a learning request for a first word, the electronic device may play a teaching video corresponding to the first word and display a plurality of operation nodes on the teaching video, where the plurality of operation nodes correspond to a plurality of different teaching interaction modes. According to a triggering operation on the plurality of operation nodes, the teaching video segment corresponding to the triggered operation node can be displayed, so that the first word is learned in the teaching interaction mode corresponding to the selected operation node. In this implementation, the teaching video corresponding to a word is played and various teaching interaction modes are provided, so that words can be taught to the user flexibly, dynamically, and comprehensively, which is beneficial to improving learning efficiency.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, implements the steps that can be executed by the electronic device in the foregoing method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (15)

1. A teaching resource playing method is suitable for word learning scenes and is characterized by comprising the following steps:
responding to a learning operation aiming at a first word, and playing a teaching video corresponding to the first word, wherein the teaching video comprises a plurality of teaching video segments of the first word, and the first word is from a word list;
displaying a plurality of operation nodes on a progress bar of the teaching video, wherein the plurality of operation nodes are used for switching the plurality of teaching video segments of the first word; the plurality of teaching video segments correspond to different teaching interaction modes of the first word, and different teaching interaction modes adopt different interaction contents to interact with a user so that the user can learn the first word repeatedly;
responding to the trigger operation of any one of the plurality of operation nodes, and displaying the teaching video clip corresponding to the operation node so that the user can learn the first word based on the corresponding interactive content in the teaching interaction mode corresponding to the operation node.
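The node-to-segment switching recited in claim 1 can be sketched as follows. This is an illustrative sketch only, not the claimed implementation; all class, method, and mode names are hypothetical:

```python
# Hypothetical sketch of the claim-1 flow: the progress bar exposes one
# operation node per teaching video segment, and triggering a node switches
# playback to the segment for that teaching interaction mode.

class TeachingVideo:
    def __init__(self, word, segments):
        # segments: list of (interaction_mode, start_second) pairs,
        # one per teaching video segment of the word.
        self.word = word
        self.segments = segments
        self.current_mode = None

    def operation_nodes(self):
        # One node per segment, placed on the progress bar at the
        # segment's start time.
        return [start for _, start in self.segments]

    def trigger_node(self, node_index):
        # Switch to the segment behind the triggered node and return the
        # interaction mode in which the user will practice the word.
        mode, start = self.segments[node_index]
        self.current_mode = mode
        return mode, start


video = TeachingVideo("apple", [
    ("context", 0),
    ("context-with-subtitles", 30),
    ("multiple-choice", 60),
    ("pronunciation", 90),
])
mode, start = video.trigger_node(3)
```

Each interaction mode reuses the same word with different interaction content, which is what lets the user learn the word repeatedly.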
2. The method according to claim 1, wherein displaying the teaching video segment corresponding to the operation node in response to the trigger operation on any one of the plurality of operation nodes comprises:
in response to a trigger operation on a first operation node of the plurality of operation nodes, displaying at least one application context of the first word based on a first teaching video segment.
3. The method according to claim 1, wherein displaying the teaching video segment corresponding to the operation node in response to the trigger operation on any one of the plurality of operation nodes comprises:
in response to a trigger operation on a second operation node of the plurality of operation nodes, displaying at least one application context of the first word based on a second teaching video segment, and displaying a subtitle corresponding to the first word in each application context.
4. The method according to claim 1, wherein displaying the teaching video segment corresponding to the operation node in response to the trigger operation on any one of the plurality of operation nodes comprises:
in response to a trigger operation on a third operation node of the plurality of operation nodes, paraphrasing the first word based on a third teaching video segment;
after the paraphrasing is completed, displaying a first interactive question corresponding to the first word and a plurality of candidate answers;
in response to a selection operation of the user on the plurality of candidate answers, determining a target answer selected by the user;
and displaying a judgment result of the first interactive question according to the correctness of the target answer.
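The answer-judging step of claim 4 can be sketched as below. The function name, question text, and candidate answers are all hypothetical; the claim only requires that a judgment result be displayed according to whether the selected target answer is correct:

```python
# Hypothetical sketch of claim 4's answer check: the user selects one of
# several candidate answers, and the judgment result depends on whether
# the selected target answer matches the correct one.

def judge_answer(candidates, correct_answer, selected_index):
    target = candidates[selected_index]  # the target answer selected by the user
    if target == correct_answer:
        return f"Correct! '{target}' is the right answer."
    return f"'{target}' is wrong; the answer is '{correct_answer}'."


candidates = ["a red or green fruit", "a long yellow fruit", "a citrus fruit"]
result = judge_answer(candidates, "a red or green fruit", 0)
```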
5. The method according to claim 1, wherein displaying the teaching video segment corresponding to the operation node in response to the trigger operation on any one of the plurality of operation nodes comprises:
in response to a trigger operation on a fourth operation node of the plurality of operation nodes, performing pronunciation teaching on the first word based on a fourth teaching video segment;
and after the pronunciation teaching is completed, displaying a read-after control for the user to perform read-after evaluation.
6. The method of claim 5, wherein displaying the read-after control for the user to perform read-after evaluation further comprises:
in response to a trigger operation on the read-after control, displaying a pronunciation evaluation page and displaying a remaining read-after duration on the read-after control;
displaying a bubble-shaped page element on the pronunciation evaluation page, wherein the first word is located in the page element;
and displaying a falling effect of the page element on the pronunciation evaluation page according to the remaining read-after duration.
7. The method of claim 6, further comprising:
collecting a read-after voice signal, and determining a read-after score according to the collected voice signal;
if the read-after score is greater than a set threshold, displaying the read-after score in the page element;
and if the read-after score is less than or equal to the set threshold, displaying a bursting effect of the page element.
8. The method according to claim 1, wherein the displaying the teaching video segment corresponding to the operation node in response to the triggering operation on any operation node of the plurality of operation nodes comprises:
responding to a trigger operation of a fifth operation node in the plurality of operation nodes, and playing a second interaction problem of the first word based on a fifth teaching video clip;
displaying the voice input control after the second interactive question is played;
and responding to the triggering operation of the voice input control, acquiring a voice signal input by a user, and displaying the voice signal as a preset answer to the second interactive question.
9. The method of claim 8, further comprising:
recognizing the voice signal input by the user to obtain a recognition result;
generating an interactive comment of the user on the first word according to the recognition result;
and displaying the interactive comment of the user on the first word in a comment area corresponding to the first word.
10. The method according to claim 1, wherein the displaying the teaching video segment corresponding to the operation node in response to the triggering operation on any operation node of the plurality of operation nodes comprises:
responding to a triggering operation of a sixth operation node in the plurality of operation nodes, and displaying at least one application scene and a dubbing operation control of the first word based on a sixth teaching video clip;
responding to the triggering operation of the user on the dubbing operation control, and acquiring dubbing data input by the user;
and synthesizing the dubbing data and at least one application scene of the first word to generate the dubbing works of the user for the first word.
11. The method of claim 10, wherein after generating the user's dubbing work for the first word, further comprising at least one of:
displaying a dubbing score of the user for the first word, wherein the dubbing score is calculated according to dubbing data input by the user;
displaying the preview effect of the dubbing works of the first word;
displaying a dubbing control for the user to perform dubbing operation again;
displaying a sharing control for the user to share the dubbing works of the first word to a specified platform;
and displaying the continuous learning control for the user to learn the next word.
12. The method according to any one of claims 1-11, further comprising:
when the teaching video of the first word is played, responding to sliding operation in a first direction, displaying a meaning card of the first word, and recording the playing progress of the teaching video; when the meaning card of the first word is displayed, responding to sliding operation in a second direction, and playing the teaching video of the first word according to the recorded playing progress; or,
and when the teaching video of the first word is played, responding to the sliding operation in the third direction, and playing the teaching video of the second word.
13. The method according to any one of claims 1-11, wherein displaying the teaching video segment corresponding to the operation node further comprises:
and displaying the playing progress of the teaching video clip corresponding to the operation node.
14. An electronic device, comprising: a memory, a processor, and a display component;
the memory is to store one or more computer instructions;
the processor is to execute the one or more computer instructions to: performing the steps of the method of any one of claims 1-13.
15. A computer-readable storage medium storing a computer program, wherein the computer program is capable of performing the steps of the method of any one of claims 1-13 when executed.
CN202010889606.0A 2020-08-28 2020-08-28 Teaching resource playing method and device and storage medium Active CN111901665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010889606.0A CN111901665B (en) 2020-08-28 2020-08-28 Teaching resource playing method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010889606.0A CN111901665B (en) 2020-08-28 2020-08-28 Teaching resource playing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN111901665A CN111901665A (en) 2020-11-06
CN111901665B true CN111901665B (en) 2022-08-26

Family

ID=73225108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010889606.0A Active CN111901665B (en) 2020-08-28 2020-08-28 Teaching resource playing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111901665B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905026B (en) * 2021-03-30 2024-04-16 完美世界控股集团有限公司 Method, device, storage medium and computer equipment for showing word suggestion
CN113223339A (en) * 2021-04-21 2021-08-06 宋明哲 English learning software based on comedy captions
CN113784059B (en) * 2021-08-03 2023-08-18 阿里巴巴(中国)有限公司 Video generation and splicing method, equipment and storage medium for clothing production
CN114035725B (en) * 2021-08-26 2024-06-25 武汉联影医疗科技有限公司 Teaching method and device of ultrasonic equipment, ultrasonic imaging equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105047025A (en) * 2015-07-10 2015-11-11 上海爱数软件有限公司 Rapid manufacturing method for mobile learning courseware
CN105117381A (en) * 2015-08-28 2015-12-02 上海第九城市教育科技股份有限公司 Method and system for generating interactive multimedia courseware

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10283013B2 (en) * 2013-05-13 2019-05-07 Mango IP Holdings, LLC System and method for language learning through film
CN106952515A (en) * 2017-05-16 2017-07-14 宋宇 The interactive learning methods and system of view-based access control model equipment
CN109756770A (en) * 2018-12-10 2019-05-14 华为技术有限公司 Video display process realizes word or the re-reading method and electronic equipment of sentence
CN109448466A (en) * 2019-01-08 2019-03-08 上海健坤教育科技有限公司 The learning method of too many levels training mode based on video teaching
CN110007768A (en) * 2019-04-15 2019-07-12 北京猎户星空科技有限公司 Learn the processing method and processing device of scene
CN111462546A (en) * 2020-04-03 2020-07-28 北京儒博科技有限公司 Voice teaching method, device, equipment and storage medium
CN111506741A (en) * 2020-04-14 2020-08-07 天津洪恩完美未来教育科技有限公司 Word information association method, device, system, storage medium and electronic device
CN111462553B (en) * 2020-04-17 2021-03-30 杭州菲助科技有限公司 Language learning method and system based on video dubbing and sound correction training

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105047025A (en) * 2015-07-10 2015-11-11 上海爱数软件有限公司 Rapid manufacturing method for mobile learning courseware
CN105117381A (en) * 2015-08-28 2015-12-02 上海第九城市教育科技股份有限公司 Method and system for generating interactive multimedia courseware

Also Published As

Publication number Publication date
CN111901665A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN111901665B (en) Teaching resource playing method and device and storage medium
CN109960809B (en) Dictation content generation method and electronic equipment
EP2816549B1 (en) User bookmarks by touching the display of a music score while recording ambient audio
CN109634552A (en) Report control method and terminal device applied to dictation
JP2016157225A (en) Voice search apparatus, voice search method, and program
CN111711834B (en) Recorded broadcast interactive course generation method and device, storage medium and terminal
CN111081084B (en) Method for broadcasting dictation content and electronic equipment
KR102060229B1 (en) Method for assisting consecutive interpretation self study and computer readable medium for performing the method
CN111077996A (en) Information recommendation method based on point reading and learning equipment
JP6613560B2 (en) Electronic device, learning support method and program
KR20200086616A (en) Method of interactive foreign language learning by voice talking each other using voice recognition function and TTS function
JP6166831B1 (en) Word learning support device, word learning support program, and word learning support method
CN111028591B (en) Dictation control method and learning equipment
CN111726693A (en) Audio and video playing method, device, equipment and medium
KR20190070683A (en) Apparatus and method for constructing and providing lecture contents
CN111028590B (en) Method for guiding user to write in dictation process and learning device
CN111090383B (en) Instruction identification method and electronic equipment
CN113409653A (en) Information display method of programming interface and related equipment
JP5765592B2 (en) Movie playback device, movie playback method, movie playback program, movie playback control device, movie playback control method, and movie playback control program
KR101051602B1 (en) Foreign Language Learner and Learning Method
KR20200094060A (en) Method of interactive foreign language learning by voice talking each other using voice recognition function and TTS function
JP2016157042A (en) Electronic apparatus and program
KR20040013167A (en) An apparatus for studying foreign language and the method thereof
CN114915824A (en) Word teaching resource playing method, device and storage medium
JP2019175245A (en) Speech synthesizer

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant