CN111882932A - Method and device for assisting language learning, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111882932A
Authority
CN
China
Prior art keywords
target image
user
image
learning
selection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010758445.1A
Other languages
Chinese (zh)
Inventor
徐利民 (Xu Limin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Top Aiying Beijing Technology Co ltd
Original Assignee
Top Aiying Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Top Aiying Beijing Technology Co ltd filed Critical Top Aiying Beijing Technology Co ltd
Priority to CN202010758445.1A priority Critical patent/CN111882932A/en
Publication of CN111882932A publication Critical patent/CN111882932A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a method, an apparatus, an electronic device and a storage medium for assisting language learning. The method comprises the following steps: presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image; receiving the user's selection of the target image; and, when the selection is correct, prompting the user that the selection is correct. Compared with the prior art, the technical scheme provided by the invention combines a game with word learning, uses the pictures searched for in the game to reinforce graphical memory of the words, and achieves a learning mode that combines sound, shape and meaning.

Description

Method and device for assisting language learning, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of assisted language learning, and in particular to a method, an apparatus, an electronic device and a storage medium for assisting language learning.
Background
In recent years, interest in learning a second language has risen steadily, its perceived importance has grown, and learners have become younger and younger. For young learners, finding fun in the learning process greatly benefits the learning effect. However, current learning-assistance software is basically presented as plain word study, such as listing words and their definitions one by one. For a relatively young beginner, such a learning process is tedious and hard to engage with, resulting in low learning efficiency.
Therefore, as beginners become younger and the demand for second-language learning grows, how to enrich the way users learn words and how to innovate the learning mode have become urgent problems to be solved.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method, an apparatus, an electronic device and a storage medium for assisting language learning, so as to solve the problem that language learning in the prior art is tedious.
According to one aspect of the embodiments of the present invention, there is provided a method for assisting language learning, the method including: presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image; receiving the user's selection of the target image; and when the selection is correct, prompting the user that the selection is correct.
In one embodiment of the invention, the method further comprises: when the user's selection is wrong, popping up a first popup window and playing the pronunciation of the target image at a first predetermined time interval, wherein the first popup window displays: the target image, a real-object image corresponding to the target image, and the word corresponding to the target image; or the real-object image corresponding to the target image and the word corresponding to the target image. Playing the pronunciation of the target image comprises: playing the pronunciation of the target image at a second predetermined time interval.
In one embodiment of the present invention, prompting the user that the selection is correct comprises: indicating a correct selection when the user clicks a hot zone containing the target image. Popping up the first popup window and playing the pronunciation of the target image at the first predetermined time interval when the selection is wrong comprises: when the user clicks an area outside the hot zone, popping up the first popup window and playing the pronunciation of the target image for the user to study.
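The hot-zone check described in this embodiment can be sketched as a simple rectangular hit test. This is an illustrative assumption only (the patent does not specify the zone's shape or any function names); it shows one way a click could be classified as a correct selection or routed to the learning popup.

```python
# Illustrative sketch of the hot-zone check: a click inside the rectangle
# bounding the target image counts as a correct selection; any click
# outside it triggers the first (learning) popup. All names are hypothetical.

def classify_click(click_x, click_y, hot_zone):
    """hot_zone is (left, top, width, height) of the target image's area."""
    left, top, width, height = hot_zone
    inside = (left <= click_x < left + width) and (top <= click_y < top + height)
    return "correct" if inside else "show_learning_popup"
```

In practice the zone might be irregular rather than rectangular; the point is only that correct/incorrect is decided by whether the click falls inside the target image's region.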
In one embodiment of the present invention, before presenting the first image on the user interface and playing the pronunciation of the target image, the method further includes: receiving a learning interaction instruction from the user; and popping up a second popup window, which automatically closes after continuously playing the pronunciation of the target image a specified number of times at a third predetermined time interval, wherein the second popup window displays: the target image, a real-object image corresponding to the target image, and the word corresponding to the target image; or the real-object image corresponding to the target image and the word corresponding to the target image; or only the word corresponding to the target image.
In one embodiment of the invention, before receiving the learning interaction instruction from the user, the method further comprises completing a preset learning item in advance to qualify for executing the learning interaction instruction.
In one embodiment of the invention, the method further comprises: presenting a prompt control on the user interface that helps the user quickly find the target image when selected. The method further comprises: receiving the user's selection of the prompt control; and highlighting a first area of the first image to assist the user in making the correct selection, wherein the first area contains the target image.
In one embodiment of the invention, the method further comprises: allowing the user to click the prompt control in the user interface again only after a preset cooldown time has elapsed, wherein the cooldown time increases by a preset number of seconds as the number of uses of the prompt control increases.
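The growing cooldown can be sketched as follows. The base cooldown, the increment, and the function name are illustrative assumptions; the patent only states that the cooldown increases by a preset number of seconds with each use of the prompt control.

```python
# Hypothetical sketch of the hint-control cooldown: each use of the
# prompt control lengthens the next cooldown by a fixed increment.

def hint_cooldown(base_seconds, increment_seconds, uses):
    """Seconds to wait before the prompt control can be used again,
    after it has already been used `uses` times."""
    return base_seconds + increment_seconds * uses
```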
In one embodiment of the invention, the pronunciation of the target image includes English.
According to another aspect of the embodiments of the present invention, there is provided an apparatus for assisting language learning, the apparatus including: a display module for presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image; a receiving module for receiving the user's selection of the target image; and a prompting module for prompting the user that the selection is correct when the user's selection is correct.
According to still another aspect of the embodiments of the present invention, there is provided an electronic device including a processor and a memory, wherein the memory is used for storing a program for assisting language learning as in the above embodiments, and the processor is used for executing the program.
According to a further aspect of the embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program for executing the method for assisting language learning described in the above embodiments.
According to the above technical scheme, the method for assisting language learning provided by the embodiments of the present invention achieves the combination of sound, shape and meaning by having the user find, within the picture, the answer corresponding to the pronunciation heard. In addition, because the background image is rich in content and covers many words, the user's memorization and learning of words are improved, as is the user's ability to observe and search for things.
Drawings
Fig. 1 is a flowchart illustrating a method for assisting language learning according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating a method for assisting language learning according to another embodiment of the present invention.
Fig. 3 is a flowchart illustrating a method for assisting language learning according to another embodiment of the present invention.
Fig. 4 is a flowchart illustrating a method for assisting language learning according to yet another embodiment of the present invention.
Fig. 5 is a flowchart illustrating an application prompt control of a method for assisting language learning according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an apparatus for assisting language learning according to an embodiment of the present invention.
Fig. 7 is a block diagram of an electronic device for assisting language learning according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of a user interface of a method for assisting language learning according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of a user interface of a method for assisting language learning according to another embodiment of the present invention.
Fig. 10 is a schematic diagram of a user interface of a method for assisting language learning according to another embodiment of the present invention.
Fig. 11 is a schematic diagram of a user interface of a method for assisting language learning according to another embodiment of the present invention.
Fig. 12 is a schematic diagram of a user interface of a method for assisting language learning according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the development of assisted-learning technology, it has become increasingly convenient for learners to receive learning assistance through terminals, and beginners can conveniently use a terminal to acquire new knowledge or a new language.
Fig. 1 is a flowchart illustrating a method for assisting language learning according to an embodiment of the present invention. The method of fig. 1 can be applied to mobile terminals such as mobile phones and tablet computers. As shown in fig. 1, the method for assisting language learning includes the following.
110: presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image.
Specifically, the first image may be a background image, which may be graded according to the difficulty of the user's selection; for example, for a younger user, the background image may be a cartoon. The first image may be either a color or a black-and-white background image, and the user can set it flexibly according to actual needs to exercise the ability to find and memorize words in different modes. The embodiment of the present application does not specifically limit the settings of the first image (such as difficulty level, size, position, color mode, and background map). For example, the first image may be located at the bottom of the user interface and presented as a black-and-white background.
The target image may be a cartoon image embedded in the first image as part of the cartoon, to increase the difficulty of finding it. It may also vary with the color of the first image; for example, when the first image is in color, the target image is also in color.
While the user interface displays the first image, it simultaneously plays the pronunciation corresponding to the target image. The pronunciation may alternate between one accent or two, such as British and American English; between two voices, such as a male voice and a female voice; or between two voices and two accents, for example a male voice with a British accent and a female voice with an American accent. Different accents and voices strengthen the user's familiarity with the word. The embodiment of the invention does not specifically limit the pronunciation of the target image, and the user can set it according to the actual situation.
The pronunciation may be played at a predetermined time interval (here, the second predetermined time interval mentioned below), which the embodiment of the invention does not specifically limit. The second predetermined time interval may be a fixed number of seconds, for example a play every 1 s or every 2 s. It may also increase in seconds, for example 1 s before the first repeat, 2 s before the second, and 3 s before the third.
Preferably, the embodiment of the present invention sets the second predetermined time interval to a fixed 1 s, i.e., the pronunciation is played once every second.
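The two interval schemes described above (a fixed gap between plays versus a gap that grows by one second each time) can be sketched as a schedule of play offsets. The function and its parameters are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch of the two playback schedules described above:
# a fixed interval (e.g. 1 s between plays) or an incremental one
# (1 s before the 2nd play, 2 s before the 3rd, 3 s before the 4th, ...).

def play_times(num_plays, fixed_interval=None):
    """Return the offsets in seconds at which each play starts."""
    times, t = [], 0.0
    for i in range(num_plays):
        times.append(t)
        # Fixed gap if given; otherwise the gap grows by 1 s each play.
        gap = fixed_interval if fixed_interval is not None else float(i + 1)
        t += gap
    return times
```

With the preferred fixed 1 s interval, `play_times(3, 1.0)` yields plays at 0 s, 1 s and 2 s; the incremental scheme `play_times(3)` yields 0 s, 1 s and 3 s.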
The user interface may include a reference image of the target image to be found and/or the word corresponding to the target image. The reference image can be understood as a depiction of the target image to be found, and the user searches the first image for the target image according to this prompt. The reference image may be a photograph or a drawing corresponding to the target image; for example, when the target is a bird, the reference image may be a photograph of the bird or a line drawing of the bird, so that the user can accurately identify the target image from the reference image. The reference image may also contain both a photograph and a drawing, to help the user better understand the word corresponding to the target image. The reference image and the word may be placed outside the first image, or placed, as a floating window, at any position comfortable for the user according to the user's habits. The embodiment of the present invention does not specifically limit the presentation form or position of the reference image and the word.
120: and receiving the selection of the target image by the user.
Specifically, the color of the target image may be the same as or similar to that of its surroundings, in order to increase the difficulty of finding it. The user's click on the correct answer in the user interface, i.e., the selection of the correct answer within the first image, is received.
130: and when the user selection is correct, prompting the user that the selection is correct.
Specifically, when the user selects the correct target image, the system rewards the user in a way that encourages the user and increases interest in learning, such as a fixed coin reward, a points reward, or another kind of reward, for example 2 gold coins for a correct answer. The reward may also increase over consecutive correct answers, for example 2 coins for the first correct answer, 4 coins for the second, and 6 coins for the third. The reward may also be an animated effect such as "good" or "great". The embodiment of the invention does not limit the form of the reward.
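The escalating coin reward in the example above (2, 4, 6 coins for the first, second and third consecutive correct answers) can be sketched as a linear scale of the base reward. This is a hypothetical reading of the example; the patent leaves the reward scheme open.

```python
# Hypothetical sketch of the escalating coin reward: the nth consecutive
# correct answer earns n times the base reward (2, 4, 6, ... coins).

def coin_reward(correct_streak, base=2):
    """Coins awarded for the nth consecutive correct answer (1-indexed)."""
    return base * correct_streak
```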
Therefore, the method for assisting language learning provided by the embodiment of the invention achieves the combination of sound, shape and meaning by having the user find, within the picture, the answer corresponding to the pronunciation heard. In addition, because the background image is rich in content and covers many words, the user's memorization and learning of words are improved, as is the user's ability to observe and search for things.
Fig. 2 is a flowchart illustrating a method for assisting language learning according to another embodiment of the present invention. FIG. 2 is an example of the embodiment of FIG. 1, and the same parts are not repeated herein, and the differences are mainly described here. As shown in fig. 2, the method includes the following.
210: presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image.
Specifically, step 210 is substantially the same as step 110 in fig. 1, and please refer to the description of step 110 in fig. 1 for details, which are not repeated herein.
220: and receiving the selection of the target image by the user.
Specifically, step 220 is substantially the same as step 120 in fig. 1, and please refer to the description of step 120 in fig. 1 for details, which are not repeated herein.
230: and when the user selects the error, popping up a first popup window and playing the pronunciation of the target image at a first preset time interval.
Specifically, when the user's selection is wrong, the interface may present error feedback, for example a sound effect when the user clicks incorrectly; the duration of the error feedback may be set to 1 s.
After presenting the error feedback, a first popup window, for example a word-learning popup, pops up on the user interface. The first popup may display a normal mode, i.e., it contains the target image (which may be consistent with the reference image of the target image displayed in the user interface, as mentioned in the above embodiment), the real-object image corresponding to the target image, and the word corresponding to the target image. It may instead display a single-picture mode, i.e., it contains only the real-object image corresponding to the target image and the word corresponding to the target image. The target word, the target image and the real-object image corresponding to the target image can be adjusted according to actual needs, and the embodiment of the present invention does not specifically limit them. The normal mode and the single-picture mode may also be combined with the first image to divide difficulty levels, from easy to hard: a color normal mode (the first image and the target image are in color, where "target image" covers all target images presented in this mode in the interface), a black-and-white normal mode (the first image and the target image are in black and white), a color single-picture mode (the first image is in color; the real-object image corresponding to the target image is not limited), and a black-and-white single-picture mode (the first image is in black and white; the real-object image corresponding to the target image is not limited). The embodiment of the present invention does not limit this division, and the user can set different difficulty levels as needed.
When the first popup pops up, the interface plays the pronunciation of the target image at a first predetermined time interval, which may be consistent with the second predetermined time interval: a fixed number of seconds, for example continuous plays at 1 s or 2 s intervals, or increasing intervals, for example 1 s before the first repeat, 2 s before the second, and 3 s before the third. The first predetermined time interval is not specifically limited in the embodiments of the present invention. The first popup also displays a play countdown and automatically closes once the specified number of plays has been reached (e.g., 6 or 7 plays). The specified number of plays is not specifically limited in the embodiment of the present invention.
Preferably, the embodiment of the present invention sets the first predetermined time interval to a fixed 1 s, i.e., the pronunciation is played once every second, and sets the specified number of plays to 6.
The time the first popup stays on screen is determined by playing the pronunciation the specified number of times, and the number of plays increases with the number of wrong selections: the more errors, the more plays. However, a maximum penalty may be set; for example, once the error count reaches a maximum of 3, the number of plays for 3 errors is used even if the user errs 4 or 5 times. The number of plays for the first popup (i.e., the penalty formula) may be calculated as 3 × 2^(n+1), where n is the number of errors. The number of plays, the penalty formula and the maximum penalty can all be adjusted to the user's needs.
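The penalty formula above can be made concrete as follows. Note the assumptions: n is taken to start at 0 (so that 3 × 2^(0+1) = 6 matches the preferred specified number of 6 plays), and the cap of 3 errors from the example is applied before the formula; the patent leaves both choices adjustable.

```python
# Sketch of the penalty-pass formula 3 * 2^(n+1), with the error count n
# capped at a maximum (3 in the example above). Assumes n counts from 0,
# so that zero errors gives the default of 6 plays.

def penalty_passes(errors, max_errors=3):
    """Number of pronunciation plays for the first popup."""
    n = min(errors, max_errors)  # errors beyond the cap add no extra plays
    return 3 * (2 ** (n + 1))
```

Under these assumptions the sequence is 6, 12, 24, 48 plays for 0, 1, 2, 3 errors, and stays at 48 for any higher error count.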
Therefore, the embodiment of the invention adopts a pop-up popup window continuous learning mode, so that the user can enter learning again after selecting an error, and the memory of the target word is deepened.
Fig. 3 is a flowchart illustrating a method for assisting language learning according to another embodiment of the present invention. FIG. 3 is an example of the embodiment of FIG. 1, and the same parts are not repeated herein, and the differences are mainly described here. As shown in fig. 3, the method includes the following.
310: and receiving a learning interaction instruction of the user.
Specifically, an instruction to start learning is received; for example, the user clicks a control such as "start learning", and the learning operation is executed.
320: and popping up a second popup window, and automatically closing the second popup window after continuously playing the voice of the target image with the specified number of times at a third preset time interval.
Specifically, after the user clicks to execute the learning operation, the user interface pops up a second popup window. In addition to the modes of the first popup, the second popup may also display a no-picture mode, i.e., it contains only the word corresponding to the target image. For the parts the second popup shares with the first popup, please refer to the description of the first popup in the above embodiment, which is not repeated here. The no-picture mode may also be combined with the first image to divide difficulty (the no-picture mode is harder than the normal mode and the single-picture mode), from easy to hard: a color no-picture mode (the first image is in color) and a black-and-white no-picture mode (the first image is in black and white). The embodiment of the present invention does not limit this.
The second popup automatically closes after continuously playing the pronunciation of the target image the specified number of times at a third predetermined time interval. The third predetermined time interval may be the same as the first and second predetermined time intervals: a fixed number of seconds, for example continuous plays at 1 s or 2 s intervals, or increasing intervals, for example 1 s before the first repeat, 2 s before the second, and 3 s before the third. The third predetermined time interval may also differ from the first and second, for example 1 s where the other two are 2 s, or 2 s where the other two are 1 s; or all three intervals may differ, for example a third predetermined time interval of 1 s, a second of 2 s, and a first of 3 s. The embodiment of the present invention does not specifically limit the third predetermined time interval, nor the relationship among the first, second and third predetermined time intervals.
Preferably, the embodiment of the present invention sets the third predetermined time interval to a fixed 1 s, i.e., the pronunciation is played once every second.
The predetermined number of playing passes in the embodiment of the present invention may be set according to the actual learning requirement of the user, which is not specifically limited in the embodiment of the present invention. E.g. 5, 6 or 7 passes etc. can be played.
Preferably, the embodiment of the present invention sets the play-specifying pass to 6 passes.
After the second popup closes automatically, the user interface may present, in addition to the first image, the target image (i.e., the reference image of the target image mentioned in the embodiment of fig. 1) and/or the real-object image corresponding to the target image, and the word corresponding to the target image. The real-object image corresponding to the target image may be the same as, or different from, the one presented in the first or second popup, which this application does not specifically limit.
It should be noted that, depending on the user's choice of difficulty level, there may be a correspondence between the first popup and the second popup: when the first popup is in the color normal mode, the second popup is in the color normal mode; when the first popup is in the black-and-white normal mode, the second popup is in the black-and-white normal mode; when the first popup is in the color single-picture mode, the second popup is in the color no-picture mode; and when the first popup is in the black-and-white single-picture mode, the second popup is in the black-and-white no-picture mode.
The user can gradually increase the difficulty of learning, for example upgrading step by step from the easy normal mode to the hard no-picture mode, and can also set the difficulty freely, for example lowering it. Difficulty can also be divided within each mode, with the color variant easier than the black-and-white variant; for example, within the normal mode, the color normal mode is easier than the black-and-white normal mode.
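One assumed overall ordering of the six modes, combining the rules above (no-picture harder than normal and single-picture; color easier than black-and-white within a mode), can be sketched as follows. The exact ranking of normal versus single-picture, and the names used, are assumptions the patent does not fix.

```python
# Illustrative ranking of the display modes, easiest first, under the
# assumptions stated above. Mode names are hypothetical labels.

MODES_EASIEST_FIRST = [
    "color normal", "black-and-white normal",
    "color single-picture", "black-and-white single-picture",
    "color no-picture", "black-and-white no-picture",
]

def harder_than(a, b):
    """True if mode `a` is harder than mode `b` in this assumed ordering."""
    return MODES_EASIEST_FIRST.index(a) > MODES_EASIEST_FIRST.index(b)
```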
Therefore, the classification of difficulty degree grades can increase the interest of learning and consolidate the learning of words.
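As an illustration only, the difficulty modes above can be modelled as an ordered ladder. The exact interleaving across modes is an assumption; the text only states that color variants are easier than their black-and-white counterparts and that the no-picture modes are the hardest, and all names here are hypothetical:

```python
# Hypothetical difficulty ladder, ordered from easiest to hardest.
# The relative order of, e.g., black_white_normal vs. color_single_picture
# is an assumption not fixed by the text.
DIFFICULTY_LADDER = [
    "color_normal",              # easiest
    "black_white_normal",
    "color_single_picture",
    "black_white_single_picture",
    "color_no_picture",
    "black_white_no_picture",    # hardest
]

def adjust_difficulty(current: str, step: int) -> str:
    """Move up (step > 0) or down (step < 0) the ladder, clamped at both ends."""
    i = DIFFICULTY_LADDER.index(current) + step
    i = max(0, min(i, len(DIFFICULTY_LADDER) - 1))
    return DIFFICULTY_LADDER[i]
```

Clamping at both ends mirrors the described behaviour: a user at the easiest mode who downgrades simply stays there, and likewise at the hardest mode.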
330: presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image.
Specifically, step 330 is substantially the same as step 110 in fig. 1, and please refer to the description of step 110 in fig. 1 for details, which are not repeated herein.
340: and receiving the selection of the target image by the user.
Specifically, step 340 is substantially the same as step 120 in fig. 1, and please refer to the description of step 120 in fig. 1 for details, which are not described herein again.
350: and when the user selection is correct, prompting the user that the selection is correct.
Specifically, step 350 is substantially the same as step 130 in fig. 1, and please refer to the description of step 130 in fig. 1 for details, which are not repeated herein.
Therefore, by viewing the target image and its corresponding real object image and listening to the pronunciation a specified number of times, the user can fully understand the target image. During the exercise, the user must then find the corresponding content in the background image according to the heard word and the reference image previously seen, thereby combining sound, shape and meaning.
Fig. 4 is a flowchart illustrating a method for assisting language learning according to yet another embodiment of the present invention. FIG. 4 is an example of the embodiment of FIG. 3, and the same parts are not repeated herein, and the differences are mainly described here. As shown in fig. 4, the method further includes the following steps based on the embodiment of fig. 3.
410: and learning a preset learning item in advance to obtain the qualification of executing the learning interactive instruction.
Specifically, the learning interaction instruction can be executed only after the preset learning item is completed. In this embodiment of the present invention, the preset learning item may be watching a video in a designated learning area, listening to and reading words within a designated time, following and reading a designated number of words, and the like. These words may be consistent with the target images of fig. 1, which further strengthens the user's learning of them, or inconsistent with the target images of fig. 1, which further expands the user's vocabulary.
The qualification for executing the learning interaction instruction may take several forms: obtaining a qualification coupon (e.g., a challenge coupon), where one coupon is consumed each time the instruction is executed; obtaining a stamina value, where a certain amount of stamina is consumed each time; obtaining points, where a certain number of points is consumed each time; or upgrading the qualification itself, where a certain level must be reached before the instruction can be executed. The embodiments of the present invention do not specifically limit the qualification for executing the learning interaction instruction.
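A minimal sketch of the first scheme, consuming one qualification coupon (challenge coupon) per execution; the class and method names are illustrative, not from the patent:

```python
class LearnerAccount:
    """Hypothetical account tracking a user's qualification coupons."""

    def __init__(self, coupons: int = 0):
        self.coupons = coupons

    def earn_coupon(self) -> None:
        """Granted after completing a preset learning item (e.g. watching a video)."""
        self.coupons += 1

    def try_start_challenge(self) -> bool:
        """Consume one coupon if available; refuse the instruction otherwise."""
        if self.coupons < 1:
            return False
        self.coupons -= 1
        return True
```

The stamina, points, and level-based schemes would follow the same pattern, with a different resource checked and decremented (or a level threshold compared) in `try_start_challenge`.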
420: and receiving a learning interaction instruction of the user.
Specifically, step 420 is substantially the same as step 310 in fig. 3, and please refer to the description of step 310 in fig. 3 for details, which are not described herein again.
430: and popping up a second popup window, and automatically closing the second popup window after continuously playing the voice of the target image with the specified number of times at a third preset time interval.
Specifically, step 430 is substantially the same as step 320 in fig. 3, and please refer to the description of step 320 in fig. 3 for details, which are not described herein again.
440: presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image.
Specifically, step 440 is substantially the same as step 330 in fig. 3, and please refer to the description of step 330 in fig. 3 for details, which are not described herein again.
450: and receiving the selection of the target image by the user.
Specifically, step 450 is substantially the same as step 340 in fig. 3, and please refer to the description of step 340 in fig. 3 for details, which are not described herein again.
460: and when the user selection is correct, prompting the user that the selection is correct.
Specifically, step 460 is substantially the same as step 350 in fig. 3, and please refer to the description of step 350 in fig. 3 for details, which are not described herein again.
Therefore, the qualification-based challenge mode (that is, the learning interaction instruction can be executed only when certain conditions are met) increases the interest of learning, stimulates the user's enthusiasm for learning, enriches the word-learning modes, and reinforces the learning effect.
In one embodiment of the present invention, prompting the user that the selection is correct when the user selection is correct includes: displaying that the selection is correct when the user clicks within the hot area containing the target image. Popping up the first pop-up window and playing the pronunciation of the target image at the first predetermined time interval when the user selection is incorrect includes: popping up the first pop-up window and playing the pronunciation of the target image for the user to learn when the user clicks an area outside the hot area.
Specifically, the hot area can be regarded as the standard range for judging whether the user's selection is correct: when the user clicks within the hot area containing the target image, the user is prompted that the selection is correct, and when the user clicks outside the hot area, the user is prompted that the selection is incorrect. The hot area may be set as a preset distance outward from the contour of the target image, for example 3 mm outside the contour, or may be a circular area whose radius is the distance from the center of the target image to its outermost point.
The technical features of displaying the correct selection, popping up the first pop-up window and playing the pronunciation of the target image in the embodiment of the present invention are basically the same as those in the above embodiment, and the details are described in the above related description, which is not repeated herein.
Therefore, the hot area is set, so that the selection operation of the user can be effectively identified when the target image is small, and the identification accuracy is ensured.
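The circular variant of the hot-area test described above can be sketched as follows (names are illustrative; a click counts as correct when it falls within the radius measured from the center of the target image to its outermost point):

```python
import math

def in_hot_area(click_x: float, click_y: float,
                center_x: float, center_y: float, radius: float) -> bool:
    """True when the click falls inside the circular hot area
    centred on the target image."""
    return math.hypot(click_x - center_x, click_y - center_y) <= radius
```

The contour-offset variant (e.g. 3 mm outside the outline) would instead test the click against the target image's contour polygon expanded by the preset distance, which requires the image's outline data rather than just a center and radius.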
Fig. 5 is a flowchart illustrating an application prompt control according to an embodiment of the present invention. As shown in fig. 5, the method includes the following.
510: and receiving the selection of the prompt control by the user.
Specifically, an instruction of a user for clicking a prompt control is received.
The prompt control may be any control used to indicate the area in which the target image is located, such as an icon resembling a magnifying glass. It may also be a simple colored shape, such as a yellow circular control or a triangle, or a control directly labeled "hint". The embodiment of the present invention does not specifically limit the specific form of the prompt control.
520: highlighting a first region of the first image.
Specifically, after the prompt control is clicked, a first area containing the target image is highlighted in the first image. The first area may be highlighted in a flashing manner, appearing and disappearing almost instantly, i.e. with a very short display time such as 1 s, or in a non-flashing manner, i.e. remaining highlighted for a predetermined time such as 3 s or 4 s.
Preferably, embodiments of the present invention highlight the first region in the form of a flash.
Therefore, the prompt control introduced by the embodiment of the invention can help the user to quickly search the target image and improve the learning speed.
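As an illustration, the two highlight behaviours can be sketched as follows; the class and function names are hypothetical, and the 1 s and 3 s durations come from the examples above:

```python
class Region:
    """Minimal stand-in for the highlighted first area of the first image."""
    def __init__(self):
        self.highlighted = False

def apply_highlight(region: Region, style: str = "flash") -> float:
    """Mark the region highlighted and return how long (seconds) it stays
    visible: a brief flash, or a steady highlight held for a few seconds."""
    durations = {"flash": 1.0, "steady": 3.0}
    region.highlighted = True
    return durations[style]
```

In a real interface the returned duration would drive a UI timer that clears the highlight again; the embodiment's preferred choice is the flash style.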
In one embodiment of the present invention, the method further comprises: and the user can continue to click the prompt control in the user interface after waiting for the preset cooling time, wherein the cooling time is increased by a preset number of seconds along with the increase of the use times of the prompt control.
Specifically, after the user finishes using the prompt control, the user cannot immediately click it again for another hint, but must wait for a period of time; that is, the prompt control is unlocked only after the cooling time elapses, and only then can the user click it again. During the cooling time, a countdown in seconds is displayed on the prompt control. In a single-picture game, the unfreeze time of the prompt control keeps extending as the number of uses increases, acting as a penalty mechanism: for example, the unfreeze time is 10 s for the first use, 15 s after the control has been used once, and (10+5n) s after it has been used n times. The initial cooling time and the unfreeze formula are not specifically limited, and the user can set them according to actual requirements.
It should be noted that when the prompt control is dimmed, it cannot be clicked; only when the cooling-time countdown displayed on the control finishes and the control returns to normal brightness can it be clicked again to provide a hint.
Therefore, setting the freezing time prevents the user from over-relying on the hint instead of thinking actively during learning.
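The penalty formula above can be written directly; the constants 10 and 5 come from the example in the text, and the function name is illustrative:

```python
def unfreeze_seconds(uses: int, base: int = 10, step: int = 5) -> int:
    """Cooldown of the prompt control after it has been used `uses` times:
    (base + step * n) seconds, i.e. 10 s initially, growing by 5 s per use."""
    return base + step * uses
```

Since the text leaves the initial cooling time and the formula configurable, `base` and `step` are exposed as parameters rather than hard-coded.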
In one embodiment of the invention, the speech of the target image includes English.
Specifically, the voice of the target image may be english, chinese, japanese, french, or the like, which is not specifically limited in this embodiment of the present invention, and may be set according to actual requirements in the actual application process.
Preferably, the embodiment of the present invention selects english as the voice of the target image.
Fig. 6 is a schematic structural diagram of an apparatus for assisting language learning according to an embodiment of the present invention. The apparatus for assisting language learning includes: a display module 610, a receiving module 620, a prompting module 630 and an obtaining module 640.
Specifically, the display module 610 is configured to present a first image on the user interface, where the first image includes a target image, and play a pronunciation of the target image; a receiving module 620, configured to receive a selection of the target image by a user; and a prompt module 630, configured to prompt the user that the selection is correct when the selection is correct.
Therefore, the embodiment of the present invention provides an apparatus for assisting language learning in which the user finds the corresponding answer in the image according to the pronunciation heard, achieving the combination of sound, shape and meaning. In addition, because the background image is rich in content and covers many words, it improves the user's memorization and learning of words as well as the user's ability to observe and search for things.
In an embodiment of the present invention, the prompt module 630 is configured to: when the user selects an error, popping up a first pop-up window and playing the pronunciation of the target image at a first preset time interval, wherein the target image, a real object image corresponding to the target image and a word corresponding to the target image are displayed on the first pop-up window; the pronunciation of the target image is played, and the display module 610 is configured to play the pronunciation of the target image at a second predetermined time interval.
In an embodiment of the present invention, the receiving module 620 is configured to: receiving a learning interaction instruction of a user; the display module 610 is configured to pop up a second pop-up window, and automatically close the second pop-up window after the voice of the target image with the specified number of passes is continuously played at a third predetermined time interval, where the second pop-up window displays the target image, the real object image corresponding to the target image, and the word corresponding to the target image.
In an embodiment of the present invention, the apparatus further includes an obtaining module 640 configured to: and learning a preset learning item in advance to obtain the qualification of executing the learning interactive instruction.
In an embodiment of the present invention, the prompt module 630 is configured to: when the user clicks on a hotspot containing the target image, the correct selection is displayed; the prompt module 630 is further configured to: and when the user clicks the area outside the hot area, popping up a first popup window and playing the pronunciation of the target image for the user to learn.
In one embodiment of the present invention, the display module 610 is configured to: presenting a prompt control on a user interface, and helping a user to quickly search a target image when the user selects the prompt control; the receiving module 620 is configured to: receiving the selection of the prompt control by the user; the prompt module 630 is configured to: the first area of the first image is highlighted to assist the user in making the correct selection. Wherein the first region includes the target image.
In one embodiment of the present invention, the display module 610 is configured to: the user may continue to click the prompt control within the user interface after waiting for the preset cool-down time to elapse. The cooling time is increased by a preset number of seconds along with the increase of the use times of the prompt control.
In one embodiment of the invention, the speech of the target image includes English.
Specifically, for the specific working processes and functions of the display module 610, the receiving module 620 and the prompt module 630 in the foregoing embodiments, reference may be made to the descriptions in the method for assisting language learning provided in the foregoing embodiments of fig. 1 to 5; to avoid repetition, details are not repeated herein.

Fig. 7 is a block diagram of an electronic device for assisting language learning according to an embodiment of the present invention.
Referring to fig. 7, electronic device 700 includes a processing component 710 that further includes one or more processors, and memory resources, represented by memory 720, for storing instructions, such as applications, that are executable by processing component 710. The application programs stored in memory 720 may include one or more modules that each correspond to a set of instructions. Further, the processing component 710 is configured to execute instructions to perform the above-described method for assisting language learning.
The electronic device 700 may also include a power supply component configured to perform power management of the electronic device 700, a wired or wireless network interface configured to connect the electronic device 700 to a network, and an input/output (I/O) interface. The electronic device 700 may operate based on an operating system stored in the memory 720, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer readable storage medium having instructions therein, which when executed by a processor of the electronic device 700, enable the electronic device 700 to perform a method for assisting language learning, comprising: presenting a first image on a user interface and playing the pronunciation of the target image; receiving a selection of a target image by a user; and when the user selects correctly, prompting the user to select correctly.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Fig. 8 to 12 are schematic diagrams illustrating user interfaces of a method for assisting language learning according to an embodiment of the present invention. The following is displayed in the user interface.
Specifically, when the user enters the user interface 800 of fig. 8, if the user is qualified to execute the learning interaction instruction, for example when the number of qualification coupons shown at 810 in fig. 8 is greater than or equal to 1, the user can click the control 830 to start word learning. At the same time, the first image 820 is presented on the interface together with the number of words the user needs to learn, such as the 12 words in fig. 8.
When the user clicks the control 830 in fig. 8, the interface enters the state of fig. 9. The user interface 900 presents a first pop-up window on which are displayed the target image 940, the real object image 950 corresponding to the target image, and the word 960 corresponding to the target image, for the user to learn. When the first pop-up window pops up, the user interface plays the pronunciation of the target image at a first predetermined time interval, and the pop-up window closes automatically once the pronunciation is detected to have been played a specified number of times (e.g., 6 or 7 times).
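The replay-then-close behaviour of this pop-up can be sketched as a minimal simulation; the function names and the stubbed audio/UI callbacks are illustrative, not from the patent, and the real implementation would wait the first predetermined interval between plays instead of looping synchronously:

```python
def run_popup(play_audio, close_popup, plays: int = 6) -> int:
    """Play the target word's pronunciation `plays` times, then close the
    pop-up automatically; returns the number of plays performed."""
    count = 0
    for _ in range(plays):
        play_audio()  # stub for playing the pronunciation once
        count += 1
        # a real UI would sleep/schedule the first predetermined interval here
    close_popup()     # stub for dismissing the pop-up window
    return count
```

Passing the callbacks in keeps the control flow (count plays, then close) separate from the platform-specific audio and window code.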
The user interface then enters the state of fig. 10. A first image 1020 is presented on the user interface 1000, and above it are displayed the target image 1040 (i.e., the reference image of the target image the user must find, corresponding to 940 in fig. 9), the real object image 1050 corresponding to the target image (corresponding to 950 in fig. 9), and the word 1060 corresponding to the target image (corresponding to 960 in fig. 9). After the user enters the user interface 1000, the pronunciation of the target image 1040 is played at a second predetermined time interval. The user then clicks the target image 1040 in the first image 1020; after receiving the user's selection of the target image 1040, the system prompts the user that the selection is correct and presents a success animation, as shown in fig. 12 (where 1280 is a key frame of the success animation). When the animation finishes playing, learning of the next word starts automatically. If the user cannot find the target image 1040 in the first image 1020, the user can click the prompt control 1070 in the lower right corner of the user interface 1000, and the interface enters the state of fig. 11. The user interface 1100 then highlights a first area 1170 in the first image 1120 (corresponding to the first image 1020 in fig. 10), where the first area 1170 contains the target image, facilitating a quick search by the user.
Preferably, the first area 1170 in this embodiment of the present invention is highlighted as a flash, so fig. 11 can also be understood as a key frame of the flashing of the first area 1170.
After the first area 1170 finishes highlighting, the interface returns to the state of fig. 10, at which point the prompt control 1070 is in its cooling time: its surface has not returned to normal brightness but remains partly dimmed, and it therefore cannot be clicked.
Therefore, combining the game with word learning forms a learning mode that unites sound, shape and meaning.
In the description herein, references to the description of "one embodiment," "some embodiments," "an example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Furthermore, in the description of the present invention, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and the like that are within the spirit and principle of the present invention are included in the present invention.

Claims (11)

1. A method for assisting language learning, comprising:
presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image;
receiving a selection of the target image by a user;
and when the user selection is correct, prompting the user that the selection is correct.
2. The method of claim 1, further comprising:
popping up a first popup window and playing the pronunciation of the target image at a first predetermined time interval when the user selects an error,
wherein the first popup displays: the target image, a real object image corresponding to the target image and a word corresponding to the target image; or a real object image corresponding to the target image and a word corresponding to the target image,
wherein the playing the pronunciation of the target image comprises:
and playing the pronunciation of the target image at a second preset time interval.
3. The method of claim 2, wherein prompting the user to select correct when the user selection is correct comprises:
when the user clicks on the hotspot containing the target image, the correct selection is displayed,
when the user selects an error, popping up a first popup window and playing the pronunciation of the target image at a first preset time interval comprises the following steps:
and when the user clicks the area outside the hot area, popping up the first popup window and playing the pronunciation of the target image for the user to learn.
4. The method of claim 1, wherein before the presenting a first image on a user interface, the first image comprising a target image, and playing the pronunciation of the target image, the method further comprises:
receiving a learning interaction instruction of the user;
popping up a second popup, and automatically closing the second popup after continuously playing the voice of the target image with a specified number of passes at a third preset time interval, wherein the second popup displays: the target image, a real object image corresponding to the target image and a word corresponding to the target image; or a real object image corresponding to the target image and a word corresponding to the target image; or a word corresponding to the target image.
5. The method of claim 4, prior to said receiving the user's learning interaction instruction, further comprising:
learning a preset learning item in advance to qualify to execute the learning interactive instruction.
6. The method of any of claims 1 to 5, further comprising:
presenting a prompt control on the user interface for assisting the user in quickly finding the target image when the user selects the prompt control,
wherein the method further comprises:
receiving a selection of the prompt control by a user;
highlighting a first region of the first image to assist the user in making a correct selection, wherein the first region includes the target image.
7. The method of claim 6, further comprising:
and the user can continuously click the prompt control in the user interface after waiting for the preset cooling time, wherein the cooling time is increased by a preset number of seconds along with the increase of the use times of the prompt control.
8. The method of any of claims 1 to 5, wherein the speech of the target image comprises English.
9. An apparatus for assisting language learning, comprising:
the display module is used for presenting a first image on a user interface, wherein the first image comprises a target image, and playing the pronunciation of the target image;
the receiving module is used for receiving the selection of the target image by the user;
and the prompting module is used for prompting the user that the selection is correct when the user selection is correct.
10. An electronic device comprising a processor and a memory,
wherein the memory is used for storing a program for assisting language learning according to any one of claims 1 to 8;
the processor is configured to execute the program.
11. A computer-readable storage medium, which stores a computer program for executing the method for assisting language learning according to any one of claims 1 to 8.
CN202010758445.1A 2020-07-31 2020-07-31 Method and device for assisting language learning, electronic equipment and storage medium Pending CN111882932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010758445.1A CN111882932A (en) 2020-07-31 2020-07-31 Method and device for assisting language learning, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010758445.1A CN111882932A (en) 2020-07-31 2020-07-31 Method and device for assisting language learning, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111882932A true CN111882932A (en) 2020-11-03

Family

ID=73204294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010758445.1A Pending CN111882932A (en) 2020-07-31 2020-07-31 Method and device for assisting language learning, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111882932A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113171606A (en) * 2021-05-27 2021-07-27 朱明晰 Man-machine interaction method, system, computer readable storage medium and interaction device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007264643A (en) * 2007-04-20 2007-10-11 Casio Comput Co Ltd Information display device and information display processing program
CN108830764A (en) * 2018-09-04 2018-11-16 乔新霞 English Teaching Method, system and electric terminal
CN109389155A (en) * 2018-09-11 2019-02-26 广东智媒云图科技股份有限公司 A kind of interactive learning methods, electronic equipment and storage medium
CN109446891A (en) * 2018-09-11 2019-03-08 广东智媒云图科技股份有限公司 A kind of interactive learning methods based on image recognition, electronic equipment and storage medium
CN109448453A (en) * 2018-10-23 2019-03-08 北京快乐认知科技有限公司 Point based on image recognition tracer technique reads answering method and system
CN109712446A (en) * 2018-07-11 2019-05-03 北京美高森教育科技有限公司 Interactive learning methods based on new word detection
CN110459082A (en) * 2019-08-27 2019-11-15 深圳市柯达科电子科技有限公司 A kind of rapid memory method of word

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113171606A (en) * 2021-05-27 2021-07-27 朱明晰 Man-machine interaction method, system, computer readable storage medium and interaction device
CN113171606B (en) * 2021-05-27 2024-03-08 朱明晰 Man-machine interaction method, system, computer readable storage medium and interaction device

Similar Documents

Publication Publication Date Title
Dalim et al. Using augmented reality with speech input for non-native children's language learning
US9870714B2 (en) Tablet learning apparatus
US20050175970A1 (en) Method and system for interactive teaching and practicing of language listening and speaking skills
US20170287356A1 (en) Teaching systems and methods
JP2003504646A (en) Systems and methods for training phonological recognition, phonological processing and reading skills
CN114125492B (en) Live content generation method and device
US20140134576A1 (en) Personalized language learning using language and learner models
US20070020592A1 (en) Method for teaching written language
US20120077165A1 (en) Interactive learning method with drawing
CN110767005A (en) Data processing method and system based on intelligent equipment special for children
US10839714B2 (en) System and method for language learning
Lemos et al. Augmented reality musical app to support children’s musical education
CN111882932A (en) Method and device for assisting language learning, electronic equipment and storage medium
CN117541444A (en) Interactive virtual reality talent expression training method, device, equipment and medium
JP2002530724A (en) Apparatus and method for training with an interpersonal interaction simulator
JP2018097250A (en) Language learning device
Doumanis Evaluating humanoid embodied conversational agents in mobile guide applications
Hsu et al. Spelland: Situated Language Learning with a Mixed-Reality Spelling Game through Everyday Objects
JP2002159741A (en) Game device and information storage medium
Begic et al. Software prototype based on Augmented Reality for mastering vocabulary
US20050003333A1 (en) Method and a system for teaching a target of instruction
KR100593590B1 (en) Automatic Content Generation Method and Language Learning Method
US20030236667A1 (en) Computer-assisted language listening and speaking teaching system and method with circumstantial shadow and assessment functions
CN111090479A (en) Learning auxiliary method and learning auxiliary equipment
Valdimarsson English language learning Apps: A review of 11 English language learning apps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination