CN114170856B - Machine-implemented hearing training method, apparatus, and readable storage medium - Google Patents


Info

Publication number
CN114170856B
CN114170856B (application CN202111481160.9A)
Authority
CN
China
Prior art keywords
hearing
content
target
audio
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111481160.9A
Other languages
Chinese (zh)
Other versions
CN114170856A (en)
Inventor
Wang Yan (王艳)
Duan Yitao (段亦涛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Youdao Information Technology Beijing Co Ltd
Original Assignee
Netease Youdao Information Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Youdao Information Technology Beijing Co Ltd filed Critical Netease Youdao Information Technology Beijing Co Ltd
Priority to CN202111481160.9A priority Critical patent/CN114170856B/en
Publication of CN114170856A publication Critical patent/CN114170856A/en
Application granted granted Critical
Publication of CN114170856B publication Critical patent/CN114170856B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Embodiments of the present invention provide a machine-implemented hearing training method, apparatus, and readable storage medium. The hearing training method comprises: generating guided-reading content related to target hearing content based on the target hearing content; outputting the guided-reading content in response to entering a pre-listening link; and outputting the target hearing content in response to entering an in-listening link, wherein the target hearing content includes at least target hearing audio. The hearing training method provided by embodiments of the invention offers the user a progressive mode of hearing training and improves the user's comprehension and training effect during the listening link.

Description

Machine-implemented hearing training method, apparatus, and readable storage medium
Technical Field
Embodiments of the present invention relate to the field of data processing technology, and more particularly, to a hearing training method implemented with a machine, an apparatus for implementing hearing training, and a computer-readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Accordingly, unless indicated otherwise, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
Current hearing training methods usually take a rote, cramming-style form: a passage of audio is given, and the user is expected to achieve the goal of hearing training by repeatedly listening and answering questions. While such training methods can address some test-taking needs, they cannot satisfy the listening demands of real examinations or of real-life scenarios (e.g., listening to music, listening to audiobooks, or watching movies). Moreover, even for hearing materials of moderate difficulty, such methods rarely help users achieve good training results; on the contrary, they easily undermine the user's confidence and motivation, making it hard for the user to persist.
Disclosure of Invention
An improved hearing training method is therefore highly needed, one that can meet the demands of various application scenarios and make the hearing training process easier for users to accept.
In this context, embodiments of the present invention desire to provide a machine-implemented hearing training method, an apparatus for implementing hearing training, and a computer-readable storage medium.
In a first aspect of embodiments of the present invention, there is provided a machine-implemented hearing training method comprising: generating guided-reading content related to target hearing content based on the target hearing content; outputting the guided-reading content in response to entering a pre-listening link; and outputting the target hearing content in response to entering an in-listening link, wherein the target hearing content includes at least target hearing audio.
In one embodiment of the invention, prior to generating the guided-reading content, the hearing training method comprises: grading the content difficulty of each candidate hearing content among a plurality of candidate hearing contents; and determining, based on the difficulty level corresponding to the user's hearing level, a range of candidate hearing content from which the user may select the target hearing content.
In another embodiment of the present invention, the content difficulty is determined based on at least one of: the subject of the candidate hearing content; the vocabulary of the candidate hearing content; the grammar of the candidate hearing content; and the sentence length of the candidate hearing content.
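As a concrete illustration of how such factors might be combined, the following Python sketch grades candidate content on a 1-5 scale. The `BASIC_VOCAB` set, the weights, and the thresholds are illustrative assumptions; the patent names only the factors, not a formula.

```python
from dataclasses import dataclass

@dataclass
class HearingContent:
    topic_level: int  # 1 (everyday topic) .. 5 (academic topic), assigned editorially
    text: str

# Illustrative word list: words a beginner is assumed to know (hypothetical).
BASIC_VOCAB = {"the", "a", "is", "are", "i", "you", "make", "bread", "to", "and"}

def difficulty_score(content: HearingContent) -> int:
    """Grade content difficulty 1-5 from topic, vocabulary, and sentence length."""
    words = [w.strip(".,?!").lower() for w in content.text.split()]
    hard_ratio = sum(w not in BASIC_VOCAB for w in words) / max(len(words), 1)
    sentences = [s for s in content.text.replace("?", ".").split(".") if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    score = content.topic_level + round(4 * hard_ratio) + (1 if avg_len > 15 else 0)
    return min(max(score // 2 + 1, 1), 5)

def selectable_range(user_level: int) -> range:
    """Let the user choose content at, just below, or just above their own level."""
    return range(max(user_level - 1, 1), min(user_level + 1, 5) + 1)
```

A selection UI would then offer only candidates whose `difficulty_score` falls inside `selectable_range(user_level)`.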
In yet another embodiment of the present invention, generating the guided-reading content includes: extracting the semantics of the target hearing content to determine subject information of the target hearing content; and generating the guided-reading content based at least on the subject information.
In yet another embodiment of the present invention, the guided-reading content is further generated based on at least one of: keywords in the target hearing content; and/or a target question associated with the target hearing content.
In one embodiment of the invention, the guided-reading content includes expressions in one or more languages, and outputting the guided-reading content includes outputting it in a visual and/or audible manner.
In another embodiment of the invention, the target hearing content further comprises target hearing text corresponding to the target hearing audio, and outputting the target hearing content comprises outputting at least one of: the target hearing audio; the target hearing text; a translation of the target hearing text; and/or a translation of a user-selected portion of the target hearing text.
In yet another embodiment of the present invention, outputting the target hearing content includes configuring at least one of the following output settings: the output speed; the number of loops for each sentence's audio in the target hearing audio; pausing after each sentence's audio has been looped; and automatically outputting the next sentence's audio after the current sentence's audio has been looped.
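The loop, pause, and auto-advance settings can be modeled with a small sketch. The `PlaybackSettings` names and the event tuples are hypothetical stand-ins for a real audio player, which this sketch does not implement.

```python
from dataclasses import dataclass

@dataclass
class PlaybackSettings:
    speed: float = 1.0              # output-speed multiplier
    loops_per_sentence: int = 2     # how many times each sentence's audio repeats
    pause_after_loops: bool = True  # False means auto-advance to the next sentence

def playback_plan(sentences, settings):
    """Yield (action, sentence, speed) events for the in-listening link.

    A real player would drive an audio engine; this only models the
    loop / pause / auto-advance logic of the output settings.
    """
    for sentence in sentences:
        for _ in range(settings.loops_per_sentence):
            yield ("play", sentence, settings.speed)
        if settings.pause_after_loops:
            yield ("pause", sentence, None)  # wait for the user before advancing

plan = list(playback_plan(["sentence 1", "sentence 2"], PlaybackSettings()))
# plan holds two "play" events plus one "pause" event per sentence
```

With `pause_after_loops=False` the "pause" events disappear and playback flows straight into the next sentence.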
In yet another embodiment of the present invention, the hearing training method further comprises: determining keywords in the target hearing content; and, during the in-listening link, presenting the keyword text corresponding to a keyword's audio before that audio is output within the target hearing audio, based on the keyword's position in the target hearing content.
In one embodiment of the present invention, determining the keywords includes: extracting candidate keywords from the target hearing content; searching for the candidate keywords in a preset word list matched to the user's hearing level; and determining the keywords according to the search results in the preset word list.
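A minimal sketch of this lookup step, assuming hypothetical per-level word lists and a deliberately naive candidate extractor (a real system would extract candidates with NLP tooling):

```python
# Preset word lists keyed by hearing level; the entries are illustrative.
LEVEL_WORDLISTS = {
    "beginner": {"bread", "water", "doctor"},
    "advanced": {"bread", "water", "doctor", "yeast", "fermentation"},
}

def extract_candidates(text):
    """Naive candidate extraction: lowercase words longer than three letters."""
    words = [w.strip(".,?!").lower() for w in text.split()]
    return [w for w in words if len(w) > 3]

def determine_keywords(text, user_level):
    """Keep only the candidates found in the word list matched to the user's level."""
    wordlist = LEVEL_WORDLISTS[user_level]
    return [w for w in extract_candidates(text) if w in wordlist]
```

The same sentence thus yields different keywords for different hearing levels, which is the point of matching the list to the user.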
In another embodiment of the present invention, the hearing training method further comprises: determining a plurality of keywords in the target hearing content; and presenting the plurality of keywords simultaneously before sequentially outputting one or more sentence audios including the plurality of keywords based on the positional relationship of the plurality of keywords in the target hearing content.
In yet another embodiment of the present invention, the hearing training method further comprises: determining whether the user's selection of the keyword text is received within a preset time after the keyword audio is output, so as to assess the user's hearing training effect.
In yet another embodiment of the present invention, the hearing training method further comprises: in response to entering the post-listening link, outputting a test question related to the target hearing content, wherein the test question concerns at least one of: a target question in the guided-reading content; and/or details in the target hearing content.
In one embodiment of the present invention, the test question and/or the answer options related to the test question are output in at least one of the following forms: a picture; audio; and/or text in one or more languages.
In another embodiment of the invention, the hearing training method further comprises presenting to the user one or more of the following: the hearing test result; the correct answer for any erroneous answer; and the location in the target hearing content corresponding to the error.
In a second aspect of embodiments of the present invention, there is provided an apparatus for performing hearing training, comprising: a processor configured to execute program instructions; and a memory configured to store the program instructions which, when executed by the processor, cause the apparatus to perform the hearing training method according to any one of the first aspects of the embodiments of the present invention.
In a third aspect of the embodiments of the present invention, there is provided a computer readable storage medium storing program instructions which, when loaded and executed by a processor, cause the processor to perform a method according to any one of the first aspects of the embodiments of the present invention.
According to the machine-implemented hearing training method, guided-reading content related to the target hearing content can be generated and output upon entering the pre-listening link. This arrangement provides the user with a progressive mode of hearing training, allowing the user to gain a preliminary understanding of the target hearing content before listening to the target hearing audio, and thereby improving the user's comprehension and training effect during the listening link.
Further, in some embodiments, candidate hearing content may be matched to the user's hearing level according to the content difficulty of each candidate, realizing a training method that combines content difficulty with training mode and helping the user progress from easy to hard across materials of various difficulties. In other embodiments, the keyword text may be presented before the corresponding keyword audio is output in the in-listening link, which helps improve the interest and training effect of that link.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the invention;
FIG. 2 schematically illustrates a flow chart of a hearing training method according to an embodiment of the invention;
FIG. 3 schematically illustrates a flow chart of a hearing training method including content difficulty rating according to an embodiment of the invention;
FIG. 4 schematically illustrates a flow chart of a hearing training method that presents keyword text during the in-listening link according to an embodiment of the invention; and
FIG. 5 schematically shows a flow chart of a hearing training method comprising a post-listening link according to an embodiment of the invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable those skilled in the art to better understand and practice the invention and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
FIG. 1 schematically illustrates a block diagram of an exemplary computing system 100 suitable for implementing embodiments of the invention. As shown in fig. 1, the computing system 100 may include: a Central Processing Unit (CPU) 101, a Random Access Memory (RAM) 102, a Read Only Memory (ROM) 103, a system bus 104, a hard disk controller 105, a keyboard controller 106, a serial interface controller 107, a parallel interface controller 108, a display controller 109, a hard disk 110, a keyboard 111, a serial peripheral 112, a parallel peripheral 113, and a display 114. Of these devices, the CPU 101, the RAM 102, the ROM 103, the hard disk controller 105, the keyboard controller 106, the serial interface controller 107, the parallel interface controller 108, and the display controller 109 are coupled to the system bus 104. The hard disk 110 is coupled to the hard disk controller 105, the keyboard 111 to the keyboard controller 106, the serial peripheral 112 to the serial interface controller 107, the parallel peripheral 113 to the parallel interface controller 108, and the display 114 to the display controller 109. It should be understood that the block diagram depicted in FIG. 1 is for illustrative purposes only and is not intended to limit the scope of the present invention; in some cases, devices may be added or removed as appropriate.
Those skilled in the art will appreciate that embodiments of the invention may be implemented as a system, method, or computer program product. Thus, the invention may be embodied in the form of: all hardware, all software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software, is generally referred to herein as a "circuit," "module," or "system," etc. Furthermore, in some embodiments, the invention may also be embodied in the form of a computer program product in one or more computer-readable media, which contain computer-readable program code.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium could include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer, for example, through the internet using an internet service provider.
Embodiments of the present invention will be described below with reference to flowchart illustrations of methods and block diagrams of apparatus (or devices or systems) according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
According to an embodiment of the present invention, a machine-implemented hearing training method, an apparatus for implementing hearing training, and a computer readable storage medium are presented.
Herein, it is to be understood that the terms involved include the following:
NLP: natural language processing it is a natural language processing technology, it researches various theories and methods that can realize effective communication between human and computer by natural language, and is mainly applied to machine translation, public opinion monitoring, automatic abstract, viewpoint extraction, text classification, question answering, text semantic comparison, speech recognition, character recognition OCR, etc.
ASR: automatic Speech Recognition, automatic speech recognition techniques, can convert speech to text.
Second language acquisition (SLA): generally refers to the learning of any language after one's native language has been acquired.
Keyword extraction technology: a technology that automatically extracts the sense groups, keywords, and/or key phrases that reflect a text's gist. For example, for a sentence such as "I consider him a doctor", the verb "consider" and the noun "doctor" may be extracted as keywords.
Sense group: each component into which a sentence is divided by meaning and structure; the words within the same sense group are closely related to one another. A sense group may be a chunk of speech that carries practical meaning or summarizes the emphasis of a sentence.
Keyword: a word or phrase that reflects the subject matter or core idea of a text; it may also be understood as a word that carries practical meaning or summarizes the emphasis of a sentence.
Semantic analysis technology: a branch of natural language processing that derives a formalized representation of a sentence's meaning from the syntactic structure of the sentence or its fragments and from the senses of the words in the sentence.
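The "I consider him a doctor" example above can be reproduced with a toy keyword extractor. This sketch simply drops function words from a hypothetical stopword set; a production system would use POS tagging or a trained extractor instead.

```python
# Function words to discard; a production system would use POS tagging instead.
STOPWORDS = {"i", "you", "he", "she", "it", "him", "her", "a", "an", "the", "to", "of"}

def extract_keywords(sentence):
    """Toy keyword extraction: drop function words, keep content words."""
    words = [w.strip(".,?!").lower() for w in sentence.split()]
    return [w for w in words if w not in STOPWORDS]

extract_keywords("I consider him a doctor.")  # -> ["consider", "doctor"]
```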
Furthermore, any number of elements in the figures is for illustration and not limitation, and any naming is used for distinction only and not for any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments thereof.
Summary of the Invention
The inventor found that current hearing training methods lack a progressive process: even for hearing material of moderate difficulty, users have no good way to study and train in depth, layer by layer. As a result, users often suffer considerable setbacks during hearing learning and find it hard to persist.
The inventors also found that, with the continued development of computer technology, artificial intelligence (AI) has achieved significant results in speech technology and semantic understanding. In hearing learning, a user must process and convert audio information in the brain after listening; by simulating this thinking process, ASR (speech recognition) can be applied to the hearing exercise, and keywords in the hearing material can then be extracted with semantic understanding technology. On this basis, it becomes possible to help grade the difficulty of hearing materials, to output guided-reading content before listening, and to train the user's rapid-response ability by combining keywords with audio.
Having described the basic principles of the present invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
The hearing training method of embodiments of the present invention may be implemented by an application running on a machine, for example a hearing training APP. The training language may be any existing language, including but not limited to English, French, German, Spanish, Korean, Japanese, Chinese, and so on. The user population may be, for example, second-language learners, and may include adults, teenagers, or young children. Typically, in such a hearing training APP, the user undergoes hearing training according to the user's own selection or according to hearing training content set by the system. In other applications, the system-set hearing training content may be selected based on a hearing level matched to the user's previous training results. Further, a machine implementing the hearing training APP may be provided with, for example, a speaker to play the hearing training audio.
Exemplary method
A machine-implemented hearing training method according to an exemplary embodiment of the present invention is described below with reference to fig. 2 in conjunction with the above application scenario. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principle of the present invention, and the embodiments of the present invention are not limited in any way. Rather, embodiments of the invention may be applied to any scenario where applicable.
As shown in fig. 2, the hearing training method 200 may include: in step 210, guided-reading content associated with the target hearing content may be generated based on the target hearing content. In some embodiments, the target hearing content may include at least one of a scene dialogue, a popular-science introduction, test material, a song, dubbing for images, an audiobook, and the like. In other embodiments, the target hearing content may be stored in text, audio, and/or video form; video may be stored as a whole or separately in audio and image formats.
For example, in one embodiment of the invention, the target hearing content may be pre-stored in text form in the machine or another available medium; when the audio of the target hearing content (the target hearing audio) needs to be output, the text may be converted into speech for the user. This may be done using existing text-to-speech (TTS) techniques or various text-to-speech techniques developed in the future. In another embodiment of the present invention, the target hearing content may be pre-stored in audio and/or video form in the machine or another available medium, and the stored audio and/or video may be output directly when needed, while the target hearing text, if required, may be obtained by speech-to-text conversion. In still other embodiments, the target hearing content may include target hearing text and/or target hearing audio.
In some embodiments, the target hearing content may be determined by the user's selection or set by the machine. In other embodiments, the hearing training method 200 may further include: receiving hearing material input by a user; and generating the target hearing content based on that material. In this embodiment, the hearing material may be input as text, speech, and/or pictures, and generating the target hearing content may include at least one of: converting text-form material into target hearing audio using text-to-speech technology; converting speech-form material into target hearing text using speech-to-text (ASR) technology; and converting picture-form material into target hearing audio using image-recognition (OCR) technology together with text-to-speech technology.
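The three conversion paths can be sketched as a dispatcher. The TTS, ASR, and OCR functions below are stubs with an invented byte-prefix encoding, standing in for real engines that the patent does not name.

```python
# Stubs standing in for real TTS / ASR / OCR engines; the byte-prefix
# encoding is purely illustrative.
def text_to_speech(text):
    return b"AUDIO:" + text.encode()

def speech_to_text(audio):
    return audio.decode().removeprefix("AUDIO:")

def ocr(image):
    return image.decode().removeprefix("IMG:")

def to_target_content(material, form):
    """Normalize user-supplied hearing material into {text, audio} target content."""
    if form == "text":
        return {"text": material, "audio": text_to_speech(material)}
    if form == "speech":
        return {"text": speech_to_text(material), "audio": material}
    if form == "picture":
        text = ocr(material)
        return {"text": text, "audio": text_to_speech(text)}
    raise ValueError(f"unsupported material form: {form}")
```

Whatever the input form, the result always carries both a text and an audio representation, which is what the later pre-listening and in-listening steps consume.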
In some embodiments, the guided-reading content may include portions of the target hearing content (e.g., passages extracted or excerpted from it) and/or a background introduction to it. In other embodiments, generating the guided-reading content may include: in response to the target hearing content including only the target hearing audio, converting the target hearing audio into target hearing text using speech-to-text (ASR) technology, and generating the guided-reading content based on that text. In one embodiment of the present invention, step 210 may further include generating the guided-reading content related to the target hearing content using semantic analysis technology. In another embodiment, relatedness to the target hearing content may be understood as semantic relatedness.
As further shown in fig. 2, in yet another embodiment of the present invention, step 210 may include step 211 (shown in a dashed box) and step 212 (shown in a dashed box). In step 211, the semantics of the target hearing content may be extracted to determine its subject information, which may include the core meaning or main content of the target hearing content. In some embodiments, the semantics of each sentence in the target hearing content may be extracted, and the subject information determined from the semantics of each sentence.
In other embodiments, semantic analysis techniques may be used to extract the semantics of the target hearing content. For example, the process of step 211 may proceed as follows: the target hearing content includes the sentence "One person on Twitter published a photograph of himself waiting in a very long line at a grocery store"; after its semantics are extracted using semantic analysis technology, the obtained main sentence is "One person published a photo". Implementing this process with semantic analysis technology can effectively reduce labor costs.
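A toy version of this main-clause extraction can be written by stripping prepositional phrases. Real semantic analysis would use dependency parsing; the preposition and verb sets below are illustrative assumptions, and the sketch only approximates the example above.

```python
PREPOSITIONS = {"on", "of", "in", "at", "with", "from"}
MAIN_VERBS = {"published"}  # a real system would identify verbs with a POS tagger

def main_clause(sentence):
    """Drop prepositional phrases, keeping the subject-verb-object skeleton."""
    out, skipping = [], False
    for tok in sentence.strip(".").split():
        low = tok.lower()
        if low in PREPOSITIONS:
            skipping = True
            continue
        if skipping and low not in MAIN_VERBS:
            continue  # still inside a prepositional phrase
        skipping = False
        out.append(tok)
    return " ".join(out)
```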
Next, in step 212, the guided-reading content may be generated based at least on the subject information. In some embodiments, the guided-reading content includes at least the subject information of the target hearing content. In yet another embodiment of the present invention, the guided-reading content may also be generated based on at least one of: keywords in the target hearing content; and/or a target question associated with the target hearing content. Specifically, keywords in the target hearing content may be extracted using keyword extraction technology, and/or target questions related to the target hearing content may be generated using semantic analysis technology; for example, the semantics of the target hearing content are extracted with semantic analysis technology, and semantically related target questions are generated from the extracted semantics. In other embodiments, there may be one or more target questions.
For example, in some application scenarios where the target hearing content relates to how to make bread, the subject information, keywords, and/or target questions of the target hearing content may be extracted through the steps described above, and the generated guide-reading content may then include: the subject information stating that the target hearing content is about how to make bread, noting that the English word for this food is "bread"; then an introduction to several English words for the key ingredients; and finally the target question "do you know what is the decisive ingredient for fermenting the bread?".
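Assembling the three elements into guide-reading content (step 212) might look like the following sketch; the formatting, field names, and the example keywords are illustrative assumptions, not taken from the patent.

```python
def build_guide_content(subject: str, keywords=(), target_question=None) -> str:
    """Combine subject information, optional keywords, and an optional
    target question into one pre-listening guide-reading text."""
    lines = ["Topic: " + subject]
    if keywords:
        lines.append("Key words: " + ", ".join(keywords))
    if target_question:
        lines.append("Listen with this question in mind: " + target_question)
    return "\n".join(lines)

guide = build_guide_content(
    "how to make bread",
    keywords=("flour", "yeast", "salt"),  # hypothetical key-ingredient words
    target_question="Which ingredient is decisive for fermenting the bread?",
)
print(guide)
```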
In one embodiment of the invention, the guide-reading content may include expressions in one or more languages. For example, the guide-reading content may be expressed in the same language as the target hearing content, or in a combination of the user's native language and the target language (i.e., the language of the target hearing content).
Then, after step 210 described above, flow may proceed to step 220, where the guide-reading content may be output in response to entering the pre-listening link. In one embodiment, outputting the guide-reading content may include outputting it in a visual and/or audible manner. For example, it may be presented in text form on a display screen for the user to read, and/or it may be played in speech form for the user to hear.
Based on the above description of the guide-reading content, it will be understood by those skilled in the art that by generating the guide-reading content and outputting it in the pre-listening link, the user can learn the gist of the target hearing content before formally listening to it, and can learn in advance the new words, difficult words, and keywords carrying the core meaning that may appear, which helps the user understand and master the target hearing content more easily in the subsequent in-listening link. Further, when the guide-reading content includes a target question, it can prompt the user to listen to the original text with that question in mind, so that the user pays more targeted attention to the content related to the target question in the target hearing audio, thereby improving hearing training efficiency and the user's ability to capture information while listening.
As further shown in fig. 2, in response to entering the in-listening link, target hearing content may be output in step 230, wherein the target hearing content may include at least target hearing audio. That is, at least the target hearing audio of the target hearing content may be output in step 230. The target hearing audio may be pre-stored, or may be obtained by converting pre-stored target hearing text using a text-to-speech technique.
In another embodiment of the invention, the target hearing content may further comprise target hearing text corresponding to the target hearing audio, and outputting the target hearing content may comprise outputting at least one of: the target hearing audio; the target hearing text; a translation of the target hearing text; and/or a translation of a user-selected portion of the target hearing text. The target hearing text corresponding to the target hearing audio may be understood as meaning that the speech content of the target hearing audio corresponds one-to-one to the text content of the target hearing text.
In some application scenarios, the user may choose not to look at subtitles at all but to "listen blind", in which case outputting the target hearing content may include outputting only the target hearing audio. In other application scenarios, the user may instead choose to view subtitles while listening to the target hearing audio, in which case outputting the target hearing content may include: synchronously outputting the corresponding target hearing text and/or a translation of the target hearing text while the target hearing audio is output, i.e., displaying monolingual or bilingual subtitles. In still other embodiments, when the target hearing audio and the corresponding target hearing text are output, the machine may respond to the user's selection of part of the text in the target hearing text by presenting a translation of that part, so as to meet the user's need to view the translation of certain words, phrases, or sentences while listening to the target hearing audio.
In yet another embodiment of the present invention, outputting the target hearing content may include applying at least one of the following output settings: an output speed; the number of times each sentence audio in the target hearing audio is looped; pausing after each sentence audio finishes looping; and automatically outputting the next sentence audio after each sentence audio finishes looping. These settings can be preset or adjusted at any time during listening as needed.
In some embodiments, the output speed is the playing speed of the target hearing audio. By setting the output speed of the target hearing audio, the difficulty of hearing training can be adjusted in an auxiliary way. In some application scenarios, the user needs to listen repeatedly to every sentence audio in the target hearing audio, or only to some of the sentence audios, which can be achieved by adjusting the loop-count setting for each sentence audio. In other embodiments, setting a pause after each sentence audio finishes looping can prompt the user that playback of that sentence audio has finished, decompose the target hearing audio into manageable pieces, and reduce its listening difficulty to suit the user's needs, after which the user may choose to listen again or continue to the next sentence audio.
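The four output settings can be captured in a small configuration object, with a generator that turns them into a deterministic play schedule. This is a minimal sketch of the behavior described above; the class and event names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class OutputSettings:
    """Illustrative container for the output settings described above."""
    speed: float = 1.0             # playback-rate multiplier
    loops_per_sentence: int = 1    # how many times each sentence audio repeats
    pause_after_sentence: bool = False
    auto_advance: bool = True      # automatically move on to the next sentence

def playback_plan(sentence_ids, settings: OutputSettings):
    """Yield (sentence_id, event) pairs in the order a player would act on them."""
    for sid in sentence_ids:
        for _ in range(settings.loops_per_sentence):
            yield sid, "play"
        if settings.pause_after_sentence:
            yield sid, "pause"
        if not settings.auto_advance:
            yield sid, "wait_for_user"

plan = list(playback_plan([1, 2], OutputSettings(loops_per_sentence=2,
                                                 pause_after_sentence=True)))
```

Adjusting `loops_per_sentence` or `pause_after_sentence` mid-session simply changes the events generated for the remaining sentences.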
These settings can help users of different abilities configure a hearing mode suitable for themselves. By combining the output settings with the output content, the hearing training device can also guide the user through repeated, stepped training from slow to fast and from relying on subtitles to attempting blind listening, thereby realizing progressive hearing training and helping improve the user's hearing training effect.
While a machine-implemented hearing training method according to an embodiment of the present invention has been described above with reference to fig. 2, it will be appreciated by those skilled in the art that the above description is exemplary and not limiting; e.g., steps 220 and 230 need not be limited to the sequential execution in the illustration, and only step 220 or only step 230 may be performed as desired. For example, in another embodiment, step 220 may be performed after step 230 to act as a summary of the target hearing content output in the in-listening link, helping the user verify his or her listening performance. Further, the hearing training method according to the embodiment of the present invention need not be limited to adjusting training difficulty only through the training manner to improve the user's hearing training effect; it may also take into account the content difficulty of the target hearing content to match the user's hearing ability, which will be described exemplarily below with reference to fig. 3.
Fig. 3 schematically shows a flow chart of a hearing training method including content difficulty grading according to an embodiment of the invention. As shown in fig. 3, the hearing training method 300 may include: in step 310, each of a plurality of candidate hearing contents may be difficulty-graded according to content difficulty. In some embodiments, step 310 may further include: difficulty-grading each of the plurality of candidate hearing contents according to content difficulty using natural language processing (NLP) and keyword extraction techniques.
Specifically, in another embodiment of the present invention, the content difficulty may be determined based on at least one of the following: the subject of the candidate hearing content; the vocabulary of the candidate hearing content; the grammar of the candidate hearing content; and the sentence length of the candidate hearing content. The subject of the candidate hearing content can be extracted using NLP technology, the vocabulary of the candidate hearing content can be extracted using a keyword extraction technique, and the grammar and sentence length of the candidate hearing content can likewise be analyzed using NLP technology. In some embodiments, the vocabulary of the candidate hearing content may include keywords, keyword phrases, and/or clusters of keywords.
The difficulty grading may be performed by comprehensively evaluating the subject difficulty, vocabulary difficulty, grammar difficulty, sentence length, etc. of each candidate hearing content, so as to grade the content difficulty of each candidate hearing content. In some embodiments, the respective difficulty level may be determined by matching the subject, vocabulary, grammar, and/or sentence length, etc. of the candidate hearing content against language level criteria of different levels; for example, keywords extracted using a keyword extraction technique may be matched against the vocabularies of the IELTS, CET-4 (College English Test Band 4), CET-6, high school English, and junior high school English level criteria to determine the difficulty level of each keyword, and thus the difficulty level of the candidate hearing content. In other embodiments, the subject difficulty, vocabulary difficulty, grammar difficulty, and/or sentence length, etc. of each candidate hearing content may be combined to rank the plurality of candidate hearing contents in order of difficulty and divide them into a plurality of difficulty levels, e.g., level one through level five.
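A toy version of the grading in step 310 might combine a vocabulary-band match with a sentence-length bonus, as below. The word lists are tiny placeholders (real criteria such as the junior/senior high school, CET-4/6, or IELTS vocabularies contain thousands of entries), and the scoring rule is an assumption for illustration.

```python
# Placeholder vocabulary bands, ordered from easiest to hardest.
BANDS = [
    ("junior", {"make", "bread", "water"}),
    ("senior", {"ingredient", "mixture"}),
    ("cet4",   {"ferment", "decisive"}),
    ("ielts",  {"leaven", "viscosity"}),
]

def rate_difficulty(text: str, bands=BANDS, long_sentence=15) -> int:
    """Toy content-difficulty grade 1..5: the hardest vocabulary band hit
    sets a base level, and a long average sentence length adds one level."""
    sentences = [s for s in text.split(".") if s.strip()]
    words = {w.strip(".,!?").lower() for s in sentences for w in s.split()}
    base = 0
    for i, (_, vocab) in enumerate(bands):
        if words & vocab:
            base = i
    avg_len = sum(len(s.split()) for s in sentences) / len(sentences)
    bonus = 1 if avg_len > long_sentence else 0
    return min(base + bonus, len(bands)) + 1

print(rate_difficulty("Make bread."))             # 1
print(rate_difficulty("You ferment the bread."))  # 3
```

A production grader would also weigh subject and grammar difficulty, which this sketch omits.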
Next, in step 320, a range of user-selectable candidate hearing content may be determined based on the difficulty level corresponding to the user's hearing level, such that the user selects the target hearing content within that range. For example, in some application scenarios where the user's hearing level is at the high school level, the range of candidate hearing content selectable by the user may be determined to be the high school level range, such that the user selects the target hearing content for which hearing training is desired from among the candidate hearing contents within the high school level range, without being recommended candidate hearing content of a higher difficulty level (e.g., IELTS) or a lower difficulty level (e.g., primary school level).
In some embodiments, the hearing level of the user may be set by the user autonomously, or may be obtained by the machine through a comprehensive judgment based on the user's hearing training history. For example, in other embodiments, the hearing training method 300 may further include: determining whether to output candidate hearing content of the next content difficulty level according to the user's training results at the current content difficulty level. For example, in other application scenarios, where the user's training results with target hearing content in level one reach a preset criterion, it may be determined that the user's hearing level has exceeded level one, and candidate hearing content within level two may be presented for selection by the user. In some embodiments, the training results may be determined by real-time checks in the in-listening link and/or tests in the post-listening link. For example, in another embodiment, the training results may include the hearing training effect in the in-listening link and/or the hearing test results of the post-listening link.
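The level-up decision just described can be sketched in a few lines; the pass mark of 0.8 and the five-level ceiling are assumed values, not specified by the patent.

```python
def next_difficulty_level(current_level: int, training_score: float,
                          pass_mark: float = 0.8, max_level: int = 5) -> int:
    """Unlock the next content difficulty level only when the training
    result at the current level reaches the preset criterion."""
    if training_score >= pass_mark and current_level < max_level:
        return current_level + 1
    return current_level

print(next_difficulty_level(1, 0.9))  # level one passed -> offer level two
```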
In other embodiments, the hearing training method 300 may further comprise: according to the training result of the user in the hearing training of the current target hearing content, determining whether to enter a pre-hearing link when the hearing training is performed on the next target hearing content in the difficulty level of the current target hearing content. According to the setting, the training mode from easy to difficult can be provided in the same content difficulty level, so that the hearing ability of the user in the difficulty level is improved gradually, and the English hearing learning adapting to various difficulties from shallow to deep is facilitated for the user.
Flow may then proceed to step 330 where, based on the target hearing content, guide-reading content related to the target hearing content may be generated. Further, in step 340, the guide-reading content may be output in response to entering the pre-listening link. Next, in step 350, the target hearing content may be output in response to entering the in-listening link. It will be appreciated that steps 330, 340 and 350 have been described in detail in connection with steps 210, 220 and 230 in fig. 2, and will not be repeated here.
While the hearing training method including content difficulty grading according to the embodiment of the present invention has been described above with reference to fig. 3, it may be appreciated that, according to the hearing training method of this embodiment, the content difficulty of the target hearing content and the hearing training manner may be combined to better match different user types and different training requirements, so that users of different ability levels can perform stepped training with hearing learning content suitable for themselves.
It will also be appreciated that the above description is exemplary and not limiting; for example, the determination of content difficulty need not be based solely on the subject, vocabulary, grammar, and/or sentence length described above, but may also be based on the original speech rate of the target hearing audio, the amount of information conveyed, etc. Also for example, step 350 need not be limited to outputting only the target hearing content; it may also output, for example, keywords in order to check the user's real-time training effect in the in-listening link, as will be exemplarily described below with reference to fig. 4.
Fig. 4 schematically shows a flow chart of a hearing training method for presenting keyword text in the in-listening link according to an embodiment of the invention. It should be appreciated that the method 400 illustrated in fig. 4 may be a specific representation of step 230 illustrated in fig. 2 and, therefore, the description above in connection with step 230 of fig. 2 may also apply to the description of the method 400 that follows.
As shown in fig. 4, the hearing training method 400 may include: in step 410, keywords in the target hearing content may be determined. In some embodiments, determining keywords in the target hearing content may include extracting keyword text in the target hearing text using a keyword extraction technique based on the target hearing text in the target hearing content. In still other embodiments, determining keywords in the target hearing content may include: one or more keywords in the target hearing content are determined.
In some embodiments, step 410 may include step 411 (shown in a dashed box), step 412 (shown in a dashed box), and step 413 (shown in a dashed box), wherein in step 411, candidate keywords in the target hearing content may be extracted. Specifically, candidate keywords in the target hearing content can be extracted using a keyword extraction technique; that is, weakly connecting words, function words, and the like may be filtered out, so that the parts affecting the core sentence meaning are retained. In some embodiments, one or more candidate keywords may be extracted from each target hearing content. In other embodiments, one or more candidate keywords may be extracted from each sentence in the target hearing content.
Next, in step 412, the candidate keywords may be looked up in a preset vocabulary matching the user's hearing level. The user's hearing level has been described above in connection with fig. 3 and will not be repeated here. The preset vocabulary matching the user's hearing level may comprise the corresponding vocabulary specified, domestically or internationally, to be mastered at each hearing level. For example, in some embodiments, if the user's hearing level is the high school level, the matching preset vocabulary may be the high school vocabulary. In other embodiments, if the user's hearing level is the IELTS level, the matching preset vocabulary may be the IELTS vocabulary.
Then, as shown in fig. 4, the flow may proceed to step 413, and the keyword may be determined according to the search result in the preset vocabulary. In some embodiments, determining the keywords from the search results may include: determining one or more candidate keywords as keywords in the target hearing content in response to the one or more candidate keywords belonging to the vocabulary in the preset vocabulary; or in response to the one or more candidate keywords not belonging to a vocabulary in the preset vocabulary, determining that the one or more candidate keywords are not keywords in the target hearing content.
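Steps 412-413 amount to keeping only the candidates found in the preset vocabulary. A minimal sketch, with a hypothetical high-school word list standing in for a real preset vocabulary:

```python
def determine_keywords(candidate_keywords, preset_vocabulary) -> list:
    """Steps 412-413: a candidate becomes a keyword only if it is found in
    the preset vocabulary matching the user's hearing level (an ordered
    intersection, preserving the candidates' order of appearance)."""
    vocab = {w.lower() for w in preset_vocabulary}
    return [w for w in candidate_keywords if w.lower() in vocab]

hs_vocab = {"grocery", "photograph", "line"}   # hypothetical preset vocabulary
keywords = determine_keywords(["Twitter", "photograph", "grocery"], hs_vocab)
print(keywords)
```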
In other embodiments, the process of determining keywords may be understood as a process of determining the intersection of the candidate keywords with the words in the preset vocabulary. In still other embodiments, step 410 may be performed before entering the in-listening link (e.g., step 230 shown in fig. 2). In still other embodiments, step 410 may be a specific representation of step 210 described above in connection with fig. 2; i.e., when the guide-reading content includes keywords of the target hearing content, the guide-reading content may be generated based on the method of determining the keywords in step 410.
As further shown in fig. 4, after the keywords are determined in step 410, flow may proceed to step 420, where, based on the location of each keyword in the target hearing content, the keyword text corresponding to a keyword audio may be presented before that keyword audio in the target hearing audio is output in the in-listening link. The keyword audio can be obtained by processing the keyword text using a text-to-speech technique, the keyword text corresponding one-to-one to the content of the keyword audio.
In one embodiment, step 420 may further include: determining the location of the keywords in the target hearing content. Specifically, this can be achieved by marking time points and sentence breaks in the target hearing audio of the target hearing content, so that each keyword text corresponds to the time point at which the keyword audio occurs in the audio track of the target hearing audio. Then, while the target hearing audio is played in the in-listening link, the corresponding keyword text is presented to the user before the keyword audio, or the sentence audio containing it, is played.
In another embodiment of the present invention, step 410 may further include: determining a plurality of keywords in the target hearing content; and step 420 may further comprise: based on the positional relationship of the plurality of keywords in the target hearing content, presenting the plurality of keywords simultaneously before the one or more sentence audios including them are sequentially output. That is, when the sentence audios in the target hearing content are played in sequence, the keywords in one sentence audio or in several adjacent sentence audios can form a batch, and before the sentence audios containing that batch's keywords are output, the keywords of the batch are presented simultaneously (i.e., at once). In one embodiment, a batch may include 4 or 5 keywords. Of course, it should be understood that, depending on the length of the target hearing content and the total number of keywords determined, the number of such batches need not be limited to one; multiple batches may occur as required, with the keywords of each batch presented in a similar manner, which will not be repeated here.
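The batching rule above (4-5 keywords shown at once, just before the first sentence containing one of them plays) can be sketched as follows; the pair-based data layout and batch size are illustrative assumptions.

```python
def batch_keywords(keyword_positions, batch_size=4):
    """Group keywords, given as (keyword, sentence_index) pairs in playback
    order, into display batches; each batch is shown all at once, before the
    first sentence that contains one of its keywords is played."""
    batches = []
    for i in range(0, len(keyword_positions), batch_size):
        chunk = keyword_positions[i:i + batch_size]
        show_before_sentence = chunk[0][1]   # sentence that triggers the display
        batches.append((show_before_sentence, [kw for kw, _ in chunk]))
    return batches

positions = [("yeast", 0), ("flour", 0), ("knead", 1), ("oven", 2), ("crust", 3)]
batches = batch_keywords(positions, batch_size=4)
print(batches)
```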
Alternatively or additionally, in step 430 (shown in a dashed box), it may be determined whether a user's selection operation on the corresponding keyword text is received within a preset time period after the keyword audio is output, in order to determine the user's hearing training effect. This step can train, in the in-listening link, the user's ability to capture keyword audio in the target hearing audio and to react quickly, i.e., whether the user can accurately hear the keyword audio in the target hearing audio. The preset time period can be set as needed. The selection operation on the corresponding keyword text may be understood as a correct selection operation on the keyword text that correctly corresponds to the output keyword audio.
In some embodiments, step 430 may further comprise: determining that a correct selection operation of the user has not been received in response to at least one of: a selection operation received before the keyword audio is output; a selection operation received after the preset time period has elapsed; or an erroneous selection operation. In another embodiment, when a plurality of keywords appear in the same batch, the user is required to select the corresponding keyword texts in the order in which the keyword audios are played; otherwise, it is still determined that a correct selection operation has not been received. In yet another embodiment, step 430 may further include: determining the user's hearing training effect in the in-listening link according to the number of correct selection operations received.
For example, suppose the preset duration is set to 3 seconds and 4 or 5 keyword texts appear at a time. The user then needs to perform a selection operation (for example, a click) within 3 seconds after the keyword audio corresponding to a keyword text is played. If the user clicks too late, clicks the wrong text, clicks a keyword whose audio has not yet been played, or clicks the 4 or 5 keyword texts out of playing order, it is determined that a correct selection operation has not been received. Thus, only a click on the corresponding keyword text within 3 seconds after its keyword audio is played can be determined to be a correct selection operation.
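The per-keyword part of this judgment (right word, after the audio ends, within the window) reduces to a single predicate; timestamps are in seconds, and the function name is invented for illustration (the ordering rule across a batch would be checked separately).

```python
def is_correct_selection(keyword, audio_end_time, clicked_word, click_time,
                         window=3.0):
    """Judge one selection under the rules above: the right keyword text
    must be clicked after its audio finishes and within the preset window
    (3 seconds in the example); anything else counts as incorrect."""
    if clicked_word != keyword:
        return False                       # wrong word clicked
    if click_time < audio_end_time:
        return False                       # clicked before the audio finished
    return click_time - audio_end_time <= window
```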
While the hearing training method for presenting keyword text in the in-listening link according to the embodiment of the present invention has been described above with reference to fig. 4, it will be understood by those skilled in the art that the above description is exemplary and not limiting. For example, presenting keywords in the in-listening link in step 420 need not be limited to presenting only keywords; in one embodiment, keywords and distractor words may be presented together to check whether the user can accurately select the keyword text among options that include distractors, and with this setting the training difficulty can be increased to meet the needs of users seeking to further improve their hearing level. A distractor word may be a word not in the target hearing content, and may include, for example, a synonym or antonym of a keyword, and/or a word with similar pronunciation but different meaning, etc. For example, in still other embodiments, the hearing training effect in the in-listening link may be comprehensively evaluated based on at least one of the content difficulty of the target hearing content, the output speed, the number of correct selection operations, and the like. In still other embodiments, the method 400 may be performed independently as desired, without entering the pre-listening link. Further, the hearing training method according to an embodiment of the present invention need not be limited to the pre-listening and in-listening links only; in yet another embodiment it may also include a post-listening link, which will be exemplarily described below with reference to fig. 5.
Fig. 5 schematically shows a flow chart of a hearing training method including a post-listening link according to an embodiment of the invention. As shown in fig. 5, the hearing training method 500 may include: in step 510, guide-reading content related to the target hearing content may be generated based on the target hearing content. Next, in step 520, the guide-reading content may be output in response to entering the pre-listening link. Then, in step 530, the target hearing content may be output in response to entering the in-listening link. Steps 510, 520 and 530 are described in detail above in connection with figs. 2-4 and are not repeated here.
As further shown in fig. 5, flow may proceed to step 540 where, in response to entering the post-listening link, test questions related to the target hearing content may be output, wherein the test questions may include at least one of: the target question in the guide-reading content; and/or detail questions about the target hearing content. In some application scenarios, when the user has received a target question in the guide-reading content during the pre-listening link, the user may listen to the target hearing audio with that question in mind during the in-listening link, and then in the post-listening link the machine may output the target question for the user to verify the hearing learning result. In other embodiments, questions about details in the target hearing content may be provided to verify the user's hearing training outcome.
In one embodiment of the present invention, step 540 may further include: outputting answer options related to the test questions to guide the user in making a selection. Multiple answer options may be provided for each test question. In another embodiment of the present invention, the test questions and/or the answer options related to them may be output in at least one of the following forms: a picture; audio; and/or text in one or more languages.
For example, in some application scenarios, for primary school users or users with weak hearing ability, test questions and/or answer options in picture form may be set, or test questions described in the native language and answer options described in the native language or bilingually may be output. In other application scenarios, for users with strong hearing ability, assuming the language of the target hearing content is English, test questions described in English and answer options described in English may be output, or the test questions and/or answer options may be output in the form of English audio.
In some embodiments, a portion of the audio in the target hearing audio may also be output before the test question is output, and then test questions and answer options associated with the portion of the audio are output. In other embodiments, answer choices may include words, phrases, sentences, and the like.
In another embodiment of the present invention, the hearing training method 500 may further include: presenting to the user one or more of the following pieces of information: the hearing test results; the correct answers to incorrectly answered questions; and the corresponding locations of the errors in the target hearing content. In some embodiments, based on the user's selections among the answer options described above, the user's test score (or answer accuracy, or hearing index) may be computed, and the machine may present the user with hearing test results including, for example, the test score and/or the details and explanations of all questions. In other embodiments, the user's training results may be comprehensively evaluated by combining one or more of the content difficulty of the target hearing content (including subject difficulty, vocabulary difficulty, grammar difficulty, and/or sentence length), the audio playing speed, the hearing test results, and the like, so as to assess the user's hearing ability. In still other embodiments, the user may be presented with the incorrectly answered questions and their correct answers, the corresponding locations of those correct answers in the target hearing content, etc.
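Scoring the post-listening test and collecting the feedback items just described (score, correct answer, location in the content) might look like this sketch; the question fields, option values, and location format are illustrative assumptions.

```python
def grade_hearing_test(questions, user_answers):
    """Score the post-listening test and, for each wrong answer, record the
    correct option and where in the target hearing content it appears."""
    wrong = []
    n_correct = 0
    for q, picked in zip(questions, user_answers):
        if picked == q["correct"]:
            n_correct += 1
        else:
            wrong.append({"question": q["text"], "picked": picked,
                          "correct": q["correct"], "location": q["location"]})
    return n_correct / len(questions), wrong

questions = [  # hypothetical questions for the bread example
    {"text": "What makes bread rise?", "correct": "yeast", "location": "sentence 3"},
    {"text": "How long is it baked?",  "correct": "30 min", "location": "sentence 7"},
]
score, feedback = grade_hearing_test(questions, ["yeast", "an hour"])
```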
While an exemplary hearing training method including a post-listening link according to an embodiment of the present invention has been described above with reference to fig. 5, it may be understood that, from the standpoint of examination goals, the user's hearing training results can be comprehensively examined in terms of both gist comprehension and detail comprehension by setting target questions and detail questions in the post-listening link. Targeted test questions on the target hearing content in the post-listening link can help users verify their learning results and further reinforce the hearing training effect.
It should be further appreciated that the hearing training method 500 of the embodiment of the present invention need not be limited to executing the pre-listening, in-listening and post-listening links in order; only one or two links may be executed, or one of the links may be executed repeatedly, as required, so as to give the user targeted reinforcement training. For example, in some application scenarios, step 530 may be performed multiple times in succession so that the user practices intensive listening repeatedly. In other embodiments, step 540 may be performed multiple times in succession after the user has understood the target hearing content, so that the user repeatedly works through questions to train hearing ability and strengthen the post-listening effect.
Through the above description of the technical solution and the embodiments of the present invention with reference to the drawings, it can be understood by those skilled in the art that by generating guide-reading content and outputting it in the pre-listening link, the user can form a full preliminary understanding of what he or she is about to hear, and can know clearly what information to obtain or what question to answer by the end of listening, thereby implementing a progressive hearing training mode and helping to improve the user's comprehension of the target hearing audio in the in-listening link as well as hearing training efficiency.
In some embodiments, by taking the content difficulty of the target hearing content into account, selectable training modes can be provided for users of different levels. In other embodiments, presenting keywords in the in-listening link adds a sense of challenge and interest to the training process. Compared with the traditional, tedious mode of simply answering questions, selecting the correct keywords in the in-listening link is more conducive to real-time detection of the hearing effect, trains the user's quick-response ability, and improves the user's interest in and experience of hearing training. In still other embodiments, through the multiple forms of test questions and/or answer options in the post-listening link, efficient assessment of hearing training results can be achieved automatically, enabling the user to obtain positive feedback on the training process at each step of progress.
Furthermore, although the operations of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Use of the verbs "comprise" and "include" and their conjugations in this application does not exclude the presence of elements or steps other than those stated. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. It should also be understood that the terms "first," "second," "third," "fourth," etc. in the claims, specification, and drawings of the present invention are used for distinguishing between different objects and not for describing a particular sequential order.
While the spirit and principles of the present invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor does the division of features among different aspects, made merely for convenience of description, imply that those features cannot be used to advantage in combination. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (15)

1. A machine-implemented hearing training method, comprising:
extracting semantics of target hearing content to determine subject information of the target hearing content;
generating, based at least on the subject information, read-through content related to the target hearing content;
outputting the read-through content in response to entering a pre-hearing link; and
outputting the target hearing content in response to entering an in-listen link, wherein the target hearing content comprises at least target hearing audio;
the hearing training method further comprises:
determining keywords in the target hearing content;
time-stamping and sentence-segmenting the target hearing audio, and mapping each keyword text to the time point at which the corresponding keyword audio occurs in the audio track of the target hearing audio, so as to determine the position of the keyword in the target hearing content; and
based on the position of the keyword in the target hearing content, presenting, in the in-listen link, the keyword text corresponding to the keyword audio before the keyword audio is output in the target hearing audio.
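Outside the claim language, the keyword-to-time-point mapping described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes word-level timestamps for the audio track are already available (e.g. from forced alignment), and the `lead_time` parameter, which presents the keyword text slightly before its audio plays, is a hypothetical design choice.

```python
from dataclasses import dataclass

@dataclass
class KeywordCue:
    text: str     # keyword text to present on screen
    start: float  # time (seconds) at which to present it

def align_keywords(keywords, word_timestamps, lead_time=1.0):
    """Map each keyword to the time point where its audio occurs in the
    track, and schedule its text to appear lead_time seconds earlier."""
    cues = []
    for word, start, _end in word_timestamps:
        if word.lower() in keywords:
            cues.append(KeywordCue(text=word, start=max(0.0, start - lead_time)))
    return sorted(cues, key=lambda c: c.start)
```

A playback loop could then pop cues off this list as the audio clock advances, showing each keyword text just before its audio is output.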
2. The hearing training method of claim 1, comprising, prior to generating the read-through content:
grading the content difficulty of each of a plurality of candidate hearing contents; and
determining a range of candidate hearing contents selectable by the user based on the difficulty level corresponding to the user's hearing level, so that the user can select the target hearing content within the range.
3. The hearing training method of claim 2, wherein the content difficulty is determined based on at least one of:
a subject of the candidate hearing content;
a vocabulary of the candidate hearing content;
a grammar of the candidate hearing content; and
a sentence length of the candidate hearing content.
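The difficulty factors listed in claim 3 could be combined in many ways; the claim does not fix a formula. The sketch below is one hypothetical heuristic, assuming a per-subject weight, a set of words the target level is expected to know, and arbitrary example thresholds — none of these constants come from the patent.

```python
def difficulty_level(text, subject_weight, known_words, levels=(5.0, 10.0, 15.0)):
    """Heuristic difficulty grade from vocabulary coverage and sentence
    length, plus a subject weight; returns a level 1..len(levels)+1."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    unknown_ratio = sum(1 for w in words if w not in known_words) / max(len(words), 1)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    score = subject_weight + 10 * unknown_ratio + avg_sentence_len
    for level, threshold in enumerate(levels, start=1):
        if score <= threshold:
            return level
    return len(levels) + 1
```

A grammar term (e.g. frequency of subordinate clauses) could be added to the score in the same additive fashion.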
4. The hearing training method of claim 1, wherein the read-through content is further generated based on at least one of:
keywords in the target hearing content; and/or
a target question associated with the target hearing content.
5. The hearing training method according to any one of claims 1 to 4, wherein,
the read-through content comprises one or more language expressions;
outputting the read-through content includes outputting in a visual and/or audible manner.
6. The hearing training method of any of claims 1-3, wherein the target hearing content further comprises target hearing text corresponding to the target hearing audio, and outputting the target hearing content comprises outputting at least one of:
the target hearing audio;
the target hearing text;
a translation of the target hearing text;
a translation of a portion of text selected by the user in the target hearing text.
7. The hearing training method of claim 6, wherein outputting the target hearing content comprises performing at least one of the following output settings:
an output speed;
a number of loops for each sentence audio in the target hearing audio;
pausing after each sentence audio is cyclically output; and
automatically outputting the next sentence audio after each sentence audio is cyclically output.
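As an illustration outside the claims, the output settings of claim 7 amount to a small playback configuration expanded into a per-sentence schedule. The structure below is a hypothetical sketch; the field names and the flat `(action, sentence, speed)` schedule are assumptions, not the patented design.

```python
from dataclasses import dataclass

@dataclass
class PlaybackSettings:
    speed: float = 1.0            # output speed multiplier
    loops_per_sentence: int = 1   # loop count for each sentence audio
    pause_after_loop: bool = True # pause, or auto-advance to the next sentence

def playback_plan(sentence_ids, settings):
    """Expand the settings into a flat play/pause/advance schedule."""
    plan = []
    for sid in sentence_ids:
        plan.extend([("play", sid, settings.speed)] * settings.loops_per_sentence)
        plan.append(("pause", sid, 0.0) if settings.pause_after_loop
                    else ("advance", sid, 0.0))
    return plan
```

Choosing `pause_after_loop=True` yields the "pause after each sentence" setting; `False` yields automatic advance to the next sentence audio.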
8. The hearing training method of claim 1, wherein the determining keywords comprises:
extracting candidate keywords in the target hearing content;
searching the candidate keywords in a preset word list matched with the hearing level of the user; and
determining the keywords according to the search result in the preset word list.
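A minimal sketch of claim 8's candidate-extraction-plus-word-list lookup is given below, outside the claim language. The frequency-based candidate extraction, the stopword list, and the per-level word lists are all illustrative assumptions; any real keyword extractor (e.g. TF-IDF or a tagger) could replace `extract_candidates`.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def extract_candidates(text, top_n=10):
    """Extract candidate keywords as the most frequent non-stopword tokens."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [w for w, _ in counts.most_common(top_n)]

def select_keywords(text, level_wordlists, user_level):
    """Keep only candidates found in the preset word list that matches
    the user's hearing level."""
    wordlist = level_wordlists.get(user_level, set())
    return [w for w in extract_candidates(text) if w in wordlist]
```

The lookup step ensures the keywords presented in the in-listen link stay within vocabulary the user's level is expected to recognize.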
9. The hearing training method of claim 1, further comprising:
determining a plurality of keywords in the target hearing content; and
based on the positional relationship of the plurality of keywords in the target hearing content, presenting the plurality of keywords simultaneously before one or more sentence audios comprising the plurality of keywords are sequentially output.
10. The hearing training method of any one of claims 1-9, further comprising:
determining whether a selection operation by the user on the keyword text is received within a preset time after the keyword audio is output, so as to determine the user's hearing training effect.
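The timed check in claim 10 reduces to comparing the user's tap time against the keyword's output time. The sketch below is illustrative only; the `window` length and the hit/miss labels are assumptions, not values from the patent.

```python
def training_effect(cue_time, selection_time, window=3.0):
    """Judge whether the user selected the keyword text within the preset
    time window after the keyword audio was output."""
    if selection_time is None:
        return "miss"  # the user never tapped the keyword text
    delay = selection_time - cue_time
    return "hit" if 0.0 <= delay <= window else "miss"
```

Aggregating hit/miss outcomes across keywords gives a simple real-time measure of the in-listen training effect.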
11. The hearing training method of claim 1, further comprising:
in response to entering the post-hearing segment, outputting a test question related to the target hearing content, wherein the test question comprises at least one of:
a target question in the read-through content; and/or
details in the target hearing content.
12. The hearing training method of claim 11, wherein the test question and/or answer options related to the test question are output in at least one of the following forms:
a picture;
audio; and/or
text in one or more languages.
13. The hearing training method of claim 11 or 12, further comprising presenting to the user one or more of the following information:
a hearing test result;
the correct answer for an error; and
the location in the target hearing content corresponding to the error.
14. A device for performing hearing training, comprising:
a processor configured to execute program instructions; and
a memory configured to store the program instructions, wherein the program instructions, when executed by the processor, cause the device to perform the hearing training method according to any one of claims 1-13.
15. A computer readable storage medium storing program instructions which, when loaded and executed by a processor, cause the processor to perform the method of any of claims 1-13.
CN202111481160.9A 2021-12-06 2021-12-06 Machine-implemented hearing training method, apparatus, and readable storage medium Active CN114170856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111481160.9A CN114170856B (en) 2021-12-06 2021-12-06 Machine-implemented hearing training method, apparatus, and readable storage medium

Publications (2)

Publication Number Publication Date
CN114170856A CN114170856A (en) 2022-03-11
CN114170856B true CN114170856B (en) 2024-03-12

Family

ID=80483517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111481160.9A Active CN114170856B (en) 2021-12-06 2021-12-06 Machine-implemented hearing training method, apparatus, and readable storage medium

Country Status (1)

Country Link
CN (1) CN114170856B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI819951B (en) * 2023-01-10 2023-10-21 弘光科技大學 language training system

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103365849A (en) * 2012-03-27 2013-10-23 富士通株式会社 Keyword search method and equipment
CN103942990A (en) * 2013-01-23 2014-07-23 郭毓斌 Language learning device
CN104021805A (en) * 2014-06-06 2014-09-03 杨红岩 Implementation method and device of language listening training
CN108133632A (en) * 2017-12-20 2018-06-08 刘昳旻 The training method and system of English Listening Comprehension
CN109189535A (en) * 2018-08-30 2019-01-11 北京葡萄智学科技有限公司 Teaching method and device
CN109887364A (en) * 2019-01-17 2019-06-14 深圳市柯达科电子科技有限公司 Assist the method and readable storage medium storing program for executing of foreign language learning
CN110853422A (en) * 2018-08-01 2020-02-28 世学(深圳)科技有限公司 Immersive language learning system and learning method thereof
CN112100335A (en) * 2020-09-25 2020-12-18 北京百度网讯科技有限公司 Question generation method, model training method, device, equipment and storage medium
CN112951207A (en) * 2021-02-10 2021-06-11 网易有道信息技术(北京)有限公司 Spoken language evaluation method and device and related product
WO2021134524A1 (en) * 2019-12-31 2021-07-08 深圳市欢太科技有限公司 Data processing method, apparatus, electronic device, and storage medium
CN113611310A (en) * 2021-07-28 2021-11-05 网易有道信息技术(北京)有限公司 Recitation test method, device and related product
CN113611172A (en) * 2021-08-18 2021-11-05 江苏熙枫教育科技有限公司 English listening comprehension training method based on deep learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7818164B2 (en) * 2006-08-21 2010-10-19 K12 Inc. Method and system for teaching a foreign language
BR122017002789B1 (en) * 2013-02-15 2021-05-18 Voxy, Inc systems and methods for language learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王兰成. Knowledge Integration Methods and Techniques – Knowledge Organization and Knowledge Retrieval. National Defense Industry Press, 2010 (1st edition), pp. 37-38. *

Also Published As

Publication number Publication date
CN114170856A (en) 2022-03-11

Similar Documents

Publication Publication Date Title
US20180061256A1 (en) Automated digital media content extraction for digital lesson generation
CN108133632B (en) The training method and system of English Listening Comprehension
CN110797010A (en) Question-answer scoring method, device, equipment and storage medium based on artificial intelligence
CN111462553B (en) Language learning method and system based on video dubbing and sound correction training
US7160112B2 (en) System and method for language education using meaning unit and relational question
CN104115221A (en) Audio human interactive proof based on text-to-speech and semantics
KR20160008949A (en) Apparatus and method for foreign language learning based on spoken dialogue
Gürbüz Understanding fluency and disfluency in non-native speakers' conversational English
CN111459453A (en) Reading assisting method and device, storage medium and electronic equipment
US8019591B2 (en) Rapid automatic user training with simulated bilingual user actions and responses in speech-to-speech translation
CN114170856B (en) Machine-implemented hearing training method, apparatus, and readable storage medium
O'Mahony et al. Combining conversational speech with read speech to improve prosody in text-to-speech synthesis
JP6656529B2 (en) Foreign language conversation training system
Cheng Unfamiliar accented English negatively affects EFL listening comprehension: It helps to be a more able accent mimic
CN101739852B (en) Speech recognition-based method and device for realizing automatic oral interpretation training
Ferdiansyah et al. Effect of captioning lecture videos for learning in foreign language
CN116403583A (en) Voice data processing method and device, nonvolatile storage medium and vehicle
Proença et al. The LetsRead corpus of Portuguese children reading aloud for performance evaluation
CN114255759A (en) Method, apparatus and readable storage medium for spoken language training using machine
KR20190070682A (en) System and method for constructing and providing lecture contents
KR102098377B1 (en) Method for providing foreign language education service learning grammar using puzzle game
Shukla Development of a human-AI teaming based mobile language learning solution for dual language learners in early and special educations
Shivakumar et al. AI-ENABLED LANGUAGE SPEAKING COACHING FOR DUAL LANGUAGE LEARNERS.
CN111475708A (en) Push method, medium, device and computing equipment for follow-up reading content
Kasrani et al. A Mobile Cloud Computing Based Independent Language Learning System with Automatic Intelligibility Assessment and Instant Feedback.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant