CN111639218A - Interactive method for spoken language training and terminal equipment - Google Patents

Interactive method for spoken language training and terminal equipment

Info

Publication number
CN111639218A
Authority
CN
China
Prior art keywords
spoken
spoken language
training
input
virtual image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010396479.0A
Other languages
Chinese (zh)
Inventor
武志华 (Wu Zhihua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202010396479.0A priority Critical patent/CN111639218A/en
Publication of CN111639218A publication Critical patent/CN111639218A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 — Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/63 — Querying
    • G06F 16/632 — Query formulation
    • G06F 16/634 — Query by example, e.g. query by humming
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 — Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 — Retrieval characterised by using metadata automatically derived from the content
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 — Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 — Services
    • G06Q 50/20 — Education
    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00 — Teaching not covered by other main groups of this subclass
    • G09B 19/04 — Speaking
    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 — Electrically-operated educational appliances
    • G09B 5/06 — Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065 — Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Abstract

The embodiment of the invention discloses an interactive method for spoken language training and a terminal device, applied in the technical field of terminal devices, which can solve the problems that operation cost is increased, the flexibility and interest of real-life conversation scenarios are lacking, and users' demands for speed and convenience in smart devices are not met. The method includes the following steps: displaying at least one avatar, each of the at least one avatar representing a different spoken dialog character; when a first input by a user for a first avatar is detected, judging whether the first input matches a preset input, the first avatar being one of the at least one avatar; and if the first input matches the preset input, determining to perform the spoken training of the first spoken dialog character corresponding to the first avatar. The method is applied to scenarios in which spoken language training is performed on a terminal device.

Description

Interactive method for spoken language training and terminal equipment
Technical Field
Embodiments of the invention relate to the technical field of terminal devices, and in particular to an interactive method for spoken language training and a terminal device.
Background
Practicing textbook-based English dialogues on a home tutoring machine requires selecting one or more roles in advance, and the practice can only proceed in the order of the text content. If the user wants to play another role, the user must end the exercise, reselect the role, and then restart the dialogue exercise. This increases operation cost, lacks the flexibility and interest of real-life conversation scenarios, and does not meet users' demands for speed and convenience in smart devices.
Disclosure of Invention
The embodiment of the invention provides an interactive method for spoken language training and a terminal device, which are used to solve the prior-art problems that operation cost is increased, the flexibility and interest of real-life conversation scenarios are lacking, and users' demands for speed and convenience in smart devices are not met. To solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an interactive method for spoken language training is provided, which is applied to a terminal device, and the method includes: displaying at least one avatar, each of the at least one avatar representing a different spoken dialog character;
under the condition that a first input of a user for a first virtual image is detected, judging whether the first input is matched with a preset input, wherein the first virtual image is one of the at least one virtual image;
and if the first input is matched with the preset input, determining to execute the spoken language training of the first spoken language dialogue role corresponding to the first virtual image.
As an alternative implementation, in the first aspect of the embodiment of the present invention, the at least one avatar includes a second avatar; in a case where a first input of a user to a first avatar is detected, before determining whether the first input matches a preset input, the method further includes:
reading first voice information of spoken training of a user aiming at a second spoken dialog role corresponding to the second virtual image;
judging whether the user completes the spoken language training of the second spoken language dialogue role or not according to the first voice information;
if the spoken training of the second spoken dialog character is completed, detecting whether the user has input for the at least one avatar.
As an alternative implementation manner, in the first aspect of the embodiment of the present invention, after determining to perform the spoken training of the first spoken language dialogue character corresponding to the first avatar, the method further includes:
reading second voice information of the user for spoken training of a first spoken dialog role corresponding to the first virtual image;
analyzing the content and pronunciation of the second voice information, and generating an evaluation result aiming at the second voice information according to the content and pronunciation of the second voice information;
judging whether the spoken language training aiming at the first spoken language dialogue role is passed or not according to the evaluation result;
if the spoken language training for the first spoken language dialog character is not passed, the spoken language training of the first spoken language dialog character is repeatedly performed.
As an alternative implementation manner, in the first aspect of the embodiments of the present invention, the evaluation result includes an evaluation score; judging whether the spoken training for the first spoken dialog character passes according to the evaluation result includes the following steps:
judging whether the evaluation score is greater than or equal to a preset score or not;
if the evaluation score is greater than or equal to a preset score, determining that the spoken language training for the first spoken language dialog role is passed;
if the evaluation score is less than the preset score, determining that the spoken training for the first spoken dialog character fails.
As an alternative implementation manner, in the first aspect of the embodiment of the present invention, the evaluation result further includes: improvement suggestion information corresponding to the evaluation score, the improvement suggestion information including: at least one of an improved suggestion for content and an improved suggestion for pronunciation;
if the spoken language training for the first spoken language dialog character is not passed, then repeatedly performing the spoken language training for the first spoken language dialog character includes:
if the evaluation score is smaller than the preset score, determining that the spoken training aiming at the first spoken dialog role is not passed, and prompting a user whether to check the improvement suggestion information;
if a second input that the user determines to view the improvement suggestion information is detected, displaying the improvement suggestion information;
and if the third input of the improved suggestion information is detected to be closed by the user, closing the improved suggestion information, and repeatedly executing the spoken language training of the first spoken language dialogue role.
In a second aspect, a terminal device is provided, which includes: the display module is used for displaying at least one virtual image, and each virtual image in the at least one virtual image represents different spoken language dialogue roles;
the device comprises a judging module, a judging module and a judging module, wherein the judging module is used for judging whether a first input of a user for a first virtual image is matched with a preset input or not under the condition that the first input of the user for the first virtual image is detected, and the first virtual image is one of the at least one virtual image;
and the determining module is used for determining to execute the spoken training of the first spoken dialog role corresponding to the first virtual image if the first input is matched with the preset input.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the at least one avatar includes a second avatar, and the terminal device further includes:
the reading module is used for reading first voice information of spoken training of a user aiming at a second spoken language dialogue role corresponding to the second virtual image;
the judging module is also used for judging whether the user completes the spoken language training of the second spoken language dialogue role according to the first voice information;
and the detection module is used for detecting whether the user has input aiming at the at least one virtual image or not if the spoken language training of the second spoken language dialogue role is completed.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the terminal device further includes:
the reading module is further used for reading second voice information of the user's spoken training for the first spoken dialog role corresponding to the first virtual image;
the analysis generation module is used for analyzing the content and pronunciation of the second voice information and generating an evaluation result aiming at the second voice information according to the content and pronunciation of the second voice information;
the judging module is also used for judging whether the spoken language training aiming at the first spoken language dialogue role passes or not according to the evaluation result;
and the execution module is used for repeatedly executing the spoken language training of the first spoken language dialogue role if the spoken language training aiming at the first spoken language dialogue role is not passed.
As an alternative implementation manner, in the second aspect of the embodiment of the present invention, the evaluation result includes an evaluation score;
the judging module comprises:
the judgment submodule is used for judging whether the evaluation score is greater than or equal to a preset score or not;
the first determining submodule is used for determining that the spoken language training aiming at the first spoken language dialogue role is passed if the evaluation score is greater than or equal to a preset score;
the first determining submodule is further used for determining that the spoken training for the first spoken dialog character fails if the evaluation score is less than the preset score.
As an alternative implementation manner, in the second aspect of the embodiment of the present invention, the evaluation result further includes: improvement suggestion information corresponding to the evaluation score, the improvement suggestion information including: at least one of an improved suggestion for content and an improved suggestion for pronunciation;
the execution module comprises:
a second determining sub-module, configured to determine that the spoken training for the first spoken dialog role is failed if the evaluation score is smaller than the preset score, and prompt a user whether to check the improvement suggestion information;
the detection submodule is used for displaying the improvement suggestion information if a second input that the user determines to view the improvement suggestion information is detected; and if the third input of the improved suggestion information is detected to be closed by the user, closing the improved suggestion information, and repeatedly executing the spoken language training of the first spoken language dialogue role.
In a third aspect, a terminal device is provided, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the interactive method for spoken language training in the first aspect of the embodiment of the present invention.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program, where the computer program makes a computer execute the interactive method for spoken language training in the first aspect of the embodiment of the present invention. The computer readable storage medium includes a ROM/RAM, a magnetic or optical disk, or the like.
In a fifth aspect, there is provided a computer program product for causing a computer to perform some or all of the steps of any one of the methods of the first aspect when the computer program product is run on the computer.
A sixth aspect provides an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the terminal device can display at least one avatar, each avatar representing a different spoken dialog character; when a first input by the user for a first avatar is detected, it judges whether the first input matches a preset input, the first avatar being one of the at least one avatar; and if the first input matches the preset input, it determines to perform the spoken training of the first spoken dialog character corresponding to the first avatar. With this scheme, the user's input on the displayed avatars selects the spoken training item of the first spoken dialog character corresponding to the first avatar, which is then executed, so different dialog characters can be switched at any time during spoken training. This saves operation cost, improves the flexibility and interest of a real conversation scenario, and meets users' demands for speed and convenience in smart devices.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a first schematic flowchart of an interactive method for spoken language training according to an embodiment of the present invention;
fig. 2 is a second schematic flowchart of an interactive method for spoken language training according to an embodiment of the present invention;
fig. 3 is a third schematic flowchart of an interactive method for spoken language training according to an embodiment of the present invention;
fig. 4 is a fourth schematic flowchart of an interactive method for spoken language training according to an embodiment of the present invention;
fig. 5 is a first schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 6 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first avatar and the second avatar, etc. are for distinguishing different avatars, not for describing a particular order of avatars.
The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, these words are intended to present related concepts in a concrete fashion.
The embodiment of the invention provides an interactive method for spoken language training and a terminal device. Through the user's input on at least one displayed avatar, the spoken training item of the first spoken dialog character corresponding to a first avatar can be selected and then executed, so different dialog characters can be switched at any time during spoken training. This saves operation cost, improves the flexibility and interest of a real conversation scenario, and meets users' demands for speed and convenience in smart devices.
The terminal device according to the embodiment of the present invention may be an electronic device such as a Mobile phone, a tablet Computer, a notebook Computer, a palmtop Computer, a vehicle-mounted terminal device, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA). The wearable device may be a smart watch, a smart bracelet, a watch phone, a smart foot ring, a smart earring, a smart necklace, a smart headset, or the like, and the embodiment of the present invention is not limited.
The execution subject of the interactive method for spoken language training provided by the embodiment of the present invention may be the terminal device, or may also be a functional module and/or a functional entity capable of implementing the interactive method for spoken language training in the terminal device, which may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited. The following takes a terminal device as an example to exemplarily explain the interactive method for spoken language training provided by the embodiment of the present invention.
The interactive method for spoken language training provided by the embodiment of the invention can be applied to scenes in which terminal equipment is adopted to perform spoken language training.
Example one
As shown in fig. 1, an embodiment of the present invention provides an interactive method for spoken language training, which may include the following steps:
101. the terminal device displays at least one avatar.
Wherein each of the at least one avatar represents a different spoken dialog character.
Optionally, in the embodiment of the present invention, the avatar may be a character avatar with different identities, a cartoon avatar, or a game character avatar.
For example, the character avatars of different identities may be a police officer, a doctor, a nurse, a teacher, or a student.
Optionally, in the embodiment of the present invention, the avatar displayed by the terminal device may be related to a scene of spoken language training.
In a possible implementation manner, before the foregoing 101, the interactive method for spoken language training provided in the embodiment of the present invention may further include:
01. and the terminal equipment triggers and starts the spoken language training function.
02. And the terminal equipment displays a scene selection interface for spoken language training.
In an alternative mode, options of a plurality of scenes can be displayed in the scene selection interface for the user to select.
Illustratively, the options for the plurality of scenarios may be: shopping at a supermarket, seeing a doctor at a hospital, and withdrawing money at a bank. Different spoken training content can be set to match these scenarios.
03. The terminal equipment receives selection input of a user to the target scene.
The target scene is one of the scenes.
04. The terminal device determines at least one avatar corresponding to the target scene in response to the selection input.
In the embodiment of the invention, a plurality of scenes can be provided for the user to select, and after the user selects the scene of the dialogue, at least one virtual image corresponding to the scene can be determined and displayed, so that the flexibility of selecting the scene during the spoken language training and the fit degree of the virtual image and the scene can be improved, and the scene can be close to the real dialogue scene in life.
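The scene-selection flow of steps 01-04 can be sketched as a simple lookup from a chosen scenario to the avatars to display. This is a hypothetical illustration: the scene names, avatar lists, and function names are assumptions, not specified by the patent.

```python
# Illustrative scene-to-avatar mapping for steps 01-04 (all names assumed).
SCENE_AVATARS = {
    "supermarket shopping": ["cashier", "customer"],
    "hospital visit": ["doctor", "nurse", "patient"],
    "bank withdrawal": ["teller", "customer"],
}

def avatars_for_scene(scene: str) -> list:
    """Step 04: determine the avatars corresponding to the selected target scene."""
    return SCENE_AVATARS.get(scene, [])

print(avatars_for_scene("hospital visit"))
```

Each avatar in the returned list would then be displayed in step 101 and represents one spoken dialog character in that scenario.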
102. The terminal equipment judges whether the first input is matched with the preset input or not under the condition that the first input of the user for the first virtual image is detected.
The preset input may be an input preset in the terminal device in advance.
In a first optional manner, the preset input may be a touch input of the first avatar by the user. For example, a single-click input, a double-click input, a long-press input, or a double-press input to the first avatar, etc.
In a second alternative, the preset input may be a gesture input by the user for the first avatar, for example, a gesture action of the user in front of the screen, which is performed toward the first avatar.
In this alternative, the hand-slap (high-five) gesture mimics the customary gesture used when substituting players in real-life games such as ball games and football matches, and can give the user the sense of immersion of a real scenario.
In a third alternative, the preset input may be a fingerprint input corresponding to the first avatar. For example, suppose there are four avatars numbered 1 to 4, each corresponding to the fingerprint of one finger: number 1 to the right index finger, number 2 to the right middle finger, number 3 to the right ring finger, and number 4 to the right little finger. When the user performs fingerprint verification with the fingerprint of the right middle finger, avatar number 2 is determined as the corresponding avatar.
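The fingerprint alternative amounts to a mapping from the verified finger to an avatar number. A minimal sketch, assuming the fingerprint subsystem reports which enrolled finger matched (the finger labels and function name are illustrative):

```python
from typing import Optional

# Assumed mapping from enrolled finger to avatar number (per the example above).
FINGER_TO_AVATAR = {
    "right_index": 1,
    "right_middle": 2,
    "right_ring": 3,
    "right_little": 4,
}

def avatar_for_fingerprint(matched_finger: str) -> Optional[int]:
    """Return the avatar number selected by the verified fingerprint, if any."""
    return FINGER_TO_AVATAR.get(matched_finger)
```

With this mapping, verifying with the right middle finger selects avatar number 2, matching the example in the text.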
103. And if the first input is matched with the preset input, the terminal equipment determines to execute the spoken training of the first spoken dialog role corresponding to the first virtual image.
In the embodiment of the invention, the terminal device can display at least one avatar, each avatar representing a different spoken dialog character; when a first input by the user for a first avatar is detected, it judges whether the first input matches a preset input, the first avatar being one of the at least one avatar; and if the first input matches the preset input, it determines to perform the spoken training of the first spoken dialog character corresponding to the first avatar. With this scheme, the user's input on the displayed avatars selects the spoken training item of the first spoken dialog character corresponding to the first avatar, which is then executed, so different dialog characters can be switched at any time during spoken training. This saves operation cost, improves the flexibility and interest of a real conversation scenario, and meets users' demands for speed and convenience in smart devices.
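Steps 101-103 can be condensed into a small dispatch sketch: an input event on an avatar is compared with the preset input, and a match starts the training for that avatar's character. The event model, input names, and return strings are assumptions for illustration only.

```python
# Assumed preset input type; the patent allows taps, gestures, or fingerprints.
PRESET_INPUT = "double_tap"

def handle_input(avatar: str, input_type: str) -> str:
    """Steps 102-103: if the detected first input matches the preset input,
    determine to perform spoken training for the avatar's dialog character."""
    if input_type == PRESET_INPUT:
        return f"start spoken training for '{avatar}'"
    return "ignore input"
```

For example, a double tap on the doctor avatar would start the doctor character's spoken training, while any other input is ignored.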
Example two
As shown in fig. 2, an embodiment of the present invention provides an interactive method for spoken language training, which may include the following steps:
201. the terminal equipment displays at least one virtual image, and the at least one virtual image comprises a second virtual image.
In the embodiment of the invention, the at least one avatar displayed by the terminal equipment can comprise a second avatar.
Optionally, the second avatar may be an avatar corresponding to a spoken language training being performed by the user.
202. And the terminal equipment reads the first voice information of the spoken training of the user aiming at the second spoken dialogue role corresponding to the second virtual image.
In the embodiment of the invention, during the spoken training of the second spoken dialog character corresponding to the second avatar, the user's voice information (namely the first voice information) can be read, so that whether the spoken training of the second spoken dialog character is finished can be judged from the first voice information.
In an alternative implementation manner, one avatar may correspond to one spoken dialog character, and corresponding spoken training content may be set for each spoken dialog character based on a spoken training scenario.
203. And the terminal equipment judges whether the user completes the spoken language training of the second spoken language dialogue role or not according to the first voice information.
In the embodiment of the invention, whether the user has completed the spoken training of the second spoken dialog character can be judged from at least one of content completion and pronunciation completion. Specifically, the content of the first voice information can be recognized and then matched against the preset spoken training content; when the matching degree between the recognized content and the preset content is greater than a preset matching degree (for example, 80%), the spoken training of the second spoken dialog character can be judged complete.
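One way to realise this completion check is to compare the recognised text (assuming speech recognition output is available) with the preset training content using a string-similarity ratio. The patent does not specify how the "matching degree" is computed; `difflib`'s sequence ratio stands in for it here, and the 0.8 threshold mirrors the 80% example.

```python
from difflib import SequenceMatcher

def training_complete(recognized: str, target: str, threshold: float = 0.8) -> bool:
    """Judge completion by matching recognised content against the preset
    training content (an assumed similarity measure, not the patent's own)."""
    ratio = SequenceMatcher(None, recognized.lower(), target.lower()).ratio()
    return ratio >= threshold
```

A near-verbatim utterance such as "good morning how are you" against the target "Good morning, how are you?" scores well above 0.8 and counts as complete, while an unrelated utterance does not.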
204. If the spoken training of the second spoken dialog character is completed, the terminal device detects whether the user has input for the at least one avatar.
In the embodiment of the invention, whether the user selects the input of another virtual image or not can be detected after the spoken language training of the spoken language dialogue role corresponding to one virtual image is finished, so that the spoken language dialogue role can be quickly switched to another spoken language dialogue role for spoken language training after the spoken language training is finished for one spoken language dialogue role, and the switching efficiency is improved.
205. The terminal equipment judges whether the first input is matched with the preset input or not under the condition that the first input of the user for the first virtual image is detected.
206. And if the first input is matched with the preset input, the terminal equipment determines to execute the spoken training of the first spoken dialog role corresponding to the first virtual image.
For the above descriptions 205 and 206, reference may be made to the description of the embodiments 102 and 103, which are not repeated herein.
In the embodiment of the invention, the user's input on the displayed avatars can switch from the spoken training of the second spoken dialog character to that of the first spoken dialog character, which is then executed, so different dialog characters can be switched at any time during spoken training. This saves operation cost, improves the flexibility and interest of a real conversation scenario, and meets users' demands for speed and convenience in smart devices.
Example three
As shown in fig. 3, an embodiment of the present invention provides an interactive method for spoken language training, which may include the following steps:
301. The terminal device displays at least one avatar, the at least one avatar including a second avatar.
302. The terminal device reads first voice information of the user's spoken training for the second spoken language dialogue character corresponding to the second avatar.
303. The terminal device judges, according to the first voice information, whether the user has completed the spoken language training of the second spoken language dialogue character.
304. If the spoken training of the second spoken dialog character is completed, the terminal device detects whether the user has input for the at least one avatar.
305. When a first input by the user for the first avatar is detected, the terminal device judges whether the first input matches the preset input.
In the embodiment of the present invention, the descriptions of 301 to 305 may refer to the descriptions of 201 to 205 in the second embodiment, and are not repeated herein.
306. If the first input matches the preset input, the terminal device reads second voice information of the user's spoken training for the first spoken language dialogue character corresponding to the first avatar.
307. The terminal device analyzes the content and pronunciation of the second voice information and generates an evaluation result for the second voice information accordingly.
Optionally, the evaluation result includes an evaluation score. Further, the evaluation result may also include improvement suggestion information.
308. The terminal device judges, according to the evaluation result, whether the spoken language training for the first spoken language dialogue character passes.
309. If the spoken language training for the first spoken language dialogue character does not pass, the terminal device repeats the spoken language training of the first spoken language dialogue character.
In the interactive method for spoken language training provided by this embodiment of the invention, the voice information of the spoken language training can be analyzed to obtain an evaluation result, and the evaluation result determines whether the training needs to be repeated. When the training effect is poor, the training is repeated to guarantee the spoken language training effect.
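A minimal sketch of this evaluate-then-repeat flow; the equal content/pronunciation weighting, the 80-point pass mark, and the helper names are assumptions made for illustration, not values fixed by the patent:

```python
def evaluate_speech(content_score: float, pronunciation_score: float) -> dict:
    """Generate an evaluation result from content and pronunciation.
    Equal weighting and an 80-point pass mark are illustrative choices."""
    score = 0.5 * content_score + 0.5 * pronunciation_score
    result = {"score": score, "passed": score >= 80}
    if not result["passed"]:
        result["suggestion"] = "Listen to the demonstration audio and retry."
    return result

def run_training(read_speech, max_attempts: int = 3) -> dict:
    """Repeat the spoken training until it passes or attempts run out.
    `read_speech` stands in for reading and scoring the user's voice."""
    result = {"score": 0.0, "passed": False}
    for _ in range(max_attempts):
        content, pronunciation = read_speech()
        result = evaluate_speech(content, pronunciation)
        if result["passed"]:
            break
    return result
```

For example, a session whose first attempt scores 60/60 and whose second scores 90/90 would repeat once and then pass.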
Example four
As shown in fig. 4, an embodiment of the present invention provides an interactive method for spoken language training, where the method may include the following steps:
401. The terminal device displays at least one avatar, the at least one avatar including a second avatar.
402. The terminal device reads first voice information of the user's spoken training for the second spoken language dialogue character corresponding to the second avatar.
403. The terminal device judges, according to the first voice information, whether the user has completed the spoken language training of the second spoken language dialogue character.
404. If the spoken language training of the second spoken language dialogue character is completed, the terminal device detects whether the user has an input for the at least one avatar.
405. When a first input by the user for the first avatar is detected, the terminal device judges whether the first input matches the preset input.
406. If the first input matches the preset input, the terminal device reads second voice information of the user's spoken training for the first spoken language dialogue character corresponding to the first avatar.
407. The terminal device analyzes the content and pronunciation of the second voice information and generates an evaluation result for the second voice information accordingly.
Optionally, the evaluation result includes an evaluation score. Further, the evaluation result may also include improvement suggestion information.
408. The terminal device judges whether the evaluation score is less than a preset score.
409. If the evaluation score is greater than or equal to the preset score, the terminal device determines that the spoken language training for the first spoken language dialogue character passes.
410. If the evaluation score is less than the preset score, the terminal device determines that the spoken language training for the first spoken language dialogue character does not pass, and prompts the user whether to view the improvement suggestion information.
411. If a second input in which the user chooses to view the improvement suggestion information is detected, the improvement suggestion information is displayed.
412. If a third input in which the user closes the improvement suggestion information is detected, the improvement suggestion information is closed and the spoken language training of the first spoken language dialogue character is repeated.
In this embodiment of the invention, when the evaluation result indicates failure, the user can be prompted to view the improvement suggestion information, so that the user can improve accordingly while repeating the spoken language training, further improving the training effect.
In a possible implementation, the improvement suggestion information may further include voice demonstration information, for example, a demonstration of a word or sentence in the spoken training. The user can then listen to the demonstration while viewing the improvement suggestion information, which improves the training effect during repeated practice.
Further, when a spoken language training item has been repeated several times and the voice information read by the terminal device still does not pass (for example, the evaluation score stays below 80), the terminal device may send the voice demonstration information of that training item to a wearable device associated with the terminal device, so that the user can listen to the demonstration anytime and anywhere.
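The decision logic of steps 408 to 412 together with this wearable fallback could be organized roughly as below; the action names, the 80-point preset score, and the three-failure resend rule are illustrative assumptions:

```python
def follow_up_actions(evaluation_score: int, fail_count: int,
                      preset_score: int = 80, resend_after: int = 3) -> list:
    """Decide the terminal device's next actions after one evaluation."""
    if evaluation_score >= preset_score:
        return ["pass"]
    # Failed: prompt the user about the suggestion, then repeat training.
    actions = ["prompt_improvement_suggestion", "repeat_training"]
    # After several failed repeats, also push the voice demonstration
    # to the wearable device associated with the terminal device.
    if fail_count >= resend_after:
        actions.append("send_demo_to_wearable")
    return actions
```

The caller would track `fail_count` across repeats of the same training item.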
Example five
As shown in fig. 5, an embodiment of the present invention provides a terminal device, where the terminal device includes:
a display module 501, configured to display at least one avatar, where each avatar of the at least one avatar represents a different spoken language dialog character;
a judging module 502, configured to, in a case that a first input of a user for a first avatar is detected, judge whether the first input matches a preset input, where the first avatar is one of the at least one avatar;
a determining module 503, configured to determine to execute spoken language training of a first spoken language dialog role corresponding to the first avatar if the first input matches the preset input.
Optionally, the at least one avatar includes a second avatar, and the terminal device further includes:
a reading module 504, configured to read first voice information of spoken training of a user for a second spoken language dialog role corresponding to the second avatar;
the judging module 502 is further configured to judge, according to the first voice information, whether the user has completed the spoken language training of the second spoken language dialog role;
a detecting module 505, configured to detect whether the user has an input for the at least one avatar if the spoken training of the second spoken language dialog character is completed.
Optionally, the reading module 504 is further configured to read second voice information of the user's spoken training for the first spoken language dialog role corresponding to the first avatar;
the terminal device further includes:
the analysis generating module 506 is configured to analyze the content and pronunciation of the second voice information, and generate an evaluation result for the second voice information according to the content and pronunciation of the second voice information;
the judging module 502 is further configured to judge whether the spoken language training for the first spoken language dialog role passes according to the evaluation result;
the executing module 507 is configured to repeatedly execute the spoken language training of the first spoken language dialogue character if the spoken language training for the first spoken language dialogue character is not passed.
Optionally, the evaluation result includes an evaluation score;
the determining module 502 includes:
the judgment sub-module 5021 is used for judging whether the evaluation score is greater than or equal to a preset score;
a first determining sub-module 5022, configured to determine that the spoken language training for the first spoken language dialog role passes if the evaluation score is greater than or equal to a preset score, and that it does not pass if the evaluation score is less than the preset score.
Optionally, the evaluation result further includes: improvement suggestion information corresponding to the evaluation score, the improvement suggestion information including: at least one of an improved suggestion for content and an improved suggestion for pronunciation;
the execution module 507 includes:
a second determination sub-module 5071, configured to determine that the spoken training for the first spoken dialog character is not passed if the evaluation score is less than the preset score, and prompt a user whether to view the improvement suggestion information;
a detection sub-module 5072, configured to display the improvement suggestion information if a second input in which the user chooses to view the improvement suggestion information is detected; and, if a third input in which the user closes the improvement suggestion information is detected, to close the improvement suggestion information and repeat the spoken language training of the first spoken language dialog role.
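The display/judging/determining modules of fig. 5 might be mirrored in code roughly as follows; the avatar list, the tap-gesture preset input, and the return value are placeholders invented for illustration:

```python
class TerminalDevice:
    """Sketch of the display, judging, and determining modules of fig. 5."""

    def __init__(self, avatars, preset_input="tap"):
        self.avatars = avatars            # state shown by the display module
        self.preset_input = preset_input

    def judge(self, first_input) -> bool:
        """Judging module: does the first input match the preset input?"""
        return first_input == self.preset_input

    def determine(self, first_avatar, first_input):
        """Determining module: start the spoken training of the avatar's
        dialog role only if the avatar is displayed and the input matches."""
        if first_avatar in self.avatars and self.judge(first_input):
            return f"start_training:{first_avatar}"
        return None
```

A real device would dispatch touch or voice events into `judge`/`determine` rather than plain strings.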
As shown in fig. 6, an embodiment of the present invention further provides a terminal device, where the terminal device may include:
a memory 601 in which executable program code is stored;
a processor 602 coupled to a memory 601;
the processor 602 calls the executable program code stored in the memory 601 to execute the interactive method of spoken language training executed by the terminal device in the above embodiments of methods.
It should be noted that the terminal device shown in fig. 6 may further include components, which are not shown, such as a battery, an input key, a speaker, a microphone, a screen, an RF circuit, a Wi-Fi module, a bluetooth module, and a sensor, which are not described in detail in this embodiment.
Embodiments of the present invention provide a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention also provide a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention further provide an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product, when running on a computer, causes the computer to perform some or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the phrases "in one embodiment" or "in an embodiment" appearing in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary, and that the illustrated actions and modules are not necessarily required to practice the invention.
The terminal device provided by the embodiment of the present invention can implement each process shown in the above method embodiments, and is not described herein again to avoid repetition.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and specifically may be a processor in the computer device) to execute some or all of the steps of the methods of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by program instructions, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical memory, a magnetic disk, a magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.

Claims (11)

1. An interactive method for spoken language training is applied to terminal equipment, and is characterized in that the method comprises the following steps:
displaying at least one avatar, each of the at least one avatar representing a different spoken dialog character;
when a first input by a user for a first avatar is detected, judging whether the first input matches a preset input, wherein the first avatar is one of the at least one avatar;
and if the first input matches the preset input, determining to execute the spoken language training of a first spoken language dialog character corresponding to the first avatar.
2. The method of claim 1, wherein the at least one avatar includes a second avatar; in a case where a first input of a user to a first avatar is detected, before determining whether the first input matches a preset input, the method further includes:
reading first voice information of spoken training of a user aiming at a second spoken dialog role corresponding to the second virtual image;
judging whether the user completes the spoken language training of the second spoken language dialogue role or not according to the first voice information;
if the spoken training of the second spoken dialog character is completed, detecting whether the user has input for the at least one avatar.
3. The method according to claim 1 or 2, wherein said determining to perform spoken training of the first spoken dialog character corresponding to the first avatar comprises:
reading second voice information of the user for spoken training of a first spoken dialog role corresponding to the first virtual image;
analyzing the content and pronunciation of the second voice information, and generating an evaluation result aiming at the second voice information according to the content and pronunciation of the second voice information;
judging whether the spoken language training aiming at the first spoken language dialogue role is passed or not according to the evaluation result;
if the spoken language training for the first spoken language dialog character is not passed, the spoken language training of the first spoken language dialog character is repeatedly performed.
4. The method of claim 3, wherein the evaluation result comprises an evaluation score, and wherein judging whether the spoken language training for the first spoken language dialog character passes according to the evaluation result comprises:
judging whether the evaluation score is greater than or equal to a preset score or not;
if the evaluation score is greater than or equal to a preset score, determining that the spoken language training for the first spoken language dialog role is passed;
if the evaluation score is less than the preset score, determining that the spoken training for the first spoken dialog character fails.
5. The method of claim 4, wherein the evaluation result further comprises: improvement suggestion information corresponding to the evaluation score, the improvement suggestion information including: at least one of an improvement suggestion for content and an improvement suggestion for pronunciation;
if the spoken language training for the first spoken language dialog character is not passed, then repeatedly performing the spoken language training for the first spoken language dialog character includes:
if the evaluation score is less than the preset score, determining that the spoken language training for the first spoken language dialog character does not pass, and prompting a user whether to view the improvement suggestion information;
if a second input in which the user chooses to view the improvement suggestion information is detected, displaying the improvement suggestion information;
and if a third input in which the user closes the improvement suggestion information is detected, closing the improvement suggestion information and repeating the spoken language training of the first spoken language dialog character.
6. A terminal device, characterized in that the terminal device comprises:
the display module is used for displaying at least one virtual image, and each virtual image in the at least one virtual image represents different spoken language dialogue roles;
the device comprises a judging module, a judging module and a judging module, wherein the judging module is used for judging whether a first input of a user for a first virtual image is matched with a preset input or not under the condition that the first input of the user for the first virtual image is detected, and the first virtual image is one of the at least one virtual image;
and the determining module is used for determining to execute the spoken training of the first spoken dialog role corresponding to the first virtual image if the first input is matched with the preset input.
7. The terminal device of claim 6, wherein a second avatar is included in the at least one avatar, the terminal device further comprising:
the reading module is used for reading first voice information of spoken training of a user aiming at a second spoken language dialogue role corresponding to the second virtual image;
the judging module is also used for judging whether the user completes the spoken language training of the second spoken language dialogue role according to the first voice information;
and the detection module is used for detecting whether the user has input aiming at the at least one virtual image or not if the spoken language training of the second spoken language dialogue role is completed.
8. The terminal device according to claim 6 or 7, wherein the terminal device further comprises:
the reading module is further configured to read second voice information of the user's spoken training for the first spoken language dialogue role corresponding to the first virtual image;
the analysis generation module is used for analyzing the content and pronunciation of the second voice information and generating an evaluation result aiming at the second voice information according to the content and pronunciation of the second voice information;
the judging module is also used for judging whether the spoken language training aiming at the first spoken language dialogue role passes or not according to the evaluation result;
and the execution module is used for repeatedly executing the spoken language training of the first spoken language dialogue role if the spoken language training aiming at the first spoken language dialogue role is not passed.
9. The terminal device of claim 8, wherein the evaluation result comprises an evaluation score;
the judging module comprises:
the judgment submodule is used for judging whether the evaluation score is greater than or equal to a preset score or not;
the first determining sub-module is configured to determine that the spoken language training for the first spoken language dialogue role passes if the evaluation score is greater than or equal to a preset score, and that it does not pass if the evaluation score is less than the preset score.
10. The terminal device of claim 9, wherein the evaluation result further comprises: improvement suggestion information corresponding to the evaluation score, the improvement suggestion information including: at least one of an improved suggestion for content and an improved suggestion for pronunciation;
the execution module comprises:
a second determining sub-module, configured to determine that the spoken training for the first spoken dialog role is failed if the evaluation score is smaller than the preset score, and prompt a user whether to check the improvement suggestion information;
the detection sub-module is configured to display the improvement suggestion information if a second input in which the user chooses to view the improvement suggestion information is detected; and, if a third input in which the user closes the improvement suggestion information is detected, to close the improvement suggestion information and repeat the spoken language training of the first spoken language dialogue role.
11. A computer-readable storage medium storing a computer program which, when executed on a computer, implements the interactive method for spoken language training according to any one of claims 1 to 5.
CN202010396479.0A 2020-05-12 2020-05-12 Interactive method for spoken language training and terminal equipment Pending CN111639218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010396479.0A CN111639218A (en) 2020-05-12 2020-05-12 Interactive method for spoken language training and terminal equipment


Publications (1)

Publication Number Publication Date
CN111639218A true CN111639218A (en) 2020-09-08

Family

ID=72329974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010396479.0A Pending CN111639218A (en) 2020-05-12 2020-05-12 Interactive method for spoken language training and terminal equipment

Country Status (1)

Country Link
CN (1) CN111639218A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI224759B (en) * 2003-09-12 2004-12-01 Strawberry Software Inc Apparatus and methods for English learning by multiple roles playing in a virtual classroom
CN101197084A (en) * 2007-11-06 2008-06-11 安徽科大讯飞信息科技股份有限公司 Automatic spoken English evaluating and learning system
US20080274805A1 (en) * 2007-05-02 2008-11-06 Ganz, An Ontario Partnership Consisting Of 2121200 Ontario Inc. And 2121812 Ontario Inc. Attribute building for characters in a virtual environment
CN107340991A (en) * 2017-07-18 2017-11-10 百度在线网络技术(北京)有限公司 Switching method, device, equipment and the storage medium of speech roles
CN107564510A (en) * 2017-08-23 2018-01-09 百度在线网络技术(北京)有限公司 A kind of voice virtual role management method, device, server and storage medium
CN109637215A (en) * 2019-01-16 2019-04-16 陕西国际商贸学院 A kind of college English Teaching spoken language training system
CN110955675A (en) * 2019-10-30 2020-04-03 中国银联股份有限公司 Robot dialogue method, device, equipment and computer readable storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112908068A (en) * 2021-02-06 2021-06-04 江苏电子信息职业学院 College spoken English conversation interactive system
CN113395201A (en) * 2021-06-10 2021-09-14 广州繁星互娱信息科技有限公司 Head portrait display method, device, terminal and server in chat session
CN113395201B (en) * 2021-06-10 2024-02-23 广州繁星互娱信息科技有限公司 Head portrait display method, device, terminal and server in chat session
CN117541444A (en) * 2023-12-04 2024-02-09 新励成教育科技股份有限公司 Interactive virtual reality talent expression training method, device, equipment and medium
CN117541444B (en) * 2023-12-04 2024-03-29 新励成教育科技股份有限公司 Interactive virtual reality talent expression training method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination