CN111639222A - Spoken language training method and electronic equipment - Google Patents

Spoken language training method and electronic equipment

Info

Publication number
CN111639222A
CN111639222A
Authority
CN
China
Prior art keywords
role
user selection
spoken language
selection gesture
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010428488.3A
Other languages
Chinese (zh)
Inventor
周林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202010428488.3A priority Critical patent/CN111639222A/en
Publication of CN111639222A publication Critical patent/CN111639222A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/632Query formulation
    • G06F16/634Query by example, e.g. query by humming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/04Speaking
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Library & Information Science (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a spoken language training method and an electronic device, wherein the method comprises the following steps: acquiring spoken language practice content; displaying each preset role corresponding to the spoken language practice content; in the case that a first user selection gesture is detected, determining a first role corresponding to the first user selection gesture from the preset roles; and starting a training mode corresponding to the first role. By implementing the embodiment of the application, the spoken language training effect can be improved.

Description

Spoken language training method and electronic equipment
Technical Field
The application relates to the technical field of computers, in particular to a spoken language training method and electronic equipment.
Background
Learning a language requires mastering four key skills: listening, speaking, reading and writing. Most students can manage listening, reading and writing on their own, but solo spoken language practice is monotonous and tedious; it fails to spark interest and enjoyment in learning, so spoken language training results are usually poor.
Disclosure of Invention
The embodiment of the application discloses a spoken language training method and electronic equipment, which can improve the training effect of spoken language.
The first aspect of the embodiments of the present application discloses a spoken language training method, including:
acquiring spoken language practice content;
displaying each preset role corresponding to the spoken language practice content;
under the condition that a first user selection gesture is detected, determining a first role corresponding to the first user selection gesture from all preset roles;
and starting a training mode corresponding to the first role.
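The patent prescribes no implementation, but the four claimed steps can be sketched as follows. Every name and data shape here (`acquire_practice_content`, the gesture labels, the role list) is a hypothetical illustration, not the patent's method:

```python
# Hypothetical sketch of the four claimed steps; names and data shapes
# are assumptions, not taken from the patent.

def acquire_practice_content():
    # Step 1: acquire spoken language practice content (source unspecified here).
    return {"title": "At the Restaurant",
            "roles": ["Waiter", "Customer"],
            "lines": [("Waiter", "May I take your order?"),
                      ("Customer", "I'd like a salad, please.")]}

def display_roles(content):
    # Step 2: present each preset role to the user.
    return list(content["roles"])

def resolve_role(selection_gesture, roles, gesture_map):
    # Step 3: map a detected user selection gesture to a preset role.
    role = gesture_map.get(selection_gesture)
    return role if role in roles else None

def start_training_mode(role):
    # Step 4: start the training mode for the selected role.
    return f"training:{role}"

content = acquire_practice_content()
roles = display_roles(content)
first_role = resolve_role("one_finger", roles,
                          {"one_finger": "Waiter", "two_fingers": "Customer"})
mode = start_training_mode(first_role)
```

An unrecognized gesture simply resolves to no role, so no training mode is started for it.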
As an optional implementation manner, in the first aspect of the embodiment of the present application, after the starting of the training mode corresponding to the first role, the method further includes:
in the case that a second user selection gesture is detected, determining a second role corresponding to the second user selection gesture from the preset roles, wherein the second role is different from the first role;
and switching the training mode corresponding to the first role to the training mode corresponding to the second role.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the spoken language practice content is obtained by scanning a paper book page by a camera of an electronic device, and after determining, in a case that a first user selection gesture is detected, a first role corresponding to the first user selection gesture from among the preset roles, and before starting a training mode corresponding to the first role, the method further includes:
detecting a current reading mode of the electronic equipment;
when the current reading mode is a paper book reading mode, controlling a display screen of the electronic equipment to be switched from a bright screen state to a black screen state;
when the current reading mode is an electronic book reading mode, marking the practice content of the first role in the spoken language practice content; and displaying the marked spoken language practice content on a display screen of the electronic device.
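As a hedged illustration of the optional reading-mode branch above, assuming a simple tuple-based representation of practice lines (none of this is prescribed by the patent):

```python
# Hypothetical sketch of the reading-mode branch; the screen-state strings
# and the marked-line format are illustrative assumptions.

def handle_reading_mode(mode, practice_lines, first_role):
    """Return (screen_state, displayed_lines) for the given reading mode."""
    if mode == "paper":
        # Paper-book reading: blank the screen (black screen state).
        return ("black", [])
    elif mode == "ebook":
        # E-book reading: mark the first role's practice lines for display.
        marked = [(role, text, role == first_role)
                  for role, text in practice_lines]
        return ("bright", marked)
    raise ValueError(f"unknown reading mode: {mode}")

lines = [("Waiter", "May I take your order?"),
         ("Customer", "I'd like a salad, please.")]
```

In the paper-book case nothing needs to be rendered, which is what motivates the power-saving black screen described later in embodiment two.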
As an optional implementation manner, in the first aspect of the embodiment of the present application, after determining, in the case that the first user selection gesture is detected, the first role corresponding to the first user selection gesture from the preset roles, the method further includes:
sending each preset role, the first role and the spoken language practice content to terminal equipment associated with electronic equipment;
receiving a third role fed back by the terminal device, wherein the third role is different from the first role;
starting a training mode corresponding to the first role, comprising:
and starting the training modes corresponding to the first role and the third role.
As an optional implementation manner, in the first aspect of the embodiment of the present application, after determining, in the case that the first user selection gesture is detected, the first role corresponding to the first user selection gesture from the preset roles, the method further includes:
when a role timbre selection instruction is detected, determining a target timbre corresponding to the role timbre selection instruction from preset timbres of a fourth role displayed on a display screen of the electronic device, wherein the fourth role is any preset role other than the first role;
after the training mode corresponding to the first role is started, the method further includes:
and controlling the fourth role to perform spoken language reading in the target timbre in the training mode corresponding to the first role.
A second aspect of an embodiment of the present application discloses an electronic device, including:
an acquisition unit configured to acquire spoken language practice content;
the display unit is used for displaying each preset role corresponding to the spoken language practice content;
the determining unit is used for determining a first role corresponding to a first user selection gesture from all preset roles under the condition that the first user selection gesture is detected;
and the starting unit is used for starting the training mode corresponding to the first role.
As an optional implementation manner, in the second aspect of the embodiment of the present application, the determining unit is further configured to determine, after the starting unit starts the training mode corresponding to the first role, a second role corresponding to a second user selection gesture from the preset roles when the second user selection gesture is detected, wherein the second role is different from the first role;
the electronic device further includes:
and the switching unit is used for switching the training mode corresponding to the first role to the training mode corresponding to the second role.
As an optional implementation manner, in the second aspect of this embodiment of this application, the spoken language practice content is obtained by scanning a paper book page by a camera of an electronic device, and the electronic device further includes:
the detection unit is used for detecting the current reading mode of the electronic device after the determining unit, upon detecting the first user selection gesture, determines the first role corresponding to the first user selection gesture from the preset roles, and before the starting unit starts the training mode corresponding to the first role;
the first processing unit is used for controlling the display screen of the electronic equipment to be switched from a bright screen state to a black screen state when the current reading mode is a paper book reading mode;
the second processing unit is used for marking the practice content of the first role in the spoken language practice content when the current reading mode is an electronic book reading mode; and displaying the marked spoken language practice content on a display screen of the electronic device.
As an optional implementation manner, in the second aspect of the embodiments of the present application, the electronic device further includes:
the sending unit is used for sending each preset role, the first role and the spoken language practice content to a terminal device associated with the electronic device after the determining unit, upon detecting the first user selection gesture, determines the first role corresponding to the first user selection gesture from the preset roles;
a receiving unit, configured to receive a third role fed back by the terminal device, wherein the third role is different from the first role;
the starting unit is specifically configured to start the training modes corresponding to the first role and the third role.
As an optional implementation manner, in the second aspect of the embodiment of the present application, the determining unit is further configured to, after determining the first role corresponding to the first user selection gesture from the preset roles when the first user selection gesture is detected, determine a target timbre of a fourth role from preset timbres of the fourth role, wherein the fourth role is any preset role other than the first role;
the electronic device further includes:
and the control unit is used for controlling the fourth role to perform spoken language reading in the target timbre in the training mode corresponding to the first role after the starting unit starts the training mode corresponding to the first role.
A third aspect of the embodiments of the present application discloses a terminal device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect of the present application.
A fourth aspect of embodiments of the present application discloses a computer-readable storage medium storing a computer program comprising a program code for performing some or all of the steps of any one of the methods of the first aspect of the present application.
A fifth aspect of embodiments of the present application discloses a computer program product, which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
A sixth aspect of embodiments of the present application discloses an application issuing system, configured to issue a computer program product, where the computer program product is configured to, when run on a computer, cause the computer to perform part or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
implementing the embodiment of the application, and acquiring spoken language practice content; displaying each preset role corresponding to the spoken language practice content; under the condition that a first user selection gesture is detected, determining a first role corresponding to the first user selection gesture from all preset roles; and starting a training mode corresponding to the first character. By implementing the method, the role selection of the spoken language practice content is realized based on the user gesture, so that the spoken language training method in a dialogue scene type is provided for students, the defect of poor training effect of single spoken language practice is overcome, and the spoken language training effect is favorably improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without making a creative effort.
FIG. 1 is a schematic flow chart illustrating a method for spoken language training disclosed in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another spoken language training method disclosed in the embodiments of the present application;
fig. 3 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 4 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present application;
fig. 5 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises," "comprising," and any variations thereof in the embodiments and drawings of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The spoken language training method disclosed in the embodiment of the application can be applied to an electronic device; the electronic device may be a family education machine, and the operating system of the family education machine may include, but is not limited to, an Android operating system, an iOS operating system, a Symbian operating system, a BlackBerry operating system, a Windows Phone 8 operating system, and the like.
The electronic device may be a terminal device or another electronic device. The terminal device may be referred to as a User Equipment (UE), a Mobile Station (MS), a mobile terminal, an intelligent terminal, and the like, and may communicate with one or more core networks through a Radio Access Network (RAN). For example, the terminal device may be a mobile phone (a so-called "cellular" phone), a computer with a mobile terminal, and so on; the terminal device may also be a portable, pocket-sized, hand-held, computer-built-in or vehicle-mounted mobile device, or terminal equipment in future NR networks, which exchanges voice or data with the radio access network.
The embodiment of the application discloses a spoken language training method and electronic equipment, and the spoken language training effect can be improved.
The details will be described below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a spoken language training method according to an embodiment of the present application. The spoken language training method shown in fig. 1 may specifically include the following steps:
101. and acquiring spoken language practice content.
In the embodiment of the present application, obtaining spoken language practice content includes, but is not limited to, the following implementation manners:
mode 1: when the current time point is a preset time point, acquiring a content identification corresponding to the preset time point from preset learning plan information, and searching spoken language practice content corresponding to the content identification from a preset database; the preset learning plan information comprises a plurality of preset time points, each preset time point corresponds to one content identifier, and the content identifiers corresponding to different preset time points can be the same or different.
For example, when the current time point is a preset time point, it may further be detected whether a content acquisition instruction is received; if so, the content identifier corresponding to the preset time point is acquired from the preset learning plan information; if not, prompt information is output to prompt the user to practice spoken language. By implementing this method, the user is helped to carry out planned spoken language training.
Mode 2: scanning a paper book page containing a spoken language practice module with a camera of the electronic device to obtain a page image, and obtaining the spoken language practice content by performing OCR on the page image.
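The two acquisition modes above can be sketched as follows; the schedule format, the database shape, and the injectable `ocr` hook are all assumptions for illustration, not the patent's implementation:

```python
# Hypothetical sketch of the two content-acquisition modes.

def lookup_scheduled_content(now, learning_plan, database):
    # Mode 1: if the current time point matches a preset time point,
    # fetch the content identifier from the learning plan and look up
    # the practice content in the preset database.
    content_id = learning_plan.get(now)
    if content_id is None:
        return None
    return database.get(content_id)

def content_from_page_image(page_image, ocr=lambda img: img):
    # Mode 2: turn a scanned paper-book page into practice content.
    # `ocr` is a stand-in; a real system would call an OCR engine here.
    return ocr(page_image)

plan = {"18:00": "lesson-07"}
db = {"lesson-07": "Dialogue: At the Restaurant"}
```

When the current time point is not a preset one, mode 1 yields nothing, which corresponds to the prompt-message branch described above.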
102. And displaying each preset role corresponding to the spoken language practice content.
103. And under the condition that the first user selection gesture is detected, determining a first role corresponding to the first user selection gesture from the preset roles.
Displaying each preset role corresponding to the spoken language practice content may include: displaying the role identifier and the legal selection gesture corresponding to each preset role, where the role identifier may be a number, text, or an animated figure. On this basis, in the case that the first user selection gesture is detected, determining the first role corresponding to the first user selection gesture from the preset roles may include: recognizing the first user selection gesture to judge whether it is a legal selection gesture, and if so, determining the preset role corresponding to the first user selection gesture as the first role. By implementing this method, efficient role selection can be realized.
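The legality check just described might look like this minimal sketch; the gesture labels and the mapping are invented for illustration:

```python
# Hypothetical sketch of the legal-selection-gesture check in step 103.

LEGAL_GESTURES = {"one_finger": "Role A", "two_fingers": "Role B"}

def select_role(detected_gesture, legal_gestures=LEGAL_GESTURES):
    # Only a legal (displayed) selection gesture maps to a preset role;
    # any other gesture is ignored rather than mis-assigned.
    if detected_gesture not in legal_gestures:
        return None
    return legal_gestures[detected_gesture]
```

Returning `None` for unrecognized gestures keeps an accidental hand movement from starting the wrong training mode.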
104. And starting a training mode corresponding to the first role.
In the training mode corresponding to the first role, the user independently completes the practice content of the first role, while the electronic device completes the practice content of every other preset role. When the practice content of the first role is reached, the first voice of the electronic device user is collected through a sound pickup device of the electronic device, and the first voice can be analyzed and scored.
Optionally, the training mode corresponding to the first role may include a friend-participation mode and a non-friend-participation mode. In the friend-participation mode, the electronic device user may remotely invite friends to take part in the spoken language training together; in the non-friend-participation mode, only the electronic device user takes part. If the training mode corresponding to the first role is the friend-participation mode, after step 103, the following steps may further be performed: sending each preset role, the first role and the spoken language practice content to a terminal device associated with the electronic device; and receiving a third role fed back by the terminal device, where the third role is different from the first role. In this case, starting the training mode corresponding to the first role may include: starting the training modes corresponding to the first role and the third role, in which the practice content of the first role and the third role is completed independently by the electronic device user and the terminal device user respectively, and the practice content of all preset roles other than the first role and the third role is completed by the electronic device. When the practice content of the first role is reached, the first voice of the electronic device user is collected through a sound pickup device of the electronic device; when the practice content of the third role is reached, the second voice of the terminal device user is obtained from the terminal device; the first voice and the second voice can then be analyzed and scored.
Further, before sending each preset role, the first role and the spoken language practice content to the terminal device associated with the electronic device, an online friend list may be displayed, and after it is detected that the electronic device user has chosen a target friend, the electronic device may be associated with the terminal device of the target friend. By implementing this method, multi-person synchronous spoken language training can further improve the interest of spoken language training.
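Assuming a simple role-to-participant mapping (an assumption, not the patent's design), the friend-participation assignment described above can be sketched as:

```python
# Hypothetical sketch of splitting practice content between the local
# user, the remote friend, and the device in friend-participation mode.

def assign_roles(preset_roles, first_role, third_role):
    """Map every preset role to whoever completes its practice content."""
    if third_role == first_role:
        # The friend (third role) must differ from the local user's role.
        raise ValueError("friend must choose a different role")
    return {role: ("local user" if role == first_role
                   else "remote friend" if role == third_role
                   else "device")
            for role in preset_roles}

assignment = assign_roles(["Waiter", "Customer", "Chef"], "Waiter", "Customer")
```

Any role neither user picked falls to the device, matching the description that the electronic device completes all remaining preset roles.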
By implementing the method, a dialogue-scenario spoken language training approach is provided for the user, which improves the spoken language training effect, helps the user train in a planned way, enables efficient role selection, and, through multi-person synchronous training, further increases the interest of spoken language training.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another spoken language training method disclosed in the embodiment of the present application. The spoken language training method shown in fig. 2 may specifically include the following steps:
for detailed descriptions of step 201 to step 203, please refer to detailed descriptions of step 101 to step 103 in the first embodiment, which is not described again in this embodiment. It should be noted that the spoken language practice content mentioned in step 101 is obtained by scanning a paper book page through a camera of the electronic device.
204. A current reading mode of the electronic device is detected.
205. When the current reading mode is the paper book reading mode, controlling the display screen of the electronic device to be switched from the bright screen state to the black screen state, and executing step 207.
When the display screen of the electronic device is in the black screen state, only the display program of the electronic device is suspended, while the other programs run normally. By executing steps 204 to 205, if the user of the electronic device performs spoken language training by reading a paper book page, the display screen of the electronic device is controlled into the black screen state, which can reduce the power consumption of the electronic device.
206. When the current reading mode is the electronic book reading mode, marking the practice content of the first role in the spoken language practice content; and displaying the marked spoken language practice content on a display screen of the electronic device, and executing step 207.
By executing steps 204 and 206, if the user of the electronic device performs spoken language training by reading an e-book page, the spoken language practice content with the first role's practice content marked is presented to the user, so that the user can intuitively identify the practice content of the first role, which helps reduce the probability of user error.
207. And starting a training mode corresponding to the first role.
In the embodiment of the present application, for the training mode corresponding to the first role, please refer to the description in the first embodiment, which is not repeated here.
Among the preset roles, each role other than the first role is a fourth role, and the timbre of a fourth role can be selected by the user independently, which can further improve the interest of spoken language training. Specifically, after step 203, the following step may further be performed: determining a target timbre of the fourth role from the preset timbres of the fourth role. After the training mode corresponding to the first role is started, the following step may further be performed: controlling the fourth role to perform spoken language reading in the target timbre in the training mode corresponding to the first role.
Optionally, determining the target timbre of the fourth role from the preset timbres of the fourth role may include: detecting whether a timbre setting instruction is received; if so, determining the fourth role from the preset roles, acquiring the preset timbres of the fourth role, displaying the timbre identifiers of those preset timbres on a display screen of the electronic device, determining the target identifier selected by the electronic device user from the timbre identifiers, and taking the preset timbre corresponding to the target identifier as the target timbre.
Further, after the fourth role is determined from the preset roles, personalized information of the electronic device user may be obtained, where the personalized information may include the age, gender, grade and preferences of the electronic device user, and acquiring the preset timbres of the fourth role may include: acquiring the preset timbres of the fourth role according to the personalized information. By implementing this method, the spoken language training can be made more personalized.
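A hedged sketch of the timbre selection just described; the timbre library, the age-based personalization rule, and the fallback behavior are invented examples rather than the patent's logic:

```python
# Hypothetical sketch of personalized timbre selection for the fourth role.

TIMBRE_LIBRARY = {
    "child": ["bright-child", "cartoon"],
    "adult": ["warm-female", "deep-male"],
}

def preset_timbres_for(profile):
    # Filter candidate timbres using the user's personalized information
    # (an invented age rule stands in for the unspecified real one).
    group = "child" if profile.get("age", 18) < 12 else "adult"
    return TIMBRE_LIBRARY[group]

def choose_timbre(selected_identifier, candidates):
    # The user taps one displayed timbre identifier; fall back to the
    # first candidate if the selection is not among those displayed.
    return selected_identifier if selected_identifier in candidates else candidates[0]

candidates = preset_timbres_for({"age": 9, "grade": 3})
target = choose_timbre("cartoon", candidates)
```

Filtering by profile first means only age-appropriate identifiers are ever displayed for selection.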
208. In the case that a second user selection gesture is detected, determining a second role corresponding to the second user selection gesture from the preset roles, wherein the second role is different from the first role.
In this embodiment of the application, when the second user selection gesture is detected, determining, from the preset roles, a second role corresponding to the second user selection gesture may include, but is not limited to, the following implementation manners:
mode 1: under the condition that a second user selection gesture is detected, judging whether the second user selection gesture is the same as the first user selection gesture, and if not, determining a second role corresponding to the second user selection gesture from all preset roles; by implementing the method, the role switching process of the electronic equipment user can be simplified, and the use experience of the electronic equipment user is better.
Mode 2: detecting whether a role switching instruction is received; if so, in the case that a second user selection gesture is detected, judging whether the second user selection gesture is the same as the first user selection gesture, and if not, determining a second role corresponding to the second user selection gesture from the preset roles. The role switching instruction can be input by the electronic device user in a contact manner (pressing a virtual/physical button, or touch gestures) or a non-contact manner (voice, mid-air gestures, or shaking actions).
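Both switching variants can be sketched in one hypothetical function; the `require_command` flag standing in for mode 2's role switching instruction, and the gesture labels, are assumptions:

```python
# Hypothetical sketch of the two role-switching variants (modes 1 and 2).

def switch_role(current_role, current_gesture, new_gesture,
                gesture_map, require_command=False, command_received=False):
    """Return the role to train next, or the current role if no switch applies."""
    if require_command and not command_received:
        return current_role   # mode 2: wait for an explicit switching instruction
    if new_gesture == current_gesture:
        return current_role   # same gesture as before: nothing to switch
    return gesture_map.get(new_gesture, current_role)

gmap = {"one_finger": "Waiter", "two_fingers": "Customer"}
```

Mode 1 corresponds to calling with the defaults; mode 2 additionally gates the switch behind `command_received`.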
209. And switching the training mode corresponding to the first role to the training mode corresponding to the second role.
By performing steps 208 to 209, efficient switching of roles can be realized.
By implementing this method, a spoken language training method based on user dialogue scenes improves the spoken language training effect and helps the user carry out planned spoken language training. Roles can be selected and switched efficiently, multi-user synchronous training further increases the interest of the practice, the user experience is improved, the power consumption of the electronic device is reduced, and the probability of user error is reduced.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 3, the electronic device may include:
an obtaining unit 301, configured to obtain spoken language practice content.
In this embodiment of the application, the manner of the obtaining unit 301 for obtaining the spoken language practice content may specifically be:
Mode 1: the obtaining unit 301 is configured to obtain, when the current time point is a preset time point, a content identifier corresponding to the preset time point from preset learning plan information, and search, from a preset database, the spoken language practice content corresponding to the content identifier; the preset learning plan information includes a plurality of preset time points, each preset time point corresponds to one content identifier, and the content identifiers corresponding to different preset time points may be the same or different.
Further, the obtaining unit 301 is also configured to detect, when the current time point is a preset time point, whether a content obtaining instruction is received; if the content obtaining instruction is received, obtain the content identifier corresponding to the preset time point from the preset learning plan information; if not, output prompt information prompting the user to practice spoken language. By implementing this manner, the user is helped to carry out planned spoken language training.
Mode 2: the obtaining unit 301 is configured to scan a paper book page containing a spoken language practice module with a camera of the electronic device to obtain a page image, and obtain the spoken language practice content by performing OCR recognition on the page image.
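The two acquisition modes above can be sketched in Python; the learning-plan keys, content identifiers, and dialogue lines below are illustrative assumptions, and mode 2's OCR step is only indicated in a comment since it depends on a camera and an OCR engine:

```python
import datetime

# Hypothetical learning plan and content database (identifiers are made up).
LEARNING_PLAN = {"07:30": "lesson_greetings", "19:00": "lesson_shopping"}
CONTENT_DB = {
    "lesson_greetings": ["A: Hello!", "B: Hi, how are you?"],
    "lesson_shopping": ["A: How much is this?", "B: Five dollars."],
}

def fetch_scheduled_content(now):
    """Mode 1: map the current time point to a content identifier via the
    learning plan, then look the practice content up in the database."""
    content_id = LEARNING_PLAN.get(now.strftime("%H:%M"))
    return CONTENT_DB.get(content_id) if content_id else None

# Mode 2 would instead capture a page image with the camera and run OCR
# on it (e.g. with an OCR library) to extract the practice content.
```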
A display unit 302, configured to display each preset character corresponding to the spoken language practice content.
A determining unit 303, configured to determine, from the preset roles, a first role corresponding to a first user selection gesture when the first user selection gesture is detected;
The manner in which the determining unit 303 determines, from the preset roles, the first role corresponding to the first user selection gesture may specifically be: the determining unit 303 is configured to recognize the first user selection gesture when it is detected, so as to determine whether the first user selection gesture is a legal selection gesture, and if it is a legal selection gesture, determine the preset role corresponding to the first user selection gesture as the first role. By implementing this method, the role can be selected efficiently.
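The legality check described above might look like the following; the set of legal gestures and the gesture-to-role mapping are placeholders, not values from the patent:

```python
# Assumed gesture vocabulary; a real device would use its recognizer's labels.
LEGAL_SELECTION_GESTURES = {"point_left", "point_right", "circle"}
GESTURE_TO_ROLE = {"point_left": "role_A", "point_right": "role_B"}

def select_first_role(gesture):
    """Validate the detected gesture, then map it to a preset role."""
    if gesture not in LEGAL_SELECTION_GESTURES:
        return None  # illegal selection gesture: no role is selected
    return GESTURE_TO_ROLE.get(gesture)
```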
The starting unit 304 is configured to start a training mode corresponding to the first character.
For an introduction to the training mode corresponding to the first role, please refer to the description in the first embodiment; details are not repeated here.
As an optional implementation manner, in an embodiment of the present application, the electronic device further includes:
a sending unit, configured to, after the determining unit 303 determines, upon detecting a first user selection gesture, the first role corresponding to the first user selection gesture from the preset roles, send the preset roles, the first role, and the spoken language practice content to a terminal device associated with the electronic device;
a receiving unit, configured to receive a third role fed back by the terminal device; wherein the third role is different from the first role;
the starting unit 304 is specifically configured to start a training mode corresponding to the first character and the third character.
Further, the electronic device is configured to display an online friend list before the sending unit sends the preset roles, the first role, and the spoken language practice content to the terminal device associated with the electronic device, and to associate the electronic device with the terminal device of a target friend after detecting that the user of the electronic device has selected the target friend. By implementing this method, synchronous spoken language training among multiple users can further improve the interest of the training.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application. The electronic device shown in fig. 4 is optimized from the electronic device shown in fig. 3, and the electronic device shown in fig. 4 may further include:
the detecting unit 305 is configured to, when the determining unit 303 detects the first user selection gesture, detect the current reading mode of the electronic device after determining the first role corresponding to the first user selection gesture from the preset roles and before the starting unit 304 starts the training mode corresponding to the first role.
The first processing unit 306 is configured to, when the current reading mode is the paper book reading mode, control the display screen of the electronic device to switch from a bright screen state to a black screen state, and trigger the starting unit 304 to start the training mode corresponding to the first role.
For the description of the black screen state, please refer to the description in embodiment two, which is not repeated herein.
A second processing unit 307 is configured to, when the current reading mode is an electronic book reading mode, mark the practice content of the first role in the spoken language practice content, display the marked spoken language practice content on the display screen of the electronic device, and trigger the starting unit 304 to start the training mode corresponding to the first role.
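The two reading-mode branches handled by units 306 and 307 can be sketched as one function; the mode labels and the `"> "` marker are arbitrary choices for this example:

```python
def prepare_display(reading_mode, practice_lines, first_role):
    """Paper mode: switch the screen to a black (off) state.
    E-book mode: mark the first role's lines and keep the screen on."""
    if reading_mode == "paper":
        return {"screen_on": False, "lines": []}  # black screen state
    marked = [("> " + line) if line.startswith(first_role) else line
              for line in practice_lines]
    return {"screen_on": True, "lines": marked}
```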
The determining unit 303 is further configured to determine, after the starting unit 304 starts the training mode corresponding to the first role, a second role corresponding to a second user selection gesture from the preset roles when the second user selection gesture is detected; wherein the second persona is different from the first persona.
The electronic device may further include:
the switching unit 308 is configured to switch the training mode corresponding to the first role to the training mode corresponding to the second role.
In this embodiment of the application, the manner that the determining unit 303 is configured to determine, when the second user selection gesture is detected, the second role corresponding to the second user selection gesture from the preset roles may specifically be:
the determining unit 303 is configured to, when a second user selection gesture is detected, determine whether the second user selection gesture is the same as the first user selection gesture, and if the second user selection gesture is different from the first user selection gesture, determine a second role corresponding to the second user selection gesture from the preset roles.
Alternatively, the determining unit 303 is configured to detect whether a role switching instruction is received; if the role switching instruction is received, determine, when a second user selection gesture is detected, whether the second user selection gesture is the same as the first user selection gesture, and if they are different, determine a second role corresponding to the second user selection gesture from the preset roles. The role switching instruction may be input by the user of the electronic device in a contact manner (pressing a virtual/physical button, a contact gesture) or a non-contact manner (voice, an air gesture, or a swing action), which is not limited in this embodiment of the application.
Optionally, the determining unit 303 is further configured to determine, when the first user selection gesture is detected, a first role corresponding to the first user selection gesture from the preset roles, and then determine a target tone of a fourth role from preset tones of the fourth role; the fourth role is a role except the first role in each preset role;
the electronic device may further include:
and a control unit, configured to control, after the starting unit 304 starts the training mode corresponding to the first role, the fourth role to perform spoken language reading according to the target tone in the training mode corresponding to the first role.
Further, the determining unit 303 is also configured to obtain personalized information of the electronic device user after determining the fourth role from the preset roles, where the personalized information may include the age, gender, grade, and preferences of the electronic device user; the manner in which the determining unit 303 obtains the preset tone of the fourth role may specifically be: the determining unit 303 is configured to obtain the preset tone of the fourth role according to the personalized information.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 5, the electronic device may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
the processor 502 calls the executable program code stored in the memory 501 to execute any one of the spoken language training methods shown in fig. 1-2.
An embodiment of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program enables a computer to execute any one of the spoken language training methods shown in fig. 1 to 2.
The embodiment of the present application discloses a computer program product, which, when run on a computer, enables the computer to execute any one of the spoken language training methods shown in fig. 1-2.
An embodiment of the present application discloses an application issuing system, which is configured to issue a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute any one of the spoken language training methods shown in fig. 1 to fig. 2.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, a magnetic disk memory, a magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The foregoing describes in detail a spoken language training method and an electronic device disclosed in the embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application. The step numbers in the specific examples do not indicate an execution order; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiments of the present application. The units described as separate parts may or may not be physically separate, and some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, and B can be determined from A; however, determining B from A does not mean determining B from A alone, as B may also be determined from A and/or other information.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented as a software functional unit and sold or used as a stand-alone product, it may be stored in a memory accessible to a computer. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described methods of the embodiments of the present application.
The above description of the embodiments is only for the purpose of helping to understand the method of the present application and its core ideas; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for spoken language training, the method comprising:
acquiring spoken language practice content;
displaying each preset role corresponding to the spoken language practice content;
under the condition that a first user selection gesture is detected, determining a first role corresponding to the first user selection gesture from all preset roles;
and starting a training mode corresponding to the first character.
2. The method of claim 1, wherein after the initiating of the training mode for the first character, the method further comprises:
under the condition that a second user selection gesture is detected, determining a second role corresponding to the second user selection gesture from all preset roles; wherein the second persona is different from the first persona;
and switching the training mode corresponding to the first role to the training mode corresponding to the second role.
3. The method according to claim 1 or 2, wherein the spoken language practice content is obtained by scanning a paper book page with a camera of an electronic device, and wherein, when a first user selection gesture is detected, after determining the first role corresponding to the first user selection gesture from the preset roles and before starting the training mode corresponding to the first role, the method further comprises:
detecting a current reading mode of the electronic equipment;
when the current reading mode is a paper book reading mode, controlling a display screen of the electronic equipment to be switched from a bright screen state to a black screen state;
when the current reading mode is an electronic book reading mode, marking the practice content of the first role in the spoken language practice content; and displaying the marked spoken language practice content on a display screen of the electronic device.
4. The method of claim 1, wherein after determining the first role corresponding to the first user selection gesture from the preset roles in the case that the first user selection gesture is detected, the method further comprises:
sending each preset role, the first role and the spoken language practice content to terminal equipment associated with electronic equipment;
receiving a third role fed back by the terminal device; wherein the third role is different from the first role;
wherein the starting the training mode corresponding to the first character comprises:
and starting a training mode corresponding to the first role and the third role.
5. The method according to claim 1 or 2, wherein after determining the first role corresponding to the first user selection gesture from the preset roles in the case that the first user selection gesture is detected, the method further comprises:
determining a target tone of a fourth role from preset tones of the fourth role; the fourth role is a role except the first role in each preset role;
after the training mode corresponding to the first character is started, the method further comprises the following steps:
and controlling the fourth character to perform spoken language reading according to the target tone in a training mode corresponding to the first character.
6. An electronic device, comprising:
an acquisition unit configured to acquire spoken language practice content;
the display unit is used for displaying each preset role corresponding to the spoken language practice content;
the determining unit is used for determining a first role corresponding to a first user selection gesture from all preset roles under the condition that the first user selection gesture is detected;
and the starting unit is used for starting the training mode corresponding to the first character.
7. The electronic device according to claim 6, wherein the determining unit is further configured to determine, after the starting unit starts the training mode corresponding to the first character, a second character corresponding to a second user selection gesture from the preset characters when the second user selection gesture is detected; wherein the second persona is different from the first persona;
the electronic device further includes:
and the switching unit is used for switching the training mode corresponding to the first role to the training mode corresponding to the second role.
8. The electronic device according to claim 6 or 7, wherein the spoken language practice content is obtained by scanning a paper book page with a camera of the electronic device, and the electronic device further comprises:
the detection unit is used for detecting the current reading mode of the electronic equipment after the determination unit determines the first role corresponding to the first user selection gesture from the preset roles and before the starting unit starts the training mode corresponding to the first role under the condition that the determination unit detects the first user selection gesture;
the first processing unit is used for controlling the display screen of the electronic equipment to be switched from a bright screen state to a black screen state when the current reading mode is a paper book reading mode;
the second processing unit is used for marking the practice content of the first role in the spoken language practice content when the current reading mode is an electronic book reading mode; and displaying the marked spoken language practice content on a display screen of the electronic device.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium having stored thereon a computer program comprising instructions for carrying out some or all of the steps of the method according to any one of claims 1 to 5.
CN202010428488.3A 2020-05-20 2020-05-20 Spoken language training method and electronic equipment Pending CN111639222A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010428488.3A CN111639222A (en) 2020-05-20 2020-05-20 Spoken language training method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111639222A true CN111639222A (en) 2020-09-08

Family

ID=72332028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010428488.3A Pending CN111639222A (en) 2020-05-20 2020-05-20 Spoken language training method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111639222A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104838386A (en) * 2012-03-30 2015-08-12 电子湾有限公司 User authentication and authorization using personas
CN105425953A (en) * 2015-11-02 2016-03-23 小天才科技有限公司 Man-machine interaction method and system
CN107564510A (en) * 2017-08-23 2018-01-09 百度在线网络技术(北京)有限公司 A kind of voice virtual role management method, device, server and storage medium
US10204525B1 (en) * 2007-12-14 2019-02-12 JeffRoy H. Tillis Suggestion-based virtual sessions engaging the mirror neuron system
CN109637215A (en) * 2019-01-16 2019-04-16 陕西国际商贸学院 A kind of college English Teaching spoken language training system


Similar Documents

Publication Publication Date Title
CN107885826B (en) Multimedia file playing method and device, storage medium and electronic equipment
CN106293403B (en) Learning control method and device in black screen standby state and mobile terminal
CN108877334B (en) Voice question searching method and electronic equipment
CN109165336B (en) Information output control method and family education equipment
CN109783613B (en) Question searching method and system
CN106203235A (en) Live body discrimination method and device
CN111640417A (en) Information input method, device, equipment and computer readable storage medium
CN110019757A (en) Books point reads interaction device and its control method, computer readable storage medium
CN111182387B (en) Learning interaction method and intelligent sound box
CN111639218A (en) Interactive method for spoken language training and terminal equipment
CN111639158B (en) Learning content display method and electronic equipment
CN111176537B (en) Man-machine interaction method in answering process and sound box
CN111724638B (en) AR interactive learning method and electronic equipment
CN113282725A (en) Dialogue interaction method and device, electronic equipment and storage medium
CN112732379A (en) Operation method of application program on intelligent terminal, terminal and storage medium
CN111639222A (en) Spoken language training method and electronic equipment
CN115565518B (en) Method for processing player dubbing in interactive game and related device
CN111639209A (en) Book content searching method, terminal device and storage medium
CN106412272A (en) Method and device for prompting position of mobile terminal and mobile terminal
CN110166351A (en) A kind of exchange method based on instant messaging, device and electronic equipment
CN109712443A (en) A kind of content is with reading method, apparatus, storage medium and electronic equipment
CN111582281B (en) Picture display optimization method and device, electronic equipment and storage medium
CN111079727B (en) Click-to-read control method and electronic equipment
CN111553365B (en) Question selection method and device, electronic equipment and storage medium
CN107733471B (en) Interaction control method, system and equipment based on microphone equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200908