CN111091731B - Dictation prompting method based on electronic equipment and electronic equipment - Google Patents


Info

Publication number
CN111091731B
CN111091731B (granted from application CN201910622524.7A)
Authority
CN
China
Prior art keywords
word
dictation
words
consolidated
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910622524.7A
Other languages
Chinese (zh)
Other versions
CN111091731A (en)
Inventor
彭婕
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201910622524.7A
Publication of CN111091731A
Application granted
Publication of CN111091731B

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/04: Electrically-operated educational appliances with audible presentation of the material to be studied
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a dictation prompting method based on an electronic device, and the electronic device. The method comprises the following steps: when the electronic device is in a dictation mode, collecting behavior information of the user, the behavior information being expression information or voice information; judging, by analyzing the behavior information, whether the current word read aloud by the electronic device is a word to be prompted; and when the current word is a word to be prompted, outputting preset prompt information associated with it. By implementing the embodiment of the invention, the user experience of autonomous dictation can be effectively improved.

Description

Dictation prompting method based on electronic equipment and electronic equipment
Technical Field
The invention relates to the technical field of education, and in particular to a dictation prompting method based on an electronic device, and to the electronic device.
Background
When practicing dictation of words, a student sometimes cannot remember the strokes of a certain Chinese character. When the words are read aloud by a person, the reader can actively provide prompt information that guides the student to recall the strokes of the character, thereby deepening the student's impression of it.
However, with the advent of home tutoring devices with a read-aloud function, more and more students perform autonomous dictation with such a device. Because a device that reads words aloud automatically cannot actively provide prompt information to the student, the user experience of autonomous dictation based on an electronic device is poor.
Disclosure of Invention
The embodiment of the invention discloses a dictation prompting method based on an electronic device, and the electronic device, which can improve the user experience of autonomous dictation.
The first aspect of the embodiments of the present invention discloses a dictation prompting method based on an electronic device, which includes:
when the electronic device is in a dictation mode, collecting behavior information of the user, the behavior information being expression information or voice information;
judging, by analyzing the behavior information, whether the current word read aloud by the electronic device is a word to be prompted;
and if it is a word to be prompted, outputting preset prompt information associated with the current word, the preset prompt information being used to guide the user to recall the strokes for writing the current word.
As an optional implementation, in the first aspect of the embodiment of the present invention, collecting the behavior information of the user when the electronic device is in the dictation mode includes:
when the electronic device is in the dictation mode, detecting whether the user performs a writing action within a preset time after the electronic device reads the current word aloud;
and if no writing action occurs, collecting the behavior information of the user.
As an optional implementation, in the first aspect of the embodiment of the present invention, when the behavior information is expression information, judging, by analyzing the behavior information, whether the current word read aloud by the electronic device is a word to be prompted includes:
obtaining state information of key points of the user's face from the behavior information, the key points being the user's eyebrows, eyes, mouth, nose and facial muscles;
setting an expression label for the user according to the state information of the key points;
when the expression label is a preset label, determining that the current word read aloud by the electronic device is a word to be prompted;
and when the expression label is not the preset label, determining that the current word is not a word to be prompted.
As an optional implementation, in the first aspect of the embodiment of the present invention, after outputting the preset prompt information associated with the current word, the method further includes:
adding a prompt tag to the current word;
when the dictation mode of the electronic device is terminated, obtaining the incorrectly dictated words and the correctly dictated words;
judging whether any correctly dictated word carries the prompt tag;
and if so, recording both the correctly dictated words carrying the prompt tag and the incorrectly dictated words into a word bank of words to be consolidated.
As an optional implementation, in the first aspect of the embodiment of the present invention, recording the correctly dictated words carrying the prompt tag and the incorrectly dictated words into the word bank of words to be consolidated includes:
determining the incorrectly dictated words as first-level words to be consolidated;
determining the correctly dictated words carrying the prompt tag as second-level words to be consolidated;
setting consolidation time periods for the first-level and second-level words to be consolidated respectively according to preset rules;
and storing each first-level word to be consolidated in association with its consolidation time period in the word bank, and storing each second-level word to be consolidated in association with its consolidation time period in the word bank.
A second aspect of an embodiment of the present invention discloses an electronic device, including:
a collecting unit, used to collect behavior information of the user when the electronic device is in a dictation mode, the behavior information being expression information or voice information;
a judging unit, used to judge, by analyzing the behavior information, whether the current word read aloud by the electronic device is a word to be prompted;
and a prompting unit, used to output preset prompt information associated with the current word when it is a word to be prompted, the preset prompt information being used to guide the user to recall the strokes for writing the current word.
As an optional implementation, in the second aspect of the embodiment of the present invention, the manner of collecting the behavior information of the user when the electronic device is in the dictation mode is specifically:
the collecting unit is used to detect, when the electronic device is in the dictation mode, whether the user performs a writing action within a preset time after the electronic device reads the current word aloud, and to collect the behavior information of the user when no writing action is detected.
As an optional implementation, in the second aspect of the embodiment of the present invention, when the behavior information is expression information, the judging unit includes:
an obtaining subunit, used to obtain state information of key points of the user's face from the behavior information, the key points being the user's eyebrows, eyes, mouth, nose and facial muscles;
a setting subunit, used to set an expression label for the user according to the state information of the key points;
and a determining subunit, used to determine, when the expression label is a preset label, that the current word read aloud by the electronic device is a word to be prompted, and to determine, when the expression label is not the preset label, that the current word is not a word to be prompted.
As an optional implementation, in the second aspect of the embodiment of the present invention, the electronic device further includes:
a setting unit, used to add a prompt tag to the current word after the prompting unit outputs the preset prompt information associated with the current word;
an obtaining unit, used to obtain the incorrectly dictated words and the correctly dictated words when the dictation mode of the electronic device is terminated;
and a recording unit, used to judge whether any correctly dictated word carries the prompt tag, and, when such words exist, to record both the correctly dictated words carrying the prompt tag and the incorrectly dictated words into a word bank of words to be consolidated.
As an optional implementation, in the second aspect of the embodiment of the present invention, the manner in which the recording unit records the correctly dictated words carrying the prompt tag and the incorrectly dictated words into the word bank of words to be consolidated is specifically:
the recording unit is used to determine the incorrectly dictated words as first-level words to be consolidated; determine the correctly dictated words carrying the prompt tag as second-level words to be consolidated; set consolidation time periods for the first-level and second-level words to be consolidated respectively according to preset rules; store each first-level word to be consolidated in association with its consolidation time period in the word bank; and store each second-level word to be consolidated in association with its consolidation time period in the word bank.
A third aspect of an embodiment of the present invention discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program comprising a program for performing some or all of the steps of any one of the methods of the first aspect of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present invention discloses an application distribution system, configured to distribute a computer program product, where the computer program product, when running on a computer, causes the computer to perform part or all of the steps of any one of the methods in the first aspect.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
In the embodiment of the present invention, when the electronic device is in a dictation mode, behavior information of the user is collected, the behavior information being expression information or voice information; whether the current word read aloud by the electronic device is a word to be prompted is judged by analyzing the behavior information; and when it is, preset prompt information associated with the current word is output. By implementing the embodiment of the present invention, the behavior information of the user during dictation is monitored, and prompt information guiding the user to recall the strokes for writing the current word is provided actively, which helps deepen the user's impression of the current word and can effectively improve the user experience of autonomous dictation.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a dictation prompting method based on an electronic device disclosed in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another dictation prompting method based on an electronic device disclosed in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another dictation prompting method based on an electronic device disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terms "comprises," "comprising," and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiments of the present invention disclose a dictation prompting method based on an electronic device, and the electronic device, which can effectively improve the user experience of autonomous dictation. In the embodiments of the present invention, the dictation prompting method may be applied to various electronic devices such as smart phones, smart watches and tablet computers; the embodiments of the present invention are not limited in this respect. The operating system of each electronic device may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, and the like.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a dictation prompting method based on an electronic device according to an embodiment of the present invention. The dictation prompting method based on the electronic device shown in fig. 1 may specifically include the following steps:
101. When the electronic device is in a dictation mode, collect behavior information of the user; the behavior information is expression information or voice information.
In this embodiment of the present invention, the dictation mode of the electronic device may be divided into two types: a dictation mode in which prompting is allowed, and a dictation mode in which prompting is prohibited; steps 101 to 103 occur in the mode in which prompting is allowed. If the behavior information is expression information, the device collecting it may be a camera module; if the behavior information is voice information, the device collecting it may be a microphone. It should be noted that the camera module may be installed on the electronic device or may be an independent device connected to it wirelessly (via Bluetooth or Wi-Fi); the embodiment of the present invention is not limited in this respect. Similarly, the microphone may be installed on the electronic device or may be an independent device connected to it wirelessly (via Bluetooth or Wi-Fi).
As an optional implementation, in the embodiment of the present invention, collecting the behavior information of the user when the electronic device is in the dictation mode may include:
when the electronic device is in the dictation mode, detecting whether the user performs a writing action within a preset time after the electronic device reads the current word aloud;
and if no writing action occurs, collecting the behavior information of the user.
By implementing this method, collection of the behavior information is gated by the occurrence of a writing action, so the electronic device does not need to collect behavior information continuously, which can effectively reduce its power consumption.
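The gating logic above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `collect_if_idle`, the sensor callbacks, and the timing values are all assumed names and parameters.

```python
import time

def collect_if_idle(writing_detected, wait_seconds, capture_fn, poll=0.01):
    """Wait up to wait_seconds after a word is read aloud; if no writing
    action is observed in that window, capture the user's behavior info."""
    deadline = time.time() + wait_seconds
    while time.time() < deadline:
        if writing_detected():
            return None  # user is writing; skip capture, saving power
        time.sleep(poll)
    return capture_fn()  # no writing observed: capture expression/voice

# Demo with stubbed sensors: the user never writes, so capture runs.
info = collect_if_idle(lambda: False, 0.05, lambda: "expression-frame")
print(info)
```

In a real device, `writing_detected` would poll a touch layer or stylus sensor and `capture_fn` would grab a camera frame or an audio buffer.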
102. Judge, by analyzing the behavior information, whether the current word read aloud by the electronic device is a word to be prompted; if so, execute step 103; if not, end the flow.
As described above, the behavior information may be expression information or voice information. When the behavior information is expression information, please refer to the following method embodiment for the specific way of judging whether the current word read aloud by the electronic device is a word to be prompted; it is not repeated here.
If the behavior information is voice information, judging, by analyzing the behavior information, whether the current word read aloud by the electronic device is a word to be prompted may include:
analyzing whether a preset keyword exists in the behavior information;
and when a preset keyword exists, determining that the current word read aloud by the electronic device is a word to be prompted; when no preset keyword exists, determining that it is not.
By implementing this method, multiple dictation prompting modes are provided to the user, which can further improve the user experience.
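The voice branch above reduces to a keyword check over the transcribed speech. A minimal sketch follows; the keyword list and function name are illustrative assumptions, and the patent does not specify the actual keywords.

```python
# Hypothetical preset keywords that mark a request for a prompt.
PROMPT_KEYWORDS = ("怎么写", "提示", "hint")

def needs_prompt(transcript, keywords=PROMPT_KEYWORDS):
    """Return True if the transcribed speech contains any preset keyword,
    marking the current word as a word to be prompted."""
    return any(k in transcript for k in keywords)

print(needs_prompt("请给我一个提示"))  # keyword "提示" present
print(needs_prompt("好的，我会写"))    # no keyword present
```

A production device would first run speech recognition on the captured audio to obtain `transcript`.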
103. Output the preset prompt information associated with the current word; the preset prompt information is used to guide the user to recall the strokes for writing the word to be prompted.
As an optional implementation, in the embodiment of the present invention, after the judgment in step 102 is affirmative and before step 103, the following steps may also be performed:
obtaining identity information of the user from the behavior information;
determining, in a virtual character library, a target virtual character matching the identity information of the user;
obtaining audio feature information of the target virtual character;
determining a prompt text for the current word in a prompt information base;
and synthesizing the audio feature information and the prompt text to obtain the preset prompt information associated with the current word.
In the embodiment of the present invention, the virtual character library of the electronic device contains multiple virtual characters. The user can select a target virtual character before starting autonomous dictation, and on detecting the selection the electronic device binds the target virtual character to the user's identity information, so that the dictation prompt information is output in the voice of the target virtual character.
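The binding and synthesis steps above can be sketched as below. All names (`VirtualCharacter`, `bind_character`, `make_prompt`), the voice-model id, and the sample prompt text are assumptions for illustration; a real device would hand the result to a text-to-speech engine.

```python
from dataclasses import dataclass

@dataclass
class VirtualCharacter:
    name: str
    timbre: str  # placeholder for the character's audio feature (voice-model id)

character_bindings = {}                  # user identity -> bound character
prompt_texts = {"银": "金字旁加一个艮"}  # word -> prompt text (illustrative)

def bind_character(user_id, character):
    """Bind the user's chosen target virtual character to their identity."""
    character_bindings[user_id] = character

def make_prompt(user_id, word):
    """Combine the bound character's audio features with the word's prompt
    text, yielding the material a TTS engine would speak."""
    ch = character_bindings[user_id]
    return {"voice": ch.timbre, "text": prompt_texts[word]}

bind_character("u1", VirtualCharacter("Sun Wukong", "voice-07"))
print(make_prompt("u1", "银"))
```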
By implementing this method, the user experience of autonomous dictation can be effectively improved, the power consumption of the electronic device can be effectively reduced, and the dictation effect can be improved by making autonomous dictation more engaging for the user.
Example two
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another dictation prompting method based on electronic equipment according to an embodiment of the present invention. The dictation prompting method based on the electronic device shown in fig. 2 may specifically include the following steps:
201. When the electronic device is in a dictation mode, collect behavior information of the user; the behavior information is expression information or voice information.
For a detailed description of step 201, please refer to the description of step 101 in the first embodiment; it is not repeated here.
202. When the behavior information is expression information, obtain state information of key points of the user's face from the behavior information; the key points are the user's eyebrows, eyes, mouth, nose and facial muscles.
203. Set an expression label for the user according to the state information of the key points.
In the embodiment of the present invention, the possible states of each key point include:
  • Eyebrows: normal, arched, frowning, raised, or straightened.
  • Eyes: normal, squinting, looking down (which can be judged by whether the upper eyelid droops), glaring, or looking up (which can be judged by whether the eyeball turns upward).
  • Mouth: normal, corners raised (judged by whether the corners of the mouth turn upward), corners lowered (judged by whether the arc formed by the mouth turns downward), open (judged by whether the mouth is open or forms an O shape), teeth clenched (judged by whether the user bites or grinds the teeth), or closed (pouting or pressed shut).
  • Nose: normal, nostrils contracted (judged by whether the muscles around the nose are tight), nostrils flared (judged by whether the muscles around the nose are relaxed), or nose wrinkled (judged by whether the root of the nose contracts).
  • Facial muscles: normal, tight (judged by whether the user appears strained and whether the muscles are visibly tight), or relaxed.
204. When the expression label is a preset label, determine that the current word read aloud by the electronic device is a word to be prompted.
In the embodiment of the present invention, when the expression label is not the preset label, it is determined that the current word read aloud by the electronic device is not a word to be prompted.
In the embodiment of the present invention, the electronic device may determine the expression label with an expression recognition model, which may be obtained through deep learning. In that case, setting the expression label of the user according to the state information of the key points includes: feeding the state information of the key points into the expression recognition model to obtain the expression label of the user. Implementing this method can improve the accuracy of determining the expression label.
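To make the labeling step concrete, here is a rule-based stand-in for the learned model: it maps the key-point states of steps 202 to 203 to a label. The sign table, threshold, and label names are assumptions; the patent's actual model is a trained classifier, not these rules.

```python
# Key-point states that tend to indicate puzzlement (assumed, illustrative).
PUZZLED_SIGNS = {"eyebrows": "frowning", "eyes": "squinting",
                 "mouth": "corners lowered", "nose": "nostrils contracted",
                 "muscles": "tight"}

def expression_label(states):
    """Label the expression 'puzzled' if at least two key points show a
    puzzled sign, else 'normal'."""
    hits = sum(1 for k, v in states.items() if PUZZLED_SIGNS.get(k) == v)
    return "puzzled" if hits >= 2 else "normal"

label = expression_label({"eyebrows": "frowning", "eyes": "squinting",
                          "mouth": "normal", "nose": "normal",
                          "muscles": "normal"})
print(label)
```

If `label` equals the preset label ("puzzled" here), step 204 treats the current word as a word to be prompted.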
205. Output the preset prompt information associated with the current word; the preset prompt information is used to guide the user to recall the strokes for writing the word to be prompted.
By implementing this method, the user experience of autonomous dictation can be effectively improved, the power consumption of the electronic device can be effectively reduced, the dictation effect can be improved by making autonomous dictation more engaging, and the accuracy of determining the expression label can be improved.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating another dictation prompting method based on an electronic device disclosed in an embodiment of the present invention. The dictation prompting method shown in fig. 3 may specifically include the following steps:
For detailed descriptions of steps 301 to 305, please refer to the descriptions of steps 201 to 205 in the second embodiment; they are not repeated here.
306. Add a prompt tag to the current word.
307. When the dictation mode of the electronic device is terminated, obtain the incorrectly dictated words and the correctly dictated words.
308. Judge whether any correctly dictated word carries the prompt tag; if so, execute step 309; if not, end the flow.
309. Record both the correctly dictated words carrying the prompt tag and the incorrectly dictated words into a word bank of words to be consolidated.
By executing steps 306 to 309, after the user completes autonomous dictation, the correctly dictated words that needed a prompt and the incorrectly dictated words are recorded automatically, which makes subsequent consolidation convenient for the user.
In the embodiment of the present invention, when the judgment in step 308 is negative, the incorrectly dictated words may still be recorded into the word bank of words to be consolidated.
As an optional implementation, in the embodiment of the present invention, recording the correctly dictated words carrying the prompt tag and the incorrectly dictated words into the word bank of words to be consolidated may include:
determining the incorrectly dictated words as first-level words to be consolidated;
determining the correctly dictated words carrying the prompt tag as second-level words to be consolidated;
setting consolidation time periods for the first-level and second-level words to be consolidated respectively according to preset rules;
and storing each first-level word to be consolidated in association with its consolidation time period in the word bank, and storing each second-level word to be consolidated in association with its consolidation time period in the word bank.
By implementing this method, separate consolidation time periods are set for the correctly dictated words that needed a prompt and for the incorrectly dictated words, so the user's subsequent consolidation is more reasonable and more efficient.
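The two-level split above can be sketched as a small function; `record_for_review` and its return shape are illustrative assumptions, not the patent's data model.

```python
def record_for_review(wrong_words, correct_words, prompted):
    """Split words into two consolidation levels: dictation errors become
    level 1; correctly written words that needed a prompt become level 2."""
    level1 = list(wrong_words)
    level2 = [w for w in correct_words if w in prompted]
    return {"level1": level1, "level2": level2}

# "银" was written correctly but only after a prompt; "魑" was written wrong.
bank = record_for_review(["魑"], ["银", "好"], prompted={"银"})
print(bank)
```

Words that were written correctly without a prompt ("好" here) are not recorded at all.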
Further optionally, setting consolidation time periods for the first-level words to be consolidated and the second-level words to be consolidated respectively according to the preset rule may include:
acquiring a history time period for the user to learn words from a history learning record of the user;
obtaining a target time period range for the user to learn words according to the historical time period; wherein the target time period range is 24 hours;
determining the consolidation time period of the first-level words to be consolidated by combining the target time period range with a first preset consolidation curve for the first-level words to be consolidated;
and determining the consolidation time period of the second-level words to be consolidated by combining the target time period range with a second preset consolidation curve for the second-level words to be consolidated.
In the above embodiment, the first preset consolidation curve and the second preset consolidation curve may be obtained according to the Ebbinghaus forgetting curve: the first preset consolidation curve may reflect the duration of each consolidation of the first-level words to be consolidated and the number of days between two adjacent consolidations, and similarly, the second preset consolidation curve may reflect the duration of each consolidation of the second-level words to be consolidated and the number of days between two adjacent consolidations. By implementing this method, the resulting consolidation time periods for the first-level and second-level words to be consolidated both conform to the user's learning habits and are scientifically grounded, which can further improve the user's consolidation efficiency.
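A forgetting-curve-based schedule of this kind can be sketched as below. The concrete interval days and the window format are illustrative assumptions; the patent only states that the curves reflect per-session duration and days between adjacent consolidations, placed within the user's habitual learning window.

```python
from datetime import date, timedelta

# Review intervals (in days) loosely shaped like the Ebbinghaus forgetting
# curve; first-level (error) words get one extra early review. The numbers
# are assumptions for illustration, not values from the patent.
LEVEL_INTERVALS = {1: [1, 2, 4, 7, 15], 2: [2, 4, 7, 15]}

def consolidation_schedule(level, start, daily_window):
    """One review session per interval day, each placed inside the user's
    habitual learning window (derived from the history learning record)."""
    return [(start + timedelta(days=d), daily_window)
            for d in LEVEL_INTERVALS[level]]
```

For a word missed on 2024-01-01 and a user who usually studies 19:00-19:30, the first review lands on 2024-01-02 in that same window.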
By implementing the above method, the user experience during independent dictation can be effectively improved, the power consumption of the electronic device can be reduced, the interest of independent dictation can be enhanced and the user's dictation effect thereby improved, the determination accuracy of the expression label can be increased, and the words carrying prompt tags among the dictation correct words together with the dictation error words can be automatically recorded, which facilitates subsequent consolidation, makes the user's subsequent consolidation more reasonable, and further improves consolidation efficiency.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 4, the electronic device may include:
the acquisition unit 401 is configured to acquire behavior information of a user when the electronic device is in a dictation mode; wherein the behavior information is expression information or voice information.
The determining unit 402 is configured to determine whether the current word read by the electronic device is a word to be prompted by analyzing the behavior information.
The prompting unit 403 is configured to output preset prompting information associated with the current word when the current word reported and read by the electronic device is a word to be prompted, where the preset prompting information is used to guide a user to think of a writing stroke of the current word.
As an optional implementation manner, in an embodiment of the present invention, when the electronic device is in the dictation mode, the manner of the collecting unit 401 used for collecting the behavior information of the user may specifically be:
the collecting unit 401 is configured to detect whether a user performs a writing action within a preset time period after the electronic device reads a current word when the electronic device is in a dictation mode, and collect behavior information of the user when the writing action is detected.
If the behavior information is voice information, the manner in which the determining unit 402 determines, by analyzing the behavior information, whether the current word read aloud by the electronic device is a word to be prompted may specifically be:
a determining unit 402, configured to analyze whether a preset keyword exists in the behavior information; and when the preset keywords exist, determining that the current words read by the electronic equipment are words to be prompted, and when the preset keywords do not exist, determining that the current words read by the electronic equipment are not words to be prompted. By implementing the method, a plurality of dictation prompt modes are provided for the user, and the use experience of the user can be further improved.
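The keyword check performed by the determining unit can be sketched as follows. The keyword list is a hypothetical example; the patent only requires that preset keywords be matched against the recognized speech.

```python
PRESET_KEYWORDS = ("how to write", "hint", "forgot")  # illustrative keywords

def needs_prompt(recognized_speech):
    """Sketch of the voice branch: the current word is treated as a word
    to be prompted iff the recognized speech contains a preset keyword."""
    text = recognized_speech.lower()
    return any(keyword in text for keyword in PRESET_KEYWORDS)
```

A substring match is the simplest choice here; a production system would more likely match against the output of a speech recognizer with fuzzy tolerance.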
As an optional implementation manner, in an embodiment of the present invention, the determining unit 402 may be further configured to, when it is determined that the current word read by the electronic device is a word to be prompted, obtain identity information of the user according to the behavior information; determining a target virtual character matched with the identity information of the user in the virtual character library; acquiring audio characteristic information of the target virtual character; determining a prompt text of the current word in a prompt information base; and synthesizing the audio characteristic information and the prompt text to obtain preset prompt information associated with the current word.
In the embodiment of the invention, the virtual character library of the electronic equipment comprises a plurality of virtual characters, a user can select a target virtual character before the user performs autonomous dictation, and when the electronic equipment detects the target virtual character selected by the user, the target virtual character can be bound with the identity information of the user, so that the dictation prompt information is output in the tone of the target virtual character.
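The character binding and prompt synthesis described above can be sketched with plain lookups. All container shapes and names here are assumptions; the patent describes the flow (bound character → audio features → hint text → synthesized prompt) but not a concrete API.

```python
def synthesize_prompt(user_id, bindings, character_voices, hint_texts, word):
    """Sketch: find the virtual character bound to the user, fetch its
    audio feature info, and pair it with the hint text for the word."""
    character = bindings[user_id]           # character bound at selection time
    voice = character_voices[character]     # audio feature info of the character
    return {"voice": voice, "text": hint_texts[word]}
```

The returned pair stands in for the final text-to-speech step, which would render the hint text in the bound character's tone.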
By implementing the above electronic device, the user experience during independent dictation can be effectively improved, the power consumption of the electronic device can be reduced, and the user's dictation effect can be improved by enhancing the interest of independent dictation.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 5 is optimized from the electronic device shown in fig. 4, and as shown in fig. 5, the determining unit 402 in the electronic device may include:
an obtaining subunit 4021, configured to obtain state information of a key point of the face of the user from the behavior information; wherein the key points are eyebrows, eyes, mouth, nose, and muscles of the user.
The setting subunit 4022 is configured to set an expression label of the user according to the state information of the key points.
In this embodiment of the present invention, the electronic device may determine the expression label by using an expression recognition model, where the expression recognition model may be obtained based on deep learning. The manner in which the setting subunit 4022 sets the expression label of the user according to the state information of the key points may specifically be: the setting subunit 4022 is configured to import the state information of the key points into the expression recognition model to obtain the expression label of the user. By implementing this method, the determination accuracy of the expression label can be improved.
The determining subunit 4023 is configured to determine, when the expression label is a preset label, that the current word read aloud by the electronic device is a word to be prompted, and when the expression label is not the preset label, that the current word is not a word to be prompted.
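The expression branch above can be sketched end to end. The rule-based classifier below is a toy stand-in for the deep-learning expression recognition model, and the key-point states and preset label are hypothetical examples.

```python
PRESET_LABELS = {"confused"}  # labels that trigger a prompt (assumption)

def expression_label(keypoint_states):
    """Toy stand-in for the expression recognition model: maps face
    key-point states (eyebrows, eyes, mouth, nose, muscles) to a label.
    A real implementation would run a trained classifier instead."""
    if (keypoint_states.get("eyebrows") == "furrowed"
            and keypoint_states.get("mouth") == "tight"):
        return "confused"
    return "neutral"

def is_word_to_prompt(keypoint_states):
    # The current word is to be prompted iff the label is a preset label
    return expression_label(keypoint_states) in PRESET_LABELS
```

Keeping the label set separate from the classifier mirrors the patent's split between the setting subunit (labeling) and the determining subunit (comparison with preset labels).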
By implementing the above electronic device, the user experience during independent dictation can be effectively improved, the power consumption of the electronic device can be reduced, the user's dictation effect can be improved by enhancing the interest of independent dictation, and the determination accuracy of the expression label can be improved.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 6 is optimized from the electronic device shown in fig. 5, and the electronic device shown in fig. 6 may further include:
a setting unit 404, configured to add a prompt tag to the current word after the prompt unit 403 outputs preset prompt information associated with the current word.
It should be noted that, the prompt unit 403 may be further configured to send a start instruction to the setting unit 404 after outputting preset prompt information associated with the current word, so as to trigger the setting unit 404 to perform the operation of adding the prompt tag to the current word.
An obtaining unit 405, configured to obtain the dictation error word and the dictation correct word when the dictation mode of the electronic device is terminated.
The recording unit 406 is configured to determine whether a word carrying the prompt tag exists among the dictation correct words, and when such a word exists, to record the words carrying the prompt tag in the dictation correct words, together with the dictation error words, into the word library to be consolidated.
As an optional implementation manner, in an embodiment of the present invention, the manner in which the recording unit 406 records the words carrying the prompt tags in the dictation correct words, together with the dictation error words, into the word library to be consolidated may specifically be:
the recording unit 406 is configured to determine the dictation error word as a first-level word to be consolidated; determining the words carrying the prompt tags in the dictation correct words as second-level words to be consolidated; and setting a first-level word to be consolidated and a second-level word to be consolidated respectively according to a preset rule, storing the first-level word to be consolidated and the consolidation time period corresponding to the first-level word to be consolidated into a word bank to be consolidated in an associated manner, and storing the second-level word to be consolidated and the consolidation time period corresponding to the second-level word to be consolidated into the word bank to be consolidated in an associated manner. By implementing the method, the consolidation time periods are respectively set for words carrying the prompt tags in the dictation correct words and words carrying the dictation wrong words, so that the subsequent consolidation of the user is more reasonable, and the consolidation efficiency is higher.
Further optionally, the manner in which the recording unit 406 sets the consolidation time periods for the first-level and second-level words to be consolidated according to the preset rule may specifically be:
the recording unit 406 is configured to obtain a history time period for the user to learn words from the history learning record of the user; obtaining a target time period range for the user to learn words according to the historical time period; wherein the target time period range is 24 hours; the target time period range and a first preset consolidation curve aiming at the first-stage words to be consolidated are integrated, and the consolidation time period of the first-stage words to be consolidated is determined; and determining the consolidation time period of the second-level words to be consolidated by integrating the target time period range and a second preset consolidation curve aiming at the second-level words to be consolidated.
In the above embodiment, the first preset consolidation curve and the second preset consolidation curve may be obtained according to the Ebbinghaus forgetting curve: the first preset consolidation curve may reflect the duration of each consolidation of the first-level words to be consolidated and the number of days between two adjacent consolidations, and similarly, the second preset consolidation curve may reflect the duration of each consolidation of the second-level words to be consolidated and the number of days between two adjacent consolidations. By implementing this method, the resulting consolidation time periods for the first-level and second-level words to be consolidated both conform to the user's learning habits and are scientifically grounded, which can further improve the user's consolidation efficiency.
By implementing the above electronic device, the user experience during independent dictation can be effectively improved, the power consumption of the electronic device can be reduced, the user's dictation effect can be improved by enhancing the interest of independent dictation, the determination accuracy of the expression label can be improved, and the words carrying prompt tags among the dictation correct words together with the dictation error words can be automatically recorded, which facilitates the user's subsequent consolidation, makes it more reasonable, and further improves consolidation efficiency.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 7, the electronic device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute any one of the dictation prompting methods based on the electronic device in fig. 1 to 3.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any one dictation prompting method based on electronic equipment in figures 1-3.
The embodiment of the invention discloses a computer program product, which enables a computer to execute any one dictation prompting method based on electronic equipment in figures 1-3 when the computer program product runs on the computer.
The embodiment of the invention discloses an application distribution system, configured to distribute a computer program product, wherein when the computer program product runs on a computer, the computer is caused to execute any one of the dictation prompting methods based on the electronic device in figures 1-3.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The dictation prompting method based on the electronic device and the electronic device disclosed in the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention. The numbering of the steps in the specific examples does not imply a fixed execution order; the execution order of each process should be determined by its function and internal logic and should not limit the implementation of the embodiments of the present invention. The units described as separate parts may or may not be physically separate, and some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A and can be determined from A; however, determining B from A does not mean determining B from A alone, as B may also be determined from A and/or other information. In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented as a software functional unit and sold or used as a stand-alone product, it may be stored in a memory accessible to a computer. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described methods of the embodiments of the present invention.
The above description of the embodiments is only intended to facilitate the understanding of the method of the invention and its core ideas; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A dictation prompting method based on electronic equipment is characterized by comprising the following steps:
when the electronic equipment is in a dictation mode, behavior information of a user is collected; the behavior information is expression information or voice information;
judging whether the current word reported and read by the electronic equipment is a word to be prompted or not by analyzing the behavior information;
if the word is the word to be prompted, outputting preset prompting information associated with the current word; the preset prompt information is used for guiding a user to think of the writing strokes of the current word;
when the electronic device is in the dictation mode, acquiring behavior information of a user, including:
when the electronic equipment is in a dictation mode, detecting whether a user writes within a preset time length after the electronic equipment reads a current word or not;
and if the writing action does not occur, acquiring behavior information of the user.
2. The method of claim 1, wherein when the behavior information is the emotion information, the determining whether the current word read by the electronic device is a word to be prompted by analyzing the behavior information includes:
acquiring state information of key points of the face of the user from the behavior information; wherein the key points are eyebrows, eyes, mouth, nose and muscles of the user;
setting an expression label of a user according to the state information of the key point;
when the expression label is a preset label, determining that the current word reported and read by the electronic equipment is a word to be prompted;
and when the expression label is not the preset label, determining that the current word is not the word to be prompted.
3. The method according to claim 1 or 2, wherein after outputting the preset hint information associated with the current word, the method further comprises:
adding a prompt tag for the current word;
when the dictation mode of the electronic equipment is terminated, acquiring dictation error words and dictation correct words;
judging whether words carrying the prompt tags exist in the dictation correct words or not;
and if the word exists, recording the word carrying the prompt tag in the dictation correct word and the dictation error word into a word library to be consolidated.
4. The method according to claim 3, wherein the step of recording the words, which carry the prompt tags, in the dictation correct words and the dictation incorrect words into a word bank to be consolidated comprises:
determining the dictation error words as first-level words to be consolidated;
determining the words carrying the prompt tags in the dictation correct words as second-level words to be consolidated;
setting consolidation time periods for the first-level words to be consolidated and the second-level words to be consolidated respectively according to preset rules;
and storing the first-level words to be consolidated and the consolidation time periods corresponding to the first-level words to be consolidated into a word bank to be consolidated in a correlated manner, and storing the second-level words to be consolidated and the consolidation time periods corresponding to the second-level words to be consolidated into the word bank to be consolidated in a correlated manner.
5. An electronic device, comprising:
the acquisition unit is used for acquiring the behavior information of the user when the electronic equipment is in a dictation mode; the behavior information is expression information or voice information;
the judging unit is used for judging whether the current word reported and read by the electronic equipment is the word to be prompted or not by analyzing the behavior information;
the prompting unit is used for outputting preset prompting information associated with the current word when the current word is the word to be prompted, wherein the preset prompting information is used for guiding a user to think out the writing strokes of the current word;
the acquisition unit is used for acquiring the behavior information of the user when the electronic equipment is in a dictation mode in a specific way:
the acquisition unit is used for detecting whether a user writes within a preset time after the electronic equipment reads the current word when the electronic equipment is in a dictation mode, and acquiring behavior information of the user when the writing is detected.
6. The electronic device according to claim 5, wherein when the behavior information is the expression information, the determination unit includes:
the obtaining subunit is used for obtaining the state information of key points of the face of the user from the behavior information; wherein the key points are eyebrows, eyes, mouth, nose and muscles of the user;
the setting subunit is used for setting an expression label of the user according to the state information of the key point;
the determining subunit is configured to determine, when the expression tag is a preset tag, that the current word read by the electronic device is a word to be prompted; and when the expression label is not the preset label, determining that the current word is not the word to be prompted.
7. The electronic device of claim 5 or 6, further comprising:
the setting unit is used for adding a prompt tag to the current word after the prompt unit outputs preset prompt information associated with the current word;
an acquisition unit configured to acquire a dictation error word and a dictation correct word when the dictation mode of the electronic device is terminated;
and the receiving and recording unit is used for judging whether the words carrying the prompt tags exist in the dictation correct words or not, and receiving and recording the words carrying the prompt tags in the dictation correct words and the dictation error words into a word library to be consolidated when the words carrying the prompt tags exist in the dictation correct words.
8. The electronic device according to claim 7, wherein the manner for receiving and recording the words carrying the prompt tag in the dictation correct words and the dictation incorrect words in the word bank to be consolidated by the receiving and recording unit is specifically:
the recording unit is used for determining the dictation error words as first-level words to be consolidated; determining the words carrying the prompt tags in the dictation correct words as second-level words to be consolidated; and according to preset rules, setting the first-level to-be-consolidated word and the second-level to-be-consolidated word consolidation time periods respectively, storing the first-level to-be-consolidated word and the consolidation time period corresponding to the first-level to-be-consolidated word in a word bank to be consolidated in an associated manner, and storing the second-level to-be-consolidated word and the consolidation time period corresponding to the second-level to-be-consolidated word in the word bank to be consolidated in an associated manner.
CN201910622524.7A 2019-07-11 2019-07-11 Dictation prompting method based on electronic equipment and electronic equipment Active CN111091731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910622524.7A CN111091731B (en) 2019-07-11 2019-07-11 Dictation prompting method based on electronic equipment and electronic equipment

Publications (2)

Publication Number Publication Date
CN111091731A CN111091731A (en) 2020-05-01
CN111091731B true CN111091731B (en) 2021-11-26

Family

ID=70393347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910622524.7A Active CN111091731B (en) 2019-07-11 2019-07-11 Dictation prompting method based on electronic equipment and electronic equipment

Country Status (1)

Country Link
CN (1) CN111091731B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101706A (en) * 2006-07-05 2008-01-09 香港理工大学 Chinese writing study machine and Chinese writing study method
CN109544421A (en) * 2018-12-20 2019-03-29 合肥凌极西雅电子科技有限公司 A kind of intelligent tutoring management system and method based on children
CN109635096A (en) * 2018-12-20 2019-04-16 广东小天才科技有限公司 A kind of dictation reminding method and electronic equipment
CN109669661A (en) * 2018-12-20 2019-04-23 广东小天才科技有限公司 A kind of control method and electronic equipment of dictation progress

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CH713517B1 (en) * 2017-02-27 2021-03-31 Molex S R L System of electronic devices for increased learning paths and simultaneous constant evaluation of the users' learning level.
US20190087830A1 (en) * 2017-09-15 2019-03-21 Pearson Education, Inc. Generating digital credentials with associated sensor data in a sensor-monitored environment
US20190180637A1 (en) * 2017-12-08 2019-06-13 The Regents Of The University Of Colorado, A Body Corporate Virtually Resilient Simulator


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant