CN111081079A - Dictation control method and device based on dictation condition - Google Patents

Dictation control method and device based on dictation condition

Info

Publication number
CN111081079A
Authority
CN
China
Prior art keywords: user, dictation, target content, content, intelligent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910387346.4A
Other languages
Chinese (zh)
Inventor
崔颖
Current Assignee
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd filed Critical Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201910387346.4A
Publication of CN111081079A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/04: Electrically-operated educational appliances with audible presentation of the material to be studied
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a dictation control method and device based on dictation conditions, wherein the method includes: reading aloud target content that the user is to write from dictation, and monitoring the user's dictation condition for the target content after it has been read; judging, according to the dictation condition for the target content, whether an intelligent reading mode needs to be started; and, when it is judged that the intelligent reading mode needs to be started, starting the intelligent reading mode and outputting, in that mode, first prompt information matched with the read target content for the user's reference. By implementing the embodiment of the invention, whether the intelligent reading mode needs to be started can be judged intelligently from the user's dictation condition during dictation practice and, if so, prompt information matched with the dictation content is output for the user's reference, which improves the user's dictation effect and accuracy, and in turn the user's dictation experience.

Description

Dictation control method and device based on dictation condition
Technical Field
The invention relates to the technical field of intelligent terminal devices, and in particular to a dictation control method and device based on dictation conditions.
Background
At present, because traditional dictation requires at least one reader to read the dictation content aloud, it cannot meet a dictating person's need to practice anywhere and at any time. For this reason, dictation applications, dictation terminals, learning applications with a dictation function, and the like have appeared on the market, bringing an intelligent dictation mode that lets the dictating person practice whenever and wherever they like. Taking a dictation terminal as an example: when a dictating person wants to practice, they simply open the terminal and select the dictation content, and the terminal reads the content aloud automatically so that the dictating person can complete the exercise.
Practice shows that, because some dictation content sounds similar but differs in meaning, and because a dictating person's dictation ability is limited, the current intelligent dictation mode can leave the dictating person unable to identify the actual dictation content. For example, the terminal reads aloud a word meaning "plant" and the dictating person takes it to be the homophone meaning "job" (in the original Chinese, both are pronounced zhíwù), which reduces the dictation effect and accuracy.
Disclosure of Invention
The embodiment of the invention discloses a dictation control method and device based on dictation conditions, which can improve the dictation effect and the dictation accuracy of a user.
The first aspect of the embodiment of the invention discloses a dictation control method based on dictation conditions, which comprises the following steps:
reading aloud target content that the user is to write from dictation, and monitoring the user's dictation condition for the target content after it has been read;
judging, according to the dictation condition for the target content, whether an intelligent reading mode needs to be started;
and, when it is judged that the intelligent reading mode needs to be started, starting the intelligent reading mode and outputting, in that mode, first prompt information matched with the read target content for the user's reference.
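The three claimed steps can be sketched as a minimal control loop. All names below are placeholders for components the patent leaves abstract (reading, monitoring, judging, and prompting), not part of the disclosed method:

```python
# Minimal sketch of the claimed method; read_aloud, monitor,
# needs_smart_mode and output_prompt are placeholder callables
# standing in for components the patent does not specify.
def dictation_round(target, read_aloud, monitor, needs_smart_mode, output_prompt):
    read_aloud(target)                 # step 1: read the target content aloud
    condition = monitor(target)        # step 1: monitor the dictation condition
    if needs_smart_mode(condition):    # step 2: judge whether to start the mode
        output_prompt(target)          # step 3: output first prompt information
        return "prompted"
    return "ok"
```

The return value is only illustrative; a real terminal would loop over the dictation content set rather than return after one word.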
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the dictation condition includes at least the duration for which the user has not begun writing after the target content was read aloud;
the judging whether an intelligent reading mode needs to be started according to the dictation condition for the target content includes:
judging whether the duration is greater than or equal to a preset duration threshold, and determining that the intelligent reading mode needs to be started when it is.
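The hesitation check above reduces to a single comparison. The threshold value and function name here are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch of the hesitation check: start the intelligent
# reading mode once the user has gone at least `threshold` seconds
# without putting pen to paper.  The 5-second value is an assumption.
HESITATION_THRESHOLD_S = 5.0  # preset duration threshold

def should_start_smart_reading(seconds_without_writing: float,
                               threshold: float = HESITATION_THRESHOLD_S) -> bool:
    """True when the no-writing duration reaches the preset threshold."""
    return seconds_without_writing >= threshold
```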
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the dictation condition further includes the content the user has written after the target content was read aloud;
wherein the method further comprises:
when the duration is judged to be smaller than the preset duration threshold, judging whether the content the user has written matches the target content;
and determining that the intelligent reading mode needs to be started when it does not.
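The combined rule (hesitated too long, or wrote something that does not match) can be sketched as below. A real terminal would need handwriting recognition to obtain the written text; here it is assumed to already be a string, and prefix matching is an illustrative choice of match criterion:

```python
# Sketch of the combined check.  Prefix matching is an assumption:
# the patent only requires judging whether the written content
# matches the target content.
def matches_target(written: str, target: str) -> bool:
    """True when the recognized writing so far is a prefix of the target."""
    written = written.strip()
    return bool(written) and target.startswith(written)

def needs_smart_mode(duration_s: float, threshold_s: float,
                     written: str, target: str) -> bool:
    # Start the mode on long hesitation, or on a mismatching answer.
    if duration_s >= threshold_s:
        return True
    return not matches_target(written, target)
```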
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before outputting, in the intelligent reading mode, first prompt information matching the target content that has been read for reference by a user, the method further includes:
locating the user's current geographic position, and determining the user's current real-time scene from that position;
the outputting, in the intelligent reading mode, of the first prompt information matched with the read target content for the user's reference includes:
in the intelligent reading mode, acquiring first prompt information that matches the read target content and suits the real-time scene, and outputting it.
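Scene-aware prompt selection can be sketched as a lookup keyed by (word, scene). The scene names and prompt texts below are invented purely for illustration:

```python
# Hypothetical prompt table: the same word gets a different hint
# depending on the user's real-time scene.  All entries are invented.
PROMPTS = {
    ("plant", "classroom"): "It grows in soil and needs sunlight.",
    ("plant", "outdoors"):  "Look around you: the green things nearby are examples.",
}

def first_prompt(target: str, scene: str,
                 default: str = "Think of the word's meaning.") -> str:
    """Return the scene-matched prompt, or a generic fallback."""
    return PROMPTS.get((target, scene), default)
```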
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after outputting, in the intelligent reading mode, first prompt information matching the target content that has been read for reference by a user, the method further includes:
collecting the user's facial expression information, and judging from it whether the user has determined, with the help of the first prompt information, the writing content matched with the target content;
when it is judged from the facial expression information that the user has not, outputting second prompt information matched with the target content;
wherein the second prompt information contains more content than the first prompt information, or prompts the target content to a higher degree than the first prompt information does.
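The escalation rule (each prompt must be stronger than the last) can be sketched as an ordered list of hint levels. The levels and texts are assumptions; the patent only requires that the second prompt carry more content or a higher prompting degree than the first:

```python
# Illustrative prompt ladder, ordered from weakest to strongest hint.
prompts_by_level = [
    "Hint: it is a noun.",         # level 0: part of speech
    "Hint: it grows in soil.",     # level 1: meaning
    "Hint: the word is 'pl__t'.",  # level 2: partial spelling
]

def next_prompt(last_level: int) -> tuple:
    """Return (level, text) of the next, stronger prompt; clamps at the top."""
    level = min(last_level + 1, len(prompts_by_level) - 1)
    return level, prompts_by_level[level]
```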
The second aspect of the embodiment of the invention discloses a dictation control device based on dictation conditions, which comprises:
a reading module for reading aloud the target content that the user is to write from dictation;
a monitoring module for monitoring the user's dictation condition for the target content after the reading module has read it;
a first judging module for judging, according to the dictation condition monitored by the monitoring module, whether an intelligent reading mode needs to be started;
a starting module for starting the intelligent reading mode when the first judging module judges that it needs to be started;
and an output module for outputting, in the intelligent reading mode after the starting module has started it, first prompt information matched with the read target content for the user's reference.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the dictation condition includes at least the duration for which the user has not begun writing after the target content was read aloud;
the first judgment module comprises a judgment submodule and a determination submodule, wherein:
the judgment submodule is used for judging whether the duration is greater than or equal to a preset duration threshold;
the determining submodule is used for determining that the intelligent reading mode needs to be started when the judging submodule judges that the duration is greater than or equal to the preset duration threshold.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the dictation condition further includes the content the user has written after the target content was read aloud;
the judging submodule is further used for judging, when the duration is judged to be smaller than the preset duration threshold, whether the content the user has written matches the target content;
and the determining submodule is further configured to determine that the intelligent reading mode needs to be started when the judging submodule judges that it does not.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the apparatus further includes:
a positioning module for locating the user's current geographic position before the output module outputs, in the intelligent reading mode, the first prompt information matched with the read target content for the user's reference;
and a determining module for determining the user's current real-time scene from the geographic position located by the positioning module;
wherein the output module outputs the first prompt information specifically by:
in the intelligent reading mode, acquiring first prompt information that matches the read target content and suits the real-time scene, and outputting it.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the apparatus further includes:
a collection module for collecting the user's facial expression information after the output module outputs, in the intelligent reading mode, the first prompt information matched with the read target content for the user's reference;
and a second judging module for judging, from the facial expression information, whether the user has determined, with the help of the first prompt information, the writing content matched with the target content;
the output module being further configured to output second prompt information matched with the target content when the second judging module judges that the user has not;
wherein the second prompt information contains more content than the first prompt information, or prompts the target content to a higher degree than the first prompt information does.
A third aspect of the embodiments of the present invention discloses another dictation control apparatus based on dictation conditions, the apparatus including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute all or part of the steps of any one of the methods disclosed in the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute all or part of the steps of any one of the methods disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, target content that the user is to write from dictation is read aloud and, after it has been read, the user's dictation condition for the target content is monitored; whether an intelligent reading mode needs to be started is judged according to the dictation condition for the target content; and when it is judged that the intelligent reading mode needs to be started, the mode is started and first prompt information matched with the read target content is output in it for the user's reference. By implementing the embodiment of the invention, whether the intelligent reading mode needs to be started can thus be judged intelligently from the user's dictation condition during dictation practice and, if so, prompt information matched with the dictation content is output for the user's reference, which improves the user's dictation effect and accuracy, and in turn the user's dictation experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a dictation control method based on dictation conditions disclosed in an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another dictation control method based on dictation conditions disclosed in the embodiments of the present invention;
FIG. 3 is a schematic structural diagram of a dictation control apparatus based on dictation conditions according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another dictation control device based on dictation conditions, disclosed in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of another dictation control device based on dictation conditions according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, of embodiments of the present invention are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a dictation control method and device based on dictation conditions that can intelligently judge, from the user's dictation condition during dictation practice, whether an intelligent reading mode needs to be started and, if so, output prompt information matched with the dictation content for the user's reference, thereby improving the user's dictation effect, accuracy, and experience. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a dictation control method based on dictation conditions according to an embodiment of the present invention. The method shown in fig. 1 may be applied to any user terminal with a dictation control function, such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a smart wearable device, and a Mobile Internet Device (MID), and the embodiment of the present invention is not limited thereto. As shown in fig. 1, the dictation control method based on dictation conditions may include the following operations:
101. The user terminal reads aloud the target content that the user is to write from dictation and, after reading it, monitors the user's dictation condition for the target content.
In the embodiment of the present invention, the user terminal's reading aloud of the target content may include:
the user terminal determining the content that currently needs to be read as the target content, and reading it aloud according to a detected dictation reading instruction for that content.
As an optional implementation manner, the user terminal may determine the content that currently needs to be read according to a reading content selection operation triggered by the user, generate a dictation reading instruction from a dictation reading request triggered by the user, and read the target content aloud according to that instruction. The user-triggered selection operation and the user-triggered dictation reading request may be one combined operation or two independent ones.
As another optional implementation, the user terminal may determine the content that currently needs to be read according to a reading content selection operation sent by a management terminal with which it has established a connection in advance, generate a dictation reading instruction from the dictation reading request sent by the management terminal, and read the target content aloud according to that instruction. The selection operation and the dictation reading request sent by the management terminal may be one combined message or two independent ones.
As another alternative implementation, the user terminal may determine the target content from a predetermined reading order over the contents in a dictation content set and from which of those contents have already been read, automatically generate a dictation control instruction a target duration after the previous content has been completely read, and read the target content aloud according to that instruction. Here the previous content is the one whose reading order immediately precedes that of the target content. The target duration may be a preset fixed duration, or a duration the user terminal derives from the content parameters of the previous content, where the content parameters may include at least one of content length, content complexity, and content type. The reading interval can thus be adapted to the content being read, which helps avoid both the case where too short an interval leaves the user unprepared to write and the case where too long an interval makes dictation inefficient. In this alternative implementation, after the user terminal reads the target content, monitoring the user's dictation condition for it may specifically be:
starting a timer at the moment the target content has been read, and monitoring the user's dictation condition for the target content while the timer runs up to the duration corresponding to the target content's parameters. It should be noted that if a dictation condition for the target content is detected at some moment during that interval, the user terminal stops the timer and triggers step 102.
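The adaptive interval above can be sketched as a function of the content parameters. The weights are invented for illustration; the patent only requires that the interval grow with the content's length and complexity:

```python
# Illustrative sketch of the adaptive reading interval.  The base of
# 3 seconds and the per-character / per-complexity weights are
# assumptions, not values from the patent.
def reading_interval(length: int, complexity: float, base: float = 3.0) -> float:
    """Seconds to wait after reading a word before moving on, scaled by content."""
    return base + 0.5 * length + 2.0 * complexity
```

Longer or more complex words thus get a longer writing window, as the text above motivates.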
102. The user terminal judges, according to the dictation condition for the target content, whether an intelligent reading mode needs to be started; when the result of step 102 is yes, step 103 is triggered; when it is no, the process may end.
In the embodiment of the present invention, it should be noted that when the judgment in step 102 is negative, the user terminal does not need to start the intelligent reading mode; it can take the content to be read after the target content as the new target content for the user (also called the dictating person) to write, and trigger step 101 again. No extra prompt is given when the user shows no hesitation in writing, which helps keep redundant prompt output from lengthening the dictation.
In the embodiment of the invention, the intelligent reading mode serves to output prompt information matched with the target content when, after the target content has been read aloud, the user's dictation condition shows that the user cannot understand or confirm it.
103. The user terminal starts the intelligent reading mode and outputs, in that mode, first prompt information matched with the read target content for the user's reference.
In the embodiment of the present invention, the first prompt information matched with the target content serves to prompt the user about the target content. The user terminal can output it by voice and/or as text; when output as text, it may appear in a pop-up text box. Optionally, if the textual first prompt information contains all or part of the target content itself, the overlapping content is processed in a preset way before display, for example replaced by other characters or covered with a mosaic, so that the prompt does not simply reveal the answer.
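The masking step for textual prompts can be sketched as below; the choice of `*` as the replacement character is an illustrative assumption:

```python
# Sketch of the preset masking step: any run of characters in the
# prompt identical to the target content is replaced before display.
def mask_prompt(prompt: str, target: str, mask: str = "*") -> str:
    """Replace occurrences of the target content with mask characters."""
    if target in prompt:
        return prompt.replace(target, mask * len(target))
    return prompt
```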
In an embodiment of the present invention, when the target content is a word the user is to write, the first prompt information may include information prompting the word's part of speech, its application scene, or the chronological order of the content, or information explaining its meaning (a word group containing it, a specific explanation, a sentence using it, a hint excluding its homophones, the original text sentence it comes from, and so on).
In an optional embodiment, the dictation control method may further include:
starting a timer at the moment any prompt information matched with the target content is output, monitoring the user's dictation condition for the target content while the timer runs up to the duration corresponding to the target content's parameters, and, if the dictation condition shows that the user has still not written or confirmed the target content, outputting in the intelligent reading mode further prompt information matched with the target content, namely at least one not-yet-output prompt from the set of prompts matched with the target content.
Further optionally, each newly output prompt prompts the target content to a higher degree than any prompt matched with the target content that was already output earlier in the current dictation.
Still further optionally, after determining that the intelligent reading mode needs to be started according to the dictation condition of the target content, the user terminal may further perform the following operations:
the user terminal checks the total number of times prompt information matched with the target content has been output; when that total has reached a preset threshold, it waits until the timer has run up to the duration corresponding to the target content's parameters, then takes the content to be read after the target content as the new target content for the user (also called the dictating person) to write, and triggers step 101 again;
and when the total has not reached the preset threshold, the user terminal performs the operation of starting the intelligent reading mode.
This optional embodiment can therefore output prompt information matched with the target content several times when the user has not written or mastered it, improving dictation accuracy. Outputting prompts in order of increasing strength guides the user progressively, both hinting at the target content and triggering the user's own thinking, so the user also learns knowledge related to it; and capping the number of prompts avoids the low dictation efficiency that results when a user cannot confirm the content for a long time.
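The prompt-count cap described above can be sketched as a small decision function; the limit of 3 is an illustrative assumption, since the patent only speaks of a preset times threshold:

```python
# Sketch of the prompt-count guard: keep prompting for the current
# word while under the limit, otherwise move on to the next word.
MAX_PROMPTS = 3  # preset times threshold (assumed value)

def decide_action(prompts_given: int, limit: int = MAX_PROMPTS) -> str:
    """'prompt' while under the limit, 'next_word' once it is reached."""
    return "prompt" if prompts_given < limit else "next_word"
```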
Still further optionally, in the process that the timed duration gradually increases to the duration corresponding to the content parameter of the target content, the user terminal may further perform the following operations:
the user terminal monitors the user's dictation condition for the target content and judges from it whether the user has confirmed the target content; after judging that the user has, it judges whether the user has written the corresponding dictation content correctly and, if so, attaches a prompt-count label to the target content, the label indicating how many prompts matched with the target content the terminal output before the user wrote it correctly in this dictation;
and when the user has not confirmed the target content, or has not written the corresponding dictation content correctly, the user terminal attaches a not-mastered label to the target content, the label indicating that the user could not accurately write the dictation content matched with the target content in this dictation.
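The labelling step can be sketched with a small record type. The dataclass and field names are assumptions introduced for illustration:

```python
# Sketch of the labelling step: each dictated word ends up with either
# a prompt-count label (written correctly after N prompts) or a
# not-mastered label (never written correctly).
from dataclasses import dataclass
from typing import Optional

@dataclass
class DictationRecord:
    word: str
    prompt_count: Optional[int] = None  # set when written correctly
    not_mastered: bool = False          # set when never written correctly

def label(word: str, correct: bool, prompts_used: int) -> DictationRecord:
    """Attach the label the text above describes for one dictated word."""
    if correct:
        return DictationRecord(word, prompt_count=prompts_used)
    return DictationRecord(word, not_mastered=True)
```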
Still further optionally, the dictation control method based on dictation conditions may further include the operations of:
after the dictation process is finished, the user terminal screens out a first content set with a prompt frequency label from all contents read in the dictation process, and screens out a second content set with an unconfined label from all contents read in the dictation process;
and the user terminal evaluates the user's dictation achievement in the dictation process based on the first content set and a third content set, where the third content set is the set of remaining content, among all the content read out during the dictation process, other than the content included in the first content set and the second content set.
Therefore, this optional embodiment can intelligently set corresponding labels for the different contents read out during the user's dictation process, and after the process ends, intelligently and comprehensively evaluate the user's dictation achievement according to those labels, which improves the accuracy of the dictation achievement.
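The screening and evaluation just described can be sketched as follows. The half-credit weighting for prompted items is purely an illustrative assumption, since the disclosure does not fix a scoring formula.

```python
def evaluate_achievement(all_contents, labels):
    """Screen the labelled content sets and compute a rough dictation score.

    all_contents: every item read out during the dictation process.
    labels:       item -> ("prompt_frequency", n) or ("unmastered", None);
                  items absent from labels were written correctly unprompted.
    """
    first = [c for c in all_contents
             if labels.get(c, ("", 0))[0] == "prompt_frequency"]
    third = [c for c in all_contents if c not in labels]
    if not all_contents:
        return 0.0
    # Full credit for unprompted correct items, half credit for prompted ones;
    # unmastered items earn nothing.
    return (len(third) + 0.5 * len(first)) / len(all_contents)
```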
It should be noted that if it is determined, according to the user's writing condition, that the intelligent reading mode needs to be started and the mode is already started, the operation of outputting, in the intelligent reading mode, the first prompt information matched with the read-out target content for the user's reference may be performed directly.
Therefore, by implementing the dictation control method based on dictation conditions described in fig. 1, whether the intelligent reading mode needs to be started can be intelligently judged according to the user's dictation condition during dictation exercise, and if so, prompt information matched with the dictation content is output for the user's reference, which improves the user's dictation effect and dictation accuracy and thereby improves the user's dictation experience.
Example Two
Referring to fig. 2, fig. 2 is a schematic flow chart of another dictation control method based on dictation conditions disclosed in the embodiment of the present invention. The method shown in fig. 2 may be applied to any user terminal with a dictation control function, such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a smart wearable device, and a Mobile Internet Device (MID), and the embodiment of the present invention is not limited thereto. As shown in fig. 2, the dictation control method based on dictation conditions may include the following operations:
201. The user terminal reads out target content that the user needs to dictate, and monitors the user's dictation condition for the target content after reading it out.
202. The user terminal judges whether the intelligent reading mode needs to be started according to the dictation condition for the target content; when the judgment result in step 202 is yes, step 203 is triggered and executed; when the judgment result in step 202 is no, this flow may end.
As an alternative implementation, the dictation condition for the target content may include the duration for which the user has not written after the target content is read out and/or the content the user has written after it is read out. The user terminal may determine these by monitoring the writing trace of a preset writing area of the user terminal, by monitoring the writing trace of a writing panel connected with the user terminal in advance, or by monitoring the writing trace of an electronic writing pen connected with the user terminal in advance, which is not limited in the embodiments of the present invention.
In this optional implementation, when the dictation condition of the target content includes a duration that the user does not write after reading the target content, the determining, by the user terminal, whether the intelligent reading mode needs to be turned on according to the dictation condition of the target content may include:
and the user terminal judges whether the duration is greater than or equal to a preset duration threshold, and when it is, determines that the intelligent reading mode needs to be started.
When the duration is greater than or equal to the preset duration threshold, the user terminal confirms that the user has delayed starting to write, determines that the user is puzzled by the read-out target content, and triggers execution of step 203, thereby realizing an intelligent prompt function for the case where the user is puzzled by the target content. The user's puzzlement about the read-out target content may include the following situations: the user did not hear the read-out target content; the user does not know the read-out target content; the user knows the target content but, because of homophonic characters, cannot specifically confirm the dictation content matching it; or the user has confirmed the dictation content matching the target content but does not know the correct writing trace. The embodiments of the present invention are not limited thereto.
In this optional implementation, when the dictation condition for the target content includes the content written by the user after the target content is read out, the written content is that monitored during the period from the moment the target content is read out until the timed duration reaches the duration corresponding to the target content. The user terminal judging whether to start the intelligent reading mode according to the dictation condition for the target content may include:
the user terminal judges whether the content written by the user is the writing content matched with the target content, and when it is not, determines that the intelligent reading mode needs to be started.
The determining, by the user terminal, whether the content written by the user in a pen is the writing content matched with the target content may include:
when the contents written by the user are empty, the user terminal determines that the contents written by the user are not the writing contents matched with the target contents;
when the content written by the user is not empty, the user terminal determines a first writing track of the content written by the user, determines a second writing track of the writing content matched with the target content, judges the similarity between the first writing track and the second writing track, and confirms that the content written by the user is not the writing content matched with the target content when the similarity does not exceed a preset threshold (for example, 50%). It can be seen that this alternative embodiment can determine whether the user confirms the target content by the similarity between the written trajectory and the actual trajectory.
It should be noted that, when the similarity exceeds a preset threshold (e.g. 50%), two cases may be included: the user confirms the target content and correctly writes the dictation content matched with the target content, and the user confirms the target content but does not completely correctly write the dictation content matched with the target content.
For example, if the target content is a two-character word and the trajectory of the content written by the user covers the first character of that word, this indicates that the user has confirmed the target content but has not yet correctly written the second character; whereas if the written trajectory corresponds to a homophonic character, a partial character, or an unrelated character, this indicates that the user has not confirmed the target content.
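The empty/non-empty branches and the 50% similarity threshold can be sketched as below. The point-overlap similarity is a toy stand-in for a real stroke-trajectory comparison; the function names are assumptions.

```python
def trajectory_similarity(written, reference):
    """Toy similarity: the fraction of reference points also present in the
    written trajectory. A real system would compare stroke sequences."""
    if not reference:
        return 0.0
    return sum(1 for p in reference if p in written) / len(reference)


def user_confirmed_target(written, reference, threshold=0.5):
    """Mirror the two branches above: empty writing never matches; otherwise
    the writing matches only when similarity exceeds the preset threshold."""
    if not written:
        return False
    return trajectory_similarity(written, reference) > threshold
```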
In this optional implementation, when the dictation condition of the target content includes a duration that the user does not write after reading the target content and a content that the user writes after reading the target content, the determining, by the user terminal, whether to start the intelligent reading mode according to the dictation condition for the target content may include:
the user terminal judges whether the duration is greater than or equal to the preset duration threshold, and when it is, determines that the intelligent reading mode needs to be started;
and when the duration is judged to be less than the preset duration threshold, the user terminal judges whether the content written by the user is the writing content matched with the target content, and when it is not, determines that the intelligent reading mode needs to be started.
Therefore, this optional implementation can combine the duration for which the user has not written after the target content is read out with the content the user has written when that duration is below the preset duration threshold, in order to judge whether the intelligent reading mode needs to be started, which improves the accuracy of starting the intelligent reading mode.
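The combined judgment above can be sketched as a short predicate. The 5-second threshold is an assumed value, since the disclosure only speaks of a preset duration threshold.

```python
PRESET_DURATION_THRESHOLD = 5.0  # seconds; illustrative value only


def should_start_intelligent_reading(no_write_duration, written_matches_target):
    """Start the intelligent reading mode when the user has gone too long
    without writing, or has written content that does not match the target."""
    if no_write_duration >= PRESET_DURATION_THRESHOLD:
        return True
    return not written_matches_target
```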
203. And the user terminal starts an intelligent reading mode.
204. The user terminal locates the current geographical position of the user and determines the current real-time scene of the user according to the geographical position.
205. In the intelligent reading mode, the user terminal acquires first prompt information matched with the read target content in the real-time scene and outputs the first prompt information.
In the embodiment of the present invention, optionally, all the prompt information matched with the target content may be stored in a dictation database of the user terminal, or the user terminal may, after the intelligent reading mode is started, search a search engine accessible to it for all the prompt information matched with the target content. Further optionally, the user terminal obtaining the first prompt information matched with the read-out target content in the real-time scene may include:
the user terminal analyzes the specific content of each prompt message to obtain a scene label corresponding to each prompt message, and screens a prompt message set matched with the real-time scene from all prompt messages matched with the reported target content on the basis of the real-time scene and the scene label corresponding to each prompt message;
and the user terminal screens out, based on a predetermined screening rule, at least one piece of not-yet-output prompt information from the prompt information set matched with the real-time scene, as the first prompt information matched with the read-out target content in the real-time scene.
The screening rule is a rule that prefers the simplest content, a rule that prefers the lowest prompt degree, or a rule matched with the user's learning ability. It should be noted that if the prompt information matched with the target content needs to be output multiple times, then each time prompt information is to be output, the user terminal may screen out, according to the predetermined screening rule, at least one piece of not-yet-output prompt information from the prompt information set matched with the real-time scene as the prompt information to be output.
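The scene screening plus the "simplest content first" screening rule can be sketched as follows. Representing each prompt as a dict with `scene` and `text` keys is an assumption made for illustration.

```python
def pick_next_prompt(prompts, scene, already_output):
    """Screen prompts by scene label, drop those already output, and pick
    the remaining one with the simplest (shortest) content, per the
    simple-content screening rule. Returns None when nothing is left."""
    candidates = [p for p in prompts
                  if p["scene"] == scene and p["text"] not in already_output]
    if not candidates:
        return None
    return min(candidates, key=lambda p: len(p["text"]))
```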
Therefore, after the intelligent reading mode is started, the user's current real-time scene can be determined from the geographic position obtained by positioning, and the prompt information matched with the read-out content in that scene can be output, which improves the user's acceptance of the prompt information, improves the efficiency and accuracy with which the user recognizes the target content from it, and enhances the interest of dictation.
In an optional embodiment, after completing step 205, the dictation control method based on dictation condition may further include the following operations:
206. The user terminal collects the user's facial expression information and judges, according to it, whether the user has determined the writing content matched with the target content from the first prompt information; when the judgment result in step 206 is no, step 207 is triggered and executed; when the judgment result in step 206 is yes, this flow may end.
207. And the user terminal outputs second prompt information matched with the target content.
In the embodiment of the present invention, the second prompt information contains more content than the first prompt information, or prompts the target content to a greater degree than the first prompt information does.
In the embodiment of the present invention, the user terminal may acquire the user's facial expression information through an image acquisition device (e.g., a camera) on the user terminal. The facial expression information may be represented by the change condition of target features among all the user's facial features; that is, the change condition of the target features is used to determine the user's facial expression information, and the target features may be at least one of mouth features, eye features, and eyebrow features.
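A crude version of the expression judgment can be sketched from the named target features. The weights and threshold below are invented for illustration; a real system would use a trained classifier over the mouth, eye, and eyebrow features.

```python
def appears_puzzled(mouth_change, eye_change, eyebrow_change, threshold=0.3):
    """Infer from the change condition of the target features whether the
    user still seems puzzled, i.e. whether a second prompt is needed.
    Each input is a normalized change magnitude in [0, 1]."""
    score = 0.5 * eyebrow_change + 0.3 * mouth_change + 0.2 * eye_change
    return score >= threshold
```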
In an alternative embodiment, if the user terminal does not collect the facial expression information of the user, the user terminal may perform the following operations:
if the image acquisition device on the user terminal is rotatable, the user terminal controls it to rotate until it captures the user's facial expression information; further, the user's face captured by the device may be kept at the center of its field of view, which leaves the user a larger range of movement and reduces the chance that slight shaking prevents the device from capturing the facial expression information;
if the image acquisition device on the user terminal is a non-rotatable image acquisition device, the user terminal outputs a voice prompt to prompt the user to move until the image acquisition device can acquire facial expression information of the user or prompt the user to move the user terminal until the image acquisition device can acquire the facial expression information of the user.
Therefore, the optional embodiment can improve the reliability and accuracy of the collected facial expression information of the user.
It can be seen that, by implementing the dictation control method based on dictation conditions described in fig. 2, whether the intelligent reading mode needs to be started can be intelligently judged according to the user's dictation condition during dictation exercise, and if so, prompt information matched with the dictation content is output for the user's reference, which improves the user's dictation effect and accuracy and thereby the dictation experience. It also improves the user's acceptance of the prompt information, improves the efficiency and accuracy of recognizing the read-out content from it, and enhances the interest of dictation.
Example Three
Referring to fig. 3, fig. 3 is a schematic structural diagram of a dictation control apparatus based on dictation conditions according to an embodiment of the present invention. As shown in fig. 3, the dictation control apparatus based on dictation conditions may include:
and the reading module 301 is configured to read target content that needs to be written by a user.
A monitoring module 302, configured to monitor a dictation condition of the user for the target content after the reading module 301 reads the target content.
The first determining module 303 is configured to determine whether the intelligent reading mode needs to be started according to the dictation condition for the target content monitored by the monitoring module 302.
The starting module 304 is configured to start the intelligent reading mode when the first determining module 303 determines that the intelligent reading mode needs to be started.
The output module 305 is configured to, after the starting module 304 starts the intelligent reading mode, output in that mode the first prompt information matched with the target content read out by the reading module 301, for the user's reference.
In an alternative embodiment, the dictation condition of the target content may include the duration of the user's non-pen writing after reading the target content and/or the user's pen writing after reading the target content. As shown in fig. 4, the first determining module 303 may include a determining sub-module 3031 and a determining sub-module 3032.
In this alternative embodiment, when the dictation condition of the target content includes a duration that the user does not write after reading the target content, the determining sub-module 3031 is configured to determine whether the duration is greater than or equal to a preset duration threshold.
The determining submodule 3032 is configured to determine that the intelligent reading mode needs to be started when the determining submodule 3031 determines that the duration is greater than or equal to the preset duration threshold.
In this alternative embodiment, when the dictation condition of the target content includes the content written by the user after reading the target content, the determining sub-module 3031 is configured to determine whether the content written by the user is the writing content matching the target content.
The determining submodule 3032 is configured to determine that the intelligent reading mode needs to be started when the determining submodule 3031 determines that the content written by the pen of the user is not the writing content matched with the target content.
In this alternative embodiment, when the dictation condition of the target content includes a duration that the user has not written after reading the target content and a content that the user has written after reading the target content, the determining sub-module 3031 is configured to determine whether the duration is greater than or equal to a preset duration threshold.
The determining submodule 3032 is configured to determine that the intelligent reading mode needs to be started when the determining submodule 3031 determines that the duration is greater than or equal to the preset duration threshold.
The judging submodule 3031 is further configured to judge whether the content written by the user is the writing content matched with the target content when it judges that the duration is less than the preset duration threshold.
The determining submodule 3032 is further configured to determine that the intelligent reading mode needs to be started when the judging submodule 3031 judges that the duration is less than the preset duration threshold and that the content written by the user is not the writing content matched with the target content.
In another alternative embodiment, as shown in fig. 4, the dictation control apparatus based on dictation condition may further include:
and the positioning module 306 is configured to position the user's current geographic location before the output module 305 outputs, in the intelligent reading mode, the first prompt information matched with the read-out target content for the user's reference.
A determining module 307, configured to determine the current real-time scene of the user according to the geographic location located by the locating module 306.
The specific way for the output module 305 to output the first prompt information matched with the read target content in the intelligent reading mode for the user to refer to is as follows:
and under the intelligent reading mode, acquiring first prompt information matched with the read target content in a real-time scene, and outputting the first prompt information.
In yet another alternative embodiment, as shown in fig. 4, the dictation control apparatus based on dictation condition may further include:
the collecting module 308 is configured to collect facial expression information of the user after the outputting module 305 outputs the first prompt information matching the reported target content for the user to refer to in the smart reading mode.
A second judging module 309, configured to judge, according to the facial expression information, whether the user determines, according to the first prompt information, the writing content that matches the target content.
The output module 305 is further configured to output a second prompt message matching the target content when the second determining module 309 determines, according to the facial expression information, that the user does not determine the written content matching the target content according to the first prompt message.
The second prompt information contains more content than the first prompt information, or prompts the target content to a greater degree than the first prompt information does.
It can be seen that, by implementing the dictation control apparatus based on dictation conditions described in fig. 4, whether the intelligent reading mode needs to be started can be intelligently judged according to the user's dictation condition during dictation exercise, and if so, prompt information matched with the dictation content is output for the user's reference, which improves the user's dictation effect and accuracy and thereby the dictation experience. It also improves the user's acceptance of the prompt information, improves the efficiency and accuracy of recognizing the read-out content from it, and enhances the interest of dictation.
Example Four
Referring to fig. 5, fig. 5 is a schematic structural diagram of another dictation control apparatus based on dictation conditions according to an embodiment of the present invention. As shown in fig. 5, the dictation control apparatus based on dictation conditions may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
the processor 502 calls the executable program code stored in the memory 501 to execute the steps in the dictation control method based on dictation situations described in fig. 1 or fig. 2.
Example Five
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program for electronic data exchange, wherein the computer program enables a computer to execute the steps in the dictation control method based on dictation conditions, which is described in figure 1 or figure 2.
Example Six
An embodiment of the invention discloses a computer program product, which comprises a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to make a computer execute the steps in the dictation control method based on dictation situations described in fig. 1 or fig. 2.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, from which B can be determined. It should also be understood, however, that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer accessible memory. Based on such understanding, the technical solution of the present invention, which is a part of or contributes to the prior art in essence, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of each embodiment of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by instructions associated with a program, which may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape, or any other medium which can be used to carry or store data and which can be read by a computer.
The dictation control method and device based on dictation conditions disclosed in the embodiments of the present invention are described in detail above, and specific embodiments are applied in the text to explain the principle and the implementation of the present invention, and the description of the above embodiments is only used to help understanding the method and the core ideas of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A dictation control method based on dictation conditions, the method comprising:
reading out target content that a user needs to dictate, and monitoring the user's dictation condition for the target content after reading it out;
judging whether an intelligent reading mode needs to be started according to the dictation condition aiming at the target content;
and when it is judged that the intelligent reading mode needs to be started, starting the intelligent reading mode, and outputting, in the intelligent reading mode, first prompt information matched with the read-out target content for the user's reference.
2. A dictation control method as claimed in claim 1, characterized in that the dictation conditions comprise at least the duration of the pen-free writing by the user after reading the target content;
the method for judging whether the intelligent reading mode needs to be started according to the dictation condition aiming at the target content comprises the following steps:
and judging whether the duration is greater than or equal to a preset duration threshold, and determining that an intelligent reading mode needs to be started when the duration is greater than or equal to the preset duration threshold.
3. A dictation control method as claimed in claim 2, characterized in that the dictation conditions further comprise what the user writes with a pen after reading the target content;
wherein the method further comprises:
when the duration is judged to be smaller than the preset duration threshold, judging whether the content written by the user in the pen is the writing content matched with the target content;
and when the content written by the user is judged not to be the writing content matched with the target content, determining that the intelligent reading mode needs to be started.
4. The dictation control method based on dictation conditions as claimed in any of claims 1-3, wherein before outputting, in the intelligent reading mode, the first prompt information matched with the read-out target content for the user's reference, the method further comprises:
positioning the current geographic position of a user, and determining the current real-time scene of the user according to the geographic position;
the outputting, in the intelligent reading mode, of the first prompt information matched with the read-out target content for the user's reference includes:
and under the intelligent reading mode, acquiring first prompt information matched with the target content which is read in the real-time scene, and outputting the first prompt information.
5. The dictation control method based on dictation conditions as claimed in any of claims 1-4, wherein after outputting, in the intelligent reading mode, the first prompt information matched with the read-out target content for the user's reference, the method further comprises:
collecting facial expression information of a user, and judging whether the user determines writing content matched with the target content according to the first prompt information according to the facial expression information;
when the fact that the user does not determine the writing content matched with the target content according to the first prompt information is judged according to the facial expression information, outputting second prompt information matched with the target content;
the second prompt information contains more content than the first prompt information, or prompts the target content to a greater degree than the first prompt information does.
6. A dictation control apparatus based on dictation conditions, the apparatus comprising:
the reading module is used for reading target contents needing to be written by the user;
the monitoring module is used for monitoring the dictation condition of a user aiming at the target content after the target content is read by the reading module;
the first judgment module is used for judging whether the intelligent reading mode needs to be started according to the dictation condition for the target content monitored by the monitoring module;
the starting module is used for starting the intelligent reading mode when the first judging module judges that the intelligent reading mode needs to be started;
and the output module is used for outputting first prompt information matched with the reported target content in the intelligent reporting mode for a user to refer to after the intelligent reporting mode is started by the starting module.
7. The dictation control apparatus as claimed in claim 6, characterized in that the dictation condition comprises at least the duration for which the user does not write with a pen after the target content has been read aloud;
the first judging module comprises a judging submodule and a determining submodule, wherein:
the judging submodule is configured to judge whether the duration reaches or exceeds a preset duration threshold; and
the determining submodule is configured to determine that the intelligent read-aloud mode needs to be started when the judging submodule judges that the duration reaches or exceeds the preset duration threshold.
8. The dictation control apparatus as claimed in claim 7, characterized in that the dictation condition further comprises the content that the user writes with a pen after the target content has been read aloud;
the judging submodule is further configured to judge, when it judges that the duration is less than the preset duration threshold, whether the content written by the user matches the target content; and
the determining submodule is further configured to determine that the intelligent read-aloud mode needs to be started when the judging submodule judges that the content written by the user does not match the target content.
9. The dictation control apparatus as claimed in any one of claims 6-8, characterized in that the apparatus further comprises:
a positioning module, configured to locate the user's current geographic position before the output module outputs, in the intelligent read-aloud mode, the first prompt information matching the target content that has been read aloud for the user's reference; and
a determining module, configured to determine the user's current real-time scene according to the geographic position located by the positioning module;
wherein the output module outputs, in the intelligent read-aloud mode, the first prompt information matching the target content that has been read aloud for the user's reference specifically by:
in the intelligent read-aloud mode, acquiring first prompt information that matches, in the real-time scene, the target content that has been read aloud, and outputting the first prompt information.
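Claim 9's scene-aware prompting (geographic position, then real-time scene, then a prompt fitting both the scene and the target content) can be sketched as below. The scene labels, the coordinate rule in `locate_scene`, and the prompt table are all invented for illustration; a real device would use geofencing against known locations.

```python
# Hypothetical sketch of claim 9: position -> scene -> scene-specific prompt.

def locate_scene(lat: float, lon: float) -> str:
    """Stand-in for real geofencing; returns a coarse scene label."""
    # A real implementation would compare (lat, lon) against stored
    # bounding boxes for places such as school, home, or library.
    return "classroom" if lat > 0 else "home"

# Illustrative prompt table keyed by (target content, scene).
PROMPTS = {
    ("apple", "classroom"): "the fruit on the teacher's desk",
    ("apple", "home"): "the red fruit in your kitchen",
}

def first_prompt(target: str, lat: float, lon: float) -> str:
    scene = locate_scene(lat, lon)
    # Fall back to a scene-independent hint when no scene match exists.
    return PROMPTS.get((target, scene), f"a word of {len(target)} letters")
```

The point of the table is the claim's "real-time scene" dependency: the same target word yields a different first prompt depending on where the user is.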
10. The dictation control apparatus as claimed in any one of claims 6-9, characterized in that the apparatus further comprises:
a collecting module, configured to collect the user's facial expression information after the output module outputs, in the intelligent read-aloud mode, the first prompt information matching the target content that has been read aloud for the user's reference; and
a second judging module, configured to judge, according to the facial expression information, whether the user has determined the writing content matching the target content from the first prompt information;
wherein the output module is further configured to output second prompt information matching the target content when the second judging module judges, according to the facial expression information, that the user has not determined the writing content matching the target content from the first prompt information;
and the second prompt information contains more content than the first prompt information, or gives a stronger hint of the target content than the first prompt information does.
CN201910387346.4A 2019-05-10 2019-05-10 Dictation control method and device based on dictation condition Pending CN111081079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910387346.4A CN111081079A (en) 2019-05-10 2019-05-10 Dictation control method and device based on dictation condition


Publications (1)

Publication Number Publication Date
CN111081079A true CN111081079A (en) 2020-04-28

Family

ID=70310315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910387346.4A Pending CN111081079A (en) 2019-05-10 2019-05-10 Dictation control method and device based on dictation condition

Country Status (1)

Country Link
CN (1) CN111081079A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157239A (en) * 2021-04-26 2021-07-23 读书郎教育科技有限公司 Dictation content prompt control system and method
CN113298082A (en) * 2021-07-28 2021-08-24 北京猿力未来科技有限公司 Dictation data processing method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005431A (en) * 2015-07-22 2015-10-28 王玉娇 Dictation device, data processing method thereof and related devices
US20170186338A1 (en) * 2015-12-28 2017-06-29 Amazon Technologies, Inc. System for assisting in foreign language learning
CN107329760A (en) * 2017-06-30 2017-11-07 珠海市魅族科技有限公司 Information cuing method, device, terminal and storage medium
CN108389440A (en) * 2018-03-15 2018-08-10 广东小天才科技有限公司 A kind of speech playing method, device and voice playing equipment based on microphone
CN109300347A (en) * 2018-12-12 2019-02-01 广东小天才科技有限公司 A kind of dictation householder method and private tutor's equipment based on image recognition
CN109346059A (en) * 2018-12-20 2019-02-15 广东小天才科技有限公司 A kind of recognition methods of dialect phonetic and electronic equipment
CN109473001A (en) * 2018-12-12 2019-03-15 广东小天才科技有限公司 A kind of study coach method and study coach client based on cue scale
CN109635096A (en) * 2018-12-20 2019-04-16 广东小天才科技有限公司 A kind of dictation reminding method and electronic equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200428)