CN111077982B - Man-machine interaction method under dictation environment and electronic equipment - Google Patents


Info

Publication number
CN111077982B
Authority
CN
China
Prior art keywords
dictation
content
page
image
writing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910716357.2A
Other languages
Chinese (zh)
Other versions
CN111077982A (en)
Inventor
彭婕 (Peng Jie)
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201910716357.2A
Publication of CN111077982A
Application granted
Publication of CN111077982B
Legal status: Active


Classifications

    • G06F3/005 — Input arrangements through a video camera
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G09B7/02 — Electrically-operated teaching apparatus or devices working with questions and answers, of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to a question presented by a student

Abstract

A man-machine interaction method in a dictation environment, and an electronic device. The method includes: the electronic device photographs a page image of a writing page of the dictation book; the electronic device recognizes the written content from the page image; the electronic device corrects the written content against the dictation read-aloud content to obtain a correction result; if the correction result indicates that the written content contains dictation content with dictation errors, the electronic device determines the position information of that erroneous content on the writing page; and the electronic device projects a cursor prompt box onto that position in the writing page. Implementing the embodiments of the invention can improve the efficiency with which a user finds erroneous dictation content on the dictation book.

Description

Man-machine interaction method under dictation environment and electronic equipment
Technical Field
The invention relates to the technical field of education, in particular to a man-machine interaction method and electronic equipment in a dictation environment.
Background
Currently, some electronic devices on the market (such as home tutoring machines) can, in a dictation mode, recognize the dictation content that a user (such as a student) has written on a dictation book and output a correction result for that content on a display screen. However, if the user wants to fix dictation errors on the dictation book, the user usually has to cross-check against the correction result shown on the display screen in order to locate the erroneous items on the dictation book one by one, which is a cumbersome process.
Disclosure of Invention
The embodiment of the invention discloses a man-machine interaction method and electronic equipment in a dictation environment, which can improve the efficiency of a user in finding dictation contents with dictation errors on a dictation book.
The first aspect of the embodiment of the invention discloses a man-machine interaction method in a dictation environment, which comprises the following steps:
the electronic equipment shoots a page image of a certain writing page of the dictation book;
the electronic equipment identifies the writing content from the page image;
the electronic equipment corrects the written content against the dictation read-aloud content to obtain a correction result;
if the correction result shows that the dictation content with the dictation error exists in the writing content, the electronic equipment determines the position information of the dictation content with the dictation error in the writing page;
and the electronic equipment projects a cursor prompt box to the position information in the writing page.
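The five claimed steps can be sketched as a minimal pipeline. Everything below is an illustrative assumption: the function names (`correct_dictation`, `run_dictation_check`, `project_cursor`) are hypothetical, and a simple item-by-item string comparison stands in for the patent's unspecified recognition and correction implementation.

```python
def correct_dictation(written_items, read_aloud_items):
    """Compare each recognized written item against the read-aloud list."""
    errors = []
    for position, (written, expected) in enumerate(zip(written_items, read_aloud_items)):
        if written != expected:
            errors.append({"position": position, "written": written, "expected": expected})
    return errors


def run_dictation_check(recognized_items, read_aloud_items, project_cursor):
    """Recognize -> correct -> locate errors -> project a cursor prompt at each error."""
    errors = correct_dictation(recognized_items, read_aloud_items)
    for err in errors:
        project_cursor(err["position"])  # prompt the user at the error's location
    return errors
```

In a real device the positions would be page coordinates rather than list indices, and `project_cursor` would drive the light projection hardware.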
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before the electronic device captures a page image of a certain writing page of the dictation book, the method further includes:
the electronic equipment detects whether a preset dictation result detection condition is met;
If so, the step of photographing a page image of a writing page of the dictation book is executed.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the electronic device detecting whether a preconfigured dictation result detection condition is met includes:
and the electronic equipment detects whether the dictation result detection time preconfigured for the dictation environment has been reached, and if so, determines that the preconfigured dictation result detection condition is met.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the electronic device detects that the dictation result detection time preconfigured in the dictation environment is reached, and before the determining that the preconfigured dictation result detection condition is met, the method further includes:
and the electronic equipment detects whether audio information played in the dictation environment for triggering dictation result detection has been collected, and if so, executes the step of determining that the preconfigured dictation result detection condition is met.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the electronic device projects a cursor prompt box to the position information in the writing page, the method further includes:
the electronic equipment detects a seat number, written in the cursor prompt box, of a seat in the dictation environment;
the electronic equipment marks the erroneous dictation content in the page image to form an annotated image;
the electronic equipment associates the annotated image with the seat number and reports it to service equipment in the dictation environment, so that the service equipment sends the annotated image to the user equipment corresponding to the seat to which the seat number belongs;
the electronic equipment acquires a corrected image sent by the service equipment; the corrected image is the image obtained after the user at the seat to which the seat number belongs finishes correcting, on the user equipment, the erroneous dictation content marked in the annotated image;
the electronic device outputs the corrected image on a display screen.
A second aspect of an embodiment of the present invention discloses an electronic device, including:
the shooting unit is used for shooting a page image of a certain writing page of the dictation book;
an identification unit for identifying the writing content from the page image;
the correcting unit is used for correcting the written content against the dictation read-aloud content to obtain a correction result;
the determining unit is used for determining the position information of the dictation content with the dictation error in the writing page when the correcting result obtained by the correcting unit indicates that the dictation content with the dictation error exists in the writing content;
and the projection unit is used for projecting a cursor prompt box to the position information in the writing page.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the first detection unit is used for detecting whether a preset dictation result detection condition is met before the shooting unit shoots a page image of a certain writing page of the dictation book;
the shooting unit is specifically configured to shoot a page image of a certain writing page of the dictation book when the first detection unit detects that a preset dictation result detection condition is met.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the first detection unit includes:
the first detection subunit is used for detecting whether the dictation result detection time preset by the dictation environment is reached before the shooting unit shoots a page image of a certain writing page of the dictation book;
and the determining subunit is used for determining that the preconfigured dictation result detection condition is met when the first detection subunit detects that the dictation result detection time preconfigured for the dictation environment has been reached.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the first detection unit further includes:
the second detection subunit is used for detecting, after the first detection subunit detects that the dictation result detection time preconfigured for the dictation environment has been reached, whether audio information played in the dictation environment for triggering dictation result detection has been collected;
the determining subunit is specifically configured to determine that the preconfigured dictation result detection condition is met when the first detection subunit detects that the preconfigured dictation result detection time has been reached and the second detection subunit detects that the trigger audio information has been collected.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the second detection unit is used for detecting, after the projection unit projects the cursor prompt box onto the position information in the writing page, a seat number of a seat in the dictation environment written in the cursor prompt box;
the marking unit is used for marking the erroneous dictation content in the page image to form an annotated image;
the association unit is used for associating the annotated image with the seat number and reporting it to the service equipment in the dictation environment;
the interaction unit is used for sending the annotated image to the user equipment used by the user at the seat to which the seat number belongs;
the interaction unit is further used for acquiring a corrected image sent by the service equipment; the corrected image is the image obtained after the user at the seat to which the seat number belongs finishes correcting, on the user equipment, the erroneous dictation content marked in the annotated image;
and the output unit is used for outputting the corrected image on a display screen.
A third aspect of an embodiment of the present invention discloses an electronic device, including:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to execute part or all of the steps of the human-computer interaction method under any one of the dictation environments disclosed in the first aspect of the embodiment of the present invention.
A fourth aspect of the embodiment of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute part or all of the steps of a human-computer interaction method in any one of the dictation environments disclosed in the first aspect of the embodiment of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product, which when run on a computer causes the computer to perform part or all of the steps of a human-computer interaction method in any one of the dictation environments disclosed in the first aspect of the embodiments of the present invention.
A sixth aspect of the embodiment of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the human-computer interaction method in any one of the dictation environments disclosed in the first aspect of the embodiment of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, a page image of a writing page of the dictation book can be photographed, the written content recognized from the page image, and the written content corrected against the dictation read-aloud content to obtain a correction result; if the correction result indicates that the written content contains dictation content with dictation errors, the position information of that erroneous content is determined on the writing page, and a cursor prompt box is projected onto that position in the writing page. By implementing the embodiment of the invention, the user can therefore quickly locate the wrongly written words on the writing page from the position of the projected cursor, which reduces the steps needed to correct them and improves the efficiency of correcting wrongly written words.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a man-machine interaction method in a dictation environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a human-computer interaction method in a dictation environment according to an embodiment of the present invention;
FIG. 3 is a flow chart of a human-computer interaction method in a dictation environment according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another electronic device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another electronic device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of still another electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a man-machine interaction method and electronic equipment in a dictation environment, which can improve the efficiency of a user in finding dictation contents with dictation errors on a dictation book. The following detailed description is made with reference to the accompanying drawings.
Example 1
Referring to fig. 1, fig. 1 is a flow chart of a man-machine interaction method in a dictation environment according to an embodiment of the invention. As shown in fig. 1, the human-computer interaction method in the dictation environment may include the following steps:
101. the electronic device photographs a page image of a certain written page of the dictation book.
By way of example, the electronic device may be an educational electronic device or robot such as a point-and-read machine or a home tutoring machine, or a non-educational electronic device or robot such as a tablet computer, a mobile phone, or a smart television; the embodiments of the present invention are not limited in this respect.
In one embodiment, the electronic device may capture a page image of a certain writing page of the dictation book through a capturing module of the electronic device itself or an external capturing module.
For example, the electronic device may control the photographing module to photograph a mirror image in a mirror mounted on the electronic device, where the mirror image is an image of a certain writing page of the dictation book in the mirror, so as to photograph a page image of a certain writing page of the dictation book.
For another example, the electronic device may control the photographing module on the smart watch worn by the dictation user to photograph a page image of a writing page of the dictation book. Preferably, before doing so, the electronic device may first control that photographing module to rotate, so as to adjust its photographing direction until it covers the writing page of the dictation book, and then control it to photograph the page image.
In one embodiment, the manner in which the electronic device controls the photographing module on the smart watch worn by the dictation user to rotate may include:
the electronic equipment can send (such as Bluetooth mode sending) a shooting module rotation instruction to the intelligent watch worn by the dictation user, wherein the instruction comprises the identity information of the electronic equipment;
correspondingly, after receiving the instruction, the smart watch worn by the dictation user may check whether the identity information of the electronic device included in the instruction matches the identity information of a legal device, preconfigured on the smart watch, that is allowed to rotate its photographing module. If they match, the smart watch controls its photographing module to rotate. During the rotation, if the smart watch detects that the photographing direction of its module covers a writing page of the dictation book, it may pause the rotation and send a notification message to the electronic device indicating that the photographing direction has been adjusted to cover the writing page. Further, the electronic device may send a photographing control instruction to the smart watch; on receiving it, the smart watch controls its photographing module to photograph a page image of the writing page and send the page image to the electronic device.
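The identity check the watch performs before rotating its camera can be sketched as follows. The `SmartWatch` class, the message field name `device_id`, and the return strings are illustrative assumptions; the patent only specifies that the instruction carries the electronic device's identity information, which is matched against preconfigured legal devices.

```python
class SmartWatch:
    """Minimal model of the watch-side check on a camera-rotate instruction."""

    def __init__(self, allowed_device_ids):
        # Identity information of legal devices preconfigured on the watch.
        self.allowed_device_ids = set(allowed_device_ids)
        self.rotating = False

    def handle_rotate_instruction(self, instruction):
        """Rotate the photographing module only if the sender is a legal device."""
        if instruction.get("device_id") in self.allowed_device_ids:
            self.rotating = True
            return "rotating"
        return "rejected"
```

A real implementation would receive the instruction over Bluetooth and drive a motorized camera mount instead of flipping a flag.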
102. The electronic device identifies the written content from the page image.
In one embodiment, the manner in which the electronic device identifies the written content from the page image may be:
the electronic equipment compares the handwriting depth of each content unit of the text content in the page image with a handwriting depth collected in advance. If the depth of a content unit is smaller than the pre-collected depth and the difference between the two is not within a specified range, the electronic equipment may treat that unit as content printed through from the other side (such as the reverse side) of the writing page, and accordingly regard it as content to be filtered out. The electronic equipment then filters all such content out of the text content and takes what remains as the written content on the writing page. This improves the accuracy of identifying the written content on the writing page (such as its front side), and solves the problem that content printed on the reverse side of a page of the dictation book showing through to the front side interferes with recognizing the content written on the front side.
For example, when the text content is Chinese text, a content unit may be any Chinese character; when the text content is English text, a content unit may be any word; and when the text content is pinyin, a content unit may be any pinyin syllable.
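The depth-based filtering rule above can be sketched as follows. The numeric depth scale, the function name, and the `tolerance` parameter are assumptions for illustration; the patent only specifies the comparison against a pre-collected handwriting depth and a "specified range" for the difference.

```python
def filter_bleed_through(content_units, sampled_depth, tolerance):
    """Keep units whose stroke depth is consistent with the pre-collected depth.

    A unit that is lighter than the sampled handwriting depth by more than the
    tolerance is treated as print-through from the reverse side and dropped.
    """
    kept = []
    for unit, depth in content_units:
        if depth < sampled_depth and (sampled_depth - depth) > tolerance:
            continue  # likely content showing through from the other side
        kept.append(unit)
    return kept
```

Here `content_units` is a list of `(unit, depth)` pairs, where a unit is a character, word, or pinyin syllable as described above.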
103. The electronic equipment corrects the written content against the dictation read-aloud content to obtain a correction result.
For example, the above-described correction results may be expressed in terms of scores.
As an alternative embodiment, the electronic device may send the dictation read-aloud content and the written content to a teacher client associated with the electronic device for correction, to obtain the correction result. For example:
the electronic equipment generates a correction request and sends the correction request to a teacher client associated with the electronic equipment;
the electronic equipment receives a correctable instruction fed back by the teacher client;
the electronic equipment packages the dictation read-aloud content and the written content into a file to be corrected and sends it to the teacher client, so that the teacher client corrects the written content against the dictation read-aloud content to obtain a correction result;
and the electronic equipment receives the correction result sent by the teacher client, thereby avoiding the increase in power consumption that performing the correction on the electronic equipment itself would cause.
104. The electronic equipment judges whether the correction result indicates that the written content contains dictation content with dictation errors; if yes, steps 105 to 106 are executed; if not, the process ends.
The electronic equipment may judge whether the correction result expressed as a score is a full score; if not, this indicates that the written content contains dictation content with dictation errors; if so, it indicates that no erroneous dictation content exists in the written content.
105. The electronic device determines the position information of the dictation content of the dictation error from the writing page.
For example, the electronic device may determine the position information of the erroneous dictation content in the page image, and then determine its position information on the writing page according to a position mapping relationship between the page image and the writing page.
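One plausible form of the position mapping relationship is a per-axis scale-plus-offset transform from image coordinates to writing-page coordinates. The model and all names below are assumptions; the patent does not define the mapping concretely, and a real device might instead use a full homography estimated from the camera geometry.

```python
def image_to_page(point, scale, offset):
    """Map an (x, y) location in the page image to writing-page coordinates
    with a simple per-axis scale-plus-offset model of the camera geometry."""
    x, y = point
    sx, sy = scale
    ox, oy = offset
    return (x * sx + ox, y * sy + oy)
```

For instance, a pixel location `(100, 50)` with a 0.5 scale on both axes and a 10 mm margin offset maps to the page coordinates `(60.0, 35.0)`.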
106. The electronic device projects a cursor prompt box to the position information of the dictation content with the dictation error in the writing page.
In one embodiment, the electronic device may incorporate a light projection device, and may control it to project the cursor prompt box onto the position of each piece of erroneous dictation content one by one. The duration for which the cursor prompt box is displayed at each such position may be a preset duration (for example, 5 seconds).
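The one-by-one projection with a preset display duration can be sketched as below. The function name, the injected `show_cursor` and `wait` callables, and the default of 5 seconds (taken from the example above) are illustrative assumptions.

```python
def project_prompts(error_positions, show_cursor, wait=None, display_seconds=5):
    """Project the cursor prompt box at each erroneous item's position in turn,
    holding each prompt for a preset display duration."""
    for position in error_positions:
        show_cursor(position)      # e.g. drive the light projection device
        if wait is not None:
            wait(display_seconds)  # e.g. time.sleep on a real device
    return list(error_positions)
```

Injecting `wait` instead of calling `time.sleep` directly keeps the sequencing logic testable without real delays.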
Therefore, by implementing the man-machine interaction method described in fig. 1, the user can quickly locate the wrongly written words on the writing page from the position of the projected cursor and correct them, which reduces the steps needed to correct wrongly written words and improves correction efficiency.
In addition, the accuracy of distinguishing the written content on a certain written page (such as the front surface) can be improved by implementing the man-machine interaction method described in fig. 1.
Example two
Referring to fig. 2, fig. 2 is a flow chart of a man-machine interaction method in a dictation environment according to another embodiment of the present invention. As shown in fig. 2, the human-computer interaction method in the dictation environment may include the following steps:
201. the electronic device photographs a page image of a certain written page of the dictation book.
202. The electronic device identifies the written content from the page image.
203. The electronic equipment corrects the written content against the dictation read-aloud content to obtain a correction result.
204. The electronic equipment judges whether the correction result indicates that the written content contains dictation content with dictation errors; if yes, steps 205 to 211 are executed; if not, the process ends.
205. The electronic device determines the position information of the dictation content of the dictation error from the writing page.
206. The electronic device projects a cursor prompt box to the position information of the dictation content with the dictation error in the writing page.
207. The electronic device detects a seat number, written in the cursor prompt box, of a seat in the dictation environment.
208. The electronic device marks the erroneous dictation content in the page image to form an annotated image.
209. The electronic equipment associates the annotated image with the seat number and reports it to the service equipment in the dictation environment, so that the service equipment sends the annotated image to the user equipment used by the user at the seat to which the seat number belongs.
In the embodiment of the invention, a support-group seating area may be set up for a dictation user (such as a student) taking part in a dictation competition with the electronic equipment in the dictation environment, so that the dictation user can ask the user in one of those seats for help when encountering dictation content he or she cannot write. The service device in the dictation environment may pre-establish a mapping between the electronic device and the user device used by the user at each seat in the dictation user's support-group seating area. After receiving the annotated image and the seat number (a unique code) sent by the electronic device, the service device can look up, according to the seat number, the user device used by the user at that seat, and then send the annotated image to that user device.
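The service-side lookup from seat number to helper device can be sketched as below. The `DictationService` class, the in-memory mapping, and the `outbox` list standing in for network delivery are all illustrative assumptions about the unspecified service equipment.

```python
class DictationService:
    """Maps each support-group seat number to the helper's user device."""

    def __init__(self, seat_to_device):
        # Pre-established mapping: seat number (unique code) -> user device.
        self.seat_to_device = dict(seat_to_device)
        self.outbox = []  # (device, image) pairs forwarded to helpers

    def report_annotation(self, seat_number, annotated_image):
        """Forward an annotated image to the device mapped to the seat number."""
        device = self.seat_to_device.get(seat_number)
        if device is None:
            return False  # unknown seat: nothing to forward
        self.outbox.append((device, annotated_image))
        return True
```

The corrected image produced by the helper would travel back along the same mapping in the reverse direction (steps 210-211).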
210. The electronic equipment acquires a corrected image sent by the service equipment; the corrected image is the image obtained after the user at the seat to which the seat number belongs finishes correcting, on the user equipment, the erroneous dictation content marked in the annotated image.
For example, the user at the seat to which the seat number belongs may correct, on the user device, the erroneous dictation content marked in the annotated image into the correct dictation content, thereby forming a corrected image, and report it to the service device, which then delivers the corrected image to the electronic device.
211. The electronic device outputs the corrected image on the display screen.
Therefore, by implementing the man-machine interaction method described in fig. 2, the user can quickly locate the wrongly written words on the writing page from the position of the projected cursor and correct them, which reduces the steps needed to correct wrongly written words and improves correction efficiency.
In addition, the accuracy of distinguishing the written content on a certain written page (such as the front surface) can be improved by implementing the man-machine interaction method described in fig. 2.
In addition, by implementing the man-machine interaction method described in fig. 2, when the dictation user encounters the dictation content which cannot be written, the dictation user can seek help from the user on one seat in the seat area of the support group, so that the dictation user can grasp the writing of the dictation content which cannot be written.
Example III
Referring to fig. 3, fig. 3 is a flow chart of a man-machine interaction method in a dictation environment according to an embodiment of the invention. As shown in fig. 3, the human-computer interaction method in the dictation environment may include the following steps:
301. The electronic equipment detects whether a preconfigured dictation result detection condition is satisfied; if so, steps 302 to 305 are executed; if not, the process ends.
As an alternative embodiment, the detecting by the electronic device whether the detection condition of the preconfigured dictation result is satisfied may include:
the electronic equipment detects whether the dictation result detection time preconfigured in the dictation environment has been reached; if so, the electronic equipment determines that the preconfigured dictation result detection condition is satisfied; otherwise, it determines that the preconfigured dictation result detection condition is not satisfied.
As another alternative embodiment, the detecting by the electronic device whether the detection condition of the preconfigured dictation result is satisfied may include:
the electronic equipment detects whether the dictation result detection time preconfigured in the dictation environment has been reached; if so, the electronic equipment further detects whether the audio information played in the dictation environment for triggering dictation result detection is heard; if it is heard, the electronic equipment determines that the preconfigured dictation result detection condition is satisfied; otherwise, it determines that the preconfigured dictation result detection condition is not satisfied.
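The two alternative embodiments above differ only in whether the trigger audio must also be heard after the detection time is reached. A minimal sketch, with illustrative names and boolean inputs standing in for the actual detectors:

```python
def detection_condition_met(time_reached, audio_heard=None):
    """First variant: only the preconfigured detection time matters
    (audio_heard is None). Second variant: the trigger audio must also
    be heard after the detection time is reached."""
    if not time_reached:
        return False
    if audio_heard is None:   # first alternative embodiment
        return True
    return audio_heard        # second alternative embodiment
```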
302. The electronic device photographs a page image of a certain written page of the dictation book.
303. The electronic device identifies the written content from the page image.
304. The electronic device corrects the writing content according to the dictation read-aloud content to obtain a correction result (namely, a correction result of the writing content).
305. The electronic equipment judges whether dictation content with dictation errors exists in the writing content indicated by the correction result; if so, executing steps 306-312; if not, the process is ended.
306. The electronic device determines the position information of the dictation content of the dictation error from the writing page.
307. The electronic device projects a cursor prompt box to the position information of the dictation content with the dictation error in the writing page.
308. The electronic device detects a seat number in the dictation environment written in the cursor prompt box.
309. The electronic device marks the dictation content of the dictation error in the page image to form a marked image.
310. The electronic equipment associates the annotation image with the seat number and reports them to the service equipment in the dictation environment, so that the service equipment sends the annotation image to the user equipment used by the user corresponding to the seat to which the seat number belongs.
311. The electronic equipment acquires a corrected image sent by the service equipment; the corrected image is an image obtained after the user corresponding to the seat to which the seat number belongs finishes correcting the dictation content of the dictation error marked in the marked image on the user equipment.
312. The electronic device outputs the corrected image on the display screen.
As an alternative embodiment, the human-computer interaction method shown in fig. 3 may further perform the following steps after performing step 312:
the electronic equipment detects a dictation result publishing instruction;
the electronic equipment sends the corrected image and the identity information of the electronic equipment to the service equipment according to the dictation result publishing instruction, so that the service equipment associates the corrected image with the identity information of the electronic equipment and publishes them to a public display screen in the dictation environment for display; the identity information of the electronic equipment may include, but is not limited to, the name, student number, and the like of the dictation user who uses the electronic equipment for dictation.
The implementation of the embodiment is beneficial to improving the user interactivity in the dictation environment.
As another alternative embodiment, in the man-machine interaction method shown in fig. 3, after the service device associates the corrected image with the identity information of the electronic device and then issues the corrected image to the public display screen in the dictation environment for display, the man-machine interaction method shown in fig. 3 may further perform the following steps:
The electronic equipment obtains the dictation reward resources pushed by the service equipment; the dictation reward resources can be virtual resources (such as virtual coins) determined by the service equipment, after it associates the corrected image with the identity information of the electronic equipment and publishes them to the public display screen in the dictation environment for display, according to the collected total number of likes that audience users in the audience area set up in the dictation environment give to the corrected image;
the electronic equipment corrects the corrected image to obtain a correction result of the corrected image;
the electronic equipment determines the effective content modification amount of the user corresponding to the seat to which the seat number belongs according to the correction result of the corrected image and the correction result of the writing content; both correction results can be represented by scores, and accordingly the electronic equipment can calculate the score difference between the correction result of the corrected image and the correction result of the writing content and take this score difference as the effective content modification amount of the user corresponding to the seat to which the seat number belongs; for example, if the score difference is 20, the effective content modification amount of that user is 20; here, the electronic equipment correcting the corrected image means that, after the user corresponding to the seat to which the seat number belongs has finished correcting the dictation content marked as wrong in the annotation image, the electronic equipment corrects the content in the corrected image (including both corrected and uncorrected content) against the dictation read-aloud content;
And the electronic equipment divides, from the dictation reward resources and according to the effective content modification amount of the user corresponding to the seat to which the seat number belongs, a portion of the reward resources positively correlated with that amount, and distributes this portion through the service equipment to the user equipment used by that user, so as to encourage users in the support group to participate more in dictation help and improve the popularity of the dictation environment.
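The reward split above can be sketched as follows. The linear scaling and the cap are illustrative assumptions — the patent only requires the helper's share to be positively correlated with the effective content modification amount (the score difference between the two correction results).

```python
def effective_modification_amount(score_after_help, score_before):
    """Score difference between the corrected image's correction result
    and the original writing content's correction result (floored at 0)."""
    return max(score_after_help - score_before, 0)

def helper_reward(total_reward, modification_amount, max_amount=100):
    """Split off a share of the dictation reward resources that grows
    with the effective content modification amount (assumed linear)."""
    share = min(modification_amount, max_amount) / max_amount
    return int(total_reward * share)
```

For example, with a score difference of 20 and a reward pool of 50 virtual coins, the helper would receive 10 coins under this assumed linear split.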
Therefore, by implementing the man-machine interaction method described in fig. 3, the user can quickly find out the wrong word on the writing page according to the position of the projection cursor of the electronic device to correct the wrong word, so that the step of modifying the wrong word by the user is reduced, and the efficiency of modifying the wrong word is improved.
In addition, the accuracy of distinguishing the written content on a certain written page (such as the front surface) can be improved by implementing the man-machine interaction method described in fig. 3.
In addition, by implementing the man-machine interaction method described in fig. 3, when the dictation user encounters the dictation content which cannot be written, the dictation user can seek help from the user on a certain seat in the seat area of the support group, so that the dictation user can grasp the writing of the dictation content which cannot be written.
In addition, the man-machine interaction method described in fig. 3 is not only beneficial to improving the user interactivity in the dictation environment, but also can excite the users in the support group to participate in dictation help, and improve the popularity of the dictation environment.
Example IV
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the invention. As shown in fig. 4, the electronic device may include:
and a photographing unit 401 for photographing a page image of a certain written page of the dictation book.
And an identification unit 402 for identifying writing content from the page image.
And the correcting unit 403 is configured to correct the writing content according to the dictation read-aloud content to obtain a correction result.
And a determining unit 404, configured to determine, in the writing page, the position information of the dictation content with the dictation error when the correction result obtained by the correction unit 403 indicates that the dictation content with the dictation error exists in the writing content.
A projection unit 405 for projecting a cursor prompt box to the position information of the dictation content of the dictation error in the writing page.
In one embodiment, the shooting unit 401 may control a shooting module of the electronic device itself or an external shooting module to shoot a page image of a certain writing page of the dictation book.
For example, the photographing unit 401 may control its photographing module to photograph a mirror image in a mirror mounted on the electronic device, where the mirror image is an image of a certain writing page of the dictation book in the mirror, so as to photograph a page image of a certain writing page of the dictation book.
For another example, the photographing unit 401 may control the photographing module on the smart watch worn by the dictation user to photograph the page image of the certain writing page of the dictation book. Preferably, before doing so, the photographing unit 401 may first control that photographing module to rotate, so as to adjust its photographing direction to cover the certain writing page of the dictation book, and then control it to photograph the page image.
In one embodiment, the manner in which the photographing unit 401 controls the photographing module on the smart watch worn by the dictation user to rotate may include:
The shooting unit 401 may send (for example, via Bluetooth) a shooting module rotation instruction to the smart watch worn by the dictation user, where the instruction includes the identity information of the electronic device;
correspondingly, after receiving the instruction, the smart watch worn by the dictation user may check whether the identity information of the electronic device included in the instruction matches the identity information of a legal device that is preset on the smart watch and allowed to rotate its shooting module. If they match, the smart watch controls its shooting module to rotate. During the rotation, if the smart watch detects that the shooting direction of its shooting module covers the certain writing page of the dictation book, the smart watch controls the shooting module to pause rotating and sends a notification message to the electronic device, the notification message indicating that the shooting direction of the shooting module on the smart watch has been adjusted to the certain writing page of the dictation book. Further, the shooting unit 401 may send a shooting control instruction to the smart watch; after receiving it, the smart watch controls its shooting module to photograph the page image of the certain writing page of the dictation book and sends the page image to the electronic device.
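The watch-side identity check in this handshake can be sketched as below. The class and field names are assumptions introduced for illustration; the patent only specifies that rotation proceeds when the instruction's identity information matches a preset legal device.

```python
from dataclasses import dataclass

@dataclass
class RotateInstruction:
    sender_identity: str  # identity information carried in the instruction

class SmartWatch:
    def __init__(self, allowed_identities):
        # identity information of legal devices preset on the watch
        self.allowed_identities = set(allowed_identities)
        self.rotating = False

    def handle_rotate_instruction(self, inst):
        """Start rotating the shooting module only if the sender matches
        a preset legal device; return whether rotation was started."""
        if inst.sender_identity in self.allowed_identities:
            self.rotating = True
            return True
        return False
```

The later steps (pausing when the writing page is covered, notifying the electronic device, then photographing on command) would extend this with sensor feedback and a message channel.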
In one embodiment, the manner in which the identification unit 402 identifies the written content from the page image may be:
the recognition unit 402 compares the handwriting depth of each content unit contained in the text content of the page image with a pre-collected handwriting depth. If the handwriting depth of a content unit is lighter than the pre-collected handwriting depth and the difference between the two is not within a specified range, the recognition unit 402 may regard that content unit as content showing through from the other surface (such as the reverse side) of the certain writing page, and accordingly treat it as content to be filtered. The recognition unit 402 may then filter all the content to be filtered out of the text content and take the remaining content as the writing content written on the certain writing page. This improves the accuracy of recognizing the writing content on the certain writing page (such as the front side), and solves the problem that content printed on the reverse side of a writing page of the dictation book shows through on the front side and interferes with the electronic device's recognition of the writing content written on the front side.
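A minimal sketch of this depth-based filter, under assumed names and a toy depth scale: content units noticeably lighter than the pre-collected handwriting depth are treated as reverse-side show-through and dropped.

```python
def filter_show_through(units, reference_depth, tolerance=0.15):
    """Keep only content units whose ink depth is consistent with the
    user's pre-collected handwriting depth.

    `units` is a list of (text, depth) pairs; depth is in [0, 1],
    larger meaning darker. A unit is dropped when it is both lighter
    than the reference and outside the specified tolerance range.
    """
    kept = []
    for text, depth in units:
        lighter = depth < reference_depth
        out_of_range = abs(depth - reference_depth) > tolerance
        if lighter and out_of_range:
            continue  # likely printed through from the reverse side
        kept.append(text)
    return kept
```

A real implementation would estimate per-stroke ink density from the page image rather than receive it as a number, but the filtering rule is the same.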
In one embodiment, the electronic device may incorporate a light projection device, and the projecting unit 405 may control the light projection device to project the cursor prompt box onto the position of each piece of dictation content with a dictation error, one by one. The display duration of the cursor prompt box at each such position may be a preset duration (for example, 5 seconds).
Therefore, by implementing the electronic device described in fig. 4, the user can quickly find the wrong word on the writing page according to the position of the projection cursor of the electronic device to correct the wrong word, so that the step of modifying the wrong word by the user is reduced, and the efficiency of modifying the wrong word is improved.
In addition, implementing the electronic device described in fig. 4 may improve accuracy in recognizing the writing on the certain writing page (e.g., front surface).
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the present invention. The electronic device shown in fig. 5 is optimized from the electronic device shown in fig. 4. Compared to the electronic device shown in fig. 4, the electronic device shown in fig. 5 may further include:
the second detection unit 406 is configured to detect a certain seat number in the dictation environment written in the cursor prompt box after the projection unit 405 projects the cursor prompt box to the position information of the dictation content of the dictation error in the writing page.
The labeling unit 407 is configured to label the dictation content of the dictation error in the page image to form a labeled image.
And the associating unit 408 is configured to associate the annotation image labeled by the labeling unit 407 with the seat number detected by the second detection unit 406, and report them to the service device in the dictation environment.
The interaction unit 409 is further configured to send the labeling image labeled by the labeling unit 407 to a user device used by a user corresponding to the seat to which the seat number belongs.
As an optional implementation manner, the interaction unit 409 is further configured to obtain, after the annotation image is sent to the user device used by the user corresponding to the seat to which the seat number belongs, a correction image sent by the service device; the corrected image is an image obtained after the user corresponding to the seat to which the seat number belongs completes the correction of the dictation content of the dictation error marked in the marked image on the user equipment.
An output unit 410 for outputting the corrected image on the display screen.
Therefore, by implementing the electronic device described in fig. 5, the user can quickly find the wrong word on the writing page according to the position of the projection cursor of the electronic device to correct the wrong word, so that the step of modifying the wrong word by the user is reduced, and the efficiency of modifying the wrong word is improved.
In addition, implementing the electronic device described in fig. 5 can improve the accuracy of recognizing the writing content on a certain writing page (such as the front side).
In addition, implementing the electronic device described in fig. 5, the dictation user may seek help from the user on a certain seat in the support group seat area when encountering non-writable dictation content, thereby facilitating the dictation user to grasp writing of the non-writable dictation content.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the present invention. The electronic device shown in fig. 6 is optimized from the electronic device shown in fig. 5. Compared to the electronic device shown in fig. 5, the electronic device shown in fig. 6 may further include:
a first detecting unit 411 is configured to detect whether the electronic device satisfies a preset dictation result detection condition before the capturing unit 401 captures a page image of a certain writing page of the dictation book.
Accordingly, the photographing unit 401 is specifically configured to photograph a page image of a certain writing page of the dictation book when the first detecting unit 411 detects that the preset dictation result detecting condition is satisfied.
As an alternative embodiment, the first detecting unit 411 shown in fig. 6 may include:
A first detecting subunit 412, configured to detect whether the dictation result detection time preconfigured in the dictation environment is reached before the capturing unit 401 captures a page image of a certain writing page of the dictation book.
A determining subunit 413, configured to determine that the preconfigured dictation result detection condition is satisfied when the first detecting subunit 412 detects that the dictation result detection time preconfigured in the dictation environment has been reached.
As another alternative embodiment, the first detecting unit 411 shown in fig. 6 may further include:
the second detecting subunit 414 is configured to detect whether audio information, which is played by the dictation environment and is used to trigger the dictation result detection, is heard after the first detecting subunit 412 detects that the dictation result detection time preconfigured in the dictation environment is reached.
Accordingly, the determining subunit 413 is specifically configured to determine that the preconfigured dictation result detection condition is satisfied when the first detecting subunit 412 detects that the dictation result detection time preconfigured in the dictation environment has been reached and the second detecting subunit 414 detects that the audio information played in the dictation environment for triggering dictation result detection is heard.
In an embodiment of the present invention, after the output unit 410 outputs the corrected image on the display screen, the electronic device may further perform the following operations:
The electronic equipment detects a dictation result publishing instruction;
the electronic equipment sends the corrected image and the identity information of the electronic equipment to the service equipment according to the dictation result publishing instruction, so that the service equipment associates the corrected image with the identity information of the electronic equipment and publishes them to a public display screen in the dictation environment for display; the identity information of the electronic equipment may include, but is not limited to, the name, student number, and the like of the dictation user who uses the electronic equipment for dictation.
The implementation of the embodiment is beneficial to improving the user interactivity in the dictation environment.
As another alternative implementation manner, after the service device associates the corrected image with the identity information of the electronic device and then issues the corrected image to a public display screen in the dictation environment for display, the electronic device may further perform the following operations:
the electronic equipment obtains the dictation reward resources pushed by the service equipment; the dictation reward resources can be virtual resources (such as virtual coins) determined by the service equipment, after it associates the corrected image with the identity information of the electronic equipment and publishes them to the public display screen in the dictation environment for display, according to the collected total number of likes that audience users in the audience area set up in the dictation environment give to the corrected image;
The electronic equipment corrects the corrected image to obtain a correction result of the corrected image;
the electronic equipment determines the effective content modification amount of the user corresponding to the seat to which the seat number belongs according to the correction result of the corrected image and the correction result of the writing content; both correction results can be represented by scores, and accordingly the electronic equipment can calculate the score difference between the correction result of the corrected image and the correction result of the writing content and take this score difference as the effective content modification amount of the user corresponding to the seat to which the seat number belongs; for example, if the score difference is 20, the effective content modification amount of that user is 20; here, the electronic equipment correcting the corrected image means that, after the user corresponding to the seat to which the seat number belongs has finished correcting the dictation content marked as wrong in the annotation image, the electronic equipment corrects the content in the corrected image (including both corrected and uncorrected content) against the dictation read-aloud content;
And the electronic equipment divides, from the dictation reward resources and according to the effective content modification amount of the user corresponding to the seat to which the seat number belongs, a portion of the reward resources positively correlated with that amount, and distributes this portion through the service equipment to the user equipment used by that user, so as to encourage users in the support group to participate more in dictation help and improve the popularity of the dictation environment.
Therefore, by implementing the electronic device described in fig. 6, the user can quickly find the wrong word on the writing page according to the position of the projection cursor of the electronic device to correct the wrong word, so that the step of modifying the wrong word by the user is reduced, and the efficiency of modifying the wrong word is improved.
In addition, implementing the electronic device described in fig. 6 can improve accuracy in recognizing the writing content on the certain writing page (e.g., front surface).
In addition, implementing the electronic device described in fig. 6, the dictation user may seek help from the user on a certain seat in the support group seat area when encountering non-writable dictation content, thereby facilitating the dictation user to grasp writing of the non-writable dictation content.
In addition, implementing the electronic device described in fig. 6 is not only beneficial to improving user interactivity in the dictation environment, but also can encourage users in the support group to participate in dictation help, and improve popularity of the dictation environment.
Example seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the invention. As shown in fig. 7, the electronic device may include:
a memory 701 storing executable program code;
a processor 702 coupled to the memory 701;
the processor 702 invokes executable program codes stored in the memory 701 to execute part or all of the steps of the human-computer interaction method in any one of the dictation environments of fig. 1 to 3.
The embodiment of the invention discloses a computer readable storage medium which stores a computer program, wherein the computer program enables a computer to execute part or all of the steps of a human-computer interaction method in any one of the dictation environments of fig. 1-3.
The embodiment of the invention also discloses a computer program product, wherein the computer program product enables the computer to execute part or all of the steps of a human-computer interaction method in a dictation environment as in any one of the above method embodiments when running on the computer.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable storage medium, including a read-only memory (ROM), a random access memory (RAM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a one-time programmable read-only memory (OTPROM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other medium that can be used to carry or store computer-readable data.
The above describes in detail a man-machine interaction method and an electronic device under a dictation environment disclosed in the embodiments of the present invention, and specific examples are applied herein to illustrate the principles and embodiments of the present invention, where the above description of the embodiments is only for helping to understand the method and core ideas of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (8)

1. A man-machine interaction method in dictation environment is characterized by comprising the following steps:
the electronic equipment shoots a page image of a certain writing page of the dictation book;
the electronic equipment identifies the writing content from the page image;
the electronic equipment corrects the writing content according to the dictation read-aloud content so as to obtain a correction result;
if the correction result shows that the dictation content with the dictation error exists in the writing content, the electronic equipment determines the position information of the dictation content with the dictation error in the writing page;
the electronic equipment projects a cursor prompt box to the position information in the writing page;
The electronic equipment detects a certain seat number in the dictation environment written in the cursor prompt box;
the electronic equipment marks the dictation content of the dictation error in the page image to form a marked image;
the electronic equipment associates and reports the annotation image with the seat number to service equipment in the dictation environment, so that the service equipment sends the annotation image to user equipment corresponding to a seat to which the seat number belongs;
the electronic equipment acquires a corrected image sent by the service equipment; the corrected image is an image obtained after the user corresponding to the seat to which the seat number belongs finishes correcting the dictation content of the dictation error marked in the marked image on the user equipment;
the electronic equipment outputs the corrected image on a display screen;
the electronic device correcting the writing content according to the dictation read-aloud content to obtain the correction result comprises the following steps:
and the electronic equipment sends the dictation read-aloud content and the writing content to a teacher client associated with the electronic equipment for correction, so as to obtain the correction result.
2. The human-computer interaction method according to claim 1, wherein before the electronic device captures a page image of a certain written page of the dictation book, the method further comprises:
the electronic equipment detects whether a preset dictation result detection condition is met;
if yes, executing the page image of a certain written page of the photographed dictation book.
3. The human-computer interaction method according to claim 2, wherein the electronic device detecting whether a preconfigured dictation result detection condition is satisfied comprises:
and the electronic equipment detects whether the dictation result detection time preset in the dictation environment is reached, and if so, determines that the preset dictation result detection condition is met.
4. A human-machine interaction method according to claim 3, wherein after the electronic device detects that the dictation result detection time of the pre-configuration of the dictation environment is reached and before the determination that the pre-configuration dictation result detection condition is satisfied, the method further comprises:
and the electronic equipment detects whether the audio information which is played in the dictation environment and is used for triggering the dictation result detection is heard, and if so, the step of determining that the preset dictation result detection condition is met is executed.
5. An electronic device, comprising:
a capturing unit, configured to capture a page image of a written page of the dictation book in the dictation environment;
a recognition unit, configured to recognize the writing content from the page image;
a correction unit, configured to correct the writing content according to the dictation read-aloud content to obtain a correction result;
a determining unit, configured to determine the position information, in the written page, of erroneous dictation content when the correction result obtained by the correction unit indicates that erroneous dictation content exists in the writing content;
a projection unit, configured to project a cursor prompt box at the position information in the written page;
a second detection unit, configured to detect a seat number of the dictation environment written in the cursor prompt box after the projection unit projects the cursor prompt box at the position information in the written page;
a marking unit, configured to mark the erroneous dictation content in the page image to form an annotated image;
an association unit, configured to associate the annotated image with the seat number and report them to the service device in the dictation environment;
an interaction unit, configured to send the annotated image to the user device used by the user corresponding to the seat identified by the seat number;
the interaction unit is further configured to acquire a corrected image sent by the service device; the corrected image is an image obtained after the user corresponding to the seat identified by the seat number finishes correcting, on the user device, the erroneous dictation content marked in the annotated image;
an output unit, configured to output the corrected image on a display screen;
wherein the electronic device correcting the writing content according to the dictation read-aloud content to obtain a correction result comprises:
the electronic device sending the dictation read-aloud content and the writing content to a teacher client associated with the electronic device for correction, so as to obtain the correction result.
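The marking, association, and reporting units of claim 5 can be pictured as building a small record that ties the annotated page image to the seat number before it is reported to the service device. The record layout and field names below are assumptions for illustration only; the claims do not prescribe any data format.

```python
def build_annotation_report(page_image_id, errors, seat_number):
    """Form an 'annotated image' record: each dictation error is marked
    with its position in the page image (marking unit), then the whole
    record is associated with the seat number for reporting to the
    service device (association unit)."""
    marks = [
        {"index": i, "expected": expected, "written": written}
        for i, expected, written in errors
    ]
    return {
        "image": page_image_id,
        "marks": marks,
        "seat_number": seat_number,  # ties the annotation to one student
    }

report = build_annotation_report(
    "page-001", [(2, "cherry", "cherri")], seat_number=17
)
# report["seat_number"] == 17
# report["marks"][0]["expected"] == "cherry"
```

The interaction unit would then forward such a record to the user device of the student seated at that number, and later receive the corrected image back via the service device.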
6. The electronic device of claim 5, wherein the electronic device further comprises:
a first detection unit, configured to detect whether a preconfigured dictation result detection condition is satisfied before the capturing unit captures a page image of a written page of the dictation book;
wherein the capturing unit is specifically configured to capture a page image of a written page of the dictation book when the first detection unit detects that the preconfigured dictation result detection condition is satisfied.
7. The electronic device of claim 6, wherein the first detection unit comprises:
a first detection subunit, configured to detect whether the dictation result detection time preconfigured for the dictation environment has been reached before the capturing unit captures a page image of a written page of the dictation book;
and a determining subunit, configured to determine that the preconfigured dictation result detection condition is satisfied when the first detection subunit detects that the dictation result detection time preconfigured for the dictation environment has been reached.
8. The electronic device of claim 7, wherein the first detection unit further comprises:
a second detection subunit, configured to detect whether audio information played in the dictation environment for triggering dictation result detection has been heard after the first detection subunit detects that the dictation result detection time preconfigured for the dictation environment has been reached;
wherein the determining subunit is specifically configured to determine that the preconfigured dictation result detection condition is satisfied when the first detection subunit detects that the dictation result detection time preconfigured for the dictation environment has been reached and the second detection subunit detects that the audio information played in the dictation environment for triggering dictation result detection has been heard.
CN201910716357.2A 2019-08-02 2019-08-02 Man-machine interaction method under dictation environment and electronic equipment Active CN111077982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910716357.2A CN111077982B (en) 2019-08-02 2019-08-02 Man-machine interaction method under dictation environment and electronic equipment


Publications (2)

Publication Number Publication Date
CN111077982A CN111077982A (en) 2020-04-28
CN111077982B (en) 2023-11-24

Family

ID=70310136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910716357.2A Active CN111077982B (en) 2019-08-02 2019-08-02 Man-machine interaction method under dictation environment and electronic equipment

Country Status (1)

Country Link
CN (1) CN111077982B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009097125A1 (en) * 2008-01-30 2009-08-06 American Institutes For Research Recognition of scanned optical marks for scoring student assessment forms
CN103400512A (en) * 2013-07-16 2013-11-20 步步高教育电子有限公司 Learning assisting device and operating method thereof
CN103646582A (en) * 2013-12-04 2014-03-19 广东小天才科技有限公司 Method and device for prompting writing errors
CN105469662A (en) * 2016-01-18 2016-04-06 黄道成 Student answer information real-time collection and efficient and intelligent correcting system and use method in teaching process
JP2017009664A (en) * 2015-06-17 2017-01-12 株式会社リコー Image projection device, and interactive type input/output system
CN106991198A (en) * 2017-06-01 2017-07-28 江苏学正教育科技有限公司 A kind of subjective problem database system participated at many levels based on characteristics of image collection and student
CN109147469A (en) * 2018-07-09 2019-01-04 安徽慧视金瞳科技有限公司 A kind of calligraphy exercising method
CN109460209A (en) * 2018-12-20 2019-03-12 广东小天才科技有限公司 A kind of control method and electronic equipment for dictating the progress that enters for
CN111081093A (en) * 2019-07-11 2020-04-28 广东小天才科技有限公司 Dictation content identification method and electronic equipment

Also Published As

Publication number Publication date
CN111077982A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN108665742B (en) Method and device for reading through reading device
CN109960809B (en) Dictation content generation method and electronic equipment
CN111077996B (en) Information recommendation method and learning device based on click-to-read
CN109597943B (en) Learning content recommendation method based on scene and learning equipment
CN111027537B (en) Question searching method and electronic equipment
CN108665764B (en) Method and device for reading through reading device
CN111081117A (en) Writing detection method and electronic equipment
CN111079501B (en) Character recognition method and electronic equipment
CN111079483A (en) Writing standard judgment method and electronic equipment
CN111078179B (en) Dictation, newspaper and read progress control method and electronic equipment
CN111081093B (en) Dictation content identification method and electronic equipment
CN111079737B (en) Character inclination correction method and electronic equipment
CN111077982B (en) Man-machine interaction method under dictation environment and electronic equipment
CN108197620B (en) Photographing and question searching method and system based on eye positioning and handheld photographing equipment
CN111159433B (en) Content positioning method and electronic equipment
CN111091120B (en) Dictation correction method and electronic equipment
CN111079486B (en) Method for starting dictation detection and electronic equipment
CN111079760B (en) Character recognition method and electronic equipment
CN111077989B (en) Screen control method based on electronic equipment and electronic equipment
CN109783679B (en) Learning auxiliary method and learning equipment
CN111028560A (en) Method for starting functional module in learning application and electronic equipment
CN111028558A (en) Dictation detection method and electronic equipment
CN111027317A (en) Control method for dictation and reading progress and electronic equipment
CN111079414A (en) Dictation detection method, electronic equipment and storage medium
CN111031232B (en) Dictation real-time detection method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant