CN111081102A - Dictation result detection method and learning equipment - Google Patents

Dictation result detection method and learning equipment

Info

Publication number
CN111081102A
Authority
CN
China
Prior art keywords
learning
user
dictation
modification
standard
Prior art date
Legal status
Granted
Application number
CN201910690048.2A
Other languages
Chinese (zh)
Other versions
CN111081102B (en)
Inventor
周林
Current Assignee
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd filed Critical Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201910690048.2A priority Critical patent/CN111081102B/en
Publication of CN111081102A publication Critical patent/CN111081102A/en
Application granted granted Critical
Publication of CN111081102B publication Critical patent/CN111081102B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/04 Electrically-operated educational appliances with audible presentation of the material to be studied

Abstract

The embodiment of the invention relates to the technical field of learning equipment, and discloses a dictation result detection method and learning equipment, wherein the method comprises the following steps: acquiring a handwriting modification mark used by a user in a writing process for modifying writing contents; establishing a matching relationship between the handwritten modification mark and a standard modification mark, wherein the standard modification mark is a modification symbol which is recorded in standard editing and correcting data and is used for modifying the written content; according to the matching relation, determining a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user from the standard modification marks; modifying the dictation answer according to the modification mode corresponding to the target standard modification mark to determine a target dictation answer; and correcting the target dictation answer according to the standard dictation answer to determine and output a correction result. By implementing the embodiment of the invention, the accuracy of dictation detection can be improved.

Description

Dictation result detection method and learning equipment
Technical Field
The invention relates to the technical field of learning equipment, in particular to a dictation result detection method and learning equipment.
Background
At present, most of learning devices in the market are equipped with a dictation learning function, and the traditional learning devices can identify images of dictation answers uploaded by users after dictation is finished so as to judge whether the dictation answers written by the users are matched with standard answers.
In practice, it is found that the writing content of the user often contains modification marks handwritten by the user (such as adding marks, deleting marks, and the like), and the conventional learning device cannot recognize the handwritten modification marks, which affects the recognition and proofreading of the dictation answers by the learning device and is not conducive to improving the accuracy of dictation detection.
Disclosure of Invention
The embodiment of the invention discloses a dictation result detection method and learning equipment, which can improve the accuracy of dictation detection.
The first aspect of the embodiments of the present invention discloses a method for detecting dictation results, including:
acquiring a handwriting modification mark used by a user in a writing process for modifying writing contents;
establishing a matching relation between the handwriting modification mark and a standard modification mark, wherein the standard modification mark is a modification symbol recorded in standard editing proofreading data and used for modifying the writing content;
according to the matching relation, determining, from among the standard modification marks, a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user;
modifying the dictation answer according to a modification mode corresponding to the target standard modification mark to determine a target dictation answer;
and correcting the target dictation answer according to the standard dictation answer to determine and output a correction result.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before determining, according to the matching relationship, a target standard modification mark corresponding to a target handwritten modification mark in the dictation answers uploaded by the user in the standard modification marks, the method further includes:
acquiring environmental sound information around learning equipment;
judging whether the environmental sound information is white noise; if it is white noise, judging whether the decibel value of the white noise is larger than or equal to a preset decibel threshold value;
and if the decibel value of the white noise is greater than or equal to the preset decibel threshold value, determining that the ambient environment of the learning equipment is noisy, and outputting guide information to guide a user to listen to the dictation content played by the learning equipment through an earphone.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after determining that the ambient environment of the learning device is noisy and before outputting guidance information to guide a user to listen to the dictation content played by the learning device using headphones, the method further includes:
starting a Bluetooth function of the learning equipment so as to establish communication connection between the learning equipment and the earphone with the Bluetooth function started;
sending a starting instruction to the earphone so that the earphone starts an infrared human body induction function, and judging whether a human body is induced or not through the infrared human body induction function of the earphone;
and if the human body is not sensed, executing the step of outputting the guide information to guide the user to listen to the dictation content played by the learning equipment by using an earphone.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after determining that the ambient environment of the learning device is noisy and before outputting guidance information to guide a user to listen to the dictation content played by the learning device using headphones, the method further includes:
shooting the facial image information of the user through a camera module of the learning equipment;
judging whether expression features representing user confusion exist in the face image information of the user or not;
and if the expressive features which show that the user is confused exist, executing the step of outputting the guide information to guide the user to listen to the dictation contents played by the learning equipment by using the earphone.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after determining that the ambient environment of the learning device is noisy, the method further includes:
acquiring the instant position of the learning equipment;
determining a learning area closest to the instant position according to the instant position of the learning equipment;
planning a navigation route according to the instant position and the position corresponding to the learning area;
outputting the navigation route to guide the user to move to the learning area for learning.
A second aspect of the embodiments of the present invention discloses a learning apparatus, including:
the acquisition unit is used for acquiring a handwriting modification mark used by a user in the writing process for modifying the writing content;
the establishing unit is used for establishing a matching relation between the handwriting modification mark and a standard modification mark, and the standard modification mark is a modification symbol which is recorded in standard editing and proofreading data and is used for modifying the writing content;
a first determining unit, configured to determine, according to the matching relationship, a target standard modification mark corresponding to a target handwritten modification mark in the dictation answers uploaded by the user from among the standard modification marks;
the modification unit is used for modifying the dictation answer according to a modification mode corresponding to the target standard modification mark so as to determine a target dictation answer;
and the first output unit is used for correcting the target dictation answer according to the standard dictation answer so as to determine and output a correction result.
As an alternative implementation, in the second aspect of the embodiment of the present invention, the learning apparatus further includes:
the first obtaining unit is used for obtaining environmental sound information around the learning equipment before the first determining unit determines a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user from the standard modification marks according to the matching relation;
the first judging unit is used for judging whether the environmental sound information is white noise;
the second judging unit is used for judging whether the decibel value of the white noise is greater than or equal to a preset decibel threshold value or not when the first judging unit judges that the environmental sound information is the white noise;
and the second output unit is used for determining that the ambient environment of the learning equipment is noisy when the second judgment unit judges that the decibel value of the white noise is greater than or equal to a preset decibel threshold value, and outputting guide information to guide a user to listen to the dictation content played by the learning equipment through an earphone.
As an alternative implementation, in the second aspect of the embodiment of the present invention, the learning apparatus further includes:
the starting unit is used for starting the Bluetooth function of the learning equipment after the second output unit determines that the ambient environment of the learning equipment is noisy and before the second output unit outputs guide information to guide a user to listen to the dictation content played by the learning equipment through an earphone, so that the learning equipment and the earphone with the started Bluetooth function are in communication connection;
the transmitting unit is used for transmitting a starting instruction to the earphone so as to enable the earphone to start an infrared human body induction function;
the third judging unit is used for judging whether a human body is sensed or not through the infrared human body sensing function of the earphone;
and the second output unit is specifically configured to output guidance information to guide a user to listen to the dictation content played by the learning device using an earphone when the third determination unit determines that the human body is not sensed.
As an alternative implementation, in the second aspect of the embodiment of the present invention, the learning apparatus further includes:
the shooting unit is used for shooting the facial image information of the user through the camera module of the learning equipment after the second output unit determines that the ambient environment of the learning equipment is noisy and before the second output unit outputs guide information to guide the user to listen to the dictation content played by the learning equipment through an earphone;
a fourth judging unit configured to judge whether or not an expressive feature indicating user confusion exists in the face image information of the user;
and the second output unit is specifically configured to output guide information to guide the user to listen to the dictation content played by the learning device using an earphone when the fourth determination unit determines that the facial image information of the user includes an expressive feature indicating user confusion.
As an alternative implementation, in the second aspect of the embodiment of the present invention, the learning apparatus further includes:
the second acquisition unit is used for acquiring the instant position of the learning equipment after the second output unit determines that the ambient environment of the learning equipment is noisy;
the second determining unit is used for determining a learning area closest to the instant position according to the instant position of the learning equipment;
the planning unit is used for planning a navigation route according to the instant position and the position corresponding to the learning area;
a third output unit for outputting the navigation route to guide the user to move to the learning area for learning.
A third aspect of an embodiment of the present invention discloses a learning apparatus, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the dictation result detection method disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium, which stores a computer program, wherein the computer program enables a computer to execute the method for detecting a dictation result disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product, which, when running on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect of the embodiments of the present invention.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, and when the computer program product runs on a computer, the computer is caused to perform part or all of the steps of any one of the methods in the first aspect of the embodiments of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the learning equipment can collect the handwriting modification marks used by the user in the writing process for modifying the writing content; establishing a matching relationship between the handwritten modification mark and a standard modification mark, wherein the standard modification mark is a modification symbol which is recorded in standard editing and correcting data and is used for modifying the written content; determining a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user in the standard modification marks according to the matching relation; modifying the dictation answer according to the modification mode corresponding to the target standard modification mark to determine a target dictation answer; and correcting the target dictation answer according to the standard dictation answer to determine and output a correction result. By implementing the embodiment of the invention, the learning equipment can collect the handwritten modification marks used in the ordinary writing process of the user and establish a matching relation between the handwritten modification marks and the standard modification marks, and because each standard modification mark corresponds to a writing content modification mode, the learning equipment can modify the dictation answer of the user according to the modification mode corresponding to the standard modification mark so as to determine the target dictation answer which can be identified by the learning equipment, thereby improving the accuracy of dictation detection.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for detecting dictation results according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another dictation result detection method disclosed in the embodiments of the present invention;
FIG. 3 is a schematic structural diagram of a learning device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another learning device disclosed in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", "third" and "fourth" etc. in the description and claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a dictation result detection method and learning equipment, which can improve the accuracy of dictation detection.
The technical solution of the present invention will be described in detail with reference to specific examples.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for detecting dictation results according to an embodiment of the present invention. As shown in fig. 1, the method for detecting dictation result may include the following steps:
101. the learning device collects handwriting modification indicia used by the user during writing to modify the written content.
In this embodiment of the present invention, the learning device may be a learning tablet, a learning machine, a learning mobile phone, a point reading machine, a teaching machine, a mobile tablet, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a television, or the like, which is not limited in this embodiment of the present invention.
In the embodiment of the present invention, the learning device may collect a large number of handwritten dictation answers uploaded by the user, and collect a handwritten modification mark used by the user to modify the written content in the dictation answers, for example: add marks, delete marks, or replace marks, etc. It should be noted that: the handwriting modification marks can be written by a user imitating standard modification marks in standard editing proofreading data, and the handwriting modification marks and the standard modification marks can be different, but do not influence the judgment of the user on the modification modes corresponding to the handwriting modification marks.
102. The learning device establishes a matching relationship for the handwritten modification marks and the standard modification marks.
In the embodiment of the invention, the standard modification marks are modification symbols recorded in the standard editing and proofreading data and used for modifying the writing content, so the learning device can collect a large number of standard modification marks from the standard editing and proofreading data and can establish a matching relationship between each handwritten modification mark and the standard modification mark that has the same action position and modification effect. For example, a matching relationship may be established between a handwritten deletion mark used to delete content and the standard deletion mark.
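For illustration, a minimal Python sketch of this matching step follows, assuming each mark is described by an action position and a modification effect; the attribute names and mark identifiers are hypothetical and not taken from the patent.

```python
# Minimal sketch: match handwritten marks to standard marks by action position
# and modification effect. All identifiers below are illustrative assumptions.
STANDARD_MARKS = [
    {"id": "std_delete", "position": "over_text", "effect": "delete"},
    {"id": "std_insert", "position": "between_chars", "effect": "insert"},
    {"id": "std_replace", "position": "over_text", "effect": "replace"},
]

def build_matching_relation(handwritten_marks):
    """Match each handwritten mark to the standard mark with the same action
    position and modification effect."""
    relation = {}
    for hw in handwritten_marks:
        for std in STANDARD_MARKS:
            if hw["position"] == std["position"] and hw["effect"] == std["effect"]:
                relation[hw["id"]] = std["id"]
                break
    return relation

# Example: the user's handwritten strike-through is used to delete content.
handwritten = [{"id": "hw_strike", "position": "over_text", "effect": "delete"}]
print(build_matching_relation(handwritten))  # {'hw_strike': 'std_delete'}
```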
103. And the learning equipment determines a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user from the standard modification marks according to the matching relation.
In the embodiment of the present invention, the learning device may recognize the handwriting modification marks in the dictation answer uploaded by the user through image processing technology (for example, Optical Character Recognition (OCR), image segmentation, and the like). OCR technology converts the text of bills, newspapers, books, manuscripts and other printed matter into image information by optical input means such as scanning, and then converts that image information into computer-usable text by character recognition.
Further, the learning device may determine, for each target handwritten modification mark, a corresponding target standard modification mark according to the established matching relationship.
As an optional implementation manner, the learning device may further obtain environmental sound information around the learning device before determining, from the standard modification marks, a target standard modification mark corresponding to a target handwritten modification mark in the dictation answers uploaded by the user according to the matching relation; judge whether the ambient sound information around the learning device is white noise; if it is white noise, judge whether the decibel value of the white noise is greater than or equal to a preset decibel threshold value; and if the decibel value of the white noise is greater than or equal to the preset decibel threshold value, determine that the ambient environment of the learning device is noisy, and output guide information to guide the user to listen to the dictation content played by the learning device through an earphone.
It should be noted that: the learning device may have built-in microphone means (alternatively referred to as a microphone) which may convert sound signals into electrical signals for processing by a processor of the learning device, i.e. the learning device may collect ambient sound information around the learning device by means of the built-in microphone means.
It needs to be further explained that white noise means that the power of the frequency components in a section of sound is uniform over the whole audible range (0-20 kHz). Since the human ear is sensitive to high frequencies, such sound is perceived as a loud rustling noise; for example, the speech of pedestrians in a shopping mall or the engine sound of vehicles in a station may be white noise. The learning device can therefore judge whether the ambient sound information around it is white noise according to the characteristic that the power of the frequency components of white noise is uniform over the whole audible range (0-20 kHz). In addition, the specific value of the preset decibel threshold may be set by a developer according to a large amount of development data, and the embodiment of the present invention is not limited in this respect.
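A minimal sketch of this check is given below, assuming one second of raw microphone samples in a NumPy array; the band-uniformity tolerance, the calibration offset and the 60 dB threshold are assumed placeholder values, not taken from the patent.

```python
# Minimal sketch: decide "white noise" from a roughly uniform power spectrum and
# compare a rough loudness estimate against an assumed preset threshold.
import numpy as np

PRESET_DB_THRESHOLD = 60.0  # assumed value

def is_noisy_white_noise(samples, sample_rate=44100, n_bands=20):
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    audible = spectrum[(freqs > 20) & (freqs <= 20000)]   # audible band, excluding DC
    band_power = np.array([b.mean() for b in np.array_split(audible, n_bands)])
    # White noise: per-band power is roughly uniform across the audible range.
    uniform = band_power.std() / band_power.mean() < 0.2
    # Rough, uncalibrated loudness estimate from the RMS amplitude.
    db_value = 20 * np.log10(np.sqrt(np.mean(samples ** 2)) + 1e-12) + 94
    return uniform and db_value >= PRESET_DB_THRESHOLD

noise = np.random.randn(44100) * 0.05  # one second of synthetic white noise
print(is_noisy_white_noise(noise))     # True: guide the user to use an earphone
```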
By implementing the method, the learning device can automatically guide the user to listen to the dictation content played by the learning device by using the earphone when judging that the surrounding environment of the learning device is noisy, so that the dictation efficiency of the user is improved, and the use experience of the user is also improved.
As an optional implementation manner, after the learning device determines that the surrounding environment of the learning device is noisy and before the learning device outputs the guiding information to guide the user to listen to the dictation content played by the learning device using the earphone, the bluetooth function of the learning device may be turned on, so that the learning device establishes a communication connection with the earphone with the bluetooth function turned on; sending a starting instruction to the earphone so that the earphone starts an infrared human body induction function, and judging whether a human body is induced or not through the infrared human body induction function of the earphone; and if the human body is not sensed, executing a step of outputting guide information to guide the user to listen to the dictation content played by the learning equipment by using the earphone.
It should be noted that the learning device and the earphone may each have a built-in Bluetooth module for performing the Bluetooth function. Bluetooth is a wireless technology standard that enables short-distance data exchange (using UHF radio waves in the 2.4-2.485 GHz ISM band) among fixed devices, mobile devices and building personal area networks, so the learning device and the earphone can establish a communication connection via Bluetooth.
It needs to be further explained that the earphone may also have a built-in thermal infrared human body sensor, which detects a human body signal when a human body is within a certain range of the sensor and can thus be used to perform the infrared human body sensing function. It can be understood that, when the infrared human body sensing function of the earphone does not sense a human body, the user is not wearing the earphone on the ear, and the learning device can then output guide information to guide the user to use the earphone to listen to the dictation content played by the learning device.
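The following sketch shows this control flow under assumed interfaces: the BluetoothHeadset class and its methods are hypothetical stand-ins, not a real Bluetooth or infrared-sensor API.

```python
# Sketch of the wear check with simulated hardware interfaces.
class BluetoothHeadset:
    def __init__(self, worn=False):
        self.worn = worn                 # simulated infrared sensing result
    def connect(self):
        return True                      # pretend Bluetooth pairing succeeded
    def enable_infrared_sensing(self):
        return True                      # pretend the start instruction was accepted
    def senses_human_body(self):
        return self.worn

def guide_headset_use(headset):
    if not headset.connect():
        return "no earphone available"
    headset.enable_infrared_sensing()
    if not headset.senses_human_body():
        # Earphone is not on the ear: output the guide information.
        return "Please wear the earphone to listen to the dictation content."
    return "earphone already worn"

print(guide_headset_use(BluetoothHeadset(worn=False)))
```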
By implementing the method, the learning device can judge whether the user wears the earphone on the ear through the earphone in communication connection with the learning device, and guides the user to wear the earphone when judging that the user does not wear the earphone on the ear, so that the intelligent degree of the learning device is improved.
As another optional implementation, after the learning device determines that the surrounding environment of the learning device is noisy and before the learning device outputs the guiding information to guide the user to listen to the dictation content played by the learning device through the earphone, the learning device may further capture facial image information of the user through a camera module of the learning device; judging whether expression features representing user confusion exist in the face image information of the user or not; and if the expressive features indicating that the user is confused exist, outputting guide information to guide the user to listen to the dictation contents played by the learning equipment through the earphones.
It should be noted that the learning device may have a built-in camera, which may be a front camera or a rear camera; the embodiment of the present invention is not limited in this respect. Expression features that represent user confusion may include, but are not limited to, frowning, squinting and a tightly closed mouth. When the learning device determines that the facial image information of the user includes one or more of these expression features, it can infer that the user may not be hearing the dictation content played by the learning device clearly, and can then output guide information to guide the user to use the earphone to listen to the dictation content played by the learning device.
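A minimal sketch of the expression check follows, assuming an upstream face-analysis step already yields boolean expression flags; the flag names and the rule that any single flag counts as confusion are assumptions.

```python
# Minimal sketch: flag confusion if any assumed expression feature is present.
CONFUSION_FEATURES = {"frowning", "squinting", "mouth_tightly_closed"}

def user_looks_confused(detected_features):
    """Return True if any expression feature that represents confusion is present."""
    return bool(CONFUSION_FEATURES & set(detected_features))

if user_looks_confused({"frowning", "neutral_gaze"}):
    print("Guide the user to listen to the dictation content with the earphone.")
```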
By implementing the method, the learning device can determine, from the expression features of the user, whether the user is having trouble hearing the dictation content it plays, and can guide the user to wear the earphone when this is the case, thereby improving the degree of intelligence of the learning device.
104. And the learning equipment modifies the dictation answer according to the modification mode corresponding to the target standard modification mark so as to determine the target dictation answer.
In the embodiment of the invention, the learning device can first convert each target handwriting modification mark in the dictation answer into the corresponding target standard modification mark; because the modification mode corresponding to each target standard modification mark can be looked up in the standard editing and proofreading data, the learning device can modify the dictation answer according to the position acted on by the target standard modification mark and the corresponding modification mode, so as to determine the target dictation answer.
For example, assuming that the target standard modification mark is a deletion mark and the corresponding modification mode is to delete the marked content, the learning device may delete the content marked by the deletion mark; for another example, assuming that the target standard modification mark is a replacement mark and the corresponding modification mode is to replace the marked content, the learning device may replace the content marked by the replacement mark with the replacement content; by analogy, the learning device may modify the dictation answer in the modification mode corresponding to each target standard modification mark in the dictation answer to determine the target dictation answer.
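A sketch of applying modification modes is shown below; it assumes the dictation answer has been recognized as text and that each target standard mark carries the character range it acts on, both of which are assumptions made for illustration (the patent itself operates on handwriting images).

```python
# Sketch: apply delete/replace/insert modification modes to a recognized answer.
def apply_marks(answer, marks):
    chars = list(answer)
    # Apply from right to left so earlier indices stay valid after each edit.
    for mark in sorted(marks, key=lambda m: m["start"], reverse=True):
        if mark["mode"] == "delete":
            del chars[mark["start"]:mark["end"]]
        elif mark["mode"] == "replace":
            chars[mark["start"]:mark["end"]] = list(mark["new_text"])
        elif mark["mode"] == "insert":
            chars[mark["start"]:mark["start"]] = list(mark["new_text"])
    return "".join(chars)

marks = [{"mode": "delete", "start": 2, "end": 3}]   # delete the extra letter
print(apply_marks("appple", marks))                   # "apple"
```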
105. And the learning equipment corrects the target dictation answer according to the standard dictation answer so as to determine and output a correction result.
In the embodiment of the invention, a standard dictation answer can be stored in the learning device for each section of dictation content stored in the learning device; after the target dictation answer is determined, the learning device can compare the standard dictation answer with the image of the target dictation answer through an image comparison technology, so as to correct the target dictation answer and output the determined correction result to the user.
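A simplified sketch of the correction step follows: the patent compares images of the answers, whereas this example compares recognized words, an assumption made only for illustration.

```python
# Simplified sketch: mark each written word correct or incorrect against the
# standard dictation answer (word-level comparison is an assumption).
def correct_dictation(target_answer_words, standard_answer_words):
    return [
        {"expected": expected, "written": written, "correct": written == expected}
        for written, expected in zip(target_answer_words, standard_answer_words)
    ]

for item in correct_dictation(["apple", "banan"], ["apple", "banana"]):
    print(item)
```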
As an optional implementation manner, the learning device may further turn on the Bluetooth function to establish a communication connection with an intelligent bracelet worn by the user; collect a first physiological characteristic of the user through the intelligent bracelet worn by the user; and, when the learning device judges that the first physiological characteristic of the user indicates that the user is in a fatigue state, output prompt information to prompt the user to have a rest.
It should be noted that the intelligent bracelet worn by the user may have a built-in blood pressure sensor and a built-in heart rate sensor for collecting the physiological characteristics of the user. When the first blood pressure value collected by the blood pressure sensor is higher than a preset first blood pressure threshold and the first heart rate value collected by the heart rate sensor is higher than a preset first heart rate threshold, it can be determined that the user is in a fatigue state. It can be understood that, when the human body is fatigued, the heartbeat speeds up and the blood pressure rises, so when the blood pressure and heart rate of the user collected by the intelligent bracelet exceed the normal range, it can be determined that the user is in a fatigue state, and prompt information can then be output to prompt the user to have a rest.
By implementing the method, the learning equipment can acquire the physiological characteristics of the user through the intelligent bracelet worn by the user, and prompt the user to have a rest when the user is judged to be in a fatigue state according to the physiological characteristics of the user, so that the use experience of the user is improved.
As another optional implementation, after the learning device outputs the prompt information to prompt the user to take a rest, the learning device may further acquire a second physiological characteristic of the user through an intelligent bracelet worn by the user; when the learning device judges that the second physiological characteristic of the user represents that the user is in a sleep state, the learning device can be controlled to enter a standby mode so as to save energy consumption of the learning device.
It should be noted that the intelligent bracelet worn by the user may have a built-in blood pressure sensor and a built-in heart rate sensor for collecting the physiological characteristics of the user. When the second blood pressure value collected by the blood pressure sensor is lower than a preset second blood pressure threshold and the second heart rate value collected by the heart rate sensor is lower than a preset second heart rate threshold, it can be determined that the user is in a sleep state. It can be understood that, during sleep, the heart rate of the human body decreases and the blood pressure stays at a low level, so when the blood pressure and heart rate of the user collected by the intelligent bracelet fall below the lower limit of the normal range, it can be determined that the user is in a sleep state, and the learning device can then be controlled to enter the standby mode to save energy consumption of the learning device.
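The following sketch combines the fatigue check described earlier with the sleep check just described; every numeric threshold is an assumed placeholder, since the patent only refers to preset thresholds.

```python
# Combined sketch: classify the user's state from bracelet readings.
FATIGUE_BP, FATIGUE_HR = 135, 100   # assumed upper thresholds
SLEEP_BP, SLEEP_HR = 100, 55        # assumed lower thresholds

def classify_user_state(blood_pressure, heart_rate):
    if blood_pressure > FATIGUE_BP and heart_rate > FATIGUE_HR:
        return "fatigued"    # prompt the user to have a rest
    if blood_pressure < SLEEP_BP and heart_rate < SLEEP_HR:
        return "sleeping"    # switch the learning device to standby mode
    return "normal"

print(classify_user_state(140, 110))   # fatigued
print(classify_user_state(95, 50))     # sleeping
```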
By implementing the method, the learning device can also control the learning device to enter the standby mode when judging that the user is in the sleep state according to the physiological characteristics of the user, so that the energy consumption of the learning device is saved.
As another optional implementation, after the learning device outputs the prompt information to prompt the user to have a rest, a plurality of rest durations can be generated for the user to select from; a target rest duration selected by the user from the plurality of rest durations is received; when the learning device judges that the second physiological characteristic of the user indicates that the user is in a sleep state, the learning device can acquire the current first time point as a falling-asleep time point, and determine a getting-up time point according to the falling-asleep time point and the target rest duration; the learning device can output prompt information to prompt the user to get up for learning when judging that the current second time point is the same as the getting-up time point.
For example, if the learning device acquires that the user falls asleep at 13:00 and the target rest duration selected by the user is 30 minutes, the learning device can determine, according to the falling-asleep time point and the target rest duration, that the getting-up time point is 13:30; further, the learning device may output prompt information at 13:30 to prompt the user to get up and study.
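A minimal sketch of this reminder calculation follows; the rest-duration options and the clock source are assumptions, and only the rule that the getting-up time equals the falling-asleep time plus the selected rest duration comes from the description above.

```python
# Sketch: compute the getting-up time point from the falling-asleep time point
# and the user-selected rest duration.
from datetime import datetime, timedelta

REST_OPTIONS_MINUTES = [15, 30, 45, 60]   # assumed choices offered to the user

def getting_up_time(fall_asleep_time, chosen_minutes):
    return fall_asleep_time + timedelta(minutes=chosen_minutes)

asleep = datetime(2019, 7, 29, 13, 0)                   # falling-asleep time 13:00
print(getting_up_time(asleep, 30).strftime("%H:%M"))    # 13:30
```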
By implementing the method, the learning equipment can intelligently plan the rest time for the user and remind the user to learn on time, so that the learning efficiency of the user is improved, and the user experience is also improved.
It can be seen that, by implementing the method described in fig. 1, the learning device may collect the handwritten modification marks used by the user in the ordinary writing process, and establish a matching relationship between the handwritten modification marks and the standard modification marks, and because each standard modification mark corresponds to a writing content modification manner, the learning device may modify the dictation answer of the user according to the modification manner corresponding to the standard modification mark, so as to determine the target dictation answer that the learning device can recognize, and further improve the accuracy of dictation detection.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another dictation result detection method disclosed in the embodiment of the present invention. As shown in fig. 2, the method for detecting dictation result may include the following steps:
201-202; step 201 to step 202 are the same as step 101 to step 102 in the first embodiment, and are not described herein again.
203. The learning apparatus acquires environmental sound information around the learning apparatus.
204. The learning equipment judges whether the environmental sound information around the learning equipment is white noise; if yes, go to step 205; if not, the flow is ended.
205. The learning equipment judges whether the decibel value of the white noise is greater than or equal to a preset decibel threshold value or not; if yes, go to step 206; if not, the flow is ended.
206. The learning device determines that the surrounding environment of the learning device is noisy and obtains the instant location of the learning device.
In the embodiment of the present invention, a positioning module, such as a Global Positioning System (GPS) module, may be built in the learning device, and the GPS module is taken as an example in the embodiment of the present invention, which should not be construed as a limitation to the present invention.
A GPS module has high integration, high sensitivity and low power consumption; it can track up to 20 satellites simultaneously, obtain a position fix quickly, and update navigation data at 1 Hz. Such modules are widely used in battery-operated navigation systems such as palmtop computers, personal digital assistants, navigators, mobile phones and computers. The learning device can obtain its instant position through the positioning module.
207. The learning device determines a learning area closest to the instant position of the learning device according to the instant position of the learning device.
In the embodiment of the present invention, the learning area may include, but is not limited to, a library, a study room, and the like; the learning device may determine the learning area closest to its instant position on the map of a map application.
208. And the learning equipment plans a navigation route according to the instant position of the learning equipment and the position corresponding to the learning area.
In the embodiment of the invention, the learning device can plan the navigation route by taking its instant position as the starting point of the navigation route and the position corresponding to the learning area as the end point; in addition, the learning device can also output road condition information along the navigation route, such as traffic light information and overpass information, to assist the user in moving to the learning area.
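A sketch of choosing the nearest learning area from the instant position is shown below; the example coordinates are made up, and actual route planning would be delegated to a map or navigation service rather than implemented here.

```python
# Sketch: pick the nearest learning area by great-circle distance from the
# device's instant position (latitude, longitude).
import math

LEARNING_AREAS = {"city library": (22.543, 114.057), "study room A": (22.549, 114.062)}

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_learning_area(instant_position):
    return min(LEARNING_AREAS.items(), key=lambda kv: haversine_km(instant_position, kv[1]))

name, destination = nearest_learning_area((22.545, 114.059))
print(f"Plan a route from the instant position to {name} at {destination}")
```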
209. The learning device outputs a navigation route to guide the user to move to a learning area for learning.
210 to 212; step 210 to step 212 are the same as step 103 to step 105 in the first embodiment, and are not described herein again.
It can be seen that, compared with the implementation of the method described in fig. 1, with the implementation of the method described in fig. 2, when it is determined that the environment around the learning device is noisy, the learning device can automatically find a nearby learning area for the user, and output a navigation route to guide the user to move to the learning area for dictation learning, thereby improving the user experience of the user.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a learning device according to an embodiment of the present invention. As shown in fig. 3, the learning apparatus may include:
an acquisition unit 301, configured to acquire a handwriting modification mark used by a user in a writing process to modify writing contents;
an establishing unit 302, configured to establish a matching relationship between the handwritten modification mark and a standard modification mark, where the standard modification mark is a modification symbol recorded in standard editing proofreading data and used for modifying the written content;
a first determining unit 303, configured to determine, according to the matching relationship, a target standard modification mark corresponding to a target handwritten modification mark in the dictation answers uploaded by the user from among the standard modification marks;
a modifying unit 304, configured to modify the dictation answer according to a modification manner corresponding to the target standard modification mark, so as to determine a target dictation answer;
the first output unit 305 is configured to modify the target dictation answer according to the standard dictation answer to determine and output a modification result.
As an optional implementation, the learning apparatus may further include:
a first obtaining unit 306, configured to obtain ambient sound information around the learning device before the first determining unit 303 determines, according to the matching relationship, a target standard modification mark corresponding to a target handwritten modification mark in the dictation answers uploaded by the user from the standard modification marks;
a first judging unit 307 configured to judge whether the environmental sound information is white noise;
a second determining unit 308, configured to determine whether a decibel value of the white noise is greater than or equal to a preset decibel threshold when the first determining unit 307 determines that the ambient sound information is the white noise;
the second output unit 309 is configured to determine that the surrounding environment of the learning device is noisy when the second determining unit 308 determines that the decibel value of the white noise is greater than or equal to the preset decibel threshold, and output guidance information to guide the user to listen to the dictation content played by the learning device using the earphone.
By implementing the method, the learning device can automatically guide the user to listen to the dictation content played by the learning device by using the earphone when judging that the surrounding environment of the learning device is noisy, so that the dictation efficiency of the user is improved, and the use experience of the user is also improved.
As an optional implementation, the learning apparatus may further include:
the starting unit 310 is configured to start the bluetooth function of the learning device after the second output unit 309 determines that the surrounding environment of the learning device is noisy and before the second output unit outputs the guidance information to guide the user to listen to the dictation content played by the learning device using the headset, so that the learning device and the headset with the bluetooth function started establish communication connection;
a sending unit 311, configured to send a start instruction to the earphone, so that the earphone starts an infrared human body sensing function;
a third judging unit 312, configured to judge whether a human body is sensed through an infrared human body sensing function of the earphone;
and the second output unit 309 is specifically configured to output guidance information to guide the user to listen to the dictation content played by the learning device using the earphone when the third determining unit 312 determines that the human body is not sensed.
By implementing the method, the learning device can judge whether the user wears the earphone on the ear through the earphone in communication connection with the learning device, and guides the user to wear the earphone when judging that the user does not wear the earphone on the ear, so that the intelligent degree of the learning device is improved.
As an optional implementation, the learning apparatus may further include:
the shooting unit 313 is used for shooting the facial image information of the user through the shooting module of the learning device after the second output unit 309 determines that the surrounding environment of the learning device is noisy and before the second output unit outputs the guide information to guide the user to listen to the dictation content played by the learning device through the earphone;
a fourth judging unit 314 for judging whether an expressive feature indicating confusion of the user exists in the face image information of the user;
and the second output unit 309 is specifically configured to output guidance information to guide the user to listen to the dictation content played by the learning apparatus using the headphones when the fourth determination unit 314 determines that the expressive features indicating user confusion exist in the face image information of the user.
By implementing the method, the learning device can determine, from the expression features of the user, whether the user is having trouble hearing the dictation content it plays, and can guide the user to wear the earphone when this is the case, thereby improving the degree of intelligence of the learning device.
As an optional implementation, the learning apparatus may further include:
a second obtaining unit 315, configured to obtain an instant position of the learning device after the second output unit 309 determines that the surrounding environment of the learning device is noisy;
a second determining unit 316, configured to determine, according to the instant position of the learning device, a learning area closest to the instant position of the learning device;
the planning unit 317 is used for planning a navigation route according to the instant position of the learning device and the position corresponding to the learning area;
a third output unit 318 for outputting a navigation route to guide the user to move to the learning area for learning.
By implementing the method, the learning device can automatically find a nearby learning area for the user when judging that the environment around the learning device is noisy, and outputs the navigation route to guide the user to move to the learning area for dictation learning, so that the use experience of the user is improved.
It can be seen that, with the learning device described in fig. 3, the handwritten modification marks used in the ordinary writing process of the user can be collected, and a matching relationship is established between the handwritten modification marks and the standard modification marks, and because each standard modification mark corresponds to one writing content modification mode, the learning device can modify the dictation answer of the user according to the modification mode corresponding to the standard modification mark to determine the target dictation answer that can be recognized by the learning device, thereby improving the accuracy of dictation detection.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of another learning apparatus according to an embodiment of the present invention. As shown in fig. 4, the learning apparatus may include:
a memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
the processor 402 calls the executable program code stored in the memory 401 to execute the method for detecting the dictation result in any one of fig. 1 or fig. 2.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute the method for detecting the dictation result in any one of the figures 1 or 2.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary and alternative embodiments, and that the acts and modules illustrated are not required in order to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer accessible memory. Based on such understanding, the technical solution of the present invention, which is a part of or contributes to the prior art in essence, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of each embodiment of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The method for detecting dictation results and the learning device disclosed in the embodiments of the present invention are described in detail above, and the principle and the implementation of the present invention are explained in the present document by applying specific examples, and the description of the above embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for detecting dictation results, the method comprising:
acquiring a handwriting modification mark used by a user in a writing process for modifying writing contents;
establishing a matching relation between the handwriting modification mark and a standard modification mark, wherein the standard modification mark is a modification symbol recorded in standard editing proofreading data and used for modifying the writing content;
according to the matching relation, determining, from among the standard modification marks, a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user;
modifying the dictation answer according to a modification mode corresponding to the target standard modification mark to determine a target dictation answer;
and correcting the target dictation answer according to the standard dictation answer to determine and output a correction result.
2. The method according to claim 1, wherein before determining, according to the matching relationship, a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user among the standard modification marks, the method further comprises:
acquiring environmental sound information around learning equipment;
judging whether the environmental sound information is white noise; if it is white noise, judging whether the decibel value of the white noise is larger than or equal to a preset decibel threshold value;
and if the decibel value of the white noise is greater than or equal to the preset decibel threshold value, determining that the ambient environment of the learning equipment is noisy, and outputting guide information to guide a user to listen to the dictation content played by the learning equipment through an earphone.
3. The method of claim 2, wherein after determining that the ambient environment of the learning equipment is noisy and before outputting the guide information to guide the user to listen to the dictation content played by the learning equipment using an earphone, the method further comprises:
starting a Bluetooth function of the learning equipment so as to establish communication connection between the learning equipment and the earphone with the Bluetooth function started;
sending a start instruction to the earphone so that the earphone enables an infrared human body sensing function, and judging, through the infrared human body sensing function of the earphone, whether a human body is sensed;
and if the human body is not sensed, executing the step of outputting the guide information to guide the user to listen to the dictation content played by the learning equipment by using an earphone.
4. The method of claim 2, wherein after determining that the ambient environment of the learning equipment is noisy and before outputting the guide information to guide the user to listen to the dictation content played by the learning equipment using an earphone, the method further comprises:
capturing facial image information of the user through a camera module of the learning equipment;
judging whether an expressive feature indicating user confusion exists in the facial image information of the user;
and if an expressive feature indicating that the user is confused exists, executing the step of outputting the guide information to guide the user to listen to the dictation content played by the learning equipment using an earphone.
5. The method of claim 2, wherein after determining that the ambient environment of the learning equipment is noisy, the method further comprises:
acquiring the instant position of the learning equipment;
determining a learning area closest to the instant position according to the instant position of the learning equipment;
planning a navigation route according to the instant position and the position corresponding to the learning area;
outputting the navigation route to guide the user to move to the learning area for learning.
6. A learning apparatus characterized by comprising:
the acquisition unit is used for acquiring a handwriting modification mark used by a user in the writing process for modifying the writing content;
the establishing unit is used for establishing a matching relation between the handwriting modification mark and a standard modification mark, and the standard modification mark is a modification symbol which is recorded in standard editing and proofreading data and is used for modifying the writing content;
a first determining unit, configured to determine, according to the matching relationship, a target standard modification mark corresponding to a target handwritten modification mark in the dictation answers uploaded by the user from among the standard modification marks;
the modification unit is used for modifying the dictation answer according to a modification mode corresponding to the target standard modification mark so as to determine a target dictation answer;
and the first output unit is used for correcting the target dictation answer according to the standard dictation answer so as to determine and output a correction result.
7. The learning apparatus according to claim 6, characterized in that the learning apparatus further comprises:
the first obtaining unit is used for obtaining environmental sound information around the learning equipment before the first determining unit determines a target standard modification mark corresponding to a target handwriting modification mark in the dictation answers uploaded by the user from the standard modification marks according to the matching relation;
the first judging unit is used for judging whether the environmental sound information is white noise;
the second judging unit is used for judging whether the decibel value of the white noise is greater than or equal to a preset decibel threshold value or not when the first judging unit judges that the environmental sound information is the white noise;
and the second output unit is used for determining that the ambient environment of the learning equipment is noisy when the second judgment unit judges that the decibel value of the white noise is greater than or equal to a preset decibel threshold value, and outputting guide information to guide a user to listen to the dictation content played by the learning equipment through an earphone.
8. The learning apparatus according to claim 7, characterized in that the learning apparatus further comprises:
the starting unit is used for starting the Bluetooth function of the learning equipment after the second output unit determines that the ambient environment of the learning equipment is noisy and before the second output unit outputs guide information to guide a user to listen to the dictation content played by the learning equipment through an earphone, so that the learning equipment and the earphone with the started Bluetooth function are in communication connection;
the transmitting unit is used for transmitting a start instruction to the earphone so as to enable the earphone to start an infrared human body sensing function;
the third judging unit is used for judging whether a human body is sensed or not through the infrared human body sensing function of the earphone;
and the second output unit is specifically configured to output guidance information to guide a user to listen to the dictation content played by the learning device using an earphone when the third determination unit determines that the human body is not sensed.
9. The learning apparatus according to claim 7, characterized in that the learning apparatus further comprises:
the shooting unit is used for capturing facial image information of the user through the camera module of the learning equipment after the second output unit determines that the ambient environment of the learning equipment is noisy and before the second output unit outputs guide information to guide the user to listen to the dictation content played by the learning equipment through an earphone;
a fourth judging unit configured to judge whether or not an expressive feature indicating user confusion exists in the face image information of the user;
and the second output unit is specifically configured to output guide information to guide the user to listen to the dictation content played by the learning device using an earphone when the fourth determination unit determines that the facial image information of the user includes an expressive feature indicating user confusion.
10. The learning apparatus according to claim 7, characterized in that the learning apparatus further comprises:
the second acquisition unit is used for acquiring the instant position of the learning equipment after the second output unit determines that the ambient environment of the learning equipment is noisy;
the second determining unit is used for determining a learning area closest to the instant position according to the instant position of the learning equipment;
the planning unit is used for planning a navigation route according to the instant position and the position corresponding to the learning area;
a third output unit for outputting the navigation route to guide the user to move to the learning area for learning.
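As a rough illustration of the ambient-sound check in claims 2 and 7, the sketch below estimates a decibel value for a frame of microphone samples, uses spectral flatness as a stand-in test for white noise, and compares the result with a preset decibel threshold before suggesting an earphone. The 60 dB threshold, the flatness cut-off, the calibration offset, and all function names are assumptions made for illustration only, not the patented method.

```python
# Illustrative sketch of the ambient-sound check; all constants are hypothetical.
import numpy as np

DECIBEL_THRESHOLD = 60.0  # hypothetical preset decibel threshold

def decibel_level(samples: np.ndarray, ref: float = 1.0) -> float:
    """Estimate the sound level of a frame of normalized audio samples."""
    rms = np.sqrt(np.mean(np.square(samples))) + 1e-12
    return 20.0 * np.log10(rms / ref) + 94.0  # rough dB SPL offset assumed for a calibrated mic

def looks_like_white_noise(samples: np.ndarray) -> bool:
    """Treat a frame with a nearly flat magnitude spectrum as white noise."""
    spectrum = np.abs(np.fft.rfft(samples)) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
    return flatness > 0.5  # hypothetical flatness cut-off

def should_suggest_earphone(samples: np.ndarray) -> bool:
    """Suggest an earphone only when white-noise-like sound exceeds the threshold."""
    return looks_like_white_noise(samples) and decibel_level(samples) >= DECIBEL_THRESHOLD

if __name__ == "__main__":
    frame = np.random.uniform(-0.5, 0.5, 16000)  # one second of noisy input at 16 kHz
    if should_suggest_earphone(frame):
        print("Environment is noisy: please listen to the dictation content with an earphone.")
```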
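Similarly, the nearest-learning-area selection of claims 5 and 10 can be pictured with a small sketch that compares the device's instant position against a catalogue of candidate learning areas by great-circle distance. The area names and coordinates below are made up, and the actual navigation route would be planned by a map or navigation service rather than by this code.

```python
# Illustrative sketch of selecting the learning area closest to the instant position.
from math import asin, cos, radians, sin, sqrt

# Hypothetical catalogue of quiet learning areas (name -> latitude, longitude).
LEARNING_AREAS = {
    "library reading room": (22.5431, 114.0579),
    "study lounge": (22.5500, 114.0650),
}

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def nearest_learning_area(instant_position: tuple[float, float]) -> str:
    """Pick the learning area closest to the device's instant position."""
    return min(LEARNING_AREAS,
               key=lambda name: haversine_km(instant_position, LEARNING_AREAS[name]))

print(nearest_learning_area((22.5445, 114.0590)))  # -> "library reading room"
```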
CN201910690048.2A 2019-07-29 2019-07-29 Dictation result detection method and learning equipment Active CN111081102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910690048.2A CN111081102B (en) 2019-07-29 2019-07-29 Dictation result detection method and learning equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910690048.2A CN111081102B (en) 2019-07-29 2019-07-29 Dictation result detection method and learning equipment

Publications (2)

Publication Number Publication Date
CN111081102A true CN111081102A (en) 2020-04-28
CN111081102B CN111081102B (en) 2022-03-25

Family

ID=70310126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910690048.2A Active CN111081102B (en) 2019-07-29 2019-07-29 Dictation result detection method and learning equipment

Country Status (1)

Country Link
CN (1) CN111081102B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861815A (en) * 2020-06-19 2020-10-30 北京国音红杉树教育科技有限公司 Method and device for evaluating memory level of user in word listening learning
CN111861370A (en) * 2020-06-19 2020-10-30 北京国音红杉树教育科技有限公司 Method and device for planning best review time of word listening

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10261049A (en) * 1997-03-18 1998-09-29 Fujitsu Ltd Character recognizing device
US20080279303A1 (en) * 2005-04-28 2008-11-13 Matsushita Electric Industrial Co., Ltd. Repetition-Dependent Mapping For Higher Order Modulation Schemes
CN101278337A (en) * 2005-07-22 2008-10-01 索福特迈克斯有限公司 Robust separation of speech signals in a noisy environment
CN102067153A (en) * 2008-04-03 2011-05-18 智思博公司 Multi-modal learning system
CN201594388U (en) * 2009-11-13 2010-09-29 宇龙计算机通信科技(深圳)有限公司 Terminal
CN102903136A (en) * 2012-09-28 2013-01-30 王平 Method and system for electronizing handwriting
CN103400512A (en) * 2013-07-16 2013-11-20 步步高教育电子有限公司 Learning assisting device and operating method thereof
CN103646582A (en) * 2013-12-04 2014-03-19 广东小天才科技有限公司 Method and device for prompting writing errors
CN105988567A (en) * 2015-02-12 2016-10-05 北京三星通信技术研究有限公司 Handwritten information recognition method and device
CN105187614A (en) * 2015-04-09 2015-12-23 深圳市金立通信设备有限公司 Terminal prompting method
CN105450859A (en) * 2015-11-11 2016-03-30 中国联合网络通信集团有限公司 Reminding method and apparatus for wearing earphone
CN106997223A (en) * 2016-01-25 2017-08-01 姜洪军 Mobile visual field
US20170308507A1 (en) * 2016-04-20 2017-10-26 Kyocera Document Solutions Inc. Image processing apparatus
CN106412188A (en) * 2016-10-13 2017-02-15 深圳市冠旭电子股份有限公司 Reminding method and apparatus
CN106599941A (en) * 2016-12-12 2017-04-26 西安电子科技大学 Method for identifying handwritten numbers based on convolutional neural network and support vector machine
CN107092497A (en) * 2017-03-15 2017-08-25 深圳市金立通信设备有限公司 The method to set up and device of terminal general parameter
CN107025614A (en) * 2017-03-20 2017-08-08 广东小天才科技有限公司 Teaching efficiency detection method, system and device in a kind of live video
CN107566604A (en) * 2017-07-12 2018-01-09 广东小天才科技有限公司 The control method and user terminal of a kind of prompting message
CN108469913A (en) * 2018-02-28 2018-08-31 北京小米移动软件有限公司 Change the method, apparatus and storage medium of input information
CN109064814A (en) * 2018-06-27 2018-12-21 深圳中兴网信科技有限公司 Examination question reads and makes comments method, examination question reads and makes comments system and computer readable storage medium
CN108897579A (en) * 2018-06-29 2018-11-27 联想(北京)有限公司 A kind of information processing method, electronic equipment and system
CN109948572A (en) * 2019-03-27 2019-06-28 联想(北京)有限公司 A kind of automatic marking method and system
CN110059450A (en) * 2019-05-25 2019-07-26 韶关市启之信息技术有限公司 A method of remind teacher to change classroom instruction speed

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FENG, LILING: "Discussion on Comment Issues in Mathematics Homework Correction", Jilin Education *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861815A (en) * 2020-06-19 2020-10-30 北京国音红杉树教育科技有限公司 Method and device for evaluating memory level of user in word listening learning
CN111861370A (en) * 2020-06-19 2020-10-30 北京国音红杉树教育科技有限公司 Method and device for planning best review time of word listening
CN111861815B (en) * 2020-06-19 2024-02-02 北京国音红杉树教育科技有限公司 Method and device for evaluating memory level of user in word listening learning
CN111861370B (en) * 2020-06-19 2024-02-06 北京国音红杉树教育科技有限公司 Word listening optimal review time planning method and device

Also Published As

Publication number Publication date
CN111081102B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN109558512B (en) Audio-based personalized recommendation method and device and mobile terminal
CN110826358B (en) Animal emotion recognition method and device and storage medium
CN110556127B (en) Method, device, equipment and medium for detecting voice recognition result
CN111933112B (en) Awakening voice determination method, device, equipment and medium
CN110972112B (en) Subway running direction determining method, device, terminal and storage medium
CN113099031B (en) Sound recording method and related equipment
CN111081102B (en) Dictation result detection method and learning equipment
CN111105788B (en) Sensitive word score detection method and device, electronic equipment and storage medium
CN111524501A (en) Voice playing method and device, computer equipment and computer readable storage medium
CN108735218A (en) voice awakening method, device, terminal and storage medium
CN110830368A (en) Instant messaging message sending method and electronic equipment
CN110910876A (en) Article sound searching device and control method, and voice control setting method and system
CN111743740A (en) Blind guiding method and device, blind guiding equipment and storage medium
CN111081275B (en) Terminal processing method and device based on sound analysis, storage medium and terminal
CN112667844A (en) Method, device, equipment and storage medium for retrieving audio
CN111341317B (en) Method, device, electronic equipment and medium for evaluating wake-up audio data
CN113220590A (en) Automatic testing method, device, equipment and medium for voice interaction application
CN112614507A (en) Method and apparatus for detecting noise
CN111652624A (en) Ticket buying processing method, ticket checking processing method, device, equipment and storage medium
CN112788174B (en) Intelligent retrieving method of wireless earphone and related device
CN112559794A (en) Song quality identification method, device, equipment and storage medium
CN111479005A (en) Volume adjusting method and electronic equipment
CN110491380A (en) Electricity-saving control method, intelligent terminal and the storage medium of intelligent terminal
CN115331672B (en) Device control method, device, electronic device and storage medium
CN110989963B (en) Wake-up word recommendation method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant