CN108399813B - Robot-based learning tutoring method and system, robot and handwriting equipment - Google Patents

Robot-based learning tutoring method and system, robot and handwriting equipment

Info

Publication number
CN108399813B
Authority
CN
China
Prior art keywords
robot
writing
handwriting
unit
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810420086.1A
Other languages
Chinese (zh)
Other versions
CN108399813A (en)
Inventor
朱向军 (Zhu Xiangjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201810420086.1A
Publication of CN108399813A
Application granted
Publication of CN108399813B
Legal status: Active

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Manipulator (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

A robot-based learning tutoring method and system, a robot, and a handwriting device are provided. The method comprises: the robot transmits a detection signal at a specific frequency and receives a feedback signal, sent by the handwriting device, in response to the detection signal; the robot obtains, according to the feedback signal, the writing point position of the handwriting device on a writing plane; the robot recognizes the content written by the handwriting device on the writing plane according to a plurality of continuously received writing point positions; and the robot performs writing judgment on the written content, obtains a judgment result, and outputs the judgment result. By implementing the embodiments of the invention, the handwritten content input with the handwriting device can be recognized and judged, which improves the learning tutoring effect.

Description

Robot-based learning tutoring method and system, robot and handwriting equipment
Technical Field
The invention relates to the technical field of robots, in particular to a learning tutoring method and system based on a robot, the robot and handwriting equipment.
Background
Robots for assisting children in learning are currently available on the market. If a child has a question while doing homework, the child can ask the robot by voice, and the robot provides corresponding prompt information to help the child solve the problem. In practice, however, such a robot cannot judge whether the child has understood the prompt information. If the child misunderstands the prompt information and writes an incorrect answer, the robot cannot provide further help, so the learning tutoring effect is poor.
Disclosure of Invention
The embodiment of the invention discloses a robot-based learning and tutoring method and system, a robot and handwriting equipment, which can identify and judge handwriting content input by the handwriting equipment, thereby improving the learning and tutoring effect.
The embodiment of the invention discloses a robot-based learning and tutoring method in the first aspect, which comprises the following steps:
the robot transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by the handwriting equipment;
the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal;
the robot identifies the writing content of the handwriting equipment on a certain writing plane according to a plurality of continuously received writing point positions;
and the robot performs writing judgment on the written content to obtain a judgment result and outputs the judgment result.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the robot transmits a detection signal according to a specific frequency, and receives a feedback signal of the detection signal sent by the handwriting device, including:
the robot emits light signals according to a specific frequency, at least two ultrasonic emission modules built in the robot are controlled to respectively emit first ultrasonic waves while emitting the light signals, and time sets and writing pressure information sent by the handwriting equipment are received, wherein the time sets comprise first time points when the light signals are received and second time points when each first ultrasonic wave is received;
and the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal, and the method comprises the following steps:
the robot judges whether the handwriting equipment has writing operation or not according to the writing pressure information, and if so, the robot calculates the time interval between the first time point and each second time point;
the robot determines a first relative distance between each ultrasonic wave transmitting module and the handwriting equipment according to each time interval;
and the robot determines the writing point position of the handwriting equipment on a certain writing plane by combining the known distance between every two ultrasonic emission modules and every first relative distance.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before the robot transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by the handwriting device, the method further includes:
the robot receives an input voice question, inquires and outputs prompt information corresponding to the voice question;
and after the robot performs writing judgment on the written content to obtain a judgment result and outputs the judgment result, the method further comprises the following steps:
and the robot determines the writing matching degree of the writing content and the standard content according to the judgment result, generates a writing suggestion corresponding to the writing matching degree and outputs and displays the writing suggestion.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before the robot transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by the handwriting device, the method further includes:
the robot controls a camera of the robot to shoot multiple continuous images, and judges whether a moving target in a camera viewing range is a human body or not by analyzing the multiple continuous images;
if the moving target is a human body, the robot moves and/or adjusts the view finding range of the camera so that the camera shoots the face of the human body, and the face is always in the view finding range of the camera;
the robot controls the camera to shoot a plurality of frames of face images containing the face, and the posture of the human body is determined by analyzing the face images;
if the posture of the human body is a preset writing posture, the robot acquires a first relative position of the handwriting equipment relative to the robot;
the robot judges whether the robot is in a preset recognizable area according to the first relative position, wherein the recognizable area takes the handwriting equipment as a center;
if the robot is not in the recognizable area, the robot moves into the recognizable area and then performs the step of transmitting the detection signal at the specific frequency and receiving the feedback signal of the detection signal sent by the handwriting device.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the moving of the robot into the recognizable area includes:
the robot moves to a first target position within the recognizable area, and the robot directly faces the face when the robot is at the first target position.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, during the moving of the robot, the method further includes:
the robot controls at least two ultrasonic wave transmitting modules arranged in the robot to transmit second ultrasonic waves in turn and receive reflected waves of the second ultrasonic waves;
the robot calculates a reception time period required from the transmission of the second ultrasonic wave to the reception of the reflected wave;
the robot determines a second relative distance between the robot and the object reflecting the second ultrasonic wave according to the receiving time length;
and the robot judges whether the second relative distance is smaller than a specified threshold value, and if so, the robot stops moving or adjusts the moving direction of the robot.
The second aspect of the embodiment of the present invention discloses another learning guidance method based on a robot, including:
the handwriting equipment receives the detection signal transmitted by the robot according to a specific frequency and sends a feedback signal of the detection signal to the robot, so that the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal and identifies the writing content of the handwriting equipment on the certain writing plane according to a plurality of writing point positions, thereby performing writing judgment on the writing content, obtaining a judgment result and outputting the judgment result.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the receiving, by the handwriting device, a detection signal emitted by the robot, and sending a feedback signal of the detection signal to the robot includes:
the handwriting equipment detects writing pressure on the certain writing plane to obtain writing pressure information;
the handwriting equipment receives an optical signal emitted by the robot and records a first time point of receiving the optical signal;
the handwriting equipment receives first ultrasonic waves transmitted by at least two ultrasonic transmitting modules arranged in the robot and records a second time point of receiving each first ultrasonic wave;
the handwriting device sends a time set and the writing pressure information to the robot, wherein the time set comprises the first time point and each second time point.
A third aspect of the embodiments of the present invention discloses a robot, including:
the receiving and sending unit is used for transmitting a detection signal according to a specific frequency and receiving a feedback signal of the detection signal sent by the handwriting equipment;
the acquisition unit is used for acquiring the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal;
the recognition unit is used for recognizing the writing content of the handwriting equipment on a certain writing plane according to a plurality of continuously received writing point positions;
and the writing judgment unit is used for performing writing judgment on the writing content to obtain a judgment result and outputting the judgment result.
As an optional implementation manner, in the third aspect of the embodiment of the present invention, the transceiver unit includes: the device comprises a light emitting module, a first control module and a first receiving module;
the optical transmitting module is used for transmitting optical signals according to a specific frequency;
the first control module is used for controlling at least two ultrasonic wave emitting modules to respectively emit first ultrasonic waves while the light emitting modules emit the optical signals;
the first receiving module is configured to receive a time set and writing pressure information sent by the handwriting device, where the time set includes a first time point at which the optical signal is received and a second time point at which each of the first ultrasonic waves is received;
and, the acquisition unit includes:
the pressure judging module is used for judging whether the handwriting equipment has writing operation according to the writing pressure information;
the time calculation module is used for calculating the time interval between the first time point and each second time point when the pressure judgment module judges that the handwriting equipment has writing operation;
the distance determining module is used for determining a first relative distance between each ultrasonic wave transmitting module and the handwriting equipment according to each time interval;
and the position determining module is used for determining the writing point position of the handwriting equipment on a certain writing plane by combining the known distance between every two ultrasonic transmitting modules and every first relative distance.
As an optional implementation manner, in the third aspect of the embodiment of the present invention, the robot further includes:
the voice input unit is used for receiving an input voice question, inquiring and outputting prompt information corresponding to the voice question before the transceiving unit transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal transmitted by handwriting equipment;
and the suggestion determining unit is used for determining the writing matching degree of the writing content and the standard content according to the judgment result obtained by the writing judging unit, generating the writing suggestion corresponding to the writing matching degree, and outputting and displaying the writing suggestion.
As an optional implementation manner, in the third aspect of the embodiment of the present invention, the robot further includes:
the camera shooting unit is used for controlling a camera of the robot to shoot multiple frames of continuous images before the transceiver unit transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by handwriting equipment;
the first judging unit is used for judging whether a moving target in the framing range of the camera is a human body or not by analyzing the continuous images of the plurality of frames;
the driving unit is used for controlling the robot to move until the camera shoots the face of the human body and enabling the face to be always in a view finding range of the camera when the first judging unit judges that the moving target is the human body;
and/or the camera shooting unit is further used for adjusting the view finding range of the camera until the camera shoots the face of the human body when the first judging unit judges that the moving target is the human body, and enabling the face to be always in the view finding range of the camera; the camera is also used for controlling the camera to shoot a plurality of frames of face images of the face;
the first judging unit is further configured to determine the posture of the human body by analyzing the face image, and judge whether the posture of the human body is a preset writing posture;
the second judging unit is used for acquiring a first relative position of the handwriting equipment relative to the robot when the first judging unit judges that the posture of the human body is a preset writing posture, and judging whether the robot is in a preset recognizable area or not according to the first relative position;
the driving unit is further configured to control the robot to move into the recognizable area and trigger the transceiver unit to execute the operation of transmitting the detection signal at the specific frequency and receiving a feedback signal of the detection signal sent by the handwriting device when the second determination unit determines that the robot is not located in the recognizable area.
As an optional implementation manner, in the third aspect of the embodiment of the present invention, the manner in which the driving unit controls the robot to move into the recognizable area is specifically:
the driving unit is used for controlling the robot to move to a first target position in the recognizable area, and when the robot is located at the first target position, the robot directly faces the face.
As an optional implementation manner, in the third aspect of the embodiment of the present invention, the transceiver unit includes a second control module and a second receiving module;
the second control module is used for controlling at least two ultrasonic emission modules to emit second ultrasonic waves in turn in the process that the driving unit controls the robot to move;
the second receiving module is used for receiving the reflected wave of the second ultrasonic wave;
and, the robot further comprises:
a calculation unit for calculating a reception time period required from transmission of the second ultrasonic wave to reception of the reflected wave;
a distance determination unit for determining a second relative distance between the robot and the object reflecting the second ultrasonic wave according to the receiving duration;
a third judging unit, configured to judge whether the second relative distance is smaller than a specified threshold;
the driving unit is further configured to control the robot to stop moving or adjust a moving direction of the robot when the third determining unit determines that the second relative distance is smaller than the specified threshold.
A fourth aspect of the present invention discloses a handwriting device, including:
and the communication unit is used for receiving the detection signal transmitted by the robot according to a specific frequency, sending a feedback signal of the detection signal to the robot, so that the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal, identifies the writing content of the handwriting equipment on the certain writing plane according to a plurality of writing point positions, performs writing judgment on the writing content, obtains a judgment result, and outputs the judgment result.
As an optional implementation manner, in the fourth aspect of the embodiment of the present invention, the handwriting apparatus further includes: a pressure detection unit and a recording unit;
the pressure detection unit is used for detecting the writing pressure on the certain writing plane to obtain writing pressure information;
the communication unit is specifically used for receiving optical signals transmitted by the robot and receiving first ultrasonic waves transmitted by at least two ultrasonic wave transmitting modules arranged in the robot; and sending a time set and the pressure information to the robot, the time set including a first time point at which the optical signal is received and a second time point at which each of the first ultrasonic waves is received;
the recording unit is used for recording the first time point when the communication unit detects the optical signal; and recording each second time point when each first ultrasonic wave is received by the communication unit.
The fifth aspect of the embodiment of the present invention discloses a learning guidance system based on a robot, including:
any robot disclosed in the third aspect of the embodiment of the present invention, and any handwriting device disclosed in the fourth aspect of the embodiment of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the robot can judge the handwriting content by recognizing the handwriting content of the handwriting equipment and output a judgment result, so that a user of the handwriting equipment can know the mastering degree of certain content, and the learning and tutoring effect by using the robot can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a robot-based learning guidance method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another robot-based learning guidance method disclosed in the embodiments of the present invention;
FIG. 3 is a schematic flow chart of another robot-based learning guidance method disclosed in the embodiments of the present invention;
FIG. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another robot disclosed in the embodiments of the present invention;
FIG. 6 is a schematic structural diagram of another robot disclosed in the embodiments of the present invention;
FIG. 7 is a schematic structural diagram of a handwriting device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a robot-based learning guidance system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a robot-based learning and tutoring method and system, a robot and handwriting equipment, which can identify and judge handwriting content input by the handwriting equipment, thereby improving the learning and tutoring effect. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart of a learning guidance method based on a robot according to an embodiment of the present invention. As shown in fig. 1, the robot-based learning tutoring method may include the following steps:
101. the robot emits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by the handwriting equipment.
In the embodiment of the present invention, the detection signal may be an optical signal and/or an ultrasonic signal, and the robot may transmit it through its own built-in optical transmitter and/or ultrasonic transmitter. Accordingly, the handwriting device may be provided with a corresponding optical receiver and/or ultrasonic receiver (e.g. a microphone). For each detection signal received, the handwriting device may send a corresponding feedback signal to the robot, i.e. the handwriting device sends feedback signals as many times as the robot sends detection signals. Alternatively, the handwriting device may send a single feedback signal containing the feedback information for several received detection signals, i.e. the handwriting device may send feedback signals fewer times than the robot sends detection signals, which reduces the power consumed by communication between the handwriting device and the robot and improves the battery endurance of the handwriting device. Optionally, the robot and the handwriting device may also transmit and receive the detection signal and the feedback signal through a wireless communication technology such as Wi-Fi, Bluetooth, or a mobile cellular network, which is not limited in the embodiment of the present invention.
102. And the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal.
In the embodiment of the invention, the feedback signal can comprise necessary information required for determining the position of the writing point, and the robot calculates the position of the writing point of the handwriting equipment according to the information provided by the feedback signal; or the feedback signal can also directly carry the writing point position of the handwriting equipment, and the robot can directly read the writing point position of the handwriting equipment after decoding the feedback signal.
As an alternative embodiment, the writing plane may be provided with a grid for calibrating the position, and the handwriting device may have a camera built in, and the viewing range of the camera can include the contact point of the writing end (e.g. pen tip) of the handwriting device and the writing plane; the handwriting equipment can control a camera of the handwriting equipment to acquire images according to a specific frequency, recognize a contact point of a writing end and a writing plane from the images, and calculate the position of the contact point by referring to a grid with a known position. When the handwriting equipment detects a detection signal emitted by the robot, the handwriting equipment reads the position of the contact point as the position of a writing point of the handwriting equipment on the writing plane, and the position of the writing point is fed back to the robot through a feedback signal.
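To make this alternative concrete, here is a minimal Python sketch that converts the pixel coordinates of the detected pen-tip contact point into a position on the gridded writing plane. The function name, the top-down camera assumption (no perspective correction), and the numeric values are illustrative assumptions, not details given in the patent.

```python
from typing import Tuple

def contact_grid_position(contact_px: Tuple[float, float],
                          grid_origin_px: Tuple[float, float],
                          px_per_cell: float,
                          cell_size_mm: float) -> Tuple[float, float]:
    """Map the pen-tip contact point from image pixels onto the calibrated grid.

    contact_px      : pixel coordinates of the detected contact point
    grid_origin_px  : pixel coordinates of a known grid corner
    px_per_cell     : how many pixels one grid cell spans in the image
    cell_size_mm    : physical size of one grid cell on the writing plane
    """
    cells_x = (contact_px[0] - grid_origin_px[0]) / px_per_cell
    cells_y = (contact_px[1] - grid_origin_px[1]) / px_per_cell
    return (cells_x * cell_size_mm, cells_y * cell_size_mm)

# Example: contact detected at pixel (640, 360), grid corner at (100, 80),
# 50 px per cell, 5 mm cells -> position in millimetres on the writing plane.
print(contact_grid_position((640, 360), (100, 80), 50.0, 5.0))
```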
103. And the robot identifies the writing content of the handwriting equipment on a certain writing plane according to the continuously received positions of the plurality of writing points.
In the embodiment of the invention, when the robot receives a feedback signal sent by the handwriting device, it records the time point at which the feedback signal was received. After continuously receiving a plurality of feedback signals and obtaining a plurality of writing point positions by analyzing them, the robot can connect the writing point positions in the order in which the feedback signals were received to obtain the movement track of the handwriting device, and can then recognize the writing content from that track by means such as optical character recognition.
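The following Python sketch shows one way the continuously received writing point positions could be ordered into a movement track before recognition. The WritingPoint class, the field names, and the pen-lift gap heuristic used to split strokes are assumptions for illustration; the patent only states that the points are connected in the order the feedback signals were received.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class WritingPoint:
    x: float            # writing point position on the writing plane
    y: float
    received_at: float  # time the robot received the feedback signal (seconds)

def build_trajectory(points: List[WritingPoint],
                     pen_lift_gap: float = 0.3) -> List[List[Tuple[float, float]]]:
    """Connect writing points in the order their feedback signals arrived.

    Points whose reception times are separated by more than `pen_lift_gap`
    seconds are treated as belonging to different strokes (pen lifted).
    """
    ordered = sorted(points, key=lambda p: p.received_at)
    strokes: List[List[Tuple[float, float]]] = []
    current: List[Tuple[float, float]] = []
    last_time = None
    for p in ordered:
        if last_time is not None and p.received_at - last_time > pen_lift_gap:
            if current:
                strokes.append(current)
            current = []
        current.append((p.x, p.y))
        last_time = p.received_at
    if current:
        strokes.append(current)
    return strokes

# The resulting stroke polylines could then be passed to a handwriting or
# optical character recognition engine to identify the written content.
```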
104. And the robot performs writing judgment on the written content to obtain a judgment result and outputs the judgment result.
In the embodiment of the present invention, the robot's writing judgment on the written content may include judging whether the stroke order of a character is correct or whether the written content is the correct answer to a question; accordingly, the judgment result obtained by the robot may indicate a stroke-order error, an incorrect answer, and the like.
It can be seen that, in the method described in fig. 1, the robot can judge the writing content by recognizing the content written with the handwriting device, so as to provide corresponding opinions or suggestions to the user of the handwriting device. The user can thus find errors in the written content in time, and the robot can simulate the tutoring process of a human teacher and provide more intelligent learning tutoring, thereby further improving the effect of learning tutoring with the robot.
Example two
Referring to fig. 2, fig. 2 is a schematic diagram illustrating another robot-based learning and tutoring method according to an embodiment of the present invention. As shown in fig. 2, the robot-based learning guidance method may include the steps of:
201. and the robot receives the input voice question, inquires and outputs prompt information corresponding to the voice question.
In the embodiment of the invention, a microphone for receiving voice information can be built into the robot, and a user can put the question to be consulted to the robot by voice. After receiving the voice question, the robot analyzes it and queries a database or the Internet for prompt information corresponding to the voice question; the prompt information can be the answer to the voice question or a key clue needed to solve it. The robot may output the prompt information on its display screen or by voice, which is not limited in the embodiments of the present invention.
202. The robot emits an optical signal at a specific frequency, and controls at least two ultrasonic wave emitting modules built in the robot to emit a first ultrasonic wave simultaneously with the optical signal.
In this embodiment of the present invention, the optical signal may be an optical signal with a specific frequency; preferably, it may be an infrared optical signal, which is not limited in this embodiment of the present invention. In addition, the first ultrasonic wave may be an ultrasonic pulse signal, and different ultrasonic wave transmitting modules may transmit first ultrasonic waves with the same or different frequencies. If the first ultrasonic waves have different frequencies, the robot can support handwriting recognition for a plurality of handwriting devices at the same time by controlling which ultrasonic frequency each handwriting device receives.
203. The handwriting equipment receives the optical signal and the first ultrasonic wave transmitted by the robot and records a first time point when the optical signal is received and a second time point when each first ultrasonic wave is received.
204. The handwriting equipment detects the writing pressure on a certain writing plane to obtain writing pressure information.
205. The handwriting apparatus transmits writing pressure information and a time set including the first time point and each of the second time points to the robot.
206. The robot judges whether the handwriting equipment has handwriting operation according to the writing pressure information, if so, the robot executes step 207, and if not, the robot continues to execute step 202.
In the embodiment of the present invention, if the writing pressure information does not contain the specified valid information (e.g. it is zero or null), the robot determines that no writing operation exists on the handwriting device and continues to execute step 202 to keep detecting writing by the handwriting device. If the robot determines that a writing operation exists, the following steps are executed.
207. The robot calculates a time interval between the first point in time and each second point in time in the received time set.
208. The robot determines a first relative distance between each ultrasonic transmitting module and the handwriting equipment according to each time interval, and determines the writing point position of the handwriting equipment on a certain writing plane by combining the known distance between every two ultrasonic transmitting modules and each first relative distance.
In an embodiment of the present invention, the robot may include N ultrasonic emission modules (N ≥ 2). For example, when N = 2, the ultrasonic emission module A emits a first ultrasonic signal a1 while the robot emits the optical signal, and the ultrasonic emission module B emits a first ultrasonic signal b1. The handwriting device C records the first time point T1 at which the optical signal with the specific frequency is received, records the second time point T21 at which the first ultrasonic signal a1 is received and the second time point T22 at which the first ultrasonic signal b1 is received, and transmits the time set {T1, T21, T22} to the robot. After receiving the time set, the robot calculates the time interval d1 between T1 and T21 and the time interval d2 between T1 and T22. Over a short distance the propagation time of light is approximately zero, so the robot may take the first time point at which the handwriting device C received the optical signal as the time point at which the ultrasonic emission modules A and B emitted their ultrasonic waves; the time interval d1 then approximates the propagation time of the first ultrasonic signal a1 to the handwriting device C, and the time interval d2 approximates the propagation time of the first ultrasonic signal b1 to the handwriting device C.
From the propagation velocity v of sound, the robot can calculate the first relative distance of the ultrasonic emission module A to the handwriting device C as Lac = d1 × v, and the first relative distance of the ultrasonic emission module B to the handwriting device as Lbc = d2 × v. Since the distance Lab between the ultrasonic emission modules A and B is known, the position of the handwriting device C relative to the robot, i.e. the writing point position of the handwriting device C on the writing plane, can be determined from Lab, Lac, and Lbc by triangulation. When N > 2, the robot may determine a plurality of candidate writing point positions using the triangles formed by pairs of ultrasonic emission modules, and select one of them as the writing point position of the handwriting device on the writing plane.
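The two-module case (N = 2) described above can be written down directly. The Python sketch below places ultrasonic emission module A at the origin and module B at (Lab, 0), converts each time interval into a first relative distance, and intersects the two range circles; the coordinate convention and the nominal speed of sound are assumptions made only for this illustration.

```python
import math
from typing import Optional, Tuple

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature (assumption)

def locate_writing_point(d1: float, d2: float, lab: float,
                         v: float = SPEED_OF_SOUND) -> Optional[Tuple[float, float]]:
    """Estimate the writing point position from the two time intervals.

    d1, d2 : time intervals between the first time point (optical signal)
             and the second time points of ultrasonic signals a1 and b1.
    lab    : known distance between ultrasonic emission modules A and B.
    Module A is placed at (0, 0) and module B at (lab, 0).
    """
    lac = d1 * v   # first relative distance A -> handwriting device
    lbc = d2 * v   # first relative distance B -> handwriting device

    # Intersection of the circle of radius lac around A and radius lbc around B.
    x = (lac ** 2 - lbc ** 2 + lab ** 2) / (2.0 * lab)
    y_sq = lac ** 2 - x ** 2
    if y_sq < 0:
        return None  # inconsistent measurements, the circles do not intersect
    y = math.sqrt(y_sq)
    # Two mirror-image solutions exist (+y and -y); which half-plane the
    # writing plane lies in would have to be fixed by the robot's geometry.
    return (x, y)

# Example: d1 = 2.0 ms, d2 = 2.5 ms, modules 0.30 m apart.
print(locate_writing_point(0.002, 0.0025, 0.30))
```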
209. And the robot identifies the writing content of the handwriting equipment on a certain writing plane according to the continuously received positions of the plurality of writing points.
In the embodiment of the invention, the robot can also take the writing pressure information into account when recognizing the writing content: a smaller pressure value indicates a lighter stroke of the handwriting device, so incorporating the pressure makes the writing content recognized by the robot closer to the real handwriting.
210. And the robot performs writing judgment on the written content to obtain a judgment result and outputs the judgment result.
211. And the robot determines the writing matching degree of the writing content and the standard content according to the judgment result, generates a writing suggestion corresponding to the writing matching degree and outputs and displays the writing suggestion.
In the embodiment of the invention, a user can ask the robot for help through voice input. After outputting the corresponding prompt information, the robot can recognize and judge the content the user writes, and after outputting the judgment result it can further generate a writing suggestion according to the matching degree between the written content and the standard content, thereby providing more comprehensive learning guidance for the user. For example, the user may ask by voice how a certain character is written; after outputting the prompt information, the robot recognizes the user's writing and judges whether it is correct by comparing the components of the written character with the components of the target character. If the writing is correct, the robot can further determine the matching degree between the stroke order of the written content and the standard stroke order of the character, generate a corresponding writing suggestion according to that matching degree, and output and display the suggestion. Alternatively, the user may ask a mathematics question by voice; after outputting the solution idea, the robot can recognize the user's written content as the user's answer and judge whether it is correct. If it is not, the robot can compare the solution steps in the user's answer with the solution steps in the standard answer to determine the matching degree between the written content and the standard content, and thereby provide a corresponding writing suggestion (i.e. an answer suggestion) to the user.
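The patent does not prescribe how the writing matching degree is computed. As one plausible sketch, the Python code below scores the similarity between the recognized stroke order (or solution steps) and the standard sequence, and maps the score to a suggestion using hypothetical thresholds.

```python
from difflib import SequenceMatcher
from typing import List

def writing_match_degree(written: List[str], standard: List[str]) -> float:
    """Return a similarity score in [0, 1] between the written sequence
    (e.g. recognized stroke order or solution steps) and the standard one."""
    return SequenceMatcher(None, written, standard).ratio()

def writing_suggestion(degree: float) -> str:
    # Hypothetical thresholds; the patent only states that a suggestion
    # corresponding to the matching degree is generated and displayed.
    if degree >= 0.9:
        return "Well done, the stroke order matches the standard."
    if degree >= 0.6:
        return "Mostly correct, but compare your stroke order with the standard writing."
    return "The writing differs a lot from the standard; try following the prompt again."

degree = writing_match_degree(["horizontal", "vertical", "dot"],
                              ["horizontal", "dot", "vertical"])
print(degree, writing_suggestion(degree))
```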
It can be seen that, by implementing the method described in fig. 2, after the user inputs a question by voice, the robot can not only output corresponding prompt information but also recognize and judge the content written by the user and output a judgment result; furthermore, a writing suggestion can be generated according to the matching degree between the written content and the standard content, so that the robot can judge how well the user has understood the prompt information, provide different learning help according to the user's level of understanding, and thus offer more comprehensive learning tutoring.
Example three
Referring to fig. 3, fig. 3 is a diagram illustrating another robot-based learning and tutoring method according to an embodiment of the present invention. As shown in fig. 3, the robot-based learning guidance method may include the steps of:
301. and the robot receives the input voice question, inquires and outputs prompt information corresponding to the voice question.
302. The robot controls the camera to shoot multiple continuous images, judges whether the moving target in the camera view range is a human body or not by analyzing the multiple continuous images, if so, executes step 303, and if not, ends the process.
In the embodiment of the present invention, as an optional implementation manner, the robot may obtain a change map by performing difference processing on the multiple continuous images and match the target features in the change map against human body contour features, so as to determine whether the moving target is a human body. Further, if the moving target is a human body, the robot can also judge the change in the relative position between the human body and the camera (i.e. whether the human body is approaching or moving away from the camera) by analyzing the change in position of the target features, so that the robot can be controlled to track the human body.
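A minimal sketch of this optional implementation, using OpenCV frame differencing and contour matching, is shown below; the thresholds, the stored body-contour template, and the similarity cutoff are hypothetical values, not parameters specified in the patent.

```python
import cv2
import numpy as np

MIN_MOVING_AREA = 5000  # pixels; hypothetical threshold for a "moving target"

def moving_target_contour(prev_frame: np.ndarray, curr_frame: np.ndarray):
    """Difference two consecutive frames and return the largest moving contour,
    or None when nothing significant moved."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    # OpenCV 4.x return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < MIN_MOVING_AREA:
        return None
    return largest

def looks_like_human(contour, body_template_contour) -> bool:
    """Compare the moving contour with a stored human-body contour template;
    cv2.matchShapes returns 0 for identical shapes, larger for dissimilar ones."""
    score = cv2.matchShapes(contour, body_template_contour,
                            cv2.CONTOURS_MATCH_I1, 0.0)
    return score < 0.5  # hypothetical similarity threshold
```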
303. The robot moves and/or adjusts the viewing range of the camera so that the camera shoots the face of the human body, and the face of the human body is always in the viewing range of the camera.
304. The robot control camera shoots a plurality of frames of face images containing the face, and the posture of the human body is determined through the face images.
In the embodiment of the present invention, the face may include all or part of the facial structure of the human body, such as a front face or a side face. As an alternative embodiment, the robot may determine the posture of the human body by judging the change in height of the face position, for example determining whether the human body is standing or sitting. Optionally, the face image may also include part of the human torso, and the robot may also determine the posture of the human body by analyzing the torso part in the face image. Compared with a human body image that does not contain the face, the face image may contain more detailed information related to the writing posture, so analyzing the face image improves the accuracy of recognizing the writing posture.
305. And if the posture of the human body is the preset writing posture, the robot acquires a first relative position of the handwriting equipment relative to the robot.
In the embodiment of the present invention, the robot may obtain the first relative position of the handwriting device with respect to the robot through distance measurement methods such as infrared distance measurement and ultrasonic distance measurement, which is not limited in the embodiment of the present invention.
306. And the robot judges whether the robot is in a preset recognizable area or not according to the first relative position, if so, directly executing step 308, and if not, executing step 307-step 308.
307. The robot moves to the recognizable area.
In the embodiment of the invention, the recognizable area is centered on the handwriting device, and when the robot is within the recognizable area, the communication signal strength between the robot and the handwriting device is relatively strong. The shape of the recognizable area may be a circle, a rectangle, a sector, or any irregular shape, which is not limited in the embodiment of the present invention. By executing steps 305 to 307, the robot can automatically move to a position suitable for handwriting recognition, which helps to improve the accuracy of handwriting recognition and the user experience. Preferably, the robot can move to a first target position in the recognizable area, and when the robot is at the first target position, the robot directly faces the face of the human body, so that the signal radiation received by the human body can be reduced while the signal strength remains suitable for handwriting recognition.
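As a rough geometric illustration only, the sketch below models the recognizable area as a circle centred on the handwriting device and picks a first target position on the line from the handwriting device towards the face, so that a robot standing there can directly face the face. The circular shape, the margin factor, and the numeric values are assumptions; the patent leaves the shape of the area and the choice of target position open.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def inside_recognizable_area(robot: Point, pen: Point, radius: float) -> bool:
    """Recognizable area modelled as a circle of the given radius centred on
    the handwriting device (the patent leaves the exact shape open)."""
    return math.dist(robot, pen) <= radius

def first_target_position(pen: Point, face: Point, radius: float,
                          margin: float = 0.8) -> Point:
    """Pick a point inside the recognizable area on the line from the pen
    towards the face; a robot standing there can directly face the face."""
    dx, dy = face[0] - pen[0], face[1] - pen[1]
    dist = math.hypot(dx, dy) or 1.0
    r = min(radius * margin, dist)  # stay inside the area, never past the face
    return (pen[0] + dx / dist * r, pen[1] + dy / dist * r)

pen, face, radius = (0.0, 0.0), (1.5, 0.5), 1.0
robot = (2.0, 2.0)
if not inside_recognizable_area(robot, pen, radius):
    print("move to", first_target_position(pen, face, radius))
```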
In the embodiment of the present invention, as an optional implementation manner, in the process of performing the movement in step 303 or 307, the robot may further perform the following steps:
the robot controls at least two ultrasonic wave transmitting modules arranged in the robot to transmit second ultrasonic waves in turn and receive reflected waves of the second ultrasonic waves;
the robot calculates the receiving time required between the emission of the second ultrasonic wave and the reception of the reflected wave;
the robot determines a second relative distance between the robot and the object reflecting the second ultrasonic wave according to the receiving time;
and the robot judges whether the second relative distance is smaller than a specified threshold value, and if so, the robot stops moving or adjusts the moving direction of the robot.
In the above steps, the at least two ultrasonic wave emitting modules may be respectively disposed on both sides of the robot body, and the ultrasonic wave emitting module that emits the ultrasonic wave and the receiving module that receives the reflected wave may be independent circuit modules separately disposed on the robot body, or may be integrated into one unit in the form of an ultrasonic transceiver. By implementing this embodiment, the robot can perform anti-collision detection through ultrasonic waves while moving, thereby reducing collisions during movement. Furthermore, emitting the second ultrasonic waves in turn reduces the power consumption of the anti-collision detection process and improves the battery endurance of the robot.
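A minimal sketch of the anti-collision logic follows. It assumes the reflected wave makes a round trip (so the one-way distance is half of speed times duration) and uses a hypothetical value for the specified threshold.

```python
SPEED_OF_SOUND = 343.0   # m/s (assumption)
STOP_THRESHOLD = 0.30    # m; hypothetical value for the "specified threshold"

def second_relative_distance(receive_duration_s: float,
                             v: float = SPEED_OF_SOUND) -> float:
    """Distance to the reflecting object.  The reflected wave travels out and
    back, so the one-way distance is half of speed times duration."""
    return v * receive_duration_s / 2.0

def on_echo(receive_duration_s: float) -> str:
    distance = second_relative_distance(receive_duration_s)
    if distance < STOP_THRESHOLD:
        # The patent allows either reaction; which one is chosen is up to the
        # robot's motion planner.
        return "stop_or_change_direction"
    return "keep_moving"

print(on_echo(0.001))   # ~0.17 m away -> stop or change direction
print(on_echo(0.01))    # ~1.7 m away  -> keep moving
```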
308. The robot detects the writing content of the handwriting equipment on a certain writing plane.
In the embodiment of the present invention, the manner in which the robot executes step 308 may be as shown in steps 101 to 103 of the first embodiment or in steps 202 to 209 of the second embodiment, and the details are not repeated here.
As an alternative embodiment, if the robot has moved to the first target position in step 307, the robot may also perform the following steps while performing step 308:
the robot controls its camera to capture target images containing the handwriting device and the writing plane, and records the shooting time point of each target image; the robot identifies, in each target image, the contact position where the handwriting device touches the writing plane, performs time synchronization between the shooting time points and the time set sent by the handwriting device, and establishes the correspondence between writing point positions and contact positions. In this way a relation is established between the writing content recognized by the robot and the writing plane, which solves the problem that existing handwriting recognition technology cannot map the written content onto the writing plane.
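As an illustration of the time-synchronization step, the Python sketch below pairs each camera-observed contact position with the writing point whose timestamp is nearest after aligning the two clocks. The clock-offset handling, the maximum pairing gap, and all names are assumptions; the patent only states that the shooting time points and the time set are synchronized and a correspondence is established.

```python
import bisect
from typing import Dict, List, Tuple

def match_contact_to_writing_points(
        shot_times: List[float],                       # shooting time of each target image
        contact_positions: List[Tuple[float, float]],  # contact position in each image
        point_times: List[float],                      # time of each writing point (sorted, robot clock)
        writing_points: List[Tuple[float, float]],
        clock_offset: float = 0.0,                     # robot clock minus camera clock (assumed known)
        max_gap: float = 0.05) -> Dict[Tuple[float, float], Tuple[float, float]]:
    """Pair each camera-observed contact position with the writing point whose
    timestamp is closest after aligning the two clocks."""
    mapping: Dict[Tuple[float, float], Tuple[float, float]] = {}
    for shot_t, contact in zip(shot_times, contact_positions):
        t = shot_t + clock_offset
        i = bisect.bisect_left(point_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(point_times)]
        if not candidates:
            continue
        best = min(candidates, key=lambda j: abs(point_times[j] - t))
        if abs(point_times[best] - t) <= max_gap:
            mapping[contact] = writing_points[best]
    return mapping
```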
Further optionally, the robot may further determine whether the posture of the human body is a correct writing posture by analyzing the target image, and if not, the robot outputs prompt information for correcting the posture of the human body.
309. And the robot performs writing judgment on the written content to obtain a judgment result and outputs the judgment result.
310. And the robot determines the writing matching degree of the writing content and the standard content according to the judgment result, generates a writing suggestion corresponding to the writing matching degree and outputs and displays the writing suggestion.
It can be seen that in the method described in fig. 3, the robot may obtain the question of the user through voice interaction and output corresponding prompt information, and determine the written content of the user through handwriting recognition. Further, the robot may also determine whether a handwritten scene may exist through image analysis, and automatically move to an appropriate position to support handwriting in a case where the handwritten scene may exist. Furthermore, the robot can also perform anti-collision detection in the moving process, so that the collision probability of the robot is reduced.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention. As shown in fig. 4, the robot includes:
a transceiver unit 401, configured to transmit a detection signal according to a specific frequency and receive a feedback signal of the detection signal sent by the handwriting device; in this embodiment of the present invention, the detection signal may be an optical signal and/or an ultrasonic signal, and the transceiver unit 401 may also transmit the detection signal and receive the feedback signal through a wireless communication method such as Wi-Fi, bluetooth, or a mobile cellular network, which is not limited in this embodiment of the present invention.
An obtaining unit 402, configured to obtain, according to the feedback signal received by the transceiver unit 401, a writing point position of the handwriting apparatus on a certain writing plane; in this embodiment of the present invention, the feedback signal may include necessary information required to determine the position of the writing point, and the obtaining unit 402 may calculate the position of the writing point of the handwriting device according to the information provided by the feedback signal; or the feedback signal may also directly carry the writing point position of the handwriting device, and the obtaining unit 402 may decode the feedback signal and then directly read the writing point position of the handwriting device, which is not limited in the embodiment of the present invention;
a recognition unit 403, configured to recognize, according to a plurality of writing point positions continuously received by the obtaining unit 402, writing contents of the handwriting device on a certain writing plane;
a writing judgment unit 404, configured to perform writing judgment on the writing content recognized by the recognition unit 403, obtain a judgment result, and output the judgment result.
The robot shown in fig. 4 can judge the writing content by recognizing the content written with the handwriting device, so corresponding opinions or suggestions can be provided to the user of the handwriting device. The user can thus find errors in the written content in time, and the robot can simulate the tutoring process of a human teacher and provide more intelligent learning tutoring, further improving the effect of learning tutoring with the robot.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another robot disclosed in the embodiments of the present invention. The robot shown in fig. 5 is optimized from the robot shown in fig. 4.
In comparison with the robot shown in fig. 4, in the robot shown in fig. 5, the transceiver unit 401 includes:
a light emitting module 4011, a first control module 4012, and a first receiving module 4013;
an optical transmitting module 4011 for transmitting an optical signal at a specific frequency;
a first control module 4012 for controlling the at least two ultrasonic wave emitting modules to emit first ultrasonic waves, respectively, while the light emitting module 4011 emits the light signals; in this embodiment of the present invention, the light emitting module 4011 may communicate with the first control module 4012, and the light emitting module 4011 sends control information to the first control module 4012 when emitting a light signal, so as to trigger the first control module 4012 to control the ultrasonic module to emit the first ultrasonic wave;
the first receiving module 4013 is configured to receive a time set and writing pressure information sent by the handwriting device, where the time set includes a first time point at which the optical signal is received and a second time point at which each of the first ultrasonic waves is received; in this embodiment of the present invention, after the light emitting module 4011 emits the light signal and/or after the first control module 4012 emits the first ultrasonic wave, the first receiving module 4013 may be triggered to receive the time set and writing pressure information sent by the handwriting equipment;
further, the acquisition unit 402 includes:
the pressure judging module 4021 is configured to judge whether a writing operation exists on the handwriting device according to the writing pressure information received by the first receiving module 4013;
the time calculating module 4022 is configured to calculate a time interval between the first time point and each second time point in the time set received by the first receiving module 4013 when the pressure determining module 4021 determines that the handwriting device has writing operation;
a distance determining module 4023, configured to determine a first relative distance between each ultrasonic wave transmitting module and the handwriting device according to each time interval calculated by the time calculating module 4022;
the position determining module 4024 is configured to determine a writing point position of the handwriting device on a certain writing plane by combining the known distance between every two ultrasound transmitting modules and each first relative distance determined by the distance determining module 4023.
In this embodiment of the present invention, a manner that the recognition unit 403 is configured to recognize, according to the multiple writing point positions continuously received by the obtaining unit 402, the writing content of the handwriting device on a certain writing plane may specifically be:
the recognition unit 403 is configured to recognize, according to a plurality of continuous writing point positions determined by the position determining module 4024 and the writing pressure information received by the first receiving module 4013, the writing content of the handwriting device on a certain writing plane, so that the recognized writing content can be closer to a real writing trace.
Optionally, the robot shown in fig. 5 may further include:
a voice input unit 405, configured to receive an input voice question, query and output prompt information corresponding to the voice question before the transceiver 401 transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by the handwriting device; in this embodiment of the present invention, after the voice input unit 405 outputs the prompt information, the transceiver unit 401 may be triggered to execute an operation of transmitting a detection signal according to a specific frequency and receiving a feedback signal of the detection signal sent by the handwriting device;
and an advice determining unit 406, configured to determine, according to the determination result obtained by the writing determining unit 404, a writing matching degree between the writing content and the standard content, generate a writing advice corresponding to the writing matching degree, and output and display the writing advice.
Further optionally, the robot shown in fig. 5 may further include: an imaging unit 407, a first judging unit 408, a driving unit 409, and a second judging unit 410. Through the interaction of these units, the robot can judge by image recognition whether a handwriting scene may currently exist, and automatically move to an appropriate position to support handwriting when such a scene may exist. The units are described in detail below:
the camera unit 407 is used for controlling the camera of the robot to shoot multiple frames of continuous images before the transceiver unit 401 transmits the detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by the handwriting equipment;
the first judging unit 408 is configured to judge whether a moving target within the viewing range of the camera is a human body by analyzing the multiple frames of continuous images captured by the camera unit 407;
the driving unit 409 is configured to, when the first judging unit 408 determines that the moving target is a human body, control the robot to move until the camera captures the face of the human body, and keep the face always within the viewing range of the camera;
and/or the camera unit 407 is further configured to, when the first judging unit 408 determines that the moving target is a human body, adjust the viewing range of the camera until the camera captures the face of the human body, and keep the face always within the viewing range of the camera; the camera unit 407 is further configured to control the camera to capture multiple frames of face images containing the face;
the first judging unit 408 is further configured to determine the posture of the human body by analyzing the face images captured by the camera unit 407, and judge whether the posture of the human body is a preset writing posture;
the second judging unit 410 is configured to, when the first judging unit 408 judges that the posture of the human body is the preset writing posture, acquire a first relative position of the handwriting device relative to the robot, and judge, according to the first relative position, whether the robot is within a preset recognizable area;
the driving unit 409 is further configured to, when the second judging unit 410 determines that the robot is not located in the recognizable area, control the robot to move into the recognizable area, and trigger the transceiver unit 401 to perform the operation of transmitting a detection signal according to a specific frequency and receiving a feedback signal of the detection signal sent by the handwriting device; specifically, the manner in which the driving unit 409 controls the robot to move into the recognizable area may be: the driving unit 409 is configured to control the robot to move to a first target position in the recognizable area, and when the robot is located at the first target position, the robot directly faces the face of the human body.
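The recognizable area is only stated to be centered on the handwriting device. Assuming it is a circle of some radius, the check performed by the second judging unit 410 can be sketched as follows (the radius value and the names used are illustrative assumptions):

```python
import math

RECOGNIZABLE_RADIUS = 1.5  # metres; illustrative radius of the recognizable area

def in_recognizable_area(first_relative_position, radius=RECOGNIZABLE_RADIUS):
    """True if the robot lies within the recognizable area centered on the handwriting
    device. first_relative_position is the (x, y) offset of the handwriting device
    relative to the robot, so its norm is the robot-to-device distance."""
    dx, dy = first_relative_position
    return math.hypot(dx, dy) <= radius
```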
In addition, in the robot shown in fig. 5, the transceiver unit 401 may further include: a second control module 4014 and a second receiving module 4015;
the second control module 4014 is used for controlling the at least two ultrasonic wave transmitting modules to transmit second ultrasonic waves in turn in the moving process of the robot;
a second receiving module 4015 configured to receive a reflected wave of the second ultrasonic wave;
accordingly, the robot shown in fig. 5 may further include: a calculation unit 411, a distance determination unit 412 and a third judgment unit 413.
A calculation unit 411 for calculating a reception time period required between transmission of the second ultrasonic wave from the second control module 4014 and reception of the reflected wave of the second ultrasonic wave by the second reception module 4015;
a distance determining unit 412 for determining a second relative distance between the robot and the object reflecting the second ultrasonic wave based on the reception time length calculated by the calculating unit 411;
a third judgment unit 413 for judging whether the second relative distance determined by the distance determination unit 412 is smaller than a specified threshold;
the driving unit 409 is further configured to control the robot to stop moving or adjust the moving direction of the robot when the third judging unit 413 determines that the second relative distance is smaller than the specified threshold.
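As an illustrative sketch of the anti-collision logic of the calculation unit 411, the distance determining unit 412 and the third judging unit 413: the second relative distance follows from the round-trip time of the second ultrasonic wave, and the robot stops or turns when that distance falls below the specified threshold (the speed of sound and the threshold value below are assumptions, not values fixed by the embodiment):

```python
SPEED_OF_SOUND = 343.0     # m/s (assumed constant)
SPECIFIED_THRESHOLD = 0.3  # metres; illustrative value of the specified threshold

def second_relative_distance(reception_duration, speed=SPEED_OF_SOUND):
    """Distance to the object reflecting the second ultrasonic wave; the wave travels
    out and back within the reception duration, hence the factor 1/2."""
    return reception_duration * speed / 2.0

def must_stop_or_turn(reception_duration, threshold=SPECIFIED_THRESHOLD):
    """True if the reflecting object is closer than the specified threshold, in which
    case the robot stops moving or adjusts its moving direction."""
    return second_relative_distance(reception_duration) < threshold
```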
As a possible application manner, in this embodiment of the present invention, after the voice input unit 405 receives the voice question, the camera unit 407 may be triggered to shoot multiple frames of continuous images. When the first judging unit 408 analyzes the images and determines that the moving target is a human body, the robot is moved into the recognizable area through the cooperation of the camera unit 407, the first judging unit 408 and the driving unit 409; during the movement, the transceiver unit 401, the calculation unit 411, the distance determining unit 412 and the third judging unit 413 cooperate with the driving unit 409 to reduce the occurrence of collisions. After the driving unit 409 controls the robot to move into the recognizable area, the transceiver unit 401 may be triggered to transmit the detection signal and receive the feedback signal of the handwriting device for the detection signal; the content written by the handwriting device on a certain writing plane is then recognized by the obtaining unit 402 and the recognition unit 403, the written content is judged by the writing judgment unit 404, and the suggestion determining unit 406 generates a corresponding writing suggestion according to the judgment result and outputs and displays it. It should be understood that the interaction sequence of the units described above should be determined by their functions and inherent logic, and the application manner described above does not constitute any limitation on the working process of the robot.
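The following high-level sketch mirrors the cooperation of the units described in this application manner; every attribute and method name on the hypothetical robot object is an illustrative stand-in for the corresponding unit, since the embodiment only defines the units' responsibilities and not a programming interface:

```python
def tutoring_session(robot):
    """One tutoring interaction, following the cooperation of units described above.
    All names on the hypothetical `robot` object are illustrative stand-ins."""
    question = robot.voice_input_unit.receive_question()
    robot.voice_input_unit.output_prompt(question)            # prompt for the voice question

    frames = robot.camera_unit.shoot_continuous_frames()
    if robot.first_judging_unit.is_human(frames):
        # move into the recognizable area; the driving unit applies the ultrasonic
        # anti-collision check while moving
        robot.driving_unit.move_into_recognizable_area()

    feedback = robot.transceiver_unit.probe_and_receive()     # detection / feedback signals
    points = robot.obtaining_unit.writing_points(feedback)
    content = robot.recognition_unit.writing_content(points)
    result = robot.writing_judgment_unit.judge(content)
    robot.suggestion_determining_unit.output_suggestion(result)
```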
It can be seen that, with the robot shown in fig. 5, the user's question can be obtained through voice interaction and the corresponding prompt information can be output, and the content written by the user can be judged through handwriting recognition. Further, the robot shown in fig. 5 may also determine, through image analysis, whether a handwriting scene may exist, and automatically move to an appropriate position to support handwriting when such a scene may exist. Furthermore, the robot shown in fig. 5 can also perform anti-collision detection during movement, so that the probability of the robot colliding with an obstacle is reduced.
EXAMPLE six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another robot disclosed in the embodiments of the present invention. As shown in fig. 6, the robot may include: a memory 601, a power supply 602, a microphone 603, a display unit 604, a wireless communication module 605, an ultrasonic probe 606, a light emitter 607, a camera 608, a power device 609, and a processor 610. Those skilled in the art will appreciate that the robot structure shown in fig. 6 does not constitute a limitation of the robot, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The following describes each component of the robot in detail with reference to fig. 6:
the memory 601 may be used to store software programs and modules, and the processor 610 executes various functional applications and data processing of the robot by operating the software programs and modules stored in the memory 601. The memory 601 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, data (such as written contents) created according to the use of the robot, and the like. Further, the memory 1120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The power supply 602 is configured to supply power to various components of the robot, and preferably, the power supply 602 may be logically connected to the processor 610 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
The microphone 603 may convert the collected sound signal into an electrical signal, which is received by an audio circuit of the robot and converted into audio data, and the audio data is output to the processor 610 for processing. Preferably, the microphone 603 may be an array microphone for picking up the voice of the user to receive the voice question inputted by the user.
The display unit 604 may display the prompt information corresponding to the voice question, and may also display the robot's judgment result on the written content and the writing suggestion. The display unit 604 may include a display panel, and may further include a touch panel covering the display panel; when the touch panel detects a touch operation on or near it, the touch panel transmits the touch operation to the processor 610 to determine the type of the touch event, and the processor 610 then provides a corresponding visual output on the display panel according to the type of the touch event.
The wireless communication module 605 may transmit a wireless communication signal to the handwriting device, or may receive a wireless communication signal transmitted by the handwriting device.
The ultrasonic probe 606 may be used to transmit and receive ultrasonic signals for handwriting recognition and anti-collision detection, and may include at least two ultrasonic transmitting modules. Preferably, the ultrasonic probe 606 may be a transmitting-receiving integrated ultrasonic probe.
The light emitter 607 is configured to emit a light signal for handwriting recognition. Specifically, the light emitter 607 may be an infrared LED, and the emitted light signal is an infrared light signal.
The camera 608 is configured to capture images. Specifically, the camera 608 may be used to continuously shoot multiple frames of images after the microphone 603 receives an input voice question, and transmit the shot images to the processor 610 to judge whether a moving target in the images is a human body; it may also be used to capture a face image containing the face of the human body and transmit the face image to the processor 610 to judge whether the posture of the human body is a preset writing posture.
The power device 609 can drive the robot to move. Specifically, it may be used to drive the robot to move so that the camera 608 captures a face image containing the face of the human body, and to drive the robot to move into a recognizable area centered on the handwriting device so as to support handwriting.
The processor 610 is the control center of the robot. It connects various parts of the entire robot using various interfaces and lines, and performs the various functions of the robot and processes data by running or executing the software programs and/or modules stored in the memory 601 and calling the data stored in the memory 601. Specifically, the processor 610 may execute a robot-based learning tutoring method as illustrated in any one of figs. 1 to 3. Optionally, the processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles the operating system, the user interface, application programs and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 610.
It should be noted that the robot shown in fig. 6 may further include components, such as an input key, a speaker, an RF circuit, and a sensor, which are not shown, and details are not described in this embodiment.
EXAMPLE seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of a handwriting device according to an embodiment of the present invention. As shown in fig. 7, the handwriting device may include:
the communication unit 701 is configured to receive a detection signal transmitted by the robot according to a specific frequency, and transmit a feedback signal of the detection signal to the robot, so that the robot acquires a writing point position of the handwriting device on a certain writing plane according to the feedback signal, and identifies the writing content of the handwriting device on the certain writing plane according to a plurality of writing point positions, thereby performing writing judgment on the writing content, obtaining a judgment result, and outputting the judgment result.
Optionally, the handwriting device shown in fig. 7 may further include: a pressure detection unit 702 and a recording unit 703;
a pressure detection unit 702, configured to detect writing pressure on a certain writing plane to obtain writing pressure information;
the communication unit 701 may specifically be configured to receive the detection signal transmitted by the robot according to the specific frequency:
a communication unit 701 for receiving an optical signal emitted by the robot according to a specific frequency and receiving first ultrasonic waves emitted by at least two ultrasonic emission modules built in the robot;
a recording unit 703 for recording a first time point when the communication unit 701 detects the optical signal; and, when the communication unit 701 receives each first ultrasonic wave, record each second time point;
accordingly, the above-mentioned communication unit 701 may specifically be configured to transmit the feedback signal of the detection signal to the robot by:
a communication unit 701 for transmitting the time set recorded by the recording unit 703 and the pressure information detected by the pressure detection unit 702 to the robot, the time set including a first time point at which the optical signal is received and a second time point at which each of the first ultrasonic waves is received.
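On the handwriting device side, the feedback signal therefore only has to bundle the recorded time points and the writing pressure. A minimal sketch under hypothetical driver interfaces (wait_for_light, wait_for_ultrasound, num_transmitters, read and send are assumptions, not an interface defined by the embodiment) is:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeedbackSignal:
    """Payload returned to the robot; the field names are illustrative."""
    first_time_point: float          # when the optical signal was detected
    second_time_points: List[float]  # when each first ultrasonic wave was received
    writing_pressure: float          # reading from the pressure detection unit

class HandwritingDevice:
    """Sketch of the handwriting-device side; `communication` and `pressure_sensor`
    are hypothetical driver objects with the methods used below."""

    def __init__(self, communication, pressure_sensor):
        self.communication = communication
        self.pressure_sensor = pressure_sensor

    def respond_to_detection(self) -> FeedbackSignal:
        first = self.communication.wait_for_light()            # record the first time point
        seconds = [self.communication.wait_for_ultrasound()    # one second time point per wave
                   for _ in range(self.communication.num_transmitters)]
        pressure = self.pressure_sensor.read()
        feedback = FeedbackSignal(first, seconds, pressure)
        self.communication.send(feedback)                      # feedback signal to the robot
        return feedback
```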
By implementing the handwriting device shown in fig. 7, handwriting recognition can be performed through communication with the robot. The handwriting device only needs to send the feedback signal corresponding to the detection signal to the robot, and most of the calculation required for handwriting recognition is performed by the robot according to the feedback signal. Therefore, the amount of calculation on the handwriting device side can be reduced, the power consumption of the handwriting device can be reduced, and the size of the handwriting device can be made smaller, so that the handwriting device is more convenient to carry.
EXAMPLE eight
Referring to fig. 8, fig. 8 is a schematic structural diagram of a robot-based learning tutoring system according to an embodiment of the present invention. As shown in fig. 8, the system may include:
a robot 801 and a handwriting device 802; the robot 801 may be any one of the robots shown in fig. 5 or fig. 6, and the handwriting device 802 may be as shown in fig. 7. With the system shown in fig. 8, the robot 801 can recognize the writing contents of the handwriting device 802 on a certain writing plane and determine the writing contents, so that the effect of learning and tutoring using the robot can be improved.
Further, an embodiment of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the robot-based learning tutoring methods of fig. 1 to 3.
An embodiment of the present invention discloses a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute any one of the robot-based learning tutoring methods of fig. 1-3.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary embodiments, and that the acts and modules involved are not necessarily required to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods in the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The robot-based learning tutoring method and system, the robot, and the handwriting device disclosed in the embodiments of the present invention are described in detail above, and specific examples are used herein to explain the principle and implementation of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation to the present invention.

Claims (13)

1. A robot-based learning coaching method, the method comprising:
the robot controls a camera of the robot to shoot multiple continuous images, and judges whether a moving target in a camera viewing range is a human body or not by analyzing the multiple continuous images;
if the moving target is a human body, the robot moves and/or adjusts the view finding range of the camera so that the camera shoots the face of the human body, and the face is always in the view finding range of the camera;
the robot controls the camera to shoot a plurality of frames of face images containing the face;
the robot determines the posture of the human body by analyzing the face images, and if the posture of the human body is a preset writing posture, the robot acquires a first relative position of handwriting equipment relative to the robot;
the robot judges whether the robot is in a preset recognizable area according to the first relative position, wherein the recognizable area takes the handwriting equipment as a center;
if the robot is not within the recognizable area, the robot moves to a first target position within the recognizable area, the robot facing the face when the robot is at the first target position;
the robot transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by the handwriting equipment;
the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal;
the robot identifies the writing content of the handwriting equipment on a certain writing plane according to a plurality of continuously received writing point positions;
and the robot performs writing judgment on the written content to obtain a judgment result and outputs the judgment result.
2. The robot-based learning coaching method of claim 1, wherein the robot transmits a detection signal at a specific frequency and receives a feedback signal of the detection signal transmitted by the handwriting device, comprising:
the robot emits light signals according to a specific frequency, at least two ultrasonic emission modules built in the robot are controlled to respectively emit first ultrasonic waves while emitting the light signals, and time sets and writing pressure information sent by the handwriting equipment are received, wherein the time sets comprise first time points when the light signals are received and second time points when each first ultrasonic wave is received;
and the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal, and the method comprises the following steps:
the robot judges whether the handwriting equipment has writing operation or not according to the writing pressure information, and if so, the robot calculates the time interval between the first time point and each second time point;
the robot determines a first relative distance between each ultrasonic wave transmitting module and the handwriting equipment according to each time interval;
and the robot determines the writing point position of the handwriting equipment on a certain writing plane by combining the known distance between every two ultrasonic emission modules and every first relative distance.
3. The robot-based learning coaching method of claim 1, wherein before the robot transmits a detection signal at a specific frequency and receives a feedback signal of the detection signal transmitted by the handwriting device, the method further comprises:
the robot receives an input voice question, inquires and outputs prompt information corresponding to the voice question;
and after the robot performs writing judgment on the written content to obtain a judgment result and outputs the judgment result, the method further comprises the following steps:
and the robot determines the writing matching degree of the writing content and the standard content according to the judgment result, generates a writing suggestion corresponding to the writing matching degree and outputs and displays the writing suggestion.
4. The robot-based learning coaching method of claim 1, wherein during the movement of the robot, the method further comprises:
the robot controls at least two ultrasonic wave transmitting modules arranged in the robot to transmit second ultrasonic waves in turn and receive reflected waves of the second ultrasonic waves;
the robot calculates a reception time period required from the transmission of the second ultrasonic wave to the reception of the reflected wave;
the robot determines a second relative distance between the robot and the object reflecting the second ultrasonic wave according to the receiving time length;
and the robot judges whether the second relative distance is smaller than a specified threshold value, and if so, the robot stops moving or adjusts the moving direction of the robot.
5. A robot-based learning coaching method, the method comprising:
the handwriting equipment receives a detection signal transmitted by the robot according to a specific frequency and sends a feedback signal of the detection signal to the robot, so that the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal and identifies the writing content of the handwriting equipment on the certain writing plane according to a plurality of writing point positions, thereby performing writing judgment on the writing content, obtaining a judgment result and outputting the judgment result;
wherein the detection signal is transmitted by the robot when moving into a recognizable area; when the robot judges that a moving target in the camera view range is a human body through multiple continuous images shot by a camera, the robot moves and/or adjusts the view range of the camera to enable the camera to shoot the face of the human body and enable the face to be always in the view range of the camera; controlling the camera to shoot a plurality of frames of face images containing the face, and determining the posture of the human body by analyzing the face images; when the posture of the human body is a preset writing posture, acquiring a first relative position of the handwriting equipment relative to the robot; and when the robot is judged not to be in a preset recognizable area according to the first relative position, controlling the robot to move to a first target position in the recognizable area, wherein the recognizable area takes the handwriting equipment as a center, and when the robot is in the first target position, the robot is directly facing the face.
6. The robot-based learning coaching method of claim 5, wherein the handwriting device receives a detection signal emitted by the robot and sends a feedback signal of the detection signal to the robot, and the method comprises:
the handwriting equipment detects writing pressure on the certain writing plane to obtain writing pressure information;
the handwriting equipment receives an optical signal emitted by the robot and records a first time point of receiving the optical signal;
the handwriting equipment receives first ultrasonic waves transmitted by at least two ultrasonic transmitting modules arranged in the robot and records a second time point of receiving each first ultrasonic wave;
the handwriting device sends a time set and the writing pressure information to the robot, wherein the time set comprises the first time point and each second time point.
7. A robot, comprising:
the receiving and sending unit is used for transmitting a detection signal according to a specific frequency and receiving a feedback signal of the detection signal sent by the handwriting equipment;
the acquisition unit is used for acquiring the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal;
the recognition unit is used for recognizing the writing content of the handwriting equipment on a certain writing plane according to a plurality of continuously received writing point positions;
the writing judgment unit is used for performing writing judgment on the writing content to obtain a judgment result and outputting the judgment result;
the robot further includes:
the camera shooting unit is used for controlling a camera of the robot to shoot multiple frames of continuous images before the transceiver unit transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal sent by handwriting equipment;
the first judging unit is used for judging whether a moving target in the framing range of the camera is a human body or not by analyzing the continuous images of the plurality of frames;
the driving unit is used for controlling the robot to move until the camera shoots the face of the human body and enabling the face to be always in a view finding range of the camera when the first judging unit judges that the moving target is the human body;
and/or the camera unit is further configured to adjust a viewing range of the camera until the camera shoots the face of the human body and make the face always within the viewing range of the camera when the first determination unit determines that the moving target is the human body; the camera is also used for controlling the camera to shoot a plurality of frames of face images of the face;
the first judging unit is further configured to determine the posture of the human body by analyzing the face image, and judge whether the posture of the human body is a preset writing posture;
the second judging unit is used for acquiring a first relative position of the handwriting equipment relative to the robot when the first judging unit judges that the posture of the human body is a preset writing posture, and judging whether the robot is in a preset recognizable area or not according to the first relative position;
the driving unit is further configured to control the robot to move to a first target position in the recognizable area and trigger the transceiver unit to execute the operation of transmitting the detection signal at the specific frequency and receiving a feedback signal of the detection signal sent by the handwriting device when the second determination unit determines that the robot is not located in the recognizable area; when the robot is at the first target position, the robot is facing the face.
8. The robot of claim 7, wherein the transceiver unit comprises: the device comprises a light emitting module, a first control module and a first receiving module;
the optical transmitting module is used for transmitting optical signals according to a specific frequency;
the first control module is used for controlling at least two ultrasonic wave emitting modules to respectively emit first ultrasonic waves while the light emitting modules emit the optical signals;
the first receiving module is configured to receive a time set and writing pressure information sent by the handwriting device, where the time set includes a first time point at which the optical signal is received and a second time point at which each of the first ultrasonic waves is received;
and, the acquisition unit includes:
the pressure judging module is used for judging whether the handwriting equipment has writing operation according to the writing pressure information;
the time calculation module is used for calculating the time interval between the first time point and each second time point when the pressure judgment module judges that the handwriting equipment has writing operation;
the distance determining module is used for determining a first relative distance between each ultrasonic wave transmitting module and the handwriting equipment according to each time interval;
and the position determining module is used for determining the writing point position of the handwriting equipment on a certain writing plane by combining the known distance between every two ultrasonic transmitting modules and every first relative distance.
9. The robot of claim 7, further comprising:
the voice input unit is used for receiving an input voice question, inquiring and outputting prompt information corresponding to the voice question before the transceiving unit transmits a detection signal according to a specific frequency and receives a feedback signal of the detection signal transmitted by handwriting equipment;
and the suggestion determining unit is used for determining the writing matching degree of the writing content and the standard content according to the judgment result obtained by the writing judging unit, generating the writing suggestion corresponding to the writing matching degree, and outputting and displaying the writing suggestion.
10. The robot of claim 7, wherein the transceiver unit comprises a second control module and a second receiving module;
the second control module is used for controlling at least two ultrasonic emission modules to emit second ultrasonic waves in turn in the process that the driving unit controls the robot to move;
the second receiving module is used for receiving the reflected wave of the second ultrasonic wave;
and, the robot further comprises:
a calculation unit for calculating a reception time period required from transmission of the second ultrasonic wave to reception of the reflected wave;
a distance determination unit for determining a second relative distance between the robot and the object reflecting the second ultrasonic wave according to the receiving duration;
a third judging unit, configured to judge whether the second relative distance is smaller than a specified threshold;
the driving unit is further configured to control the robot to stop moving or adjust a moving direction of the robot when the third determining unit determines that the second relative distance is smaller than the specified threshold.
11. A handwriting device, comprising:
the communication unit is used for receiving a detection signal transmitted by the robot according to a specific frequency, sending a feedback signal of the detection signal to the robot, so that the robot acquires the writing point position of the handwriting equipment on a certain writing plane according to the feedback signal, identifies the writing content of the handwriting equipment on the certain writing plane according to a plurality of writing point positions, performs writing judgment on the writing content, obtains a judgment result, and outputs the judgment result;
wherein the detection signal is transmitted by the robot when moving into a recognizable area; when the robot judges that a moving target in the camera view range is a human body through multiple continuous images shot by a camera, the robot moves and/or adjusts the view range of the camera to enable the camera to shoot the face of the human body and enable the face to be always in the view range of the camera; controlling the camera to shoot a plurality of frames of face images containing the face, and determining the posture of the human body by analyzing the face images; when the posture of the human body is a preset writing posture, acquiring a first relative position of the handwriting device relative to the robot; and when the robot is judged not to be in a preset recognizable area according to the first relative position, controlling the robot to move to a first target position in the recognizable area, wherein the recognizable area takes the handwriting device as a center, and when the robot is in the first target position, the robot is directly facing the face.
12. The handwriting device according to claim 11, characterized in that said handwriting device further comprises: a pressure detection unit and a recording unit;
the pressure detection unit is used for detecting the writing pressure on the certain writing plane to obtain writing pressure information;
the communication unit is specifically used for receiving the optical signal emitted by the robot according to a specific frequency and receiving first ultrasonic waves emitted by at least two ultrasonic emission modules built in the robot; and sending a time set and the pressure information to the robot, the time set including a first time point at which the optical signal is received and a second time point at which each of the first ultrasonic waves is received;
the recording unit is used for recording the first time point when the communication unit detects the optical signal; and recording each second time point when each first ultrasonic wave is received by the communication unit.
13. A robot-based learning coaching system, characterized in that the system comprises a robot according to any one of claims 7-10, and a handwriting device according to any one of claims 11-12.
CN201810420086.1A 2018-05-04 2018-05-04 Robot-based learning tutoring method and system, robot and handwriting equipment Active CN108399813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810420086.1A CN108399813B (en) 2018-05-04 2018-05-04 Robot-based learning tutoring method and system, robot and handwriting equipment

Publications (2)

Publication Number Publication Date
CN108399813A CN108399813A (en) 2018-08-14
CN108399813B true CN108399813B (en) 2021-06-01

Family

ID=63101336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810420086.1A Active CN108399813B (en) 2018-05-04 2018-05-04 Robot-based learning tutoring method and system, robot and handwriting equipment

Country Status (1)

Country Link
CN (1) CN108399813B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109300347B (en) * 2018-12-12 2021-01-26 广东小天才科技有限公司 Dictation auxiliary method based on image recognition and family education equipment
CN111078100B (en) * 2019-06-03 2022-03-01 广东小天才科技有限公司 Point reading method and electronic equipment
CN110992740A (en) * 2019-10-30 2020-04-10 河南智业科技发展有限公司 Family education robot and working method thereof
CN110989844A (en) * 2019-12-16 2020-04-10 广东小天才科技有限公司 Input method, watch, system and storage medium based on ultrasonic waves
CN111613107A (en) * 2020-05-19 2020-09-01 富邦教育科技(深圳)有限公司 Artificial intelligence operating system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1349149A (en) * 2001-12-17 2002-05-15 北京中文之星数码科技有限公司 Hand writing input method and device adopting suspersonic wave positioning
CN1975804A (en) * 2006-12-15 2007-06-06 华南理工大学 Education robot with character-learning and writing function and character recognizing method thereof
CN103744541A (en) * 2014-01-26 2014-04-23 上海鼎为电子科技(集团)有限公司 Writing pen, electronic terminal and writing system
CN105786222A (en) * 2015-12-24 2016-07-20 广东小天才科技有限公司 Ultrasonic sensing writing method and system, intelligent terminal and writing end
CN205620959U (en) * 2016-04-14 2016-10-05 刘俊哲 Multi -functional several pen
CN107301803A (en) * 2017-06-29 2017-10-27 广东小天才科技有限公司 A kind of order of strokes observed in calligraphy correcting method, device, terminal device and computer-readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1987762B (en) * 2005-12-23 2011-01-12 深圳市朗科科技股份有限公司 Input method and device for determining hand writing track by ultrasonic wave
CN105759650A (en) * 2016-03-18 2016-07-13 北京光年无限科技有限公司 Method used for intelligent robot system to achieve real-time face tracking
CN106409026A (en) * 2016-11-01 2017-02-15 河池学院 Intelligent after-school tutoring robot

Also Published As

Publication number Publication date
CN108399813A (en) 2018-08-14

Similar Documents

Publication Publication Date Title
CN108399813B (en) Robot-based learning tutoring method and system, robot and handwriting equipment
US9274744B2 (en) Relative position-inclusive device interfaces
US10671342B2 (en) Non-contact gesture control method, and electronic terminal device
Steckel et al. BatSLAM: Simultaneous localization and mapping using biomimetic sonar
US8169404B1 (en) Method and device for planary sensory detection
US20060221769A1 (en) Object position estimation system, apparatus and method
US11388333B2 (en) Audio guided image capture method and device
CN105260024B (en) A kind of method and device that gesture motion track is simulated on screen
CN103455171A (en) Three-dimensional interactive electronic whiteboard system and method
US20160077206A1 (en) Ultrasonic depth imaging
CN108737934B (en) Intelligent sound box and control method thereof
US20150219755A1 (en) Mapping positions of devices using audio
US10416305B2 (en) Positioning device and positioning method
CN110572600A (en) video processing method and electronic equipment
CN108604143B (en) Display method, device and terminal
CN110119209A (en) Audio device control method and device
US20160073087A1 (en) Augmenting a digital image with distance data derived based on acoustic range information
US20180307302A1 (en) Electronic device and method for executing interactive functions
CN113219450B (en) Ranging positioning method, ranging device and readable storage medium
CN114756129A (en) Method and device for executing operation of AR equipment, storage medium and AR glasses
CN111246339B (en) Method and system for adjusting pickup direction, storage medium and intelligent robot
CN112416001A (en) Automatic following method and device for vehicle and vehicle
CN112929577A (en) Flash lamp control method and terminal equipment
US20240119943A1 (en) Apparatus for implementing speaker diarization model, method of speaker diarization, and portable terminal including the apparatus
CN110730378A (en) Information processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant