CN112233471B - Teaching material transfer system for intelligent education robot - Google Patents

Teaching material transfer system for intelligent education robot

Info

Publication number
CN112233471B
CN112233471B
Authority
CN
China
Prior art keywords
voice
module
matching
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011132821.2A
Other languages
Chinese (zh)
Other versions
CN112233471A (en)
Inventor
刘日华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tibet Dianhe Education Technology Co.,Ltd.
Original Assignee
Tibet Dianhe Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tibet Dianhe Education Technology Co ltd filed Critical Tibet Dianhe Education Technology Co ltd
Priority to CN202011132821.2A priority Critical patent/CN112233471B/en
Publication of CN112233471A publication Critical patent/CN112233471A/en
Application granted granted Critical
Publication of CN112233471B publication Critical patent/CN112233471B/en

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour

Abstract

The invention discloses a teaching material retrieval system for an intelligent education robot, comprising a processor, an induction recognition module, a material retrieval module, a power management module, a data storage module, a display control module and a learning content analysis module. The induction recognition module collects and recognizes the interaction information of the user, which improves the accuracy of learning material retrieval and ensures the energy efficiency of the intelligent education robot. The material retrieval module comprises an automatic retrieval unit and a voice retrieval unit, which improves the intelligence of learning material retrieval. The power management module manages the power supply of the intelligent education robot: it analyzes the remaining power of the supply through the power analysis unit and sends a prompt to the display control module, thereby preventing damage to the learning materials in the data storage module caused by a sudden power failure.

Description

Teaching material transfer system for intelligent education robot
Technical Field
The invention belongs to the technical field of intelligent education robots, and particularly relates to a teaching material calling system for an intelligent education robot.
Background
With the development of science and technology, electronic products have become increasingly common in ordinary households, and the spread of the internet and ever-cheaper broadband have greatly changed people's living habits. To facilitate children's learning, various educational robots are now available on the market.
The invention patent with publication number CN109215412A discloses an intelligent education robot system comprising a robot with a built-in central processing unit and a power management module, the power management module supplying power to the whole robot. The central processing unit comprises a storage module, an induction recognition module and an information output module, the latter two connected to the storage module. The storage module stores related learning videos and voice messages; the induction recognition module recognizes specific external information and transmits it to the central processing unit; the information output module outputs the corresponding audio or images. An infrared camera is installed at the robot's eyes, a voice input device at its ears, a voice output device at its mouth, and a display screen on its body; the infrared camera and voice input device are connected to the induction recognition module, and the voice output device and display screen are connected to the information output module.
This scheme enables children's autonomous and interactive learning to a certain extent; however, it supports only a single type of learning and does not classify teaching materials in detail, so it still needs further improvement.
Disclosure of Invention
In order to solve the problems of the scheme, the invention provides a teaching material calling system for an intelligent education robot.
The purpose of the invention can be realized by the following technical scheme: a teaching material calling system for an intelligent education robot comprises a processor, an induction recognition module, a data calling module, a power management module, a data storage module, a display control module and a learning content analysis module;
the induction identification module is used for collecting and identifying the interaction information of the user, and the specific collection steps are as follows:
z1: the video information of the front side of the intelligent education robot is acquired through the video acquisition unit, and the voice information is acquired through the voice acquisition unit; sending the video information and the voice information to a processor;
z2: after the processor receives the video information, analyzing the video information to obtain a video analysis coefficient SF;
z3: after the processor receives the voice information, the processor analyzes the voice information to obtain a voice analysis coefficient YF, and the specific obtaining step is as follows:
z31: acquiring pre-stored voice characteristic information through a data storage module;
z32: matching and identifying the voice information against the voice characteristic information to obtain a word error rate and a sentence error rate, marked ZW and JW respectively; the matching identification of voice information is prior art, and the method in the thesis 'terminal fuzzy voice high-precision identification method based on semantic association' can realize it;
z33: acquiring a voice matching coefficient YP through the formula YP = α2 × ZW + α3 × JW, wherein α2 and α3 are preset proportionality coefficients;
z34: when the voice matching coefficient YP is larger than L2, judging that the voice information is successfully matched with the voice characteristic information, and assigning a voice analysis coefficient YF to be 1; otherwise, judging that the matching of the voice information and the voice characteristic information fails, and assigning a voice analysis coefficient YF to be 0; wherein L2 is a preset speech matching coefficient threshold;
z4: acquiring an induction recognition coefficient GS through the formula GS = SF × YF; when the induction recognition coefficient GS is larger than L3, a data calling instruction is sent to the data calling module through the processor; otherwise, the processor sends no instruction; wherein L3 is a preset induction recognition coefficient threshold;
z5: sending a video analysis coefficient, a voice analysis coefficient and a data calling instruction sending record to a data storage module through a processor, wherein the data calling instruction sending record comprises a data calling instruction and data calling instruction sending time;
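The gating logic of steps Z1-Z5 can be sketched in code as follows. The coefficient and threshold names (YP, YF, SF, GS, L2, L3, α2, α3) come from the text; the numeric values are hypothetical placeholders, since the patent only calls them "preset", and the function names are illustrative, not taken from the patent.

```python
# Sketch of the induction-recognition gating of steps Z1-Z5.
# ALPHA2, ALPHA3, L2 and L3 are hypothetical placeholder values.
ALPHA2, ALPHA3 = 0.6, 0.4  # preset proportionality coefficients (z33)
L2 = 0.5                   # preset voice matching coefficient threshold (z34)
L3 = 0.5                   # preset induction recognition coefficient threshold (z4)

def voice_analysis_coefficient(zw: float, jw: float) -> int:
    """z33-z34: YP = alpha2*ZW + alpha3*JW; YF = 1 iff YP > L2, else 0."""
    yp = ALPHA2 * zw + ALPHA3 * jw
    return 1 if yp > L2 else 0

def should_send_calling_instruction(sf: int, zw: float, jw: float) -> bool:
    """z4: GS = SF * YF; the data calling instruction is sent iff GS > L3."""
    gs = sf * voice_analysis_coefficient(zw, jw)
    return gs > L3
```

Note that both the video analysis coefficient SF and the voice analysis coefficient YF are 0/1 flags, so GS is nonzero only when both the image match (z2) and the voice match (z3) succeed.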
the data calling module is used for calling learning data, and the specific calling steps are as follows:
x1: after the data calling module receives a data calling instruction, starting a command processing unit, wherein the command processing unit is used for acquiring a keyword matching result, and the specific acquisition steps are as follows:
x11: acquiring voice information of a user through a voice acquisition unit;
x12: performing feature extraction on the voice information through a command processing unit, and marking keywords subjected to feature extraction as keywords to be analyzed;
x13: matching the keywords to be analyzed with the subject keywords one by one, and marking the keyword matching result as GZ;
x14: when the keyword matching result GZ is empty, sending an automatic calling instruction to the automatic calling unit; when GZ is not empty, it represents one item of the subject keywords, and a keyword calling instruction is sent to the voice calling unit;
x2: when the automatic calling unit receives the automatic calling instruction, the automatic calling unit calls the learning materials, and the specific calling steps are as follows:
x21: acquiring preset voice characteristic information through the data storage module, marked XYi, i = 1, 2, …, n;
x22: matching the voice information with the voice characteristic information, and acquiring a serial number i after the matching is successful;
x23: acquiring the learning data with the serial number i at the end of the last learning through a data storage module;
x24: sending the learning materials in step X23 to the display control module through the processor;
x3: when the voice calling module receives a keyword calling instruction, learning materials corresponding to a subject keyword matching result GZ are matched through the data storage module; and the learning data is sent to a display control module;
x4: and sending the automatic calling instruction sending record, the voice calling instruction sending record and the learning material sending record to the data storage module through the processor.
Preferably, the display control module is used for playing learning materials, and the display control module comprises a touch display screen and a loudspeaker; the user can control the playing process of the learning materials by clicking the touch display screen; and the upper right corner of the touch display screen displays the residual electric quantity of the intelligent education robot.
Preferably, the data retrieval module comprises a command processing unit, a voice retrieval unit and an automatic retrieval unit.
Preferably, the learning materials include Chinese learning materials, mathematics learning materials, English learning materials, music learning materials, dance learning materials, calligraphy learning materials and art learning materials, and the subject keywords include Chinese, mathematics, English, music, dance, calligraphy and art.
Preferably, the voice feature information is the voice feature information of the user recorded by the intelligent education robot when the user uses the intelligent education robot to learn.
Preferably, the power management module is used for managing a power supply of the intelligent education robot, the power management module comprises a power supply and a power supply analysis unit, and the specific management steps are as follows:
c1: acquiring the residual electric quantity of the power supply in real time, marking the residual electric quantity as SD, sending the residual electric quantity SD to a display control module through a processor, displaying the residual electric quantity in real time by the display control module, and sending the residual electric quantity to a power supply analysis unit;
c2: when the remaining power SD is less than or equal to L4, sending a power shortage instruction to the display control module through the processor, wherein L4 is a preset remaining power threshold;
c3: controlling, through the processor, the intelligent education robot to enter a dormant state L5 minutes after the power shortage instruction is sent, wherein L5 is a preset time threshold;
c4: when the intelligent education robot is in a dormant state, the user is prompted to charge through the display control module.
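The power-management steps C1-C4 can be sketched as a small state machine. The thresholds L4 and L5 are named in the text but their values are hypothetical placeholders, and the class and return strings are illustrative.

```python
# Sketch of the power-management steps C1-C4.
L4 = 15  # preset remaining-power threshold (hypothetical value, percent)
L5 = 5   # preset time threshold (hypothetical value, minutes)

class PowerManager:
    def __init__(self) -> None:
        self.shortage_sent_at = None  # minute the shortage instruction was sent
        self.dormant = False

    def update(self, remaining_power: float, now_minutes: float) -> str:
        """C1-C4: display power, warn on shortage, sleep L5 minutes later."""
        if self.dormant:
            return "prompt_user_to_charge"        # C4
        if remaining_power <= L4 and self.shortage_sent_at is None:
            self.shortage_sent_at = now_minutes
            return "power_shortage_instruction"   # C2
        if (self.shortage_sent_at is not None
                and now_minutes - self.shortage_sent_at >= L5):
            self.dormant = True                   # C3
            return "enter_dormant_state"
        return "display_remaining_power"          # C1
```

The L5-minute grace period between the shortage warning (C2) and dormancy (C3) is what gives the user time to save state before a shutdown, which is the stated rationale for protecting the data storage module.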
Preferably, the video analysis coefficient obtaining step is as follows:
z21: video preprocessing is carried out on video information, wherein the video preprocessing comprises video shot segmentation, key frame extraction and video feature extraction;
z22: acquiring a figure image to be analyzed in video information, performing image preprocessing on the figure image to be analyzed, and acquiring a pre-stored standard figure image through a data storage module, wherein the standard figure image is a whole body image of a user; the image preprocessing comprises gray level transformation, image correction, image cutting and image enhancement;
z23: performing multiple matching analyses on the figure image to be analyzed against the standard figure image by a feature matching method, and marking the matching probability and the matching precision as PG and PJ; image matching based on a feature matching method is prior art, and the method in the paper 'image matching algorithm based on image feature points' can realize it;
z24: acquiring an image matching coefficient TP through a formula based on the matching probability PG and the matching precision PJ (the formula image is not reproduced in this text), wherein α1 is a preset proportionality coefficient;
z25: when the image matching coefficient TP is larger than L1, judging that the matching between the character image to be analyzed and the standard character image is successful, and assigning a video analysis coefficient SF to be 1; otherwise, judging that the matching between the figure image to be analyzed and the standard figure image fails, and assigning a video analysis coefficient SF to be 0; where L1 is a preset image matching coefficient threshold.
Preferably, the matching probability is a ratio of a correct matching number to a total matching number, and the matching precision is a mean square error of a matching error of a correct match.
Preferably, the character image to be analyzed is a character image cropped from a key frame of the video information.
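The two matching metrics defined above can be computed directly from their definitions: the matching probability PG is the ratio of correct matches to total matches, and the matching precision PJ is the mean square error of the matching errors of the correct matches (interpreted here, as an assumption, as the mean of the squared errors). The function names are illustrative.

```python
def matching_probability(correct: int, total: int) -> float:
    """PG: ratio of the number of correct matches to the total matches."""
    return correct / total

def matching_precision(errors_of_correct_matches: list[float]) -> float:
    """PJ: mean square error of the matching errors of the correct matches
    (assumed here to mean the average of the squared errors)."""
    n = len(errors_of_correct_matches)
    return sum(e * e for e in errors_of_correct_matches) / n
```

A high PG with a low PJ indicates the character image to be analyzed reliably matches the stored standard character image, which is what step Z25 thresholds against L1.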
Preferably, the interactive information comprises video information and voice information, the induction recognition module comprises a video acquisition unit and a voice acquisition unit, the video acquisition unit is arranged at the position of eyes of the intelligent education robot, and the voice acquisition unit is arranged at the position of ears of the intelligent education robot; the induction identification module is electrically connected with the processor.
Preferably, the learning content analysis module is configured to analyze the content learned by the user, and the specific analysis steps include:
n1: acquiring the learning time of the user through the data storage module, the learning time being the user's learning duration on a learning material, marked XSj, j = 1, 2, …, m, where j denotes the j-th learning material;
n2: sorting the learning times XSj from large to small, and acquiring the learning time average value XSP through the formula XSP = (XS1 + XS2 + … + XSm) / m;
n3: calculating the difference between the maximum value and the minimum value of the learning time, and marking the difference as CZ;
n4: acquiring a subject analysis coefficient XF through the formula XF = γ1 × XSP × e^(−γ2 × CZ), wherein γ1 and γ2 are preset proportionality coefficients and 0 < γ1 < γ2;
n5: when XF is larger than L6, judging that the user shows a subject-bias phenomenon, and sending a subject-bias reminder to a preset intelligent terminal through the processor; when XF is less than or equal to L6, marking the subject with the largest learning time as the preference subject and sending it to the preset intelligent terminal through the processor;
n6: sending the subject-bias reminder sending record and the preference subject sending record to the data storage module through the processor.
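The learning-content analysis of steps N1-N5 can be sketched as follows. The mean XSP, range CZ, and formula XF = γ1 × XSP × e^(−γ2 × CZ) follow the text; the values of γ1, γ2 and the threshold L6 are hypothetical placeholders (the patent only requires 0 < γ1 < γ2), and the return strings are illustrative.

```python
import math

# Sketch of the learning-content analysis of steps N1-N5.
GAMMA1, GAMMA2 = 0.1, 0.2  # hypothetical values, with 0 < gamma1 < gamma2
L6 = 2.0                   # hypothetical subject analysis coefficient threshold

def subject_analysis(learning_times: list[float]) -> str:
    """N2-N5: XF = gamma1 * XSP * exp(-gamma2 * CZ); XF > L6 -> bias reminder."""
    xsp = sum(learning_times) / len(learning_times)  # N2: average learning time
    cz = max(learning_times) - min(learning_times)   # N3: max-min difference
    xf = GAMMA1 * xsp * math.exp(-GAMMA2 * cz)       # N4
    if xf > L6:
        return "subject_bias_reminder"
    return "preference_subject_reminder"
```

With these placeholder values, uniformly long study times (small CZ, large XSP) drive XF up, while a short average or a large spread drives it down toward the preference-subject branch.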
Preferably, the video information is the video in front of the intelligent education robot acquired by the video acquisition unit, the voice information is the voice signal within K1 meters of the intelligent education robot acquired by the voice acquisition unit, and the voice characteristic information is the voice signal of the user stored in the data storage module; K1 is a preset distance threshold.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention is provided with an induction identification module, which is used for collecting and identifying the interaction information of a user; the video information of the front side of the intelligent education robot is acquired through the video acquisition unit, and the voice information is acquired through the voice acquisition unit; sending the video information and the voice information to a processor; after the processor receives the video information, analyzing the video information to obtain a video analysis coefficient; after receiving the voice information, the processor analyzes the voice information to obtain a voice analysis coefficient; acquiring an induction identification coefficient through a formula, and sending a data calling instruction to a data calling module through a processor when the induction identification coefficient is larger than a preset induction identification coefficient threshold value, or else, not sending any instruction by the processor; the induction recognition module acquires the interactive information through the video acquisition unit and the voice acquisition unit, processes the interactive information and acquires an induction recognition coefficient, judges whether to transfer the learning materials or not by analyzing the induction recognition coefficient, improves the accuracy of transferring the learning materials and ensures the energy conservation of the intelligent education robot;
2. the invention is provided with a data calling module, which is used for calling learning data; after the data calling module receives the data calling instruction, the command processing unit is started and a keyword matching result is acquired through it; when the keyword matching result is empty, an automatic calling instruction is sent to the automatic calling unit; when the keyword matching result is not empty, a keyword calling instruction is sent to the voice calling unit; when the automatic calling unit receives the automatic calling instruction, it calls the learning materials; when the voice calling unit receives a keyword calling instruction, the learning materials corresponding to the subject keyword matching result GZ are matched through the data storage module, and the learning materials are sent to the display control module; the data calling module comprises an automatic calling unit and a voice calling unit, so the intelligence of learning data calling is improved;
3. the intelligent education robot power supply management system is provided with a power supply management module, wherein the power supply management module is used for managing the power supply of the intelligent education robot; acquiring the residual electric quantity of the power supply in real time, marking the residual electric quantity as SD, sending the residual electric quantity SD to a display control module through a processor, displaying the residual electric quantity in real time by the display control module, and sending the residual electric quantity to a power supply analysis unit; when the remaining power SD is less than or equal to L4, sending a power shortage instruction to the display control module through the processor, wherein L4 is a preset remaining power threshold; controlling the intelligent education robot to enter a dormant state through the processor at L5 minutes after the power supply shortage instruction is sent, wherein L5 is a preset time threshold value; when the intelligent education robot is in a dormant state, prompting a user to charge through the display control module; the power management module analyzes the residual electric quantity of the power supply through the power analysis unit and sends a prompt to the display control module, so that damage to learning materials in the data storage module caused by sudden power failure is prevented.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of the principle of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a teaching material retrieval system for an intelligent education robot includes a processor, an induction recognition module, a material retrieval module, a power management module, a data storage module, a display control module, and a learning content analysis module;
the induction identification module is used for collecting and identifying the interaction information of the user, and the specific collection steps are as follows:
z1: the video information of the front side of the intelligent education robot is acquired through the video acquisition unit, and the voice information is acquired through the voice acquisition unit; sending the video information and the voice information to a processor;
z2: after the processor receives the video information, analyzing the video information to obtain a video analysis coefficient SF;
z3: after the processor receives the voice information, the processor analyzes the voice information to obtain a voice analysis coefficient YF, and the specific obtaining step is as follows:
z31: acquiring pre-stored voice characteristic information through a data storage module;
z32: matching and identifying the voice information against the voice characteristic information to obtain a word error rate and a sentence error rate, marked ZW and JW respectively; the matching identification of voice information is prior art, and the method in the thesis 'terminal fuzzy voice high-precision identification method based on semantic association' can realize it;
z33: acquiring a voice matching coefficient YP through the formula YP = α2 × ZW + α3 × JW, wherein α2 and α3 are preset proportionality coefficients;
z34: when the voice matching coefficient YP is larger than L2, judging that the voice information is successfully matched with the voice characteristic information, and assigning a voice analysis coefficient YF to be 1; otherwise, judging that the matching of the voice information and the voice characteristic information fails, and assigning a voice analysis coefficient YF to be 0; wherein L2 is a preset speech matching coefficient threshold;
z4: acquiring an induction recognition coefficient GS through the formula GS = SF × YF; when the induction recognition coefficient GS is larger than L3, a data calling instruction is sent to the data calling module through the processor; otherwise, the processor sends no instruction; wherein L3 is a preset induction recognition coefficient threshold;
z5: sending the video analysis coefficient, the voice analysis coefficient and the data calling instruction sending record to a data storage module through a processor, wherein the data calling instruction sending record comprises a data calling instruction and data calling instruction sending time;
the data calling module is used for calling learning data, and the specific calling steps are as follows:
x1: after the data calling module receives the data calling instruction, starting a command processing unit, wherein the command processing unit is used for acquiring a keyword matching result, and the specific acquisition steps are as follows:
x11: acquiring voice information of a user through a voice acquisition unit;
x12: performing feature extraction on the voice information through a command processing unit, and marking keywords subjected to feature extraction as keywords to be analyzed;
x13: matching the keywords to be analyzed with the subject keywords one by one, and marking the keyword matching result as GZ;
x14: when the keyword matching result GZ is empty, sending an automatic calling instruction to the automatic calling unit; when GZ is not empty, it represents one item of the subject keywords, and a keyword calling instruction is sent to the voice calling unit;
x2: when the automatic calling unit receives the automatic calling instruction, the automatic calling unit calls the learning materials, and the specific calling steps are as follows:
x21: acquiring preset voice characteristic information through the data storage module, marked XYi, i = 1, 2, …, n;
x22: matching the voice information with the voice characteristic information, and acquiring a serial number i after the matching is successful;
x23: acquiring the learning data with the serial number i at the end of the last learning through a data storage module;
x24: sending the learning materials in step X23 to the display control module through the processor;
x3: when the voice calling module receives a keyword calling instruction, learning materials corresponding to a subject keyword matching result GZ are matched through the data storage module; and the learning data is sent to a display control module;
x4: and sending the automatic calling instruction sending record, the voice calling instruction sending record and the learning material sending record to the data storage module through the processor.
Furthermore, the display control module is used for playing learning materials and comprises a touch display screen and a loudspeaker; the user can control the playing process of the learning materials by clicking the touch display screen; and the upper right corner of the touch display screen displays the residual electric quantity of the intelligent education robot.
Furthermore, the data calling module comprises a command processing unit, a voice calling unit and an automatic calling unit.
Further, the learning materials include Chinese learning materials, mathematics learning materials, English learning materials, music learning materials, dance learning materials, calligraphy learning materials and art learning materials, and the subject keywords include Chinese, mathematics, English, music, dance, calligraphy and art.
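One possible way for the data storage module to index the material categories above by subject keyword is a simple mapping, sketched below. The storage layout is an assumption; the patent lists the categories and keywords but not the data structure.

```python
# Hypothetical storage layout: subject keyword -> learning material category.
LEARNING_MATERIALS = {
    "Chinese":     ["Chinese learning materials"],
    "mathematics": ["mathematics learning materials"],
    "English":     ["English learning materials"],
    "music":       ["music learning materials"],
    "dance":       ["dance learning materials"],
    "calligraphy": ["calligraphy learning materials"],
    "art":         ["art learning materials"],
}

def materials_for(gz: str) -> list[str]:
    """Step X3: match the learning materials corresponding to keyword result GZ."""
    return LEARNING_MATERIALS.get(gz, [])
```

An unrecognized keyword yields an empty result, which corresponds to the empty-GZ case that triggers the automatic calling unit instead.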
Further, the power management module is used for managing the power of the intelligent education robot, the power management module comprises a power supply and a power supply analysis unit, and the specific management steps are as follows:
c1: acquiring the residual electric quantity of the power supply in real time, marking the residual electric quantity as SD, sending the residual electric quantity SD to a display control module through a processor, displaying the residual electric quantity in real time by the display control module, and sending the residual electric quantity to a power supply analysis unit;
c2: when the remaining power SD is less than or equal to L4, sending a power shortage instruction to the display control module through the processor, wherein L4 is a preset remaining power threshold;
c3: controlling, through the processor, the intelligent education robot to enter a dormant state L5 minutes after the power shortage instruction is sent, wherein L5 is a preset time threshold;
c4: when the intelligent education robot is in a dormant state, the user is prompted to charge through the display control module.
Further, the video analysis coefficient acquisition step is as follows:
z21: video preprocessing is carried out on the video information, wherein the video preprocessing comprises video shot segmentation, key frame extraction and video feature extraction;
z22: acquiring a figure image to be analyzed in video information, performing image preprocessing on the figure image to be analyzed, and acquiring a pre-stored standard figure image through a data storage module, wherein the standard figure image is a whole body image of a user; the image preprocessing comprises gray level transformation, image correction, image cutting and image enhancement;
z23: performing multiple matching analyses on the figure image to be analyzed against the standard figure image by a feature matching method, and marking the matching probability and the matching precision as PG and PJ; image matching based on a feature matching method is prior art, and the method in the paper 'image matching algorithm based on image feature points' can realize it;
z24: acquiring an image matching coefficient TP through a formula based on the matching probability PG and the matching precision PJ (the formula image is not reproduced in this text), wherein α1 is a preset proportionality coefficient;
z25: when the image matching coefficient TP is larger than L1, judging that the matching between the figure image to be analyzed and the standard figure image succeeds, and assigning the video analysis coefficient SF a value of 1; otherwise, judging that the matching fails, and assigning the video analysis coefficient SF a value of 0; wherein L1 is a preset image matching coefficient threshold.
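Steps z23 to z25 can be sketched as a small function. Since the z24 formula image is not reproduced in the source, the particular combination of PG and PJ below (matching probability raises TP, matching error lowers it, scaled by α1) is only an assumed stand-in; the threshold logic of z25 follows the text directly.

```python
def video_analysis_coefficient(pg, pj, alpha1, l1):
    """Sketch of z23-z25: pg is the matching probability PG, pj the matching
    precision PJ, alpha1 the preset proportionality coefficient, l1 the preset
    image matching coefficient threshold L1."""
    tp = alpha1 * pg / (1.0 + pj)   # ASSUMED form of the z24 formula (not in source)
    sf = 1 if tp > l1 else 0        # z25: SF = 1 on successful match, else 0
    return tp, sf
```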
Further, the matching probability is the ratio of the correct matching times to the total matching times, and the matching precision is the mean square error of the matching error of the correct matching.
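The two definitions above translate directly into code; the helper below is illustrative (its name and argument layout are not from the source).

```python
def matching_stats(correct_match_errors, total_matches):
    """Matching probability PG = correct matches / total matches;
    matching precision PJ = mean squared matching error over the correct
    matches (one residual per correct match in correct_match_errors)."""
    pg = len(correct_match_errors) / total_matches
    pj = sum(e * e for e in correct_match_errors) / len(correct_match_errors)
    return pg, pj
```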
Further, the figure image to be analyzed is a person image cropped from a key frame of the video information.
Furthermore, the interactive information comprises video information and voice information, the induction recognition module comprises a video acquisition unit and a voice acquisition unit, the video acquisition unit is arranged at the position of eyes of the intelligent education robot, and the voice acquisition unit is arranged at the position of ears of the intelligent education robot; the induction identification module is electrically connected with the processor.
Furthermore, the video information is the video of the front side of the intelligent education robot acquired by the video acquisition unit, the voice information is any voice signal within K1 meters of the intelligent education robot acquired by the voice acquisition unit, and the voice characteristic information is the voice signal of the user stored in the data storage module; where K1 is a preset distance threshold.
Further, the learning content analysis module is used for analyzing the content learned by the user, and the specific analysis steps are as follows:
n1: acquiring the learning times of the user through the data storage module, wherein each learning time is the user's learning duration on a learning material, marked as XSj, j = 1, 2, …, m, where j indexes the j-th learning material;
n2: sorting the learning times XSj from large to small, and acquiring the learning-time mean value XSP by the formula XSP = (XS1 + XS2 + … + XSm) / m;
n3: calculating the difference between the maximum value and the minimum value of the learning time, and marking the difference as CZ;
n4: acquiring a subject analysis coefficient XF by the formula XF = γ1 × XSP × e^(−γ2 × CZ), wherein γ1 and γ2 are preset proportionality coefficients and 0 < γ1 < γ2;
n5: when XF is larger than L6, judging that the user shows subject partiality, and sending a partiality reminder to a preset intelligent terminal through the processor; when XF is less than or equal to L6, marking the subject with the longest learning time as the preferred subject, and sending the preferred subject to the preset intelligent terminal through the processor;
n6: sending the partiality-reminder sending record and the preferred-subject sending record to the data storage module through the processor.
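Steps n1 to n5 can be sketched as follows. The mean XSP, the range CZ, and the formula XF = γ1 × XSP × e^(−γ2 × CZ) come from the text; the function name, return values, and default parameter values are illustrative.

```python
import math

def subject_analysis(xs, gamma1=0.1, gamma2=0.01, l6=0.5):
    """Sketch of n1-n5: xs[j] is the learning duration XSj on the j-th
    learning material; gamma1 < gamma2 and L6 are preset parameters."""
    xsp = sum(xs) / len(xs)                       # n2: learning-time mean XSP
    cz = max(xs) - min(xs)                        # n3: CZ = max - min
    xf = gamma1 * xsp * math.exp(-gamma2 * cz)    # n4: XF = γ1·XSP·e^(−γ2·CZ)
    if xf > l6:
        return xf, "partiality reminder"          # n5: XF > L6 → reminder
    preferred = max(range(len(xs)), key=lambda j: xs[j])
    return xf, f"preferred subject: {preferred}"  # n5: XF ≤ L6 → report preference
```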
The above formulas are all quantitative calculations; each formula is fitted by collecting a large amount of data and performing software simulation so as to approximate the real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
The working principle of the invention is as follows:
the video information of the front side of the intelligent education robot is acquired through the video acquisition unit, and the voice information is acquired through the voice acquisition unit; the video information and the voice information are sent to the processor; after the processor receives the video information, it analyzes the video information to obtain a video analysis coefficient; after the processor receives the voice information, it analyzes the voice information to obtain a voice analysis coefficient; an induction recognition coefficient is then acquired through a formula, and when the induction recognition coefficient is larger than a preset induction recognition coefficient threshold, a data calling instruction is sent to the data calling module through the processor; otherwise the processor sends no instruction;
after the data calling module receives the data calling instruction, the command processing unit is started and a keyword matching result is acquired through it; when the keyword matching result is empty, an automatic calling instruction is sent to the automatic calling unit; when the keyword matching result is not empty, a keyword calling instruction is sent to the voice calling unit; when the automatic calling unit receives the automatic calling instruction, it calls up the learning materials; when the voice calling unit receives the keyword calling instruction, the learning materials corresponding to the subject keyword matching result GZ are retrieved through the data storage module; the learning materials are then sent to the display control module;
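The retrieval dispatch described above reduces to a single branch on the keyword matching result GZ. A minimal sketch (the unit names in the return values are illustrative labels, not source identifiers):

```python
def dispatch_call(keyword_match_gz):
    """Sketch of the data-calling dispatch: an empty subject-keyword matching
    result GZ routes to the automatic calling unit (resume the last-studied
    material); a non-empty GZ routes to the voice calling unit, which fetches
    the learning material matching GZ from the data storage module."""
    if not keyword_match_gz:
        return ("automatic_calling_unit", None)
    return ("voice_calling_unit", keyword_match_gz)
```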
the power supply management module is used for managing the power supply of the intelligent education robot: the residual electric quantity of the power supply is acquired in real time and marked as SD; the residual electric quantity SD is sent to the display control module through the processor, displayed in real time by the display control module, and sent to the power supply analysis unit; when the residual electric quantity SD is less than or equal to L4, a power shortage instruction is sent to the display control module through the processor, wherein L4 is a preset residual electric quantity threshold; L5 minutes after the power shortage instruction is sent, the processor controls the intelligent education robot to enter a dormant state, wherein L5 is a preset time threshold; when the intelligent education robot is in the dormant state, the user is prompted to charge through the display control module.
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (5)

1. A teaching material retrieval system for an intelligent education robot, characterized by comprising a processor, an induction identification module, a data calling module, a power management module, a data storage module and a display control module;
the induction identification module is used for collecting and identifying the interaction information of the user, and the specific collection steps are as follows:
z1: the video information of the front side of the intelligent education robot is acquired through the video acquisition unit, and the voice information is acquired through the voice acquisition unit; sending the video information and the voice information to a processor;
z2: after the processor receives the video information, the processor analyzes the video information to obtain a video analysis coefficient SF; the video analysis coefficient acquisition step comprises:
z21: video preprocessing is carried out on the video information, wherein the video preprocessing comprises video shot segmentation, key frame extraction and video feature extraction;
z22: acquiring a figure image to be analyzed in the video information, performing image preprocessing on the figure image to be analyzed, and acquiring a pre-stored standard figure image through the data storage module, wherein the standard figure image is a whole-body image of the user; the image preprocessing comprises gray-level transformation, image correction, image cutting and image enhancement;
z23: performing multiple matching analysis on the figure image to be analyzed and the standard figure image by a feature matching method, and marking the matching probability and the matching precision as PG and PJ;
z24: acquiring an image matching coefficient TP from PG and PJ by formula [formula image not reproduced], wherein α1 is a preset proportionality coefficient;
z25: when the image matching coefficient TP is larger than L1, judging that the matching between the figure image to be analyzed and the standard figure image succeeds, and assigning the video analysis coefficient SF a value of 1; otherwise, judging that the matching fails, and assigning the video analysis coefficient SF a value of 0; wherein L1 is a preset image matching coefficient threshold;
z3: after the processor receives the voice information, the processor analyzes the voice information to obtain a voice analysis coefficient; the specific acquisition steps are as follows:
z31: acquiring pre-stored voice characteristic information through the data storage module;
z32: matching and identifying the voice information against the voice characteristic information to obtain a word error rate and a sentence error rate;
z33: acquiring a voice matching coefficient from the word error rate and the sentence error rate by formula [formula image not reproduced], wherein the formula's two proportionality coefficients are preset;
z34: when the voice matching coefficient exceeds a preset voice matching coefficient threshold, judging that the voice information is successfully matched with the voice characteristic information, and assigning the voice analysis coefficient a value of 1; otherwise, judging that the matching fails, and assigning the voice analysis coefficient a value of 0;
z4: acquiring an induction recognition coefficient from the video analysis coefficient and the voice analysis coefficient by formula [formula image not reproduced]; when the induction recognition coefficient exceeds a preset induction recognition coefficient threshold, sending a data calling instruction to the data calling module through the processor; otherwise, the processor does not send any instruction;
z5: sending the video analysis coefficient, the voice analysis coefficient and the data calling instruction sending record to the data storage module through the processor, wherein the data calling instruction sending record comprises the data calling instruction and the data calling instruction sending time;
the data calling module is used for calling learning materials, and the specific calling steps are as follows:
x1: after the data calling module receives the data calling instruction, starting the command processing unit, wherein the command processing unit is used for acquiring a keyword matching result, and the specific acquisition steps are as follows:
x11: acquiring the voice information of the user through the voice acquisition unit;
x12: performing feature extraction on the voice information through the command processing unit, and marking the keywords obtained by feature extraction as keywords to be analyzed;
x13: matching the keywords to be analyzed with the subject keywords one by one, and marking the keyword matching result as GZ;
x14: when the keyword matching result GZ is empty, sending an automatic calling instruction to the automatic calling unit; when the keyword matching result GZ is not empty, sending a keyword calling instruction to the voice calling unit;
x2: when the automatic calling unit receives the automatic calling instruction, the automatic calling unit calls the learning materials, and the specific calling steps are as follows:
x21: acquiring preset voice characteristic information through the data storage module, and marking the voice characteristic information with a serial number i, i = 1, 2, …, n;
x22: matching the voice information with the voice characteristic information, and acquiring the serial number i after the matching succeeds;
x23: acquiring, through the data storage module, the learning material associated with serial number i at the end of the last learning session;
x24: sending the learning material of x23 to the display control module through the processor;
x3: when the voice calling unit receives the keyword calling instruction, acquiring, through the data storage module, the learning materials corresponding to the subject keyword matching result GZ, and sending the learning materials to the display control module;
x4: sending the automatic calling instruction sending record, the voice calling instruction sending record and the learning material sending record to the data storage module through the processor.
2. The system for calling up teaching materials for the intelligent education robot as claimed in claim 1, wherein the display control module is used for playing learning materials, the display control module includes a touch display screen and a speaker; the user can control the playing process of the learning materials by clicking the touch display screen.
3. The system of claim 1, wherein the learning materials include Chinese learning materials, mathematics learning materials, English learning materials, music learning materials, dance learning materials, calligraphy learning materials and art learning materials, and the subject keywords include Chinese, mathematics, English, music, dance, calligraphy and art.
4. The system of claim 1, wherein the power management module is configured to manage a power supply of the intelligent education robot, the power management module comprises the power supply and a power supply analysis unit, and the specific management steps are as follows:
c1: acquiring the residual electric quantity of the power supply in real time and marking it as SD; sending the residual electric quantity SD to the display control module through the processor, displaying the residual electric quantity in real time by the display control module, and sending the residual electric quantity to the power supply analysis unit;
c2: when the residual electric quantity SD is less than or equal to L4, sending a power shortage instruction to the display control module through the processor, wherein L4 is a preset residual electric quantity threshold;
c3: L5 minutes after the power shortage instruction has been sent, controlling the intelligent education robot to enter a dormant state through the processor, wherein L5 is a preset time threshold;
c4: when the intelligent education robot is in the dormant state, prompting the user to charge through the display control module.
5. The system for calling teaching materials for the intelligent education robot according to claim 1, wherein the interactive information includes video information and voice information, the induction recognition module includes a video acquisition unit and a voice acquisition unit, the video acquisition unit is disposed at a position where eyes of the intelligent education robot are located, and the voice acquisition unit is disposed at a position where ears of the intelligent education robot are located; the induction identification module is electrically connected with the processor.
CN202011132821.2A 2020-10-21 2020-10-21 Teaching material transfer system for intelligent education robot Active CN112233471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011132821.2A CN112233471B (en) 2020-10-21 2020-10-21 Teaching material transfer system for intelligent education robot

Publications (2)

Publication Number Publication Date
CN112233471A CN112233471A (en) 2021-01-15
CN112233471B true CN112233471B (en) 2021-10-01

Family

ID=74108915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011132821.2A Active CN112233471B (en) 2020-10-21 2020-10-21 Teaching material transfer system for intelligent education robot

Country Status (1)

Country Link
CN (1) CN112233471B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113671853A (en) * 2021-08-02 2021-11-19 广东启智创新教育科技有限公司 Control system who possesses interactive robot of dance

Citations (7)

Publication number Priority date Publication date Assignee Title
CN108583478A (en) * 2018-04-28 2018-09-28 安徽江淮汽车集团股份有限公司 Accumulator low battery control method and system
CN108924608A (en) * 2018-08-21 2018-11-30 广东小天才科技有限公司 A kind of householder method and smart machine of video teaching
CN109215412A (en) * 2018-09-13 2019-01-15 天津西青区瑞博生物科技有限公司 A kind of Intelligent teaching robot system
CN109671438A (en) * 2019-01-28 2019-04-23 武汉恩特拉信息技术有限公司 It is a kind of to provide the device and method of ancillary service using voice
CN110852073A (en) * 2018-08-01 2020-02-28 世学(深圳)科技有限公司 Language learning system and learning method for customizing learning content for user
CN111158490A (en) * 2019-12-31 2020-05-15 重庆百事得大牛机器人有限公司 Auxiliary semantic recognition system based on gesture recognition
CN111402640A (en) * 2020-03-04 2020-07-10 香港生产力促进局 Children education robot and learning material pushing method thereof

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN109033448B (en) * 2018-08-20 2021-06-01 广东小天才科技有限公司 Learning guidance method and family education equipment


Non-Patent Citations (2)

Title
Research progress on scene-region suitability in scene-matching aided integrated navigation; Shen Lincheng; Acta Aeronautica et Astronautica Sinica; 2010-03-25; vol. 31, no. 3; pp. 555-563 *
On the application of the automatic speech recognition evaluation metrics word error rate and sentence error rate; Li Qiang et al.; Modern Transmission; 2020-03-15; pp. 61-64 *

Also Published As

Publication number Publication date
CN112233471A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
US11544588B2 (en) Image tagging based upon cross domain context
US11655622B2 (en) Smart toilet and electric appliance system
Brdiczka et al. Learning situation models in a smart home
US20200175264A1 (en) Teaching assistance method and teaching assistance system using said method
CN111651571B (en) Conversation realization method, device, equipment and storage medium based on man-machine cooperation
WO2020207249A1 (en) Notification message pushing method and apparatus, and storage medium and electronic device
CN111143569B (en) Data processing method, device and computer readable storage medium
CN111079833B (en) Image recognition method, image recognition device and computer-readable storage medium
CN109214001A (en) A kind of semantic matching system of Chinese and method
CN109711356B (en) Expression recognition method and system
CN112233471B (en) Teaching material transfer system for intelligent education robot
CN114926837B (en) Emotion recognition method based on human-object space-time interaction behavior
CN113656564A (en) Power grid service dialogue data emotion detection method based on graph neural network
CN111222330A (en) Chinese event detection method and system
CN111046655B (en) Data processing method and device and computer readable storage medium
CN112053205A (en) Product recommendation method and device through robot emotion recognition
CN111563147A (en) Entity linking method and device in knowledge question-answering system
WO2023173554A1 (en) Inappropriate agent language identification method and apparatus, electronic device and storage medium
CN110991155A (en) Text correction method, apparatus, and medium
CN110046922A (en) A kind of marketer terminal equipment and its marketing method
TWI761090B (en) Dialogue data processing system and method thereof and computer readable medium
CN114488831A (en) Internet of things intelligent home control system and method based on human-computer interaction
CN114550183A (en) Electronic equipment and error recording method
CN114998960A (en) Expression recognition method based on positive and negative sample comparison learning
JP2022543032A (en) Motion recognition method, motion recognition device, computer-readable storage medium, electronic device and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210916

Address after: No. 503, unit 4, building 13, Nanyuan, Beijing Avenue, Liuwu new area, Lhasa, Tibet Autonomous Region, 850000

Applicant after: Tibet Dianhe Education Technology Co.,Ltd.

Address before: 510000 115 Daxin Road, Yuexiu District, Guangzhou City, Guangdong Province

Applicant before: Addison international investment (Guangzhou) Co.,Ltd.

GR01 Patent grant