CN112634684B - Intelligent teaching method and device - Google Patents

Intelligent teaching method and device

Info

Publication number
CN112634684B
CN112634684B
Authority
CN
China
Prior art keywords
user
virtual
robot
database
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011461962.9A
Other languages
Chinese (zh)
Other versions
CN112634684A (en)
Inventor
黄元忠
卢庆华
张明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Muyu Technology Co ltd
Original Assignee
Shenzhen Muyu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Muyu Technology Co ltd filed Critical Shenzhen Muyu Technology Co ltd
Priority to CN202011461962.9A
Publication of CN112634684A
Application granted
Publication of CN112634684B
Active (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations

Abstract

The application provides an intelligent teaching method, which comprises the following steps: loading teaching content; generating a plurality of virtual robots; enabling at least one virtual robot to interact with a user about the teaching content; and enabling at least two virtual robots to interact with each other about the teaching content. Through the interaction among the virtual robots and between the virtual robots and the user, a multi-person interactive learning atmosphere is created, and the virtual robots can cooperate with one another to fully guide, encourage and inspire the students' enthusiasm. Moreover, different virtual robots are differentiated and can each give different feedback or responses to the same user, so that the user has a different experience when interacting with each virtual robot. This makes the interaction more attractive to the user and improves the user's enthusiasm for participating in teaching interactions.

Description

Intelligent teaching method and device
Technical Field
The application relates to the technical field of online teaching, in particular to an intelligent teaching method and device.
Background
Online education alleviates the uneven distribution of educational resources and removes the limits of time and place. In addition, with the development of AI, online education using virtual teacher roles has appeared; such methods can implement personalized teaching according to each user's own situation. For example, the patent application with application number CN201910205680.3 discloses an intelligent robot teaching system and a student learning method which, in virtual teaching, uses AI technology to recognize the user's expressions, actions, emotions and so on, implements the analysis and setting of different personalities for the artificial intelligent robot, and achieves lifelike human-computer personality interaction.
However, current intelligent teaching methods, including the above patent application, set only one virtual teacher role and adopt a one-to-one teaching mode, that is, teaching interaction takes place only between the virtual teacher role and the user, which is insufficient for stimulating students' learning interest and improving their participation. In a real scene the user can interact with both the teacher and classmates; in particular, student users usually attend class together with multiple classmates. With the participation of multiple learning partners and discussion partners, students obtain knowledge input from both teachers and learning partners, and the presence of those learning partners creates a learning atmosphere that raises the enthusiasm for learning.
Therefore, how to create the interactive teaching effect of multiple users when there is only one user is the technical problem to be solved by the present application.
Disclosure of Invention
In view of the above problems in the prior art, the present application provides an intelligent teaching method and apparatus, so as to create a learning atmosphere in which multiple virtual robots and the user learn through shared interaction, and to improve the user's enthusiasm for learning.
In order to achieve the above object, the present application provides an intelligent teaching method, including:
loading teaching content;
generating a plurality of virtual robots;
enabling at least one virtual robot to interact with the user about the teaching content;
and enabling at least two virtual robots to interact with each other about the teaching content.
By generating a plurality of virtual robots, enabling at least one virtual robot to interact with the user about the teaching content, and enabling at least two virtual robots to interact with each other about the teaching content, an atmosphere in which multiple virtual robots and the user learn together is created. The virtual robots can interact with one another and with the user so as to guide the user's learning. In this way, the atmosphere of multiple virtual robots interacting together with the user is realized, and the virtual robots can cooperate with each other to fully guide, encourage and inspire the students' enthusiasm.
Optionally, the generating of the plurality of virtual robots includes: generating virtual robots with different parameter attributes according to different preset parameter attributes; the parameter attributes include at least one of: sex, age, personality, hobbies, height, body type, occupation, speech rate, intonation, and expression change rate.
By this method, different virtual robots are endowed with different personality attributes, so that the user has a different experience when interacting with each virtual robot; this makes the virtual robots attractive to the user and improves the user's enthusiasm for participating in the interaction.
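As an illustration only (the patent does not prescribe any particular data structure), the following Python sketch shows how such preset parameter attributes might be represented and used to instantiate differentiated virtual robots; the `RobotAttributes` fields shown and the `VirtualRobot` class are hypothetical names.

```python
# A minimal sketch of differentiated virtual robots; all names and values are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class RobotAttributes:
    sex: str
    age: int
    personality: str
    hobbies: List[str]
    speech_rate: float             # hypothetical unit, e.g. words per minute
    intonation: str
    expression_change_rate: float  # hypothetical unit, e.g. expression updates per minute

@dataclass
class VirtualRobot:
    name: str
    attrs: RobotAttributes

def generate_crowd_robot(presets: List[RobotAttributes]) -> List[VirtualRobot]:
    """Generate one virtual robot per preset attribute set, so each robot behaves differently."""
    return [VirtualRobot(name=f"robot_{i}", attrs=p) for i, p in enumerate(presets)]

# Example: two differentiated robots generated for one lesson.
crowd = generate_crowd_robot([
    RobotAttributes("female", 25, "lively", ["music"], 140.0, "rising", 3.0),
    RobotAttributes("male", 30, "calm", ["reading"], 110.0, "flat", 1.0),
])
```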
Optionally, when the virtual robots interact with the user about the teaching content, the method includes:
different virtual robots adopt deep neural networks with different parameters to understand the user's natural language or recognize the user's emotion;
and each virtual robot generates and outputs corresponding interaction information according to the understood natural language or the recognized emotion.
By this method, differentiation among the virtual robots is achieved, and they can each give different feedback or responses to the user, so that the user has a different experience when interacting with each virtual robot; this is attractive to the user and improves the user's enthusiasm for participating in the interaction.
Optionally, the understanding of natural language is performed on collected text input or voice input of the user, and the emotion is recognized from a collected limb image or facial expression image of the user.
Optionally, the outputting includes: outputting the information through actions or expressions adapted to the virtual robot's parameter attributes.
By this method, differentiated output among the virtual robots is achieved, so that the user has a different experience when each virtual robot interacts; this is attractive to the user and improves the user's enthusiasm for participating in the interaction.
The application also provides an intelligent teaching system, comprising:
the online teaching module is used for loading teaching contents;
the crowd robot module is used for generating at least two virtual robots, enabling at least one virtual robot to interact with the user about the teaching content, and enabling at least two virtual robots to interact with each other about the teaching content.
Optionally, the required virtual robot is configured with:
the language collection module is used for collecting the natural language of the user, and the collected natural language comprises the language in a text form or a voice form.
The natural language understanding module is used for understanding the natural language through semantic recognition and generating language information for the interaction;
the language output module is used for outputting the language information generated by the natural language understanding module;
and the behavior output module is used for outputting the actions or expressions of the virtual robot.
Optionally, the required virtual robot is further configured with:
the image acquisition module is used for acquiring limb images or facial expression images of the user;
the emotion recognition module is used for recognizing emotion of the user according to the acquired limb image or facial expression image of the user;
the language output module is also used for outputting according to the identified emotion of the user;
the behavior output module is also used for outputting the action or expression of the virtual robot according to the identified emotion of the user.
Optionally, the system further comprises: a database module comprising a user database and a course database;
the user database is used for storing the user's personal information, course progress and style preferences;
the course database is used for storing a general database and a per-lesson database; the general database stores greetings and general chat corpora, and each per-lesson database stores the number of virtual robots, their roles and the course content required in the current lesson.
Optionally, the system further comprises: an evaluation module comprising a learning effect evaluation module and a post-class test module; the learning effect evaluation module is used for evaluating the user's learning effect through an evaluation index consisting of the active interaction ratio, the answer accuracy and the in-class emotion score; the post-class test module is used for testing the content of the current course;
and a review module, used for sending review content and recording the user's review progress and number of reviews in the user database.
The present application also provides a computing device comprising: a communication interface, and at least one processor; wherein the at least one processor is configured to execute program instructions that, when executed by the at least one processor, cause the computing device to implement any of the methods described above.
The present application also provides a computer readable storage medium having stored thereon program instructions which when executed by a computer cause the computer to implement any of the methods described above.
In summary, a plurality of virtual robots are generated, and the virtual robots and the user interact with one another, providing a multi-person interactive learning atmosphere; the virtual robots can cooperate with each other to fully guide, encourage and inspire the students' enthusiasm. Furthermore, different virtual robots are generated with different parameter attributes and are thus given different personality attributes, and differentiation among them is further achieved through different deep neural networks, so that they can each give different feedback or responses to the same user. As a result, the user has a different experience when interacting with each virtual robot, which is more attractive to the user and improves the user's enthusiasm for participating in the interaction.
These and other aspects of the application will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
The various features of the present application and the connections between them are further described below with reference to the figures. The figures are exemplary, some features are not shown to actual scale, and some features that are conventional in the art to which this application pertains and are not essential to the application may be omitted from some figures, or features that are not essential to the application may additionally be shown; the combinations of features shown in the figures are not meant to limit the application. In addition, throughout the specification, the same reference numerals refer to the same elements. The specific drawings are as follows:
FIG. 1 is a flow chart of an embodiment of the intelligent teaching method of the present application;
FIG. 2 is a schematic diagram of one embodiment of the intelligent teaching system of the present application;
fig. 3 is a schematic diagram of a computing device provided in an embodiment of the present application.
Detailed Description
The terms first, second, third, etc., or module A, module B, module C, etc., in the description and in the claims are used solely for distinguishing between similar objects and do not necessarily describe a particular sequential or chronological order. It should be understood that, where permitted, such orders may be interchanged so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein.
In the following description, reference numerals indicating steps, such as S110, S120, ..., do not necessarily indicate that the steps are performed in this order; where permitted, the order of the steps may be interchanged, or steps may be performed simultaneously.
The term "comprising" as used in the description and claims should not be interpreted as being limited to what is listed thereafter; it does not exclude other elements or steps. Thus, it should be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. Thus, the expression "a device comprising means a and B" should not be limited to a device consisting of only components a and B.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the application. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment, but they may. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments, as would be apparent to one of ordinary skill in the art from this disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. If there is a discrepancy, the meaning described in the present specification or the meaning obtained from the content described in the present specification is used. In addition, the terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
For the purpose of accurately describing the technical content of the present application, and for the purpose of accurately understanding the present application, the terms used in the present specification are given the following explanation or definition before explaining the specific embodiments:
crowd robot: the plurality of intelligent education robots cooperate with each other to complete teaching tasks, and when crowd intelligence is reflected, the education robots are called crowd intelligent robots. These educational robots may in some embodiments be virtual robots, such as different virtual characters simulated by software on a computer. In some embodiments, the program corresponding to each virtual robot may be downloaded to different entity robots capable of executing the corresponding program.
When implemented with physical educational robots, each physical educational robot includes at least: a processor (CPU), a memory, a microphone, a camera, a speaker, and a display. The memory stores the program of the corresponding virtual robot and the trained deep neural network models; the microphone and the camera are used for collecting voice and images; the speaker is used for playing voice; the display is used for displaying the robot's expression animation (such as a smiling picture), limb animation (such as a picture representing clapping), text, and so on; the CPU executes the program in the memory and uses the deep neural network models to generate, from the collected voice and images, corresponding feedback voice that is played through the speaker or displayed through the display. In other words, when physical educational robots are adopted, each is equivalent to a device in which a computer is embedded as a robot. For convenience of description, the educational robots described below are described by taking virtual robots as examples.
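For the physical-robot variant just described, the control flow amounts to a capture, infer and respond loop. The following Python sketch is purely illustrative of that loop; the device objects, model objects and helper names are hypothetical placeholders, not interfaces defined by the patent.

```python
# Illustrative capture -> infer -> respond loop for a physical educational robot.
# Every object passed in (mic, camera, speaker, display, models) is a hypothetical placeholder.
def teaching_loop(mic, camera, speaker, display, asr, nlu_model, emotion_model, tts,
                  lesson_in_progress):
    while lesson_in_progress():
        audio = mic.capture_audio()                  # collect the student's voice
        frame = camera.capture_image()               # collect a limb / facial-expression image
        text = asr.transcribe(audio)                 # optionally convert voice to text form
        reply = nlu_model.generate_reply(text)       # trained deep network produces feedback text
        emotion = emotion_model.classify(frame)      # trained deep network recognizes emotion
        speaker.play(tts.synthesize(reply))          # play the feedback voice
        display.show(emotion_to_animation(emotion))  # show an expression or limb animation

def emotion_to_animation(emotion: str) -> str:
    # Hypothetical mapping from a recognized emotion to an animation asset name.
    return {"depression": "encourage.gif", "happiness": "smile.gif"}.get(emotion, "neutral.gif")
```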
Man-machine interaction: the interaction between students and the intelligent educational robots, consisting mainly of interactive content such as questions and answers, conversation and casual chat; man-machine interaction can be started and ended at any time during a course.
In the present application, crowd robots are introduced as accompanying-learning roles: a plurality of virtual robots participate in online teaching at the same time, creating an atmosphere in which multiple users learn together. When the virtual robots play different roles, they can cooperate with each other to fully guide, encourage and inspire the students' enthusiasm, build a strong learning atmosphere, stimulate learning enthusiasm and interest, and improve learning efficiency. The virtual teaching method with crowd robots can be used for all kinds of virtual teaching, such as online virtual teaching of English, mathematics, Chinese and so on. When it is used for online English virtual teaching, Chinese, English and mixed Chinese-English language modes can all be supported. The present application is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a flowchart of an embodiment of an intelligent teaching method provided in the present application is illustrated by taking online teaching of a user as an example, and includes the following steps:
s110: loading teaching content; including importing a user database and a course database; the two databases will be described in detail later.
One specific implementation of this step may be: the user starts the online teaching application for realizing the invention on the terminal, such as a computer, PAD or mobile phone, and the online teaching application reads the database and course database corresponding to the user from the server database according to the user identification (such as the login name and mobile phone number of the user) so as to perform the initialization work of online teaching.
S120: according to the user's preference for virtual images recorded in the user database, and according to the information provided by the course database, such as the virtual robot images, their number and their roles, a plurality of virtual robots are generated to form a crowd robot, and the virtual robots are displayed on the user's terminal.
When each virtual robot is generated, virtual robots with different parameter attributes are generated according to different preset parameter attributes, where the parameter attributes include at least one of the following: sex, age, personality, hobbies, height, body type, occupation, speech rate, intonation, and expression change rate. The virtual robots thus generated behave differently.
The virtual robot can be displayed in the form of a virtual human character, a virtual animated character, a virtual animal, or the like. Taking a virtual human character as an example, a specific implementation can be as follows: the human body structure is digitized by computer technology, and human body modeling, skeleton binding and real-time rendering technologies are used together. Specifically, a three-dimensional model of a virtual teacher character is built by collecting a large amount of motion data, including facial data, body data, eye and tooth motion, and expression data, and a virtual teacher character image is synthesized to complete the human modeling. Then, an artificial intelligence algorithm drives the virtual teacher character in real time, which includes constructing a bionic three-dimensional model with a three-dimensional skeleton structure, binding the skeleton positions of the virtual teacher character's three-dimensional model onto the three-dimensional skeleton structure with a skinning algorithm, binding the model vertices while rendering textures, and generating motion vector data to drive the motion and expression of the three-dimensional model.
S130: according to the loaded teaching content, the multiple virtual robots carry out teaching in interaction with the user.
In the teaching process, at least one virtual robot interacts with the user about the teaching content, for example by asking and answering questions with the user, or by giving corresponding feedback information according to the user's understood natural language and recognized emotion. The implementation of this interaction may include the following sub-steps:
s131: language and image information of a user is collected.
The collected language information can be in text form or voice form. For example, text-form language is collected from the user's text input, and voice-form language is collected from the user's voice input through the microphone. In addition, when voice-form language is collected, it can further be recognized and converted into text form to facilitate subsequent processing.
The captured user image may be a limb image or a facial expression image of the user captured by the camera.
S132: different virtual robots adopt deep neural networks with different parameters to understand natural language of a user or identify emotion;
the understanding of natural language in this step specifically includes: and identifying the semantic content of the user according to the collected language information of the user, namely understanding the intention of the user, and generating corresponding output language content according to the identified semantic. The natural language understanding module may be implemented by a Redundant Neural Network (RNN). Specifically, after each word in the language of the user, such as the language of the text form, is sequentially input to the trained RNN, the RNN performs semantic recognition, and sequentially generates each word to form language output. Generally, the corresponding output language is generated from the input of language information, and an RNN implementation of an encoder-decoder architecture may be employed. The user's language information and the generated output language information form sentence-sentence type structures, such as question-answer classes, or sentence-response sentence classes.
The emotion recognition in this step specifically includes: recognizing the user's emotion from the collected behaviour image of the user, where the emotion may correspond to happiness, depression, confusion, confirmation, attention and so on. The emotion recognition module can be implemented by a convolutional neural network (CNN). Specifically, after the user's behaviour image is input into the trained CNN, the CNN estimates the probability of each emotion and takes the emotion with the maximum probability as the output, i.e. as the emotion recognized from the user's behaviour image. In addition, a user emotion curve can be generated from the recognized emotions, and an emotion score can be given.
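A minimal sketch of such a CNN emotion classifier is shown below; the framework (PyTorch), the layer sizes and the 64x64 input resolution are assumptions, while the emotion classes follow the list above.

```python
# Minimal CNN emotion-recognition sketch (PyTorch assumed); the architecture is illustrative only.
import torch
import torch.nn as nn

EMOTIONS = ["happiness", "depression", "confusion", "confirmation", "attention"]

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 input images

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(img).flatten(1))

# The emotion with the maximum probability is taken as the recognition result.
model = EmotionCNN()
probs = torch.softmax(model(torch.rand(1, 3, 64, 64)), dim=1)
print(EMOTIONS[int(probs.argmax(dim=1))])
```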
When different virtual robots perform natural language understanding and emotion recognition, RNNs or CNNs with different parameters can be adopted, where the parameters include the number of layers of the deep neural network, the activation function and so on, so that different neural networks are obtained during training. In this way, different virtual robots can produce different recognition results for the same input and accordingly output different language content or different actions and expressions, which is how the differentiation of the virtual robots is achieved.
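The configuration below is a hypothetical example of that differentiation (the values are not from the patent), reusing the `Seq2Seq` sketch above: the same user input passes through differently parameterized networks, so each virtual robot can reply differently.

```python
# Hypothetical per-robot hyperparameters; the same user input yields different replies per robot.
ROBOT_HYPERPARAMS = {
    "robot_0": {"layers": 2, "hidden": 256},
    "robot_1": {"layers": 3, "hidden": 512},
}

nlu_models = {
    name: Seq2Seq(vocab_size=5000, hidden=hp["hidden"], layers=hp["layers"])
    for name, hp in ROBOT_HYPERPARAMS.items()
}
```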
S133: each virtual robot generates corresponding interaction information according to the understood natural language or the recognized emotion.
In this step, when the deep neural network adopts an encoder-decoder structure, it can directly output the corresponding interaction information from the user's input language and image.
S134: outputting the information through actions and expressions adapted to the virtual robot's parameter attributes; this specifically includes:
Outputting the language generated by the natural language understanding module. The output can be in text form, for example through the display screen, or the text can be converted into voice through a speech synthesis module. When speech synthesis is performed, the synthesized voice can be output according to the virtual robot's parameter attributes (such as sex, age, speech rate and voice timbre), so that the voice is consistent with the character corresponding to the virtual robot's parameter attributes.
Outputting the actions and expressions of the robot to display to the user. For example, when the robot is a virtual robot, its actions and expressions can be played as animation. The actions and expressions can be hand gestures, thinking actions, an expression that is the same as or similar to the user's expression, and so on; a virtual robot with actions and expressions easily draws the user into the teaching atmosphere.
In the teaching process, at least two virtual robots are also made to interact with each other about the teaching content. For example, the language output by one virtual robot (between virtual robots, the language can be output in text form) or its actions and expressions (between virtual robots, the actions and expressions can be output as the corresponding parameters, such as parameters representing a certain action) are used as the input of another virtual robot, so that the virtual robots cooperate in interaction, asking questions together, answering each other, correcting each other's errors, conversing with each other, and so on, thereby inspiring and guiding the user in teaching. Compared with interaction with the user, the output of one virtual robot is used directly as the input of another virtual robot, so the interaction between virtual robots does not need to pass through external devices such as cameras and microphones.
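This robot-to-robot exchange can be pictured as a direct hand-off of text (or action parameters) between two dialogue models, with no capture devices in between. The sketch below is illustrative only; `reply` is a hypothetical method of each robot's dialogue model, not an interface defined by the patent.

```python
# Illustrative robot-to-robot exchange: one robot's text output is fed directly to the other
# robot as input; no camera or microphone is involved.
def robots_discuss(robot_a, robot_b, opening: str, turns: int = 4):
    """robot_a and robot_b are hypothetical objects exposing a reply(text) -> str method."""
    transcript = [(robot_a, opening)]
    utterance, listener, other = opening, robot_b, robot_a
    for _ in range(turns):
        utterance = listener.reply(utterance)   # e.g. answering, or correcting an error
        transcript.append((listener, utterance))
        listener, other = other, listener       # swap speaker roles for the next turn
    return transcript

# Example use: one robot poses a guiding question about the lesson, the other answers, and the
# alternating exchange is shown to the user as a cooperative question-and-answer demonstration.
```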
When the user uses a computer for intelligent teaching, the computer's microphone and camera are used as devices shared by all virtual robots to implement the collection of the user's language and images.
In the teaching process, lessons can be given according to the content read from the course database, and the course content and progress can be adjusted according to the evaluation results.
S140: after the course content is finished, the post-course items can be executed, which are described by way of example as follows:
After the lesson is finished, a summary and review of the lesson's knowledge points can be brought up, followed by answering questions about the lesson.
After the lesson is finished, the learning effect evaluation of the lesson can be started. The learning effect evaluation index consists of the active interaction ratio, the answer accuracy and the in-class emotion score. The active interaction ratio is the ratio of the number of times the user actively initiates communication to the total number of interactions, expressed as a decimal with two significant digits; the answer accuracy is the ratio of the number of questions the user answers correctly to the total number of questions, also expressed as a decimal with two significant digits; the in-class emotion score is the user's emotion level, with a value range of 1-10, where 5 represents neutral, 1-4 represent negative emotion (the smaller the value, the more negative), and 6-10 represent positive emotion (the larger the value, the more positive). The learning effect evaluation index is stored in the course database.
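Assembled from the definitions above, the index could be computed roughly as follows; the function name is hypothetical, and rounding to two decimal places is used here as a simple stand-in for the "two significant digits" in the text.

```python
# Learning-effect evaluation sketch following the definitions above; names are illustrative only.
def learning_effect(active_initiations: int, total_interactions: int,
                    correct_answers: int, total_questions: int,
                    emotion_score: int) -> dict:
    assert 1 <= emotion_score <= 10          # 5 = neutral, 1-4 negative, 6-10 positive
    return {
        "active_interaction_ratio": round(active_initiations / total_interactions, 2),
        "answer_accuracy": round(correct_answers / total_questions, 2),
        "class_emotion_score": emotion_score,
    }

# Example: 6 of 20 interactions initiated by the user, 8 of 10 questions answered correctly,
# emotion score 7 -> {'active_interaction_ratio': 0.3, 'answer_accuracy': 0.8, 'class_emotion_score': 7}
print(learning_effect(6, 20, 8, 10, 7))
```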
After the lesson is finished, a post-lesson test can be started; the user's grasp of the knowledge points is examined through a test on the content of the current lesson, and the test result is stored in the course database.
After the lesson is finished, review content can be sent according to the Ebbinghaus forgetting curve, and the user's review progress and number of reviews are recorded and stored in the user database.
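As an illustration of scheduling review content along the Ebbinghaus forgetting curve, the interval sequence below (1, 2, 4, 7, 15 days) is a commonly used convention rather than a value given in the patent, and the database helper is hypothetical.

```python
# Review-scheduling sketch based on the Ebbinghaus forgetting curve; intervals are assumed.
from datetime import date, timedelta

REVIEW_INTERVALS_DAYS = [1, 2, 4, 7, 15]

def review_dates(lesson_date: date) -> list:
    """Dates on which review content for this lesson would be sent to the user."""
    return [lesson_date + timedelta(days=d) for d in REVIEW_INTERVALS_DAYS]

def record_review(user_db: dict, user_id: str, lesson_id: str) -> None:
    """Record the user's review progress and review count in a (hypothetical) user database."""
    entry = user_db.setdefault(user_id, {}).setdefault(lesson_id, {"reviews": 0})
    entry["reviews"] += 1
    entry["last_review"] = date.today().isoformat()

print(review_dates(date(2020, 12, 11)))  # e.g. 2020-12-12, 2020-12-13, 2020-12-15, ...
```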
Thus, in the teaching process, when it is judged that the user is in a certain emotional state, such as depression (indicating that the teaching content is not understood), the method can drive at least one virtual robot to act, speak corresponding voice, or display corresponding words or pictures, so as to lift the user's emotion, and can also adjust the change rate of the virtual robot's actions or expressions, that is, increase the virtual robot's liveliness, so as to motivate the user more effectively and realize interaction in teaching. In addition, the teaching content can be adjusted in real time according to the user's actual situation. On the other hand, the method can trigger question-and-answer interaction between the virtual robots, or between the virtual robots and the user, in which the questions and answers are designed to guide the teaching content, thereby achieving interactive teaching. When a virtual robot initiates a question, the user's learning is guided. Therefore, when the user encounters difficulty, prompts and guidance are given in time, and interactive learning between multiple virtual robots and the user is realized.
As shown in fig. 2, the virtual teaching device with crowd robot of the present application includes:
the database module 10, including the user database 110 and the course database 120, is used for storing user information and course data, respectively, and the database module 10 may be stored on a network side, such as a server side.
The user information stored in the user database 110 includes three parts, namely, user personal information, course progress and style preference. The personal information comprises necessary contact information such as name, age, contact information, home address and the like; the course schedule comprises necessary learning information such as schedule, learning effect, review schedule and the like; style preferences include user personal lesson habits, favorite lesson styles, favorite virtual robot figures.
The information stored in the course database 120 includes database attribute information, a general database and a per-lesson database. The database attribute information includes the current database version, modification records and statement files; the general database includes greetings and general chat corpora, used for possible greetings and casual chat before or after class; each per-lesson database contains the number of robots, their roles and all the course content required in the current lesson, where the course content consists of video recordings, pictures and text, and the lesson video recordings can be named according to the user number, class time and date. Before the current intelligent education course is given, the previously stored information in the database is imported, and the content and progress of the course can be adjusted according to the evaluation result of the previous course. When the current intelligent education course is given, a plurality of virtual robots are generated according to the number and roles of the robots, forming the crowd robot.
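A minimal sketch of how the user-database and course-database fields listed above might be laid out is given below; the field names mirror the description, while the concrete structure and placeholder values are assumptions.

```python
# Illustrative layout of the user database and course database described above.
user_record = {
    "personal_info": {"name": "...", "age": 10, "contact": "...", "home_address": "..."},
    "course_progress": {"schedule": "...", "learning_effect": {}, "review_progress": {}},
    "style_preference": {"class_habits": "...", "favorite_style": "...", "favorite_avatar": "..."},
}

course_database = {
    "attributes": {"version": "1.0", "modification_records": [], "statement_file": "..."},
    "general": {"greetings": ["Hello!"], "chat_corpus": ["How was your day?"]},
    "per_lesson": {
        "lesson_001": {
            "robot_count": 2,
            "robot_roles": ["questioner", "answerer"],
            "content": {"videos": [], "pictures": [], "text": []},
        }
    },
}
```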
The online teaching module 20 is used to load the stored content from the user database 110 and the course database 120 of the database module 10.
The crowd robot module 30 is used for generating at least two virtual robots. Each virtual robot can interact with the user individually, for example in question and answer with the user; the virtual robots can also interact with one another cooperatively, asking questions together, answering each other, correcting each other's errors and conversing with each other, so as to inspire and guide the user in teaching.
Wherein, for each virtual robot, the following modules are set:
the language collection module 310 is configured to collect a language of a user, where the collected language may be in a text form or a voice form.
The image acquisition module 320 is configured to acquire a behavior image of a user, such as an image of a limb motion and an image of a face.
The natural language understanding module 330 is configured to recognize semantic content, that is, to understand the user's intention, from the user's language information acquired by the language acquisition module 310, and to generate a corresponding output language according to the recognized semantics to reply to the user. The natural language understanding module 330 may be implemented by a recurrent neural network (RNN).
The emotion recognition module 340 is configured to recognize emotion of the user according to the behavioral image of the user collected by the image collection module 320. The emotion recognition module may be implemented by a Convolutional Neural Network (CNN). In addition, the method can also be used for generating a user emotion curve according to the identified emotion of the user and giving emotion scores.
The language output module 350 is configured to output the language generated by the natural language understanding module 330. The output is kept consistent with the character corresponding to the virtual robot's parameter attributes.
The behavior output module 360 is configured to output the motion and expression of the robot for displaying to the user.
The evaluation module 40 includes a learning effect evaluation module 410 and a post-class test module 420. The learning effect evaluation module 410 is configured to evaluate a learning effect of a user. The post-class test module 420 is used for testing the content of the current course and examining knowledge point grasp conditions of the user.
The review module 50 is used for sending review content and recording the review progress and times of the user and storing the review progress and times in the user database 110.
Fig. 3 is a schematic diagram of a computing device 1500 provided by an embodiment of the present application. The computing device 1500 includes: processor 1510, memory 1520, communication interface 1530, bus 1540.
It should be appreciated that the communication interface 1530 in the computing device 1500 shown in fig. 3 may be used to communicate with other devices.
Wherein the processor 1510 may be coupled to a memory 1520. The memory 1520 may be used to store the program codes and data. Accordingly, the memory 1520 may be a storage unit inside the processor 1510, an external storage unit independent of the processor 1510, or a component including a storage unit inside the processor 1510 and an external storage unit independent of the processor 1510.
Optionally, computing device 1500 may also include a bus 1540. Memory 1520 and communication interface 1530 may be coupled to processor 1510 by bus 1540. Bus 1540 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus 1540 may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one line is shown in fig. 3, but this does not mean that there is only one bus or only one type of bus.
It should be appreciated that in embodiments of the present application, the processor 1510 may employ a central processing unit (central processing unit, CPU). The processor may also be another general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Alternatively, the processor 1510 may employ one or more integrated circuits for executing associated programs to carry out the techniques provided in accordance with embodiments of the present application.
The memory 1520 may include read only memory and random access memory and provide instructions and data to the processor 1510. A portion of the memory 1520 may also include non-volatile random access memory. For example, the memory 1520 may also store information about the device type.
When the computing device 1500 is running, the processor 1510 executes the computer-executable instructions in the memory 1520 to perform the operational steps of the methods described above.
It should be understood that the computing device 1500 according to embodiments of the present application may correspond to a respective subject performing the methods according to embodiments of the present application, and that the above and other operations and/or functions of the respective modules in the computing device 1500 are respectively for implementing the respective flows of the methods of the present embodiment, and are not described herein for brevity.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, executes the intelligent teaching method described above, the method comprising at least one of the aspects described in the respective embodiments above.
Any combination of one or more computer readable media may be employed as the computer storage media of the embodiments herein. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above is only a preferred embodiment of the present application and the technical principle applied. Those skilled in the art will appreciate that the present application is not limited to the particular embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Thus, while the present application has been described in terms of the foregoing embodiments, the present application is not limited to the foregoing embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, all of which fall within the scope of the present application.

Claims (2)

1. An intelligent teaching method is characterized by comprising the following steps:
loading teaching content, including importing a user database and a course database;
generating a plurality of virtual robots according to the preference of the user to the virtual image in the user database and the information provided by the course database to form a crowd robot; when each virtual robot is generated, generating the virtual robots with different parameter attributes according to different preset parameter attributes, wherein the parameter attributes comprise at least one of the following: sex, age, personality, hobbies, height, body shape, occupation, speech rate, intonation, expression change rate;
enabling at least one virtual robot to interact with teaching contents of a user;
enabling at least two virtual robots to be used for interaction of teaching contents among the virtual robots; comprising the following steps: taking the corresponding parameters of the text form language or the action and the expression output by one virtual robot as the input of the other virtual robot;
when the virtual robot interacts with the user for teaching content, the method comprises the following steps:
collecting language and image information of a user, wherein the image information of the user can be a user limb image or a facial expression image collected according to a camera;
different virtual robots adopt deep neural networks with different parameters to understand natural language of a user or identify emotion; wherein, the understanding of the natural language is carried out on the collected text input or voice input of the user; identifying the emotion according to the acquired limb image or facial expression image of the user;
each virtual robot generates corresponding information in interaction according to the understood natural language or the identified emotion;
outputting the information in a mode of adapting to actions or expressions of the virtual robot parameter attributes, wherein the method comprises the following steps:
outputting the generated language, outputting synthesized voice according to the parameter attribute of the virtual robot when outputting the voice synthesis mode, and outputting the action and the expression of the virtual robot at the same time, and playing the action and the expression of the virtual robot in an animation mode, wherein the action expression comprises the expression which is the same as or similar to the expression of the user;
and when it is judged that the user has a certain emotion which indicates that the user does not understand the teaching content, adjusting the change rate of the actions or expressions of the virtual robots and increasing the liveliness of the virtual robots, so as to motivate the user and trigger question-answer interaction between the virtual robots or between the virtual robots and the user.
2. An intelligent teaching system, comprising:
the online teaching module is used for loading teaching content from a user database and a course database of the database module;
a database module comprising the user database and the course database; the user database is used for storing personal information, course progress and style preference of the user; the course database is used for storing a general database and a per-course database; the universal database stores greetings and universal chat corpora, and each class database stores the number of virtual robots, roles and course contents required in the current class;
the evaluation module comprises a learning effect evaluation module and a post-class test module; the learning effect evaluation module is used for evaluating the learning effect of the user through evaluation indexes consisting of the active interaction duty ratio, the answer accuracy and the class emotion scores; the post-class test module is used for testing the content of the current course;
the review module is used for sending review content and recording the review progress and the number of times of the user to the user database;
the crowd robot module is used for generating at least two virtual robots and enabling at least one virtual robot to interact with teaching contents of a user, so that the at least two virtual robots are used for the interaction of the teaching contents between the virtual robots; the method comprises the steps of generating a plurality of virtual robots according to the preference of a user to an virtual image in a user database and the information provided by a course database, and forming a crowd robot; when each virtual robot is generated, generating the virtual robots with different parameter attributes according to different preset parameter attributes, wherein the parameter attributes comprise at least one of the following: sex, age, personality, hobbies, height, body shape, occupation, speech rate, intonation, expression change rate; wherein, at least two virtual robots are used for interaction of teaching contents between the virtual robots; comprising the following steps: taking the corresponding parameters of the text form language or the action and the expression output by one virtual robot as the input of the other virtual robot;
the required virtual robot is configured with:
the language acquisition module is used for acquiring the natural language of the user, wherein the acquired natural language comprises a language in a text form or a voice form;
the natural language understanding module is used for understanding the natural language through semantic recognition and generating voice information in interaction;
the language output module is used for outputting the language information generated by the natural language understanding module; outputting the generated language, and outputting synthesized voice according to the parameter attribute of the virtual robot when outputting the generated language in a voice synthesis mode;
the behavior output module is used for outputting the actions or expressions of the virtual robot;
the image acquisition module is used for acquiring limb images or facial expression images of the user;
the emotion recognition module is used for recognizing emotion of the user according to the acquired limb image or facial expression image of the user;
the language output module is also used for outputting according to the identified emotion of the user;
the behavior output module is also used for outputting the action or expression of the virtual robot according to the identified emotion of the user; wherein, playing the action and expression of the virtual robot in an animation form, wherein the action expression comprises the same or similar expression as the expression of the user;
the crowd robot module is further used for realizing the specific interaction between the virtual robots and the user about the teaching content: different virtual robots adopt deep neural networks with different parameters to understand the user's natural language or recognize the user's emotion; each virtual robot generates corresponding interaction information according to the understood natural language or the recognized emotion and outputs the information; wherein the understanding of natural language is performed on the collected text input or voice input of the user, and the emotion is recognized from the collected limb image or facial expression image of the user; and when it is judged that the user has a certain emotion which indicates that the user does not understand the teaching content, the change rate of the actions or expressions of the virtual robots is adjusted and the liveliness of the virtual robots is increased, so as to motivate the user and trigger question-answer interaction between the virtual robots or between the virtual robots and the user.
CN202011461962.9A 2020-12-11 2020-12-11 Intelligent teaching method and device Active CN112634684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011461962.9A CN112634684B (en) 2020-12-11 2020-12-11 Intelligent teaching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011461962.9A CN112634684B (en) 2020-12-11 2020-12-11 Intelligent teaching method and device

Publications (2)

Publication Number Publication Date
CN112634684A CN112634684A (en) 2021-04-09
CN112634684B (en) 2023-05-30

Family

ID=75312331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011461962.9A Active CN112634684B (en) 2020-12-11 2020-12-11 Intelligent teaching method and device

Country Status (1)

Country Link
CN (1) CN112634684B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6271842B1 (en) * 1997-04-04 2001-08-07 International Business Machines Corporation Navigation via environmental objects in three-dimensional workspace interactive displays
CN107491176A (en) * 2017-09-27 2017-12-19 樊友林 A kind of virtual emulation teaching method, system and ustomer premises access equipment and server
CN107765852A (en) * 2017-10-11 2018-03-06 北京光年无限科技有限公司 Multi-modal interaction processing method and system based on visual human
CN108200446A (en) * 2018-01-12 2018-06-22 北京蜜枝科技有限公司 Multimedia interactive system and method on the line of virtual image
CN109377797A (en) * 2018-11-08 2019-02-22 北京葡萄智学科技有限公司 Virtual portrait teaching method and device
CN109448467A (en) * 2018-11-01 2019-03-08 深圳市木愚科技有限公司 A kind of virtual image teacher teaching program request interaction systems
CN111489424A (en) * 2020-04-10 2020-08-04 网易(杭州)网络有限公司 Virtual character expression generation method, control method, device and terminal equipment
JP2020160641A (en) * 2019-03-26 2020-10-01 大日本印刷株式会社 Virtual person selection device, virtual person selection system and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11154981B2 (en) * 2010-02-04 2021-10-26 Teladoc Health, Inc. Robot user interface for telepresence robot system
CN105632251B (en) * 2016-01-20 2018-04-20 华中师范大学 3D virtual teacher system and method with phonetic function
CN108877336A (en) * 2018-03-26 2018-11-23 深圳市波心幻海科技有限公司 Teaching method, cloud service platform and tutoring system based on augmented reality
CN109841122A (en) * 2019-03-19 2019-06-04 深圳市播闪科技有限公司 A kind of intelligent robot tutoring system and student's learning method
CN110085229A (en) * 2019-04-29 2019-08-02 珠海景秀光电科技有限公司 Intelligent virtual foreign teacher information interacting method and device
CN111390919A (en) * 2020-03-09 2020-07-10 合肥贤坤信息科技有限公司 Accompany robot intelligence image recognition behavior analysis system
CN111428666A (en) * 2020-03-31 2020-07-17 齐鲁工业大学 Intelligent family accompanying robot system and method based on rapid face detection
CN111680137A (en) * 2020-05-20 2020-09-18 北京大米科技有限公司 Online classroom interaction method and device, storage medium and terminal
CN112017085B (en) * 2020-08-18 2021-07-20 上海松鼠课堂人工智能科技有限公司 Intelligent virtual teacher image personalization method

Also Published As

Publication number Publication date
CN112634684A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
Cole et al. Perceptive animated interfaces: First steps toward a new paradigm for human-computer interaction
CN110931111A (en) Autism auxiliary intervention system and method based on virtual reality and multi-mode information
CN110091335B (en) Method, system, device and storage medium for controlling learning partner robot
Morton et al. Interactive language learning through speech-enabled virtual scenarios
Zhang et al. StoryDrawer: a child–AI collaborative drawing system to support children's creative visual storytelling
JP2012516463A (en) Computer execution method
Hwang et al. Recognition-based physical response to facilitate EFL learning
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
CN112530218A (en) Many-to-one accompanying intelligent teaching system and teaching method
Talbot et al. Virtual human standardized patients for clinical training
Hwang et al. Collaborative kinesthetic English learning with recognition technology
CN110767005A (en) Data processing method and system based on intelligent equipment special for children
Pennington et al. Using robot-assisted instruction to teach students with intellectual disabilities to use personal narrative in text messages
Mudrick et al. Toward affect-sensitive virtual human tutors: The influence of facial expressions on learning and emotion
Tolksdorf et al. Parents’ views on using social robots for language learning
Calvo et al. Introduction to affective computing
Ince et al. An audiovisual interface-based drumming system for multimodal human–robot interaction
CN117541444B (en) Interactive virtual reality talent expression training method, device, equipment and medium
CN112634684B (en) Intelligent teaching method and device
Caldwell Marin et al. Designing a cyber-physical robotic platform to assist speech-language pathologists
KR102341634B1 (en) conversation education system including user device and education server
KR20100043393A (en) System for english study service by communication network
Divekar AI enabled foreign language immersion: Technology and method to acquire foreign languages with AI in immersive virtual worlds
Daher et al. Embodied conversational agent for emotional recognition training
Adewole et al. Dialogue-based simulation for cultural awareness training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant