CN112530218A - Many-to-one accompanying intelligent teaching system and teaching method - Google Patents


Info

Publication number: CN112530218A
Application number: CN202011303056.6A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: user, role, teaching, accompanying, virtual
Legal status: Pending
Inventors: 黄元忠, 卢庆华, 魏静
Current assignee: Shenzhen Muyu Technology Co., Ltd.
Original assignee: Shenzhen Muyu Technology Co., Ltd.
Application filed by Shenzhen Muyu Technology Co., Ltd.; priority to CN202011303056.6A

Classifications

    • G09B 5/14: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, with provision for individual teacher-student communication
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/1407: Digital output to display device; general aspects irrespective of display type
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06Q 50/205: Education administration or guidance
    • G06V 40/174: Facial expression recognition
    • G06V 40/20: Recognition of movements or behaviour, e.g. gesture recognition
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The application provides a many-to-one accompanying intelligent teaching system, comprising: a virtual teacher module; a virtual accompanying module; an online interaction module; an attention detection module; an expression recognition module; and a self-teaching classroom module, in which a user can independently create a teaching classroom, select virtual students according to preference, and act as the teacher to teach them. By constructing accompanying roles, the system simulates the participation of several learning partners, creates a learning atmosphere and raises the user's enthusiasm for learning. The actions, expressions and voice of the virtual teacher role or the accompanying roles change according to the user's attention or expression so as to better attract the user's attention; corresponding voice or text is called up for display, the online teaching content is adjusted, or an accompanying role initiates a question, which further builds the learning atmosphere. The novel self-teaching classroom brings the user a new learning experience, arouses learning enthusiasm and interest, encourages the user to actively consolidate the learned knowledge, and improves learning efficiency.

Description

Many-to-one accompanying intelligent teaching system and teaching method
Technical Field
The application relates to the technical field of online teaching, in particular to a many-to-one accompanying intelligent teaching system and a teaching method.
Background
Online education alleviates the unbalanced distribution of educational resources and removes the limitations of time and place. In addition, with the development of AI, online education using virtual teacher roles has appeared; it can deliver personalized teaching according to the user's own situation. For example, patent application No. CN201811296550.7 discloses an on-demand interactive system with a virtual lecturing teacher, in which a favourite virtual teacher can be selected according to the user's needs for interaction during an online lecture.
At present, online education with a virtual teacher role adopts a one-to-one teaching mode, that is, teaching interaction takes place between one virtual teacher role and one user. In a real classroom, however, users, and student users in particular, usually attend class together with many other students and interact with the teacher. Because several learning partners and discussion objects take part, students obtain knowledge input not only from the teacher but also from their learning partners, and the presence of those partners creates a learning atmosphere that raises enthusiasm for learning.
Therefore, how to recreate, for a single user, the experience of many participants learning online together, i.e. many-to-one accompanying teaching, is the technical problem addressed by the application.
Disclosure of Invention
In view of the above problems in the prior art, the application provides a many-to-one companion intelligent teaching system and a teaching method, which can create a learning atmosphere and improve the learning enthusiasm of a user through more interactions.
In order to achieve the above object, the present application provides a many-to-one companion intelligent teaching system, comprising:
the virtual teacher module is used for constructing virtual teacher roles in teaching;
the virtual accompanying module is used for constructing at least one accompanying role and realizing accompanying in teaching;
and the online interaction module is used for realizing interaction about the teaching content among the virtual teacher role, the accompanying role and the user.
Thus, by constructing accompanying roles, the participation of several learning partners is simulated, a learning atmosphere is created, and the user's enthusiasm for learning is improved.
Optionally, the system further comprises an attention detection module for detecting the attention of the user; the virtual accompanying module is also used for driving the accompanying role to respond correspondingly according to the attention change.
Thus, the attention detection module can detect the user's attention in real time and provide it to the virtual teacher module or the virtual accompanying module, so that either module drives the action, expression or voice of the virtual teacher role or accompanying role to better attract the user's attention, and calls up the corresponding voice or text for display so as to adjust the content of the user's online education.
Optionally, the system further comprises an expression recognition module, configured to detect an expression of the user; the virtual accompanying module is also used for driving the accompanying role to respond correspondingly according to the change of the expression.
Thus, by recognizing the user's expression it can be judged whether the user has understood the teaching content; this is provided to the virtual teacher module or the virtual accompanying module, so that either module drives the action or expression of the virtual teacher role or accompanying role, calls up corresponding voice or text for display, and adjusts the content and mode of the user's online education, realizing personalized teaching.
Optionally, driving the accompanying role to respond correspondingly includes: driving the accompanying role to make a certain action or expression, displaying certain text from the accompanying role, or playing certain speech of the accompanying role.
In this way, driving the behaviour of the accompanying roles creates an atmosphere of learning together with the user, provides guidance and questioning, strengthens the user's sense of immersion in online education, and improves enthusiasm for learning.
Optionally, driving the accompanying role to respond correspondingly includes: triggering a question-and-answer and guidance scene, in which the accompanying role is driven to initiate a question that is answered by the virtual teacher role or the user, or the virtual teacher role is driven to initiate a question that is answered by the accompanying role or the user.
In this way, triggering the question-and-answer and guidance scene and driving the behaviour of the accompanying role and the virtual teacher role creates an atmosphere of learning together with the user, provides guidance and questioning, strengthens the user's sense of immersion in online education, and improves enthusiasm for learning.
Optionally, in the interaction with the virtual teacher role in the question-and-answer and guidance scene, when the virtual teacher role gives an answer, the answer is generated by a deep neural network that combines the question with the background information of the virtual teacher role.
In this way, the answer to the same question changes when a different virtual teacher role is adopted, so that a user who selects virtual teachers with different backgrounds obtains more diverse questions and answers, enhancing the interest and variety of teaching. The background may be a knowledge background, for example a mathematics-teaching background for a given level such as primary school, a composition-specialist background, a background of overseas English study, and so on.
In one embodiment, the deep neural network is a GRU-based recurrent neural network, in which:

r_t = σ(W_r · [h_(t-1), x_t, T_i])
z_t = σ(W_z · [h_(t-1), x_t, T_i])
h̃_t = tanh(W_h · [r_t * h_(t-1), x_t, T_i])
h_t = (1 - z_t) * h_(t-1) + z_t * h̃_t

where σ denotes the sigmoid activation function, r_t the reset gate in the GRU unit, h̃_t the candidate hidden state, z_t the update gate, and h_t the hidden-layer output; T_i is the added teacher background vector, which can be concatenated with the input x_t fed to the GRU unit.
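As a rough illustration only (not part of the original filing), the gate computations above can be sketched in a few lines of Python; the class name, dimensions and random weight initialization are assumptions made for the example, and the output convention follows the equations given above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BackgroundGRUCell:
    """GRU cell whose gates see [h_(t-1), x_t, T_i]: the previous hidden state,
    the current input, and a fixed teacher-background vector T_i."""

    def __init__(self, input_dim, background_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        concat = hidden_dim + input_dim + background_dim
        self.W_r = rng.normal(0, 0.1, (hidden_dim, concat))  # reset gate weights
        self.W_z = rng.normal(0, 0.1, (hidden_dim, concat))  # update gate weights
        self.W_h = rng.normal(0, 0.1, (hidden_dim, concat))  # candidate-state weights

    def step(self, h_prev, x_t, T_i):
        concat = np.concatenate([h_prev, x_t, T_i])
        r_t = sigmoid(self.W_r @ concat)                      # reset gate
        z_t = sigmoid(self.W_z @ concat)                      # update gate
        cand = np.concatenate([r_t * h_prev, x_t, T_i])
        h_tilde = np.tanh(self.W_h @ cand)                    # candidate hidden state
        h_t = (1.0 - z_t) * h_prev + z_t * h_tilde            # new hidden state
        return h_t

# Example: one step with a 16-dim word vector and an 8-dim teacher background vector.
cell = BackgroundGRUCell(input_dim=16, background_dim=8, hidden_dim=32)
h = np.zeros(32)
h = cell.step(h, x_t=np.ones(16), T_i=np.full(8, 0.5))
```

In practice the weights would of course be learned end-to-end together with the encoder-decoder network described later, not drawn at random.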
Optionally, the method further includes: the virtual student module is used for constructing at least one student role and realizing that the corresponding response is carried out on the teaching content carried out by the user when the user is used as a teacher for teaching;
the self-teaching classroom module is used for providing a self-teaching classroom scene, the user can be used as a teacher to perform teaching in the scene, and the at least one student role constructed by the virtual student module correspondingly responds to the teaching content performed by the user.
Thus, with the virtual student module and the self-teaching classroom module, a user can independently create a classroom, select virtual student roles according to preference and act as the teacher to teach them, while the student roles respond with actions and voice according to the user's instructions. This novel role-playing, game-like teaching mode brings the user a new learning experience, encourages the user to consolidate the learned knowledge independently, and increases the user's interest in learning.
The application also provides a many-to-one accompanying intelligent teaching method, which comprises the following steps:
constructing a virtual teacher role in teaching for teaching, constructing at least one accompanying role to realize accompanying in the teaching, and realizing interaction of teaching contents among the virtual teacher role, the accompanying role and a user;
and when the attention or the expression of the user is judged to be abnormal, the accompanying role is driven to respond correspondingly.
Thus, by constructing accompanying roles, the participation of several learning partners is simulated, a learning atmosphere is created, and the user's enthusiasm for learning is improved. Furthermore, the user's attention and expression can be detected in real time and provided to the virtual teacher module or the virtual accompanying module, so that either module drives the action, expression or voice of the virtual teacher role or accompanying role to better attract the user's attention, and calls up the corresponding voice or text for display so as to adjust the content of the user's online education.
Optionally, driving the accompanying role to respond correspondingly includes: driving the accompanying role to make a certain action or expression, displaying certain text from the accompanying role, or playing certain speech of the accompanying role.
In this way, driving the behaviour of the accompanying roles creates an atmosphere of learning together with the user, provides guidance and questioning, strengthens the user's sense of immersion in online education, and improves enthusiasm for learning.
Optionally, driving the accompanying role to respond correspondingly includes: triggering a question-and-answer and guidance scene, in which the accompanying role is driven to initiate a question that is answered by the virtual teacher role or the user, or the virtual teacher role is driven to initiate a question that is answered by the accompanying role or the user.
In this way, triggering the question-and-answer and guidance scene and driving the behaviour of the accompanying role and the virtual teacher role creates an atmosphere of learning together with the user, provides guidance and questioning, strengthens the user's sense of immersion in online education, and improves enthusiasm for learning.
Optionally, the method further includes: constructing at least one student role; and triggering a small self-teaching classroom scene, wherein the user is used as a teacher to perform teaching in the scene, and the constructed at least one student role correspondingly responds to the teaching content performed by the user.
In this way, in the triggered self-teaching classroom scene the user independently creates a small classroom, selects virtual student roles according to preference and acts as the teacher to teach them, while the virtual student roles respond with actions and voice according to the user's instructions. This novel role-playing, game-like teaching mode brings the user a new learning experience, encourages the user to consolidate the learned knowledge independently, and increases the user's interest in learning.
The present application further provides a computing device comprising: a bus; a communication interface connected to the bus; at least one processor coupled to the bus; and at least one memory coupled to the bus and storing program instructions that, when executed by the at least one processor, cause the at least one processor to perform any of the methods described above.
The present application also provides a computer readable storage medium having stored thereon program instructions which, when executed by a computer, cause the computer to perform any of the methods described above.
In conclusion, the many-to-one accompanying intelligent teaching method introduces accompanying roles that learn and discuss together with the user and interact with the user, creating a learning atmosphere; when the user meets content that cannot be understood or a question that cannot be answered, prompts and guidance are given in time, which stimulates the user's interest in learning, helps the user progress and break through in study, improves learning efficiency, and enhances the interactivity of intelligent teaching.
These and other aspects of the present application will be more readily apparent from the following description of the embodiment(s).
Drawings
The various features and the connections between the various features of the present application are further described below with reference to the drawings. The figures are exemplary, some features are not shown to scale, and some of the figures may omit features that are conventional in the art to which the application relates and are not essential to the application, or show additional features that are not essential to the application, and the combination of features shown in the figures is not intended to limit the application. In addition, the same reference numerals are used throughout the specification to designate the same components. The specific drawings are illustrated as follows:
FIG. 1 is a schematic diagram of a many-to-one companion intelligent teaching system according to the present application;
FIG. 2 is a schematic diagram of a GRU of the present application;
FIG. 3 is a flow chart of a many-to-one companion intelligent teaching method of the present application;
FIG. 4 is a schematic diagram of the architecture of a computing device of the present application;
fig. 5 is a schematic diagram of a conventional GRU.
Detailed Description
The terms "first, second, third and the like" or "module a, module B, module C and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order, it being understood that specific orders or sequences may be interchanged where permissible to effect embodiments of the present application in other than those illustrated or described herein.
In the following description, reference to reference numerals indicating steps, such as S110, S120 … …, etc., does not necessarily indicate that the steps are performed in this order, and the order of the preceding and following steps may be interchanged or performed simultaneously, where permissible.
The term "comprising" as used in the specification and claims should not be construed as being limited to the contents listed thereafter; it does not exclude other elements or steps. It should therefore be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, and groups thereof. Thus, the expression "an apparatus comprising the devices a and B" should not be limited to an apparatus consisting of only the components a and B.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the application. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, although they may be. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments, as would be apparent to one of ordinary skill in the art from this disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. In the case of inconsistency, the meaning described in the present specification or the meaning derived from the content described in the present specification shall control. In addition, the terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
To accurately describe the technical contents in the present application and to accurately understand the present application, the terms used in the present specification are given the following explanations or definitions before the description of the specific embodiments.
Knowledge graph: the method is special graph data and is a semantic network for revealing the relationship between entities. In particular, a knowledge-graph is a labeled directed property graph. Each node in the knowledge graph has a plurality of attributes and attribute values, edges between entities represent relationships between the nodes, the pointing direction of the edges represents the direction of the relationships, and labels on the edges represent the types of the relationships. Knowledge graphs support specific applications in many industries, such as: information retrieval, natural language understanding, question and answer systems, recommendation systems, electronic commerce, educational medicine, and the like.
Covering: a technique for producing three-dimensional animation. Adding bones to the three-dimensional bone model on the basis of the three-dimensional bone model established in the three-dimensional software. Since the skeleton and the three-dimensional skeleton model are independent of each other, in order to allow the skeleton to drive the three-dimensional skeleton model to generate reasonable motion, the technique of binding the three-dimensional skeleton model to the skeleton position is called skinning.
ParlAI dialogue framework: ParlAI is an open-source AI dialogue framework released by Facebook, Inc., used for training and evaluating AI models on a wide range of publicly available dialogue datasets.
Dialogue NLI dataset: a natural language inference (NLI) dataset published by Facebook and New York University, containing labelled sentence pairs used to train natural language inference models; it can be used to improve dialogue models.
Gated Recurrent Unit (GRU): an RNN (recurrent neural network) unit that is a variant of the Long Short-Term Memory network (LSTM). A basic GRU unit is shown in Fig. 5. In Fig. 5, σ (sigmoid) is an activation function that outputs a value between 0 and 1, describing how much of each component is allowed to pass: 0 means "let nothing through" and 1 means "let everything through". The sigmoid layer, called the "input gate layer", decides which values are to be updated. tanh is an activation function. z_t is the update gate, used to decide which information to discard and which new information to add. r_t is the reset gate, used to decide the proportion of previous information to discard. Fig. 2 shows the GRU unit of the present application, which differs from Fig. 5 in that the concatenation of x_t and T_i replaces the original x_t input to the GRU.
An aim of the present application is to provide a many-to-one accompanying intelligent teaching system in which several virtual characters participate in online teaching at the same time. This overcomes the shortcoming, mentioned in the background art, of one-to-one online education in which only a single virtual teacher role provides one-sided input: several virtual characters accompany the user to learn and discuss together, building a strong learning atmosphere, arousing enthusiasm and interest in learning, and improving learning efficiency. The present application will now be described in detail with reference to the drawings.
As shown in fig. 1, the many-to-one companion intelligent teaching system of the present application includes:
the virtual teacher module 110 is configured to construct a virtual teacher character, and drive the actions and expressions of the virtual teacher character to be displayed to the user.
In an embodiment, the virtual teacher module 110 may be implemented as follows: the human body is digitized by computer technology, combining human body modelling, skeleton binding and real-time rendering. Specifically, a three-dimensional model of the virtual teacher role is built by collecting a large amount of data, including face data, body data, and action and expression data for the eyes, teeth and so on, and the image of the virtual teacher role is synthesized to complete the human body modelling. The virtual teacher role is then driven in real time by an artificial intelligence algorithm: a bionic three-dimensional model with a three-dimensional skeleton is constructed, a skinning algorithm binds the virtual teacher model to the skeleton positions of the bionic model, the model vertices are bound while textures are rendered, and action vector data are generated to drive the actions and expressions of the three-dimensional model.
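As context for the skinning step mentioned above, the sketch below shows the basic linear blend skinning computation, in which each vertex follows a weighted combination of the bones it is bound to. It is an illustrative simplification, not the implementation of module 110; the array shapes and the toy data are assumptions.

```python
import numpy as np

def skin_vertices(rest_vertices, bone_transforms, weights):
    """Linear blend skinning: each deformed vertex is a weighted sum of the
    vertex transformed by every bone it is bound to.

    rest_vertices:   (V, 3) vertex positions in the bind pose
    bone_transforms: (B, 4, 4) current transforms of each bone relative to its bind pose
    weights:         (V, B) skinning weights, each row summing to 1
    """
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4) homogeneous coords
    # Transform every vertex by every bone, then blend with the weights.
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]

# Toy example: two bones, three vertices; the second bone is translated upward,
# so vertices weighted to it follow the motion.
rest = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
bones = np.stack([np.eye(4), np.eye(4)])
bones[1, 1, 3] = 0.5                      # move bone 1 up by 0.5
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(skin_vertices(rest, bones, w))
```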
The online interaction module 120 is configured to generate feedback information from a received user question and present it to the user, implementing content interaction with the user. The presentation may be text, voice and the like; in effect, the interaction module supplies the text and voice that the virtual teacher role feeds back to the user. The received question may be text entered through a keyboard or voice captured through a microphone, and the voice may be converted to text for ease of processing.
In an embodiment, the online interaction module 120 may be implemented as follows: an online interaction model is built by combining a deep neural network, a knowledge graph and chatbot technology; the model performs intent recognition on the user's question and generates feedback information according to the user's intent and the current scene, realizing online interaction with the user. In one embodiment, for child education, three interaction scenes may be constructed: a scene of communication with the teacher, a professional knowledge question-and-answer scene, and a chit-chat scene. When the online interaction model is built for different scenes, different deep neural networks trained on different training sets may be used, or knowledge graphs with different nodes or relationships may be used. These three scenes are described further below.
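A minimal sketch of how a recognized intent could route a question to one of the three scenes is given below; the handler functions and the keyword rules are placeholders invented for illustration, since the actual intent model is not specified in the text.

```python
from typing import Callable, Dict

# Hypothetical scene handlers; in the described system these would be the
# background-aware seq2seq model, the subject knowledge graph, and the
# ParlAI-based chit-chat bot respectively.
def teacher_dialogue(q: str) -> str:
    return f"[teacher scene] reply to: {q}"

def knowledge_qa(q: str) -> str:
    return f"[knowledge-graph scene] answer to: {q}"

def chitchat(q: str) -> str:
    return f"[chit-chat scene] response to: {q}"

SCENES: Dict[str, Callable[[str], str]] = {
    "teacher": teacher_dialogue,
    "knowledge": knowledge_qa,
    "chitchat": chitchat,
}

def classify_intent(question: str) -> str:
    """Stand-in for the intent-recognition model (keyword rules, purely for illustration)."""
    q = question.lower()
    if any(k in q for k in ("what is", "why", "how do")):
        return "knowledge"
    if any(k in q for k in ("teacher", "homework", "lesson")):
        return "teacher"
    return "chitchat"

def interact(question: str) -> str:
    return SCENES[classify_intent(question)](question)

print(interact("What is a prime number?"))
print(interact("Teacher, can you repeat the last lesson?"))
```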
The attention detection module 130 is configured to detect and track the user's classroom attention in real time and provide it to the virtual teacher module or the virtual accompanying module, so that either module drives the action or expression of the virtual teacher character or accompanying character and invokes corresponding voice or text for display, thereby adjusting the content of the user's online education.
In an embodiment, the attention detection module 130 may be implemented in a manner of: the real-time images of the user are collected through the camera, attention data are calculated aiming at the images through an attention detection algorithm, and the learning attention condition of the user is detected. The attention data can be used for analyzing the classroom performance of the user and the interest degree of the teaching contents, so that the attention of the user in classroom learning can be guided in time and the teaching mode can be adjusted.
Specifically, the attention detection algorithm may be as follows: facial feature points are estimated from the three-dimensional position of the face and the face-orientation pose angle in the collected real-time image of the user, coordinate data of the left and right eyes are collected, and these coordinates are input into a deep neural network model to calculate the gaze points of the left and right eyes. When the length of time the user's gaze deviates from the screen reaches a set threshold, the corresponding attention level is obtained. The deep neural network may be a Convolutional Neural Network (CNN), which, after training on a pre-prepared training set, outputs the attention level.
Specifically, the attention guidance method may be as follows: when the user's attention is judged to be below the threshold, the virtual teacher character or the accompanying character is driven to perform corresponding actions and expressions, or corresponding voice is played or text or pictures are displayed, to guide the user's attention back. The virtual teacher module 110 can adjust the actions or expressions of the virtual teacher character, for example by increasing the rate at which actions and expressions change, or the virtual accompanying module can raise the activity level of the accompanying characters, so as to better attract the user's attention; in this way the teaching is adjusted in real time according to the user's actual state.
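The off-screen-duration rule described above might be organized roughly as follows; the screen size, the threshold value and the gaze estimator are illustrative assumptions, and guide_attention stands in for driving the teacher or accompanying character.

```python
import time

SCREEN_W, SCREEN_H = 1920, 1080
OFF_SCREEN_SECONDS = 5.0   # illustrative value for the "set threshold"

class AttentionMonitor:
    def __init__(self, threshold=OFF_SCREEN_SECONDS):
        self.threshold = threshold
        self.off_screen_since = None

    def update(self, gaze_xy, now=None):
        """gaze_xy: (x, y) gaze point predicted by the gaze CNN, or None if no face found.
        Returns True when attention is judged to have drifted for too long."""
        now = time.monotonic() if now is None else now
        on_screen = (gaze_xy is not None
                     and 0 <= gaze_xy[0] < SCREEN_W
                     and 0 <= gaze_xy[1] < SCREEN_H)
        if on_screen:
            self.off_screen_since = None
            return False
        if self.off_screen_since is None:
            self.off_screen_since = now
        return (now - self.off_screen_since) >= self.threshold

def guide_attention():
    """Placeholder for driving the companion/teacher character (action, speech, text)."""
    print("companion character: raises hand and asks a question to pull attention back")

monitor = AttentionMonitor()
if monitor.update((2500, 300), now=0.0):   # gaze off screen at t = 0 s, below threshold
    guide_attention()
if monitor.update((2500, 300), now=6.0):   # still off screen after 6 s, so trigger guidance
    guide_attention()
```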
The expression recognition module 140 is configured to recognize the user's expression, judge whether the user understands the teaching content, i.e. how well the knowledge is being received, and provide this to the virtual teacher module or the virtual accompanying module, so that either module drives the action or expression of the virtual teacher character or accompanying character, calls up corresponding voice or text for display, and adjusts the content and mode of the user's online education, realizing personalized teaching. For instance, the virtual accompanying module may raise the activity level of the accompanying characters, producing more question-and-answer interaction between accompanying characters and the virtual teacher character, more interaction between accompanying characters and the user's questions, answers or guidance, or more question-and-answer interaction between the virtual teacher character and the user.
In one embodiment, the expression recognition module 140 may be implemented as follows: an image acquisition device such as a camera is called to capture and extract key points of the user's face (face position, eyebrows, mouth and nose contours, and so on) together with key facial regions, and these are input into a pre-trained deep neural network that classifies the expression. The deep neural network may be a Convolutional Neural Network (CNN), which recognizes expression categories after being trained on a pre-prepared training set.
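As a sketch of the classification step, the toy PyTorch model below maps an aligned face crop to an expression category; the label set, network size and input resolution are assumptions, since the text does not specify the actual CNN or its training data.

```python
import torch
import torch.nn as nn

EXPRESSIONS = ["neutral", "puzzled", "focused", "bored"]  # illustrative label set

class TinyExpressionCNN(nn.Module):
    """Toy stand-in for the pre-trained expression classifier: a face crop
    (1 x 64 x 64, already aligned using the detected keypoints) goes in,
    a score for each expression category comes out."""
    def __init__(self, n_classes=len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyExpressionCNN().eval()
face_crop = torch.rand(1, 1, 64, 64)          # placeholder for the camera crop
with torch.no_grad():
    probs = model(face_crop).softmax(dim=1)
label = EXPRESSIONS[int(probs.argmax())]
if label == "puzzled":
    print("drive companion character: mirror the expression and ask a guiding question")
```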
The virtual accompanying module 140 is configured to construct at least one accompanying character and drive the character's actions and expressions displayed to the user, realizing learning together with the user. When the attention detection module 130 detects a change in the user's attention, or the expression recognition module 140 recognizes certain expressions of the user, the virtual accompanying module drives the accompanying character to respond: the character may perform actions or expressions, display text, or play voice, and a question-and-answer scene may be triggered in which questions and answers are exchanged between the accompanying character, the virtual teacher character and the user, with the accompanying character acting as the question initiator so as to guide the user's learning. Thus, when the user runs into difficulty, prompts and guidance are given in time, realizing accompanied learning.
In an embodiment, the implementation manner of the virtual accompanying module 140 may be the same as that of the virtual teacher module 110, and is not described in detail. In addition, the constructed accompanying character can be a cartoon image or a real person image.
The virtual student module 150 is used for constructing at least one student role and realizing that the corresponding response is carried out on the teaching content of the user when the user is used as a teacher for teaching;
and the self-teaching lesson module 160 is used for providing a self-teaching lesson scene under which the user performs teaching as a teacher, and the at least one student role constructed by the virtual student module responds to the teaching content performed by the user correspondingly. The response includes: the student character responds to the action expression, or plays the audio fed back by the virtual student, answers the user questions and the like.
Through the self-teaching classroom module 160, a user can independently create a teaching classroom, select one or more virtual student roles constructed by the virtual student module 150 according to preference, and act as the teacher to teach them. This novel self-teaching classroom brings the user a new learning experience, further arouses the user's enthusiasm and interest in learning, encourages the user to actively consolidate the learned knowledge, and improves learning efficiency.
In addition, the answer of the question of each virtual student character may be based on a deep neural network mentioned later, and different student characters may be provided with different background vector sequences, so that different answers may be generated according to the question of the user and the background of the student character.
Next, the implementation of the interaction in the above online interaction module 120 in the scenarios of communication with the teacher, the professional knowledge question and answer scenario, and the chat scenario will be further described.
In the interaction scene of communicating with the teacher, a deep neural network is constructed to realize the interaction. This scene can also be used in the subsequent question-and-answer and guidance scenes. In one embodiment, the deep neural network uses an Encoder-Decoder network structure, in which an encoder encodes the input and generates a context vector c that is then decoded by a decoder. Specifically: the sentence of the question input by the user is received and converted into the corresponding word vectors x_t, which are input to the network; the encoder encodes them into the context vector c, which is then used as the input of the decoder; the decoder outputs word vectors that are converted into the corresponding words, giving the sentence that the virtual teacher outputs in the interaction. The sentence can be displayed as text or played as voice, to suit the designed interaction scene (text interaction or voice interaction); interaction between the user and the virtual teacher role is realized through this network. In addition, to strengthen the consistency of the teacher's background information during interaction, a teacher background vector input sequence T_i is added at the decoder end, and the teacher background is learned by training on whole dialogues. The deep neural network may be an RNN, LSTM, GRU or the like; the GRU shown in Fig. 2 is described below as an example:
r_t = σ(W_r · [h_(t-1), x_t, T_i])
z_t = σ(W_z · [h_(t-1), x_t, T_i])
h̃_t = tanh(W_h · [r_t * h_(t-1), x_t, T_i])
h_t = (1 - z_t) * h_(t-1) + z_t * h̃_t

where, with the same notation as above, r_t denotes the reset gate in the GRU unit, h̃_t the candidate hidden state, z_t the update gate, and h_t the hidden-layer output; T_i is the added teacher background vector, which can be concatenated with the input x_t fed to the GRU unit. Fig. 2 shows an example in which the teacher background vector input sequence T_i is one of the inputs to every GRU unit. During training, the answers in the question-answer pairs used for interaction are related to the configured teacher background; in this way, the answer to the same question changes when a different virtual teacher role is adopted. The background may be a knowledge background, for example a mathematics-teaching background for a given level such as primary school, a composition-specialist background, a background of overseas English study, and so on.
In another embodiment, when a GRU is used, the teacher background vector sequence T_i may instead be used as part of the decoder's initial hidden state h_0, as follows:

r_t = σ(W_r · [h_(t-1), x_t])
z_t = σ(W_z · [h_(t-1), x_t])
h̃_t = tanh(W_h · [r_t * h_(t-1), x_t])
h_t = (1 - z_t) * h_(t-1) + z_t * h̃_t
h_0 = [c, T_i]

where c is the context vector output by the encoder; the concatenation of c and the teacher background vector input sequence T_i is used as the initial hidden state h_0 of the decoder. The remaining parameters have the same meanings as above and are not described again.
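A minimal PyTorch sketch of this second variant is given below: the encoder's context vector c is concatenated with the teacher background vector T_i to form the decoder's initial hidden state h_0. The dimensions and the greedy-decoding stub are assumptions made for illustration, and the output projection to words is omitted.

```python
import torch
import torch.nn as nn

EMB, CTX, BG = 32, 48, 16            # illustrative embedding, context and background sizes
HIDDEN = CTX + BG                     # decoder hidden size = len([c, T_i])

encoder = nn.GRU(EMB, CTX, batch_first=True)
decoder = nn.GRU(EMB, HIDDEN, batch_first=True)

def answer(question_emb, teacher_bg, reply_len=5):
    """question_emb: (1, L, EMB) embedded user question
    teacher_bg:   (1, BG) background vector T_i of the selected virtual teacher."""
    _, c = encoder(question_emb)                           # c: (1, 1, CTX)
    h0 = torch.cat([c, teacher_bg.unsqueeze(0)], dim=-1)   # h_0 = [c, T_i]
    # Greedy-decoding stub: feed a zero "start" embedding for each output step.
    out, _ = decoder(torch.zeros(1, reply_len, EMB), h0)
    return out        # (1, reply_len, HIDDEN); a projection to the vocabulary would follow

reply = answer(torch.rand(1, 7, EMB), torch.rand(1, BG))
print(reply.shape)
```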
In the professional knowledge question-and-answer scene, knowledge points of the different subjects can be established in advance as nodes, the associations between knowledge points are analyzed and extracted, and a professional subject knowledge graph is built, on which the question-and-answer scene is realized.
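A toy sketch of this idea follows: subject knowledge points stored as labelled directed edges, with a simple query that also follows is_a relations. The triples and relation names are invented for illustration and are not from the patent.

```python
# Knowledge graph as labelled directed edges: (head, relation, tail)
triples = [
    ("right triangle", "is_a", "triangle"),
    ("right triangle", "has_property", "satisfies the Pythagorean theorem"),
    ("triangle", "has_property", "interior angles sum to 180 degrees"),
]

graph = {}
for head, rel, tail in triples:
    graph.setdefault(head, []).append((rel, tail))

def answer_property(entity):
    """Return direct properties plus properties inherited through is_a edges."""
    facts, queue, seen = [], [entity], set()
    while queue:
        node = queue.pop()
        if node in seen:
            continue
        seen.add(node)
        for rel, tail in graph.get(node, []):
            if rel == "has_property":
                facts.append(tail)
            elif rel == "is_a":
                queue.append(tail)
    return facts

print(answer_property("right triangle"))
# ['satisfies the Pythagorean theorem', 'interior angles sum to 180 degrees']
```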
In the chit-chat scene, a chatbot can be built with the ParlAI dialogue framework. The Dialogue NLI dataset, which addresses the consistency problem of dialogue models, can be used as a training set for the chit-chat task.
Next, referring to a flow chart shown in fig. 3, a description is given of a many-to-one companion intelligent teaching method for implementing the present application by a computer, including the following steps:
s310: when a user performs online teaching through a computer, a virtual teacher role is constructed by the virtual teacher module 110 and displayed on a computer display, so that online teaching of the user is realized, and at least one accompanying role is constructed by the virtual accompanying module 140 and is displayed on the display in whole or in part.
S320-S330: in the teaching process, if the user has no doubt about the teaching content, for example, after the user picture is collected through a camera and subjected to attention detection or expression recognition and no abnormality is found, the online teaching is normally performed, and at the moment, the accompanying character can perform actions or release characters or voice at a lower frequency, so that the attention of the user is focused on the teaching content of the virtual teacher character. If an abnormality is found after the attention detection or the expression recognition is performed, for example, when the attention is detected to be lower than a threshold value or the expression belongs to an ambiguous category, the next step is performed.
S340: at this time, some action, expression of the accompanying character, such as a hand-raising action, a thinking action, or a voice of asking a question, is triggered to call the user's attention and continue the teaching contents, or go to the following question-answering and guide flow. As another example, if the identified user is a suspicious expression, some actions and expressions of the accompanying character are triggered, such as a hand-lifting action, a thinking action, an expression the same as or similar to the expression of the user, and the following question-answering and guidance flow is entered. At this time, the action expression of the accompanying character is the same as or similar to the expression of the user, so that the user can easily enter the atmosphere.
S350: question answering and guide flow: in the process, a companion role, a virtual teacher role and three users carry out question-answer interaction on the current teaching content, wherein the initiation of the question-answer can be initiated by the companion role, when the answer of the user is not received within a set time, the virtual teacher role is triggered to answer the question, the learning of the user is guided by the method, if the answer of the user is received within the set time, the virtual teacher role can give a conclusion that the answer is correct, or when the answer of the user is different from that recorded in the teaching system, the virtual teacher role gives the answer. In addition, a virtual teacher role can initiate a question, and when the answer of the user is not received within the set time, the accompanying role is triggered to read the answer in the teaching system to answer the question, so that the learning of the user is guided. In addition, when the virtual teacher role initiates a question, a certain proportion of accompanying roles can be set to give wrong answers, and then the virtual teacher role or the user corrects the answers so as to further stimulate the interest and the enthusiasm of the user.
S360: and after a certain time, returning to the step S320, continuously acquiring the user picture through the camera, and performing attention detection and expression recognition. Through the steps, when the user recovers the abnormal state of attention or expression, the question answering and guide process can be quitted, and the original teaching content is recovered to continue teaching.
In addition, in step S350 a self-teaching classroom scene may be triggered instead, in which at least one student role is constructed; the user selects at least one virtual student role, acts as the teacher in the scene, and the constructed student roles respond to the teaching given by the user, where the response includes the students' responsive actions and expressions, playing audio fed back by the virtual students, answering the user's questions, and so on. The answer of each virtual student role to a question is likewise based on the deep neural network, and different student roles can be given different background vector sequences, so that different answers are generated according to the user's question and the background of the student role.
Fig. 4 is a schematic structural diagram of a computing device 1500 provided in an embodiment of the present application. The computing device 1500 includes: processor 1510, memory 1520, communications interface 1530, and bus 1540.
It is to be appreciated that the communication interface 1530 in the computing device 1500 illustrated in FIG. 4 can be utilized to communicate with other devices.
The processor 1510 may be connected to a memory 1520, among other things. The memory 1520 may be used to store the program code and data. Accordingly, the memory 1520 may be a storage unit inside the processor 1510, an external storage unit independent of the processor 1510, or a component including a storage unit inside the processor 1510 and an external storage unit independent of the processor 1510.
Optionally, computing device 1500 may also include a bus 1540. The memory 1520 and the communication interface 1530 may be connected to the processor 1510 via a bus 1540. Bus 1540 can be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 1540 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one line is shown in FIG. 4, but it is not intended that there be only one bus or one type of bus.
It should be understood that, in the embodiment of the present application, the processor 1510 may adopt a Central Processing Unit (CPU). The processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Or the processor 1510 uses one or more integrated circuits for executing related programs to implement the technical solutions provided in the embodiments of the present application.
The memory 1520, which may include both read-only memory and random access memory, provides instructions and data to the processor 1510. A portion of the processor 1510 may also include non-volatile random access memory. For example, the processor 1510 may also store information of the device type.
When the computing device 1500 is run, the processor 1510 executes the computer-executable instructions in the memory 1520 to perform the operational steps of the above-described method.
It should be understood that the computing device 1500 according to the embodiment of the present application may correspond to a corresponding main body for executing the method according to the embodiments of the present application, and the above and other operations and/or functions of each module in the computing device 1500 are respectively for implementing corresponding flows of each method of the embodiment, and are not described herein again for brevity.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The present embodiments also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the method described above, including at least one of the solutions described in the above embodiments.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application.

Claims (10)

1. A many-to-one accompanying intelligent teaching system, characterized by comprising:
a virtual teacher module, configured to construct a virtual teacher role for teaching;
a virtual accompanying module, configured to construct at least one accompanying role to realize accompaniment during teaching; and
an online interaction module, configured to realize interaction of teaching content among the virtual teacher role, the accompanying role, and a user.
2. The system of claim 1, further comprising:
an attention detection module, configured to detect the attention of the user;
wherein the virtual accompanying module is further configured to drive the accompanying role to respond correspondingly according to changes in the attention.
3. The system of claim 1, further comprising:
an expression recognition module, configured to detect the expression of the user;
wherein the virtual accompanying module is further configured to drive the accompanying role to respond correspondingly according to changes in the expression.
4. The system of claim 2 or 3, wherein driving the accompanying role to respond correspondingly comprises:
driving the accompanying role to make a certain action or expression, displaying certain text of the accompanying role, or playing certain speech of the accompanying role.
5. The system of claim 2 or 3, wherein driving the accompanying role to respond correspondingly comprises:
triggering a question-answer and guidance scene in which the accompanying role is driven to raise a question that is answered by the virtual teacher role or the user, or the virtual teacher role is driven to raise a question that is answered by the accompanying role or the user.
6. The system of claim 5, wherein, when the virtual teacher role gives an answer in the question-answer and guidance scene, the answer is generated by a deep neural network that combines the question with background information of the virtual teacher role.
7. The system of claim 1, further comprising:
a virtual student module, configured to construct at least one student role that responds correspondingly to teaching content delivered by the user when the user teaches as a teacher; and
a self-teaching classroom module, configured to provide a self-teaching classroom scene in which the user teaches as a teacher and the at least one student role constructed by the virtual student module responds correspondingly to the teaching content delivered by the user.
8. A many-to-one accompanying intelligent teaching method, characterized by comprising the following steps:
constructing a virtual teacher role for teaching, constructing at least one accompanying role to realize accompaniment during teaching, and realizing interaction of teaching content among the virtual teacher role, the accompanying role, and a user; and
driving the accompanying role to respond correspondingly when the attention or the expression of the user is determined to be abnormal.
9. The method of claim 8, wherein driving the accompanying role to respond correspondingly comprises:
driving the accompanying role to make a certain action or expression, displaying certain text of the accompanying role, or playing certain speech of the accompanying role; or
triggering a question-answer and guidance scene in which the accompanying role is driven to raise a question that is answered by the virtual teacher role or the user, or the virtual teacher role is driven to raise a question that is answered by the accompanying role or the user.
10. The method of claim 8, further comprising:
constructing at least one student role; and
triggering a self-teaching mini-classroom scene in which the user teaches as a teacher and the constructed at least one student role responds correspondingly to the teaching content delivered by the user.
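For illustration only, and not as part of the claims or the disclosure: the response logic recited in claims 2 to 6, 8 and 9 can be pictured with the following minimal Python sketch. Every name in it (UserState, is_abnormal, drive_companion, answer_as_teacher, the threshold and expression labels) is hypothetical, and the answer generator is a plain stand-in for the deep neural network that claim 6 describes.

```python
# Hypothetical sketch of the companion-response control flow; not the patented implementation.
from dataclasses import dataclass
from enum import Enum, auto
import random


class Response(Enum):
    ACTION = auto()        # the accompanying role makes an action or expression
    SHOW_TEXT = auto()     # text of the accompanying role is displayed
    PLAY_SPEECH = auto()   # speech of the accompanying role is played
    QA_SCENE = auto()      # the question-answer and guidance scene is triggered


@dataclass
class UserState:
    attention: float       # hypothetical score, 0.0 (distracted) .. 1.0 (attentive)
    expression: str        # hypothetical label, e.g. "neutral", "confused", "bored"


def is_abnormal(state: UserState,
                attention_threshold: float = 0.4,
                abnormal_expressions: tuple = ("confused", "bored")) -> bool:
    """Judge whether the user's attention or expression is abnormal (claim 8)."""
    return (state.attention < attention_threshold
            or state.expression in abnormal_expressions)


def answer_as_teacher(question: str, teacher_background: str) -> str:
    """Stand-in for the deep-neural-network answer generation of claim 6: the real
    system would condition a generative model on the question plus the virtual
    teacher role's background information."""
    return f"[answer to '{question}' conditioned on: {teacher_background}]"


def drive_companion(state: UserState) -> tuple:
    """Choose how the accompanying role responds (claims 4, 5 and 9)."""
    if state.expression == "confused":
        # Confusion: trigger the question-answer and guidance scene, in which the
        # accompanying role raises a question answered by the virtual teacher role.
        question = "Could you explain that last step again?"
        answer = answer_as_teacher(question, teacher_background="patient tutor persona")
        return Response.QA_SCENE, f"companion asks: {question} | teacher: {answer}"
    if state.attention < 0.4:
        # Low attention: a lightweight nudge via action, text or speech (claim 4).
        return random.choice([
            (Response.ACTION, "companion waves to regain attention"),
            (Response.SHOW_TEXT, "Stay with me, we're almost done!"),
            (Response.PLAY_SPEECH, "companion_encouragement.wav"),
        ])
    return Response.ACTION, "companion nods encouragingly"


if __name__ == "__main__":
    state = UserState(attention=0.3, expression="confused")
    if is_abnormal(state):
        response, detail = drive_companion(state)
        print(response.name, "->", detail)
```

The sketch only shows the control flow; a real system would feed UserState from the attention detection module and expression recognition module of claims 2 and 3, and would render the chosen response through the accompanying role's avatar.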
CN202011303056.6A 2020-11-19 2020-11-19 Many-to-one accompanying intelligent teaching system and teaching method Pending CN112530218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011303056.6A CN112530218A (en) 2020-11-19 2020-11-19 Many-to-one accompanying intelligent teaching system and teaching method

Publications (1)

Publication Number Publication Date
CN112530218A true CN112530218A (en) 2021-03-19

Family

ID=74981644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011303056.6A Pending CN112530218A (en) 2020-11-19 2020-11-19 Many-to-one accompanying intelligent teaching system and teaching method

Country Status (1)

Country Link
CN (1) CN112530218A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101501741A (en) * 2005-06-02 2009-08-05 南加州大学 Interactive foreign language teaching
US20070218433A1 (en) * 2006-03-14 2007-09-20 Apolonia Vanova Method of teaching arithmetic
CN106023693A (en) * 2016-05-25 2016-10-12 北京九天翱翔科技有限公司 Education system and method based on virtual reality technology and pattern recognition technology
CN106775198A (en) * 2016-11-15 2017-05-31 捷开通讯(深圳)有限公司 A kind of method and device for realizing accompanying based on mixed reality technology
CN111801730A (en) * 2017-12-29 2020-10-20 得麦股份有限公司 System and method for artificial intelligence driven automated companion
CN109445579A (en) * 2018-10-16 2019-03-08 翟红鹰 Virtual image exchange method, terminal and readable storage medium storing program for executing based on block chain
CN109448467A (en) * 2018-11-01 2019-03-08 深圳市木愚科技有限公司 A kind of virtual image teacher teaching program request interaction systems
CN110091335A (en) * 2019-04-16 2019-08-06 威比网络科技(上海)有限公司 Learn control method, system, equipment and the storage medium with robot
CN110085229A (en) * 2019-04-29 2019-08-02 珠海景秀光电科技有限公司 Intelligent virtual foreign teacher information interacting method and device
CN111078005A (en) * 2019-11-29 2020-04-28 恒信东方文化股份有限公司 Virtual partner creating method and virtual partner system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022183423A1 (en) * 2021-03-04 2022-09-09 深圳技术大学 Online teaching implementation method and apparatus based on gaze tracking, and storage medium
CN113362672A (en) * 2021-08-11 2021-09-07 深圳市创能亿科科技开发有限公司 Teaching method and device based on virtual reality and computer readable storage medium
CN113362672B (en) * 2021-08-11 2021-11-09 深圳市创能亿科科技开发有限公司 Teaching method and device based on virtual reality and computer readable storage medium
US11532179B1 (en) 2022-06-03 2022-12-20 Prof Jim Inc. Systems for and methods of creating a library of facial expressions
US11790697B1 (en) 2022-06-03 2023-10-17 Prof Jim Inc. Systems for and methods of creating a library of facial expressions
US11922726B2 (en) 2022-06-03 2024-03-05 Prof Jim Inc. Systems for and methods of creating a library of facial expressions
WO2024036899A1 (en) * 2022-08-16 2024-02-22 北京百度网讯科技有限公司 Information interaction method and apparatus, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210319