CN110046290B - Personalized autonomous teaching course system - Google Patents

Personalized autonomous teaching course system

Info

Publication number
CN110046290B
Authority
CN
China
Prior art keywords
teaching
scene
materials
node
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910143880.0A
Other languages
Chinese (zh)
Other versions
CN110046290A (en)
Inventor
王青
魏昊鹏
张汝民
黄东敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DMAI Guangzhou Co Ltd
Original Assignee
DMAI Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DMAI Guangzhou Co Ltd filed Critical DMAI Guangzhou Co Ltd
Priority to CN201910143880.0A priority Critical patent/CN110046290B/en
Publication of CN110046290A publication Critical patent/CN110046290A/en
Application granted granted Critical
Publication of CN110046290B publication Critical patent/CN110046290B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9035Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Educational Technology (AREA)
  • Tourism & Hospitality (AREA)
  • Databases & Information Systems (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a personalized autonomous teaching course system, comprising a teacher terminal for producing teaching materials and designing the autonomous teaching method and teaching process based on an education AND-OR graph (EAOG) structure; a student terminal for completing scene rendering and display and for acquiring and uploading voice and visual interaction information; a scene design terminal for designing and developing teaching scenes; and a cloud server for storing teaching materials and teaching-scene animations, storing and invoking teaching methods, and providing AI and instruction services. The invention controls the teaching process and teaching method autonomously according to the learner's current learning state and the teacher's various teaching targets, realizes personalized teaching targets, and fully exploits the advantages of novel teaching technology.

Description

Personalized autonomous teaching course system
Technical Field
The invention belongs to the technical field of artificial intelligence information and education, and particularly relates to a personalized autonomous teaching course system.
Background
With the rapid development of artificial intelligence and computer technology and the growing variety of teaching resources, teaching methods have changed greatly, and the teaching process has shifted from being teacher-led to being student-centered. The teaching mode has gradually evolved from offline in-person teaching to online live-video teaching and then to virtual-teacher teaching. To fully exploit the advantages of these novel teaching technologies and realize personalized teaching targets, the problem of how to conduct a corresponding teaching process according to the learner's current learning state needs to be solved.
Teaching materials comprise a series of knowledge carriers such as texts, pictures, audio and video, and the teaching process is the process of elaborating knowledge with the corresponding teaching materials. An urgent problem to be solved is how to generate a relatively flexible and personalized teaching method and system from limited teaching materials.
Disclosure of Invention
The invention aims to provide a personalized autonomous teaching course system that solves the problems of existing artificial-intelligence autonomous teaching systems: the advantages of novel teaching technology are difficult to exploit fully, personalized teaching targets are not realized, and the teaching process and teaching method are not controlled autonomously according to the learner's current learning state and the teacher's various teaching targets.
The technical scheme adopted by the invention is as follows:
A personalized autonomous teaching course system, comprising:
a teacher terminal: for producing teaching materials and designing the autonomous teaching method and teaching process based on an education AND-OR graph (EAOG) structure;
a student terminal: for autonomously completing scene rendering and display, and for acquiring and uploading voice and visual interaction information;
a scene design terminal: for designing and developing the autonomous teaching scene;
a cloud server: for storing teaching materials and teaching-scene animations, storing and invoking teaching methods, and providing AI and instruction services.
Further, the teaching method and teaching process design at the teacher terminal comprises the following steps:
Step 1: acquire teaching materials and name the acquired teaching material files after the teaching step they describe, such that files of different formats used in the same teaching step share the same file name;
Step 2: based on the teaching material files named in Step 1, design different explanation and interaction behavior modes and teaching processes for different teaching materials, wherein the teaching process and teaching method are designed on the proposed education AND-OR graph (EAOG) structure, which comprises the following node types:
1) root node: represents the starting node and contains no semantic information;
2) AND nodes, comprising:
2.1) sequence AND node: represents a sequential execution relationship;
2.2) parallel AND node: represents a simultaneous, concurrent execution relationship;
3) OR nodes, comprising:
3.1) random OR node: represents a random relationship, selecting randomly among several similar contents or behaviors;
3.2) condition OR node: represents a condition node; the corresponding behavior node is invoked when the corresponding condition is met;
4) behavior nodes, comprising:
4.1) terminal node: represents a specific behavior, such as an avatar behavior or a scene behavior;
4.2) end node: indicates the end of the activity.
Further, the education AND-OR graph structure also comprises a reference node, which indicates a return to a condition OR node.
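For illustration, the node taxonomy above can be sketched as a small Python class hierarchy. This is a hypothetical encoding, not the patent's implementation; all class and field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class EAOGNode:
    """Base node of the education AND-OR graph (EAOG)."""
    name: str
    children: List["EAOGNode"] = field(default_factory=list)

@dataclass
class RootNode(EAOGNode):        # starting node, carries no semantic information
    pass

@dataclass
class SequenceAndNode(EAOGNode): # s-and: children execute in order
    pass

@dataclass
class ParallelAndNode(EAOGNode): # p-and: children execute concurrently
    pass

@dataclass
class RandomOrNode(EAOGNode):    # r-or: one child chosen at random
    pass

@dataclass
class ConditionOrNode(EAOGNode): # c-or: child chosen by a feedback condition
    condition: Optional[Callable[[dict], int]] = None  # feedback -> child index

@dataclass
class TerminalNode(EAOGNode):    # a concrete avatar or scene behavior
    action: str = ""

@dataclass
class EndNode(EAOGNode):         # marks the end of the activity
    pass
```

A teaching process is then a tree of these nodes rooted at a `RootNode`, with behavior nodes as leaves.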
Further, the cloud server includes:
an AI service module: analyzes and understands the image and voice signals transmitted from the student terminal and issues the corresponding work instructions;
a work instruction module: comprising a teaching instruction module, mainly for searching, invoking and loading the teaching materials in the memory module, and a scene instruction module, mainly for searching and invoking the scene materials in the memory module; both perform the corresponding operations upon receiving commands from the student terminal;
a memory module: stores the teaching materials uploaded from the teacher terminal and the scene materials uploaded by the scene designer, and delivers the corresponding resources to the student terminal upon receiving teaching and scene instructions.
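A minimal sketch of how the instruction modules might look materials up in the memory module, keyed by the shared step name from the naming convention. This is purely illustrative; the class and method names are assumptions, not the patent's API:

```python
class MemoryModule:
    """Holds uploaded materials: teaching files from the teacher terminal,
    scene files from the scene designer."""
    def __init__(self):
        self.teaching = {}  # file name -> content
        self.scene = {}     # file name -> content

class WorkInstructionModule:
    """Dispatches teaching and scene instructions from the student terminal."""
    def __init__(self, memory):
        self.memory = memory

    def teaching_instruction(self, step):
        # Load every teaching material whose base name matches the step,
        # e.g. "step1" matches step1.wav and step1.jpg.
        return {name: data for name, data in self.memory.teaching.items()
                if name.rsplit(".", 1)[0] == step}

    def scene_instruction(self, name):
        # Fetch one scene material by name (None if absent).
        return self.memory.scene.get(name)
```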
Further, the student terminal includes:
a loudspeaker: plays the audio of teaching and scenes;
a microphone: collects the student's voice and transmits it to the control center;
a camera: collects image information of the student and transmits it to the control center;
a key module: provides manual operation instructions;
a display screen: presents visual data to the user;
a scene-driving engine: invokes and renders the teaching and scene materials in the memory according to the work instructions issued by the control center, and presents them on the display;
a memory: locally stores the teaching and scene materials that the control center requests from the cloud;
a control center: routes the instructions of the whole student terminal, forwarding information or instructions from the other modules to the scene-driving engine or the cloud server.
In summary, by adopting the above technical scheme, the invention has the following beneficial effects:
1. In the invention, designing the autonomous teaching method and teaching process on the education AND-OR graph (EAOG) structure systematically decomposes a problem into mutually independent sub-problems and then solves them, offering graphical, intuitive description and structured knowledge expression. This structural model can describe and model the teaching process simply, conveniently and quickly, and can complete teaching tasks autonomously based on the corresponding teaching materials, fully exploiting the advantages of novel teaching technology. The student terminal completes scene rendering and display as well as the acquisition and uploading of voice and visual interaction information, so the learner's current learning state is fully perceived and fed back, realizing personalized, autonomous control of the teaching process and teaching method. The scene design terminal completes the design and development of autonomous teaching scenes, making the system more flexible. The cloud server stores, but is not limited to, teaching materials and teaching-scene animations, stores and invokes teaching methods, and provides AI and instruction services. The teaching process and teaching method can thus be controlled autonomously according to the learner's current learning state and the teacher's various teaching targets, realizing personalized teaching targets and fully exploiting the advantages of novel teaching technology.
2. In the invention, the education AND-OR graph (EAOG) structure is distinguished from structures that merely divide nodes into primary, secondary and tertiary levels connected by plain AND/OR relations; such simple AND-OR relations can hardly control teaching processes and methods under variable conditions.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope, and those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a block diagram of the overall system architecture of the present invention;
FIG. 2 is a schematic diagram of the node design of Welcome in the teaching process based on the education AND-OR graph (EAOG) structure in embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of the node design of Introduce-the-number in the teaching process based on the education AND-OR graph (EAOG) structure in embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of the node design of How-many-animals in the teaching process based on the education AND-OR graph (EAOG) structure in embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of the node design of make-the-sound in the teaching process based on the education AND-OR graph (EAOG) structure in embodiment 1 of the present invention;
FIG. 6 is a block diagram of a student user terminal according to the present invention;
fig. 7 is a block diagram of a cloud server according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention, i.e., the described embodiments are only a subset of, and not all, embodiments of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The features and properties of the present invention are described in further detail below with reference to examples.
Example 1
A personalized autonomous teaching course system according to a preferred embodiment of the present invention includes:
a teacher terminal: for producing teaching materials and designing the autonomous teaching method and teaching process based on an education AND-OR graph (EAOG) structure;
a student terminal: for autonomously completing scene rendering and display, and for acquiring and uploading voice and visual interaction information;
a scene design terminal: for designing and developing the autonomous teaching scene;
a cloud server: for storing teaching materials and teaching-scene animations, storing and invoking teaching methods, and providing AI and instruction services.
An AND-OR graph is a structure composed of AND nodes and OR nodes, which systematically decomposes a problem into mutually independent sub-problems and then solves them. It offers a graphical, visual and intuitive description and can express structured knowledge, but it has certain limitations in describing the teaching process.
The invention therefore proposes an Education And-Or Graph (EAOG) structure for teaching, built on the basic AND-OR graph framework. This graph model can describe and model the teaching process simply, conveniently and quickly, and can complete teaching tasks autonomously based on the corresponding teaching materials.
A teaching method is the general term for the behavior modes adopted by teachers and students in teaching activities to achieve the teaching purposes and teaching task requirements.
The complete teaching of a knowledge point at different levels of specific teaching methods is called a teaching process. The teaching process can be divided into, but is not limited to: a warm-up session, a teaching session, a free-practice session, a group-learning session, a review session, and the like.
The teaching method and teaching process design at the teacher terminal comprises the following steps:
Step 1: acquire teaching materials and name the acquired files after the teaching step they describe, such that files of different formats used in the same teaching step share the same file name. Teaching materials are the most essential requirement in teaching; using different materials, a knowledge point can be analyzed comprehensively from different dimensions and angles. Audio recordings among the teaching materials are recorded with the corresponding recording equipment according to the teaching steps of the course, or obtained from the Internet. Importantly, the files are named specifically: the naming should be simple, clear, and indicate the teaching step, e.g. step1.wav. Besides audio files, the pictures required for teaching are obtained from the Internet or designed independently; picture naming corresponds to audio naming, e.g. the audio and picture files used simultaneously in step 1 can be named step1.wav and step1.jpg. The source and naming rules for video files are the same. To illustrate: the text, pictures, videos and the teacher's voice files used in teaching are named separately, but the specific naming rule is semantic naming associated with the specific teaching content. For example, if a specific teaching step explains problem B, all files used for that behavior are named B: if the teacher speaks while a picture is displayed on the blackboard, the step needs two files, the teacher's audio file and the blackboard's picture file, which must share the same name, with the semantic function of explaining that step.
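The convention above — all files of one teaching step share a base name and differ only in extension — can be exploited by grouping files on their stem. A small sketch (the helper name is an assumption; the file names follow the patent's own examples):

```python
from collections import defaultdict
from pathlib import PurePath

def group_by_step(file_names):
    """Group teaching-material files by teaching step (the shared base name)."""
    steps = defaultdict(list)
    for name in file_names:
        steps[PurePath(name).stem].append(name)
    return dict(steps)

materials = ["step1.wav", "step1.jpg", "step2.wav", "step2.mp4", "B.wav", "B.png"]
by_step = group_by_step(materials)
# by_step["B"] holds the teacher's audio and the blackboard picture that
# together explain problem B, because both are named B.
```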
Step 2: based on the teaching material files processed in Step 1, design different explanation and interaction behavior modes and teaching processes for different teaching materials, wherein the teaching process and teaching method are designed on the proposed education AND-OR graph (EAOG) structure, which comprises the following node types:
1) root node, denoted root: represents the starting node and contains no semantic information;
2) AND nodes, denoted and, comprising:
2.1) sequence AND node, denoted s-and: represents a sequential execution relationship;
2.2) parallel AND node, denoted p-and: represents a simultaneous, concurrent execution relationship;
3) OR nodes, denoted or, comprising:
3.1) random OR node, denoted r-or: represents a random relationship, selecting randomly among several similar contents or behaviors;
3.2) condition OR node, denoted c-or: represents a condition node; the corresponding behavior node is invoked when the corresponding condition is met;
4) behavior nodes, denoted action, comprising:
4.1) terminal node, denoted terminal: represents a specific behavior, such as an avatar behavior or a scene behavior;
4.2) end node, denoted end: indicates the end of the activity.
Further, the education AND-OR graph structure also comprises a reference node, denoted reference, which indicates a return to a condition OR node.
During actual use, the autonomous teaching process based on the education AND-OR graph structure advances through the steps of the teaching process automatically according to the actual feedback: at each condition OR node, the condition is judged from the feedback obtained during teaching, and the next step is selected accordingly.
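This feedback-driven traversal can be sketched as follows. The node encoding, feedback keys, and action strings are hypothetical illustrations, not the patent's implementation; the example mirrors a make-the-sound-style practice step:

```python
import random

# Each node is a (type, payload) tuple: "s-and" and "r-or" carry child lists,
# "c-or" carries (predicate, if_true, if_false), "terminal" carries an action.
def execute(node, feedback, out):
    kind, payload = node
    if kind == "terminal":
        out.append(payload)
    elif kind == "end":
        out.append("end")
    elif kind == "s-and":                    # sequential execution
        for child in payload:
            execute(child, feedback, out)
    elif kind == "r-or":                     # random choice among similar behaviors
        execute(random.choice(payload), feedback, out)
    elif kind == "c-or":                     # feedback condition decides the branch
        predicate, if_true, if_false = payload
        execute(if_true if predicate(feedback) else if_false, feedback, out)
    return out

# Practice step: praise a correct answer, otherwise ask the learner to repeat.
practice = ("s-and", [
    ("terminal", "teacher-say:make-the-sound"),
    ("terminal", "teacher-listen-to-user"),
    ("c-or", (lambda fb: fb.get("answer_correct", False),
              ("terminal", "teacher-say:tremendous"),
              ("terminal", "teacher-say:please-say-again"))),
    ("end", None),
])
```

Running `execute(practice, {"answer_correct": True}, [])` selects the praise branch; an empty feedback dict selects the repeat branch.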
In the present invention, teaching methods include, but are not limited to, audio explanation, text display, picture display and interaction, video presentation, and the like.
Specifically, audio explanations are recorded in advance by the user or obtained from the Internet. The specific audio explanation behaviors can be divided into several classes; combined with different expressions, actions, picture displays and so on, the same content can be explained with different expressions and postures.
For picture display and interaction, picture display modes are defined: with or without action, and with or without interaction. For an interactive picture, clicking the picture triggers the corresponding action accompanied by the corresponding sound effect. Of course, many combinations of picture display and interaction with audio explanation and animation are possible.
For video explanation, some individual knowledge points are explained specifically by video. The animated character can also participate at the same time, producing various explanation combinations.
In the present invention, the teaching process can be divided into, but not limited to, the following processes: a warm-up link, a teaching link, a free exercise link, a group learning link, a review link and the like.
The teaching process and the teaching method are based on the proposed EAOG, and can freely control teaching materials and animation materials to form personalized interactive teaching courses.
Taking the specific teaching process of teaching numbers in English as an example, the teaching materials include audio, video, images, and so on. The teaching methods include the teacher speaking, the teacher listening to the user, the teacher observing, the blackboard displaying pictures, and so on, which can be defined as: teacher-say, teacher-listen-to-user, teacher-watch, blackboard-show-picture, and the like.
The teaching process here refers to a general process for teaching the numbers 1-3; a number-teaching game is designed with a zoo as the background, with the following specific process:
First, Greeting: audio and picture files appear here. To ensure the generality of the teaching method, the corresponding audio and picture of this step (corresponding meaning that the picture is displayed while the audio is played) are named welcome.mp3 and welcome.png; the file names are the same and only the file types differ. Besides these, go-to-the-zoo.mp3 and go-to-the-zoo.png, and sing-a-song.mp3 and sing-a-song.mp4 are also present.
Second, number learning:
Introduce-the-number: introduces the animal associated with the number; for teaching the number 1, for example, a lion is used, so the audio and image are named lion.mp3 and lion.png;
How-many-animals: the exercise part, reinforcing the learning through practice; the related files are the audio and image files How-many-animals.mp3 and How-many-animals.png;
make-the-sound: the follow-and-repeat session, with a make-the-sound.mp3 audio file;
make-the-gesture: the visual interaction session, in which the user must give a gesture as required, e.g. do-and-say-number.mp3;
Third, Good Bye: the course ends; the goodbye audio and image are named goodbye.mp3 and goodbye.png;
Fourth, in addition, there are some general phrases and pictures; for example, the feedback for a correct answer is named active-response-1.mp3, active-response-1.txt and active-response-1.png, where the suffix -1 indicates that resources of this category have several different expression modes.
The specific node designs of the teaching process and teaching method based on the education AND-OR graph (EAOG) structure are shown in FIGS. 2, 3, 4 and 5; the method and process control of make-the-gesture are similar to those of make-the-sound. FIG. 2 shows the autonomous teaching process of Welcome, FIG. 3 that of Introduce-the-number, FIG. 4 that of How-many-animals, and FIG. 5 that of make-the-sound.
Schemes 1-18 in the figures are as follows (where a represents audio and p represents picture):
1: teacher: say: a('hello'); (i.e. play the audio of the teacher saying "hello")
2: blackboard: show: p('hello'); (i.e. show the "hello" picture on the blackboard)
3, 6: blackboard: hide; (i.e. hide the blackboard)
4: teacher: say: a('welcome'); (i.e. play the audio of the teacher saying "welcome")
5: blackboard: show: p('welcome'); (i.e. show the "welcome" picture on the blackboard)
7: teacher: say: a('animal'); (i.e. play the audio of the teacher saying "animal")
8: blackboard: show: p('animal'); (i.e. show the "animal" picture on the blackboard)
9: teacher: say: a('how-many'); (i.e. play the audio of the teacher saying "how-many")
10: blackboard: show: p('how-many'); (i.e. show the "how-many" picture on the blackboard)
11: teacher: say: a('animal + number'); (i.e. play the audio of the teacher saying "animal + number")
12: blackboard: show: p('animal + number'); (i.e. show the "animal + number" picture on the blackboard)
13: teacher: say: a('make-the-sound'), 'happy' expression; (i.e. play the audio of the teacher saying "make-the-sound" while the animation shows a "happy" expression)
14: teacher-listen-to-user; (i.e. perform the step of the teacher listening to the user's speech)
15: teacher: say: a('please-say-again'), 'hide' expression; (i.e. play the audio of the teacher saying "please-say-again" while the animation hides the expression)
17: teacher: say: a('tremendous'), 'happy' expression; (i.e. play the audio of the teacher saying "tremendous" while the animation shows a "happy" expression)
18: blackboard: hide. (i.e. hide the blackboard)
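For illustration, the Welcome process of FIG. 2 (schemes 1-6) can be flattened into an ordered action sequence using the a()/p() notation of the schemes. This is a hypothetical rendering; the helper names are assumptions and the real control flow lives in the EAOG nodes of the figure:

```python
# Render schemes 1-6 of the Welcome process as ordered action strings.
def teacher_say(word):
    return f"teacher: say: a('{word}')"

def blackboard_show(word):
    return f"blackboard: show: p('{word}')"

def blackboard_hide():
    return "blackboard: hide"

welcome_process = [
    teacher_say("hello"),        # scheme 1
    blackboard_show("hello"),    # scheme 2
    blackboard_hide(),           # scheme 3
    teacher_say("welcome"),      # scheme 4
    blackboard_show("welcome"),  # scheme 5
    blackboard_hide(),           # scheme 6
]
```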
The student terminal includes: a control center, a scene-driving engine, a memory, a display screen, a camera, a loudspeaker, a microphone and a key module, as shown in FIG. 6.
The loudspeaker plays the audio of teaching and scenes; the microphone collects the student's voice and transmits it to the control center; the camera collects image information of the student and transmits it to the control center; the key module, which can consist of physical keys or virtual touch-screen keys, mainly issues instructions such as start, stop, pause and skip; the display screen shows the teaching scenes, teaching materials and so on to the user; the scene-driving engine invokes and renders the teaching and scene materials in the memory according to the work instructions issued by the control center and presents them on the display; the memory locally stores the teaching and scene materials that the control center requests from the cloud; the control center is the core of the whole student terminal, transmitting the voice and image information from the microphone and camera to the cloud server and forwarding the instructions from the keys to the scene-driving engine or the cloud server.
The scene design terminal is mainly used to design teaching scenes, animated characters and teaching aids, as well as the actions, expressions and mouth shapes of the animated characters. Expressions include, but are not limited to, calm, happy and sad; actions include, but are not limited to, pointing at the blackboard, waving a hand, and so on. The mouth movements of the animated character are generated automatically from the audio file; the method may refer to LipSync.
The cloud server structure is shown in fig. 7, and includes:
an AI service module: and analyzing and understanding the image and voice signals transmitted by the student user side, and giving a corresponding work instruction. The analysis understanding of the image signal mainly refers to face recognition analysis and expression analysis understanding of the student, head posture analysis understanding, and the like. The voice analysis mainly refers to natural language processing of voice signals to obtain user intentions, and then corresponding work instructions are generated.
The work instruction module: the teaching instruction module is mainly used for searching, calling and loading teaching materials in the memory, and the scene instruction module is mainly used for searching and calling scene materials in the memory. Meanwhile, the two instruction services can receive commands of starting, ending language pausing, skipping and the like from the student user terminal and perform corresponding operations.
A memory module: the system is used for storing teaching materials uploaded by a teacher user side and scene materials uploaded by a scene designer at a server side, and calling corresponding resources to student user sides after receiving teaching instructions and scene instructions.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. A personalized autonomous teaching course system, characterized in that it comprises:
a teacher terminal: used for producing teaching materials and for designing the autonomous teaching method and teaching process based on the Education AND-OR Graph (EAOG) structure;
a student terminal: used for autonomously completing scene rendering and display, and for collecting and uploading voice and visual interaction information;
a scene design terminal: used for designing and developing the autonomous teaching scene;
a cloud server: used for storing teaching materials and teaching scene animations, for storing and calling the teaching method, and for providing AI and instruction services;
wherein the teaching method and teaching process design at the teacher terminal comprises the following steps:
step 1: acquiring teaching materials and naming the acquired teaching material files with a description of the unified teaching step, wherein files of different formats used in the same teaching step share the same file name;
step 2: based on the teaching material files named in step 1, designing different explanation and interaction behavior modes and teaching processes for different teaching materials, wherein the teaching process and teaching method are designed based on the proposed Education AND-OR Graph (EAOG) structure, the EAOG structure comprising:
1) a root node: represents the start node and contains no semantic information;
2) AND nodes, comprising:
2.1) sequential AND node: represents a sequential execution relationship;
2.2) parallel AND node: represents a simultaneous, concurrent execution relationship;
3) OR nodes, comprising:
3.1) random OR node: represents a random relationship; one of several similar contents or behaviors is selected at random;
3.2) conditional OR node: represents a condition node; when the corresponding condition is met, the corresponding behavior node is called;
4) behavior nodes, comprising:
4.1) terminal node: represents a specific behavior, including an avatar behavior or a scene behavior;
4.2) end node: indicates the end of a behavior;
the EAOG structure further comprises a reference node, which indicates a return to a conditional OR node.
2. The system of claim 1, wherein the cloud server comprises:
an AI service module: analyzes and understands the image and voice signals transmitted from the student terminal and issues the corresponding work instructions;
a work instruction module: the teaching instruction module is used for searching, calling, and loading teaching materials in the memory, and the scene instruction module is used for searching and calling scene materials in the memory; both instruction modules perform the corresponding operations when receiving commands from the student terminal;
a memory module: stores the teaching materials uploaded by the teacher terminal and the scene materials uploaded by the scene designers on the server side, and delivers the corresponding resources to the student terminals after receiving teaching instructions and scene instructions.
3. The system of claim 1, wherein the student terminal comprises:
a loudspeaker: responsible for audio playback of teaching content and scenes;
a sound receiver: collects the student user's voice information and transmits it to the control center;
a camera: collects the student user's image information and transmits it to the control center;
a key module: provides manual operation instructions;
a display screen: presents the modules' visual data to the user;
a scene driving engine: calls and renders the teaching materials and scene materials in the memory according to the work instructions given by the control center, and displays them on the display screen;
a memory: locally stores the teaching materials and scene materials that the control center requests from the cloud;
a control center: relays and executes instructions for the whole student terminal, and transmits the information or instructions of the other modules to the scene driving engine or the cloud server.
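The control center's routing role in claim 3 (sensor data up to the cloud, work instructions down to the scene driving engine via the local memory) can be sketched as follows. The `ControlCenter` class and the callable stand-ins for the cloud uplink and scene engine are hypothetical names for exposition, not the terminal's actual API.

```python
class ControlCenter:
    """Routes messages between the student terminal's modules."""

    def __init__(self, cloud_upload, scene_engine, local_memory):
        self.cloud_upload = cloud_upload    # sends audio/image data to the cloud server
        self.scene_engine = scene_engine    # renders a material on the display
        self.local_memory = local_memory    # local cache of materials from the cloud

    def on_sensor_data(self, kind, payload):
        """Forward microphone/camera data to the cloud for analysis."""
        return self.cloud_upload(kind, payload)

    def on_work_instruction(self, material_name):
        """Load a cached material and hand it to the scene driving engine."""
        material = self.local_memory.get(material_name)
        if material is None:
            return False                    # would trigger a cloud fetch in practice
        self.scene_engine(material)
        return True
```

In this sketch the loudspeaker, camera, and key module would all call `on_sensor_data`, while the cloud server's instruction services would call `on_work_instruction`.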
CN201910143880.0A 2019-02-26 2019-02-26 Personalized autonomous teaching course system Active CN110046290B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910143880.0A CN110046290B (en) 2019-02-26 2019-02-26 Personalized autonomous teaching course system


Publications (2)

Publication Number Publication Date
CN110046290A CN110046290A (en) 2019-07-23
CN110046290B true CN110046290B (en) 2022-09-23

Family

ID=67274282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910143880.0A Active CN110046290B (en) 2019-02-26 2019-02-26 Personalized autonomous teaching course system

Country Status (1)

Country Link
CN (1) CN110046290B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766578A (en) * 2019-10-21 2020-02-07 江苏晓创教育科技有限公司 Automatic arrangement method and device for IT experiment courses
CN114205640B (en) * 2021-11-24 2023-12-12 安徽新华传媒股份有限公司 VR scene control system is used in teaching
CN115019575B (en) * 2022-06-09 2024-04-16 北京新唐思创教育科技有限公司 Full-true scene course processing method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133896A (en) * 2017-05-22 2017-09-05 浙江精益佰汇数字技术有限公司 The immersion teaching platform based on virtual reality technology and implementation method of multi-person interactive
CN107230403A (en) * 2017-07-30 2017-10-03 成都优瑞商务服务有限公司 A kind of intelligent tutoring system
KR20180072130A (en) * 2016-12-21 2018-06-29 박용철 Computer program for coaching self-study and medium recording the computer program
CN109147440A (en) * 2018-09-18 2019-01-04 周文 A kind of interactive education system and method
CN109189535A (en) * 2018-08-30 2019-01-11 北京葡萄智学科技有限公司 Teaching method and device
CN109377802A (en) * 2018-11-26 2019-02-22 暗物质(香港)智能科技有限公司 A kind of automatic and interactive intellectual education system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106205248B (en) * 2016-08-31 2019-01-18 北京师范大学 A kind of representative learning person generates system and method in the on-line study cognitive map of domain-specific knowledge learning and mastering state
CN107992195A (en) * 2017-12-07 2018-05-04 百度在线网络技术(北京)有限公司 A kind of processing method of the content of courses, device, server and storage medium
CN109308604A (en) * 2018-08-31 2019-02-05 温州大学 A kind of education/training management system and education training method
CN108986574B (en) * 2018-09-06 2020-12-29 北京春秋泰阁文化传播有限公司 Instant interaction type and big data analysis online teaching platform and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Multi-site Remote Interactive Synchronous Teaching System; Lian Zhijian; Computer & Telecommunication (《电脑与电信》); 2018-12-31; pp. 60-63 *


Similar Documents

Publication Publication Date Title
CN113095969B (en) Immersion type turnover classroom teaching system based on multiple virtualization entities and working method thereof
Clark et al. E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning
CN110033659B (en) Remote teaching interaction method, server, terminal and system
CN105632251B (en) 3D virtual teacher system and method with phonetic function
Cole et al. Perceptive animated interfaces: First steps toward a new paradigm for human-computer interaction
Kenny et al. Building interactive virtual humans for training environments
Hong et al. A courseware to script animated pedagogical agents in instructional material for elementary students in English education
Edwards et al. Multimedia interface design in education
Delamarre et al. The interactive virtual training for teachers (IVT-T) to practice classroom behavior management
Higgins et al. Video as a mediating artefact of science learning: cogenerated views of what helps students learn from watching video
Fyfield et al. Improving instructional video design: A systematic review
CN110046290B (en) Personalized autonomous teaching course system
KR102035088B1 (en) Storytelling-based multimedia unmanned remote 1: 1 customized education system
Louca et al. The use of computer‐based programming environments as computer modelling tools in early science education: The cases of textual and graphical program languages
CN111343507A (en) Online teaching method and device, storage medium and electronic equipment
Kohnke Using technology to design ESL/EFL microlearning activities
Dietz et al. Visual StoryCoder: A Multimodal Programming Environment for Children’s Creation of Stories
Thomas et al. Language teaching with video-based technologies: Creativity and CALL teacher education
Herbst et al. Depict: A tool to represent classroom scenarios
Noskova et al. Communication models in the digital learning environment
Chetty et al. Embodied conversational agents and interactive virtual humans for training simulators
Divekar AI enabled foreign language immersion: Technology and method to acquire foreign languages with AI in immersive virtual worlds
CN110852922A (en) Dynamic scenario-oriented language digital teaching method and system
CN109993671B (en) EAOG-based autonomous teaching design method and system applied to autonomous teaching
Hensen et al. Mixed reality agents for automated mentoring processes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210310

Address after: 16 / F, No. 37, Jinlong Road, Nansha District, Guangzhou City, Guangdong Province (office only)

Applicant after: DMAI (GUANGZHOU) Co.,Ltd.

Address before: Room 1901, 19 / F, Lee court I, 33 Hysan Road, Causeway Bay

Applicant before: DARK MATTER (HONG KONG) INTELLIGENT TECHNOLOGY Co.,Ltd.

GR01 Patent grant