CN114119932A - VR teaching method, apparatus, electronic device, storage medium and program product - Google Patents

VR teaching method, apparatus, electronic device, storage medium and program product

Info

Publication number
CN114119932A
CN114119932A (application CN202111199977.7A)
Authority
CN
China
Prior art keywords
learning
learner
information
prediction model
process data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111199977.7A
Other languages
Chinese (zh)
Inventor
刘希未
边思宇
宫晓燕
赵红霞
唐瑛
荆思凤
王晓
王飞跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202111199977.7A
Publication of CN114119932A
Legal status: Pending

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/20 - Education
    • G06Q 50/205 - Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Economics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a VR teaching method, a VR teaching apparatus, an electronic device, a storage medium and a program product. The method comprises: collecting learning process data of a learner when it is detected that the learner is learning through a VR learning course; and performing learning situation category analysis on the learning process data to obtain learning situation information of the learner, wherein the learning situation information is used for intelligent learning guidance. According to the invention, the learning process of the learner is analyzed by obtaining the learner's learning process data, so that accurate and personalized learning evaluation and teaching guidance are provided for the learner, the learner's learning experience is improved, and the intelligence level of VR teaching is ultimately raised.

Description

VR teaching method, apparatus, electronic device, storage medium and program product
Technical Field
The invention relates to the technical field of intelligent teaching, in particular to a VR teaching method, a VR teaching device, electronic equipment, a storage medium and a program product.
Background
Intelligent teaching is an important research field in educational technology. Following the basic principles of education science, it uses artificial intelligence technology to raise the teaching level of human teachers, thereby helping each learner acquire the required knowledge and strengthen weak skills.
Virtual Reality (VR) is a computer simulation technology that can create a virtual world and allow users to experience it. VR technology uses a computer and a display device to build a simulated environment in which the user is immersed, and the presented content can break the limitations of time and space. Because it provides multi-sensory stimulation and an intuitive, immersive experience, VR technology is widely used in the field of intelligent teaching to improve learners' learning experience and interest through immersive, game-like learning.
At present, a VR teaching system uses VR equipment as the carrier and video as the medium to create and present a vivid learning environment. However, the VR learning course of such a VR teaching system is fixed and identical for different learners, so most learners cannot adapt to it, which degrades their learning experience. The existing VR teaching therefore suffers from a low level of intelligence.
Disclosure of Invention
The invention provides a VR teaching method, a VR teaching device, electronic equipment, a storage medium and a program product, which are used for overcoming the defect of low intelligent level of VR teaching in the prior art and realizing intelligent VR teaching.
The invention provides a VR teaching method, which comprises the following steps:
collecting learning process data of a learner when the learner is detected to learn through a VR learning course;
and performing learning situation category analysis on the learning process data to obtain learning situation information of the learner, wherein the learning situation information is used for intelligent learning guidance.
According to the VR teaching method provided by the invention, the step of collecting learning process data of the learner comprises the following steps:
acquiring eye movement information and first interaction information of the learner through eye movement equipment; and/or,
and acquiring second interactive information of the learner through a control device, wherein the control device is a device for controlling the VR learning course by the learner.
According to the VR teaching method provided by the invention, the method for acquiring the eye movement information and the first interaction information of the learner through the eye movement equipment comprises the following steps:
collecting, through head-mounted VR eye movement equipment, the learner's eye movement information, the first interaction information, and the course test results of the VR learning course.
According to the VR teaching method provided by the present invention, the analyzing learning situation types of the learning process data to obtain the learning situation information of the learner includes:
and inputting the learning process data into a trained learning situation prediction model, and performing learning situation type prediction to obtain learning situation information of the learner output by the learning situation prediction model.
According to the VR teaching method provided by the invention, the learning situation prediction model comprises a learning behavior prediction model, a cognitive style prediction model and a digital portrait prediction model, and the inputting the learning process data into the trained learning situation prediction model to perform learning situation category prediction to obtain the learning situation information of the learner comprises the following steps:
inputting the learning process data into the learning behavior prediction model, and performing learning behavior category prediction to obtain the learning behavior of the learner output by the learning behavior prediction model;
inputting the learning process data into the cognitive style prediction model to perform cognitive style type prediction to obtain the cognitive style of the learner;
and inputting the learning process data into the digital portrait prediction model to perform digital portrait type prediction so as to obtain the digital portrait of the learner output by the digital portrait prediction model.
According to the VR teaching method provided by the present invention, after the learning situation category analysis is performed on the learning process data to obtain the learning situation information of the learner, the VR teaching method further includes:
updating learning information of the learner based on the learning situation information;
and updating the VR learning course based on the learning information so that the learner can learn through the updated VR learning course.
The present invention also provides a VR teaching apparatus, comprising:
the data acquisition device is used for acquiring learning process data of a learner when the learner is detected to learn through the VR learning course;
and the learning situation analysis device is used for carrying out learning situation type analysis on the learning process data to obtain learning situation information of the learner, and the learning situation information is used for intelligently guiding learning.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the VR teaching method as described in any one of the above.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the VR teaching method as described in any one of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the VR teaching method as described in any one of the above.
According to the VR teaching method, the VR teaching device, the electronic equipment, the storage medium and the program product, when it is detected that a learner learns through a VR learning course, learning process data of the learner are collected, and then learning situation type analysis is carried out on the learning process data to obtain learning situation information of the learner so that the learning situation information can be used for intelligent guidance. Through the mode, the learning process of the learner is analyzed by obtaining the learning process data of the learner, so that accurate and personalized learning evaluation and teaching guidance are realized for the learner, the learning experience of the learner is further improved, and the intelligent level of VR teaching is finally improved.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is one flowchart of the VR teaching method provided by the present invention;
FIG. 2 is a second flowchart of the VR teaching method provided by the present invention;
FIG. 3 is a schematic diagram of the VR teaching apparatus provided by the present invention;
FIG. 4 illustrates a physical structure diagram of an electronic device.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a VR teaching method provided by the present invention, and as shown in fig. 1, the VR teaching method provided by the present invention includes:
step 100, collecting learning process data of a learner when the learner is detected to learn through a VR learning course;
in this embodiment, the VR teaching method can be applied to a VR teaching system, and the VR teaching system can include a VR teaching module, a learner data acquisition module, and a learner data analysis module. Further, the VR teaching system can also include a learner data storage module.
The VR teaching module is used for providing VR learning courses for learners to learn, and particularly, the VR teaching module is used for providing interactive learning materials by combining personal basic information and learning information of the learners so as to output learning condition reports of the learners. The VR teaching module may include learner personal basic information, learner learning information, VR learning lessons, and the like.
The learner's personal basic information comprises name, grade, age, historical performance in tests related to VR learning courses, and the like. The learner's learning information includes learning behaviors, cognitive styles, digital portraits, and the like. A VR learning course includes the development and interaction of teaching content (i.e., knowledge points), mastery testing of the knowledge points (i.e., course tests), and the learner's learning report.
The learner's learning report comprises the learner's learning information at the current stage, the course test performance, and suggestions on teaching methods and strategies, and is provided for the teacher and the learner to view.
The VR learning course is used by the learner for study and includes VR video. The VR learning course may cover any subject or field, such as a geography VR learning course, a biology VR learning course, a kinematics VR learning course, and the like, without limitation. Further, the VR learning course may be provided by the VR teaching module, which may be a VR device.
In one embodiment, learning process data of a learner is collected upon detecting that the learner has learned a VR learning course through a VR teaching module. Specifically, when it is detected that a learner learns the VR learning course through the VR device, learning process data of the learner is collected.
In one embodiment, learning process data of the learner is collected by a learner data collection module. The learner data acquisition module is used for acquiring learning process data of a learner, and can comprise an eye movement device, a control device, a positioner, a bracket and the like.
The eye movement equipment is used for acquiring the learner's eye movement information and the interaction information with the course. The eye movement equipment may be head-mounted so that it is convenient for the learner to wear and the learner's eye movement information and interaction information can be acquired accurately. The eye movement equipment may be combined with the VR equipment to obtain head-mounted VR eye movement equipment, which is used for collecting the learner's eye movement information, the interaction information with the course, and the learner's course test results.
It should be noted that the eye movement equipment incorporates eye-tracking technology, through which detailed information such as the learner's fixation points and gaze movement while performing a given task can be obtained accurately; this can be used to analyze indicators such as the learner's psychological state, learning style and cognitive process during learning.
And the control equipment is used for acquiring the interaction information of the learner and the course. The control device can comprise a control handle, a mouse, a keyboard and the like.
And the positioner is used for ensuring that the VR equipment and the eye movement equipment can acquire various kinds of information in the field.
The support is used for fixing the positioner to stably obtain various kinds of information collected by the VR equipment and the eye movement equipment.
The learning process data is data generated during the learner's learning process and can represent the learner's cognitive reasoning process and cognitive logic, i.e., the learning process data is used for judging what the learner is doing. The learning process data may include eye movement information, interaction information, and course test results of the VR learning course.
The eye movement information includes blinks, fixations, eye closures, gaze point coordinates, and the like. The interaction information includes operations on the VR equipment, operations on the eye movement equipment, operations on the control handle, operations within the VR learning course, and the like. The course test results are obtained through tests in the VR learning course.
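For illustration only (this structure is not part of the original disclosure), the learning process data described above could be organized as in the following Python sketch; every class and field name here is an assumed example.

```python
# Minimal sketch (not from the patent): one possible in-memory layout for the
# learning process data described above. All class and field names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class EyeMovementSample:
    timestamp: float                    # seconds since the course started
    gaze_x: float                       # gaze point coordinates
    gaze_y: float
    blink: bool
    eyes_closed: bool
    fixation_id: Optional[int] = None   # identifier of the current fixation, if any


@dataclass
class InteractionEvent:
    timestamp: float
    device: str                         # e.g. "vr_device", "eye_tracker", "control_handle"
    action: str                         # e.g. "select_knowledge_point", "submit_answer"
    detail: Dict[str, str] = field(default_factory=dict)


@dataclass
class LearningProcessData:
    learner_id: str
    course_id: str
    eye_movement: List[EyeMovementSample] = field(default_factory=list)
    interactions: List[InteractionEvent] = field(default_factory=list)
    course_test_scores: Dict[str, float] = field(default_factory=dict)  # test name -> score
```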
Further, after the step 100, the VR teaching method further includes:
storing the learning process data in a local database; or storing the learning process data in a cloud database. That is, the learner data storage module may comprise a local database or a cloud database. The local database is used for storing the acquired learning process data locally, for example, the data can be stored by a hard disk or a flash memory; and the cloud database is used for storing the acquired learning process data in the cloud database so as to support large-scale online education.
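A minimal sketch of the local-storage option, assuming the hypothetical LearningProcessData structure sketched earlier; the file format and directory path are assumptions, not part of the disclosure. A cloud database backend would expose a similar save interface.

```python
# Minimal sketch (assumption, not the patent's implementation): persisting one
# learner's collected session to a local JSON file.
import dataclasses
import json
from pathlib import Path


def save_learning_process_data(data, directory: str = "./learner_data") -> Path:
    """Serialize a LearningProcessData instance (see sketch above) to disk."""
    Path(directory).mkdir(parents=True, exist_ok=True)
    out_path = Path(directory) / f"{data.learner_id}_{data.course_id}.json"
    out_path.write_text(json.dumps(dataclasses.asdict(data), ensure_ascii=False, indent=2),
                        encoding="utf-8")
    return out_path
```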
Step 200: performing learning situation category analysis on the learning process data to obtain learning situation information of the learner, where the learning situation information is used for intelligent learning guidance.
In this embodiment, the learning situation information is used to represent the cognitive reasoning ability and learning characteristics of the learner. The learning situation information may include learning behaviors, cognitive styles, digital portraits, and the like.
In an embodiment, a learner data analysis module is used for analyzing the learning process data in learning condition categories to obtain learning condition information of the learner. The learner data analysis module may include a learning behavior unit, a cognitive style unit, and a digital representation unit.
The learning behavior unit is used for analyzing the learner's behavior during learning according to the learner's learning process data. The behaviors of the learner while learning include concentration, fatigue, daze, boredom, excitement, confusion, and the like. Further, the learning behavior unit may be a machine learning prediction model, i.e. the learner's learning behavior is analyzed by a machine learning algorithm. For example, learning behavior category analysis is performed on the learner's eye movement information to determine whether the learner's learning behavior is excitement or boredom.
The cognitive style unit is used for inferring the learner's cognitive style by analyzing the learner's cognitive process data, i.e. for understanding how the learner analyzes and solves problems. The learner's cognitive process data comprises abnormal behavior data and problem-solving data from the learning process data. The information to be recorded for abnormal behavior data includes the timestamp and duration of the abnormal behavior, the interactive operations during the abnormal behavior, a reconstruction of the scene, and the like; the information to be recorded for problem-solving data includes the elements and sequence of the interactive operations, key information on success/failure, all interaction data before the problem is successfully solved, the timestamp when the problem is solved, the time taken to solve the problem, and the like.
Further, the cognitive style unit may be a predictive model for machine learning, i.e. a cognitive style of the learner is analyzed by a machine learning algorithm.
The cognitive style of the learner is a habitual behavior mode expressed by the learner in the cognitive process, and the habitual behavior mode comprises an information processing mode, a thinking style, a problem solving style and the like.
The information processing modes include simultaneous processing of information and successive processing of information. It should be noted that learners who process information simultaneously are good at using divergent thinking to consider a problem comprehensively from multiple perspectives and can relate each component to the whole when solving problems; learners who process information successively usually solve problems step by step, link by link, with a clear temporal order.
The thinking styles include analysis versus synthesis, divergence versus convergence, and broad versus narrow categorization. It should be noted that analysis means the learner mentally decomposes a concept or problem in order to understand it, whereas synthesis means the learner grasps things as a whole, with less depth and accuracy of thought but stronger intuition; learners good at divergent thinking are more enthusiastic and active and can think about problems, reorganize information or interact along different directions and from different angles, whereas convergent thinking means the learner is calmer and more cautious and prefers to think in one direction based on known information and familiar rules; learners with broad categorization use fuzzy criteria and group new information into overly broad categories, whereas learners with narrow categorization use precise criteria to identify new information.
The problem-solving styles, including reflective and impulsive, reflect the speed and accuracy of the cognitive process. It should be noted that reflective learners take sufficient time to consider various solutions to a problem and then select an optimal scheme that satisfies the conditions; their reaction speed is slow, but the quality of their solutions is high. Impulsive learners often make decisions intuitively; their reaction speed is fast, but they are prone to errors.
The digital portrait unit is used for obtaining a digital portrait of the learner through the analysis of the learning process data of the learner, and the digital portrait can comprise a knowledge map of the knowledge mastered by the learner and personal learning characteristics.
Further, the digital portrait unit can be a prediction model for machine learning, i.e. a digital portrait of a learner is analyzed by a machine learning algorithm.
The knowledge map comprises the breadth, depth and difficulty of the knowledge points mastered, as well as the relationships between preceding and subsequent knowledge points.
The personal learning characteristics are measured by the learner's learning efficiency and learning interest. It should be noted that the less time spent solving a problem, the higher the degree of mastery of the knowledge point, and the more problems solved, the higher the learning efficiency; the proportion of normal (on-task) behavior during the learning process reflects the learner's learning interest.
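As a hedged illustration, the two indicators described above could be computed as in the following sketch; the patent does not give exact formulas, so these definitions are assumptions.

```python
# Minimal sketch (assumed formulas, not specified in the patent): simple
# indicators for the learning efficiency and learning interest described above.
def learning_efficiency(problems_solved: int, total_solve_time_s: float) -> float:
    """More problems solved in less time -> higher efficiency (problems per minute)."""
    return problems_solved / (total_solve_time_s / 60.0) if total_solve_time_s > 0 else 0.0


def learning_interest(normal_behavior_time_s: float, total_learning_time_s: float) -> float:
    """Proportion of normal (on-task) behavior during the session, in [0, 1]."""
    return normal_behavior_time_s / total_learning_time_s if total_learning_time_s > 0 else 0.0
```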
Further, after the step 200, the VR teaching method further includes:
determining reminding information and/or guidance operations based on the learning situation information, so that the learner learns according to the reminding information and/or guidance operations. For example, after the VR teaching system identifies abnormal behavior data or cognitive characteristics in the learning behavior unit and the cognitive style unit, corresponding reminders and guidance can be provided in real time.
In addition, the various data of the learner can be transmitted in one or more modes of a wired network, a 4G network, a 5G network, a GPRS network and a WIFI network.
In a specific embodiment, before the step 100, the VR teaching method further includes:
starting the VR learning course and obtaining the learner's personal basic information; and if the learner learns the VR learning course again, updating the learner's learning information based on the learning situation information from the previous learning session. It should be noted that, when the learner enters the VR teaching system for the first time, the learner's personal basic information and learning information need to be entered for initialization; if it is not the first time, the learning information is updated based on the previous learning session.
In a specific embodiment, after the step 200, the VR teaching method further includes:
generating a learning report based on the learning situation information, where the learning report includes the learning information, the course test results, and counseling suggestions.
According to the VR teaching method provided by the embodiment of the invention, when a learner is detected to learn through a VR learning course, learning process data of the learner are collected, and then learning situation type analysis is carried out on the learning process data to obtain learning situation information of the learner, so that the learning situation information can be used for intelligent learning guidance. Through the mode, the embodiment of the invention analyzes the learning process of the learner by obtaining the learning process data of the learner, thereby realizing accurate and personalized learning evaluation and teaching guidance for the learner, further improving the learning experience of the learner and finally improving the intelligent level of VR teaching.
Further, based on the above first embodiment, a second embodiment of the VR teaching method of the present invention is provided. In this embodiment, the step 100 of collecting the learning process data of the learner includes:
step 110, collecting the eye movement information and the first interaction information of the learner through eye movement equipment;
in this embodiment, the eye movement device is used for collecting the learner's eye movement information and the first interaction information of the lesson. The eye movement information includes: blink, gaze, eye closure, gaze point coordinates, and the like. The first interactive information includes: specifically, the first interaction information may be obtained through eye movement information analysis, for example, for the operation of the eye movement device and the operation in the VR learning course.
It should be noted that the eye movement equipment stores an eye movement tracking technology, and by the eye movement tracking technology, the detailed information such as the fixation point and the sight line movement of the learner when the learner performs a certain task can be accurately obtained, and the eye movement tracking technology can be used for analyzing indexes such as the psychological state, the learning style and the cognitive process of the learner in the learning process.
In one embodiment, the step 110 includes:
collecting the learner's eye movement information, the first interaction information, and the course test results of the VR learning course through VR eye movement equipment. The VR eye movement equipment combines VR equipment with eye movement equipment and simultaneously provides the functions of both.
The course test results are obtained through tests in the VR learning course. Specifically, the course test results determined by the VR equipment within the VR eye movement equipment may be obtained, or the course test results may be determined by integrating the eye movement information.
In another embodiment, the step 110 includes:
and step 111, collecting the eye movement information of the learner, the first interaction information and the course test result of the VR learning course through a head-mounted VR eye movement device.
Compared with the VR eye movement equipment described above, the head-mounted VR eye movement equipment can be worn on the head, which makes it convenient for the learner to wear and facilitates accurate acquisition of the learner's eye movement information and interaction information.
Step 120: acquiring second interaction information of the learner through a control device, where the control device is a device used by the learner to control the VR learning course.
In this embodiment, the control device is configured to acquire second interaction information of the learner and the lesson. The control device can comprise a control handle, a mouse, a keyboard and the like.
The interaction information includes operations on the VR equipment, operations on the eye movement equipment, operations on the control handle, operations within the VR learning course, and the like. Further, the operations on the VR equipment and on the eye movement equipment may be operations on the VR eye movement equipment.
Further, after the step 120, the VR teaching method further includes:
aggregating the first interaction information and the second interaction information to obtain aggregated interaction information. The aggregated interaction information comprises the first interaction information and the second interaction information and is used to characterize the learner's interaction with the VR learning course.
In this embodiment, the learner's eye movement information and interaction information are acquired through the eye movement equipment, so that indicators such as the learner's psychological state, learning style and cognitive process during learning can be analyzed based on this information, the learner's learning situation information is acquired accurately, and the accuracy of the VR teaching method is improved.
Further, based on the first embodiment described above, a third embodiment of the VR teaching method of the present invention is provided. In this embodiment, the step 200 includes:
step 210, inputting the learning process data into the trained learning situation prediction model, and performing learning situation category prediction to obtain the learning situation information of the learner output by the learning situation prediction model.
In the present embodiment, the learning situation prediction model is a machine learning model. Specifically, the learning situation prediction model is obtained by iteratively training a model to be trained based on learning process training data. The learner's learning situation information is the output of the learning situation prediction model.
In a particular embodiment, the learning situation prediction model includes a feature extractor and a classifier. Specifically, the learning process feature information in the learning process data is extracted based on the feature extractor in the trained learning situation prediction model, and then classification prediction is performed according to the learning process feature information and the classifier in the learning situation prediction model to obtain a classification prediction result, namely the learner's learning situation information.
The specific implementation process of the classifier is to obtain a classification probability vector, and then determine learning situation information corresponding to the maximum classification probability value in the classification probability vector.
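For illustration, a minimal sketch of this classification step, assuming a PyTorch implementation and hypothetical category names, is given below.

```python
# Minimal sketch (assuming PyTorch and hypothetical label names): turning a
# classifier's output into the predicted learning situation category, i.e.
# picking the maximum value in the classification probability vector.
import torch
import torch.nn.functional as F

LEARNING_SITUATION_LABELS = ["concentrated", "fatigued", "confused", "bored"]  # hypothetical


def predict_category(logits: torch.Tensor) -> str:
    """logits: tensor of shape (num_classes,) produced by the classifier."""
    probabilities = F.softmax(logits, dim=-1)        # classification probability vector
    best_index = int(torch.argmax(probabilities))    # maximum classification probability value
    return LEARNING_SITUATION_LABELS[best_index]
```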
In one embodiment, the learning process data includes eye movement information, interaction information, and lesson test achievements for the VR learning lesson. Correspondingly, the feature extractor of the learning situation prediction model comprises a first feature extractor, a second feature extractor and a third feature extractor.
Specifically, the eye movement feature information in the eye movement information is extracted based on a first feature extractor in the trained learning condition prediction model, the interactive feature information in the interactive information is extracted based on a second feature extractor in the trained learning condition prediction model, the achievement feature information in the curriculum test achievement is extracted based on a third feature extractor in the trained learning condition prediction model, and then the eye movement feature information, the interactive feature information and the achievement feature information are classified and predicted according to the eye movement feature information, the interactive feature information, the achievement feature information and a classifier in the learning condition prediction model to obtain a classification prediction result, namely learning condition information of the learner.
In some embodiments, the learning situation prediction model is an encoder-decoder neural network model that includes an encoder, a decoder and a classifier, and the step 210 includes:
based on the encoder, respectively performing feature extraction on an eye movement vector corresponding to the eye movement information, an interaction vector corresponding to the interaction information and a result vector corresponding to a course test result to obtain an eye movement feature vector, an interaction feature vector and a result feature vector, and performing aggregation processing on the eye movement feature vector, the interaction feature vector and the result feature vector to obtain an aggregation feature vector; decoding the aggregation characteristic vector based on the decoder to obtain a decoding vector; and performing classified prediction on the decoding vector based on the classifier to obtain a classified prediction result (namely learning situation information).
It should be noted that the encoder may be composed of a recurrent neural network, which may be an LSTM (long short-term memory) neural network, or the encoder may be composed of a deep convolutional neural network, or the like. Accordingly, the decoder may be composed of a recurrent neural network, which may be an LSTM (long short-term memory) neural network, or may be composed of a deep convolutional neural network, or the like. The classifier may be composed of fully connected layers.
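A minimal PyTorch sketch of such an encoder-decoder arrangement is given below. The text above specifies only that the encoder and decoder may be LSTMs (or convolutional networks) and that the classifier may consist of fully connected layers; the layer sizes, the use of concatenation for the aggregation step, and treating the aggregated vector as a one-step sequence for the LSTM decoder are all assumptions.

```python
# Minimal sketch, not the patent's actual architecture: encoder + decoder +
# classifier over eye movement, interaction, and course test score inputs.
import torch
import torch.nn as nn


class LearningSituationModel(nn.Module):
    def __init__(self, eye_dim: int, inter_dim: int, score_dim: int,
                 hidden: int = 64, num_classes: int = 4):
        super().__init__()
        # Encoders: one per input (eye movement sequence, interaction sequence, test scores).
        self.eye_encoder = nn.LSTM(eye_dim, hidden, batch_first=True)
        self.inter_encoder = nn.LSTM(inter_dim, hidden, batch_first=True)
        self.score_encoder = nn.Linear(score_dim, hidden)
        # Decoder: processes the aggregated feature vector (treated here as a
        # single-step sequence so that an LSTM can be used).
        self.decoder = nn.LSTM(3 * hidden, hidden, batch_first=True)
        # Classifier: fully connected layer producing class logits.
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, eye_seq, inter_seq, score_vec):
        # eye_seq: (B, T_eye, eye_dim); inter_seq: (B, T_int, inter_dim); score_vec: (B, score_dim)
        _, (eye_h, _) = self.eye_encoder(eye_seq)             # eye movement feature vector
        _, (inter_h, _) = self.inter_encoder(inter_seq)       # interaction feature vector
        score_h = torch.relu(self.score_encoder(score_vec))   # score feature vector
        # Aggregate the three feature vectors by concatenation.
        aggregated = torch.cat([eye_h[-1], inter_h[-1], score_h], dim=-1)
        decoded, _ = self.decoder(aggregated.unsqueeze(1))    # decoding vector
        return self.classifier(decoded.squeeze(1))            # class logits
```

Concatenation is used here as the aggregation operation because the text leaves the aggregation unspecified; other fusion schemes (for example summation or attention) would fit the description equally well.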
Of course, the learning situation prediction model may employ other neural network models, such as convolutional neural network models and recurrent neural network models.
In one embodiment, the learning situation prediction model includes a learning behavior prediction model, a cognitive style prediction model, and a digital portrait prediction model, the learning situation information includes a learning behavior, a cognitive style, and a digital portrait, and step 210 includes:
step 211, inputting the learning process data into the learning behavior prediction model, performing learning behavior category prediction, and obtaining the learning behavior of the learner output by the learning behavior prediction model;
in the present embodiment, the learning behavior prediction model is a machine learning model. Specifically, the learning behavior prediction model is a model obtained by performing iterative training on a model to be trained based on learning behavior training data. The learning behavior of the learner is the output of the learning behavior prediction model.
In a particular embodiment, the learning behavior prediction model includes a feature extractor and a classifier. Specifically, the learning process characteristic information in the learning process data is extracted based on a characteristic extractor in the trained learning behavior prediction model, and then the learning process characteristic information is classified and predicted according to the learning process characteristic information and a classifier in the learning behavior prediction model to obtain a classification prediction result, namely the learning behavior of the learner.
The specific implementation process of the classifier is to obtain a classification probability vector, and then determine a learning behavior corresponding to the maximum classification probability value in the classification probability vector.
In one embodiment, the learning process data includes eye movement information, interaction information, and lesson test achievements for the VR learning lesson. Accordingly, the feature extractor of the learning behavior prediction model includes a first feature extractor, a second feature extractor, and a third feature extractor.
Specifically, the eye movement feature information in the eye movement information is extracted based on a first feature extractor in the trained learning behavior prediction model, the interactive feature information in the interactive information is extracted based on a second feature extractor in the trained learning behavior prediction model, the achievement feature information in the curriculum test achievement is extracted based on a third feature extractor in the trained learning behavior prediction model, and then the eye movement feature information, the interactive feature information and the achievement feature information are classified and predicted according to the eye movement feature information, the interactive feature information, the achievement feature information and a classifier in the learning behavior prediction model to obtain a classification prediction result, namely the learning behavior of the learner.
To train the learning behavior prediction model, before the step 211, the VR teaching method further includes:
acquiring learning process training data, and labeling the learning process training data with learning behavior labels to obtain learning behavior label data; obtaining a model to be trained, and selecting training sample data from the learning process training data and the learning behavior label data; and iteratively training the model to be trained based on the training sample data to obtain the learning behavior prediction model.
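A minimal sketch of the labeling and sample-selection step just described, assuming each learning process record is a plain dictionary; the helper name and signature are hypothetical.

```python
# Minimal sketch (hypothetical helper, not from the patent): pairing learning
# process training records with manually assigned learning behavior labels and
# selecting training samples from the result.
import random
from typing import List, Sequence, Tuple


def build_training_samples(process_records: Sequence[dict],
                           behavior_labels: Sequence[str],
                           sample_count: int) -> List[Tuple[dict, str]]:
    """Attach a learning behavior label to each record, then pick training samples."""
    assert len(process_records) == len(behavior_labels)
    labeled = list(zip(process_records, behavior_labels))  # learning behavior label data
    random.shuffle(labeled)
    return labeled[:sample_count]                          # selected training sample data
```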
Step 212, inputting the learning process data into the cognitive style prediction model to perform cognitive style type prediction to obtain the cognitive style of the learner;
in this embodiment, the cognitive style prediction model is a machine learning model. Specifically, the cognitive style prediction model is obtained by performing iterative training on a model to be trained based on cognitive style training data. The cognitive style of the learner is the output of the cognitive style prediction model.
In a particular embodiment, the cognitive style prediction model includes a feature extractor and a classifier. Specifically, learning process characteristic information in learning process data is extracted based on a characteristic extractor in a trained cognitive style prediction model, and then the learning process characteristic information is classified and predicted according to the learning process characteristic information and a classifier in the cognitive style prediction model to obtain a classification prediction result, namely the cognitive style of a learner.
The specific implementation process of the classifier is to obtain a classification probability vector, and then determine the cognitive style corresponding to the maximum classification probability value in the classification probability vector.
In one embodiment, the learning process data includes eye movement information, interaction information, and lesson test achievements for the VR learning lesson. Accordingly, the feature extractor of the cognitive style prediction model includes a first feature extractor, a second feature extractor, and a third feature extractor.
Specifically, the eye movement feature information in the eye movement information is extracted based on a first feature extractor in a trained cognitive style prediction model, the interactive feature information in the interactive information is extracted based on a second feature extractor in the trained cognitive style prediction model, the achievement feature information in the curriculum test achievement is extracted based on a third feature extractor in the trained cognitive style prediction model, and then the eye movement feature information, the interactive feature information and the achievement feature information are classified and predicted according to the eye movement feature information, the interactive feature information, the achievement feature information and a classifier in the cognitive style prediction model to obtain a classification prediction result, namely the cognitive style of the learner.
To train the cognitive style prediction model, before step 212, the VR teaching method further includes:
acquiring learning process training data, and labeling the learning process training data aiming at the cognitive style to obtain cognitive style label data; obtaining a model to be trained, and selecting training sample data from the training data in the learning process and the cognitive style label data; and performing iterative training on the model to be trained on the basis of the training sample data to obtain the cognitive style prediction model.
Step 213, inputting the learning process data into the digital portrait prediction model to perform digital portrait type prediction, so as to obtain the digital portrait of the learner outputted by the digital portrait prediction model.
In this embodiment, the digital portrait prediction model is a machine learning model. Specifically, the digital portrait prediction model is a model obtained by performing iterative training on a model to be trained based on digital portrait training data. The digital portrait of the learner is the output of the digital portrait prediction model.
In a particular embodiment, the digital representation prediction model includes a feature extractor and a classifier. Specifically, the learning process characteristic information in the learning process data is extracted based on a characteristic extractor in the trained digital portrait prediction model, and then the learning process characteristic information is classified and predicted according to the learning process characteristic information and a classifier in the digital portrait prediction model to obtain a classification prediction result, namely the digital portrait of the learner.
The classifier is specifically executed to obtain a classification probability vector, and then, a digital portrait corresponding to a maximum classification probability value in the classification probability vector is determined.
In one embodiment, the learning process data includes eye movement information, interaction information, and lesson test achievements for the VR learning lesson. Accordingly, the feature extractor of the digital portrait prediction model includes a first feature extractor, a second feature extractor, and a third feature extractor.
Specifically, the eye movement feature information in the eye movement information is extracted based on a first feature extractor in a trained digital portrait prediction model, the interactive feature information in the interactive information is extracted based on a second feature extractor in the trained digital portrait prediction model, the achievement feature information in the curriculum test achievement is extracted based on a third feature extractor in the trained digital portrait prediction model, and then the eye movement feature information, the interactive feature information and the achievement feature information are classified and predicted according to the eye movement feature information, the interactive feature information, the achievement feature information and a classifier in the digital portrait prediction model to obtain a classification prediction result, namely the digital portrait of the learner.
To train the digital portrait prediction model, prior to step 213, the VR teaching method further includes:
acquiring training data in a learning process, and labeling labels aiming at the digital portrait on the training data in the learning process to obtain digital portrait label data; acquiring a model to be trained, and selecting training sample data from the training data in the learning process and the digital portrait label data; and performing iterative training on the model to be trained on the basis of the training sample data to obtain the digital portrait prediction model.
To train the learning situation prediction model, in an embodiment, before step 210, the VR teaching method further includes:
acquiring learning process training data, and labeling learning condition information labels on the learning process training data to obtain learning condition information label data; acquiring a model to be trained, and selecting training sample data from the training data in the learning process and the learning situation information label data; and performing iterative training on the model to be trained on the basis of the training sample data to obtain the learning situation prediction model.
Specifically, each learning process representation value in the learning process training data is extracted, and then corresponding learning situation information is matched for the learning process training data based on each learning process representation value, so that learning situation information label data is obtained.
Wherein the learning process training data comprises at least one learning process.
In this embodiment, the training sample data at least includes a training sample, and the training sample includes a learning process from the training data of the learning process and a learning context information tag from the learning context information tag data.
Further, the training sample data is divided into a training set and a test set, for example according to a certain proportion. The training set is used for training the model, and the test set is used for testing the model.
In this embodiment, a training sample is selected from the training sample data, the learning process corresponding to the training sample is input into the model to be trained, and model prediction is performed to obtain a model output label; the difference between the model output label and the learning situation information label corresponding to the training sample is then calculated to obtain the model loss, and the model to be trained is updated based on the model loss until the number of iterations of the model to be trained reaches a preset number or the corresponding loss function (objective function) reaches a preset value.
It should be noted that the suitable number of iterations may be continuously adjusted in combination with the training effect. In addition, through gradient descent, the optimal weight value which enables the target function to be minimum can be found, the weight value can be automatically learned through training, and then the model to be trained is updated.
In addition, it should be noted that the training set is used for training, where a smaller objective function value is better, and the test set is used to evaluate the model after each training round; once the model converges, the model weights are exported to obtain the final learning situation prediction model.
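As an illustration of the training procedure described above, the following sketch assumes the PyTorch model sketched earlier and data loaders yielding (eye_seq, inter_seq, score_vec, label) batches; the optimizer, learning rate and iteration count are assumptions rather than values from the patent.

```python
# Minimal sketch (assumed setup): iterative training of the learning situation
# prediction model with per-round evaluation on the test set.
import torch
import torch.nn as nn


def train_model(model, train_loader, test_loader, max_iterations: int = 50, lr: float = 1e-3):
    criterion = nn.CrossEntropyLoss()                         # model loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # gradient-based weight updates
    for epoch in range(max_iterations):                       # preset iteration count
        model.train()
        for eye_seq, inter_seq, score_vec, labels in train_loader:
            optimizer.zero_grad()
            logits = model(eye_seq, inter_seq, score_vec)
            loss = criterion(logits, labels)                  # difference between output and label
            loss.backward()
            optimizer.step()                                  # update weights to reduce the objective
        model.eval()
        correct = total = 0
        with torch.no_grad():                                 # evaluate on the test set each round
            for eye_seq, inter_seq, score_vec, labels in test_loader:
                predictions = model(eye_seq, inter_seq, score_vec).argmax(dim=-1)
                correct += (predictions == labels).sum().item()
                total += labels.numel()
        print(f"epoch {epoch}: test accuracy {correct / max(total, 1):.3f}")
    return model
```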
In this embodiment, the learning process data is analyzed automatically based on the constructed and trained learning situation prediction model to obtain the learner's learning situation information, and more accurate personalized teaching is then carried out based on the intelligently analyzed learning situation information, thereby further improving the accuracy of VR teaching and ultimately further raising the intelligence level of VR teaching.
Further, based on the above embodiments, a fourth embodiment of the VR teaching method of the present invention is provided. Fig. 2 is a second flowchart of the VR teaching method provided in the present invention, and as shown in fig. 2, in this embodiment, after the step 200, the VR teaching method further includes:
step 300, updating the learning information of the learner based on the learning situation information;
in the present embodiment, the learning information of the learner includes learning behavior, cognitive style, digital representation, and the like. If the learning information also comprises learning behaviors, cognitive styles and digital portraits, the learning behaviors, the cognitive styles and the digital portraits in the learning information are replaced by the learning behaviors, the cognitive styles and the digital portraits in the original learning information so as to update the learning information of learners in real time.
Specifically, when it is detected that the learner needs to learn the VR learning course again, the learning information of the learner is updated based on the learning situation information. In addition, if it is detected that the learner does not need to learn the VR learning course again, the learner exits from the VR teaching system, and the learning information of the learner may not be updated.
In one embodiment, the step 300 includes:
sending the learning situation information to a VR teaching module; and updating the learning information of the learner in the VR teaching module based on the learning situation information. It should be noted that, the learning information of the learner in the VR teaching module is updated, so that the VR teaching module can perform teaching based on the latest learning information of the learner.
Step 400, updating the VR learning course based on the learning information, so that the learner can learn through the updated VR learning course.
Specifically, based on the learning information, learning assistance of the learner is generated; updating the VR learning course based on the learning aid for the learner to learn via the updated VR learning course, i.e., to learn the VR learning course again according to the learning aid.
In an embodiment, based on the learning information, the VR learning course in the VR teaching module is updated for the learner to learn through the updated VR learning course.
In this embodiment, the learner's learning information is updated based on the learning situation information, and the VR learning course is then updated, so that when the learner learns the VR learning course again, it can be personalized according to the previous learning situation, thereby meeting the learner's personalized needs; compared with a uniform VR learning course, the intelligence level of VR teaching is further improved.
In the following, the VR teaching apparatus provided by the present invention is described, and the VR teaching apparatus described below and the VR teaching method described above may be referred to in correspondence with each other.
Fig. 3 is a schematic view of a VR teaching device provided by the present invention, and as shown in fig. 3, the VR teaching device provided by the present invention includes:
the data acquisition device 310 is used for acquiring learning process data of a learner when the learner is detected to learn through the VR learning course;
the learning context analyzing device 320 is configured to perform learning context category analysis on the learning process data to obtain learning context information of the learner, where the learning context information is used for intelligent learning guidance.
Fig. 4 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 4: a processor (processor) 410, a communication Interface 420, a memory (memory) 430 and a communication bus 440, wherein the processor 410, the communication Interface 420 and the memory 430 communicate with each other via the communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a VR teaching method comprising: collecting learning process data of a learner when it is detected that the learner is learning through a VR learning course; and performing learning situation category analysis on the learning process data to obtain learning situation information of the learner, wherein the learning situation information is used for intelligent learning guidance.
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of executing the VR teaching method provided by the above methods, the method comprising: collecting learning process data of a learner when the learner is detected to learn through a VR learning course; and analyzing the learning process data according to learning condition categories to obtain learning condition information of the learner, wherein the learning condition information is used for intelligent learning guidance.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the VR teaching method provided by the above methods, the method comprising: collecting learning process data of a learner when it is detected that the learner is learning through a VR learning course; and performing learning situation category analysis on the learning process data to obtain learning situation information of the learner, where the learning situation information is used for intelligent learning guidance.
The above-described apparatus embodiments are merely illustrative; the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A VR teaching method, comprising:
collecting learning process data of a learner when it is detected that the learner is learning through a VR learning course; and
performing learning situation category analysis on the learning process data to obtain learning situation information of the learner, wherein the learning situation information is used for intelligent learning guidance.
2. The VR teaching method of claim 1, wherein the collecting learning process data of the learner comprises:
acquiring eye movement information and first interaction information of the learner through an eye tracking device; and/or
acquiring second interaction information of the learner through a control device, wherein the control device is a device used by the learner to control the VR learning course.
3. The VR teaching method of claim 2, wherein the acquiring eye movement information and first interaction information of the learner through an eye tracking device comprises:
acquiring, through a head-mounted VR eye tracking device, eye movement information of the learner, the first interaction information, and course test results of the VR learning course.
4. The VR teaching method of claim 1, wherein the performing learning situation category analysis on the learning process data to obtain learning situation information of the learner comprises:
inputting the learning process data into a trained learning situation prediction model to perform learning situation category prediction, so as to obtain the learning situation information of the learner output by the learning situation prediction model.
5. The VR teaching method of claim 4, wherein the learning situation prediction model comprises a learning behavior prediction model, a cognitive style prediction model, and a digital portrait prediction model, and the inputting the learning process data into the trained learning situation prediction model to perform learning situation category prediction to obtain the learning situation information of the learner comprises:
inputting the learning process data into the learning behavior prediction model to perform learning behavior category prediction, so as to obtain the learning behavior of the learner output by the learning behavior prediction model;
inputting the learning process data into the cognitive style prediction model to perform cognitive style category prediction, so as to obtain the cognitive style of the learner output by the cognitive style prediction model; and
inputting the learning process data into the digital portrait prediction model to perform digital portrait category prediction, so as to obtain the digital portrait of the learner output by the digital portrait prediction model.
6. The VR teaching method of any of claims 1 to 5, further comprising, after the performing learning situation category analysis on the learning process data to obtain learning situation information of the learner:
updating learning information of the learner based on the learning situation information; and
updating the VR learning course based on the learning information, so that the learner learns through the updated VR learning course.
7. A VR teaching apparatus, comprising:
a data acquisition device, configured to collect learning process data of a learner when it is detected that the learner is learning through a VR learning course; and
a learning situation analysis device, configured to perform learning situation category analysis on the learning process data to obtain learning situation information of the learner, wherein the learning situation information is used for intelligent learning guidance.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the steps of the VR teaching method of any of claims 1 to 6 are implemented when the program is executed by the processor.
9. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the VR teaching method of any of claims 1 to 6.
10. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, performs the steps of the VR teaching method of any of claims 1 to 6.
CN202111199977.7A 2021-10-14 2021-10-14 VR teaching method, apparatus, electronic device, storage medium and program product Pending CN114119932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111199977.7A CN114119932A (en) 2021-10-14 2021-10-14 VR teaching method, apparatus, electronic device, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111199977.7A CN114119932A (en) 2021-10-14 2021-10-14 VR teaching method, apparatus, electronic device, storage medium and program product

Publications (1)

Publication Number Publication Date
CN114119932A true CN114119932A (en) 2022-03-01

Family

ID=80375647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111199977.7A Pending CN114119932A (en) 2021-10-14 2021-10-14 VR teaching method, apparatus, electronic device, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114119932A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115309258A (en) * 2022-05-27 2022-11-08 中国科学院自动化研究所 Intelligent learning guiding method and device and electronic equipment
CN115309258B (en) * 2022-05-27 2024-03-15 中国科学院自动化研究所 Intelligent learning guiding method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination