CN111258433B - Teaching interaction system based on virtual scene - Google Patents

Teaching interaction system based on virtual scene

Info

Publication number
CN111258433B
CN111258433B
Authority
CN
China
Prior art keywords
teaching
data
virtual scene
scene
sub
Prior art date
Legal status
Active
Application number
CN202010137104.2A
Other languages
Chinese (zh)
Other versions
CN111258433A (en)
Inventor
王鑫
Current Assignee
Shanghai Squirrel Classroom Artificial Intelligence Technology Co Ltd
Original Assignee
Shanghai Squirrel Classroom Artificial Intelligence Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Squirrel Classroom Artificial Intelligence Technology Co Ltd
Priority to CN202010137104.2A
Publication of CN111258433A
Application granted
Publication of CN111258433B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/20: Education
    • G06Q 50/205: Education administration or guidance


Abstract

The invention provides a teaching interaction system based on a virtual scene. The system preprocesses teaching related data into scene construction preparation data, builds teaching virtual scenes in different modes from that data, and adjusts the running state and/or running parameters of the current teaching virtual scene according to state change data of teachers and/or students while the scene runs. It can therefore switch to an adapted teaching scene for different teaching contents and teaching requirements, which improves the interactivity and scene variability of the teaching process, fully integrates virtual scene technology into teaching, and thereby improves teaching efficiency and teaching interest.

Description

Teaching interaction system based on virtual scene
Technical Field
The invention relates to the technical field of intelligent interactive teaching, and in particular to a teaching interaction system based on a virtual scene.
Background
At present, intelligent teaching is the main direction in which teaching modes are developing. Intelligent teaching can satisfy the needs of different teachers and students, and it can deliver courses anytime and anywhere through online teaching, which greatly improves its applicability to different users and its flexibility in time and place. However, the intelligent teaching modes in the prior art are limited to one-way course delivery: they cannot realize teaching interaction between teachers and students, and they cannot switch to an adapted teaching scene according to different teaching contents and teaching requirements, which seriously affects the interactivity and scene variability of intelligent teaching. In addition, the existing teaching modes cannot fully integrate virtual scene technology into actual teaching, and therefore cannot improve the teaching efficiency and interest of interactive teaching.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a teaching interaction system based on a virtual scene. The system comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjustment module. The actual teaching data acquisition module is used for acquiring teaching related data about teacher objects and/or student objects in the historical teaching process; the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data; the teaching virtual scene construction module is used for constructing teaching virtual scenes in different modes according to the scene construction preparation data; the scene operation object monitoring module (the virtual scene operation monitoring module) is used for acquiring object state change data of the teacher object and/or the student object while the teaching virtual scene runs; and the virtual scene adjustment module is used for adaptively adjusting the running state and/or running parameters of the current teaching virtual scene according to the object state change data. In this way, the system builds teaching virtual scenes in different modes from the scene construction preparation data obtained by preprocessing teaching related data, adjusts the running state and/or running parameters of the scene according to the state change data of teachers and/or students during scene operation, and can switch to an adapted teaching scene for different teaching contents and teaching requirements. This improves the interactivity and scene variability of the teaching process, fully integrates virtual scene technology into teaching, and thereby improves teaching efficiency and teaching interest.
The invention provides a teaching interaction system based on a virtual scene, which is characterized in that:
the virtual scene-based teaching interaction system comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjustment module; wherein,
the actual teaching data acquisition module is used for acquiring teaching related data about teacher objects and/or student objects in the history teaching process;
the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data;
the teaching virtual scene construction module is used for constructing teaching virtual scenes in different modes according to the scene construction preparation data;
the scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the teaching virtual scene operation process;
the virtual scene adjusting module is used for adjusting the adaptive running state and/or running parameters of the current teaching virtual scene according to the object state change data;
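As an illustration only (not part of the claimed subject matter), the following Python sketch shows one way the five modules listed above could be wired together into a pipeline; all class names, method names and data fields are assumptions introduced here for readability, not terms defined by the invention.

from dataclasses import dataclass, field

@dataclass
class SceneConstructionData:
    """Hypothetical container for scene construction preparation data."""
    records: list = field(default_factory=list)

class ActualTeachingDataAcquisitionModule:
    def acquire(self) -> list:
        """Collect teaching related data about teacher and/or student objects
        from the historical teaching process (environment, teaching state, learning state)."""
        return []

class TeachingDataProcessingModule:
    def preprocess(self, raw: list) -> SceneConstructionData:
        """Identify data attributes, classify, pick valid data and transform it
        into scene construction preparation data."""
        return SceneConstructionData(records=list(raw))

class TeachingVirtualSceneConstructionModule:
    def build(self, prep: SceneConstructionData) -> dict:
        """Construct teaching virtual scenes in different modes from the preparation data."""
        return {"mode": "default", "sub_scenes": prep.records}

class VirtualSceneOperationMonitoringModule:
    def observe(self) -> dict:
        """Collect object state change data (external environment, teacher, student)
        while the teaching virtual scene runs."""
        return {}

class VirtualSceneAdjustmentModule:
    def adjust(self, scene: dict, state_change: dict) -> dict:
        """Adaptively adjust the running state and/or running parameters of the current scene."""
        adjusted = dict(scene)
        adjusted["last_state_change"] = state_change
        return adjusted

def run_teaching_interaction() -> dict:
    """Acquisition -> preprocessing -> scene construction -> monitoring -> adjustment."""
    raw = ActualTeachingDataAcquisitionModule().acquire()
    prep = TeachingDataProcessingModule().preprocess(raw)
    scene = TeachingVirtualSceneConstructionModule().build(prep)
    state_change = VirtualSceneOperationMonitoringModule().observe()
    return VirtualSceneAdjustmentModule().adjust(scene, state_change)

The sketch only mirrors the data flow described above; the concrete behaviour of each module is specified in the sub-modules that follow.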
further, the actual teaching data acquisition module comprises a teaching objective data acquisition sub-module, a teacher object related data acquisition sub-module and a student object related data acquisition sub-module; wherein,
The teaching objective data acquisition submodule is used for acquiring corresponding teaching environment data and/or teaching knowledge content data in different history teaching phases to serve as a part of teaching related data;
the teacher object related data acquisition submodule is used for acquiring teaching state data about a teacher object in different history teaching phases to serve as a part of teaching related data;
the student object related data acquisition submodule is used for acquiring learning state data about student objects in different historic teaching phases to serve as a part of teaching related data;
further, the actual teaching data acquisition module further comprises a history teaching stage decomposition sub-module, a teacher object determination sub-module and a student object determination sub-module; wherein,
the history teaching stage decomposition sub-module is used for decomposing the history teaching process according to a preset teaching progress and/or a preset teaching course setting so as to obtain the corresponding different history teaching stages;
the teacher object determining submodule is used for determining a teacher object which corresponds to the teacher object related data acquisition submodule according to preset teaching requirements;
The student object determination submodule is used for determining the student objects on which the student object related data acquisition submodule correspondingly acts;
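As a hedged illustration of the history teaching stage decomposition sub-module described above, the sketch below splits historical teaching records into stages using preset course-schedule boundaries; the function name, record format and example dates are assumptions, not part of the invention.

from datetime import datetime

def decompose_history_teaching(records, stage_boundaries):
    """Split historical teaching records into history teaching stages using preset
    course-schedule boundaries. `records` are (timestamp, payload) pairs and
    `stage_boundaries` is a sorted list of datetimes; both formats are assumptions."""
    stages = [[] for _ in range(len(stage_boundaries) + 1)]
    for timestamp, payload in records:
        # Count how many boundaries the record has passed; that is its stage index.
        index = sum(1 for boundary in stage_boundaries if timestamp >= boundary)
        stages[index].append(payload)
    return stages

# Usage: two preset boundaries split the history teaching process into three stages.
boundaries = [datetime(2019, 9, 1), datetime(2020, 1, 1)]
records = [
    (datetime(2019, 5, 10), "teaching environment data, lesson 1"),
    (datetime(2019, 11, 3), "teacher teaching state data, lesson 7"),
    (datetime(2020, 2, 20), "student learning state data, lesson 12"),
]
stages = decompose_history_teaching(records, boundaries)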
further, the teaching data processing module comprises a data attribute identification sub-module, a data classification sub-module, a data picking sub-module and a data transformation sub-module; wherein,
the data attribute identification submodule is used for carrying out data attribute identification processing on the teaching related data in terms of data persistence form and/or data content so as to obtain attribute information on the teaching related data;
the data classification submodule is used for carrying out classification processing on the teaching related data according to the attribute information so as to obtain teaching related data sets related to different data persistence forms and/or different data contents;
the data picking submodule is used for performing picking processing on the teaching related data set with respect to data validity, so as to obtain an effective teaching related data set meeting a preset validity condition;
the data transformation submodule is used for carrying out transformation processing on the effective teaching related data set with respect to teaching scene matching so as to obtain corresponding scene construction preparation data;
Further, the data picking submodule comprises a data confidence coefficient calculating unit, a confidence coefficient judging unit and a data picking executing unit; wherein,
the data confidence coefficient calculating unit is used for calculating an actual data confidence coefficient value corresponding to the teaching related data set;
the confidence degree judging unit is used for comparing and judging the actual data confidence degree value with an expected data confidence degree range so as to determine the data validity of the teaching related data set;
the data picking execution unit is used for executing the picking processing according to the data effectiveness so as to obtain the effective teaching related data set meeting preset effectiveness conditions;
or,
the data transformation submodule comprises a teaching scene matching degree calculation unit and a data transformation execution unit; wherein,
the teaching scene matching degree calculating unit is used for calculating a teaching scene matching degree value corresponding to the effective teaching related data set;
the data transformation execution unit is used for executing the transformation processing according to the teaching scene matching degree value so as to obtain corresponding scene construction preparation data;
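The data picking and data transformation submodules described above can be pictured with the following minimal sketch; the confidence range, matching threshold and scoring callables are assumed placeholders, since the invention does not fix concrete formulas at this point.

def pick_valid_datasets(datasets, confidence_of, expected_range=(0.8, 1.0)):
    """Data picking: keep only teaching related data sets whose actual data confidence
    value falls within the expected data confidence range."""
    low, high = expected_range
    return [dataset for dataset in datasets if low <= confidence_of(dataset) <= high]

def transform_for_scene_matching(valid_datasets, matching_degree_of, threshold=0.6):
    """Data transformation: turn the effective teaching related data sets into scene
    construction preparation data, keeping the computed teaching scene matching degree."""
    preparation_data = []
    for dataset in valid_datasets:
        degree = matching_degree_of(dataset)
        if degree >= threshold:
            preparation_data.append({"data": dataset, "matching_degree": degree})
    return preparation_data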
further, the teaching virtual scene construction module comprises a teaching virtual scene deep learning neural network sub-module, a teaching virtual sub-scene matching sub-module, a teaching virtual sub-scene splicing sub-module and a teaching virtual scene pre-judging sub-module; wherein,
The teaching virtual scene deep learning neural network sub-module is used for analyzing and processing the scene construction preparation data through a preset teaching virtual scene deep learning neural network model so as to obtain a plurality of teaching virtual sub-scenes;
the teaching virtual sub-scene matching sub-module is used for calculating first matching degree values of different teaching virtual sub-scenes and second matching degree values of each teaching virtual sub-scene and different preset scene modes;
the teaching virtual sub-scene splicing sub-module is used for carrying out splicing processing on the different teaching virtual sub-scenes according to the first matching degree value and/or the second matching degree value so as to obtain corresponding teaching virtual scenes;
the teaching virtual scene pre-judging submodule is used for pre-judging the scene applicability of the teaching virtual scene so as to determine an applicability sorting list of different teaching virtual scenes;
further, the teaching virtual sub-scene splicing sub-module comprises a sub-scene classifying unit and a sub-scene splicing executing unit; wherein,
the sub-scene classifying unit is used for classifying the different teaching virtual sub-scenes according to the first matching degree value and/or the second matching degree value to form a plurality of teaching virtual sub-scene sets with splicing feasibility;
The sub-scene splicing execution unit is used for splicing different teaching virtual sub-scenes in the teaching virtual sub-scene set so as to obtain the corresponding teaching virtual scene;
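A minimal sketch of the sub-scene classifying and splicing units described above is given below, assuming simple thresholding on the first and second matching degree values; the thresholds and the callables that compute the matching degrees are illustrative assumptions.

def group_spliceable_sub_scenes(sub_scenes, pairwise_match, mode_match,
                                pair_threshold=0.7, mode_threshold=0.5):
    """Sub-scene classifying unit: group teaching virtual sub-scenes into sets with
    splicing feasibility. A sub-scene joins a set when its first matching degree value
    against every member and its second matching degree value against the preset scene
    modes both clear the (assumed) thresholds."""
    groups = []
    for sub_scene in sub_scenes:
        placed = False
        for group in groups:
            if (mode_match(sub_scene) >= mode_threshold and
                    all(pairwise_match(sub_scene, member) >= pair_threshold for member in group)):
                group.append(sub_scene)
                placed = True
                break
        if not placed:
            groups.append([sub_scene])
    return groups

def splice_sub_scenes(group):
    """Sub-scene splicing execution unit: concatenate one feasible set into a teaching virtual scene."""
    return {"sub_scenes": list(group)}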
further, the virtual scene operation monitoring module comprises an external environment state change determining sub-module, a teacher object state change determining sub-module and a student object state change determining sub-module; wherein,
the external environment state change determining submodule is used for determining external environment state change data corresponding to the teaching virtual scene in the operation process;
the teacher object state change determining submodule is used for determining teacher object state change data corresponding to the teaching virtual scene in the operation process;
the student object state change determining submodule is used for determining student object state change data corresponding to the teaching virtual scene in the operation process;
further, the external environment state change determining submodule comprises an external environment sound data determining unit, an external environment illuminance data determining unit and an external environment temperature data determining unit; wherein,
the external environment sound data determining unit is used for determining external environment sound change data corresponding to the teaching virtual scene in the operation process;
The external environment illuminance data determining unit is used for determining external environment illuminance change data corresponding to the teaching virtual scene in the operation process;
the external environment temperature data determining unit is used for determining external environment temperature change data corresponding to the teaching virtual scene in the operation process;
or,
the teacher object state change determining submodule comprises a teacher object sound data determining unit and a teacher object limb action data determining unit; wherein,
the teacher object sound data determining unit is used for determining teaching sound data of a teacher object in the operation process of the teaching virtual scene;
the teacher object limb action data determining unit is used for determining teaching limb action data of a teacher object in the operation process of the teaching virtual scene;
or,
the student object state change determining submodule comprises a student object face data determining unit and a student object limb action data determining unit; wherein,
the student object face data determining unit is used for determining facial expression data of a student object in the operation process of the teaching virtual scene;
the student object limb action data determining unit is used for determining the limb action data of the student object in the operation process of the teaching virtual scene;
Further, the virtual scene adjusting module comprises a virtual scene atmosphere adjusting sub-module, a virtual scene three-dimensional space adjusting sub-module and a virtual scene dynamic progress adjusting sub-module; wherein,
the virtual scene atmosphere adjustment submodule is used for adjusting the scene operation sound and/or the scene operation illuminance of the current teaching virtual scene according to the object state change data;
the virtual scene three-dimensional space adjustment submodule is used for adjusting the scene operation three-dimensional space depth of field and/or fusion of the current teaching virtual scene according to the object state change data;
and the virtual scene dynamic progress adjustment submodule is used for adjusting the scene operation dynamic progress of the current teaching virtual scene according to the object state change data.
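As a hedged sketch of the virtual scene adjusting module described above, the following function adjusts scene operation sound, illuminance, three-dimensional depth of field and dynamic progress from object state change data; the dictionary keys and scaling factors are assumptions for illustration, since the invention specifies which parameters are adjusted but not the adjustment rules.

def adjust_scene(scene, state_change):
    """Adjust the current teaching virtual scene from object state change data.
    The keys and scaling factors below are illustrative assumptions."""
    adjusted = dict(scene)
    # Scene atmosphere: operation sound and illuminance.
    if state_change.get("ambient_noise_rising"):
        adjusted["sound_level"] = adjusted.get("sound_level", 1.0) * 1.1
    if state_change.get("ambient_illuminance_low"):
        adjusted["illuminance"] = adjusted.get("illuminance", 1.0) * 1.2
    # Three-dimensional space: depth of field and fusion.
    if state_change.get("student_attention_dropping"):
        adjusted["depth_of_field"] = adjusted.get("depth_of_field", 1.0) * 0.9
    # Dynamic progress of scene operation.
    if state_change.get("student_confused"):
        adjusted["progress_rate"] = adjusted.get("progress_rate", 1.0) * 0.8
    return adjusted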
Compared with the prior art, the virtual scene based teaching interaction system builds teaching virtual scenes in different modes from the scene construction preparation data obtained by preprocessing teaching related data, adjusts the running state and/or running parameters of the teaching virtual scene according to the state change data of teachers and/or students during scene operation, and can switch to an adapted teaching scene for different teaching contents and teaching requirements; this improves the interactivity and scene variability of the teaching process, fully integrates virtual scene technology into teaching, and thereby improves teaching efficiency and teaching interest.
Further, in the virtual scene based teaching interaction system according to claim 1:
the scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the teaching virtual scene operation process; it is also used for accurately combining standardized teaching virtual scenes according to the difficulty of each knowledge point, and for executing the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene according to the facial expression change data of the student object while the teaching virtual scene runs; the specific implementation steps are as follows:
Step A1, based on the scene construction preparation data obtained by preprocessing the teaching related data, performing preliminary statistical classification processing according to the characteristic parameters of subject, grade and knowledge point difficulty grade to obtain a standardized teaching virtual scene database;
step A2, combining the characteristic parameters of the standardized teaching virtual scene database obtained in the step A1, and obtaining a teaching virtual scene set through normalization processing of a formula (1);
wherein e is the natural constant; N represents the total number of subjects in the standardized teaching virtual scene database; m represents the total number of grades in the standardized teaching virtual scene database; x represents the number value corresponding to a certain subject (an integer taking a value in 0, 1, 2, 3, …, N); y represents the number value corresponding to a certain grade; z represents the number value of a certain knowledge point; S_x represents the subject S whose number value is x; G_y represents the grade G whose number value is y; L_z represents the knowledge point L whose number value is z; the characteristic parameters are normalized and randomly combined; and vir(S_x, G_y, L_z) represents the obtained teaching virtual scene set;
step A3, in the operation process of the teaching virtual scene, according to a formula (2), obtaining facial expression state change data of the student object, and performing kernel function assignment processing to obtain a student object facial expression standard value set;
wherein π is the circular constant pi; exp is the exponential function with the natural constant e as its base; sin and cos are the sine and cosine functions respectively; K represents the number of image pixel points in the effective facial areas (such as the eyelids, lip corners and forehead) of the image collected in real time by the scene operation object monitoring module; r represents the diagonal value of each pixel point; i represents the transverse coordinate number value of each collected pixel point; j represents the longitudinal coordinate number value of each collected pixel point; A_0 represents the curve length transverse space vector value corresponding to a pixel point transverse coordinate number value of 0, taking the lower right corner of the facial expression image as the reference point and extending leftwards; B_0 represents the curve length longitudinal space vector value corresponding to a pixel point longitudinal coordinate number value of 0, taking the lower right corner of the facial expression image as the reference point and extending upwards; A_i represents the curve length transverse space vector value corresponding to a pixel point transverse coordinate number value of i; B_j represents the curve length longitudinal space vector value corresponding to a pixel point longitudinal coordinate number value of j; one kernel function term in formula (2) performs kernel function processing on the curve length longitudinal space vector value of each pixel point, and another performs kernel function summation over the curve length transverse space vector values; and F(A_i, B_j) represents the obtained standard value set of student object facial expressions, such as happiness and confusion, after kernel function processing.
Step A4, comparing the student object facial expression standard value set obtained in the step A3 with the teaching virtual scene set obtained in the step A2 to execute the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene;
wherein x_0 represents the subject number value after dynamic adjustment, y_0 represents the grade number value after dynamic adjustment, and z_0 represents the number value of a certain knowledge point after dynamic adjustment; one term of the comparison represents dynamically adjusting the current teaching virtual scene data according to the standard values of the facial expression of the student object, and the other represents the adjusted teaching virtual scene data; when the resulting comparison value is not 1, the current teaching virtual scene data does not match the teaching virtual scene data required by the student object, and the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene needs to be executed.
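Because formulas (1) and (2) are reproduced only as images in the source publication, the following sketch merely illustrates the control flow of steps A1 to A4 (build the scene set, compute the facial expression standard values, compare, and adjust when the comparison value is not 1); the `match` callable and the database keys are assumptions standing in for the omitted formulas.

def needs_adjustment(expression_standard_values, current_scene, match):
    """Step A4: compare the student object facial expression standard value set with the
    current teaching virtual scene; when the comparison value is not 1, the scene does not
    match the student object's needs and must be adaptively adjusted.
    `match` is an assumed callable standing in for the omitted formula."""
    return match(expression_standard_values, current_scene) != 1

def adjust_scene_selection(scene_db, x0, y0, z0):
    """Pick the dynamically adjusted scene (subject x0, grade y0, knowledge point z0) from
    the standardized teaching virtual scene database; the key layout is an assumption."""
    return scene_db.get((x0, y0, z0))

def run_steps_a1_to_a4(scene_db, observe_expressions, match, current_key):
    """Control-flow sketch of steps A1 to A4 for one monitoring cycle."""
    current_scene = scene_db.get(current_key)            # step A2: current scene from the set
    expressions = observe_expressions()                  # step A3: facial expression standard values
    if needs_adjustment(expressions, current_scene, match):   # step A4: comparison
        x0, y0, z0 = current_key                         # illustrative: adjust the knowledge point only
        return adjust_scene_selection(scene_db, x0, y0, z0 + 1)
    return current_scene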
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a teaching interaction system based on a virtual scene.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a schematic structural diagram of a teaching interaction system based on a virtual scene according to an embodiment of the present invention is provided. The virtual scene-based teaching interaction system comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjustment module; wherein,
the actual teaching data acquisition module is used for acquiring teaching related data about teacher objects and/or student objects in the history teaching process;
the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data;
the teaching virtual scene construction module is used for constructing preparation data according to the scene and constructing teaching virtual scenes in different modes;
The scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the teaching virtual scene operation process;
the virtual scene adjusting module is used for adjusting the adaptive running state and/or running parameters of the current teaching virtual scene according to the object state change data.
Preferably, the actual teaching data acquisition module comprises a teaching objective data acquisition sub-module, a teacher object related data acquisition sub-module and a student object related data acquisition sub-module; wherein,
the teaching objective data acquisition submodule is used for acquiring corresponding teaching environment data and/or teaching knowledge content data in different history teaching phases to serve as a part of teaching related data;
the teacher object related data acquisition submodule is used for acquiring teaching state data about a teacher object in different history teaching phases to serve as a part of teaching related data;
the student object related data collection submodule is used for collecting learning state data about student objects in different historic teaching phases to serve as a part of teaching related data.
Preferably, the actual teaching data acquisition module further comprises a history teaching stage decomposition sub-module, a teacher object determination sub-module and a student object determination sub-module; wherein,
The history teaching stage decomposition sub-module is used for decomposing the history teaching process according to a preset teaching progress and/or a preset teaching course setting so as to obtain corresponding different history teaching stages;
the teacher object determining submodule is used for determining a teacher object which corresponds to the teacher object related data acquisition submodule according to preset teaching requirements;
the student object determination submodule is used for determining the student objects on which the student object related data acquisition submodule correspondingly acts.
Preferably, the teaching data processing module comprises a data attribute identification sub-module, a data classification sub-module, a data picking sub-module and a data transformation sub-module; wherein,
the data attribute identification submodule is used for carrying out data attribute identification processing on the teaching related data in terms of data persistence form and/or data content so as to obtain attribute information on the teaching related data;
the data classification submodule is used for classifying the teaching related data according to the attribute information so as to obtain teaching related data sets related to different data persistence forms and/or different data contents;
the data picking submodule is used for performing picking processing on the teaching related data set with respect to data validity, so as to obtain an effective teaching related data set meeting a preset validity condition;
The data transformation submodule is used for carrying out transformation processing on the effective teaching related data set with respect to teaching scene matching so as to obtain corresponding scene construction preparation data.
Preferably, the data picking submodule comprises a data confidence calculating unit, a confidence judging unit and a data picking executing unit; wherein,
the data confidence calculating unit is used for calculating an actual data confidence value corresponding to the teaching related data set;
the confidence degree judging unit is used for comparing and judging the actual data confidence degree value with the expected data confidence degree range so as to determine the data validity of the teaching related data set;
the data picking execution unit is used for executing the picking processing according to the data effectiveness, so as to obtain the effective teaching related data set meeting the preset effectiveness condition.
Preferably, the data transformation submodule comprises a teaching scene matching degree calculation unit and a data transformation execution unit; wherein,
the teaching scene matching degree calculating unit is used for calculating a teaching scene matching degree value corresponding to the effective teaching related data set;
the data transformation execution unit is used for executing the transformation processing according to the teaching scene matching degree value so as to obtain corresponding scene construction preparation data.
Preferably, the teaching virtual scene construction module comprises a teaching virtual scene deep learning neural network sub-module, a teaching virtual scene matching sub-module, a teaching virtual scene splicing sub-module and a teaching virtual scene pre-judging sub-module; wherein,
the teaching virtual scene deep learning neural network sub-module is used for analyzing and processing the scene construction preparation data through a preset teaching virtual scene deep learning neural network model so as to obtain a plurality of teaching virtual sub-scenes;
the teaching virtual sub-scene matching sub-module is used for calculating first matching degree values of different teaching virtual sub-scenes and second matching degree values of each teaching virtual sub-scene and different preset scene modes;
the teaching virtual sub-scene splicing sub-module is used for splicing different teaching virtual sub-scenes according to the first matching degree value and/or the second matching degree value so as to obtain corresponding teaching virtual scenes;
the teaching virtual scene prejudging submodule is used for carrying out prejudging processing on the scene applicability of the teaching virtual scene so as to determine an applicability sorting list of different teaching virtual scenes.
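A short illustration of the applicability pre-judging described above: sorting candidate teaching virtual scenes by an assumed applicability score to obtain the applicability sorting list.

def rank_scenes_by_applicability(scenes, applicability_score):
    """Return the applicability sorting list of teaching virtual scenes, best first.
    `applicability_score` is an assumed scoring callable; the invention does not fix it."""
    return sorted(scenes, key=applicability_score, reverse=True)

# Usage with a toy score: prefer scenes covering more knowledge points.
ranked = rank_scenes_by_applicability(
    [{"name": "lab", "knowledge_points": 3}, {"name": "field trip", "knowledge_points": 5}],
    applicability_score=lambda scene: scene["knowledge_points"])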
Preferably, the teaching virtual sub-scene splicing sub-module comprises a sub-scene classifying unit and a sub-scene splicing executing unit; wherein,
The sub-scene classifying unit is used for classifying the different teaching virtual sub-scenes according to the first matching degree value and/or the second matching degree value to form a plurality of teaching virtual sub-scene sets with splicing feasibility;
the sub-scene splicing execution unit is used for carrying out splicing processing on different teaching virtual sub-scenes in the teaching virtual sub-scene set so as to obtain the corresponding teaching virtual scene.
Preferably, the virtual scene operation monitoring module comprises an external environment state change determining sub-module, a teacher object state change determining sub-module and a student object state change determining sub-module; wherein,
the external environment state change determining submodule is used for determining external environment state change data corresponding to the teaching virtual scene in the operation process;
the teacher object state change determining submodule is used for determining teacher object state change data corresponding to the teaching virtual scene in the running process;
the student object state change determining submodule is used for determining student object state change data corresponding to the teaching virtual scene in the operation process.
Preferably, the external environment state change determining submodule includes an external environment sound data determining unit, an external environment illuminance data determining unit and an external environment temperature data determining unit; wherein,
The external environment sound data determining unit is used for determining external environment sound change data corresponding to the teaching virtual scene in the operation process;
the external environment illuminance data determining unit is used for determining external environment illuminance change data corresponding to the teaching virtual scene in the operation process;
the external environment temperature data determining unit is used for determining external environment temperature change data corresponding to the teaching virtual scene in the operation process.
Preferably, the teacher object state change determination submodule includes a teacher object sound data determination unit and a teacher object limb motion data determination unit; wherein,
the teacher object sound data determining unit is used for determining teaching sound data of a teacher object in the operation process of the teaching virtual scene;
the teacher object limb action data determining unit is used for determining teaching limb action data of a teacher object in the operation process of the teaching virtual scene.
Preferably, the student object state change determination submodule includes a student object face data determination unit and a student object limb motion data determination unit; wherein,
the student object face data determining unit is used for determining face expression data of a student object in the operation process of the teaching virtual scene;
The student object limb action data determining unit is used for determining the limb action data of the student object in the operation process of the teaching virtual scene.
Preferably, the virtual scene adjusting module comprises a virtual scene atmosphere adjusting sub-module, a virtual scene three-dimensional space adjusting sub-module and a virtual scene dynamic progress adjusting sub-module; wherein,
the virtual scene atmosphere adjustment submodule is used for adjusting the scene operation sound and/or the scene operation illuminance of the current teaching virtual scene according to the object state change data;
the virtual scene three-dimensional space adjustment submodule is used for adjusting the scene operation three-dimensional space depth of field and/or fusion of the current teaching virtual scene according to the object state change data;
the virtual scene dynamic progress adjustment submodule is used for adjusting the scene operation dynamic progress of the current teaching virtual scene according to the object state change data.
Preferably, the scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the teaching virtual scene operation process; wherein,
the module is also used for accurately combining standardized teaching virtual scenes according to the difficulty of each knowledge point, and for executing the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene according to the facial expression change data of the student object while the teaching virtual scene runs; the specific implementation steps are as follows:
Step A1, based on the scene construction preparation data obtained by preprocessing the teaching related data, performing preliminary statistical classification processing according to the characteristic parameters of subject, grade and knowledge point difficulty grade to obtain a standardized teaching virtual scene database;
step A2, combining the characteristic parameters of the standardized teaching virtual scene database obtained in the step A1, and obtaining a teaching virtual scene set through normalization processing of a formula (1);
wherein e is the natural constant; N represents the total number of subjects in the standardized teaching virtual scene database; m represents the total number of grades in the standardized teaching virtual scene database; x represents the number value corresponding to a certain subject (an integer taking a value in 0, 1, 2, 3, …, N); y represents the number value corresponding to a certain grade; z represents the number value of a certain knowledge point; S_x represents the subject S whose number value is x; G_y represents the grade G whose number value is y; L_z represents the knowledge point L whose number value is z; the characteristic parameters are normalized and randomly combined; and vir(S_x, G_y, L_z) represents the obtained teaching virtual scene set;
step A3, in the operation process of the teaching virtual scene, according to a formula (2), obtaining facial expression state change data of the student object, and performing kernel function assignment processing to obtain a student object facial expression standard value set;
wherein π is the circular constant pi; exp is the exponential function with the natural constant e as its base; sin and cos are the sine and cosine functions respectively; K represents the number of image pixel points in the effective facial areas (such as the eyelids, lip corners and forehead) of the image collected in real time by the scene operation object monitoring module; r represents the diagonal value of each pixel point; i represents the transverse coordinate number value of each collected pixel point; j represents the longitudinal coordinate number value of each collected pixel point; A_0 represents the curve length transverse space vector value corresponding to a pixel point transverse coordinate number value of 0, taking the lower right corner of the facial expression image as the reference point and extending leftwards; B_0 represents the curve length longitudinal space vector value corresponding to a pixel point longitudinal coordinate number value of 0, taking the lower right corner of the facial expression image as the reference point and extending upwards; A_i represents the curve length transverse space vector value corresponding to a pixel point transverse coordinate number value of i; B_j represents the curve length longitudinal space vector value corresponding to a pixel point longitudinal coordinate number value of j; one kernel function term in formula (2) performs kernel function processing on the curve length longitudinal space vector value of each pixel point, and another performs kernel function summation over the curve length transverse space vector values; and F(A_i, B_j) represents the obtained standard value set of student object facial expressions, such as happiness and confusion, after kernel function processing.
Step A4, comparing the student object facial expression standard value set obtained in the step A3 with the teaching virtual scene set obtained in the step A2 to execute the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene;
wherein x_0 represents the subject number value after dynamic adjustment, y_0 represents the grade number value after dynamic adjustment, and z_0 represents the number value of a certain knowledge point after dynamic adjustment; one term of the comparison represents dynamically adjusting the current teaching virtual scene data according to the standard values of the facial expression of the student object, and the other represents the adjusted teaching virtual scene data; when the resulting comparison value is not 1, the current teaching virtual scene data does not match the teaching virtual scene data required by the student object, and the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene needs to be executed.
The beneficial effects of this technical scheme are as follows: the scene operation object monitoring module collects and analyzes the facial expression data of the student object in real time, so as to judge whether the teaching virtual scene data generated for the student object's needs matches the student object's comprehension ability, and the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene is executed accordingly. The scheme thus provides technical support for on-line, intelligent, automatic dynamic adjustment of the teaching virtual scene in the virtual scene based teaching interaction system, and tailoring the teaching virtual scene to the characteristics of each student object also greatly improves teaching interest and teaching efficiency.
According to the above embodiment, the virtual scene based teaching interaction system builds teaching virtual scenes in different modes from the scene construction preparation data obtained by preprocessing teaching related data, adjusts the running state and/or running parameters of the teaching virtual scene according to the state change data of teachers and/or students during scene operation, and can switch to an adapted teaching scene for different teaching contents and teaching requirements; this improves the interactivity and scene variability of the teaching process, fully integrates virtual scene technology into teaching, and thereby improves teaching efficiency and teaching interest.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

1. A teaching interaction system based on virtual scenes is characterized in that:
the virtual scene-based teaching interaction system comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjustment module; wherein,
The actual teaching data acquisition module is used for acquiring teaching related data about teacher objects and/or student objects in the history teaching process;
the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data;
the teaching virtual scene construction module is used for constructing teaching virtual scenes in different modes according to the scene construction preparation data;
the scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the teaching virtual scene operation process;
the virtual scene adjusting module is used for adjusting the adaptive running state and/or running parameters of the current teaching virtual scene according to the object state change data;
the scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the teaching virtual scene operation process; it is also used for accurately combining standardized teaching virtual scenes according to the difficulty of each knowledge point, and for executing the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene according to the facial expression change data of the student object while the teaching virtual scene runs; the specific implementation steps are as follows:
Step A1, based on the scene construction preparation data obtained by preprocessing the teaching related data, performing preliminary statistical classification processing according to the characteristic parameters of subject, grade and knowledge point difficulty grade to obtain a standardized teaching virtual scene database;
step A2, combining the characteristic parameters of the standardized teaching virtual scene database obtained in the step A1, and obtaining a teaching virtual scene set through normalization processing of a formula (1);
wherein e is the natural constant; n represents the total number of subjects in the standardized teaching virtual scene database; m represents the total number of grades in the standardized teaching virtual scene database; x represents the number value corresponding to a certain subject; y represents the number value corresponding to a certain grade; z represents the number value of a certain knowledge point; S_x represents the subject S whose number value is x; G_y represents the grade G whose number value is y; L_z represents the knowledge point L whose number value is z; the characteristic parameters are normalized and randomly combined; and vir(S_x, G_y, L_z) represents the obtained teaching virtual scene set;
step A3, in the operation process of the teaching virtual scene, according to a formula (2), obtaining facial expression state change data of the student object, and performing kernel function assignment processing to obtain a student object facial expression standard value set;
wherein π is the circular constant pi; exp is the exponential function with the natural constant e as its base; sin and cos are the sine and cosine functions respectively; K represents the number of image pixel points in the effective facial areas (such as the eyelids, lip corners and forehead) of the image collected in real time by the scene operation object monitoring module; r represents the diagonal value of each pixel point; i represents the transverse coordinate number value of each collected pixel point; j represents the longitudinal coordinate number value of each collected pixel point; A_0 represents the curve length transverse space vector value corresponding to a pixel point transverse coordinate number value of 0, taking the lower right corner of the facial expression image as the reference point and extending leftwards; B_0 represents the curve length longitudinal space vector value corresponding to a pixel point longitudinal coordinate number value of 0, taking the lower right corner of the facial expression image as the reference point and extending upwards; A_i represents the curve length transverse space vector value corresponding to a pixel point transverse coordinate number value of i; B_j represents the curve length longitudinal space vector value corresponding to a pixel point longitudinal coordinate number value of j; one kernel function term in formula (2) performs kernel function processing on the curve length longitudinal space vector value of each pixel point, and another performs kernel function summation over the curve length transverse space vector values; and F(A_i, B_j) represents the obtained standard value set of student object facial expressions, such as happiness and confusion, after kernel function processing.
Step A4, comparing the student object facial expression standard value set obtained in the step A3 with the teaching virtual scene set obtained in the step A2 to execute the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene;
wherein x_0 represents the subject number value after dynamic adjustment, y_0 represents the grade number value after dynamic adjustment, and z_0 represents the number value of a certain knowledge point after dynamic adjustment; one term of the comparison represents dynamically adjusting the current teaching virtual scene data according to the standard values of the facial expression of the student object, and the other represents the adjusted teaching virtual scene data; when the resulting comparison value is not 1, the current teaching virtual scene data does not match the teaching virtual scene data required by the student object, and the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene needs to be executed.
2. The virtual scene based teaching interactive system according to claim 1, wherein:
the actual teaching data acquisition module comprises a teaching objective data acquisition sub-module, a teacher object related data acquisition sub-module and a student object related data acquisition sub-module; the teaching objective data acquisition submodule is used for acquiring corresponding teaching environment data and/or teaching knowledge content data in different history teaching phases to serve as a part of teaching related data; the teacher object related data acquisition submodule is used for acquiring teaching state data about a teacher object in different history teaching phases to serve as a part of teaching related data;
The student object related data acquisition submodule is used for acquiring learning state data about student objects in different historic teaching phases to serve as a part of teaching related data.
3. The virtual scene based teaching interaction system according to claim 2, wherein:
the actual teaching data acquisition module further comprises a history teaching stage decomposition sub-module, a teacher object determination sub-module and a student object determination sub-module; wherein,
the history teaching stage decomposition sub-module is used for decomposing the history teaching process according to a preset teaching progress and/or a preset teaching course setting so as to obtain the corresponding different history teaching stages;
the teacher object determining submodule is used for determining a teacher object which corresponds to the teacher object related data acquisition submodule according to preset teaching requirements;
and the student object determination submodule is used for determining the student objects on which the student object related data acquisition submodule correspondingly acts.
4. The virtual scene based teaching interactive system according to claim 1, wherein:
the teaching data processing module comprises a data attribute identification sub-module, a data classification sub-module, a data picking sub-module and a data conversion sub-module; wherein,
The data attribute identification submodule is used for carrying out data attribute identification processing on the teaching related data in terms of data persistence form and/or data content so as to obtain attribute information on the teaching related data;
the data classification submodule is used for carrying out classification processing on the teaching related data according to the attribute information so as to obtain teaching related data sets related to different data persistence forms and/or different data contents;
the data picking submodule is used for performing picking processing on the teaching related data set with respect to data validity, so as to obtain an effective teaching related data set meeting a preset validity condition; the data transformation submodule is used for performing transformation processing on the effective teaching related data set with respect to teaching scene matching, so as to obtain corresponding scene construction preparation data.
5. The virtual scene based teaching interactive system according to claim 4, wherein
The data picking submodule comprises a data confidence calculating unit, a confidence judging unit and a data picking executing unit; wherein,
the data confidence coefficient calculating unit is used for calculating an actual data confidence coefficient value corresponding to the teaching related data set;
The confidence degree judging unit is used for comparing and judging the actual data confidence degree value with an expected data confidence degree range so as to determine the data validity of the teaching related data set;
the data picking execution unit is used for executing the picking processing according to the data effectiveness so as to obtain the effective teaching related data set meeting preset effectiveness conditions;
or,
the data transformation submodule comprises a teaching scene matching degree calculation unit and a data transformation execution unit; wherein,
the teaching scene matching degree calculating unit is used for calculating a teaching scene matching degree value corresponding to the effective teaching related data set;
the data transformation execution unit is used for executing the transformation processing according to the teaching scene matching degree value so as to obtain corresponding scene construction preparation data.
6. The virtual scene based teaching interactive system according to claim 1, wherein:
the teaching virtual scene construction module comprises a teaching virtual scene deep learning neural network sub-module, a teaching virtual sub-scene matching sub-module, a teaching virtual sub-scene splicing sub-module and a teaching virtual scene pre-judging sub-module; wherein,
The teaching virtual scene deep learning neural network sub-module is used for analyzing and processing the scene construction preparation data through a preset teaching virtual scene deep learning neural network model so as to obtain a plurality of teaching virtual sub-scenes;
the teaching virtual sub-scene matching sub-module is used for calculating first matching degree values of different teaching virtual sub-scenes and second matching degree values of each teaching virtual sub-scene and different preset scene modes;
the teaching virtual sub-scene splicing sub-module is used for carrying out splicing processing on the different teaching virtual sub-scenes according to the first matching degree value and/or the second matching degree value so as to obtain corresponding teaching virtual scenes;
the teaching virtual scene pre-judging sub-module is used for carrying out pre-judging processing on the scene applicability of the teaching virtual scene, so as to determine an applicability ranking list of different teaching virtual scenes.
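The following sketch illustrates the two matching-degree calculations and the applicability ranking of claim 6, assuming each teaching virtual sub-scene and each preset scene mode is represented by a feature vector and using cosine similarity as a stand-in for the unspecified matching-degree measure; the deep learning model that produces the sub-scenes is omitted.

```python
import numpy as np

def matching_degree(a, b):
    """Stand-in matching-degree measure (cosine similarity); the patent does
    not name a concrete formula."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def first_matching_degrees(sub_scenes):
    """First matching degree: pairwise similarity between different teaching
    virtual sub-scenes, each assumed here to be a feature vector."""
    n = len(sub_scenes)
    return {(i, j): matching_degree(sub_scenes[i], sub_scenes[j])
            for i in range(n) for j in range(i + 1, n)}

def second_matching_degrees(sub_scenes, scene_modes):
    """Second matching degree: similarity of each sub-scene to each preset
    scene mode, also assumed to be feature vectors."""
    return {(i, name): matching_degree(vec, mode)
            for i, vec in enumerate(sub_scenes)
            for name, mode in scene_modes.items()}

def applicability_ranking(scene_ids, applicability_scores):
    """Pre-judging: sort spliced teaching virtual scenes by an externally
    supplied applicability score, highest first."""
    return sorted(scene_ids, key=lambda s: applicability_scores[s], reverse=True)
```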
7. The virtual scene based tutorial interaction system of claim 6, wherein:
the teaching virtual sub-scene splicing sub-module comprises a sub-scene classifying unit and a sub-scene splicing execution unit; wherein,
the sub-scene classifying unit is used for classifying the different teaching virtual sub-scenes according to the first matching degree value and/or the second matching degree value to form a plurality of teaching virtual sub-scene sets with splicing feasibility;
the sub-scene splicing execution unit is used for carrying out splicing processing on different teaching virtual sub-scenes in the teaching virtual sub-scene set, so as to obtain the corresponding teaching virtual scene.
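A sketch of the classification-then-splicing flow of claim 7, under the assumption that splicing feasibility is decided by thresholding the pairwise first matching degree values (keyed by sub-scene index pairs, as in the previous sketch) and that splicing simply concatenates the assets of each feasible group; both choices are illustrative, not taken from the patent.

```python
def group_by_splicing_feasibility(sub_scene_ids, first_degrees, threshold=0.7):
    """Sub-scene classification: greedily group sub-scenes whose pairwise first
    matching degree meets an assumed splicing-feasibility threshold."""
    groups = []
    for sid in sub_scene_ids:
        for group in groups:
            if all(first_degrees.get((min(sid, o), max(sid, o)), 0.0) >= threshold
                   for o in group):
                group.append(sid)
                break
        else:
            groups.append([sid])  # start a new feasible set
    return groups

def splice(group, sub_scene_assets):
    """Splicing execution: concatenate the assets of one feasible group into a
    single teaching virtual scene description."""
    return {"scene_id": "+".join(str(sid) for sid in group),
            "assets": [sub_scene_assets[sid] for sid in group]}
```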
8. The virtual scene based teaching interactive system according to claim 1, wherein:
the virtual scene operation monitoring module comprises an external environment state change determining submodule, a teacher object state change determining submodule and a student object state change determining submodule; wherein, the external environment state change determining submodule is used for determining external environment state change data corresponding to the teaching virtual scene in the operation process;
the teacher object state change determining submodule is used for determining teacher object state change data corresponding to the teaching virtual scene in the operation process;
the student object state change determining submodule is used for determining student object state change data corresponding to the teaching virtual scene in the operation process;
the external environment state change determining submodule comprises an external environment sound data determining unit, an external environment illuminance data determining unit and an external environment temperature data determining unit; wherein, the external environment sound data determining unit is used for determining external environment sound change data corresponding to the teaching virtual scene in the operation process;
the external environment illuminance data determining unit is used for determining external environment illuminance change data corresponding to the teaching virtual scene in the operation process;
the external environment temperature data determining unit is used for determining external environment temperature change data corresponding to the teaching virtual scene in the operation process;
or,
the teacher object state change determining submodule comprises a teacher object sound data determining unit and a teacher object limb action data determining unit; wherein,
the teacher object sound data determining unit is used for determining teaching sound data of a teacher object in the operation process of the teaching virtual scene;
the teacher object limb action data determining unit is used for determining teaching limb action data of a teacher object in the operation process of the teaching virtual scene;
or,
the student object state change determining submodule comprises a student object face data determining unit and a student object limb action data determining unit; wherein,
the student object face data determining unit is used for determining facial expression data of a student object in the operation process of the teaching virtual scene;
the student object limb action data determining unit is used for determining the limb action data of the student object in the operation process of the teaching virtual scene.
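The monitored state-change streams of claim 8 can be pictured with the following data-structure sketch; every field name, sample type and unit is an assumption, since the claim only names which quantities are monitored during scene operation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Each list holds (timestamp, value) samples gathered while the teaching
# virtual scene runs. All field names and units are illustrative only.
@dataclass
class EnvironmentState:
    sound_db: List[Tuple[float, float]] = field(default_factory=list)
    illuminance_lux: List[Tuple[float, float]] = field(default_factory=list)
    temperature_c: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class TeacherState:
    speech: List[Tuple[float, str]] = field(default_factory=list)    # teaching sound data
    gestures: List[Tuple[float, str]] = field(default_factory=list)  # teaching limb actions

@dataclass
class StudentState:
    expressions: List[Tuple[float, str]] = field(default_factory=list)  # facial expression labels
    gestures: List[Tuple[float, str]] = field(default_factory=list)     # limb action labels

@dataclass
class SceneRunMonitor:
    environment: EnvironmentState = field(default_factory=EnvironmentState)
    teacher: TeacherState = field(default_factory=TeacherState)
    students: Dict[str, StudentState] = field(default_factory=dict)  # student_id -> state
```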
9. The virtual scene based teaching interactive system according to claim 1, wherein:
the virtual scene adjusting module comprises a virtual scene atmosphere adjusting sub-module, a virtual scene three-dimensional space adjusting sub-module and a virtual scene dynamic progress adjusting sub-module; wherein,
the virtual scene atmosphere adjustment submodule is used for adjusting the scene operation sound and/or the scene operation illuminance of the current teaching virtual scene according to the object state change data;
the virtual scene three-dimensional space adjustment submodule is used for adjusting the three-dimensional space depth of field and/or fusion of the current teaching virtual scene during scene operation according to the object state change data; and
the virtual scene dynamic progress adjustment submodule is used for adjusting the dynamic progress of scene operation of the current teaching virtual scene according to the object state change data.
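To illustrate claim 9, the sketch below maps object state change data to the three adjustments (scene sound/illuminance, three-dimensional depth of field, dynamic progress) with made-up rules and thresholds; the patent names the adjusted quantities but not the adjustment logic, so none of the keys or numbers below are from the claims.

```python
def adjust_scene(scene, state_changes):
    """Apply illustrative adjustment rules to the current teaching virtual scene.
    Both arguments are plain dicts; every key, rule and threshold below is an
    assumption made for this sketch."""
    # Atmosphere adjustment: raise scene operation sound when the room is loud.
    if state_changes.get("env_sound_db", 0.0) > 60.0:
        scene["volume"] = min(1.0, scene.get("volume", 0.5) + 0.1)
    # Atmosphere adjustment: lower scene operation illuminance in a bright room.
    if state_changes.get("env_illuminance_lux", 0.0) > 500.0:
        scene["brightness"] = max(0.2, scene.get("brightness", 0.8) - 0.1)
    # Three-dimensional space and dynamic progress adjustment: deepen the depth
    # of field and slow the progress when students appear distracted.
    if state_changes.get("distracted_ratio", 0.0) > 0.5:
        scene["depth_of_field"] = scene.get("depth_of_field", 1.0) * 1.2
        scene["progress_rate"] = max(0.5, scene.get("progress_rate", 1.0) - 0.1)
    return scene

# Example: a loud room with mostly distracted students.
scene = adjust_scene(
    {"volume": 0.5, "brightness": 0.8, "depth_of_field": 1.0, "progress_rate": 1.0},
    {"env_sound_db": 65.0, "distracted_ratio": 0.6},
)
```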
CN202010137104.2A 2020-03-02 2020-03-02 Teaching interaction system based on virtual scene Active CN111258433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010137104.2A CN111258433B (en) 2020-03-02 2020-03-02 Teaching interaction system based on virtual scene

Publications (2)

Publication Number Publication Date
CN111258433A CN111258433A (en) 2020-06-09
CN111258433B true CN111258433B (en) 2024-04-02

Family

ID=70947494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010137104.2A Active CN111258433B (en) 2020-03-02 2020-03-02 Teaching interaction system based on virtual scene

Country Status (1)

Country Link
CN (1) CN111258433B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017085B (en) * 2020-08-18 2021-07-20 上海松鼠课堂人工智能科技有限公司 Intelligent virtual teacher image personalization method
CN112017496B (en) * 2020-08-30 2021-07-30 上海松鼠课堂人工智能科技有限公司 Student computing power analysis method based on game learning
CN111985582B (en) * 2020-09-27 2021-06-01 上海松鼠课堂人工智能科技有限公司 Knowledge point mastering degree evaluation method based on learning behaviors
CN112508162B (en) * 2020-11-17 2024-04-05 珠海格力电器股份有限公司 Emergency management method, device and system based on system linkage
CN113096252B (en) * 2021-03-05 2021-11-02 华中师范大学 Multi-movement mechanism fusion method in hybrid enhanced teaching scene
CN113409635A (en) * 2021-06-17 2021-09-17 上海松鼠课堂人工智能科技有限公司 Interactive teaching method and system based on virtual reality scene
CN115100004B (en) * 2022-06-23 2023-05-30 北京新唐思创教育科技有限公司 Online teaching system, method, device, equipment and medium
CN115114537B (en) * 2022-08-29 2022-11-22 成都航空职业技术学院 Interactive virtual teaching aid implementation method based on file content identification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9626875B2 (en) * 2007-08-01 2017-04-18 Time To Know Ltd. System, device, and method of adaptive teaching and learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017193709A1 (en) * 2016-05-12 2017-11-16 深圳市鹰硕技术有限公司 Internet-based teaching and learning method and system
CN110069139A (en) * 2019-05-08 2019-07-30 上海优谦智能科技有限公司 VR technology realizes the experiencing system of Tourism teaching practice

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cai Hua. Application of virtual reality technology in engineering drawing courseware. Journal of Liaoning Teachers College (Natural Science Edition), 2016, (04), full text. *

Also Published As

Publication number Publication date
CN111258433A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN111258433B (en) Teaching interaction system based on virtual scene
CN112131978B (en) Video classification method and device, electronic equipment and storage medium
US20210312166A1 (en) System and method for face recognition based on dynamic updating of facial features
CN104572804A (en) Video object retrieval system and method
US20180349716A1 (en) Apparatus and method for recognizing traffic signs
CN112132197A (en) Model training method, image processing method, device, computer equipment and storage medium
CN110796018A (en) Hand motion recognition method based on depth image and color image
CN113505854A (en) Method, device, equipment and medium for constructing facial image quality evaluation model
CN105069745A (en) face-changing system based on common image sensor and enhanced augmented reality technology and method
CN111860091A (en) Face image evaluation method and system, server and computer readable storage medium
CN112668638A (en) Image aesthetic quality evaluation and semantic recognition combined classification method and system
CN115546861A (en) Online classroom concentration degree identification method, system, equipment and medium
CN108830222A (en) A kind of micro- expression recognition method based on informedness and representative Active Learning
CN110889366A (en) Method and system for judging user interest degree based on facial expression
CN111243373B (en) Panoramic simulation teaching system
CN115410240A (en) Intelligent face pockmark and color spot analysis method and device and storage medium
CN113159146A (en) Sample generation method, target detection model training method, target detection method and device
CN116910302A (en) Multi-mode video content effectiveness feedback visual analysis method and system
CN110633641A (en) Intelligent security pedestrian detection method, system and device and storage medium
CN116386118A (en) Drama matching cosmetic system and method based on human image recognition
CN115984968A (en) Student time-space action recognition method and device, terminal equipment and medium
CN115984647A (en) Remote sensing distributed collaborative reasoning method, device, medium and satellite for constellation
CN114745592A (en) Bullet screen message display method, system, device and medium based on face recognition
CN113673421A (en) Loss assessment method, device and equipment based on video stream and storage medium
Cui Research on garden landscape reconstruction based on geographic information system under the background of deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200237 9 / F and 10 / F, building 2, No. 188, Yizhou Road, Xuhui District, Shanghai

Applicant after: Shanghai squirrel classroom Artificial Intelligence Technology Co.,Ltd.

Address before: 200237 9 / F and 10 / F, building 2, No. 188, Yizhou Road, Xuhui District, Shanghai

Applicant before: SHANGHAI YIXUE EDUCATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant