CN114999237A - Intelligent education interactive teaching method - Google Patents

Intelligent education interactive teaching method

Info

Publication number
CN114999237A
Authority
CN
China
Prior art keywords
user
scene
signals
teaching
health
Prior art date
Legal status
Granted
Application number
CN202210634110.8A
Other languages
Chinese (zh)
Other versions
CN114999237B (en)
Inventor
刘明强 (Liu Mingqiang)
Current Assignee
Qingdao University of Technology
Original Assignee
Qingdao University of Technology
Priority date
Filing date
Publication date
Application filed by Qingdao University of Technology
Priority to CN202210634110.8A
Publication of CN114999237A
Application granted
Publication of CN114999237B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/02: Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/0205: Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B 5/117: Identification of persons
    • A61B 5/1171: Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT specially adapted for calculating health indices; for individual health risk assessment

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Psychiatry (AREA)
  • Theoretical Computer Science (AREA)
  • Educational Technology (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Pulmonology (AREA)
  • Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Educational Administration (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)

Abstract

The application provides an interactive teaching method for intelligent education, comprising the following steps: a user logs in to a VR teaching system and an initial scene is loaded in a personalized manner; after login, the system judges in real time the realism of the simulated-world scene and of the user's motion perception, and calculates the gap with the real world; the system monitors the user's health data in real time, splits the data into different dimensions, and stores them on a cloud platform; based on the user's demands and health condition, it judges the user's adaptation to the current scene and predicts the trend of the user's health; it performs realism adjustment and personalized reminding according to the degree of adaptation, and records the results to the cloud platform; and it evaluates the risk of the user's use of VR teaching. The method can better judge VR interactive teaching content and its degree of realism, determine whether the VR content suits a given learner, deliver highly realistic teaching video to well-adapted users, and apply different evaluation criteria, so that interactive VR teaching suits people with various needs.

Description

Intelligent education interactive teaching method
The invention relates to the technical field of information, in particular to an intelligent education interactive teaching method.
[ background of the invention ]
In VR-based smart classrooms, interactive teaching is often delivered through virtual reality technology, and this new teaching form introduces many risks. In immersive VR teaching, for example, some people cannot tolerate a high degree of immersion, which can cause psychological problems: a student with panic disorder who controls a drone's flight through VR teaching feels as if riding inside the drone and becomes very afraid. For such users, a poor final teaching result is sometimes caused by inadaptation rather than by a lack of ability. On the other hand, although VR computer simulation has a certain teaching effect, a real environment may produce uncontrollable factors that the program does not model, so the computer's settings can conflict with objective fact. Unreasonable phenomena that may exist in the VR content therefore need to be recognized so that the teaching effect can be improved, and the risk factors of such teaching need to be mitigated by appropriate technology so that incorrect teaching is avoided.
[ summary of the invention ]
The invention provides an intelligent education interactive teaching method, which mainly comprises the following steps:
a user logs in to a VR teaching system and an initial scene is loaded in a personalized manner; after login, the system judges in real time the realism of the simulated-world scene and of the user's motion perception, and calculates the gap with the real world; the system monitors the user's health data in real time, splits the data into different dimensions, and stores them on a cloud platform; based on the user's demands and health condition, it judges the user's adaptation to the current scene and predicts the trend of the user's health; it performs realism adjustment and personalized reminding according to the degree of adaptation, and records the results to the cloud platform; and it evaluates the risk of the user's use of VR teaching.
further optionally, the user logs in the VR teaching system, and the personalized loading of the initial scene includes:
after the user puts on and starts the VR equipment, the user's pulse signal is collected automatically, the user's identity information is recognized, the user's teaching demands are obtained, and a VR scene is loaded for the user in a personalized manner in combination with the user's medical history stored on the cloud. First, the user's pulse is acquired and the health information is collected, preprocessed, and subjected to feature extraction; feature matching against a database identifies the user, and a user profile is loaded or newly created. Then a brain-computer interface obtains the user's teaching demands, a lower precision limit is calculated automatically with these demands as the standard, and the scene requirements are determined preliminarily from the user's personal health information. Finally, the Unity3D development engine and cloud computing are used to draw the three-dimensional scene under a large neural network model on the cloud and load the user's initial scene. The step comprises: collecting the user's biological characteristics and identifying the user; obtaining the user's demands through brain-computer interface technology; and loading the initial scene in a personalized manner;
the collecting of the biological characteristics of the user and the identification of the user identity specifically comprise:
the identity of the user is identified from this biometric feature of the PPG signal. Collecting PPG signals at fingertips by using a photoelectric volume pulse sensor, transmitting the signals by using UWB, and carrying out classification and identification by using a decision maker. Preprocessing the signals by using a wavelet threshold method, eliminating baseline singular points and high-frequency noise, then performing monocycle division on the PPG signals and extracting a cycle value, performing sparse decomposition on the monocycle signals by using an atom library consisting of Gabor atoms, extracting characteristic parameters, and importing the characteristic parameters into a decision tree for identification.
The method for acquiring the requirements of the user by using the brain-computer interface technology specifically comprises the following steps:
A brain-computer interface analyzes the electroencephalogram (EEG) signals to establish an interactive system, realizing direct communication between the brain and the computer and obtaining the user's demands for the drone pesticide-spraying scene. A micro/nano-fabricated electrode array collects the brain-wave signal sources, and a Butterworth filter preprocesses them to filter out noise and obtain the specified signal. A common spatial patterns algorithm then builds a feature set from the signals, an artificial neural network classifies the different physiological and psychological conditions, a brain-network structure of the imagined scene is constructed, and intensity and accuracy values of the user's demands for terrain, wind speed, humidity, temperature, and soil conditions are calculated. A lower demand limit is calculated automatically from these demands to provide a basis for scene loading.
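The Butterworth preprocessing stage can be sketched as follows, assuming scipy is available. The sampling rate, band edges (here the 8 to 13 Hz alpha band), and the synthetic EEG trace are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 256
t = np.arange(0, 4, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)            # 10 Hz rhythm of interest
drift = 0.8 * np.sin(2 * np.pi * 0.3 * t)     # slow electrode drift
mains = 0.5 * np.sin(2 * np.pi * 50 * t)      # 50 Hz interference
raw = alpha + drift + mains

filtered = bandpass(raw, fs, 8, 13)           # isolate the 8-13 Hz band
```

`filtfilt` applies the filter forward and backward, which avoids the phase lag that a causal filter would add to the EEG features.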
The personalized loading initial scene specifically comprises the following steps:
Based on the lower limit given by the user's demands, and in combination with the user's personal health state and tolerance for VR realism, a suitable realism level is selected within an interval to give the user a personalized initial-scene standard. For loading the initial drone pesticide-spraying scene, the influences of terrain, wind speed, humidity, temperature, and soil are considered; the Unity3D development engine and real-world data are used, and the three-dimensional scene is drawn under a large neural network model with cloud computing to load the user's initial scene. For terrain, the gradient and area are considered and a digital elevation model (DEM) is used for simulation. When the user's demand for terrain exceeds a preset threshold, a TIN (triangulated irregular network) model is used for representation, and a VDPM algorithm then refines or coarsens the terrain mesh to create the closest approximation of the real terrain. When the demand is below the threshold, the raw data are represented with an RSG (regular square grid) model, and a ROAM algorithm based on a binary triangle tree splits and merges the primitives describing the terrain surface to create the terrain. For wind, the wind speed and wind angle are considered, and the WRF mesoscale numerical model simulates wind-speed samples. For soil, the soil type, oxygen content, nutrients, humidity, and temperature are considered, and soil conditions are simulated by GBM-based regression with meteorological and vegetation data as covariates.
Temperature is likewise simulated by the GBM method, considering the pesticide liquid temperature, the air temperature, and the light intensity in addition to the soil temperature.
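The GBM regression described above can be approximated with scikit-learn's `GradientBoostingRegressor`. The covariates and the synthetic soil-temperature relationship below are invented purely to demonstrate the fit; they are not the patent's data or model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 600
# invented covariates: air temperature, light intensity, soil moisture
X = rng.uniform([0.0, 10.0, 0.0], [35.0, 1000.0, 0.5], size=(n, 3))
# synthetic "ground truth" soil temperature driven by the covariates
y = 0.7 * X[:, 0] + 0.01 * X[:, 1] - 8.0 * X[:, 2] + rng.normal(0, 0.5, n)

model = GradientBoostingRegressor(
    n_estimators=200, max_depth=3, learning_rate=0.05, random_state=0
)
model.fit(X[:500], y[:500])                      # train on the first 500 samples
pred = model.predict(X[500:])                    # predict the held-out 100
rmse = float(np.sqrt(np.mean((pred - y[500:]) ** 2)))
```

In the scheme, such a fitted model would supply plausible soil and temperature values for the simulated scene instead of hard-coded constants.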
Further optionally, after the user logs in, the system judges reality of the simulated world scene and the user motion perception in real time, and calculates a difference from the real world includes:
The physical theory and the behavior of drone pesticide spraying under various influencing factors are learned with a neural network, and the influencing factors are divided into two classes. The first class consists of scene factors, comprising environmental factors (terrain, wind speed, humidity, temperature, soil, and the drone equipment) and the user's real operating adjustments. The second class consists of perception factors, i.e., what the user perceives during use, including visual perception and motion perception. During judgment, the user's scene demands on the simulated world and the perception demands implied by the user's physical condition are recombined and mapped to the real world; least-squares regression analysis and an SVR algorithm, respectively, compare them with the real world and compute a confidence level. The scene factors are evaluated through the scene-realism gap, while a user-comfort evaluation model built by analyzing the stereoscopic VR video yields the human perception characteristics. The step comprises: calculating the realism gap of the virtual-world scene; and calculating the realism gap of the user's motion perception;
the virtual world scene reality gap calculation specifically comprises the following steps:
When calculating the gap between the simulated scene and the real world, the scene model of the simulated world is obtained through the built-in architecture of the Unity3D development engine and compared with real-world terrain elevation maps, satellite imagery, and weather forecasts. Independent component analysis separates the user's demands on the virtual world, i.e., the demands for terrain, wind speed, humidity, temperature, and soil conditions, so that each demand corresponds to a demand branch; the features of each branch are matched one-to-one, and in combination, with the features of the virtual world; least-squares regression analysis is then applied to calculate the gap between the virtual world and the real world.
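A toy version of this scene-versus-reference comparison is sketched below: a least-squares fit between a simulated elevation grid and a reference grid, with an ad-hoc confidence score. The confidence formula and both grids are my own stand-ins, assumed only for illustration.

```python
import numpy as np

def realism_gap(simulated, real):
    """Least-squares gap between a simulated and a reference elevation grid.

    Fits simulated ~ a*real + b to absorb scale/offset differences, then
    reports the residual RMSE and an ad-hoc 0..1 confidence score.
    """
    sim = np.asarray(simulated, dtype=float).ravel()
    ref = np.asarray(real, dtype=float).ravel()
    A = np.column_stack([ref, np.ones_like(ref)])
    coef, *_ = np.linalg.lstsq(A, sim, rcond=None)
    rmse = float(np.sqrt(np.mean((sim - A @ coef) ** 2)))
    confidence = 1.0 / (1.0 + rmse / (ref.std() + 1e-9))
    return rmse, confidence

# toy elevation grids: a faithful scene vs. a heavily distorted one
real = np.fromfunction(lambda i, j: 50.0 * np.sin(i / 8.0) + j, (64, 64))
good = 1.02 * real + 3.0
bad = real + np.random.default_rng(0).normal(0.0, 20.0, real.shape)

gap_good = realism_gap(good, real)
gap_bad = realism_gap(bad, real)
```

A faithful scene that differs only by a linear calibration yields a near-zero gap and high confidence, while random distortion drives the gap up and the confidence down.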
The calculating of the user motion perception authenticity gap specifically comprises the following steps:
Based on how VR motion sickness arises, the user's motion perception is judged by two criteria: the degree of vestibular-visual matching and the degree of synchronization between the VR viewing angle and the rotation of the human body. A user-comfort evaluation model is built on these two criteria to obtain the user's perception characteristics: optical-flow estimation is first performed on the stereoscopic VR video to compute the horizontal and vertical motion matrices of the video frames; the frame speed and frame acceleration are then computed; finally, the extracted multi-dimensional motion information of frame speed and acceleration serves as the feature vector, and the comfort model is built with an SVR algorithm combined with the user's body motions, for use in subsequent comfort optimization.
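The frame-speed and frame-acceleration extraction could look like the following frame-difference sketch. A real system would use true optical flow in place of the crude mean-absolute-difference proxy used here, and the comfort threshold is entirely hypothetical.

```python
import numpy as np

def motion_features(frames):
    """Per-frame motion speed (mean abs intensity change) and acceleration."""
    speed = np.array([np.abs(b - a).mean() for a, b in zip(frames, frames[1:])])
    accel = np.diff(speed)
    return speed, accel

# synthetic "video": a square whose horizontal motion keeps accelerating
frames = []
for i in range(20):
    f = np.zeros((64, 64))
    x = i * i // 8                      # quadratic position -> rising speed
    f[20:32, x:x + 12] = 1.0
    frames.append(f)

speed, accel = motion_features(frames)
# hypothetical comfort rule: flag frames whose motion exceeds a tolerance
uncomfortable = speed > 0.02
```

The speed/acceleration vectors would then feed the SVR comfort model, with the flagged frames marking the segments most likely to trigger motion sickness.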
Further optionally, after the user logs in, the system monitors the human health data in real time, splits the human health data into different dimensions, and stores the human health data to the cloud platform includes:
The user's blood pressure, heart rate, and respiration are monitored by analyzing the pulse; vertigo, mental workload, and mood are monitored by analyzing the EEG; motion is monitored with an infrared sensor, the user's head and hands are tracked through motion projection, a motion image of the user is drawn, and abnormal states are detected; and sound is measured with an acoustic sensor, judging the user's health from voice tremor and speech recognition. The step comprises: filtering out motion artifacts to obtain the PPG signal and monitor blood pressure, heart rate, and respiration; and acquiring the EEG signal to monitor vertigo and psychological condition;
The filtering of motion artifacts to obtain the PPG signal and the monitoring of the user's blood pressure, heart rate, and respiration specifically include:
Motion artifacts are filtered out with the TROIKA framework. After identification is complete, the PPG signal is acquired continuously and is compressed, transmitted, reconstructed, and preprocessed. Singular spectrum analysis then decomposes the signal: the time series is mapped into a trajectory matrix, and singular value decomposition, grouping, and reconstruction are performed. After a high-resolution spectrum is obtained, the spectral peaks are tracked and verified to yield a PPG signal with the motion artifacts removed. Independent component analysis then estimates the higher-order statistics of the signal to obtain its independent components, separating out signals related to blood pressure, heart rate, and respiration for individual analysis.
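The embed, SVD, group, and diagonal-average steps of singular spectrum analysis named above can be sketched compactly in numpy. The window length, component count, and the synthetic pulse are assumptions for the example, not TROIKA's actual parameters.

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Embed -> SVD -> keep leading components -> diagonal averaging."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):                  # average anti-diagonals back to a series
        out[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1.0
    return out / cnt

fs = 125
t = np.arange(0, 8, 1 / fs)
pulse = np.sin(2 * np.pi * 1.5 * t)                    # 1.5 Hz "heartbeat"
noisy = pulse + 0.5 * np.random.default_rng(1).standard_normal(t.size)
recovered = ssa_reconstruct(noisy, window=100, n_components=2)
```

A single sinusoid occupies two singular directions of the trajectory matrix, so keeping the two leading components recovers the pulse while discarding most of the artifact-like noise.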
The acquiring of the EEG signal and monitoring of the dizziness and psychological conditions of the user specifically include:
An EEG signal is acquired dynamically in real time with B-Alert Live to monitor the user's health. It is then preprocessed to filter noise; independent component analysis and filters decompose the signal, and conventional health indices are monitored. For brain disorders, the signal is preprocessed with principal component analysis, features of common brain-disorder signals are extracted with the Hurst exponent and a detrended-fluctuation index, and automatic detection is realized with an SVM (support vector machine). For monitoring mental workload, changes in the user's workload state are studied using the Alpha waves of channels Fz and F4 and the Theta waves of channels Fz and F3 as indices. The signals are processed with B-Alert Lab analysis software: the power spectral density is calculated, an FFT is computed with a Kaiser window derived from the modified zeroth-order Bessel function, an effective PSD is obtained through calculation and correction, and the PSD is processed after EEG artifacts are removed. All indices are considered together to obtain the user's psychological state and judge the user's adaptation to VR.
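The Kaiser-window PSD computation can be sketched with only numpy as a Welch-style estimate. The beta value, segment length, and band definitions are assumptions; B-Alert Lab's exact processing is proprietary and not reproduced here.

```python
import numpy as np

def kaiser_psd(x, fs, nperseg=256, beta=14.0):
    """Welch-style one-sided PSD estimate using a Kaiser window."""
    w = np.kaiser(nperseg, beta)
    step = nperseg // 2
    segs = [x[i:i + nperseg] * w for i in range(0, len(x) - nperseg + 1, step)]
    scale = fs * np.sum(w ** 2)
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0) / scale
    psd[1:-1] *= 2.0                    # fold negative frequencies
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, psd

fs = 256
t = np.arange(0, 8, 1 / fs)
# synthetic trace: a 10 Hz alpha component stronger than a 5 Hz theta one
x = 2.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 5 * t)
freqs, psd = kaiser_psd(x, fs)
alpha_power = psd[(freqs >= 8) & (freqs <= 13)].sum()
theta_power = psd[(freqs >= 4) & (freqs < 8)].sum()
```

Band powers such as `alpha_power` and `theta_power` are the kind of indices the method tracks on channels Fz, F3, and F4 to follow mental workload.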
Further optionally, the determining, based on the demand and the health condition, a user adaptation to a current scenario and predicting a health condition trend of the user includes:
Based on hospital big data, physiological index data for various diseases are established in advance as the basis for judging the user's health and are adjusted individually according to the user's history. Health data are acquired from multiple angles (blood pressure, heart rate, vertigo, psychological state, motion, and sound) via PPG, EEG, motion, and sound sensing, and two analysis models are built, taking the user's basic characteristics and the examination indices as subjects, around the basic characteristics related to 3D vertigo, VR discomfort, acute fear, heart disease, and other sudden conditions. The two models are integrated; association rules are mined on the HANA platform; the data volume is evaluated; the user's adaptation to the current scene is calculated; and the user's health is predicted and monitored. Meanwhile, the user's data are compiled into an analysis report stored on the cloud platform for use as historical training data, which the user may view freely.
Further optionally, the adjusting authenticity and the prompting personalization are performed according to the adaptation degree, and the recording to the cloud platform includes:
First, the system judges whether the simulated-world scene is reliable according to the teaching demands and the scene's realism, takes the user's calculated tolerance as a threshold, and adjusts the realism of the simulated world. It then judges the user's health from the analysis report and, from that, the user's adaptation to VR, which serves as the basis for adjusting the realism of the user's motion perception. If all of the user's indices perform well and it is judged that the user can accept a more realistic virtual world, a correction is made according to the realism gap obtained in the previous step and the operating sensitivity is increased. If an abnormality in the user's health indices is predicted or monitored, for example the user is predicted to be fatigued and a health problem is imminent, the VR scene is adjusted in a personalized manner and an alarm may even be sent to the user. The operation is recorded in the user's profile along with the user's VR usage habits. The step comprises: the system adjusts the realism of the simulated world according to the user analysis report;
the system adjusts the reality of the simulated world according to the user analysis report, and specifically comprises the following steps:
If the user is judged unsuited to the current VR scene, motion-perception adjustment is performed: the frame speed and frame acceleration are modified individually, the VR visual-lag problem is reduced, and abnormal motion is reduced by lowering speed and reducing passive visual movement. If the user is predicted to be fatigued, the precision is lowered appropriately within the teaching requirements to reduce the user's energy consumption. If an abnormal health condition is predicted or monitored, an alarm is raised, personalized adjustment is carried out, and the user's VR usage habits are recorded. The user also retains the authority to adjust these parameters freely.
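These adjustment rules could be encoded as a small decision function like the one below. All field names, thresholds, and scaling factors are hypothetical; the patent does not specify numeric values.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    fatigue: float        # predicted fatigue, 0..1 (hypothetical scale)
    comfort: float        # comfort-model score, 0..1 (hypothetical scale)
    health_alert: bool    # abnormal PPG/EEG index detected

def adjust_scene(state, realism, frame_speed):
    """Map the monitored state to new (realism, frame_speed, alarm) values."""
    alarm = False
    if state.health_alert:
        alarm = True
        realism = min(realism, 0.3)     # fall back to a gentle scene
    elif state.fatigue > 0.7:
        realism *= 0.8                  # lower precision to cut cognitive load
    if state.comfort < 0.5:
        frame_speed *= 0.7              # damp passive visual motion
    return realism, frame_speed, alarm
```

For example, a fatigued but comfortable user keeps full frame speed while the scene precision is scaled back, whereas a health alert forces a minimal scene and an alarm.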
Further optionally, the assessing the risk of the user using VR teaching includes:
For the teaching effect, the user's sense of pleasure is judged by analyzing psychological factors in the EEG signal, the EEG before and after teaching is compared, and the teaching effect is evaluated. By extracting the user's health data, discomfort while using VR is detected quickly, and adjustment or an alarm follows promptly. During use, additional personal data are recorded to form user habits, and the user's condition is monitored point-to-point. Disease data are updated in step with modern medicine so that user discomfort is discovered in time. The user's data are stored encrypted, and are accessible only for training the deep-learning algorithm and to the user personally.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the invention can better judge VR interactive teaching content and authenticity degree and judge whether the VR content is suitable for a learner or not. And (4) giving the video which must be really taught to the adapter. Recommending the VR content which is not too real to the non-adaptor, and adopting different evaluation criteria. Let VR teaching interactive, be fit for the people of various demands.
[ description of the drawings ]
FIG. 1 is a flow chart of an interactive teaching method for intelligent education according to the present invention;
FIG. 2 is a flowchart illustrating an interactive teaching method for intelligent education according to the present invention.
[ detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of an interactive teaching method for intelligent education according to the present invention. As shown in fig. 1, the interactive intelligent education teaching method of the present embodiment may specifically include:
step 101, a user logs in a VR teaching system and loads an initial scene in a personalized mode.
After the user puts on and starts the VR equipment, the user's pulse signal is collected automatically, the user's identity is recognized, the user's teaching demands are obtained, and a VR scene is loaded in a personalized manner in combination with the user's medical history on the cloud. First, the user's pulse is acquired and the health information is collected, preprocessed, and subjected to feature extraction; feature matching against a database identifies the user, and a user profile is loaded or newly created. Then a brain-computer interface obtains the user's teaching demands, a lower precision limit is calculated automatically with these demands as the standard, and the scene requirements are determined preliminarily from the user's personal health information. Finally, the Unity3D development engine and cloud computing are used to draw the three-dimensional scene under a large neural network model on the cloud and load the user's initial scene.
Collecting the biological characteristics of the user and identifying the identity of the user.
The user's identity is recognized from the PPG signal, a biometric feature. A photoplethysmographic sensor collects the PPG signal at the fingertip, the signal is transmitted over UWB, and a classifier performs the identification. The signal is preprocessed with a wavelet-threshold method to remove baseline singularities and high-frequency noise; the PPG signal is then divided into single cycles and a cycle value is extracted; the single-cycle signal is sparsely decomposed over an atom dictionary composed of Gabor atoms; and the extracted characteristic parameters are fed into a decision tree for identification.
Acquiring the demands of the user by utilizing brain-computer interface technology.
The brain-computer interface establishes an interactive system by analyzing the EEG signals, realizing direct communication between the brain and the computer and obtaining the user's demands for the drone pesticide-spraying scene. A micro/nano-fabricated electrode array collects the brain-wave signal sources, and a Butterworth filter preprocesses them to remove noise and obtain the specified signal. A common spatial patterns algorithm then builds a feature set from the signals, an artificial neural network classifies the different physiological and psychological conditions, a brain-network structure of the imagined scene is constructed, and intensity and accuracy values of the user's demands for terrain, wind speed, humidity, temperature, and soil conditions are calculated. A lower demand limit is computed automatically from these demands as a basis for scene loading. Many scholars at home and abroad have studied identity recognition based on pulse signals; the current highest recognition rate, 98.3%, is achieved by the improved matching-pursuit sparse-decomposition algorithm of Yang et al. When a new user enters VR and the pulse signal cannot be matched to any information in the user profiles, a new profile is created. The user's demands for terrain, wind speed, humidity, temperature, and soil are then obtained through the brain-computer interface, the lower precision limit is set, and the initial scene is loaded within the interval according to the user's physical condition. In this scheme, user information is acquired with the user's authorization, and the security of personal information is guaranteed by privacy-preserving computation and related technologies.
Personalized loading of the initial scene.
Based on the lower requirement limit given by the user's requirements, combined with the user's personal health state and tolerance for VR realism, an appropriate authenticity level is selected within the interval range to provide a personalized initial scene standard for the user. For loading the initial unmanned aerial vehicle pesticide-spraying scene, the influences of terrain, wind speed, humidity, temperature and soil are considered; using the Unity3D development engine and real-world data, the three-dimensional scene is drawn in a cloud computing mode under a large neural network model, and the user's initial scene is loaded. For terrain, the terrain gradient and terrain area are considered, and a DEM is adopted for simulation. When the user's requirement for terrain is higher than a preset threshold, a TIN irregular triangular model is used for representation, and a VDPM algorithm then adds and removes terrain grid cells to create the closest approximation of the real terrain. When the user's requirement for terrain is lower than the preset threshold, the original data are represented by an RSG regular grid model, and a binary-tree ROAM algorithm based on a binary triangle tree splits and merges the primitives describing the terrain surface to create the terrain. For wind speed, the wind speed and wind angle are considered, and a WRF mesoscale numerical model simulates wind-speed samples. For soil, the soil type, oxygen content, nutrients, humidity and temperature are considered; with meteorological data and vegetation data as covariate factors, regression simulation based on GBM simulates the soil conditions.
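The GBM regression simulation of soil conditions can be illustrated with scikit-learn's gradient boosting regressor. The covariates, the synthetic soil-moisture target and all hyperparameter values below are demonstration assumptions, not parameters taken from this disclosure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical covariates: [air temperature, rainfall, vegetation index, soil type code]
X = rng.uniform(0, 1, size=(500, 4))
# synthetic target: soil moisture driven mainly by rainfall and temperature
y = 0.6 * X[:, 1] - 0.3 * X[:, 0] + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)          # boosted regression trees fit the soil response
r2 = gbm.score(X_te, y_te)   # coefficient of determination on held-out data
```

A GBM handles mixed covariates (categorical soil type codes alongside continuous meteorological values) without manual feature scaling, which is one reason boosting is a common choice for this kind of environmental regression.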
Temperature is likewise simulated by the GBM method, considering the chemical liquid temperature, air temperature and light intensity in addition to the soil temperature. PPG, short for photoplethysmography, is a non-invasive detection method that detects pulse waves by optically monitoring blood volume changes in living tissue. The PPG signal, i.e., the photoplethysmographic pulse wave detected in this way, is one type of pulse signal. The decision tree needs to be trained: 21 features are extracted from the preprocessed, sparsely decomposed single-period PPG signal and substituted into the information-gain formula, and the 5 features with the largest contribution are selected as decision tree attribute nodes. Combined with the period features, the data are used for training, the classifier further partitions the features of the recognized object, and the decision tree classifier is built. In the identification process, if the identified user matches a user in the database, identification succeeds, the user is judged to be an existing user, the user profile is loaded, and the VR experience habits are obtained. If the identified user cannot be matched with any user in the database, the user is judged to be a new user, and a new leaf node and user profile are created. The module for collecting the PPG signal is mainly concentrated at the fingertip portion of a flexible printed-electronics VR glove. UWB, a wireless signal transmission technology, has the advantages of high rate and strong confidentiality and is well suited to medical treatment and medical image processing.
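The described selection of the 5 highest-contribution features out of 21 by information gain, followed by decision-tree training, can be sketched as follows. The PPG feature matrix, the user labels and the class separation are synthetic assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# hypothetical data: 300 single-period PPG samples, 21 extracted features, 3 users
X = rng.normal(size=(300, 21))
labels = rng.integers(0, 3, size=300)
# make the first five features informative about user identity
X[:, :5] += labels[:, None] * 2.0

# information gain (mutual information with the class label) ranks the 21 features
gain = mutual_info_classif(X, labels, random_state=0)
top5 = np.argsort(gain)[-5:]          # 5 highest-contribution features

# entropy criterion mirrors the information-gain splitting described in the text
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X[:, top5], labels)
acc = clf.score(X[:, top5], labels)
```

Reducing 21 features to 5 before building the tree keeps the identification step cheap enough to run at login time on a wearable device.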
In step 102, after the user logs in, the system judges in real time the reality of the simulated-world scene and the user's motion perception, and calculates the gap from the real world.
Based on a neural network that learns physical theory and the performance of unmanned aerial vehicle pesticide spraying under various influence factors, the influence factors are divided into two types. The first type is scene factors, comprising environmental factors and the user's real operation adjustment factors, where the environmental factors comprise terrain, wind speed, humidity, temperature, soil and the unmanned aerial vehicle equipment. The second type is perception factors, i.e., the user's perception during use, including visual perception and motion perception. In the judging process, the user's scene requirements for the simulated world and the perception requirements implied by the user's physical condition are recombined and mapped to the real world; a least-squares regression analysis method and an SVR algorithm are respectively used to compare with the real world and calculate the confidence. The evaluation of scene factors is obtained by evaluating the scene authenticity gap, and a user comfort evaluation model is established by analyzing the three-dimensional VR video to obtain human perception characteristics. A communication interface made of the micro-nano machined electrode array is located in the VR helmet; a large number of channels inside it realize high-speed reading and writing of information in the brain and acquire information from the user's brain.
The common spatial pattern algorithm used to construct the feature set is a filtering feature-extraction algorithm that can extract the spatial distribution components of each class from multi-channel brain-computer interface data. Its basic principle is to use matrix diagonalization to find a group of optimal spatial filters for projection, maximizing the variance difference between the two classes of signals and obtaining feature vectors with high discrimination. For example, the user's brain wave signals are acquired with the electrode array, ten channels (F3, F4, C3, C4, FZ, CZ, FC1, FC2, FC5 and FC6) are extracted and matched with the user's requirements for terrain, wind speed, humidity, temperature and soil conditions, a brain network structure is built, and the required intensity and accuracy values are calculated. Different intervals of the terrain intensity value correspond to different terrains: the higher the intensity value, the steeper the terrain; the higher the accuracy value, the smaller the error between the virtual world and the real world.
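The common spatial pattern principle described above (joint diagonalization of the two class covariance matrices so that the projected variance ratio between classes is maximized) can be sketched with a generalized eigendecomposition. The ten-channel synthetic trials, filter count and variance structure below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns. trials_* has shape (n_trials, n_channels, n_samples).
    Returns 2*n_filters spatial filters maximizing the class variance ratio."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # joint diagonalization: solve Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # the extreme eigenvectors give the most discriminative filters
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks].T

rng = np.random.default_rng(2)
n_ch, n_s = 10, 256                     # ten channels, as in the F3/F4/... example
a = rng.normal(size=(20, n_ch, n_s)); a[:, 0] *= 4.0   # class A: channel 0 strong
b = rng.normal(size=(20, n_ch, n_s)); b[:, 1] *= 4.0   # class B: channel 1 strong
W = csp_filters(a, b)
# log-variance of the filtered trial is the usual CSP feature vector
features = np.log(np.var(W @ a[0], axis=1))
```

The log-variance features are what a downstream artificial neural network would classify into the different physiological and psychological conditions.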
Calculating the reality gap of the virtual-world scene.
When calculating the gap between the simulated scene and the real world, the scene model of the simulated world is acquired through the built-in architecture of the Unity3D development engine and compared with a real-world terrain elevation map, satellite map and weather forecast: an independent component analysis method makes the user's requirements for the virtual world (i.e., the requirements for terrain, wind speed, humidity, temperature and soil conditions) independent of one another, with each requirement corresponding to a requirement branch; the features of each branch correspond, one-to-one and in combination, to features of the virtual world; a least-squares regression analysis method is then used to analyze and calculate the gap between the virtual world and the real world. For example, when the user requires a force-3 wind, the wind speed is adjusted to 12-19 km/h with reference to real-world meteorological station observations, and the scene is designed by integrating the influence of a force-3 wind on rain, vegetation, animals and unmanned aerial vehicle flight.
A DEM is a digital elevation model that realizes digital simulation of ground terrain through a finite set of terrain elevation data. The VDPM algorithm is a fully view-dependent progressive-mesh level-of-detail simplification algorithm; based on DEM elevation data on irregular grids, it reflects the geometric subdivision and gradual change of the terrain through error metrics, handles the cracks produced when drawing the terrain by forcing shared boundary vertices between partitioned terrain blocks, better realizes progressive mesh optimization, and handles complex terrain. The binary-tree ROAM algorithm is a real-time optimally adapting regular-grid algorithm that splits and merges the primitives describing the terrain surface according to a binary triangle tree to generate the terrain surface. The WRF model system is open-source meteorological simulation software widely applied in the meteorological industry and provides a large number of options for studying atmospheric processes. GBM is short for gradient boosting machine, a generalized regression model applying the boosting algorithm, which can solve both regression and classification problems.
Calculating the reality gap of the user's motion perception.
Based on how VR dizziness arises, the user's motion perception is divided into two judgment criteria: the matching degree between the vestibular and visual senses, and the synchronization degree between the VR viewing angle and the rotation of the human body. A user comfort evaluation model is established on these two criteria to obtain the user's perception characteristics: optical flow estimation is first performed on the three-dimensional VR video to calculate the horizontal and vertical motion matrices of the video frames; the video frame velocity and frame acceleration are then calculated; finally, the extracted multi-dimensional motion information of frame velocity and acceleration serves as feature vectors, and the user comfort evaluation model is established in combination with an SVR algorithm and human body actions, for subsequent comfort optimization. For example, the reality of the simulated world is judged from real-world scenes, human perception and user requirements, such as real terrain data, real scenes and abnormal motion information, and edge computing is adopted to relieve resource congestion and protect privacy. A cloud-based distributed deep neural network over a distributed computer hierarchy constructs a deep network model; scene image features of the real world, such as the topographic map, weather map and unmanned aerial vehicle equipment drawings, together with human perception features such as static vision, dynamic vision and hearing, serve as input; features of the data are extracted and classified, the condition of the real world is learned, and the neural network is trained.
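The comfort model built from frame velocity and acceleration features plus an SVR can be sketched as follows. The per-clip feature layout, the synthetic comfort scores and the SVR hyperparameters are all illustrative assumptions, not values from this disclosure.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# hypothetical per-clip motion features from optical flow:
# [mean |velocity|, max |velocity|, mean |acceleration|, max |acceleration|]
X = rng.uniform(0, 1, size=(200, 4))
# assumed ground truth: subjective comfort drops as speed and acceleration rise
y = 5.0 - 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(X[:150], y[:150])          # train on the first 150 clips
pred = model.predict(X[150:])        # predict comfort for held-out clips
```

Once trained, the same model can score each incoming VR clip in real time, and clips falling below a comfort threshold can trigger the frame-velocity adjustments described later.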
The user's demands on the virtual world are split, the features at the outlet of each demand branch are compared with the corresponding features of the real world, and information entropy is taken as the measure of confidence.
In step 103, after the user logs in, the system monitors the user's health data in real time, splits them into different dimensions, and stores them to the cloud platform.
The user's blood pressure, heart rate and respiration are monitored through pulse analysis; the user's dizziness, psychological load and mood are monitored through EEG analysis; motion is monitored with an infrared sensor, the user's head and hands are tracked through motion projection, a user motion image is drawn, and whether the user's state is abnormal is judged; sound is measured by a sound sensor, and the user's health condition is determined from the degree of voice tremor and from speech recognition. For example, soil conditions affect crop growth and the effectiveness of unmanned aerial vehicle spraying. When the soil is highly permeable to air, crops absorb nutrients more fully and absorb the chemicals better. When the water content of the soil differs, the soil's absorption rate and absorption concentration of the chemicals differ. When judging the gap between the soil conditions and the real world, the trained neural network is compared with the real world, and a least-squares regression analysis method calculates the gap for subsequent authenticity optimization.
Filtering out motion artifacts, acquiring the PPG signal, and monitoring the user's blood pressure, heart rate and respiration.
Motion artifacts are filtered out with the TROIKA framework: after user identification is completed, the PPG signal is continuously acquired, then compressed, transmitted, reconstructed and preprocessed. Singular spectrum analysis is then adopted for signal decomposition: the time series is first mapped into a trajectory matrix, and singular value decomposition, grouping and reconstruction are performed. After a high-resolution spectrum is obtained, the spectral peaks are tracked and verified to obtain a PPG signal with motion artifacts filtered out. An independent component analysis method then estimates the higher-order statistical characteristics of the signal to obtain its independent components, decomposing it into several signals related to blood pressure, heart rate and respiration, which are analyzed separately. The user's motion perception is not only an important basis for judging authenticity; its mismatch with actual motion is also an important factor affecting user comfort. For example, abnormal movements such as sharp turns make VR users more likely to feel discomfort, so adjusting motion perception for abnormal movements is a very important task. When a user makes a sharp turn, human vision registers the turn but the body does not obtain the corresponding experience, so the user easily feels dizzy; when the user turns around, squats, and so on, the VR video may lag in changing the corresponding scene, again causing dizziness. The true velocity and acceleration of the video are calculated and, as feature values of the user's motion perception of the scene, used to establish the user comfort evaluation model.
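The singular spectrum analysis step used above for artifact removal (trajectory matrix, singular value decomposition, grouping, and diagonal-averaging reconstruction) can be sketched as follows; the window length and the synthetic pulse-plus-drift signal are illustrative assumptions.

```python
import numpy as np

def ssa_decompose(x, window):
    """Singular spectrum analysis: map the series to a trajectory matrix,
    apply SVD, and reconstruct one additive component per singular value
    by diagonal (Hankel) averaging."""
    n = len(x)
    k = n - window + 1
    # trajectory matrix: entry (i, j) is x[i + j]
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)

    def diagonal_average(mat):
        out, cnt = np.zeros(n), np.zeros(n)
        for i in range(window):
            for j in range(k):
                out[i + j] += mat[i, j]
                cnt[i + j] += 1
        return out / cnt

    return np.array([diagonal_average(s[i] * np.outer(u[:, i], vt[i]))
                     for i in range(len(s))])

# a pulse-like 4 Hz rhythm corrupted by a slow motion-artifact drift
t = np.linspace(0, 4, 400)
signal = np.sin(2 * np.pi * 4 * t) + 0.8 * t
parts = ssa_decompose(signal, window=40)
```

Grouping then amounts to keeping the components that carry the pulse rhythm and discarding those dominated by the drift; summing all components reconstructs the original series exactly.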
The results of the user comfort evaluation model correlate strongly with subjective human scoring, so the model can be used to evaluate the comfort of the user's VR experience for subsequent optimization.
Acquiring EEG signals and monitoring the user's dizziness and psychological condition.
EEG signals are dynamically acquired in real time with B-Alert Live to monitor the user's health condition. The signals are then preprocessed to filter noise, an independent component analysis method is adopted, filters decompose the signals, and conventional health indexes are monitored. For brain diseases, the signals are preprocessed by principal component analysis, features of common brain-disease signals are extracted with the Hurst exponent and a detrending index, and automatic detection is realized by an SVM (support vector machine). For monitoring the user's psychological load, changes in the user's psychological load state are studied with the Alpha waves of channels Fz and F4 and the Theta waves of channels Fz and F3 as indexes. The signals are processed with B-Alert Lab analysis software to calculate power spectral density: an FFT is computed with a Kaiser window derived from the zeroth-order modified Bessel function, an effective PSD is obtained by calculation and correction, and the PSD is processed after electroencephalogram artifacts are removed. All indexes are considered together to obtain the user's psychological state and judge the user's degree of adaptation to VR. A user's discomfort with VR is reflected both physiologically and psychologically: physiologically, multi-dimensional judgment is made by monitoring the user's PPG signal, EEG signal, motion and sound; psychologically, judgment is made by monitoring the user's EEG signal. As each terminal collects the user's PPG signal, EEG signal, motion path and sound, they are transmitted to the host unit, and the information is disassembled, classified and analyzed on the cloud for real-time monitoring and a joint judgment of the user's health state.
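The Kaiser-window PSD computation and the Alpha/Theta band comparison can be sketched with `scipy.signal.welch`. The sampling rate, window beta, band edges and the synthetic two-tone EEG trace are assumptions for illustration; the disclosure does not specify them.

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                              # assumed EEG sampling rate
t = np.arange(0, 8, 1 / fs)
# synthetic channel-Fz trace: 10 Hz Alpha activity plus weaker 6 Hz Theta activity
eeg = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)

# Kaiser window (shaped by the zeroth-order modified Bessel function), beta=14
freqs, psd = welch(eeg, fs=fs, window=("kaiser", 14.0), nperseg=512)

def band_power(lo, hi):
    m = (freqs >= lo) & (freqs <= hi)
    return float(psd[m].sum())          # simple in-band power estimate

alpha = band_power(8, 13)   # Alpha band power
theta = band_power(4, 8)    # Theta band power
```

Tracking how the Alpha and Theta band powers drift over a session is the kind of index the text uses to infer psychological load.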
In step 104, judging the user's adaptation to the current scene and predicting the user's health trend based on the requirements and the health condition.
Based on hospital big data, physiological index data for various diseases are established in advance as the basis for judging the user's health, with personalized adjustment according to the user's historical data. Health data are acquired from multiple angles, including blood pressure, heart rate, dizziness, psychological state, motion and sound, via PPG, EEG, motion and sound. With basic features related to 3D dizziness, VR discomfort, acrophobia, heart disease and other sudden illnesses as the analysis angles, two analysis models are constructed, taking respectively the basic features and the user's examination indexes as subjects. The two models are integrated, association rules are mined with the HANA platform, the data volume is evaluated, the user's adaptation to the current scene is calculated, and the user's health condition is predicted and monitored. Meanwhile, the user data are compiled into an analysis report and stored on the cloud platform as historical records for training, which the user can consult freely. Singular spectrum analysis is a method for studying nonlinear time-series data: a trajectory matrix is constructed from the observed time series, then decomposed and reconstructed, thereby extracting signals representing different components of the original series and analyzing its structure. Decomposing the PPG signal enables monitoring of arterial oxygen saturation, heart rate, blood pressure and respiration.
For example, for blood-pressure monitoring, a blood pressure detection model based on the characteristic parameters of the systolic rising edge of the PPG waveform is established by the least-squares method, and blood pressure is estimated from the multi-feature parameters of the PPG signal, realizing blood pressure monitoring and early warning of abnormal blood pressure. For blood pressure prediction, a multi-parameter MIV-optimized neural network model is established. MIV (mean impact value) is a neural network algorithm that determines the degree of influence of the independent input factors.
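The least-squares blood-pressure model over multi-feature PPG parameters can be sketched as follows. The three waveform features, the linear ground truth, the 90-140 mmHg alarm range and all numeric values are hypothetical assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical per-beat PPG features: [systolic rise time, peak amplitude,
# pulse width at half height]
F = rng.uniform(0.5, 1.5, size=(120, 3))
# assumed linear ground truth for systolic blood pressure (mmHg)
true_w = np.array([-30.0, 25.0, 10.0])
sbp = F @ true_w + 110.0 + rng.normal(0, 1.0, size=120)

# least-squares fit of blood pressure on the PPG waveform features
A = np.column_stack([F, np.ones(len(F))])      # add an intercept column
w, *_ = np.linalg.lstsq(A, sbp, rcond=None)
estimate = A @ w
residual = sbp - estimate

# flag beats whose estimate leaves an assumed normal range as abnormal
alarm = (estimate > 140.0) | (estimate < 90.0)
```

The residual spread gives a rough error bar on each estimate, which is useful when deciding whether an out-of-range value should really raise an early warning.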
In step 105, authenticity adjustment and personalized reminders are performed according to the degree of adaptation and recorded to the cloud platform.
Referring to fig. 2, first, according to the teaching requirements and the reality of the simulated-world scene, the user's bearing capacity is calculated as a threshold, whether the simulated-world scene is reliable is determined, and the reality of the scene is adjusted. Then, the user's health condition is judged from the analysis report to determine the user's adaptation to VR, which serves as the basis for adjusting the authenticity of the user's motion perception. If the user's indexes perform well and the user is judged able to accept a virtual world of higher authenticity, correction is made according to the authenticity gap obtained in the previous step and the operation sensitivity is improved. If the user's health indexes are predicted or monitored to be abnormal, for example if user fatigue is predicted and a health problem is imminent, the VR scene needs personalized adjustment, and an alarm may even be sent to the user. Meanwhile, the operation is recorded to the user profile, recording the user's VR usage habits. For example, for an epileptic signal, the preprocessed brain wave signal is divided into several sub-datasets and subjected to multiple VMD decompositions; two features, RCMDE and RCMFE, are extracted from the resulting variational mode functions (VMFs) and sent to an SVM for classification, screening out focal and non-focal signals and automatically detecting the epileptic EEG signal. As for psychological load, among the frequency bands of the EEG signal, the Alpha and Theta waves change most significantly with the user's emotions, reflecting neural activities such as sensation, motor control and higher mental functions when using VR.
Meanwhile, the prefrontal cortex is related to an individual's psychological and cognitive state, and changes in oxygenated hemoglobin concentration reflect individual physiological changes well. Therefore, the fluctuations of the Alpha waves of channels Fz and F4 and the Theta waves of channels Fz and F3 are considered, and the relative change in O2Hb (oxygenated hemoglobin) concentration of the PFC (prefrontal cortex) is detected. For example, when the user is under stress, there is a significant change in the O2Hb concentration of the PFC in the right brain. The user's EEG data are examined, and the user's historical data are taken into account, to determine whether the user can bear such a psychological state. The B-Alert Live software can process and analyze EEG signals in real time, acquire the user's cognitive state and workload indexes over a period of time, obtain EEG heat maps for data visualization, and classify the user's emotional state. The Hurst exponent reflects the outcome of a long chain of interrelated events and synthesizes the trend of brain-disease signals, while detrending removes the influence of that trend in order to extract the features of brain-disease signals in the current EEG signal.
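The SVM classification of focal versus non-focal EEG segments from entropy features can be sketched as follows. The RCMDE/RCMFE values here are synthetic stand-ins (a real pipeline would compute them from VMD mode functions), and the assumed class separation, labels and SVM settings are demonstration choices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
# hypothetical entropy features per EEG segment: [RCMDE, RCMFE]
# assumption: focal (seizure-related) segments show lower entropy
focal = rng.normal(loc=[0.4, 0.5], scale=0.08, size=(100, 2))
nonfocal = rng.normal(loc=[0.8, 0.9], scale=0.08, size=(100, 2))
X = np.vstack([focal, nonfocal])
y = np.array([1] * 100 + [0] * 100)     # 1 = focal, 0 = non-focal

# scale the entropy features, then fit an RBF-kernel support vector machine
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X[::2], y[::2])                  # even rows for training
acc = svm.score(X[1::2], y[1::2])        # odd rows held out
```

In deployment, each new EEG segment's entropy pair would be passed through the same fitted pipeline, and a focal prediction would contribute to the automatic detection described above.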
The system adjusts the reality of the simulated world according to the user analysis report.
If the user is judged unsuited to the current VR scene, the user's motion perception needs adjustment: frame velocity and frame acceleration are modified in a personalized way, the VR visual lag problem is reduced, and abnormal motion is reduced by lowering speed and reducing passive visual movement. If the user is predicted to be fatigued, the accuracy is appropriately reduced according to the teaching requirements to lower the user's energy consumption. If the user's health condition is predicted or monitored to be abnormal, an alarm is raised, personalized adjustment is performed, and the user's VR usage habits are recorded. Meanwhile, the user also has the authority to adjust the parameters freely. For example, before sudden death, the most typical symptoms are acute dyspnea, sudden palpitation, dizziness, severe chest pain and a sense of impending doom. The constructed basic-feature analysis model contains three items of health data: respiration, heart rate and dizziness. Comprehensive analysis and modeling are performed by combining typical sudden-death health data from big data with the user's personal health report, association rules between disease features and examination indexes are mined, and the user's health is monitored and predicted in real time.
In step 106, evaluating the risk of the user's use of VR teaching.
For the teaching effect, the user's sense of pleasure is judged by analyzing psychological factors in the electroencephalogram signals, the difference between the electroencephalogram signals before and after teaching is computed, and the teaching effect is evaluated. By extracting the user's health data, discomfort during VR use is quickly discovered, and adjustment or an alarm follows in time. During use, more personal data are recorded, user habits are formed, and the user's condition is monitored point to point. Disease data are also updated in time in accordance with modern medicine so that user discomfort can be discovered. Meanwhile, the user's data are stored encrypted, and user data are accessed only for training the deep-learning algorithm and by the user, who alone has the authority. For example, for a sudden illness, when the three indexes of respiration, heart rate and dizziness in the user's analysis report are found abnormal and match the symptoms of sudden death, an alarm is immediately sent to remind the user, and reasonable, personalized rest guidance is provided.

Claims (7)

1. An interactive teaching method for intelligent education, which is characterized in that the method comprises the following steps:
the user logs in to the VR teaching system, and an initial scene is loaded in a personalized manner, which specifically comprises: acquiring biological characteristics of the user and identifying the user's identity, acquiring the user's requirements using brain-computer interface technology, and loading the initial scene in a personalized manner; after the user logs in, the system judges the reality of the simulated-world scene and the user's motion perception in real time and calculates the gap from the real world, which specifically comprises: calculating the reality gap of the virtual-world scene and calculating the reality gap of the user's motion perception; after the user logs in, the system monitors the user's health data in real time, splits them into different dimensions and stores them to the cloud platform, which specifically comprises: filtering motion artifacts, acquiring PPG signals and monitoring the user's blood pressure, heart rate and respiration; acquiring EEG signals and monitoring the user's dizziness and psychological condition; based on the requirements and the health condition, judging the user's adaptation to the current scene and predicting the user's health trend; performing authenticity adjustment and personalized reminders according to the degree of adaptation and recording to the cloud platform, which specifically comprises: the system adjusting the reality of the simulated world according to the user analysis report; and evaluating the risk of the user's use of VR teaching.
2. The method of claim 1, wherein the user logging in to the VR teaching system and personalized loading of an initial scene comprises:
after a user wears VR equipment and starts the VR equipment, pulse signals of the user are automatically collected, user information is identified, the requirement of the user for teaching is obtained, and a VR scene is loaded for the user in a personalized mode by combining the past medical history of the user on the cloud; firstly, acquiring the pulse condition of a user, collecting the health information of the user, preprocessing the health information and extracting characteristics, performing characteristic matching in a database to identify the identity of the user, and calling or creating a user file; then, acquiring the requirement of the user on teaching by using a brain-computer interface, automatically calculating a lower precision limit by taking the requirement as a standard, and preliminarily determining the scene requirement according to the personal health information of the user; a Unity3D development engine is utilized, a cloud computing mode is used, a three-dimensional scene is drawn under a large neural network model on the cloud, and a user initial scene is loaded; the method comprises the following steps: collecting the biological characteristics of a user and identifying the identity of the user; acquiring the requirements of a user by utilizing a brain-computer interface technology; personalized loading of an initial scene;
the collecting of the biological characteristics of the user and the identification of the user identity specifically comprise:
identifying the user's identity according to the biological features of the PPG signal; collecting PPG signals at the fingertips with a photoplethysmography pulse sensor, transmitting the signals by UWB, and performing classification and identification with a decision tree; preprocessing the signal by a wavelet threshold method, eliminating baseline singular points and high-frequency noise, then performing single-period division of the PPG signal and extracting the period value, performing sparse decomposition of the single-period signal with an atom library consisting of Gabor atoms, extracting characteristic parameters, and importing the characteristic parameters into the decision tree for identification;
the method for acquiring the user's requirements by using the brain-computer interface technology specifically includes:
establishing an interactive system by analyzing the electroencephalogram signals through a brain-computer interface, realizing direct communication between the brain and a computer, and acquiring the requirement of a user on the pesticide spraying scene of the unmanned aerial vehicle; collecting brain wave signal sources by using a micro-nano processing electrode array, preprocessing by using a Butterworth filter, filtering noise and acquiring designated signals; then, a common space mode algorithm is used for constructing a characteristic set for the signals, an artificial neuron network is used for classifying different physiological and psychological conditions, a brain network structure imagined by a scene is constructed, and strength values and accurate values of users on requirements of terrain, wind speed, humidity, temperature and soil conditions are calculated; according to the requirements, automatically calculating a lower limit of the requirements to provide basis for scene loading;
the personalized loading of the initial scene specifically comprises the following steps:
based on the lower limit given by the user's requirements, and in combination with the user's personal health state and tolerance for VR, selecting a suitable authenticity level within the interval range and providing a personalized initial-scene standard for the user; for loading the initial scene of unmanned aerial vehicle pesticide spraying, considering the influences of terrain, wind speed, humidity, temperature and soil, using the Unity3D development engine together with real-world data, rendering the three-dimensional scene under a large neural network model in a cloud-computing mode, and loading the user's initial scene; for the terrain, considering the terrain gradient and terrain area, and performing simulation with a DEM; when the user's requirement for the terrain is higher than a preset threshold, representing the terrain with a TIN irregular-triangulation model, then using the VDPM algorithm to refine and coarsen the terrain mesh, creating terrain closest to the real terrain; when the user's requirement for the terrain is lower than the preset threshold, representing the original data with an RSG regular-grid model, then using the binary-triangle-tree ROAM algorithm to split and merge the primitives describing the terrain surface, creating the terrain; for the wind speed, considering the wind speed and wind angle, and simulating wind-speed samples with the WRF mesoscale numerical model; for the soil, considering the soil type, soil oxygen content, soil nutrients, soil humidity and soil temperature, using meteorological data and vegetation data as covariate factors, and performing a GBM-based regression to simulate the soil condition; for the temperature, likewise simulating by the GBM method, considering the liquid temperature, the air temperature and the light intensity in addition to the soil temperature.
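The GBM regression step described above can be sketched as follows, using scikit-learn's GradientBoostingRegressor on synthetic data; the covariate columns, coefficients, and target (soil temperature) are illustrative assumptions, since the claim names the method but not a library or dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical covariate factors: meteorological and vegetation data.
# Columns (illustrative): [air_temp, humidity, light_intensity, vegetation_index]
X = rng.uniform(size=(300, 4))
# Synthetic target: soil temperature driven mainly by air temperature and light.
y = 12.0 + 8.0 * X[:, 0] + 3.0 * X[:, 2] + rng.normal(0.0, 0.2, size=300)

# GBM-based regression of the soil condition on the covariate factors.
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
gbm.fit(X, y)

# Simulated soil temperatures for a few scene locations.
simulated_soil_temp = gbm.predict(X[:5])
```

The same pattern would apply to the other GBM-simulated quantities (soil humidity, nutrients), swapping in the relevant target column.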
3. The method of claim 1, wherein, after the user logs in, the system determines in real time the authenticity of the simulated-world scene from the user's motion perception and calculates its gap from the real world, comprising:
learning, based on a neural network, physical theory and the behaviour of unmanned aerial vehicle pesticide spraying under various influence factors, and dividing the influence factors into two types: the first type is scene factors, comprising environmental factors and the user's real operation-adjustment factors, wherein the environmental factors comprise terrain, wind speed, humidity, temperature, soil and the unmanned aerial vehicle equipment; the second type is perception factors, namely the user's perception during use, comprising visual perception and motion perception; in the judging process, recombining the user's scene requirements on the simulated world and the perception requirements of the user's physical condition on the simulated world, mapping them to the real world, comparing them with the real world by using least-squares regression and an SVR algorithm respectively, and calculating confidence; obtaining the evaluation of the scene factors through the evaluation of the scene-authenticity gap, establishing a user-comfort evaluation model through analysis of the three-dimensional VR video, and obtaining human perception characteristics; the method comprises: calculating the authenticity gap of the virtual-world scene; and calculating the authenticity gap of the user's motion perception;
the calculating of the virtual-world scene authenticity gap specifically comprises the following steps:
when calculating the gap between the simulated scene and the real world, acquiring the scene model of the simulated world through the built-in architecture of the Unity3D development engine, and comparing it with real-world terrain elevation data, satellite imagery and weather forecasts: using independent component analysis to make the user's requirements on the virtual world, namely the requirements on terrain, wind speed, humidity, temperature and soil conditions, independent of one another, wherein each requirement corresponds to a requirement branch and the features of each branch correspond to the features of the virtual world one by one and in combination; analysing with least-squares regression, and calculating the gap between the virtual world and the real world;
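The least-squares comparison of the per-branch features can be sketched as follows; the branch feature values are hypothetical placeholders, and the RMS residual of the fit serves as the gap measure (the claim does not fix the exact metric).

```python
import numpy as np

# Hypothetical normalized feature values, one per independent demand branch
# (terrain, wind speed, humidity, temperature, soil), for the real world
# and for the simulated world.
real    = np.array([0.82, 0.40, 0.65, 0.71, 0.55])
virtual = np.array([0.80, 0.43, 0.60, 0.73, 0.52])

# Fit virtual = a * real + b by least squares; the residual measures how far
# the simulated world deviates from the real world across the branches.
A = np.column_stack([real, np.ones_like(real)])
coefs, residuals, rank, _ = np.linalg.lstsq(A, virtual, rcond=None)
reality_gap = float(np.sqrt(residuals[0] / real.size))  # RMS deviation
```

A small `reality_gap` indicates a faithful simulation; the per-branch residuals could equally be inspected individually to locate which factor (e.g. terrain) drives the gap.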
the calculating of the user motion perception authenticity gap specifically comprises the following steps:
according to the causes of VR motion sickness, dividing the user's motion perception into two judgment criteria, namely the degree of vestibular-visual matching and the degree of synchronization between the VR viewing angle and the rotation of the human body; establishing a user-comfort evaluation model based on the two criteria to obtain the user's perception characteristics: first performing optical-flow estimation on the three-dimensional VR video to compute the horizontal and vertical motion matrices of each video frame, then computing the frame speed and the frame acceleration, and finally, taking the extracted multi-dimensional motion information of frame speed and acceleration as a feature vector, combining it with an SVR algorithm and human body motion, and establishing the user-comfort evaluation model for subsequent comfort optimization.
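The motion-feature extraction and SVR comfort model above can be sketched as follows; the per-frame flow values and the comfort labels are synthetic stand-ins for the optical-flow estimates and comfort ratings the claim assumes.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical per-frame mean optical-flow components (u, v) for a VR video;
# in practice these come from optical-flow estimation on consecutive frames.
flow = rng.normal(scale=0.5, size=(120, 2))

speed = np.linalg.norm(flow, axis=1)        # frame speed
accel = np.diff(speed, prepend=speed[0])    # frame acceleration
features = np.column_stack([speed, accel])  # multi-dimensional motion features

# Synthetic comfort labels: faster, jerkier motion -> lower comfort.
comfort = 1.0 / (1.0 + speed + np.abs(accel))

# SVR comfort model over the extracted motion feature vectors.
model = SVR(kernel="rbf", C=1.0).fit(features, comfort)
predicted = model.predict(features[:3])
```

The trained model then predicts a comfort score for new frame sequences, which downstream steps can use for comfort optimization.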
4. The method of claim 1, wherein, after the user logs in, the system monitors human health data in real time, splits the data into different dimensions, and stores them on the cloud platform, comprising:
monitoring the user's blood pressure, heart rate and respiration through pulse analysis; monitoring the user's dizziness, mental load and mood through EEG analysis; monitoring motion with an infrared sensor, tracking the user's head and hands through motion projection, drawing a user motion profile, and judging whether the user's state is abnormal; measuring sound with an acoustic sensor, and judging the user's health from the degree of voice tremor and from speech recognition; the method comprises: filtering out motion artifacts, acquiring the PPG signal, and monitoring the user's blood pressure, heart rate and respiration; acquiring the EEG signal, and monitoring the user's dizziness and psychological condition;
the filtering out of motion artifacts to acquire the PPG signal and the monitoring of the user's blood pressure, heart rate and respiration specifically comprise:
filtering out motion artifacts with the TROIKA framework: after user identification is completed, continuously acquiring PPG signals and performing compression, transmission, reconstruction and preprocessing; then decomposing the signals with singular spectrum analysis: first mapping the time series into a trajectory matrix, then performing singular value decomposition, grouping and recombination; after a high-resolution spectrum is obtained, tracking and verifying the spectral peaks to obtain a PPG signal with motion artifacts filtered out; then estimating the higher-order statistical characteristics of the signal with independent component analysis to obtain its independent components, decomposing out the signals related to blood pressure, heart rate and respiration, and analysing each separately;
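The singular-spectrum-analysis decomposition above (trajectory matrix, SVD, grouping, recombination by diagonal averaging) can be sketched on a synthetic pulse-like signal; the window length, retained rank, and signal are illustrative choices, not values from the claim.

```python
import numpy as np

def ssa_filter(x, window=50, rank=2):
    """Singular spectrum analysis: map the series to a trajectory (Hankel)
    matrix, take the SVD, keep the leading components (grouping), and
    diagonally average back into a 1-D series (recombination)."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 400, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)              # idealised pulse-like waveform
noisy = clean + 0.4 * rng.normal(size=t.size)  # motion-artifact-like noise
denoised = ssa_filter(noisy)
```

A periodic component occupies only the leading singular components of the trajectory matrix, so the rank-truncated reconstruction suppresses broadband artifact energy.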
the acquiring of the EEG signal and monitoring of the dizziness and psychological conditions of the user specifically include:
acquiring EEG signals dynamically in real time with B-Alert Live and monitoring the user's health; then preprocessing and filtering out noise, decomposing the signals with independent component analysis and filtering, and monitoring conventional health indexes; for brain diseases, preprocessing the signals with principal component analysis, extracting the features of common brain-disease signals with the Hurst exponent and a detrending index, and realizing automatic detection with an SVM; for monitoring the user's mental load, studying changes in the user's mental-load state using the Alpha waves of channels Fz and F4 and the Theta waves of channels Fz and F3 as indexes; processing the signals with B-Alert Lab analysis software, calculating the power spectral density by FFT with a Kaiser window derived from the zeroth-order modified Bessel function, obtaining an effective PSD through calculation and correction, and processing the PSD after removing EEG artifacts; and considering all indexes comprehensively to obtain the user's psychological state and judge the user's degree of adaptation to VR.
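The Kaiser-window PSD computation can be sketched with `scipy.signal.welch`; the sampling rate, Kaiser beta, and the synthetic alpha/theta mixture are assumptions for illustration, since the claim specifies the window family but no parameters.

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                      # assumed EEG sampling rate
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(3)

# Synthetic EEG-like channel: 10 Hz alpha plus 6 Hz theta plus noise.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 6 * t)
       + 0.2 * rng.normal(size=t.size))

# Welch PSD with a Kaiser window (shaped by the zeroth-order modified
# Bessel function; beta = 8.6 is a common sidelobe/resolution trade-off).
freqs, psd = welch(eeg, fs=fs, window=("kaiser", 8.6), nperseg=1024)

# Dominant spectral peak; band powers around Fz/F4 alpha and Fz/F3 theta
# would be integrated from `psd` in the same way.
alpha_peak_hz = freqs[np.argmax(psd)]
```

Band power for the alpha and theta indexes would then be obtained by integrating `psd` over 8-13 Hz and 4-8 Hz respectively.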
5. The method of claim 1, wherein the determining of the user's degree of adaptation to the current scene and the predicting of the user's health trend according to requirements and health comprise:
pre-establishing, from hospital big data, physiological index data of various diseases as the basis for judging the user's health, and performing personalized adjustment according to the user's historical data; acquiring the user's health data from multiple angles, including blood pressure, heart rate, dizziness, psychological state, motion and voice, via PPG, EEG, motion and sound; constructing two analysis models around the basic characteristics related to 3D motion sickness, VR discomfort, acute fear, heart disease and other sudden conditions, with the user's basic characteristics and examination indexes as subjects; integrating the two models, mining association rules on the HANA platform, evaluating the data volume, calculating the user's degree of adaptation to the current scene, and predicting and monitoring the user's health; meanwhile, compiling the user data into an analysis report stored on the cloud platform for use as historical training data, which the user may freely consult.
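A minimal stand-in for the adaptation-degree calculation is a weighted mean over normalized health indicators. The claim derives the score from association rules mined on the HANA platform, so both the aggregation and the indicator names below are hypothetical.

```python
def adaptation_degree(indicators, weights=None):
    """Hypothetical adaptation score: weighted mean of health indicators
    normalized to [0, 1], where 1 means fully comfortable/adapted.
    A weighted mean is only a stand-in for the mined association rules."""
    if weights is None:
        weights = {name: 1.0 for name in indicators}
    total = sum(weights[name] for name in indicators)
    return sum(indicators[name] * weights[name] for name in indicators) / total

# Illustrative indicators derived from PPG/EEG/motion/voice monitoring.
score = adaptation_degree(
    {"cardiovascular": 0.9, "dizziness": 0.6, "mood": 0.8, "motion": 0.7},
    weights={"cardiovascular": 2.0, "dizziness": 2.0, "mood": 1.0, "motion": 1.0},
)
```

A score below some threshold would trigger the authenticity adjustment or alarm described in the following claims.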
6. The method according to claim 1, wherein the adjusting of authenticity and the personalized reminding according to the degree of adaptation, with recording to the cloud platform, comprise:
first, judging whether the simulated-world scene is reliable according to the teaching requirements and the authenticity of the simulated-world scene, taking the user's bearing capacity as a threshold through calculation, and adjusting the authenticity of the simulated-world scene; then judging the user's health from the analysis report, further judging the user's degree of adaptation to VR, and using it as the basis for adjusting the authenticity of the user's motion perception; if all of the user's indexes perform well and it is judged that the user can accept a virtual world of higher authenticity, correcting according to the authenticity gap obtained in the previous step and increasing the operation sensitivity; if an abnormality in the user's health indexes is predicted or monitored, for example if user fatigue is predicted and a health problem is imminent, adjusting the VR scene in a personalized manner and, if necessary, issuing an alarm to the user; meanwhile, recording the operation in the user's file and recording the user's VR usage habits; the method comprises: adjusting, by the system, the authenticity of the simulated world according to the user analysis report;
the adjusting, by the system, of the authenticity of the simulated world according to the user analysis report specifically comprises:
if it is judged that the user is not suited to the current VR scene, adjusting the user's motion perception: modifying the frame speed and frame acceleration in a personalized manner to reduce VR visual latency, and reducing abnormal motion by lowering the speed and reducing passive visual movement; if user fatigue is predicted, appropriately reducing the accuracy according to the teaching requirements to reduce the user's energy consumption; if an abnormality in the user's health is predicted or monitored, issuing an alarm, performing personalized adjustment, and recording the user's VR usage habits; meanwhile, the user also has the authority to adjust these parameters freely.
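The comfort-driven motion adjustment can be sketched as a simple scaling rule; the target comfort level and the lower scaling bound are illustrative thresholds, not values specified by the claim.

```python
def adjust_motion(frame_speed, frame_accel, comfort, target=0.7, floor=0.3):
    """Scale down frame speed and acceleration when predicted comfort falls
    below the target, reducing passive visual movement; `target` and `floor`
    are illustrative thresholds, not values from the claim."""
    if comfort >= target:
        return frame_speed, frame_accel
    scale = max(comfort / target, floor)  # never scale below the floor
    return frame_speed * scale, frame_accel * scale

# Example: a user with low predicted comfort gets halved frame motion.
speed2, accel2 = adjust_motion(1.2, 0.4, comfort=0.35)
```

A comfortable user's parameters pass through unchanged, so the rule only intervenes when the comfort model flags a problem.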
7. The method of claim 1, wherein the assessing of the risk of the user using VR teaching comprises:
for the teaching effect, judging the user's pleasure by analysing the psychological factors in the EEG signals and the difference between the EEG signals before and after teaching, and evaluating the teaching effect; by extracting the user's health data, quickly discovering discomfort in the user's use of VR and adjusting or issuing an alarm in time; during use, recording more personal data to form user habits and monitoring the user's condition point-to-point; updating disease data in time according to modern medicine to discover user discomfort in time; meanwhile, storing the user's data in encrypted form, the user data being accessible only for training the deep-learning algorithm and to the user himself.
CN202210634110.8A 2022-06-07 2022-06-07 Intelligent education interactive teaching method Active CN114999237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210634110.8A CN114999237B (en) 2022-06-07 2022-06-07 Intelligent education interactive teaching method

Publications (2)

Publication Number Publication Date
CN114999237A true CN114999237A (en) 2022-09-02
CN114999237B CN114999237B (en) 2023-09-29

Family

ID=83032995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210634110.8A Active CN114999237B (en) 2022-06-07 2022-06-07 Intelligent education interactive teaching method

Country Status (1)

Country Link
CN (1) CN114999237B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115494959A (en) * 2022-11-15 2022-12-20 四川易景智能终端有限公司 Multifunctional intelligent helmet and management platform thereof
CN115587347A (en) * 2022-09-28 2023-01-10 支付宝(杭州)信息技术有限公司 Virtual world content processing method and device
CN116453387A (en) * 2023-04-10 2023-07-18 哈尔滨师范大学 AI intelligent teaching robot control system and method
CN117610806A (en) * 2023-10-19 2024-02-27 广东清正科技有限公司 Virtual reality interactive teaching management system and method based on VR technology
CN117708571A (en) * 2024-02-06 2024-03-15 江西工业贸易职业技术学院(江西省粮食干部学校、江西省粮食职工中等专业学校) Teaching management method and system based on virtual reality

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108471451A (en) * 2018-05-28 2018-08-31 北京主场小将体育文化有限公司 A kind of campus athletic training/match sports ground/shop informationization Internet of things system
US20180260026A1 (en) * 2017-03-13 2018-09-13 Disney Enterprises, Inc. Configuration for adjusting a user experience based on a biological response
CN108883335A (en) * 2015-04-14 2018-11-23 约翰·詹姆斯·丹尼尔斯 The more sensory interfaces of wearable electronics for people and machine or person to person
US20190065970A1 (en) * 2017-08-30 2019-02-28 P Tech, Llc Artificial intelligence and/or virtual reality for activity optimization/personalization
CN111863261A (en) * 2020-07-18 2020-10-30 纽智医疗科技(苏州)有限公司 Method and system for relieving virtual reality disease through adaptive training



Similar Documents

Publication Publication Date Title
CN114999237B (en) Intelligent education interactive teaching method
US11696714B2 (en) System and method for brain modelling
Bota et al. A review, current challenges, and future possibilities on emotion recognition using machine learning and physiological signals
CN110292378B (en) Depression remote rehabilitation system based on brain wave closed-loop monitoring
Schmidt et al. Wearable affect and stress recognition: A review
US6546378B1 (en) Signal interpretation engine
KR102045569B1 (en) Appratus for controlling integrated supervisory of pilots status and method for guiding task performance ability of pilots using the same
KR102378278B1 (en) The biological signal analysis system and biological signal analysis method for operating by the system
CN113397546B (en) Method and system for constructing emotion recognition model based on machine learning and physiological signals
CN105827731A (en) Intelligent health management server, system and control method based on fusion model
CN113729707A (en) FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG
CN110600103B (en) Wearable intelligent service system for improving eyesight
CN109859570A (en) A kind of brain training method and system
Szczuko Real and imaginary motion classification based on rough set analysis of EEG signals for multimedia applications
CN111028919A (en) Phobia self-diagnosis and treatment system based on artificial intelligence algorithm
CN105212949A (en) A kind of method using skin pricktest signal to carry out culture experience emotion recognition
Zeng et al. Classifying driving fatigue by using EEG signals
Vicente-Samper et al. Data acquisition devices towards a system for monitoring sensory processing disorders
Vaitheeshwari et al. Stress recognition based on multiphysiological data in high-pressure driving VR scene
CN109727670A (en) A kind of intelligence stroke rehabilitation monitoring method and system
CN117547270A (en) Pilot cognitive load feedback system with multi-source data fusion
CN116884288A (en) Dizziness-resistant training platform and method
CN116307401A (en) Method and system for improving living history street living environment
CN116570283A (en) Perioperative patient emotion monitoring system and method
Yasemin et al. Emotional state estimation using sensor fusion of EEG and EDA

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant