CN110852284A - System for predicting user concentration degree based on virtual reality environment and implementation method - Google Patents


Info

Publication number
CN110852284A
CN110852284A
Authority
CN
China
Prior art keywords
data
user
model
unit
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911111980.1A
Other languages
Chinese (zh)
Inventor
王晓敏
张琨
柴贵山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Geruling Technology Co Ltd
Original Assignee
Beijing Geruling Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Geruling Technology Co Ltd filed Critical Beijing Geruling Technology Co Ltd
Priority to CN201911111980.1A priority Critical patent/CN110852284A/en
Publication of CN110852284A publication Critical patent/CN110852284A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/197: Matching; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a system for predicting user concentration based on a virtual reality environment, together with an implementation method thereof. The system is convenient and fast, greatly saves storage space and improves detection accuracy; it can analyze students' concentration in real time and help teachers improve their teaching.

Description

System for predicting user concentration degree based on virtual reality environment and implementation method
Technical Field
The invention relates to the field of computers, in particular to a system for predicting user concentration degree based on a virtual reality environment and an implementation method.
Background
Known concentration prediction systems based on virtual reality environments generally adopt one of two approaches. The first is based on human-eye recognition and uses images of the user's face or eyes, or video; the second is based on the user's eye-movement data and uses statistical features extracted from that data. These approaches have the following problems:
1. Images and video occupy a large amount of storage, making them unsuitable for long-duration concentration prediction.
2. Converting images and video into a numerical format that a computerized prediction system can process is time-consuming and increases the load on the system.
3. Extracting statistical features from eye-movement data is also time-consuming, and some features, such as the event (fixation, saccade or blink) corresponding to each segment of eye-movement data, cannot yet be extracted accurately at all.
Disclosure of Invention
In order to solve the technical problems, the invention provides a system for predicting user concentration degree based on a virtual reality environment, which comprises a training module, a user module, a testing module, a feedback module and a data transmission module, wherein the training module is mainly used for preprocessing data and training model parameters so as to obtain a model with optimal data fitting;
the user module is used for providing watching equipment and content for a user and collecting original data generated in the process that the user uses the equipment;
the test module is used for predicting and scoring new data generated by a user;
the feedback module is used for collecting the user's rating of how well the prediction result matches the real situation; if the score deviates too much from the real situation, the sample is stored for the next version iteration of the model;
the data transmission module is used for transmitting data among or in the modules.
Further, the training module comprises a data preprocessing unit, a model establishing unit and a model training unit. The data preprocessing unit is used for standardizing, randomly shuffling and segmenting the raw data so as to obtain data in the normalized form expected by the model;
the model building unit is used for building a specific model to obtain a model framework needing to be trained;
the model training unit is used for inputting the preprocessed eye movement data and the corresponding labels into the model and obtaining the optimal parameter solution of the model by using a specific optimization means.
Furthermore, the user module comprises a virtual reality equipment unit, a virtual reality content unit, a user self unit and a data acquisition unit;
the virtual reality equipment unit comprises a virtual reality terminal and a data acquisition sensor, provides the user with virtual reality equipment, and acquires and stores the raw data generated in the process;
the virtual reality content unit is used for providing a specific virtual reality scene for the user; within that scene, the user's eyes generate different eye-movement data depending on the content;
the user unit is the source of the generated data and is also the core of the whole module;
the data acquisition unit is used for acquiring the eye-movement data generated while a subject watches the content with the equipment, namely the gaze-point coordinates of both eyes and the pupil sizes of the left and right eyes, together with the corresponding concentration category label (high, medium or low), and for temporarily saving the data locally.
Further, the test module comprises a data preprocessing unit and a prediction scoring unit;
the data preprocessing unit is used for segmenting, cutting and standardizing the real-time data produced while the user operates the equipment, so as to obtain the standard data format required by the model;
and the prediction scoring unit is used for inputting the preprocessed data into the trained model so as to obtain the class label corresponding to the maximum probability given by the final model.
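As an illustration of the prediction scoring unit, the sketch below shows how a class label could be read off from model outputs: it applies SoftMax to a vector of logits and returns the label with the maximum probability. This is an assumption-laden example, not the patent's implementation; the label names follow the high/medium/low categories in the description.

```python
import numpy as np

LABELS = ["high", "medium", "low"]  # concentration categories from the description

def softmax(logits):
    # Subtract the maximum before exponentiating for numerical stability.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def predict_label(logits):
    """Return the class label with the maximum predicted probability."""
    probs = softmax(np.asarray(logits, dtype=float))
    return LABELS[int(np.argmax(probs))]
```

For example, hypothetical logits `[2.0, 0.5, -1.0]` would map to the label `high`.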
Further, the feedback module comprises a user feedback unit and a data storage unit;
the user feedback unit is used for providing a scoring link for a user so as to evaluate the consistency of the predicted scoring and the real situation;
the data storage unit is used for storing the data that users scored low, so as to broaden the coverage of the training data and thereby improve the model.
An implementation method for predicting user concentration based on a virtual reality environment is applied to the above system and comprises the following steps:
step one, based on the user module, collecting through the virtual reality equipment the raw eye-movement data generated while different test subjects watch content, and obtaining the users' concentration categories through means such as questionnaire feedback;
step two, based on the training module, establishing a CNN + LSTM + FC model and inputting the preprocessed data and the corresponding concentration labels into the model for training;
step three, based on the test module, preprocessing the real-time eye-movement data generated while the user operates the equipment, then inputting it into the model trained in step two to obtain the final predicted concentration category label for scoring;
step four, based on the feedback module, letting the user rate the prediction result against his or her actual concentration, and collecting and storing the data that receive low scores;
step five, feeding the data stored in step four back, together with the original data, into step two; transfer learning may be used here.
The invention has the beneficial effects that:
the invention is convenient and fast, greatly saves the storage space and improves the detection accuracy.
The invention can analyze the concentration degree of students in real time and help teachers improve teaching effects.
Drawings
FIG. 1 is a schematic of the frame structure of the system of the present invention;
FIG. 2 is a schematic flow chart of a method for implementing the present invention;
fig. 3 is a schematic diagram of data flow in the implementation method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The invention is described in further detail below:
as shown in FIG. 1, the invention provides a system for predicting user concentration based on a virtual reality environment, which comprises a user module, a training module, a testing module, a feedback module and a data transmission module.
The user module is used for providing the actual virtual reality environment for the user and generating raw data. The raw data fall into two types: one is the raw user data set required by the training module during development, which is transmitted to the training module through the data transmission module; the other is the raw data required by the test module during normal use, which is sent to the test module through the data transmission module;
the training module is used for establishing a model according to actual requirements, preprocessing the received original data set, inputting the processed data into the model for training, finally adjusting the model according to an evaluation method related to the model until the model meets the use requirements, and storing the optimal parameters of the model;
the test module is used for generating concentration degree prediction corresponding to the data according to the optimal parameters of the training module and the received data and sending the prediction result to the feedback module through the data transmission module;
the feedback module is used for obtaining the user's evaluation of the prediction accuracy and for collecting the "out-of-expectation" data for the model's next update iteration.
In this embodiment, the user module specifically includes:
the virtual reality equipment unit comprises a virtual reality terminal and a data acquisition sensor and is used for providing a visual operation interface for the user and acquiring the user's eye-movement data during use. The virtual reality terminal is a commercially available head-mounted VR headset; the data acquisition sensor is either external, connected to the headset through a data cable or Bluetooth, or integrated, embedded inside the headset;
the virtual reality content unit can be in the form of panoramic pictures, 3D videos, 3D games and the like and is used for providing a specific virtual reality environment for a user;
the user unit covers two types of users: test users, who generate the raw data required by the training module and are generally large in number; and ordinary users of the product, who generally generate only one piece of data at a time;
and the data acquisition unit is used for acquiring original data generated when the user uses the specific virtual reality content unit and temporarily storing the original data in the local.
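A minimal sketch of how the data acquisition unit might temporarily store one viewing session locally. The field names, record layout and JSON format are assumptions; the description only specifies binocular gaze-point coordinates, left/right pupil sizes and a high/medium/low label.

```python
import json
import os
import tempfile
from dataclasses import dataclass, asdict

@dataclass
class EyeSample:
    t: float            # timestamp within the session, in seconds
    gaze_left: list     # (x, y) gaze-point coordinates, left eye
    gaze_right: list    # (x, y) gaze-point coordinates, right eye
    pupil_left: float   # left pupil size
    pupil_right: float  # right pupil size

def save_session(samples, label, path):
    """Temporarily save one session's samples with its concentration label."""
    with open(path, "w") as f:
        json.dump({"label": label,
                   "samples": [asdict(s) for s in samples]}, f)

path = os.path.join(tempfile.gettempdir(), "vr_session_demo.json")
save_session([EyeSample(0.0, [0.10, 0.20], [0.12, 0.21], 3.4, 3.5)], "high", path)
```

A session file written this way could later be loaded by the training module's preprocessing step.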
In this embodiment, the training module specifically includes:
the data preprocessing unit performs data cutting and standardization. Data cutting removes the noise at the beginning and end of each recording and converts the data to the standard length accepted as model input; standardization removes the dimensional differences between features, limits the influence of outliers and extreme values, and speeds up the search for the optimal solution by gradient descent;
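The cutting and standardization just described can be sketched as follows; the trim fraction, fixed sequence length and feature count are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def preprocess(raw, seq_len=256, trim_frac=0.05):
    """Cut noisy head/tail samples, z-score each feature, fix the length."""
    x = np.asarray(raw, dtype=float)
    trim = max(1, int(len(x) * trim_frac))   # drop noise at the beginning and end
    x = x[trim:len(x) - trim]
    # Standardize per feature: removes scale differences between features,
    # damps extreme values and speeds up gradient-descent convergence.
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    if len(x) >= seq_len:                    # cut to the standard model length...
        return x[:seq_len]
    pad = np.zeros((seq_len - len(x), x.shape[1]))
    return np.vstack([x, pad])               # ...or zero-pad shorter sequences
```

Zero-padding short sequences is one common convention; masking or discarding them would be equally valid choices here.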
the model establishing unit is used for establishing a CNN + LSTM + FC neural network based on the Python language and the PyTorch framework; for example, a 3-layer CNN extracts deep features from the raw data, a double-layer bidirectional LSTM consumes the data points in time order, the result is passed through one or two FC layers, and SoftMax is applied at the end;
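A sketch of that topology in PyTorch. The channel widths, kernel sizes and hidden size are assumptions; the patent fixes only the 3-layer CNN, double-layer bidirectional LSTM, one or two FC layers and final SoftMax (applied here inside the loss rather than the network, as is idiomatic in PyTorch).

```python
import torch
import torch.nn as nn

class CnnLstmFc(nn.Module):
    """3-layer 1D CNN -> 2-layer bidirectional LSTM -> FC head (illustrative)."""
    def __init__(self, in_features=6, n_classes=3, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(             # deep features of the raw sequence
            nn.Conv1d(in_features, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, features)
        z = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # conv over the time axis
        z, _ = self.lstm(z)                   # consume points in time order
        return self.fc(z[:, -1])              # logits for high/medium/low

model = CnnLstmFc()
logits = model(torch.randn(4, 256, 6))        # 4 sequences of 256 timesteps
```

Taking the last LSTM timestep before the FC head is one common pooling choice; mean-pooling over time would also fit the described architecture.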
and the model training unit selects a suitable loss function, optimizer and model evaluation method, such as a CrossEntropy loss function, the Adam optimizer and accuracy evaluation, then inputs the data and the corresponding concentration categories into the model and trains until the evaluation criterion is reached.
The implementation method of the system for predicting the user concentration based on the virtual reality environment comprises the following specific steps as shown in FIG. 2:
Step one: based on the user module, the raw eye-movement data generated while different test subjects watch content are acquired through the virtual reality equipment; the raw data comprise common eye-movement features such as the gaze-point coordinates of both eyes and the pupil sizes of the left and right eyes. The concentration category of each user's actual viewing session is then obtained through questionnaires, user rating feedback and similar means. The resulting training data set has the form D = {(x1, y1), (x2, y2), …, (xi, yi), …, (xn, yn)}, where xi denotes the raw eye-movement data of the i-th subject and yi is the corresponding concentration label; the data set is transmitted to the training module through the data transmission module.
Step two: based on the training module, a model (denoted model) is established and trained. The specific flow is:
Input: D = {(x1, y1), (x2, y2), …, (xi, yi), …, (xn, yn)}
Initialize the model parameters
Split the data into a training set Dtrain, a validation set Dval and a test set Dtest
for i in EPOCH:
    draw batches from Dtrain by random traversal without replacement:
        Xbatch = (x1, x2, …, xbatch)
        ybatch = (y1, y2, …, ybatch)
        predictions = model(Xbatch)
        loss = CrossEntropy(predictions, ybatch)
        update the model parameters with an optimizer such as Adam or SGD
    compute the accuracy on the test set and keep the model parameters with the highest accuracy
Load the model parameters with the highest accuracy
Compute the accuracy on the validation set; if it reaches the required standard, model training is finished.
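The training flow above can be sketched as a runnable PyTorch loop. The synthetic tensors stand in for the preprocessed eye-movement data set, the linear stand-in model replaces the CNN + LSTM + FC network, and all hyperparameters (epochs, batch size, learning rate) are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(120, 32, 6)             # 120 sequences, 32 timesteps, 6 features
y = torch.randint(0, 3, (120,))         # labels: 0 = high, 1 = medium, 2 = low

# A stand-in model; the CNN + LSTM + FC network would slot in here.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 6, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()         # applies SoftMax internally

Xtr, ytr, Xev, yev = X[:96], y[:96], X[96:], y[96:]
best_acc, best_state = -1.0, None
for epoch in range(5):
    perm = torch.randperm(len(Xtr))     # random traversal without replacement
    for i in range(0, len(perm), 16):
        idx = perm[i:i + 16]
        loss = loss_fn(model(Xtr[idx]), ytr[idx])
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():               # keep the best checkpoint by accuracy
        acc = (model(Xev).argmax(1) == yev).float().mean().item()
    if acc > best_acc:
        best_acc = acc
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
model.load_state_dict(best_state)       # import the highest-accuracy parameters
```

In a real run the held-out split used for checkpoint selection and the split used for the final pass/fail decision would be kept separate, as the patent's flow suggests.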
Step three: based on the test module, the received raw user data are fed into the trained model, and the result obtained by the model is returned to the display of the user's virtual reality equipment;
step four: based on the feedback module, the user first scores the model's prediction against his or her real situation through an interactive interface; then the low-scoring data are collected and summarized according to the scores, compared with the original test data and, once confirmed not to be malicious scoring, stored in the data storage unit of the feedback module for later update iterations of the model parameters;
step five: when needed, the collected low-scoring data are combined with the original data set into a new training set and step two is performed again, so that the whole system forms a closed loop.
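One hedged reading of the step-five update: warm-start from the previously saved best parameters and fine-tune at a reduced learning rate on the merged data, which is a simple form of the transfer learning the description mentions. The file name, stand-in network and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 6, 3))  # stand-in network
# In the deployed system the previous best parameters would be loaded, e.g.
# model.load_state_dict(torch.load("best.pt")); here we reuse the current state.
state = {k: v.clone() for k, v in model.state_dict().items()}
model.load_state_dict(state)            # warm start instead of random re-init

opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # smaller LR for fine-tuning
X_new = torch.randn(16, 32, 6)   # merged old + low-score feedback data (synthetic)
y_new = torch.randint(0, 3, (16,))
loss = nn.CrossEntropyLoss()(model(X_new), y_new)
opt.zero_grad(); loss.backward(); opt.step()
```

Freezing the early CNN layers and updating only the LSTM and FC head would be another standard fine-tuning variant under the same reading.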
In the above steps, as shown in fig. 3, the data flow and the locations where each step runs are as follows:
step 1 runs on the virtual reality equipment, which generates and transmits the data: a training data set is transmitted to the local computer through the data transmission module, while real-time user data are transmitted to the server;
step 2 runs on the local computer, i.e. offline training: the training data set received from the user module is used to train the model to its optimal parameters, and the model is sent to the server through the data transmission module;
step 3 runs on the server: the real-time user data received from the user module are fed into the model, and the resulting concentration category is sent to the feedback module;
step 4 runs on the virtual reality equipment: after the user gives feedback, the equipment sends the feedback result to the server for subsequent processing.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that modifications can be made by those skilled in the art without departing from the principle of the present invention, and these modifications should be considered within the scope of the present invention.

Claims (6)

1. A system for predicting user concentration degree based on a virtual reality environment is characterized by comprising a training module, a user module, a testing module, a feedback module and a data transmission module, wherein the training module is mainly used for preprocessing data and training model parameters so as to obtain a model which is optimal in fitting to the data;
the user module is used for providing watching equipment and content for a user and collecting original data generated in the process that the user uses the equipment;
the test module is used for predicting and scoring new data generated by a user;
the feedback module is used for collecting the user's rating of how well the prediction result matches the real situation; if the score deviates too much from the real situation, the sample is stored for the next version iteration of the model;
the data transmission module is used for transmitting data among or in the modules.
2. The system for predicting user concentration based on a virtual reality environment according to claim 1, wherein the training module comprises a data preprocessing unit, a model building unit and a model training unit, the data preprocessing unit being configured to standardize, randomly shuffle and segment the raw data so as to obtain data in the normalized form expected by the model;
the model building unit is used for building a specific model to obtain a model framework needing to be trained;
the model training unit is used for inputting the preprocessed eye movement data and the corresponding labels into the model and obtaining the optimal parameter solution of the model by using a specific optimization means.
3. The system for predicting user attentiveness based on virtual reality environment of claim 1, wherein the user module includes a virtual reality device unit, a virtual reality content unit, a user's own unit, and a data acquisition unit;
the virtual reality equipment unit comprises a virtual reality terminal and a data acquisition sensor, and provides equipment of a virtual reality environment for a user and the acquisition and storage of original data in the process;
the virtual reality content unit is used for providing a specific virtual reality scene for the user; within that scene, the user's eyes generate different eye-movement data depending on the content;
the user unit is the source of the generated data and is also the core of the whole module;
the data acquisition unit is used for acquiring the eye-movement data generated while a subject watches the content with the equipment, namely the gaze-point coordinates of both eyes and the pupil sizes of the left and right eyes, together with the corresponding concentration category label (high, medium or low), and for temporarily saving the data locally.
4. The system for predicting user concentration based on virtual reality environment of claim 1, wherein the testing module comprises a data preprocessing unit and a prediction scoring unit;
the data preprocessing unit is used for cutting and standardizing real-time data in the process of using the equipment by a user in a segmented manner to obtain a standard data format required by the model;
and the prediction scoring unit is used for inputting the preprocessed data into the trained model so as to obtain the class label corresponding to the maximum probability given by the final model.
5. The system for predicting user concentration based on virtual reality environment of claim 1, wherein the feedback module comprises a user feedback unit and a data storage unit;
the user feedback unit is used for providing a scoring link for a user so as to evaluate the consistency of the predicted scoring and the real situation;
the data storage unit is used for storing the data with lower user scores so as to expand the coverage of the training data to fulfill the aim of perfecting the model.
6. An implementation method for predicting user concentration based on a virtual reality environment, wherein the implementation method is applied to the system for predicting user concentration based on a virtual reality environment according to any one of claims 1 to 5 and comprises the following steps:
step one, based on the user module, collecting through the virtual reality equipment the raw eye-movement data generated while different test subjects watch content, and obtaining the users' concentration categories through means such as questionnaire feedback;
step two, based on the training module, establishing a CNN + LSTM + FC model and inputting the preprocessed data and the corresponding concentration labels into the model for training;
step three, based on the test module, preprocessing the real-time eye-movement data generated while the user operates the equipment, then inputting it into the model trained in step two to obtain the final predicted concentration category label for scoring;
step four, based on the feedback module, letting the user rate the prediction result against his or her actual concentration, and collecting and storing the data that receive low scores;
step five, feeding the data stored in step four back, together with the original data, into step two; transfer learning may be used here.
CN201911111980.1A 2019-11-14 2019-11-14 System for predicting user concentration degree based on virtual reality environment and implementation method Pending CN110852284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911111980.1A CN110852284A (en) 2019-11-14 2019-11-14 System for predicting user concentration degree based on virtual reality environment and implementation method


Publications (1)

Publication Number Publication Date
CN110852284A 2020-02-28

Family

ID=69601788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911111980.1A Pending CN110852284A (en) 2019-11-14 2019-11-14 System for predicting user concentration degree based on virtual reality environment and implementation method

Country Status (1)

Country Link
CN (1) CN110852284A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436039A (en) * 2021-07-02 2021-09-24 北京理工大学 Student class concentration state detection method based on eye tracker in distance education
CN113870639A (en) * 2021-09-13 2021-12-31 上海市精神卫生中心(上海市心理咨询培训中心) Training evaluation method and system based on virtual reality

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108542404A (en) * 2018-03-16 2018-09-18 成都虚实梦境科技有限责任公司 Attention appraisal procedure, device, VR equipment and readable storage medium storing program for executing
CN109117711A (en) * 2018-06-26 2019-01-01 西安交通大学 Layered characteristic based on eye movement data extracts and the focus detection device and method that merge
CN109388227A (en) * 2017-08-08 2019-02-26 浙江工商职业技术学院 A method of user experience is predicted using eye movement data recessiveness
US20190171280A1 (en) * 2017-12-05 2019-06-06 Electronics And Telecommunications Research Institute Apparatus and method of generating machine learning-based cyber sickness prediction model for virtual reality content


Non-Patent Citations (1)

Title
WANG, HENG et al.: "Overview of Virtual Reality Technology and Research Progress in Its Use for Assisted Rehabilitation Therapy", Life Science Instruments (《生命科学仪器》) *


Similar Documents

Publication Publication Date Title
WO2021238631A1 (en) Article information display method, apparatus and device and readable storage medium
US11409791B2 (en) Joint heterogeneous language-vision embeddings for video tagging and search
CN105516280A (en) Multi-mode learning process state information compression recording method
CN112995652B (en) Video quality evaluation method and device
CN113835522A (en) Sign language video generation, translation and customer service method, device and readable medium
CN112541529A (en) Expression and posture fusion bimodal teaching evaluation method, device and storage medium
EP3872652A2 (en) Method and apparatus for processing video, electronic device, medium and product
US11777787B2 (en) Video-based maintenance method, maintenance terminal, server, system and storage medium
CN114120432A (en) Online learning attention tracking method based on sight estimation and application thereof
CN112188306B (en) Label generation method, device, equipment and storage medium
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
CN112768070A (en) Mental health evaluation method and system based on dialogue communication
CN111046148A (en) Intelligent interaction system and intelligent customer service robot
CN110852284A (en) System for predicting user concentration degree based on virtual reality environment and implementation method
CN115237255B (en) Natural image co-pointing target positioning system and method based on eye movement and voice
CN115188074A (en) Interactive physical training evaluation method, device and system and computer equipment
CN113282840B (en) Comprehensive training acquisition management platform
CN115147935B (en) Behavior identification method based on joint point, electronic device and storage medium
CN115543075A (en) VR teaching system with long-range interactive teaching function
CN111339878B (en) Correction type real-time emotion recognition method and system based on eye movement data
CN114565804A (en) NLP model training and recognizing system
US11361491B2 (en) System and method of generating facial expression of a user for virtual environment
CN110163043B (en) Face detection method, device, storage medium and electronic device
CN112270231A (en) Method for determining target video attribute characteristics, storage medium and electronic equipment
CN111950472A (en) Teacher grinding evaluation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200228