CN113436039A - Student class concentration state detection method based on eye tracker in distance education - Google Patents

Student class concentration state detection method based on eye tracker in distance education

Info

Publication number
CN113436039A
Authority
CN
China
Prior art keywords
model
student
state
track
state detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110754933.XA
Other languages
Chinese (zh)
Inventor
宋家强
王庆林
纪野
戴亚平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110754933.XA priority Critical patent/CN113436039A/en
Publication of CN113436039A publication Critical patent/CN113436039A/en
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Strategic Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to online courses (MOOCs), and in particular to a method for tracking students' learning states and evaluating teaching effectiveness during online teaching, belonging to the field of computer vision. An eye tracker tracks the eyeball motion trajectory in real time, and a student attention-state detection network model identifies the student's in-class state from the trajectory feature data, so that the attention state of a student during MOOC teaching can be identified accurately and in real time. Compared with traditional methods, the method is more effective and accurate at identifying students' attention states in MOOC classes.

Description

Student class concentration state detection method based on eye tracker in distance education
Technical Field
The invention relates to online courses (MOOCs), and in particular to a method for tracking students' learning states and evaluating teaching effectiveness during online teaching, belonging to the field of computer vision.
Background
In this era of information explosion, the internet has developed rapidly, human attention spans have shortened, and fragmented learning has become a dominant mode of study. Online courses have quickly won favor among learners thanks to the micro-lecture teaching format, flexible scheduling, relatively low cost, and similar advantages. However, online courses are still maturing and have shortcomings: because of the flexible schedule and the largely unsupervised format, students' learning efficiency is hard to guarantee, and students may lose focus due to insufficient self-discipline or uninteresting course content.
Given this situation, it is necessary to analyze and evaluate learners' states in order to monitor students' learning, provide an evaluation basis for relevant institutions, and give feedback to online lecturers to help educators improve teaching quality. Detecting students' attention states during MOOC (massive open online course) teaching is therefore of considerable significance for improving MOOC education and teaching.
At present, there are many achievements in detecting human behavior. Most existing methods for detecting students' in-class states during MOOC teaching record students' behavior in real time using camera-equipped devices such as computers, tablets, and mobile phones, and predict or identify the in-class state by detecting classroom behaviors. However, this approach has an obvious drawback: a student's in-class state does not depend entirely on behavioral features, and differences in classroom behavior between students lead to state misjudgments.
Disclosure of Invention
The aim of the invention is to provide an eye-tracker-based method for detecting students' classroom attention states in distance education. It uses the eye tracker's eyeball-tracking technology and an LSTM (Long Short-Term Memory) network framework to identify students' attention states during MOOC teaching. Compared with traditional methods, it is more effective and accurate at identifying students' attention states in MOOC classes.
the purpose of the invention is realized by the following technical scheme.
The eye-tracker-based student classroom attention-state detection method in distance education comprises the following steps:
Step one: build an eyeball motion trajectory data set and preprocess it.
1. During the MOOC teaching process, the subject sits in front of the screen in either a focused or an unfocused state, and the eyeball movement trajectory tracked by the eye tracker on the screen is recorded each time;
2. Extract the data information of the eyeball motion trajectories to serve as training and test sets. The data comprise the category (focused or unfocused state) and coordinate information. The coordinates are Cartesian rectangular coordinates, with the origin at the screen centre and the positive y-axis pointing straight up the computer screen; the centre of the eyeball's fixation area is taken as the trajectory coordinate point at that position;
3. The horizontal and vertical coordinates of the points obtained in item 2 are used as the two training features, effectively representing the eyeball's trajectory information at a given moment.
4. Extract N adjacent trajectory coordinates at equal time intervals as one target trajectory segment; a single point is not reused when constructing trajectories. Store the category label corresponding to each trajectory and build eyeball-motion-trajectory databases for the focused and unfocused states. Divide the data in each state's database into a training set and a test set, with more data in the training set than in the test set.
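The segmentation rule in item 4 can be sketched in Python (an illustrative sketch, not part of the patent; the function name, array shapes, and label convention are assumptions):

```python
import numpy as np

def build_trajectory_segments(coords, label, n_points):
    """Split a gaze-coordinate stream into non-overlapping segments.

    coords: (T, 2) array of (x, y) coordinates in the screen-centred
            Cartesian frame described in item 2.
    label: the class of the whole recording (e.g. 0 = focused, 1 = unfocused).
    n_points: N consecutive samples per target trajectory segment.
    Each point is used at most once, matching the rule that a single
    point is not reused when constructing trajectories.
    """
    n_segments = len(coords) // n_points
    segments = coords[: n_segments * n_points].reshape(n_segments, n_points, 2)
    segment_labels = np.full(n_segments, label)
    return segments, segment_labels
```

Any leftover points that do not fill a whole segment are simply discarded, which keeps every segment exactly N points long.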
Step two: build the student classroom attention-state detection model for the MOOC teaching process, selecting an LSTM network model as the basic network framework. The model's prediction is computed as follows:
$$
\begin{aligned}
f_t &= \sigma\left(w_f \cdot [h_{t-1}, x_t] + b_f\right)\\
i_t &= \sigma\left(w_i \cdot [h_{t-1}, x_t] + b_i\right)\\
\tilde{C}_t &= \tanh\left(w_c \cdot [h_{t-1}, x_t] + b_c\right)\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t\\
o_t &= \sigma\left(w_o \cdot [h_{t-1}, x_t] + b_o\right)\\
h_t &= o_t \odot \tanh(C_t)
\end{aligned}
\tag{1}
$$

where $x_t$ is the overall input; $f_t$ is the forget gate, $w_f$ the forget-gate weight, and $b_f$ the forget-gate bias term; $i_t$ is the input gate, $w_i$ the input-gate weight, and $b_i$ the input-gate bias term; $\tilde{C}_t$ is the candidate state (new memory information); $C_t$ is the cell state (long-term memory) and $C_{t-1}$ the cell state at the previous moment, with cell-state weight $w_c$ and bias term $b_c$; $o_t$ is the output gate, $w_o$ the output-gate weight, and $b_o$ the output-gate bias term; $h_t$ is the memory (short-term memory) and $h_{t-1}$ the memory at the previous moment; $\tanh$ is the activation function and $\sigma$ is the sigmoid function.
Equation (1) defines the recurrent cell, which extracts temporal information by sharing parameters across time steps. Unrolling the recurrent cell over time steps (equivalent to connecting several identical, parameter-sharing cells in series) forms the student attention-state detection network model, implemented as shown in Fig. 1, where $x_t$ is the position coordinate of the eyeball motion trajectory at time $t$ and serves as the model input, and $h_t$ is the cell output at time $t$; all weights and bias terms in the model are trainable parameters. After training, a sequence of eyeball trajectory coordinates at consecutive adjacent time intervals is fed into the network, which outputs the predicted category (the output is a 2×1 vector whose two elements are the predicted probabilities of the focused and unfocused classes; the class with the higher probability is the prediction), thereby detecting students' classroom attention states during MOOC teaching.
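One step of the recurrent cell in equation (1) can be sketched with NumPy (illustrative only; the parameter-dictionary layout and the concatenated input $[h_{t-1}, x_t]$ are implementation assumptions):

```python
import numpy as np

def sigmoid(z):
    # the sigma in equation (1): sigmoid function
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One application of the recurrent cell in equation (1)."""
    z = np.concatenate([h_prev, x_t])                   # [h_{t-1}, x_t]
    f_t = sigmoid(params["w_f"] @ z + params["b_f"])    # forget gate f_t
    i_t = sigmoid(params["w_i"] @ z + params["b_i"])    # input gate i_t
    c_hat = np.tanh(params["w_c"] @ z + params["b_c"])  # candidate state
    c_t = f_t * c_prev + i_t * c_hat                    # cell state C_t (long-term memory)
    o_t = sigmoid(params["w_o"] @ z + params["b_o"])    # output gate o_t
    h_t = o_t * np.tanh(c_t)                            # memory h_t (short-term)
    return h_t, c_t
```

Calling `lstm_step` once per trajectory point, carrying `h_t` and `C_t` forward, is exactly the unrolling over time steps described above.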
However, students' in-class states are uncertain and unstable. Neuroscience research shows that concentrating is actually a process in which the brain balances between focus and distraction to reach an optimal state. That is, relatively short unfocused intervals are a normal part of the overall lecture-listening process and do not affect its effectiveness, but they can cause the network model to misjudge.
Therefore, on top of this network model, the invention proposes a weight-window-based detection model that effectively suppresses the influence of the uncertainty and instability of students' in-class states on the model's predictions. Specifically:
The weight window takes matrix form. The element-wise product of the weight window and the output matrix formed from the network's outputs at N consecutive moments (the two matrices have the same dimensions) yields an intermediate matrix; summing each row of the intermediate matrix gives a 2×1 column vector as the final output. (Note that the weight window has no coupling to the LSTM network model, so it does not participate in training; it only fuses output results at prediction time.)
The weight window proposed by the invention has no coupling to the constructed student attention-state detection network model; the two are relatively independent. Through this uncoupled connection, the weight window fuses the network's outputs at adjacent moments along the time dimension, greatly improving the stability of the prediction.
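The element-wise product and row summation described for the weight window can be sketched as follows (illustrative; the function name is an assumption):

```python
import numpy as np

def fuse_with_weight_window(outputs, window):
    """Fuse the network's outputs at N consecutive moments with a weight window.

    outputs: 2 x N matrix; column t is the model's 2 x 1 prediction
             [p_focused, p_unfocused] at moment t.
    window:  2 x N weight matrix of the same dimensions; each row sums to 1
             so the fused result remains a probability vector.
    The element-wise product gives the intermediate matrix; summing each
    row yields the final 2 x 1 output, as the text describes.
    """
    intermediate = outputs * window   # element-wise product
    return intermediate.sum(axis=1)   # row sums: final 2-element prediction
```

A uniform window such as `np.full((2, N), 1/N)` simply averages the N predictions, so a brief unfocused moment is outvoted by the surrounding focused ones; unequal columns could weight recent moments more heavily.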
Train the LSTM network with the training set obtained in step one, item 4: take trajectory coordinate sets as the input of the LSTM network model and the label corresponding to each trajectory, i.e., the predicted category, as the output; compute the cross-entropy function against the true label values; iterate repeatedly with an adaptive optimization algorithm to minimize the cross-entropy, gradually updating the weights to obtain the final model. Validate the model with the test set obtained in step one, item 4. If the model passes, proceed to the next step; if not, repeat step one to collect more data, or change the number of unrolled recurrent cells, the training parameters, and so on, and retrain and retest the model.
Step three: apply the student attention-state detection model during MOOC teaching. The eye tracker acquires the eyeball motion trajectory coordinate set in real time as the input of the network trained in step two; the network outputs predictions in real time, finally realizing detection of students' classroom attention states during MOOC teaching.
Advantageous effects
1. The eye tracker tracks the eyeball motion trajectory in real time, and the student attention-state detection network model identifies the student's in-class state from the trajectory feature data, so that the attention state of a student during MOOC teaching can be identified accurately and in real time.
2. The weight window proposed by the invention has no coupling to the constructed student attention-state detection network model; the two are relatively independent. Through this uncoupled connection, the weight window connects the network's outputs at successive moments in parallel along the time step, greatly improving the stability of the prediction.
Drawings
FIG. 1 is the student attention-state detection model for MOOC classes;
FIG. 2 is the flow chart of student attention-state detection in MOOC classes;
FIG. 3 is the weight-window-based student attention-state detection model for MOOC classes.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, a method according to an embodiment of the invention for detecting students' lecture states during MOOC teaching based on LSTM (Long Short-Term Memory) will be described with reference to FIGS. 1-2; FIG. 3 is a system architecture diagram illustrating the weight-window-based version of the method according to an embodiment of the invention. As shown in FIGS. 1-3, the method comprises the following steps:
As shown in FIG. 1, the technical solution adopted to implement the above method of the invention comprises the following steps:
Step S101: construct an eyeball motion trajectory data set and preprocess it, comprising:
step S1011: the subject is in two classes of lecture states of concentration or not in the process of pretending MOOC teaching in front of the screen; tracking the corresponding movement track of the eyeballs in the screen by using an eye tracker, and storing, recording and labeling a plurality of movement tracks of the eyeballs of the testee with the time length of 3 s;
step S1012: extracting data information of eyeball motion tracks as a training set and a test set; the data information includes category (concentration status or inattention status) and coordinate information; the coordinates are Cartesian rectangular coordinates taking the center of the screen as the origin of coordinates and the direction right above the screen of the computer as the positive half axis of the y axis, and the central point of the eyeball focus area is taken as the coordinate point of the motion trail at the position;
step S1013: the recognition algorithm of the lecture listening state is carried out according to the eyeball track, the real-time requirement of application is considered, the feature dimension is selected from simple, two features of horizontal coordinates and vertical coordinates are respectively selected as training features, and the eyeball track information at a certain moment is effectively represented.
Step S1014: extract 300 trajectory coordinates at adjacent intervals of 0.01 s as one target trajectory segment, without reusing any single point when constructing trajectories; store the labels and build an eyeball-motion-trajectory database of the two attention states, with the data set for each state split into training and test sets at a ratio of 9:2;
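The 9:2 split in step S1014 can be sketched as follows (illustrative; shuffling before splitting and the `train_ratio` parameter are assumptions, since the patent only states the ratio):

```python
import random

def split_dataset(segments, train_ratio=9 / 11):
    """Shuffle one state's trajectory segments and split them 9:2.

    A 9:2 ratio means 9/11 of the segments go to the training set;
    train_ratio parameterises this.
    """
    indices = list(range(len(segments)))
    random.shuffle(indices)  # randomise which segments land in each set
    cut = round(len(segments) * train_ratio)
    train = [segments[i] for i in indices[:cut]]
    test = [segments[i] for i in indices[cut:]]
    return train, test
```

Applying this separately to the focused and unfocused databases keeps both classes represented in the training and test sets.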
Step S102: build the student attention-state detection model for the MOOC teaching process, comprising:
Step S1021: the invention uses an eye tracker to obtain real-time eyeball motion trajectory data. Because the data are collected over time, they are best modeled by a sequential neural network such as a recurrent neural network (RNN). The invention applies a deep-learning algorithm for modeling and analysis, tunes the network parameters to the optimum, judges model quality by the test results, and takes the best-parameter model for practical application. The invention thus provides a method for detecting students' lecture states during MOOC teaching based on eye-tracker eyeball tracking and an LSTM (Long Short-Term Memory) network;
Step S1022: the neural network used in the invention is the LSTM, a variant of the RNN. The LSTM is a special RNN proposed by Hochreiter to address vanishing and exploding gradients during backpropagation, i.e., the inability to retain long-term memory. The LSTM has essentially the same structure as the RNN, but the computation inside each unit is more complicated; its structure is shown in FIG. 2. Equation (1) gives the calculation:
$$
\begin{aligned}
f_t &= \sigma\left(w_f \cdot [h_{t-1}, x_t] + b_f\right)\\
i_t &= \sigma\left(w_i \cdot [h_{t-1}, x_t] + b_i\right)\\
\tilde{C}_t &= \tanh\left(w_c \cdot [h_{t-1}, x_t] + b_c\right)\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t\\
o_t &= \sigma\left(w_o \cdot [h_{t-1}, x_t] + b_o\right)\\
h_t &= o_t \odot \tanh(C_t)
\end{aligned}
\tag{1}
$$

where $x_t$ is the overall input; $f_t$ is the forget gate, $w_f$ the forget-gate weight, and $b_f$ the forget-gate bias term; $i_t$ is the input gate, $w_i$ the input-gate weight, and $b_i$ the input-gate bias term; $\tilde{C}_t$ is the candidate state (new memory information); $C_t$ is the cell state (long-term memory) and $C_{t-1}$ the cell state at the previous moment, with cell-state weight $w_c$ and bias term $b_c$; $o_t$ is the output gate, $w_o$ the output-gate weight, and $b_o$ the output-gate bias term; $h_t$ is the memory (short-term memory) and $h_{t-1}$ the memory at the previous moment; $\tanh$ is the activation function and $\sigma$ is the sigmoid function;
step S103: RNN-LSTM recurrent neural network parameter analysis, comprising:
step S1031: the selection of RNN-LSTM input layer num steps is the truncation length of the training data, which can also be considered as the length of the sequence. The number of the trace points to be recognized is represented by the recognition num _ steps of the class listening state based on the eyeball trace, the fact that the trace points can be applied to a system for real-time judgment after model training is considered, the number of the trace points is not too large, and the accuracy and the stability of a model recognition result are considered. In the current experiment, 30 uniformly distributed track points are selected as num _ steps;
step S1032: the number of the RNN-LSTM hidden layer memory neurons and the number of the hidden layer neurons are selected to have great influence on the network prediction result. If the number of neurons in the hidden layer is too small, the data cannot be well fitted, the network output result is influenced, and the expected prediction precision cannot be achieved. Conversely, if the number of hidden layer neurons is too large, it may lead to extended training times and network overfitting. It is important to select the number of neurons in the hidden layer reasonably. Current experiments used 32 and 64 hidden layer neurons, respectively;
step S104: coding of eyeball motion trail data, and detection of the class listening state of students in the MOOC teaching process is essentially to solve the two-classification problem by using a recurrent neural network RNN-LSTM. For a single target, the trajectory characteristics at time t are represented as:
X(t)=(x,y)
where (x, y) represents the coordinates of the target at time t.
Eyeball-trajectory-based lecture-state detection takes the eyeball motion trajectory features $X(t-n), \ldots, X(t-1), X(t)$ at $n$ consecutive moments as input and outputs a one-hot encoding, i.e., a one-bit-effective code. The recognition network encodes the two lecture states with a two-bit state register: each state has its own register bit, and only one bit is active at any time, with [1, 0] encoding the focused class and [0, 1] the unfocused class.
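The input-and-label packing above can be sketched as (illustrative; the function name and array layout are assumptions):

```python
import numpy as np

def encode_sample(trajectory, focused):
    """Pack consecutive gaze points and a one-hot label for the network.

    trajectory: sequence of (x, y) coordinates X(t-n), ..., X(t).
    focused: True -> [1, 0] (focused class), False -> [0, 1] (unfocused),
             matching the two-bit one-hot scheme in the text.
    """
    x = np.asarray(trajectory, dtype=float)       # shape (n, 2) model input
    y = np.array([1, 0] if focused else [0, 1])   # exactly one bit active
    return x, y
```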
Step S105: training of the student attention-state detection model for MOOC teaching. Trajectory coordinate sets are the input of the LSTM network model and the labels corresponding to the trajectories are the output; the cross-entropy against the true label values is computed and minimized by repeated iteration with an adaptive optimization algorithm, gradually updating the weights to obtain a preliminary model. The test set from step S1014 is used for model validation; if the model passes, proceed to the next step, otherwise repeat step S101 to collect more data, or change the number of unrolled recurrent cells and the training parameters, and retrain and retest the model.
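The cross-entropy objective used in this training step can be sketched as (illustrative; the softmax-on-logits formulation is an assumption, since the patent only names the cross-entropy function):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())   # shift for numerical stability
    return e / e.sum()

def cross_entropy(logits, one_hot_target):
    """Cross-entropy between the 2-class model output and the true one-hot label."""
    p = softmax(np.asarray(logits, dtype=float))
    # small epsilon keeps log() finite when a probability underflows to 0
    return -float(np.sum(one_hot_target * np.log(p + 1e-12)))
```

An adaptive optimizer (e.g. Adam) would repeatedly lower this quantity by adjusting the weights and bias terms of equation (1).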
Step S106: load a weight window on top of the network model to obtain the final eye-tracker-based student attention-state detection model, specifically:
The weight window is a 2×N matrix in which each element is assigned a weight and the weights of each row sum to 1. The N column vectors output by the network model at consecutive moments form a 2×N output matrix; the element-wise product of this output matrix and the weight window yields an intermediate matrix, which is then multiplied by an N×1 matrix whose elements are all 1, as implemented in FIG. 3, giving the output, i.e., the final prediction result.
The weight window proposed by the invention has no coupling to the constructed student attention-state detection network model; the two are relatively independent. Through this uncoupled connection, the weight window fuses the network's outputs at adjacent moments along the time dimension, greatly improving the stability of the prediction.
Step S107: apply the student attention-state detection model during MOOC teaching. The eye tracker acquires the eyeball motion trajectory coordinate set in real time as the input of the student attention-state detection network model obtained in step S106; the network outputs predictions in real time, finally realizing detection of students' classroom attention states during MOOC teaching.
The invention provides a method for detecting students' lecture states during MOOC teaching. Compared with the prior art, the motion state of the eyeballs is directly related to the student's lecture state: the eye tracker tracks the eyeball motion trajectory in real time, a weight-window-based LSTM (Long Short-Term Memory) network model is used, and the eyeball trajectory coordinates of a series of adjacent consecutive moments serve as the model input, so that the student's lecture state during MOOC teaching can be identified accurately and in real time.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above-described embodiments are merely exemplary embodiments of the present invention, which should not be construed as limiting the invention in any way, and any simple modifications, equivalent variations and modifications of the above-described embodiments according to the technical spirit of the present invention are still within the scope of the technical solution of the present invention.

Claims (3)

1. Student class concentration state detection method based on eye tracker in distance education, its characterized in that: the method comprises the following steps:
the method comprises the following steps: building an eyeball motion track data set and preprocessing the data set;
providing eyeball motion track data information of a subject as a training set and a test set; extracting N adjacent track coordinates with the same time interval as a section of target track, wherein a single point is not repeatedly used when the track is constructed; storing category labels corresponding to the tracks, and building an eyeball motion track database in a state of concentration or not; dividing the data in each state database into a training set and a test set; the number of the training sets is larger than that of the test sets;
step two, building a student class concentration state detection model in the MOOC teaching process, and selecting an LSTM network model as a basic network framework;
the weight window is not coupled with the constructed student class concentration state detection network model, and the weight window and the constructed student class concentration state detection network model are relatively independent; the weight window fuses the output results of the network models at adjacent moments in a time step through coupling-free connection;
the weight window takes the form of a matrix: an intermediate matrix is obtained as the element-wise product of the weight window and the output matrix formed by the LSTM network outputs at N consecutive moments, and the elements of each row of the intermediate matrix are then summed to obtain a 2×1 column vector as the final output result;
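Under this description, the fusion reduces to a Hadamard product followed by row sums; a sketch in numpy (shapes and the function name are assumptions):

```python
import numpy as np

def weight_window_fuse(outputs, window):
    """Fuse the 2-class LSTM outputs at N consecutive moments.

    outputs: 2 x N matrix -- one column of model output per moment.
    window:  2 x N weight window, independent of the network's own weights.
    Returns the 2 x 1 column vector of row sums of the element-wise product.
    """
    intermediate = outputs * window                  # element-wise product
    return intermediate.sum(axis=1, keepdims=True)   # row sums -> 2 x 1
```

With a uniform window of 1/N in every entry, the fusion is simply the average of the per-moment outputs; non-uniform windows let recent moments weigh more.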
training the model with the training set obtained in step one: the trajectory coordinate set is taken as the input of the LSTM network model and the label value corresponding to the trajectory, i.e. the predicted category, as the output; the predicted label is compared with the true label to compute the cross-entropy loss, which is minimized by repeated iterations of an adaptive optimization algorithm, gradually updating the weights to obtain the final model; the model is then validated with the test set obtained in step one; if it is qualified, proceed to the next step; if not, repeat step one to collect more data, or change the number of recurrent cells unrolled along the time steps, the training parameters, and the like, and retrain and retest the model;
Step three: applying the student in-class concentration state detection model during the MOOC teaching process: the eye tracker acquires the eye-movement trajectory coordinate set in real time as the input of the network model trained in step two, the network model outputs prediction results in real time, and detection of the student's in-class concentration state during MOOC teaching is thereby achieved.
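The cross-entropy objective minimized in step two can be written out explicitly for the two-class case; a minimal sketch (the adaptive optimizer itself, e.g. Adam, is an assumption and is left out):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy between the model's 2-class output and the true label.

    logits: length-2 array of raw class scores; label: 0 or 1.
    """
    z = logits - logits.max()                # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()      # softmax probabilities
    return -np.log(probs[label])             # negative log-likelihood
```

The loss is near zero when the model assigns almost all probability to the true class, and grows without bound as the true class's probability approaches zero, which is what drives the iterative weight updates.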
2. The method for detecting a student's in-class concentration state based on an eye tracker in distance education as claimed in claim 1, wherein the eye-movement trajectory data of a subject is provided as follows:
1) in front of the screen, the subject assumes each of the two lecture-attending states, concentrating or not concentrating, as during MOOC teaching; each time, the corresponding movement trajectory of the eyes on the screen, tracked by the eye tracker, is recorded;
2) data are extracted from the eye-movement trajectories to serve as the training and test sets; the data comprise the category (concentrating state or non-concentrating state) and coordinate information; the coordinates are Cartesian rectangular coordinates whose origin is the centre of the screen and whose positive y-axis points straight up the computer screen, and the centre point of the area the eyes focus on is taken as the trajectory coordinate at that position;
3) the horizontal and vertical coordinates of the points obtained in 2) serve as the training features, effectively representing the trajectory information of the eyes at a given moment.
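Eye trackers commonly report gaze in pixel coordinates with the origin at the top-left corner and y pointing down; converting to the screen-centred frame of 2) is a two-line transform (a sketch; the function name and pixel convention are assumptions):

```python
def to_screen_centered(px, py, width, height):
    """Map pixel coordinates (origin top-left, y down) to Cartesian
    coordinates with the origin at the screen centre and y pointing up."""
    x = px - width / 2.0    # shift origin to horizontal centre
    y = height / 2.0 - py   # shift origin and flip the y-axis
    return x, y
```

For a 1920x1080 screen, the pixel centre (960, 540) maps to (0, 0), and the top-right pixel corner maps to (960, 540).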
3. The method for detecting a student's in-class concentration state based on an eye tracker in distance education as claimed in claim 1, wherein step two is implemented as follows:
the gate and state equations of the LSTM unit are:

$$f_t = \sigma(w_f \cdot [h_{t-1}, x_t] + b_f)$$
$$i_t = \sigma(w_i \cdot [h_{t-1}, x_t] + b_i)$$
$$\tilde{C}_t = \tanh(w_c \cdot [h_{t-1}, x_t] + b_c)$$
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$
$$o_t = \sigma(w_o \cdot [h_{t-1}, x_t] + b_o)$$
$$h_t = o_t \odot \tanh(C_t)$$

wherein $x_t$ is the overall input; $f_t$ is the forget gate, $w_f$ the forget gate weight, $b_f$ the forget gate bias term; $i_t$ is the input gate, $w_i$ the input gate weight, $b_i$ the input gate bias term; $\tilde{C}_t$ is the candidate state, $C_t$ the cell state (long-term memory), $C_{t-1}$ the cell state at the previous moment, $w_c$ the cell state weight, $b_c$ the cell state bias term; $o_t$ is the output gate, $w_o$ the output gate weight, $b_o$ the output gate bias term; $h_t$ is the memory, $h_{t-1}$ the memory at the previous moment; $\tanh$ is the excitation function and $\sigma$ is the sigmoid function.
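A single forward step of the LSTM unit of claim 3 can be sketched in numpy (the dict layout and weight shapes are assumptions; each gate weight acts on the concatenation of the previous memory and the current input):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_cell(x_t, h_prev, c_prev, w, b):
    """One LSTM step following the gate equations of claim 3.

    w, b: dicts of weight matrices / bias vectors keyed "f", "i", "c", "o".
    """
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(w["f"] @ z + b["f"])     # forget gate
    i_t = sigmoid(w["i"] @ z + b["i"])     # input gate
    c_hat = np.tanh(w["c"] @ z + b["c"])   # candidate state
    c_t = f_t * c_prev + i_t * c_hat       # cell state (long-term memory)
    o_t = sigmoid(w["o"] @ z + b["o"])     # output gate
    h_t = o_t * np.tanh(c_t)               # memory passed to the next step
    return h_t, c_t
```

Iterating this cell over the N trajectory coordinates yields the per-moment outputs that the weight window then fuses into the final 2×1 prediction.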
CN202110754933.XA 2021-07-02 2021-07-02 Student class concentration state detection method based on eye tracker in distance education Pending CN113436039A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110754933.XA CN113436039A (en) 2021-07-02 2021-07-02 Student class concentration state detection method based on eye tracker in distance education


Publications (1)

Publication Number Publication Date
CN113436039A true CN113436039A (en) 2021-09-24

Family

ID=77758916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110754933.XA Pending CN113436039A (en) 2021-07-02 2021-07-02 Student class concentration state detection method based on eye tracker in distance education

Country Status (1)

Country Link
CN (1) CN113436039A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107929007A (en) * 2017-11-23 2018-04-20 北京萤视科技有限公司 An attention and visual capability training system and method using eye-tracking and intelligent evaluation technology
CN109117711A (en) * 2018-06-26 2019-01-01 西安交通大学 Attention detection device and method based on hierarchical feature extraction and fusion of eye-movement data
CN110673742A (en) * 2019-11-14 2020-01-10 北京格如灵科技有限公司 System and method for evaluating learning ability of students in classroom based on virtual reality
CN110852284A (en) * 2019-11-14 2020-02-28 北京格如灵科技有限公司 System for predicting user concentration degree based on virtual reality environment and implementation method
US20200364539A1 (en) * 2020-07-28 2020-11-19 Oken Technologies, Inc. Method of and system for evaluating consumption of visual information displayed to a user by analyzing user's eye tracking and bioresponse data


Similar Documents

Publication Publication Date Title
CN110478883B (en) Body-building action teaching and correcting system and method
CN107766447A (en) It is a kind of to solve the method for video question and answer using multilayer notice network mechanism
CN112328077B (en) College student behavior analysis system, method, device and medium
CN109840595A (en) A kind of knowledge method for tracing based on group study behavior feature
CN114385801A (en) Knowledge tracking method and system based on hierarchical refinement LSTM network
CN110956142A (en) Intelligent interactive training system
CN113591988A (en) Knowledge cognitive structure analysis method, system, computer equipment, medium and terminal
CN115795015A (en) Comprehensive knowledge tracking method for enhancing test question difficulty
CN116402134A (en) Knowledge tracking method and system based on behavior perception
CN113436039A (en) Student class concentration state detection method based on eye tracker in distance education
Zhang et al. Neural Attentive Knowledge Tracing Model for Student Performance Prediction
Wang et al. [Retracted] Design of Sports Training Simulation System for Children Based on Improved Deep Neural Network
Tanwar et al. Engagement measurement of a learner during e-learning: A deep learning architecture
Bajaj et al. Classification of student affective states in online learning using neural networks
CN112785039A (en) Test question answering score prediction method and related device
CN113723233A (en) Student learning participation degree evaluation method based on layered time sequence multi-example learning
Dong Educational behaviour analysis using convolutional neural network and particle swarm optimization algorithm
Hu et al. 3DACRNN Model Based on Residual Network for Speech Emotion Classification.
Yu et al. [Retracted] A Russian Continuous Speech Recognition System Based on the DTW Algorithm under Artificial Intelligence
Hung et al. Building an online learning model through a dance recognition video based on deep learning
Pinto et al. Deep Learning for Educational Data Science
Cheng et al. Metacognitive ability evaluation based on behavior sequence of online learning process
Hu et al. Research on the Application of Artificial Neural Network‐Based Virtual Image Technology in College Tennis Teaching
Chen et al. Design of Assessment Judging Model for Physical Education Professional Skills Course Based on Convolutional Neural Network and Few‐Shot Learning
Tang et al. Design and implementation of intelligent evaluation system based on pattern recognition for microteaching skills training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210924