CN113506027A - Course quality assessment and improvement method based on student visual attention and teacher behavior - Google Patents
- Publication number: CN113506027A (application CN202110849982.1A)
- Authority: CN (China)
- Prior art keywords: student, data, teacher, gazing, students
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06395—Quality analysis or management
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
Abstract
The invention discloses a course quality assessment and improvement method based on student visual attention and teacher behavior.
Description
Technical Field
The invention relates to the technical field of visual information processing, in particular to a course quality assessment and improvement method based on student visual attention and teacher behaviors.
Background
Education must attend not only to content but also to quality: the degree of students' concentration, and the correlation between where students' eyes focus and the teaching key points, are important criteria of classroom quality. From a large amount of classroom data, the classroom content that students are more inclined to notice can be analyzed, so that instructive teaching suggestions can be provided to teachers. CN106599881A, "Method, apparatus and system for determining student status", discloses obtaining data such as expression data, eye data and body data collected by an acquisition apparatus through a student testing terminal, analyzing and processing the data accordingly, and finally generating a student's status information through comprehensive analysis.
Therefore, it is desirable to provide a course quality assessment and improvement method based on student visual attention and teacher behavior that associates student attention with teacher behavior and the teaching key points, so as to assess a teacher's teaching and propose improvements to it.
Disclosure of Invention
In view of the above, the present invention provides a course quality assessment and improvement method based on student visual attention and teacher behavior, comprising the steps of:
determining courses and acquiring basic information of the courses;
acquiring pupil data and eye jump (saccade) data of the students, and pupil data, eye jump data, teaching video data and audio data of the teacher, wherein the eye jump data comprise gazing information and the eye jump's information in the time dimension and the space dimension, and the pupil data comprise gazing time and pupil size;
analyzing and converting the students' pupil data and eye jump data into student concentration degree data; screening, classifying and counting the student concentration degree data; and drawing an overall concentration distribution graph and individual concentration distribution graphs according to the distribution of each student's concentration degree data, wherein the abscissa of the overall graph is time and its ordinate the average concentration, and the abscissa of each individual graph is time and its ordinate that individual's concentration;
normalizing the students' eye jump data over different time periods to obtain the student gazing-direction information, and obtaining, by TensorFlow API framework target detection, student gazing coordinates matched with the student gazing-direction information; normalizing the teacher's eye jump data over different time periods to obtain the teacher's gazing-direction information, and obtaining, by TensorFlow API framework target detection, teacher gazing coordinates matched with the teacher's gazing-direction information;
labeling the audio and video to obtain the coordinates of the teacher's teaching key points;
comparing whether the student gazing coordinates match the coordinates of the teacher's teaching key points: if they match, the teaching quality is high; and comparing whether the student gazing coordinates match the teacher gazing coordinates: if they match, the teacher and the students share a high degree of mutual attention;
and extracting video information of a time period with high student attention according to the student attention data.
Optionally, the pupil data and eye jump data of both the students and the teacher are acquired through the eye tracker, which records the eye-movement track characteristics of the students and the teacher while they process visual information; and the teacher's class is recorded through photographic equipment to acquire the teaching video data and audio data.
Optionally, the method further includes the step of constructing a first data set and a second data set, where the first data set is used to store the students' pupil data and eye jump data, and the teacher's pupil data, eye jump data, teaching video data and audio data; the second data set is used to store the student concentration degree data, the student gazing coordinates, the teacher gazing coordinates and the teacher teaching key-point coordinates.
Optionally, the analyzing and converting the pupil data of the student and the eye jump data of the student into the student concentration degree data includes:
the student concentration degree data are judged from the ratio of the student's time-dimension eye jump information to the pupil's fixation time: the smaller this ratio, the higher the student's concentration; the larger this ratio, the lower the student's concentration.
Optionally, the eye jump data of the students over different periods are normalized to obtain the student gazing-direction information, and the student gazing coordinates matched with that information are obtained using TensorFlow API framework target detection together with the eye tracker, including:
the information of the eye jumps of the students in the space dimensions at different time intervals is normalized to obtain the information of the gazing directions of the students, and the information of the gazing directions of the students is subjected to noise reduction processing;
after noise reduction of the student gazing-direction information, based on the object-recognition application programming interface (API) in the artificial-intelligence framework TensorFlow, the ssd_mobilenet_v1_coco training model is adopted, and the formula d0 = (xA - xB, yA - yB, zA - zB) is used to establish a mapping model from A to B, obtaining the position of the viewpoint in the world image when the position of the pupil's gazing target changes, and hence the three-dimensional coordinates of the gazing position, where d0 is the distance between the object and the human eye, A is the imaging position of the gazed object in the world camera, and B is the imaging position of the object in the eye tracker;
mapping the three-dimensional image to a two-dimensional plane to obtain the two-dimensional coordinates of the gazing position using the pinhole formula s·(u, v, 1)^T = A·[R t]·(X, Y, Z, 1)^T, where s is the scaling factor, A is the camera intrinsic-parameter matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], [R t] is the camera extrinsic-parameter matrix, (u, v) and (X, Y, Z) are respectively the two-dimensional and three-dimensional coordinates of the object, X, Y, Z being the object's three-dimensional coordinates in the world camera; fx, fy, cx and cy are camera intrinsic parameters, with fx = f/dX and fy = f/dY, where f is the camera focal length and dX and dY are the unit pixel sizes along the sensor's x and y axes; cx and cy denote the optical center: the intersection of the camera's optical axis with the image plane is the image center, whose value is half the resolution; R and t are camera extrinsic parameters, R constituting a 3 × 3 rotation matrix and t a 3 × 1 translation vector;
converting the world coordinate system to a camera-based coordinate system using (x, y, z)^T = R·(X, Y, Z)^T + t, where x, y and z are the object's three-dimensional coordinates in the camera, X, Y, Z are its three-dimensional coordinates in the world camera, R is the rotation matrix and t the translation vector; u and v are the object's two-dimensional coordinates in the camera, and the acquired real-time direction information is thus converted into real-time coordinate information.
Optionally, when the camera's lens introduces a certain deformation, it is corrected using the standard radial-tangential distortion formulas u' = u(1 + k1 r^2 + k2 r^4 + k3 r^6) + 2 p1 u v + p2 (r^2 + 2 u^2) and v' = v(1 + k1 r^2 + k2 r^4 + k3 r^6) + p1 (r^2 + 2 v^2) + 2 p2 u v, where u' and v' are the corrected two-dimensional coordinates, (k1, k2, k3) and (p1, p2) are respectively the radial and tangential distortion parameters of the lens distortion, and r (with r^2 = u^2 + v^2) is an intermediate parameter.
Optionally, the TensorFlow API framework is built into the eye tracker.
Optionally, the method for acquiring the teacher gazing coordinate is the same as the method for acquiring the student gazing coordinate.
Compared with the prior art, the course quality assessment and improvement method based on the visual attention of the students and the behavior of the teachers provided by the invention at least realizes the following beneficial effects:
By acquiring the content that the teacher treats as the key points, the students' average concentration, the proportion of attentive students and the distribution of the students' gazing points, the application evaluates the teacher's teaching quality, identifies the content and teacher influences that attract the students' highest average attention, and provides improvement suggestions for the teacher's teaching.
According to the method, the students' and the teacher's attention information is acquired through the eye tracker and the teacher's class is recorded through photographic equipment; a measurement method combining TensorFlow API framework target detection with the eye tracker acquires the distance between the eyeball tracking point and the wearer in real time, thereby obtaining the user's real-time gazing information; the video content is analyzed and the eye-tracking point is associated with the eye-movement coordinates, yielding the relations between the students' attention information, the teacher's behavior and the key classroom content.
Of course, it is not necessary for any product in which the present invention is practiced to achieve all of the above-described technical effects simultaneously.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart of the course quality assessment and improvement method based on student visual attention and teacher behavior according to the present invention;
FIG. 2 is a schematic diagram of acquiring gazing coordinates;
FIG. 3 is a diagram of the basic SSD_MobileNet architecture.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Referring to figs. 1 to 3, fig. 1 is a flowchart of the course quality assessment and improvement method based on student visual attention and teacher behavior according to the present invention, fig. 2 is a schematic diagram of acquiring gazing coordinates, and fig. 3 is the basic SSD_MobileNet architecture. The method includes the following steps:
s1: determining courses and acquiring basic information of the courses;
specifically, a course is selected, and basic information of the course is obtained, for example, the basic information may include subject information of the course.
S2: acquiring pupil data and eye jump (saccade) data of the students, and pupil data, eye jump data, teaching video data and audio data of the teacher, wherein the eye jump data comprise gazing information and the eye jump's information in the time dimension and the space dimension, and the pupil data comprise gazing time and pupil size;
Optionally, the pupil data and eye jump data of both the students and the teacher are acquired through the eye tracker, which records the eye-movement track characteristics of the students and the teacher while they process visual information; the teacher's class is recorded through photographic equipment to acquire the teaching video data and audio data.
The eye tracker is an important instrument for basic psychological research and is used for recording eye movement track characteristics of people in processing visual information.
The eye tracker device acquires time-dimension information of the students' vision for analyzing the quality of the students' attention, and space-dimension information for analyzing the students' attention focus; space-dimension information of the teacher's vision is acquired for analyzing the influence of the teacher's attention on the students. Classroom video and audio information obtained by the photographic recording equipment is also utilized.
The eye tracker provides human eye-movement indices in the time dimension, including: first fixation time, i.e. the time from the start of the experiment to the first fixation on a given area of interest, which reflects the processing difficulty of that area's information; eye jump (saccade) latency, the interval from stimulus onset to the start of the eye jump, which reflects the students' information-search efficiency; and total and average fixation time, the total and average duration of fixations at a gazing point, which reflect the students' degree of attention to a given piece of teaching content and its processing difficulty, as well as the teacher's gazing time toward each student. The space-dimension indices include pupil diameter, i.e. the average pupil size while gazing at an area of interest, and gazing direction, i.e. the pupil's gazing angle, used to obtain the students' and the teacher's gazing positions.
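The time-dimension indices above can be computed from an event stream of fixations and eye jumps. The sketch below is illustrative only: `GazeEvent` and its fields are an assumed schema, not the eye tracker's actual output format.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GazeEvent:
    """One fixation or saccade event (hypothetical eye-tracker record)."""
    kind: str                  # "fixation" or "saccade"
    start: float               # seconds from the start of the session
    duration: float            # seconds
    aoi: Optional[str] = None  # area-of-interest label, if any

def first_fixation_time(events: List[GazeEvent], aoi: str) -> Optional[float]:
    """First fixation time: delay until the AOI is first fixated
    (reflects how hard the AOI's information is to locate and process)."""
    for e in events:
        if e.kind == "fixation" and e.aoi == aoi:
            return e.start
    return None

def total_and_mean_fixation(events: List[GazeEvent], aoi: str) -> Tuple[float, float]:
    """Total and average fixation duration on the AOI (reflects degree of
    attention and processing difficulty)."""
    durs = [e.duration for e in events if e.kind == "fixation" and e.aoi == aoi]
    total = sum(durs)
    return total, (total / len(durs) if durs else 0.0)
```

A real pipeline would first segment raw gaze samples into fixations and saccades (e.g. by a velocity threshold), which is outside the scope of this sketch.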
On the basis of the head-mounted eye tracker, the invention uses a measurement method combining TensorFlow API framework target detection with the eye tracker to acquire the distance between the eyeball tracking point and the wearer in real time, so as to obtain the user's real-time gazing information.
It should be noted that the method further includes the step of constructing a first data set and a second data set, where the first data set is used to store the students' pupil data and eye jump data, and the teacher's pupil data, eye jump data, teaching video data and audio data; the second data set is used to store the student concentration degree data, the student gazing coordinates, the teacher gazing coordinates and the teacher teaching key-point coordinates.
S3: analyzing and converting pupil data of students and eye jump data of the students into student concentration degree data, screening, classifying and counting the student concentration degree data, and respectively drawing a total concentration degree distribution graph and an individual concentration degree distribution graph according to the distribution of each student concentration degree data, wherein the abscissa in the total concentration degree distribution graph is time, the ordinate is an average concentration degree value, the abscissa in the individual concentration degree distribution graph is time, and the ordinate is individual concentration degree;
the step is used for analyzing the attention direction (dimension information acquired by the eye tracker) and the attention focusing condition (gazing information acquired by the eye tracker) of the student.
Optionally, the student concentration degree data are judged from the ratio of the student's time-dimension eye jump information to the pupil's fixation time: the smaller this ratio, the higher the student's concentration; the larger this ratio, the lower the student's concentration.
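One way to turn that rule into a number, sketched under the assumption that concentration is scored in [0, 1]; the document fixes only the direction of the relationship, not a formula:

```python
def concentration(saccade_time: float, fixation_time: float) -> float:
    """Concentration score in [0, 1] that decreases as the ratio of eye-jump
    (saccade) time to fixation time grows: all fixation -> 1.0 (fully
    concentrated), all saccade -> 0.0."""
    total = saccade_time + fixation_time
    return fixation_time / total if total > 0 else 0.0
```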
S4: normalizing the students' eye jump data over different time periods to obtain the student gazing-direction information, and obtaining, by TensorFlow API framework target detection, student gazing coordinates matched with the student gazing-direction information; normalizing the teacher's eye jump data over different time periods to obtain the teacher's gazing-direction information, and obtaining, by TensorFlow API framework target detection, teacher gazing coordinates matched with the teacher's gazing-direction information;
Specifically, the eye jump data of the students over different periods are normalized to obtain the student gazing-direction information; student gazing coordinates matched with that information are obtained using TensorFlow API framework target detection together with the eye tracker, and the obtained gazing-coordinate information is stored in time order. The method comprises the following steps:
normalizing the information of the eye jumps of the students in different time periods in the spatial dimension to obtain the information of the gazing directions of the students, and performing noise reduction on the information of the gazing directions of the students;
After noise reduction of the student gazing-direction information, referring to figs. 2 and 3, based on the object-recognition application programming interface (API) in the artificial-intelligence framework TensorFlow, the ssd_mobilenet_v1_coco training model is adopted, and the formula d0 = (xA - xB, yA - yB, zA - zB) is used to establish a mapping model from A to B, obtaining the position of the viewpoint in the world image when the position of the pupil's gazing target changes, and hence the three-dimensional coordinates of the gazing position, where d0 is the distance between the object and the human eye, A is the imaging position of the gazed object in the world camera, and B is the imaging position of the object in the eye tracker;
mapping the three-dimensional image to a two-dimensional plane to obtain the two-dimensional coordinates of the gazing position using the pinhole formula s·(u, v, 1)^T = A·[R t]·(X, Y, Z, 1)^T, where s is the scaling factor, A is the camera intrinsic-parameter matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], [R t] is the camera extrinsic-parameter matrix, (u, v) and (X, Y, Z) are respectively the two-dimensional and three-dimensional coordinates of the object, X, Y, Z being the object's three-dimensional coordinates in the world camera; fx, fy, cx and cy are camera intrinsic parameters, with fx = f/dX and fy = f/dY, where f is the camera focal length and dX and dY are the unit pixel sizes along the sensor's x and y axes; cx and cy denote the optical center: the intersection of the camera's optical axis with the image plane is the image center, whose value is half the resolution; R and t are camera extrinsic parameters, R constituting a 3 × 3 rotation matrix and t a 3 × 1 translation vector;
converting the world coordinate system to a camera-based coordinate system using (x, y, z)^T = R·(X, Y, Z)^T + t, where x, y and z are the object's three-dimensional coordinates in the camera, X, Y, Z are its three-dimensional coordinates in the world camera, R is the rotation matrix and t the translation vector; u and v are the object's two-dimensional coordinates in the camera, and the acquired real-time direction information is thus converted into real-time coordinate information.
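The two coordinate transforms above (world to camera via R and t, then camera to pixel via the intrinsics fx, fy, cx, cy) can be sketched in a few lines of pure Python; the numbers in the test are arbitrary, not calibration values from the document.

```python
from typing import List, Tuple

def world_to_camera(P: Tuple[float, float, float],
                    R: List[List[float]],
                    t: Tuple[float, float, float]) -> Tuple[float, float, float]:
    """Extrinsic transform: (x, y, z)^T = R (X, Y, Z)^T + t."""
    x, y, z = (sum(R[i][j] * P[j] for j in range(3)) + t[i] for i in range(3))
    return x, y, z

def project(P, R, t, fx, fy, cx, cy) -> Tuple[float, float]:
    """Pinhole model s (u, v, 1)^T = A [R t] (X, Y, Z, 1)^T with
    A = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] and s = z (the depth)."""
    x, y, z = world_to_camera(P, R, t)
    return fx * x / z + cx, fy * y / z + cy
```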
In general, the camera's lens introduces a certain deformation, which is corrected using the standard radial-tangential distortion formulas u' = u(1 + k1 r^2 + k2 r^4 + k3 r^6) + 2 p1 u v + p2 (r^2 + 2 u^2) and v' = v(1 + k1 r^2 + k2 r^4 + k3 r^6) + p1 (r^2 + 2 v^2) + 2 p2 u v, where u' and v' are the corrected two-dimensional coordinates, (k1, k2, k3) and (p1, p2) are respectively the radial and tangential distortion parameters of the lens distortion, and r (with r^2 = u^2 + v^2) is an intermediate parameter.
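These are the forward formulas of the standard radial-tangential (Brown-Conrady) model; a minimal sketch follows. Note that in practice recovering undistorted coordinates inverts this mapping numerically, and the parameter values in the test are made up.

```python
def apply_distortion(u: float, v: float,
                     k1: float, k2: float, k3: float,
                     p1: float, p2: float) -> tuple:
    """Radial + tangential lens distortion on normalized image coordinates;
    r is the intermediate radius (r^2 = u^2 + v^2)."""
    r2 = u * u + v * v
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    u2 = u * radial + 2.0 * p1 * u * v + p2 * (r2 + 2.0 * u * u)
    v2 = v * radial + p1 * (r2 + 2.0 * v * v) + 2.0 * p2 * u * v
    return u2, v2
```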
The TensorFlow API framework is built into the eye tracker.
Of course, the teacher's gaze coordinate is acquired in the same manner as the student's gaze coordinate.
SSD_MobileNet is a lightweight, low-latency, efficient embedded vision model that uses MobileNet as its base network and can meet the real-time requirements of target-object detection. Building the TensorFlow API target-detection plug-in into the eyeball-tracking system (i.e. the eye tracker) ensures real-time measurement. Fig. 3 shows the basic architecture of the SSD_MobileNet model: conv1-5 are convolutional layers of the neural network; the fully connected layers are converted into the two convolutional layers conv6 and conv7; conv8, conv9 and conv10 are further convolutional layers; and finally pool11 is an average-pooling layer. The model obtains the position and scale of the target object by regression, and uses multi-layer features to obtain the optimal actual region of the target object.
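MobileNet's efficiency comes from depthwise-separable convolution: a per-channel spatial filter followed by a 1x1 pointwise filter that mixes channels. A toy pure-Python version of one such block (valid padding, stride 1, no bias or activation; all names are ours, not the model's):

```python
def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise-separable convolution on an H x W x C input (nested lists).

    dw_kernels: C kernels, each 3x3 (one per input channel)
    pw_weights: C_out x C matrix for the 1x1 pointwise convolution
    """
    H, W, C = len(x), len(x[0]), len(x[0][0])
    Hh, Ww = H - 2, W - 2  # 'valid' padding shrinks each spatial dim by 2
    # depthwise pass: each channel convolved with its own 3x3 kernel
    dw = [[[sum(x[i + a][j + b][c] * dw_kernels[c][a][b]
                for a in range(3) for b in range(3))
            for c in range(C)]
           for j in range(Ww)]
          for i in range(Hh)]
    # pointwise pass: 1x1 convolution mixing channels
    C_out = len(pw_weights)
    return [[[sum(dw[i][j][c] * pw_weights[o][c] for c in range(C))
              for o in range(C_out)]
             for j in range(Ww)]
            for i in range(Hh)]
```

Compared with a standard convolution, this factorization cuts the multiply count roughly by a factor of the kernel area, which is why the model fits embedded, real-time settings.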
S5: labeling the audio and video to obtain the coordinates of the teacher's teaching key points;
the video information processing relates to video content analysis, video content is extracted according to coordinates, and deep learning and analysis aiming at key content are carried out, wherein the video content relates to teaching subjects and the key content can be classified through human-computer interaction, so that a more accurate training set is established.
The key coordinates here are the plane coordinates at which key knowledge appears within the video frame, comprising a timestamp and the center x-y coordinates of the key position. The teacher teaches mainly from PPT content; in the data-processing stage, the key content must be identified from the teaching speech, labeled, and used as a training set for machine learning.
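A minimal record for such labels might look like the following; the field names and the `label` field are our assumptions beyond the timestamp and center coordinates the text specifies.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TeachingKeyPoint:
    """A labeled teaching key point in the lecture video."""
    timestamp: float  # seconds into the video where the key content appears
    cx: float         # x of the key content's center in the frame plane
    cy: float         # y of the key content's center in the frame plane
    label: str = ""   # optional class assigned during human-computer labeling

def key_point_at(points: List[TeachingKeyPoint], t: float) -> Optional[TeachingKeyPoint]:
    """Most recent key point at or before time t (None if none yet),
    useful when aligning labels with time-stamped gaze data."""
    best = None
    for p in sorted(points, key=lambda p: p.timestamp):
        if p.timestamp <= t:
            best = p
    return best
```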
This scheme combines the space-dimension data of the students' gazing with the student coordinates to obtain the positions the students attend to. Through the above steps, whether the students attend to the key content taught by the teacher, and which key content the students attend to, are obtained, providing instructive suggestions for the teacher's teaching.
S6: comparing whether the student gazing coordinates match the coordinates of the teacher's teaching key points: if they match, the teaching quality is high; and comparing whether the student gazing coordinates match the teacher gazing coordinates: if they match, the teacher and the students share a high degree of mutual attention;
S7: extracting the video information of time periods with high student attention according to the student attention data.
Steps S6 and S7 may be performed in either order.
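Step S6 lends itself to a simple sketch. The distance threshold below is our assumption; the document only asks whether coordinates "match", without giving a tolerance.

```python
from typing import Sequence, Tuple

Point = Tuple[float, float]

def gaze_matches(student: Point, target: Point, tol: float = 50.0) -> bool:
    """True if the student's gazing coordinate falls within `tol` pixels of
    the target (a teaching key point, or the teacher's gazing coordinate)."""
    dx, dy = student[0] - target[0], student[1] - target[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol

def match_rate(student_track: Sequence[Point], target_track: Sequence[Point],
               tol: float = 50.0) -> float:
    """Fraction of time-aligned samples at which the gaze matched the target;
    a high rate against the key points is read as high teaching quality."""
    if not student_track:
        return 0.0
    hits = sum(gaze_matches(s, k, tol) for s, k in zip(student_track, target_track))
    return hits / len(student_track)
```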
This information is used to correlate the students' attention time with the spatial information, and to analyze the correlation between the students' attention focus and the teacher's classroom behavior (including teaching subject, blackboard-writing and PPT, teaching aids, vocal variation, and the like).
All the time-ordered data are sorted to analyze the students' attention and evaluate the teaching quality; the teacher's attention is extracted and its relation to the students' attention is analyzed. The parts of the class in which the students' concentration is generally higher are singled out for the teacher's own analysis, so that the teacher obtains the segments of higher teaching quality (i.e. those on which the students concentrated more) for study and reference; the video information of the time periods with the highest student attention is extracted for study, providing guidance suggestions for improving the teacher's teaching.
By the embodiment, the course quality assessment and improvement method based on the visual attention of the students and the behavior of the teachers provided by the invention at least achieves the following beneficial effects:
By acquiring the content that the teacher treats as the key points, the students' average concentration, the proportion of attentive students and the distribution of the students' gazing points, the application evaluates the teacher's teaching quality, identifies the content and teacher influences that attract the students' highest average attention, and provides improvement suggestions for the teacher's teaching.
In this method, the students' attention information about the teacher is acquired through eye trackers, the teacher's class is recorded with photographic equipment, and the distance between the gazed point and the wearer is acquired in real time by a measurement method combining object detection based on the TensorFlow API framework with the eye tracker, thereby obtaining the user's real-time gazing information; the video content is analyzed, the gaze points are associated with the eye-movement coordinates, and the relation between the students' attention information and the teacher's behavior and the key classroom content is obtained.
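The evaluation quantities described above (average concentration per time period, proportion of attentive students, and the highest-attention time period to extract for study) can be sketched as follows; the data layout and the 0.6 attentiveness threshold are illustrative assumptions, not part of the patent:

```python
# Hypothetical per-student concentration time series (rows: students,
# columns: time bins); values in [0, 1], as produced by the
# saccade/fixation-ratio analysis described in the claims.
concentration = [
    [0.9, 0.8, 0.4, 0.7],
    [0.6, 0.9, 0.5, 0.8],
    [0.3, 0.7, 0.2, 0.9],
]

ATTENTIVE_THRESHOLD = 0.6  # assumed cutoff for "concentrated attention"

n_students = len(concentration)
n_bins = len(concentration[0])

# Average concentration per time bin (the overall distribution graph).
avg_concentration = [
    sum(row[t] for row in concentration) / n_students for t in range(n_bins)
]

# Proportion of attentive students per time bin.
attentive_ratio = [
    sum(row[t] >= ATTENTIVE_THRESHOLD for row in concentration) / n_students
    for t in range(n_bins)
]

# The time bin with the highest average attention is the candidate
# video segment to extract for the teacher's study.
best_bin = max(range(n_bins), key=lambda t: avg_concentration[t])
```

In this toy data the second time bin has the highest average concentration, so its video segment would be extracted for review.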
Although some specific embodiments of the present invention have been described in detail by way of examples, it should be understood by those skilled in the art that the above examples are for illustrative purposes only and are not intended to limit the scope of the present invention. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.
Claims (8)
1. A course quality assessment and improvement method based on student visual attention and teacher behavior is characterized by comprising the following steps:
determining courses and acquiring basic information of the courses;
obtaining pupil data and eye jump (saccade) data of the students, and pupil data, eye jump data, teaching video data and audio data of the teacher, wherein the eye jump data comprise gazing information and information of the eye jumps in the time and space dimensions, and the pupil data comprise gazing duration and pupil size;
analyzing and converting the students' pupil data and eye jump data into student concentration data; screening, classifying and counting the student concentration data, and respectively drawing an overall concentration distribution graph and individual concentration distribution graphs according to the distribution of each student's concentration data, wherein in the overall concentration distribution graph the abscissa is time and the ordinate is the average concentration, and in each individual concentration distribution graph the abscissa is time and the ordinate is that individual's concentration;
normalizing the students' eye jump data over different time periods to obtain the students' gazing orientation information, and obtaining, through object detection based on the TensorFlow API framework, student gazing coordinates matched with the students' gazing orientation information; normalizing the teacher's eye jump data over different time periods to obtain the teacher's gazing orientation information, and obtaining, through object detection based on the TensorFlow API framework, teacher gazing coordinates matched with the teacher's gazing orientation information;
labeling the audio and video to obtain the coordinates of the teacher's teaching key points;
comparing whether the student gazing coordinates match the coordinates of the teacher's teaching key points; if they match, the teaching quality is high; comparing whether the student gazing coordinates match the teacher gazing coordinates; if they match, the teacher and the students have a high degree of mutual attention;
and extracting video information of a time period with high student attention according to the student attention data.
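The coordinate-matching step of claim 1 can be sketched as a simple distance test; the tolerance radius and the aggregation of matches into a quality score are illustrative assumptions, not specified in the patent:

```python
import math

def coords_match(gaze_xy, target_xy, tolerance_px=50.0):
    """Return True if a gaze point falls within a tolerance radius of a target."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    return math.hypot(dx, dy) <= tolerance_px

def teaching_quality_signal(student_gazes, key_point_xy, tolerance_px=50.0):
    """Fraction of student gaze points matching the teacher's key-point
    coordinate; a higher fraction indicates higher teaching quality."""
    if not student_gazes:
        return 0.0
    hits = sum(coords_match(g, key_point_xy, tolerance_px) for g in student_gazes)
    return hits / len(student_gazes)

# Hypothetical example: three student gaze points vs. a blackboard key point.
ratio = teaching_quality_signal([(100, 200), (110, 195), (400, 50)], (105, 198))
```

The same test applied to teacher gaze coordinates versus student gaze coordinates would give the mutual-attention comparison of claim 1.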
2. The course quality assessment and improvement method based on student visual attention and teacher behavior according to claim 1, wherein the students' pupil data and eye jump data are obtained through eye trackers, the teacher's pupil data and eye jump data are obtained through an eye tracker, and the eye trackers record the eye-movement trajectory characteristics of the students and the teacher as they process visual information;
and the teacher's class is recorded through the photographic equipment to acquire the teaching video data and audio data.
3. The course quality assessment and improvement method based on student visual attention and teacher behavior as claimed in claim 1, further comprising a step of constructing a first data set and a second data set, wherein the first data set is used for storing the students' pupil data and eye jump data, the teacher's pupil data and eye jump data, and the teaching video data and audio data; the second data set is used for storing the student concentration data, the student gazing coordinates, the teacher gazing coordinates, and the teacher's teaching key-point coordinates.
4. The method as claimed in claim 1, wherein the step of analyzing and converting the students' pupil data and eye jump data into student concentration data comprises:
judging the student concentration data according to the ratio of the temporal eye jump information to the pupil fixation time: the smaller the ratio of the student's eye jump time to the pupil fixation time, the higher the student's concentration; the larger the ratio, the lower the student's concentration.
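The ratio-based concentration measure of claim 4 can be sketched as follows; the specific mapping of the ratio onto a [0, 1] score is an assumed illustration, since the patent states only that the score decreases as the ratio grows:

```python
def concentration_from_eye_data(saccade_time_s, fixation_time_s):
    """Concentration score from the ratio of eye-jump (saccade) time to pupil
    fixation time: the smaller the ratio, the higher the concentration.
    The 1/(1+ratio) mapping to [0, 1] is an assumption, not the patent's formula."""
    if fixation_time_s <= 0:
        return 0.0  # no fixation at all: treat as fully unconcentrated
    ratio = saccade_time_s / fixation_time_s
    return 1.0 / (1.0 + ratio)  # monotonically decreasing in the ratio

# A student fixating most of the time vs. one saccading frequently.
focused = concentration_from_eye_data(saccade_time_s=0.5, fixation_time_s=9.5)
distracted = concentration_from_eye_data(saccade_time_s=4.0, fixation_time_s=6.0)
```

Any monotonically decreasing function of the ratio would satisfy the claim; this one is merely convenient because it is bounded.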
5. The course quality assessment and improvement method based on student visual attention and teacher behavior as claimed in claim 2, wherein normalizing the students' eye jump data over different time periods to obtain the students' gazing orientation information, and obtaining the student gazing coordinates matched with the students' gazing orientation information by using TensorFlow API framework object detection and the eye tracker, comprises the following steps:
the information of the eye jumps of the students in the space dimensions at different time intervals is normalized to obtain the information of the gazing directions of the students, and the information of the gazing directions of the students is subjected to noise reduction processing;
after the student gazing orientation information is subjected to noise reduction processing, based on the object recognition application programming interface (API) in the artificial intelligence framework TensorFlow, the ssd_mobilenet_v1_coco training model is adopted, and the formula d_0 = (x_A - x_B, y_A - y_B, z_A - z_B) is used to establish a mapping model from A to B, so that when the position of the pupil's gazing target changes, the position of the viewpoint in the world image is obtained and the three-dimensional coordinates of the gazing position are acquired, wherein d_0 is the distance between the object and the human eye, A is the imaging position of the gazed object in the world camera, and B is the imaging position of the object in the eye tracker;
mapping the three-dimensional coordinates to the two-dimensional image plane using the following formula to obtain the two-dimensional coordinates of the gazing position:

s·[u, v, 1]^T = A·[R t]·[X, Y, Z, 1]^T, with A = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]

where s is the scaling factor, A is the camera intrinsic parameter matrix, [R t] is the camera extrinsic parameter matrix, (u, v) and (X, Y, Z) are respectively the two-dimensional image coordinates of the object and its three-dimensional coordinates in the world camera; f_x, f_y, c_x and c_y are the camera intrinsic parameters, where f_x = f/dX and f_y = f/dY, f is the camera focal length, dX and dY are respectively the unit pixel sizes along the sensor's x and y axes, and c_x, c_y represent the optical center, i.e. the intersection of the camera's optical axis with the image plane, taken as the image center (half the resolution); R and t are the camera extrinsic parameters, where R is a 3×3 rotation matrix and t is a 3×1 translation vector;
converting the world coordinate system to the camera-based coordinate system using the following formula:

[x, y, z]^T = R·[X, Y, Z]^T + t

wherein x, y and z are the three-dimensional coordinates of the object in the camera coordinate system, X, Y and Z are the three-dimensional coordinates of the object in the world camera, R is the rotation matrix, t is the translation vector, and u and v are the two-dimensional coordinates of the object in the image; the obtained real-time orientation information is thereby converted into real-time coordinate information.
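The two transforms of claim 5 (world-to-camera conversion, then pinhole projection through the intrinsic matrix A) can be sketched numerically; the focal-length and optical-center values below are made-up examples:

```python
def world_to_camera(Xw, R, t):
    """Rotate and translate a world point into the camera coordinate system:
    [x, y, z]^T = R * [X, Y, Z]^T + t."""
    return [
        sum(R[i][j] * Xw[j] for j in range(3)) + t[i]
        for i in range(3)
    ]

def project(pt_cam, fx, fy, cx, cy):
    """Project a camera-frame 3-D point to pixel coordinates via the intrinsic
    matrix A = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]; the scaling factor s
    equals the point's depth z."""
    x, y, z = pt_cam
    return (fx * x / z + cx, fy * y / z + cy)

# Identity extrinsics: the world and camera frames coincide.
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t_zero = [0, 0, 0]

# A gaze target 2 m in front of the camera, slightly right and below center.
pt = world_to_camera([0.1, -0.05, 2.0], R_identity, t_zero)
u, v = project(pt, fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```

With these assumed intrinsics the gaze target projects to a pixel 40 px right of and 20 px above the image center.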
6. The course quality assessment and improvement method as claimed in claim 5, wherein, when the camera image is distorted, the camera imaging is corrected according to the following formula:
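The correction formula image is not reproduced in this text; a common radial distortion model (an assumption here, not necessarily the patent's formula) scales normalized image coordinates by a polynomial in the squared radius:

```python
def distort_radial(x, y, k1, k2):
    """Assumed two-coefficient radial distortion model: a normalized image
    point (x, y) maps to (x * factor, y * factor), where
    factor = 1 + k1*r^2 + k2*r^4 and r^2 = x^2 + y^2.
    Negative k1 models barrel distortion (points pulled toward the center)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * factor, y * factor)

# Hypothetical barrel distortion: an off-center point moves inward.
xd, yd = distort_radial(0.5, 0.25, k1=-0.1, k2=0.01)
```

Correction then amounts to inverting this mapping (typically iteratively) before applying the pinhole projection of claim 5.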
7. The method of claim 5, wherein the TensorFlow API framework is built into the eye tracker.
8. The course quality assessment and improvement method based on student visual attention and teacher behavior as claimed in claim 5, wherein the method for obtaining the teacher gazing coordinates is the same as the method for obtaining the student gazing coordinates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110849982.1A CN113506027A (en) | 2021-07-27 | 2021-07-27 | Course quality assessment and improvement method based on student visual attention and teacher behavior |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110849982.1A CN113506027A (en) | 2021-07-27 | 2021-07-27 | Course quality assessment and improvement method based on student visual attention and teacher behavior |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113506027A true CN113506027A (en) | 2021-10-15 |
Family
ID=78014844
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110849982.1A Pending CN113506027A (en) | 2021-07-27 | 2021-07-27 | Course quality assessment and improvement method based on student visual attention and teacher behavior |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113506027A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114063780A (en) * | 2021-11-18 | 2022-02-18 | 兰州乐智教育科技有限责任公司 | Method and device for determining user concentration degree, VR equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107515677A (en) * | 2017-08-31 | 2017-12-26 | 杭州极智医疗科技有限公司 | Notice detection method, device and storage medium |
CN108682189A (en) * | 2018-04-20 | 2018-10-19 | 南京脑桥智能科技有限公司 | A kind of learning state confirmation system and method |
CN108710204A (en) * | 2018-05-15 | 2018-10-26 | 北京普诺兴科技有限公司 | A kind of quality of instruction test method and system based on eye tracking |
CN111881763A (en) * | 2020-06-30 | 2020-11-03 | 北京小米移动软件有限公司 | Method and device for determining user gaze position, storage medium and electronic equipment |
CN112070641A (en) * | 2020-09-16 | 2020-12-11 | 东莞市东全智能科技有限公司 | Teaching quality evaluation method, device and system based on eye movement tracking |
CN112489138A (en) * | 2020-12-02 | 2021-03-12 | 中国船舶重工集团公司第七一六研究所 | Target situation information intelligent acquisition system based on wearable equipment |
- 2021-07-27 CN CN202110849982.1A patent/CN113506027A/en active Pending
Non-Patent Citations (3)
Title |
---|
FU Jian: "Intelligent Robots", 30 September 2020, Wuhan University of Technology Press, pages: 43 - 45 *
XU De: "Robot Visual Measurement and Control", 31 January 2016, National Defense Industry Press, pages: 39 *
CHEN Minnan: "The Influence of Teachers' Facial Expressions and Eye Gaze in Instructional Videos on Learners' Learning", China Master's Theses Full-text Database, 15 February 2021 (2021-02-15), pages 8 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10643487B2 (en) | Communication and skills training using interactive virtual humans | |
CN110349667B (en) | Autism assessment system combining questionnaire and multi-modal model behavior data analysis | |
CN105516280B (en) | A kind of Multimodal Learning process state information packed record method | |
Yun et al. | Automatic recognition of children engagement from facial video using convolutional neural networks | |
CN111528859B (en) | Child ADHD screening and evaluating system based on multi-modal deep learning technology | |
CN101453938B (en) | Image recording apparatus | |
Hu et al. | Research on abnormal behavior detection of online examination based on image information | |
CN114209324B (en) | Psychological assessment data acquisition method based on image visual cognition and VR system | |
Geng et al. | Learning deep spatiotemporal feature for engagement recognition of online courses | |
Wang et al. | Automated student engagement monitoring and evaluation during learning in the wild | |
CN114170537A (en) | Multi-mode three-dimensional visual attention prediction method and application thereof | |
CN113486744B (en) | Student learning state evaluation system and method based on eye movement and facial expression paradigm | |
CN113506027A (en) | Course quality assessment and improvement method based on student visual attention and teacher behavior | |
Muhamada et al. | Review on recent computer vision methods for human action recognition | |
Tang et al. | Automatic facial expression analysis of students in teaching environments | |
Sidhu et al. | Deep learning based emotion detection in an online class | |
Ashwin et al. | Unobtrusive students' engagement analysis in computer science laboratory using deep learning techniques | |
CN113658697A (en) | Psychological assessment system based on video fixation difference | |
CN112529054A (en) | Multi-dimensional convolution neural network learner modeling method for multi-source heterogeneous data | |
CN115607153B (en) | Psychological scale answer quality assessment system and method based on eye movement tracking | |
Xu et al. | Analyzing students' attention by gaze tracking and object detection in classroom teaching | |
CN115429271A (en) | Autism spectrum disorder screening system and method based on eye movement and facial expression | |
CN114998440A (en) | Multi-mode-based evaluation method, device, medium and equipment | |
Tan et al. | Towards automatic engagement recognition of autistic children in a machine learning approach | |
Chen et al. | Intelligent Recognition of Physical Education Teachers' Behaviors Using Kinect Sensors and Machine Learning. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||