CN115273180B - Online examination invigilating method based on random forest


Info

Publication number
CN115273180B
CN115273180B
Authority
CN
China
Prior art keywords: max, follows, face, gaze, examinee
Prior art date
Legal status
Active
Application number
CN202210773448.1A
Other languages
Chinese (zh)
Other versions
CN115273180A (en)
Inventor
徐慧
赵晨薇
尹必才
王惠荣
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202210773448.1A priority Critical patent/CN115273180B/en
Publication of CN115273180A publication Critical patent/CN115273180A/en
Application granted granted Critical
Publication of CN115273180B publication Critical patent/CN115273180B/en


Classifications

    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06N20/00 Machine learning
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/766 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/18 Eye characteristics, e.g. of the iris

Abstract

The invention relates to the technical field of machine-learning vision, and in particular to an online examination invigilating method based on random forests, which comprises the following steps: S1: after a picture of the video frame is obtained, face detection, feature extraction and face alignment are performed with an improved MTCNN method; S2: the head pose is calculated from the facial feature points; S3: based on a random forest, the head pose and the facial features are fused to estimate the gaze; S4: cheating behavior is judged. The face detection method, which is based on transfer learning, detects the face well under large-angle deflection and dim lighting, and acquires accurate feature-point information even when the examinee wears glasses. The invention requires only a web camera, which lowers the equipment requirements and facilitates the adoption of online examinations; the examinee's cheating can be detected in real time, reducing the probability of cheating behavior.

Description

Online examination invigilating method based on random forest
Technical Field
The invention relates to the technical field of machine-learning vision, and in particular to an online examination invigilating method based on random forests.
Background
As online learning continues to spread, online examinations are taken by more and more people. One of the main challenges of online examinations, however, is how to invigilate them in an efficient and reliable manner. According to a survey, about 74% of students say that it is easy to cheat in online examinations, and nearly 29% admit to having cheated in one. Such cheating undermines public confidence in online examinations, which makes online invigilation critical to expanding their application further.
Researchers have proposed many different online invigilation methods, which can generally be divided into three categories: manual invigilation, fully automatic invigilation and semi-automatic invigilation. In manual invigilation, the invigilator must review the examination videos of every examinee, which is time-consuming and labor-intensive. In fully automatic invigilation, by contrast, a computer analyzes the examinee's behavior during the examination with machine-learning techniques, automatically detects suspicious behavior and classifies it directly as cheating or non-cheating. Existing fully automatic methods, however, struggle to reach very high accuracy and often produce misjudgments. Semi-automatic invigilation combines machine learning with further manual confirmation by the invigilator, improving the efficiency of online invigilation while reducing the labor intensity.
The invigilation software currently in use is expensive, complex to operate and therefore not user friendly, and it often requires external equipment such as an eye tracker. In remote invigilation it is impossible to equip every examinee with such devices, so these systems are difficult to put into practical use at scale.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides an online examination invigilating method based on a random forest.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
an on-line examination invigilating method based on random forest comprises the following specific steps:
s1: after obtaining the picture of the video frame, carrying out face detection, feature extraction and face alignment by utilizing an improved MTCNN method;
s2: calculating a head pose based on the facial feature points;
s3: based on a random forest, fusing the head pose and the facial features to estimate the gaze;
s4: judging the cheating behavior.
Preferably, in step S1, the specific steps are as follows:
s101: training the MTCNN network model by using the LaPa data set to obtain face boundary box information and 106 feature point labels;
s102: carrying out alignment treatment on the identified face;
s103: and carrying out normalization processing on the aligned face images.
Preferably, in step S2, the head pose is calculated from the obtained facial feature points, specifically as follows:
s201: selecting the AFLW data set, rotating the pictures in the training set horizontally by φ degrees, and expanding the data;
s202: after this rotation, the vertices of the new bounding box are repositioned as follows:
let the coordinates of the original box be (X_min, X_max, Y_min, Y_max) and the coordinates of the image center be (X_c, Y_c);
rotating the vertices of the original image by φ degrees about (X_c, Y_c) gives the new box vertex coordinates:
x′_min = min{x cos φ + y sin φ} + x_0
x′_max = max{x cos φ + y sin φ} + x_0
y′_min = min{−x sin φ + y cos φ} + y_0
y′_max = max{−x sin φ + y cos φ} + y_0
where:
x_0 = x_c(1 − cos φ) − y_c sin φ
y_0 = x_c sin φ + y_c(1 − cos φ)
s203: from the above transformation the horizontal rotation matrix M can be obtained;
s204: the Euler angles are then obtained, where α denotes pitch, β denotes yaw and γ denotes roll:
β′ = sin⁻¹(M_31)
preferably, in step S3, the specific steps are as follows:
s301: selecting a CART regression tree and taking the least-squares method as the basis of node splitting:
in practice, the traditional CART regression tree selects the minimum mean squared error as the basis of node splitting; if many samples with widely differing values reach a leaf node, the accuracy of model training and prediction drops severely, so the node-splitting criterion is improved with the least-squares method. The least-squares method is a curve-fitting method whose basic idea is the principle of minimizing the sum of squared errors so as to find the best functional fit to the data; the specific algorithm steps are as follows:
(1) At the initial moment, the data in the training set are assigned to the root node, and the sum of squared deviations over the training data set is computed with the formula:
D = Σ_{i=1..m} (y_i − ȳ)²
where y_i denotes the target value of the i-th sample and ȳ the mean target value of the m samples;
(2) For each attribute X_i, all of its values are sorted; the mean X_{i,k} of each pair of adjacent samples of the attribute is taken as a threshold that divides the training set into a left part and a right part, and the sums of squared deviations D_{i,L} and D_{i,R} of the two parts are computed;
(3) The X_{i,k} that maximizes ΔD = D − D_{i,R} − D_{i,L} is chosen as the split point, dividing the samples into S_{i,L} and S_{i,R}; the same steps are repeated on the two subsets until the current subset satisfies the stopping rule, all target values reaching a leaf node are fitted into a function expression, and the regression tree is pruned to obtain the rule set;
s302: for the overfitting problem, an L2 regularization term is adopted for optimization, which improves the generalization ability of the model;
to improve the generalization ability of the system and alleviate overfitting, a commonly used method is to add a regularization term that constrains the model; here the L2 norm of the weight vector is added.
assuming the initial loss function of the random forest is L_0, the optimized loss function is:
L = L_0 + λ‖w‖²
s303: the eye feature-point information and the head-pose information are input into the trained random-forest gaze-estimation model, and the gaze-estimation result is output.
Preferably, in step S4, the specific steps are as follows:
s401: before starting eye tracking, the gaze estimation model provides 9 calibration points, the system collects gaze point information by the user clicking on the calibration points, and detects the screen angle from the collected gaze data, the screen angle x, y axis being calculated as follows:
x min =(x 1 +x 7 )/2
x max =(x 3 +x 9 )/2
y min =(y 1 +y 3 )/2
y max =(y 7 +y 9 )/2
suppose the eye gaze coordinates are (G x ,G y ) If G x <x min |G x >x max |G y <y min |G y >y max The examinee is considered not to look at the screen;
s402: during the examination, the gaze point is recorded frame by frame, if the time of the examinee's gaze outside the screen is more than 50% in 20 seconds, the examinee is marked as abnormal, a warning is sent to the examinee, the complete gaze record is generated into a visual view after the examination is finished and sent to the invigorator, and the invigorator can select to manually judge the cheating behavior of the examinee in the abnormal time period.
Compared with the prior art, the invention has the following beneficial effects:
1. the face detection method, based on transfer learning, detects the face well under large-angle deflection and dim lighting, and acquires accurate feature-point information even when the examinee wears glasses.
2. the invention optimizes the loss function with L2 regularization, which alleviates the overfitting problem.
3. the invention optimizes the node-splitting criterion with the least-squares algorithm, which improves the accuracy of training and prediction.
4. when training the head-pose model, the data set is augmented by horizontal flipping, which further reduces the influence of head pose on the gaze point.
5. the invention requires only a web camera, which lowers the equipment requirements and facilitates the adoption of online examinations; the examinee's cheating can be detected in real time, reducing the probability of cheating behavior.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a facial feature point annotation in accordance with the present invention;
FIG. 3 is a framework diagram of the gaze estimation in the present invention;
FIG. 4 is a diagram of the gaze calibration points in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings, so that those skilled in the art can better understand the advantages and features of the invention and the scope of protection is defined more clearly. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of protection of the present invention.
As shown in fig. 1, an on-line examination invigilating method based on random forest specifically comprises the following steps:
s1: after a picture of a video frame is obtained, face detection is performed with the improved MTCNN method to obtain the face image region, and face alignment is then performed on the detected face region to obtain the key feature information of the face;
s101: the MTCNN network model is trained with the LaPa data set to obtain the face bounding-box information and 106 feature-point labels, with the pupil center coordinates denoted (x_0, y_0). The feature points are shown in fig. 2;
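As a minimal illustration of this detection step, the sketch below uses the publicly available facenet-pytorch implementation of MTCNN as a stand-in; the patent's improved MTCNN and its 106-point LaPa landmark head are not reproduced here, and the frame file name is hypothetical.

```python
from PIL import Image
from facenet_pytorch import MTCNN  # off-the-shelf MTCNN, not the patent's improved variant

detector = MTCNN(keep_all=True)             # keep every face found in the frame
frame = Image.open("frame_0001.jpg")        # hypothetical video-frame file
boxes, probs, landmarks = detector.detect(frame, landmarks=True)
# boxes: (n, 4) face bounding boxes, probs: detection confidences,
# landmarks: (n, 5, 2) key points (the standard model outputs 5 points, not 106)
```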
s102: carrying out alignment treatment on the identified face;
s103: the aligned face images are normalized, and the normalized Euclidean distance is adopted as the evaluation criterion; the specific calculation is:
e = (1/N) Σ_{i=1..N} ‖p_i − g_i‖ / d_io
where N represents the number of feature points, p_i the predicted coordinates of the i-th feature point, g_i its true coordinates, d_io the distance between the left and right outer eye corners, and e the resulting error value, which represents the alignment accuracy.
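A small sketch of this evaluation metric follows, assuming the per-point Euclidean errors are averaged before being normalized by the outer-eye-corner distance; the landmark indices passed in are assumptions.

```python
import numpy as np

def normalized_mean_error(pred, true, left_outer_idx, right_outer_idx):
    """Normalized Euclidean distance e used above to score face alignment.

    pred, true: (N, 2) arrays of predicted / ground-truth landmark coordinates.
    left_outer_idx, right_outer_idx: indices of the outer eye corners whose
    distance d_io normalizes the error (the index values are assumptions).
    """
    d_io = np.linalg.norm(true[left_outer_idx] - true[right_outer_idx])
    per_point_error = np.linalg.norm(pred - true, axis=1)  # Euclidean error per landmark
    return per_point_error.mean() / d_io
```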
S2: calculating a head pose using the obtained facial feature points;
s201: the AFLW data set is selected and split 8:2 into a training set and a test set; the pictures in the training set are rotated horizontally by φ degrees to expand the data;
s202: after this rotation, the vertices of the new bounding box are repositioned as follows:
let the coordinates of the original box be (X_min, X_max, Y_min, Y_max) and the coordinates of the image center be (X_c, Y_c);
rotating the vertices of the original image by φ degrees about (X_c, Y_c) gives the new box vertex coordinates:
x′_min = min{x cos φ + y sin φ} + x_0
x′_max = max{x cos φ + y sin φ} + x_0
y′_min = min{−x sin φ + y cos φ} + y_0
y′_max = max{−x sin φ + y cos φ} + y_0
where:
x_0 = x_c(1 − cos φ) − y_c sin φ
y_0 = x_c sin φ + y_c(1 − cos φ)
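The sketch below applies the transformation above to the four corners of a box; it is a direct transcription of the formulas, with the angle taken in degrees as an assumption.

```python
import numpy as np

def rotate_bbox(x_min, x_max, y_min, y_max, x_c, y_c, phi_deg):
    """Rotate the corners of a box by phi degrees about the image center
    (x_c, y_c) and return the axis-aligned box that encloses them."""
    phi = np.radians(phi_deg)
    x0 = x_c * (1 - np.cos(phi)) - y_c * np.sin(phi)
    y0 = x_c * np.sin(phi) + y_c * (1 - np.cos(phi))
    corners = np.array([[x_min, y_min], [x_min, y_max],
                        [x_max, y_min], [x_max, y_max]], dtype=float)
    xs = corners[:, 0] * np.cos(phi) + corners[:, 1] * np.sin(phi) + x0
    ys = -corners[:, 0] * np.sin(phi) + corners[:, 1] * np.cos(phi) + y0
    return xs.min(), xs.max(), ys.min(), ys.max()

# example: a 100x50 box rotated by 15 degrees about the center of a 640x480 image
print(rotate_bbox(100, 200, 100, 150, 320, 240, 15))
```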
s203: from the above transformation the horizontal rotation matrix M can be obtained;
s204: the Euler angles are then obtained, where α denotes pitch, β denotes yaw and γ denotes roll:
β′ = sin⁻¹(M_31)
s3: based on a random forest, the head pose and the facial features are fused to estimate the gaze;
s301: a CART regression tree is selected, and the least-squares method is taken as the basis of node splitting;
in practice, the traditional CART regression tree selects the minimum mean squared error as the basis of node splitting; if many samples with widely differing values reach a leaf node, the accuracy of model training and prediction drops severely, so the node-splitting criterion is improved with the least-squares method. The least-squares method is a curve-fitting method whose basic idea is the principle of minimizing the sum of squared errors so as to find the best-matching function; the specific algorithm steps are as follows:
(1) At the initial moment, the data in the training set are assigned to the root node, and the sum of squared deviations over the training data set is computed with the formula:
D = Σ_{i=1..m} (y_i − ȳ)²
where y_i denotes the target value of the i-th sample and ȳ the mean target value of the m samples;
(2) For each attribute X_i, all of its values are sorted; the mean X_{i,k} of each pair of adjacent samples of the attribute is taken as a threshold that divides the training set into a left part and a right part, and the sums of squared deviations D_{i,L} and D_{i,R} of the two parts are computed;
(3) The X_{i,k} that maximizes ΔD = D − D_{i,R} − D_{i,L} is chosen as the split point, dividing the samples into S_{i,L} and S_{i,R}; the same steps are repeated on the two subsets until the current subset satisfies the stopping rule, all target values reaching a leaf node are fitted into a function expression, and the regression tree is pruned to obtain the rule set;
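A minimal sketch of the split search in steps (1)–(3) follows, for a single attribute and a numeric target; it assumes an exhaustive scan over the adjacent-sample mid-points described above.

```python
import numpy as np

def best_least_squares_split(x, y):
    """Return the threshold X_{i,k} of one attribute that maximizes
    dD = D - D_L - D_R, where D is the sum of squared deviations."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    D = ((y - y.mean()) ** 2).sum()                       # dispersion at the parent node
    best_gain, best_threshold = -np.inf, None
    for k in range(1, len(x_sorted)):
        threshold = (x_sorted[k - 1] + x_sorted[k]) / 2   # mean of adjacent samples
        left, right = y_sorted[:k], y_sorted[k:]
        D_L = ((left - left.mean()) ** 2).sum()
        D_R = ((right - right.mean()) ** 2).sum()
        gain = D - D_L - D_R
        if gain > best_gain:
            best_gain, best_threshold = gain, threshold
    return best_threshold, best_gain

rng = np.random.default_rng(0)
x = rng.random(50)
y = np.where(x > 0.6, 1.0, 0.0) + 0.05 * rng.standard_normal(50)
print(best_least_squares_split(x, y))
```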
s302: for the overfitting problem, an L2 regularization term is adopted for optimization;
to improve the generalization ability of the system and alleviate overfitting, a commonly used method is to add a regularization term that constrains the model; here the L2 norm of the weight vector is added.
assuming the initial loss function of the random forest is L_0, the new loss function after optimization is:
L = L_0 + λ‖w‖²
when the parameter λ is selected: if λ is too small, the regularization has no effect and the overfitting problem is not solved; conversely, if the value is too large, underfitting occurs. Experiments found that λ = 1.5 works best.
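As a brief illustration of the penalized loss, under the assumption that the regularized quantity is a weight vector w attached to the model:

```python
import numpy as np

def regularized_loss(L0, w, lam=1.5):
    """Optimized loss L = L0 + lam * ||w||_2^2; lam = 1.5 is the value the
    description reports as working best."""
    return L0 + lam * np.sum(np.square(w))

print(regularized_loss(0.8, np.array([0.3, -0.2, 0.5])))
```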
S303: the eye feature point information and the head posture information are input into a trained random forest vision estimation model, a vision estimation result is output, and a vision estimation frame diagram is shown in fig. 3.
S4: judging cheating behaviors;
s401: before eye tracking starts, the gaze-estimation model provides 9 calibration points; the system collects gaze-point information as the user clicks the calibration points, and the screen boundaries are determined from the collected gaze data. The calibration-point distribution is shown in fig. 4. The screen boundaries on the x and y axes are calculated as follows:
x_min = (x_1 + x_7)/2
x_max = (x_3 + x_9)/2
y_min = (y_1 + y_3)/2
y_max = (y_7 + y_9)/2
suppose the eye-gaze coordinates are (G_x, G_y); if G_x < x_min or G_x > x_max or G_y < y_min or G_y > y_max, the examinee is considered not to be looking at the screen;
the user must confirm each calibration point with 5 clicks, and the gaze-estimation model takes the mean of the 5 gaze-point measurements as the true fixation point;
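The calibration arithmetic and the off-screen test translate directly into the following sketch; it assumes the 9 calibration points are indexed 1–9 in row-major order, as fig. 4 suggests.

```python
def screen_bounds(cal):
    """cal: dict mapping calibration-point index (1..9, row-major) to the
    gaze coordinates averaged over the user's 5 clicks on that point."""
    x_min = (cal[1][0] + cal[7][0]) / 2
    x_max = (cal[3][0] + cal[9][0]) / 2
    y_min = (cal[1][1] + cal[3][1]) / 2
    y_max = (cal[7][1] + cal[9][1]) / 2
    return x_min, x_max, y_min, y_max

def looking_away(g_x, g_y, bounds):
    """True when the estimated gaze point falls outside the calibrated screen."""
    x_min, x_max, y_min, y_max = bounds
    return g_x < x_min or g_x > x_max or g_y < y_min or g_y > y_max
```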
s402: during the examination the gaze point is recorded frame by frame; if the examinee's gaze is off the screen for more than 50% of a 20-second window, the examinee is marked as abnormal and a warning is sent to the examinee; after the examination, the complete gaze record is rendered as a visual view and sent to the invigilator, who may choose to judge manually whether the examinee cheated during the abnormal periods.
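A sliding-window sketch of this rule follows; the frame rate is an assumption, and the per-frame off-screen flag is taken from the check in S401.

```python
from collections import deque

def make_gaze_monitor(fps=25, window_seconds=20, threshold=0.5):
    """Return an update() function that flags the examinee when more than
    `threshold` of the last `window_seconds` of frames were off screen."""
    window = deque(maxlen=fps * window_seconds)

    def update(off_screen: bool) -> bool:
        window.append(off_screen)
        full = len(window) == window.maxlen
        return full and sum(window) / len(window) > threshold

    return update

monitor = make_gaze_monitor()
# per frame: abnormal = monitor(looking_away(g_x, g_y, bounds))
```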
In summary, the invention requires only a web camera, which lowers the equipment requirements and facilitates the adoption of online examinations; the examinee's cheating can be detected in real time, reducing the probability of cheating behavior.
The description above enables those skilled in the art to practice the invention and to modify and adapt it in several ways without departing from its principles. Accordingly, modifications or improvements made without departing from the spirit of the invention also fall within its scope of protection.

Claims (2)

1. An online examination invigilating method based on a random forest, characterized by comprising the following specific steps:
s1: after a picture of the video frame is obtained, face detection, feature extraction and face alignment are performed with an improved MTCNN method;
s2: a head pose is calculated based on the facial feature points;
s3: based on a random forest, the head pose and the facial features are fused to estimate the gaze;
s4: cheating behavior is judged;
in step S2, the head pose is calculated using the obtained facial feature points, specifically as follows:
s201: the AFLW data set is selected, the pictures in the training set are rotated horizontally by φ degrees, and the data are expanded;
s202: after this rotation, the vertices of the new bounding box are repositioned as follows:
let the coordinates of the original box be (X_min, X_max, Y_min, Y_max) and the coordinates of the image center be (X_c, Y_c);
rotating the vertices of the original image by φ degrees about (X_c, Y_c) gives the new box vertex coordinates:
x′_min = min{x cos φ + y sin φ} + x_0
x′_max = max{x cos φ + y sin φ} + x_0
y′_min = min{−x sin φ + y cos φ} + y_0
y′_max = max{−x sin φ + y cos φ} + y_0
where:
x_0 = x_c(1 − cos φ) − y_c sin φ
y_0 = x_c sin φ + y_c(1 − cos φ)
s203: from the above transformation the horizontal rotation matrix M can be obtained;
s204: the Euler angles are then obtained, where α denotes pitch, β denotes yaw and γ denotes roll:
β′ = sin⁻¹(M_31)
in step S3, the specific steps are as follows:
s301: a CART regression tree is selected and the least-squares method is taken as the basis of node splitting, with the following specific steps:
(1) At the initial moment, the data in the training set are assigned to the root node, and the sum of squared deviations over the training data set is computed with the formula:
D = Σ_{i=1..m} (y_i − ȳ)²
where y_i denotes the target value of the i-th sample and ȳ the mean target value of the m samples;
(2) For each attribute X_i, all of its values are sorted; the mean X_{i,k} of each pair of adjacent samples of the attribute is taken as a threshold that divides the training set into a left part and a right part, and the sums of squared deviations D_{i,L} and D_{i,R} of the two parts are computed;
(3) The X_{i,k} that maximizes ΔD = D − D_{i,R} − D_{i,L} is chosen as the split point, dividing the samples into S_{i,L} and S_{i,R}; the same steps are repeated on the two subsets until the current subset satisfies the stopping rule, all target values reaching a leaf node are fitted into a function expression, and the regression tree is pruned to obtain the rule set;
s302: for the overfitting problem, an L2 regularization term is adopted for optimization, which improves the generalization ability of the model;
assuming the initial loss function of the random forest is L_0, the optimized loss function is:
L = L_0 + λ‖w‖²
s303: the eye feature-point information and the head-pose information are input into the trained random-forest gaze-estimation model, and the gaze-estimation result is output;
in step S4, the specific steps are as follows:
s401: before eye tracking starts, the gaze-estimation model provides 9 calibration points; the system collects gaze-point information as the user clicks the calibration points, and the screen boundaries are determined from the collected gaze data; the screen boundaries on the x and y axes are calculated as follows:
x_min = (x_1 + x_7)/2
x_max = (x_3 + x_9)/2
y_min = (y_1 + y_3)/2
y_max = (y_7 + y_9)/2
suppose the eye-gaze coordinates are (G_x, G_y); if G_x < x_min or G_x > x_max or G_y < y_min or G_y > y_max, the examinee is considered not to be looking at the screen;
s402: during the examination the gaze point is recorded frame by frame; if the examinee's gaze is off the screen for more than 50% of a 20-second window, the examinee is marked as abnormal and a warning is sent to the examinee; after the examination, the complete gaze record is rendered as a visual view and sent to the invigilator, who may choose to judge manually whether the examinee cheated during the abnormal periods.
2. The method for on-line examination proctoring based on random forest according to claim 1, wherein in step S1, the specific steps are as follows:
s101: training the MTCNN network model by using the LaPa data set to obtain face boundary box information and 106 feature point labels;
s102: carrying out alignment treatment on the identified face;
s103: and carrying out normalization processing on the aligned face images.
CN202210773448.1A 2022-07-01 2022-07-01 Online examination invigilating method based on random forest Active CN115273180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210773448.1A CN115273180B (en) 2022-07-01 2022-07-01 Online examination invigilating method based on random forest

Publications (2)

Publication Number Publication Date
CN115273180A CN115273180A (en) 2022-11-01
CN115273180B true CN115273180B (en) 2023-08-15

Family

ID=83763602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210773448.1A Active CN115273180B (en) 2022-07-01 2022-07-01 Online examination invigilating method based on random forest

Country Status (1)

Country Link
CN (1) CN115273180B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116189275B (en) * 2023-02-13 2024-01-30 南通大学 Online examination invigilating method based on facial landmark heat map
CN116894978B (en) * 2023-07-18 2024-03-29 中国矿业大学 Online examination anti-cheating system integrating facial emotion and behavior multi-characteristics

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491717A (en) * 2016-06-13 2017-12-19 科大讯飞股份有限公司 The detection method that cheats at one's exam and device
CN108960302A (en) * 2018-06-20 2018-12-07 同济大学 A kind of head pose estimation method based on random forest
CN109033960A (en) * 2018-06-20 2018-12-18 同济大学 A kind of gaze estimation method based on random forest
CN110263774A (en) * 2019-08-19 2019-09-20 珠海亿智电子科技有限公司 A kind of method for detecting human face
CN112464793A (en) * 2020-11-25 2021-03-09 大连东软教育科技集团有限公司 Method, system and storage medium for detecting cheating behaviors in online examination

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11113842B2 (en) * 2018-12-24 2021-09-07 Samsung Electronics Co., Ltd. Method and apparatus with gaze estimation

Also Published As

Publication number Publication date
CN115273180A (en) 2022-11-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant