CN113743209A - Auxiliary invigilation method for large-scale online examination - Google Patents


Info

Publication number: CN113743209A
Authority: CN (China)
Prior art keywords: examinee, neural network, eye image, abnormal behavior, data
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202110875667.6A
Other languages: Chinese (zh)
Inventors: 史仓州 (Shi Cangzhou), 王林波 (Wang Linbo)
Current Assignee: Beijing Changfeng Kewei Photoelectric Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Changfeng Kewei Photoelectric Technology Co., Ltd.
Application filed by Beijing Changfeng Kewei Photoelectric Technology Co., Ltd.
Priority to CN202110875667.6A
Publication of CN113743209A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 - Non-hierarchical techniques with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 50/00 - ICT specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/20 - Education
    • G06Q 50/205 - Education administration or guidance


Abstract

The invention relates to an auxiliary invigilation method for large-scale online examinations. A computer with a front-facing camera at the examinee's answering site captures frontal images of the examinee while answering; an eye image is extracted with the Viola-Jones algorithm, and a neural network judges whether the examinee's line of sight falls within the screen. Under abnormal conditions, such as a large and sustained deviation of the line of sight, the examinee's monitoring picture is labeled and the invigilating teacher is prompted to pay attention. This reduces the invigilating teacher's pressure and allows a small number of invigilating teachers to effectively supervise a large number of examinees.

Description

Auxiliary invigilation method for large-scale online examination
Technical Field
The invention belongs to the technical field of image recognition and relates to a method of invigilation through video monitoring technology in large-scale online examination scenarios.
Background
In real life, owing to various constraints, some examinations cannot be held on site and can only be arranged online. During an online examination, examinees must be invigilated through video monitoring. Because every examinee has an independent monitoring image, a small number of invigilating teachers must watch a large number of examinee monitoring images, which puts them under great pressure.
Disclosure of Invention
The invention aims to solve this problem and provides an auxiliary invigilation method for large-scale online examinations. By analysing abnormal examinee behavior appearing in the monitoring pictures and prompting in time, the invigilating teacher only needs to focus on examinees showing abnormal behavior, which reduces the invigilating teacher's pressure.
The technical scheme of the invention is as follows:
An auxiliary invigilation method for large-scale online examination, characterized by comprising the following steps:
step 1, installing and configuring the examinee-end hardware: the examinee uses a computer with a front-facing camera and additionally installs a rear camera; the front camera can capture the examinee's complete head, and the rear camera can capture a panorama of the examinee's seat;
step 2, collecting a neural network training data set: within a certain time before the examination, the examinee is guided to simulate the answering state; multiple frames of frontal images of the examinee are acquired through the front-facing camera of the examinee's computer, and the eye image in each frame is extracted with the Viola-Jones target recognition algorithm; each eye image is then scaled so that all eye images have the same size;
step 3, data preprocessing: the images acquired in step 2 are preprocessed; the preprocessing comprises outlier rejection and data normalization; the outliers are images captured while the examinee blinks; after the outliers are eliminated, the pixel values are normalized to the interval [-1, 1];
step 4, neural network training: the preprocessed eye images are input into a neural network for training; a line of sight deviating from the computer screen is defined as abnormal behavior, and the output value of the neural network is the criterion for whether the line of sight corresponding to an eye image falls within the computer screen;
step 5, real-time detection: after the examination begins, the front-facing camera collects eye images of the examinee, which are input into the neural network to judge abnormal behavior; the judging method is as follows: an abnormal-behavior threshold is preset; the abnormal-behavior probability is calculated from the line-of-sight range over consecutive frames of images of the examinee; when an examinee's abnormal-behavior probability exceeds the threshold, the corresponding monitoring video is marked as abnormal, the invigilating teacher is prompted to pay attention, and the rear camera installed at the examinee's answering site is called up and fed back to the invigilating teacher for further judgment.
According to the invention, monitoring pictures with abnormal behavior are labeled to prompt the invigilating teacher, the system can switch to the abnormal examinee's camera picture, and the rear-camera image can be called up to further check the examinee's state, so that a small number of invigilating teachers can effectively supervise a large number of examinees and their pressure is effectively reduced. The hardware requirements are low: only a computer with a camera and one rear camera are needed, and the rear camera can be replaced by a mobile-phone camera, so the method has good practicability.
Drawings
FIG. 1 is a flow chart of an assisted invigilation method of the present invention.
Detailed Description
The online examination referred to by the invention is an examination in which the examinee answers at a computer terminal. During the examination, the examinee's line of sight stays within the computer screen most of the time, so image recognition technology can be applied, providing a basis for the neural network's judgment.
Fig. 1 is a flow chart of the auxiliary invigilation method of the present invention; the specific implementation process is as follows.
Step 1: install and configure the examinee-end hardware.
In this step, the examinee uses a computer with a front-facing camera and additionally installs a rear camera. The fields of view of both cameras are adjusted so that the front camera captures the examinee's complete head and the rear camera captures a panorama of the examinee's seat.
Step 2: collect a neural network training data set.
A few minutes before the examination, the examinee is prompted and guided to simulate the answering state; frontal images of the examinee while answering are acquired with the front-facing camera of the examinee's on-site computer, and the eye image is extracted with the Viola-Jones target recognition algorithm.
The Viola-Jones algorithm can accurately extract an eye image from a frame regardless of the direction of the line of sight. Because the distance between the eyes and the camera changes dynamically throughout the examination, each detected eye image is scaled up or down so that all eye images have the same size, meeting the input requirement of the subsequent neural network. In practice, each eye image is uniformly transformed to a size of 25 × 90 pixels.
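In practice the detection itself is typically done with OpenCV's Haar-cascade implementation of Viola-Jones (cv2.CascadeClassifier with haarcascade_eye.xml); the patent does not name a library. The size-normalization step described above can be sketched with plain NumPy; the function name and the random patch below are illustrative, not from the patent:

```python
import numpy as np

def resize_eye(patch, out_h=25, out_w=90):
    """Nearest-neighbour resize of a detected eye patch to the fixed
    25 x 90 size expected by the network (cv2.resize would normally be
    used; this dependency-free version is only a sketch)."""
    h, w = patch.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return patch[rows[:, None], cols]

# hypothetical grayscale eye patch, as a Viola-Jones detector might return it
eye = np.random.randint(0, 256, size=(40, 120), dtype=np.uint8)
fixed = resize_eye(eye)
print(fixed.shape)  # (25, 90)
```

Whatever the detected patch size, the output always has the shape the network expects.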
Step 3: preprocess the data.
The images acquired in step 2 are preprocessed; the preprocessing comprises data normalization and outlier rejection. The outliers are images captured while the examinee blinks. After the outliers are eliminated, the pixel values are normalized to the interval [-1, 1].
the reason why the preprocessing is performed is that the training data meeting the conditions are divided into two types, one type is the eye image when the examinee looks at the answering interface, and the other type is the eye image when the examinee does not look at the computer screen. However, in the actual image acquisition process, due to the high acquisition frequency, pictures of eye closure at the moment of blinking and some pictures which are mistakenly identified as eyes due to the identification algorithm can be acquired, the images do not belong to any class in the training data and belong to outliers in the data, and if the data are mistakenly judged as any class, the data are mixed in the effective data to train the neural network, so that the classification effect of the neural network can be greatly reduced. The method specifically comprises the steps of firstly reducing the dimension of data through principal component analysis, then clustering the data after dimension reduction, defining initial clustering centers as average values of two types of data respectively, and deleting points far away from the clustering centers and points with clustering results inconsistent with original labels after clustering is completed, namely removing outliers.
After outlier elimination, the pixel values are normalized to the interval [-1, 1] so that the network parameters converge faster during training. Because the activation function of every layer of the constructed network is the logistic function, the output approaches 1 once the input grows large; without normalization, the network's outputs would all sit close to 1 and each parameter update would be tiny. To ensure that the parameters are updated effectively at every step, the raw data are therefore normalized to a small interval, letting the neural network converge more quickly.
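The normalization itself is a one-line affine map from 8-bit pixel values onto [-1, 1]; the exact formula below is a minimal sketch, since the patent only fixes the target interval:

```python
import numpy as np

img = np.random.randint(0, 256, size=(25, 90)).astype(np.float64)  # 8-bit pixels
norm = img / 127.5 - 1.0   # maps 0 -> -1.0 and 255 -> 1.0
print(norm.min() >= -1.0, norm.max() <= 1.0)  # True True
```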
Step 4: construct and train the neural network.
The parameters of the neural network constructed in this example are shown in Table 1 below.

Parameter           Value
hidden_layer_size   (100, 50)
activation          logistic
solver              lbfgs
batch_size          50
learning_rate       0.001
max_iter            200

TABLE 1  Neural network key parameter settings
The parameters in Table 1 are detailed below:
hidden_layer_size: the sizes of the hidden layers; the values in Table 1 indicate that the network has two hidden layers with 100 and 50 neurons respectively;
activation: the activation function type, here the logistic function, i.e. f(x) = 1/(1 + exp(-x));
solver: the mathematical method used to update the network weights; lbfgs in Table 1 means the parameters are updated with a quasi-Newton method;
batch_size: the number of samples input into the network at a time; in this example, 50 samples are input per training step;
learning_rate: the learning rate;
max_iter: the maximum number of iterations; the quasi-Newton method iterates until convergence or until 200 iterations are reached.
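The parameter names in Table 1 match scikit-learn's MLPClassifier, so the network can plausibly be reproduced as below. The training data are synthetic stand-ins, and one caveat applies: scikit-learn ignores batch_size when solver='lbfgs', because L-BFGS is a full-batch method:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(100, 50),
                    activation='logistic',
                    solver='lbfgs',            # quasi-Newton weight updates
                    batch_size=50,             # ignored by lbfgs (full-batch)
                    learning_rate_init=0.001,
                    max_iter=200)

# synthetic stand-ins for normalized 25 x 90 eye images (2250 features)
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (60, 2250))
y = (X[:, 0] > 0).astype(int)   # 1 = gaze off-screen (illustrative labelling)
clf.fit(X, y)
pred = clf.predict(X[:5])       # 0/1 screen / off-screen decisions
```

With real data, X would hold the flattened, outlier-filtered, normalized eye images from steps 2 and 3, and y the two gaze classes.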
The preprocessed images are input into the neural network for training. A line of sight deviating from the computer screen is defined as abnormal behavior, and the output value of the neural network is the criterion for whether the line of sight corresponding to an eye image falls within the computer screen.
and 5, detecting in real time.
After the examination begins, the front-facing camera collects eye images of the examinee, which are input into the neural network to judge abnormal behavior. The judging method is as follows: an abnormal-behavior threshold is preset; the abnormal-behavior probability is calculated from the line-of-sight range over consecutive frames of images of the examinee; when an examinee's abnormal-behavior probability exceeds the threshold, the corresponding monitoring video is marked as abnormal, the invigilating teacher is prompted to pay attention, and the rear camera installed at the examinee's answering site is called up and fed back to the invigilating teacher for further judgment.
The abnormal-behavior probability is obtained by combining the neural network outputs for the current frame and the previous k frames with a weighted average. This model is introduced because momentary deviations of the examinee's line of sight are unavoidable while the camera runs in real time; if each frame were judged independently, the system would be oversensitive and prone to misjudgment. Therefore, on top of the per-frame judgments, the results of the previous frames are combined to obtain the judgment for the current frame, which effectively eliminates this influence.
Suppose the neural network outputs corresponding to the images of the current frame and the previous k frames are P_0, P_1, P_2, …, P_k, where P_0 is the output for the current frame.

Using the idea of weighted averaging, the output of the current frame is combined with the results of the previous k frames. Because the results closest to the current frame have the greatest reference value, the weight of each frame in the observation window decreases with time. Let the corresponding weights be w_0, w_1, w_2, …, w_k with w_i ≥ w_j for all 0 ≤ i < j ≤ k. The abnormal-behavior probability of the current frame is then

    P = (w_0·P_0 + w_1·P_1 + … + w_k·P_k) / (w_0 + w_1 + … + w_k)
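A minimal implementation of this weighted average follows; the window length and the particular decreasing weight values are illustrative choices, not fixed by the patent:

```python
import numpy as np

def abnormal_probability(outputs, weights):
    """Weighted average of per-frame network outputs P_0..P_k; the weights
    must be non-increasing so the current frame (index 0) counts most."""
    p = np.asarray(outputs, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * p).sum() / w.sum())

# current frame looks abnormal, the previous four frames did not:
probs = [1.0, 0.0, 0.0, 0.0, 0.0]
weights = [5, 4, 3, 2, 1]   # illustrative decreasing weights
p = abnormal_probability(probs, weights)
print(round(p, 3))  # 0.333
```

With these weights a single off-screen glance yields only 5/15 ≈ 0.33, which stays below a moderate threshold such as 0.5, so momentary deviations do not trigger an alert.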
therefore, the invigilating teacher only needs to focus on the examinees with high abnormal behavior probability, and the pressure of the invigilating teacher can be greatly reduced.

Claims (4)

1. An auxiliary invigilation method for large-scale online examination, characterized by comprising the following steps:
step 1, installing and configuring the examinee-end hardware: the examinee uses a computer with a front-facing camera and additionally installs a rear camera; the front camera can capture the examinee's complete head, and the rear camera can capture a panorama of the examinee's seat;
step 2, collecting a neural network training data set: within a certain time before the examination, the examinee is guided to simulate the answering state; multiple frames of frontal images of the examinee are acquired through the front-facing camera of the examinee's computer, and the eye image in each frame is extracted with the Viola-Jones target recognition algorithm; each eye image is then scaled so that all eye images have the same size;
step 3, data preprocessing: the images acquired in step 2 are preprocessed; the preprocessing comprises outlier rejection and data normalization; the outliers are images captured while the examinee blinks; after the outliers are eliminated, the pixel values are normalized to the interval [-1, 1];
step 4, neural network training: the preprocessed eye images are input into a neural network for training; a line of sight deviating from the computer screen is defined as abnormal behavior, and the output value of the neural network is the criterion for whether the line of sight corresponding to an eye image falls within the computer screen;
step 5, real-time detection: after the examination begins, the front-facing camera collects eye images of the examinee, which are input into the neural network to judge abnormal behavior; the judging method is as follows: an abnormal-behavior threshold is preset; the abnormal-behavior probability is calculated from the line-of-sight range over consecutive frames of images of the examinee; when an examinee's abnormal-behavior probability exceeds the threshold, the corresponding monitoring video is marked as abnormal, the invigilating teacher is prompted to pay attention, and the rear camera installed at the examinee's answering site is called up and fed back to the invigilating teacher for further judgment.
2. The auxiliary invigilation method for large-scale online examination of claim 1, wherein in step 2 each eye image detected by the algorithm is scaled up or down, and each eye image is uniformly transformed to a size of 25 × 90 pixels.
3. The auxiliary invigilation method for large-scale online examination of claim 1, wherein in step 3 the acquired data are divided into two classes: eye images in which the examinee looks at the answering interface, and eye images in which the examinee does not look at the computer screen. Specifically, the dimensionality of the data is first reduced by principal component analysis; the reduced data are then clustered, with the initial cluster centers defined as the mean of each of the two classes; after clustering, points far from their cluster center and points whose cluster assignment disagrees with the original label are deleted, completing the outlier rejection.
4. The auxiliary invigilation method for large-scale online examination of claim 1, wherein in step 5 the abnormal-behavior probability is obtained by combining, through a weighted average, the neural network outputs corresponding to the current frame image and the previous k frame images, as follows:
let the neural network outputs corresponding to the images of the current frame and the previous k frames be P_0, P_1, P_2, …, P_k, where P_0 is the output for the current frame;
let the weight of each frame be w_0, w_1, w_2, …, w_k with w_i ≥ w_j for all 0 ≤ i < j ≤ k; the abnormal-behavior probability of the current frame is then

    P = (w_0·P_0 + w_1·P_1 + … + w_k·P_k) / (w_0 + w_1 + … + w_k)
CN202110875667.6A (priority 2021-07-30, filed 2021-07-30): Auxiliary invigilation method for large-scale online examination; published as CN113743209A (pending).

Priority Applications (1)

Application Number: CN202110875667.6A; Priority Date: 2021-07-30; Filing Date: 2021-07-30; Title: Auxiliary invigilation method for large-scale online examination


Publications (1)

Publication Number: CN113743209A; Publication Date: 2021-12-03

Family

ID=78729631


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140085189A1 (en) * 2012-09-26 2014-03-27 Renesas Micro Systems Co., Ltd. Line-of-sight detection apparatus, line-of-sight detection method, and program therefor
CN108304794A (en) * 2018-01-26 2018-07-20 安徽爱学堂教育科技有限公司 cheating automatic identifying method and device
CN110837784A (en) * 2019-10-23 2020-02-25 中山大学 Examination room peeping cheating detection system based on human head characteristics
CN111611865A (en) * 2020-04-23 2020-09-01 平安国际智慧城市科技股份有限公司 Examination cheating behavior identification method, electronic equipment and storage medium
CN112036299A (en) * 2020-08-31 2020-12-04 山东科技大学 Examination cheating behavior detection method and system under standard examination room environment
CN112084994A (en) * 2020-09-21 2020-12-15 哈尔滨二进制信息技术有限公司 Online invigilation remote video cheating research and judgment system and method
CN112115870A (en) * 2020-09-21 2020-12-22 南京润北智能环境研究院有限公司 Examination cheating small copy recognition method based on YOLOv3
CN112464793A (en) * 2020-11-25 2021-03-09 大连东软教育科技集团有限公司 Method, system and storage medium for detecting cheating behaviors in online examination
KR102255598B1 (en) * 2020-07-08 2021-05-25 국민대학교산학협력단 Non-face-to-face online test system and operation method
CN113095675A (en) * 2021-04-12 2021-07-09 华东师范大学 Method for monitoring action mode of examinee by means of identification point in network examination


Non-Patent Citations (1)

Title
戴承耕 (Dai Chenggeng): "视线检测在考试监控系统中的研究与应用" ("Research and Application of Gaze Detection in Examination Monitoring Systems"), China Master's Theses Full-text Database (Information Science and Technology), no. 06, page 3 *


Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination