CN110837784B - Examination room peeping and cheating detection system based on human head characteristics - Google Patents

Info

Publication number
CN110837784B
CN110837784B (application CN201911014024.1A)
Authority
CN
China
Prior art keywords
head
cheating
rgb
face
peeping
Prior art date
Legal status
Active
Application number
CN201911014024.1A
Other languages
Chinese (zh)
Other versions
CN110837784A (en)
Inventor
蔡昆京
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201911014024.1A priority Critical patent/CN110837784B/en
Publication of CN110837784A publication Critical patent/CN110837784A/en
Application granted granted Critical
Publication of CN110837784B publication Critical patent/CN110837784B/en


Classifications

    • G06V 40/20: Recognition of biometric, human-related or animal-related patterns in image or video data; movements or behaviour, e.g. gesture recognition
    • G06F 18/24: Pattern recognition; analysing; classification techniques
    • G06V 20/41: Scenes; scene-specific elements in video content; higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46: Scenes; scene-specific elements in video content; extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/48: Scenes; scene-specific elements in video content; matching video sequences
    • G06V 40/161: Human faces, e.g. facial parts, sketches or expressions; detection; localisation; normalisation
    • G06V 40/168: Human faces, e.g. facial parts, sketches or expressions; feature extraction; face representation
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an examination room peeping and cheating detection system based on human head characteristics. The system comprises: an RGB-D data acquisition module, which collects RGB color video and depth information data of the examinees; a head feature extraction module, composed of a head position track calculation unit, a head posture estimation unit, an eye direction estimation unit and a face recognition unit, which analyzes head position trajectory, head posture, eye gaze direction, face identity and other head features of each person from the RGB-D video data; and a cheating behavior judging and classifying module, which applies a plurality of rules to the extracted head features to judge cheating individually, and then weights and combines the result of each rule to reach a final conclusion on whether peeping and cheating occurred.

Description

Examination room peeping and cheating detection system based on human head characteristics
Technical Field
The invention relates to the field of computer vision, in particular to an examination room peeping and cheating detection system based on human head characteristics.
Background
At present, many measures are taken to prevent cheating in examination rooms in China, among them video monitoring. Although video monitoring compensates to some extent for the fact that invigilators cannot watch everything at once, it is used mostly to record the examination-room scene and to deter larger incidents; after the examination, a great deal of manpower is still needed to review the recordings and judge whether any examinee cheated. This is inefficient and prone to missed and erroneous judgments. If computer vision and big data could be introduced to analyze the surveillance video intelligently and detect whether cheating occurred, both efficiency and accuracy could be improved, safeguarding examination-room discipline and examination fairness.
Cheating in important examinations can currently be curbed by metal detection, and the cheating that is not easy to notice mainly consists of peeping at others' answer sheets or at covertly brought materials. Most of the intelligent video-surveillance cheating-detection schemes proposed for such behavior are based on human body posture. In the prior art, the Chinese patent application with application number CN201910336784.8 describes a posture-based examination cheating detection method that detects cheating by estimating the examinee's pose and performing kinematic analysis on the human skeleton sequence.
In fact, the cheating behaviors that are hard to detect in most examination rooms, such as peeping at other people's test papers, are concentrated in upper-body movement, especially of the head. Whole-body skeleton pose estimation is therefore too broad in scope, inefficient in detection, lacks specific judgment criteria, has limited accuracy and is not practical. Moreover, from video shot by a single RGB color camera, a monocular view can hardly estimate the position of an object in the picture and its subsequent motion trajectory accurately.
Disclosure of Invention
The invention provides an examination room peeping and cheating detection system based on human head characteristics, which intelligently analyzes features such as the head position trajectory, head posture and eye gaze direction of each examinee from RGB-D video data collected in the examination room, and efficiently and accurately judges whether the examinee is peeping to cheat.
In order to solve the technical problems, the invention adopts the following technical scheme:
an examination room peeping and cheating detection system based on head characteristics of a human body, which is characterized by comprising:
the RGB-D data acquisition module is used for recording RGB color video and depth information data of invigilators and examinees in the examination room in real time;
the head characteristic extraction module is used for analyzing the acquired RGB-D video data frame by frame to acquire the characteristics of head position, head posture, head movement track, face identity and eye gaze direction;
and the cheating behavior judging and classifying module is used for judging and classifying the extracted features of the RGB-D video data according to a plurality of rules, and then combining the judgment results of the rules to conclude whether peeping and cheating occurred.
Further, the modules of the system are connected as follows: the RGB-D data acquisition module transmits the acquired RGB-D video data to the head feature extraction module, which extracts the various head features; finally, the cheating behavior judging and classifying module determines from these features whether peeping and cheating behavior exists.
Furthermore, the RGB-D data acquisition module consists of two RGB-D cameras which are arranged at fixed positions in front and back of the examination room, and can comprehensively acquire RGB color video and depth picture data of examination staff and examinees in the examination room.
Further, the head characteristic extraction module comprises a head position track calculation unit, a head posture estimation unit, a face recognition unit and an eye direction estimation unit.
Further, the head position track calculation unit first obtains a face rectangular frame for each examinee in the video using a deep-learning face detection framework, finds the depth d in the corresponding depth map at the midpoint position (u, v) of the rectangular frame, and calculates the spatial position coordinates (x, y, z) of the head according to the following formulas:
x=(u−c_x)·d/f_x
y=(v−c_y)·d/f_y
z=d
where c_x, c_y is the known optical center position of the RGB-D camera and f_x, f_y the focal length. Applying this over the whole video sequence then yields the motion trajectory of the head and its hot-spot area.
Furthermore, the head posture estimation unit uses a deep convolutional neural network model to accurately identify the three steering angles of the head in three-dimensional space: Pitch, Yaw and Roll. The rectangular frame picture obtained by face detection is input into the deep convolutional neural network model, which outputs the classifications of the three steering angles, class_Pitch, class_Yaw and class_Roll, from which
Pitch=(class_Pitch×2–90)°
Yaw=(class_Yaw×2–90)°
Roll=(class_Roll×2–90)°。
Further, the eye direction estimation unit identifies the examinee's eye gaze direction. Super-resolution processing is first applied to the rectangular frame obtained by face detection; key-point recognition then extracts the feature points of the two eyes from the frame; the face rectangle and the eye feature points are input into an eye-recognition neural network model, which outputs a head-relative gaze direction; the final gaze direction is determined by combining it with the head-pose Euler angles Pitch, Yaw and Roll.
Furthermore, the face recognition unit extracts features from the detected face rectangular frame using a deep face recognition model and calculates the cosine distance between these features and the face features of the examinees in the examination room database; the examinee with the smallest cosine distance is the best matching identity.
Further, the cheating behavior judging and classifying module judges and classifies the features extracted from the RGB-D video data according to a plurality of rules, each rule representing a weak decision classifier f_n. The N weak classifiers are cascaded into a strong classifier F that makes the final peeping decision. The error rate e_n of each weak classifier is calculated, and the weight of each weak classifier is
α_n=(1/2)·ln((1−e_n)/e_n)
The final cheating decision classifier F is the weighted sum of the weak classifiers: F > 1 indicates that the examinee exhibits peeping and cheating behavior, 0 < F < 1 indicates the probability that the examinee is suspected of cheating, and F = 0 indicates that the examinee is not cheating.
Compared with the prior art, the beneficial effects are: the detection target is clear, detection is more efficient and rapid, and the features are more specific, which improves the accuracy of information capture and judgment; multiple head features are combined and first judged by several weak-classification rules, and the results of all weak classifiers are then combined to conclude whether peeping and cheating occurred, so the final judgment takes many factors into account and is more accurate and reliable; the video and depth data obtained from the RGB-D cameras allow more accurate analysis and estimation of the head position and motion trajectory, overcoming the inaccurate localization of the traditional monocular RGB color camera approach.
Drawings
Fig. 1 is a schematic structural frame of the technical scheme of the present invention.
FIG. 2 is a schematic diagram of a deep convolutional neural network model framework for head pose estimation.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the invention; for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent actual product dimensions; it will be appreciated by those skilled in the art that certain well-known structures in the drawings and their descriptions may be omitted. The positional relationships shown in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
Example 1:
referring to fig. 1, an embodiment of the invention provides an examination room peeping and cheating detection system based on head features of a human body, which comprises an RGB-D data acquisition module, a head feature extraction module and a cheating behavior judgment and classification module. The relation among the three is that the RGB-D data acquisition module sends RGB-D video data to the head characteristic extraction module, and the head characteristic extraction module extracts various head characteristics and then sends the head characteristics to the cheating behavior judgment classification module, so that a final result is obtained.
In this embodiment, the RGB-D data acquisition module is configured to record RGB video and depth information data of the invigilators and examinees in the examination room in real time;
specifically, two RGB-D cameras with model numbers of the image FM510 are respectively placed before and after the examination room, the cameras can record high-definition RGB color video, and each frame of RGB picture corresponding to the video has a depth data picture with 16 bit depth corresponding to the same resolution.
In this embodiment, the head feature extraction module is configured to analyze the collected RGB-D video data frame by frame and obtain the position, motion track, attitude angles, face identity and eye gaze direction of the head. The head feature extraction module consists of a head position track calculation unit, a head posture estimation unit, an eye direction estimation unit and a face recognition unit:
in particular, the headThe position track calculation unit firstly uses an OpenCVDnn face detection frame to obtain a face rectangular frame of each examinee in RGB video, and finds the depth d in a corresponding depth map according to the center point position (u, v) of the rectangular frame, and combines the camera internal parameters and the optical center position c which are known in advance x 、c y Focal length f x 、f y The spatial position coordinates (x, y, z) of the head can be calculated by substituting the following formula.
Figure BDA0002245098340000041
Figure BDA0002245098340000042
z=d
Similarly, the motion trail of the head can be obtained by applying the whole video sequence, and the hot spot area of the head position can be calculated according to the motion trail;
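A short sketch of the back-projection just described. The intrinsic values and the pixel/depth samples are placeholders, and the face detector itself is omitted; the hot-spot estimate shown (mean head position) is one simple choice, not specified by the text.

    import numpy as np

    def head_position(u, v, d, fx, fy, cx, cy):
        """Back-project the face-box center (u, v) with depth d into camera coordinates."""
        x = (u - cx) * d / fx
        y = (v - cy) * d / fy
        z = d
        return np.array([x, y, z])

    # Placeholder intrinsics and two consecutive frames of one examinee.
    intrinsics = dict(fx=910.0, fy=910.0, cx=640.0, cy=360.0)
    trajectory = np.stack([head_position(652, 358, 2.10, **intrinsics),
                           head_position(660, 360, 2.12, **intrinsics)])
    hotspot = trajectory.mean(axis=0)  # crude hot-spot estimate: mean head position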
Specifically, the head pose estimation unit uses a deep convolutional neural network model to accurately calculate the three steering angles of the head in three-dimensional space: Pitch, Yaw and Roll; the model framework is shown in fig. 2. Pitch, Yaw and Roll are first discretized into angle intervals: the range (−90°, 90°) is divided into 90 classes of 2° each. The rectangular frame picture obtained by face detection is taken as input to a ResNet convolutional neural network; the resulting feature layer is passed through a fully connected layer and a Softmax layer for each of the three channels, yielding the three steering-angle classes class_Pitch, class_Yaw and class_Roll, from which the final pose Euler angles are computed as:
Pitch=(class_Pitch×2–90)°
Yaw=(class_Yaw×2–90)°
Roll=(class_Roll×2–90)°
where Pitch > 0 represents an upward head-raising angle and Pitch < 0 a downward head-lowering angle; Yaw > 0 represents a horizontal turn to the left and Yaw < 0 a horizontal turn to the right; Roll > 0 represents tilting the head toward the left shoulder and Roll < 0 toward the right shoulder. The head pose estimation model performs well on several classical head pose data sets, with a mean absolute error for Pitch, Yaw and Roll below 5°.
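A minimal sketch of the bin-to-angle conversion used by the head-pose network: each of Pitch, Yaw and Roll is predicted as one of 90 classes covering (−90°, 90°) in 2° steps, and the Euler angle is recovered as class_index × 2 − 90. The one-hot outputs below are illustrative only; the ResNet backbone is not shown.

    import numpy as np

    def bin_to_angle(class_probs):
        """Map a 90-way softmax output to an Euler angle in degrees: class_index * 2 - 90."""
        class_index = int(np.argmax(class_probs))
        return class_index * 2 - 90

    # Illustrative one-hot outputs for the three classification heads.
    pitch_probs = np.eye(90)[40]   # class 40 -> -10 deg (head slightly lowered)
    yaw_probs   = np.eye(90)[72]   # class 72 ->  54 deg (turned to the left)
    roll_probs  = np.eye(90)[45]   # class 45 ->   0 deg

    pitch, yaw, roll = (bin_to_angle(p) for p in (pitch_probs, yaw_probs, roll_probs))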
Specifically, the eye direction estimation unit identifies the examinee's eye gaze direction. A super-resolution convolutional neural network first improves the resolution of the rectangular frame picture obtained by face detection; Dlib face key-point recognition then locates the feature points of the two eyes; the face rectangle picture and the eye feature points are input together into an eye-recognition neural network model, which outputs the head-relative gaze direction; the final gaze direction is computed by combining it with the head posture angles Pitch, Yaw and Roll obtained previously.
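One way to combine the head-relative gaze with the head-pose Euler angles is to rotate the gaze vector into the camera frame; a sketch of that step follows. The rotation order (yaw, then pitch, then roll) and axis conventions are assumptions, since the text only states that the two are combined.

    import numpy as np

    def euler_to_matrix(pitch, yaw, roll):
        """Rotation matrix from Euler angles in degrees (assumed intrinsic order: yaw, pitch, roll)."""
        p, y, r = np.radians([pitch, yaw, roll])
        rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
        ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
        rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
        return ry @ rx @ rz

    def camera_frame_gaze(gaze_in_head_frame, pitch, yaw, roll):
        """Rotate the head-relative gaze vector into the camera frame and normalise it."""
        g = euler_to_matrix(pitch, yaw, roll) @ np.asarray(gaze_in_head_frame, dtype=float)
        return g / np.linalg.norm(g)

    gaze = camera_frame_gaze([0.0, 0.0, 1.0], pitch=-15.0, yaw=40.0, roll=0.0)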
Specifically, the face recognition unit first aligns the face in the rectangular frame obtained by OpenCV DNN face detection, then inputs it into a deep-learning face recognition model to extract a feature vector, and calculates the cosine distance between this vector and the face feature vectors of the examinees in the examination room database; the examinee with the smallest cosine distance is the closest matching identity. In this way, once an examinee is judged to be peeping and cheating, the corresponding identity can be identified immediately.
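A minimal sketch of the identity-matching step: compare the detected face's embedding with the enrolled embeddings by cosine distance and return the closest examinee ID. The embedding extractor itself is not shown, and the 128-dimensional random vectors are illustrative only.

    import numpy as np

    def cosine_distance(a, b):
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def match_identity(query_embedding, enrolled):
        """enrolled: dict mapping examinee ID -> enrolled face embedding; return the closest ID."""
        return min(enrolled, key=lambda eid: cosine_distance(query_embedding, enrolled[eid]))

    rng = np.random.default_rng(0)
    enrolled_db = {"examinee_001": rng.normal(size=128), "examinee_002": rng.normal(size=128)}
    best_id = match_identity(rng.normal(size=128), enrolled_db)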
In this embodiment, the cheating behavior judging and classifying module is configured to judge and classify the features extracted from the RGB-D video data according to a plurality of rules, and then combine the judgment results of the rules to conclude whether peeping and cheating occurred.
Specifically, the features extracted from the RGB-D video data are judged against a number of rules, each rule representing a weak decision classifier f_n. The N weak classifiers are cascaded into a strong classifier F that makes the final peeping decision, and the error rate e_n of each weak classifier is calculated.
As a possible implementation, at least three rules can be applied to the feature combination extracted for each examinee to decide whether peeping and cheating occurred (a code sketch of these rules follows below):
If the examinee's head posture satisfies |Yaw| > 50° and Pitch < 0, while the gaze direction points downward and this state is maintained for a time t > 5 s, the examinee is peeping at a desk to the left or right, and f_1 = 1; otherwise f_1 = 0.
If the rear camera captures the examinee's face, i.e. |Yaw| exceeds 90°, while Pitch < 0 and the eye gaze direction points downward, and this state is maintained for t > 5 s, the examinee is peeping at a desk behind, and f_2 = 1; otherwise f_2 = 0.
If the head position (x, y, z) deviates from the hot-spot region by more than a threshold, while Pitch < 0 and the gaze direction points to the front and downward, and this state is maintained for t > 5 s, the examinee is peeping at the desk in front, and f_3 = 1; otherwise f_3 = 0.
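A sketch of the three rules as weak classifiers. The thresholds follow the text (|Yaw| > 50°, Pitch < 0, duration > 5 s, rear-camera visibility for |Yaw| > 90°); the 0.3 m hot-spot offset threshold, the boolean gaze flags and the way features are passed in are assumptions.

    import numpy as np

    def rule_side_peek(yaw, pitch, gaze_down, duration_s):
        """f1: |Yaw| > 50 deg, Pitch < 0, gaze downward, sustained for more than 5 s."""
        return 1 if abs(yaw) > 50 and pitch < 0 and gaze_down and duration_s > 5 else 0

    def rule_back_peek(seen_by_rear_camera, pitch, gaze_down, duration_s):
        """f2: face visible to the rear camera (|Yaw| > 90 deg), Pitch < 0, gaze downward, > 5 s."""
        return 1 if seen_by_rear_camera and pitch < 0 and gaze_down and duration_s > 5 else 0

    def rule_front_peek(head_xyz, hotspot_xyz, pitch, gaze_front_down, duration_s,
                        offset_threshold=0.3):
        """f3: head displaced from its hot-spot region beyond a threshold, looking front-down, > 5 s."""
        offset = np.linalg.norm(np.asarray(head_xyz) - np.asarray(hotspot_xyz))
        return 1 if offset > offset_threshold and pitch < 0 and gaze_front_down and duration_s > 5 else 0

    weak_outputs = [rule_side_peek(55, -10, True, 6.2),
                    rule_back_peek(False, -10, True, 6.2),
                    rule_front_peek([0.1, 0.0, 2.1], [0.0, 0.0, 2.1], -10, False, 6.2)]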
The final cheating decision classifier F is:
F=α_1·f_1+α_2·f_2+…+α_N·f_N
wherein
α_n=(1/2)·ln((1−e_n)/e_n)
are the weights of the weak classifiers.
F > 1 indicates that the examinee exhibits peeping and cheating behavior, 0 < F < 1 indicates the probability that the examinee is suspected of cheating, and F = 0 indicates that the examinee is not cheating.
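A short sketch of the weighted combination into the final score F, using the boosting-style weight α_n = (1/2)·ln((1 − e_n)/e_n) given above. The error rates used here are illustrative, not measured values.

    import math

    def weak_weight(error_rate):
        """Boosting-style weight of a weak classifier with error rate e_n."""
        return 0.5 * math.log((1.0 - error_rate) / error_rate)

    def strong_score(weak_outputs, error_rates):
        """Weighted sum F of the 0/1 weak-classifier outputs."""
        return sum(weak_weight(e) * f for f, e in zip(weak_outputs, error_rates))

    F = strong_score([1, 0, 1], [0.10, 0.20, 0.15])
    if F > 1:
        verdict = "peeping/cheating detected"
    elif F == 0:
        verdict = "no cheating"
    else:
        verdict = "suspected cheating (score treated as a probability)"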
It is to be understood that the above examples of the present invention are provided by way of illustration only and do not limit the embodiments of the present invention. Other variations or modifications based on the above description will be apparent to those of ordinary skill in the art. It is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention is intended to be covered by the following claims.

Claims (5)

1. An examination room peeping and cheating detection system based on head characteristics of a human body, which is characterized by comprising:
the RGB-D data acquisition module is used for recording RGB color video and depth information data of invigilator and examinee in the examination room in real time;
the head characteristic extraction module is used for analyzing the acquired RGB-D video data frame by frame to acquire various human head characteristics such as head position, head posture, head movement track, human face identity, eye gaze direction and the like;
the cheating behavior judging and classifying module is used for judging and classifying the extracted characteristics of the RGB-D video data according to a plurality of rules, and then synthesizing the judging result of each rule to give a conclusion whether to peep the cheating or not;
the head characteristic extraction module comprises a head position track calculation unit, a head posture estimation unit, a face recognition unit and an eye direction estimation unit;
the head position track calculation unit acquires a face rectangular frame for each examinee in the video using a deep-learning face detection framework, finds the depth d in the corresponding depth map at the midpoint position (u, v) of the rectangular frame, and, combining the camera intrinsics (optical center c_x, c_y and focal length f_x, f_y), calculates the spatial position coordinates (x, y, z) of the head according to the following formulas:
x=(u−c_x)·d/f_x
y=(v−c_y)·d/f_y
z=d
the motion trajectory of the head and the hot-spot position area are calculated in the same way over the whole video;
the head posture estimation unit inputs the rectangular frame picture obtained by face detection into the deep convolutional neural network model, outputs the classifications class_Pitch, class_Yaw and class_Roll of the three steering angles, and finally calculates the three steering angles of the head in three-dimensional space, Pitch, Yaw and Roll, according to the following formulas:
Pitch=(class_Pitch×2–90)°
Yaw=(class_Yaw×2–90)°
Roll=(class_Roll×2–90)°。
2. the examination room peeping and cheating detection system based on human head features as set forth in claim 1, wherein the RGB-D data acquisition module is composed of two RGB-D cameras arranged at fixed positions in front of and behind the examination room, and can comprehensively acquire RGB color video and depth picture data of examination staff and examinees in the examination room.
3. The examination room peeping and cheating detection system based on human head features as claimed in claim 1, wherein the eye direction estimation unit first carries out super-resolution processing on the face rectangular frame picture, then uses key-point recognition to obtain the feature points of the two eyes, uses the face rectangular frame and the eye feature points together as input to an eye-recognition neural network model to obtain a head-based gaze direction, and then combines the head posture angles Pitch, Yaw and Roll to determine the final eye direction.
4. The examination room peeping and cheating detection system based on human head features according to any one of claims 1 to 3, wherein the face recognition unit uses a deep face recognition model to extract features from the detected face rectangular frame and calculates the cosine distance between these features and the face features of the examinees in the examination room database; the examinee with the smallest cosine distance is the best matching identity.
5. The examination room peeping and cheating detection system based on human head features according to claim 1, wherein the cheating behavior judging and classifying module combines the various extracted head features and judges and classifies them for cheating according to a plurality of rules, each rule representing a weak cheating decision classifier; the error rate of each weak classifier is e_n and its weight is
α_n=(1/2)·ln((1−e_n)/e_n)
the N weak classifiers are cascaded into a strong classifier for the final peeping decision:
F=α_1·f_1+α_2·f_2+…+α_N·f_N
If the final peeping and cheating judgment result F > 1, the examinee exhibits peeping and cheating behavior; 0 < F < 1 represents the probability that the examinee is suspected of cheating; and F = 0 indicates that the examinee is not cheating.
CN201911014024.1A 2019-10-23 2019-10-23 Examination room peeping and cheating detection system based on human head characteristics Active CN110837784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911014024.1A CN110837784B (en) 2019-10-23 2019-10-23 Examination room peeping and cheating detection system based on human head characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911014024.1A CN110837784B (en) 2019-10-23 2019-10-23 Examination room peeping and cheating detection system based on human head characteristics

Publications (2)

Publication Number Publication Date
CN110837784A CN110837784A (en) 2020-02-25
CN110837784B true CN110837784B (en) 2023-06-20

Family

ID=69575791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911014024.1A Active CN110837784B (en) 2019-10-23 2019-10-23 Examination room peeping and cheating detection system based on human head characteristics

Country Status (1)

Country Link
CN (1) CN110837784B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598049B (en) * 2020-05-29 2023-10-10 中国工商银行股份有限公司 Cheating identification method and device, electronic equipment and medium
CN111709396A (en) * 2020-07-08 2020-09-25 六盘水达安驾驶培训有限公司 Driving skill subject two and three examination auxiliary evaluation method based on human body posture
CN111738209A (en) * 2020-07-17 2020-10-02 南京晓庄学院 Examination room cheating behavior pre-judging system based on examinee posture recognition
CN112446360A (en) * 2020-12-15 2021-03-05 作业帮教育科技(北京)有限公司 Target behavior detection method and device and electronic equipment
CN112883832A (en) * 2021-01-29 2021-06-01 北京市商汤科技开发有限公司 Method and device for managing behavior of person under test, electronic equipment and storage medium
CN113435362A (en) * 2021-06-30 2021-09-24 平安科技(深圳)有限公司 Abnormal behavior detection method and device, computer equipment and storage medium
CN113743209A (en) * 2021-07-30 2021-12-03 北京长峰科威光电技术有限公司 Auxiliary invigilation method for large-scale online examination
CN114882533A (en) * 2022-05-30 2022-08-09 北京百度网讯科技有限公司 Examination room abnormal behavior detection method, device, equipment and storage medium
CN114943922B (en) * 2022-06-02 2024-04-02 浙大城市学院 Machine examination suspicious behavior identification method based on deep learning
CN115598064A (en) * 2022-10-21 2023-01-13 圣名科技(广州)有限责任公司(Cn) Data detection method and device, electronic equipment and storage medium
CN115937793B (en) * 2023-03-02 2023-07-25 广东汇通信息科技股份有限公司 Student behavior abnormality detection method based on image processing
CN116894978B (en) * 2023-07-18 2024-03-29 中国矿业大学 Online examination anti-cheating system integrating facial emotion and behavior multi-characteristics
CN117649630A (en) * 2024-01-29 2024-03-05 武汉纺织大学 Examination room cheating behavior identification method based on monitoring video stream

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324284A (en) * 2013-05-24 2013-09-25 重庆大学 Mouse control method based on face and eye detection
CN108009468A (en) * 2017-10-23 2018-05-08 广东数相智能科技有限公司 A kind of marathon race anti-cheat method, electronic equipment and storage medium
CN109961000A (en) * 2018-10-22 2019-07-02 大连艾米移动科技有限公司 A kind of intelligence examination hall anti-cheating system
CN110032992A (en) * 2019-04-25 2019-07-19 沈阳航空航天大学 A kind of detection method that cheats at one's exam based on posture
CN110135282A (en) * 2019-04-25 2019-08-16 沈阳航空航天大学 A kind of examinee based on depth convolutional neural networks model later plagiarizes cheat detection method
CN110349667A (en) * 2019-07-05 2019-10-18 昆山杜克大学 The autism assessment system analyzed in conjunction with questionnaire and multi-modal normal form behavioral data
CN110349674A (en) * 2019-07-05 2019-10-18 昆山杜克大学 Autism-spectrum obstacle based on improper activity observation and analysis assesses apparatus and system


Also Published As

Publication number Publication date
CN110837784A (en) 2020-02-25

Similar Documents

Publication Publication Date Title
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
US8462996B2 (en) Method and system for measuring human response to visual stimulus based on changes in facial expression
Ahmed et al. Vision based hand gesture recognition using dynamic time warping for Indian sign language
CN107403142B (en) A kind of detection method of micro- expression
US7848548B1 (en) Method and system for robust demographic classification using pose independent model from sequence of face images
US8401248B1 (en) Method and system for measuring emotional and attentional response to dynamic digital media content
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
Tome et al. The 1st competition on counter measures to finger vein spoofing attacks
Lim et al. Automated classroom monitoring with connected visioning system
CN107133601A (en) A kind of pedestrian&#39;s recognition methods again that network image super-resolution technique is resisted based on production
CN109101865A (en) A kind of recognition methods again of the pedestrian based on deep learning
CN107463920A (en) A kind of face identification method for eliminating partial occlusion thing and influenceing
CN109598242B (en) Living body detection method
US11194997B1 (en) Method and system for thermal infrared facial recognition
CN107506800A (en) It is a kind of based on unsupervised domain adapt to without label video face identification method
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN111507592B (en) Evaluation method for active modification behaviors of prisoners
CN109544523B (en) Method and device for evaluating quality of face image based on multi-attribute face comparison
CN110781762B (en) Examination cheating detection method based on posture
CN107230267A (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
WO2013075295A1 (en) Clothing identification method and system for low-resolution video
CN110163567A (en) Classroom roll calling system based on multitask concatenated convolutional neural network
CN112668557A (en) Method for defending image noise attack in pedestrian re-identification system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant