CN110874585A - Peeping cheating behavior identification method based on attention area - Google Patents
- Publication number: CN110874585A
- Application number: CN201911189014.1A
- Authority: CN (China)
- Prior art keywords: angle, attention area, attention, cheating, nose
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a peeping-type cheating behavior identification method based on an attention area, comprising the following steps: (1) before the examination, measure the standard angle interval of the examinee's head angle relative to the camera while the examinee watches the screen; (2) during the examination, capture a facial image of the examinee with the camera every 100 ms and judge whether the attention area falls within the standard angle interval; (3) with 1 s as a statistical period, consider the attention area focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, and unfocused otherwise; with 3 min as a statistical period, if the unfocused time exceeds 60 s, consider cheating suspected and display a warning message on the examinee's screen; if no 1-s period within the 10 s after the warning message is displayed is judged focused, determine that cheating has occurred. The method identifies the positions of the facial organs in the video image and from them calculates the region on which the examinee's attention is focused.
Description
Technical Field
The invention relates to the field of cheating-behavior identification in paperless examination systems, and in particular to a peeping-type cheating behavior identification method based on an attention area.
Background
At present, scholars at home and abroad have proposed various solutions to examination cheating. They can be broadly divided into two categories: traditional information verification and electronically assisted verification. Traditional information verification aims to improve conventional methods of identifying examination cheating, for example by planning invigilators' patrol routes more effectively or by using identity information that is harder for a test taker to forge. Electronically assisted verification supplements traditional examination information verification with electronic automation technology that monitors and identifies examinees' cheating behavior; at present, electronically assisted verification based on biometric information is the mainstream. McGinity states that biometric-based authentication is superior to authentication based on traditional information such as identification numbers. Another study underscores the importance of the detection mechanism remaining effective throughout the course of the examination. Yang and Verbauwhede likewise consider that biometric systems provide better security than traditional cryptographic systems. Biometric authentication automatically identifies the authentication subject with reference to physiological characteristics of a living person, such as voice, hand geometry, fingerprints, and facial images. In general, biometric identification compares pre-stored data with captured data to yield a similarity result. There are two main categories of biometric identification methods: identification based on keystroke dynamics and identification based on video image features.
(I) Identification based on keystroke dynamics
Flior and Kowalski proposed providing continuous biometric user authentication for online examinations through keystroke dynamics. The characteristics of the words a user types and the keystroke rhythm are recorded and compared; after an examinee cheats, answering becomes easier, and the cheating is identified from the marked change in the relevant keystroke characteristics during answering. However, the keystroke dynamics approach has the disadvantage that the recognition result is affected by changes in keystroke characteristics that are unrelated to cheating, such as typing fatigue as the examination time progresses, or the different thinking times required by hard and easy questions.
(II) Identification based on video analysis
Identification based on video analysis is classified according to the object under video surveillance. Li proposed identifying cheating behavior from video images of the examination room, where the surveillance object is the whole room. Because the surveillance object is too complex, the shortcomings are obvious: such a system can only identify extreme cheating behaviors involving large body movements, such as leaving a seat and walking around the examination room, and its identification of routine cheating behaviors with small movements is not ideal. Compared with identifying cheating with the whole examination room as the unit, more video-based identification takes the individual examinee as the unit: one examinee corresponds to one camera, and cheating is identified by analyzing video of the examinee's upper body or face. This approach works well for identity cheating, that is, detecting an impostor taking the test, and also has a good identification rate for other cheating behaviors with small movements. However, it remains insufficient for peeping-type cheating, in which the examinee's face stays within the camera's shooting area and only the head angle changes relative to normal answering, for example when the examinee looks at smuggled materials or at an adjacent examinee's screen.
Disclosure of Invention
The invention aims to provide an attention-area calculation method based on face alignment technology, addressing the problems of the video-analysis-based identification modes described above.
The technical scheme of the invention is as follows:
A peeping-type cheating behavior identification method based on an attention area comprises the following steps:
(1) before the examination, measure the standard angle interval of the examinee's head angle relative to the camera while the examinee watches the screen;
(2) during the examination, capture a facial image of the examinee with the camera every 100 ms, form the attention area from the head rotation angle α, horizontal tilt angle β, and vertical tilt angle γ of the facial image relative to the camera, and judge whether the attention area falls within the standard angle interval;
(3) with 1 s as a statistical period, consider the attention area focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, and unfocused otherwise; with 3 min as a statistical period, if the unfocused time exceeds 60 s, consider cheating suspected and display a warning message on the examinee's screen; if no 1-s period within the 10 s after the warning message is displayed is judged focused, determine that cheating has occurred;
wherein the standard angle intervals are: horizontal angle interval [-30°, 30°], vertical angle interval [-40°, 20°], and rotation angle interval [-60°, 60°].
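The timing rules above can be sketched as a small decision module (a minimal sketch; the function and constant names are assumptions, not from the patent — one judgment is made every 100 ms, so a 1-s period holds 10 samples and a 3-min period holds 180 one-second flags):

```python
# Standard angle intervals from the patent, in degrees.
HORIZ = (-30.0, 30.0)   # horizontal tilt interval
VERT = (-40.0, 20.0)    # vertical tilt interval
ROT = (-60.0, 60.0)     # rotation interval

def sample_in_interval(alpha, beta, gamma):
    """One 100 ms judgment: do all three head angles fall inside the intervals?"""
    return (ROT[0] <= alpha <= ROT[1]
            and HORIZ[0] <= beta <= HORIZ[1]
            and VERT[0] <= gamma <= VERT[1])

def second_is_attentive(samples10):
    """1 s period (10 samples): attentive if at least one judgment passes."""
    return any(samples10)

def evaluate_3min(seconds):
    """3 min period (180 one-second flags): suspicion if more than 60 s inattentive."""
    inattentive = sum(1 for s in seconds if not s)
    return inattentive > 60   # True -> show a warning on the examinee's screen

def confirm_cheating(seconds10):
    """After the warning: cheating is confirmed if none of the next 10 s is attentive."""
    return not any(seconds10)
```

A monitoring loop would feed each 100 ms angle triple through `sample_in_interval`, aggregate every 10 samples with `second_is_attentive`, and slide `evaluate_3min` over the resulting one-second flags.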
The attention area is acquired as follows:
(1) normalize and grayscale the facial image acquired by the camera, recording the result as grey;
(2) perform face region identification on the normalized grayscale image to obtain the largest face region, recorded as biggestFace;
(3) identify the facial organ regions of the eyes, nose, and mouth within the largest face region, recorded respectively as leftEye, rightEye, nose, and mouth;
(4) calculate the head rotation angle α, the horizontal tilt angle β, and the vertical tilt angle γ respectively.
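Steps (1)-(3) can be sketched as a pipeline with the detectors injected as callables, since the text does not name a specific detector (with OpenCV one would typically pass wrappers around `cv2.cvtColor` and `CascadeClassifier.detectMultiScale`; all names below are illustrative):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height)

@dataclass
class FaceLandmarks:
    # Field names follow the patent's labels.
    leftEye: Rect
    rightEye: Rect
    nose: Rect
    mouth: Rect

def acquire_attention_inputs(image,
                             to_gray: Callable,
                             detect_faces: Callable[..., List[Rect]],
                             detect_organs: Callable) -> FaceLandmarks:
    """Steps (1)-(3): grayscale, pick the biggest face, locate the organ regions."""
    grey = to_gray(image)                                # step (1): normalize + grayscale
    faces = detect_faces(grey)                           # step (2): candidate face regions
    biggestFace = max(faces, key=lambda r: r[2] * r[3])  # largest area wins
    return detect_organs(grey, biggestFace)              # step (3): eyes, nose, mouth
```

Step (4) then computes α, β, and γ from the returned landmark rectangles.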
Preferably, the head rotation angle α is calculated as follows:
α is defined as 0° when the head is upright relative to the camera's horizontal axis, increasing when the head rotates to the right and decreasing when it rotates to the left. Because the line connecting the left and right eyes is always perpendicular to the head's vertical direction, the angle between that line and the horizontal can mark the head rotation angle α:
(1) mark the head rotation angle α1 by the angle θ between the line connecting the left and right eyes and the horizontal direction;
(2) mark the head rotation angle α2 by the angle θ′ between the line connecting the mouth and the nose and the vertical direction.
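The exact formulas for θ and θ′ are not reproduced in this text; a natural reconstruction uses `atan2` on the landmark coordinates (a hedged sketch — image coordinates are assumed to have y pointing down, so a positive θ means the right eye sits lower in the image than the left eye):

```python
import math

def rotation_angles(leftEye, rightEye, nose, mouth):
    """Two redundant estimates of the head rotation angle, in degrees.

    theta  : angle of the leftEye -> rightEye line against the horizontal (alpha1).
    theta2 : angle of the nose -> mouth line against the vertical (alpha2).
    """
    (xl, yl), (xr, yr) = leftEye, rightEye
    theta = math.degrees(math.atan2(yr - yl, xr - xl))

    (xn, yn), (xm, ym) = nose, mouth
    # Swapping the arguments measures the angle from the vertical axis.
    theta2 = math.degrees(math.atan2(xm - xn, ym - yn))
    return theta, theta2
```

On the worked-example coordinates given later (eyes (267, 219) and (388, 242), nose (298, 268), mouth (292, 361)), this yields θ ≈ 10.8° and θ′ ≈ −3.7°, both well inside the rotation interval [-60°, 60°].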
Alternatively and preferably, the horizontal tilt angle β is calculated as follows:
(1) using the eyes and nose as facial positioning markers;
When the head tilts horizontally, the larger the tilt angle, the closer the nose region moves horizontally toward the eye region on the corresponding side. The horizontal tilt angle β is defined as 0 when both eyes look straight at the camera, increasing when the head turns to the right and decreasing when it turns to the left.
Define l2 as the distance between the left and right eyes, p1 as the foot of the perpendicular drawn from the nose to the line connecting the left and right eyes, and l1 as the distance from p1 to the left eye; the ratio factor R1 is then computed from l1 and l2.
The mapping between the horizontal tilt angle and R1 is confirmed by statistical fitting. First, clear sample pictures are selected and their horizontal tilt angles are manually marked, with the selected angles ranging over approximately [-30°, 30°]; the R1 value of each sample picture is calculated, and the corresponding horizontal tilt angle and R1 are recorded. As shown in FIG. 3, the x-axis is the manually marked horizontal tilt angle and the y-axis is the R1 ratio value; the points are the coordinates of the corresponding sample pictures, and the straight line is the fitted linear mapping function.
Fitting a linear function yields the mapping between the horizontal tilt angle and R1:
β1 = 72.1R1 - 35.4;
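The linear mapping β1 = 72.1R1 - 35.4 is obtained by statistical fitting over labeled samples; a minimal closed-form least-squares sketch (illustrative only — the patent's sample data is not given, so the test below uses synthetic points):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form (no libraries)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b
```

Feeding it the (R1, manually marked angle) pairs from the sample pictures would recover the slope and intercept of the fitted mapping.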
(2) using the eyes and mouth as facial positioning markers;
Define p2 as the foot of the perpendicular drawn from the mouth to the line connecting the left and right eyes, and l3 as the distance from p2 to the left eye; the ratio factor R2 is computed from l3 and l2, giving
β2 = 72.1R2 - 35.4;
The coordinates of the foot point p1 are solved as follows. Let the left-eye, right-eye, and nose coordinate points be (xl, yl), (xr, yr), and (xn, yn). The straight line through the two eye coordinate points can be written a1x + b1y = c1, with
a1 = yr - yl, b1 = -(xr - xl), c1 = (yr - yl)xl - (xr - xl)yl;
the perpendicular line through the nose coordinate point can likewise be written a2x + b2y = c2, with
a2 = xr - xl, b2 = yr - yl, c2 = (xr - xl)xn + (yr - yl)yn.
The coordinates of p1 (and of p2, replacing the nose point by the mouth point) are obtained from this pair of equations using determinants in linear algebra (Cramer's rule):
x = (c1b2 - c2b1) / (a1b2 - a2b1), y = (a1c2 - a2c1) / (a1b2 - a2b1).
The distance l1 from the left-eye coordinate point to p1, and the distance l3 from p2 to the left eye, then follow from the Euclidean distance formula.
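The foot-of-perpendicular quantities (l1 and l2 here, and l4 and l5 for the vertical tilt below) can also be computed by vector projection, which yields the same foot point as the Cramer's-rule solution; on the worked-example coordinates given later (eyes (267, 219) and (388, 242), nose (298, 268)) this reproduces l2 = 123.17 and l1 ≈ 39.6 to within rounding (the function name is illustrative):

```python
import math

def perpendicular_foot_ratios(leftEye, rightEye, point):
    """Foot of the perpendicular from `point` onto the left-right eye line.

    Returns (foot, dist_foot_to_left_eye, eye_distance, perpendicular_length),
    i.e. (p1, l1, l2, l4) when `point` is the nose, (p2, l3, l2, l5) for the mouth.
    """
    (xl, yl), (xr, yr) = leftEye, rightEye
    dx, dy = xr - xl, yr - yl              # eye-line direction vector
    px, py = point[0] - xl, point[1] - yl  # point relative to the left eye
    d2 = dx * dx + dy * dy
    t = (px * dx + py * dy) / d2           # normalized position along the eye line
    foot = (xl + t * dx, yl + t * dy)
    l_eye = math.sqrt(d2)                  # l2: distance between the eyes
    l_foot = abs(t) * l_eye                # l1 (nose) or l3 (mouth)
    l_perp = abs(px * dy - py * dx) / l_eye  # l4 (nose) or l5 (mouth)
    return foot, l_foot, l_eye, l_perp
```

The ratio factors (e.g. R1 from l1 and l2) are then formed from these lengths.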
(3) selecting the ratio R3 between the areas of the left-eye and right-eye regions as the correlation factor;
Through observation it can be seen that, because of perspective transformation, when the horizontal tilt angle is 0° the left and right eye regions are the same size; when the angle is positive the left eye region is larger and the right eye region smaller, and when the angle is negative the right eye region is larger and the left eye region smaller.
The relationship between the horizontal tilt angle and R3 is shown in FIG. 4, with the horizontal tilt angle β on the y-axis and R3 on the x-axis; each point represents a sample and the line represents the fitted function. Fitting yields:
β3 = 49.1R3 - 48.9;
If β falls within the horizontal angle interval [-30°, 30°], then sβ = 1; otherwise sβ = 0.
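The third factor can be sketched end-to-end as below (a sketch assuming R3 is the left-to-right eye-area ratio, which is consistent with the fitted intercept: equal areas, R3 = 1, give β3 ≈ 0.2°):

```python
def beta_from_eye_areas(left_rect, right_rect):
    """Horizontal tilt from the eye-area ratio, assuming R3 = S_left / S_right.

    Uses the fitted mapping beta3 = 49.1 * R3 - 48.9. Returns (beta3, s_beta),
    where s_beta is 1 when beta3 lies inside the standard horizontal interval
    [-30, 30] degrees and 0 otherwise.
    """
    area = lambda r: r[2] * r[3]          # rect = (x, y, width, height)
    r3 = area(left_rect) / area(right_rect)
    beta3 = 49.1 * r3 - 48.9
    s_beta = 1 if -30.0 <= beta3 <= 30.0 else 0
    return beta3, s_beta
```

With the worked-example regions given later (left eye 38 x 12, right eye 48 x 16), R3 = 456/768 ≈ 0.594 and β3 ≈ −19.7°, inside the interval.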
Or preferably, the vertical tilt angle γ is calculated as follows:
(1) define l4 as the length of the perpendicular drawn from the nose to the line connecting the left and right eyes; the vertical tilt angle of the face is 0° when the head looks straight at the camera, positive when the head tilts upward, and negative when it tilts downward;
The relationship between R4 and the vertical tilt angle γ1 is shown in FIG. 5, where the x-axis and y-axis respectively represent R4 and γ1, the points represent samples, and the line represents the fitted function. The corresponding mapping function is:
γ1 = 800R4² + 395R4 - 80;
(2) define l5 as the length of the perpendicular drawn from the mouth to the line connecting the left and right eyes;
The relationship between R5 and the vertical tilt angle γ2 is shown in FIG. 6, where the x-axis represents R5, the y-axis represents γ2, the points represent samples, and the line represents the fitted curve;
γ2 = 800R5² - 141R5 - 122.5;
If γ falls within the vertical angle interval [-40°, 20°], then sγ = 1; otherwise sγ = 0.
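The two fitted vertical-tilt mappings and the interval flag can be evaluated as below; note that the exact normalization of R4 and R5 from l4 and l5 is not spelled out in this text, so the ratios are taken as given inputs:

```python
def gamma_estimates(r4, r5):
    """Evaluate the two fitted vertical-tilt mappings and their interval flags.

    r4, r5 are the ratio factors built from the perpendicular lengths l4 (nose)
    and l5 (mouth). Returns ((gamma1, s1), (gamma2, s2)), where each flag is 1
    when the angle lies inside the standard vertical interval [-40, 20] degrees.
    """
    gamma1 = 800 * r4 ** 2 + 395 * r4 - 80
    gamma2 = 800 * r5 ** 2 - 141 * r5 - 122.5

    def flag(g):
        return 1 if -40.0 <= g <= 20.0 else 0

    return (gamma1, flag(gamma1)), (gamma2, flag(gamma2))
```

For example, r4 = 0 evaluates γ1 to the intercept -80°, outside the interval, while r4 = 0.15 gives γ1 = -2.75°, inside it.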
The technical effects of the invention:
The method identifies the positions of the facial organs in the video image and from them calculates the region on which the examinee's attention is focused. This enables the identification of peeping-type cheating behaviors that similar cheating-identification systems cannot detect and that are otherwise difficult to identify, such as looking at an adjacent examinee's PC or consulting illicitly carried materials.
Drawings
Fig. 1 is a schematic diagram of the head inclination angle θ based on the left and right eye coordinates.
Fig. 2 is a schematic diagram of the head inclination angle θ' based on the nose-mouth coordinates.
FIG. 3 is a graph of the relationship between the R1 and R2 ratio values of sample pictures and the manually identified horizontal tilt angles.
FIG. 4 is a graph of the relationship between the R3 ratio values of sample pictures and the manually identified horizontal tilt angles.
FIG. 5 is a graph of the relationship between the R4 ratio values of sample pictures and the manually identified vertical tilt angles.
FIG. 6 is a graph of the relationship between the R5 ratio values of sample pictures and the manually identified vertical tilt angles.
Fig. 7 is a schematic diagram of a peeping cheating behavior identification method based on attention area.
FIG. 8 is a bar graph of the head rotation angle difference in the exemplary embodiment.
FIG. 9 is a bar graph of the difference in horizontal tilt angle of the head in the example embodiment.
FIG. 10 is a bar graph of the head vertical tilt angle difference in the exemplary embodiment.
Detailed Description
A peeping-type cheating behavior identification method based on an attention area comprises the following steps:
(1) before the examination, measure the standard angle interval of the examinee's head angle relative to the camera while the examinee watches the screen;
(2) during the examination, capture a facial image of the examinee with the camera every 100 ms, form the attention area from the head rotation angle α, horizontal tilt angle β, and vertical tilt angle γ of the facial image relative to the camera, and judge whether the attention area falls within the standard angle interval;
(3) with 1 s as a statistical period, consider the attention area focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, and unfocused otherwise; with 3 min as a statistical period, if the unfocused time exceeds 60 s, consider cheating suspected and display a warning message on the examinee's screen; if no 1-s period within the 10 s after the warning message is displayed is judged focused, determine that cheating has occurred;
wherein the standard angle intervals are: horizontal angle interval [-30°, 30°], vertical angle interval [-40°, 20°], and rotation angle interval [-60°, 60°].
The attention area is acquired as follows:
(1) normalize and grayscale the facial image acquired by the camera, recording the result as grey;
(2) perform face region identification on the normalized grayscale image to obtain the largest face region, recorded as biggestFace;
(3) identify the facial organ regions of the eyes, nose, and mouth within the largest face region, recorded respectively as leftEye, rightEye, nose, and mouth;
(4) calculate the head rotation angle α, the horizontal tilt angle β, and the vertical tilt angle γ respectively.
Firstly, the head rotation angle α is calculated as follows:
α is defined as 0° when the head is upright relative to the camera's horizontal axis, increasing when the head rotates to the right and decreasing when it rotates to the left. Because the line connecting the left and right eyes is always perpendicular to the head's vertical direction, the angle between that line and the horizontal can mark the head rotation angle α:
(1) mark the head rotation angle α1 by the angle θ between the line connecting the left and right eyes and the horizontal direction;
(2) mark the head rotation angle α2 by the angle θ′ between the line connecting the mouth and the nose and the vertical direction.
Secondly, the horizontal tilt angle β is calculated as follows:
(1) using the eyes and nose as facial positioning markers;
When the head tilts horizontally, the larger the tilt angle, the closer the nose region moves horizontally toward the eye region on the corresponding side. The horizontal tilt angle β is defined as 0 when both eyes look straight at the camera, increasing when the head turns to the right and decreasing when it turns to the left.
Define l2 as the distance between the left and right eyes, p1 as the foot of the perpendicular drawn from the nose to the line connecting the left and right eyes, and l1 as the distance from p1 to the left eye; the ratio factor R1 is then computed from l1 and l2.
The mapping between the horizontal tilt angle and R1 is confirmed by statistical fitting. First, clear sample pictures are selected and their horizontal tilt angles are manually marked, with the selected angles ranging over approximately [-30°, 30°]; the R1 value of each sample picture is calculated, and the corresponding horizontal tilt angle and R1 are recorded. As shown in FIG. 3, the x-axis is the manually marked horizontal tilt angle and the y-axis is the R1 ratio value; the points are the coordinates of the corresponding sample pictures, and the straight line is the fitted linear mapping function.
Fitting a linear function yields the mapping between the horizontal tilt angle and R1:
β1 = 72.1R1 - 35.4;
(2) using the eyes and mouth as facial positioning markers;
Define p2 as the foot of the perpendicular drawn from the mouth to the line connecting the left and right eyes, and l3 as the distance from p2 to the left eye; the ratio factor R2 is computed from l3 and l2, giving
β2 = 72.1R2 - 35.4;
The coordinates of the foot point p1 are solved as follows. Let the left-eye, right-eye, and nose coordinate points be (xl, yl), (xr, yr), and (xn, yn). The straight line through the two eye coordinate points can be written a1x + b1y = c1, with
a1 = yr - yl, b1 = -(xr - xl), c1 = (yr - yl)xl - (xr - xl)yl;
the perpendicular line through the nose coordinate point can likewise be written a2x + b2y = c2, with
a2 = xr - xl, b2 = yr - yl, c2 = (xr - xl)xn + (yr - yl)yn.
The coordinates of p1 (and of p2, replacing the nose point by the mouth point) are obtained from this pair of equations using determinants in linear algebra (Cramer's rule):
x = (c1b2 - c2b1) / (a1b2 - a2b1), y = (a1c2 - a2c1) / (a1b2 - a2b1).
The distance l1 from the left-eye coordinate point to p1, and the distance l3 from p2 to the left eye, then follow from the Euclidean distance formula.
(3) selecting the ratio R3 between the areas of the left-eye and right-eye regions as the correlation factor;
Through observation it can be seen that, because of perspective transformation, when the horizontal tilt angle is 0° the left and right eye regions are the same size; when the angle is positive the left eye region is larger and the right eye region smaller, and when the angle is negative the right eye region is larger and the left eye region smaller.
The relationship between the horizontal tilt angle and R3 is shown in FIG. 4, with the horizontal tilt angle β on the y-axis and R3 on the x-axis; each point represents a sample and the line represents the fitted function. Fitting yields:
β3 = 49.1R3 - 48.9;
If β falls within the horizontal angle interval [-30°, 30°], then sβ = 1; otherwise sβ = 0.
Thirdly, the vertical tilt angle γ is calculated as follows:
(1) define l4 as the length of the perpendicular drawn from the nose to the line connecting the left and right eyes; the vertical tilt angle of the face is 0° when the head looks straight at the camera, positive when the head tilts upward, and negative when it tilts downward;
The relationship between R4 and the vertical tilt angle γ1 is shown in FIG. 5, where the x-axis and y-axis respectively represent R4 and γ1, the points represent samples, and the line represents the fitted function. The corresponding mapping function is:
γ1 = 800R4² + 395R4 - 80;
(2) define l5 as the length of the perpendicular drawn from the mouth to the line connecting the left and right eyes;
The relationship between R5 and the vertical tilt angle γ2 is shown in FIG. 6, where the x-axis represents R5, the y-axis represents γ2, the points represent samples, and the line represents the fitted curve;
γ2 = 800R5² - 141R5 - 122.5;
If γ falls within the vertical angle interval [-40°, 20°], then sγ = 1; otherwise sγ = 0.
Specific Experimental Example
Left eye region start coordinates (248, 213), length 38, width 12; left eye coordinates (267, 219).
Right eye region start coordinates (364, 234), length 48, width 16; right eye coordinates (388, 242).
Nose region start coordinates (272, 224), length 52, width 88; nose coordinates (298, 268).
Mouth region start coordinates (248, 340), length 88, width 42; mouth coordinates (292, 361).
(I) The head rotation angle α is calculated as described above.
(II) The horizontal tilt angle β is calculated as described above, giving:
p1 (248, 213); p2 (312, 227);
l1 = 39.62, l2 = 123.17, l3 = 45.71, l4 = 42.76, l5 = 135.48;
(III) The vertical tilt angle γ is calculated as described above, giving:
γ1 = 800R4² + 395R4 - 80 = 0.198°;
γ2 = 800R5² - 141R5 - 122.5 = 0.152°.
Fifty clear facial images were selected, and the corresponding head rotation angles, horizontal tilt angles, and vertical tilt angles were obtained by the method provided in this patent. Each obtained angle was compared with the manually identified angle, the difference for each sample was recorded, and the mean of the differences and the variance of their distribution were calculated.
The mean of the head rotation angle differences was 3.14° with variance 2.9204, as shown in FIG. 8.
The mean of the head horizontal tilt angle differences was 4.76° with variance 3.0824, as shown in FIG. 9.
The mean of the head vertical tilt angle differences was 5.34° with variance 12.2644, as shown in FIG. 10.
The results show that the head rotation angle has the smallest average error, only 3.14°, and the narrowest error distribution, while the vertical tilt angle has the largest average error, 5.34°, and the widest error distribution.
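The reported mean and variance can be computed as follows (a sketch; the estimator choice — population variance — and the sample values in the usage below are assumptions, since the patent's per-sample data is not given):

```python
import statistics

def error_stats(differences):
    """Mean and population variance of per-sample angle differences.

    `differences` would hold, for each of the 50 test images, the gap between
    the computed angle and the manually identified angle, in degrees.
    """
    mean = statistics.fmean(differences)
    var = statistics.pvariance(differences)
    return mean, var
```

For example, `error_stats([2.0, 4.0, 3.0, 3.0])` returns a mean of 3.0° and a variance of 0.5.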
Claims (7)
1. A peeping-type cheating behavior identification method based on an attention area, characterized in that the method comprises the following steps:
(1) before the examination, measure the standard angle interval of the examinee's head angle relative to the camera while the examinee watches the screen;
(2) during the examination, capture a facial image of the examinee with the camera every 100 ms, form the attention area from the head rotation angle α, horizontal tilt angle β, and vertical tilt angle γ of the facial image relative to the camera, and judge whether the attention area falls within the standard angle interval;
(3) with 1 s as a statistical period, consider the attention area focused if at least 1 of the corresponding 10 judgments falls within the standard angle interval, and unfocused otherwise; with 3 min as a statistical period, if the unfocused time exceeds 60 s, consider cheating suspected and display a warning message on the examinee's screen; if no 1-s period within the 10 s after the warning message is displayed is judged focused, determine that cheating has occurred;
wherein the standard angle intervals are: horizontal angle interval [-30°, 30°], vertical angle interval [-40°, 20°], and rotation angle interval [-60°, 60°].
2. The attention-area-based peeping-type cheating behavior identification method of claim 1, characterized in that the attention area is acquired as follows:
(1) normalize and grayscale the facial image acquired by the camera, recording the result as grey;
(2) perform face region identification on the normalized grayscale image to obtain the largest face region, recorded as biggestFace;
(3) identify the facial organ regions of the eyes, nose, and mouth within the largest face region, recorded respectively as leftEye, rightEye, nose, and mouth;
(4) calculate the head rotation angle α, the horizontal tilt angle β, and the vertical tilt angle γ respectively.
3. The attention-area-based peeping-type cheating behavior identification method of claim 2, wherein the head rotation angle α is calculated as follows:
(1) mark the head rotation angle α1 by the angle θ between the line connecting the left and right eyes and the horizontal direction;
(2) mark the head rotation angle α2 by the angle θ′ between the line connecting the mouth and the nose and the vertical direction.
4. The attention-area-based peeping-type cheating behavior identification method of claim 3, wherein the horizontal tilt angle β is calculated as follows:
(1) define l2 as the distance between the left and right eyes, p1 as the foot of the perpendicular drawn from the nose to the line connecting the left and right eyes, and l1 as the distance from p1 to the left eye; calculate l1 according to the Pythagorean theorem and form the ratio factor R1, giving
β1 = 72.1R1 - 35.4;
(2) define p2 as the foot of the perpendicular drawn from the mouth to the line connecting the left and right eyes, and l3 as the distance from p2 to the left eye; form the ratio factor R2, giving
β2 = 72.1R2 - 35.4;
(3) select the ratio R3 between the areas of the left-eye and right-eye regions as the correlation factor, giving
β3 = 49.1R3 - 48.9;
if β falls within the horizontal angle interval [-30°, 30°], then sβ = 1; otherwise sβ = 0.
5. The attention-area-based peeping-type cheating behavior identification method of claim 4, characterized in that the vertical tilt angle γ is calculated as follows:
(1) define l4 as the length of the perpendicular drawn from the nose to the line connecting the left and right eyes and form the ratio factor R4, giving
γ1 = 800R4² + 395R4 - 80;
(2) define l5 as the length of the perpendicular drawn from the mouth to the line connecting the left and right eyes and form the ratio factor R5, giving
γ2 = 800R5² - 141R5 - 122.5;
if γ falls within the vertical angle interval [-40°, 20°], then sγ = 1; otherwise sγ = 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911189014.1A CN110874585B (en) | 2019-11-28 | 2019-11-28 | Peeping cheating behavior identification method based on attention area |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911189014.1A CN110874585B (en) | 2019-11-28 | 2019-11-28 | Peeping cheating behavior identification method based on attention area |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110874585A true CN110874585A (en) | 2020-03-10 |
CN110874585B CN110874585B (en) | 2023-04-18 |
Family
ID=69717737
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911189014.1A Active CN110874585B (en) | 2019-11-28 | 2019-11-28 | Peeping cheating behavior identification method based on attention area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110874585B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113516074A (en) * | 2021-07-08 | 2021-10-19 | 西安邮电大学 | Online examination system anti-cheating method based on pupil tracking |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5933527A (en) * | 1995-06-22 | 1999-08-03 | Seiko Epson Corporation | Facial image processing method and apparatus |
CN102799893A (en) * | 2012-06-15 | 2012-11-28 | 北京理工大学 | Method for processing monitoring video in examination room |
CN103208212A (en) * | 2013-03-26 | 2013-07-17 | 陈秀成 | Anti-cheating remote online examination method and system |
US20140114148A1 (en) * | 2011-11-04 | 2014-04-24 | Questionmark Computing Limited | System and method for data anomaly detection process in assessments |
CN205835343U (en) * | 2016-04-27 | 2016-12-28 | 深圳前海勇艺达机器人有限公司 | A kind of robot with invigilator's function |
CN106778676A (en) * | 2016-12-31 | 2017-05-31 | 中南大学 | A kind of notice appraisal procedure based on recognition of face and image procossing |
US20170278417A1 (en) * | 2014-08-27 | 2017-09-28 | Eyessessment Technologies Ltd. | Evaluating test taking |
WO2019080295A1 (en) * | 2017-10-23 | 2019-05-02 | 上海玮舟微电子科技有限公司 | Naked-eye 3d display method and control system based on eye tracking |
Non-Patent Citations (2)
Title |
---|
熊碧辉;周后盘;黄经州;阮益权;周里程;: "An attention detection method incorporating gaze detection" * |
程文冬;付锐;袁伟;刘卓凡;张名芳;刘通;: "Image-based detection and graded warning of driver distraction" * |
Also Published As
Publication number | Publication date |
---|---|
CN110874585B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108108684B (en) | Attention detection method integrating sight detection | |
CN110837784B (en) | Examination room peeping and cheating detection system based on human head characteristics | |
TWI383325B (en) | Face expressions identification | |
TWI250469B (en) | Individual recognizing apparatus and individual recognizing method | |
EP3680794B1 (en) | Device and method for user authentication on basis of iris recognition | |
US8698914B2 (en) | Method and apparatus for recognizing a protrusion on a face | |
Batista | A drowsiness and point of attention monitoring system for driver vigilance | |
CN101593352A (en) | Driving safety monitoring system based on face orientation and visual focus | |
US8150118B2 (en) | Image recording apparatus, image recording method and image recording program stored on a computer readable medium | |
JP6906717B2 (en) | Status determination device, status determination method, and status determination program | |
JP2013513155A (en) | Cost-effective and robust system and method for eye tracking and driver awareness | |
Giannakakis et al. | Evaluation of head pose features for stress detection and classification | |
CN109101949A (en) | A kind of human face in-vivo detection method based on colour-video signal frequency-domain analysis | |
CN109711239B (en) | Visual attention detection method based on improved mixed increment dynamic Bayesian network | |
TW201140511A (en) | Drowsiness detection method | |
CN109063686A (en) | A kind of fatigue of automobile driver detection method and system | |
CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis | |
CN111460950A (en) | Cognitive distraction method based on head-eye evidence fusion in natural driving conversation behavior | |
JP4507679B2 (en) | Image recognition apparatus, image extraction apparatus, image extraction method, and program | |
CN110874585B (en) | Peeping cheating behavior identification method based on attention area | |
CN115937928A (en) | Learning state monitoring method and system based on multi-vision feature fusion | |
CN113609963B (en) | Real-time multi-human-body-angle smoking behavior detection method | |
CN111104817A (en) | Fatigue detection method based on deep learning | |
CN115937793B (en) | Student behavior abnormality detection method based on image processing | |
CN111275754B (en) | Face acne mark proportion calculation method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||