CN112883814A - Method and storage device for detecting contraband worn on both ears of examinees in an online examination - Google Patents
- Publication number
- CN112883814A (application CN202110101838.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- detected
- picture
- examinee
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The invention relates to the technical field of image processing, and in particular to a method and storage device for detecting contraband worn on the ears of an examinee in an online examination. The method comprises the following steps: acquiring a target picture through a predefined detection window; performing a preprocessing operation on the target picture; locating a target to be detected (the examinee's ear) in the preprocessed target picture and, if the target is located, extracting it with a preset rectangular frame so as to obtain a picture to be detected containing the ear and the image around it; performing a feature-enhancement operation on the picture to be detected, inputting the enhanced picture into a deep classification network, and outputting a judgment result, which in turn helps determine whether the examinee has violated the examination rules.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and storage device for detecting contraband worn on the ears of an examinee in an online examination.
Background
Online examinations build on traditional examinations with a more flexible and convenient format, which greatly benefits both examinees and invigilators: the examination is no longer tied to an examination hall, and anyone with a networked mobile device can take part remotely from any location. Online examinations help teachers supervise how well examinees have learned and mastered a subject, and students can also study independently through online learning and testing. However, because examinees' testing environments are complex and varied, online cheating methods are correspondingly more elaborate, better concealed and harder to perceive; an examinee's integrity therefore becomes a major factor in the reliability of the results, invigilation pressure on examiners rises, and considerable labor must be invested to detect possible rule violations and cheating. For example, an examinee may wear earphones or other smart devices on the ears and obtain material and answers from a helper by voice; wear an earring with a hidden camera to photograph the questions and transmit them to a helper; or cover the ears with hair so that they are difficult to inspect. Earphones, earrings and similar items thus become examination contraband in the online examination scenario, and the examinee's ears become a key target that must be checked during online invigilation.
However, because the smart devices and other contraband worn on an examinee's ears vary widely in form, are small, and may be covered by hair, they are difficult to perceive under purely manual online invigilation without intelligent detection.
Disclosure of Invention
Therefore, a method is needed for detecting contraband worn on the ears of examinees in online examinations, to solve the problem that ear-worn cheating devices cannot be detected intelligently during an online examination. The specific technical scheme is as follows:
A method for detecting contraband worn on both ears of an examinee in an online examination, comprising the steps of:
acquiring a target picture through a predefined detection window;
preprocessing the target picture;
locating a target to be detected in the target picture after the preprocessing operation and, if the target to be detected is located, extracting a picture to be detected with a preset rectangular frame, wherein the target to be detected comprises: an ear;
performing a feature-enhancement operation on the picture to be detected extracted by the preset rectangular frame, inputting the enhanced picture into a deep classification network, and outputting a judgment result.
Further, the preprocessing operation comprises one or more of: median filtering, histogram equalization and gray-level stretching;
the feature-enhancement operation comprises one or more of: edge detection and skin-color detection.
Further, after locating the target to be detected in the preprocessed target picture, the method comprises the following step: if the target to be detected is not located, sending a prompt asking the examinee to take the picture again.
Further, the method also comprises the following step: if the judgment result is abnormal, triggering a manual review function so that the picture to be detected receives a secondary judgment.
To solve the above technical problem, a storage device is also provided. The specific technical scheme is as follows:
A storage device having stored therein a set of instructions for performing the steps of:
acquiring a target picture through a predefined detection window;
preprocessing the target picture;
locating a target to be detected in the target picture after the preprocessing operation and, if the target to be detected is located, extracting a picture to be detected with a preset rectangular frame, wherein the target to be detected comprises: an ear;
performing a feature-enhancement operation on the picture to be detected extracted by the preset rectangular frame, inputting the enhanced picture into a deep classification network, and outputting a judgment result.
Further, the set of instructions is also used to perform the following: the preprocessing operation comprises one or more of: median filtering, histogram equalization and gray-level stretching;
the feature-enhancement operation comprises one or more of: edge detection and skin-color detection.
Further, the set of instructions is also used to perform the following: after locating the target to be detected in the preprocessed target picture, if the target to be detected is not located, sending a prompt asking the examinee to take the picture again.
Further, the set of instructions is also used to perform the following: if the judgment result is abnormal, triggering a manual review function so that the picture to be detected receives a secondary judgment.
The beneficial effects of the invention are as follows: a target picture is acquired through a predefined detection window; a preprocessing operation is performed on the target picture; a target to be detected (an ear) is located in the preprocessed target picture and, if located, extracted with a preset rectangular frame so as to obtain a picture to be detected containing the ear and the image around it; a feature-enhancement operation is performed on the picture to be detected, the enhanced picture is input into a deep classification network, and a judgment result is output, which in turn helps determine whether the examinee has violated the examination rules.
Drawings
Fig. 1 is a flowchart of a method for detecting contraband worn on both ears of an examinee in an online examination according to an embodiment;
fig. 2 is a schematic application-flow diagram of a method for detecting contraband worn on both ears of an examinee in an online examination according to an embodiment;
fig. 3 is a schematic block diagram of a storage device according to an embodiment.
Description of reference numerals:
300. a storage device.
Detailed Description
To explain the technical content, structural features, objects and effects of the technical scheme in detail, the following detailed description is given with reference to the accompanying drawings and the embodiments.
Referring to fig. 1 to 2, in the present embodiment, the method for detecting contraband worn on both ears of an examinee in an online examination can be applied to a storage device, which includes but is not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, and the like.
In a practical application scenario, the method is applied during the online examination process, after the examinee's identity authentication and before the examination starts. The aim is to perform preventive cheating detection before the examination, without an examiner invigilating in real time, by intelligently analyzing whether the examinee's ears show any abnormal condition that violates the examination regulations. When the detection result is abnormal, the result is submitted to the examiner's terminal; this prevents and, to a certain extent, eliminates the possibility of the examinee cheating during the examination, and reduces the burden on the examiner of watching the examination picture in real time. The method features reduced manual labor, resource optimization, high running speed, high precision, light load and real-time detection.
The specific implementation mode is as follows:
step S101: and acquiring a target picture through a predefined detection window. The method specifically comprises the following steps: the system defines a detection window area in a detection interface, examinees show double ears to the camera in cooperation with system instructions, meanwhile, the remaining light of the examinees needs to pay attention to whether the double ears in a picture captured by the camera appear in the detection window, and the module only intercepts images in a predefined detection window, so that the interference of the complex background picture where the examinees are located is avoided.
In the present embodiment, the state of the examinee's ears is classified into 3 cases: normal, abnormal, and no target detected. In the normal case, the examinee fully exposes the ears, the ears are bare and free of foreign objects, and no earphones, earrings or other examination contraband are worn; the opposite case is defined as abnormal; and when the examinee's ear is not detected in the picture, the case is defined as no target detected.
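The three-state result and the response policy described later in the embodiment (re-shoot when no target is found, manual review when abnormal, finish when normal) can be expressed as a small state table. The names and action strings below are illustrative, not taken from the patent.

```python
from enum import Enum

class EarStatus(Enum):
    NORMAL = "normal"        # ears fully exposed, bare, no contraband worn
    ABNORMAL = "abnormal"    # earphones, earrings, hair occlusion, etc.
    NO_TARGET = "no_target"  # no ear located inside the detection window

def next_action(status: EarStatus) -> str:
    """Map each detection outcome to the response described in the
    embodiment; the action strings are illustrative."""
    return {
        EarStatus.NO_TARGET: "prompt re-shoot",
        EarStatus.ABNORMAL: "manual secondary review",
        EarStatus.NORMAL: "finish detection",
    }[status]
```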
Step S102: performing a preprocessing operation on the target picture. The preprocessing operation comprises one or more of: median filtering, histogram equalization and gray-level stretching. Specifically: the cameras of the devices examinees use for shooting differ in resolution and performance, and the examination environment introduces varying degrees of illumination change and noise into the picture. To reduce these adverse effects, the target picture captured from the video needs to be preprocessed; median filtering, histogram equalization and gray-level stretching are commonly used. The processing order is: first apply median filtering to remove noise from the image; then use histogram equalization to reduce the influence of lighting changes and noise on the image captured in step S101; and finally use gray-level stretching to expand the image histogram to fill the whole gray-level range, thereby improving the captured image. After preprocessing, the contrast of the captured image is enhanced and the target in the image has a clearer outline, which facilitates the subsequent deep classification network's feature extraction and analysis of the ear contour.
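A minimal pure-NumPy sketch of the three preprocessing operations in the order described (median filtering, then histogram equalization, then gray-level stretching). A real system would typically use a library such as OpenCV; the 3x3 median kernel size is an assumption.

```python
import numpy as np

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter via edge-padded shifted views (denoising)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0).astype(np.uint8)

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Histogram equalization: spread the cumulative histogram over 0..255."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip((cdf - cdf_min) * 255.0 / max(cdf[-1] - cdf_min, 1), 0, 255)
    return lut.astype(np.uint8)[img]

def gray_stretch(img: np.ndarray) -> np.ndarray:
    """Linear gray-level stretch so the histogram fills the full range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return img.copy()
    return ((img.astype(np.float32) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def preprocess(img: np.ndarray) -> np.ndarray:
    # Order follows the description: denoise, then equalize, then stretch.
    return gray_stretch(equalize_hist(median_filter3(img)))
```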
Step S103: locating the target to be detected in the preprocessed target picture and, if the target to be detected is located, extracting the picture to be detected with a preset rectangular frame, wherein the target to be detected comprises: an ear.
Step S104: performing a feature-enhancement operation on the picture to be detected extracted by the preset rectangular frame.
Step S105: inputting the picture to be detected after the feature-enhancement operation into the deep classification network, and outputting the judgment result. Steps S103 to S105 may specifically proceed as follows:
the method comprises the steps of collecting a large number of image data sets containing human ears by combining the characteristics of ear types, biological structures and the like, operating the data sets, framing target ears contained in the images by using a rectangular frame, training the image data sets with marks on the basis of an object recognition and positioning algorithm of an end-to-end deep neural network, solving object detection as a regression problem, finishing the input from an original image to the output of the position, the category and the corresponding confidence probability of the object, and enabling the network to have the capability of analyzing, recognizing and positioning the characteristics of the target object. When the original image is subjected to target detection, when the detected target prediction confidence coefficient is larger than a set threshold value, the target is judged to exist in the image, and a target ear is extracted by using a rectangular frame. For the extracted target ear, the edge detection and skin color detection technology is utilized to obtain the contour information on the target ear, the contrast between the contour information and the background is enhanced by enhancing the lines on the target contour, the ear contour is clearer, the subsequent depth classifier is favorable for judging whether the target object has abnormal conditions, the judgment is obtained according to the characteristic expressions reflected on the ears under different conditions, the classification network outputs one of the two conditions including 'normal' and 'abnormal', and the detection result is output.
After locating the target to be detected in the preprocessed target picture, the method further comprises the step of: if the target to be detected is not located, sending a prompt asking the examinee to take the picture again.
Further, the method also comprises the step of: if the judgment result is abnormal, triggering the manual review function so that the picture to be detected receives a secondary judgment. Specifically: when the result is abnormal, the manual review function is triggered to guide the examiner to make a secondary judgment on the abnormal condition flagged by the system. If the examiner also judges the condition abnormal, the examinee is required to correct it; if the examiner judges it normal, the module outputs the detection result and detection ends. If the system's judgment result is normal, that is, there is no abnormality at the examinee's ears and thus no suspicion of cheating, the module outputs the detection result and detection ends.
In summary: a target picture is acquired through a predefined detection window; a preprocessing operation is performed on the target picture; a target to be detected (an ear) is located in the preprocessed target picture and, if located, extracted with a preset rectangular frame so as to obtain a picture to be detected containing the ear and the image around it; a feature-enhancement operation is performed on the picture to be detected, the enhanced picture is input into a deep classification network, and a judgment result is output, which in turn helps determine whether the examinee has violated the examination rules.
The intelligent detection part requires no manual participation throughout, reduces labor intensity, and features resource optimization, high running speed, high precision, light load and real-time detection. In the online examination scenario, the algorithm is positioned after the examinee's identity authentication and before the examination opens; intelligent detection is carried out without an examiner present, which reduces the examiner's invigilation pressure, assists the analysis of cheating, and at the same time effectively deters some cheating attempts before the examination, serving as a warning.
Based on the structural and biological characteristics of the ear, and combining the ear's contour curve and skin color, the method considers that different ways of wearing earphones or earrings mean the item may not lie entirely on the ear. The rectangular frame used to segment and extract the target object from the detection picture therefore includes some background around the target; this eliminates background interference outside the ear as far as possible while improving the representation available to the subsequent feature-analysis deep classification network.
In contrast to the usual ear-recognition task performed under interference conditions (earphone, earring and hair occlusion), the method instead treats the interference itself as what is to be detected: the condition of the target ear is classified by the feature-analysis deep classification network, and there is no need to locate where on the ear the abnormality occurs; it is only necessary to judge that an abnormal condition exists.
Referring to fig. 2, a more specific implementation of the above scheme can be as follows:
step 1, the system sends out an instruction through the picture extraction module, and instructs an examinee to act according to the instruction sent out by the system and demo (animation, video or legend).
The system instructions instruct the test taker to act, to be guided by voice, to demonstrate to the test taker in animation, video or legend.
Step 2: the system triggers a timer, and the examinee must perform the actions instructed by the system within the specified time, ensuring that the ears appear inside the designated area of the monitoring picture.
Following the system prompt, the examinee turns the head in front of the system camera to show the left and right ears in turn, ensures that the ears appear in the system's detection area, and holds the pose for several seconds while waiting for the system to capture the screen.
Step 3: when the system timer expires, the system triggers the screen-capture function in the picture extraction module and captures the picture inside the detection area.
Step 4: for the captured picture, the system triggers the picture preprocessing function in the picture extraction module, applies preprocessing operations including median filtering, histogram equalization and gray-level stretching, and outputs the examinee's side-face image to be detected as the output of the picture extraction module.
Step 5: the picture to be detected output by the picture extraction module serves as the input of the intelligent detection and abnormality analysis module. The system triggers the detection and localization function in this module, which detects whether the target object exists in the picture and locates it. When no target object is detected in the picture, the module directly outputs the result "target object not detected". If the target object is detected, the function returns the coordinates of the target's center point and the length and width of the rectangular box containing the target. Considering that a worn item sits on the ear or may partly extend beyond it, a rectangular box is used that fully encloses the ear and retains a certain surrounding range; the length and width returned by the detection function are therefore expanded appropriately, the target is segmented from the detection picture using the center-point coordinates and the expanded segmentation rectangle, and the feature expression of the target ear is enhanced using techniques such as edge detection and skin-color detection.
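The box-expansion logic of step 5 can be sketched as follows. The scale factor and the clip-to-image behavior are assumptions, since the patent only says the returned length and width are expanded appropriately.

```python
def expand_box(cx: float, cy: float, w: float, h: float,
               scale: float, img_w: int, img_h: int):
    """Expand the detector's box around its center point so the crop
    keeps some context around the ear (a worn item may extend past the
    ear itself), then clip the result to the image bounds."""
    ew, eh = w * scale, h * scale
    x1 = max(int(cx - ew / 2), 0)
    y1 = max(int(cy - eh / 2), 0)
    x2 = min(int(cx + ew / 2), img_w)
    y2 = min(int(cy + eh / 2), img_h)
    return x1, y1, x2, y2
```

The cropped region would then be `picture[y1:y2, x1:x2]`, passed on to the enhancement and classification stages.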
Step 6: the system obtains the further-extracted target object through the detection, localization and target-extraction functions of step 5. A deep classification network (such as SqueezeNet or VGG), trained through its convolution and pooling layers, learns deep salient features of the picture, has good image-feature-extraction performance, and gives good classification results on ear features. The network extracts and analyzes the features of the input segmented target in combination with ear type, biological structure and other characteristics, analyzes and learns the interference that different contraband items cause to the ear contour lines, judges whether the examinee's ears are abnormal, classifies the judgment, and outputs one of the 2 results, "normal" or "abnormal", as the output of the intelligent detection and abnormality analysis module. Experiments with training a deep classification network such as VGG show that, in a deep network structure, training the first four convolutional layers plus the final fully-connected layers can reach the same classification performance as the full network. The SqueezeNet network proposes a new building block, the Fire module, and compresses the model by reducing parameters: with about 2.14% of the parameters of AlexNet at equal performance, it achieves AlexNet-like results on the ImageNet data set with 50 times fewer parameters, and combined with model-compression techniques its model file can be 510 times smaller than AlexNet's. While preserving deep learning capability, this simplifies the network structure, accelerates judgment, and further shortens detection time.
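The parameter-reduction argument behind the Fire module can be checked with simple arithmetic: a Fire module replaces one 3x3 convolution with a 1x1 "squeeze" layer followed by parallel 1x1 and 3x3 "expand" layers. The layer sizes below follow the fire2 configuration as reported in the SqueezeNet paper (96 input channels, squeeze to 16, expand to 64 + 64); treat them as an illustration rather than the patent's own numbers.

```python
def fire_params(c_in: int, s: int, e1: int, e3: int) -> int:
    """Parameter count of a Fire module: a 1x1 squeeze convolution with
    s filters feeding parallel 1x1 (e1 filters) and 3x3 (e3 filters)
    expand convolutions. Biases included."""
    squeeze = c_in * s + s
    expand = (s * e1 + e1) + (9 * s * e3 + e3)
    return squeeze + expand

def conv3x3_params(c_in: int, c_out: int) -> int:
    """Parameter count of a plain 3x3 convolution with bias."""
    return 9 * c_in * c_out + c_out

# fire2: 96 input channels -> squeeze 16 -> expand 64 + 64 = 128 channels
fire = fire_params(96, 16, 64, 64)   # 11,920 parameters
plain = conv3x3_params(96, 128)      # 110,720 parameters for the same output width
```

Roughly a 9x reduction for this one layer, which is the mechanism behind the overall model-size claims quoted above.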
Step 7: the detection result output by the intelligent detection and abnormality analysis module serves as the input of the response operation module. When the input result is "target not detected", that is, the system determines that the examinee's ears do not appear in the picture, the examinee is prompted to repeat the detection, the system state is reset to the picture extraction module, and the flow jumps back to step 1.
And 8, when the input result of the response-operation module is 'abnormal', the module triggers the manual-detection function and guides the examiner to make a second judgment on the abnormality reported by the system. If the examiner finds no suspicion, the flow jumps to step 9 and the module outputs that detection is finished; if the examiner confirms the abnormality, e.g. the examinee is wearing examination contraband, the examinee is warned to make corrections until the examiner judges the case free of suspicion.
And 9, when the input result of the response-operation module is 'normal', i.e. the examinee's ears show no abnormal condition and there is no suspicion of cheating, the module outputs that detection is finished.
Referring to fig. 2 to fig. 3, an embodiment of a storage device 300 in the present embodiment is as follows:
a storage device 300 having stored therein a set of instructions for performing:
and acquiring a target picture through a predefined detection window. Specifically: the system defines a detection-window area in the detection interface; the examinee, following the system instructions, shows both ears to the camera while watching out of the corner of the eye whether the ears in the picture captured by the camera appear inside the detection window. The module intercepts only the image inside the predefined detection window, avoiding interference from the complex background surrounding the examinee.
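Restricting processing to the predefined detection window amounts to a simple crop of the captured frame. A minimal sketch, assuming the frame is a nested list of pixels and the window is given as an (x, y, width, height) rectangle in pixel coordinates (these names are illustrative, not from the patent):

```python
def crop_to_window(frame, window):
    """Keep only the pixels inside the predefined detection window.

    frame  : image as a list of rows (each row a list of pixels)
    window : (x, y, w, h) rectangle in pixel coordinates
    """
    x, y, w, h = window
    return [row[x:x + w] for row in frame[y:y + h]]

# 4x6 dummy frame whose pixels encode their own (row, col) position
frame = [[(r, c) for c in range(6)] for r in range(4)]
patch = crop_to_window(frame, (2, 1, 3, 2))
# patch now covers rows 1..2, columns 2..4 of the original frame
```

Everything outside the window, i.e. the examinee's background, never reaches the later detection stages.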
In the present embodiment, the examinee's binaural state is defined as one of 3 cases: normal, abnormal, and target not detected. In the normal case the examinee fully exposes both ears, the ears are bare and free of foreign objects, and no earphones, earrings or other articles prohibited in the examination are worn; the opposite case is defined as abnormal; when the examinee's ears are not detected in the picture, the case is defined as target not detected.
And carrying out a preprocessing operation on the target picture. The preprocessing operation comprises one or more of: median filtering, histogram equalization and gray-level stretching.
Positioning a target to be detected on a target picture after the preprocessing operation, and if the target to be detected is positioned, extracting the picture to be detected through a preset rectangular frame, wherein the target to be detected comprises: an ear;
and performing a feature enhancement operation on the picture to be detected extracted with the preset rectangular frame, inputting the enhanced picture to a deep classification network, and outputting a judgment result. Specifically:
a large image data set containing human ears is collected in view of characteristics such as ear type and biological structure; the target ears contained in the images are framed with rectangular boxes, and the annotated data set is used to train an end-to-end deep-neural-network object recognition and positioning algorithm. The algorithm treats object detection as a regression problem, mapping the original input image directly to the object's position, category and corresponding confidence probability, so that the network gains the ability to analyse, recognise and locate the target-object features. During target detection on an original image, when the predicted confidence of a detected target exceeds a set threshold, the target is judged to be present in the image and the target ear is extracted with a rectangular frame. For the extracted target ear, edge detection and skin-colour detection are used to obtain the contour information; strengthening the lines on the target contour enhances its contrast against the background and makes the ear contour clearer, which helps the subsequent deep classifier decide whether the target object shows an abnormal condition. The judgment is obtained from the feature expressions the ears show under the different conditions; the classification network outputs one of the two cases, 'normal' or 'abnormal', as the detection result.
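The confidence-threshold step described above can be sketched as a small filter over the detector's raw box predictions. The detection tuples and the 0.5 threshold below are illustrative assumptions, not values specified in the patent:

```python
def keep_confident(detections, threshold=0.5):
    """Keep detections whose predicted confidence exceeds the threshold.

    detections: list of (cx, cy, w, h, confidence) box predictions,
    center-format, as returned by a regression-style detector.
    """
    return [d for d in detections if d[4] > threshold]

raw = [(120, 80, 40, 60, 0.92),   # strong ear candidate -> kept
       (300, 50, 30, 30, 0.18)]   # weak prediction      -> discarded
ears = keep_confident(raw)
```

Only the surviving boxes are handed to the extraction and feature-enhancement stages.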
Further, the set of instructions is further for performing: after locating the target to be detected on the preprocessed target picture, the method further comprises: if the target to be detected is not located, sending a prompt informing the examinee to shoot the picture again.
Further, the set of instructions is further for performing: if the judgment result is abnormal, triggering the manual-detection function and making a second judgment on the picture to be detected. Specifically: when the result is abnormal, the manual-detection function is triggered to guide the examiner to make a second judgment on the abnormality reported by the system. If the examiner confirms the abnormal condition, the examinee is required to correct it; if the examiner judges it normal, the module outputs that detection is finished. If the original judgment result is normal, i.e. there is no abnormality at the examinee's ears and no suspicion of cheating, the module outputs the detection result and detection ends.
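The response logic in the paragraphs above is a three-way dispatch on the detection result. A minimal sketch of that control flow (the three state names follow the text; the function name and action strings are illustrative):

```python
def respond(result):
    """Map a detection result to the response module's next action."""
    if result == "no target":
        return "prompt re-shoot, reset to picture extraction"
    if result == "abnormal":
        return "trigger manual detection for examiner review"
    if result == "normal":
        return "finish detection"
    raise ValueError(f"unknown detection result: {result!r}")

for r in ("no target", "abnormal", "normal"):
    print(r, "->", respond(r))
```

Keeping the dispatch total (with an explicit error for unknown states) makes it easy to verify that every classifier output has a defined response.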
Acquiring a target picture through a predefined detection window; preprocessing the target picture; locating the target to be detected on the preprocessed target picture and, if it is located, extracting it through a preset rectangular frame, the target to be detected comprising an ear, so that a picture to be detected containing the ear and the images around it is obtained; performing a feature enhancement operation on the picture to be detected, inputting the enhanced picture to the deep classification network, and outputting a judgment result, from which it can further be determined whether the examinee violates the rules.
The intelligent detection part needs no manual participation in the whole process, reduces labour intensity, and has the characteristics of resource optimization, high running speed, high precision, small load and real-time detection. In an online-examination scenario the algorithm is positioned after the examinee's identity authentication and before the examination opens: it performs intelligent detection without an examiner present, reduces the examiner's invigilation pressure, assists cheating analysis, and at the same time effectively deters part of the cheating attempts before the examination, playing a warning role.
According to the structural and biological characteristics of the ear, the contour curve and skin colour of the ear are combined; and because different ways of wearing earphones or earrings mean the abnormality cannot always be detected on the ear itself, the rectangular frame segments and extracts from the picture the target object together with the background at its periphery. This eliminates background interference beyond the ear to the greatest extent and improves the representation performance of the subsequent feature-analysis deep classification network.
Compared with the usual ear-recognition task carried out under interference conditions (earphones, earrings, hair occlusion), the method works in the opposite direction: it detects the abnormal interference itself during ear detection. It classifies the condition of the target ear through the feature-analysis deep classification network, and only needs to judge that an abnormal condition exists, without locating where on the ear the abnormality occurs.
Step 1, the system issues an instruction through the picture-extraction module, instructing the examinee to act according to the instruction and an accompanying demo (animation, video or legend).
The system instruction guides the examinee by voice and demonstrates the required action with an animation, video or legend.
And 2, the system triggers a timer; the examinee must perform the corresponding actions instructed by the system within the specified time, ensuring that the ears appear in the limited area of the monitoring picture.
Following the system prompt, the examinee turns the head in front of the system camera to show the left and right ears in turn, ensuring both ears appear in the system's detection-picture area and holding the pose for several seconds while the system captures the screen.
And 3, when the system timer finishes, the system triggers the screen-capture function in the picture-extraction module and captures the picture in the detection area.
And 4, for the captured picture, the system triggers the picture-preprocessing function in the picture-extraction module, performs preprocessing operations including median filtering, histogram equalization and gray-level stretching, and outputs the examinee's side-face image to be detected as the output of the picture-extraction module.
And 5, the picture to be detected output by the picture-extraction module is the input of the intelligent detection and abnormality-analysis module. The system triggers the module's detection-positioning function, which checks whether a target object is present in the picture and locates it. When no target object is detected, the module directly outputs 'target object not detected'. If a target object is present, the function returns the coordinates of the object's centre point and the length and width of the rectangular frame containing the target. Considering that a worn article sits on, or partly beyond, the ear, a rectangular frame is used that can completely encompass the ear while retaining a certain range around it: the length and width returned by the detection-positioning function are suitably enlarged, the target is segmented from the detection picture using the centre-point coordinates and the enlarged segmentation rectangle, and the feature expression of the target ear is strengthened with techniques such as edge detection and skin-colour detection.
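The box enlargement of step 5 — growing the detector's rectangle so the crop keeps some context around the ear — can be sketched as follows. The 20% margin and the clamping to the image bounds are illustrative assumptions, not values from the patent:

```python
def expand_box(cx, cy, w, h, img_w, img_h, margin=0.2):
    """Grow a center-format box by `margin` per side, clamped to the image.

    Returns (x0, y0, x1, y1) pixel corners of the enlarged crop.
    """
    half_w = w * (1 + margin) / 2
    half_h = h * (1 + margin) / 2
    x0 = max(0, int(cx - half_w))
    y0 = max(0, int(cy - half_h))
    x1 = min(img_w, int(cx + half_w))
    y1 = min(img_h, int(cy + half_h))
    return x0, y0, x1, y1

# a 40x60 ear box centred at (120, 80) inside a 640x480 frame
print(expand_box(120, 80, 40, 60, 640, 480))
```

Clamping matters when the ear sits near the frame edge, where the naive enlarged rectangle would otherwise leave the image.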
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Therefore, based on the innovative concepts of the present invention, the technical solutions of the present invention can be directly or indirectly applied to other related technical fields by making changes and modifications to the embodiments described herein, or by using equivalent structures or equivalent processes performed in the content of the present specification and the attached drawings, which are included in the scope of the present invention.
Claims (8)
1. A method for detecting contraband worn by ears of an examinee in an online examination is characterized by comprising the following steps:
acquiring a target picture through a predefined detection window;
preprocessing the target picture;
positioning a target to be detected on a target picture after the preprocessing operation, and if the target to be detected is positioned, extracting the picture to be detected through a preset rectangular frame, wherein the target to be detected comprises: an ear;
and performing feature enhancement operation on the to-be-detected picture extracted from the preset rectangular frame, inputting the to-be-detected picture subjected to the feature enhancement operation to a deep classification network, and outputting a judgment result.
2. The method of claim 1, wherein the preprocessing comprises one or more of: median filtering, histogram equalization and gray level stretching;
the strengthening operation includes one or more of: edge detection, skin color detection.
3. The method for detecting contraband worn by the ears of the examinee in the online examination as claimed in claim 1, wherein after the target image after the preprocessing operation is located by the target to be detected, the method further comprises the following steps: and if the target to be detected is not positioned, sending a prompt for informing the examinee to shoot the picture again.
4. The method for detecting contraband worn by the ears of the examinee in the online examination as claimed in claim 1, further comprising the steps of:
if the judgment result is abnormal, triggering the manual detection function, and carrying out secondary judgment on the picture to be detected.
5. A storage device having a set of instructions stored therein, the set of instructions being operable to perform:
acquiring a target picture through a predefined detection window;
preprocessing the target picture;
positioning a target to be detected on a target picture after the preprocessing operation, and if the target to be detected is positioned, extracting the picture to be detected through a preset rectangular frame, wherein the target to be detected comprises: an ear;
and performing feature enhancement operation on the to-be-detected picture extracted from the preset rectangular frame, inputting the to-be-detected picture subjected to the feature enhancement operation to a deep classification network, and outputting a judgment result.
6. The storage device of claim 5, wherein the set of instructions is further configured to perform: the pre-processing operation comprises one or more of: median filtering, histogram equalization and gray level stretching;
the strengthening operation includes one or more of: edge detection, skin color detection.
7. The storage device of claim 5, wherein the set of instructions is further configured to perform: after the target picture after the preprocessing operation is positioned by the target to be detected, the method also comprises the following steps: and if the target to be detected is not positioned, sending a prompt for informing the examinee to shoot the picture again.
8. The storage device of claim 5, wherein the set of instructions is further configured to perform: if the judgment result is abnormal, triggering the manual detection function, and carrying out secondary judgment on the picture to be detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110101838.XA CN112883814A (en) | 2021-01-26 | 2021-01-26 | Method and storage device for detecting contraband worn by double ears of examinees in online examination |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112883814A true CN112883814A (en) | 2021-06-01 |
Family
ID=76051864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110101838.XA Pending CN112883814A (en) | 2021-01-26 | 2021-01-26 | Method and storage device for detecting contraband worn by double ears of examinees in online examination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112883814A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103366506A (en) * | 2013-06-27 | 2013-10-23 | 北京理工大学 | Device and method for automatically monitoring telephone call behavior of driver when driving |
CN105787442A (en) * | 2016-02-19 | 2016-07-20 | 电子科技大学 | Visual interaction based wearable auxiliary system for people with visual impairment, and application method thereof |
CN107077796A (en) * | 2016-03-11 | 2017-08-18 | 深圳市大疆创新科技有限公司 | Method, system and the equipment of the anti-cheating of storage medium, network test |
US20190294769A1 (en) * | 2018-03-21 | 2019-09-26 | Cirrus Logic International Semiconductor Ltd. | Ear proximity detection |
CN111222493A (en) * | 2020-01-20 | 2020-06-02 | 北京捷通华声科技股份有限公司 | Video processing method and device |
CN112087603A (en) * | 2020-08-31 | 2020-12-15 | 浙江蓝鸽科技有限公司 | Intelligent examination room supervision method |
CN112149511A (en) * | 2020-08-27 | 2020-12-29 | 深圳市点创科技有限公司 | Method, terminal and device for detecting violation of driver based on neural network |
2021-01-26: CN application CN202110101838.XA filed; publication CN112883814A (en); legal status: active, pending
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 