CN114639175A - Method, device, equipment and storage medium for predicting examination cheating behaviors - Google Patents
- Publication number
- CN114639175A CN114639175A CN202210292955.3A CN202210292955A CN114639175A CN 114639175 A CN114639175 A CN 114639175A CN 202210292955 A CN202210292955 A CN 202210292955A CN 114639175 A CN114639175 A CN 114639175A
- Authority
- CN
- China
- Prior art keywords
- sequence
- frame
- cheating
- examination
- monitoring video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
The invention relates to the technical field of artificial intelligence, and discloses a method, a device, equipment and a storage medium for predicting examination cheating behaviors. The method comprises the following steps: acquiring a front monitoring video of a target examinee; counting the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence; analyzing the human body posture in each frame of image of the front monitoring video to obtain a single-frame human body posture sequence; performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set; acquiring an answer behavior sequence corresponding to the front monitoring video; and predicting cheating behavior according to the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result. Because the prediction draws on several kinds of information extracted from the front monitoring video, the information used for cheating behavior prediction is more comprehensive, and the accuracy of examination cheating prediction is improved.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a storage medium for predicting examination cheating behaviors.
Background
With the rapid development of the internet, paperless examinations taken over the internet have become increasingly common. As electronic technology advances, cheating methods have also diversified, making examination cheating behaviors harder to identify. Moreover, cost constraints make it impossible to assign one invigilator to every examinee, so examination cheating cannot be identified comprehensively.
Disclosure of Invention
The invention mainly aims to provide a method, a device, equipment and a storage medium for predicting examination cheating behaviors, and aims to solve the technical problem that the prior art cannot comprehensively identify the examination cheating behaviors.
In order to achieve the above object, the present invention provides a method for predicting examination cheating behaviors, the method including:
acquiring a front monitoring video of a target examinee;
judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence;
analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence;
respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set;
acquiring an answer behavior sequence corresponding to the front monitoring video;
and carrying out cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
Further, the step of respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set includes:
based on an ASR technology, performing text conversion on the audio in the front monitoring video to obtain a text to be analyzed;
performing word segmentation on the text to be analyzed to obtain a phrase set;
acquiring an examination keyword set corresponding to the front monitoring video from a preset examination keyword library to serve as a target examination keyword set;
performing examination keyword matching on each phrase in the phrase set in the target examination keyword set to obtain a matching result;
and if the matching result is successful, taking each phrase corresponding to the successful matching result as the audio test keyword set.
Further, the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result includes:
inputting the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence into a preset cheating behavior prediction model to predict cheating behaviors, and obtaining a cheating behavior prediction result;
the cheating behavior prediction model is a model obtained based on a Bert model and a classification prediction layer.
Further, after the step of obtaining the front monitoring video of the target examinee, the method further includes:
extracting a face image of each frame of image from the front monitoring video to obtain a face image set corresponding to each frame of image;
extracting the face image with the largest size from the face image set to obtain a face image to be analyzed;
extracting actual information of the auricle shape of the face image to be analyzed;
comparing the actual information of the auricle shape with preset standard information of the auricle shape to obtain a comparison result of the auricle shape;
generating a pinna shape comparison result sequence according to each pinna shape comparison result;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
and carrying out cheating behavior prediction according to the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
Further, after the step of extracting the face image with the largest size from the face image set to obtain the face image to be analyzed, the method further includes:
acquiring an admission card image of the target examinee;
extracting face difference information of the face image to be analyzed and the face image of the admission card image to obtain single-frame face difference information;
generating a single-frame human face difference information sequence according to each single-frame human face difference information;
the step of predicting cheating behaviors according to the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps of:
and carrying out cheating behavior prediction according to the single-frame human face difference information sequence, the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
Further, after the step of obtaining the front monitoring video of the target examinee, the method further includes:
according to the audio frequency in the front monitoring video, carrying out volume identification on the generation time point of each frame of image of the front monitoring video to obtain the volume of a single frame;
comparing the single-frame volume with a preset volume threshold value to obtain a single-frame volume comparison result;
generating a single-frame volume comparison result sequence according to each single-frame volume comparison result;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
and carrying out cheating behavior prediction according to the single-frame volume comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
Further, after the step of obtaining the front monitoring video of the target examinee, the method further includes:
obtaining a examination room monitoring video corresponding to the front monitoring video;
according to the examination room monitoring video, performing desktop viewing behavior analysis on the target examinee to obtain desktop behavior data;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
and carrying out cheating behavior prediction according to the desktop behavior data, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
The invention also provides a device for predicting the cheating behaviors in the examination, which comprises the following components:
the data acquisition module is used for acquiring a front monitoring video of the target examinee;
the single-frame people number sequence determining module is used for judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence;
the single-frame human body posture sequence determining module is used for analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence;
the audio examination keyword set determining module is used for respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set;
the answer behavior sequence acquisition module is used for acquiring an answer behavior sequence corresponding to the front monitoring video;
and the cheating behavior prediction result determining module is used for carrying out cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
The invention also proposes a computer device comprising a memory storing a computer program and a processor implementing the method of any one of the above when the processor executes the computer program.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements any of the methods described above.
According to the method, the device, the equipment and the storage medium for predicting the cheating behaviors in the examination, the front monitoring video of the target examinee is obtained; judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence; analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence; respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set; acquiring an answer behavior sequence corresponding to the front monitoring video; and carrying out cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result. The cheating behavior prediction is carried out through the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence extracted from the front monitoring video, so that the cheating behavior prediction based on various information is realized, the comprehensiveness of the information for the cheating behavior prediction is increased, and the accuracy of the cheating behavior prediction of the test is improved.
Drawings
Fig. 1 is a flowchart illustrating a method for predicting cheating in an examination according to an embodiment of the present invention;
fig. 2 is a block diagram schematically illustrating the structure of an examination cheating act prediction apparatus according to an embodiment of the present invention;
fig. 3 is a block diagram schematically illustrating a structure of a computer apparatus according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, an embodiment of the present invention provides a method for predicting examination cheating behaviors, where the method includes:
s1: acquiring a front monitoring video of a target examinee;
s2: judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence;
s3: analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence;
s4: respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set;
s5: acquiring an answer behavior sequence corresponding to the front monitoring video;
s6: and carrying out cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to the embodiment, the cheating behavior prediction is performed through the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence extracted from the front monitoring video, so that the cheating behavior prediction based on various information is realized, the comprehensiveness of the information used for the cheating behavior prediction is increased, and the accuracy of the cheating behavior prediction in the examination is improved.
For S1, the front monitoring video of the target test taker input by the user may be obtained, the front monitoring video of the target test taker may also be obtained from the database, and the front monitoring video of the target test taker may also be obtained from a third-party application.
The front monitoring video can be a complete video of an examination or a section of video of the examination. The front monitoring video is a video shot by facing the face of the examinee.
It can be understood that the front monitoring video may also be the video of a single monitoring period, sent by the target examinee's terminal according to a preset monitoring period.
For S2, performing target detection on each frame of image in the front monitoring video by adopting a preset target detection model to obtain a target image set of each frame of image; carrying out human body classification prediction on each target image in the target image set to obtain a human body classification prediction result; carrying out number statistics on all the human body classification prediction results corresponding to the target image set to obtain the number of single frames corresponding to the target image set; and sequencing the number of the single-frame people according to time to form the single-frame people number sequence.
The target detection model is a model obtained based on neural network training. The target detection model is used for carrying out target detection and image segmentation corresponding to a target on the image.
The human body classification prediction result takes one of two values: yes or no. A result of yes means the corresponding target image is a human body image; a result of no means it is not.
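The people-counting step S2 can be sketched as follows. This is a minimal sketch, not the patent's neural-network pipeline: `detect_persons` is a hypothetical stand-in for the target-detection model plus human/non-human classification, here returning canned bounding boxes.

```python
def detect_persons(frame):
    # Hypothetical detector: in the patent this is a trained target-detection
    # model followed by human body classification; here it returns stored boxes.
    return frame["boxes"]

def people_count_sequence(frames):
    """Count the people detected in each frame, then order the counts by
    timestamp to form the single-frame people number sequence."""
    counted = [(f["t"], len(detect_persons(f))) for f in frames]
    counted.sort(key=lambda pair: pair[0])  # sequence ordered by time
    return [count for _, count in counted]

frames = [
    {"t": 2.0, "boxes": [(0, 0, 50, 100)]},
    {"t": 1.0, "boxes": [(0, 0, 50, 100), (60, 0, 110, 100)]},
]
print(people_count_sequence(frames))  # [2, 1]
```

A count greater than one in any frame is exactly the kind of signal the downstream prediction model can associate with cheating.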
For S3, a preset human body posture analysis model is adopted to analyze the human body posture of each frame of image of the front monitoring video, and data obtained by analyzing each frame of image is used as a single frame of human body posture; and sequencing the single-frame human body postures according to time, and taking the sequenced single-frame human body postures as the single-frame human body posture sequence.
The human body posture analysis model is obtained based on neural network training. The human body posture analysis model is used for detecting the posture of the human body in the image.
The single frame human pose includes: head pose, left hand pose, right hand pose, and chest pose.
For S4, text conversion is performed on the audio in the front monitoring video, examination keyword extraction is performed according to the converted text, and each extracted examination keyword is used as an audio examination keyword set.
For S5, the answer behavior sequence corresponding to the front monitoring video may be obtained from a database, or may be obtained from a third-party application.
The answer behavior sequence corresponding to the front monitoring video refers to behavior data of answering of a target examinee on a computer in a time period corresponding to the front monitoring video.
The sequence of answer behaviors includes a plurality of answer behaviors. Answering behaviors include, but are not limited to: the number of answers, the answer speed of each question and the answer sequence.
And S6, performing classification prediction of cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence, and taking data obtained through the classification prediction as a cheating behavior prediction result.
The cheating behavior prediction result likewise takes one of two values: yes or no. A result of yes indicates that the target examinee exhibited cheating behavior during the examination covered by the front monitoring video; a result of no indicates that the target examinee did not.
In an embodiment, the step of performing text conversion and examination keyword extraction on the audio in the front monitoring video respectively to obtain an audio examination keyword set includes:
s41: based on an ASR technology, performing text conversion on the audio in the front monitoring video to obtain a text to be analyzed;
s42: performing word segmentation on the text to be analyzed to obtain a phrase set;
s43: acquiring an examination keyword set corresponding to the front monitoring video from a preset examination keyword library to serve as a target examination keyword set;
s44: performing examination keyword matching on each phrase in the phrase set in the target examination keyword set to obtain a matching result;
s45: and if the matching result is successful, taking each phrase corresponding to the successful matching result as the audio test keyword set.
According to the embodiment, the examination keyword is extracted from the phrase set corresponding to the front monitoring video based on the examination keyword set corresponding to the front monitoring video, so that the accuracy of the extracted examination keyword is improved, and the accuracy of the examination cheating behavior prediction is improved.
And S41, performing text conversion on the audio input preset audio-to-text model in the front monitoring video, and taking the converted text as the text to be analyzed.
The audio-to-text model is a model obtained by training based on an ASR (automatic speech recognition) technology.
And S42, performing word segmentation on the text to be analyzed, and taking each phrase obtained by word segmentation as a phrase set.
For step S43, an examination keyword set corresponding to the examination type corresponding to the front-side monitoring video is acquired from a preset examination keyword library, and the acquired examination keyword set is used as a target examination keyword set. Therefore, the examination keyword set closely related to the front monitoring video is used as the target examination keyword set, and the accuracy of the audio examination keyword set determined in the subsequent steps is improved.
The examination keyword library comprises: test type and test keyword set. The test keyword set includes one or more test keywords.
And S44, performing examination keyword matching on each phrase in the phrase set in the target examination keyword set to obtain a matching result corresponding to each phrase.
If the matching result is successful, the phrase corresponding to the matching result is matched with the examination keyword in the target examination keyword set; if the matching result is failure, the phrase corresponding to the matching result is not matched with the test keyword in the target test keyword set.
For S45, if the matching result is successful, the phrases corresponding to the successful matching result are used as the audio test keyword set. Therefore, the collection of examination keywords relevant to the front monitoring video is extracted.
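Steps S41 to S45 can be sketched as follows, assuming the audio has already been converted to text by an ASR system. The word segmenter here is a naive whitespace split and the keyword library contents are hypothetical; the patent does not specify a segmentation algorithm or library format beyond "exam type → keyword set".

```python
# Hypothetical examination keyword library: exam type -> target keyword set (S43).
EXAM_KEYWORD_LIBRARY = {
    "math": {"derivative", "integral", "matrix"},
    "history": {"dynasty", "treaty"},
}

def audio_exam_keywords(transcript, exam_type):
    """Segment the ASR transcript into phrases (S42), match each phrase
    against the target keyword set (S44), and keep the hits (S45)."""
    phrases = transcript.lower().split()                  # naive segmentation
    targets = EXAM_KEYWORD_LIBRARY.get(exam_type, set())  # target keyword set
    return {p for p in phrases if p in targets}           # successful matches

print(audio_exam_keywords("what is the integral of x", "math"))  # {'integral'}
```

In practice a proper tokenizer (e.g. a Chinese word segmenter for Chinese transcripts) would replace the whitespace split, but the match-against-library flow is the same.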
In an embodiment, the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set, and the answer behavior sequence to obtain a cheating behavior prediction result includes:
s611: inputting the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence into a preset cheating behavior prediction model to predict cheating behaviors, and obtaining a cheating behavior prediction result;
the cheating behavior prediction model is a model obtained based on a Bert model and a classification prediction layer.
According to the embodiment, the model obtained by training based on the Bert model and the classification prediction layer is adopted for cheating behavior prediction, and the accuracy of cheating behavior prediction is improved based on artificial intelligence.
For S611, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence are spliced, the spliced data are input into a preset cheating behavior prediction model to conduct cheating behavior prediction, and the predicted data are used as a cheating behavior prediction result.
The classification prediction layer is a fully connected layer adopting a softmax activation function (normalized exponential function).
Bert (Bidirectional Encoder Representations from Transformers) is a pre-trained language representation model.
In an embodiment, after the step of obtaining the front monitoring video of the target test taker, the method further includes:
s111: extracting a face image of each frame of image from the front monitoring video to obtain a face image set corresponding to each frame of image;
s112: extracting the face image with the largest size from the face image set to obtain a face image to be analyzed;
s113: extracting actual information of the auricle shape of the face image to be analyzed;
s114: comparing the actual information of the auricle shape with preset standard information of the auricle shape to obtain a comparison result of the auricle shape;
s115: generating a pinna shape comparison result sequence according to each pinna shape comparison result;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
s621: and carrying out cheating behavior prediction according to the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to this embodiment, the actual auricle shape information is extracted from the largest face image in each frame of the front monitoring video, an auricle shape comparison result sequence is then generated from the comparison of the actual auricle shape information against the standard auricle shape information, and the sequence is finally used for cheating behavior prediction, which further increases the comprehensiveness of the information used for cheating behavior prediction and thereby further improves the accuracy of cheating behavior prediction in the examination.
For S111, performing target detection on each frame of image in the front monitoring video by adopting a target detection model to obtain a target image set of each frame of image; carrying out human body classification prediction on each target image in the target image set to obtain a human body classification prediction result; and extracting face images from each target image corresponding to the human body classification prediction result, and taking each extracted face image as a face image set.
For step S112, the face image with the largest size is extracted from the face image set, so as to obtain an image of a face closest to a camera that captures a front surveillance video, and the extracted face image is used as a face image to be analyzed.
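The selection in S112 can be sketched as picking the detection box with the greatest area; representing each detected face as an (x, y, w, h) bounding box is an assumption for illustration:

```python
def largest_face(face_boxes):
    """S112 sketch: the face image with the largest size is taken to be the
    face closest to the camera capturing the front monitoring video.

    face_boxes: list of (x, y, w, h) bounding boxes from one frame.
    """
    if not face_boxes:
        return None  # no face detected in this frame
    return max(face_boxes, key=lambda box: box[2] * box[3])
```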
For S113, a specific method for extracting the actual information of the pinna shape from the face image to be analyzed is not limited, and a person skilled in the art may set the method according to a specific design requirement.
The auricle shape actual information includes: pinna shape and pinna pose.
And S114, comparing the actual information of the auricle shape with the preset standard information of the auricle shape in the same auricle pose, and taking the compared data as the comparison result of the auricle shape.
According to the corresponding relation between the auricle pose of the auricle shape actual information and the auricle pose corresponding to the auricle shape standard information, performing shape affine transformation on the auricle shape of the auricle shape actual information, comparing the auricle shape after the shape affine transformation with the auricle shape of the auricle shape standard information, and taking the data obtained by comparison as an auricle shape comparison result.
And S115, sequencing the auricle shape comparison results according to the time sequence, and taking the sequenced auricle shape comparison results as an auricle shape comparison result sequence.
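The comparison in S114 and the sequencing in S115 can be sketched as follows. Representing each auricle shape as a list of (x, y) landmark points, with mean point distance as the comparison result, is an illustrative assumption — the embodiment leaves the concrete comparison method open, requiring only that both shapes are first brought into the same pose:

```python
def compare_pinna_shape(actual_points, standard_points):
    """S114 sketch: mean Euclidean distance between corresponding landmark
    points of the actual and standard auricle shapes (both assumed already
    brought into the same auricle pose by the shape affine transformation)."""
    assert len(actual_points) == len(standard_points)
    total = sum(
        ((ax - sx) ** 2 + (ay - sy) ** 2) ** 0.5
        for (ax, ay), (sx, sy) in zip(actual_points, standard_points)
    )
    return total / len(actual_points)

def build_comparison_sequence(timed_results):
    """S115 sketch: order per-frame comparison results chronologically.

    timed_results: list of (timestamp, comparison_result) pairs.
    """
    return [result for _, result in sorted(timed_results)]
```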
And S621, splicing the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence, inputting spliced data into a cheating behavior prediction model to predict cheating behaviors, and obtaining predicted data as a cheating behavior prediction result.
In an embodiment, after the step of extracting the face image with the largest size from the face image set to obtain the face image to be analyzed, the method further includes:
S116: acquiring an admission card image of the target examinee;
S117: extracting face difference information between the face image to be analyzed and the face image of the admission card image to obtain single-frame face difference information;
S118: generating a single-frame face difference information sequence according to each piece of single-frame face difference information;
the step of predicting cheating behaviors according to the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
S6211: and carrying out cheating behavior prediction according to the single-frame human face difference information sequence, the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to this embodiment, face difference information relative to the face image of the admission card image is extracted from the largest face image in each frame of the front monitoring video, a single-frame face difference information sequence is then generated from the face difference information, and the sequence is finally used for cheating behavior prediction, which further increases the comprehensiveness of the information used for cheating behavior prediction and thereby further improves the accuracy of cheating behavior prediction in the examination.
For step S116, the admission card image of the target examinee may be obtained from a database, or may be obtained from a third-party application.
The admission card image includes: an image of the face of the examinee.
For step S117, according to the corresponding relationship between the face pose of the face image to be analyzed and the face pose of the face image of the admission ticket image, performing affine transformation on the face image to be analyzed, performing face difference information extraction on the face image to be analyzed and the face image of the admission ticket image after affine transformation, and taking the extracted face difference information as single-frame face difference information.
For step S118, the single-frame face difference information is sorted in time sequence, and the sorted single-frame face difference information is used as a single-frame face difference information sequence.
For S6211, the single-frame human face difference information sequence, the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence are spliced, the spliced data is input into a cheating behavior prediction model to carry out cheating behavior prediction, and the predicted data is used as the cheating behavior prediction result.
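The splicing performed in S611, S621, and S6211 can be sketched as flattening each input sequence into tokens and concatenating them. The stringification and the "[SEP]"-style separators below mirror common Bert-style input packing and are assumptions, not details fixed by the embodiment:

```python
def splice_features(*feature_sequences):
    """Concatenate heterogeneous per-frame sequences (people counts, postures,
    keywords, answer behaviors, ...) into one token list for the cheating
    behavior prediction model. Separator choice is an assumption."""
    spliced = []
    for sequence in feature_sequences:
        spliced.extend(str(value) for value in sequence)
        spliced.append("[SEP]")
    return spliced
```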
In an embodiment, after the step of obtaining the front monitoring video of the target examinee, the method further includes:
S121: according to the audio in the front monitoring video, carrying out volume identification at the generation time point of each frame of image of the front monitoring video to obtain a single-frame volume;
S122: comparing the single-frame volume with a preset volume threshold to obtain a single-frame volume comparison result;
S123: generating a single-frame volume comparison result sequence according to each single-frame volume comparison result;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
S631: and carrying out cheating behavior prediction according to the single-frame volume comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to the embodiment, the single-frame volume of each frame of image at the generating time point is extracted from the audio of the front monitoring video, then the single-frame volume comparison result sequence is generated according to each single-frame volume, and finally the single-frame volume comparison result sequence is used for cheating behavior prediction, so that the comprehensiveness of information used for the cheating behavior prediction is further increased, and the accuracy of the cheating behavior prediction in the examination is further improved.
For step S121, for the generation time point of each frame of image of the front monitoring video, acquiring the volume from the audio in the front monitoring video, and taking the volume corresponding to each frame of image as the single-frame volume.
And S122, subtracting the volume threshold from the single-frame volume to obtain a single-frame volume comparison result.
And S123, sequencing the single-frame volume comparison results according to the time sequence, and taking the sequenced single-frame volume comparison results as a single-frame volume comparison result sequence.
And S631, splicing the single-frame volume comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence, inputting the spliced data into the cheating behavior prediction model to predict cheating behaviors, and taking the predicted data as the cheating behavior prediction result.
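The volume processing of S121-S123 can be sketched as follows; representing each frame as a (timestamp, volume) pair is an illustrative assumption:

```python
def volume_comparison_sequence(frame_volumes, threshold):
    """S121-S123 sketch: per-frame volume minus the preset threshold (S122),
    ordered chronologically (S123).

    frame_volumes: list of (timestamp, volume) pairs, one per frame of the
    front monitoring video.
    """
    ordered = sorted(frame_volumes)  # sort by generation time point
    return [volume - threshold for _, volume in ordered]
```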
In an embodiment, after the step of obtaining the front monitoring video of the target examinee, the method further includes:
S141: obtaining an examination room monitoring video corresponding to the front monitoring video;
S142: according to the examination room monitoring video, performing desktop viewing behavior analysis on the target examinee to obtain desktop behavior data;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
S641: and carrying out cheating behavior prediction according to the desktop behavior data, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to the embodiment, the desktop behavior data is extracted from the examination room monitoring video corresponding to the front monitoring video and used for cheating behavior prediction, so that the comprehensiveness of information used for the cheating behavior prediction is further increased, and the accuracy of the cheating behavior prediction of the examination is further improved.
For S141, the examination room monitoring video corresponding to the front monitoring video may be obtained from a database, or the examination room monitoring video corresponding to the front monitoring video may be obtained from a third-party application.
The examination room monitoring video is the monitoring video of the whole examination room.
And S142, performing desktop viewing behavior analysis on the target examinee according to the examination room monitoring video, and taking data obtained through analysis as desktop behavior data.
The desktop behavior data is behavior data of an examinee viewing and operating on the desktop where the examination computer is located.
Desktop behavior data includes, but is not limited to: desktop viewing frequency, hand-on-desktop behavior data, and desktop item data.
And S641, splicing the desktop behavior data, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence, inputting the spliced data into a cheating behavior prediction model to predict cheating behaviors, and obtaining the predicted data as a cheating behavior prediction result.
Referring to fig. 2, the present invention also provides an examination cheating behavior prediction apparatus, including:
the data acquisition module 100 is used for acquiring a front monitoring video of a target examinee;
the single-frame people number sequence determining module 200 is used for judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence;
the single-frame human body posture sequence determining module 300 is used for analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence;
an audio examination keyword set determining module 400, configured to perform text conversion and examination keyword extraction on the audio in the front-side monitoring video, respectively, to obtain an audio examination keyword set;
the answer behavior sequence obtaining module 500 is configured to obtain an answer behavior sequence corresponding to the front monitoring video;
and the cheating behavior prediction result determining module 600 is configured to perform cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to the embodiment, cheating behavior prediction is performed through the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence extracted from the front monitoring video, so that cheating behavior prediction based on various information is realized, comprehensiveness of information for the cheating behavior prediction is increased, and accuracy of the cheating behavior prediction in the test is improved.
In one embodiment, the audio test keyword set determining module 400 includes:
the phrase set determining submodule is used for performing text conversion on the audio in the front monitoring video based on an ASR technology to obtain a text to be analyzed, and performing word segmentation on the text to be analyzed to obtain a phrase set;
the target examination keyword set determining submodule is used for acquiring an examination keyword set corresponding to the front monitoring video from a preset examination keyword library to serve as a target examination keyword set;
and the audio test keyword set determining submodule is used for matching each phrase in the phrase set with the target test keyword set to obtain a matching result, and if the matching result is successful, each phrase corresponding to the successful matching result is used as the audio test keyword set.
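The matching performed by the audio test keyword set determining submodule can be sketched as a set lookup over the segmented phrases; keeping only the first occurrence of each matched phrase is an assumption for illustration:

```python
def extract_audio_exam_keywords(phrases, target_keywords):
    """Match each phrase from the segmented ASR transcript against the target
    examination keyword set; the successfully matched phrases form the audio
    test keyword set."""
    keyword_set = set(target_keywords)
    matched = []
    for phrase in phrases:
        if phrase in keyword_set and phrase not in matched:
            matched.append(phrase)  # keep first occurrence only (assumption)
    return matched
```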
In one embodiment, the cheating behavior prediction module 600 includes:
the first prediction submodule is used for inputting the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence into a preset cheating behavior prediction model to predict cheating behaviors and obtain a cheating behavior prediction result; the cheating behavior prediction model is a model obtained based on a Bert model and a classification prediction layer.
In one embodiment, the above apparatus further comprises:
the auricle shape comparison result sequence generation module is used for extracting a face image of each frame of image from the front monitoring video to obtain a face image set corresponding to each frame of image, extracting the face image with the largest size from the face image set to obtain a face image to be analyzed, extracting actual auricle shape information from the face image to be analyzed, comparing the actual auricle shape information with preset standard auricle shape information to obtain an auricle shape comparison result, and generating an auricle shape comparison result sequence according to each auricle shape comparison result;
the cheating behavior prediction result determination module 600 further includes: a second prediction sub-module;
and the second prediction submodule is used for carrying out cheating behavior prediction according to the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain the cheating behavior prediction result.
In one embodiment, the above apparatus further comprises:
the single-frame face difference information sequence determining module is used for acquiring an admission card image of the target examinee, extracting face difference information of the face image to be analyzed and the face image of the admission card image to obtain single-frame face difference information, and generating a single-frame face difference information sequence according to each piece of single-frame face difference information;
the cheating behavior prediction result determination module 600 further includes: a third prediction sub-module;
and the third prediction sub-module is used for carrying out cheating behavior prediction according to the single-frame human face difference information sequence, the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
In one embodiment, the above apparatus further comprises:
the single-frame volume comparison result sequence determining module is used for carrying out volume identification on the generation time point of each frame of image of the front monitoring video according to the audio frequency in the front monitoring video to obtain single-frame volume, comparing the single-frame volume with a preset volume threshold value to obtain a single-frame volume comparison result, and generating a single-frame volume comparison result sequence according to each single-frame volume comparison result;
the cheating behavior prediction result determination module 600 further includes: a fourth prediction sub-module;
and the fourth prediction submodule is used for carrying out cheating behavior prediction according to the single-frame volume comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain the cheating behavior prediction result.
In one embodiment, the above apparatus further comprises:
the desktop behavior data determining module is used for acquiring a test room monitoring video corresponding to the front monitoring video, and performing desktop viewing behavior analysis on the target examinee according to the test room monitoring video to obtain desktop behavior data;
the cheating behavior prediction result determination module 600 further includes: a fifth prediction sub-module;
and the fifth prediction submodule is used for carrying out cheating behavior prediction according to the desktop behavior data, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
Referring to fig. 3, an embodiment of the present invention further provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data involved in the examination cheating behavior prediction method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a method of predicting examination cheating behavior.
The test cheating behavior prediction method comprises the following steps: acquiring a front monitoring video of a target examinee; judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence; analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence; respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set; acquiring an answer behavior sequence corresponding to the front monitoring video; and carrying out cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to the embodiment, cheating behavior prediction is performed through the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence extracted from the front monitoring video, so that cheating behavior prediction based on various information is realized, comprehensiveness of information for the cheating behavior prediction is increased, and accuracy of the cheating behavior prediction in the test is improved.
An embodiment of the present invention further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements a method for predicting examination cheating behavior, including the steps of: acquiring a front monitoring video of a target examinee; judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence; analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence; respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set; acquiring an answer behavior sequence corresponding to the front monitoring video; and carrying out cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
According to the executed test cheating behavior prediction method, cheating behavior prediction is performed through the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence extracted from the front monitoring video, so that cheating behavior prediction based on various information is realized, the comprehensiveness of information used for cheating behavior prediction is increased, and the accuracy of test cheating behavior prediction is improved.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing related hardware, where the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media provided herein or used in embodiments of the present invention may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A method for predicting examination cheating behaviors, the method comprising:
acquiring a front monitoring video of a target examinee;
judging the number of people in each frame of image of the front monitoring video to obtain a single-frame people number sequence;
analyzing the human body posture of each frame of image of the front monitoring video to obtain a single-frame human body posture sequence;
respectively performing text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set;
acquiring an answer behavior sequence corresponding to the front monitoring video;
and carrying out cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
2. The method for predicting examination cheating behaviors according to claim 1, wherein the step of performing text conversion and examination keyword extraction on the audio in the front monitoring video respectively to obtain an audio examination keyword set comprises:
based on an ASR technology, performing text conversion on the audio in the front monitoring video to obtain a text to be analyzed;
performing word segmentation on the text to be analyzed to obtain a phrase set;
acquiring an examination keyword set corresponding to the front monitoring video from a preset examination keyword library to serve as a target examination keyword set;
matching each phrase in the phrase set with examination keywords in the target examination keyword set to obtain a matching result;
and if the matching result is successful, taking each phrase corresponding to the successful matching result as the audio test keyword set.
3. The method for predicting the cheating behaviors in the examination according to claim 1, wherein the step of predicting the cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result comprises:
inputting the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence into a preset cheating behavior prediction model to predict cheating behaviors, and obtaining a cheating behavior prediction result;
the cheating behavior prediction model is a model obtained based on a Bert model and a classification prediction layer.
4. The method for predicting examination cheating behaviors according to claim 1, wherein after the step of obtaining the front monitoring video of the target examinee, the method further comprises:
extracting a face image of each frame of image from the front monitoring video to obtain a face image set corresponding to each frame of image;
extracting the face image with the largest size from the face image set to obtain a face image to be analyzed;
extracting actual information of the auricle shape of the face image to be analyzed;
comparing the actual information of the auricle shape with preset standard information of the auricle shape to obtain a comparison result of the auricle shape;
generating a pinna shape comparison result sequence according to each pinna shape comparison result;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
and carrying out cheating behavior prediction according to the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
5. The method for predicting examination cheating behaviors of claim 4, wherein after the step of extracting the face image with the largest size from the face image set to obtain the face image to be analyzed, the method further comprises:
obtaining an admission card image of the target examinee;
extracting face difference information of the face image to be analyzed and the face image of the admission card image to obtain single-frame face difference information;
generating a single-frame human face difference information sequence according to each single-frame human face difference information;
the step of predicting cheating behaviors according to the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
and carrying out cheating behavior prediction according to the single-frame human face difference information sequence, the auricle shape comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
6. The method for predicting examination cheating behaviors according to claim 1, wherein after the step of obtaining the front monitoring video of the target examinee, the method further comprises:
according to the audio frequency in the front monitoring video, carrying out volume identification on the generation time point of each frame of image of the front monitoring video to obtain the volume of a single frame;
comparing the single-frame volume with a preset volume threshold value to obtain a single-frame volume comparison result;
generating a single-frame volume comparison result sequence according to each single-frame volume comparison result;
the step of predicting cheating behaviors according to the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises the following steps:
and carrying out cheating behavior prediction according to the single-frame volume comparison result sequence, the single-frame people number sequence, the single-frame human body posture sequence, the audio test keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
7. The method for predicting examination cheating behaviors according to claim 1, wherein after the step of obtaining the front monitoring video of the target examinee, the method further comprises:
obtaining an examination room monitoring video corresponding to the front monitoring video;
performing desktop viewing behavior analysis on the target examinee according to the examination room monitoring video to obtain desktop behavior data;
the step of performing cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result further comprises:
performing cheating behavior prediction according to the desktop behavior data, the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
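Claims 5 through 7 each feed an additional sequence into the same prediction step. The patent leaves the fusion rule unspecified; the sketch below is one hypothetical rule-based fusion, where each indicator is a sequence of booleans marking suspicious observations and the 10% frame-share threshold is an assumption:

```python
def predict_cheating(indicator_sequences):
    """Fuse per-frame indicator sequences into one cheating prediction.

    indicator_sequences maps an indicator name (e.g. 'people_count',
    'posture', 'volume', 'keywords') to a sequence of booleans in which
    True marks a suspicious observation. Cheating is predicted when any
    indicator is suspicious in more than 10% of its observations; the
    triggering indicator's name is returned for explainability.
    """
    for name, sequence in indicator_sequences.items():
        if sequence and sum(sequence) / len(sequence) > 0.10:
            return True, name
    return False, None
```

In practice a trained classifier could replace this hand-set rule, but the data flow — many per-frame sequences in, one prediction result out — matches the claims.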
8. An examination cheating behavior prediction apparatus, comprising:
a data acquisition module, configured to obtain a front monitoring video of a target examinee;
a single-frame people number sequence determining module, configured to determine the number of people in each frame image of the front monitoring video to obtain a single-frame people number sequence;
a single-frame human body posture sequence determining module, configured to analyze the human body posture in each frame image of the front monitoring video to obtain a single-frame human body posture sequence;
an audio examination keyword set determining module, configured to perform text conversion and examination keyword extraction on the audio in the front monitoring video to obtain an audio examination keyword set;
an answer behavior sequence acquisition module, configured to obtain an answer behavior sequence corresponding to the front monitoring video;
and a cheating behavior prediction result determining module, configured to perform cheating behavior prediction according to the single-frame people number sequence, the single-frame human body posture sequence, the audio examination keyword set and the answer behavior sequence to obtain a cheating behavior prediction result.
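The module structure of claim 8 can be sketched as a class in which each injected callable stands in for one module. All interfaces here are assumptions — the patent does not specify the underlying detection models or any fusion rule — so this is only an illustration of how the modules compose:

```python
class ExamCheatingPredictionApparatus:
    """Sketch of the apparatus in claim 8; each constructor argument
    stands in for one module, with an assumed callable interface."""

    def __init__(self, count_people, classify_posture, transcribe, extract_keywords):
        self.count_people = count_people          # frame image -> int
        self.classify_posture = classify_posture  # frame image -> posture label
        self.transcribe = transcribe              # audio -> text
        self.extract_keywords = extract_keywords  # text -> set of exam keywords

    def predict(self, frames, audio, answer_behavior_sequence):
        # Per-frame sequences, as produced by the determining modules.
        people_sequence = [self.count_people(f) for f in frames]
        posture_sequence = [self.classify_posture(f) for f in frames]
        keyword_set = self.extract_keywords(self.transcribe(audio))
        # Illustrative fusion rule (an assumption): flag cheating when more
        # than one person appears, a non-normal posture is seen, exam
        # keywords are heard, or an anomalous answering event is marked.
        return (any(n > 1 for n in people_sequence)
                or any(p != "normal" for p in posture_sequence)
                or bool(keyword_set)
                or any(answer_behavior_sequence))
```

Injecting the modules as callables keeps the apparatus independent of any particular person-counting, posture, or speech-recognition model, mirroring how the claim names modules without fixing their internals.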
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210292955.3A CN114639175A (en) | 2022-03-23 | 2022-03-23 | Method, device, equipment and storage medium for predicting examination cheating behaviors |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210292955.3A CN114639175A (en) | 2022-03-23 | 2022-03-23 | Method, device, equipment and storage medium for predicting examination cheating behaviors |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114639175A true CN114639175A (en) | 2022-06-17 |
Family
ID=81949118
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210292955.3A Pending CN114639175A (en) | 2022-03-23 | 2022-03-23 | Method, device, equipment and storage medium for predicting examination cheating behaviors |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114639175A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116153312A (en) * | 2023-03-05 | 2023-05-23 | 广州网才信息技术有限公司 | Online pen test method and device using voice recognition |
CN116883953A (en) * | 2023-09-08 | 2023-10-13 | 杭州东方网升科技股份有限公司 | Online examination anti-cheating method, system and storage medium |
CN116883953B (en) * | 2023-09-08 | 2023-11-17 | 杭州东方网升科技股份有限公司 | Online examination anti-cheating method, system and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109729383B (en) | Double-recording video quality detection method and device, computer equipment and storage medium | |
WO2021051607A1 (en) | Video data-based fraud detection method and apparatus, computer device, and storage medium | |
CN114639175A (en) | Method, device, equipment and storage medium for predicting examination cheating behaviors | |
CN111160275B (en) | Pedestrian re-recognition model training method, device, computer equipment and storage medium | |
CN111126233B (en) | Call channel construction method and device based on distance value and computer equipment | |
CN110427881B (en) | Cross-library micro-expression recognition method and device based on face local area feature learning | |
CN111931718B (en) | Method, device and computer equipment for updating face characteristics based on face recognition | |
CN111832581B (en) | Lung feature recognition method and device, computer equipment and storage medium | |
CN110505504B (en) | Video program processing method and device, computer equipment and storage medium | |
WO2021047190A1 (en) | Alarm method based on residual network, and apparatus, computer device and storage medium | |
CN111881726A (en) | Living body detection method and device and storage medium | |
US20150178544A1 (en) | System for estimating gender from fingerprints | |
CN111401105A (en) | Video expression recognition method, device and equipment | |
CN111325082A (en) | Personnel concentration degree analysis method and device | |
CN112699758A (en) | Sign language translation method and device based on dynamic gesture recognition, computer equipment and storage medium | |
CN115221941A (en) | Cognitive disorder detection method and related device, electronic equipment and storage medium | |
CN113128522B (en) | Target identification method, device, computer equipment and storage medium | |
US11238289B1 (en) | Automatic lie detection method and apparatus for interactive scenarios, device and medium | |
CN110717407A (en) | Human face recognition method, device and storage medium based on lip language password | |
CN111599382B (en) | Voice analysis method, device, computer equipment and storage medium | |
CN109697421A (en) | Evaluation method, device, computer equipment and storage medium based on micro- expression | |
CN109241864A (en) | Emotion prediction technique, device, computer equipment and storage medium | |
CN114399699A (en) | Target recommendation object determination method and device, electronic equipment and storage medium | |
CN113705511A (en) | Gesture recognition method and device | |
CN112528797A (en) | Question recommendation method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |