CN113569656A - Examination room monitoring method based on deep learning - Google Patents

Examination room monitoring method based on deep learning

Info

Publication number
CN113569656A
CN113569656A
Authority
CN
China
Prior art keywords
frame
behavior
deep learning
examination room
behavior class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110752439.XA
Other languages
Chinese (zh)
Other versions
CN113569656B (en)
Inventor
朱静
薛穗华
何伟聪
潘梓沛
毛俊彦
尹邦政
赵宣博
明家辉
何泳隆
郑森元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN202110752439.XA
Publication of CN113569656A
Application granted
Publication of CN113569656B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an examination room monitoring method based on deep learning, comprising the following steps: the monitoring system shoots an examination video in real time; the preprocessed examination image is input into a trained convolutional neural network, which uniformly divides the image into a plurality of grids; each grid performs behavior box prediction on the object to be recognized within it; the category score of each behavior box is compared with a preset threshold, boxes below the threshold are filtered out, and non-maximum suppression is applied to boxes above it to obtain the final detection boxes; and the score T of each final detection box is compared with the optimized adaptive parameters q, p and r to obtain the probability that the behavior box contains the target. Building on the YOLO algorithm, an existing one-stage algorithm, the method performs deep-learning adaptive parameter optimization on the detection data and grades the degree of abnormal behavior by adaptive parameter intervals, further improving the accuracy of abnormal behavior detection.

Description

Examination room monitoring method based on deep learning
Technical Field
The invention relates to the technical field of video monitoring and identification, in particular to an examination room monitoring method based on deep learning.
Background
To monitor whether examinees cheat, the existing approach generally installs monitoring cameras in the examination room, watched by dedicated personnel, while an invigilator invigilates on site; when the monitoring personnel spot an examinee exhibiting abnormal behavior in the surveillance video, the information is relayed to the examination room invigilators, who verify whether the examinee is cheating. Because each monitor watches a large number of examinees, it is difficult to attend to the specific actions of every examinee one by one, so many incidents are missed; on-site invigilators face the same problem. Even when cheating examinees exist, they are hard to discover in time, and the cheating detection effect is poor.
With the continuous growth of computing power, computer vision technology is gradually being applied in daily life. In the field of video surveillance, an important task is to discover people in the monitored video and to interpret and discern their behavior. To know whether a specified target is present, people must first be identified in the video sequence, which is the person detection problem; recognizing and judging the actions of a target appearing in the surveillance video further requires considering spatio-temporal correlations. Person detection is thus the basis of human action recognition. Action detection built on person detection can be applied to an examination monitoring system to improve the real-time efficiency of invigilation.
However, examination rooms still use traditional monitoring systems, in which a manager observes the surveillance video, makes judgments, and analyzes abnormal behavior. Such a system for analyzing abnormal behavior in the examination room has serious limitations: behaviors easily escape human observation, and human negligence is unavoidable. Applying a target detection algorithm based on a deep learning network to examination room monitoring greatly improves the detection accuracy of abnormal behaviors.
Detection methods based on deep learning fall mainly into two types: two-stage detection and one-stage detection. Existing two-stage detectors include R-CNN, SPP-net, Fast R-CNN and the like, and two-stage detection algorithms are slow. One-stage detection algorithms such as YOLO and SSD were proposed to realize end-to-end target detection networks, and offer good real-time performance and high detection speed. For identifying and prompting abnormal behaviors in examination room monitoring, effective alarm prompts require both good real-time performance and accuracy. Therefore, there is a need in the industry for an examination room monitoring and detection method or system based on a one-stage algorithm.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an examination room monitoring method based on deep learning, which can improve the detection accuracy of abnormal behaviors.
The purpose of the invention is realized by the following technical scheme:
an examination room monitoring method based on deep learning comprises the following steps:
s1, shooting an examination video in real time by the monitoring system, acquiring an examination image of the examination video, and preprocessing the examination image;
s2, inputting the preprocessed test image into a trained convolutional neural network, wherein the convolutional neural network uniformly divides the test image into a plurality of grids;
s3, performing behavior frame prediction on the object to be recognized in each grid by each grid, wherein the behavior frame comprises the center coordinate information, the width and the height and the category confidence of the frame;
s4, comparing the category score of the behavior frame with a preset threshold, filtering the behavior frame lower than the preset threshold, and performing non-maximum suppression (NMS) on the behavior frame higher than the preset threshold to obtain a final detection frame;
and S5, comparing the score T of the final detection frame with the optimized adaptive parameters q, p and r to obtain the probability that the behavior frame contains the target.
Preferably, preprocessing the examination image comprises: scaling the input image.
Preferably, step S3 includes: manually labeling behavior class candidate boxes; the convolutional network outputting offsets relative to the candidate box to obtain the center coordinates of the behavior class box; obtaining the confidence as the product of the predicted behavior class probability and the IOU; and each grid predicting the conditional probabilities of all behavior classes, the score of each behavior class box being the product of the conditional probability and the confidence.
Preferably, manually labeling behavior class candidate boxes includes: clustering the data set with a K-means clustering algorithm to obtain the candidate box sizes. Specifically: manually label the width-height data (w, h) of n behavior class boxes; select k candidate boxes corresponding to k candidate box clusters from the n behavior class boxes; compute the Euclidean distance between each behavior class box and each candidate box, and assign boxes whose Euclidean distance is within P to the same cluster; compute the mean of each cluster to obtain a new cluster center; repeat this process until the cluster centers no longer change. The final cluster centers are the sizes of the behavior class candidate boxes, where n > k and P > 0.
Preferably, obtaining the confidence as the product of the predicted behavior class probability and the IOU comprises: if the behavior class falls into the grid, the predicted behavior class probability is 1, otherwise it is 0; the Area of the behavior class box is calculated as (w - x + 1) × (h - y + 1), and the IOU is the ratio of the intersection to the union of the behavior class box and the label box; the confidence is the product of the predicted behavior class probability and the IOU.
Preferably, IOU = (A ∩ B) / (A ∪ B).
Preferably, step S5 is followed by:
all the test box scores T are saved.
Preferably, the examination room monitoring method based on deep learning further includes optimizing the pre-generated adaptive parameters q, p and r, specifically:
S51, taking all the saved detection box scores T as a data set;
S52, selecting 3 objects from the data set as initial cluster centers, i.e. the initial adaptive parameters q, p and r;
S53, calculating the distance from each score T in the data set to each cluster center, and assigning each score to the cluster whose center is nearest;
S54, recomputing each cluster's center as the mean of the scores in that cluster and updating it;
S55, repeating steps S53 and S54 until the objects in each cluster no longer change appreciably, then stopping the algorithm;
S56, comparing the resulting cluster center scores: the largest is q, the next is p, and the smallest is r.
Preferably, the generation of the adaptive parameters q, p and r comprises: the monitoring terminal acquires a video sequence of the examination room scene, collects examinee motion data, and preprocesses it; behavior features of the examinee motion data are detected from the surveillance video sequence using a convolutional-neural-network target detection algorithm; and biological motion data are manually selected, with deep learning performed through a clustering algorithm to generate the adaptive parameters q, p and r.
Preferably, the probability that the behavior class box contains the target is divided into four levels, corresponding to q < T, p < T < q, r < T < p and T < r respectively.
Compared with the prior art, the invention has the following advantages:
the method is based on a YOLO algorithm in the existing one-stage algorithm, deep learning adaptive parameter optimization is carried out on detection data, and the degree of abnormal behavior of interval division of adaptive parameters is provided. In this way, the accuracy of the abnormal behavior detection will be further improved. In addition, if the neural network model has errors, artificial interference can be performed to correct the data, which also helps to improve the accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flow chart of an examination room monitoring method based on deep learning according to the present invention.
FIG. 2 is another schematic flow chart of the examination room monitoring method based on deep learning according to the present invention.
FIG. 3 is a schematic diagram of the position of the convolution network output relative to the candidate box according to the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
The method identifies and detects multiple target behaviors in the image based on an improved YOLO framework, and computes the behavior class score T for each target behavior recognized in the real-time input and in the data set. Based on the comparison of the score T with the adaptive parameters q, p and r of the invention: when T > q, the data is saved and sent to the invigilator; when p < T < q, the frame image is saved, and a warning prompt and a request for judgment are issued to the monitoring administrator; when r < T < p, the coordinates (x, y) of the target position are recorded and the target is continuously tracked; and when T < r, the target is re-identified. The data generated in this process are stored in the data set, and the parameters q, p and r are optimized accordingly.
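For illustration only, a minimal Python sketch of this dispatch logic; the function name and the returned action labels are ours, not part of the disclosure:

def dispatch_detection(T, q, p, r):
    """Route one final detection according to score T and adaptive parameters q > p > r."""
    if T > q:
        return "save_and_send_to_invigilator"        # high probability of abnormal behavior
    elif T > p:
        return "save_frame_and_warn_administrator"   # warning prompt plus judgment request
    elif T > r:
        return "record_and_track_position"           # keep following the target's (x, y)
    else:
        return "re_identify_target"                  # score too low, identify again

For example, with q = 0.8, p = 0.6 and r = 0.3, a detection with T = 0.7 would have its frame saved and escalated to the monitoring administrator.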
The invention applies the YOLO algorithm with adaptive parameters to a monitoring system for person identification, applies convolutional neural networks and related techniques to the monitoring end for human body recognition, and trains and applies the adaptive parameters, thereby achieving examination room monitoring and warning with further improved accuracy at real-time running speed. The basic implementation of the YOLO algorithm is as follows:
1) The input image enters a convolutional neural network for feature extraction, and the image is divided into a plurality of grids.
2) Each grid predicts a plurality of boxes, where the box information comprises confidence, coordinates, width and height.
3) Boxes with low scores are removed according to the set threshold, and NMS is finally performed to remove redundant boxes.
Specifically, referring to fig. 1-2, a method for monitoring an examination room based on deep learning includes:
and S1, the monitoring system shoots the examination video in real time, acquires the examination image of the examination video, and preprocesses the examination image, namely, performs scaling processing on the input image. Because the monitoring scene of the examination room is fixed, the proportion of the examination room to the input image is fixed at a certain value in the image input process, namely the scaling format can be manually adjusted according to the actual scene.
S2, the preprocessed examination image is input into a trained convolutional neural network, which uniformly divides the image into a plurality of grids (k × k).
S3, each grid performs behavior box prediction on the object to be recognized within it; if the center of an object falls in a grid cell, that cell is responsible for detecting the object. The behavior class box comprises the box's center coordinates, width, height, and category confidence. Step S3 includes:
Manually label behavior class candidate boxes. Labeling comprises clustering the data set with a K-means clustering algorithm to obtain the candidate box sizes, specifically:
Manually label the width-height data (w, h) of n behavior class boxes and select k of them as initial candidate boxes, one per cluster; compute the Euclidean distance between each behavior class box and each candidate box, assign boxes with similar Euclidean distances to the same cluster, and take the mean of each cluster as the new cluster center; repeat this process until the cluster centers no longer change. The final cluster centers are the sizes of the behavior class candidate boxes, where n > k.
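As a sketch of this Euclidean-distance clustering, a NumPy implementation under our own naming (the disclosure describes the procedure only in prose):

import numpy as np

def cluster_candidate_sizes(wh, k, iters=100, seed=0):
    """K-means over manually labeled (w, h) pairs; returns k candidate box sizes.

    wh: array of shape (n, 2) holding box widths and heights, with n > k.
    """
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        # Assign each behavior class box to the nearest center (Euclidean distance).
        dists = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each cluster center as the mean of its boxes.
        new_centers = np.array([wh[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):  # stop once centers no longer change
            break
        centers = new_centers
    return centers  # final cluster centers = candidate box sizes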
The convolutional network outputs offsets relative to the candidate box, from which the center coordinates of the behavior class box are obtained. Referring to fig. 3, the offsets of the candidate box are tx, ty, tw and th, and the relationship between the candidate box and the behavior class box follows the standard YOLO decoding: bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw·e^tw, bh = ph·e^th, where bx, by, bw and bh are the center coordinates, width and height of the behavior class box; cx, cy are the coordinates of the upper-left corner of the grid cell; and pw, ph are the width and height of the candidate box.
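These formulas can be transcribed directly; a sketch with illustrative names:

import math

def decode_behavior_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Turn network offsets into a behavior class box, per the formulas above."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    bx = cx + sigmoid(tx)      # center x, offset within the grid cell
    by = cy + sigmoid(ty)      # center y
    bw = pw * math.exp(tw)     # width, scaled from the candidate box
    bh = ph * math.exp(th)     # height
    return bx, by, bw, bh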
The confidence is obtained as the product of the predicted behavior class probability and the IOU, which comprises:
If a behavior class falls into the grid cell, the predicted behavior class probability is 1; otherwise it is 0.
The Area of the behavior class box is calculated as (w - x + 1) × (h - y + 1); the IOU is then the ratio of the intersection to the union of the behavior class box and the label box: IOU = (A ∩ B) / (A ∪ B).
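A sketch of the IOU computation under the inclusive-pixel area convention stated above; representing each box as corner coordinates (x1, y1, x2, y2) is our assumption:

def iou(box_a, box_b):
    """IOU = (A ∩ B) / (A ∪ B) with Area = (x2 - x1 + 1) * (y2 - y1 + 1)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)      # intersection top-left
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)      # intersection bottom-right
    inter = max(0, ix2 - ix1 + 1) * max(0, iy2 - iy1 + 1)
    area_a = (ax2 - ax1 + 1) * (ay2 - ay1 + 1)
    area_b = (bx2 - bx1 + 1) * (by2 - by1 + 1)
    return inter / float(area_a + area_b - inter)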
The confidence is then the product of the predicted behavior class probability and the IOU.
Each grid cell predicts the conditional probabilities of all behavior classes, and the score of each behavior class box is the product of the conditional probability and the confidence.
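Putting the two products together, a minimal sketch (the names are ours):

def behavior_class_scores(in_grid, iou_with_label, class_cond_probs):
    """Per-class scores for one predicted box.

    in_grid: 1 if a behavior class center falls in this grid cell, else 0.
    confidence = Pr(object) * IOU; score = Pr(class | object) * confidence.
    """
    confidence = in_grid * iou_with_label
    return [p * confidence for p in class_cond_probs]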
S4, the category score of each behavior box is compared with a preset threshold (set to 0.5); behavior boxes below the threshold are filtered out, and non-maximum suppression (NMS) is applied to those above it to obtain the final detection boxes.
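A sketch of greedy NMS as used here, reusing the iou() sketch above; the overlap threshold value is illustrative:

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring boxes, dropping boxes that overlap them too much.

    boxes: list of (x1, y1, x2, y2); scores: matching list of scores T.
    Returns indices of the final detection boxes.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep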
S5, the score T of the final detection box is compared with the optimized adaptive parameters q, p and r to obtain the probability that the behavior box contains the target. This probability is divided into four levels, corresponding to q < T, p < T < q, r < T < p and T < r respectively.
In this embodiment, step S5 is followed by optimizing the adaptive parameters using the simple and efficient partitional clustering of the k-means algorithm, as follows:
S51, take all saved detection box scores T as the data set, with the number of classes manually set to 3;
S52, select 3 objects from the data set as initial cluster centers, i.e. the initial adaptive parameters q, p and r;
S53, compute the distance from each score T in the data set to each cluster center, and assign each score to the cluster whose center is nearest;
S54, recompute each cluster's center as the mean of the scores in that cluster and update it;
S55, repeat steps S53 and S54 until the objects in each cluster no longer change appreciably, then stop;
S56, compare the resulting cluster center scores: the largest is q, the next is p, and the smallest is r.
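A one-dimensional k-means sketch of steps S51 to S56; the function name and exact stopping test are ours:

import random

def fit_adaptive_parameters(scores, iters=100, seed=0):
    """Cluster saved detection scores T into 3 groups; centers give (q, p, r)."""
    random.seed(seed)
    centers = random.sample(list(scores), 3)              # S52: initial centers
    for _ in range(iters):
        clusters = [[], [], []]
        for t in scores:                                  # S53: nearest-center assignment
            j = min(range(3), key=lambda i: abs(t - centers[i]))
            clusters[j].append(t)
        new_centers = [sum(c) / len(c) if c else centers[j]
                       for j, c in enumerate(clusters)]   # S54: means as new centers
        if new_centers == centers:                        # S55: stop when unchanged
            break
        centers = new_centers
    q, p, r = sorted(centers, reverse=True)               # S56: largest is q, smallest r
    return q, p, r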
The generation of the adaptive parameters q, p and r comprises:
The monitoring terminal acquires a video sequence of the examination room scene, collects examinee motion data, and preprocesses it;
Behavior features of the examinee motion data are detected from the surveillance video sequence using a convolutional-neural-network target detection algorithm;
Biological motion data are manually selected, and deep learning is performed through the clustering algorithm to generate the adaptive parameters q, p and r.
In summary, the intelligent monitoring terminal (monitoring system) of the invention analyzes the examination room environment in real time; it can send data directly to monitoring personnel for timely handling of abnormal events, or send data to the monitoring administrator for further analysis before forwarding to the invigilator.
The above-mentioned embodiments are preferred embodiments of the present invention, and the present invention is not limited thereto, and any other modifications or equivalent substitutions that do not depart from the technical spirit of the present invention are included in the scope of the present invention.

Claims (10)

1. An examination room monitoring method based on deep learning is characterized by comprising the following steps:
s1, shooting an examination video in real time by the monitoring system, acquiring an examination image of the examination video, and preprocessing the examination image;
s2, inputting the preprocessed examination image into a trained convolutional neural network, wherein the convolutional neural network uniformly divides the image into a plurality of grids;
s3, each grid performing behavior box prediction on the object to be recognized within it, wherein the behavior box comprises the box's center coordinates, width, height, and category confidence;
s4, comparing the category score of the behavior frame with a preset threshold, filtering the behavior frame lower than the preset threshold, and performing non-maximum suppression on the behavior frame higher than the preset threshold to obtain a final detection frame;
and S5, comparing the score T of the final detection frame with the optimized adaptive parameters q, p and r to obtain the probability that the behavior frame contains the target.
2. The examination room monitoring method based on deep learning of claim 1, wherein preprocessing the examination image comprises: scaling the input image.
3. The examination room monitoring method based on deep learning of claim 1, wherein the step S3 comprises:
manually labeling behavior class candidate boxes;
the convolutional network outputting offsets relative to the candidate box to obtain the center coordinates of the behavior class box;
obtaining the confidence as the product of the predicted behavior class probability and the IOU;
and each grid predicting the conditional probabilities of all behavior classes, and obtaining the score of each behavior class box as the product of the conditional probability and the confidence.
4. The examination room monitoring method based on deep learning of claim 3, wherein manually labeling behavior class candidate boxes comprises: clustering the data set with a K-means clustering algorithm to obtain the candidate box sizes, specifically:
manually labeling the width-height data (w, h) of n behavior class boxes; selecting k candidate boxes corresponding to k candidate box clusters; computing the Euclidean distance between each behavior class box and each candidate box, and assigning boxes whose Euclidean distance is within P to the same cluster; computing the mean of each cluster to obtain a new cluster center; and repeating this process until the cluster centers no longer change; the final cluster centers being the sizes of the behavior class candidate boxes, where n > k and P > 0.
5. The examination room monitoring method based on deep learning of claim 4, wherein obtaining the confidence level by predicting the product of the probability of the behavior class and the IOU comprises:
if the behavior class falls into the grid, the predicted behavior class probability is 1, otherwise it is 0;
calculating the Area of the behavior class box as (w - x + 1) × (h - y + 1), and then calculating the ratio of the intersection to the union of the behavior class box and the label box to obtain the IOU;
and obtaining the confidence as the product of the predicted behavior class probability and the IOU.
6. The deep learning-based examination room monitoring method according to claim 5, wherein IOU = (A ∩ B) / (A ∪ B).
7. The examination room monitoring method based on deep learning of claim 1, wherein step S5 is followed by further comprising:
all detection box scores T are saved.
8. The examination room monitoring method based on deep learning of claim 7, further comprising optimizing pre-generated adaptive parameters q, p, r, specifically:
s51, taking all the stored detection box scores T as a data set;
s52, selecting 3 objects from the data set as initial cluster centers, namely the initial adaptive parameters q, p and r;
s53, calculating the distance from each score T in the data set to each cluster center, and assigning each score to the cluster whose center is nearest;
s54, recomputing each cluster's center as the mean of the scores in that cluster and updating it;
s55, repeating steps S53 and S54 until the objects in each cluster no longer change appreciably, then stopping the algorithm;
s56, comparing the obtained cluster center scores, wherein the largest is q, the next is p, and the smallest is r.
9. The examination room monitoring method based on deep learning of claim 8, wherein the generation of the adaptive parameters q, p, r comprises:
the monitoring terminal acquiring a video sequence of an examination room scene, collecting examinee motion data, and preprocessing the examinee motion data;
detecting behavior features of the examinee motion data from the surveillance video sequence by a target detection algorithm of a convolutional neural network;
and manually selecting biological motion data and performing deep learning through a clustering algorithm to generate the adaptive parameters q, p and r.
10. The examination room monitoring method based on deep learning of claim 1, wherein the probability that the behavior class box contains the target is divided into four levels, corresponding to q < T, p < T < q, r < T < p and T < r respectively.
CN202110752439.XA 2021-07-02 2021-07-02 Examination room monitoring method based on deep learning Active CN113569656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110752439.XA CN113569656B (en) 2021-07-02 2021-07-02 Examination room monitoring method based on deep learning


Publications (2)

Publication Number Publication Date
CN113569656A true CN113569656A (en) 2021-10-29
CN113569656B CN113569656B (en) 2023-08-29

Family

ID=78163656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110752439.XA Active CN113569656B (en) 2021-07-02 2021-07-02 Examination room monitoring method based on deep learning

Country Status (1)

Country Link
CN (1) CN113569656B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333070A (en) * 2022-03-10 2022-04-12 山东山大鸥玛软件股份有限公司 Examinee abnormal behavior detection method based on deep learning
CN114816077A (en) * 2022-06-30 2022-07-29 济南大学 Multimode-fused intelligent glove system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901251A (en) * 2010-06-28 2010-12-01 吉林大学 Method for analyzing and recognizing complex network cluster structure based on markov process metastability
CN103297267A (en) * 2013-05-10 2013-09-11 河北远东通信系统工程有限公司 Method and system for network behavior risk assessment
CN103699822A (en) * 2013-12-31 2014-04-02 同济大学 Application system and detection method for users' abnormal behaviors in e-commerce based on mouse behaviors
CN109711377A (en) * 2018-12-30 2019-05-03 陕西师范大学 Standardize examinee's positioning and method of counting in the single-frame images of examination hall monitoring
CN111488920A (en) * 2020-03-27 2020-08-04 浙江工业大学 Bag opening position detection method based on deep learning target detection and recognition
CN111709310A (en) * 2020-05-26 2020-09-25 重庆大学 Gesture tracking and recognition method based on deep learning
CN112417990A (en) * 2020-10-30 2021-02-26 四川天翼网络服务有限公司 Examination student violation behavior identification method and system



Also Published As

Publication number Publication date
CN113569656B (en) 2023-08-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant