CN114445862A - Attendance detection method and system based on offline classroom monitoring - Google Patents

Attendance detection method and system based on offline classroom monitoring

Info

Publication number
CN114445862A
Authority
CN
China
Prior art keywords
frame
human
frames
head
key frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210068367.1A
Other languages
Chinese (zh)
Inventor
彭苏婷
于丹
肖鹏
王艳秋
张彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Neusoft Education Technology Group Co ltd
Original Assignee
Dalian Neusoft Education Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Neusoft Education Technology Group Co ltd filed Critical Dalian Neusoft Education Technology Group Co ltd
Priority to CN202210068367.1A priority Critical patent/CN114445862A/en
Publication of CN114445862A publication Critical patent/CN114445862A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Educational Technology (AREA)
  • Operations Research (AREA)
  • Computing Systems (AREA)
  • Game Theory and Decision Science (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an attendance detection method and system based on offline classroom monitoring. The method comprises: training a human-head and human-body joint detection network model and optimizing the model weights with a weighted joint loss; acquiring offline classroom monitoring video and extracting key frame images at a fixed frame interval; inputting each key frame image into the trained joint detection network model to obtain human-head candidate frames; removing repeated frames, background frames and unreasonable frames from the candidate frames with a post-processing algorithm to obtain the number of people in the current key frame; and dividing the number of people detected in the key frame by the expected number of people for that key frame to obtain the attendance rate corresponding to the key frame. The method yields the number of attendees and hence the attendance rate for each key frame; it is accurate, real-time and highly automated, and helps administrators quickly grasp the real-time distribution of classroom attendance.

Description

Attendance detection method and system based on offline classroom monitoring
Technical Field
The invention relates to the technical field of image processing, in particular to an attendance detection method and system based on offline classroom monitoring.
Background
Offline class quality is assessed with many indexes, such as attendance rate, head-up rate, front-row seating rate and abnormal-behaviour detection. Among them, attendance rate is particularly important: it reflects to some extent the students' enthusiasm for and interest in a course, and can also serve as an important reference index for evaluating the teacher. Several attendance statistics methods are in common use, but each has drawbacks: offline roll call takes up class time; face-recognition attendance machines, fingerprint punch clocks and the like require extra budget for equipment; and WeChat applet check-in is vulnerable to proxy check-in and remote photo check-in. These methods are generally performed only once per class, cannot detect students who leave early, and cannot obtain attendance status dynamically in real time. The prior art also provides attendance detection methods based on living-body recognition and image analysis; although these can count classroom video automatically, they require a series of pre-modules such as manual feature extraction and multi-region detection, whose accuracy strongly affects subsequent detection, making the methods complex and their detection efficiency hard to guarantee.
Disclosure of Invention
According to the problems in the prior art, the invention discloses an attendance detection method based on offline class monitoring, which specifically comprises the following steps:
training a human head and human body joint detection network model, and optimizing the weight of the model by adopting weighted joint loss;
acquiring offline classroom monitoring videos, and extracting key frame images at intervals of a certain number of frames;
inputting the key frame image into a trained human head and human body joint detection network model so as to obtain a human head candidate frame;
removing repeated frames, background frames and unreasonable frames in the candidate frames by adopting a post-processing algorithm so as to obtain the number of people contained in the current key frame;
and dividing the number of people detected in the key frame by the expected number of people for that key frame to obtain the attendance rate corresponding to the key frame.
Further, when training the human head and human body joint detection network model:
according to the principle that the head and upper body of a person appear simultaneously and in a roughly fixed proportion, automatically generating a corresponding human body frame at a fixed ratio from each head frame, thereby obtaining a data set annotated with both head frames and body frames, and dividing the data set proportionally into a training set and a validation set;
establishing a human head and human body joint detection network model, wherein the model comprises a backbone network and an FPN module, the backbone network is used for extracting feature response graphs under different resolutions and comprises a series of continuous CPM modules and CBL modules, the FPN module performs up-sampling and summation processing on the feature response graphs,
inputting the training set into a human head and human body joint detection network model, extracting feature information with different resolutions, and fusing the feature information so as to output candidate frame coordinates of a human head frame and a human body frame;
and comparing the actual coordinates of the human head frame and the human body frame with the candidate frame predicted by the model to obtain weighted joint loss, and optimizing the weight of the model based on the weighted joint loss.
Further, when the weighted joint loss is adopted to optimize the weights of the human-head and human-body joint detection network model, the weighted joint loss is expressed as:
L = \sum_{m=0}^{1} L^{m}(\{p_i^{m}\}, \{t_i^{m}\})
wherein i represents the i-th candidate frame, m takes the value 0 or 1 to denote the head and the body respectively, p is the category probability of the candidate frame, and t is the coordinate information of the candidate frame;
the formula for calculating the loss of human head and human body is as follows:
L^{m} = \lambda_{m} \sum_{i} L_{c}(p_i^{m}, p_i^{m*}) + \alpha_{m} \sum_{i} p_i^{m*} L_{r}(t_i^{m}, t_i^{m*})
wherein L_c is the log loss of classification, L_r is the mean square error of coordinates, p* and t* respectively denote the class probability and coordinate information of the ground-truth label, and λ_m and α_m denote the corresponding weights.
The model weights are optimized by stochastic gradient descent according to the weighted joint loss.
An attendance detection system based on offline classroom monitoring comprises:
the training module is used for training the human-head and human-body combined detection network model and optimizing the weight of the model by adopting weighted combined loss;
the key frame extraction module is used for acquiring key frame images of the classroom video;
the human head detection module is used for detecting a human head candidate frame in the key frame;
the post-processing module is used for removing repeated frames, background frames and unreasonable frames in the candidate frames to obtain the number of people in the key frames;
and the output module is used for calculating the attendance rate according to the number of people detected in the key frame and outputting the attendance rate.
Owing to the adoption of the above technical scheme, the invention provides an attendance detection method and system based on offline classroom monitoring. The method first trains a human-head and human-body joint detection network model, optimizing the model weights with a weighted joint loss; acquires offline classroom monitoring video and extracts key frame images at a fixed frame interval; inputs the key frame images into the trained joint detection network model to obtain human-head candidate frames; removes repeated frames, background frames and unreasonable frames from the candidates with a post-processing algorithm to obtain the number of people in the current key frame; and divides the number of people detected in the key frame by the expected number of people to obtain the attendance rate corresponding to the key frame. The neural network used by the invention is a semi-supervised human-head and human-body joint detection model: a human head does not appear in isolation but is usually accompanied by body information such as the shoulders, so when the texture features of the head cannot be fully extracted, the head position can still be detected with the aid of the body information. Starting from a data set with only head annotations, the invention automatically generates body annotation frames by a semi-supervised method and uses the body frames to assist head-frame detection, thereby addressing heads that are small, blurred or severely occluded. The invention can detect the number of attendees throughout the whole class in real time and reflect situations such as late arrival and early departure.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of the system of the present invention;
FIG. 3 is a structure diagram of the human-head and human-body joint detection network model of the present invention;
FIG. 4 is a schematic diagram of a CBL module according to the present invention;
FIG. 5 is a schematic diagram of a CPM module of the present invention;
FIG. 6 is a diagram of an FPN module according to the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments:
as shown in fig. 1, an attendance rate detection method based on offline class monitoring includes the following steps:
s1: training a human head and human body joint detection network model, optimizing the weight of the model by adopting weighted joint loss, wherein the training process of the human head and human body joint detection network model specifically comprises the following steps:
the data set is acquired first and then the human head and human body joint detection network model is trained, and the model structure is shown in fig. 3.
S1.1: acquiring a head data set
The data set with head frame annotations is obtained from a public database or annotated manually. Under normal conditions the head and the upper body of a person appear simultaneously and in a roughly fixed proportion, so a real body frame can be simulated from the head frame by a proportional box. This yields a data set annotated with both head frames and body frames, which is divided proportionally into a training set and a validation set.
Let t_0 be the coordinates of the head frame obtained from the data set and t_1 the coordinates of the generated human body frame; they can be expressed as:

t_0 = (x_0, y_0, w_0, h_0)

t_1 = (x_0, y_0 + k_1 h_0, k_2 w_0, k_3 h_0)

wherein x_0, y_0 respectively represent the distance of the head-frame centre point from the x axis and the y axis, w_0, h_0 its width and height, and k_1, k_2, k_3 are proportional constants. It should be noted that the t_1 coordinate proportion set in the present invention is only for reference and can be changed according to actual needs.
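The proportional body-frame generation can be sketched in a few lines of Python. The ratio values below are illustrative assumptions, not the patent's actual constants (which the text says are tunable in any case):

```python
def body_box_from_head(head_box, w_ratio=3.0, h_ratio=4.0):
    """Generate a pseudo body box from a head box (x, y, w, h).

    x, y are the head-box centre coordinates; w_ratio and h_ratio are
    illustrative proportional constants (k_2, k_3 in the text).
    """
    x0, y0, w0, h0 = head_box
    w1 = w0 * w_ratio              # body is wider than the head
    h1 = h0 * h_ratio              # upper body extends below the head
    y1 = y0 + (h1 - h0) / 2.0      # shift the centre down so both boxes share a top edge
    return (x0, y1, w1, h1)
```

With these example ratios, a head box centred at (100, 50) of size 20×20 yields a body box centred at (100, 80.0) of size 60.0×80.0.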
S1.2: extracting feature response maps using a backbone network
After the training set is input into the backbone network, feature response maps at different resolutions are obtained through several consecutive CPM and CBL modules. With an input size of 640x640, features of sizes 320x320, 160x160, 80x80, 40x40 and 20x20 are obtained, which serve as inputs to the subsequent modules.
The CBL module consists of a 3×3 convolution layer (Conv), a batch normalization (BN) layer and a Leaky ReLU activation layer, as shown in fig. 4. The CPM (Context Prediction Module) borrows the idea of the Inception structure and consists of three branches processed by 1, 2 and 3 CBL modules respectively, extracting features with receptive fields of different sizes; finally a Concat operation joins the branch features in parallel, making the network wider and capturing more receptive-field information.
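The BN and Leaky ReLU stages of a CBL block can be illustrated in plain Python (the 3×3 convolution stage is omitted here; `conv_out` stands in for its pre-computed outputs, and all values are made up for illustration):

```python
import math

def leaky_relu(x, slope=0.1):
    # Leaky ReLU lets a small, non-zero gradient through for negative inputs
    return x if x >= 0 else slope * x

def batch_norm(values, eps=1e-5):
    # Normalise a batch of activations to zero mean / unit variance
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var + eps) for v in values]

# A CBL block applies Conv, then BN, then Leaky ReLU; here only the
# BN + activation stages are shown, on hypothetical conv outputs.
conv_out = [2.0, -1.0, 0.5, -1.5]
activated = [leaky_relu(v) for v in batch_norm(conv_out)]
```

Negative activations are scaled by the slope rather than zeroed, which is the distinguishing property of Leaky ReLU over plain ReLU.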
S1.3: fusing information of different scales
The invention uses FPN modules to fuse information across scales, but not all 5 differently sized outputs of the backbone network are used as FPN inputs: the receptive field of the lower network layers is relatively small and cannot extract complete head features, so only the features of sizes 80x80 and 40x40 are processed by FPN modules. This lets the network learn both features passed from lower to higher order and features passed from higher to lower order. Taking the 80x80 feature map as an example, input 1 of the FPN is the 80x80 feature of the current branch and input 2 is the 40x40 feature of the adjacent branch; input 2 is up-sampled to 80x80 and added to input 1 to obtain the final output. No FPN module is used at the 20x20 resolution because it is already the deepest layer of the network and no smaller-resolution branch is available; it is replaced with a CBL module instead.
S1.4: obtaining candidate boxes and calculating weighted joint losses
After this series of feature extraction steps, a large number of candidate head frames and body frames are obtained. In the training stage, the actual coordinates of the head frame and the body frame are compared with the candidate frames predicted by the model to obtain the weighted joint loss, and the weights of the human-head and human-body joint detection network model are optimized by stochastic gradient descent. The weighted joint loss is expressed as:
L = \sum_{m=0}^{1} L^{m}(\{p_i^{m}\}, \{t_i^{m}\})
wherein i represents the i-th candidate frame, m takes the value 0 or 1 to denote the head and the body respectively, p is the category probability of the candidate frame, and t is the coordinate information of the candidate frame. The loss calculation formula of the human head and the human body is as follows:
L^{m} = \lambda_{m} \sum_{i} L_{c}(p_i^{m}, p_i^{m*}) + \alpha_{m} \sum_{i} p_i^{m*} L_{r}(t_i^{m}, t_i^{m*})
wherein L_c is the log loss of classification, L_r is the mean square error of coordinates, p* and t* respectively denote the class probability and coordinate information of the ground-truth label, and λ_m and α_m denote the corresponding weights, which can be set and adjusted manually.
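The per-branch structure (classification log loss plus coordinate MSE, gated by the ground-truth label and weighted by λ_m and α_m) can be computed directly; the candidate values below are hypothetical examples:

```python
import math

def branch_loss(preds, lam=1.0, alpha=1.0):
    """Weighted loss for one branch (head or body).

    `preds` is a list of (p, p_star, t, t_star) tuples per candidate box:
    predicted probability, ground-truth label (0/1), predicted coordinates,
    ground-truth coordinates.
    """
    cls_loss = 0.0
    reg_loss = 0.0
    for p, p_star, t, t_star in preds:
        # log loss of classification (binary cross-entropy)
        cls_loss += -(p_star * math.log(p) + (1 - p_star) * math.log(1 - p))
        # coordinate mean-square error, counted only for positive boxes
        reg_loss += p_star * sum((a - b) ** 2 for a, b in zip(t, t_star)) / len(t)
    return lam * cls_loss + alpha * reg_loss

# total weighted joint loss: sum of the head (m=0) and body (m=1) branches
head_preds = [(0.9, 1, (10, 10, 5, 5), (10, 10, 5, 5))]
body_preds = [(0.8, 1, (10, 20, 15, 20), (10, 22, 15, 20))]
total = branch_loss(head_preds) + branch_loss(body_preds, lam=0.5, alpha=0.5)
```

Here the head branch contributes only classification loss (its box is exact), while the body branch adds a regression penalty for the 2-pixel offset, scaled by its branch weights.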
S2: a specific frame interval is set, images are extracted from the offline classroom monitoring video at that interval, and the images are stored as key frames for subsequent detection. Key frames are extracted because a normal class lasts about 45 minutes and each minute of video corresponds to thousands of highly similar consecutive images; detecting attendance on every frame is unnecessary and would waste time and computation.
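The sampling step amounts to taking every N-th frame index. A minimal sketch, with an assumed 25 fps video sampled once per minute (the patent does not fix these values):

```python
def key_frame_indices(total_frames, interval):
    """Indices of the frames kept when sampling every `interval` frames."""
    return list(range(0, total_frames, interval))

# a 45-minute class at an assumed 25 fps, sampled once per minute
fps = 25
indices = key_frame_indices(45 * 60 * fps, interval=60 * fps)
# 45 key frames instead of 67,500 raw frames
```

In practice the same indices would drive a video reader (e.g. seeking in the monitoring stream) rather than a list of integers.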
S3: the key frame images are input into the trained human-head and human-body joint detection network model to obtain human-head candidate frames. The trained model outputs candidate frames for both the head and the body; the body frame provides auxiliary information during training, while the head frame is used to calculate the attendance rate during detection.
S4: post-processing is performed to obtain the head detection result. The NMS (non-maximum suppression) algorithm removes non-maximum values in local regions, eliminating redundant background frames and repeated prediction frames to yield the final result. In head detection, each candidate frame has a corresponding confidence. Let T be the set of candidate frames and R the screening result. NMS proceeds as follows: select the frame A with the highest confidence in T, delete it from T and add it to R; compute the IOU (intersection over union) of A with every other candidate frame in T and delete from T every frame whose IOU exceeds a certain threshold; repeat this process on the remaining frames in T until T is empty, at which point the final result is stored in R.
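The greedy NMS procedure described above maps directly to code. A self-contained sketch with boxes in (x1, y1, x2, y2) corner format (the threshold 0.5 is a typical value, not one the patent specifies):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: pick the highest-confidence box, drop every remaining
    box whose IoU with it exceeds `thresh`, and repeat until none remain."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []                      # this is R in the text; `order` is T
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)   # the second box overlaps the first and is dropped
```

The surviving box count is then the detected headcount for the key frame.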
S5: the attendance rate is calculated by dividing the number of people detected in each key frame by the expected number of people for that key frame. Integrating the results of all key frames gives the attendance situation for each period of the whole class, from which phenomena such as late arrival and early departure can be clearly seen.
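The final step is a per-key-frame ratio; the detected counts and enrolment figure below are hypothetical:

```python
def attendance_rates(detected_counts, enrolled):
    """Attendance rate per key frame: detected heads / expected headcount."""
    return [round(c / enrolled, 2) for c in detected_counts]

# hypothetical headcounts across a class of 30: latecomers at the start,
# early leavers at the end
rates = attendance_rates([28, 30, 30, 25], enrolled=30)
```

A dip in the trailing rates (here 0.83 in the last key frame) is exactly the early-departure signal the method is meant to surface.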
The embodiment of the invention provides an attendance detection system based on offline classroom monitoring, comprising: a training module for training the human-head and human-body joint detection network model and optimizing the model weights with a weighted joint loss; a key frame extraction module for acquiring key frame images of the classroom video; a human-head detection module for detecting human-head candidate frames in the key frames; a post-processing module for removing repeated frames, background frames and unreasonable frames from the candidate frames to obtain the number of people in the key frames; and an output module for calculating and outputting the attendance rate. For specific implementation, refer to the method embodiments.
The invention provides an attendance detection method and system based on offline classroom monitoring: a human-head and human-body joint detection network model is trained and its weights are optimized with a weighted joint loss; key frame images are extracted from the class video at a fixed frame interval; the trained joint detection network model detects the images to obtain a large number of human-head candidate frames; and a post-processing method filters out repeated frames and background frames to obtain the number of attendees and hence the attendance rate corresponding to each key frame.
Because the human body frames used in training the joint detection model are generated automatically, a model with higher precision can be trained while greatly reducing manual annotation cost.
The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto. Any equivalent replacement or change that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention, according to its technical solutions and inventive concept, shall fall within the protection scope of the present invention.

Claims (4)

1. An attendance detection method based on offline classroom monitoring is characterized by comprising the following steps:
training a human head and human body joint detection network model, and optimizing the weight of the model by adopting weighted joint loss;
acquiring offline classroom monitoring videos, and extracting key frame images at intervals of a certain number of frames;
inputting the key frame image into a trained human head and human body joint detection network model so as to obtain a human head candidate frame;
removing repeated frames, background frames and unreasonable frames in the candidate frames by adopting a post-processing algorithm so as to obtain the number of people contained in the current key frame;
and dividing the number of people detected in the key frame by the expected number of people for that key frame to obtain the attendance rate corresponding to the key frame.
2. The method of claim 1, wherein: when training the human head and human body joint detection network model:
according to the principle that the head and upper body of a person appear simultaneously and in a roughly fixed proportion, automatically generating a corresponding human body frame at a fixed ratio from each head frame, thereby obtaining a data set annotated with both head frames and body frames, and dividing the data set proportionally into a training set and a validation set;
establishing a human head and human body joint detection network model, wherein the model comprises a backbone network and an FPN module, the backbone network is used for extracting feature response graphs under different resolutions and comprises a series of continuous CPM modules and CBL modules, the FPN module performs up-sampling and summation processing on the feature response graphs,
inputting the training set into a human head and human body joint detection network model, extracting feature information with different resolutions, and fusing the feature information so as to output candidate frame coordinates of a human head frame and a human body frame;
and comparing the actual coordinates of the human head frame and the human body frame with the candidate frame predicted by the model to obtain weighted joint loss, and optimizing the weight of the model based on the weighted joint loss.
3. The method of claim 1, wherein: when the weighting of the human head and body joint detection network model is optimized by adopting the weighting joint loss, wherein the weighting joint loss is expressed as:
L = \sum_{m=0}^{1} L^{m}(\{p_i^{m}\}, \{t_i^{m}\})
wherein i represents the i-th candidate frame, m takes the value 0 or 1 to denote the head and the body respectively, p is the category probability of the candidate frame, and t is the coordinate information of the candidate frame;
the formula for calculating the loss of human head and human body is as follows:
L^{m} = \lambda_{m} \sum_{i} L_{c}(p_i^{m}, p_i^{m*}) + \alpha_{m} \sum_{i} p_i^{m*} L_{r}(t_i^{m}, t_i^{m*})
wherein L_c is the log loss of classification, L_r is the mean square error of coordinates, p* and t* respectively denote the class probability and coordinate information of the ground-truth label, and λ_m and α_m denote the corresponding weights;
and optimizing the model weights by stochastic gradient descent according to the weighted joint loss.
4. An attendance detection system based on offline classroom monitoring, characterized by comprising:
the training module is used for training the human-head and human-body combined detection network model and optimizing the weight of the model by adopting weighted combined loss;
the key frame extraction module is used for acquiring key frame images of the classroom video;
the human head detection module is used for detecting a human head candidate frame in the key frame;
the post-processing module is used for removing repeated frames, background frames and unreasonable frames in the candidate frames to obtain the number of people in the key frames;
and the output module is used for calculating the attendance rate according to the number of people detected in the key frame and outputting the attendance rate.
CN202210068367.1A 2022-01-20 2022-01-20 Attendance detection method and system based on offline classroom monitoring Pending CN114445862A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210068367.1A CN114445862A (en) 2022-01-20 2022-01-20 Attendance detection method and system based on offline classroom monitoring


Publications (1)

Publication Number Publication Date
CN114445862A true CN114445862A (en) 2022-05-06

Family

ID=81367679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210068367.1A Pending CN114445862A (en) 2022-01-20 2022-01-20 Attendance detection method and system based on offline classroom monitoring

Country Status (1)

Country Link
CN (1) CN114445862A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781843A (en) * 2019-10-29 2020-02-11 Capital Normal University Classroom behavior detection method and electronic equipment
CN111680569A (en) * 2020-05-13 2020-09-18 Beijing Zhongguang Shangyang Technology Co., Ltd. Attendance rate detection method, device, equipment and storage medium based on image analysis
CN111832489A (en) * 2020-07-15 2020-10-27 The 38th Research Institute of China Electronics Technology Group Corporation Subway crowd density estimation method and system based on target detection
WO2020252924A1 (en) * 2019-06-19 2020-12-24 Ping An Technology (Shenzhen) Co., Ltd. Method and apparatus for detecting pedestrian in video, and server and storage medium


Non-Patent Citations (3)

Title
PANYU PENG et al.: "MLFF: A Object Detector based on a Multi-Layer Feature Fusion", 2021 International Joint Conference on Neural Networks (IJCNN), 22 September 2021 (2021-09-22) *
SONG FENG; LIU RUIGE: "Design and Implementation of an Automatic Classroom Attendance System Based on Face Recognition", Journal of Binzhou University (滨州学院学报), no. 06, 15 December 2019 (2019-12-15) *
SHEN SHOUJUAN; ZHENG GUANGHAO; PENG YIXUAN; WANG ZHANQING: "Classroom Student Detection and Head-Count Method Based on the YOLOv3 Algorithm", Software Guide (软件导刊), no. 09, 15 September 2020 (2020-09-15) *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN115190324A (en) * 2022-06-30 2022-10-14 Guangzhou AVA Electronic Technology Co., Ltd. Method, device and equipment for determining online and offline interactive live broadcast heat
CN115190324B (en) * 2022-06-30 2023-08-29 Guangzhou AVA Electronic Technology Co., Ltd. Method, device and equipment for determining online and offline interactive live broadcast heat

Similar Documents

Publication Publication Date Title
CN111259850B (en) Pedestrian re-identification method integrating random batch mask and multi-scale representation learning
JP2019035626A (en) Tire image recognition method and tire image recognition device
CN104992223A (en) Intensive population estimation method based on deep learning
CN110929687B (en) Multi-user behavior recognition system based on key point detection and working method
CN109858389A (en) Vertical ladder demographic method and system based on deep learning
CN114220143B (en) Face recognition method for wearing mask
CN109272487A (en) The quantity statistics method of crowd in a kind of public domain based on video
CN111507227B (en) Multi-student individual segmentation and state autonomous identification method based on deep learning
CN110827432B (en) Class attendance checking method and system based on face recognition
CN113177937B (en) Improved YOLOv 4-tiny-based cloth defect detection method
CN112044046B (en) Skipping rope counting method based on deep learning
CN113313678A (en) Automatic sperm morphology analysis method based on multi-scale feature fusion
CN113592839A (en) Distribution network line typical defect diagnosis method and system based on improved fast RCNN
CN116051539A (en) Diagnosis method for heating fault of power transformation equipment
CN114445862A (en) Attendance detection method and system based on offline classroom monitoring
CN113421223B (en) Industrial product surface defect detection method based on deep learning and Gaussian mixture
CN112766419A (en) Image quality evaluation method and device based on multitask learning
CN113221667A (en) Face and mask attribute classification method and system based on deep learning
CN110348386B (en) Face image recognition method, device and equipment based on fuzzy theory
CN116503398A (en) Insulator pollution flashover detection method and device, electronic equipment and storage medium
CN115690934A (en) Master and student attendance card punching method and device based on batch face recognition
CN115830701A (en) Human violation behavior prediction method based on small sample learning
CN112597842B (en) Motion detection facial paralysis degree evaluation system based on artificial intelligence
CN114663910A (en) Multi-mode learning state analysis system
CN114220082A (en) Lane line identification method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 116000 room 206, no.8-9, software garden road, Ganjingzi District, Dalian City, Liaoning Province

Applicant after: Neusoft Education Technology Group Co.,Ltd.

Address before: 116000 room 206, no.8-9, software garden road, Ganjingzi District, Dalian City, Liaoning Province

Applicant before: Dalian Neusoft Education Technology Group Co.,Ltd.
