CN111898492A - Intelligent campus study room monitoring and management system - Google Patents


Info

Publication number
CN111898492A
CN111898492A (application number CN202010681572.6A)
Authority
CN
China
Prior art keywords
module
study room
monitoring
management system
intelligent campus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010681572.6A
Other languages
Chinese (zh)
Inventor
张智恒
李明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Shiyou University
Original Assignee
Xian Shiyou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Shiyou University
Priority to CN202010681572.6A
Publication of CN111898492A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent campus study room monitoring and management system, belonging to the technical field of study room monitoring and management. The system comprises a monitoring module, a transmission module, a PC terminal, a graphic preprocessing module and a feedback module; the output of the monitoring module is electrically connected to the input of the transmission module, the output of the transmission module is electrically connected to the input of the PC terminal, and the output of the PC terminal is connected over a network to the input of the feedback module. By adopting an improved stage-feature-fusion convolutional neural network model, cross connections between low-level and high-level convolution results strengthen generalization, prevent over-fitting, and improve robustness; combined with the classifier structure, information from unoccluded regions is extracted more effectively, raising the correct expression recognition rate. A FaceNet structure is applied so that information can be processed promptly, and face recognition can record and feed back identities accurately and quickly.

Description

Intelligent campus study room monitoring and management system
Technical Field
The invention belongs to the technical field of study room monitoring and management, and particularly relates to an intelligent campus study room monitoring and management system.
Background
In the 21st century, with the development of computer and network technologies, information recognition and detection have become increasingly important. As technology advances, traditional identity recognition methods face growing challenges and sharply reduced reliability, so new information recognition and detection technologies are bound to emerge.
The concept of deep learning originates from artificial neural networks, a network structure designed to simulate the activity of the human brain; deep learning has developed rapidly in recent years and has become a popular field of machine learning research. Early on, limited by experimental conditions, training a multi-layer neural network required propagating signals through the whole network at great computational cost, so multi-layer structures achieved little in practical applications. The deep belief network proposed by Hinton in 2006 offered a new way around the difficulty of network training, and multi-layer networks regained attention. Meanwhile, with the rapid development of the internet, data volumes have increased enormously, the computing power of computers has also improved greatly, and deep networks can now be trained well.
Methods based on deep learning have surged over the last decade, and in particular the migration from conventional CPU computation to GPU computation has made it feasible to train complex neural networks. Many top technology companies at home and abroad, including Facebook, Google, Microsoft and Baidu, have invested substantial resources in neural network research. The convolutional neural network (CNN) is a development of the conventional neural network that is particularly well suited to modeling problems in which the raw image data has multiple dimensions. It extends the vector dot product to a two-dimensional convolution operation, and the convolution operator encodes the original image into feature maps containing specific features. A convolutional network shares its parameters: each convolution kernel has only two spatial dimensions, and for data such as RGB images, parameter sharing greatly reduces the number of parameters.
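The parameter-sharing point can be made concrete with a quick count; the layer sizes below are illustrative assumptions, not values from the patent:

```python
# Parameter-count comparison: fully connected layer vs. shared-kernel convolution.
# Sizes are illustrative assumptions, not taken from the patent.

def dense_params(in_h, in_w, in_c, out_units):
    """Weights of a fully connected layer over a flattened image (no bias)."""
    return in_h * in_w * in_c * out_units

def conv_params(k_h, k_w, in_c, out_c):
    """Weights of a conv layer: one k_h x k_w kernel per (in, out) channel pair."""
    return k_h * k_w * in_c * out_c

# A 96x96 RGB input (the normalized size mentioned later in the patent).
fc = dense_params(96, 96, 3, 64)   # 96*96*3*64 = 1,769,472 weights
conv = conv_params(3, 3, 3, 64)    # 3*3*3*64   = 1,728 weights

print(fc, conv)  # the shared-kernel layer needs roughly 1000x fewer parameters
```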
Disclosure of Invention
The invention aims to: provide an intelligent campus study room monitoring and management system that solves the problems of performing face recognition on students in study rooms, recognizing abnormal behavior of the identified students, and establishing a score-deduction recording and feedback system for students exhibiting abnormal behavior.
In order to achieve this aim, the invention adopts the following technical scheme: an intelligent campus study room monitoring and management system comprising a monitoring module, a transmission module, a PC terminal, a graphic preprocessing module and a feedback module, wherein the output of the monitoring module is electrically connected to the input of the transmission module, the output of the transmission module is electrically connected to the input of the PC terminal, and the output of the PC terminal is connected over a network to the input of the feedback module.
As a further description of the above technical solution:
the PC end comprises a graph preprocessing module, a face detection module, a behavior feature recognition module, a face recognition module and a classification statistical module, wherein the output end of the graph preprocessing module is in data connection with the input end of the face detection module, the output end of the face detection module is in data connection with the input ends of the behavior feature recognition module and the face recognition module respectively, and the output ends of the behavior feature recognition module and the face recognition module are in data connection with the input end of the classification statistical module.
As a further description of the above technical solution:
the monitoring module is composed of a plurality of cameras in the study room.
As a further description of the above technical solution:
the human face detection module, the behavior feature recognition module and the human face recognition module are processed based on a deep convolutional neural network, and the deep convolutional neural network is established based on a deep learning Tensorflow framework.
As a further description of the above technical solution:
the model of the face recognition module is FaceNet.
As a further description of the above technical solution:
the behavior feature identification module is a stage feature fusion convolutional neural network.
As a further description of the above technical solution:
and the output end of the PC end is connected with the input end of the school internal program APP through a network.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. The invention applies a FaceNet structure, so information can be processed in a timely manner, and face recognition can record and feed back identities accurately and quickly.
2. The invention applies the MTCNN face detection algorithm, whose three cascaded networks perform the face classification, bounding-box regression and feature-point regression tasks simultaneously in a multi-task fashion. Compared with a single-task model, the correlation among the tasks effectively improves the performance of the network model during training.
3. The invention adopts an improved stage-feature-fusion convolutional neural network model, so that cross connections between low-level and high-level convolution results strengthen generalization, prevent over-fitting, and improve robustness; combined with the classifier structure, information from unoccluded regions is extracted more effectively, raising the correct expression recognition rate. At the same time, images are normalized to 96 × 96, a size that preserves as much image information as possible while keeping the data small enough to ensure fast, real-time recognition.
Drawings
Fig. 1 is a schematic block diagram of an intelligent campus study room monitoring and management system according to the present invention;
fig. 2 is a schematic structural diagram of a sub-module of a PC terminal in the intelligent campus study room monitoring and management system according to the present invention;
FIG. 3 is a structural diagram of a FaceNet network in the intelligent campus study room monitoring and management system according to the present invention;
fig. 4 is a structural diagram of a stage feature fusion convolutional neural network in an intelligent campus study room monitoring and management system according to the present invention.
Illustration of the drawings:
1. a monitoring module; 2. a transmission module; 3. a PC terminal; 31. a graphics pre-processing module; 32. a face detection module; 33. a behavior feature identification module; 34. a face recognition module; 35. a classification statistic module; 4. and a feedback module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to figs. 1-4, the invention provides the following technical solution: an intelligent campus study room monitoring and management system comprising a monitoring module 1, a transmission module 2, a PC terminal 3, a graphic preprocessing module 31 and a feedback module 4, wherein the output of the monitoring module 1 is electrically connected to the input of the transmission module 2, the output of the transmission module 2 is electrically connected to the input of the PC terminal 3, and the output of the PC terminal 3 is connected over a network to the input of the feedback module 4.
Specifically, as shown in figs. 1 and 2, the PC terminal 3 is composed of a graphic preprocessing module 31, a face detection module 32, a behavior feature recognition module 33, a face recognition module 34 and a classification statistics module 35. The output of the graphic preprocessing module 31 has a data connection to the input of the face detection module 32; the output of the face detection module 32 has data connections to the inputs of the behavior feature recognition module 33 and the face recognition module 34; and the outputs of the behavior feature recognition module 33 and the face recognition module 34 have data connections to the input of the classification statistics module 35.
Specifically, as shown in fig. 1, the monitoring module 1 is composed of a plurality of cameras in the study room, so the room can be monitored from multiple angles, improving the comprehensiveness of monitoring and capture.
Specifically, as shown in figs. 1 and 2, the face detection module 32, the behavior feature recognition module 33 and the face recognition module 34 are based on deep convolutional neural networks established on the deep learning TensorFlow framework. Study room behaviors such as reading, dozing off, resting, eating and making phone calls have no official data set for training, so these behaviors must be photographed in a laboratory and made into a data set to support the learning of the deep convolutional neural network and to improve the accuracy and coverage of behavior recognition.
Specifically, as shown in fig. 1 and 2, the model of the face recognition module 34 is FaceNet, and the FaceNet model can effectively improve the recognition accuracy of the face recognition module 34.
Specifically, as shown in figs. 1 and 2, the behavior feature recognition module 33 is a stage-feature-fusion convolutional neural network comprising four convolutional layers, three max-pooling layers and two fully connected layers. An image that has passed face detection enters the network and is first processed by two convolutional layers, then pooled by the first max-pooling layer; it then passes through the third convolutional layer and the second pooling layer, then the fourth convolutional layer and the third pooling layer, before finally entering the first and second fully connected layers. To prevent important features from being lost in low-level processing, a dropout technique selects local components from the outputs of the first two pooling layers, which are fed into the first fully connected layer together with the output of the third pooling layer. Fusing low-level and high-level features in the first fully connected layer strengthens the generalization of the network's classification, prevents over-fitting, improves robustness and, combined with the classifier structure, extracts information from unoccluded regions more effectively, raising the correct behavior recognition rate.
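The stage-fusion layout described above can be sketched as a shape trace; the channel counts, the 'same'-padded 3×3 convolutions and the dropout keep rate are assumptions, since the patent does not specify them:

```python
# Shape trace of the stage-feature-fusion CNN described above:
# conv1-conv2 -> pool1 -> conv3 -> pool2 -> conv4 -> pool3,
# with pool1/pool2 outputs (after dropout) fused with pool3 into FC1.
# Channel counts and 'same'-padded 3x3 convs are assumptions for illustration.

def pool2x2(h, w):
    return h // 2, w // 2

h, w = 96, 96                      # normalized input size from the patent
c1, c2, c3, c4 = 32, 32, 64, 128   # assumed channel counts

# conv1, conv2 ('same' padding keeps 96x96), then pool1
h, w = pool2x2(h, w)               # 48x48, c2 channels
low1 = h * w * c2
# conv3, then pool2
h, w = pool2x2(h, w)               # 24x24, c3 channels
low2 = h * w * c3
# conv4, then pool3
h, w = pool2x2(h, w)               # 12x12, c4 channels
high = h * w * c4

keep = 0.5                         # assumed dropout keep rate on the low-level branches
fused = int(low1 * keep) + int(low2 * keep) + high
print(fused)                       # length of the vector entering FC1
```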
Specifically, as shown in fig. 1, the output of the PC terminal 3 is connected over a network to the input of the school's internal APP. The violation behaviors and score deductions of offending students recorded by the PC terminal 3 are sent to the school applet, and each student's study room score is recorded in the background; a student whose score has been deducted to 0 can no longer enter the study room. This actively promotes a better learning environment and improved student conduct.
The system flow shown in FIGS. 1-4 is as follows:
1. the monitoring module 1 captures facial and body images of students at high frequency.
2. Image data shot by the camera in the study room is uploaded to the PC terminal 3 through the transmission module 2 for processing.
3. The PC terminal 3 collects the data from the transmission module and preprocesses the pictures with functions from the OpenCV library, including grey-level equalization, rotation and normalization, outputting image information of a uniform scale.
4. The preprocessed pictures are input to an MTCNN face detection program whose parameters have been trained on a data set, and pictures with detected faces are output.
5. The photos after face detection are input to the face recognition model based on the deep convolutional neural network for identification, and the behavior recognition model based on the deep neural network locates the human body and classifies its behavior.
6. The classification statistics module 35 tallies the recognition result of each violation picture, records the student IDs involved, and prepares the violation feedback information.
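The preprocessing in step 3 can be sketched as follows; NumPy stands in here for the OpenCV calls (e.g. cv2.equalizeHist), and the test image and helper names are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the preprocessing step: grey-level equalization plus
# rescaling of pixel values, implemented with NumPy in place of the OpenCV
# functions the patent mentions (the exact cv2 pipeline is not specified).

def equalize_hist(gray):
    """Histogram-equalize an 8-bit greyscale image (same idea as cv2.equalizeHist)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def normalize(gray):
    """Scale pixel values to [0, 1] for the network input."""
    return gray.astype(np.float32) / 255.0

img = np.tile(np.arange(0, 128, dtype=np.uint8), (64, 1))  # low-contrast test image
eq = equalize_hist(img)
print(eq.min(), eq.max())  # equalization stretches the range to 0..255
```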
In flow step 4 above, the MTCNN face detection model used in the invention uses a face data set A = {(X1, Y1), (X2, Y2), …, (XN, YN)}, i = 1, 2, …, N, where N is the number of samples in the data set, X denotes a sample and Y the sample's output. The specific principle is as follows:
MTCNN consists of three network structures (P-Net, R-Net, O-Net). P-Net: this network mainly obtains candidate windows and bounding-box regression vectors for face regions; the bounding-box regression calibrates the candidate windows, and highly overlapping candidates are then merged by non-maximum suppression (NMS). R-Net: this network again removes false-positive regions through bounding-box regression and NMS. O-Net: this layer has one more convolutional layer than R-Net, so its results are finer. Its role is otherwise the same as R-Net, but it supervises the face region more closely and also outputs five facial landmarks.
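The NMS merging step used between the MTCNN stages can be sketched as follows; the [x1, y1, x2, y2] box format and the 0.5 IoU threshold are conventional assumptions, not values from the patent:

```python
import numpy as np

# Sketch of non-maximum suppression (NMS): keep the highest-scoring box,
# drop candidates whose IoU with it exceeds a threshold, repeat.

def nms(boxes, scores, iou_thresh=0.5):
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (area(boxes[[i]])[0] + area(boxes[rest]) - inter)
        order = rest[iou <= iou_thresh]  # suppress heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the two overlapping boxes collapse to one
```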
The MTCNN loss mainly comprises three parts. Face classification (cross-entropy loss):
L_i^det = -( y_i^det * log(p_i) + (1 - y_i^det) * log(1 - p_i) )
where p_i is the network's probability that sample x_i is a face and y_i^det is the ground-truth label.
Bounding-box regression:
L_i^box = || yhat_i^box - y_i^box ||_2^2
where yhat_i^box is the predicted box offset and y_i^box the ground truth.
Landmark localization:
L_i^landmark = || yhat_i^landmark - y_i^landmark ||_2^2
where yhat_i^landmark is the predicted landmark position and y_i^landmark the ground truth.
Flow step 5 includes the construction of a face recognition model based on a deep convolutional neural network; the model parameters are trained on samples from the LFW face recognition data set until recognition accuracy is high, after which face images of university teachers and students are used as a test set for a generalization test.
The face recognition model is FaceNet, for which the following should be explained:
Batch: the input face image samples, i.e. pictures in which a face has been found by face detection and cropped to a fixed size.
Deep architecture: the deep learning architecture adopted, here the GoogLeNet structure.
L2: feature normalization.
Embeddings: the feature vectors generated by the deep learning network after L2 normalization.
Triplet Loss: a loss that directly learns separability between the features of three input pictures, making the feature distance between the same identity as small as possible and the distance between different identities as large as possible.
A triplet consists of an anchor (A), a negative (N) and a positive (P). Any picture can serve as the anchor; a picture of the same person is then the positive, and a picture of a different person is the negative. Through learning, the inter-class distance is made larger than the intra-class distance.
The triplet loss function is:
L = Σ_i [ || f(x_i^a) - f(x_i^p) ||_2^2 - || f(x_i^a) - f(x_i^n) ||_2^2 + α ]_+
where the left squared norm represents the intra-class distance, the right squared norm the inter-class distance, and α is a constant margin. During optimization, gradient descent drives the loss down continuously, i.e. the intra-class distance keeps decreasing while the inter-class distance keeps increasing.
To obtain the best training effect, for each anchor the hard positive
argmax over x_i^p of || f(x_i^a) - f(x_i^p) ||_2^2
and the hard negative
argmin over x_i^n of || f(x_i^a) - f(x_i^n) ||_2^2
are selected.
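The triplet loss above can be checked numerically with toy embeddings; the 2-D vectors and the margin value are illustrative assumptions, not real FaceNet outputs:

```python
import numpy as np

# Numeric sketch of the triplet loss: squared intra-class distance minus
# squared inter-class distance plus a margin, clipped at zero.

def triplet_loss(anchor, positive, negative, alpha=0.2):
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # intra-class distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # inter-class distance
    return np.maximum(d_pos - d_neg + alpha, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same identity: close to the anchor
n = np.array([1.0, 0.0])   # different identity: far from the anchor
print(triplet_loss(a, p, n))  # 0.01 - 1.0 + 0.2 < 0, so the loss clips to 0
```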
Flow step 5 above also includes the construction of a human behavior recognition model based on a deep convolutional neural network. Because study room behaviors such as reading, dozing off, resting, eating and making phone calls have no official data set for training, these behaviors must be photographed in a laboratory and made into a data set to support the learning of the deep convolutional neural network.
The human behavior recognition model adopts the stage-feature-fusion convolutional neural network comprising four convolutional layers, three max-pooling layers and two fully connected layers. An image that has passed face detection is first processed by two convolutional layers and pooled by the first max-pooling layer; it then passes through the third convolutional layer and the second pooling layer, then the fourth convolutional layer and the third pooling layer, and finally enters the two fully connected layers. To prevent important features from being lost in low-level processing, dropout selects local components from the outputs of the first two pooling layers, which are fed into the first fully connected layer together with the output of the third pooling layer; fusing low-level and high-level features there strengthens the generalization of the network's classification, prevents over-fitting, improves robustness and, combined with the classifier structure, extracts information from unoccluded regions more effectively, raising the correct behavior recognition rate.
In flow step 6 above, the violation behaviors and score-deduction rules recorded by the PC terminal are sent to the school applet, and each student's study room score is recorded in the background; a student whose score has been deducted to 0 can no longer enter the study room. This actively promotes a better learning environment and improved student conduct.
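The deduction-and-lockout rule can be sketched as follows; the starting score, the per-violation point values, and the class/method names are assumptions, since the patent does not specify them:

```python
# Toy sketch of the deduction/feedback rule: each recorded violation subtracts
# points, and a student whose score reaches 0 loses study room access.

class StudyRoomAccount:
    def __init__(self, student_id, score=10):
        self.student_id = student_id
        self.score = score

    def record_violation(self, points):
        """Deduct points for one recorded violation (score floors at 0)."""
        self.score = max(0, self.score - points)

    def may_enter(self):
        """A student whose score has fallen to 0 is denied entry."""
        return self.score > 0

acct = StudyRoomAccount("2020-0715")
for _ in range(5):
    acct.record_violation(2)   # e.g. eating or phoning in the study room
print(acct.score, acct.may_enter())
```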
The above description covers only preferred embodiments of the invention, and the scope of the invention is not limited thereto; any equivalent substitution or change made by a person skilled in the art according to the technical solution and inventive concept of the invention shall fall within the protection scope of the invention.

Claims (7)

1. An intelligent campus study room monitoring and management system, comprising a monitoring module (1), a transmission module (2), a PC terminal (3), a graphic preprocessing module (31) and a feedback module (4), characterized in that the output of the monitoring module (1) is electrically connected to the input of the transmission module (2), the output of the transmission module (2) is electrically connected to the input of the PC terminal (3), and the output of the PC terminal (3) is connected over a network to the input of the feedback module (4).
2. The intelligent campus study room monitoring and management system according to claim 1, wherein the PC terminal (3) is composed of a graphic preprocessing module (31), a face detection module (32), a behavior feature recognition module (33), a face recognition module (34) and a classification and statistics module (35), an output terminal of the graphic preprocessing module (31) is in data connection with an input terminal of the face detection module (32), an output terminal of the face detection module (32) is in data connection with input terminals of the behavior feature recognition module (33) and the face recognition module (34), and output terminals of the behavior feature recognition module (33) and the face recognition module (34) are in data connection with an input terminal of the classification and statistics module (35).
3. The intelligent campus study room monitoring and management system according to claim 1, wherein said monitoring module (1) is composed of a plurality of cameras in the study room.
4. The intelligent campus study room monitoring and management system according to claim 2, wherein the face detection module (32), the behavior feature recognition module (33) and the face recognition module (34) are processed based on a deep convolutional neural network, and the deep convolutional neural network is established based on a deep learning Tensorflow framework.
5. The intelligent campus study room monitoring and management system of claim 2, wherein the model of said face recognition module (34) is FaceNet.
6. The intelligent campus study room monitoring and management system according to claim 2, wherein said behavior feature identification module (33) is a stage feature fusion convolutional neural network.
7. The intelligent campus study room monitoring and management system according to claim 1, wherein the output terminal of said PC terminal (3) is connected to the input terminal of said school internal program APP via a network.
CN202010681572.6A 2020-07-15 2020-07-15 Intelligent campus study room monitoring and management system Pending CN111898492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010681572.6A CN111898492A (en) 2020-07-15 2020-07-15 Intelligent campus study room monitoring and management system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010681572.6A CN111898492A (en) 2020-07-15 2020-07-15 Intelligent campus study room monitoring and management system

Publications (1)

Publication Number Publication Date
CN111898492A true CN111898492A (en) 2020-11-06

Family

ID=73192762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010681572.6A Pending CN111898492A (en) 2020-07-15 2020-07-15 Intelligent campus study room monitoring and management system

Country Status (1)

Country Link
CN (1) CN111898492A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040159A (en) * 2021-11-05 2022-02-11 漳州爱果冻信息科技有限公司 Intelligent study room

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359606A (en) * 2018-10-24 2019-02-19 江苏君英天达人工智能研究院有限公司 A kind of classroom real-time monitoring and assessment system and its working method, creation method
CN109815795A (en) * 2018-12-14 2019-05-28 深圳壹账通智能科技有限公司 Classroom student's state analysis method and device based on face monitoring
CN110889672A (en) * 2019-11-19 2020-03-17 哈尔滨理工大学 Student card punching and class taking state detection system based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIAO Jun; LI Kai; XU Shaowu: "Target tracking based on multi-layer feature fusion of convolutional neural networks", Modern Electronics Technique *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination