CN116665260A - Online classroom supervision method, device, equipment and medium based on target mask

Info

Publication number: CN116665260A
Application number: CN202210148189.3A
Authority: CN (China)
Prior art keywords: facial, user, expression recognition, target mask, supervision method
Prior art date: 2022-02-17
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 朱静, 孙淑颖, 杜晓楠, 张颂研, 梁顺棠, 林伟照, 牛子晗, 麦钦, 尹邦政
Current Assignee: Guangzhou University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Guangzhou University
Priority date: 2022-02-17 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2022-02-17
Publication date: 2023-08-29
Application filed by: Guangzhou University
Publication of: CN116665260A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An online class supervision method, device, equipment and medium based on a target mask, wherein the method comprises the following steps: dynamically capturing facial pictures of a user through the user's mobile terminal; preprocessing the captured facial pictures to obtain facial features; inputting the facial features into a trained facial expression recognition model for micro-expression recognition to obtain a recognition result; and performing emotion judgment according to the recognition result and giving corresponding feedback according to the judgment result. The application uses the mobile terminal to dynamically capture facial pictures of the user and generates a judgment result through machine vision and image processing technology; a warning is given to the user in time, which plays a supervisory role, and at the same time the teaching staff receive teaching feedback, which facilitates the subsequent teaching work.

Description

Online classroom supervision method, device, equipment and medium based on target mask
Technical Field
The application relates to the technical field of machine vision and image processing, in particular to an online class supervision method, device, equipment and medium based on a target mask.
Background
With the development of computer technology, the Internet has become widely used in everyday life. In the education industry, the Internet has influenced teaching modes, and the online teaching mode is increasingly common. At the same time, in the absence of supervision, how to judge the learning state of the user and ensure the efficiency of online teaching has become a popular research topic.
Unlike conventional expressions, micro-expressions are subtle, spontaneous facial movements that reflect a person's truest inner emotional state, so they can serve as an important basis for judging that state. Machine-vision-based micro-expression recognition is becoming more and more mature, but traditional micro-expression recognition techniques cannot effectively weaken the interference that non-micro-expression regions, such as blinking or whole-head movement, introduce into micro-expression feature extraction; these regions affect the recognition result and reduce its accuracy.
Therefore, how to provide a micro-expression recognition method with high accuracy is a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the application provide an online classroom supervision method, device, equipment and medium based on a target mask, which aim to solve the problem that existing micro-expression recognition technology cannot effectively weaken the interference of non-micro-expression regions, resulting in low recognition accuracy.
In a first aspect, the present application provides an online class supervision method based on a target mask, including:
dynamically capturing facial pictures of a user through a mobile terminal of the user;
preprocessing the captured facial picture to obtain facial features;
inputting the facial features into a trained facial expression recognition model to perform micro-expression recognition, so as to obtain a recognition result;
and carrying out emotion judgment according to the recognition result, and carrying out corresponding feedback according to the judgment result.
In a second aspect, the present application also provides an online class supervision device based on a target mask, the device comprising:
the facial image capturing module is used for dynamically capturing facial images of the user through the mobile terminal of the user;
the image preprocessing module is used for preprocessing the captured facial picture to obtain facial features;
the micro-expression recognition module is used for inputting the facial features into the trained facial expression recognition model to perform micro-expression recognition, so as to obtain a recognition result;
and the judging and feedback module is used for carrying out emotion judgment according to the recognition result and carrying out corresponding feedback according to the judgment result.
In a third aspect, the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, wherein the processor implements the online class supervision method based on the target mask according to the first aspect when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the above-mentioned online class supervision method based on a target mask of the first aspect.
The application provides an online class supervision method, device, equipment and medium based on a target mask, with the following beneficial effects:
Through machine vision and image processing technology, the learning state of the user is effectively judged, teaching feedback is given to the teacher in time, and the smooth implementation of the online class is facilitated. The application uses a target mask to process the images, thereby enhancing the accuracy and feasibility of applying micro-expression recognition in the online class.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an online class supervision method based on a target mask according to an embodiment of the present application.
Detailed Description
The following further describes the embodiments of the present application with reference to the drawings. The description of these embodiments is provided to assist understanding of the present application, but is not intended to limit it. In addition, the technical features of the embodiments described below may be combined with each other as long as they do not conflict.
Referring to the flowchart of the online class supervision method based on a target mask shown in Fig. 1, the method includes:
s101, dynamically capturing facial pictures of a user through the mobile terminal of the user.
The facial pictures of the user are dynamically captured through the front camera of the user's mobile terminal; specifically, the front camera captures a facial picture of the user every 1 minute. In this embodiment, the mobile terminal is a computer.
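As a concrete illustration of this capture step, the following is a minimal sketch assuming the OpenCV (cv2) library, camera index 0 as the front camera, and local JPEG files as the storage target; only the 1-minute interval comes from the embodiment above, the rest is an assumption.

```python
# Minimal capture sketch for S101. Camera index 0, the file naming scheme and the
# frame count are illustrative assumptions; the 60-second interval follows the text.
import time
import cv2

def capture_face_pictures(interval_seconds=60, num_frames=5, out_prefix="face"):
    cap = cv2.VideoCapture(0)            # open the default (front) camera
    if not cap.isOpened():
        raise RuntimeError("Could not open the front camera")
    try:
        for i in range(num_frames):
            ok, frame = cap.read()        # grab one facial picture
            if ok:
                cv2.imwrite(f"{out_prefix}_{i}.jpg", frame)
            time.sleep(interval_seconds)  # wait 1 minute before the next capture
    finally:
        cap.release()

if __name__ == "__main__":
    capture_face_pictures()
```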
S102, preprocessing the captured face picture to obtain the face characteristics.
In one embodiment, the preprocessing includes: face detection, normalization, key point positioning and FACS face coding processing are sequentially carried out on the face image.
The method specifically comprises the following steps: carrying out normalization processing on the facial picture to enhance the facial information in the image; performing key point positioning of the facial picture using a key point positioning algorithm based on a coarse-to-fine convolutional neural network (Coarse-to-Fine CNN); and performing image masking on the human eye region using FACS facial coding to obtain the facial features.
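The sketch below illustrates this preprocessing under stated assumptions: OpenCV's Haar-cascade detector stands in for the face detection step, a hypothetical locate_keypoints() helper stands in for the Coarse-to-Fine CNN, and the eye regions are zeroed out with a target mask built from those key points. It is a sketch of the masking idea, not the patent's exact FACS implementation.

```python
# Preprocessing sketch for S102 (face detection, normalization, eye-region masking).
# locate_keypoints() is a hypothetical stand-in for the Coarse-to-Fine CNN detector;
# it is assumed to return {"left_eye": (x, y, w, h), "right_eye": (x, y, w, h)}.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(image_bgr, locate_keypoints):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                  # no face in this capture
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # assumed input size
    face = face.astype("float32") / 255.0            # normalization (rescale = 1/255)

    # Target mask: zero out the eye regions so that blinking does not disturb
    # the micro-expression features extracted from the other facial regions.
    mask = np.ones_like(face)
    for ex, ey, ew, eh in locate_keypoints(face).values():
        mask[ey:ey + eh, ex:ex + ew] = 0.0
    return face * mask
```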
S103, inputting the facial features into the trained facial expression recognition model to perform micro-expression recognition, and obtaining a recognition result.
In one embodiment, the training of the facial expression recognition model includes: completing the preprocessing of the data with the Keras picture generator (ImageDataGenerator) and creating a facial expression recognition model; calling functions of the Keras library, modifying the training parameters, training the facial expression recognition model, and finally outputting a training result; and comparing the output emotion result with the set emotions.
Specifically, normalization is applied to the facial pictures of the test set and the validation set by setting rescale=1/255; the Coarse-to-Fine CNN algorithm locates 51 interior points (eyebrows, eyes, nose, mouth) and 17 contour points, and divides the face into 6 regions (left eyebrow, right eyebrow, left eye, right eye, mouth, nose) according to the position information of the 51 interior points; image masking is performed on the human eye region according to the calibration information around the eyes given by FACS facial coding; and the target function of the Keras generator object is called to extract the processed pictures, the fit_generator() function of the Sequential model is called, the training set is configured, and the generators are made to return 800 and 2000 times respectively to train the model.
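A minimal Keras sketch of this training setup follows. The rescale=1/255 generators and the 800/2000 step counts mirror the figures above (their assignment to the training and validation generators is an assumption), while the directory layout, the 48x48 grayscale input size, the seven expression classes and the network layers are illustrative assumptions; newer Keras versions fold fit_generator() into fit(), which is used here.

```python
# Training sketch for S103, assuming TensorFlow 2.x Keras.
# "data/train" and "data/val" directories, 48x48 grayscale inputs and
# 7 expression classes are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import Sequential, layers

train_gen = ImageDataGenerator(rescale=1 / 255).flow_from_directory(
    "data/train", target_size=(48, 48), color_mode="grayscale",
    class_mode="categorical", batch_size=32)
val_gen = ImageDataGenerator(rescale=1 / 255).flow_from_directory(
    "data/val", target_size=(48, 48), color_mode="grayscale",
    class_mode="categorical", batch_size=32)

model = Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),   # 7 basic expression classes (assumed)
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# fit() accepts the generators directly (fit_generator() in older Keras).
# 800/2000 mirror the step counts in the description; which generator gets which
# value is an assumption.
model.fit(train_gen, steps_per_epoch=800, epochs=10,
          validation_data=val_gen, validation_steps=2000)
model.save("expression_model.h5")
```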
After the training of the model is completed, the facial features can be processed and the micro-expressions therein recognized to obtain a recognition result.
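For completeness, a sketch of this recognition step under the same assumptions (a saved Keras model and an assumed label order) is given below.

```python
# Inference sketch for S103: run the trained model on one preprocessed face.
# EXPRESSIONS is an assumed label order; it must match the training class indices.
import numpy as np
from tensorflow.keras.models import load_model

EXPRESSIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

model = load_model("expression_model.h5")

def recognize(face_48x48):
    """face_48x48: normalized, eye-masked 48x48 array from the preprocessing step."""
    probs = model.predict(face_48x48.reshape(1, 48, 48, 1), verbose=0)[0]
    return EXPRESSIONS[int(np.argmax(probs))], float(np.max(probs))
```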
S104, carrying out emotion judgment according to the recognition result, and carrying out corresponding feedback according to the judgment result.
The recognized emotion can be compared with preset emotions to determine which category it belongs to. The preset emotions include positive emotions and negative emotions, and the emotion classification is carried out using the valence-arousal theory. The emotion corresponds to the user's state in the online classroom, and a corresponding scheme or strategy is then adopted according to the user's emotion.
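The following sketch illustrates one possible realization of this judgment-and-feedback step; the valence grouping of the expressions and the notify_user()/notify_teacher() hooks are assumptions for illustration, not part of the patent.

```python
# Judgment-and-feedback sketch for S104. The positive/negative grouping below is an
# assumed valence split consistent with the description; notify_user() and
# notify_teacher() are hypothetical hooks for the actual feedback channel.
POSITIVE = {"happy", "surprise", "neutral"}
NEGATIVE = {"angry", "disgust", "fear", "sad"}

def judge_and_feedback(expression, notify_user, notify_teacher):
    if expression in NEGATIVE:
        notify_user("Please refocus on the class.")            # timely warning
        notify_teacher(f"Negative state detected: {expression}")
        return "negative"
    notify_teacher(f"Positive state detected: {expression}")   # teaching feedback
    return "positive"
```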
The application combines micro-expression recognition technology with the online classroom teaching mode and improves the accuracy of micro-expression recognition through extensive image processing. It can effectively supervise online learning users and feed the teaching situation back to the teaching staff in time, which is of great significance for maintaining a good classroom environment and promoting the steady development of education.
In an embodiment, the present application further provides an online class supervision device based on the target mask, where the device includes:
the facial image capturing module is used for dynamically capturing facial images of the user through the mobile terminal of the user;
the image preprocessing module is used for preprocessing the captured facial picture to obtain facial features;
the micro-expression recognition module is used for inputting the facial features into the trained facial expression recognition model to perform micro-expression recognition, so as to obtain a recognition result;
and the judging and feedback module is used for carrying out emotion judgment according to the recognition result and carrying out corresponding feedback according to the judgment result.
In an embodiment, the present application also provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the online class supervision method based on the target mask of any of the above embodiments when executing the computer program.
In an embodiment, the present application further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, causes the processor to perform the online class supervision method based on the target mask according to any one of the above embodiments.
The embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the present application is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the application, and yet fall within the scope of the application.

Claims (10)

1. The online class supervision method based on the target mask is characterized by comprising the following steps of:
dynamically capturing facial pictures of a user through a mobile terminal of the user;
preprocessing the captured facial picture to obtain facial features;
inputting the facial features into a trained facial expression recognition model to perform micro-expression recognition, so as to obtain a recognition result;
and carrying out emotion judgment according to the recognition result, and carrying out corresponding feedback according to the judgment result.
2. The online class supervision method based on the target mask according to claim 1, wherein the trained facial expression recognition model is trained by the following training method:
preprocessing data by adopting a picture generator of Keras, and creating a facial expression recognition model;
and calling a function of the Keras library, modifying training parameters, and training the facial expression recognition model to obtain a trained facial expression recognition model.
3. The online class supervision method based on the target mask according to claim 1, wherein the preprocessing of the captured facial picture includes:
and sequentially carrying out face detection, normalization, key point positioning and FACS face coding processing on the captured face picture.
4. The online class supervision method based on the target mask according to claim 1, wherein the performing emotion judgment according to the recognition result includes:
the recognition result is compared with preset emotions including positive and negative emotions.
5. The online class supervision method based on the target mask according to claim 1, wherein the mobile terminal is a computer or a mobile phone.
6. The online class supervision method based on the target mask according to claim 5, wherein the dynamically capturing the facial pictures of the user through the mobile terminal of the user comprises:
the front camera of the computer or the mobile phone is controlled to dynamically capture facial pictures of the user every 1 minute.
7. The online class supervision method based on the target mask according to claim 3, wherein the performing of the key point positioning on the captured facial picture includes:
and performing the key point positioning of the facial picture by using a key point positioning algorithm based on a refined neural network.
8. On-line classroom supervision device based on target mask, characterized by comprising:
the facial image capturing module is used for dynamically capturing facial images of the user through the mobile terminal of the user;
the image preprocessing module is used for preprocessing the captured facial picture to obtain facial features;
the micro-expression recognition module is used for inputting the facial features into the trained facial expression recognition model to perform micro-expression recognition, so as to obtain a recognition result;
and the judging and feedback module is used for carrying out emotion judgment according to the recognition result and carrying out corresponding feedback according to the judgment result.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the online class supervision method based on the target mask according to any one of claims 1-7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which when executed by a processor causes the processor to perform the online class supervision method based on the target mask according to any one of claims 1-7.
CN202210148189.3A (priority date 2022-02-17, filing date 2022-02-17): Online classroom supervision method, device, equipment and medium based on target mask. Status: Pending. Publication: CN116665260A (en).

Priority Applications (1)

Application number: CN202210148189.3A; priority date: 2022-02-17; filing date: 2022-02-17; title: Online classroom supervision method, device, equipment and medium based on target mask.

Publications (1)

Publication number: CN116665260A; publication date: 2023-08-29.

Family

ID: 87714109

Country Status (1)

Country: CN; publication: CN116665260A (en).


Legal Events

Code PB01: Publication
Code SE01: Entry into force of request for substantive examination