CN109522815B - Concentration degree evaluation method and device and electronic equipment - Google Patents


Info

Publication number
CN109522815B
Authority
CN
China
Prior art keywords
concentration
evaluation
degree
stage
parameter
Prior art date
Legal status
Active
Application number
CN201811259091.5A
Other languages
Chinese (zh)
Other versions
CN109522815A (en)
Inventor
李贵华
莫思仪
Current Assignee
Shenzhen Bowei Education Technology Co ltd
Original Assignee
Shenzhen Bowei Education Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Bowei Education Technology Co ltd
Priority to CN201811259091.5A
Publication of CN109522815A
Application granted
Publication of CN109522815B

Classifications

    • G06Q 50/205 Education administration or guidance
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/174 Facial expression recognition
    • G06V 40/18 Eye characteristics, e.g. of the iris


Abstract

The invention relates to the technical field of education, and in particular to a concentration evaluation method, a concentration evaluation device and an electronic device. The method comprises the following steps: acquiring image data of an application scene and an evaluation object; determining a concentration evaluation stage and at least two concentration parameters from the image data; and performing a concentration evaluation of the evaluation object based on the evaluation stage and the parameters. This method monitors a person's concentration automatically and requires no additional equipment, which improves user comfort and reduces detection cost. Because several concentration parameters are used, the accuracy of the computed concentration is improved. Finally, because the evaluation stage and the parameters are considered together, and because an evaluation object behaves differently in different stages, the resulting concentration score is more scientific and representative.

Description

Concentration degree evaluation method and device and electronic equipment
[ technical field ]
The invention relates to the technical field of education, in particular to a concentration degree evaluation method and device and electronic equipment.
[ background of the invention ]
Concentration is an effective indicator of how efficiently a person works or learns, and concentration evaluation is of great significance in fields such as digital teaching. In classroom teaching, for example, evaluating students' concentration can help them review the periods in class when they were not attentive and reflect on the related material; it also lets the teacher keep track of how students attend class and adjust the teaching strategy accordingly.
Currently, common methods for assessing concentration fall into two categories: first, on-site monitoring by an expert; second, monitoring data such as eye gaze and brain waves of the evaluation object through wearable devices and computing the concentration from those data.
However, the first category is not automated, is easily influenced by subjective factors, and yields concentration values of low accuracy; the second category requires the evaluation object to wear additional equipment, which affects their freedom and comfort and increases the cost of detection.
[ summary of the invention ]
The invention aims to provide a concentration evaluation method, a concentration evaluation device and an electronic device, in order to solve the technical problems of low accuracy, poor comfort and high cost in concentration detection in the related art.
In one aspect of the embodiments of the present invention, a method for concentration assessment is provided, the method including:
acquiring image data of an application scene and an evaluation object;
acquiring a concentration evaluation stage and at least two concentration parameters according to the image data;
performing a concentration assessment on the assessment subject based on the concentration assessment stage and the at least two concentration parameters.
Optionally, the acquiring concentration evaluation phase from the image data comprises:
acquiring audio data of the image data, and converting the audio data into text data;
extracting keywords related to the concentration degree evaluation stage based on the text data;
and determining a concentration degree evaluation stage corresponding to the keyword according to the keyword and a preset algorithm, wherein the concentration degree evaluation stage comprises a teaching stage, an interaction stage, an exercise stage and a discussion stage.
Optionally, the obtaining at least two concentration parameters from the image data comprises:
extracting a face image according to the image data;
performing face recognition according to the face image to determine the identity of the evaluation object;
preprocessing the face image;
and acquiring concentration degree parameters of the evaluation object according to the preprocessed face image, wherein the concentration degree parameters comprise at least two of head raising degree, lip activity condition, eye closing degree, mobile phone playing condition and facial expression.
Optionally, the concentration parameter includes a head-up degree, and the obtaining the concentration parameter of the evaluation object according to the preprocessed face image includes:
acquiring a length parameter based on the preprocessed face image, wherein the length parameter is the distance from the midpoint of the two eye centers to the center of the mouth region in the face image;
and calculating the percentage of the length parameter relative to the original length of the face image, the percentage being the head-raising degree of the evaluation object, wherein the original length of the face image is the eyes-to-mouth distance when the evaluation object faces the camera directly (a frontal face).
Optionally, the concentration evaluation of the evaluation subject based on the concentration evaluation phase and the at least two concentration parameters comprises:
associating the concentration degree evaluation stage with the concentration degree parameter according to time so as to determine the concentration degree parameter corresponding to the concentration degree evaluation stage;
acquiring a weight coefficient corresponding to the concentration parameter in the concentration evaluation stage;
and performing product summation operation on the concentration degree parameter and the weight coefficient corresponding to the concentration degree parameter, thereby evaluating the concentration degree of the evaluation object in the concentration degree evaluation stage.
In another aspect of the embodiments of the present invention, there is provided a concentration degree evaluation apparatus, including:
the first data acquisition module is used for acquiring image data of an application scene and an evaluation object;
a second data acquisition module for acquiring a concentration evaluation stage and at least two concentration parameters according to the image data;
a concentration calculation module for performing a concentration evaluation on the evaluation object based on the concentration evaluation stage and the at least two concentration parameters.
Optionally, the second data obtaining module is specifically configured to:
acquiring audio data of the image data, and converting the audio data into text data;
extracting keywords related to the concentration degree evaluation stage based on the text data;
and determining a concentration degree evaluation stage corresponding to the keyword according to the keyword and a preset algorithm, wherein the concentration degree evaluation stage comprises a teaching stage, an interaction stage, a classroom practice stage and a discussion stage.
Optionally, the second data obtaining module is specifically configured to:
extracting a face image according to the image data;
performing face recognition according to the face image to determine the identity of the evaluation object;
preprocessing the face image;
and acquiring concentration degree parameters of the evaluation object according to the preprocessed face image, wherein the concentration degree parameters comprise at least two of head raising degree, lip activity condition, eye closing degree, mobile phone playing condition and facial expression.
Optionally, the concentration parameter includes a head-up degree, and the obtaining the concentration parameter of the evaluation object according to the preprocessed face image includes:
acquiring a length parameter based on the preprocessed face image, wherein the length parameter is the distance from the midpoint of the two eye centers to the center of the mouth region in the face image;
and calculating the percentage of the length parameter relative to the original length of the face image, the percentage being the head-raising degree of the evaluation object, wherein the original length of the face image is the eyes-to-mouth distance when the evaluation object faces the camera directly (a frontal face).
Optionally, the concentration calculation module comprises:
the parameter association unit is used for associating the concentration degree evaluation stage with the concentration degree parameter according to time so as to determine the concentration degree parameter corresponding to the concentration degree evaluation stage;
a weight coefficient obtaining unit, configured to obtain a weight coefficient corresponding to the concentration parameter in the concentration evaluation stage;
the concentration degree evaluation unit is used for performing product and sum operation on the concentration degree parameter and the weight coefficient corresponding to the concentration degree parameter, so as to evaluate the concentration degree of the evaluation object in the concentration degree evaluation stage.
In another aspect of the embodiments of the present invention, an electronic device is provided, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In an embodiment of the invention, image data of the application scene and the evaluation object is acquired, the concentration evaluation stage is determined from the image data, at least two concentration parameters are obtained, and the object's concentration is computed from the determined stage and parameters. This monitors a person's concentration automatically and requires no additional equipment, which improves user comfort and reduces detection cost. Because several concentration parameters are used, the accuracy of the computed concentration is improved. Finally, because the evaluation stage and the parameters are considered together, and because an evaluation object behaves differently in different stages, the resulting concentration score is more scientific and representative.
[ description of the drawings ]
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; unless otherwise specified, the figures are not drawn to scale.
FIG. 1 is a schematic diagram of an operating environment of a method for concentration assessment provided by an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a method for concentration assessment according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for obtaining a concentration evaluation phase according to the image data in a concentration evaluation method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a method for obtaining at least two concentration parameters according to the image data in a concentration evaluation method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a face image in a concentration evaluation method according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a method for concentration calculation based on the concentration evaluation stage and the at least two concentration parameters in a concentration evaluation method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a concentration degree evaluation apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an electronic device 40 for performing the concentration evaluation method according to the embodiment of the present invention.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, provided there is no conflict, the features of the embodiments of the invention may be combined with one another within the scope of protection of the invention. Additionally, although functional modules are divided in the device schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the module division or the flowchart order.
For a better understanding of the present invention, the operating environment of the invention is described before the inventive concept itself. Referring to fig. 1, fig. 1 is a schematic diagram of an operating environment according to an embodiment of the present invention; the operating environment includes an image acquisition unit 10 and a control processing unit 20.
The image acquisition unit 10 may be a video camera, a still camera, a video recorder or the like, and is configured to capture image data of the application scene and the evaluation object and transmit the captured data to the control processing unit 20. The image acquisition units 10 may be arranged according to the application scenario; for example, in classroom teaching they may be placed at different locations in the classroom, with their number and positions determined by the current environment. Further, the image acquisition unit 10 may capture image data at a certain frequency or in a certain pattern.
The control processing unit 20 may be any suitable type of electronic computing device, such as a multi-core central processing unit, a computer, a server, or the like. The control processing unit 20 may receive a series of image information, such as the images acquired by the image acquisition unit 10, and implement concentration detection on the evaluation object in the current application scene according to the acquired images. Specifically, for example, the control processing unit 20 acquires image data of an application scene and an evaluation subject, acquires a concentration evaluation stage and at least two concentration parameters from the image data, and performs concentration evaluation on the evaluation subject based on the concentration evaluation stage and the at least two concentration parameters.
In fig. 1 the image acquisition unit 10 and the control processing unit 20 are arranged independently; note that the image acquisition unit 10 may also be integrated into the control processing unit 20, and the two may establish a communication connection over a wireless or wired link.
It should be noted that fig. 1 shows only 3 image acquisition units 10 and 1 control processing unit 20; those skilled in the art will understand that the application environment of the concentration evaluation method may include any number of image acquisition units 10 and control processing units 20.
With reference to the above application environment, as shown in fig. 2, a specific process of the concentration evaluation method provided by the embodiment of the present invention is further illustrated:
step 11, acquiring image data of an application scene and an evaluation object;
the application scenes comprise classroom teaching, office meetings, lectures, online video learning and the like. The evaluation object is a person in the application scene, which includes a student, a participant, and the like. The image data is image data corresponding to the application scene, and includes audio, video, pictures, and the like. For example, when the application scene is classroom teaching, the image data includes video data of a student in class, and the video data includes face images of the student, audio and video of teaching by a teacher, and the like; when the application scene is an office meeting, the image data comprises video data during the meeting, and the video data comprises face images of participants, audios and videos of conference presiding personnel and the like.
The image data can be collected by a camera, and the electronic device executing the method acquires the image data from the camera. The image data of the application scene and the evaluation object may be acquired periodically, and the period length may be set by the user or by the system.
Step 12, acquiring a concentration degree evaluation stage and at least two concentration degree parameters according to the image data;
in this embodiment, the concentration degree evaluation stage, that is, a period corresponding to the whole process of the application scene, may be one period corresponding to the whole process of the application scene, or may be several periods corresponding to the whole process of the application scene. When it is several periods corresponding to the whole process of the application scenario, the concentration evaluation phase includes several phases, i.e. the whole process of the application scenario is divided into different several phases.
For example, when the application scenario is classroom teaching, the whole classroom teaching process can be a concentration evaluation phase; or, dividing the whole classroom teaching process into different periods according to the teaching stages, wherein each period corresponds to a concentration degree evaluation stage.
In this embodiment, as shown in fig. 3, the stage of acquiring concentration degree evaluation according to the image data includes:
step 201, acquiring audio data of the image data, and converting the audio data into text data; the audio data can be extracted from the video, and then converted into text for analysis, and the specific process of converting the audio data into the text data can refer to the record in the related art.
Step 202, extracting keywords related to the concentration degree evaluation stage based on the text data; the keywords related to the concentration evaluation stage may be words related to the application scenario, and the different stages can be distinguished by the words, for example, when the application scenario is classroom teaching, the words include "main introduction of the class", "answer questions", "do exercises", "freely discuss", and the like.
The keywords may be acquired periodically while the above steps are performed, and the specific period may be chosen according to the habits of different teachers and/or the characteristics of different courses. For example, if a teacher habitually asks questions and reviews the previous lesson during the first 5 minutes of class, lectures during the middle 30 minutes, and lets students discuss freely during the last 10 minutes, the keywords include "answer questions", "give lessons" and "freely discuss", corresponding to three concentration evaluation stages.
The advantage of acquiring the keywords periodically is that continuous polling is avoided, so resources such as memory, CPU time, network bandwidth and program variables are not constantly occupied, which reduces traffic and saves resources.
Step 203, determining a concentration degree evaluation stage corresponding to the keyword according to the keyword and a preset algorithm, wherein the concentration degree evaluation stage comprises a teaching stage, an interaction stage, an exercise stage and a discussion stage.
In this embodiment, after a keyword is obtained, it may be converted into a feature vector, and the feature vector classified with a preset algorithm (for example a classifier such as a support vector machine) to determine the concentration evaluation stage corresponding to the keyword. The concentration evaluation stage can be divided into a teaching stage, an interaction stage, an exercise stage and a discussion stage. In practice the evaluation may include other stages as well, or only one of these stages. For example, when the current application scene is classroom teaching, the concentration evaluation stages may be: a lecture stage (teacher lectures, students listen), a question-and-answer stage, a classroom exercise stage and a classroom discussion stage.
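The keyword-to-stage mapping can be sketched as below. The patent describes feature vectors plus a classifier such as an SVM; this simpler lookup is an illustrative assumption, and the keyword lists are taken from the examples mentioned above.

```python
# Illustrative keyword-to-stage mapping (the patent itself uses a trained
# classifier such as an SVM over keyword feature vectors).
STAGE_KEYWORDS = {
    "teaching": ["main introduction of the class", "give lessons"],
    "interaction": ["answer questions"],
    "exercise": ["do exercises"],
    "discussion": ["freely discuss"],
}

def classify_stage(transcript_text):
    """Return the first stage whose keywords appear in the transcript text."""
    for stage, keywords in STAGE_KEYWORDS.items():
        if any(kw in transcript_text for kw in keywords):
            return stage
    return None  # no stage keyword found in this period
```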
The concentration parameters are the quantities used to evaluate the evaluation object's concentration. In this embodiment the parameters are first defined and cover two aspects, emotional state and attention state. The emotional state is analysed mainly from the object's facial expression, for example divided into three classes (neutral, smiling face, bitter face) corresponding to high, medium and low concentration respectively. The attention state can be detected from several visual cues, including the head-raising degree, lip activity, eye-closure degree and phone-playing condition. The definitions of the concentration parameters may be as in Table 1 below.
TABLE 1
Parameter | Encoding
Head-raising degree | value in [0, 1]; larger = head raised higher
Lip activity | 0 = speaking, 1 = silence
Eye closure | value in [0, 1]; larger = eye more open
Phone playing | 0 = not playing, 1 = playing
Facial expression | 0 = bitter face, 1 = smiling face, 2 = neutral
In Table 1, the head-raising degree is represented by a value in the interval [0,1], where a larger value indicates the head is raised higher; lip activity is represented by 0 and 1, e.g. 0 for speaking and 1 for silence; the eye-closure parameter is represented by a value in [0,1], where a larger value indicates a more open eye; the phone-playing condition is represented by 0 and 1, e.g. 0 for not playing with a phone and 1 for playing with a phone; and the facial expression is represented by the values 0, 1 and 2, e.g. 0 for a bitter face, 1 for a smiling face and 2 for neutral.
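The encodings described above can be collected into a small record type. The field names below are assumptions chosen for illustration, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class ConcentrationParams:
    """One observation of the Table 1 parameters for a single evaluation object."""
    head_up: float      # [0, 1], larger = head raised higher
    lip_activity: int   # 0 = speaking, 1 = silence
    eye_open: float     # [0, 1], larger = eye more open
    phone: int          # 0 = not playing with a phone, 1 = playing
    expression: int     # 0 = bitter face, 1 = smiling face, 2 = neutral
```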
Specifically, as shown in fig. 4, the acquiring at least two concentration parameters according to the image data includes:
step 301, extracting a face image according to the image data;
the image data may be extracted from the video data at intervals of a certain time, and then a face detection algorithm is applied to detect a face in the image, where the face detection algorithm may be a Multi-task Convolutional Neural Network (MTCNN) algorithm or the like.
Step 302, performing face recognition according to the face image to determine the identity of the evaluation object;
the specific process of face recognition according to the face image may be that a face library is established in advance, and the identity of the face is recognized by a feature comparison method, or a deep learning method is adopted; for example, a neural network trained in advance through a face library is used, for example, FaceNet and the like are used to perform face recognition, and the specific recognition process may refer to the records in the related art, which is not described herein again.
Step 303, preprocessing the face image; the preprocessing comprises filtering the face image to remove the influence of motion blur and illumination and improve the accuracy.
Step 304, acquiring concentration degree parameters of the evaluation object according to the preprocessed face image, wherein the concentration degree parameters comprise at least two of head raising degree, lip activity condition, eye closing degree, mobile phone playing condition and facial expression.
The concentration parameters include the head-raising degree, and obtaining it from the preprocessed face image comprises: acquiring a length parameter, namely the distance from the midpoint of the two eye centers to the center of the mouth region in the face image; and calculating the percentage of this length relative to the original length of the face image, the percentage being the object's head-raising degree, where the original length is the eyes-to-mouth distance when the evaluation object faces the camera directly (a frontal face).
Specifically, obtaining the length parameter from the preprocessed face image may involve detecting the three-dimensional face key-point coordinates corresponding to the image, including the centers of the left and right eyes and of the mouth region, performing posture correction and alignment from those center coordinates, and then computing the length of the three-dimensional frontal face projected onto the two-dimensional plane. For example, as shown in fig. 5, Len denotes the current frontal-face length, i.e. the distance from the midpoint of the two eye centers to the center of the mouth region, and Len_Original denotes the object's original frontal-face length under standard conditions (i.e. facing the camera, when the frontal-face length is at its maximum). The head-raising degree can be calculated with the following formula; for the same evaluation object, a larger Len indicates a higher head-raising degree.
head-raising degree = Len / Len_Original × 100%
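The formula above can be sketched directly from the key-point geometry. This is a minimal sketch assuming the key points are already 2D projections after pose correction; the function and argument names are illustrative.

```python
import math

def head_up_degree(left_eye, right_eye, mouth_center, len_original):
    """Ratio of the current eyes-to-mouth length to the frontal-face length
    Len_Original. Points are (x, y) tuples; result is in [0, 1] when the head
    is lowered (projection shortens Len) and 1.0 for a frontal face."""
    mid_eyes = ((left_eye[0] + right_eye[0]) / 2.0,
                (left_eye[1] + right_eye[1]) / 2.0)
    length = math.dist(mid_eyes, mouth_center)  # Len in the formula above
    return length / len_original
```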
When the concentration parameters include lip activity, it may be obtained from the preprocessed face image by examining the magnitude and standard deviation of pixel values near the lip region. Specifically, the lip region is detected from the preprocessed face image with an edge-information-based detection method, and an energy detector and an averager are then applied to determine which frames correspond to speaking and which to silence, giving the evaluation object's lip activity; 1 can represent silence and 0 speaking. Most false alarms are caused by an open mouth, such as yawning, being misjudged as speaking, but such behaviour is itself consistent with the object not paying attention.
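The energy detector plus averager can be sketched as follows. The exact energy measure and the threshold are assumptions for illustration; the patent only states that frame positions of speaking and silence are determined from lip-region pixel statistics.

```python
def lip_activity(frame_energies, threshold=5.0):
    """Sketch of the energy detector + averager: average the lip-region
    energies over a window of frames and compare against a threshold.
    Returns 0 (speaking) when the mean energy exceeds the threshold,
    else 1 (silence), matching the Table 1 encoding."""
    mean_energy = sum(frame_energies) / len(frame_energies)
    return 0 if mean_energy > threshold else 1
```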
When the concentration parameter includes the eye closure degree, the process of obtaining the eye closure degree based on the preprocessed face image may be as follows: first, locate the eye regions in the face image using a face detection method; then segment the iris region from the eye region using an adaptive threshold histogram enhancement technique, selecting a suitable threshold to obtain a binarized iris image; and finally, according to the segmentation result, calculate the aspect ratio of the bounding box of the connected components of the segmented region, thereby determining the eye closure degree. For example, referring again to fig. 5, h represents the height of the eye and w represents the width of the eye; then,
Eye closure degree = h / w
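The aspect-ratio computation is trivially expressed in code. This assumes the formula is the plain ratio h / w of the iris bounding box (the original formula is an image and its exact form is not reproduced in the text); a smaller ratio indicates a more closed eye.

```python
def eye_closure_degree(h, w):
    """Aspect ratio h / w of the bounding box of the segmented iris
    region. h and w are the eye height and width from fig. 5; w must
    be positive."""
    if w <= 0:
        raise ValueError("eye width must be positive")
    return h / w

# A half-closed eye: bounding box 12 px tall, 30 px wide.
print(eye_closure_degree(12, 30))  # 0.4
```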
When the concentration parameter includes the mobile phone playing condition, the process of acquiring the mobile phone playing condition based on the preprocessed face image comprises the following steps. The aim of detecting the mobile phone playing condition is to detect, when a mobile phone appears in the picture, whether it is in the hand of an evaluation object or in a preset area. The area containing the mobile phone can therefore be taken as a region of interest (ROI); images marked with the ROI are collected as positive samples, images without the ROI as negative samples, and a database is established. During detection, Haar features are extracted and a Viola-Jones cascade classifier is trained to detect the target, or an input image is checked for the ROI using a Faster R-CNN network, so as to obtain the mobile phone playing condition of the evaluation object. For the specific process, reference may be made to the related art.
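One small, concrete piece of the sample-database step can be sketched: deciding whether a detected box matches the labelled phone ROI. Intersection-over-union is used here as an illustrative matching criterion; the text does not specify one, and the (x, y, w, h) box layout is an assumption.

```python
def roi_overlap(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes -- one common
    way to decide whether a detection matches the labelled phone ROI
    when building the positive/negative sample database."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(roi_overlap((0, 0, 10, 10), (0, 0, 10, 10)))   # 1.0 (exact match)
print(roi_overlap((0, 0, 10, 10), (20, 20, 5, 5)))   # 0.0 (disjoint)
```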
When the concentration parameter includes a facial expression, the process of acquiring the facial expression based on the preprocessed face image comprises: extracting texture features from the face image, such as LBP (Local Binary Pattern), HOG (Histogram of Oriented Gradients) and Gabor features, and then classifying the expression features using methods such as SVM (Support Vector Machine) or LDA (Linear Discriminant Analysis). Alternatively, the preprocessed face image may be fed directly into a convolutional neural network model for training, so as to obtain the expression category.
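As a flavor of the texture-feature step, the basic LBP operator on a single 3×3 patch is shown below. The bit ordering (clockwise from the top-left neighbour) is an illustrative convention; practical systems apply this over the whole image and histogram the codes before classification.

```python
def lbp_code(patch):
    """8-bit Local Binary Pattern code for one 3x3 patch (nested
    lists): each neighbour is compared with the centre pixel and the
    comparison results are packed into one byte."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, p in enumerate(neighbours) if p >= c)

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
print(lbp_code(patch))  # top row >= centre -> bits 0,1,2 set -> 7
```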
It should be noted that the present embodiment is not limited to the above method for detecting the head-up degree, the lip movement, the eye closing degree, the mobile phone playing condition, and the facial expression, and other methods may be adopted to detect these concentration parameters.
In some other embodiments, when the concentration evaluation stage includes a plurality of stages, after the concentration evaluation stage and the at least two concentration parameters are obtained from the image data, different concentration parameters may be selected for different concentration evaluation stages, so that the concentration of the evaluation object in each stage is evaluated according to the parameters selected for that stage. Selecting different concentration parameters according to the concentration evaluation stage specifically comprises determining a topic keyword of the concentration evaluation stage and selecting concentration parameters according to that keyword. The topic keyword may be extracted from the name of the concentration evaluation stage. For example, if the concentration evaluation stage is a "teaching stage", the topic keyword is determined as "teaching" by methods such as semantic analysis; analyzing the semantics of "teaching" shows that the head-up degree of the evaluation object directly reflects its concentration, so the concentration parameters are determined to include the head-up degree, and the concentration of the evaluation object in the "teaching stage" is then evaluated by the "head-up degree". For another example, the "lip activity condition" is selected during the "interaction stage" to evaluate the concentration of the evaluation object. Selecting different concentration parameters for different concentration evaluation stages improves the accuracy and scientific soundness of the concentration evaluation.
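The stage-to-parameter selection reduces to a lookup from the topic keyword. The mapping below is purely illustrative (only the "teaching" → head-up degree and "interaction" → lip activity pairs are stated in the text; the rest, and the default, are assumptions), and the semantic-analysis keyword extraction itself is not reproduced.

```python
# Illustrative mapping from a stage's topic keyword to the concentration
# parameters used in that stage.
STAGE_PARAMETERS = {
    "teaching":    ["head_up_degree", "phone_usage"],
    "interaction": ["lip_activity", "facial_expression"],
    "exercise":    ["head_up_degree", "eye_closure"],
    "discussion":  ["lip_activity", "head_up_degree"],
}

def select_parameters(stage_keyword):
    """Return the parameters to evaluate for a stage; fall back to the
    head-up degree for an unrecognized keyword (an assumed default)."""
    return STAGE_PARAMETERS.get(stage_keyword, ["head_up_degree"])

print(select_parameters("teaching"))  # ['head_up_degree', 'phone_usage']
```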
And step 13, carrying out concentration evaluation on the evaluation object based on the concentration evaluation stage and the at least two concentration parameters.
Wherein, as shown in fig. 6, performing concentration assessment on the assessment subject based on the concentration assessment stage and the at least two concentration parameters comprises:
step 401, associating the concentration degree evaluation stage and the concentration degree parameter according to time, so as to determine a concentration degree parameter corresponding to the concentration degree evaluation stage;
in this embodiment, the concentration evaluation stage and the concentration parameters of the same time period are associated with each other using time as the clue, so that it can be determined which concentration parameters were obtained in the current concentration evaluation stage. Specifically, the time may be the respective times at which the concentration evaluation stage and the concentration parameter were detected.
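Step 401's time-based association can be sketched as interval matching. The data layouts — stages as (start, end, name) intervals and parameters as (timestamp, measurement) pairs — are illustrative assumptions about how the detection times are recorded.

```python
def associate_by_time(stages, params):
    """Assign each concentration parameter to the evaluation stage whose
    time interval contains the parameter's detection timestamp, using
    time as the clue as in step 401."""
    result = {name: [] for _, _, name in stages}
    for t, measurement in params:
        for start, end, name in stages:
            if start <= t < end:
                result[name].append(measurement)
                break
    return result

# Two stages over a 15-minute recording (times in seconds).
stages = [(0, 600, "teaching"), (600, 900, "interaction")]
params = [(30, 0.9), (610, 0.4), (650, 0.7)]
print(associate_by_time(stages, params))
# {'teaching': [0.9], 'interaction': [0.4, 0.7]}
```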
Step 402, obtaining a weight coefficient corresponding to the concentration parameter in the concentration evaluation stage;
the weight coefficient corresponding to the concentration degree parameter may be manually defined, or the weight coefficient may be defined based on a current application scenario, for example, if the application scenario is classroom teaching, and a teaching state is that a teacher gives lessons, then the weight coefficient corresponding to the head-up degree may be defined to be the highest, and so on.
And 403, performing product and sum operation on the concentration degree parameter and the weight coefficient corresponding to the concentration degree parameter, thereby evaluating the concentration degree corresponding to the evaluation object in the concentration degree evaluation stage.
Specifically, the concentration degree S is:
S = Σ_{i=1}^{n} α_i · x_i
where x_i represents the i-th extracted concentration parameter, α_i represents the weight coefficient corresponding to that concentration parameter, and n represents the number of concentration parameters.
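The weighted product-and-sum of step 403 is a one-liner; the example weights below are purely illustrative (the text says weights may be defined manually or per scenario).

```python
def concentration_score(params, weights):
    """S = sum over i of alpha_i * x_i, with one weight per extracted
    concentration parameter."""
    if len(params) != len(weights):
        raise ValueError("one weight per parameter is required")
    return sum(a * x for a, x in zip(weights, params))

# Teaching-stage example with the head-up degree weighted highest:
# x = (head-up, phone-usage reversed, expression), alpha = (0.5, 0.3, 0.2).
print(concentration_score([0.9, 1.0, 0.8], [0.5, 0.3, 0.2]))  # ~0.91
```

Because the weights sum to 1 here, S stays in [0, 1] when every parameter is normalized to [0, 1]; that normalization is a design choice, not a requirement of the formula.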
The subjects to be evaluated express high concentration differently in different concentration evaluation stages; for example, taking classroom teaching as an example, table 2 below shows the behaviors of students that indicate high concentration in each concentration evaluation stage.
TABLE 2
(Table 2 appears as an image in the original publication and is not reproduced here.)
In the corresponding teaching stage, if the index value corresponding to high concentration is opposite in meaning to the original index, the index value is reversed first and then substituted into the concentration calculation formula.
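The reversal step amounts to flipping the index before the weighted sum; this sketch assumes the index is normalized to [0, 1], which the text does not state explicitly.

```python
def reverse_index(x):
    """Reverse an index whose high value means LOW concentration (e.g.
    a phone-usage or eye-closure flag) so that it can be substituted
    into the weighted-sum formula alongside the other parameters.
    Assumes x is normalized to [0, 1]."""
    return 1.0 - x

print(reverse_index(0.2))  # 0.8
```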
Through the above process, the concentration of the evaluation object in every concentration evaluation stage can be calculated, so that the concentration in each stage can be evaluated and analyzed, helping the evaluation object improve its own concentration. In classroom teaching in particular, the teacher can analyze the reasons for low student concentration in each segmented stage, adjust the teaching strategy accordingly, improve student participation and attention, improve the students' learning process, and enhance teaching quality. In addition, the concentration values of the individual concentration evaluation stages can be summed to obtain a total concentration, so that the concentration of the evaluation object over the whole process can be evaluated.
Embodiments of the present invention provide a concentration assessment method that calculates a concentration from a determined concentration assessment stage and at least two concentration parameters by obtaining image data of an application scene, determining the concentration assessment stage from the image data, and obtaining the at least two concentration parameters. The implementation method can automatically monitor the concentration degree of the person without adding extra equipment, thereby improving the comfort degree of the user and reducing the detection cost; in addition, when the concentration degree is calculated, the number of concentration degree parameters is increased, so that the accuracy of the obtained concentration degree is improved; finally, the concentration degree is calculated by simultaneously considering the concentration degree evaluation stage and the concentration degree parameter, and the performance of the evaluation object is different by considering different stages, so that the scientificity and the representativeness of the obtained concentration degree are enhanced.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a concentration evaluation apparatus according to an embodiment of the present invention. As shown in fig. 7, the apparatus 30 includes a first data acquisition module 31, a second data acquisition module 32, and a concentration calculation module 33.
The first data acquisition module 31 is configured to acquire image data of an application scene and an evaluation object; a second data acquisition module 32 for acquiring a concentration evaluation phase and at least two concentration parameters from the image data; a concentration calculation module 33 for performing a concentration evaluation on the evaluation object based on the concentration evaluation stage and the at least two concentration parameters.
The application scenes comprise classroom teaching, office meetings, lectures and online video learning.
In this embodiment, the second data obtaining module 32 is specifically configured to: acquiring audio data of the image data, and converting the audio data into text data; extracting keywords related to the concentration degree evaluation stage based on the text data; and determining a concentration degree evaluation stage corresponding to the keyword according to the keyword and a preset algorithm, wherein the concentration degree evaluation stage comprises a teaching stage, an interaction stage, an exercise stage and a discussion stage. The second data obtaining module 32 is specifically configured to: extracting a face image according to the image data; performing face recognition according to the face image to determine the identity of the evaluation object; preprocessing the face image; and acquiring concentration degree parameters of the evaluation object according to the preprocessed face image, wherein the concentration degree parameters comprise at least two of head raising degree, lip activity condition, eye closing degree, mobile phone playing condition and facial expression.
Wherein, when the concentration parameter includes a head-up degree, the obtaining the concentration parameter of the evaluation object according to the preprocessed face image includes: acquiring a length parameter based on the preprocessed face image, wherein the length parameter is the length of the centers of two eyes and the center of a mouth area in the face image; and calculating the percentage of the length parameter and the original length parameter of the face image, wherein the percentage is the head raising degree of the evaluation object, and the original length of the face image is the length of the centers of the two eyes and the center of the mouth area when the evaluation object corresponding to the face image is a front face.
In the present embodiment, referring to fig. 7 as well, the concentration calculation module 33 includes a parameter association unit 331, a weight coefficient obtaining unit 332, and a concentration evaluation unit 333. A parameter association unit 331, configured to associate the concentration evaluation stage and the concentration parameter according to time, so as to determine a concentration parameter corresponding to the concentration evaluation stage; a weight coefficient obtaining unit 332, configured to obtain a weight coefficient corresponding to the concentration parameter in the concentration evaluation stage; the concentration degree evaluation unit 333 is configured to perform a product and sum operation on the concentration degree parameter and the weight coefficient corresponding to the concentration degree parameter, so as to evaluate the concentration degree of the evaluation object in the concentration degree evaluation stage.
In this embodiment, the first data acquisition module 31 sends the acquired image data of the application scene and the evaluation object to the second data acquisition module 32; the second data acquisition module 32 determines a concentration evaluation stage from the image data and obtains at least two concentration parameters; and the concentration calculation module 33 obtains the concentration evaluation stage and the at least two concentration parameters from the second data acquisition module 32 and evaluates the concentration of the evaluation object according to them.
It should be noted that, for the information interaction, execution process and other contents between the modules and units in the apparatus, the specific contents may refer to the description in the embodiment of the method of the present invention because the same concept is based on the embodiment of the method of the present invention, and are not described herein again.
Embodiments of the present invention provide a concentration assessment apparatus that calculates a concentration from a determined concentration assessment stage and at least two concentration parameters by acquiring image data of an application scene and an assessment target, determining the concentration assessment stage from the image data, and acquiring the at least two concentration parameters. The implementation method can automatically monitor the concentration degree of the person without adding extra equipment, thereby improving the comfort degree of the user and reducing the detection cost; in addition, when the concentration degree is calculated, the number of concentration degree parameters is increased, so that the accuracy of the obtained concentration degree is improved; finally, the concentration degree is calculated by simultaneously considering the concentration degree evaluation stage and the concentration degree parameter, and the performance of the evaluation object is different by considering different stages, so that the scientificity and the representativeness of the obtained concentration degree are enhanced.
Referring to fig. 8, fig. 8 is a schematic diagram of a hardware structure of an electronic device 40 for performing a concentration evaluation method according to an embodiment of the present invention, and as shown in fig. 8, the electronic device 40 includes:
one or more processors 41 and memory 42, with one processor 41 being an example in fig. 8.
The processor 41 and the memory 42 may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.
Memory 42, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the concentration assessment method in the embodiments of the present invention (e.g., first data acquisition module 31, second data acquisition module 32, and concentration calculation module 33 shown in fig. 7). The processor 41 executes various functional applications of the server and data processing by executing nonvolatile software programs, instructions, and modules stored in the memory 42, so as to implement the concentration evaluation method of the above-described method embodiment.
The memory 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the concentration degree evaluation apparatus, and the like. Further, the memory 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 42 optionally includes memory located remotely from processor 41, which may be connected to the concentration assessment device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 42 and, when executed by the one or more processors 41, perform the concentration assessment method in any of the above-described method embodiments, for example, performing the above-described method steps 11 to 13 in fig. 2, method steps 201 to 203 in fig. 3, method steps 301 to 304 in fig. 4, and method steps 401 to 403 in fig. 6, to implement the functions of modules 31 to 33 and units 331 to 333 in fig. 7.
The product can execute the method provided by the embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) a mobile communication device: such devices are characterized by mobile communications capabilities and are primarily targeted at providing voice, data communications. Such terminals include: smart phones, multimedia phones, functional phones, etc.
(2) Ultra mobile personal computer device: the equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc.
(3) A server: a device providing computing services, comprising a processor, a hard disk, memory, a system bus, and the like. A server is similar in architecture to a general-purpose computer, but because it must provide highly reliable services, it has higher requirements for processing capacity, stability, reliability, security, scalability, manageability, and the like.
(4) And other electronic devices with data interaction functions.
Embodiments of the present invention provide a non-volatile computer-readable storage medium storing computer-executable instructions for an electronic device to perform the concentration evaluation method in any of the above-mentioned method embodiments, for example, to perform the above-described method steps 11 to 13 in fig. 2, method steps 201 to 203 in fig. 3, method steps 301 to 304 in fig. 4, and method steps 401 to 403 in fig. 6, to implement the functions of modules 31 to 33 and units 331 to 333 in fig. 7.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the concentration assessment method in any of the above-described method embodiments, for example, the method steps 11 to 13 in fig. 2, the method steps 201 to 203 in fig. 3, the method steps 301 to 304 in fig. 4, and the method steps 401 to 403 in fig. 6 described above, to implement the functions of modules 31 to 33 and units 331 to 333 in fig. 7.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Those skilled in the art will understand that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A method of concentration assessment, the method comprising:
acquiring image data of an application scene and an evaluation object;
acquiring a concentration evaluation stage and at least two concentration parameters according to the image data;
performing a concentration assessment on the assessment subject based on the concentration assessment stage and the at least two concentration parameters;
the concentration evaluation of the evaluation subject based on the concentration evaluation stage and the at least two concentration parameters comprises:
associating the concentration degree evaluation stage with the concentration degree parameter according to time so as to determine the concentration degree parameter corresponding to the concentration degree evaluation stage;
acquiring a weight coefficient corresponding to the concentration parameter in the concentration evaluation stage;
and performing product summation operation on the concentration degree parameter and the weight coefficient corresponding to the concentration degree parameter, thereby evaluating the concentration degree of the evaluation object in the concentration degree evaluation stage.
2. The method of claim 1, wherein the acquiring a concentration assessment phase from the image data comprises:
acquiring audio data of the image data, and converting the audio data into text data;
extracting keywords related to the concentration degree evaluation stage based on the text data;
and determining a concentration degree evaluation stage corresponding to the keyword according to the keyword and a preset algorithm, wherein the concentration degree evaluation stage comprises a teaching stage, an interaction stage, an exercise stage and a discussion stage.
3. The method of claim 1, wherein the obtaining at least two concentration parameters from the image data comprises:
extracting a face image according to the image data;
performing face recognition according to the face image to determine the identity of the evaluation object;
preprocessing the face image;
and acquiring concentration degree parameters of the evaluation object according to the preprocessed face image, wherein the concentration degree parameters comprise at least two of head raising degree, lip activity condition, eye closing degree, mobile phone playing condition and facial expression.
4. The method of claim 3, wherein the concentration parameter comprises a head-up degree, and wherein obtaining the concentration parameter of the evaluation subject from the pre-processed face image comprises:
acquiring a length parameter based on the preprocessed face image, wherein the length parameter is the length of the centers of two eyes and the center of a mouth area in the face image;
and calculating the percentage of the length parameter and the original length parameter of the face image, wherein the percentage is the head raising degree of the evaluation object, and the original length of the face image is the length of the centers of the two eyes and the center of the mouth area when the evaluation object corresponding to the face image is a front face.
5. A concentration assessment apparatus, the apparatus comprising:
the first data acquisition module is used for acquiring image data of an application scene and an evaluation object;
a second data acquisition module for acquiring a concentration evaluation stage and at least two concentration parameters according to the image data;
a concentration calculation module for performing a concentration evaluation on the evaluation object based on the concentration evaluation stage and the at least two concentration parameters;
the concentration calculation module includes:
the parameter association unit is used for associating the concentration degree evaluation stage with the concentration degree parameter according to time so as to determine the concentration degree parameter corresponding to the concentration degree evaluation stage;
a weight coefficient obtaining unit, configured to obtain a weight coefficient corresponding to the concentration parameter in the concentration evaluation stage;
the concentration degree evaluation unit is used for performing product and sum operation on the concentration degree parameter and the weight coefficient corresponding to the concentration degree parameter, so as to evaluate the concentration degree of the evaluation object in the concentration degree evaluation stage.
6. The apparatus of claim 5, wherein the second data acquisition module is specifically configured to:
acquiring audio data of the image data, and converting the audio data into text data;
extracting keywords related to the concentration degree evaluation stage based on the text data;
and determining a concentration degree evaluation stage corresponding to the keyword according to the keyword and a preset algorithm, wherein the concentration degree evaluation stage comprises a teaching stage, an interaction stage, an exercise stage and a discussion stage.
7. The apparatus of claim 5, wherein the second data acquisition module is specifically configured to:
extracting a face image according to the image data;
performing face recognition according to the face image to determine the identity of the evaluation object;
preprocessing the face image;
and acquiring concentration degree parameters of the evaluation object according to the preprocessed face image, wherein the concentration degree parameters comprise at least two of head raising degree, lip activity condition, eye closing degree, mobile phone playing condition and facial expression.
8. The apparatus of claim 7, wherein the concentration parameter comprises a head-up degree, and wherein obtaining the concentration parameter of the evaluation subject from the pre-processed face image comprises:
acquiring a length parameter based on the preprocessed face image, wherein the length parameter is the length of the centers of two eyes and the center of a mouth area in the face image;
and calculating the percentage of the length parameter and the original length parameter of the face image, wherein the percentage is the head raising degree of the evaluation object, and the original length of the face image is the length of the centers of the two eyes and the center of the mouth area when the evaluation object corresponding to the face image is a front face.
9. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory is executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 4.
CN201811259091.5A 2018-10-26 2018-10-26 Concentration degree evaluation method and device and electronic equipment Active CN109522815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811259091.5A CN109522815B (en) 2018-10-26 2018-10-26 Concentration degree evaluation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811259091.5A CN109522815B (en) 2018-10-26 2018-10-26 Concentration degree evaluation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109522815A CN109522815A (en) 2019-03-26
CN109522815B true CN109522815B (en) 2021-01-15

Family

ID=65772587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811259091.5A Active CN109522815B (en) 2018-10-26 2018-10-26 Concentration degree evaluation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109522815B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175501B (en) * 2019-03-28 2023-04-07 重庆电政信息科技有限公司 Face recognition-based multi-person scene concentration degree recognition method
CN112949373A (en) * 2019-04-02 2021-06-11 中国计量大学上虞高等研究院有限公司 Learning attention detection and prejudgment method under variable light environment
CN110163100B (en) * 2019-04-17 2022-04-01 中国电子科技网络信息安全有限公司 Anti-photographing display
CN111860033A (en) * 2019-04-24 2020-10-30 北京三好互动教育科技有限公司 Attention recognition method and device
CN110148075A (en) * 2019-06-19 2019-08-20 重庆工商职业学院 A kind of learning evaluation method and device based on artificial intelligence
CN110458113A (en) * 2019-08-14 2019-11-15 旭辉卓越健康信息科技有限公司 A kind of non-small face identification method cooperated under scene of face
CN112418572A (en) * 2019-08-20 2021-02-26 成都易腾创想智能科技有限公司 Conference quality assessment system and method based on expression analysis technology
CN110765987B (en) * 2019-11-27 2022-05-17 北京工业大学 Method and device for quantifying innovative behavior characteristics and electronic equipment
CN111160239A (en) * 2019-12-27 2020-05-15 中国联合网络通信集团有限公司 Concentration degree evaluation method and device
CN111144321B (en) * 2019-12-28 2023-06-09 北京如布科技有限公司 Concentration detection method, device, equipment and storage medium
CN111179133B (en) * 2019-12-30 2020-09-25 智慧校园(广东)教育科技有限公司 Wisdom classroom interaction system
CN111754368A (en) * 2020-01-17 2020-10-09 天津师范大学 College teaching evaluation method and college teaching evaluation system based on edge intelligence
CN111091733B (en) * 2020-03-19 2020-06-30 浙江正元智慧科技股份有限公司 Auxiliary detection system for real-time teaching achievements of teachers
CN113591515B (en) * 2020-04-30 2024-04-05 百度在线网络技术(北京)有限公司 Concentration degree processing method, device and storage medium
CN111970471B (en) * 2020-06-30 2024-06-11 视联动力信息技术股份有限公司 Conference participant scoring method, device, equipment and medium based on video conference
CN111914694A (en) * 2020-07-16 2020-11-10 哈尔滨工程大学 Class quality detection method based on face recognition
CN114155566A (en) * 2020-09-04 2022-03-08 上海惠芽信息技术有限公司 Evaluation method and system of playing content and electronic equipment
CN112116841A (en) * 2020-09-10 2020-12-22 广州大学 Personalized remote education system and method based on deep learning
CN112329643B (en) * 2020-11-06 2021-06-04 重庆第二师范学院 Learning efficiency detection method, system, electronic device and medium
CN112381001A (en) * 2020-11-16 2021-02-19 四川长虹电器股份有限公司 Intelligent television user identification method and device based on concentration degree
CN112560638A (en) * 2020-12-11 2021-03-26 上海明略人工智能(集团)有限公司 Meeting place concentration evaluation method and system based on face recognition and behavior detection
CN113139439B (en) * 2021-04-06 2022-06-10 广州大学 Online learning concentration evaluation method and device based on face recognition
CN113393160A (en) * 2021-07-09 2021-09-14 北京市博汇科技股份有限公司 Classroom concentration analysis method and device, electronic equipment and medium
CN113783709B (en) * 2021-08-31 2024-03-19 重庆市易平方科技有限公司 Conference participant monitoring and processing method and device based on conference system and intelligent terminal
CN114267072A (en) * 2021-12-27 2022-04-01 海信集团控股股份有限公司 Electronic device and concentration degree determination method
CN116543446B (en) * 2023-05-11 2023-09-29 浙江优图教育科技有限公司 Online learning concentration recognition analysis method based on AI technology
CN118413627B (en) * 2024-04-29 2024-10-01 广州承道信息科技有限公司 Video conference remote interaction system based on cloud database

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104173063A (en) * 2014-09-01 2014-12-03 北京工业大学 Visual attention detection method and system
CN105825189A (en) * 2016-03-21 2016-08-03 浙江工商大学 Device for automatically analyzing attendance rate and class concentration degree of college students

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135983A (en) * 2011-01-17 2011-07-27 北京邮电大学 Group dividing method and device based on network user behavior
KR20170136160A (en) * 2016-06-01 2017-12-11 주식회사 아이브이티 Audience engagement evaluating system
CN106128188A (en) * 2016-08-31 2016-11-16 华南理工大学 Desktop education focus analyzes system and the method for analysis thereof
CN107918755A (en) * 2017-03-29 2018-04-17 广州思涵信息科技有限公司 A kind of real-time focus analysis method and system based on face recognition technology
CN108021893A (en) * 2017-12-07 2018-05-11 浙江工商大学 It is a kind of to be used to judging that student to attend class the algorithm of focus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104173063A (en) * 2014-09-01 2014-12-03 北京工业大学 Visual attention detection method and system
CN105825189A (en) * 2016-03-21 2016-08-03 浙江工商大学 Device for automatically analyzing attendance rate and class concentration degree of college students

Also Published As

Publication number Publication date
CN109522815A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN107292271B (en) Learning monitoring method and device and electronic equipment
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN108648757B (en) Analysis method based on multi-dimensional classroom information
CN110945522B (en) Learning state judging method and device and intelligent robot
CN111046819B (en) Behavior recognition processing method and device
US20180308114A1 (en) Method, device and system for evaluating product recommendation degree
CN111898881B (en) Classroom teaching quality assessment method, device, equipment and storage medium
CN113139439B (en) Online learning concentration evaluation method and device based on face recognition
CN113723530A (en) Intelligent psychological assessment system based on video analysis and electronic psychological sand table
CN113705510A (en) Target identification tracking method, device, equipment and storage medium
CN111860057A (en) Face image blurring and living body detection method and device, storage medium and equipment
CN116682052B (en) Teaching service platform based on cloud service
Huang et al. Research on learning state based on students’ attitude and emotion in class learning
CN116261009A (en) Video detection method, device, equipment and medium for intelligently converting video audience
CN116259104A (en) Intelligent dance action quality assessment method, device and system
CN113409822B (en) Object state determining method and device, storage medium and electronic device
Verma et al. Automated smart artificial intelligence-based proctoring system using deep learning
CN111274898A (en) Method and device for detecting group emotion and cohesion in video stream based on deep learning
Ramos et al. A Facial Expression Emotion Detection using Gabor Filter and Principal Component Analysis to identify Teaching Pedagogy
Gupta et al. An adaptive system for predicting student attentiveness in online classrooms
Hashmani et al. Hybrid automation of student activity records in virtual learning environments in semi-dark scenarios
Gadkar et al. Online Examination Auto-Proctoring System
CN107145863A (en) A kind of Face detection method and system
Sathya et al. Assessment of Student Attentiveness in Classroom Environment Using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1005, Sanhang Technology Building, Northwest Polytechnic University, Gaoxin South 9th Road, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: SHENZHEN BOWEI EDUCATION TECHNOLOGY Co.,Ltd.

Address before: Room 1005, Sanhang Science and Technology Building, Northwest Polytechnic University, Gaoxin Nanjiu Road, Yuehai Street, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN BOWEI EDUCATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant