CN112819665A - Classroom state evaluation method and related device and equipment

Info

Publication number
CN112819665A
CN112819665A
Authority
CN
China
Prior art keywords
target object
expression
classroom
expression information
information
Prior art date
Legal status
Withdrawn
Application number
CN202110124484.0A
Other languages
Chinese (zh)
Inventor
李靖 (Li Jing)
Current Assignee
Shanghai Sensetime Technology Development Co Ltd
Original Assignee
Shanghai Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Technology Development Co Ltd
Priority to CN202110124484.0A
Publication of CN112819665A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395 Quality analysis or management
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/174 Facial expression recognition
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Abstract

The application discloses a classroom state evaluation method and a related device and equipment. The classroom state evaluation method includes: acquiring image data; identifying a target object in the image data and determining the identity information and expression information of the target object; and evaluating the classroom state of the target object based on the expression information and a preset analysis standard. With this scheme, the classroom state of the target object in class can be analyzed from its expression information without the target object being aware of the observation, which reduces interference from the external environment and guarantees the objectivity and reliability of the classroom state detection result to a certain extent.

Description

Classroom state evaluation method and related device and equipment
Technical Field
The present application relates to the field of classroom assessment technologies, and in particular to a classroom state assessment method and a related device and apparatus.
Background
With rising living standards and the development of science and technology, people's requirements for the quality of education keep increasing. Schools and educational institutions pay more and more attention to formulating different education plans according to the interests or aptitudes of different students, so as to teach each student in accordance with his or her aptitude and fully develop each student's interests or aptitudes.
However, when students are at a young age (1-7 years old), it is difficult for a school or educational institution to directly obtain, from a student's own behavior, quality evaluations such as the student's interest, knowledge acceptance, and engagement in each class. The quality of classroom education for young students is therefore difficult to feed back effectively.
At present, classroom state quality assessment of students is generally performed by teachers directly observing and assessing students in class to judge the quality of each student's classroom state. However, such manual observation is easily affected by the teacher's subjective consciousness, and the results are often not objective.
Disclosure of Invention
The present application provides at least a classroom state assessment method and a related device and equipment.
A first aspect of the present application provides a classroom state assessment method, including: acquiring image data; identifying a target object in the image data, and determining identity information of the target object and expression information of the target object; and evaluating the classroom state of the target object based on the expression information and a preset analysis standard.
In this way, by acquiring image data and then identifying the target object in the image data, the identity information and expression information of the target object are determined, and the classroom state of the target object is evaluated based on the expression information and the preset analysis standard. The whole classroom state detection process requires no manual intervention, and the classroom state of the target object is evaluated by means of its identity information and expression information, so the objectivity and reliability of the classroom state detection result are guaranteed to a certain extent.
Wherein the step of acquiring image data includes: acquiring image data of a target area within a preset time period. Before the step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard, the method includes: determining, by using the image data, the sub-region to which the target object belongs; determining the function type of the sub-region to which the target object belongs; and determining the class type of the target object based on the function type and the preset time period. The step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard includes: evaluating the classroom state of the target object based on the expression information and the class type.
Therefore, the sub-region to which the target object belongs is determined by using the image data of the target region in the preset time period, the class type of the target object is determined according to the function type of the sub-region, the class state of the target object is evaluated based on the expression information and the class type, and the accuracy of class state detection is improved.
The step of evaluating the classroom state of the target object based on the expression information and the classroom type comprises the following steps of: presetting an analysis standard based on the class type of the target object; and evaluating the classroom state of the target object by referring to the analysis standard and the expression information.
Therefore, the analysis standard is preset based on the class type of the target object, so that different analysis standards are adopted for different class types, thereby improving the objectivity and accuracy of classroom state detection.
The method comprises the following steps of evaluating the classroom state of a target object based on expression information and preset analysis standards, wherein the steps comprise: presetting an analysis standard based on expression information of a plurality of target objects; and evaluating the classroom state of the target object by referring to the analysis standard and the expression information.
Therefore, the analysis criteria are preset based on the expression information of the plurality of target objects to set the analysis criteria in consideration of the overall expression information of the target objects, thereby improving the objectivity and accuracy of the classroom state detection.
The step of presetting the analysis standard based on the expression information of the plurality of target objects includes: determining the expression information of a plurality of target objects contained in the image data, wherein the expression information belongs to a plurality of types; determining the proportion of each type of expression information among all types of expression information; and determining the analysis standard based on this proportion.
Therefore, the proportion of each type of expression information among all types of expression information is determined, and the analysis standard is determined based on this proportion, thereby improving the objectivity of classroom state detection.
Before the step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard, the method comprises the following steps of: determining the expression type of the expression information of the target object; the step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard comprises the following steps: and evaluating the classroom state of the target object based on the expression type to which the expression information belongs and a preset analysis standard.
Therefore, the classroom state of the target object is evaluated by determining the expression type to which the expression information of the target object belongs and based on the expression type to which the expression information belongs and a preset analysis standard. Therefore, the expression meaning of the expression information of the target object is refined, the accuracy of expression analysis is improved, and the reliability of classroom evaluation is further improved.
The step of determining the expression type to which the expression information of the target object belongs comprises the following steps: sequentially comparing the expression information with the expression subtypes to determine at least one expression subtype corresponding to the expression information; the step of evaluating the classroom state of the target object based on the expression type to which the expression information belongs and a preset analysis standard comprises the following steps of: respectively counting the duration of each expression subtype corresponding to the expression information; and detecting and analyzing the classroom state of the target object by integrating each expression subtype and the duration of each expression subtype to obtain an initial evaluation result.
Therefore, the expression information of the target object is classified into the exact expression sub-types, the duration of the expression sub-types is obtained, the class state of the target object is detected and analyzed through the expression sub-types and the duration thereof, the subjective judgment data is further quantized, and data support is provided for the class state detection of the target object.
The steps of identifying a target object in the image data and determining the identity information and the expression information of the target object include: carrying out feature extraction on a target object in the image data to obtain the face feature of the target object; and determining the identity information of the target object and the expression information of the target object based on the facial features of the target object.
Therefore, the facial features of the target object are obtained by performing feature extraction on the target object in the image data, and the identity information and expression information of the target object are determined based on these facial features. This improves the accuracy of feature extraction and thereby further improves the reliability of classroom state detection.
After the steps of identifying a target object in image data and determining identity information and expression information of the target object, the method comprises the following steps: establishing a corresponding relation between the identity information and the expression information of the target object; and establishing an expression archive of the target object based on the identity information and the expression information of the target object by utilizing the corresponding relation.
Therefore, a correspondence between the identity information and the expression information of the target object is established, and an expression archive of the target object is created based on the identity information and the expression information by using this correspondence, so as to record the expression information of the target object and facilitate subsequent evaluation.
Wherein, the method further comprises: acquiring expression archives of all target objects in a target classroom; and evaluating the target classroom state based on the expression archives of all target objects.
Therefore, the target classroom state is evaluated based on the expression archives of all target objects, so that the objectivity of the evaluation result is improved.
After the step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard, the method comprises the following steps of: acquiring an initial evaluation result for evaluating the classroom state of the target object based on the expression information; acquiring manual evaluation based on the expression information; and integrating the initial evaluation result and the manual evaluation to generate an evaluation result of the classroom state of the target object.
Therefore, after an initial evaluation result for evaluating the classroom state of the target object based on the expression information is obtained, a manual evaluation is obtained based on the expression information, and the initial evaluation result and the manual evaluation are integrated to generate the evaluation result of the classroom state of the target object. In this way, false alarms in the initial evaluation result are reduced through manual evaluation, improving the accuracy of the evaluation result of the classroom state of the target object.
A second aspect of the present application provides a classroom state detection device, including: an acquisition module, configured to acquire image data; a determining module, configured to identify the target object in the image data and determine the identity information and expression information of the target object; and an evaluation module, configured to evaluate the classroom state of the target object based on the expression information and a preset analysis standard.
A third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the classroom state assessment method in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the method for assessing a classroom state of the first aspect described above.
According to the above schemes, interference from irrelevant image data is reduced by acquiring the image data of the target area within the preset time period. The target object in the image data is then identified through a recognition system, and the identity information and expression information of the target object are determined; the classroom state of the target object is detected and analyzed based on the expression information, so that the classroom state of the target object is evaluated through the initial evaluation result. The whole classroom state detection process requires no manual intervention, and the classroom state of the target object is evaluated by means of its identity information and expression information, so the objectivity and reliability of the classroom state detection result are guaranteed to a certain extent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a classroom status assessment method according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of the classroom status assessment method of the present application;
FIG. 3a is a diagram illustrating an embodiment of positive expressions in the initial evaluation results of the embodiment of FIG. 2;
FIG. 3b is a diagram illustrating an embodiment of negative expressions in the initial evaluation results of the embodiment of FIG. 2;
FIG. 4 is a flowchart illustrating a classroom status assessment method according to another embodiment of the present application;
FIG. 5 is a block diagram of an embodiment of the classroom status detection system of the present application;
FIG. 6 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 7 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
According to the classroom state evaluation method of the present application, video data of a target area within a preset time period is obtained first, and image data is uniformly extracted from the video data along the time axis, which extends the observation time of the analysis object and improves the reliability of classroom state detection. The sub-region to which the target object belongs is then determined using the image data, the function type of that sub-region is determined, the class type of the target object is determined based on the function type and the preset time period, and the analysis standard is determined based on the class type, so that different analysis standards can be adopted for different class types when evaluating the classroom state of the target object. Alternatively, the analysis standard may be set from the overall expression information of a plurality of target objects in the image data, so that the classroom state of a target object can be evaluated by combining the overall expression information of the plurality of target objects with the expression information of that target object, improving the objectivity and comprehensiveness of the analysis standard. The embodiment also performs feature extraction on the target object in the image data through a feature extraction network to obtain the facial features of the target object, determines the identity information and expression information of the target object based on those facial features, sequentially compares the expression information within the preset time period with the expression subtypes to determine the subtypes to which the expression information corresponds, and obtains the expression types of the target object within the preset time period from those subtypes. The classroom state of the target object is then detected and analyzed based on these expression types to obtain an evaluation result, which makes it convenient for a teacher to give feedback on the different expressions the target object showed in class and to track and guide in time, thus helping teach the target object in accordance with its aptitude. Specifically, please refer to FIG. 1.
FIG. 1 is a flowchart illustrating an embodiment of a classroom state assessment method according to the present application. Specifically, the method may include the following steps:
step S11: image data is acquired.
The classroom state assessment method of the present embodiment can be applied to infants or young children (1-7 years old) to perform quality detection of their classroom states. Application scenarios of classroom state detection may include education settings for young children, such as kindergartens, preschool training/education institutions, primary schools, and even home education settings, which are not limited herein. In a specific application scenario, the target area may include classroom places such as a classroom, a living room, a playground, and a reading room, and the classroom state of the target object is detected when the target object is in class.
Image data of the target object is first acquired by an image acquisition device, so that the classroom state of the target object can be evaluated based on the image data.
In a specific application scenario, the image data is acquired by a fixed-position image acquisition device, so that fixed-angle image data is obtained, and the classroom state of the target object is evaluated based on this fixed-angle image data. In a specific application scenario, the image data may be a panoramic image, a top view, or another image that can comprehensively represent the target area scene.
Step S12: and identifying the target object in the image data, and determining the identity information of the target object and the expression information of the target object.
In a specific application scenario, a target object in image data of a fixed angle is identified by an identification system to determine identity information of the target object and expression information of the target object. In a specific application scenario, when a plurality of target objects exist in the image data, each target object may be identified separately to determine the identity information of all target objects and the expression information of each target object in the image data. In a specific application scenario, the recognition system may include a face recognition system, an expression recognition system, a feature extraction system, and other systems capable of recognizing the identity information of the target object and the expression information of the target object, which are not limited herein.
Step S13: and evaluating the classroom state of the target object based on the expression information and a preset analysis standard.
The classroom state of the target object is evaluated based on the expression information of the target object and a preset analysis standard, so as to obtain the classroom performance of the target object. The preset analysis standard may be set according to the class type or according to the overall expression condition of all target objects, and may be set according to the actual situation, which is not limited herein.
In a specific application scenario, when the target object is in a learning-type class, the calm expression may be used as the positive expression of the analysis standard. Specifically, it can be judged whether the proportion of the target object's calm-expression duration among all expressions exceeds a certain threshold; if so, it indicates that the classroom state of the target object is good; if not, it indicates that the classroom state of the target object is poor. In a specific application scenario, if the target object is in an activity-type class, the happy expression may be taken as the positive expression of the analysis standard. Specifically, it can be judged whether the proportion of the target object's happy-expression duration among all expressions exceeds a certain threshold; if so, it indicates that the classroom state of the target object is good; if not, it indicates that the classroom state of the target object is poor. This application scenario only describes the detection and analysis process to a certain extent for ease of understanding, and does not limit the detection and analysis mode.
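As an illustration of the threshold judgment above, the following is a minimal sketch in Python; the expression labels, the example durations, and the 0.5 threshold are illustrative assumptions, not values specified by this application.

```python
def evaluate_classroom_state(expression_durations, positive_expression, threshold=0.5):
    """Judge the classroom state from the share of the positive expression.

    expression_durations: dict mapping expression label -> observed seconds.
    positive_expression: the expression treated as positive for this class type,
        e.g. "calm" for a learning-type class or "happy" for an activity-type class.
    """
    total = sum(expression_durations.values())
    if total == 0:
        return "unknown"
    ratio = expression_durations.get(positive_expression, 0) / total
    return "good" if ratio >= threshold else "poor"

# e.g. a learning-type class where "calm" is the positive expression:
print(evaluate_classroom_state({"calm": 40, "happy": 10, "sad": 5}, "calm"))  # good
```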
In this way, the classroom state evaluation method of this embodiment determines the identity information and expression information of the target object by acquiring image data and identifying the target object in the image data, and obtains the evaluation result of the classroom state of the target object based on the expression information and the preset analysis standard. The whole classroom state detection process requires no manual intervention, and the classroom state of the target object is evaluated by means of its identity information and expression information, so the objectivity and reliability of the classroom state detection result are guaranteed to a certain extent.
Referring to FIG. 2, FIG. 2 is a schematic flowchart illustrating an evaluation method for classroom status according to another embodiment of the present application. Specifically, the method may include the following steps:
step S21: the method comprises the steps of obtaining video data of a target area in a preset time period, and evenly extracting a plurality of pieces of image data from the video data in the preset time period based on a time axis.
When the classroom state of the target object is evaluated, the classroom state of the target object throughout the whole class needs to be analyzed, which ensures the accuracy of the final classroom state evaluation to a certain extent. Therefore, video data of the target area within a preset time period is acquired through the image acquisition device, and the classroom state of the target object within the preset time period is evaluated according to this video data. A plurality of pieces of image data are uniformly extracted from the video data of the preset time period based on the time axis, where the time span of the plurality of pieces of image data covers the preset time period. In a specific application scenario, when classroom state detection needs to be performed on the students of the Chinese class held from 10 am to 11 am on July 21, video data of the Chinese classroom from 10 am to 11 am on July 21 is acquired first, so that Chinese-classroom state detection is performed on these students based on the video data.
In a specific application scenario, video data of the target area within a preset time period of 1 hour may be acquired first. Based on the time axis, a plurality of pieces of image data are uniformly extracted from the 1 hour of video data; for example, 3 images are extracted from each minute of video data, so that the classroom state of the target object is evaluated based on 1 × 60 × 3 images. In practical applications, the number of images extracted from the video data needs to ensure that the time stamps of all images are uniformly distributed over the preset time period; the specific value may be determined according to the actual situation and is not limited herein.
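The uniform frame extraction described above could look like the following sketch, assuming OpenCV (cv2) is used for video decoding; the 3-frames-per-minute rate follows the example above, and the seek-based sampling is an implementation assumption rather than the application's prescribed method.

```python
import cv2

def sample_frames(video_path, frames_per_minute=3):
    """Uniformly extract frames_per_minute images from each minute of video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    if fps <= 0:
        fps = 25.0  # fall back if the container reports no frame rate
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = int(fps * 60 / frames_per_minute)  # frames between two samples
    images = []
    for index in range(0, total_frames, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, index)
        ok, frame = cap.read()
        if ok:
            images.append(frame)
    cap.release()
    return images  # for a 1-hour video: about 1 * 60 * 3 = 180 images
```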
In a specific application scenario, image data of the target area is acquired from a single angle by a fixed-position image acquisition device, so that fixed-angle image data is obtained. Performing classroom state detection on the target object based on fixed-angle image data makes it easier for the recognition system to recognize and analyze the image data, improving the accuracy of classroom state detection. In a specific application scenario, the image data may be a panoramic image, a top view, or another image that can comprehensively represent the target area scene.
Step S22: determining the function type of the sub-region to which the target object belongs by using the image data, and determining the class type of the target object based on the function type and the preset time period.
And performing functional division on the target area to divide the target area into a plurality of sub-areas with function types. In a specific application scenario, the classroom of the kindergarten can be divided into functional sub-areas such as a learning classroom, an activity classroom or a reading classroom according to functions.
In a specific application scenario, each image acquisition device may be associated with each sub-region, and when the classroom state of a target object of a certain functional sub-region needs to be evaluated, the corresponding image acquisition device is turned on to acquire image data of the target functional sub-region.
After the image data is obtained, the sub-region to which the target object belongs is determined by using the image data, so that the function type of that sub-region is determined, and then the class type of the target object is determined based on the function type and the preset time period. In this embodiment, the class type of the target object may be determined based on both the function type of the sub-region and the preset time period. In other embodiments, however, the class type of the target object may be determined directly from the function type of its sub-region. For example, when the sub-region is a music room or a drawing room, the class type of the target object can be determined directly from the function type of the sub-region.
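A minimal sketch of this class-type determination follows; the room names, time slots, and class types in the lookup tables are hypothetical examples, not data from this application.

```python
# Rooms whose function type alone implies the class type.
FUNCTION_ONLY = {"music room": "music class", "drawing room": "drawing class"}

# Rooms that also need the preset time period to decide the class type.
TIMETABLE = {
    ("learning classroom", "10:00-11:00"): "language class",
    ("activity classroom", "14:00-15:00"): "activity class",
}

def class_type(function_type, time_period):
    # Some sub-regions imply the class type directly; others need the time slot.
    if function_type in FUNCTION_ONLY:
        return FUNCTION_ONLY[function_type]
    return TIMETABLE.get((function_type, time_period), "unknown")
```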
Step S23: and performing feature extraction on the target object in the image data through a feature extraction network to obtain the face feature of the target object, and determining the identity information of the target object and the expression information of the target object based on the face feature of the target object.
When the classroom states of target objects are evaluated, the facial features of each target object may first be registered in a database to establish an identity information base of the target objects. The target objects in the image data are then identified based on this identity information base to obtain the identity information of each target object, which facilitates the subsequent binding of each recognized target object's expression information to its identity information.
In a specific application scenario, when 30 students in a kindergarten are subjected to classroom state detection, the 30 students are subjected to face feature collection, and an identity information base is established based on the 30 corresponding face features. When classroom state detection is carried out on 30 students in a certain classroom of a kindergarten class, the face features in the identity information base are used for respectively carrying out identity recognition on the 30 students to obtain the identity information of each student, and the subsequent binding of the recognized expression information of each student and the identity information of each student is facilitated.
Specifically, when the identity of a target object in a classroom is identified, feature extraction is performed on the target object in the image data through a feature extraction network to obtain the facial features of the target object. A face recognition system then recognizes the facial features of the target object to obtain its identity information; specifically, the facial features of the target object are compared one by one with the facial features in the identity information base to determine the identity information of the target object. An expression recognition system then recognizes the facial features of the target object within the preset time period to obtain the expression information of the target object within the preset time period. A correspondence between the identity information and the expression information of the target object is established, so as to associate the finally obtained evaluation result with the target object.
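The one-by-one comparison against the identity information base might be implemented as in the sketch below; cosine similarity and the 0.6 acceptance threshold are assumptions, since the application does not specify the matching metric.

```python
import numpy as np

def identify(face_feature, identity_base, threshold=0.6):
    """identity_base: dict mapping identity -> enrolled feature vector."""
    best_id, best_score = None, threshold
    for identity, enrolled in identity_base.items():
        # Cosine similarity between the extracted and the enrolled feature.
        score = np.dot(face_feature, enrolled) / (
            np.linalg.norm(face_feature) * np.linalg.norm(enrolled))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id  # None if no enrolled feature is similar enough
```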
Step S24: and presetting an analysis standard based on the class type of the target object.
Since the target object expresses different types of positive expressions in different classes, different analysis standards need to be preset for different classes in this step.
When the target object is in the same target area, the class type of the target object may be different according to the different preset time periods, and therefore, in this embodiment, the class type of the target object is determined together by the function type of the sub-area to which the target object belongs and the preset time period, and then the analysis standard matched with the class type is determined.
For example, when the target object is in an activity classroom, full participation in classroom activities indicates that the classroom state of the target object is good, and therefore the happy expression type can be taken as the positive expression of the activity classroom's analysis standard. However, when the target object is in a reading classroom, quiet reading indicates that the classroom state of the target object is good; the happy expression type is therefore a poor fit as the positive expression, and the calm expression type or the confused expression type can instead be used as the positive expression of the reading classroom's analysis standard.
In this step, the analysis criteria of the class may be determined based on the class type of the target object, so that the class status of the target object is evaluated with reference to the analysis criteria and the expression information.
Step S24 of this embodiment may be executed before step S23, and the specific execution step may be set based on actual requirements, which is not limited herein.
Step S25: and comparing the expression information with the expression subtypes in sequence, determining at least one expression subtype corresponding to the expression information, and counting the duration of each expression subtype corresponding to the expression information respectively.
After the facial features of the target object within the preset time period are recognized by the expression recognition system, the expression information of the target object within the preset time period is obtained. The expression information within the preset time period is sequentially compared with the expression subtypes to determine the expression type to which the expression information belongs, that is, at least one expression subtype to which the expression information corresponds, and the duration of each expression subtype corresponding to the expression information within the preset time period is counted separately. In a specific application scenario, the expression subtypes include: anger, joy, sadness, calmness, surprise, confusion, aversion, fear, squinting, screaming, and so on. In other application scenarios, the expression subtypes may be other expressions, which are not limited herein.
After the expression subtypes of the target object are obtained, the duration of each expression subtype of the target object within the preset time period is further obtained. In a specific application scenario, when 5 expression subtypes of a target object are recognized from the image data of a preset time period, the durations of these 5 expression subtypes within the preset time period are counted separately, yielding at least one expression subtype of the target object and the duration corresponding to each subtype, that is, the expression types of the target object within the preset time period.
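A sketch of the duration counting described above follows; it assumes expressions are recognized per sampled frame and that, at 3 frames per minute, each frame stands for roughly 20 seconds, which is an assumption rather than a rule from this application.

```python
from collections import Counter

def subtype_durations(frame_labels, seconds_per_frame=20):
    """frame_labels: the expression subtype recognized on each sampled frame."""
    counts = Counter(frame_labels)
    return {subtype: n * seconds_per_frame for subtype, n in counts.items()}

# e.g. subtype_durations(["calm", "calm", "happy", ...]) over a one-hour class
# might yield {"calm": 2400, "happy": 600, ...} (durations in seconds).
```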
And evaluating the classroom state of the target object based on each expression subtype to which the expression information belongs and a preset analysis standard to obtain an initial evaluation result.
Step S26: and detecting and analyzing the classroom state of the target object based on the duration of the expression subtype of the target object in the preset time period and the analysis standard to obtain an evaluation result.
The classroom state of the target object is evaluated based on the expression types of the target object within the preset time period and the analysis standard to obtain an evaluation result. In a specific application scenario, the analysis standard may classify each expression subtype into positive expressions and negative expressions, and the classroom state of the target object is evaluated from the durations of its positive and negative expressions. In a specific application scenario, when classroom state detection is performed for the activity class of the target object, happiness, calmness, surprise, confusion, and squinting can be used as positive expressions of the activity class, while anger, sadness, aversion, fear, screaming, and other expression subtypes can be used as negative expressions of the activity class. The classroom state of the target object is evaluated with this classification of positive and negative expressions as the analysis standard. When the duration of the positive expressions exceeds a certain threshold, or the duration ratio between the positive and negative expressions exceeds a ratio threshold, it can be determined that the target object was in a good state in class during the preset time period and had high classroom acceptance. The threshold and the ratio threshold may be set according to the practical application and are not limited herein.
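As a sketch of this positive/negative evaluation, the following uses the activity-class split named above; the 1800-second threshold and the 3.0 ratio threshold are illustrative assumptions, since the application leaves both values to the practical application.

```python
POSITIVE = {"happy", "calm", "surprised", "confused", "squinting"}
NEGATIVE = {"angry", "sad", "averse", "afraid", "screaming"}

def evaluate(durations, min_positive_seconds=1800, min_ratio=3.0):
    """durations: dict mapping expression subtype -> seconds observed."""
    pos = sum(t for s, t in durations.items() if s in POSITIVE)
    neg = sum(t for s, t in durations.items() if s in NEGATIVE)
    # Good state if positive time passes the threshold, or if the
    # positive-to-negative duration ratio passes the ratio threshold.
    if pos >= min_positive_seconds or (neg > 0 and pos / neg >= min_ratio):
        return "good state, high classroom acceptance"
    return "poor state"
```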
In a specific application scenario, after the target object in the image data is identified and its identity information and expression information are determined, a correspondence between the identity information and the expression information of the target object can be established, and an expression archive library of target objects is created based on this correspondence, so that the expression information of each target object is stored and an expression archive of the target object is obtained. In the subsequent classroom state evaluation process, the expression archives of all target objects in the target classroom can then be obtained from the expression archive library, and the state of the target classroom is evaluated based on the expression archives accumulated from previous classroom evaluations. Further using the expression archives of the target objects in this way improves the accuracy of classroom evaluation.
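The expression archive library might be organized as in the sketch below; the record fields and the helper function are hypothetical, intended only to show the identity-to-expression correspondence being stored for later classroom evaluation.

```python
expression_archive_library = {}  # identity -> list of per-class records

def archive(identity, time_period, class_type, durations):
    """Append one class's expression record to the target object's archive."""
    expression_archive_library.setdefault(identity, []).append({
        "time_period": time_period,   # e.g. "2021-07-21 10:00-11:00"
        "class_type": class_type,
        "expression_durations": durations,
    })

# Later evaluations can read the archives of all target objects in a target
# classroom and evaluate the overall classroom state from them.
```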
In a specific application scenario, the possibility of false alarms when analyzing the expression information is considered. Therefore, the evaluation result obtained by evaluating the classroom state of the target object based on the expression information can be used as an initial evaluation result, and a secondary evaluation is performed based on the initial evaluation result to improve the accuracy of the finally obtained evaluation result.
In a specific application scenario, after the initial evaluation result for evaluating the classroom state of the target object based on the expression information is obtained, the initial evaluation result may be input into a teaching analysis system to evaluate the classroom state of the target object and obtain the evaluation result of the target object. In a specific application scenario, after the initial evaluation result is obtained, the evaluation result of the target object within the preset time period can also be obtained by performing a secondary evaluation of the initial evaluation result through manual inspection, for example by a teacher. In this application scenario, classroom state detection is assisted by manual inspection, which makes the detection result more humane and accurate to a certain extent.
In a specific application scenario, the possibility of false alarms when analyzing the expression information is considered, so after the initial evaluation result of evaluating the classroom state of the target object based on the expression information is obtained, a manual evaluation is further obtained based on the expression information, and the initial evaluation result and the manual evaluation are then integrated to generate the evaluation result of the classroom state of the target object, thereby improving the accuracy of the evaluation result.
In a specific application scenario, the manual evaluation may be a teacher's evaluation, for example: the student's eyes are focused and attentive; the student does nothing else while listening to a story or a speaker; the student is calm and keeps a steady posture while listening, without other limb movements; the student's thinking is not disturbed by the surroundings; the student can understand what the speaker says; and so on.
The initial evaluation result of the step of evaluating the classroom status of the target object based on the expression information may include each expression subtype and time length thereof in the positive expressions and each expression subtype and time length thereof in the negative expressions of each target object.
Referring to FIGS. 3a-3b, FIG. 3a is a schematic diagram of an embodiment of the positive expressions in the initial evaluation result of the embodiment shown in FIG. 2, and FIG. 3b is a schematic diagram of an embodiment of the negative expressions in the initial evaluation result of the embodiment shown in FIG. 2.
The positive expression graph 10 shows a first positive expression subtype 11 and its first positive duration b, a second positive expression subtype 12 and its second positive duration a, a third positive expression subtype 13 and its third positive duration c, and a fourth positive expression subtype 14 and its fourth positive duration d.
The negative expression graph 20 shows a first negative expression subtype 21 and its first negative duration f, a second negative expression subtype 22 and its second negative duration h, a third negative expression subtype 23 and its third negative duration e, and a fourth negative expression subtype 24 and its fourth negative duration g. The coordinate system of the positive expression graph 10 is the same as that of the negative expression graph 20.
From the initial evaluation result, the expression subtypes and durations of the target object within the preset time period can be displayed clearly and intuitively; expression data that carries a certain subjectivity is thus quantified, improving the accuracy of classroom state detection. The target object's degree of interest, knowledge acceptance, engagement, and the like in classroom teaching can be analyzed from the detection result, solving the problem that subjectively judged data cannot be quantified. The histograms in FIGS. 3a and 3b are only one presentation form of the initial evaluation result; in other embodiments, the initial evaluation result may also be presented as a line chart, a numerical report, or the like, which is not limited herein. The presentation forms of this embodiment are also applicable to the finally obtained evaluation result.
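Histograms like FIGS. 3a and 3b could be produced as in the following sketch, assuming matplotlib is available; the subtype labels and durations are made-up example values, not data from this application.

```python
import matplotlib.pyplot as plt

def plot_result(positive, negative):
    """positive/negative: dicts mapping expression subtype -> duration (s)."""
    # Two panels sharing one y-axis, mirroring the shared coordinate system
    # of the positive and negative expression graphs.
    fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
    ax1.bar(list(positive), list(positive.values()))
    ax1.set_title("Positive expressions")
    ax1.set_ylabel("Duration (s)")
    ax2.bar(list(negative), list(negative.values()))
    ax2.set_title("Negative expressions")
    plt.show()

plot_result({"happy": 1200, "calm": 900}, {"sad": 150, "angry": 60})
```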
In a specific application scenario, the presentation form of the initial evaluation result or the evaluation result may omit the classification into positive and negative expressions and only show each expression subtype and its duration for the target object within the preset time period. This also quantifies subjective expression data, providing a data basis for a subsequent teaching analysis system or manual analysis and for evaluating the overall classroom state.
In this way, the classroom state assessment method of this embodiment first obtains video data of the target area within a preset time period and uniformly extracts a plurality of pieces of image data from the video data based on the time axis, which extends the observation time of the analysis object and improves the reliability of classroom state detection. The sub-region to which the target object belongs is determined using the image data, the function type of that sub-region is determined, and the class type of the target object is determined based on the function type and the preset time period. Feature extraction is then performed on the target object in the image data through a feature extraction network to obtain the facial features of the target object, the identity information and expression information of the target object are determined based on those facial features, and the analysis standard is preset based on the class type of the target object, so that different analysis standards can be adopted for different class types when evaluating the classroom state of the target object. This embodiment also sequentially compares the expression information within the preset time period with the expression subtypes to determine the subtypes corresponding to the expression information, and obtains the expression types of the target object within the preset time period from those subtypes, so that the classroom state of the target object is detected and analyzed based on these expression types to obtain the evaluation result. The classroom state evaluation method of this embodiment therefore has a certain flexibility and freedom. In addition, the method involves no manual intervention; the whole analysis is imperceptible to the target object, which effectively reduces interference factors, so the final detection result has a certain accuracy and reliability.
Referring to FIG. 4, FIG. 4 is a schematic flowchart illustrating a classroom status assessment method according to another embodiment of the present application. Specifically, the method may include the following steps:
step S31: the method comprises the steps of obtaining video data of a target area in a preset time period, and evenly extracting a plurality of pieces of image data from the video data in the preset time period based on a time axis.
The specific content of step S31 is the same as step S21 in the previous embodiment, and please refer to the foregoing description, which is not repeated herein.
Step S32: and performing feature extraction on the target object in the image data through a feature extraction network to obtain the face feature of the target object, and determining the identity information of the target object and the expression information of the target object based on the face feature of the target object.
The specific content of step S32 is the same as step S23 in the previous embodiment, and please refer to the foregoing description, which is not repeated herein.
The method comprises the steps of extracting the features of all target objects in image data through a feature extraction network to obtain the face features of all the target objects, and then determining the identity information and the expression information of all the target objects based on the face features of all the target objects until the identity information and the expression information of all the target objects are obtained.
Step S33: determining expression information of a plurality of target objects contained in the image data, wherein the types of the expression information are multiple, determining the proportion of each type of expression information to the total type of expression information, and determining an analysis standard based on the proportion.
And integrating the expression information of the target objects obtained in the previous step to obtain the overall expression information of the target objects, and presetting an analysis standard based on the overall expression information to improve the objectivity of the analysis standard in judging the classroom state.
In a specific application scenario, analysis criteria may be preset based on the expression information of a plurality of target objects, so as to evaluate the classroom state of the target objects with reference to the analysis criteria and the expression information. The expression information of the plurality of target objects may refer to the overall expression information of all target objects in the current image data, may refer to the overall expression information of all target objects acquired in the same time period and the same place in the past, and is not limited specifically.
In a specific application scenario, the expression information of all target objects contained in the currently obtained image data is determined first, and the proportion of each type of expression information among all types of expression information is determined, so that the analysis standard is determined based on this proportion. The expression information belongs to a plurality of types, and "all types of expression information" refers to the expression types to which the expression information of all target objects belongs.
In a specific application scenario, the step of determining the proportion of each type of expression information to the total type of expression information may be obtained by calculating a ratio between the duration of each type of expression information and the duration of the total type of expression information.
For example: when the image data contains 10 target objects, the expression information of the 10 target objects is further determined. It is assumed that there are 3 kinds of expression information of the total types of 10 target objects: calm, angry, and sad. When the proportion of the expression information of the calm expression to the total type is 1/2, the proportion of the expression information of the angry expression to the total type is 1/4, and the proportion of the expression information of the sad expression to the total type is 1/4, the calm expression with the largest proportion is taken as the positive expression of the analysis standard, and the proportion of 1/2 corresponding to the calm expression is taken as the analysis standard to be used for judging the threshold value of the classroom state. When the ratio of the calm expression of a certain target object to the expression information thereof exceeds 1/2, it can be determined that the classroom status of the target object is good. Wherein, the proportion ratio can be obtained by calculating the duration of each expression type.
Step S34: and comparing the expression information with the expression subtypes in sequence, determining at least one expression subtype corresponding to the expression information, and counting the duration of each expression subtype corresponding to the expression information respectively.
The specific content of step S34 is the same as step S25 in the previous embodiment, and please refer to the foregoing description, which is not repeated herein.
Step S35: and detecting and analyzing the classroom state of the target object based on the duration of the expression subtype of the target object in the preset time period and the analysis standard to obtain an evaluation result.
The specific content of step S35 is the same as that of step S26 in the previous embodiment; please refer to the foregoing description, which is not repeated herein.
In this way, the classroom state evaluation method of this embodiment determines the expression information of a plurality of target objects contained in the image data, and then determines the proportion of each type of expression information among all types to determine the analysis standard based on this proportion. The analysis standard can thus be set from the overall expression information of the target objects in the image data, providing an objective and comprehensive analysis standard for detecting and analyzing the classroom state of a target object and improving the accuracy and objectivity of the evaluation result.
Referring to FIG. 5, FIG. 5 is a schematic frame diagram of an embodiment of the classroom state detection device of the present application. The classroom state detection device 50 includes an acquisition module 51, a determining module 52, and an evaluation module 53. The acquisition module 51 is configured to acquire image data; the determining module 52 is configured to identify the target object in the image data and determine the identity information and expression information of the target object; and the evaluation module 53 is configured to evaluate the classroom state of the target object based on the expression information and a preset analysis standard.
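The three-module structure might be expressed as in the sketch below; the method names and the wiring between the modules are assumptions, since the application describes the modules only functionally.

```python
class ClassroomStateDetectionDevice:
    """Mirrors the acquisition / determining / evaluation module split."""

    def __init__(self, acquisition_module, determining_module, evaluation_module):
        self.acquisition_module = acquisition_module  # acquires image data
        self.determining_module = determining_module  # identity + expressions
        self.evaluation_module = evaluation_module    # applies the analysis standard

    def run(self):
        image_data = self.acquisition_module.acquire()
        identity, expressions = self.determining_module.identify(image_data)
        return self.evaluation_module.evaluate(identity, expressions)
```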
According to the scheme, the identity information of the target object and the expression information of the target object are determined by acquiring the image data and then identifying the target object in the image data; thereby evaluating the classroom state of the target object based on the expression information and the preset analysis criteria. The whole classroom state detection process is free of manual intervention, and the classroom state of the target object is evaluated by means of identity information and expression information of the target object, so that objectivity and reliability of classroom state detection results are guaranteed to a certain extent.
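To make the module split of fig. 5 concrete, the sketch below wires three placeholder classes together in the same acquisition-determination-evaluation order; the stub recognizer and the class names are assumptions, since the disclosure does not prescribe an implementation.

```python
class AcquisitionModule:
    def acquire(self, source):
        # stand-in: a real system would read frames from a classroom camera
        return {"source": source, "faces": [("student_01", "calm")]}

class DeterminationModule:
    def determine(self, image_data):
        # stand-in recognizer: returns (identity, expression) per target object
        return image_data["faces"]

class EvaluationModule:
    def __init__(self, positive="calm"):
        self.positive = positive  # preset analysis standard

    def evaluate(self, determinations):
        return {identity: ("good" if expression == self.positive else "review")
                for identity, expression in determinations}

acq, det, ev = AcquisitionModule(), DeterminationModule(), EvaluationModule()
print(ev.evaluate(det.determine(acq.acquire("camera_0"))))
# {'student_01': 'good'}
```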
In some disclosed embodiments, the step of acquiring image data comprises: acquiring image data of a target area in a preset time period; before the step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard, the method comprises the following steps: determining the sub-region to which the target object belongs by using the image data; determining the function type of the sub-region to which the target object belongs; determining the class type of the target object based on the function type and the preset time period; the step of evaluating the classroom state of the target object based on the expression information and a preset analysis standard comprises: evaluating the classroom state of the target object based on the expression information and the class type.
Different from the embodiment, the image data of the target area in the preset time period is used for determining the sub-area to which the target object belongs, and then the class type of the target object is determined according to the function type of the sub-area, so that the class state of the target object is evaluated based on the expression information and the class type, and the accuracy of class state detection is improved.
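A hedged sketch of this flow might map sub-regions to function types and then, together with the preset time period, to class types; the room names, timetable, and lookup tables below are invented for illustration.

```python
FUNCTION_TYPE_BY_SUB_REGION = {"row_front": "lecture_area",
                               "bench_3": "laboratory_area"}
TIMETABLE = {("lecture_area", "09:00-10:00"): "literature",
             ("laboratory_area", "09:00-10:00"): "physics_experiment"}

def determine_class_type(sub_region, preset_time_period):
    # sub-region -> function type -> (with time period) class type
    function_type = FUNCTION_TYPE_BY_SUB_REGION[sub_region]
    return TIMETABLE[(function_type, preset_time_period)]

print(determine_class_type("bench_3", "09:00-10:00"))  # physics_experiment
```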
In some disclosed embodiments, the step of evaluating the classroom state of the target object based on the expression information and the class type includes: presetting an analysis standard based on the class type of the target object; and evaluating the classroom state of the target object by referring to the analysis standard and the expression information.
Different from the foregoing embodiment, the analysis criteria are preset based on the class type of the target object, so that different analysis criteria are adopted for different class types, thereby improving the objectivity and accuracy of classroom state detection.
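For instance, the per-class-type analysis standard could be kept in a lookup table, as in the sketch below; the class types, positive expressions, and thresholds shown are hypothetical values, not values taken from the disclosure.

```python
ANALYSIS_CRITERIA_BY_CLASS_TYPE = {
    "physics_experiment": {"positive": "focused", "threshold": 0.6},
    "literature":         {"positive": "calm",    "threshold": 0.5},
}

def evaluate_with_class_type(subtype_durations, class_type):
    # select the preset analysis standard matching this class type
    criteria = ANALYSIS_CRITERIA_BY_CLASS_TYPE[class_type]
    total = sum(subtype_durations.values())
    share = subtype_durations.get(criteria["positive"], 0.0) / total
    return "good" if share > criteria["threshold"] else "needs attention"

print(evaluate_with_class_type({"focused": 40.0, "confused": 20.0},
                               "physics_experiment"))  # good
```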
In some disclosed embodiments, the step of evaluating the classroom status of the target object based on the expression information and the preset analysis criteria includes: presetting an analysis standard based on expression information of a plurality of target objects; and evaluating the classroom state of the target object by referring to the analysis standard and the expression information.
Different from the foregoing embodiment, the analysis criteria are preset based on the expression information of a plurality of target objects to set the analysis criteria in consideration of the overall expression information of the target objects, thereby improving the objectivity and accuracy of the classroom state detection.
In some disclosed embodiments, the step of presetting the analysis criteria based on the expression information of a plurality of target objects includes: determining expression information of a plurality of target objects contained in the image data, the expression information being of a plurality of types; determining the proportion of each type of expression information to the total types of expression information; and determining the analysis criteria based on the proportion.
Different from the foregoing embodiment, the proportion of each type of expression information to the total types is determined and the analysis standard is then determined based on that proportion, so that the analysis standard is set objectively, improving the objectivity of classroom state detection.
In some disclosed embodiments, the step of evaluating the classroom status of the target object based on the expression information and the preset analysis criteria is preceded by the steps of: determining the expression type of the expression information of the target object; the step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard comprises the following steps: and evaluating the classroom state of the target object based on the expression type to which the expression information belongs and a preset analysis standard.
Different from the embodiment, the classroom state of the target object is evaluated by determining the expression type of the expression information of the target object and based on the expression type of the expression information and a preset analysis standard. Therefore, the expression meaning of the expression information of the target object is refined, the accuracy of expression analysis is improved, and the reliability of classroom evaluation is further improved.
In some disclosed embodiments, the step of determining the expression type to which the expression information of the target object belongs includes: sequentially comparing the expression information with the expression subtypes to determine at least one expression subtype corresponding to the expression information; the step of evaluating the classroom state of the target object based on the expression type to which the expression information belongs and a preset analysis standard comprises the following steps of: respectively counting the duration of each expression subtype corresponding to the expression information; and detecting and analyzing the classroom state of the target object by integrating each expression subtype and the duration of each expression subtype to obtain an initial evaluation result.
Different from the embodiment, the expression information of the target object is classified into the exact expression sub-types, the duration of the expression sub-types is obtained, the classroom state of the target object is detected and analyzed through the expression sub-types and the duration thereof, the subjective judgment data is further quantized, and therefore data support is provided for classroom state detection of the target object.
In some disclosed embodiments, the step of identifying the target object in the image data and determining the identity information of the target object and the expression information of the target object comprises: carrying out feature extraction on a target object in the image data to obtain the face feature of the target object; and determining the identity information of the target object and the expression information of the target object based on the facial features of the target object.
Different from the foregoing embodiment, the facial features of the target object are obtained by performing feature extraction on the target object in the image data, and then the identity information of the target object and the expression information of the target object are determined based on those facial features. This improves the accuracy of feature extraction, thereby further improving the reliability of classroom state detection.
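One possible reading, sketched below under stated assumptions, is that a single facial feature vector serves both identity matching (here a generic cosine-similarity match against an enrolled gallery) and downstream expression analysis; the stand-in extractor, the gallery, and the similarity floor are all illustrative, not the patented models.

```python
import numpy as np

def extract_face_feature(face_patch):
    # stand-in extractor: a real system would run a face-feature network here
    return face_patch.astype(np.float32).ravel()[:128]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def determine_identity(feature, gallery, min_similarity=0.3):
    # compare the extracted feature with each enrolled identity's feature
    name, score = max(((n, cosine(feature, f)) for n, f in gallery.items()),
                      key=lambda item: item[1])
    return name if score >= min_similarity else "unknown"

rng = np.random.default_rng(0)
enrolled = rng.normal(size=(16, 16)).astype(np.float32)
gallery = {"student_01": extract_face_feature(enrolled)}
print(determine_identity(extract_face_feature(enrolled), gallery))
# student_01 (similarity 1.0 against its own enrolled feature)
```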
In some disclosed embodiments, the step of identifying the target object in the image data and determining the identity information of the target object and the expression information of the target object is followed by the steps of: establishing a corresponding relation between the identity information and the expression information of the target object; and establishing an expression archive of the target object based on the identity information and the expression information of the target object by utilizing the corresponding relation.
Different from the embodiment, the corresponding relation between the identity information and the expression information of the target object is established; and an expression archive of the target object is created based on the identity information and the expression information of the target object by utilizing the corresponding relation so as to record the expression information of the target object and facilitate subsequent evaluation.
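A minimal sketch of the correspondence and the expression archive, assuming a simple identity-keyed store; the record layout (timestamp, expression) is an assumption made for illustration.

```python
from collections import defaultdict

expression_archive = defaultdict(list)  # identity -> expression observations

def record_expression(identity, timestamp, expression):
    # the correspondence: each observation is filed under its identity
    expression_archive[identity].append((timestamp, expression))

record_expression("student_01", "09:01", "calm")
record_expression("student_01", "09:02", "confused")
print(expression_archive["student_01"])
# [('09:01', 'calm'), ('09:02', 'confused')]
```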
In some disclosed embodiments, the method further comprises: acquiring expression archives of all target objects in a target classroom; and evaluating the target classroom state based on the expression archives of all target objects.
Different from the embodiment, the target classroom state is evaluated based on the expression archives of all target objects, so that the objectivity of the evaluation result is improved.
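The classroom-level evaluation could, for example, average each target object's positive-expression share across all archives, as in the sketch below; the averaging rule and the positive expression "calm" are hypothetical choices.

```python
def evaluate_target_classroom(archives, positive="calm"):
    """Average each target object's positive-expression share over the class."""
    shares = []
    for observations in archives.values():
        labels = [expression for _, expression in observations]
        if labels:
            shares.append(labels.count(positive) / len(labels))
    return sum(shares) / len(shares) if shares else 0.0

archives = {"student_01": [("09:01", "calm"), ("09:02", "confused")],
            "student_02": [("09:01", "calm"), ("09:02", "calm")]}
print(evaluate_target_classroom(archives))  # (0.5 + 1.0) / 2 = 0.75
```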
In some disclosed embodiments, the step of evaluating the classroom status of the target object based on the expression information and the preset analysis criteria is followed by the steps of: acquiring an initial evaluation result for evaluating the classroom state of the target object based on the expression information; acquiring manual evaluation based on the expression information; and integrating the initial evaluation result and the manual evaluation to generate an evaluation result of the classroom state of the target object.
Different from the embodiment, after an initial evaluation result for evaluating the classroom state of the target object based on the expression information is obtained, manual evaluation is obtained based on the expression information; and integrating the initial evaluation result and the manual evaluation to generate an evaluation result of the classroom state of the target object. Therefore, the situation of false alarm of the initial evaluation result is reduced through manual evaluation, and the accuracy of the evaluation result of the classroom state of the target object is improved.
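A possible fusion of the two results is a weighted blend, sketched below; the 0.3 manual weight and the score scale are invented for illustration and are not prescribed by the disclosure.

```python
def integrate_evaluations(initial_score, manual_score, manual_weight=0.3):
    """Blend the automatic initial result with a manual review score."""
    # manual review can suppress false alarms raised by the automatic stage
    return (1 - manual_weight) * initial_score + manual_weight * manual_score

print(round(integrate_evaluations(initial_score=0.4, manual_score=0.9), 2))
# 0.55 = 0.7 * 0.4 + 0.3 * 0.9
```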
Referring to fig. 6, fig. 6 is a schematic frame diagram of an embodiment of an electronic device according to the present application. The electronic device 60 comprises a memory 61 and a processor 62 coupled to each other, and the processor 62 is configured to execute program instructions stored in the memory 61 to implement the steps of any of the above-described classroom state evaluation method embodiments. In one particular implementation scenario, electronic device 60 may include, but is not limited to: a microcomputer, a server, and in addition, the electronic device 60 may also include a mobile device such as a notebook computer, a tablet computer, and the like, which is not limited herein.
In particular, the processor 62 is configured to control itself and the memory 61 to implement the steps of any of the above-described classroom state evaluation method embodiments. The processor 62 may also be referred to as a CPU (Central Processing Unit). The processor 62 may be an integrated circuit chip having signal processing capabilities. The processor 62 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 62 may be implemented jointly by a plurality of integrated circuit chips.
According to the scheme, the accuracy and the reliability of detection of the classroom state can be improved.
Referring to fig. 7, fig. 7 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 70 stores program instructions 701 executable by the processor, the program instructions 701 being for implementing the steps of any of the above-described classroom state assessment method embodiments.
According to the scheme, the accuracy and the reliability of detection of the classroom state can be improved.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (14)

1. A classroom state assessment method, comprising:
acquiring image data;
identifying a target object in the image data, and determining identity information of the target object and expression information of the target object;
and evaluating the classroom state of the target object based on the expression information and a preset analysis standard.
2. The classroom state assessment method according to claim 1, wherein said step of acquiring image data comprises:
acquiring the image data of a target area in a preset time period;
before the step of evaluating the classroom state of the target object based on the expression information and the preset analysis standard, the method comprises the following steps of:
determining the sub-region to which the target object belongs by using the image data;
determining the function type of a sub-region to which the target object belongs;
determining a class type of the target object based on the function type and the preset time period;
the step of evaluating the classroom state of the target object based on the expression information and a preset analysis standard comprises the following steps:
and evaluating the classroom state of the target object based on the expression information and the classroom type.
3. The classroom state assessment method according to claim 2, wherein the step of assessing the classroom state of the target object based on said expression information and said class type comprises:
presetting the analysis standard based on the class type of the target object;
and evaluating the classroom state of the target object by referring to the analysis standard and the expression information.
4. The classroom state assessment method according to claim 1, wherein said step of assessing the classroom state of said target object based on said expression information and a predetermined analysis criteria comprises:
presetting the analysis standard based on the expression information of a plurality of target objects;
and evaluating the classroom state of the target object by referring to the analysis standard and the expression information.
5. The classroom status assessment method according to claim 4, wherein said step of presetting said analysis criteria based on the facial expression information of a plurality of target objects comprises:
determining expression information of a plurality of target objects contained in the image data, wherein the types of the expression information are various;
determining the proportion of each type of expression information to the total type of expression information;
determining the analysis criteria based on the proportion.
6. The classroom state assessment method according to any one of claims 1 to 5, wherein the step of assessing the classroom state of the target object based on the expression information and a preset analysis criterion is preceded by the steps of:
determining the expression type of the expression information of the target object;
the step of evaluating the classroom state of the target object based on the expression information and a preset analysis standard comprises the following steps:
and evaluating the classroom state of the target object based on the expression type of the expression information and a preset analysis standard.
7. The classroom status assessment method according to claim 6, wherein said step of determining the expression type to which the expression information of the target object belongs comprises:
sequentially comparing the expression information with the expression sub-types, and determining at least one expression sub-type corresponding to the expression information;
the step of evaluating the classroom state of the target object based on the expression type to which the expression information belongs and a preset analysis standard comprises:
respectively counting the duration of each expression subtype corresponding to the expression information;
and detecting and analyzing the classroom state of the target object by integrating each expression subtype and the duration of each expression subtype to obtain an evaluation result.
8. The classroom status assessment method according to claim 1, wherein said step of identifying a target object in said image data and determining identity information of said target object and facial expression information of said target object comprises:
performing feature extraction on a target object in the image data to obtain the face feature of the target object;
and determining the identity information of the target object and the expression information of the target object based on the facial features of the target object.
9. The classroom status assessment method according to any one of claims 1-8, wherein said identifying a target object in the image data and determining identity information of the target object and expression information of the target object is followed by:
establishing a corresponding relation between the identity information and the expression information of the target object;
and establishing an expression archive of the target object based on the identity information and the expression information of the target object by using the corresponding relation.
10. The classroom status assessment method according to claim 9, wherein said method further comprises:
acquiring expression archives of all target objects in a target classroom;
and evaluating the target classroom state based on the expression archives of all the target objects.
11. The method for evaluating the classroom status according to any one of claims 1-10, wherein the step of evaluating the classroom status of the target object based on the expression information and a predetermined analysis criteria is followed by the steps of:
acquiring an initial evaluation result for evaluating the classroom state of the target object based on the expression information;
acquiring manual evaluation based on the expression information;
and integrating the initial evaluation result and the manual evaluation to generate an evaluation result of the classroom state of the target object.
12. A classroom state detection device, comprising:
the acquisition module is used for acquiring image data;
the determining module is used for identifying a target object in the image data and determining the identity information of the target object and the expression information of the target object;
and the evaluation module is used for evaluating the classroom state of the target object based on the expression information and a preset analysis standard.
13. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the classroom status assessment method according to any of claims 1-11.
14. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor, implement the classroom status assessment method of any of claims 1-11.
CN202110124484.0A 2021-01-29 2021-01-29 Classroom state evaluation method and related device and equipment Withdrawn CN112819665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110124484.0A CN112819665A (en) 2021-01-29 2021-01-29 Classroom state evaluation method and related device and equipment


Publications (1)

Publication Number Publication Date
CN112819665A true CN112819665A (en) 2021-05-18

Family

ID=75860141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110124484.0A Withdrawn CN112819665A (en) 2021-01-29 2021-01-29 Classroom state evaluation method and related device and equipment

Country Status (1)

Country Link
CN (1) CN112819665A (en)



Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169902A (en) * 2017-06-02 2017-09-15 武汉纺织大学 The classroom teaching appraisal system of micro- Expression analysis based on artificial intelligence
CN108229441A (en) * 2018-02-06 2018-06-29 浙江大学城市学院 A kind of classroom instruction automatic feedback system and feedback method based on image and speech analysis
CN108805009A (en) * 2018-04-20 2018-11-13 华中师范大学 Classroom learning state monitoring method based on multimodal information fusion and system
CN109657529A (en) * 2018-07-26 2019-04-19 台州学院 Classroom teaching effect evaluation system based on human facial expression recognition
CN109359521A (en) * 2018-09-05 2019-02-19 浙江工业大学 The two-way assessment system of Classroom instruction quality based on deep learning
CN109815795A (en) * 2018-12-14 2019-05-28 深圳壹账通智能科技有限公司 Classroom student's state analysis method and device based on face monitoring
CN111353363A (en) * 2019-08-19 2020-06-30 深圳市鸿合创新信息技术有限责任公司 Teaching effect detection method and device and electronic equipment
CN111160189A (en) * 2019-12-21 2020-05-15 华南理工大学 Deep neural network facial expression recognition method based on dynamic target training
CN111291613A (en) * 2019-12-30 2020-06-16 新大陆数字技术股份有限公司 Classroom performance evaluation method and system
CN111680558A (en) * 2020-04-29 2020-09-18 北京易华录信息技术股份有限公司 Learning special attention assessment method and device based on video images
CN111931585A (en) * 2020-07-14 2020-11-13 东云睿连(武汉)计算技术有限公司 Classroom concentration degree detection method and device
CN112200138A (en) * 2020-10-30 2021-01-08 福州大学 Classroom learning situation analysis method based on computer vision

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762107A (en) * 2021-08-23 2021-12-07 海宁奕斯伟集成电路设计有限公司 Object state evaluation method and device, electronic equipment and readable storage medium
CN114677751A (en) * 2022-05-26 2022-06-28 深圳市中文路教育科技有限公司 Learning state monitoring method, monitoring device and storage medium
CN114677751B (en) * 2022-05-26 2022-09-09 深圳市中文路教育科技有限公司 Learning state monitoring method, monitoring device and storage medium
CN115130932A (en) * 2022-08-31 2022-09-30 中国医学科学院阜外医院 Digital assessment method for classroom activity

Similar Documents

Publication Publication Date Title
CN109165552B (en) Gesture recognition method and system based on human body key points and memory
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN112819665A (en) Classroom state evaluation method and related device and equipment
CN108648757B (en) Analysis method based on multi-dimensional classroom information
CN111046819B (en) Behavior recognition processing method and device
CN109740446A (en) Classroom students ' behavior analysis method and device
CN110659397B (en) Behavior detection method and device, electronic equipment and storage medium
WO2019218427A1 (en) Method and apparatus for detecting degree of attention based on comparison of behavior characteristics
CN111898881B (en) Classroom teaching quality assessment method, device, equipment and storage medium
CN111353366A (en) Emotion detection method and device and electronic equipment
CN111325082A (en) Personnel concentration degree analysis method and device
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
US20210304339A1 (en) System and a method for locally assessing a user during a test session
CN112949461A (en) Learning state analysis method and device and electronic equipment
JP2009267621A (en) Communication apparatus
CN110111011B (en) Teaching quality supervision method and device and electronic equipment
Kamble et al. Video Interpretation for cost-effective remote proctoring to prevent cheating
Fekry et al. Automatic detection for students behaviors in a group presentation
CN111199378A (en) Student management method, student management device, electronic equipment and storage medium
Satre et al. Online Exam Proctoring System Based on Artificial Intelligence
CN111507555B (en) Human body state detection method, classroom teaching quality evaluation method and related device
KR20220057892A (en) Method for educating contents gaze-based and computing device for executing the method
WO2022181105A1 (en) Analysis device, analysis method, and non-transitory computer-readable medium
CN111369400A (en) Middle school student learning process supervision method based on image data processing
Thampan et al. Smart Online Exam Invigilation using AI based Facial Detection and Recognition Algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210518