CN117218703A - Intelligent learning emotion analysis method and system - Google Patents


Info

Publication number
CN117218703A
Authority
CN
China
Prior art keywords
student
face
class
frame
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311198720.9A
Other languages
Chinese (zh)
Inventor
张新华
李琳璐
张宁权
曾奕秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lancoo Technology Co ltd
Original Assignee
Zhejiang Lancoo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lancoo Technology Co ltd filed Critical Zhejiang Lancoo Technology Co ltd
Priority to CN202311198720.9A priority Critical patent/CN117218703A/en
Publication of CN117218703A publication Critical patent/CN117218703A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Collating Specific Patterns (AREA)

Abstract

The application relates to the technical field of education and discloses an intelligent learning condition analysis method and system. The method comprises: recognizing, via a face recognition algorithm and a behavior recognition algorithm, the faces and body behaviors of the students captured by each camera, and obtaining each student's face frame and body frame from the recognition results; de-duplicating the face frames obtained from the images captured by all cameras and judging the position information of the body frames, so as to obtain a unique match between each student's body frame and face frame, thereby completing identity matching for every student; and, according to the identity-matching result of the target student, calculating the target student's class attendance score and classroom behavior score from the information captured by the cameras, and calculating the target student's comprehensive classroom learning-condition evaluation score from those two scores. The application enables accurate and efficient analysis of students' learning conditions.

Description

Intelligent learning condition analysis method and system
Technical Field
The application relates to the technical field of education, and in particular to intelligent classroom behavior analysis technology.
Background
With the continuing advancement of educational informatization, more and more schools are applying intelligent algorithms and multimedia technologies to analyze and evaluate the classroom performance of students, so as to better support teachers in teaching management and decision-making.
In the common classroom scenario of primary and secondary schools, multiple cameras are installed to capture video images of the students in the classroom simultaneously. However, when student identity and behavior recognition is performed, the same student is recognized repeatedly by several cameras, which yields a large amount of redundant and duplicated recognition data; in other words, data acquisition at the recognition stage is redundant.
On the other hand, when classroom behavior recognition is performed, situations such as a student writing with the head lowered, playing with a mobile phone, turning around, or lying on the desk may prevent the student's identity and behavior from being matched accurately, which lowers the accuracy of the classroom behavior analysis results.
Disclosure of Invention
The application aims to provide an intelligent learning condition analysis method and system that solve the problems described in the background section.
The application discloses an intelligent learning condition analysis method, comprising the following steps:
Step A: recognizing the faces and body behaviors of the students captured by each camera via a face recognition algorithm and a behavior recognition algorithm, and obtaining each student's face frame and body frame from the recognition results;
Step B: de-duplicating the face frames of the students obtained from the images captured by all cameras and judging the position information of the body frames, so as to obtain a unique match between each student's body frame and face frame, thereby completing identity matching for every student;
Step C: according to the identity-matching result of the target student, calculating the target student's class attendance score and classroom behavior score from the information captured by the cameras, and calculating the target student's comprehensive classroom learning-condition evaluation score from the class attendance score and the classroom behavior score.
In a preferred embodiment, the step A further comprises the following sub-steps:
Step A1: starting from the beginning of the class, each camera captures one frame of image every N seconds for student face recognition and behavior recognition, and the corresponding seat position information of the student is marked;
Step A2: performing student face recognition on the images captured by the cameras and outputting the students' face frames, wherein a general face recognition algorithm is adopted for the face recognition and the result is matched against the face information in the class information base of the current lesson in the intelligent learning-condition analysis database, after which the students' face frames are output; the class information base refers to the set of personal information bases of all students in the class, and the information in the class information base comprises one or any combination of the following: student name, student ID, student face information, the class taking the lesson and the corresponding classroom, seat position information in the classroom, the start and end times of the lesson, and the subject of the lesson;
Step A3: performing human body behavior recognition on the images captured by the cameras and outputting the students' body frames, wherein a general behavior recognition algorithm performs frame-by-frame behavior recognition on the real-time classroom video to obtain each student's position and behavior category; for each target student, pose estimation is performed with the OpenPose pose estimation algorithm based on the student's body position information to obtain the distribution of the student's skeleton key points; and the behavior recognition result of the student's body frame is output, the behavior recognition result comprising one or any combination of the following: sitting, standing, playing with a mobile phone, taking notes, turning around, raising a hand, and lying on the desk.
In a preferred embodiment, said step B comprises the following sub-steps:
Step B1: de-duplicating the face frames of the students obtained from the images captured by all cameras, wherein if one student corresponds to a plurality of face frames, the face frame with the highest confidence for that student is retained and the student's other face frames are removed;
Step B2: when none of the cameras can recognize the face frame of a target student, if it is determined, based on the target student's body frame and a preset position-update threshold, that the target student's position has not changed, the student identity information of the most recently valid, highest-confidence face frame is taken as the face frame corresponding to the target student's body frame;
Step B3: performing identity matching of the target students based on the obtained face frames and body frames, and obtaining a unique match between each student's body frame and face frame, thereby completing identity matching for every student.
In a preferred embodiment, in the step B1, for the situation in a large-classroom scene where several cameras recognize a student's face at the same time, before identity matching is performed on each selected frame of image, the face information with the highest confidence among the duplicated face information recognized by the several cameras is retained and the remaining data are removed.
In a preferred embodiment, in the step B2, each student's position in the first recognized frame is taken as the initial position; a student position-update threshold x is set, and the left and right vertices of the body frame are denoted (m1, n1) and (m2, n2), so that the range thresholds of the left and right sides of the frame are (m1+x, n1), (m1-x, n1), (m2+x, n2) and (m2-x, n2), respectively; one frame of picture is selected every N seconds for body behavior recognition to obtain the body frame, and whether the student's position has changed is detected by judging whether the variation of the body-frame coordinates exceeds the range determined by the position-update threshold x; if it does not exceed the threshold, the student's position is deemed unchanged, it is then checked whether the most recently valid student identity information has appeared at another position, and if it has not, the most recently valid student identity information with the highest confidence is taken as the student identity information of the current frame.
In a preferred embodiment, in the step B3, the number of face frames inside a given body frame is determined; if there is only one face frame in the body frame, it is matched directly; if there are two or more face frames in the body frame, it is further judged whether exactly one of them has an inclusion degree of 100%, and if so, that one is matched directly; otherwise, the face frames are screened with a shortest-distance criterion and the face frame with the smallest distance difference is selected as the match for the body frame, the shortest-distance criterion being calculated as
d_m = |√((X_m − x_shoulder1)² + (Y_m − y_shoulder1)²) − √((X_m − x_shoulder2)² + (Y_m − y_shoulder2)²)|
wherein d_m denotes the difference between the distances from the center point of the m-th face frame to the left and right shoulders; (X_m, Y_m) denotes the center-point coordinates of the m-th face frame; and (x_shoulder1, y_shoulder1), (x_shoulder2, y_shoulder2) denote the coordinates of the left and right shoulders of the body frame.
In a preferred embodiment, said step C comprises the following sub-steps:
Step C1: according to the identity-matching result of the target student, determining the target student's class attendance state from the information captured by the cameras, and calculating the target student's class attendance score from that state, wherein the class attendance state comprises one or any combination of the following: leaving early, arriving late, leaving class on business, and changing seats;
Step C2: according to the identity-matching result of the target student, determining the target student's classroom behavior state from the information captured by the cameras, and calculating the target student's classroom behavior score from that state;
Step C3: calculating the target student's comprehensive classroom learning-condition evaluation score from the target student's class attendance score and classroom behavior score.
In a preferred embodiment, the attendance score is computed from the following quantities:
y_1 is the class attendance score, with a full score of 100 and a minimum of 0;
n is the number of lessons on the day;
t is the duration of one lesson;
t1 is the start time of the current lesson;
t2 is the end time of the current lesson;
z1, z2, z3 and z4 are, respectively, the number of times the student left early, arrived late, left class on business and changed seats on the day;
t2 − s1 is the duration of leaving early;
s2 − t1 is the duration of being late;
s3 − s4 is the duration of leaving class on business;
s5 − s6 is the duration of changing seats.
In a preferred embodiment, in the step C2, the classroom behavior score is designed as follows:
y_2 is the student's classroom performance score, with a full score of 100;
x_1 = (number of times standing) × m1 + (number of times raising a hand) × m2 + (number of times taking notes) × m3 − (number of times lying on the desk) × m4 − (number of times playing with a mobile phone) × m5 − (number of times turning around) × m6;
m1 to m6 are action weights and can be customized;
when x_1 = 0, the student's base score is a, which is the score obtained when all of the student's actions are sitting; here a is 60 and b is 40.
In a preferred embodiment, in the step C3, the comprehensive classroom learning-condition evaluation score is calculated as
y_3 = y_1 × α% + y_2 × (100 − α)%
wherein y_3 is the comprehensive classroom learning-condition evaluation score and α is the weight.
The application also discloses an intelligent learning condition analysis system, comprising:
a face-frame and body-frame acquisition module, configured to recognize the faces and body behaviors of the students captured by each camera via a face recognition algorithm and a behavior recognition algorithm, and to obtain each student's face frame and body frame from the recognition results;
an identity matching module, configured to de-duplicate the face frames of the students obtained from the images captured by all cameras and to judge the position information of the body frames, so as to obtain a unique match between each student's body frame and face frame, thereby completing identity matching for every student; and
a learning-condition analysis comprehensive evaluation module, configured to calculate, according to the identity-matching result of the target student, the target student's class attendance score and classroom behavior score from the information captured by the cameras, and to calculate the target student's comprehensive classroom learning-condition evaluation score from the class attendance score and the classroom behavior score.
The application also discloses an intelligent learning condition analysis system, comprising:
a memory for storing computer-executable instructions; and
a processor for implementing the steps of the method described above when executing said computer-executable instructions.
In the embodiment of the application, the faces and body behaviors of the students captured by each camera are recognized via a face recognition algorithm and a behavior recognition algorithm, and each student's face frame and body frame are obtained from the recognition results; the face frames obtained from the images captured by all cameras are de-duplicated and the position information of the body frames is judged, so that a unique match between each student's body frame and face frame is obtained and identity matching of every student is completed; and, according to the identity-matching result of the target student, the target student's class attendance score and classroom behavior score are calculated from the information captured by the cameras, and the target student's comprehensive classroom learning-condition evaluation score is calculated from those two scores. By combining face recognition and behavior recognition, the application acquires rich in-class data about the students and improves the accuracy of learning-condition analysis. The de-duplication technique resolves the data redundancy of multi-camera scenes and improves system efficiency. The position-threshold judgment method yields more valid identity and behavior data and avoids matching failures. A learning-condition evaluation model covering the two dimensions of attendance and behavior is built, so the learning condition can be analyzed from multiple angles. A weighted combination of the attendance score and the behavior score gives the comprehensive score, and the weights can be adjusted as needed. The image quality is improved by design to raise recognition accuracy. A reasonable camera deployment scheme makes the method suitable for large-scene applications.
In conclusion, the application achieves accurate and efficient analysis of students' learning conditions by means of face and behavior recognition, identity matching, multi-dimensional learning-condition evaluation and the like, and has notable technical effects.
The numerous technical features described in this specification are distributed among the various technical solutions; if every possible combination of the technical features of the application (i.e., every technical solution) were listed, the description would become excessively long. To avoid this, the technical features disclosed in the above summary, in the following embodiments and examples, and in the drawings may be freely combined with one another to form various new technical solutions (all of which are regarded as described in this specification), unless such a combination is technically impossible. For example, if feature A+B+C is disclosed in one example and feature A+B+D+E in another, where C and D are equivalent means performing the same function that can only be used as alternatives and cannot be adopted simultaneously, while feature E can technically be combined with feature C, then the solution A+B+C+D should not be regarded as described, because it is technically impossible, whereas the solution A+B+C+E should be regarded as described.
Drawings
FIG. 1 is a schematic diagram of the intelligent learning condition analysis method and system of the present application;
FIG. 2 is a flow chart of the intelligent learning condition analysis method according to a first embodiment of the present application;
FIG. 3 is a schematic view of a scenario of the intelligent learning condition analysis method according to the first embodiment of the present application;
FIG. 4 is a schematic diagram of a body frame in the intelligent learning condition analysis method according to the first embodiment of the present application;
FIG. 5 is a schematic view of the camera setup in the intelligent learning condition analysis method according to the first embodiment of the present application;
FIG. 6 is a flow chart of the intelligent learning condition analysis method according to the first embodiment of the present application;
FIG. 7 is a schematic structural view of an intelligent learning condition analysis system according to a second embodiment of the present application;
FIG. 8 is a schematic diagram of labeling a human body picture in the intelligent learning condition analysis method according to the first embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, it will be understood by those skilled in the art that the claimed application may be practiced without these specific details and with various changes and modifications from the embodiments that follow.
Some of the innovative features of the application are outlined below:
To address the data redundancy of multi-camera scenes and the reduced matching success rate caused by students lowering their heads and similar situations, the inventors of the application, through intensive research, creatively propose an intelligent learning condition analysis method and system. On the basis of acquired data sources such as images, a face de-duplication technique is designed to reduce the consumption of computing resources; a position-threshold judgment scheme is adopted to obtain more valid identity-matching information and student behavior data; and a comprehensive learning-condition evaluation model covering the two dimensions of attendance and behavior is built, so that students' learning conditions can be analyzed from multiple angles. The method and system improve both the accuracy and the efficiency of learning-condition analysis.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
A first embodiment of the present application relates to an intelligent learning condition analysis method, in which an intelligent classroom learning-condition analysis database is built in advance; the data in the database include face data, human behavior data and other data.
Optionally, the face data include the enrolled face information of all students in the school together with the corresponding student IDs and student names.
Optionally, the human behavior data include the basic classroom behaviors of students, namely sitting, standing, playing with a mobile phone, turning around, raising a hand, lying on the desk, and the like.
Optionally, the other data include classroom equipment data such as the personal information bases of the students in class, classroom information, timetable information, camera information and audio information.
The concept and flow of the intelligent learning condition analysis method of this embodiment are shown in figs. 1, 2 and 6, and the method comprises the following steps:
Step 100: recognizing the faces and body behaviors of the students captured by each camera via a face recognition algorithm and a behavior recognition algorithm, and obtaining each student's face frame and body frame from the recognition results.
Optionally, this step comprises the following sub-steps:
Step 110: starting from the beginning of the class, each camera captures one frame of image every N seconds for student face recognition and behavior recognition, and the student's corresponding seat position information is marked.
Step 120: performing student face recognition on the images captured by the cameras and outputting the students' face frames.
Optionally, a general face recognition algorithm is used for the face recognition, and the result is matched against the face information in the class information base of the current lesson in the intelligent learning-condition analysis database, after which the student's face frame is output.
Optionally, the class information base refers to the collection of the personal information bases of all students in the class, and the information in the class information base may include one or any combination of the following: the personal information base of each student in the class, containing the student's basic personal information, for example one or any combination of student name, student ID and student face information; classroom information, for example the class taking the lesson and the seat position information of the classroom; and timetable information, for example the start and end times of the lesson and the subject of the lesson.
Optionally, a student's face frame is associated with the target student's face and the student's personal information. Specifically, the face frame is the rectangular region containing the target student's face that the face recognition algorithm detects and locates in the image, while the personal information is the identity information of the corresponding student, such as the student ID and student name. The face frame output by the face recognition algorithm contains the face image region of the recognized target student; the system then matches the face image in the face frame against the face samples in the database, and if the match succeeds, the personal information of the student in the face frame, i.e., the student ID, name and so on, is obtained. The face frame therefore provides the student's face sample, the personal information provides the student's identity, and the correspondence between the two is established by face recognition and matching.
Optionally, if the student's face in the image captured by a camera cannot be matched with the face information of the current class in the intelligent classroom learning-condition analysis database, the face is further matched against all campus face information recorded in the database; if that match succeeds, the student information is output, and if it still fails, the face is marked as a stranger.
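By way of illustration, a minimal sketch of this matching fallback is given below; the helper match_face, the gallery structures and the confidence threshold are assumptions made for the example and are not part of the disclosure.

```python
# Illustrative sketch only: match_face, the gallery structures and the threshold
# are assumptions made for this example and are not part of the disclosure.
from typing import Optional, Tuple

def identify_student(face_embedding,
                     class_gallery,
                     campus_gallery,
                     match_face,
                     threshold: float = 0.6) -> Tuple[Optional[str], str]:
    """Return (student_id, source); source is "class", "campus" or "stranger"."""
    # 1) Try the class information base of the current lesson first.
    hit = match_face(face_embedding, class_gallery)
    if hit is not None and hit[1] >= threshold:
        return hit[0], "class"
    # 2) Fall back to the face information of the whole campus.
    hit = match_face(face_embedding, campus_gallery)
    if hit is not None and hit[1] >= threshold:
        return hit[0], "campus"
    # 3) Otherwise mark the face as a stranger.
    return None, "stranger"
```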
Step 130: performing human body behavior recognition on the images captured by the cameras and outputting the students' body frames.
Optionally, a general behavior recognition algorithm performs frame-by-frame behavior recognition on the real-time classroom video to obtain each student's position and behavior category; then, based on each target student's body position information, pose estimation is performed with the OpenPose pose estimation algorithm to obtain the distribution of the student's skeleton key points, for example the coordinates of key points such as the head, wrists and shoulders; finally, the behavior recognition result of the student's body frame is output.
Optionally, the behavior recognition result comprises one or any combination of the following: sitting, standing, playing with a mobile phone, taking notes, turning around, raising a hand, and lying on the desk.
Note that, in this embodiment, the body frame is the rectangular region containing the target student's whole body that the behavior recognition algorithm locates in the image, while the behavior recognition result is the judgment of the student's behavior category, such as sitting or raising a hand. The behavior recognition algorithm first outputs the body frame corresponding to each target student, locating the student in the image; it then analyzes the student's posture and actions based on the body frame and judges the ongoing behavior to obtain the behavior recognition result. The body frame thus provides the student's localization in the image, the behavior recognition result gives the student's specific behavior, and the correspondence between the two is likewise established by the behavior recognition algorithm.
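For illustration only, the two kinds of per-frame detections described above can be represented by simple records such as the following; the field names are assumptions, not part of the disclosure.

```python
# Illustrative per-frame detection records; the field names are assumptions.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class FaceFrame:
    box: Tuple[int, int, int, int]        # (x1, y1, x2, y2) rectangle around the face
    confidence: float                      # recognition confidence
    student_id: Optional[str] = None       # filled in once matching against the database succeeds
    student_name: Optional[str] = None

@dataclass
class BodyFrame:
    box: Tuple[int, int, int, int]        # rectangle around the whole body
    behavior: str                          # e.g. "sitting", "raising_hand", "lying_on_desk"
    keypoints: Optional[Dict[str, Tuple[float, float]]] = None  # skeleton key points from pose estimation
```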
Step 200: de-duplicating the face frames of the students obtained from the images captured by all cameras and judging the position information of the body frames, so as to obtain a unique match between each student's body frame and face frame, thereby completing identity matching for every student.
Optionally, on the basis of the face recognition and behavior recognition results, this step first de-duplicates the data; then, depending on whether valid face information is missing, it either performs position-threshold judgment or performs identity matching directly, thereby matching the students' face frames with their body frames; and finally it outputs each student's behavior information. The implementation flow is shown in fig. 2.
Optionally, this step comprises the following sub-steps:
step 210: and de-duplicating the face frames of the students obtained from the images acquired by all cameras, wherein if the same student corresponds to a plurality of face frames, the face frame with the highest confidence of the student is reserved, and other face frames of the student are removed.
It should be noted that in this embodiment, two or more cameras are used for identifying a student at a certain moment, so that the two or more cameras obtain identification information of the student, which causes a technical problem of data redundancy, and only the data with the highest confidence of the face of the student is reserved by adopting a duplication elimination technology. The advantage of this is that:
ensuring data accuracy: by selecting the data with the highest confidence, the selected face data can be ensured to be the most reliable, thereby reducing errors caused by selecting the error data.
The system efficiency is improved: processing and storing redundant data consumes more computing and storage resources. By removing redundant face data, unnecessary computation and storage can be reduced, thereby improving the operation efficiency of the overall system.
Simplifying data management: the data after the duplication removal is more concise, is easy to manage and analyze, and helps teachers and administrators to acquire student information faster.
Optimizing resource allocation: in situations where resources are limited, the system may allocate more resources to other tasks, such as further behavioral analysis or other complex computing tasks.
Optionally, in this step, for the situation in a large-classroom scene where several cameras recognize a student's face at the same time, before identity matching is performed on each selected frame of image, the face information with the highest confidence among the duplicated face information recognized by the several cameras is retained and the remaining data are removed.
A specific example follows:
As shown in fig. 3, starting from the beginning of the class, one frame of face picture is selected every N seconds from the videos shot by the multiple cameras at the same moment for recognition.
If, for a given recognition, the face of student Zhang San recognized by camera No. 4 has higher confidence than the recognitions from the other cameras, the face recognized by camera No. 4 is retained and the information about Zhang San recognized by the other cameras is removed, i.e., only a single piece of recognition information for Zhang San exists in the system at the current moment.
This is repeated until the end of the class; during face recognition only the face information with the highest confidence is retained, which saves system management resources.
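A minimal de-duplication sketch is given below, under the assumption that every recognized face frame already carries a student ID and a confidence value; the data layout is illustrative only.

```python
# Keep only the highest-confidence face frame per student across all cameras.
# The input format (a list of dicts with "student_id" and "confidence") is an assumption.
def deduplicate_faces(face_frames):
    best = {}
    for f in face_frames:
        sid = f.get("student_id")
        if sid is None:
            continue                      # unmatched faces are handled by the later steps
        if sid not in best or f["confidence"] > best[sid]["confidence"]:
            best[sid] = f
    return list(best.values())

frames = [
    {"student_id": "ZhangSan", "camera": 4, "confidence": 0.97},
    {"student_id": "ZhangSan", "camera": 2, "confidence": 0.81},
    {"student_id": "LiSi",     "camera": 1, "confidence": 0.90},
]
print(deduplicate_faces(frames))          # only camera 4's record of ZhangSan survives
```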
Step 220: when none of the cameras can recognize the face frame of a target student, if it is determined, based on the target student's body frame and a preset position-update threshold, that the target student's position has not changed, the most recently valid, highest-confidence student identity information is taken as the student identity information of the current frame; that is, the student identity information of the most recently valid, highest-confidence face frame is taken as the face frame corresponding to the target student's body frame.
In this embodiment, "most recently valid" means that the recognition result with the highest confidence among all cameras in the previous frame is used; if no such result exists in the previous frame, the face recognition information of the frame before that is sought, and so on, until face recognition information for that position is obtained. In other words, the highest-confidence recognition result from the most recent frame that has one is used as the result for the current frame. Optionally, the specific operation is as follows: when the target student's face cannot be recognized in a given frame, the previous frame is traced back; the recognition results of all cameras in that frame are checked and the one with the highest confidence is selected as the result for the current frame; if the previous frame has no valid recognition result, backtracking continues to the frame before it, where the highest-confidence result is again checked and selected; this backtracking continues until a valid recognition result is found or a predetermined backtracking limit is reached. In short, the system keeps looking at earlier frames in an attempt to find a reliable recognition result, so that it can still provide an approximately accurate output even when recognition fails in certain frames.
In this step, for the situation where none of the cameras can recognize a student's face at a given moment and face recognition therefore fails to provide the student's identity information, a student position-threshold judgment model is used to analyze whether the student's position has changed, after which the student identity information for the frame is obtained.
In this embodiment, for the situation where face recognition fails and only the student's behavior information can be recognized, because the student is writing with the head lowered, playing with a mobile phone, turning around, lying on the desk or the like, so that matching the face with the behavior result fails, the position-threshold judgment method is used to obtain more valid identity-matching information.
In other words, in this embodiment, when a camera cannot recognize a student's face in the current frame for some reason (e.g., turning around or occlusion), the system does not immediately conclude that the student has left or that the identity has changed. Instead, it infers the student's current status from the student's body-frame position information and the previous recognition results. The purpose is to ensure data continuity and stability and to avoid breaks in the data caused by brief recognition failures.
For example, if a student was identified as "Zhang San" in the previous frame but is not identified in the current frame because of turning around, the system checks whether the student's body-frame position has changed significantly. If the position has barely changed and is within the preset position-update threshold, the system assumes that Zhang San is still there and takes the identification of the previous frame as the result of the current frame.
In this embodiment, consider a scenario: a student turns his head in front of the camera, so that his face is occluded in frame 10 and no camera can recognize him. The system does not give up immediately; instead, it looks at the recognition results of frame 9. If some camera in frame 9 successfully recognized Zhang San with high confidence, the system uses that result as the recognition result for frame 10.
However, if frame 9 also failed to identify Zhang San, the system continues to look at frame 8, then frame 7, and so on, until a valid identification is found or a preset backtracking limit is reached (for example, the system may backtrack only 5 frames).
The benefit of this "most recently valid" strategy is that it buffers brief interruptions in recognition, ensuring that recognition failures caused by face occlusion or other factors do not immediately affect the system's output. This is particularly important for real-time or near-real-time analysis systems, which need to provide continuous, stable output over short time spans. In other words, the method handles brief face recognition failures effectively, ensures the continuity and accuracy of student identity information, and at the same time reduces misjudgments caused by recognition errors.
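One possible realization of this "most recently valid" backtracking is sketched below, assuming a per-frame history of recognition results keyed by position; the data layout and the 5-frame limit (taken from the example above) are illustrative.

```python
# Backtrack over earlier frames for the most recently valid recognition at a position.
# history[i] maps a position id to a (student_id, confidence) pair; this layout and the
# 5-frame limit are assumptions for illustration.
def most_recent_valid(history, position_id, current_index, max_backtrack=5):
    for step in range(1, max_backtrack + 1):
        idx = current_index - step
        if idx < 0:
            break
        hit = history[idx].get(position_id)
        if hit is not None:               # a valid recognition exists in this earlier frame
            return hit
    return None                           # nothing valid within the backtracking limit
```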
Optionally, the student position-threshold judgment model may be designed as follows:
First, each student's position in the first recognized frame is selected as the initial position.
Then, a student position-update threshold x (the threshold for body-frame variation) is set, and the left and right vertices of the body frame are denoted (m1, n1) and (m2, n2), respectively. The range thresholds of the left and right sides of the frame are then (m1+x, n1), (m1-x, n1), (m2+x, n2) and (m2-x, n2), respectively. Only the change of the body frame along the x-axis is used as the basis for judging a position change (in the y-direction, if the body frame of the initial sitting posture is taken as the initial frame, actions such as raising a hand or standing up cause large changes of the body frame along the y-axis even though the student's position has not changed). The student position change is illustrated in fig. 4.
Then, one frame of picture is selected every N seconds for body behavior recognition to obtain the body frame, and whether the student's position has changed is detected by judging whether the variation of the body-frame coordinates exceeds the position-update threshold x. If it does not exceed the threshold, the student's position is deemed unchanged; in that case it is first checked whether the most recently valid student identity information has appeared at another position, and if it has not, the most recently valid student identity information with the highest confidence is taken as the student identity information of the current frame.
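A minimal sketch of this position check is given below; it compares only the x-coordinates of the body frame against the threshold x, as described above, and the box layout is an assumption.

```python
# Judge whether a student's position changed, using only the x-coordinates of the
# body frame and the update threshold x, as described above. Box layout is an assumption.
def position_unchanged(initial_box, current_box, x_threshold):
    m1, n1, m2, n2 = initial_box          # left vertex (m1, n1), right vertex (m2, n2)
    c1, _, c2, _ = current_box
    # Unchanged if both x-coordinates stay within +/- x of their initial values.
    return abs(c1 - m1) <= x_threshold and abs(c2 - m2) <= x_threshold
```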
Step 230: performing identity matching of the target students based on the obtained face frames and body frames, and obtaining a unique match between each student's body frame and face frame, thereby completing identity matching for every student.
In this step, the identity matching of the target student is performed based on the obtained face frames and body frames, and only the identity data of the unique match between the target student's face frame and body frame are kept in the system.
Optionally, the identity-matching procedure of this step is as follows:
First, the number of face frames inside a given body frame is determined.
If there is only one face frame in the body frame, it is matched directly.
If there are two or more face frames in the body frame, it is further judged whether exactly one of them has an inclusion degree of 100%; if so, that one is matched directly. Otherwise, the face frames are screened with the shortest-distance criterion and the face frame with the smallest distance difference is selected as the match for the body frame, so that a unique face frame is matched with the body frame.
Note that the inclusion degree in this embodiment quantifies the position of a face frame relative to a body frame. Specifically, the inclusion degree is the intersection area of the two frames divided by the total area of the face frame.
To make the concept clearer, consider the following common situations:
Fully contained: when the face frame lies entirely inside the body frame, with no part crossing the body-frame boundary, the face frame is said to be fully contained in the body frame and the inclusion degree is 100%.
Partially contained: if only part of the face frame (e.g., half of it) lies inside the body frame, the inclusion degree is 50%, meaning that only half of the face-frame area intersects the body frame.
Not contained or barely intersecting: when the face frame has almost no intersection with the body frame, the inclusion degree is close to 0%.
In this way, the inclusion degree provides a quantitative way to determine the position and proportion of a face frame within a body frame, which helps ensure the accuracy of face recognition and supports more complex analysis in complicated scenes.
Optionally, the shortest-distance criterion is calculated as
d_m = |√((X_m − x_shoulder1)² + (Y_m − y_shoulder1)²) − √((X_m − x_shoulder2)² + (Y_m − y_shoulder2)²)|
wherein d_m denotes the difference between the distances from the center point of the m-th face frame to the left and right shoulders; (X_m, Y_m) denotes the center-point coordinates of the m-th face frame; and (x_shoulder1, y_shoulder1), (x_shoulder2, y_shoulder2) denote the coordinates of the left and right shoulders of the body frame.
The above operations are repeated until all the students' body frames have been matched with face frames; only the data of unique body-frame and face-frame matches are kept in the system, and the identity matching of the students is thereby completed.
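By way of illustration, the inclusion-degree test and the shortest-distance criterion can be sketched as follows; the box format and helper names are assumptions, not part of the disclosure.

```python
# Sketch of matching face frames to one body frame with the inclusion degree and the
# shortest-distance criterion d_m described above. The (x1, y1, x2, y2) box format is an assumption.
import math

def inclusion_degree(face_box, body_box):
    """Intersection area of the two boxes divided by the face-box area."""
    fx1, fy1, fx2, fy2 = face_box
    bx1, by1, bx2, by2 = body_box
    iw = max(0.0, min(fx2, bx2) - max(fx1, bx1))
    ih = max(0.0, min(fy2, by2) - max(fy1, by1))
    face_area = max(1e-9, (fx2 - fx1) * (fy2 - fy1))
    return (iw * ih) / face_area

def match_face_to_body(face_boxes, body_box, left_shoulder, right_shoulder):
    candidates = [f for f in face_boxes if inclusion_degree(f, body_box) > 0]
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0]
    fully_inside = [f for f in candidates if inclusion_degree(f, body_box) >= 1.0]
    if len(fully_inside) == 1:
        return fully_inside[0]

    def d_m(face_box):
        # |distance(centre, left shoulder) - distance(centre, right shoulder)|
        xm = (face_box[0] + face_box[2]) / 2.0
        ym = (face_box[1] + face_box[3]) / 2.0
        d_left = math.hypot(xm - left_shoulder[0], ym - left_shoulder[1])
        d_right = math.hypot(xm - right_shoulder[0], ym - right_shoulder[1])
        return abs(d_left - d_right)

    return min(candidates, key=d_m)       # face whose centre is most symmetric between the shoulders
```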
Step 300: according to the identity-matching result of the target student, calculating the target student's class attendance score and classroom behavior score from the information captured by the cameras, and calculating the target student's comprehensive classroom learning-condition evaluation score from the class attendance score and the classroom behavior score.
In this step, a class attendance module and a classroom behavior module are first built to collect data and design the scores; a learning-condition comprehensive evaluation module is then built from the scores of the attendance module and the classroom behavior module; finally, the learning-condition analysis result is evaluated comprehensively. The step specifically comprises the following three parts:
step 310: and determining the class attendance state of the target student by utilizing the information acquired by the camera according to the identity matching result of the target student, and calculating the class attendance score of the target student according to the class attendance state of the target student.
Optionally, the class attendance status includes one or any combination of the following: early backing, late arrival, class business, seat replacement.
In the step, the class attendance state is judged first, and then the class attendance score of the target student is calculated through the attendance module.
Optionally, the specific manner of the attendance module is as follows:
attendance status and its evaluation mode:
The class attendance score is calculated as follows:
The score of the class attendance module is designed according to the obtained attendance states of the target student in class, such as the times and durations of leaving early, arriving late, leaving class on business and changing seats. The attendance score is computed from the following quantities:
y_1 is the class attendance score, with a full score of 100 and a minimum of 0;
n is the number of lessons on the day;
t is the duration of one lesson;
t1 is the start time of the current lesson;
t2 is the end time of the current lesson;
z1, z2, z3 and z4 are, respectively, the number of times the student left early, arrived late, left class on business and changed seats on the day;
t2 − s1 is the duration of leaving early;
s2 − t1 is the duration of being late;
s3 − s4 is the duration of leaving class on business;
s5 − s6 is the duration of changing seats.
Step 320: according to the identity-matching result of the target student, determining the target student's classroom behavior state from the information captured by the cameras, and calculating the target student's classroom behavior score from that state.
In this step, the classroom behavior score is calculated by the classroom behavior module from the obtained numbers of times of sitting, standing, playing with a mobile phone, taking notes, turning around, raising a hand and lying on the desk.
Optionally, the classroom behavior score is designed as follows:
y_2 is the student's classroom performance score, with a full score of 100;
x_1 = (number of times standing) × m1 + (number of times raising a hand) × m2 + (number of times taking notes) × m3 − (number of times lying on the desk) × m4 − (number of times playing with a mobile phone) × m5 − (number of times turning around) × m6 (each action count is multiplied by its action weight, and the weights can be customized);
when x_1 = 0, the student's base score is a (i.e., a is the score obtained when all of the student's actions are sitting; a is typically chosen to be 60, and b is 40).
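For illustration, the weighted behavior sum x_1 can be computed as sketched below; the weight values are placeholders, and the mapping from x_1 to the final score y_2 is not reproduced here because the corresponding formula is not available in the text above.

```python
# Compute the weighted behaviour sum x_1 from the action counts, as defined above.
# The weight values here are placeholders (m1..m6 are customizable); the mapping from
# x_1 to the final score y_2 is not reproduced in the text, so it is left out.
def behavior_sum(counts, weights):
    return (counts.get("stand", 0)       * weights["m1"]
          + counts.get("raise_hand", 0)  * weights["m2"]
          + counts.get("take_notes", 0)  * weights["m3"]
          - counts.get("lie_on_desk", 0) * weights["m4"]
          - counts.get("play_phone", 0)  * weights["m5"]
          - counts.get("turn_around", 0) * weights["m6"])

x1 = behavior_sum({"stand": 2, "raise_hand": 3, "play_phone": 1},
                  {"m1": 2, "m2": 2, "m3": 1, "m4": 2, "m5": 3, "m6": 1})
print(x1)                                 # 2*2 + 3*2 - 1*3 = 7
```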
Step 330: calculating the target student's comprehensive classroom learning-condition evaluation score from the target student's class attendance score and classroom behavior score.
Note that, in this embodiment, the student's learning condition is analyzed by evaluating the two dimensions of attendance result and behavior recognition, which has the following advantages:
First, the student's learning state can be assessed more comprehensively and stereoscopically. Information from a single dimension is incomplete; combining the two dimensions allows multi-angle analysis, making the result more accurate and reliable.
Further, attendance and behavior recognition reflect different aspects of the student's information: attendance focuses on when the student is present, while behavior recognition judges how attentive the student is in class. The two complement each other, and using them together yields a more accurate learning-condition analysis.
Furthermore, fusing multi-source heterogeneous information compensates for the shortcomings of single-source data and improves the accuracy of the result; the attendance data and the behavior data can verify each other, reducing errors.
Furthermore, different students have different attendance and behavior patterns, and combining the two dimensions makes it possible to distinguish the characteristics of different types of students.
Furthermore, the two kinds of features can be processed by different algorithms, and the strengths of those algorithms can be fused to improve the overall performance of the system.
Further, customized learning evaluation and analysis based on different combinations of attendance and behavior can be supported.
Further, the system can be flexibly optimized for different scenes or applications by adjusting the weights of the two kinds of information.
In this step, optionally, the overall status of the student in class is evaluated by the learning-condition comprehensive evaluation module.
Optionally, the weight α is chosen according to how much the school emphasizes class attendance versus classroom behavior.
Optionally, the comprehensive classroom learning-condition evaluation score is calculated as
y_3 = y_1 × α% + y_2 × (100 − α)%
wherein y_3 is the comprehensive classroom learning-condition evaluation score.
The student's overall classroom learning condition can then be evaluated based on the comprehensive evaluation score y_3.
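A minimal sketch of this weighted combination follows; it implements only the formula given above.

```python
# Weighted combination y_3 = y_1 * alpha% + y_2 * (100 - alpha)%, with alpha set by the school.
def comprehensive_score(y1, y2, alpha):
    return y1 * alpha / 100.0 + y2 * (100.0 - alpha) / 100.0

print(comprehensive_score(90, 80, alpha=40))   # 90*0.4 + 80*0.6 = 84.0
```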
In the above embodiment, the face recognition algorithm may be one of the Fisherface, BlazeFace and Haar cascade models, but is not limited to any particular algorithm.
In the above embodiment, optionally, the behavior recognition algorithm preferably adopts a YOLO-series model, used in a top-down fashion: the body frame is detected first and the key points are then detected.
In the above embodiment, optionally, the picture quality is improved when the data are collected and processed, which helps raise the accuracy of recognizing the students' faces and body behaviors. The data collection specifically comprises the following steps:
(1) Face information collection:
A. Face picture collection: the students' face information is collected from different dimensions and a model is trained on their characteristic features, so that student face recognition can later be performed accurately; finally, a face information base is built.
B. Face picture preprocessing: occlusion of the face by glasses, bangs and other obstructions affects the accuracy of face recognition; sharpening part of the face image with the Laplace operator makes the facial features more distinct. The image is discrete, consisting of discrete points; in the discrete form, the values of the pixels adjacent to a pixel in the four directions are used to take the fourfold difference with that pixel to obtain the result of the Laplace operator, where F(x, y) is the pixel value at point (x, y) in the image and G(x, y) is the final sharpened output. A minimal sketch of this sharpening step is given after item (1) below.
C. Face picture labeling: the face pictures are labeled manually with an annotation tool (the official COCO tool, labelImg, and the like). During labeling, the smallest bounding rectangle of the face is marked, and the rectangle must contain the left ear, top of the head, right ear and chin; the labeling information includes the student ID and the student name.
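Regarding the Laplacian sharpening of step B above, a minimal sketch is given below; since the exact discrete formula is not reproduced in the extracted text, the standard 4-neighbour operator is assumed for illustration.

```python
# Illustrative 4-neighbour Laplacian sharpening for step B of item (1) above. The patent's
# exact formula is not reproduced in the extracted text, so the standard operator
# G = F - laplacian(F) is assumed here.
import numpy as np

def laplacian_sharpen(img):
    f = img.astype(np.float64)
    lap = (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
         + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1) - 4.0 * f)
    g = f - lap                            # subtracting the Laplacian emphasises edges
    return np.clip(g, 0, 255).astype(np.uint8)
```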
(2) Human behavior information collection:
A. Human body picture collection: classroom videos of different classroom scenes are collected, and the video streams are cut into frames to obtain a large number of pictures containing student behaviors.
B. Human body picture preprocessing: a bilateral-filtering denoising operation is applied to the human body pictures. The collected classroom videos contain a lot of shadow and shaking, which blurs the edges of the human body pictures; bilateral-filtering denoising effectively mitigates the influence of blur and noise on the behavior pictures and keeps the accuracy of behavior recognition high. In the bilateral filter, f(x, y) denotes the noisy pixel value, g(x, y) denotes the filtered value, and w(x, y, i, j) is the product of the distance coefficient and the pixel-value coefficient. A minimal sketch of this denoising step is given after item (2) below.
C. Human body picture labeling: the human behavior actions are defined and the pictures are labeled. The behavior pictures are labeled manually with an annotation tool (the official COCO tool, labelImg, and the like). During labeling, the smallest bounding rectangle of the human body is marked so that the rectangle contains as little background as possible. The basic classroom behaviors analyzed are: sitting, standing, playing with a mobile phone, taking notes, turning around, raising a hand and lying on the desk, see fig. 8.
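Regarding the bilateral-filtering denoising of step B above, a minimal sketch using OpenCV is given below; the parameter values are illustrative assumptions.

```python
# Illustrative bilateral-filter denoising for step B of item (2) above, using OpenCV;
# the parameter values are assumptions, not taken from the patent.
import cv2

def denoise(img):
    # 9: neighbourhood diameter; 75, 75: the colour-value and spatial sigmas whose
    # corresponding coefficients multiply to form the weight w(x, y, i, j).
    return cv2.bilateralFilter(img, 9, 75, 75)
```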
(3) Database information collection:
A. The personal information base of the students in the class is obtained; it contains the students' basic personal information: student name, student ID and student face information.
B. Classroom information is obtained: the class taking the lesson and the seat position information of the classroom.
C. Timetable information is obtained: the start and end times of the lesson and the subject of the lesson.
In the above embodiment, optionally, as shown in fig. 5, a large classroom with a size of 18 meters by 12 meters is taken as an example of a large-classroom scene. For a classroom of this size, the mounting positions and shooting angles of the cameras are chosen so as to improve the accuracy of recognizing the students' faces and behaviors.
(1) First, the shooting angle of each camera is a horizontal field of view Q of 105.7 degrees, so that the cameras capture the students' frontal faces as much as possible.
(2) Second, the cameras are deployed at positions A, B, C, D, E and F. With d the distance from the first row of the classroom to the blackboard, l the length of the classroom and w its width, the coordinates are:
A(0, 0.67w), B(8-d, 0.67w), C(16-2d, 0.67w)
D(0, 0.33w), E(8-d, 0.33w), F(16-2d, 0.33w)
The specific camera deployment is shown in fig. 5.
The above embodiment has the following technical effects:
First, combining face recognition with behavior recognition yields richer information about the students in class and improves the accuracy of learning-condition analysis.
Further, the de-duplication technique effectively solves the information redundancy of multi-camera scenes and improves system efficiency.
Furthermore, the position-threshold judgment method designed for identity matching obtains more valid identity and behavior data and avoids the situation where identity information cannot be matched with a behavior recognition result because of occlusion.
Furthermore, the student's learning condition is evaluated by combining class attendance with classroom behavior, so it can be analyzed from multiple dimensions.
Furthermore, a weighted combination of the class attendance score and the classroom behavior score is used, and the weight ratio can be adjusted dynamically according to actual needs.
Furthermore, the technical design for improving image quality raises the accuracy of face and behavior recognition.
Furthermore, a reasonable camera deployment scheme makes the method suitable for large-scene applications.
A second embodiment of the present application relates to an intelligent learning condition analysis system, whose structure is shown in fig. 7, comprising:
a face-frame and body-frame acquisition module, configured to recognize the faces and body behaviors of the students captured by each camera via a face recognition algorithm and a behavior recognition algorithm, and to obtain each student's face frame and body frame from the recognition results;
an identity matching module, configured to de-duplicate the face frames of the students obtained from the images captured by all cameras and to judge the position information of the body frames, so as to obtain a unique match between each student's body frame and face frame, thereby completing identity matching for every student; and
a learning-condition analysis comprehensive evaluation module, configured to calculate, according to the identity-matching result of the target student, the target student's class attendance score and classroom behavior score from the information captured by the cameras, and to calculate the target student's comprehensive classroom learning-condition evaluation score from the class attendance score and the classroom behavior score.
The first embodiment is a method embodiment corresponding to the present embodiment, and the technical details in the first embodiment can be applied to the present embodiment, and the technical details in the present embodiment can also be applied to the first embodiment.
It should be noted that those skilled in the art will understand that the functions of the modules shown in the above embodiment of the intelligent learning condition analysis system can be understood with reference to the foregoing description of the intelligent learning condition analysis method. The functions of the modules shown in the above embodiment may be implemented by a program (executable instructions) running on a processor, or by specific logic circuits. If the intelligent learning condition analysis system of the embodiment of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the application are not limited to any specific combination of hardware and software.
Accordingly, embodiments of the present application also provide a computer storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the method embodiments of the present application.
In addition, the embodiments of the present application also provide an intelligent learning condition analysis system, comprising a memory for storing computer-executable instructions, and a processor; the processor is configured to implement the steps of the above method embodiments when executing the computer-executable instructions in the memory. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. The aforementioned memory may be a read-only memory (ROM), a random access memory (RAM), a Flash memory, a hard disk, a solid-state disk, or the like. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
It should be noted that, in the present patent application, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. In the present patent application, if it is mentioned that an action is performed according to an element, this means that the action is performed at least according to that element and covers two cases: the action is performed only according to that element, or the action is performed according to that element and other elements. Expressions such as "multiple" or "a plurality of" cover 2 (2 times, 2 items) as well as more than 2 (more than 2 times, more than 2 items).
All references mentioned in the present application are considered to be incorporated in their entirety into the disclosure of the application, so that modifications may be made as necessary. Furthermore, it is understood that, after reading the above disclosure, those skilled in the art may make various changes or modifications to the present application, and such equivalent forms likewise fall within the scope claimed by the present application.

Claims (12)

1. An intelligent learning condition analysis method, comprising:
step A: recognizing the students' faces and human body behaviors captured by each camera through a face recognition algorithm and a behavior recognition algorithm, and obtaining the students' face frames and human body frames according to the recognition results;
step B: completing the identity matching of each student by de-duplicating the students' face frames obtained from the images captured by all cameras and judging the position information of the human body frames, so as to obtain a unique match between each student's human body frame and face frame;
step C: according to the identity matching result of a target student, calculating the target student's class attendance score and class behavior score using the information captured by the cameras, and calculating the target student's classroom learning condition analysis comprehensive evaluation score from the class attendance score and the class behavior score.
2. The method of claim 1, wherein said step A further comprises the substeps of:
step A1: starting from the lesson start time, each camera acquires one frame of image every N seconds for face recognition and behavior recognition of the students, and marks the corresponding seat position information of each student;
step A2: performing face recognition of the students according to the images acquired by the cameras and outputting the students' face frames, wherein a general face recognition algorithm is used to recognize faces and match them against the face information in the class information base of the current lesson in the intelligent learning condition analysis database, and the students' face frames are output; the class information base refers to the set of personal information bases of all students in the class, and the information in the class information base comprises one or any combination of the following: student name, student ID, student face information, the class corresponding to the classroom, seat position information in the classroom, lesson start and end times, and lesson subject;
step A3: performing human body behavior recognition according to the images acquired by the cameras and outputting the students' human body frames, wherein a general behavior recognition algorithm performs frame-by-frame human body behavior recognition on the real-time classroom video to obtain each student's position and behavior category; according to each target student's human body position information, pose estimation is performed with the OpenPose pose estimation algorithm to obtain the skeleton key point distribution of each target student; and a behavior recognition result is output for the student's human body frame, the behavior recognition result comprising one or any combination of the following: sitting upright, standing up, playing with a cell phone, taking notes, turning around, raising a hand, and lying on the desk.
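A minimal sketch of the frame-sampling loop described in steps A1 to A3 follows; the two recognizer functions are hypothetical placeholders, not the application's actual algorithms.

```python
import time
import cv2  # OpenCV, used only for camera capture in this sketch

N = 5  # sampling interval in seconds; the claim leaves N unspecified

def detect_faces(frame):
    """Hypothetical placeholder for the face recognition algorithm.
    Would return a list of (face_box, student_id, confidence)."""
    return []

def detect_behaviors(frame):
    """Hypothetical placeholder for the behavior recognition algorithm.
    Would return a list of (body_box, behavior_label)."""
    return []

def sample_classroom_video(source=0):
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            faces = detect_faces(frame)
            behaviors = detect_behaviors(frame)
            # downstream: de-duplication, identity matching and scoring (claims 3 to 10)
            time.sleep(N)
    finally:
        cap.release()
```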
3. The method of claim 1, wherein said step B comprises the sub-steps of:
step B1: de-duplicating the students' face frames obtained from the images captured by all cameras, wherein, if one student corresponds to a plurality of face frames, the face frame with the highest confidence for that student is retained and the student's other face frames are removed;
step B2: when no camera can recognize the target student's face frame, if it is determined from the target student's human body frame and a preset position update threshold that the target student's position has not changed, taking the student identity information of the most recent valid face frame with the highest confidence as the face frame corresponding to the target student's human body frame;
step B3: performing identity matching of the target students based on the obtained face frames and human body frames, and obtaining a unique match between each student's human body frame and face frame, thereby completing the identity matching of each student.
4. The method of claim 3, wherein, in the step B1, for the situation in which a plurality of cameras recognize a student's face simultaneously in a large classroom scene, before each frame of image is selected for identity matching, the face information with the highest confidence is retained from the duplicated face information recognized by the plurality of cameras, and the other data are removed.
5. The method according to claim 3, wherein, in step B2, each student's position in the first recognized frame is taken as the initial position; a student position update threshold x is set, and, with the left and right vertices of the human body frame denoted (m1, n1) and (m2, n2), the fluctuation range thresholds of the left and right vertices are respectively (m1+x, n1), (m1-x, n1) and (m2+x, n2), (m2-x, n2); one frame of picture is selected every N seconds for human body behavior recognition to obtain the human body frame, and it is judged whether the fluctuation of the human body frame coordinates exceeds the fluctuation range determined by the student position update threshold x, so as to detect whether the student's position has changed; if the fluctuation of the human body frame coordinates does not exceed the threshold, it is determined that the student's position has not changed, and it is then judged whether the most recent valid student identity information exists at another position; if it does not, the most recent valid student identity information with the highest confidence is taken as the student identity information for this frame.
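A minimal sketch of the position-update threshold check in claim 5, with illustrative variable names; treating the threshold x as applying to both coordinates of each vertex is a simplifying assumption.

```python
def position_unchanged(initial_box, current_box, x):
    """True if the left and right vertices of the human body frame stayed within
    +/- x of the initial position, i.e. the student is treated as not having moved."""
    (m1, n1), (m2, n2) = initial_box      # vertices of the initial human body frame
    (p1, q1), (p2, q2) = current_box      # vertices of the current human body frame
    return (abs(p1 - m1) <= x and abs(q1 - n1) <= x and
            abs(p2 - m2) <= x and abs(q2 - n2) <= x)

# Example with a threshold of 15 pixels: the frame drifted by a few pixels only.
print(position_unchanged(((100, 220), (180, 220)), ((108, 224), (186, 223)), 15))  # True
```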
6. The method according to claim 3, wherein, in the step B3, the number of face frames within a given human body frame is determined; if there is only one face frame within the human body frame, it is matched directly; if there are two or more face frames within the human body frame, it is further judged whether exactly one face frame is 100% contained within the human body frame, and if so, that face frame is matched directly; otherwise, the face frames are screened by a shortest-distance judgment method, and the face frame with the shortest distance is selected as the matching frame of the human body frame, wherein the shortest-distance judgment is calculated as follows:
dm = |√((Xm − xshoulder1)² + (Ym − yshoulder1)²) − √((Xm − xshoulder2)² + (Ym − yshoulder2)²)|
wherein dm denotes the difference between the distances from the center point of the mth face frame to the left and right shoulders; (Xm, Ym) denotes the center point coordinates of the mth face frame; and (xshoulder1, yshoulder1), (xshoulder2, yshoulder2) denote the coordinates of the left and right shoulders of the human body frame.
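A minimal sketch of the shortest-distance judgment, assuming the shoulder coordinates come from the pose estimation in step A3; among the candidate face frames inside one human body frame, the one whose center is most symmetric between the two shoulders (smallest dm) is selected.

```python
import math

def shoulder_distance_diff(face_center, left_shoulder, right_shoulder):
    """dm: absolute difference between the distances from the face-frame center
    to the left and right shoulders of the human body frame."""
    xm, ym = face_center
    d_left = math.hypot(xm - left_shoulder[0], ym - left_shoulder[1])
    d_right = math.hypot(xm - right_shoulder[0], ym - right_shoulder[1])
    return abs(d_left - d_right)

def pick_matching_face(face_centers, left_shoulder, right_shoulder):
    """Return the index of the candidate face-frame center with the smallest dm."""
    diffs = [shoulder_distance_diff(c, left_shoulder, right_shoulder)
             for c in face_centers]
    return min(range(len(diffs)), key=diffs.__getitem__)
```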
7. The method of claim 1, wherein said step C comprises the sub-steps of:
step C1: according to the target student's identity matching result, determining the target student's class attendance state from the information captured by the cameras, and calculating the target student's class attendance score according to the class attendance state, wherein the class attendance state comprises one or any combination of the following: leaving early, arriving late, handling business during class, and changing seats;
step C2: according to the target student's identity matching result, determining the target student's class behavior state from the information captured by the cameras, and calculating the target student's class behavior score according to the class behavior state;
step C3: calculating the target student's classroom learning condition analysis comprehensive evaluation score from the target student's class attendance score and class behavior score.
8. The method of claim 7, wherein the class attendance score is calculated as follows:
wherein y1 is the class attendance score, with a full score of 100 points and a minimum of 0;
n is the number of lessons on that day;
T is the duration of one lesson;
t1 is the start time of the current lesson;
t2 is the end time of the current lesson;
z1, z2, z3 and z4 are respectively the numbers of times of leaving early, arriving late, handling business during class, and changing seats on that day;
t2 - s1 is the duration of leaving early;
s2 - t1 is the duration of arriving late;
s3 - s4 is the duration of handling business during class;
s5 - s6 is the duration of changing seats.
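The formula itself appears only as an image in the published claim and is not reproduced in this text, so the following sketch is merely one plausible reading of the variables defined above (a per-event penalty plus a penalty proportional to the event duration), not the claimed formula.

```python
def attendance_score(events, lesson_minutes, per_event_penalty=5.0):
    """Illustrative class attendance score y1 in [0, 100].

    events: list of (kind, duration_minutes) with kind in
            {"leave_early", "late", "business", "change_seat"}.
    The weighting used here is an assumption for illustration only.
    """
    score = 100.0
    for _kind, duration in events:
        score -= per_event_penalty + 100.0 * duration / lesson_minutes
    return max(0.0, score)

# Example: one 10-minute late arrival in a 45-minute lesson
print(attendance_score([("late", 10.0)], lesson_minutes=45.0))  # about 72.8
```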
9. The method of claim 7, wherein in the step C2, the class behavior score is calculated as follows:
wherein y2 is the student's class behavior score, with a full score of 100;
x1 = (number of times standing up) × m1 + (number of times raising a hand) × m2 + (number of times taking notes) × m3 − (number of times lying on the desk) × m4 − (number of times playing with a cell phone) × m5 − (number of times turning around) × m6;
m1 to m6 are behavior weights and can be customized;
and when x1 = 0, that is, when all of the student's recognized behaviors are sitting upright, the student's base score is a, where a is chosen to be 60 and b is chosen to be 40.
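The mapping from x1 to the behavior score is likewise shown only as an image in the published claim; the sketch below assumes a simple bounded linear form around the base score a = 60 with range b = 40, purely as an illustration.

```python
def behavior_score(stand, hand, notes, lie, phone, turn,
                   m=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0),
                   a=60.0, b=40.0, scale=10.0):
    """Illustrative class behavior score y2 in [0, 100].

    x1 is the weighted sum of positive behaviors minus negative ones (claim 9);
    the linear ramp of width `scale` around the base score a is an assumption.
    """
    m1, m2, m3, m4, m5, m6 = m
    x1 = stand * m1 + hand * m2 + notes * m3 - lie * m4 - phone * m5 - turn * m6
    return max(0.0, min(100.0, a + b * (x1 / scale)))

# Example: two hand raises, one note-taking episode, one phone use
print(behavior_score(stand=0, hand=2, notes=1, lie=0, phone=1, turn=0))  # 68.0
```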
10. The method of claim 7, wherein, in the step C3, the classroom learning condition analysis comprehensive evaluation score is calculated as follows:
y3 = y1 × α% + y2 × (100 − α)%
wherein y3 is the classroom learning condition analysis comprehensive evaluation score, y1 is the class attendance score, y2 is the class behavior score, and α is the weight.
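For illustration only (the claim does not fix α): with α = 60, a class attendance score y1 = 90 and a class behavior score y2 = 80 give y3 = 90 × 60% + 80 × 40% = 54 + 32 = 86.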
11. An intelligent learning condition analysis system, comprising:
a face frame and human body frame acquisition module, configured to recognize the students' faces and human body behaviors captured by each camera through a face recognition algorithm and a behavior recognition algorithm, and to obtain the students' face frames and human body frames from the recognition results;
an identity matching module, configured to de-duplicate the students' face frames obtained from the images captured by all cameras and to judge the position information of the human body frames, so as to obtain a unique match between each student's human body frame and face frame, thereby completing the identity matching of each student;
and a classroom learning condition analysis comprehensive evaluation module, configured to calculate, according to the identity matching result of a target student and using the information captured by the cameras, the target student's class attendance score and class behavior score, and to calculate the target student's classroom learning condition analysis comprehensive evaluation score from the class attendance score and the class behavior score.
12. An intelligent learning condition analysis system, comprising:
a memory for storing computer-executable instructions; and,
a processor for implementing the steps of the method of any one of claims 1 to 10 when executing the computer-executable instructions.
CN202311198720.9A 2023-09-13 2023-09-13 Intelligent learning emotion analysis method and system Pending CN117218703A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311198720.9A CN117218703A (en) 2023-09-13 2023-09-13 Intelligent learning emotion analysis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311198720.9A CN117218703A (en) 2023-09-13 2023-09-13 Intelligent learning emotion analysis method and system

Publications (1)

Publication Number Publication Date
CN117218703A true CN117218703A (en) 2023-12-12

Family

ID=89050620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311198720.9A Pending CN117218703A (en) 2023-09-13 2023-09-13 Intelligent learning emotion analysis method and system

Country Status (1)

Country Link
CN (1) CN117218703A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117455442A (en) * 2023-12-25 2024-01-26 数据空间研究院 Statistical enhancement-based identity recognition method, system and storage medium
CN117455442B (en) * 2023-12-25 2024-03-19 数据空间研究院 Statistical enhancement-based identity recognition method, system and storage medium

Similar Documents

Publication Publication Date Title
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
US11074436B1 (en) Method and apparatus for face recognition
CN109284733B (en) Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN109214337B (en) Crowd counting method, device, equipment and computer readable storage medium
US20180018503A1 (en) Method, terminal, and storage medium for tracking facial critical area
CN111726586A (en) Production system operation standard monitoring and reminding system
CN110837795A (en) Teaching condition intelligent monitoring method, device and equipment based on classroom monitoring video
CN111696128A (en) High-speed multi-target detection tracking and target image optimization method and storage medium
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
CN112001219B (en) Multi-angle multi-face recognition attendance checking method and system
CN110659397A (en) Behavior detection method and device, electronic equipment and storage medium
CN109993021A (en) The positive face detecting method of face, device and electronic equipment
CN110827432B (en) Class attendance checking method and system based on face recognition
CN110210285A (en) Face tracking method, face tracking device and computer storage medium
CN111814587A (en) Human behavior detection method, teacher behavior detection method, and related system and device
CN117218703A (en) Intelligent learning emotion analysis method and system
CN111160134A (en) Human-subject video scene analysis method and device
CN110490115A (en) Training method, device, electronic equipment and the storage medium of Face datection model
CN111325082A (en) Personnel concentration degree analysis method and device
CN113705510A (en) Target identification tracking method, device, equipment and storage medium
CN111241926A (en) Attendance checking and learning condition analysis method, system, equipment and readable storage medium
CN108665389A (en) A kind of student's assisted learning system
WO2020244076A1 (en) Face recognition method and apparatus, and electronic device and storage medium
CN115527083B (en) Image annotation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination