CN113591678A - Classroom attention determination method, device, equipment, storage medium and program product - Google Patents

Classroom attention determination method, device, equipment, storage medium and program product

Info

Publication number
CN113591678A
CN113591678A CN202110858425.6A CN113591678B
Authority
CN
China
Prior art keywords
classroom
determining
distribution
scenes
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110858425.6A
Other languages
Chinese (zh)
Other versions
CN113591678B (en)
Inventor
刘海涛
李玉格
胡益珲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110858425.6A priority Critical patent/CN113591678B/en
Publication of CN113591678A publication Critical patent/CN113591678A/en
Application granted granted Critical
Publication of CN113591678B publication Critical patent/CN113591678B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The disclosure provides a classroom attention determination method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product, relating to artificial intelligence technologies such as image processing, speech processing, and deep learning. The method comprises the following steps: determining a noisiness distribution from the audio data in a classroom monitoring video; determining a chaos degree distribution from the image data in the classroom monitoring video; determining the classroom scenes corresponding to different time periods according to the noisiness distribution and the chaos degree distribution; and determining classroom attention parameters corresponding to the different types of classroom scenes according to face orientation information in the image data. The method exploits the fact that different types of scenes in offline classroom teaching manifest differently in the image and audio data, so as to accurately determine the classroom scene type in each time period and, on that basis, determine classroom attention as accurately as possible, rather than neglecting the influence of different scenes on the evaluation of classroom attention parameters.

Description

Classroom attention determination method, device, equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of data processing technologies, in particular to artificial intelligence technologies such as image processing, speech processing, and deep learning, and more specifically to a classroom attention determination method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Although online education and online learning are booming, offline education, in which teachers and students face each other directly, remains the most common form of teaching. How to better evaluate the state of offline teaching, so as to provide targeted improvement suggestions for teachers and students and improve teaching quality, is a key research focus for those skilled in the art.
Disclosure of Invention
The embodiment of the disclosure provides a classroom attention determination method, a classroom attention determination device, electronic equipment, a computer-readable storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides a classroom attention determination method, including: determining a noisiness distribution from the audio data in a classroom monitoring video; determining a chaos degree distribution from the image data in the classroom monitoring video; determining the classroom scenes corresponding to different time periods according to the noisiness distribution and the chaos degree distribution; and determining classroom attention parameters corresponding to the different types of classroom scenes according to face orientation information in the image data.
In a second aspect, an embodiment of the present disclosure provides a classroom attention determination apparatus, including: a noisiness distribution determination unit configured to determine a noisiness distribution from the audio data in a classroom monitoring video; a chaos degree distribution determination unit configured to determine a chaos degree distribution from the image data in the classroom monitoring video; a classroom scene determination unit configured to determine the classroom scenes corresponding to different time periods according to the noisiness distribution and the chaos degree distribution; and a classroom attention parameter determination unit configured to determine classroom attention parameters corresponding to the different types of classroom scenes according to face orientation information in the image data.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor which, when executed, cause the at least one processor to perform the classroom attention determination method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions which, when executed, enable a computer to implement the classroom attention determination method described in any implementation of the first aspect.
In a fifth aspect, the disclosed embodiments provide a computer program product comprising a computer program, which when executed by a processor is capable of implementing the classroom attention determination method as described in any of the implementations of the first aspect.
According to the classroom attention determination method provided by the embodiments of the present disclosure, a noisiness distribution is first determined from the audio data in a classroom monitoring video, while a chaos degree distribution is determined from the image data in the same video; next, the classroom scenes corresponding to different time periods are determined according to the noisiness distribution and the chaos degree distribution; finally, classroom attention parameters corresponding to the different types of classroom scenes are determined according to face orientation information in the image data.
The method exploits the fact that different types of scenes in offline classroom teaching manifest differently in the image and audio data, so as to accurately determine the classroom scene type in each time period and, on the basis of the determined scene type, determine the classroom attention in each scene as accurately as possible, rather than neglecting the influence of different scenes on the evaluation of classroom attention parameters.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present disclosure may be applied;
fig. 2 is a flowchart of a classroom attention determination method provided by an embodiment of the present disclosure;
fig. 3 is a flowchart of another classroom attention determination method provided by the embodiments of the present disclosure;
fig. 4 is a flowchart of an offline lecture advice generation method according to an embodiment of the present disclosure;
fig. 5 is a block diagram illustrating a classroom attention determination apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device suitable for executing a classroom attention determination method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information involved all comply with relevant laws and regulations, necessary security measures are taken, and public order and good morals are not violated.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the classroom attention determination methods, apparatuses, electronic devices, and computer-readable storage media of the present disclosure can be applied.
As shown in fig. 1, the system architecture 100 may include a camera 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between camera 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The camera 101 disposed in the off-line classroom can interact with the server 103 via the network 102, for example, a captured classroom monitoring video is transmitted to the server 103, and the like. The camera 101 and the server 103 may be installed with various applications for implementing information communication between the two, such as a data transmission application, an instruction transmission application, an instant messaging application, and the like.
The cameras 101 may be embodied in different sizes, shapes and specifications, and the server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server; when the server is software, the server may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not limited herein.
The server 103 may provide various services through built-in applications, for example a classroom attention determination application that determines classroom attention from received classroom monitoring video. When running this application, the server 103 may achieve the following: first, receive the classroom monitoring video shot by the camera 101 arranged in the classroom through the network 102; then determine a noisiness distribution from the audio data in the video and, at the same time, determine a chaos degree distribution from the image data; next, determine the classroom scenes corresponding to different time periods according to the noisiness distribution and the chaos degree distribution; and finally, determine classroom attention parameters corresponding to the different types of classroom scenes according to face orientation information in the image data.
It should be noted that, besides being acquired in real time from the camera 101 through the network 102, the classroom monitoring video may also be stored locally on the server 103 in advance in various ways. When the server 103 detects that such data is already stored locally (for example, video from a previously received task that has not yet been processed), it may retrieve the data directly from local storage, in which case the exemplary system architecture 100 may omit the camera 101 and the network 102.
Since determining classroom attention from the classroom monitoring video requires relatively complex processing and analysis of the audio and image content, demanding more computing resources and stronger computing power, the classroom attention determination method provided in the subsequent embodiments of the present disclosure is generally executed by the server 103, which has such resources, and accordingly the classroom attention determination apparatus is generally also disposed in the server 103.
It should be understood that the number of cameras, networks, and servers in fig. 1 is merely illustrative. There may be any number of cameras, networks, and servers, as desired for implementation.
Referring to fig. 2, fig. 2 is a flowchart of a classroom attention determination method provided in an embodiment of the present disclosure, where the flowchart 200 includes the following steps:
step 201: determining the distribution of the noisy degree according to the audio data in the classroom monitoring video;
This step is intended for the executing entity of the classroom attention determination method (e.g., the server 103 shown in fig. 1) to determine a noisiness distribution from the audio data in the classroom monitoring video. The classroom monitoring video may be received from a camera arranged in the classroom for which classroom attention is to be determined. Classroom attention as described in this disclosure does not merely describe the attention of the students; it comprehensively characterizes the overall classroom attention produced by the teacher's teaching acting on the students. Therefore, the cameras at least include a front-facing camera that faces away from the blackboard, i.e., consistent with the direction in which the teacher looks at the students, so that the students' face orientation information can be determined.
Characteristic parameters such as sound volume, sound clarity, and sound source overlap (i.e., whether different sound sources lie in distinct regions: the farther apart, the lower the overlap; the closer, the higher) can be determined from the audio data to characterize the noisiness. The noisiness may be a feature vector or multidimensional matrix that comprehensively represents several noisiness features, or a quantized value obtained through some conversion. Generally, the louder the sound and the more disordered the sound source locations, the higher the noisiness, and vice versa. The noisiness distribution determined in this step is the distribution of noisiness over different time periods across the whole audio duration. If noisiness is graded from small to large as levels A1, A2, A3, and A4, the chronological distribution over a 10-minute audio might be: level A1 in minutes 0-3, level A3 in minutes 3-4, level A1 in minutes 4-8, and level A2 in minutes 8-10.
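As a minimal sketch of this grading step (the RMS thresholds and frame format are illustrative assumptions, not taken from the patent), per-frame loudness can be mapped to levels A1-A4 and merged into a chronological distribution:

```python
import math

# Hypothetical RMS thresholds separating levels A1..A4 (assumed for illustration).
LEVELS = [(0.1, "A1"), (0.3, "A2"), (0.6, "A3"), (float("inf"), "A4")]

def rms(frame):
    """Root-mean-square loudness of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def noise_level_distribution(frames):
    """Grade each fixed-length frame, then merge consecutive frames with the
    same level into (first_frame, last_frame, level) runs -- the chronological
    noisiness distribution described above."""
    levels = []
    for frame in frames:
        loudness = rms(frame)
        levels.append(next(label for limit, label in LEVELS if loudness < limit))
    runs, start = [], 0
    for i in range(1, len(levels) + 1):
        if i == len(levels) or levels[i] != levels[start]:
            runs.append((start, i - 1, levels[start]))
            start = i
    return runs
```

For example, two quiet frames followed by a loud and a moderate frame merge into three runs: `[(0, 1, 'A1'), (2, 2, 'A3'), (3, 3, 'A2')]`.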
Step 202: determining the distribution of the chaos degree according to image data in the classroom monitoring video;
This step is intended for the executing entity to determine a chaos degree distribution from the image data in the classroom monitoring video. Unlike the audio data used to determine the noisiness distribution, the executing entity determines the chaos degree distribution from the activity information of students and teachers in the image data: normally, students' postures are relatively fixed when they concentrate on a lecture, so the activity amplitude is small and the chaos degree is low, and vice versa. As with the noisiness distribution, the chaos degree distribution characterizes the degree of chaos over different time periods across the whole video duration.
Similarly, the chaos degree may be a combination of a plurality of eigenvectors and multidimensional matrices representing chaos features, or a quantized value obtained according to a certain conversion method.
Assuming that the chaos degree is defined in terms of activity amplitude and graded from small to large as B1, B2, B3, and B4, the chronological chaos distribution over a 10-minute stream of monitoring video images might be: level B3 in minutes 0-2, level B1 in minutes 2-5, level B2 in minutes 5-7, and level B4 in minutes 7-10.
Step 203: determining classroom scenes corresponding to different time periods according to the noisy degree distribution and the chaotic degree distribution;
Building on step 201 and step 202, this step aims for the executing entity to combine the noisiness distribution and the chaos degree distribution, thereby determining as accurately as possible the type of classroom scene corresponding to each time period, i.e., the distribution of classroom scene types.
It should be understood that the noisiness distribution determined from the audio data and the chaos degree distribution determined from the image data can each, to some extent, reflect the classroom scene type in each time period; the scene type is abstracted from different teaching modes and is related to the classroom attention it represents. That is, the noisiness distribution and the chaos degree distribution serve as two parallel influence factors that jointly determine the classroom scene type of the corresponding time period. Considering that the two distributions sometimes stem from the same cause but in other scenes are not tightly correlated, when comprehensively determining the scene type, each distribution can first be evaluated separately and the two conclusions then combined for anomaly analysis or mutual reinforcement; the weights of the respective conclusions can also be adjusted so that the combined result better matches the actual situation.
One implementation, without limitation, is:
inputting the noisy degree parameter and the chaotic degree parameter of the same time period as input parameters into a preset classroom scene determination model;
and determining the result output by the classroom scene determination model as the classroom scene of the corresponding time period to obtain classroom scenes respectively corresponding to different time periods.
The classroom scene determination model characterizes the correspondence between different noisiness and chaos parameters and different classroom scenes. To enable the model to learn this correspondence, video clips annotated with their true classroom scene types can be used as training samples in the training stage, each clip carrying its noisiness and chaos analysis results. Through its structure, the model can learn from the training samples the latent association between these analysis results and the true scene types, after which the trained model can determine the classroom scene types of real video clips.
Specifically, the classroom scene determination model can be built with neural networks, convolutional networks, or deep learning networks of various structures, and suitable weights for the parallel noisiness and chaos parameters can be found during training and iteration through a fully connected layer or a similar functional layer.
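The patent trains a model for this mapping; as a hypothetical stand-in (the rule table and scene names below are illustrative assumptions drawn from the example scenes later in the text, not the trained model), the learned correspondence could be sketched as:

```python
def classify_scene(noise_level, chaos_level):
    """Map a (noisiness, chaos) level pair -- ints 1..4 for A1..A4 / B1..B4 --
    to a classroom scene type. A trained model would learn this boundary."""
    if noise_level <= 1 and chaos_level <= 1:
        return "non-interactive"       # e.g. students quietly doing exercises
    if noise_level <= 2 and chaos_level <= 2:
        return "small-interactive"     # e.g. attentive lecturing with Q&A
    if noise_level <= 3 and chaos_level <= 3:
        return "large-interactive"     # e.g. group discussion
    return "chaotic-interactive"       # e.g. whispering before/after the bell

def scene_distribution(periods):
    """periods: list of (start_min, end_min, noise_level, chaos_level)."""
    return [(s, e, classify_scene(n, c)) for s, e, n, c in periods]
```

The same per-period interface, with the rule body replaced by a model's forward pass, matches the input/output contract described above.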
Classroom scenes can be divided into different types, for example chaotic interactive scenes, large interactive scenes, small interactive scenes, and non-interactive scenes. Evidently, the standard for evaluating classroom attention cannot be uniform across different classroom scenes, which reflects the necessity of determining the classroom scene in this step.
For ease of understanding, an example: within a 45-minute class, the different types of classroom scenes might be distributed as follows: the first 3 minutes are a chaotic interactive scene in which students whisper to one another; minutes 3-20 are a small interactive scene in which students listen attentively; minutes 21-30 are a non-interactive scene in which students quietly work on exercises; minutes 30-35 are a large interactive scene of group discussion; minutes 35-43 are a small interactive scene of classroom questioning and discussing results; and minutes 43-45 are again a chaotic interactive scene of students whispering to one another.
Step 204: and determining classroom attention parameters respectively corresponding to classroom scenes of different types according to the face orientation information in the image data.
Building on step 203, this step is intended for the executing entity, having determined the classroom scene type of each time period, to determine the classroom attention parameter under the corresponding scene type according to the face orientation information reflected in the image data within that period. It should be understood that the face orientation of attentive students and teachers differs across scene types: for example, when evaluating the classroom attention parameter in a small interactive scene of attentive lecturing, the number of faces oriented toward the teacher or blackboard should serve as the evaluation basis, whereas in a large interactive scene of group discussion, the number of faces oriented toward other students or content carriers such as textbooks should serve as the evaluation basis, and so on.
The classroom attention parameter described in this step is a parameterized description that quantifies the abstract notion of classroom attention so that its degree can be estimated; there are many ways to quantify it, such as scoring or conversion by a formula, and no limitation is imposed here.
According to the classroom attention determination method provided by this embodiment, the fact that different types of scenes in offline classroom teaching manifest differently in the image and audio data is used to accurately determine the classroom scene types in different time periods and, on the basis of the determined scene types, determine the classroom attention in each scene as accurately as possible, rather than neglecting the influence of different scenes on the evaluation of classroom attention parameters.
Referring to fig. 3, fig. 3 is a flowchart of another classroom attention determination method provided in the embodiments of the present disclosure, where the flowchart 300 includes the following steps:
step 301: determining sound source distribution and sound volume change trends in different time periods according to audio data in the classroom monitoring video;
step 302: determining the distribution of the noisy degree according to the overlapping degree and the sound quantity variation trend of the sound source distribution;
For the higher-level scheme given in step 201 of flow 200, this embodiment provides a concrete lower-level implementation through steps 301-302: first, the sound source distribution and the volume variation trend in different time periods are determined from the audio data; then the noisiness of each time period is determined from the overlap degree of the sound source distribution and the volume variation trend, forming the noisiness distribution.
One specific way of determining this can be seen in the following steps:
determining a time period whose overlap degree is smaller than a first preset value and whose volume variation conforms to a first preset trend as a low-noisiness lecturing period, the first preset trend being extracted from the volume variation of real lecturing periods;
determining a time period whose overlap degree is larger than a second preset value and whose volume variation conforms to a second preset trend as a high-noisiness unorganized discussion period, the second preset trend being extracted from the volume variation of real unorganized discussion periods;
and summarizing the noisiness parameters corresponding to each time period to determine the noisiness distribution. Assuming that in practice all time periods can only be divided into the lecturing periods and unorganized discussion periods shown above, the noisiness distribution is determined according to their chronological order.
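A minimal sketch of this period classification, with assumed threshold values and with the two "preset change trends" reduced to simple labels (all names below are illustrative, not from the patent):

```python
def classify_noise_period(source_overlap, volume_trend,
                          first_preset=0.2, second_preset=0.6):
    """source_overlap in [0, 1]: how much the detected sound-source positions
    coincide; volume_trend: 'steady' / 'bursty' stand in for the first and
    second preset change trends. Threshold values are assumptions."""
    if source_overlap < first_preset and volume_trend == "steady":
        return "lecture"                 # low noisiness: one dominant speaker
    if source_overlap > second_preset and volume_trend == "bursty":
        return "unorganized-discussion"  # high noisiness: overlapping voices
    return "undetermined"
```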
Step 303: identifying human body key points in image data in a classroom monitoring video;
step 304: determining the posture according to the key points of the human body, and determining the change condition of the posture;
step 305: determining the distribution of the chaos degree according to the change condition and consistency of the postures of different human bodies;
For the higher-level scheme given in step 202 of flow 200, this embodiment provides a concrete lower-level implementation through steps 303-305: first, human body key points are recognized in the image data; postures and their changes are then determined from the key points; finally, the chaos degree is determined from the changes in different human bodies' postures and the consistency between those changes. In an organized activity, even if a person's posture changes frequently, if the changes are consistent with the posture changes of others, the activity can be considered cooperative or unified; constrained by the organization, the chaos degree is then judged to be low, rather than judged high merely because postures change frequently.
One specific way of determining this can be seen in the following steps:
determining a time period in which the posture changes of different human bodies are smaller than a first preset change amplitude and the posture consistency between different human bodies is greater than a first preset consistency degree as a low-chaos lecturing period; that is, the smaller the change, the smaller the students' limb movement amplitude, and the higher the consistency, the more the students share the same posture;
determining a time period in which the posture changes of different human bodies are greater than a second preset change amplitude (larger than the first preset change amplitude) and the posture consistency between different human bodies is less than a second preset consistency degree (less than the first preset consistency degree) as a high-chaos unorganized discussion period; that is, the greater the change, the greater the students' limb movement amplitude and the weaker the constraint, and the lower the consistency, the more the students differ from one another and the higher their degree of freedom;
and summarizing the chaos parameters corresponding to each time period to determine the chaos degree distribution. Assuming that in practice all time periods can only be divided into the lecturing periods and unorganized discussion periods shown above, the chaos degree distribution is determined according to their chronological order.
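A minimal sketch of this step, with assumed amplitude thresholds and with 1 minus the coefficient of variation used as a crude consistency measure (both choices are illustrative assumptions, not the patent's definitions):

```python
def classify_chaos_period(amplitudes, amp_low=0.1, amp_high=0.4,
                          cons_high=0.8, cons_low=0.4):
    """amplitudes: per-student mean keypoint displacement over the period
    (normalized). Small, uniform movement -> lecture; large, uneven
    movement -> unorganized discussion."""
    n = len(amplitudes)
    mean = sum(amplitudes) / n
    std = (sum((a - mean) ** 2 for a in amplitudes) / n) ** 0.5
    # 1.0 = every student moves by the same amount; near 0.0 = highly uneven.
    consistency = max(0.0, 1.0 - std / (mean + 1e-9))
    if mean < amp_low and consistency > cons_high:
        return "lecture"
    if mean > amp_high and consistency < cons_low:
        return "unorganized-discussion"
    return "undetermined"
```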
Step 306: determining classroom scenes corresponding to different time periods according to the noisy degree distribution and the chaotic degree distribution;
step 307: determining forward orientations respectively corresponding to different classroom scenes;
For example, when the classroom scene is a lecturing scene, the direction toward the lecturer or the carrier of the teaching content can be determined as the forward orientation; in an organized discussion scene, the direction toward other discussion participants or knowledge record carriers (such as textbooks) can be determined as the forward orientation. That is, the forward orientation should characterize the attention of students in the corresponding type of classroom scene.
Step 308: extracting face orientation information from image data by using a face recognition model provided with a multi-scale convolution kernel;
Because the sizes of target bodies in the video differ, choosing a suitable convolution kernel size has a large influence on recognition accuracy. The current mainstream convolution kernel size is a fixed 3×3; by comprehensively using multiple convolution kernels of different scales (for example, combinations such as 3×3 + 1×3 + 3×1, or 3×3 together with 5×5), i.e., by varying the kernel sizes, targets of different sizes in different pictures can be recognized accurately.
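A pure-Python sketch of the multi-scale idea (kernel weights here are fixed for illustration; a real model learns them, pads the maps to equal size, and stacks them):

```python
def conv2d(image, kernel):
    """'Valid' 2D correlation of one single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def multi_scale_features(image, kernels):
    """Apply kernels of several shapes (e.g. 3x3, 1x3, 3x1, 5x5) to the same
    image and collect the feature maps, Inception-style, so targets of
    different sizes each meet a kernel with a suitable receptive field."""
    return [conv2d(image, k) for k in kernels]
```

For instance, a 4×4 image of ones convolved with an all-ones 3×3 kernel yields a 2×2 map of 9s, while an all-ones 1×3 kernel yields a 4×2 map of 3s; the two maps respond to structure at different scales.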
Step 309: and determining the classroom attention parameters in the corresponding classroom scenes according to the number of the faces in the forward direction respectively corresponding to the different classroom scenes.
On the basis of step 308, this step is intended to determine, by the execution subject, the classroom attention parameters in the corresponding classroom scenes according to the number of forward-facing faces respectively corresponding to the different classroom scenes. That is, the more faces are in the forward orientation, the higher the classroom attention that can be characterized.
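One simple realization of such a parameter (the exact formula is an assumption; the disclosure only requires that more forward faces characterize higher attention) is the share of detected faces whose orientation is forward:

```python
# Minimal sketch (formula is an assumption): the attention parameter for a
# scene is the fraction of detected faces whose orientation is forward.
def classroom_attention(forward_faces, total_faces):
    """Return a value in [0, 1]; 0.0 when no faces were detected."""
    if total_faces == 0:
        return 0.0
    return forward_faces / total_faces
```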
Different from the embodiment shown in flow 200, this embodiment provides specific implementations for determining the noisy degree distribution and the chaos degree distribution through steps 301-302 and steps 303-305 respectively, defines the forward orientation for the different types of classroom scenes through step 307, and improves the recognition of multi-resolution images and multi-scale objects through the multi-scale convolution kernels provided in step 308, so that the classroom attention parameter determined through step 309 in the corresponding type of classroom scene is more accurate.
It should be noted that the above improvement points and specific implementations have no dependency or causal relationship among them; only step 307 and step 309 have a matching relationship. Each of the other improvement points and specific implementations can be combined with the embodiment of flow 200 to form a separate independent embodiment; this embodiment exists only as a preferred embodiment that contains multiple improvement points and specific implementations at the same time.
On the basis of any of the above embodiments, in order to improve the accuracy of the determined classroom scenes as much as possible, when classroom planning information can be acquired through a preset data interface, the classroom scenes corresponding to different time periods determined based on the noisy degree distribution and the chaos degree distribution can be adjusted according to the planning information for the different types of classroom scenes contained therein. The preset data interface may be an acquisition interface in the teaching system for lesson-plan information uploaded by a teacher before teaching, and the planning information is the teacher's teaching arrangement for the class, including the arrangement of the different types of classroom scenes.
On the basis of any of the foregoing embodiments, offline teaching advice can also be generated by the execution subject in combination with the determined classroom attentions corresponding to the different classroom scene types, that is, by finding a teaching mode capable of bringing higher classroom attention. Please refer to fig. 4, which is a flowchart of a method for generating offline teaching advice according to an embodiment of the present disclosure, where the flow 400 includes the following steps:
step 401: according to the classroom attention parameters corresponding to each type of classroom scene, carrying out weighted calculation to obtain a comprehensive attention parameter corresponding to the whole classroom;
This step aims to obtain, by the execution subject, the comprehensive attention parameter corresponding to the whole class through weighted calculation according to the classroom attention parameter corresponding to each type of classroom scene; that is, a suitable weight is pre-assigned to each type of classroom scene, and the comprehensive attention parameter corresponding to the whole class is calculated as the weighted sum.
Step 402: determining the whole class with comprehensive attention parameters meeting preset requirements as a target class;
On the basis of step 401, this step is intended to determine, by the execution subject, each whole class whose comprehensive attention parameter meets the preset requirements as a target class. The preset requirements are set to screen out preferred classes from which offline teaching advice can be generated, and therefore generally require a higher comprehensive attention parameter; besides the comprehensive attention parameter itself, they may also consider the kinds, order, etc. of the classroom scene types involved.
Step 403: and generating an offline teaching suggestion according to the distribution of different types of classroom scenes in the target classroom.
On the basis of step 402, this step is intended to generate, by the execution subject, offline teaching advice according to the distribution of the different types of classroom scenes in the target class. Further, the generated offline teaching advice can be pushed to teachers of the same type of class to guide them in adjusting their teaching arrangements.
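Steps 401-403 can be sketched end-to-end as follows (the function names, weight values, and 0.7 threshold are hypothetical; the disclosure only specifies weighting, screening, and generating advice from the scene distribution):

```python
# Hypothetical sketch of steps 401-403: weight per-scene attention parameters
# into a comprehensive score, then screen target classes whose score meets a
# preset requirement; their scene distributions feed the teaching advice.
def comprehensive_attention(scene_attention, scene_weights):
    """scene_attention / scene_weights: dicts keyed by scene type."""
    total = sum(scene_weights.values())
    return sum(scene_attention[s] * w for s, w in scene_weights.items()) / total

def select_target_classes(classes, scene_weights, threshold=0.7):
    """classes: {class_id: per-scene attention dict}; keep high scorers."""
    return {
        cid: attn for cid, attn in classes.items()
        if comprehensive_attention(attn, scene_weights) >= threshold
    }
```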
In order to deepen understanding, the disclosure also provides a specific implementation scheme by combining a specific application scenario:
Considering that a classroom is generally rectangular and the distance between each row of desks and the front camera gradually increases, the faces of students sitting at the back of the classroom occupy only a small proportion of the field of view. To better identify student faces of different sizes in such a scene, this embodiment uses a TinyFace detection model, which detects faces of different sizes well, and further fuses multi-scale convolution kernels to improve detection accuracy for inconsistent face sizes.
The classroom attention index is constructed from the number of faces detected in the video to determine the classroom scene type. For example, when listening attentively, a student's face is usually exposed in the field of view of the front camera, so the classroom attention index in an attentive-listening classroom scene can be determined from the number of faces recognized in the video.
In processing the surveillance video, the video data can be processed with FFmpeg (a video processing tool), and the audio information in the video can be separated out for analysis; audio analysis is then used to assist the recognition of classroom scenes. The audio separated by FFmpeg is in WAV format, on the basis of which re-encoding and sampling-rate designation are required. When processing the audio file, the audio data also needs to be normalized, to avoid the problem of inconsistent scales caused by unnormalized samples.
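The normalization step can be sketched as follows (the FFmpeg invocation in the comment is an illustrative example of audio extraction with a designated sampling rate, not a command from the disclosure):

```python
# After separating the audio track with FFmpeg (e.g. a command along the
# lines of "ffmpeg -i class.mp4 -vn -ar 16000 class.wav" to drop video and
# re-encode at a fixed sampling rate), samples are peak-normalized to
# [-1, 1] so that later thresholding operates on a uniform scale.
def normalize_audio(samples):
    """Peak-normalize a list of audio samples; empty/silent input unchanged."""
    peak = max(abs(s) for s in samples) if samples else 0.0
    if peak == 0:
        return list(samples)
    return [s / peak for s in samples]
```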
In audio processing, the division of thresholds affects the identification of classroom interaction scenes, and several interaction scenes can fall within the same threshold band. For example, when the audio level is between 0.3 and 0.5, the class is probably in an attentive-listening or classroom-questioning stage; when the audio level is between 0.5 and 0.7, it may be in an organized-activity or group-discussion stage. In such cases the scene recognition task is difficult to complete by audio segmentation alone, so when this situation is met, fusion with the video-based scene recognition module described in the next section is needed to make the detection of classroom interaction scenes more accurate.
The quiet exercise, attentive listening and organized activity stages occupy a large proportion of a whole class, while the classroom questioning, group discussion and unorganized chaotic stages occupy only a small proportion of its duration. Unbalanced sampling can therefore be adopted: in the data preprocessing stage, different sampling frequencies are used for different scenes, with segments from the quiet exercise, attentive listening and organized activity stages down-sampled, and segments from the classroom questioning, group discussion and unorganized chaotic stages up-sampled, to improve the balance of the sample distribution.
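A minimal sketch of this preprocessing (the scene labels, keep-every-n rate, and repeat count are illustrative assumptions): majority-scene segments are thinned out while minority-scene segments are repeated.

```python
# Sketch of unbalanced sampling: down-sample majority-scene segments (keep
# every n-th) and up-sample minority-scene segments (repeat each) so the
# training distribution is more balanced. Rates are assumptions.
MAJORITY = {"quiet_exercise", "attentive_listening", "organized_activity"}

def resample_segments(segments, down_keep_every=2, up_repeat=2):
    """segments: list of (scene_label, data) pairs, in order."""
    out, majority_seen = [], 0
    for label, data in segments:
        if label in MAJORITY:
            majority_seen += 1
            if majority_seen % down_keep_every == 0:
                out.append((label, data))            # keep every n-th
        else:
            out.extend([(label, data)] * up_repeat)  # repeat minority samples
    return out
```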
The subsequent attention evaluation module integrates the audio segmentation result, the face detection result and the extracted video features to identify the current video interaction scene.
In dividing classroom attention, this embodiment can further divide the attention situation into three cases, concentrated, moderate, and unconcentrated, which correspond to different classroom interaction scenes. In the overall distribution, the unorganized chaotic stage occupies the smallest proportion, appears least often in the whole data set, and is therefore the hardest to identify. Under the concentrated case, quiet exercise and attentive listening are the most common situations in the data set, and with training on a large amount of data the recognition results for them are more accurate. Under the moderate case, the action differences between the three interaction scenes of classroom questioning, group discussion and organized activity are not very obvious. The scheme provided by this embodiment can therefore be adopted to amplify these differences and further improve the accuracy of the judgment.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of a classroom attention determination apparatus, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the classroom attention determination apparatus 500 of the present embodiment may include: a noisy degree distribution determination unit 501, a chaotic degree distribution determination unit 502, a different classroom scene determination unit 503, and a classroom attention parameter determination unit 504. The noisy level distribution determining unit 501 is configured to determine noisy level distribution according to audio data in the classroom monitoring video; a disturbance degree distribution determination unit 502 configured to determine a disturbance degree distribution from image data in the classroom monitoring video; a different classroom scene determination unit 503 configured to determine classroom scenes corresponding to different time periods according to the noisy degree distribution and the chaotic degree distribution; a classroom attention parameter determination unit 504 configured to determine classroom attention parameters corresponding to different types of classroom scenes respectively according to the face orientation information in the image data.
In the present embodiment, in the classroom attention determination apparatus 500: for the detailed processing and technical effects of the noisy degree distribution determining unit 501, the chaotic degree distribution determining unit 502, the different classroom scene determining unit 503, and the classroom attention parameter determining unit 504, reference can be made to the related descriptions of steps 201 to 204 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of the present embodiment, the classroom attention parameter determination unit 504 may include:
a positive orientation-per-scene determination subunit configured to determine positive orientations respectively corresponding to different classroom scenes;
and the classroom attention parameter determination subunit is configured to determine classroom attention parameters in corresponding classroom scenes according to the number of the faces in the forward direction respectively corresponding to different classroom scenes.
In some optional implementations of this embodiment, the determining the sub-unit by scene of the positive orientation may include:
the teaching scene positive orientation determining module is configured to determine the direction of a teaching object or a carrier presenting teaching contents as a positive orientation in response to the classroom scene being a teaching scene;
an organized discussion forward facing determination module configured to determine a direction towards other discussion objects or the knowledge record carrier as a forward facing in response to the classroom scene being an organized discussion scene.
In some optional implementations of the present embodiment, the classroom attention determination apparatus 500 may further include:
and the face recognition model calling unit is configured to extract face orientation information from the image data by using a face recognition model provided with a multi-scale convolution kernel.
In some optional implementations of this embodiment, the noisiness distribution determining unit 501 may be further configured to:
determining sound source distribution and sound volume change trends in different time periods according to the audio data;
and determining the distribution of the noisy degree according to the overlapping degree and the sound quantity variation trend of the sound source distribution.
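A hypothetical sketch of the unit's computation (the linear-trend estimate, the 0.5/0.5 weighting, and the saturation at five sources are illustrative assumptions; the disclosure only requires combining source overlap with the volume trend):

```python
# Hypothetical sketch: combine the overlap of sound sources with the volume
# trend over a time period into a single noisy-degree parameter.
def volume_trend(volume_curve):
    """Slope of a least-squares line through per-frame volume values."""
    n = len(volume_curve)
    mean_x, mean_y = (n - 1) / 2.0, sum(volume_curve) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(volume_curve))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den if den else 0.0

def noisy_degree(source_count, volume_curve):
    overlap_term = min(source_count / 5.0, 1.0)  # saturate at 5 sources
    trend_term = min(max(volume_trend(volume_curve), 0.0), 1.0)
    return 0.5 * overlap_term + 0.5 * trend_term
```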
In some optional implementations of this embodiment, the chaos degree distribution determining unit 502 may be further configured to:
identifying human key points in the image data;
determining the posture according to the key points of the human body, and determining the change condition of the posture;
and determining the distribution of the chaos degree according to the change condition and consistency of the postures of different human bodies.
In some optional implementations of the present embodiment, the different classroom scenario determination unit 503 can be further configured to:
inputting the noisy degree parameter and the chaotic degree parameter of the same time period as input parameters into a preset classroom scene determination model; the classroom scene determination model is used for representing corresponding relations between different noisy degree parameters and chaotic degree parameters and different classroom scenes;
and determining the result output by the classroom scene determination model as the classroom scene of the corresponding time period to obtain classroom scenes respectively corresponding to different time periods.
In some optional implementations of the present embodiment, the classroom attention determination apparatus 500 may further include:
and the adjusting unit is configured to respond to the classroom planning information acquired through the preset data interface, and adjust the classroom scenes corresponding to different time periods determined based on the noisy degree distribution and the chaotic degree distribution according to the planning information of the classroom scenes of different types in the classroom planning information.
In some optional implementations of the present embodiment, the classroom attention determination apparatus 500 may further include:
the comprehensive attention parameter determining unit is configured to obtain a comprehensive attention parameter corresponding to the whole class through weighted calculation according to the class attention parameters respectively corresponding to each type of class scene;
a preferred classroom determination unit configured to determine a whole classroom having a comprehensive attention parameter satisfying a preset requirement as a target classroom;
and the teaching suggestion generating unit is configured to generate offline teaching suggestions according to the distribution of different types of classroom scenes in the target classroom.
The present embodiment exists as the apparatus embodiment corresponding to the above method embodiments. The classroom attention determining apparatus provided in this embodiment utilizes the characteristics that different types of scenes express in the image and audio data during offline classroom teaching to accurately determine the classroom scene types at different time periods, so that classroom attention in each class scene is determined as accurately as possible on the basis of the determined scene types, rather than ignoring the influence of different scenes on the assessment of classroom attention parameters.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the classroom attention determination method described in any of the above embodiments when executed.
According to an embodiment of the present disclosure, there is also provided a readable storage medium storing computer instructions for enabling a computer to implement the classroom attention determination method described in any of the above embodiments when executed.
The disclosed embodiments provide a computer program product which, when executed by a processor, is capable of implementing the classroom attention determination method described in any of the above embodiments.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The calculation unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 601 performs the respective methods and processes described above, such as the classroom attention determination method. For example, in some embodiments, the classroom attention determination method can be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the classroom attention determination method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the classroom attention determination method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of high management difficulty and weak service extensibility in conventional physical host and Virtual Private Server (VPS) services.
According to the technical scheme of the embodiments of the present disclosure, the characteristics that different types of scenes express in the image and audio data during offline classroom teaching are utilized to accurately determine the classroom scene types at different time periods, so that classroom attention in each class scene is determined as accurately as possible on the basis of the determined scene types, and the influence of different scenes on the evaluation of classroom attention parameters is not ignored.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (21)

1. A classroom attention determination method comprising:
determining the distribution of the noisy degree according to the audio data in the classroom monitoring video;
determining the distribution of the chaos degree according to the image data in the classroom monitoring video;
determining classroom scenes corresponding to different time periods according to the noisy degree distribution and the chaotic degree distribution;
and determining classroom attention parameters respectively corresponding to classroom scenes of different types according to the face orientation information in the image data.
2. The method according to claim 1, wherein the determining, according to the face orientation information in the image data, classroom attention parameters respectively corresponding to different types of classroom scenes comprises:
determining forward orientations respectively corresponding to different classroom scenes;
and determining the classroom attention parameters in the corresponding classroom scenes according to the number of the faces in the forward direction respectively corresponding to the different classroom scenes.
3. The method of claim 2, wherein the determining positive orientations corresponding to different classroom scenarios, respectively, comprises:
determining a direction toward a teaching object or a carrier presenting teaching content as the positive orientation in response to the classroom scene being a teaching scene;
in response to the classroom scene being an organized discussion scene, a direction towards other discussion objects or a knowledge record carrier is determined to be the positive orientation.
4. The method of claim 1, further comprising:
and extracting the face orientation information from the image data by using a face recognition model provided with a multi-scale convolution kernel.
5. The method of claim 1, wherein determining a noisiness distribution from audio data in the classroom monitoring video comprises:
determining sound source distribution and sound volume change trends in different time periods according to the audio data;
and determining the distribution of the noisy degree according to the overlapping degree of the sound source distribution and the volume change trend.
6. The method of claim 1, wherein the determining a distribution of degrees of confusion from image data in the classroom monitoring video comprises:
identifying human key points in the image data;
determining the posture according to the key points of the human body, and determining the change condition of the posture;
and determining the distribution of the chaos degree according to the change condition and consistency of the postures of different human bodies.
7. The method of claim 1, wherein the determining the classroom scene corresponding to different time periods according to the noisy level distribution and the chaotic level distribution comprises:
inputting the noisy degree and the chaotic degree of the same time period as input parameters into a preset classroom scene determination model; the classroom scene determination model is used for representing the corresponding relation between different noisy degrees and chaotic degrees and different classroom scenes;
and determining the result output by the classroom scene determination model as the classroom scene of the corresponding time period to obtain classroom scenes respectively corresponding to different time periods.
8. The method of claim 1, further comprising:
responding to the fact that classroom planning information is obtained through a preset data interface, and adjusting classroom scenes which are determined based on the noisy degree distribution and the chaotic degree distribution and correspond to different time periods according to the planning information of classroom scenes of different types in the classroom planning information.
9. The method of any of claims 1-8, further comprising:
according to the classroom attention parameters corresponding to each type of classroom scene, carrying out weighted calculation to obtain a comprehensive attention parameter corresponding to the whole classroom;
determining the whole class with comprehensive attention parameters meeting preset requirements as a target class;
and generating an offline teaching suggestion according to the distribution of the classroom scenes of different types in the target classroom.
10. A classroom attention determination device comprising:
a noisy level distribution determination unit configured to determine a noisy level distribution from the audio data in the classroom monitoring video;
a disturbance degree distribution determination unit configured to determine a disturbance degree distribution from image data in the classroom monitoring video;
a different classroom scene determination unit configured to determine classroom scenes corresponding to different time periods according to the noisy degree distribution and the chaotic degree distribution;
a classroom attention parameter determination unit configured to determine classroom attention parameters corresponding to different types of classroom scenes respectively according to the face orientation information in the image data.
11. The apparatus of claim 10, wherein the classroom attention parameter determination unit comprises:
a per-scene positive orientation determination subunit configured to determine positive orientations respectively corresponding to different classroom scenes;
and a classroom attention parameter determination subunit configured to determine the classroom attention parameter in the corresponding classroom scene according to the number of faces in the positive orientation respectively corresponding to different classroom scenes.
12. The apparatus of claim 11, wherein the per-scene positive orientation determination subunit comprises:
a teaching scene positive orientation determination module configured to, in response to the classroom scene being a teaching scene, determine a direction toward the teaching object or toward a carrier presenting the teaching content as the positive orientation;
and an organized discussion positive orientation determination module configured to, in response to the classroom scene being an organized discussion scene, determine a direction toward other discussion objects or toward the knowledge record carrier as the positive orientation.
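Determining the attention parameter from the number of positively oriented faces, as in claims 11 and 12, can be sketched as a simple ratio. The ratio formulation and the orientation labels are assumptions; the patent does not fix an exact formula:

```python
def classroom_attention(face_orientations, positive_orientation):
    """Fraction of detected faces whose orientation matches the scene's
    positive orientation (e.g. toward the teacher in a teaching scene,
    toward discussion partners in an organized discussion scene).

    face_orientations: list of orientation labels, one per detected face.
    """
    if not face_orientations:
        return 0.0  # no faces detected in this scene's frames
    forward = sum(1 for o in face_orientations if o == positive_orientation)
    return forward / len(face_orientations)
```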
13. The apparatus of claim 10, further comprising:
a face recognition model calling unit configured to extract the face orientation information from the image data using a face recognition model provided with a multi-scale convolution kernel.
14. The apparatus of claim 10, wherein the noisiness distribution determining unit is further configured to:
determining sound source distribution and sound volume change trends in different time periods according to the audio data;
and determining the distribution of the noisy degree according to the overlapping degree of the sound source distribution and the volume change trend.
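The two noisy-degree steps of claim 14 can be sketched as a toy heuristic. The number of simultaneously active sources stands in for the overlap of the sound source distribution, and the volume slope for the change trend; the weights and saturation point are illustrative assumptions, not taken from the patent:

```python
def noisy_degree(active_sources, volume_trend):
    """Score one time period's noisiness in [0, 1].

    active_sources: number of simultaneously active sound sources
                    (a proxy for the overlap of the sound source distribution)
    volume_trend:   slope of the volume over the period (positive = rising)
    The combination below is a hypothetical heuristic; the patent only
    states that both signals feed the noisy degree.
    """
    source_term = min(active_sources / 5.0, 1.0)   # saturate at 5 sources
    trend_term = max(0.0, min(volume_trend, 1.0))  # only rising volume adds
    return min(1.0, 0.7 * source_term + 0.3 * trend_term)
```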
15. The apparatus of claim 10, wherein the chaotic degree distribution determining unit is further configured to:
identifying human key points in the image data;
determining postures from the human key points, and determining how the postures change;
and determining the chaotic degree distribution according to the change and consistency of the postures of different human bodies.
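Claim 15's use of posture change and cross-person consistency can be sketched with simple statistics: the mean change captures how much people move, and the variance captures how inconsistent the movement is. The exact statistics are assumptions; the patent only names "change" and "consistency" of postures:

```python
def chaotic_degree(pose_changes):
    """Score disorder in [0, 1] for one time period.

    pose_changes: one entry per detected person, each the magnitude of
    that person's posture change over the period (e.g. mean keypoint
    displacement; the displacement measure is a hypothetical choice).
    """
    if not pose_changes:
        return 0.0
    n = len(pose_changes)
    mean = sum(pose_changes) / n
    variance = sum((c - mean) ** 2 for c in pose_changes) / n
    # high average movement and high inconsistency across people
    # both raise the chaotic degree
    return min(1.0, mean + variance)
```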
16. The apparatus of claim 10, wherein the different classroom scenario determination unit is further configured to:
inputting the noisy degree parameter and the chaotic degree parameter of the same time period as input parameters into a preset classroom scene determination model; the classroom scene determination model is used for representing corresponding relations between different noisy degree parameters and chaotic degree parameters and different classroom scenes;
and determining the result output by the classroom scene determination model as the classroom scene of the corresponding time period to obtain classroom scenes respectively corresponding to different time periods.
17. The apparatus of claim 10, further comprising:
and an adjusting unit configured to, in response to classroom planning information being acquired through a preset data interface, adjust the classroom scenes corresponding to different time periods determined based on the noisy degree distribution and the chaotic degree distribution, according to the planning information for different types of classroom scenes in the classroom planning information.
18. The apparatus of any of claims 10-17, further comprising:
the comprehensive attention parameter determining unit is configured to obtain a comprehensive attention parameter corresponding to the whole class through weighted calculation according to the class attention parameters respectively corresponding to each type of class scene;
a preferred classroom determination unit configured to determine a whole classroom having a comprehensive attention parameter satisfying a preset requirement as a target classroom;
and the teaching suggestion generation unit is configured to generate offline teaching suggestions according to the distribution of different types of classroom scenes in the target classroom.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the classroom attention determination method of any of claims 1-9.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the classroom attention determination method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements a classroom attention determination method as defined in any one of claims 1-9.
CN202110858425.6A 2021-07-28 2021-07-28 Classroom attention determination method, device, apparatus, storage medium, and program product Active CN113591678B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110858425.6A CN113591678B (en) 2021-07-28 2021-07-28 Classroom attention determination method, device, apparatus, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110858425.6A CN113591678B (en) 2021-07-28 2021-07-28 Classroom attention determination method, device, apparatus, storage medium, and program product

Publications (2)

Publication Number Publication Date
CN113591678A 2021-11-02
CN113591678B CN113591678B (en) 2023-06-23

Family

ID=78251195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110858425.6A Active CN113591678B (en) 2021-07-28 2021-07-28 Classroom attention determination method, device, apparatus, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN113591678B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160284095A1 (en) * 2015-03-27 2016-09-29 Edmond Chalom Machine learning of real-time image capture parameters
CN108399376A (en) * 2018-02-07 2018-08-14 华中师范大学 Student classroom learning interest intelligent analysis method and system
CN108805009A (en) * 2018-04-20 2018-11-13 华中师范大学 Classroom learning state monitoring method based on multimodal information fusion and system
CN111046823A (en) * 2019-12-19 2020-04-21 东南大学 Student classroom participation degree analysis system based on classroom video
CN111898881A (en) * 2020-07-15 2020-11-06 杭州海康威视系统技术有限公司 Classroom teaching quality assessment method, device, equipment and storage medium
CN112287844A (en) * 2020-10-30 2021-01-29 北京市商汤科技开发有限公司 Student situation analysis method and device, electronic device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
缪佳 (Miao Jia); 禹东川 (Yu Dongchuan): "Analysis of Students' Classroom Engagement Based on Classroom Video", 教育生物学杂志 (Journal of Bio-education), no. 04, pages 26-32 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114493952A (en) * 2022-04-18 2022-05-13 北京梦蓝杉科技有限公司 Education software data processing system and method based on big data
CN115907507A (en) * 2022-10-13 2023-04-04 华中科技大学 Classroom behavior detection and learning situation analysis method for students in combined classroom scene
CN115907507B (en) * 2022-10-13 2023-11-14 华中科技大学 Student class behavior detection and learning analysis method combined with class scene

Also Published As

Publication number Publication date
CN113591678B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN107680019B (en) Examination scheme implementation method, device, equipment and storage medium
US10922991B2 (en) Cluster analysis of participant responses for test generation or teaching
US10621685B2 (en) Cognitive education advisor
CN105516280B Multimodal learning process state information packed recording method
CN109063587B (en) Data processing method, storage medium and electronic device
CN108875785B (en) Attention degree detection method and device based on behavior feature comparison
US20190114937A1 (en) Grouping users by problematic objectives
CN108537702A (en) Foreign language teaching evaluation information generation method and device
US10540994B2 (en) Personal device for hearing degradation monitoring
US11182447B2 (en) Customized display of emotionally filtered social media content
US20210312288A1 (en) Method for training classification model, classification method, apparatus and device
CN113591678A (en) Classroom attention determination method, device, equipment, storage medium and program product
US10541884B2 (en) Simulating a user score from input objectives
US10354543B2 (en) Implementing assessments by correlating browsing patterns
CN110070076B (en) Method and device for selecting training samples
Pugh et al. Say What? Automatic Modeling of Collaborative Problem Solving Skills from Student Speech in the Wild.
CN112417158A (en) Training method, classification method, device and equipment of text data classification model
CN112509690A (en) Method, apparatus, device and storage medium for controlling quality
US10866956B2 (en) Optimizing user time and resources
US10915819B2 (en) Automatic real-time identification and presentation of analogies to clarify a concept
Ceneda et al. Show me your face: Towards an automated method to provide timely guidance in visual analytics
US20230085195A1 (en) Enhanced learning content in an interconnected environment
CN110188602A (en) Face identification method and device in video
US10930169B2 (en) Computationally derived assessment in childhood education systems
CN109214616B (en) Information processing device, system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant