WO2018233398A1 - Learning monitoring method, apparatus, and electronic device - Google Patents

Learning monitoring method, apparatus, and electronic device

Info

Publication number
WO2018233398A1
WO2018233398A1 (PCT/CN2018/086686)
Authority
WO
WIPO (PCT)
Prior art keywords
class
student
feature data
classroom
information
Prior art date
Application number
PCT/CN2018/086686
Other languages
English (en)
French (fr)
Inventor
付国为
薛刚
李月梅
安山
孟明秀
宋宇佳
冯艳刚
孟旭
Original Assignee
北京易真学思教育科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京易真学思教育科技有限公司
Publication of WO2018233398A1
Priority to US16/720,070 (US10891873B2)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/30Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training

Definitions

  • the embodiments of the present invention relate to the field of computer technologies, and in particular, to a learning monitoring method, apparatus, and electronic device.
  • The embodiments of the present invention provide a learning monitoring method, apparatus, and electronic device, to solve the prior-art problem that the in-class listening situation of students who learn through computers and networks cannot be effectively monitored.
  • A learning monitoring method includes: acquiring a class image of classroom students; identifying the class image to acquire feature data of the classroom students, where the feature data includes at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students; and determining a class state of the classroom students according to their feature data.
  • A learning monitoring apparatus includes: a first acquiring module configured to acquire a class image of classroom students; a second acquiring module configured to identify the class image and acquire feature data of the classroom students, where the feature data includes at least one of: facial feature data, visual feature data, and limb feature data of the classroom students; and a determining module configured to determine a class state of the classroom students according to their feature data.
  • An electronic device includes: a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another via the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the learning monitoring method described above.
  • A computer storage medium stores: executable instructions for acquiring a class image of classroom students; executable instructions for identifying the class image to acquire feature data of the classroom students, where the feature data includes at least one of the following: facial feature data, visual feature data, and limb feature data of the classroom students; and executable instructions for determining a class state of the classroom students according to their feature data.
  • According to the above solution, the acquired class images of the classroom students are identified, their feature data is determined, and the class state of the students during class is thereby determined.
  • A student's facial feature data, visual feature data, and limb feature data can express the student's expression, line of sight, body movements, and other in-class information, which effectively reflects the student's current listening state. Therefore, through the facial, visual, and limb feature data, the student's class performance can be monitored and analyzed from multiple dimensions such as expression, line of sight, and body movement, so that the in-class listening situation of students who learn through computers and networks is monitored effectively and accurately, providing an effective reference for subsequent learning and teaching so as to further improve the learning or teaching process.
  • FIG. 1 is a flow chart showing the steps of a learning monitoring method according to Embodiment 1 of the present invention.
  • FIG. 2 is a flow chart showing the steps of a learning monitoring method according to Embodiment 2 of the present invention.
  • FIG. 3 is a schematic diagram of a class information graph in the embodiment shown in FIG. 2.
  • FIG. 4 is a structural block diagram of a learning monitoring apparatus according to a third embodiment of the present invention.
  • FIG. 5 is a structural block diagram of a learning monitoring apparatus according to Embodiment 4 of the present invention.
  • FIG. 6 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention.
  • Referring to FIG. 1, a flow chart of the steps of a learning monitoring method according to the first embodiment of the present invention is shown.
  • Step S102: Acquire a class image of the classroom students.
  • In a feasible manner, the class image can be obtained by sampling with a camera, for example, by taking a photo at a fixed interval such as every 1 second; it can also be obtained from a class video, for example, by automatically capturing frames from the video (see the sampling sketch below).
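  • As an illustration of this sampling, the following is a minimal sketch using OpenCV; the file names, the 1-second interval, and the fallback frame rate are assumptions, not part of this disclosure:

```python
import cv2  # pip install opencv-python

def sample_class_images(video_path: str, interval_s: float = 1.0):
    """Yield one frame from the class video every `interval_s` seconds."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, int(round(fps * interval_s)))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield index / fps, frame  # (timestamp in seconds, BGR image)
        index += 1
    cap.release()

for ts, frame in sample_class_images("classroom.mp4"):  # hypothetical file
    cv2.imwrite(f"class_image_{ts:05.0f}s.jpg", frame)
```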
  • Step S104: Identify the class image and acquire the feature data of the classroom students.
  • The feature data of the classroom students includes at least one of the following: facial feature data, visual feature data, and limb feature data of the classroom students.
  • The facial feature data characterizes the students' facial features; through it, the students can be identified, and their facial movements and expressions can be determined.
  • The visual feature data characterizes the students' line-of-sight features; through it, the students' current visual attention points can be determined. The limb feature data characterizes the students' body-movement features; through it, the students' current movements can be determined.
  • The facial feature data, visual feature data, and limb feature data of the classroom students can express their expressions, sight, posture, and other information, which can effectively reflect their current class state.
  • Step S106: Determine the class state of the classroom students according to their feature data.
  • In this way, by identifying the acquired class images, the feature data of the classroom students is determined, and the class state of the students during class is thereby determined.
  • A student's facial feature data, visual feature data, and limb feature data can express the student's expression, line of sight, body movements, and other in-class information, which effectively reflects the student's current listening state. Therefore, through the facial, visual, and limb feature data, the student's class performance can be monitored and analyzed from multiple dimensions such as expression, line of sight, and body movement, so that the in-class listening situation of students who learn through computers and networks is monitored effectively and accurately, providing an effective reference for subsequent learning and teaching so as to further improve the learning or teaching process.
  • The learning monitoring method of this embodiment may be implemented by any suitable device having data processing functions, including but not limited to various terminals and servers. A minimal sketch of the three-step flow follows below.
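  • As an illustration only, the following is a minimal, hypothetical sketch of steps S102 to S106; `detect_students`, `extract_features`, and `classify_state` are stand-ins for whatever detection and recognition models an implementation chooses, not components defined by this disclosure:

```python
def monitor_learning(class_image, detect_students, extract_features, classify_state):
    """S102-S106: one class image -> a class state per detected student.

    detect_students(image)   -> iterable of (student_id, image_region)
    extract_features(region) -> facial / visual / limb feature data
    classify_state(features) -> e.g. a concentration score
    """
    states = {}
    for student_id, region in detect_students(class_image):
        features = extract_features(region)            # step S104
        states[student_id] = classify_state(features)  # step S106
    return states
```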
  • Referring to FIG. 2, a flow chart of the steps of a learning monitoring method according to the second embodiment of the present invention is shown.
  • Step S202: Perform three-dimensional modeling on the classroom in which the classroom students are located to generate a three-dimensional classroom model.
  • In a feasible manner, the classroom is modeled in 3D based on images captured by multi-angle cameras.
  • 3D modeling is a technique for simulating the display of 3D objects using a computer or other video device. Modeling and rendering based on images taken by the camera can achieve high rendering speed and high realism. Since the image itself contains a wealth of scene information, it is easier to obtain a photo-realistic scene model from the image. Compared with other methods that use modeling software or 3D scanners to obtain stereo models, image modeling-based methods are cheap, realistic, and highly automated.
  • Generating the three-dimensional classroom model through 3D modeling makes monitoring students' learning more convenient and more realistic. However, it should be noted that this step is optional; in practical applications, the subsequent steps may also be performed without 3D modeling.
  • Step S204: Determine the positions of the classroom students according to the heat map in the three-dimensional classroom model (see the sketch below).
  • Since the human body radiates a certain amount of heat, it can be detected by a corresponding device such as an infrared device. Therefore, based on the three-dimensional classroom model combined with a human-body heat map, the positions of the students in the classroom can be determined. Once the identity of a classroom student has been confirmed, the student's actions and behaviors can be tracked through the heat map, the student's actual physical location can be determined in real time, and the student can be dynamically located and bound, eliminating the need to repeatedly confirm identity through image comparison; this improves recognition efficiency and reduces the recognition burden. If the identity of a classroom student cannot be confirmed for the time being, the acquired data may be associated with the classroom student indicated by the heat map, and after the student's identity is confirmed, the data is associated with the identified student. However, it should be apparent to those skilled in the art that confirming student identity by other suitable means, such as image comparison, is equally applicable to the solution of the embodiments of the present invention.
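  • The disclosure does not fix how positions are read off the heat map; the following is a minimal sketch, assuming a 2-D thermal array is available and that each sufficiently warm connected blob corresponds to one person (the threshold and the simulated data are invented):

```python
import numpy as np
from scipy import ndimage

def student_positions(heat_map: np.ndarray, threshold: float = 30.0):
    """Return (row, col) centroids of warm blobs in a 2-D heat map.

    Each connected region above `threshold` is treated as one person; the
    centroid approximates the student's position in heat-map coordinates,
    which a real system would then map into the 3-D classroom model.
    """
    mask = heat_map > threshold
    labels, count = ndimage.label(mask)  # connected warm regions
    return ndimage.center_of_mass(mask, labels, range(1, count + 1))

heat = np.zeros((48, 64))
heat[10:14, 20:24] = 36.5  # one simulated warm body
heat[30:34, 40:44] = 36.8  # another
print(student_positions(heat))  # ~[(11.5, 21.5), (31.5, 41.5)]
```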
  • Step S206: Acquire the class image of the classroom students in the three-dimensional classroom model according to the determined positions of the students.
  • In a feasible manner, an overall image of the classroom in which the students are located may be acquired, and the class image of each student obtained from the overall image; alternatively, the class image of each student may be acquired separately.
  • Optionally, the class images of the classroom students can be acquired in real time, so that their class state can be understood in a timely manner.
  • The real-time acquisition may be done via live video, or by photographing with a camera at short intervals, where the interval can be set appropriately by those skilled in the art according to actual needs so that the students' class situation is obtained in time, for example, one second or shorter.
  • When acquiring the class image according to the determined positions of the students, a single class image may, for example, cover the students at all positions to avoid omission; alternatively, the students at a particular position may be photographed, and so on.
  • In addition, the class image of a classroom student can be obtained according to the heat map and the student's position, and the students' actions and behaviors can be tracked once it is determined that a class is currently in session.
  • For example, if the heat map indicates that students have remained at certain positions for a certain period of time, it can be determined that a class is currently in session; likewise, if the heat map indicates that most students are at positions different from their usual ones, it can also be determined that a class is currently in session.
  • Step S208: Identify the class image and acquire the feature data of the classroom students.
  • The feature data of the classroom students includes at least one of the following: facial feature data, visual feature data, and limb feature data of the classroom students.
  • The manner of identifying the class image can be set appropriately by a person skilled in the art according to actual needs, and the embodiment of the present invention does not limit this, as long as the facial feature data, visual feature data, and limb feature data of the classroom students can be obtained, for example, by a support vector machine (SVM) algorithm, a convolutional neural network (CNN) model algorithm, and so on (see the SVM sketch below).
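  • For illustration, the following is a minimal sketch of one of the techniques mentioned, an SVM, classifying an expression from a facial feature vector; the landmark-style features and labels are fabricated placeholders:

```python
import numpy as np
from sklearn.svm import SVC

# Toy training set: each row is a (hypothetical) facial-landmark feature
# vector, e.g. mouth-corner curvature and eye openness; labels are expressions.
X_train = np.array([
    [0.8, 0.9, 0.1], [0.7, 0.8, 0.2],  # smiling, eyes open
    [0.1, 0.3, 0.9], [0.2, 0.2, 0.8],  # frowning, eyes narrowed
])
y_train = ["smile", "smile", "frown", "frown"]

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict([[0.75, 0.85, 0.15]]))  # -> ['smile']
```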
  • Step S210: Determine the class state of the classroom students according to their feature data.
  • the class status includes at least one of the following: classroom concentration, class interaction, and course preference.
  • classroom concentration is used to indicate whether classroom students are focused on listening.
  • the degree of classroom interaction is used to indicate whether classroom students actively participate in classroom interaction in teacher-initiated interactive teaching.
  • The course preference degree is used to indicate whether classroom students like the course and/or the teacher of the course.
  • In a feasible manner, each feature datum of a classroom student is compared with the corresponding feature data of pre-stored class-state sample images, and the current state of each student and a score corresponding to that state are determined according to the comparison result.
  • According to the current state and the score, at least one of the student's classroom concentration, classroom interaction degree, and course preference degree is determined; the class-state sample images are annotated with students' feature data, state data, and the scores corresponding to the state data.
  • The class-state sample images may be an image set containing many images of classroom students' states during class as samples, where the students in each sample image are annotated with features, states, and corresponding scores.
  • When a student's feature data matches the feature data of a sample image, the student's current state may be considered consistent with the state of that sample image and assigned the same score.
  • Based on the determined current state and score of the classroom student, at least one of the student's classroom concentration, classroom interaction degree, and course preference degree is determined. For example, by comparing the score with a set threshold, it is possible to determine how much the student likes the current course (a sketch of this matching and scoring follows below).
  • The threshold may be set appropriately by a person skilled in the art according to actual needs, which is not limited by the embodiment of the present invention.
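  • A minimal sketch of such matching and scoring, using cosine similarity over feature vectors as one plausible matching rule; the sample annotations and the similarity threshold are assumptions:

```python
import numpy as np

# Pre-stored class-state samples: feature vector, state label, score (10-point scale).
SAMPLES = [
    (np.array([0.9, 0.8, 0.9]), "focused",    9),
    (np.array([0.2, 0.1, 0.3]), "distracted", 3),
]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_state(features: np.ndarray, min_similarity: float = 0.95):
    """Return (state, score) of the best-matching sample, or None if nothing matches."""
    vec, state, score = max(SAMPLES, key=lambda s: cosine(features, s[0]))
    if cosine(features, vec) < min_similarity:
        return None
    return state, score

print(match_state(np.array([0.85, 0.8, 0.88])))  # -> ('focused', 9)
```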
  • For example, for classroom concentration: the facial feature data of a classroom student is compared with the facial feature data in the class-state sample images, and it is determined according to the comparison result whether the student's facial expression matches the facial expression in the sample image; the facial feature data and visual feature data of the student are compared with the facial feature data and visual feature data in the sample images, and it is determined according to the comparison result whether the student's line-of-sight direction matches the line-of-sight direction in the sample image; the limb feature data of the student is compared with the limb feature data in the sample images, and it is determined according to the comparison result whether the student's body movement matches the body movement in the sample image.
  • According to the facial-expression matching result, the line-of-sight matching result, and the body-movement matching result, a first current state of the student and a first score corresponding to the first current state are determined, and the student's classroom concentration is determined according to the first current state and the first score.
  • For example, if the facial feature data determines that the student's facial expression matches the expression in a sample image, the facial and visual feature data determine that the student's line-of-sight direction matches that in the sample image, and the limb feature data determines that the student's body movements also match those in the sample image, it can be determined that the student has a high classroom concentration (for example, on a 10-point scale where 10 indicates full concentration, the student may score 9, etc.).
  • Also for example, for classroom interaction degree: the facial feature data of a classroom student is compared with the facial feature data in the class-state sample images, and it is determined according to the comparison result whether the student's mouth movement matches the mouth movement in the sample image; the facial feature data and visual feature data of the student are compared with those in the sample images, and it is determined whether the student's line-of-sight direction matches that in the sample image; the limb feature data of the student is compared with that in the sample images, and it is determined whether the student's body movement matches that in the sample image; according to the mouth-movement matching result, the line-of-sight matching result, and the body-movement matching result, a second current state of the student and a second score corresponding to the second current state are determined, and the student's classroom interaction degree is determined according to the second current state and the second score.
  • For example, if the facial feature data determines that the student's mouth movement matches the mouth movement in a sample image, the facial and visual feature data determine that the student's line-of-sight direction matches that in the sample image, and the limb feature data determines that the student's body movements also match those in the sample image, it can be determined that the student has a high classroom interaction degree (for example, on a 10-point scale where 10 indicates full interaction, the student may score 9, etc.).
  • Classroom interaction includes, but is not limited to, answering the teacher's questions and performing physical actions such as raising a hand at the teacher's instruction.
  • Also for example, for course preference degree: the facial feature data of a classroom student is compared with the facial feature data in the class-state sample images, and it is determined according to the comparison result whether the student's facial expression matches the facial expression in the sample image; according to the facial-expression matching result, a third current state of the student and a third score corresponding to the third current state are determined, and the student's course preference degree is determined according to the third current state and the third score.
  • For example, if the facial feature data determines that the student's facial expression matches the expression in a sample image, it may be determined that the student has a high course preference degree (for example, on a 10-point scale where 10 indicates full preference, the student may score 9, etc.).
  • Generally, when a classroom student likes a course or a teacher, facial expressions such as smiling appear; therefore, the student's course preference degree can be reflected by facial expression.
  • Optionally, a class information graph may be generated for each classroom student according to the score corresponding to at least one class state among classroom concentration, classroom interaction degree, and course preference degree, and the acquisition time of the class images.
  • In specific implementation, the facial expression displayed by a classroom student in the current class image, such as laughing, anger, or doubt, can characterize the student's concentration and/or preference; whether the student's line of sight points in the teacher's direction can indicate the student's concentration; and the student's current body movements, such as sitting upright, turning the head, lowering the head, eating or drinking, playing with objects, lying down, whispering, or resting the chin on a hand, can characterize the student's concentration and/or interaction.
  • Corresponding information can be determined from the feature data. For example, the facial expression can be determined at least from the mouth feature data and/or the eye feature data in the facial feature data; the line-of-sight direction can be determined from the facial feature data and the visual feature data. For instance, if the facial feature data indicates that the student's head is currently turned and the visual feature data (including but not limited to eye feature data, and optionally eyebrow feature data) indicates that the student's gaze is lowered, it can be determined that the student's line of sight is away from the teacher's direction; and the body movement can be determined from the limb feature data.
  • For example, FIG. 3 shows a class information graph of a classroom student, for example, Zhang San.
  • In FIG. 3, the abscissa is the time axis, with one class image containing Zhang San's information acquired every 1 s; the ordinate is the score axis of Zhang San's class state, on a 10-point scale.
  • In this example, the class state includes all three of classroom concentration, classroom interaction degree, and course preference degree, but those skilled in the art should understand that in practical applications, any one or any combination of the three may be used.
  • In FIG. 3, the three class states are presented in one graph, but this is not limiting; in practical applications, different class states may be presented in multiple graphs, one state per graph (a plotting sketch follows below).
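  • The following is a minimal matplotlib sketch of such a graph, with one sample per second and a 10-point score axis as in FIG. 3; the score values are fabricated:

```python
import matplotlib.pyplot as plt

t = list(range(10))                          # seconds: one class image per 1 s
concentration = [9, 9, 8, 7, 8, 9, 6, 7, 8, 9]
interaction   = [2, 3, 3, 8, 9, 4, 3, 3, 7, 8]
preference    = [8, 8, 8, 8, 9, 9, 8, 8, 9, 9]

plt.plot(t, concentration, label="classroom concentration")
plt.plot(t, interaction, label="classroom interaction degree")
plt.plot(t, preference, label="course preference degree")
plt.xlabel("time (s)")
plt.ylabel("score (10-point scale)")
plt.ylim(0, 10)
plt.title("Class information graph for one student (e.g. Zhang San)")
plt.legend()
plt.savefig("class_information.png")
```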
  • Optionally, the facial feature data of each classroom student is compared with pre-stored feature data of student face images to determine the identity information of each student, and the class state of each student is then determined according to the student's feature data and identity information.
  • If the identity information of a classroom student cannot be obtained for the time being, in one feasible approach, when the three-dimensional classroom model has a heat map and it is determined from the heat map that there is a student whose identity information has not been obtained, the feature data of that student is stored in correspondence with the heat map; after the student's identity information is determined, the class state of the student is determined according to the determined identity information and the feature data corresponding to the heat map.
  • That is, the feature data is first stored against the heat map; after the identity is determined, the feature data is stored against the classroom student identified by the identity information, and the class state of the identified student is then determined from the feature data.
  • For example, a student face database is provided in the system, storing students' face images and the identity information corresponding to each face image; the students in each class image can then be identified and confirmed through image comparison, so as to further determine the class information corresponding to each classroom student.
  • The image comparison can be implemented by any suitable means, including but not limited to facial feature comparison (a matching sketch follows below).
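  • A minimal sketch of such a database lookup, assuming face embeddings have already been computed by some face-recognition model; the embeddings, names, and distance threshold are placeholders:

```python
import numpy as np

# Pre-stored student face database: identity -> face embedding.
FACE_DB = {
    "student_001": np.array([0.1, 0.9, 0.3]),
    "student_002": np.array([0.8, 0.2, 0.5]),
}

def identify(embedding: np.ndarray, max_distance: float = 0.6):
    """Return the closest identity, or None if no stored face is near enough."""
    name, dist = min(
        ((n, float(np.linalg.norm(embedding - e))) for n, e in FACE_DB.items()),
        key=lambda item: item[1],
    )
    # None means: defer, and bind the feature data via the heat map for now.
    return name if dist <= max_distance else None

print(identify(np.array([0.12, 0.88, 0.33])))  # -> 'student_001'
```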
  • After the class state of a classroom student is determined, the student's current class situation can be effectively evaluated and handled; for example, if the student's class state satisfies a set warning condition, early-warning processing is performed.
  • The warning condition may be set appropriately by a person skilled in the art according to the actual situation, which is not limited by the embodiment of the present invention. For example, on a 10-point scale, if a student's current classroom concentration is below 6 points, the information can be fed back to the instructor for timely handling (see the sketch below).
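  • A minimal sketch of this warning check, with the 6-point threshold from the example above and a hypothetical notify callback:

```python
WARN_THRESHOLD = 6  # 10-point scale, per the example above

def check_warning(student_id: str, concentration: float, notify) -> bool:
    """Trigger early-warning processing when concentration drops below threshold."""
    if concentration < WARN_THRESHOLD:
        notify(f"{student_id}: concentration {concentration} < {WARN_THRESHOLD}")
        return True
    return False

check_warning("student_001", 4.5, notify=print)
```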
  • Step S212: Determine the degree of association between the class information of the classroom students and the learning information according to the pre-established association relationship between classroom students and learning information, and the association relationship between classroom students and class information.
  • The class information includes at least one of: facial expression information, facial movement information, line-of-sight direction information, and body movement information obtained by comparing each feature datum of a classroom student with the feature data of the pre-stored class-state sample images (that is, the expression, facial movement, line-of-sight direction, and body movement information acquired when determining the student's class state).
  • The learning information includes at least one of the following: grade information of the classroom students, course information of the classroom students, teacher information corresponding to the course information, teaching process information corresponding to the course information, parental satisfaction information of the classroom students, and teaching platform information.
  • In specific implementation, the system stores the pre-established association relationship between each classroom student and the learning information, and the association relationship between each classroom student and the class information; through these two relationships, the association between each student's class information and learning information can be established.
  • A student's class information is closely related not only to information such as the student's course information, the teacher information corresponding to the course information, the teaching process information corresponding to the course information, and the teaching platform information, but the class information also has a key impact on the student's grades; in turn, the student's grade information and the parental satisfaction information can be a reflection of, and feedback on, the class information.
  • Associating these multiple kinds of information can further guide students' learning and their selection of courses and teachers, providing a reference for improving subsequent learning.
  • Step S214: Adjust the warning threshold of each item of class information of the classroom students according to the association degree information, and/or generate an analysis report for guiding the teacher's lecturing.
  • For example, a student often rests the chin on a hand during class, which may be evaluated as low classroom concentration; however, if the student's grades are found to be good, it can be preliminarily determined that the chin-resting movement does not indicate low concentration. Accordingly, the warning threshold for that student when this movement occurs can be adjusted, so that even if the student keeps the movement for a period of time, there is no need to prompt or notify the instructor (a sketch of such an adjustment follows below).
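  • A minimal sketch of such a per-student threshold adjustment; the association rule and the numbers are invented for illustration:

```python
DEFAULT_THRESHOLD = 6.0  # 10-point scale

def adjusted_threshold(student: dict) -> float:
    """Relax a student's concentration warning threshold when behaviour that
    looks inattentive (e.g. habitual chin-resting) co-occurs with good grades."""
    threshold = DEFAULT_THRESHOLD
    if student.get("habitual_action") == "chin_resting" and student.get("grade", 0) >= 85:
        threshold -= 2.0  # the action evidently does not indicate low concentration
    return threshold

print(adjusted_threshold({"habitual_action": "chin_resting", "grade": 92}))  # -> 4.0
```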
  • In addition, a corresponding analysis report can be generated and distributed to teacher devices and/or parent devices, to guide the teacher and/or parents.
  • step S212 and step S214 are optional steps.
  • Through the above process, the acquired class images of the classroom students are identified, their feature data is determined, and the class state of the students during class is thereby determined.
  • A student's facial feature data, visual feature data, and limb feature data can express the student's expression, line of sight, body movements, and other in-class information, which effectively reflects the student's current listening state. Therefore, through the facial, visual, and limb feature data, the student's class performance can be monitored and analyzed from multiple dimensions such as expression, line of sight, and body movement, so that the in-class listening situation of students who learn through computers and networks is monitored effectively and accurately, providing an effective reference for subsequent learning and teaching so as to further improve the learning or teaching process.
  • The learning monitoring method of this embodiment may be implemented by any suitable device having data processing functions, including but not limited to various terminals and servers.
  • Referring to FIG. 4, a structural block diagram of a learning monitoring apparatus according to the third embodiment of the present invention is shown.
  • The learning monitoring apparatus of this embodiment includes: a first acquiring module 302 configured to acquire a class image of classroom students; a second acquiring module 304 configured to identify the class image and acquire feature data of the classroom students, where the feature data includes at least one of: facial feature data, visual feature data, and limb feature data of the classroom students; and a determining module 306 configured to determine the class state of the classroom students according to their feature data.
  • Through the learning monitoring apparatus of this embodiment, the acquired class images of the classroom students are identified, their feature data is determined, and the class state of the students during class is thereby determined.
  • A student's facial feature data, visual feature data, and limb feature data can express the student's expression, line of sight, body movements, and other in-class information, which effectively reflects the student's current listening state. Therefore, through the facial, visual, and limb feature data, the student's class performance can be monitored and analyzed from multiple dimensions such as expression, line of sight, and body movement, so that the in-class listening situation of students who learn through computers and networks is monitored effectively and accurately, providing an effective reference for subsequent learning and teaching so as to further improve the learning or teaching process.
  • Referring to FIG. 5, a structural block diagram of a learning monitoring apparatus according to the fourth embodiment of the present invention is shown.
  • The learning monitoring apparatus of this embodiment includes: a first acquiring module 402 configured to acquire a class image of classroom students; a second acquiring module 404 configured to identify the class image and acquire feature data of the classroom students, where the feature data includes at least one of: facial feature data, visual feature data, and limb feature data of the classroom students; and a determining module 406 configured to determine the class state of the classroom students according to their feature data.
  • the class status includes at least one of the following: classroom concentration, class interaction, and course preference.
  • Optionally, the determining module 406 is configured to compare each feature datum of a classroom student with the corresponding feature data of the pre-stored class-state sample images, determine the current state of each student and a score corresponding to the current state according to the comparison result, and determine, according to the current state and the score, at least one of the classroom concentration, classroom interaction degree, and course preference degree of each student; the class-state sample images are annotated with students' feature data, state data, and the scores corresponding to the state data.
  • Optionally, the determining module 406 includes: a first determining submodule 4062 configured to, for each classroom student, compare the facial feature data of the student with the facial feature data in the class-state sample images and determine, according to the comparison result, whether the student's facial expression matches the facial expression in the sample image; compare the facial feature data and visual feature data of the student with those in the sample images and determine, according to the comparison result, whether the student's line-of-sight direction matches that in the sample image; compare the limb feature data of the student with that in the sample images and determine, according to the comparison result, whether the student's body movement matches that in the sample image; determine a first current state of the student and a first score corresponding to the first current state according to the facial-expression, line-of-sight, and body-movement matching results; and determine the student's classroom concentration according to the first current state and the first score.
  • Optionally, the learning monitoring apparatus of this embodiment further includes: a graph generating module 408 configured to generate a class information graph according to the score corresponding to at least one class state among the classroom concentration, classroom interaction degree, and course preference degree of the classroom students, and the acquisition time of the class images.
  • Optionally, the learning monitoring apparatus of this embodiment further includes: an association module 410 configured to determine the degree of association between the class information of the classroom students and the learning information according to the pre-established association relationship between classroom students and learning information and the association relationship between classroom students and class information, where the class information includes at least one of: facial expression information, facial movement information, line-of-sight direction information, and body movement information obtained by comparing each feature datum of a classroom student with the feature data of the pre-stored class-state sample images; and the learning information includes at least one of the following: grade information of the classroom students, course information of the classroom students, teacher information corresponding to the course information, teaching process information corresponding to the course information, parental satisfaction information of the classroom students, and teaching platform information.
  • Optionally, the learning monitoring apparatus of this embodiment further includes: an adjusting module 412 configured to adjust the warning threshold of each item of class information of the classroom students according to the association degree information, and/or generate an analysis report for guiding the teacher's lecturing.
  • Optionally, the determining module 406 includes: a comparison submodule 4068 configured to compare the facial feature data of each classroom student with the pre-stored feature data of student face images to determine the identity information of each student; and a determination submodule 4070 configured to determine the class state of each student according to the student's feature data and identity information.
  • Optionally, the learning monitoring apparatus of this embodiment further includes: a three-dimensional module 414 configured to, before the first acquiring module 402 acquires the class image of the classroom students, perform three-dimensional modeling on the classroom in which the students are located to generate a three-dimensional classroom model, and determine the positions of the classroom students according to the heat map in the three-dimensional classroom model; the first acquiring module 402 is configured to acquire the class image of the classroom students in the three-dimensional classroom model according to the determined positions.
  • Optionally, the learning monitoring apparatus of this embodiment further includes: an identity information correspondence module 416 configured to, if it is determined from the heat map that there is a classroom student whose identity information has not been obtained, store the feature data of that student in correspondence with the heat map, and after the identity information is determined, determine the class state of the student according to the determined identity information and the feature data corresponding to the heat map.
  • the learning monitoring apparatus of the embodiment further includes: an early warning module 418 configured to perform an early warning process if the class state of the class student satisfies the set warning condition.
  • the first obtaining module 402 is configured to acquire an overall image of the classroom in which the classroom student is located, obtain a class image of each class student from the overall image, or acquire an image of the class of each class student separately.
  • the learning monitoring device of the present embodiment is used to implement the corresponding learning monitoring method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, and details are not described herein again.
  • Referring to FIG. 6, a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention is shown.
  • the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
  • the electronic device can include a processor 502, a communications interface 504, a memory 506, and a communications bus 508.
  • Processor 502, communication interface 504, and memory 506 complete communication with one another via communication bus 508.
  • the communication interface 504 is configured to communicate with network elements of other devices, such as other terminals or servers.
  • The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the foregoing learning monitoring method embodiments.
  • program 510 can include program code, the program code including computer operating instructions.
  • The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the one or more processors included in the electronic device may be the same type of processor, such as one or more CPUs; or may be different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 506 is configured to store the program 510.
  • Memory 506 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
  • the program 510 may be specifically configured to enable the processor 502 to: acquire a class image of the class student; identify the class image, and obtain feature data of the class student, wherein the feature data includes at least one of the following: The facial feature data of the classroom student, the visual characteristic data of the classroom student, and the physical characteristic data of the classroom student; and determining the class state of the classroom student according to the characteristic data of the classroom student.
  • the class status includes at least one of the following: a degree of class concentration, a degree of class interaction, and a degree of class preference.
  • Optionally, the program 510 is further configured to cause the processor 502, when determining the class state of the classroom students according to their feature data, to compare each feature datum of a classroom student with the corresponding feature data of the pre-stored class-state sample images, determine the current state of each student and the score corresponding to the current state according to the comparison result, and determine, according to the current state and the score, at least one class state among the student's classroom concentration, classroom interaction degree, and course preference degree; the class-state sample images are annotated with students' feature data, state data, and the scores corresponding to the state data.
  • Optionally, the program 510 is further configured to cause the processor 502, when determining the current state of each student and the corresponding score, to: compare the facial feature data of the classroom student with the facial feature data in the class-state sample images and determine, according to the comparison result, whether the student's facial expression matches that in the sample image; compare the facial feature data and visual feature data of the student with those in the sample images and determine, according to the comparison result, whether the student's line-of-sight direction matches that in the sample image; compare the limb feature data of the student with that in the sample images and determine, according to the comparison result, whether the student's body movement matches that in the sample image; determine a first current state of the student and a first score corresponding to it according to the facial-expression, line-of-sight, and body-movement matching results; and determine the student's classroom concentration according to the first current state and the first score.
  • Optionally, the program 510 is further configured to cause the processor 502 to generate a class information graph according to the score corresponding to at least one class state among the students' classroom concentration, classroom interaction degree, and course preference degree, and the acquisition time of the class images.
  • Optionally, the program 510 is further configured to cause the processor 502 to determine the degree of association between the class information of the classroom students and the learning information according to the pre-established association relationship between classroom students and learning information and the association relationship between classroom students and class information; the class information includes at least one of: facial expression information, facial movement information, line-of-sight direction information, and body movement information obtained by comparing each feature datum of a classroom student with the feature data of the pre-stored class-state sample images; the learning information includes at least one of: grade information of the classroom students, course information of the classroom students, teacher information corresponding to the course information, teaching process information corresponding to the course information, parental satisfaction information of the classroom students, and teaching platform information.
  • Optionally, the program 510 is further configured to cause the processor 502 to adjust the warning threshold of each item of class information of the classroom students according to the association degree information, and/or generate an analysis report for guiding the teacher's lecturing.
  • Optionally, the program 510 is further configured to cause the processor 502, when determining the class state of the classroom students according to their feature data, to compare the facial feature data of each student with the pre-stored feature data of student face images to determine the identity information of each student, and to determine the class state of each student according to the student's feature data and identity information.
  • Optionally, the program 510 is further configured to cause the processor 502, before acquiring the class image of the classroom students, to perform three-dimensional modeling on the classroom in which the students are located to generate a three-dimensional classroom model, and to determine the positions of the classroom students according to the heat map in the model.
  • Optionally, the program 510 is further configured to cause the processor 502, when acquiring the class image of the classroom students, to acquire the class image of the students in the three-dimensional classroom model according to the determined positions.
  • Optionally, the program 510 is further configured to cause the processor 502, if it is determined from the heat map that there is a classroom student whose identity information has not been obtained, to store the feature data of that student in correspondence with the heat map, and after the identity information is determined, to determine the class state of the student according to the determined identity information and the feature data corresponding to the heat map.
  • the program 510 is further configured to cause the processor 502 to perform an early warning process if the class status of the class student satisfies the set warning condition.
  • Optionally, the program 510 is further configured to cause the processor 502, when acquiring the class image of the classroom students, to acquire an overall image of the classroom in which the students are located and obtain the class image of each student from the overall image, or to acquire the class image of each student separately.
  • Through the electronic device of this embodiment, the acquired class images of the classroom students are identified, their feature data is determined, and the class state of the students during class is thereby determined.
  • A student's facial feature data, visual feature data, and limb feature data can express the student's expression, line of sight, body movements, and other in-class information, which effectively reflects the student's current listening state. Therefore, through the facial, visual, and limb feature data, the student's class performance can be monitored and analyzed from multiple dimensions such as expression, line of sight, and body movement, so that the in-class listening situation of students who learn through computers and networks is monitored effectively and accurately, providing an effective reference for subsequent learning and teaching so as to further improve the learning or teaching process.
  • A machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The computer software product includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods of the various embodiments or portions of the embodiments described herein.
  • Those skilled in the art will appreciate that the embodiments of the invention may be provided as a method, an apparatus (device), or a computer program product.
  • The embodiments of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • Moreover, the embodiments of the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • The embodiments of the invention are described with reference to flowcharts and/or block diagrams of methods, apparatus (devices), and computer program products according to the embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

Embodiments of the present invention provide a learning monitoring method and apparatus, and an electronic device. The learning monitoring method includes: acquiring in-class images of students in a classroom; recognizing the in-class images to acquire feature data of the classroom students, where the feature data includes at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students; and determining the in-class state of the classroom students according to their feature data. Through the embodiments of the present invention, the classroom attentiveness of students who study via computers and networks can be monitored effectively and accurately, providing an effective reference for subsequent learning and teaching so that the learning or teaching process can be further improved.

Description

Learning monitoring method, apparatus and electronic device
Technical Field
Embodiments of the present invention relate to the field of computer technology, and in particular to a learning monitoring method and apparatus, and an electronic device.
Background
With the development of computer and Internet technology, learning and teaching assisted by computers and networks has become a trend. For example, in this way a teacher can lecture, live or recorded, to students in multiple classrooms at the same time, and students can choose the teachers and courses they like, and so on.
Although this approach greatly facilitates students' learning and teachers' teaching, there is currently no effective way to monitor how students in the classroom follow the lectures, so that their learning can be tracked and the learning or lecturing style adjusted accordingly to improve learning outcomes.
Summary of the Invention
In view of this, embodiments of the present invention provide a learning monitoring method and apparatus, and an electronic device, to solve the problem in the prior art that the classroom attentiveness of students who study via computers and networks cannot be effectively monitored.
According to one aspect of the embodiments of the present invention, a learning monitoring method is provided, including: acquiring in-class images of students in a classroom; recognizing the in-class images to acquire feature data of the classroom students, where the feature data includes at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students; and determining the in-class state of the classroom students according to their feature data.
According to another aspect of the embodiments of the present invention, a learning monitoring apparatus is provided, including: a first acquisition module configured to acquire in-class images of students in a classroom; a second acquisition module configured to recognize the in-class images and acquire feature data of the classroom students, where the feature data includes at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students; and a determination module configured to determine the in-class state of the classroom students according to their feature data.
According to yet another aspect of the embodiments of the present invention, an electronic device is provided, including a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another via the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the learning monitoring method described above.
According to still another aspect of the embodiments of the present invention, a computer storage medium is provided, storing: executable instructions for acquiring in-class images of students in a classroom; executable instructions for recognizing the in-class images and acquiring feature data of the classroom students, where the feature data includes at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students; and executable instructions for determining the in-class state of the classroom students according to their feature data.
According to the solutions provided by the embodiments of the present invention, the acquired in-class images of classroom students are recognized to determine the students' feature data and, in turn, the students' in-class state. A student's facial feature data, visual feature data, and limb feature data can express information such as the student's expression, gaze, and body movements in class, and this information effectively reflects the student's current attentiveness. Through this feature data, therefore, students' in-class behavior can be monitored and analyzed across multiple dimensions such as expression, gaze, and body movement, the classroom attentiveness of students who study via computers and networks can be monitored effectively and accurately, and an effective reference is provided for subsequent learning and teaching so that the learning or teaching process can be further improved.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them.
FIG. 1 is a flowchart of the steps of a learning monitoring method according to Embodiment 1 of the present invention;
FIG. 2 is a flowchart of the steps of a learning monitoring method according to Embodiment 2 of the present invention;
FIG. 3 is a schematic diagram of a class-information curve graph in the embodiment shown in FIG. 2;
FIG. 4 is a structural block diagram of a learning monitoring apparatus according to Embodiment 3 of the present invention;
FIG. 5 is a structural block diagram of a learning monitoring apparatus according to Embodiment 4 of the present invention;
FIG. 6 is a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention.
Detailed Description
Of course, implementing any one of the technical solutions of the embodiments of the present invention does not necessarily require achieving all of the above advantages at the same time.
To help those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein shall fall within the protection scope of the embodiments of the present invention.
Embodiment 1
Referring to FIG. 1, a flowchart of the steps of a learning monitoring method according to Embodiment 1 of the present invention is shown.
The learning monitoring method of this embodiment includes the following steps:
Step S102: Acquire in-class images of students in a classroom.
In this embodiment, an in-class image may be obtained by sampling with a camera, e.g., taking a photo at a fixed interval such as every second; it may also be obtained from classroom video, e.g., by automatically extracting frames from the video.
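By way of illustration only (this sketch is not part of the original disclosure), such interval sampling could be implemented in Python with OpenCV; the video file name and the one-second interval are assumptions:

```python
import cv2

def sample_frames(source, interval_s=1.0):
    """Yield one frame roughly every `interval_s` seconds.

    `source` may be a camera index (e.g. 0) or a video file path.
    """
    cap = cv2.VideoCapture(source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unknown
    step = max(1, int(round(fps * interval_s)))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield frame  # one sampled in-class image
        idx += 1
    cap.release()

# Example: write one sample per second from a recorded lesson to disk.
for i, frame in enumerate(sample_frames("lesson.mp4", 1.0)):
    cv2.imwrite(f"sample_{i:04d}.jpg", frame)
```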
Generally, multiple students attend class in one classroom at the same time; these students are the classroom students in the embodiments of the present invention. Those skilled in the art should understand, however, that the embodiments of the present invention are equally applicable to the case of a single classroom student.
Step S104: Recognize the in-class images and acquire the feature data of the classroom students.
The feature data of the classroom students includes at least one of the following: the students' facial feature data, visual feature data, and limb feature data. The facial feature data characterizes a student's facial traits; through it the student can be identified and the student's facial movements and expression determined. The visual feature data characterizes a student's gaze; through it the student's current point of visual attention can be determined. The limb feature data characterizes a student's body movements; through it the student's current action can be determined.
The facial, visual, and limb feature data can express information such as a student's expression, gaze, and posture, and this information effectively reflects the student's current in-class state.
Step S106: Determine the in-class state of the classroom students according to their feature data.
For example, whether the student is focused on the lecture, actively participating in classroom interaction, fond of the course or the teacher, and so on.
According to this embodiment, the acquired in-class images of the classroom students are recognized to determine the students' feature data and, in turn, the students' in-class state. A student's facial feature data, visual feature data, and limb feature data can express information such as the student's expression, gaze, and body movements in class, and this information effectively reflects the student's current attentiveness. Through this feature data, therefore, students' in-class behavior can be monitored and analyzed across multiple dimensions such as expression, gaze, and body movement, the classroom attentiveness of students who study via computers and networks can be monitored effectively and accurately, and an effective reference is provided for subsequent learning and teaching so that the learning or teaching process can be further improved.
The learning monitoring method of this embodiment may be implemented by any suitable device or apparatus with data processing capability, including but not limited to various terminals and servers.
Embodiment 2
Referring to FIG. 2, a flowchart of the steps of a learning monitoring method according to Embodiment 2 of the present invention is shown.
The learning monitoring method of this embodiment includes the following steps:
Step S202: Perform three-dimensional modeling of the classroom where the students are located to generate a three-dimensional classroom model.
In this embodiment, the classroom is modeled with 3D technology using multi-angle cameras. Three-dimensional modeling is a technique for simulating and displaying three-dimensional objects with a computer or other video equipment, and modeling and rendering based on camera-captured images achieves high rendering speed and a high degree of realism. Because the images themselves contain rich scene information, photo-realistic scene models are easier to obtain from images. Compared with other approaches that obtain solid models with modeling software or 3D scanners, image-based modeling is low-cost, highly realistic, and highly automated.
Generating a three-dimensional classroom model through three-dimensional modeling makes monitoring students' learning more convenient and more realistic. Note, however, that this step is optional; in practical applications, three-dimensional modeling may be omitted.
Step S204: Determine the positions of the classroom students according to a heat map in the three-dimensional classroom model.
Since the human body radiates a certain amount of heat that can be detected by corresponding equipment such as infrared devices, the positions of the students in the classroom can be determined based on the three-dimensional classroom model combined with a human-body heat map. Once a classroom student's identity has been confirmed, the heat map can be used to track the student's actions and behavior, determine the student's actual physical position in real time, and dynamically bind the student's identity, so that the student need not be confirmed again through image comparison, which improves identification efficiency and reduces the identification burden. If a student's identity cannot be confirmed for the time being, the acquired data can first be associated with the student indicated by the heat map, and once that student's identity is confirmed, the data can then be associated with the identified student. Those skilled in the art should understand, however, that other suitable approaches, such as confirming student identity through image comparison, are equally applicable to the solutions of the embodiments of the present invention.
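As a purely illustrative sketch (the embodiments do not prescribe a particular algorithm), student positions could be estimated by thresholding the heat map and taking the centroids of the warm regions; the temperature threshold and the synthetic data are assumptions:

```python
import numpy as np
from scipy import ndimage

def student_positions(heat_map, temp_threshold=30.0):
    """Estimate student positions as centroids of warm regions.

    `heat_map` is a 2D array of temperature-like values mapped onto
    the classroom floor plan of the 3D model.
    """
    warm = heat_map > temp_threshold          # binary mask of body heat
    labels, n = ndimage.label(warm)           # connected warm blobs
    # Centroid of each blob = estimated (row, col) position of a student.
    return ndimage.center_of_mass(heat_map, labels, range(1, n + 1))

# Example with a synthetic two-student heat map.
hm = np.full((10, 10), 22.0)
hm[2:4, 2:4] = 34.0   # student A
hm[7:9, 6:8] = 35.0   # student B
print(student_positions(hm))  # approximately [(2.5, 2.5), (7.5, 6.5)]
```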
Step S206: Acquire the in-class images of the classroom students in the three-dimensional classroom model according to the determined positions of the students.
When acquiring the in-class images, an overall image of the classroom may be acquired and each student's in-class image obtained from it; alternatively, each student's in-class image may be acquired separately. Preferably, the in-class images are acquired in real time so that the students' in-class state can be learned promptly.
Acquiring the in-class images in real time may mean acquiring them from live video, or photographing the students with a camera at short intervals, where the interval may be set appropriately by those skilled in the art according to actual needs, as long as the students' in-class situation can be obtained promptly, e.g., every second or less.
When the students' positions have been determined, the in-class images can be acquired according to those positions, e.g., including students at all positions in a single image so that no one is missed, or photographing the student at a particular position, and so on.
When a heat map is available, the in-class images can be acquired according to the heat map and the students' positions once it is determined that class is currently in session. Through the heat map, the students' actions and behavior can be tracked: when the heat map indicates that students have remained at fairly fixed positions for a period of time, it can be determined that class is in session; when the heat map indicates that most students are away from their usual positions, it can be determined that class is over.
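One possible realization of this in-session test (an assumption for illustration, not the disclosed rule) is to measure how far each tracked position drifts over a time window:

```python
import numpy as np

def class_in_session(position_history, max_drift=0.5, min_fraction=0.8):
    """Decide whether class is in session from tracked heat-map positions.

    `position_history` has shape (T, N, 2): T time steps, N students,
    (x, y) positions. Class counts as "in session" when at least
    `min_fraction` of the students stayed within `max_drift` (in model
    units) of their mean position over the window.
    """
    history = np.asarray(position_history, dtype=float)
    mean_pos = history.mean(axis=0)                     # (N, 2)
    drift = np.linalg.norm(history - mean_pos, axis=2)  # (T, N)
    stationary = drift.max(axis=0) <= max_drift         # (N,)
    return stationary.mean() >= min_fraction
```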
Step S208: Recognize the in-class images and acquire the feature data of the classroom students.
The feature data of the classroom students includes at least one of the following: the students' facial feature data, visual feature data, and limb feature data.
In this embodiment, the way the in-class images are recognized may be set appropriately by those skilled in the art according to actual needs, and the embodiments of the present invention impose no limitation on this, as long as the students' facial feature data, visual feature data, and limb feature data can be obtained, e.g., a support vector machine (SVM) algorithm, a convolutional neural network model algorithm, and so on.
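Purely as an illustration of the kind of recognizer this step permits (the embodiments leave the choice open), the following sketch detects a face with OpenCV's bundled Haar cascade and feeds the flattened crop to a scikit-learn SVM; the training data and expression labels are assumptions:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# OpenCV ships this cascade file for frontal-face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_feature(image_bgr, size=(32, 32)):
    """Return a crude facial feature vector: the flattened face crop."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    crop = cv2.resize(gray[y:y + h, x:x + w], size)
    return crop.astype(np.float32).ravel() / 255.0

def train_expression_svm(X_train, y_train):
    """Fit an SVM on hypothetical labeled crops.

    X_train: (n_samples, 1024) feature vectors from `face_feature`;
    y_train: expression labels, e.g. "smiling" / "neutral".
    """
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X_train, y_train)
    return clf
```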
Step S210: Determine the in-class state of the classroom students according to their feature data.
The in-class state includes at least one of the following: degree of classroom concentration, degree of classroom interaction, and degree of course fondness. The degree of classroom concentration characterizes whether a student is focused on the lecture; the degree of classroom interaction characterizes whether the student actively participates in interactive teaching initiated by the teacher; the degree of course fondness characterizes whether the student likes the lesson and/or the teacher of the lesson.
In one feasible approach, based on the students' feature data, each item of a student's feature data can be compared with the corresponding feature data of pre-stored in-class state sample images, and the student's current state and a score corresponding to that state can be determined from the comparison results; at least one in-class state among the student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness is then determined from the current state and the score. The in-class state sample images are annotated with students' feature data, state data, and scores corresponding to the state data. The sample images may form an image set containing a variety of students' in-class state images as samples, each annotated with features, a state, and a corresponding score. If a student's feature data is close to that of a sample image, e.g., the Euclidean distance falls within a certain range, the student's current state can be considered consistent with the state of the sample image and should correspond to the same score. Then, from the determined current state and score, at least one of the student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness is determined. For example, by comparing the score with a set threshold, the student's fondness for the current lesson can be determined, where the threshold may be set appropriately by those skilled in the art according to actual needs, and the embodiments of the present invention impose no limitation on this.
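A minimal sketch of this sample-matching idea follows; the sample library, the distance bound, and the scores are illustrative assumptions rather than values from the embodiments:

```python
import numpy as np

# Hypothetical annotated sample library: (feature vector, state, score).
SAMPLES = [
    (np.array([0.9, 0.8, 0.7]), "focused",    9),
    (np.array([0.2, 0.1, 0.3]), "distracted", 3),
]

def match_state(features, max_distance=0.5):
    """Return (state, score) of the nearest sample within `max_distance`.

    Mirrors the comparison step: a student whose features lie close (in
    Euclidean distance) to an annotated sample inherits that sample's
    state and score; otherwise no match is declared.
    """
    best = min(SAMPLES, key=lambda s: np.linalg.norm(features - s[0]))
    if np.linalg.norm(features - best[0]) <= max_distance:
        return best[1], best[2]
    return None, None

print(match_state(np.array([0.85, 0.75, 0.65])))  # ('focused', 9)
```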
For example, for each classroom student: compare the student's facial feature data with the facial feature data in the in-class state sample images, and determine from the comparison result whether the student's facial expression matches the facial expression in a sample image; compare the student's facial feature data and visual feature data with the facial feature data and visual feature data in the sample images, and determine from the comparison result whether the student's gaze direction matches the gaze direction in a sample image; compare the student's limb feature data with the limb feature data in the sample images, and determine from the comparison result whether the student's body movement matches the body movement in a sample image; determine, from the facial-expression, gaze-direction, and body-movement matching results, the student's first current state and a first score corresponding to the first current state; and determine the student's degree of classroom concentration from the first current state and the first score. For instance, if the facial feature data shows that a student's facial expression matches that of a sample image, the facial and visual feature data show that the student's gaze direction matches that of the sample image, and the limb feature data shows that the student's body movement also matches, the student can be determined to have a high degree of classroom concentration (e.g., with 10 denoting full concentration, the student might reach 9).
As another example, for each classroom student: compare the student's facial feature data with the facial feature data in the in-class state sample images, and determine from the comparison result whether the student's mouth movement matches the mouth movement in a sample image; compare the student's facial feature data and visual feature data with the facial feature data and visual feature data in the sample images, and determine from the comparison result whether the student's gaze direction matches the gaze direction in a sample image; compare the student's limb feature data with the limb feature data in the sample images, and determine from the comparison result whether the student's body movement matches the body movement in a sample image; determine, from the mouth-movement, gaze-direction, and body-movement matching results, the student's second current state and a second score corresponding to the second current state; and determine the student's degree of classroom interaction from the second current state and the second score. For instance, if the mouth movement, gaze direction, and body movement all match those of a sample image, the student can be determined to have a high degree of classroom interaction (e.g., with 10 denoting full interaction, the student might reach 9). Classroom interaction includes but is not limited to answering the teacher's questions and performing body movements at the teacher's instruction, such as raising a hand.
As yet another example, for each classroom student: compare the student's facial feature data with the facial feature data in the in-class state sample images, and determine from the comparison result whether the student's facial expression matches the facial expression in a sample image; determine, from the facial-expression matching result, the student's third current state and a third score corresponding to the third current state; and determine the student's degree of course fondness from the third current state and the third score. For instance, if the facial feature data shows that the student's facial expression matches that of a sample image, the student can be determined to have a high degree of course fondness (e.g., with 10 denoting full fondness, the student might reach 9). If a student likes a course or teacher, facial expressions such as smiling will appear; facial expression can therefore reflect the student's degree of course fondness.
Further, optionally, a class-information curve graph can be generated from the score corresponding to at least one in-class state among each student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness, together with the acquisition times of the classroom images.
Here, the facial expression a student shows in the current in-class image can characterize the student's concentration on and/or fondness for the course, e.g., smiling, angry, puzzled, and so on; whether the student's gaze is directed toward the teacher can characterize the student's concentration on the course; and the student's current body movement, e.g., sitting upright, turning the head, lowering the head, eating or drinking, playing with objects, leaning back, whispering with neighbors, propping up the chin, or lying on the desk, can characterize the student's concentration on and/or interaction with the course.
For example, through machine learning, the corresponding information can be determined from the feature data: the facial expression can be determined from at least the mouth feature data and/or eye feature data within the facial feature data; the gaze direction can be determined from the facial feature data and the visual feature data, e.g., if the facial feature data indicates that the student is currently turning his or her head and the visual feature data (including but not limited to eye feature data and, optionally, eyebrow feature data) indicates that the student's gaze is lowered, it can be determined that the student's gaze is away from the teacher; and the body movement can be determined from the limb feature data.
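A toy, rule-based version of the gaze inference just described might look as follows; the pose encoding and the thresholds are assumptions:

```python
def gaze_away_from_teacher(head_yaw_deg, gaze_pitch_deg,
                           yaw_limit=30.0, pitch_limit=-15.0):
    """Rough gaze test from already-extracted features.

    `head_yaw_deg`: head rotation from the facial feature data
    (0 = facing the teacher/board). `gaze_pitch_deg`: eye elevation
    from the visual feature data (negative = looking down).
    """
    head_turned = abs(head_yaw_deg) > yaw_limit   # "turning the head"
    gaze_down = gaze_pitch_deg < pitch_limit      # "gaze is lowered"
    return head_turned and gaze_down

print(gaze_away_from_teacher(45.0, -25.0))  # True: turned away, looking down
```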
Based on the above description, one generated class-information curve graph is shown in FIG. 3. FIG. 3 is the class-information curve graph of a classroom student, say Zhang San. In FIG. 3, the horizontal axis is the time axis, with one in-class image containing Zhang San's information acquired every second; the vertical axis is the score axis for Zhang San's in-class state, with 10 as the full score. FIG. 3 includes three in-class states, namely degree of classroom concentration, degree of classroom interaction, and degree of course fondness, but those skilled in the art should understand that in practical applications any one of the three, or any combination, may be used. Suppose 10 in-class images of Zhang San are captured within 10 seconds. From Zhang San's facial expression, gaze direction, and body movement, his degree of classroom concentration in the 10 images (solid black line) is determined to be (9, 9, 8, 5, 4, 3, 8, 7, 9, 9); from his mouth movement, gaze direction, and body movement, his degree of classroom interaction (dashed black line) is determined to be (0, 0, 0, 0, 0, 0, 9, 9, 7, 9); and from his facial expression, his degree of course fondness (dash-dotted black line) is determined to be (8, 7, 6, 1, 1, 2, 6, 6, 6, 8). FIG. 3 provides a simple, clear view of Zhang San's in-class state during the lesson.
Note that FIG. 3 presents the three in-class states in a single graph, but this is not limiting; in practical applications, different in-class states may also be presented in multiple graphs, one state per graph.
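The curve graph itself is straightforward to produce; the sketch below plots the example scores above with matplotlib (the line styling mirrors the description of FIG. 3, and the output file name is an assumption):

```python
import matplotlib.pyplot as plt

t = list(range(1, 11))  # one sampled image per second
concentration = [9, 9, 8, 5, 4, 3, 8, 7, 9, 9]
interaction   = [0, 0, 0, 0, 0, 0, 9, 9, 7, 9]
fondness      = [8, 7, 6, 1, 1, 2, 6, 6, 6, 8]

plt.plot(t, concentration, "k-",  label="classroom concentration")
plt.plot(t, interaction,   "k--", label="classroom interaction")
plt.plot(t, fondness,      "k-.", label="course fondness")
plt.xlabel("time (s)")
plt.ylabel("score (out of 10)")
plt.ylim(0, 10)
plt.legend()
plt.savefig("class_information_curves.png")
```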
Because each classroom student corresponds to a different in-class state, for each student the facial feature data of each classroom student is compared with the feature data of pre-stored student face images to determine each student's identity information; each student's in-class state is then determined from the student's feature data and identity information. In this way, the above data, information, and states are tied to the identity of the corresponding classroom student.
In some cases, however, a student's identity information may temporarily be unavailable. In one feasible approach, if the three-dimensional classroom model has a heat map and it is determined from the heat map that there is a classroom student whose identity information has not been acquired, the feature data of that student is stored in correspondence with the heat map; after the student's identity information is determined, the student's in-class state is determined from the determined identity information and the feature data corresponding to the heat map. That is, the feature data is first stored in correspondence with the heat map, and once the identity is determined, the feature data is stored in correspondence with the student identified by the identity information, after which the identified student's in-class state is determined from this feature data.
The system is provided with a student face database storing students' face images and the identity information of the students corresponding to those images. Through image comparison, each classroom student in an in-class image can be identified and confirmed, so that each student's class information can be further determined. Image comparison may be implemented in any suitable way, including but not limited to facial feature comparison.
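Such a face-database lookup could be sketched as a nearest-neighbor search over pre-computed face embeddings; the database contents and the distance bound are assumptions:

```python
import numpy as np

# Hypothetical face database: student ID -> enrolled face embedding.
FACE_DB = {
    "student_001": np.array([0.11, 0.52, 0.33]),
    "student_002": np.array([0.71, 0.12, 0.48]),
}

def identify(face_embedding, max_distance=0.4):
    """Return the ID of the closest enrolled face, or None if too far."""
    best_id, best_d = None, float("inf")
    for sid, ref in FACE_DB.items():
        d = np.linalg.norm(face_embedding - ref)
        if d < best_d:
            best_id, best_d = sid, d
    return best_id if best_d <= max_distance else None

print(identify(np.array([0.12, 0.50, 0.35])))  # 'student_001'
```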
Further, the obtained in-class state can be used to effectively evaluate and handle students' current in-class situation. For example, if a classroom student's in-class state satisfies a set warning condition, warning processing is performed. The warning condition may be set appropriately by those skilled in the art according to the actual situation, and the embodiments of the present invention impose no limitation on this. For instance, on a ten-point scale, if a student's current degree of classroom concentration falls below 6, this information can be fed back to the lecturing teacher for timely handling.
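On the ten-point example above, the warning check itself can be as small as the following sketch (the notification path is an assumption):

```python
def check_warning(student_id, concentration, threshold=6):
    """Flag a student for the teacher when concentration drops too low."""
    if concentration < threshold:
        # In a real deployment this would notify the teacher's client.
        print(f"warning: {student_id} concentration={concentration}")
        return True
    return False

check_warning("student_001", 4)  # triggers the warning
```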
Step S212: Determine the degree of correlation between the classroom students' class information and their learning information according to a pre-established association between the classroom students and the learning information and a pre-established association between the classroom students and the class information.
Here, the class information includes at least one of the following obtained by comparing each item of the students' feature data with the feature data of the pre-stored in-class state sample images: facial-expression information, facial-movement information, gaze-direction information, and body-movement information (e.g., the facial-expression, facial-movement, gaze-direction, and body-movement information acquired in step S210 while determining the students' in-class state). The learning information includes at least one of the following: the students' grade information, the students' course information, teacher information corresponding to the course information, lecturing-process information corresponding to the course information, the students' parent-satisfaction information, and lecturing-platform information.
The system stores pre-established associations between each classroom student and learning information and between each classroom student and class information. Through each student's identity information, a correspondence can be established between the student's class information and learning information, linking the two. A student's class information is not only closely related to learning information such as the student's course information, the corresponding teacher information, the corresponding lecturing-process information, and the lecturing-platform information, but also has a key influence on the student's grades; the grade information and parent-satisfaction information can serve as a reflection of and feedback on the class information. Linking these kinds of information can further guide the student's learning and choice of courses and teachers, providing a reference for improving the student's subsequent learning.
Step S214: According to the correlation information, adjust the warning thresholds of the students' items of class information, and/or generate an analysis report for guiding the teacher's lecturing.
For example, a student may often prop up his chin in class, an action that might place his degree of classroom concentration in a poor state; but if the student's grades turn out to be good, it can be preliminarily determined that the chin-propping is not an indicator of low concentration. Accordingly, the warning threshold for this student's chin-propping can be adjusted so that no prompt is issued and the lecturing teacher is not asked to intervene even if the student holds the posture for a period of time.
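One way such an adjustment could be automated (the correlation test and the increment are assumptions, not part of the disclosure) is to relax a behavior's warning threshold when the behavior shows no negative correlation with grades:

```python
import numpy as np

def adjust_threshold(behavior_minutes, grades, threshold,
                     relax_by=5.0, corr_floor=-0.2):
    """Relax a per-student behavior warning threshold.

    `behavior_minutes`: minutes per lesson a flagged behavior (e.g.
    chin propping) was observed, across past lessons.
    `grades`: the student's grades over the same period.
    If the behavior is not meaningfully anti-correlated with grades,
    the warning threshold is loosened.
    """
    corr = np.corrcoef(behavior_minutes, grades)[0, 1]
    if corr >= corr_floor:   # no evidence the habit hurts learning
        return threshold + relax_by
    return threshold

# Frequent chin propping alongside good grades -> looser threshold.
print(adjust_threshold([10, 12, 8, 15], [92, 95, 90, 94], 5.0))
```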
In addition, corresponding analysis reports can be generated from this information; such a report can be distributed to teacher-side devices and/or parent-side devices to guide teachers and/or parents.
Note that steps S212 and S214 are both optional.
Through this embodiment, the acquired in-class images of the classroom students are recognized to determine the students' feature data and, in turn, the students' in-class state. A student's facial feature data, visual feature data, and limb feature data can express information such as the student's expression, gaze, and body movements in class, and this information effectively reflects the student's current attentiveness. Through this feature data, therefore, students' in-class behavior can be monitored and analyzed across multiple dimensions such as expression, gaze, and body movement, the classroom attentiveness of students who study via computers and networks can be monitored effectively and accurately, and an effective reference is provided for subsequent learning and teaching so that the learning or teaching process can be further improved.
The learning monitoring method of this embodiment may be implemented by any suitable device or apparatus with data processing capability, including but not limited to various terminals and servers.
Embodiment 3
Referring to FIG. 4, a structural block diagram of a learning monitoring apparatus according to Embodiment 3 of the present invention is shown.
The learning monitoring apparatus of this embodiment includes: a first acquisition module 302 configured to acquire in-class images of students in a classroom; a second acquisition module 304 configured to recognize the in-class images and acquire the students' feature data, where the feature data includes at least one of the following: the students' facial feature data, visual feature data, and limb feature data; and a determination module 306 configured to determine the students' in-class state according to their feature data.
Through this embodiment, the acquired in-class images of the classroom students are recognized to determine the students' feature data and, in turn, the students' in-class state. A student's facial feature data, visual feature data, and limb feature data can express information such as the student's expression, gaze, and body movements in class, and this information effectively reflects the student's current attentiveness. Through this feature data, therefore, students' in-class behavior can be monitored and analyzed across multiple dimensions such as expression, gaze, and body movement, the classroom attentiveness of students who study via computers and networks can be monitored effectively and accurately, and an effective reference is provided for subsequent learning and teaching so that the learning or teaching process can be further improved.
Embodiment 4
Referring to FIG. 5, a structural block diagram of a learning monitoring apparatus according to Embodiment 4 of the present invention is shown.
The learning monitoring apparatus of this embodiment includes: a first acquisition module 402 configured to acquire in-class images of students in a classroom; a second acquisition module 404 configured to recognize the in-class images and acquire the students' feature data, where the feature data includes at least one of the following: the students' facial feature data, visual feature data, and limb feature data; and a determination module 406 configured to determine the students' in-class state according to their feature data.
Optionally, the in-class state includes at least one of the following: degree of classroom concentration, degree of classroom interaction, and degree of course fondness.
Optionally, the determination module 406 is configured to compare each item of the students' feature data with each item of feature data of pre-stored in-class state sample images, determine each student's current state and a score corresponding to the current state from the comparison results, and determine at least one in-class state among each student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness from the current state and the score; the in-class state sample images are annotated with students' feature data, state data, and scores corresponding to the state data.
Optionally, the determination module 406 includes: a first determination submodule 4062 configured to, for each classroom student, compare the student's facial feature data with the facial feature data in the in-class state sample images and determine from the comparison result whether the student's facial expression matches the facial expression in the sample images; compare the student's facial feature data and visual feature data with the facial feature data and visual feature data in the sample images and determine from the comparison result whether the student's gaze direction matches the gaze direction in the sample images; compare the student's limb feature data with the limb feature data in the sample images and determine from the comparison result whether the student's body movement matches the body movement in the sample images; determine, from the facial-expression, gaze-direction, and body-movement matching results, the student's first current state and a first score corresponding to the first current state; and determine the student's degree of classroom concentration from the first current state and the first score; and/or a second determination submodule 4064 configured to, for each classroom student, compare the student's facial feature data with the facial feature data in the in-class state sample images and determine from the comparison result whether the student's mouth movement matches the mouth movement in the sample images; compare the student's facial feature data and visual feature data with the facial feature data and visual feature data in the sample images and determine from the comparison result whether the student's gaze direction matches the gaze direction in the sample images; compare the student's limb feature data with the limb feature data in the sample images and determine from the comparison result whether the student's body movement matches the body movement in the sample images; determine, from the mouth-movement, gaze-direction, and body-movement matching results, the student's second current state and a second score corresponding to the second current state; and determine the student's degree of classroom interaction from the second current state and the second score; and/or a third determination submodule 4066 configured to, for each classroom student, compare the student's facial feature data with the facial feature data in the in-class state sample images and determine from the comparison result whether the student's facial expression matches the facial expression in the sample images; determine, from the facial-expression matching result, the student's third current state and a third score corresponding to the third current state; and determine the student's degree of course fondness from the third current state and the third score.
Optionally, the learning monitoring apparatus of this embodiment further includes: a curve-graph generation module 408 configured to generate a class-information curve graph from the score corresponding to at least one in-class state among each student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness, together with the acquisition times of the classroom images.
Optionally, the learning monitoring apparatus of this embodiment further includes: an association module 410 configured to determine the degree of correlation between the students' in-class state and learning information according to a pre-established association between the classroom students and the learning information and a pre-established association between the classroom students and class information, where the class information includes at least one of the following obtained by comparing each item of the students' feature data with the feature data of pre-stored in-class state sample images: facial-expression information, facial-movement information, gaze-direction information, and body-movement information; and the learning information includes at least one of the following: the students' grade information, the students' course information, teacher information corresponding to the course information, lecturing-process information corresponding to the course information, the students' parent-satisfaction information, and lecturing-platform information.
Optionally, the learning monitoring apparatus of this embodiment further includes: an adjustment module 412 configured to adjust the warning thresholds of the students' items of class information according to the correlation information, and/or generate an analysis report for guiding the teacher's lecturing.
Optionally, the determination module 406 includes: a comparison submodule 4068 configured to compare each student's facial feature data with the feature data of pre-stored student face images to determine each student's identity information; and a state determination submodule 4070 configured to determine each student's in-class state from the students' feature data and the identity information.
Optionally, the learning monitoring apparatus of this embodiment further includes: a three-dimensional module 414 configured to, before the first acquisition module 402 acquires the in-class images, perform three-dimensional modeling of the classroom where the students are located to generate a three-dimensional classroom model, and determine the students' positions from a heat map in the three-dimensional classroom model; the first acquisition module 402 is configured to acquire the in-class images in the three-dimensional classroom model according to the determined positions of the students.
Optionally, the learning monitoring apparatus of this embodiment further includes: an identity-correspondence module 416 configured to, if it is determined from the heat map that there is a classroom student whose identity information has not been acquired, store the feature data of that student in correspondence with the heat map, and after the identity information is determined, determine the student's in-class state from the determined identity information and the feature data corresponding to the heat map.
Optionally, the learning monitoring apparatus of this embodiment further includes: a warning module 418 configured to perform warning processing if a classroom student's in-class state satisfies a set warning condition.
Optionally, the first acquisition module 402 is configured to acquire an overall image of the classroom where the students are located and obtain each student's in-class image from the overall image, or to acquire each student's in-class image separately.
The learning monitoring apparatus of this embodiment is used to implement the corresponding learning monitoring methods in the foregoing method embodiments and has the beneficial effects of the corresponding method embodiments, which are not repeated here.
Embodiment 5
Referring to FIG. 6, a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention is shown; the specific embodiments of the present invention do not limit the specific implementation of the electronic device.
As shown in FIG. 6, the electronic device may include: a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
Here:
The processor 502, the communications interface 504, and the memory 506 communicate with one another via the communication bus 508.
The communications interface 504 is used to communicate with network elements of other devices, such as other terminals or servers.
The processor 502 is used to execute a program 510 and may specifically perform the relevant steps in the learning monitoring method embodiments described above.
Specifically, the program 510 may include program code, and the program code includes computer operation instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used to store the program 510. The memory 506 may include high-speed RAM and may also include non-volatile memory, such as at least one magnetic disk memory.
The program 510 may specifically be used to cause the processor 502 to perform the following operations: acquiring in-class images of students in a classroom; recognizing the in-class images to acquire the students' feature data, where the feature data includes at least one of the following: the students' facial feature data, visual feature data, and limb feature data; and determining the students' in-class state according to their feature data.
In an optional implementation, the in-class state includes at least one of the following: degree of classroom concentration, degree of classroom interaction, and degree of course fondness.
In an optional implementation, the program 510 is further used to cause the processor 502, when determining the students' in-class state from their feature data, to compare each item of the students' feature data with each item of feature data of pre-stored in-class state sample images and determine each student's current state and a score corresponding to the current state from the comparison results, and to determine at least one in-class state among each student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness from the current state and the score, where the in-class state sample images are annotated with students' feature data, state data, and scores corresponding to the state data.
In an optional implementation, the program 510 is further used to cause the processor 502, when comparing each item of the students' feature data with each item of feature data of the pre-stored in-class state sample images, determining each student's current state and corresponding score from the comparison results, and determining from the current state and score at least one in-class state among each student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness, to do the following for each classroom student: compare the student's facial feature data with the facial feature data in the in-class state sample images and determine from the comparison result whether the student's facial expression matches the facial expression in the sample images; compare the student's facial feature data and visual feature data with the facial feature data and visual feature data in the sample images and determine from the comparison result whether the student's gaze direction matches the gaze direction in the sample images; compare the student's limb feature data with the limb feature data in the sample images and determine from the comparison result whether the student's body movement matches the body movement in the sample images; determine, from the facial-expression, gaze-direction, and body-movement matching results, the student's first current state and a first score corresponding to the first current state; and determine the student's degree of classroom concentration from the first current state and the first score; and/or compare the student's facial feature data with the facial feature data in the sample images and determine from the comparison result whether the student's mouth movement matches the mouth movement in the sample images; compare the student's facial feature data and visual feature data with the facial feature data and visual feature data in the sample images and determine from the comparison result whether the student's gaze direction matches the gaze direction in the sample images; compare the student's limb feature data with the limb feature data in the sample images and determine from the comparison result whether the student's body movement matches the body movement in the sample images; determine, from the mouth-movement, gaze-direction, and body-movement matching results, the student's second current state and a second score corresponding to the second current state; and determine the student's degree of classroom interaction from the second current state and the second score; and/or compare the student's facial feature data with the facial feature data in the sample images and determine from the comparison result whether the student's facial expression matches the facial expression in the sample images; determine, from the facial-expression matching result, the student's third current state and a third score corresponding to the third current state; and determine the student's degree of course fondness from the third current state and the third score.
In an optional implementation, the program 510 is further used to cause the processor 502 to generate a class-information curve graph from the score corresponding to at least one in-class state among each student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness, together with the acquisition times of the classroom images.
In an optional implementation, the program 510 is further used to cause the processor 502 to determine the degree of correlation between the students' class information and learning information according to a pre-established association between the classroom students and the learning information and a pre-established association between the classroom students and the class information, where the class information includes at least one of the following obtained by comparing each item of the students' feature data with each item of feature data of the pre-stored in-class state sample images: facial-expression information, facial-movement information, gaze-direction information, and body-movement information; and the learning information includes at least one of the following: the students' grade information, the students' course information, teacher information corresponding to the course information, lecturing-process information corresponding to the course information, the students' parent-satisfaction information, and lecturing-platform information.
In an optional implementation, the program 510 is further used to cause the processor 502 to adjust the warning thresholds of the students' items of class information according to the correlation information, and/or to generate an analysis report for guiding the teacher's lecturing.
In an optional implementation, the program 510 is further used to cause the processor 502, when determining the students' in-class state from their feature data, to compare each student's facial feature data with the feature data of pre-stored student face images to determine each student's identity information, and to determine each student's in-class state from the students' feature data and the identity information.
In an optional implementation, the program 510 is further used to cause the processor 502, before acquiring the in-class images of the classroom students, to perform three-dimensional modeling of the classroom where the students are located to generate a three-dimensional classroom model and to determine the students' positions from a heat map in the three-dimensional classroom model; the program 510 is further used to cause the processor 502, when acquiring the in-class images, to acquire the in-class images of the students in the three-dimensional classroom model according to the determined positions.
In an optional implementation, the program 510 is further used to cause the processor 502, if it is determined from the heat map that there is a classroom student whose identity information has not been acquired, to store that student's feature data in correspondence with the heat map and, after the identity information is determined, to determine the student's in-class state from the determined identity information and the feature data corresponding to the heat map.
In an optional implementation, the program 510 is further used to cause the processor 502 to perform warning processing if a classroom student's in-class state satisfies a set warning condition.
In an optional implementation, the program 510 is further used to cause the processor 502, when acquiring the in-class images of the classroom students, to acquire an overall image of the classroom where the students are located and obtain each student's in-class image from the overall image, or to acquire each student's in-class image separately.
For the specific implementation of each step in the program 510, reference may be made to the corresponding descriptions of the corresponding steps and units in the learning monitoring method embodiments above, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, which are not repeated here.
Through the electronic device of this embodiment, the acquired in-class images of the classroom students are recognized to determine the students' feature data and, in turn, the students' in-class state. A student's facial feature data, visual feature data, and limb feature data can express information such as the student's expression, gaze, and body movements in class, and this information effectively reflects the student's current attentiveness. Through this feature data, therefore, students' in-class behavior can be monitored and analyzed across multiple dimensions such as expression, gaze, and body movement, the classroom attentiveness of students who study via computers and networks can be monitored effectively and accurately, and an effective reference is provided for subsequent learning and teaching so that the learning or teaching process can be further improved.
From the description of the above implementations, those skilled in the art can clearly understand that the implementations may be realized by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, where the computer-readable recording medium includes any mechanism for storing or transmitting information in a form readable by a computer. For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The computer software product includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the embodiments of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Claims (26)

  1. A learning monitoring method, comprising:
    acquiring in-class images of students in a classroom;
    recognizing the in-class images to acquire feature data of the classroom students, wherein the feature data comprises at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students;
    determining the in-class state of the classroom students according to the feature data of the classroom students.
  2. The method according to claim 1, wherein the in-class state comprises at least one of the following: degree of classroom concentration, degree of classroom interaction, and degree of course fondness.
  3. The method according to claim 2, wherein determining the in-class state of the classroom students according to their feature data comprises:
    comparing each item of the classroom students' feature data with each item of feature data of pre-stored in-class state sample images, and determining each classroom student's current state and a score corresponding to the current state according to the comparison results;
    determining, according to the current state and the score, at least one in-class state among each classroom student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness;
    wherein the in-class state sample images are annotated with students' feature data, state data, and scores corresponding to the state data.
  4. The method according to claim 3, wherein comparing each item of the classroom students' feature data with each item of feature data of the pre-stored in-class state sample images, determining each classroom student's current state and the score corresponding to the current state according to the comparison results, and determining, according to the current state and the score, at least one in-class state among each classroom student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness comprises:
    for each classroom student,
    comparing the classroom student's facial feature data with the facial feature data in the in-class state sample images, and determining from the comparison result whether the classroom student's facial expression matches the facial expression in the in-class state sample images; comparing the classroom student's facial feature data and visual feature data with the facial feature data and visual feature data in the in-class state sample images, and determining from the comparison result whether the classroom student's gaze direction matches the gaze direction in the in-class state sample images; comparing the classroom student's limb feature data with the limb feature data in the in-class state sample images, and determining from the comparison result whether the classroom student's body movement matches the body movement in the in-class state sample images; determining, according to the facial-expression matching result, the gaze-direction matching result, and the body-movement matching result, the classroom student's first current state and a first score corresponding to the first current state; and determining the classroom student's degree of classroom concentration according to the first current state and the first score;
    and/or,
    comparing the classroom student's facial feature data with the facial feature data in the in-class state sample images, and determining from the comparison result whether the classroom student's mouth movement matches the mouth movement in the in-class state sample images; comparing the classroom student's facial feature data and visual feature data with the facial feature data and visual feature data in the in-class state sample images, and determining from the comparison result whether the classroom student's gaze direction matches the gaze direction in the in-class state sample images; comparing the classroom student's limb feature data with the limb feature data in the in-class state sample images, and determining from the comparison result whether the classroom student's body movement matches the body movement in the in-class state sample images; determining, according to the mouth-movement matching result, the gaze-direction matching result, and the body-movement matching result, the classroom student's second current state and a second score corresponding to the second current state; and determining the classroom student's degree of classroom interaction according to the second current state and the second score;
    and/or,
    comparing the classroom student's facial feature data with the facial feature data in the in-class state sample images, and determining from the comparison result whether the classroom student's facial expression matches the facial expression in the in-class state sample images; determining, according to the facial-expression matching result, the classroom student's third current state and a third score corresponding to the third current state; and determining the classroom student's degree of course fondness according to the third current state and the third score.
  5. The method according to claim 3 or 4, wherein the method further comprises:
    generating a class-information curve graph according to the score corresponding to at least one in-class state among each classroom student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness, together with the acquisition times of the classroom images.
  6. The method according to claim 4, wherein the method further comprises:
    determining the degree of correlation between the classroom students' class information and learning information according to a pre-established association between the classroom students and the learning information and a pre-established association between the classroom students and the class information;
    wherein the class information comprises at least one of facial-expression information, facial-movement information, gaze-direction information, and body-movement information obtained by comparing each item of the classroom students' feature data with each item of feature data of the pre-stored in-class state sample images; and the learning information comprises at least one of the following: grade information of the classroom students, course information of the classroom students, teacher information corresponding to the course information, lecturing-process information corresponding to the course information, parent-satisfaction information of the classroom students, and lecturing-platform information.
  7. The method according to claim 6, wherein the method further comprises:
    adjusting warning thresholds of the classroom students' items of class information according to the correlation information, and/or generating an analysis report for guiding the teacher's lecturing.
  8. The method according to claim 1, wherein determining the in-class state of the classroom students according to their feature data comprises:
    comparing each classroom student's facial feature data with feature data of pre-stored student face images to determine each classroom student's identity information;
    determining each classroom student's in-class state according to the classroom students' feature data and the identity information.
  9. The method according to claim 1, wherein,
    before acquiring the in-class images of the classroom students, the method further comprises: performing three-dimensional modeling of the classroom where the students are located to generate a three-dimensional classroom model; and determining the positions of the classroom students according to a heat map in the three-dimensional classroom model;
    acquiring the in-class images of the classroom students comprises: acquiring the in-class images of the classroom students in the three-dimensional classroom model according to the determined positions of the classroom students.
  10. The method according to claim 9, wherein the method further comprises:
    if it is determined from the heat map that there is a classroom student whose identity information has not been acquired,
    storing the feature data of the classroom student whose identity information has not been acquired in correspondence with the heat map, and after the identity information is determined, determining the classroom student's in-class state according to the determined identity information and the feature data corresponding to the heat map.
  11. The method according to claim 1, wherein the method further comprises:
    performing warning processing if a classroom student's in-class state satisfies a set warning condition.
  12. The method according to claim 1, wherein acquiring the in-class images of the classroom students comprises:
    acquiring an overall image of the classroom where the students are located, and obtaining each classroom student's in-class image from the overall image;
    or,
    acquiring each classroom student's in-class image separately.
  13. A learning monitoring apparatus, comprising:
    a first acquisition module configured to acquire in-class images of students in a classroom;
    a second acquisition module configured to recognize the in-class images and acquire feature data of the classroom students, wherein the feature data comprises at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students;
    a determination module configured to determine the in-class state of the classroom students according to their feature data.
  14. The apparatus according to claim 13, wherein the in-class state comprises at least one of the following: degree of classroom concentration, degree of classroom interaction, and degree of course fondness.
  15. The apparatus according to claim 14, wherein the determination module is configured to compare each item of the classroom students' feature data with each item of feature data of pre-stored in-class state sample images, determine each classroom student's current state and a score corresponding to the current state according to the comparison results, and determine, according to the current state and the score, at least one in-class state among each classroom student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness; wherein the in-class state sample images are annotated with students' feature data, state data, and scores corresponding to the state data.
  16. The apparatus according to claim 13, wherein the determination module comprises:
    a first determination submodule configured to, for each classroom student, compare the classroom student's facial feature data with the facial feature data in the in-class state sample images and determine from the comparison result whether the classroom student's facial expression matches the facial expression in the in-class state sample images; compare the classroom student's facial feature data and visual feature data with the facial feature data and visual feature data in the in-class state sample images and determine from the comparison result whether the classroom student's gaze direction matches the gaze direction in the in-class state sample images; compare the classroom student's limb feature data with the limb feature data in the in-class state sample images and determine from the comparison result whether the classroom student's body movement matches the body movement in the in-class state sample images; determine, according to the facial-expression matching result, the gaze-direction matching result, and the body-movement matching result, the classroom student's first current state and a first score corresponding to the first current state; and determine the classroom student's degree of classroom concentration according to the first current state and the first score;
    and/or,
    a second determination submodule configured to, for each classroom student, compare the classroom student's facial feature data with the facial feature data in the in-class state sample images and determine from the comparison result whether the classroom student's mouth movement matches the mouth movement in the in-class state sample images; compare the classroom student's facial feature data and visual feature data with the facial feature data and visual feature data in the in-class state sample images and determine from the comparison result whether the classroom student's gaze direction matches the gaze direction in the in-class state sample images; compare the classroom student's limb feature data with the limb feature data in the in-class state sample images and determine from the comparison result whether the classroom student's body movement matches the body movement in the in-class state sample images; determine, according to the mouth-movement matching result, the gaze-direction matching result, and the body-movement matching result, the classroom student's second current state and a second score corresponding to the second current state; and determine the classroom student's degree of classroom interaction according to the second current state and the second score;
    and/or,
    a third determination submodule configured to, for each classroom student, compare the classroom student's facial feature data with the facial feature data in the in-class state sample images and determine from the comparison result whether the classroom student's facial expression matches the facial expression in the in-class state sample images; determine, according to the facial-expression matching result, the classroom student's third current state and a third score corresponding to the third current state; and determine the classroom student's degree of course fondness according to the third current state and the third score.
  17. The apparatus according to claim 15 or 16, wherein the apparatus further comprises:
    a curve-graph generation module configured to generate a class-information curve graph according to the score corresponding to at least one in-class state among each classroom student's degree of classroom concentration, degree of classroom interaction, and degree of course fondness, together with the acquisition times of the classroom images.
  18. The apparatus according to claim 16, wherein the apparatus further comprises:
    an association module configured to determine the degree of correlation between the classroom students' in-class state and learning information according to a pre-established association between the classroom students and the learning information and a pre-established association between the classroom students and class information; wherein the class information comprises at least one of facial-expression information, facial-movement information, gaze-direction information, and body-movement information obtained by comparing each item of the classroom students' feature data with each item of feature data of pre-stored in-class state sample images; and the learning information comprises at least one of the following: grade information of the classroom students, course information of the classroom students, teacher information corresponding to the course information, lecturing-process information corresponding to the course information, parent-satisfaction information of the classroom students, and lecturing-platform information.
  19. The apparatus according to claim 18, wherein the apparatus further comprises:
    an adjustment module configured to adjust warning thresholds of the classroom students' items of class information according to the correlation information, and/or generate an analysis report for guiding the teacher's lecturing.
  20. The apparatus according to claim 13, wherein the determination module comprises:
    a comparison submodule configured to compare each classroom student's facial feature data with feature data of pre-stored student face images to determine each classroom student's identity information;
    a state determination submodule configured to determine each classroom student's in-class state according to the classroom students' feature data and the identity information.
  21. The apparatus according to claim 13, wherein,
    the apparatus further comprises: a three-dimensional module configured to, before the first acquisition module acquires the in-class images of the classroom students, perform three-dimensional modeling of the classroom where the students are located to generate a three-dimensional classroom model, and determine the positions of the classroom students according to a heat map in the three-dimensional classroom model;
    the first acquisition module is configured to acquire the in-class images of the classroom students in the three-dimensional classroom model according to the determined positions of the classroom students.
  22. The apparatus according to claim 21, wherein the apparatus further comprises:
    an identity-correspondence module configured to, if it is determined from the heat map that there is a classroom student whose identity information has not been acquired, store the feature data of that classroom student in correspondence with the heat map, and after the identity information is determined, determine the classroom student's in-class state according to the determined identity information and the feature data corresponding to the heat map.
  23. The apparatus according to claim 13, wherein the apparatus further comprises:
    a warning module configured to perform warning processing if a classroom student's in-class state satisfies a set warning condition.
  24. The apparatus according to claim 13, wherein the first acquisition module is configured to acquire an overall image of the classroom where the students are located and obtain each classroom student's in-class image from the overall image, or to acquire each classroom student's in-class image separately.
  25. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
    the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the learning monitoring method according to any one of claims 1-12.
  26. A computer storage medium, storing: executable instructions for acquiring in-class images of students in a classroom; executable instructions for recognizing the in-class images and acquiring feature data of the classroom students, wherein the feature data comprises at least one of the following: facial feature data of the classroom students, visual feature data of the classroom students, and limb feature data of the classroom students; and executable instructions for determining the in-class state of the classroom students according to their feature data.
PCT/CN2018/086686 2017-06-23 2018-05-14 Learning monitoring method, apparatus and electronic device WO2018233398A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/720,070 US10891873B2 (en) 2017-06-23 2019-12-19 Method and apparatus for monitoring learning and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710487794.2 2017-06-23
CN201710487794.2A CN107292271B (zh) 2017-06-23 2017-06-23 Learning monitoring method, apparatus and electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/720,070 Continuation US10891873B2 (en) 2017-06-23 2019-12-19 Method and apparatus for monitoring learning and electronic device

Publications (1)

Publication Number Publication Date
WO2018233398A1

Family

ID=60098082

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/086686 WO2018233398A1 (zh) 2017-06-23 2018-05-14 Learning monitoring method, apparatus and electronic device

Country Status (3)

Country Link
US (1) US10891873B2 (zh)
CN (1) CN107292271B (zh)
WO (1) WO2018233398A1 (zh)

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292271B (zh) * 2017-06-23 2020-02-14 北京易真学思教育科技有限公司 Learning monitoring method, apparatus and electronic device
US20190087834A1 (en) 2017-09-15 2019-03-21 Pearson Education, Inc. Digital credential analysis in a digital credential platform
CN109753855A (zh) * 2017-11-06 2019-05-14 北京易真学思教育科技有限公司 Method and apparatus for determining a teaching scene state
CN108154109A (zh) * 2017-12-22 2018-06-12 福州瑞芯微电子股份有限公司 Method and apparatus for constructing an intelligent recording-and-broadcasting model, and intelligent teaching recording-and-broadcasting method
CN108460700B (zh) * 2017-12-28 2021-11-16 北京科教科学研究院 Intelligent student education management and regulation system
CN108304779B (zh) * 2017-12-28 2021-11-26 合肥智权信息科技有限公司 Intelligent regulation method for student education management
CN108304793B (zh) * 2018-01-26 2021-01-08 北京世纪好未来教育科技有限公司 Online learning analysis system and method
CN108615420B (zh) * 2018-04-28 2020-08-28 北京比特智学科技有限公司 Courseware generation method and apparatus
CN108710204A (zh) * 2018-05-15 2018-10-26 北京普诺兴科技有限公司 Eye-tracking-based teaching quality testing method and system
CN108694680A (zh) * 2018-05-23 2018-10-23 安徽爱学堂教育科技有限公司 Classroom interaction method and apparatus
CN108564073A (zh) * 2018-06-20 2018-09-21 四川文理学院 Method and apparatus for recognizing student emotions in a classroom environment
CN108924487A (zh) * 2018-06-29 2018-11-30 合肥霞康电子商务有限公司 Remote monitoring system based on online teaching
CN108694565A (zh) * 2018-06-29 2018-10-23 重庆工业职业技术学院 Class reminder system and class reminder method
CN108846446B (zh) * 2018-07-04 2021-10-12 国家新闻出版广电总局广播科学研究院 Object detection method based on a multi-path dense-feature-fusion fully convolutional network
CN108848366B (zh) * 2018-07-05 2020-12-18 盎锐(上海)信息科技有限公司 Information acquisition apparatus and method based on a 3D camera
TWI674553B (zh) * 2018-07-27 2019-10-11 財團法人資訊工業策進會 Teaching quality monitoring system and method
CN109344682A (zh) * 2018-08-02 2019-02-15 平安科技(深圳)有限公司 Classroom monitoring method and apparatus, computer device and storage medium
CN109271896B (zh) * 2018-08-30 2021-08-20 南通理工学院 Image-recognition-based student assessment system and method
CN109089099A (zh) * 2018-09-05 2018-12-25 广州维纳斯家居股份有限公司 Image processing method, apparatus, device and storage medium
CN109165633A (zh) * 2018-09-21 2019-01-08 上海健坤教育科技有限公司 Intelligent interactive learning system based on camera perception
CN109284713A (zh) * 2018-09-21 2019-01-29 上海健坤教育科技有限公司 Emotion recognition and analysis system based on expression data collected by a camera
CN109461104A (zh) * 2018-10-22 2019-03-12 杭州闪宝科技有限公司 Classroom monitoring method, apparatus and electronic device
CN109614849A (zh) * 2018-10-25 2019-04-12 深圳壹账通智能科技有限公司 Biometric-recognition-based remote teaching method, apparatus, device and storage medium
CN109192050A (zh) * 2018-10-25 2019-01-11 重庆鲁班机器人技术研究院有限公司 Experiential language teaching method, apparatus and educational robot
CN109635725B (zh) * 2018-12-11 2023-09-12 深圳先进技术研究院 Method for detecting student concentration, computer storage medium and computer device
CN109815795A (zh) * 2018-12-14 2019-05-28 深圳壹账通智能科技有限公司 Face-monitoring-based method and apparatus for analyzing classroom students' states
CN109523441A (zh) * 2018-12-20 2019-03-26 合肥凌极西雅电子科技有限公司 Video-recognition-based teaching management method and system
CN109859078A (zh) * 2018-12-24 2019-06-07 山东大学 Method, apparatus and system for analyzing and intervening in students' learning behavior
CN109727501A (zh) * 2019-01-07 2019-05-07 北京汉博信息技术有限公司 Teaching system
CN111462558A (zh) * 2019-01-19 2020-07-28 掌傲信息科技(上海)有限公司 Artificial-intelligence-based smart education platform
CN109615826B (zh) * 2019-01-22 2020-08-28 宁波财经学院 Intelligent anti-drowsiness and alarm system
CN109919079A (zh) * 2019-03-05 2019-06-21 百度在线网络技术(北京)有限公司 Method and apparatus for detecting learning states
CN109934150B (zh) * 2019-03-07 2022-04-05 百度在线网络技术(北京)有限公司 Conference participation recognition method, apparatus, server and storage medium
CN110033400A (zh) * 2019-03-26 2019-07-19 深圳先进技术研究院 Classroom monitoring and analysis system
CN111833861A (zh) * 2019-04-19 2020-10-27 微软技术许可有限责任公司 Artificial-intelligence-based event evaluation report generation
CN109919143B (zh) * 2019-04-24 2023-08-18 重庆交互科技有限公司 Education method based on multi-sensory interactive experience and learning-attention assessment
CN111862521B (zh) * 2019-04-28 2022-07-05 杭州海康威视数字技术股份有限公司 Behavior heat map generation and alarm method, apparatus, electronic device and storage medium
CN110188629A (zh) * 2019-05-13 2019-08-30 四点零(成都)教育咨询有限公司 Random roll-call system and roll-call method
CN110166839A (zh) * 2019-06-15 2019-08-23 韶关市启之信息技术有限公司 Method and system for verifying whether a video has been watched
CN112580910A (zh) * 2019-09-29 2021-03-30 鸿富锦精密电子(天津)有限公司 Teaching questionnaire analysis method, electronic apparatus and storage medium
WO2021077382A1 (zh) * 2019-10-25 2021-04-29 中新智擎科技有限公司 Learning state judgment method and apparatus, and intelligent robot
CN111507555B (zh) * 2019-11-05 2023-11-14 浙江大华技术股份有限公司 Human body state detection method, classroom teaching quality evaluation method and related apparatus
CN111242049B (zh) * 2020-01-15 2023-08-04 武汉科技大学 Face-recognition-based method and system for evaluating students' learning states in online classes
CN111243363B (zh) * 2020-03-27 2021-11-09 上海松鼠课堂人工智能科技有限公司 Multimedia sensory teaching system
CN111507241A (zh) * 2020-04-14 2020-08-07 四川聚阳科技集团有限公司 Lightweight online-classroom expression monitoring method
CN111565230A (zh) * 2020-04-29 2020-08-21 江苏加信智慧大数据研究院有限公司 Comprehensive recognition system and process
CN111696011B (zh) * 2020-06-04 2023-09-29 信雅达科技股份有限公司 System and method for monitoring and regulating students' autonomous learning
CN111681474A (zh) * 2020-06-17 2020-09-18 中国银行股份有限公司 Online live-streaming teaching method, apparatus, computer device and readable storage medium
CN111881830A (zh) * 2020-07-28 2020-11-03 安徽爱学堂教育科技有限公司 Interactive prompting method based on attention-concentration detection
JP6898502B1 (ja) * 2020-07-29 2021-07-07 株式会社オプティム Program, method and information processing apparatus
CN111935264A (zh) * 2020-07-31 2020-11-13 深圳市汇合体验教育科技有限公司 Smart classroom interaction system
CN111985364A (zh) * 2020-08-06 2020-11-24 深圳拔越软件有限公司 Method and system for evaluating student behavior in remote Internet education
CN112017085B (zh) * 2020-08-18 2021-07-20 上海松鼠课堂人工智能科技有限公司 Method for personalizing an intelligent virtual teacher image
CN111985582B (zh) * 2020-09-27 2021-06-01 上海松鼠课堂人工智能科技有限公司 Learning-behavior-based method for assessing mastery of knowledge points
CN112163162B (zh) * 2020-10-14 2023-12-08 珠海格力电器股份有限公司 Portrait-recognition-based elective course recommendation method, storage medium and electronic device
CN112651602A (zh) * 2020-12-03 2021-04-13 宁波大学科学技术学院 Classroom mode evaluation method and device
CN113743263B (zh) * 2021-08-23 2024-02-13 华中师范大学 Method and system for measuring teachers' non-verbal behavior
CN114299769A (zh) * 2022-01-04 2022-04-08 华北理工大学 Online teaching apparatus
CN114169808A (zh) * 2022-02-14 2022-03-11 北京和气聚力教育科技有限公司 Computer-implemented learning capability assessment method, computing device, medium and system
US20230261894A1 (en) * 2022-02-14 2023-08-17 Sony Group Corporation Meeting session control based on attention determination
CN116416097B (zh) * 2023-06-02 2023-08-18 成都优学家科技有限公司 Teaching method, system and device based on a multi-dimensional teaching model

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4973149A (en) * 1987-08-19 1990-11-27 Center For Innovative Technology Eye movement detector
US8269834B2 (en) * 2007-01-12 2012-09-18 International Business Machines Corporation Warning a user about adverse behaviors of others within an environment based on a 3D captured image stream
US8463006B2 (en) * 2007-04-17 2013-06-11 Francine J. Prokoski System and method for using three dimensional infrared imaging to provide detailed anatomical structure maps
US8428918B2 (en) * 2007-09-19 2013-04-23 Utc Fire & Security Corporation System and method for occupancy estimation
US8515127B2 (en) * 2010-07-28 2013-08-20 International Business Machines Corporation Multispectral detection of personal attributes for video surveillance
US9354725B2 (en) * 2012-06-01 2016-05-31 New York University Tracking movement of a writing instrument on a general surface
US9165190B2 (en) * 2012-09-12 2015-10-20 Avigilon Fortress Corporation 3D human pose and shape modeling
US20140267611A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Runtime engine for analyzing user motion in 3d images
DE102015113941A1 (de) * 2014-08-21 2016-02-25 Affectomatics Ltd. Rating of restaurants based on affective response
CN106662746B (zh) * 2014-08-22 2020-10-23 国际智能技术公司 Secure examination device, system and method
EP3192058A4 (en) * 2014-09-08 2018-05-02 Simx LLC Augmented reality simulator for professional and educational training
EP3030151A4 (en) * 2014-10-01 2017-05-24 Nuralogix Corporation System and method for detecting invisible human emotion
US11232466B2 (en) * 2015-01-29 2022-01-25 Affectomatics Ltd. Recommendation for experiences based on measurements of affective response that are backed by assurances
US10692395B2 (en) * 2015-08-17 2020-06-23 University Of Maryland, Baltimore Automated surgeon performance evaluation
US9940823B2 (en) * 2016-02-29 2018-04-10 International Business Machines Corporation System, method, and recording medium for emergency identification and management using smart devices and non-smart devices
CN106250822A (zh) 2016-07-21 2016-12-21 苏州科大讯飞教育科技有限公司 Face-recognition-based student concentration monitoring system and method
US10517520B2 (en) * 2016-11-10 2019-12-31 Neurotrack Technologies, Inc. Method and system for correlating an image capturing device to a human user for analysis of cognitive performance
US10474336B2 (en) * 2016-12-20 2019-11-12 Adobe Inc. Providing a user experience with virtual reality content and user-selected, real world objects
CN106851579B (zh) * 2017-03-27 2018-09-28 华南师范大学 Method for recording and analyzing teacher movement data based on indoor positioning technology
US11301944B2 (en) * 2017-04-13 2022-04-12 International Business Machines Corporation Configuring classroom physical resources
US10593058B2 (en) * 2017-08-23 2020-03-17 Liam Hodge Human radar
US11908345B2 (en) * 2018-12-27 2024-02-20 Intel Corporation Engagement level determination and dissemination

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217321A1 (en) * 2015-01-23 2016-07-28 Shindig. Inc. Systems and methods for analyzing facial expressions within an online classroom to gauge participant attentiveness
CN106599881A (zh) * 2016-12-30 2017-04-26 首都师范大学 Method, apparatus and system for determining student states
CN106652605A (zh) * 2017-03-07 2017-05-10 佛山市金蓝领教育科技有限公司 Remote emotional teaching method
CN106846949A (zh) * 2017-03-07 2017-06-13 佛山市金蓝领教育科技有限公司 Remote emotional teaching system
CN106851216A (zh) * 2017-03-10 2017-06-13 山东师范大学 Classroom behavior monitoring system and method based on face and speech recognition
CN107292271A (zh) * 2017-06-23 2017-10-24 北京易真学思教育科技有限公司 Learning monitoring method, apparatus and electronic device

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344723A (zh) * 2018-09-04 2019-02-15 四川文轩教育科技有限公司 Student monitoring method based on a line-of-sight algorithm
CN109858809A (zh) * 2019-01-31 2019-06-07 浙江传媒学院 Learning quality assessment method and system based on analysis of classroom student behavior
CN109858809B (zh) * 2019-01-31 2020-04-28 浙江传媒学院 Learning quality assessment method and system based on analysis of classroom student behavior
CN111127267A (zh) * 2019-12-18 2020-05-08 四川文轩教育科技有限公司 School teaching problem analysis method based on assessment big data
CN111144255A (zh) * 2019-12-18 2020-05-12 华中科技大学鄂州工业技术研究院 Method and apparatus for analyzing teachers' non-verbal behavior
CN111144255B (zh) * 2019-12-18 2024-04-19 华中科技大学鄂州工业技术研究院 Method and apparatus for analyzing teachers' non-verbal behavior
CN111563702A (zh) * 2020-06-24 2020-08-21 重庆电子工程职业学院 Classroom teaching interaction system
CN111970490A (zh) * 2020-08-06 2020-11-20 万翼科技有限公司 User traffic monitoring method and related device
CN111970490B (zh) * 2020-08-06 2023-04-18 万翼科技有限公司 User traffic monitoring method and related device
CN112749048A (zh) * 2021-01-13 2021-05-04 北京开拓鸿业高科技有限公司 Method, apparatus, medium and device for stress testing a teaching server

Also Published As

Publication number Publication date
CN107292271A (zh) 2017-10-24
US10891873B2 (en) 2021-01-12
US20200126444A1 (en) 2020-04-23
CN107292271B (zh) 2020-02-14

Similar Documents

Publication Publication Date Title
WO2018233398A1 (zh) Learning monitoring method, apparatus and electronic device
US20170039869A1 (en) System and method for validating honest test taking
US20210201697A1 (en) Language learning method, electronic device executing the method, and storage medium storing the method program files
US11651700B2 (en) Assessing learning session retention utilizing a multi-disciplined learning tool
CN111428686A (zh) 一种学生兴趣偏好评估方法、装置及系统
CN110992222A (zh) 教学交互方法、装置、终端设备及存储介质
CN111008542A (zh) 对象专注度分析方法、装置、电子终端及存储介质
TW201941152A (zh) 用於互動式線上教學的即時監控方法
CN111353363A (zh) 一种教学效果检测方法及装置、电子设备
CN112669422A (zh) 仿真3d数字人生成方法、装置、电子设备及存储介质
US20240038087A1 (en) Updating a virtual reality environment based on portrayal evaluation
CN115937961B (zh) 一种线上学习识别方法及设备
CN113837010A (zh) 一种教育评估系统及方法
Sakthivel et al. Online Education Pedagogy Approach
CN112528790B (zh) 基于行为识别的教学管理方法、装置及服务器
Takahashi et al. Improvement of detection for warning students in e-learning using web cameras
KR102590787B1 (ko) Screen golf lesson matching service system and method
TWI750613B (zh) Remote teaching effectiveness presentation system and method
US12002379B2 (en) Generating a virtual reality learning environment
US11887494B2 (en) Generating a virtual reality learning environment
FR3085221A1 (fr) Multimedia system comprising human-machine interaction hardware and a computer
KR102439446B1 (ko) Artificial-intelligence-based learning management system
US20230237922A1 (en) Artificial intelligence-driven avatar-based personalized learning techniques
Nakachi et al. Perception analysis of motion contributing to individuality using kinect sensor
Antoni et al. Augmented Virtuality Training for Special Education Teachers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18821127

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18821127

Country of ref document: EP

Kind code of ref document: A1