CN112883851A - Learning state detection method and device, electronic equipment and storage medium - Google Patents

Learning state detection method and device, electronic equipment and storage medium

Info

Publication number
CN112883851A
CN112883851A
Authority
CN
China
Prior art keywords
user
learning state
frames
information
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110154077.4A
Other languages
Chinese (zh)
Inventor
彭婕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202110154077.4A
Publication of CN112883851A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a learning state detection method and device, an electronic device and a storage medium. The method is applied to the electronic device and may include the following steps: acquiring M frames of user images through a camera of the electronic device; extracting N frames of user images to be identified from the M frames of user images, where M is an integer greater than or equal to 2 and N is a positive integer less than or equal to M; identifying the N frames of user images to be identified respectively, to obtain a head feature and a face feature corresponding to each frame of user image to be identified; calculating the face information of the user according to the head feature and face feature corresponding to each frame, where the face information includes the face area and the deviation degree of the distribution of the five sense organs; and determining the user learning state from the face information. The learning state detection method and device, electronic device and storage medium can improve the accuracy of learning state detection.

Description

Learning state detection method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a learning state detection method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of science and technology, the growth of the mobile internet, and the popularization of intelligent terminals, online learning has gradually become a popular mode of study: students can learn without leaving home or traveling to a fixed classroom, needing only an electronic device and an internet connection. However, because students can choose their own learning time and place, free of geographic and scheduling constraints, parents and teachers cannot monitor a student's learning state in real time, and the learning effect cannot be guaranteed.
Disclosure of Invention
The embodiment of the application discloses a learning state detection method and device, electronic equipment and a storage medium, which can accurately detect the learning state of a user and improve the learning effect of the user.
The embodiment of the application discloses a learning state detection method, which comprises the following steps:
acquiring M frames of user images through a camera of the electronic equipment;
extracting N frames of user images to be identified from the M frames of user images, wherein M is an integer greater than or equal to 2, and N is a positive integer less than or equal to M;
respectively identifying the N frames of user images to be identified to obtain a head characteristic and a face characteristic corresponding to each frame of user image to be identified;
calculating to obtain face information of the user according to the head characteristics and the face characteristics corresponding to each frame of user image to be recognized, wherein the face information comprises the face area and the deviation degree of the distribution of the five sense organs;
determining a user learning state from the facial information.
The embodiment of the application discloses learning state detection device, the device includes:
the image acquisition unit is used for acquiring M frames of user images through a camera of the electronic equipment;
an image extraction unit, configured to extract N frames of user images to be identified from the M frames of user images, where M is an integer greater than or equal to 2, and N is a positive integer less than or equal to M;
the image identification unit is used for respectively identifying the N frames of user images to be identified to obtain the head characteristics and the face characteristics corresponding to each frame of user image to be identified;
a calculating unit, used for calculating the face information of the user according to the head feature and face feature corresponding to each frame of user image to be recognized, wherein the face information comprises the face area and the deviation degree of the distribution of the five sense organs; and
a state determination unit, used for determining the user learning state from the face information.
The embodiment of the application discloses an electronic device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is enabled to realize the method.
An embodiment of the application discloses a computer-readable storage medium, which stores a computer program, and the computer program realizes the method described above when being executed by a processor.
In the learning state detection method and device, electronic device and computer-readable storage medium disclosed in the embodiments of the present application, M frames of user images are collected by a camera of the electronic device, N frames of user images to be identified are extracted from the collected M frames, the N frames are identified respectively to obtain the head feature and face feature corresponding to each frame, the face information of the user is calculated from the identified head and face features, and the learning state of the user is then determined from the face information. By implementing the embodiments of the present application, the face information of the user can be obtained by analyzing the user images, and the learning state can be determined from face information such as the face area and the deviation degree of the distribution of the five sense organs, so that the learning state of the user is detected accurately and the learning effect of the user is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram illustrating an exemplary implementation of a learning state detection method;
FIG. 2 is a flow diagram illustrating a learning state detection method according to one embodiment;
FIG. 3 is a flow diagram illustrating a process for determining a learning state of a user based on rotation information of the user's head, according to one embodiment;
FIG. 4 is a flowchart illustrating an exemplary process of outputting a prompt message by an electronic device;
FIG. 5 is a block diagram of a learning state detection apparatus in one embodiment;
fig. 6 is a block diagram of an electronic device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, system, article, or apparatus.
Fig. 1 is a diagram illustrating an application scenario of the learning state detection method in one embodiment. As shown in fig. 1, the electronic device 101 is an electronic device with a camera, which may include but is not limited to a smart phone, a wearable device, a tablet computer, a PC (Personal Computer), and the like. The user may be any user who needs to perform online learning, and the scenario applies wherever the user learns online, whether at home, in a library, or elsewhere. When the user performs online learning using the electronic device, the learning state of the user is detected, ensuring that the user can maintain a good learning state throughout the online learning process.
In some embodiments, the electronic device may further establish a communication connection with a binding terminal. When the guardian (such as a parent or a teacher) corresponding to the binding terminal has no time, or other matters prevent them from supervising the user's learning in person, the method of the embodiments of the present application can be used to learn the user's online learning state and supervise remotely.
Fig. 2 is a flowchart illustrating a learning state detection method according to an embodiment. As shown in fig. 2, the learning state detection method can be applied to the electronic device, and the method can include the following steps:
201. Acquiring M frames of user images through a camera of the electronic device.
In some embodiments, the camera may be a front camera of the electronic device, and when a user uses the electronic device to perform online learning, the electronic device may collect an image of the user through the front camera and transmit the collected image of the user to a processor of the electronic device in real time.
In one embodiment, the electronic device may preset a lesson period during which the user images are captured by the camera. For example, the lesson period may be preset to 8 to 10 a.m., and the electronic device captures user images through the camera during that period. Optionally, the lesson period may be entered manually by the user according to actual requirements; the user may also import a curriculum schedule into the electronic device, which identifies the schedule and determines each day's lesson periods, which is more convenient.
In another embodiment, the electronic device may also monitor running applications in real time; when it detects that a learning-class application has started running, this indicates that the user is performing online learning, and the user images may be acquired through the camera. Further, when detecting that an application has started, the electronic device may obtain the application identifier of the started application and determine the application type from it, where the application identifier may include, but is not limited to, the number, name, and version number of the application, and the application type may include, but is not limited to, entertainment applications, learning applications, social applications, and the like. The correspondence between application identifiers and application types may be stored in the electronic device in advance, and the electronic device may obtain the type of the started application based on that correspondence. If the type of the started application is a learning application, the user images may be collected through the camera.
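As a toy illustration of the identifier-to-type lookup just described, the following sketch keeps the correspondence in a simple table; the package names and the mapping itself are illustrative assumptions, not values from this application.

```python
# Illustrative identifier-to-type table; all entries are hypothetical.
APP_TYPES = {
    "com.example.mathtutor": "learning",
    "com.example.shortvideo": "entertainment",
    "com.example.chat": "social",
}

def should_start_capture(app_id: str) -> bool:
    """Start collecting user images only when a learning-type app launches."""
    return APP_TYPES.get(app_id) == "learning"
```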
203. Extracting N frames of user images to be identified from the acquired M frames of user images, where M is an integer greater than or equal to 2 and N is a positive integer less than or equal to M.
In one embodiment, the electronic device extracting N frames of user images to be recognized from the acquired M frames may include: extracting one user image to be identified from the M frames at intervals of a fixed number of collected frames. The interval can be set according to actual requirements, such as 10 frames or 15 frames. For example, the camera may collect 30 user images within a certain period and transmit them to the processor of the electronic device in real time; the processor, receiving the images in real time, may extract one frame from every 15 received frames as a user image to be identified.
In another embodiment, the electronic device may instead extract the N frames of user images to be identified from the M frames at a fixed time interval. The interval can be set according to actual requirements, such as 2 or 3 seconds. For example, the camera may acquire user images at 10 frames per second and transmit them to the processor in real time; the processor may then extract one frame every 2 seconds from the received images as a user image to be identified. A sketch of both sampling strategies is given below.
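The following is a minimal sketch of the two sampling strategies, assuming the camera stream is read with OpenCV; the every-15th-frame and every-2-seconds parameters mirror the examples above, and the fallback frame rate is an assumption.

```python
import cv2

def sample_by_count(capture, m, step=15):
    """Read M frames and keep every `step`-th one as an image to identify."""
    selected = []
    for i in range(m):
        ok, frame = capture.read()
        if not ok:
            break
        if i % step == 0:  # keep the 1st, 16th, 31st, ... frame
            selected.append(frame)
    return selected

def sample_by_time(capture, m, seconds=2.0):
    """Keep one frame per `seconds`, using the camera's reported frame rate."""
    fps = capture.get(cv2.CAP_PROP_FPS) or 10.0  # assume 10 fps if unreported
    return sample_by_count(capture, m, step=max(1, round(fps * seconds)))
```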
In the embodiment of the application, N frames of user images to be identified are extracted from the acquired M frames, and only those N frames are identified; that is, not every acquired user image needs to be recognized. This improves the efficiency of learning state detection, shortens the time required, reduces the power consumption of the electronic device during detection, and avoids wasting resources.
205. Respectively identifying the N frames of user images to be identified, to obtain the head feature and face feature corresponding to each frame of user image to be identified.
In this embodiment of the application, identifying the N frames of user images to be identified may include: performing image recognition on each frame, extracting the feature points in it, and marking the feature points belonging to the head and the face, so as to obtain the head feature and face feature corresponding to that frame. For example, feature points belonging to the head contour, the face contour, the contours of the five sense organs, and the like may be labeled.
The head feature is the image feature about head information extracted from each frame of user image to be recognized, and includes features such as the user's head position, head size, and head shape. The face feature is the image feature about face information extracted from each frame, and may include features such as the face region and the positions and sizes of the five sense organs.
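The application does not name a particular detector, so as one hedged possibility the sketch below uses OpenCV's bundled Haar cascades: the detected face box stands in for the head and face features, the eye boxes for the eye-related features, and an empty result for the case where no head or face feature can be recognized.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_features(frame):
    """Return head/face feature boxes for one frame, or None if no face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no head/face features recognized in this frame
    x, y, w, h = max(faces, key=lambda b: b[2] * b[3])  # largest face box
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    return {"face_box": (int(x), int(y), int(w), int(h)),
            "eye_boxes": [tuple(int(v) for v in e) for e in eyes]}
```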
207. Calculating the face information of the user according to the head feature and face feature corresponding to each frame of user image to be recognized, where the face information includes the face area and the deviation degree of the distribution of the five sense organs.
The face area refers to the size of the image region occupied by the recognized face in the user image. The deviation degree of the distribution of the five sense organs refers to the offset position, offset direction, and the like of the five sense organs as located in the user image to be recognized, relative to their positions in a standard user image, where the standard user image is an image acquired when the user's face directly faces the camera.
In some embodiments, calculating the face information of the user according to the head feature and face feature corresponding to each frame of user image to be recognized may include: calculating the face area of the user according to the head position and face region corresponding to each frame; and calculating the deviation degree of the distribution of the five sense organs according to the face region and the positions of the five sense organs corresponding to each frame.
As a specific implementation, the first feature points belonging to the face contour in the user image to be recognized can be determined, the image coordinates of each first feature point obtained, and the face area of the user calculated from those coordinates. Likewise, the second feature points belonging to the contours of the five sense organs can be determined, their image coordinates obtained, and the positions of the five sense organs calculated from them. Comparing the positions of the five sense organs in the image to be identified with their positions in the standard user image gives the relative positional relationship of each corresponding organ, from which the deviation degree of the distribution of the five sense organs can be determined; both calculations are sketched below.
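A minimal sketch of both calculations follows, assuming a landmark detector has already produced (N, 2) arrays of image coordinates for the face contour and for each organ; the shoelace formula for the area and the mean point offset as the deviation measure are illustrative choices, not formulas stated in this application.

```python
import numpy as np

def polygon_area(contour_pts: np.ndarray) -> float:
    """Shoelace formula over the (N, 2) face-contour coordinates."""
    x, y = contour_pts[:, 0], contour_pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def organ_deviation(organ_pts: np.ndarray, standard_pts: np.ndarray) -> float:
    """Mean Euclidean offset of organ points from the standard (face-on) image."""
    return float(np.mean(np.linalg.norm(organ_pts - standard_pts, axis=1)))
```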
209. The user learning state is determined from the face information.
The user learning states may include, but are not limited to, a mental inattention state, a fatigue state, a not-learning-before-the-device state, and a normal learning state. The mental inattention state is one in which the user is not concentrating on learning, is looking elsewhere, or is focusing on other things; the fatigue state is one in which the user is dozing, mentally tired, or asleep; the not-learning-before-the-device state is one in which the user is not in front of the electronic device at all; and the normal learning state is one in which the user is attentively learning.
The electronic device can determine the user learning state according to the user's face area and the deviation degree of the distribution of the five sense organs. For example, when the face area is small and the deviation degree is large, the user learning state may be determined to be the mental inattention state; when the deviation degree is small, the user learning state may be determined to be the normal learning state; but the determination is not limited to these rules.
In the embodiment of the application, the head characteristics and the face characteristics of the user are obtained by identifying each frame of user image to be identified, the face information of the user is obtained through calculation, and the learning state of the user is determined according to the face information, so that the accuracy of detecting the learning state of the user can be improved, and the learning effect of the user is improved.
In one embodiment, step 209 determines the user learning state from the facial information, which may include: and determining the rotation information of the head of the user according to the face area of the user and the deviation degree of the distribution of the five sense organs, and determining the learning state of the user according to the rotation information of the head of the user.
Optionally, the rotation information of the user's head includes a head rotation direction and a head rotation degree. The head rotation direction is the direction, clockwise or counterclockwise, in which the user's head has turned relative to the pose in which the face directly faces the camera. The head rotation degree is the angle by which the head has turned, clockwise or counterclockwise, away from that face-on pose.
When the user's face directly faces the camera, the face area reaches its maximum, the five sense organs are distributed evenly and symmetrically, and their sizes also reach their maxima. When the user's head turns away, the face area decreases relative to the face-on pose, and the positions of the five sense organs shift and become asymmetric. Therefore, by calculating the face area, the sizes of the five sense organs, and the deviation degree of their distribution, the rotation information of the user's head can be determined.
In some embodiments, a correspondence between the user's face area and the deviation degree of the distribution of the five sense organs on one side, and the rotation information of the user's head on the other, may be established in advance. The correspondence may be obtained by repeatedly measuring head rotation against images acquired of the user in different postures, which improves the accuracy of the rotation information and thus of the learning state detection. One simple analytic alternative is sketched below.
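Besides a measured lookup table, one simple analytic heuristic is that a head yawed by angle θ presents a face whose projected area shrinks roughly in proportion to cos θ, so θ can be recovered from the ratio of the current face area to the front-facing reference area. This cosine model is an assumption for illustration, not the application's stated mapping.

```python
import math

def head_rotation_degrees(face_area: float, reference_area: float) -> float:
    """Estimate head rotation from the face-area ratio (cosine heuristic)."""
    ratio = max(0.0, min(1.0, face_area / reference_area))
    return math.degrees(math.acos(ratio))
```

For example, a face area at about 71% of the reference area would map to roughly 45 degrees of rotation, the example angle threshold used in step 302 below.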
FIG. 3 is a flow diagram illustrating a process for determining a learning state of a user based on rotation information of the user's head, according to an embodiment. As shown in fig. 3, determining the user learning state according to the rotation information of the user's head may include the steps of:
302. judging whether the rotation degree of the head of the user is greater than or equal to an angle threshold value, if so, executing a step 304; if not, go to step 306.
The angle threshold is a preset value for the calculated degree by which the user's head deviates from directly facing the camera. For example, the angle threshold may be 45 degrees.
304. Determining the current user learning state as an inattentive state.
In the embodiment of the application, when the rotation degree of the user's head is greater than or equal to the angle threshold, the current learning state of the user is determined to be the mental inattention state. In one embodiment, a rotation degree at or above the threshold indicates that the user's head has turned substantially away and the user cannot be looking at the electronic device to learn. For example, when the user sits in front of the electronic device but turns their head to attend to something other than the electronic device (e.g., playing with a mobile phone, watching a video, or looking out a window) instead of learning, the rotation degree of the head will be judged greater than or equal to the angle threshold, and the current learning state is determined to be the mental inattention state.
306. Judging whether the size of the eye is smaller than or equal to a set value, if so, executing a step 308; if not, go to step 310.
The eye size refers to the image area occupied, within each frame of user image to be identified, by the eyes among the identified five sense organs; the set value is a preset eye image area corresponding to the normal learning state.
308. Determining the current user learning state as the fatigue state.
In the embodiment of the application, when the calculated image area of the eye region in the image to be recognized is less than or equal to the set value, the current learning state of the user can be judged to be the fatigue state: the user is tired, sleepy, or dozing. An eye size at or below the set value means the user's eyes are currently smaller than in the usual normal learning state, i.e., the user is squinting or has closed their eyes and is therefore not in the normal learning state; the current learning state is accordingly judged to be the fatigue state.
310. Determining the current user learning state as the normal learning state.
In the embodiment of the application, when the rotation degree of the head of the user is smaller than the angle threshold and the size of the eyes is larger than a set value, the current learning state of the user is determined to be the normal learning state.
In one embodiment, after the electronic device extracts N frames of user images to be recognized from the M frames and recognizes them respectively, it can judge whether the user's head feature and face feature can be recognized from the N frames. If they can be recognized, the face information of the user is calculated from the recognized head and face features, and whether the current user state is the mental inattention state or the fatigue state is judged from the face information. If the head feature and face feature cannot be recognized from the N frames of user images to be recognized, the current user learning state is determined to be not learning before the device.
In the embodiment of the application, the rotation degree of the head and the eye size are calculated from the user's face area and the deviation degree of the distribution of the five sense organs, so it can be judged whether the user is looking at the electronic device to learn. The current learning state of the user can thus be determined, the accuracy of learning state detection improved, and the current learning state detected and evaluated from multiple aspects; the full decision logic is sketched below.
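The following sketch strings the decisions of steps 302-310 together, using the 45-degree angle threshold from the example above; the eye-size set value, expressed here as a fraction of the normal-state eye area, is an illustrative assumption.

```python
ANGLE_THRESHOLD = 45.0    # degrees; example value from step 302
EYE_SET_FRACTION = 0.6    # assumed fraction of the normal-state eye area

def learning_state(features, rotation_deg, eye_area, normal_eye_area):
    """Map the recognized features to one of the four learning states."""
    if features is None:
        return "not learning before the device"  # no head/face recognized
    if rotation_deg >= ANGLE_THRESHOLD:
        return "mental inattention"              # head turned too far away
    if eye_area <= EYE_SET_FRACTION * normal_eye_area:
        return "fatigue"                         # eyes squinted or closed
    return "normal learning"
```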
FIG. 4 is a flowchart illustrating an exemplary process of outputting a prompt message by an electronic device. As shown in fig. 4, in one embodiment, after the step of determining the learning state of the user based on the face information, the above-mentioned learning state detection method further includes the steps of:
401. and judging whether the current learning state of the user is a mental inattention state or a fatigue state. If yes, go to step 403; otherwise, the flow is ended.
403. The electronic equipment outputs first reminding information.
In the embodiment of the application, after the electronic device determines from the face information that the user learning state is the mental inattention state or the fatigue state, it can output first reminding information, which may be used to prompt the user to adjust their learning state. For example, the first reminding information may be "please concentrate on learning", reminding the user to refocus, return to a normal learning state, and get back to studying.
The electronic device may output the first reminding information in various ways. In one embodiment, the first reminding information can be sound information, which the electronic device outputs through a speaker or other sound device. For example, the electronic device plays a reminding voice through its speaker, with content such as "please concentrate on learning" or "please attend class seriously", to remind the user to focus their attention on learning.
In another embodiment, the first reminding information may be a picture or text message, which the electronic device outputs through a display device such as the display screen. For example, the electronic device displays a picture reminding the user to pay attention to learning, with content such as "please concentrate on learning" or "please attend class seriously", again serving to remind the user to focus on learning.
In yet another embodiment, the first reminding information may be a shaking pop-up window, which the electronic device outputs through a display device such as the display screen. For example, a pop-up window reminding the user to put their attention into learning is displayed, and the user can close it only by clicking a designated position in the window or entering a verification code, which serves to pull the user's attention back to learning.
In the embodiment of the application, when the user is inattentive or tired, the electronic device can remind the user in time, further improving the user's learning effect.
405. After a certain period of time, re-executing the step of acquiring M frames of user images through the camera of the electronic device, and determining the user learning state again.
After the electronic device outputs the first reminding information in step 403, the user is reminded to return to a normal learning state. After a certain period of time has passed since the first reminding information was output, steps 201-209 can be executed again and the user learning state determined anew; the electronic device then takes its next action according to the re-determined state. For example, 15 minutes after outputting the first reminding information, the electronic device re-executes the step of acquiring M frames of user images through its camera, extracts N frames of user images to be identified from the newly acquired M frames, identifies them respectively to obtain the head feature and face feature corresponding to each frame, calculates the user's face information, and determines the user learning state again from that information.
407. It is judged whether the re-determined learning state of the user is still a state of mental inattention or fatigue. If yes, go to step 409; otherwise, the flow is ended.
According to the result of re-determining the user learning state in step 405, it is judged whether the current user learning state is still the mental inattention state or the fatigue state.
This step detects whether the user, after receiving the first reminding information output by the electronic device, has corrected their learning state and returned to learning, making the detection of the user learning state more reasonable and comprehensive.
409. The electronic device outputs second reminding information to the binding terminal.
The binding terminals may be terminal devices held by guardians (such as parents, teachers, and the like) corresponding to users, the electronic devices may pre-establish binding relationships with the binding terminals, store device information (such as mobile phone numbers, IP addresses, and the like) of the binding terminals, and establish communication connections with the binding terminals, and the binding terminals may include various devices or systems, such as touch screen mobile phones, computers, smart watches, and the like.
The second reminding information can be used to remind the guardian corresponding to the binding terminal to pay attention to the user's learning state, so that the guardian can intervene and guide the user to study earnestly and return to a learning state.
In this embodiment of the application, after the electronic device determines again that the current user learning state is the mental inattention state or the fatigue state, it may output second reminding information to the binding terminal, which may be used to prompt the guardian (such as a parent or teacher) corresponding to the binding terminal to intervene and guide the user to learn seriously. For example, the second reminding information may be "Student A is in a fatigue state; please pay attention to Student A's learning state", and the like, so that the guardian knows the user's learning state and reminds the user to correct it.
The electronic device may output the second reminding information in various ways. In one embodiment, the second reminding information may be a text message: the electronic device sends text content to the binding terminal, which displays it on its screen. For example, the text may read "The student's current learning state is inattentive; please remind the student to correct it" or "The student's current learning state is fatigued; please remind the student to correct it", so that the guardian knows the user's learning state and is prompted to intervene and guide the user's learning.
In another embodiment, the second reminding information may be a voice message: the electronic device sends a reminding voice to the binding terminal, which plays it through a sound playing device such as a loudspeaker. The voice content may be, for example, "The student's current learning state is inattentive; please remind the student to correct it" or "The student's current learning state is fatigued; please remind the student to correct it", so that the guardian knows the user's learning state and is prompted to intervene and guide the user's learning.
In yet another embodiment, the second reminding information may be picture information: the electronic device sends an image to the binding terminal, which displays it on its screen. The image content may be, for example, a user image or a short video captured by the camera of the electronic device, allowing the guardian to see the user's learning state and prompting the guardian to intervene and guide the user's learning.
In some embodiments, if it is determined that the current user learning state is not learning before the device, the electronic device may output third reminding information to the binding terminal. The third reminding information can be used to remind the guardian corresponding to the binding terminal to pay attention to and supervise the user's learning state, and in particular that the user is not in front of the electronic device learning.
The electronic device may output the third reminding information in various ways. In one embodiment, the third reminding information may be a text message: the electronic device sends text content to the binding terminal, which displays it on its screen. For example, the text may read "The student is currently not learning in front of the device; please remind the student to start learning as soon as possible", so that the guardian knows the user's learning state and is prompted to intervene and guide the user's learning.
In another embodiment, the third reminding information may be a voice message: the electronic device sends a reminding voice to the binding terminal, which plays it through a sound playing device such as a loudspeaker. The voice content may be, for example, "The student is currently not learning in front of the device; please remind the student to start learning as soon as possible", so that the guardian knows the user's learning state and is prompted to intervene and guide the user's learning.
In yet another embodiment, the third reminding information may be picture information: the electronic device sends an image to the binding terminal, which displays it on its screen. The image content may be, for example, a user image or a short video captured by the camera of the electronic device, allowing the guardian to see the user's learning state and prompting the guardian to intervene and guide the user's learning.
In the embodiment of the application, different reminding information can be output, in a targeted manner, to users in different learning states. By re-determining the user learning state before deciding whether to output the second reminding information, the electronic device avoids sending a reminder to the guardian after the user has already corrected their learning state, which would waste resources and be inconvenient. When the electronic device fails to recognize the user's head feature and face feature, it judges the current learning state to be not learning before the device and sends the third reminding information to the guardian's binding terminal, which improves the effectiveness of the guardian's supervision, lets the guardian learn the user's current state more directly, and helps return the user's attention to learning as soon as possible. This ensures the effectiveness of the reminder function in the learning state detection method while avoiding wasted resources and inconvenience.
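Putting steps 401-409 together, the sketch below shows the remind-then-recheck loop; detect_state(), remind_user(), and notify_guardian() are hypothetical helpers standing in for the detection pipeline and the device's local and binding-terminal output channels, and the 15-minute re-check interval follows the example above.

```python
import time

RECHECK_SECONDS = 15 * 60  # example re-check delay from the text

def supervise_once(detect_state, remind_user, notify_guardian):
    """One pass of the reminder flow in steps 401-409."""
    state = detect_state()
    if state == "not learning before the device":
        notify_guardian("The student is not in front of the device.")  # third reminder
        return
    if state not in ("mental inattention", "fatigue"):
        return  # normal learning state: nothing to do
    remind_user("Please concentrate on learning.")      # first reminding information
    time.sleep(RECHECK_SECONDS)                         # wait, then re-detect
    if detect_state() in ("mental inattention", "fatigue"):
        notify_guardian("The student is still not in a normal learning state.")
```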
Fig. 5 is a block diagram of a learning state detection apparatus in one embodiment. As shown in fig. 5, the learning state detection apparatus includes an image acquisition unit 501, an image extraction unit 503, an image recognition unit 505, a calculation unit 507, and a state determination unit 509.
The image acquisition unit 501 is configured to acquire M frames of user images through a camera of the electronic device.
An image extracting unit 503, configured to extract N frames of user images to be identified from M frames of user images, where M is an integer greater than or equal to 2, and N is a positive integer less than or equal to M.
In one embodiment, the image extracting unit 503 is further configured to extract the N frames of user images to be identified from the M frames at intervals of a fixed number of collected frames.
In one embodiment, the image extracting unit 503 is further configured to extract the N frames of user images to be identified from the M frames at a fixed time interval.
The image recognition unit 505 is configured to respectively recognize the extracted N frames of user images to be recognized, and obtain a head feature and a face feature corresponding to each frame of user image to be recognized.
The calculating unit 507 is configured to calculate face information of the user according to the head feature and the face feature corresponding to each extracted frame of the user image to be recognized.
In one embodiment, the calculating unit 507 is further configured to calculate the face area of the user according to the head position and face region corresponding to the user in each frame of user image to be recognized.
In one embodiment, the calculating unit 507 is further configured to calculate the deviation degree of the distribution of the five sense organs according to the face region and the positions of the five sense organs corresponding to each frame of user image to be recognized.
A state determination unit 509 for determining a user learning state from the face information.
In the embodiment of the application, the head characteristics and the face characteristics of the user are obtained by identifying each frame of user image to be identified, the face information of the user is obtained through calculation, and the learning state of the user is determined according to the face information, so that the accuracy of detecting the learning state of the user can be improved, and the learning effect of the user is improved.
In an embodiment, the state determining unit 509 is further configured to determine rotation information of the head of the user according to the face area of the user and the deviation degree of the distribution of the five sense organs, and determine the learning state of the user according to the rotation information of the head of the user.
In one embodiment, the rotation information includes a degree of rotation, and the face information further includes an eye size.
The state determining unit 509 is further configured to determine that the current user learning state is an inattentive state when the degree of rotation of the head of the user is greater than or equal to the angle threshold.
In one embodiment, the state determining unit 509 is further configured to determine that the current user learning state is a fatigue state when the degree of rotation of the head of the user is less than the angle threshold and the eye size is less than or equal to the set value.
In one embodiment, the state determining unit 509 is further configured to determine that the current user learning state is not in the pre-device learning state when the image recognizing unit 505 fails to recognize the head feature and the facial feature of the user from the N frames of user images to be recognized.
In the embodiment of the application, the rotation degree and the eye size of the head are calculated according to the face area of the user and the deviation degree of the distribution of the five sense organs, and whether the user is looking at the electronic equipment for study can be judged, so that the current study state of the user can be determined, the accuracy of study state detection can be improved, and the study state of the current user can be detected and evaluated in many ways.
In one embodiment, the learning state detection apparatus further includes a reminding unit in addition to the image acquisition unit 501, the image extraction unit 503, the image recognition unit 505, the calculation unit 507, and the state determination unit 509.
And the reminding unit is used for outputting first reminding information by the electronic equipment if the current learning state of the user is determined to be the mental inattention state or the fatigue state, wherein the first reminding information is used for reminding the user to adjust the learning state.
The image extracting unit 503 is further configured to, after a certain period of time has passed since the first reminding information was output, re-execute the step of acquiring M frames of user images through the camera of the electronic device, so that the user learning state is determined again by the state determining unit 509.
And the reminding unit is also used for outputting second reminding information to the binding terminal by the electronic equipment if the re-determined learning state of the user is still in the mental inattention state or the fatigue state, wherein the second reminding information is used for reminding a guardian corresponding to the binding terminal to pay attention to the learning state of the user.
In an embodiment, the prompting unit is further configured to, if it is determined that the current user learning state is not in the pre-device learning state, output, by the electronic device, third prompting information to the binding terminal, where the third prompting information is used to prompt a guardian corresponding to the binding terminal to intervene in guiding the user to learn.
In the embodiment of the application, the effectiveness of the reminding function in the learning state detection method can be ensured by outputting the reminding information, different reminding information can be output to users in different learning states in a targeted manner, and resource waste and inconvenience in use are avoided.
Fig. 6 is a block diagram of an electronic device in one embodiment. As shown in fig. 6, the electronic device may include:
a memory 603 in which executable program code is stored;
a processor 601 coupled to a memory 603;
the processor 601 calls the executable program code stored in the memory 603 to execute all or part of the steps in the methods provided in the embodiments.
The memory 603 referred to in the embodiments of the present application may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, programs, code sets, or instruction sets. For example, the machine-readable storage medium may be RAM (Random Access Memory), ROM (Read-Only Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state disk, any type of optical disc (e.g., a CD or DVD), or the like, or a combination thereof. The memory 603 may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as an audio alert function or an image alert function), instructions for implementing the various method embodiments described above, and the like; the stored-data area may store data created by the electronic device in use, and the like.
Processor 601 may include one or more processing cores. Using various interfaces and lines, the processor 601 connects the parts of the electronic device, and performs its functions and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 603 and by calling the data stored in the memory 603.
Furthermore, the present application further discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method described in the above embodiments.
Embodiments of the present application disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program, when executed by a processor, implements the method as described in the embodiments above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The learning state detection method, the learning state detection device, the electronic device, and the storage medium disclosed in the embodiments of the present application are described in detail above, and specific examples are applied herein to explain the principles and implementations of the present application. Meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A learning state detection method, characterized by comprising:
acquiring M frames of user images through a camera of the electronic equipment;
extracting N frames of user images to be identified from the M frames of user images, wherein M is an integer greater than or equal to 2, and N is a positive integer less than or equal to M;
respectively identifying the N frames of user images to be identified to obtain a head characteristic and a face characteristic corresponding to each frame of user image to be identified;
calculating to obtain face information of the user according to the head characteristics and the face characteristics corresponding to each frame of user image to be recognized, wherein the face information comprises the face area and the deviation degree of the distribution of the five sense organs;
determining a user learning state from the facial information.
2. The method according to claim 1, wherein the extracting of N frames of user images to be recognized from the M frames of user images comprises:
extracting, from the M frames of user images, N frames of user images to be recognized at intervals of a certain number of acquired images; or
extracting, from the M frames of user images, N frames of user images to be recognized at intervals of a certain time period.
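
As an illustration only (the function names are assumptions, not the patent's implementation), the two alternatives of claim 2 correspond to sampling by frame count or by timestamp:

    # Sketch of claim 2's two interval-sampling strategies.
    def sample_by_count(frames, step):
        """Keep every step-th frame: an interval of a certain image count."""
        return frames[::step]

    def sample_by_time(frames, timestamps, interval_s):
        """Keep the first frame of each interval_s-second window."""
        if not frames:
            return []
        picked, next_t = [], timestamps[0]
        for frame, t in zip(frames, timestamps):
            if t >= next_t:
                picked.append(frame)
                next_t = t + interval_s
        return picked
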
3. The method according to claim 1, wherein the head feature comprises a user head position, and the facial feature comprises a face region and facial landmark positions; the calculating of the facial information of the user according to the head feature and the facial feature corresponding to each frame of user image to be recognized comprises:
calculating the user face area according to the user head position and the face region corresponding to each frame of user image to be recognized; and
calculating the deviation degree of the facial landmark distribution according to the user face area and the facial landmark positions corresponding to each frame of user image to be recognized.
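
For illustration, assuming the recognizer returns an axis-aligned face box and a list of landmark points (data formats the claim does not specify), the two calculations of claim 3 could look like:

    # Hypothetical computations for claim 3; box and landmark formats are assumed.
    def face_area(box):
        """Area of a face bounding box given as (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = box
        return max(0, x2 - x1) * max(0, y2 - y1)

    def landmark_deviation(box, landmarks):
        """Mean landmark offset from the box centre, normalised by box size,
        as one plausible 'deviation degree' of the landmark distribution."""
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        w, h = max(x2 - x1, 1), max(y2 - y1, 1)
        return sum(abs(px - cx) / w + abs(py - cy) / h
                   for px, py in landmarks) / len(landmarks)
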
4. The method according to claim 1, wherein the determining of the user learning state according to the facial information comprises:
determining rotation information of the user's head according to the user face area and the deviation degree of the facial landmark distribution, and determining the user learning state according to the rotation information of the user's head.
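
One plausible mapping from these two quantities to a rotation estimate (a heuristic assumed here for illustration, not taken from the patent): the further the head turns, the smaller the visible face area becomes relative to a frontal reference, and the further the landmarks drift off-centre.

    # Assumed heuristic for claim 4; the linear 0-90 degree mapping is illustrative.
    def head_rotation_degree(face_area, deviation, frontal_area):
        shrink = max(0.0, 1.0 - face_area / float(frontal_area))
        return 90.0 * min(1.0, max(shrink, deviation))
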
5. The method according to claim 4, wherein the rotation information comprises a rotation degree, and the facial information further comprises an eye size;
the determining of the user learning state according to the rotation information of the user's head comprises:
when the rotation degree of the user's head is greater than or equal to an angle threshold, determining that the current user learning state is an inattentive state;
when the rotation degree of the user's head is less than the angle threshold and the eye size is less than or equal to a set value, determining that the current user learning state is a fatigue state; and
when the head feature and the facial feature of the user cannot be recognized in the N frames of user images to be recognized, determining that the current user learning state is a not-in-front-of-the-device state.
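
The decision rules of claim 5 reduce to two thresholds plus a no-detection case. In this sketch the threshold values and state labels are placeholders, not values given by the patent:

    ANGLE_THRESHOLD_DEG = 30.0   # hypothetical angle threshold
    EYE_SIZE_SET_VALUE = 0.2     # hypothetical eye-size set value

    def classify(face_info):
        """face_info is None when no head or facial features were recognized."""
        if face_info is None:
            return "not in front of the device"
        rotation_deg, eye_size = face_info
        if rotation_deg >= ANGLE_THRESHOLD_DEG:
            return "inattentive"
        if eye_size <= EYE_SIZE_SET_VALUE:
            return "fatigued"
        return "attentive"
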
6. The method according to claim 5, wherein, after the user learning state is determined according to the rotation information of the user's head, the method further comprises:
if the current user learning state is determined to be the inattentive state or the fatigue state, outputting, by the electronic device, first reminder information for prompting the user to adjust the learning state;
a certain period of time after the first reminder information is output, re-executing the step of acquiring M frames of user images through the camera of the electronic device, and re-determining the user learning state; and
if the re-determined user learning state is still the inattentive state or the fatigue state, outputting, by the electronic device, second reminder information to a bound terminal, the second reminder information being used for reminding a guardian corresponding to the bound terminal to pay attention to the user's learning state.
7. The method according to claim 5, wherein, after the user learning state is determined according to the rotation information of the user's head, the method further comprises:
if the current user learning state is determined to be the not-in-front-of-the-device state, outputting, by the electronic device, third reminder information to the bound terminal, the third reminder information being used for reminding the guardian corresponding to the bound terminal to intervene and guide the user's learning.
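
Taken together, claims 6 and 7 describe an escalation loop: remind the user first, re-check after a delay, and only then notify the bound terminal. A minimal sketch, assuming hypothetical detect, remind_user and notify_guardian callables and an arbitrary re-check delay:

    # Escalation flow of claims 6 and 7; all names and the delay are assumptions.
    import time

    def supervise(detect, remind_user, notify_guardian, recheck_delay_s=60):
        state = detect()
        if state == "not in front of the device":
            notify_guardian("third reminder: please guide the user's learning")
        elif state in ("inattentive", "fatigued"):
            remind_user("first reminder: please adjust your learning state")
            time.sleep(recheck_delay_s)      # wait a certain period of time
            if detect() in ("inattentive", "fatigued"):
                notify_guardian("second reminder: please check on the user")
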
8. A learning state detection apparatus, characterized in that the apparatus comprises:
an image acquisition unit configured to acquire M frames of user images through a camera of an electronic device;
an image extraction unit configured to extract N frames of user images to be recognized from the M frames of user images, wherein M is an integer greater than or equal to 2, and N is a positive integer less than or equal to M;
an image recognition unit configured to recognize each of the N frames of user images to be recognized to obtain a head feature and a facial feature corresponding to each frame of user image to be recognized;
a calculation unit configured to calculate facial information of the user according to the head feature and the facial feature corresponding to each frame of user image to be recognized, wherein the facial information comprises a user face area and a deviation degree of the facial landmark distribution; and
a state determination unit configured to determine a user learning state according to the facial information.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202110154077.4A 2021-02-04 2021-02-04 Learning state detection method and device, electronic equipment and storage medium Pending CN112883851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110154077.4A CN112883851A (en) 2021-02-04 2021-02-04 Learning state detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110154077.4A CN112883851A (en) 2021-02-04 2021-02-04 Learning state detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112883851A true CN112883851A (en) 2021-06-01

Family

ID=76057228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110154077.4A Pending CN112883851A (en) 2021-02-04 2021-02-04 Learning state detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112883851A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113377200A (en) * 2021-06-22 2021-09-10 平安科技(深圳)有限公司 Interactive training method and device based on VR technology and storage medium
CN113377200B (en) * 2021-06-22 2023-02-24 平安科技(深圳)有限公司 Interactive training method and device based on VR technology and storage medium
WO2023159750A1 (en) * 2022-02-25 2023-08-31 平安科技(深圳)有限公司 Method and device for recognizing online state of user, server, and storage medium
CN114724229A (en) * 2022-05-23 2022-07-08 北京英华在线科技有限公司 Learning state detection system and method for online education platform
CN114724229B (en) * 2022-05-23 2022-09-02 北京英华在线科技有限公司 Learning state detection system and method for online education platform
CN115641234A (en) * 2022-10-19 2023-01-24 广州友好教育科技有限公司 Remote education system based on big data
CN115641234B (en) * 2022-10-19 2024-04-26 北京尚睿通教育科技股份有限公司 Remote education system based on big data

Similar Documents

Publication Publication Date Title
CN112883851A (en) Learning state detection method and device, electronic equipment and storage medium
CN112328999B (en) Double-recording quality inspection method and device, server and storage medium
CN110020059B (en) System and method for inclusive CAPTCHA
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
CN111008542A (en) Object concentration analysis method and device, electronic terminal and storage medium
WO2020077874A1 (en) Method and apparatus for processing question-and-answer data, computer device, and storage medium
US20210304339A1 (en) System and a method for locally assessing a user during a test session
WO2020214316A1 (en) Artificial intelligence-based generation of event evaluation report
CN112613440A (en) Attitude detection method and apparatus, electronic device and storage medium
CN111353363A (en) Teaching effect detection method and device and electronic equipment
CN110767005A (en) Data processing method and system based on intelligent equipment special for children
CN111836106A (en) Online video playing monitoring processing method and device, computer and storage medium
Guo et al. How eye gaze feedback changes parent-child joint attention in shared storybook reading? an eye-tracking intervention study
US11819996B2 (en) Expression feedback method and smart robot
CN111182280A (en) Projection method, projection device, sound box equipment and storage medium
CN113283383A (en) Live broadcast behavior recognition method, device, equipment and readable medium
CN111723758B (en) Video information processing method and device, electronic equipment and storage medium
CN112055257B (en) Video classroom interaction method, device, equipment and storage medium
CN113689660A (en) Safety early warning method of wearable device and wearable device
CN111436956A (en) Attention detection method, device, equipment and storage medium
WO2023079370A1 (en) System and method for enhancing quality of a teaching-learning experience
CN110415688B (en) Information interaction method and robot
CN114510700A (en) Course supervision method and related device
CN113553156A (en) Information prompting method and device, computer equipment and computer storage medium
CN112528790A (en) Teaching management method and device based on behavior recognition and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination