CN109978732A - A kind of teaching evaluation method - Google Patents
- Publication number
- CN109978732A (application CN201711500022.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- user
- key point
- eyes
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
Abstract
The present invention discloses a teaching evaluation method, characterized by comprising: obtaining a video recording the user's face; determining the user's face location in the video and extracting the facial key point positions of that face location; judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting the eye-closure, yawn, and nod events; and assessing the teaching quality of the course from those counts. The step of obtaining the video recording the user's facial expression comprises: recording a video containing the user's face with a camera or video recorder. The step of determining the user's face location in the video and extracting the facial key point positions of that face location comprises: extracting the user's face image from the video with a face detector, and calibrating the initial position of each key point in the face image.
Description
Technical field
The present invention relates to a teaching evaluation method.
Background art
With the development of Internet technology, distance education (online education) has gradually entered daily life and become a familiar means of acquiring knowledge. Distance education removes geographic limits, so course capacity is very large and one teacher often corresponds to many students; assessing teaching quality is therefore a major problem.

Traditional teaching evaluation usually considers the classroom atmosphere, the course content, and similar factors, or has students score the teacher's course to judge its quality. However, students often hide their true opinions or do not fully cooperate, which makes evaluation cumbersome and its results poor, and it cannot quickly and accurately reflect the teacher's actual teaching level or the students' actual comprehension.
Summary of the invention
The invention mainly solves the technical problem of providing a teaching evaluation method, comprising: obtaining a video recording the user's face; determining the user's face location in the video and extracting the facial key point positions of that face location; judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting the eye-closure, yawn, and nod events; and assessing the teaching quality of the course from those counts.

The step of obtaining the video recording the user's facial expression comprises: recording a video containing the user's face with a camera or video recorder.
Preferably, the step of determining the user's face location in the video and extracting the facial key point positions of that face location comprises: extracting the user's face image from the video with a face detector, and calibrating the initial position of each key point in the face image.
Preferably, the SURF feature at each key point's initial position is extracted and the SURF features are concatenated into one global feature; based on the global feature, a random forest algorithm obtains the displacement of each key point; and the displacements are computed iteratively to obtain the facial key point positions of the face image.
Preferably, the step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting those events, comprises: computing the key point distance between the upper and lower eyelids of each eye in the facial key point positions; when the key point distance between the upper and lower eyelids falls below a first threshold and its duration exceeds a second threshold, judging that the user is in an eye-closed state.
Preferably, the step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting those events, further comprises: computing the key point distance between the upper and lower lips in the facial key point positions; when the key point distance between the upper and lower lips exceeds a third threshold and its duration exceeds a fourth threshold, judging that the user is in a yawning state.
Preferably, the step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting those events, further comprises: computing the user's head rotation angle from the rotation angle of a standard 3D face and the corresponding mapping matrix; when the change in the head rotation angle reaches a threshold angle within a preset time, judging that the user is in a nodding state; and counting the user's eye-closure, yawn, and nod events.
Preferably, the step of assessing the teaching quality of the course from the user's eye-closure, yawn, and nod counts comprises: detecting whether the eye-closure, yawn, and nod counts exceed preset threshold ranges within a preset time, and evaluating teaching quality according to the interval of the preset threshold ranges into which those counts fall.
Advantages and positive effects of the invention: conventional methods require substantial manpower and evaluate poorly. The invention needs no scoring by students: by capturing video of the user (participant), it obtains the participant's facial expression state, analyzes whether eye-closing, nodding, or yawning appears in the expression, and counts how often each action occurs, thereby assessing teaching effectiveness objectively and fairly from the participant's state. This not only helps the teacher improve lesson preparation based on the evaluation; whether teaching online or offline, the teacher can accurately understand the participants' state through the assessment system and make corresponding adjustments in time, further improving teaching effectiveness.
Specific embodiment
Specific embodiments of the invention are described further below with reference to examples; the following embodiments serve only to illustrate the technical solution of the invention more clearly and are not intended to limit its protection scope.
A teaching evaluation method comprises: obtaining a video recording the user's face; determining the user's face location in the video and extracting the facial key point positions of that face location; judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting the eye-closure, yawn, and nod events; and assessing the teaching quality of the course from those counts.

The step of obtaining the video recording the user's facial expression comprises: recording a video containing the user's face with a camera or video recorder.
The step of determining the user's face location in the video and extracting the facial key point positions of that face location comprises: extracting the user's face image from the video with a face detector, and calibrating the initial position of each key point in the face image.
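As a concrete illustration of this initialization step, the sketch below places initial key points by scaling a mean shape into the detected face box. The point names and mean-shape coordinates are hypothetical (the patent does not specify how the initial positions are calibrated), and the face box is assumed to come from any off-the-shelf face detector.

```python
# Hypothetical mean shape: normalized (x, y) positions inside a unit face box.
MEAN_SHAPE = {
    "left_upper_eyelid": (0.30, 0.38), "left_lower_eyelid": (0.30, 0.44),
    "right_upper_eyelid": (0.70, 0.38), "right_lower_eyelid": (0.70, 0.44),
    "upper_lip": (0.50, 0.72), "lower_lip": (0.50, 0.82),
}

def init_keypoints(face_box):
    """Scale the mean shape into a detector-supplied box (x, y, w, h)."""
    x, y, w, h = face_box
    return {name: (x + u * w, y + v * h) for name, (u, v) in MEAN_SHAPE.items()}
```

Each key point then starts at a plausible position inside the detected face, to be refined by the iterative step described next.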
The SURF feature at each key point's initial position is extracted and the SURF features are concatenated into one global feature; based on the global feature, a random forest algorithm obtains the displacement of each key point; and the displacements are computed iteratively to obtain the facial key point positions of the face image.
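The iterative refinement just described has the shape of cascaded regression, sketched below. `extract_feature` and `predict_offsets` are stand-ins for the SURF descriptor extraction and the trained random forest, neither of which the patent details; only the loop structure (concatenate per-point features into one global feature, predict a displacement per point, apply, repeat) follows the text.

```python
def refine_keypoints(points, extract_feature, predict_offsets, iterations=5):
    """Cascaded-regression skeleton: at each iteration, build one global
    feature from all key points, predict a displacement per point, and
    shift each point by its displacement."""
    for _ in range(iterations):
        global_feature = [v for p in points for v in extract_feature(p)]
        offsets = predict_offsets(global_feature)
        points = [(px + dx, py + dy)
                  for (px, py), (dx, dy) in zip(points, offsets)]
    return points
```

In a real system, `predict_offsets` would be a random forest trained so that repeated application moves the points toward the annotated key point positions.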
The step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting those events, comprises: computing the key point distance between the upper and lower eyelids of each eye in the facial key point positions; when the key point distance between the upper and lower eyelids falls below a first threshold and its duration exceeds a second threshold, judging that the user is in an eye-closed state.
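The eye-closure rule (eyelid distance below a first threshold for a duration above a second threshold) reduces to a run-length check over per-frame eyelid distances. The sketch below is one illustrative reading of that rule; the threshold values are placeholders, since the patent leaves them unspecified.

```python
def count_eye_closures(eyelid_dists, fps, dist_thresh, min_duration_s):
    """Count events where the upper/lower-eyelid key point distance stays
    below dist_thresh for longer than min_duration_s seconds."""
    min_frames = int(min_duration_s * fps)
    events = run = 0
    for d in eyelid_dists:
        run = run + 1 if d < dist_thresh else 0
        if run == min_frames + 1:   # count once, when the run first qualifies
            events += 1
    return events
```

A blink shorter than the second threshold leaves the count unchanged; only a sustained closure registers as an event.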
The step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting those events, further comprises: computing the key point distance between the upper and lower lips in the facial key point positions; when the key point distance between the upper and lower lips exceeds a third threshold and its duration exceeds a fourth threshold, judging that the user is in a yawning state.
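The yawn rule is symmetric to the eye-closure rule: lip distance above a third threshold for a duration above a fourth threshold. Again a hedged sketch with placeholder thresholds:

```python
def count_yawns(lip_dists, fps, dist_thresh, min_duration_s):
    """Count events where the upper/lower-lip key point distance stays
    above dist_thresh for longer than min_duration_s seconds."""
    min_frames = int(min_duration_s * fps)
    events = run = 0
    for d in lip_dists:
        run = run + 1 if d > dist_thresh else 0
        if run == min_frames + 1:   # count once per sustained opening
            events += 1
    return events
```

The duration condition distinguishes a yawn from brief mouth movements such as speech.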
The step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting those events, further comprises: computing the user's head rotation angle from the rotation angle of a standard 3D face and the corresponding mapping matrix; when the change in the head rotation angle reaches a threshold angle within a preset time, judging that the user is in a nodding state; and counting the user's eye-closure, yawn, and nod events.
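One plausible reading of the nodding rule — the head pitch angle, recovered via the standard-3D-face mapping, changes by at least a threshold angle within a preset window — is a sliding-window range check. The window length and angle threshold below are illustrative assumptions.

```python
def count_nods(pitch_angles, fps, window_s, angle_thresh):
    """Count windows in which the head pitch angle (degrees) changes by at
    least angle_thresh; skip past a counted window so one nod is not
    counted twice."""
    w = max(1, int(window_s * fps))
    events = i = 0
    while i + w <= len(pitch_angles):
        window = pitch_angles[i:i + w]
        if max(window) - min(window) >= angle_thresh:
            events += 1
            i += w   # jump past the detected nod
        else:
            i += 1
    return events
```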
The step of assessing the teaching quality of the course from the user's eye-closure, yawn, and nod counts comprises: detecting whether the eye-closure, yawn, and nod counts exceed preset threshold ranges within a preset time, and evaluating teaching quality according to the interval of the preset threshold ranges into which those counts fall.
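Finally, the assessment step maps the event counts to a quality grade by threshold ranges. The patent leaves the ranges unspecified, so the band boundaries and grade labels below are purely illustrative:

```python
def assess_quality(eye_closures, yawns, nods, bands=(2, 5, 10)):
    """Grade teaching quality from fatigue-event counts: fewer events in a
    session imply better engagement. Band boundaries are assumptions."""
    total = eye_closures + yawns + nods
    low, mid, high = bands
    if total <= low:
        return "excellent"
    if total <= mid:
        return "good"
    if total <= high:
        return "fair"
    return "poor"
```

A deployment could instead keep the three counts separate and weight them, but the patent only requires comparing counts against preset ranges.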
A specific embodiment of the invention has been described in detail above, but it is only a preferred embodiment and should not be taken to limit the scope of the invention. All equivalent changes and improvements made within the scope of this patent application remain within the protection scope of this patent.
Claims (7)
1. A teaching evaluation method, characterized by comprising: obtaining a video recording the user's face; determining the user's face location in the video and extracting the facial key point positions of that face location; judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting the eye-closure, yawn, and nod events; and assessing the teaching quality of the course from those counts; wherein the step of obtaining the video recording the user's facial expression comprises: recording a video containing the user's face with a camera or video recorder.
2. The teaching evaluation method according to claim 1, characterized in that the step of determining the user's face location in the video and extracting the facial key point positions of that face location comprises: extracting the user's face image from the video with a face detector, and calibrating the initial position of each key point in the face image.
3. The teaching evaluation method according to claim 2, characterized in that the SURF feature at each key point's initial position is extracted and the SURF features are concatenated into one global feature; based on the global feature, a random forest algorithm obtains the displacement of each key point; and the displacements are computed iteratively to obtain the facial key point positions of the face image.
4. The teaching evaluation method according to claim 1, characterized in that the step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting the eye-closure, yawn, and nod events, comprises: computing the key point distance between the upper and lower eyelids of each eye in the facial key point positions; when the key point distance between the upper and lower eyelids falls below a first threshold and its duration exceeds a second threshold, judging that the user is in an eye-closed state.
5. The teaching evaluation method according to claim 4, characterized in that the step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting the eye-closure, yawn, and nod events, further comprises: computing the key point distance between the upper and lower lips in the facial key point positions; when the key point distance between the upper and lower lips exceeds a third threshold and its duration exceeds a fourth threshold, judging that the user is in a yawning state.
6. The teaching evaluation method according to claim 4, characterized in that the step of judging, from changes in the facial key point positions, whether the user closes their eyes, yawns, or nods, and separately counting the eye-closure, yawn, and nod events, further comprises: computing the user's head rotation angle from the rotation angle of a standard 3D face and the corresponding mapping matrix; when the change in the head rotation angle reaches a threshold angle within a preset time, judging that the user is in a nodding state; and counting the user's eye-closure, yawn, and nod events.
7. The teaching evaluation method according to claim 1, characterized in that the step of assessing the teaching quality of the course from the user's eye-closure, yawn, and nod counts comprises: detecting whether the eye-closure, yawn, and nod counts exceed preset threshold ranges within a preset time, and evaluating teaching quality according to the interval of the preset threshold ranges into which those counts fall.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711500022.4A CN109978732A (en) | 2017-12-28 | 2017-12-28 | A kind of teaching evaluation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109978732A true CN109978732A (en) | 2019-07-05 |
Family
ID=67075781
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711500022.4A Pending CN109978732A (en) | 2017-12-28 | 2017-12-28 | A kind of teaching evaluation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109978732A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111091484A (en) * | 2020-03-19 | 2020-05-01 | 浙江正元智慧科技股份有限公司 | Student learning behavior analysis system based on big data |
CN111680538A (en) * | 2020-04-13 | 2020-09-18 | 广州播种网络科技有限公司 | Method and device for identifying stability of memorial meditation |
CN112883867A (en) * | 2021-02-09 | 2021-06-01 | 广州汇才创智科技有限公司 | Student online learning evaluation method and system based on image emotion analysis |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106228293A (en) | teaching evaluation method and system | |
WO2021077382A1 (en) | Method and apparatus for determining learning state, and intelligent robot | |
US11202594B2 (en) | Stimulus information compiling method and system for tests | |
CN109978732A (en) | A kind of teaching evaluation method | |
Benson et al. | Visual processing of facial distinctiveness | |
CN109922373A (en) | Method for processing video frequency, device and storage medium | |
WO2018233398A1 (en) | Method, device, and electronic apparatus for monitoring learning | |
CN109740466A (en) | Acquisition methods, the computer readable storage medium of advertisement serving policy | |
CN102421007B (en) | Image quality evaluating method based on multi-scale structure similarity weighted aggregate | |
CN107346422A (en) | A kind of living body faces recognition methods based on blink detection | |
CN109670396A (en) | A kind of interior Falls Among Old People detection method | |
CN109657553A (en) | A kind of student classroom attention detection method | |
CN114708658A (en) | Online learning concentration degree identification method | |
CN106778496A (en) | Biopsy method and device | |
CN103617421A (en) | Fatigue detecting method and system based on comprehensive video feature analysis | |
CN111914633B (en) | Face-changing video tampering detection method based on face characteristic time domain stability and application thereof | |
CN109101949A (en) | A kind of human face in-vivo detection method based on colour-video signal frequency-domain analysis | |
CN102096812A (en) | Teacher blackboard writing action detection method for intelligent teaching recording and playing system | |
CN103617644A (en) | Badminton side boundary crossing distinguishing method based on machine vision | |
CN112016429A (en) | Fatigue driving detection method based on train cab scene | |
CN111783687A (en) | Teaching live broadcast method based on artificial intelligence | |
CN113887386B (en) | Fatigue detection method based on multi-feature fusion of deep learning and machine learning | |
CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis | |
CN115205764B (en) | Online learning concentration monitoring method, system and medium based on machine vision | |
CN112185191A (en) | Intelligent digital teaching model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20190705 |
|
WD01 | Invention patent application deemed withdrawn after publication |