CN116543446B - Online learning concentration recognition analysis method based on AI technology - Google Patents

Online learning concentration recognition analysis method based on AI technology

Info

Publication number
CN116543446B
CN116543446B CN202310526648.1A
Authority
CN
China
Prior art keywords
student
instruction
students
detection time
lip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310526648.1A
Other languages
Chinese (zh)
Other versions
CN116543446A (en)
Inventor
谢春燕
朱紫凯
李政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Youtu Education Technology Co ltd
Original Assignee
Zhejiang Youtu Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Youtu Education Technology Co ltd filed Critical Zhejiang Youtu Education Technology Co ltd
Priority to CN202310526648.1A
Publication of CN116543446A
Application granted
Publication of CN116543446B
Legal status: Active (Current)
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/24 - Speech recognition using non-acoustical features
    • G10L15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of concentration recognition, and particularly discloses an online learning concentration recognition analysis method based on AI technology, comprising the steps of student learning video acquisition, student instruction execution analysis, student eye concentration analysis, student lip action analysis, student learning concentration analysis, and non-concentrating student display.

Description

Online learning concentration recognition analysis method based on AI technology
Technical Field
The invention belongs to the technical field of concentration recognition, and relates to an online learning concentration recognition analysis method based on AI technology.
Background
With the rapid development of the Internet and digital technology, content delivery technology has advanced greatly, and online courseware based on Internet learning platforms has become increasingly popular. In particular, online courseware offered by continuing-education institutions to employed personnel can break through space-time limitations and ease the conflict between work and study. During online learning, student concentration is one of the decisive factors in the quality of continuing education, so the concentration of students in the online learning process needs to be analyzed.
The prior art mainly analyzes student concentration during online learning from facial and limb movements. This approach has the following problems: 1. How a student executes the instructions in the courseware indirectly reflects the student's concentration while learning. The current technology does not analyze the student's reaction time to teacher or courseware instructions, the time taken to complete those instructions, or the student's emotion while executing them; as a result, the student's learning initiative, sensitivity to teacher or courseware instructions, and interest in learning cannot be revealed, and the student's concentration during learning cannot be guaranteed.
2. In continuing education for employed personnel, interactive learning is one of the important learning modes. The current technology does not analyze student concentration from the mouth shapes and audio of the students' lips, so the degree of engagement of students in the learning process cannot be known, and to a certain extent no effective reference can be provided for the platform to subsequently develop targeted courseware.
Disclosure of Invention
The invention aims to provide an online learning concentration recognition analysis method based on AI technology, which solves the problems noted in the background art.
The aim of the invention can be achieved by the following technical scheme: an online learning concentration recognition analysis method based on AI technology comprises the following steps:
Step one, student learning video acquisition: acquire the learning video of each student for the target courseware.
Step two, student instruction execution analysis: analyze each student's execution evaluation coefficient for the target courseware instructions.
Step three, student eye concentration analysis: analyze the eye-level learning concentration evaluation coefficient corresponding to each student.
Step four, student lip action analysis: analyze the lip-level concentration evaluation coefficient corresponding to each student.
Step five, student learning concentration analysis: analyze the comprehensive learning concentration evaluation coefficient corresponding to each student and screen out the students who are not concentrating on learning.
Step six, non-concentrating student display: display the students who are not concentrating on learning.
Optionally, the execution evaluation coefficient of each student for the target courseware instructions is analyzed as follows: based on each student's learning video for the target courseware, each instruction corresponding to the target courseware is read, and the reaction duration, completion duration and facial images of each student for each instruction are then obtained.
According to the reaction duration and completion duration of each student for each instruction, the instruction reaction evaluation coefficient corresponding to each student is obtained by analysis, where i denotes the number corresponding to each student, i = 1, 2, ..., n.
According to the facial images of each student for each instruction, the instruction emotion evaluation coefficient corresponding to each student is obtained by analysis.
The execution evaluation coefficient of each student for the target courseware instructions is then obtained by a calculation formula in which ε_1 and ε_2 are respectively the set weight factors corresponding to the instruction reaction evaluation coefficient and the instruction emotion evaluation coefficient.
Optionally, the instruction reaction evaluation coefficient corresponding to each student is obtained by analysis as follows: the reaction duration and completion duration of each student for each instruction are recorded as T_ij and T'_ij respectively, where j denotes the number corresponding to each instruction, j = 1, 2, ..., m, and m denotes the number of instructions; the instruction reaction evaluation coefficient corresponding to each student is then obtained according to a calculation formula in which λ_1 and λ_2 are respectively the set weight factors corresponding to the reaction duration and the instruction completion duration.
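For illustration only (not part of the original disclosure), the following Python sketch shows one plausible reading of the instruction reaction evaluation coefficient as a normalized weighted combination of reaction and completion durations; the reference durations, weight values and the exact functional form are assumptions, since the patent's formula image is not reproduced in this text.

```python
# Hypothetical sketch: instruction reaction evaluation coefficient for one student.
# Assumes shorter reaction/completion durations (relative to set reference durations)
# indicate better responsiveness; the patent's actual formula is not shown in this text.

def instruction_reaction_coefficient(reaction_durations, completion_durations,
                                     t_ref=5.0, t_ref_complete=20.0,
                                     lambda1=0.5, lambda2=0.5):
    """reaction_durations, completion_durations: lists of T_ij and T'_ij (seconds)
    over the m instructions of one student; returns a coefficient in (0, 1]."""
    m = len(reaction_durations)
    score = 0.0
    for t, t_done in zip(reaction_durations, completion_durations):
        t, t_done = max(t, 1e-6), max(t_done, 1e-6)   # guard against zero durations
        # Each term decreases as the student takes longer than the reference duration.
        score += lambda1 * min(t_ref / t, 1.0) + lambda2 * min(t_ref_complete / t_done, 1.0)
    return score / m  # average over the m instructions
```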
Optionally, the instruction emotion evaluation coefficient corresponding to each student is obtained by analysis as follows: the cheek area is obtained from each facial image of each student for each instruction and recorded as S_iju, where u denotes the number corresponding to each facial image, u = 1, 2, ..., v; at the same time, the certificate photo corresponding to each student is obtained from the cloud database, and the standard cheek area corresponding to each student is obtained from it and recorded as S'_i.
Based on the certificate photo corresponding to each student, the eye contour type corresponding to each student is obtained, and the reference pleasant eye contour corresponding to that eye contour type is extracted from the cloud database and recorded for the student. At the same time, the eye contour is obtained from each facial image of each student for each instruction and compared with the corresponding reference pleasant eye contour, and the area in which the eye contour in each facial image coincides with the corresponding reference pleasant eye contour is obtained and recorded as S''_iju.
The positions of the designated mouth corner and the lower-lip center point of each student are obtained from each facial image of each student for each instruction and imported into a set two-dimensional coordinate system, giving the coordinates of the designated mouth corner and the lower-lip center point in each facial image, recorded as (x'_iju, y'_iju) and (x''_iju, y''_iju) respectively.
A first instruction emotion evaluation coefficient corresponding to each student is then obtained according to a calculation formula in which γ_1 and γ_2 are respectively the set weight factors corresponding to the cheek area and the eye height difference, and S''' is the set reference coincident eye-contour area.
The horizontal distance between the designated mouth corner and the lower-lip center point of each student is obtained from the student's certificate photo and recorded as Δx_i.
A second instruction emotion evaluation coefficient corresponding to each student is then obtained according to a calculation formula in which γ_3 and γ_4 are respectively the set weight factors corresponding to the horizontal distance and the height difference between the designated mouth corner and the lower-lip center point, and Δy_i is the set reference height difference between the mouth corner and the lower-lip center point.
The instruction emotion evaluation coefficient corresponding to each student is then obtained by a calculation formula in which η_1 and η_2 are respectively the set weight factors corresponding to the first and second instruction emotion evaluation coefficients.
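As an illustrative aid only, the sketch below approximates an emotion score from the facial measurements named above (cheek area, coincident eye-contour area, and mouth-corner geometry). The weighting scheme, reference values and functional form are assumptions standing in for the patent's unreproduced formulas for the first and second instruction emotion evaluation coefficients.

```python
# Hypothetical sketch: instruction emotion evaluation from simple facial geometry.
# Raised cheeks, eye contours close to the "pleased" reference, and lifted mouth corners
# all increase the score. gamma and eta play the role of the weight factors in the text.

def emotion_coefficient(cheek_areas, eye_overlap_areas, mouth_corner_pts, lower_lip_pts,
                        std_cheek_area, ref_eye_overlap, ref_dx, ref_dy,
                        gamma=(0.5, 0.5, 0.5, 0.5), eta=(0.5, 0.5)):
    """Per-image lists belong to the facial images of one instruction for one student.
    mouth_corner_pts / lower_lip_pts are (x, y) tuples in the set 2D coordinate system."""
    n = len(cheek_areas)
    # First coefficient: cheek area relative to the standard cheek area, plus how much
    # of the eye contour coincides with the reference pleasant eye contour.
    first = sum(gamma[0] * (s / std_cheek_area) + gamma[1] * (s2 / ref_eye_overlap)
                for s, s2 in zip(cheek_areas, eye_overlap_areas)) / n
    # Second coefficient: mouth-corner position relative to the lower-lip center point,
    # compared with the reference horizontal distance and height difference.
    second = 0.0
    for (xc, yc), (xl, yl) in zip(mouth_corner_pts, lower_lip_pts):
        second += gamma[2] * (abs(xc - xl) / ref_dx) + gamma[3] * ((yc - yl) / ref_dy)
    second /= n
    return eta[0] * first + eta[1] * second
```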
Optionally, the eye-level learning concentration evaluation coefficient corresponding to each student is analyzed as follows: based on each student's learning video, the eye blink frequency and the number of pupil rotations of the student in each acquisition period are obtained, and the eye-level learning concentration evaluation coefficient α_i corresponding to each student is then calculated, where Q_it' and D_it' are respectively the eye blink frequency and the number of pupil rotations of the i-th student in the t'-th acquisition period, Q and D are respectively the set reference eye blink frequency and the set reference number of pupil rotations, μ_1 and μ_2 are respectively the set weight factors corresponding to the blink frequency and the number of pupil rotations, and t' denotes the number corresponding to each acquisition period, t' = 1', 2', ..., b'.
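For illustration only, the following sketch combines per-period blink and pupil-rotation counts into an eye-level coefficient. The penalty form and the reference values Q and D below are placeholders; the patent's actual formula is not reproduced in this text.

```python
# Hypothetical sketch: eye-level concentration from blink frequency and pupil rotations.
# Counts well above the set references are treated as signs of wandering attention.

def eye_concentration_coefficient(blink_freqs, pupil_rotations, Q=15.0, D=30.0,
                                  mu1=0.5, mu2=0.5):
    """blink_freqs[t], pupil_rotations[t]: measurements over the b acquisition periods
    of one student; returns a coefficient in (0, 1]."""
    b = len(blink_freqs)
    score = 0.0
    for q, d in zip(blink_freqs, pupil_rotations):
        q, d = max(q, 1e-6), max(d, 1e-6)   # guard against zero counts
        # Penalize excessive blinking / gaze wandering relative to the references.
        score += mu1 * min(Q / q, 1.0) + mu2 * min(D / d, 1.0)
    return score / b
```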
Optionally, the lip-level concentration evaluation coefficient corresponding to each student is analyzed as follows: the lip mouth shapes and audio corresponding to each student in each detection period are extracted from the student's learning video, and a first lip concentration evaluation coefficient corresponding to each student, recorded as φ'_i, is analyzed from the lip mouth shapes and audio corresponding to each detection period and the instruction settings corresponding to the target courseware in each detection period.
A second lip concentration evaluation coefficient corresponding to each student, recorded as φ''_i, is analyzed from the audio corresponding to each detection period and the instruction settings corresponding to the target courseware in each detection period.
The lip-level concentration evaluation coefficient φ_i corresponding to each student is then obtained according to a calculation formula in which the set weight factors correspond respectively to the first and second lip concentration evaluation coefficients, and e denotes the natural constant.
Optionally, the first lip concentration evaluation coefficient corresponding to each student is analyzed as follows: the lip mouth shape corresponding to each student in each detection period is compared with the instruction setting corresponding to the target courseware at each detection point; if the lip mouth shape corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, that lip mouth shape is judged to be a simulated mouth shape. In this way the number of simulated mouth shapes corresponding to each student in each detection period is counted and recorded as N_it, where t denotes the number corresponding to each detection period, t = 1, 2, ....
The audio corresponding to each student in each detection period is processed by speech recognition to obtain each sentence corresponding to the student in that period, and each sentence is compared with the instruction setting corresponding to the target courseware in that detection period; if a sentence corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, that sentence is recorded as a pronunciation sentence. In this way the number of pronunciation sentences corresponding to each student in each detection period is counted and recorded as N'_it.
The first lip concentration evaluation coefficient φ'_i corresponding to each student is then obtained according to a calculation formula in which N1_it and N2_it are respectively the number of lip mouth shapes and the number of sentences corresponding to the i-th student in the t-th detection period, θ_1 and θ_2 are respectively the set weight factors corresponding to the number of lip mouth shapes and the number of sentences, and e denotes the natural constant.
Optionally, the second lip concentration evaluation coefficient corresponding to each student is analyzed as follows: each pronunciation word is extracted from each sentence corresponding to each student in each detection period and compared with the instruction setting corresponding to the target courseware in that detection period; if a pronunciation word corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, that word is recorded as an associated pronunciation word of the student for that detection period. In this way the number of associated pronunciation words corresponding to each student in each detection period is counted.
The second lip concentration evaluation coefficient φ''_i corresponding to each student is then obtained according to a calculation formula whose terms are the number of pronunciation words of the i-th student in the t-th detection period, the number of teaching words corresponding to the t-th detection period of the target courseware, the set weight factors σ_1 and σ_2 corresponding respectively to the number of pronunciation words and the number of teaching words, and the set reference numbers of words shared between the associated pronunciation words and the pronunciation words and between the associated pronunciation words and the teaching words.
Optionally, the comprehensive learning concentration evaluation coefficient corresponding to each student is analyzed as follows: each student's execution evaluation coefficient for the target courseware instructions, eye-level learning concentration evaluation coefficient α_i and lip-level concentration evaluation coefficient φ_i are substituted into a calculation formula to obtain the comprehensive learning concentration evaluation coefficient ψ corresponding to each student, where τ_1, τ_2 and τ_3 are respectively the set weight factors corresponding to the execution evaluation coefficient, the eye-level learning concentration evaluation coefficient and the lip-level concentration evaluation coefficient.
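As an illustrative aid only, a plain weighted sum is one natural reading of how the three coefficients could be combined; the weights below are placeholders and the patent's actual combination formula is not reproduced here.

```python
# Hypothetical sketch: comprehensive learning concentration as a weighted combination
# of the execution, eye-level and lip-level coefficients for one student.

def comprehensive_concentration(execution_coeff, eye_coeff, lip_coeff,
                                tau=(0.4, 0.3, 0.3)):
    """tau plays the role of the set weight factors tau_1, tau_2, tau_3."""
    return tau[0] * execution_coeff + tau[1] * eye_coeff + tau[2] * lip_coeff
```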
Compared with the prior art, the invention has the following beneficial effects:
1. According to the online learning concentration recognition analysis method based on AI technology, student concentration during learning is analyzed from how each student executes the target courseware instructions, which overcomes the limitations of the current technology, realizes intelligent analysis of students' learning concentration during online learning, effectively improves students' concentration during courseware learning, and also greatly improves students' enthusiasm and responsiveness while learning.
2. In the student instruction execution analysis, the invention analyzes how each student executes the target courseware instructions, which effectively ensures the students' initiative in learning and improves their sensitivity to courseware instructions and their interest in learning.
3. In the student eye concentration analysis, the invention analyzes the concentration evaluation coefficient corresponding to each student, which effectively ensures the students' concentration during learning and reflects how seriously they engage with the learning.
4. In the student lip action analysis, the invention analyzes the lip concentration evaluation coefficient corresponding to each student, which makes it possible to accurately understand the students' engagement during learning and, to a certain extent, provides an effective reference for the platform to subsequently develop targeted courses.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the steps of the method of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, an online learning concentration recognition analysis method based on AI technology includes the following steps: step one, student learning video acquisition: acquire the learning video of each student for the target courseware.
In the above, the learning video of each student corresponding to the target courseware is obtained from the management background.
Step two, student instruction execution analysis: analyze each student's execution evaluation coefficient for the target courseware instructions.
In the above, each student's execution evaluation coefficient for the target courseware instructions is analyzed as follows: based on each student's learning video for the target courseware, each instruction corresponding to the target courseware is obtained, and the reaction duration, completion duration and facial images of each student for each instruction are then obtained.
In the above, each instruction corresponding to the target courseware is acquired as follows: the instruction settings corresponding to the target courseware, such as mouth shapes, audio, words, sentences and pronunciation words, are read from the management background.
In the above, the reaction duration, completion duration and facial images are acquired as follows: the time corresponding to each instruction of the target courseware is obtained from the instructions, which gives the time corresponding to each instruction in each student's learning video; that time is taken as the acquisition start point of each instruction. Each end instruction corresponding to the target courseware is analyzed in the same way as the instructions, and the end point of each instruction in each student's learning video is analyzed in the same way as the acquisition start points. Based on the acquisition start point and end point of each instruction, each instruction video segment is extracted from each student's learning video, and each instruction video segment of each student is divided to obtain the action pictures corresponding to each instruction of each student.
A standard execution action picture corresponding to each instruction is extracted from the cloud database and compared with each action picture corresponding to each instruction of each student. If an action picture of a student for an instruction is the same as the standard execution action picture of that instruction, the action picture is taken as an instruction picture of the student for that instruction, giving each instruction picture corresponding to each instruction of each student. The time point of each instruction picture is obtained at the same time, and the interval duration of each instruction picture is obtained from the acquisition start point of the instruction in the student's learning video. The interval durations of the instruction pictures of each instruction of each student are compared, and the shortest and longest interval durations are screened out; the shortest interval duration is taken as the student's reaction duration for the instruction, and the longest interval duration is taken as the student's completion duration for the instruction.
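For illustration only, the sketch below shows how the reaction and completion durations of one instruction could be derived from the matched action pictures. frame_matches is an assumed, caller-supplied helper standing in for the comparison with the standard execution action picture; it is not defined in the patent text.

```python
# Hypothetical sketch: reaction and completion durations for one instruction segment.

def reaction_and_completion(frames_with_times, instruction_start, standard_picture,
                            frame_matches):
    """frames_with_times: list of (timestamp, frame) pairs for one instruction video
    segment of one student. Returns (reaction_duration, completion_duration) in seconds,
    or (None, None) if no frame matches the standard execution action picture."""
    intervals = [t - instruction_start
                 for t, frame in frames_with_times
                 if frame_matches(frame, standard_picture)]
    if not intervals:
        return None, None
    # Shortest interval: how quickly the student first performed the instruction.
    # Longest interval: when the student last performed it, taken as completion.
    return min(intervals), max(intervals)
```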
Each action picture within the reaction duration of each instruction of each student is taken as a facial image of that student for that instruction.
According to the reaction duration and completion duration of each student for each instruction, the instruction reaction evaluation coefficient corresponding to each student is obtained by analysis, where i denotes the number corresponding to each student, i = 1, 2, ..., n.
According to the facial images of each student for each instruction, the instruction emotion evaluation coefficient corresponding to each student is obtained by analysis.
The execution evaluation coefficient of each student for the target courseware instructions is then obtained by a calculation formula in which ε_1 and ε_2 are respectively the set weight factors corresponding to the instruction reaction evaluation coefficient and the instruction emotion evaluation coefficient.
In another specific embodiment, the instruction reaction evaluation coefficient corresponding to each student is obtained by analysis as follows: the reaction duration and completion duration of each student for each instruction are recorded as T_ij and T'_ij respectively, where j denotes the number corresponding to each instruction, j = 1, 2, ..., m, and m denotes the number of instructions; the instruction reaction evaluation coefficient corresponding to each student is then obtained according to a calculation formula in which λ_1 and λ_2 are respectively the set weight factors corresponding to the reaction duration and the instruction completion duration.
In the above, the instruction emotion evaluation coefficient corresponding to each student is obtained by analysis as follows: the cheek area is obtained from each facial image of each student for each instruction and recorded as S_iju, where u denotes the number corresponding to each facial image, u = 1, 2, ..., v; at the same time, the certificate photo corresponding to each student is obtained from the cloud database, and the standard cheek area corresponding to each student is obtained from it and recorded as S'_i.
Based on the certificate photo corresponding to each student, the eye contour type corresponding to each student is obtained, and the reference pleasant eye contour corresponding to that eye contour type is extracted from the cloud database and recorded for the student. At the same time, the eye contour is obtained from each facial image of each student for each instruction and compared with the corresponding reference pleasant eye contour, and the area in which the eye contour in each facial image coincides with the corresponding reference pleasant eye contour is obtained and recorded as S''_iju.
The positions of the designated mouth corner and the lower-lip center point of each student are obtained from each facial image of each student for each instruction and imported into a set two-dimensional coordinate system, giving the coordinates of the designated mouth corner and the lower-lip center point in each facial image, recorded as (x'_iju, y'_iju) and (x''_iju, y''_iju) respectively.
A first instruction emotion evaluation coefficient corresponding to each student is then obtained according to a calculation formula in which γ_1 and γ_2 are respectively the set weight factors corresponding to the cheek area and the eye height difference, and S''' is the set reference coincident eye-contour area.
The horizontal distance between the designated mouth corner and the lower-lip center point of each student is obtained from the student's certificate photo and recorded as Δx_i.
A second instruction emotion evaluation coefficient corresponding to each student is then obtained according to a calculation formula in which γ_3 and γ_4 are respectively the set weight factors corresponding to the horizontal distance and the height difference between the designated mouth corner and the lower-lip center point, and Δy_i is the set reference height difference between the mouth corner and the lower-lip center point.
The instruction emotion evaluation coefficient corresponding to each student is then obtained by a calculation formula in which η_1 and η_2 are respectively the set weight factors corresponding to the first and second instruction emotion evaluation coefficients.
In the student instruction execution analysis, the invention analyzes how each student executes the target courseware instructions, which effectively ensures the students' initiative during online learning and improves their sensitivity to teacher instructions and their interest in learning.
Step three, student eye concentration analysis: analyze the eye-level learning concentration evaluation coefficient corresponding to each student.
In a specific embodiment, the eye-level learning concentration evaluation coefficient corresponding to each student is analyzed as follows: based on each student's learning video, the eye blink frequency and the number of pupil rotations of the student in each acquisition period are obtained, and the eye-level learning concentration evaluation coefficient α_i corresponding to each student is then calculated, where Q_it' and D_it' are respectively the eye blink frequency and the number of pupil rotations of the i-th student in the t'-th acquisition period, Q and D are respectively the set reference eye blink frequency and the set reference number of pupil rotations, μ_1 and μ_2 are respectively the set weight factors corresponding to the blink frequency and the number of pupil rotations, and t' denotes the number corresponding to each acquisition period, t' = 1', 2', ..., b'.
In the above, the eye blink frequency and the number of pupil rotations of each student in each acquisition period are obtained as follows: each student's learning video is divided into sub-video segments according to preset acquisition time points, and the eye blink frequency and the number of pupil rotations of each student in each acquisition period are then counted.
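For illustration only, the sketch below counts blinks per acquisition period from per-frame eye openness using the common eye-aspect-ratio (EAR) heuristic, which the patent does not itself specify; eye_openness is an assumed helper returning the EAR of the student's eyes for a frame.

```python
# Hypothetical sketch: counting blinks per acquisition period.
# A blink is counted each time the eye aspect ratio drops below the threshold.

def blinks_per_period(frames_by_period, eye_openness, ear_threshold=0.2):
    """frames_by_period: list of frame lists, one list per acquisition period."""
    counts = []
    for frames in frames_by_period:
        blinks, closed = 0, False
        for frame in frames:
            if eye_openness(frame) < ear_threshold:
                if not closed:          # eye just closed: start of a blink
                    blinks += 1
                    closed = True
            else:
                closed = False          # eye open again
        counts.append(blinks)
    return counts
```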
In the student eye concentration analysis, the invention analyzes the concentration evaluation coefficient corresponding to each student, which effectively ensures the students' concentration during learning and reflects how seriously they engage with the learning.
Step four, student lip action analysis: analyze the lip-level concentration evaluation coefficient corresponding to each student.
In a specific embodiment, the lip-level concentration evaluation coefficient corresponding to each student is analyzed as follows: the lip mouth shapes and audio corresponding to each student in each detection period are extracted from the student's learning video, and the instruction settings corresponding to the target courseware in each detection period are read at the same time.
In the above, each student's learning video is divided into detection video segments according to preset detection time points, giving the lip mouth shapes and audio corresponding to each student in each detection period.
A first lip concentration evaluation coefficient corresponding to each student, recorded as φ'_i, is analyzed from the lip mouth shapes and audio corresponding to each student in each detection period and the instruction settings corresponding to the target courseware in each detection period.
A second lip concentration evaluation coefficient corresponding to each student, recorded as φ''_i, is analyzed from the audio corresponding to each student in each detection period and the instruction settings corresponding to the target courseware in each detection period.
The lip-level concentration evaluation coefficient φ_i corresponding to each student is then obtained according to a calculation formula in which the set weight factors correspond respectively to the first and second lip concentration evaluation coefficients, and e denotes the natural constant.
In another specific embodiment, the first lip concentration evaluation coefficient corresponding to each student is analyzed as follows: the lip mouth shape corresponding to each student in each detection period is compared with the instruction setting corresponding to the target courseware at each detection point; if the lip mouth shape corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, that lip mouth shape is judged to be a simulated mouth shape. In this way the number of simulated mouth shapes corresponding to each student in each detection period is counted and recorded as N_it, where t denotes the number corresponding to each detection period, t = 1, 2, ....
The audio corresponding to each student in each detection period is processed by speech recognition to obtain each sentence corresponding to the student in that period, and each sentence is compared with the instruction setting corresponding to the target courseware in that detection period; if a sentence corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, that sentence is recorded as a pronunciation sentence. In this way the number of pronunciation sentences corresponding to each student in each detection period is counted and recorded as N'_it.
The first lip concentration evaluation coefficient φ'_i corresponding to each student is then obtained according to a calculation formula in which N1_it and N2_it are respectively the number of lip mouth shapes and the number of sentences corresponding to the i-th student in the t-th detection period, θ_1 and θ_2 are respectively the set weight factors corresponding to the number of lip mouth shapes and the number of sentences, and e denotes the natural constant.
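For illustration only, the sketch below counts the simulated mouth shapes and pronunciation sentences for one detection period. mouth_shape_matches and sentence_matches are assumed, caller-supplied helpers (e.g. lip-shape classification and text matching after speech recognition); the patent does not specify how the matching itself is implemented.

```python
# Hypothetical sketch: per-period counts feeding the first lip concentration coefficient.

def lip_counts_for_period(lip_shapes, recognized_sentences, instruction_setting,
                          mouth_shape_matches, sentence_matches):
    """Returns (N_it, N'_it): the number of simulated mouth shapes and the number of
    pronunciation sentences for one student in one detection period."""
    simulated = sum(1 for shape in lip_shapes
                    if mouth_shape_matches(shape, instruction_setting))
    pronounced = sum(1 for sentence in recognized_sentences
                     if sentence_matches(sentence, instruction_setting))
    return simulated, pronounced
```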
In yet another specific embodiment, the second lip concentration evaluation coefficient corresponding to each student is analyzed as follows: each pronunciation word is extracted from each sentence corresponding to each student in each detection period and compared with the instruction setting corresponding to the target courseware in that detection period; if a pronunciation word corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, that word is recorded as an associated pronunciation word of the student for that detection period. In this way the number of associated pronunciation words corresponding to each student in each detection period is counted.
The second lip concentration evaluation coefficient φ''_i corresponding to each student is then obtained according to a calculation formula whose terms are the number of pronunciation words of the i-th student in the t-th detection period, the number of teaching words corresponding to the t-th detection period of the target courseware, the set weight factors σ_1 and σ_2 corresponding respectively to the number of pronunciation words and the number of teaching words, and the set reference numbers of words shared between the associated pronunciation words and the pronunciation words and between the associated pronunciation words and the teaching words.
In the student lip action analysis, the invention analyzes the lip concentration evaluation coefficient corresponding to each student, which makes it possible to accurately understand the students' engagement during learning and, to a certain extent, provides an effective reference for the platform to subsequently develop targeted courseware.
Step five, student learning concentration analysis: analyze the comprehensive learning concentration evaluation coefficient corresponding to each student and screen out the students who are not concentrating on learning.
In a specific embodiment, the comprehensive learning concentration evaluation coefficient corresponding to each student is analyzed as follows: each student's execution evaluation coefficient for the target courseware instructions, eye-level learning concentration evaluation coefficient α_i and lip-level concentration evaluation coefficient φ_i are substituted into a calculation formula to obtain the comprehensive learning concentration evaluation coefficient ψ corresponding to each student, where τ_1, τ_2 and τ_3 are respectively the set weight factors corresponding to the execution evaluation coefficient, the eye-level learning concentration evaluation coefficient and the lip-level concentration evaluation coefficient.
In another specific embodiment, the students who are not concentrating on learning are screened out as follows: the comprehensive learning concentration evaluation coefficient corresponding to each student is compared with the set reference comprehensive learning concentration evaluation coefficient; if a student's comprehensive learning concentration evaluation coefficient is smaller than the set reference value, that student is regarded as a student who is not concentrating on learning, giving all students who are not concentrating on learning.
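For illustration only, the screening step amounts to a simple threshold comparison, as in the sketch below; the reference value used here is a placeholder, not a value taken from the patent.

```python
# Hypothetical sketch: screening out students who are not concentrating on learning.

def screen_non_concentrating(comprehensive_coeffs, reference=0.6):
    """comprehensive_coeffs: dict mapping student id -> comprehensive coefficient psi.
    Returns the ids of students whose coefficient falls below the set reference."""
    return [student_id for student_id, psi in comprehensive_coeffs.items()
            if psi < reference]
```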
Step six, non-concentrating student display: display the students who are not concentrating on learning.
According to the embodiment of the invention, student concentration during learning is analyzed from how each student executes the target courseware instructions and from the concentration of the eyes and lips, which overcomes the limitations of the prior art, realizes intelligent analysis of students' learning concentration during online learning, effectively improves students' concentration during courseware learning, and also greatly improves students' enthusiasm and responsiveness while learning.
The foregoing is merely illustrative of the invention, and those skilled in the art can make various modifications, additions and substitutions to the described embodiments without departing from the scope of the invention as defined in the accompanying claims.

Claims (6)

1. An online learning concentration recognition analysis method based on AI technology, characterized by comprising the following steps:
step one, student learning video acquisition: acquiring learning videos of students corresponding to target courseware;
step two, student instruction execution analysis: analyzing the execution evaluation coefficients of students on the target courseware instructions;
the method is characterized by comprising the following steps of analyzing the execution evaluation coefficients of students on target courseware instructions, wherein the specific analysis process is as follows:
reading each instruction corresponding to the target courseware based on the learning video of each student corresponding to the target courseware, and further obtaining the reaction time, the completion time and each face image of each instruction corresponding to each student;
according to the reaction time and the completion time of each instruction corresponding to each student, analyzing to obtain an instruction reaction evaluation coefficient corresponding to each student, and recording asWherein i represents the number corresponding to each student, i=1, 2. N;
according to the facial images of the instructions corresponding to the students, analyzing to obtain instruction emotion assessment coefficients corresponding to the students, and recording as
By calculation formulaObtaining the execution evaluation coefficient of each student to the target courseware instruction +.>Wherein ε is 1 、ε 2 Respectively setting weight factors corresponding to the instruction response evaluation coefficients and the instruction emotion evaluation coefficients;
the instruction response evaluation coefficient corresponding to each student is obtained through analysis, and the specific analysis process is as follows:
the reaction time and the completion time of each student corresponding to each instruction are respectively recorded as Tij and T '' ij Wherein j represents the number corresponding to each instruction, j=1, 2. Once again, m is chosen, further according to the calculation formulaObtaining instruction response evaluation coefficients corresponding to students>Where m represents the number of instructions, lambda 1 、λ 2 Respectively setting weight factors corresponding to the reaction time length and the instruction completion time length;
the analysis obtains instruction emotion assessment coefficients corresponding to the students, and the specific analysis process is as follows:
the cheek area is obtained from the facial images of the students corresponding to the instructions and is marked as S iju Where u represents the number corresponding to each face image, u=1, 2..v., simultaneously, obtaining certificate photos corresponding to the students from the cloud database, further obtain the corresponding standard cheek area of each student from the above-mentioned area and mark as S' i
Based on the certificate photo corresponding to each student, obtaining the eye contour type corresponding to each student, further extracting the reference pleasant eye contour corresponding to each student eye contour type from the cloud database, recording the reference pleasant eye contour corresponding to each student, simultaneously obtaining the eye contour from each facial image corresponding to each instruction of each student, comparing the eye contour in each facial image corresponding to each instruction of each student with the corresponding reference pleasant eye contour, obtaining the area of each eye contour corresponding to each instruction of each student which is the same as the contour of the corresponding reference pleasant eye contour, and recording the area as S '' iju
The positions corresponding to the specified mouth angle and the center point of the lower lip of each student are obtained from the facial images corresponding to each instruction of each student, and then are imported into a set two-dimensional coordinate system, so that the coordinates of the specified mouth angle and the center point of the lower lip in the facial images corresponding to each instruction of each student are obtained and respectively recorded as (x '' iju ,y′ iju ) And (x) iju ,y″ iju );
According to the calculation formulaObtaining a first instruction emotion estimation coefficient corresponding to each student>Wherein gamma is 1 、γ 2 Respectively setting weight factors corresponding to cheek areas and eye height differences, wherein S' "is the same area of a set reference eye contour;
obtaining the horizontal distance between the designated mouth angle of each student and the center point of the lower lip from the certificate photo corresponding to each student, and recording as deltax i
According to the calculation formulaObtaining second instruction emotion estimation coefficient corresponding to each student>Wherein gamma is 3 、γ 4 Respectively setting weight factors delta y corresponding to the horizontal distance and the height difference between the designated mouth angle and the center point of the lower lip i The height difference between the set reference mouth angle and the center point of the lower lip is set;
by calculation formulaObtaining instruction emotion assessment coefficients corresponding to students>Wherein eta 1 、η 2 Respectively setting weight factors corresponding to the first instruction emotion estimation coefficient and the second instruction emotion estimation coefficient;
step three, eye concentration analysis of students: analyzing eye level learning concentration evaluation coefficients corresponding to students;
step four, student lip action analysis: analyzing lip layer concentration evaluation coefficients corresponding to students;
step five, student study concentration analysis: analyzing comprehensive learning concentration evaluation coefficients corresponding to all students, and screening out students not concentrating on learning;
step six, non-concentrating student display: displaying the students who are not concentrating on learning.
2. The AI-technology-based online learning concentration recognition analysis method of claim 1, wherein the eye-level learning concentration evaluation coefficient corresponding to each student is analyzed as follows:
based on the learning video of each student, obtaining the eye blink frequency and the number of pupil rotations of the student in each acquisition period, and then calculating the eye-level learning concentration evaluation coefficient α_i corresponding to each student, wherein Q_it' and D_it' are respectively the eye blink frequency and the number of pupil rotations of the i-th student in the t'-th acquisition period, Q and D are respectively the set reference eye blink frequency and the set reference number of pupil rotations, μ_1 and μ_2 are respectively the set weight factors corresponding to the blink frequency and the number of pupil rotations, and t' represents the number corresponding to each acquisition period, t' = 1', 2', ..., b'.
3. The AI-technology-based online learning concentration recognition analysis method of claim 2, wherein the lip-level concentration evaluation coefficient corresponding to each student is analyzed as follows:
extracting the lip mouth shapes and audio corresponding to each student in each detection period from the learning video of each student, and simultaneously reading the instruction settings corresponding to the target courseware in each detection period;
analyzing a first lip concentration evaluation coefficient corresponding to each student, recorded as φ'_i, from the lip mouth shapes and audio corresponding to each student in each detection period and the instruction settings corresponding to the target courseware in each detection period;
analyzing a second lip concentration evaluation coefficient corresponding to each student, recorded as φ''_i, from the audio corresponding to each student in each detection period and the instruction settings corresponding to the target courseware in each detection period;
obtaining, according to a calculation formula, the lip-level concentration evaluation coefficient φ_i corresponding to each student, wherein the set weight factors correspond respectively to the first lip concentration evaluation coefficient and the second lip concentration evaluation coefficient, and e represents the natural constant.
4. The AI-technology-based online learning concentration recognition analysis method of claim 3, wherein the first lip concentration evaluation coefficient corresponding to each student is analyzed as follows:
comparing the lip mouth shape corresponding to each student in each detection period with the instruction setting corresponding to the target courseware at each detection point; if the lip mouth shape corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, judging that lip mouth shape to be a simulated mouth shape; counting in this way the number of simulated mouth shapes corresponding to each student in each detection period, recorded as N_it, wherein t represents the number corresponding to each detection period, t = 1, 2, ...;
obtaining, through speech recognition of the audio corresponding to each student in each detection period, each sentence corresponding to each student in each detection period, and comparing each sentence with the instruction setting corresponding to the target courseware in each detection period; if a sentence corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, recording that sentence as a pronunciation sentence; counting in this way the number of pronunciation sentences corresponding to each student in each detection period, recorded as N'_it;
obtaining, according to a calculation formula, the first lip concentration evaluation coefficient φ'_i corresponding to each student, wherein N1_it and N2_it are respectively the number of lip mouth shapes and the number of sentences corresponding to the i-th student in the t-th detection period, θ_1 and θ_2 are respectively the set weight factors corresponding to the number of lip mouth shapes and the number of sentences, and e represents the natural constant.
5. The AI-technology-based online learning concentration recognition analysis method of claim 4, wherein the second lip concentration evaluation coefficient corresponding to each student is analyzed as follows:
extracting each pronunciation word from each sentence corresponding to each student in each detection period, and comparing each pronunciation word with the instruction setting corresponding to the target courseware in each detection period; if a pronunciation word corresponding to a student in a detection period matches the instruction setting corresponding to the target courseware in that detection period, recording that word as an associated pronunciation word of the student for that detection period; counting in this way the number of associated pronunciation words corresponding to each student in each detection period;
obtaining, according to a calculation formula, the second lip concentration evaluation coefficient φ''_i corresponding to each student, whose terms are the number of pronunciation words of the i-th student in the t-th detection period, the number of teaching words corresponding to the t-th detection period of the target courseware, the set weight factors σ_1 and σ_2 corresponding respectively to the number of pronunciation words and the number of teaching words, and the set reference numbers of words shared between the associated pronunciation words and the pronunciation words and between the associated pronunciation words and the teaching words.
6. The AI-technology-based online learning concentration recognition analysis method of claim 3, wherein the comprehensive learning concentration evaluation coefficient corresponding to each student is analyzed as follows:
substituting each student's execution evaluation coefficient for the target courseware instructions, eye-level learning concentration evaluation coefficient α_i and lip-level concentration evaluation coefficient φ_i into a calculation formula to obtain the comprehensive learning concentration evaluation coefficient ψ corresponding to each student, wherein τ_1, τ_2 and τ_3 are respectively the set weight factors corresponding to the execution evaluation coefficient, the eye-level learning concentration evaluation coefficient and the lip-level concentration evaluation coefficient.
CN202310526648.1A 2023-05-11 2023-05-11 Online learning concentration recognition analysis method based on AI technology Active CN116543446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310526648.1A CN116543446B (en) 2023-05-11 2023-05-11 Online learning concentration recognition analysis method based on AI technology

Publications (2)

Publication Number Publication Date
CN116543446A CN116543446A (en) 2023-08-04
CN116543446B true CN116543446B (en) 2023-09-29

Family

ID=87450221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310526648.1A Active CN116543446B (en) 2023-05-11 2023-05-11 Online learning concentration recognition analysis method based on AI technology

Country Status (1)

Country Link
CN (1) CN116543446B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522815A (en) * 2018-10-26 2019-03-26 深圳博为教育科技有限公司 A kind of focus appraisal procedure, device and electronic equipment
CN111243362A (en) * 2020-03-23 2020-06-05 咸阳师范学院 Computer multimedia teaching device for experiments
KR102245319B1 (en) * 2020-11-17 2021-04-28 주식회사 서경산업 System for analysis a concentration of learner
CN112990723A (en) * 2021-03-24 2021-06-18 武汉伽域信息科技有限公司 Online education platform student learning force analysis feedback method based on user learning behavior deep analysis
CN114663734A (en) * 2022-03-14 2022-06-24 东北农业大学 Online classroom student concentration degree evaluation method and system based on multi-feature fusion

Also Published As

Publication number Publication date
CN116543446A (en) 2023-08-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant