CN113033514B - Classroom student activeness evaluation method based on network - Google Patents

Classroom student activeness evaluation method based on network

Info

Publication number
CN113033514B
CN113033514B (application CN202110566458.3A)
Authority
CN
China
Prior art keywords
image
value
student
eyeball
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110566458.3A
Other languages
Chinese (zh)
Other versions
CN113033514A (en)
Inventor
陈军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Plaso Network Technology Co ltd
Original Assignee
Nanjing Plaso Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Plaso Network Technology Co ltd filed Critical Nanjing Plaso Network Technology Co ltd
Priority to CN202110566458.3A
Publication of CN113033514A
Application granted
Publication of CN113033514B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor

Abstract

The invention discloses a network-based classroom student activeness evaluation method, characterized by comprising the following steps: S1, recording and uploading live video of the student during the learning process; S2, extracting a frame image; S3, obtaining a preprocessed face image I[x, y]; S4, establishing a vertical gray integral projection curve to obtain a rising point n1 and a falling point n2; S5, cropping the left and right boundaries of the image to obtain a cropped face image; S6, performing binarization to obtain a binarized image; S7, obtaining eyeball pixel points, taking the midpoint of the eyeball pixel points in each run, and calculating the length L of the line connecting the two eyeball midpoints; S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data. The invention can effectively reflect the student's learning activeness.

Description

Classroom student activeness evaluation method based on network
Technical Field
The invention relates to a network-based classroom student activeness evaluation method.
Background
With the advance of education informatization in China, online classroom teaching has developed rapidly. However, the technical tools adopted by online classrooms are not effectively integrated, so technology that should bring remarkable change to the classroom has yet to unlock the huge potential of online teaching.
Compared with an offline class, an online class allows students to select a preferred teacher as needed, replay the course multiple times, and make flexible use of their time.
However, online classes also have disadvantages, such as difficulty in ensuring learning quality, which is of great importance at certain stages of education.
Disclosure of Invention
The invention aims to provide a method capable of effectively improving online classroom learning quality.
To solve the above problems, the invention provides a network-based method for evaluating the activeness of students in class, comprising the following steps:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the image extracted in step S2, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y], and obtaining from the curve the rising point n1 at which the minimum function value begins a sustained rise and the falling point n2 at which the minimum function value ends a sustained fall;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points, taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data.
As a further improvement of the present invention, in step S4 the vertical gray integral projection curve function of the preprocessed image I[x, y] on the M x N image is defined as

V(x) = \sum_{y=1}^{M} I[x, y]

and an M x D image block is shifted over the preprocessed image I[x, y] to calculate the value of the vertical gray integral projection curve function by the formula

V(x) = \frac{1}{M \times D} \sum_{u=x}^{x+D-1} \sum_{y=1}^{M} I[u, y]

where M is the height of the preprocessed image I, N is its width, D is the width of the image block, and D takes the value 5 (the original equation images are lost; the standard integral-projection forms are reconstructed here).
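A minimal Python sketch of step S4 under the reconstruction above: it computes the vertical gray integral projection with an M x D sliding block (D = 5) and takes n1 and n2 as the start of the first sustained rise and the end of the last sustained fall. The function names and the simple rise/fall detection are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def vertical_projection(img: np.ndarray, d: int = 5) -> np.ndarray:
    """Vertical gray integral projection of an M x N grayscale image,
    averaged over an M x D block shifted along the x axis (D = 5)."""
    m, n = img.shape
    col_mean = img.sum(axis=0) / m                      # per-column mean gray
    return np.array([col_mean[x:x + d].mean() for x in range(n - d + 1)])

def boundary_points(proj: np.ndarray) -> tuple:
    """n1: start of the first sustained rise (left face boundary);
    n2: end of the last sustained fall (right face boundary)."""
    diff = np.diff(proj)
    n1 = int(np.argmax(diff > 0))                       # first rising index
    n2 = len(proj) - 1 - int(np.argmax(diff[::-1] < 0)) # last falling index
    return n1, n2
```

Cropping the image to the columns between n1 and n2 then yields the face region used in step S5.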
As a further improvement of the present invention, in step S7, when scanning the binarized image, if the number of a pixel point's neighboring pixels whose RGB value is 0 exceeds a set number, the RGB values of all neighboring pixels of that pixel point are set to 0.
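A sketch of this neighbour rule, assuming an 8-neighbourhood and an illustrative set number (the patent states neither):

```python
import numpy as np

def fill_dark_neighbours(binary: np.ndarray, set_number: int = 6) -> np.ndarray:
    """If more than `set_number` of a pixel's 8 neighbours have value 0,
    set all of its neighbours to 0, closing small gaps in the eye region."""
    out = binary.copy()
    h, w = binary.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = binary[y - 1:y + 2, x - 1:x + 2]
            dark = int((patch == 0).sum()) - int(binary[y, x] == 0)
            if dark > set_number:
                out[y - 1:y + 2, x - 1:x + 2] = 0       # zero the neighbourhood
    return out
```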
As a further improvement of the invention, the method further comprises a step S7.5 of measuring the maximum height difference of the eyeball pixel points on each side to obtain an eyeball height value H;
in step S8, statistics are also collected on the height values H, values below the set normal range of H are marked as abnormal height data, and the number of abnormal height data is counted.
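Given the eyeball pixel coordinates found in step S7, the height value H of step S7.5 reduces to the vertical spread per side; a minimal sketch, with the (row, col) layout assumed:

```python
import numpy as np

def eyeball_height(eye_pixels: np.ndarray) -> int:
    """eye_pixels: (k, 2) array of (row, col) coordinates for one eye's
    dark pixels. Returns H, the maximum height difference."""
    rows = eye_pixels[:, 0]
    return int(rows.max() - rows.min())
```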
As a further improvement of the present invention, the step S2 includes:
S2.1, dividing the received video into a plurality of video segments according to a set time;
S2.2, storing the video segments and marking them in chronological order; the earliest-stored video segment is extracted in step S3 according to its chronological mark and is deleted after extraction.
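Steps S2.1 and S2.2 describe first-in, first-out segment handling; a sketch of that behaviour (the storage backend and naming are assumptions):

```python
from collections import deque

segments = deque()                  # chronological marks: oldest on the left

def store_segment(path):
    """S2.2: store a fixed-length segment, marked by arrival order."""
    segments.append(path)

def pop_oldest_segment():
    """Hand the earliest-stored segment to frame extraction, then drop it."""
    return segments.popleft() if segments else None
```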
As a further improvement of the present invention, the normalization process in step S3 includes:
S3.1, performing initial positioning of the eyes: an eye image is arbitrarily cut from the stored face sample set as the eye template image, and the correlation coefficient is defined as

r = \frac{E[I_T I] - E[I_T]\,E[I]}{\sigma(I_T)\,\sigma(I)}

where I_T is the eye template image, I is the extracted face image, E[\cdot] is the mean operator, I_T I is the product of the images, and \sigma(\cdot) is the standard mean square error of the image region; the position at which the correlation coefficient between the extracted face image and the eye template reaches its maximum is taken as the position of the eyes;
S3.2, after the eyes are positioned, rotating the image according to the line connecting the two eyes, so that this line coincides with the horizontal.
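Under the correlation-coefficient reconstruction above, this is ordinary normalised template matching; OpenCV's TM_CCOEFF_NORMED computes a mean-removed product divided by the standard deviations, matching that definition. A sketch of S3.1 and S3.2 (the rotation pivot and names are assumptions):

```python
import cv2
import numpy as np

def locate_eye(face, template):
    """S3.1: return the position of maximum correlation coefficient."""
    scores = cv2.matchTemplate(face, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc                              # (x, y) of the best match

def align_eyes(face, left_eye, right_eye):
    """S3.2: rotate so the line joining the two eyes becomes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))      # tilt of the eye line
    h, w = face.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(face, rot, (w, h))
```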
As a further improvement of the present invention, in step S3.1, the cut eye template image is scaled to the proportions 0.6, 0.8, 1.0, 1.2, and 1.4 to obtain corrected eye template images; the corrected eye template whose maximum correlation coefficient at its matching position with the face image is largest is taken as the finally selected corrected eye template, and the scale of the face image is adjusted according to the scale of the selected corrected eye template.
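A sketch of the five-scale correction; resizing the face image by the reciprocal of the winning scale is one reasonable reading of "adjusting the scale of the face image":

```python
import cv2

SCALES = (0.6, 0.8, 1.0, 1.2, 1.4)

def best_template_scale(face, template):
    """Pick the corrected template scale with the largest maximum
    correlation coefficient, then rescale the face image to match."""
    best_s, best_score = 1.0, -1.0
    for s in SCALES:
        t = cv2.resize(template, None, fx=s, fy=s)
        score = float(cv2.matchTemplate(face, t, cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_s, best_score = s, score
    face_adj = cv2.resize(face, None, fx=1.0 / best_s, fy=1.0 / best_s)
    return face_adj, best_s
```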
As a further improvement of the present invention, an 8-bit grayscale image is obtained after the graying processing in step S3.
The method has the advantage that video of the student is captured throughout the learning process and the real-time video is uploaded and analyzed via streaming media technology; images are extracted at a set time interval, the positions of the two eyeballs in each image are calibrated, and the distance between the two eyeballs is measured; the eyeball distances are collected statistically to obtain a reasonable value range, eyeball distance data below the reasonable range are marked as abnormal, and the number of occurrences of abnormal data reflects the student's learning activeness.
Detailed Description
The technical solution of the present invention is further explained by the following embodiments.
The invention comprises the following steps:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the extracted image, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y], and obtaining from the curve the rising point n1 at which the minimum function value begins a sustained rise and the falling point n2 at which the minimum function value ends a sustained fall;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points, taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data (steps S6 to S8 are illustrated in the sketch below).
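A minimal sketch of steps S6 to S8 on one frame, as referenced above: Otsu binarisation stands in for the unspecified threshold, row runs of dark pixels within a length range stand in for the "continuous threshold range", and the bound on normal lengths is an assumed parameter.

```python
import cv2
import numpy as np

def eye_line_length(face_gray, min_run=3, max_run=40):
    """S6-S7: binarise, collect midpoints of dark pixel runs, and return
    the length L between the leftmost and rightmost midpoints (or None)."""
    _, binary = cv2.threshold(face_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mids = []
    for y in range(binary.shape[0]):            # scan each row left to right
        run = []
        for x in range(binary.shape[1]):
            if binary[y, x] == 0:
                run.append((x, y))
            elif run:
                if min_run <= len(run) <= max_run:
                    mids.append(run[len(run) // 2])   # midpoint of the run
                run = []
    if len(mids) < 2:
        return None
    (x1, y1), (x2, y2) = min(mids), max(mids)   # leftmost / rightmost midpoints
    return float(np.hypot(x2 - x1, y2 - y1))

def count_abnormal(lengths, low):
    """S8: lengths below the normal range are marked as abnormal."""
    return sum(1 for L in lengths if L is not None and L < low)
```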
As a further improvement of the present invention, in step S4 the vertical gray integral projection curve function of the preprocessed image I[x, y] on the M x N image is defined as

V(x) = \sum_{y=1}^{M} I[x, y]

and an M x D image block is shifted over the preprocessed image I[x, y] to calculate the value of the vertical gray integral projection curve function by the formula

V(x) = \frac{1}{M \times D} \sum_{u=x}^{x+D-1} \sum_{y=1}^{M} I[u, y]

where M is the height of the preprocessed image I, N is its width, D is the width of the image block, and D takes the value 5.
As a further improvement of the present invention, in step S7, when scanning the binarized image, if the number of a pixel point's neighboring pixels whose RGB value is 0 exceeds a set number, the RGB values of all neighboring pixels of that pixel point are set to 0.
As a further improvement of the present invention, the method further comprises a step S7.5 of measuring the maximum height difference of the eyeball pixel points on each side to obtain an eyeball height value H;
in step S8, statistics may also be collected on the height values H, values below the set normal range of H marked as abnormal height data, and the number of abnormal height data counted.
As a further improvement of the present invention, the step S2 includes:
S2.1, dividing the received video into a plurality of video segments according to a set time;
S2.2, storing the video segments and marking them in chronological order; the earliest-stored video segment is extracted in step S3 according to its chronological mark and is deleted after extraction.
As a further improvement of the present invention, the normalization process in step S3 includes:
S3.1, performing initial positioning of the eyes: an eye image is arbitrarily cut from the stored face sample set as the eye template image, and the correlation coefficient is defined as

r = \frac{E[I_T I] - E[I_T]\,E[I]}{\sigma(I_T)\,\sigma(I)}

where I_T is the eye template image, I is the extracted face image, E[\cdot] is the mean operator, I_T I is the product of the images, and \sigma(\cdot) is the standard mean square error of the image region; the position at which the correlation coefficient between the extracted face image and the eye template reaches its maximum is taken as the position of the eyes;
S3.2, after the eyes are positioned, rotating the image according to the line connecting the two eyes, so that this line coincides with the horizontal.
As a further improvement of the present invention, in step S3.1, the cut eye template image is scaled to the proportions 0.6, 0.8, 1.0, 1.2, and 1.4 to obtain corrected eye template images; the corrected eye template whose maximum correlation coefficient at its matching position with the face image is largest is taken as the finally selected corrected eye template, and the scale of the face image is adjusted according to the scale of the selected corrected eye template.
As a further improvement of the present invention, an 8-bit grayscale image is obtained after the graying processing in step S3.
Embodiment One:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the extracted image, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y], and obtaining from the curve the rising point n1 at which the minimum function value begins a sustained rise and the falling point n2 at which the minimum function value ends a sustained fall;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points, taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data.
Embodiment Two:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the extracted image, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y]: the vertical gray integral projection curve function on the M x N image is defined as

V(x) = \sum_{y=1}^{M} I[x, y]

and an M x D image block is shifted over the preprocessed image I[x, y] to calculate the value of the vertical gray integral projection curve function by the formula

V(x) = \frac{1}{M \times D} \sum_{u=x}^{x+D-1} \sum_{y=1}^{M} I[u, y]

where M is the height of the preprocessed image I, N is its width, and D, the width of the image block, takes the value 5 in the vertical gray integral projection curve;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points, taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data.
Embodiment Three:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the extracted image, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y]: the vertical gray integral projection curve function on the M x N image is defined as

V(x) = \sum_{y=1}^{M} I[x, y]

and an M x D image block is shifted over the preprocessed image I[x, y] to calculate the value of the vertical gray integral projection curve function by the formula

V(x) = \frac{1}{M \times D} \sum_{u=x}^{x+D-1} \sum_{y=1}^{M} I[u, y]

where M is the height of the preprocessed image I, N is its width, and D, the width of the image block, takes the value 5 in the vertical gray integral projection curve;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points; in addition, when the number of a pixel point's neighboring pixels whose RGB value is 0 exceeds a set number, the RGB values of all neighboring pixels of that pixel point are set to 0; taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data.
Embodiment Four:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the extracted image, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y]: the vertical gray integral projection curve function on the M x N image is defined as

V(x) = \sum_{y=1}^{M} I[x, y]

and an M x D image block is shifted over the preprocessed image I[x, y] to calculate the value of the vertical gray integral projection curve function by the formula

V(x) = \frac{1}{M \times D} \sum_{u=x}^{x+D-1} \sum_{y=1}^{M} I[u, y]

where M is the height of the preprocessed image I, N is its width, and D, the width of the image block, takes the value 5 in the vertical gray integral projection curve;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points; in addition, when the number of a pixel point's neighboring pixels whose RGB value is 0 exceeds a set number, the RGB values of all neighboring pixels of that pixel point are set to 0; taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S7.5, measuring the maximum height difference of the eyeball pixel points on each side to obtain an eyeball height value H;
S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data; meanwhile, collecting statistics on the height values H, marking values below the set normal range as abnormal height data, and counting the number of abnormal height data.
Embodiment Five:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the extracted image, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S3.1, performing initial positioning of the eyes: an eye image is arbitrarily cut from the stored face sample set as the eye template image, and the correlation coefficient is defined as

r = \frac{E[I_T I] - E[I_T]\,E[I]}{\sigma(I_T)\,\sigma(I)}

where I_T is the eye template image, I is the extracted face image, E[\cdot] is the mean operator, I_T I is the product of the images, and \sigma(\cdot) is the standard mean square error of the image region; the position at which the correlation coefficient between the extracted face image and the eye template reaches its maximum is taken as the position of the eyes;
S3.2, after the eyes are positioned, rotating the image according to the line connecting the two eyes, so that this line coincides with the horizontal.
Further, in step S3.1, the cut eye template image is scaled to the proportions 0.6, 0.8, 1.0, 1.2, and 1.4 to obtain corrected eye template images; the corrected eye template whose maximum correlation coefficient at its matching position with the face image is largest is taken as the finally selected corrected eye template, and the scale of the face image is adjusted according to the scale of the selected corrected eye template.
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y]: the vertical gray integral projection curve function on the M x N image is defined as

V(x) = \sum_{y=1}^{M} I[x, y]

and an M x D image block is shifted over the preprocessed image I[x, y] to calculate the value of the vertical gray integral projection curve function by the formula

V(x) = \frac{1}{M \times D} \sum_{u=x}^{x+D-1} \sum_{y=1}^{M} I[u, y]

where M is the height of the preprocessed image I, N is its width, and D, the width of the image block, takes the value 5 in the vertical gray integral projection curve;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points; in addition, when the number of a pixel point's neighboring pixels whose RGB value is 0 exceeds a set number, the RGB values of all neighboring pixels of that pixel point are set to 0; taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S7.5, measuring the maximum height difference of the eyeball pixel points on each side to obtain an eyeball height value H;
S8, collecting statistics on the lengths L, marking values below the set normal range as abnormal length data, and counting the number of abnormal length data; meanwhile, collecting statistics on the height values H, marking values below the set normal range as abnormal height data, and counting the number of abnormal height data.
The technical principle of the present invention is described above in connection with specific embodiments. The description is made for the purpose of illustrating the principles of the invention and should not be construed in any way as limiting the scope of the invention. Based on the explanations herein, those skilled in the art will be able to conceive of other embodiments of the present invention without inventive effort, which would fall within the scope of the present invention.

Claims (7)

1. A network-based classroom student activeness evaluation method, characterized by comprising the following steps:
S1, recording and uploading live video of the student during the learning process;
S2, extracting at least one frame image from the live video at a set time interval and uploading it;
S3, preprocessing the image extracted in step S2, the preprocessing comprising graying and normalization, to obtain a preprocessed face image I[x, y];
S4, establishing a vertical gray integral projection curve for the preprocessed image I[x, y], and obtaining from the curve the rising point n1 at which the minimum function value begins a sustained rise and the falling point n2 at which the minimum function value ends a sustained fall;
S5, cropping the image with the rising point n1 as the left boundary of the face image and the falling point n2 as the right boundary, obtaining a cropped face image;
S6, performing binarization on the cropped face image to obtain a binarized image;
S7, scanning the binarized image from left to right or from right to left, defining continuous runs of pixel points whose RGB value is 0 within a threshold range as eyeball pixel points, taking the midpoint of the eyeball pixel points in each such run, and calculating the length L of the line connecting the two eyeball midpoints;
S8, collecting statistics on the lengths L, marking values below the set normal range of L as abnormal length data, counting the number of abnormal length data, and reflecting the student's learning activeness according to the number of abnormal data;
wherein:
in step S4, the vertical gray integral projection curve function of the preprocessed image I[x, y] on the M x N image is defined as

V(x) = \sum_{y=1}^{M} I[x, y]

and an M x D image block is shifted over the preprocessed image I[x, y] to calculate the value of the vertical gray integral projection curve function by the formula

V(x) = \frac{1}{M \times D} \sum_{u=x}^{x+D-1} \sum_{y=1}^{M} I[u, y]

where M is the height of the preprocessed image I, N is its width, D is the width of the image block, and D takes the value 5.
2. The method as claimed in claim 1, wherein in step S7, when scanning the binarized image, if the number of a pixel point's neighboring pixels whose RGB value is 0 exceeds a set number, the RGB values of all neighboring pixels of that pixel point are set to 0.
3. The method as claimed in claim 2, further comprising a step S7.5 of measuring the maximum height difference of the eyeball pixel points on each side to obtain an eyeball height value H,
wherein in step S8, statistics are also collected on the height values H, values below the set normal range of H are marked as abnormal height data, and the number of abnormal height data is counted.
4. The network-based classroom student activeness evaluation method according to claim 3, wherein the step S2 includes:
S2.1, dividing the received video into a plurality of video segments according to a set time;
S2.2, storing the video segments and marking them in chronological order; the earliest-stored video segment is extracted in step S3 according to its chronological mark and is deleted after extraction.
5. The network-based classroom student activeness evaluation method according to claim 4, wherein the normalization process in step S3 includes:
S3.1, performing initial positioning of the eyes: an eye image is arbitrarily cut from the stored face sample set as the eye template image, and the correlation coefficient is defined as

r = \frac{E[I_T I] - E[I_T]\,E[I]}{\sigma(I_T)\,\sigma(I)}

where I_T is the eye template image, I is the extracted face image, E[\cdot] is the mean operator, I_T I is the product of the images, and \sigma(\cdot) is the standard mean square error of the image region; the position at which the correlation coefficient between the extracted face image and the eye template reaches its maximum is taken as the position of the eyes;
S3.2, after the eyes are positioned, rotating the image according to the line connecting the two eyes, so that this line coincides with the horizontal.
6. The method as claimed in claim 5, wherein in step S3.1, the cut eye template image is scaled to the proportions 0.6, 0.8, 1.0, 1.2, and 1.4 to obtain corrected eye template images; the corrected eye template whose maximum correlation coefficient at its matching position with the face image is largest is taken as the finally selected corrected eye template, and the proportion of the face image is adjusted according to the proportional size of the selected corrected eye template.
7. The network-based classroom student activeness evaluation method according to claim 6, wherein an 8-bit grayscale image is obtained after the graying processing in step S3.
CN202110566458.3A 2021-05-24 2021-05-24 Classroom student activeness evaluation method based on network Active CN113033514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110566458.3A CN113033514B (en) 2021-05-24 2021-05-24 Classroom student activeness evaluation method based on network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110566458.3A CN113033514B (en) 2021-05-24 2021-05-24 Classroom student activeness evaluation method based on network

Publications (2)

Publication Number Publication Date
CN113033514A CN113033514A (en) 2021-06-25
CN113033514B (en) 2021-08-17

Family

ID=76455710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110566458.3A Active CN113033514B (en) 2021-05-24 2021-05-24 Classroom student activeness evaluation method based on network

Country Status (1)

Country Link
CN (1) CN113033514B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101305913B (en) * 2008-07-11 2010-06-09 华南理工大学 Face beauty assessment method based on video
CN108256392A (en) * 2016-12-29 2018-07-06 广州映博智能科技有限公司 Pupil region localization method based on projecting integral and area grayscale extreme value
CN111860423A (en) * 2020-07-30 2020-10-30 江南大学 Improved human eye positioning method of integral projection method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739546A (en) * 2008-11-05 2010-06-16 沈阳工业大学 Image cross reconstruction-based single-sample registered image face recognition method
US10533850B2 (en) * 2013-07-12 2020-01-14 Magic Leap, Inc. Method and system for inserting recognized object data into a virtual world
CN106682603B (en) * 2016-12-19 2020-01-21 陕西科技大学 Real-time driver fatigue early warning system based on multi-source information fusion
CN109446880A (en) * 2018-09-05 2019-03-08 广州维纳斯家居股份有限公司 Intelligent subscriber participation evaluation method, device, intelligent elevated table and storage medium
CN112329631A (en) * 2020-11-05 2021-02-05 浙江点辰航空科技有限公司 Method for carrying out traffic flow statistics on expressway by using unmanned aerial vehicle


Also Published As

Publication number Publication date
CN113033514A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN109034036B (en) Video analysis method, teaching quality assessment method and system and computer-readable storage medium
CN106778676B (en) Attention assessment method based on face recognition and image processing
CN109460762B (en) Answer sheet scoring method based on image recognition
CN106033535B (en) Electronic paper marking method
CN112183238B (en) Remote education attention detection method and system
CN108876195A (en) A kind of intelligentized teachers ' teaching quality evaluating system
CN113762107A (en) Object state evaluation method and device, electronic equipment and readable storage medium
CN108345833A (en) The recognition methods of mathematical formulae and system and computer equipment
CN111144151A (en) High-speed dynamic bar code real-time detection method based on image recognition
CN110443800A (en) The evaluation method of video image quality
CN110929562A (en) Answer sheet identification method based on improved Hough transformation
CN105678301B (en) method, system and device for automatically identifying and segmenting text image
CN111444389A (en) Conference video analysis method and system based on target detection
CN109886945A (en) Based on contrast enhancing without reference contrast distorted image quality evaluating method
CN111259844B (en) Real-time monitoring method for examinees in standardized examination room
CN112801965A (en) Sintering belt foreign matter monitoring method and system based on convolutional neural network
CN106033534B (en) Electronic paper marking method based on straight line detection
CN109034590A (en) A kind of intelligentized teaching quality evaluation for teachers management system
CN117152648A (en) Auxiliary teaching picture recognition device based on augmented reality
CN113592839B (en) Distribution network line typical defect diagnosis method and system based on improved fast RCNN
CN113033514B (en) Classroom student activeness evaluation method based on network
US20170358273A1 (en) Systems and methods for resolution adjustment of streamed video imaging
CN113989608A (en) Student experiment classroom behavior identification method based on top vision
CN107067399A (en) A kind of paper image segmentation processing method
CN104077562B (en) A kind of scanning direction determination methods of test paper

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant