CN113342761B - Teaching resource sharing system and method based on Internet - Google Patents


Info

Publication number
CN113342761B
CN113342761B
Authority
CN
China
Prior art keywords
user
learning
teaching
resource
screenshot
Prior art date
Legal status
Active
Application number
CN202110893773.7A
Other languages
Chinese (zh)
Other versions
CN113342761A (en)
Inventor
李敏波
周成滔
李雪勇
李群娣
李文
Current Assignee
Shenzhen Qicheng Zhiyuan Network Technology Co ltd
Original Assignee
Shenzhen Qicheng Zhiyuan Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Qicheng Zhiyuan Network Technology Co ltd filed Critical Shenzhen Qicheng Zhiyuan Network Technology Co ltd
Priority to CN202110893773.7A priority Critical patent/CN113342761B/en
Publication of CN113342761A publication Critical patent/CN113342761A/en
Application granted granted Critical
Publication of CN113342761B publication Critical patent/CN113342761B/en
Priority to PCT/CN2022/071606 priority patent/WO2023010813A1/en
Priority to ZA2022/03115A priority patent/ZA202203115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers
    • G06F16/17: Details of further file system functions
    • G06F16/176: Support for shared access to files; File sharing support
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9532: Query formulation
    • G: PHYSICS
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an internet-based teaching resource sharing system comprising a shared resource query module, used for retrieving the teaching resources a user needs to query, and a resource learning monitoring module, used for monitoring the user's learning state with respect to the teaching resources. The system monitors the shared teaching resources and the user's learning state, judging that state from multiple angles so that the result is more accurate. After the user's learning state for a teaching resource is acquired, the teaching resources recommended to the user are adjusted, which helps to raise the user's interest in learning and increases the utilization rate of the shared teaching resources.

Description

Teaching resource sharing system and method based on Internet
Technical Field
The invention relates to the technical field of computers, in particular to a teaching resource sharing system and method based on the Internet.
Background
With the rapid development of the internet, people increasingly feel the convenience brought by science and technology. Where internet technology is applied to education, teaching resources can be shared over the internet. However, because everyone has different preferences, users' learning states for the resources they obtain differ, and enthusiasm for uninteresting teaching resources weakens accordingly.
In view of the above, there is a need for an internet-based teaching resource sharing system and method that can monitor the shared teaching resources and the user's learning state for them, judging that state from multiple angles so that the result is more accurate. At the same time, once the user's learning attitude toward a teaching resource is obtained, the teaching resources recommended to the user can be adjusted, which helps raise the user's interest in learning and increases the utilization rate of the shared teaching resources.
Disclosure of Invention
The invention aims to provide an internet-based teaching resource sharing system and method that solve the problems identified in the background art.
In order to solve the above technical problems, the invention provides the following technical scheme: an internet-based teaching resource sharing system, comprising:
the shared resource query module is used for retrieving teaching resources required to be queried by a user;
the resource learning monitoring module is used for monitoring the learning state of the teaching resources by the user;
a shared resource recommending module, which marks each teaching resource with a label, acquires from the resource learning monitoring module the label of each teaching resource the user has learned together with the user's learning state for that resource, judges the acquired result, and adjusts the teaching resources recommended to the user according to the judgment;
the teaching resources shared in the teaching resource sharing system are teaching videos; during playback a teaching video automatically pauses after every first unit of time, and playback resumes only upon a manual operation by the user.
Through the cooperation of these modules, the invention realizes a query function for the shared teaching resources, a monitoring function for the user's learning state while studying them, and a recommendation function that adapts the teaching resources offered to the user to that state.
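The auto-pause scheme described above can be sketched as follows; the function name and the interpretation of "first unit time" as a fixed number of seconds are illustrative assumptions, not taken from the patent:

```python
def pause_points(total_seconds, first_unit):
    """Timestamps (in seconds) at which a teaching video of the given length
    auto-pauses: once after every first unit of time, per the scheme above."""
    return list(range(first_unit, total_seconds, first_unit))

# A 10-minute video paused every 2 minutes:
print(pause_points(600, 120))
```

Each returned timestamp marks a point where playback halts until the user manually resumes it, which is what later makes the pause-interval total t1 measurable.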
Further, the shared resource query module automatically acquires the names of the shared teaching resources, extracts the keywords in each name, binds each keyword extracted from a resource name to that resource's link, and stores all extracted keywords together with their bound teaching resource links in a keyword database.
When querying the shared teaching resources, the user directly inputs a keyword a for the desired teaching resources into the shared resource query module.
The shared resource query module matches the keyword a against the keyword database, retrieves all teaching resource links bound to the keyword a, and the user studies the corresponding teaching resource by selecting one of the matched links.
By identifying keywords and screening the teaching resource links bound to them according to the identification result, the shared resource query module finds the teaching resources the user needs.
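A minimal sketch of the keyword database and lookup described above; the helper names and the whitespace-based "keyword extraction" are assumptions for illustration, not the patent's actual extraction method:

```python
def build_keyword_db(named_links):
    """Bind every keyword extracted from a resource name to that resource's link."""
    db = {}
    for name, link in named_links:
        for keyword in name.split():        # stand-in for real keyword extraction
            db.setdefault(keyword, []).append(link)
    return db

def query(db, keyword_a):
    """Return all teaching-resource links bound to the queried keyword a."""
    return db.get(keyword_a, [])

shared = [("python basics", "/res/1"), ("advanced python video", "/res/2")]
db = build_keyword_db(shared)
print(query(db, "python"))    # links for both resources containing the keyword
```

The user would then pick one of the returned links to open the corresponding teaching video.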
Further, the resource learning monitoring module monitors the user's learning state while the user studies the teaching resources; the learning state comprises the user's watching learning status and the user's learning identity confirmation status.
The resource learning monitoring module comprises a user watching learning status monitoring module and a user learning identity confirmation status monitoring module.
The user watching learning status monitoring module acquires image information of the user through a camera on the user's learning equipment, analyzes the acquired image information, and obtains the user's watching learning status from the analysis result.
The user learning identity confirmation status monitoring module extracts a face image of the user through the camera on the user's learning equipment, superposes the extracted image on a prefabricated user image in a common plane rectangular coordinate system, calculates the sum of the distances between corresponding pixel points of the two images, and compares that sum with a first threshold value to obtain the user's learning identity confirmation status:
when the obtained distance sum is smaller than the first threshold value, the user's learning identity confirmation status is that the identity is correct;
when the obtained distance sum is greater than or equal to the first threshold value, the user's learning identity confirmation status is that the identity is wrong.
When the resource learning monitoring module monitors the user's learning state, the user watching learning status monitoring module and the user learning identity confirmation status monitoring module monitor, respectively, the watching learning status and the learning identity confirmation status. The latter module treats the sum of the distances between corresponding coordinate points of the extracted image and the prefabricated image as an error, compares it with the first threshold value, and thereby confirms whether the user's identity is correct.
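The identity check above can be sketched as a sum of point-to-point distances compared against the first threshold; representing the images as lists of corresponding (x, y) points and the threshold value are assumptions for illustration:

```python
import math

def identity_confirmed(extracted_pts, prefab_pts, first_threshold):
    """True (identity correct) when the sum of distances between corresponding
    points of the extracted and prefabricated images is below the threshold."""
    dist_sum = sum(math.dist(p, q) for p, q in zip(extracted_pts, prefab_pts))
    return dist_sum < first_threshold

a = [(0.0, 0.0), (2.0, 1.0)]
b = [(0.0, 1.0), (2.0, 1.0)]          # distance sum = 1.0
print(identity_confirmed(a, b, 2.0))  # True: identity correct
print(identity_confirmed(a, b, 0.5))  # False: identity wrong
```

In practice the comparison would run over pixel coordinates after superposing both images in the shared coordinate system.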
Further, the user watching learning status monitoring module acquires image information of the user through the camera on the user's learning equipment and analyzes it in two respects: on one hand, the total t1 of the intervals between each automatic pause of the teaching video and the user's manual operation to resume playback during learning; on the other hand, the time t2 during which the user's face image appears in the acquired image information while the user watches the teaching resource, together with the total duration t of the teaching resource.
The time t3 during which the user's face image does not appear in the acquired image information is t3 = t − t2.
The user watching learning status monitoring module thus analyzes both the pause-interval total t1 and the face-presence time t2 against the total duration t. The larger t1 is, the longer the accumulated pause time during watching, the higher the probability that the user is watching inattentively, and the lower the user's interest in the teaching resource.
Further, the user watching learning status monitoring module also collects the image information corresponding to the time t2 during which the user's face image appears, takes a screenshot of it every second unit of time, and acquires, for each screenshot, the position coefficient c1 of the user relative to the screenshot and the position coefficient c2 of the black eyeball relative to the eye to which it belongs, as well as the total number n1 of screenshots taken from the image information corresponding to t2.
The method for acquiring the position coefficient c1 of the user relative to the screenshot comprises the following steps:
s1.1, acquiring a central point of a screenshot, taking the central point of the screenshot as an original point, taking the direction which passes through the original point when the screenshot is vertically placed and is horizontally rightward as the positive direction of an x axis, and taking the direction which passes through the original point when the screenshot is vertically placed and is vertically upward as the positive direction of a y axis to establish a plane rectangular coordinate system;
s1.2, acquiring all pixel points with the same RGB values as corresponding to the skin colors in the screenshot, marking the acquired pixel points, and acquiring the outline of an area where the marked pixel points are located;
s1.3, comparing the contour obtained in the step S1.2 with a prefabricated human body contour, calculating the similarity between the contour and the prefabricated human body contour,
firstly, superposing the two, then calculating the sum of the distances between each pixel point on the contour obtained in the step S1.2 and the corresponding pixel point on the prefabricated human body contour on a plane rectangular coordinate system, and then dividing the obtained sum by a first preset value to obtain a quotient which is the similarity between the contour obtained in the step S1.2 and the prefabricated human body contour;
s1.4, comparing the similarity in the step S1.3 with a second preset value,
when the similarity is larger than or equal to a second preset value, the contour obtained in the step S1.2 is judged to be the human contour,
when the similarity is smaller than a second preset value, judging that the contour obtained in the step S1.2 is not a human contour;
s1.5, calculating the abscissa x1 of the most central pixel point in the region surrounded by the human outline in a planar rectangular coordinate system, the abscissa x2 of the lower left corner point in the screenshot, and the abscissa x3 of the lower right corner point in the screenshot,
the difference between x2 and x3 divided by x1 is calculated as the quotient of the position coefficient c1 of the user relative to the screenshot in the screenshot, i.e.
Figure 533578DEST_PATH_IMAGE001
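Under this reconstruction (origin at the screenshot centre, so x2 < 0 < x3 and the denominator is the screenshot width), c1 can be computed as follows; the formula's exact form is reconstructed from the surrounding description, and the sample coordinates are illustrative:

```python
def position_coefficient_c1(x1, x2, x3):
    """c1 = x1 / (x3 - x2): horizontal offset of the user's contour centre x1,
    normalised by the screenshot width x3 - x2. Negative c1 means the user
    sits to the left of the screenshot centre."""
    return x1 / (x3 - x2)

# 640-px-wide screenshot with its origin at the centre; user slightly left:
print(position_coefficient_c1(-100, -320, 320))
```

With this sign convention, a user positioned left of centre yields a negative c1, which matches the later analysis of |c1 + c2|.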
Further, the method by which the user watching learning status monitoring module acquires the position coefficient c2 of the black eyeball relative to the eye to which it belongs in a screenshot comprises the following steps:
s2.1, acquiring all pixel points corresponding to an area surrounded by the human outline in the screenshot;
s2.2, carrying out image binarization processing on all the obtained pixel points to obtain a processed image;
s2.3, acquiring pixel points corresponding to all black areas in the processed image, and calculating coordinates of the acquired pixel points in a plane rectangular coordinate system established in the screenshot;
s2.4, respectively comparing the corresponding outlines of all the black areas with the prefabricated eyebrow outline or the prefabricated black eyeball outline, and calculating the similarity of the two outlines;
when the similarity is more than or equal to a third preset value, the black area is judged to be the human eyebrow or the human black eyeball,
when the similarity is smaller than a third preset value, judging that the black area is not the eyebrow or the black eyeball of the person;
s2.5, calculating the abscissa x4 and x5 corresponding to the leftmost point and the rightmost point in the black area corresponding to the eyebrow respectively, calculating the abscissa x6 of the center point in the black area corresponding to the black eyeball in the eye corresponding to the eyebrow,
calculating the abscissa of the midpoint in the black area corresponding to the eyebrow, i.e.
Figure 502671DEST_PATH_IMAGE002
Calculate x6 and
Figure 579080DEST_PATH_IMAGE002
is divided by the difference between x4 and x5, and the quotient is the position coefficient c2 of the black eyeball of the user relative to the eye to which the black eyeball belongs, namely
Figure 848388DEST_PATH_IMAGE003
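The reconstructed c2 formula can be sketched as follows; the denominator x5 − x4 (the eyebrow width, taken positive) is an assumption made so that a rightward gaze yields a positive c2, consistent with the later analysis:

```python
def position_coefficient_c2(x4, x5, x6):
    """c2 = (x6 - (x4 + x5) / 2) / (x5 - x4): offset of the iris centre x6 from
    the eyebrow midpoint, normalised by the eyebrow width. Positive c2 means
    the black eyeball is deflected to the right."""
    return (x6 - (x4 + x5) / 2) / (x5 - x4)

# Eyebrow spanning x = 10..30; iris centre slightly right of its midpoint 20:
print(position_coefficient_c2(10, 30, 22))
```

An iris centred exactly under the eyebrow midpoint gives c2 = 0, i.e. a straight-ahead gaze.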
Further, the user watching learning status monitoring module calculates the absolute value of the sum of c1 and c2 and compares the obtained |c1 + c2| with a fourth preset value:
when |c1 + c2| is greater than or equal to the fourth preset value, the user is judged not to be watching the teaching resource attentively in that screenshot;
when |c1 + c2| is smaller than the fourth preset value, the user is judged to be watching the teaching resource attentively in that screenshot.
The number of screenshots in which the user is judged not to be watching attentively is counted and recorded as n2. The time t4 during which the user's face image appears in the acquired image information but the user is not learning attentively is t2 multiplied by the quotient of n2 divided by n1, i.e.
t4 = t2 × n2 / n1
Adding t3 and t4 gives the time t5 during which the user is not learning while the teaching resource plays, i.e.
t5 = t3 + t4
t1 is compared with a fifth preset value and t5 with a sixth preset value:
when t1 is greater than or equal to the fifth preset value, or t5 is greater than or equal to the sixth preset value, the user's watching learning status is judged to be poor;
otherwise, the user's watching learning status is judged to be good.
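A sketch of the overall watching-status decision, combining t3 = t − t2, t4 = t2 × n2 / n1 and t5 = t3 + t4. Since larger t1 (accumulated pause time) and larger t5 (non-learning time) indicate inattention, the thresholds here mark a poor status; the function name, argument layout, and sample threshold values are illustrative assumptions:

```python
def watching_status(t, t2, n1, n2, t1, fifth_preset, sixth_preset):
    """Judge the user's watching learning status from the quantities above."""
    t3 = t - t2                # time the face is absent from the camera images
    t4 = t2 * n2 / n1          # time the face is present but judged inattentive
    t5 = t3 + t4               # total time not spent learning
    if t1 >= fifth_preset or t5 >= sixth_preset:
        return "poor"
    return "good"

# 100 s video, face visible 80 s, 2 of 10 screenshots inattentive, 5 s paused:
print(watching_status(100, 80, 10, 2, 5, fifth_preset=30, sixth_preset=50))
```

Here t3 = 20, t4 = 16, so t5 = 36 stays under the sixth preset value and the status is "good".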
The user watching learning status monitoring module takes a screenshot every second unit of time and judges, for each screenshot, whether the user is learning attentively. Multiplying t2 by the ratio of the number of inattentive screenshots to the total number of screenshots yields the time t4 during which the user's face appears in the image information but the user is not learning attentively; adding t3 to t4 yields the time t5 during which the user is not learning while the teaching resource plays, from which the watching learning status is judged. In judging each screenshot, two quantities are analyzed: the position coefficient c1 of the user relative to the screenshot, and the position coefficient c2 of the user's black eyeball relative to the eye to which it belongs. When c1 is negative, the user is positioned toward the left of the screenshot; for the user to be watching normally, the black eyeball must deflect to the right, making c2 positive. Calculating |c1 + c2| therefore indicates whether the user is learning attentively in that screenshot: the smaller |c1 + c2| is, the closer the user's line of sight is to the midpoint of the screenshot.
Further, the resource learning monitoring module acquires both the watching learning status obtained by the user watching learning status monitoring module and the learning identity confirmation status obtained by the user learning identity confirmation status monitoring module:
when the user's watching learning status is poor, or the learning identity confirmation status is a wrong identity, the user's learning state for the teaching resource is judged to be poor;
otherwise, the user's learning state for the teaching resource is judged to be good.
The resource learning monitoring module comprehensively judges the user watching learning state obtained by the user watching learning state monitoring module and the user learning identity confirmation state obtained by the user learning identity confirmation state monitoring module, and judges the learning state of the user on the teaching resource.
Further, the shared resource recommending module acquires from the resource learning monitoring module the label of the teaching resource the user has learned, together with the user's learning state for that resource:
when the user's learning state for the teaching resource is good, the shared resource recommending module continues to recommend other teaching resources bearing the same label;
when the user's learning state for the teaching resource is poor, the shared resource recommending module stops recommending other teaching resources bearing the same label.
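A hypothetical sketch of the label-based recommendation adjustment above; the catalogue structure (a list of resource/label pairs) is an assumption for illustration:

```python
def adjust_recommendations(catalogue, learned_label, state_good):
    """Keep recommending resources bearing the learned resource's label only
    when the user's learning state for it was good; otherwise stop
    recommending that label and offer other resources instead."""
    if state_good:
        return [res for res, label in catalogue if label == learned_label]
    return [res for res, label in catalogue if label != learned_label]

catalogue = [("video1", "algebra"), ("video2", "algebra"), ("video3", "history")]
print(adjust_recommendations(catalogue, "algebra", True))   # same-label resources
print(adjust_recommendations(catalogue, "algebra", False))  # other resources only
```

A real system would rank rather than hard-filter, but the filter captures the judgment-driven adjustment the patent describes.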
An internet-based teaching resource sharing method comprises the following steps:
s1, retrieving teaching resources required to be queried by a user through a shared resource query module;
s2, in the resource learning monitoring module, the user monitors the learning state of the teaching resource;
s3, the shared resource recommending module marks each teaching resource in the form of a label, acquires the label corresponding to the teaching resource learned by the user in the resource learning monitoring module and the learning state of the user on the teaching resource, judges the acquired result, and adjusts the recommending result of the user teaching resource according to the judging result.
Compared with the prior art, the invention has the following beneficial effects: the system monitors the shared teaching resources and the user's learning state for them, judging that state from multiple angles so that the result is more accurate. After the user's learning state for a teaching resource is acquired, the teaching resources recommended to the user are adjusted, which helps raise the user's interest in learning and increases the utilization rate of the shared teaching resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of an Internet-based teaching resource sharing system according to the present invention;
FIG. 2 is a schematic flow chart of a method for acquiring a position coefficient c1 of a user relative screenshot in a screenshot by a user watching learning status monitoring module in an Internet-based teaching resource sharing system according to the present invention;
FIG. 3 is a schematic flow chart of a method for acquiring a position coefficient c2 of a black eyeball of a user in a screenshot relative to an eye to which the black eyeball belongs in a user watching learning condition monitoring module of the Internet-based teaching resource sharing system according to the invention;
FIG. 4 is a schematic flow chart of a learning watching status monitoring module for a user of an Internet-based teaching resource sharing system for determining the learning watching status of the user.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-4, the present invention provides a technical solution: an internet-based educational resource sharing system comprising:
the shared resource query module is used for retrieving teaching resources required to be queried by a user;
the resource learning monitoring module is used for monitoring the learning state of the teaching resources by the user;
a shared resource recommending module, which marks each teaching resource with a label, acquires from the resource learning monitoring module the label of each teaching resource the user has learned together with the user's learning state for that resource, judges the acquired result, and adjusts the teaching resources recommended to the user according to the judgment;
the teaching resources shared in the teaching resource sharing system are teaching videos; during playback a teaching video automatically pauses after every first unit of time, and playback resumes only upon a manual operation by the user.
The invention realizes the inquiry function of the shared teaching resources, the monitoring function of the learning state of the learning teaching resources of the user and the recommendation function of the teaching resources of the user aiming at the learning state of the user through the cooperative cooperation of all the modules.
The shared resource query module automatically acquires the names of the shared teaching resources, extracts the keywords in each name, binds each keyword extracted from a resource name to that resource's link, and stores all extracted keywords together with their bound teaching resource links in a keyword database.
When querying the shared teaching resources, the user directly inputs a keyword a for the desired teaching resources into the shared resource query module.
The shared resource query module matches the keyword a against the keyword database, retrieves all teaching resource links bound to the keyword a, and the user studies the corresponding teaching resource by selecting one of the matched links.
By identifying keywords and screening the teaching resource links bound to them according to the identification result, the shared resource query module finds the teaching resources the user needs.
The resource learning monitoring module monitors the user's learning state while the user studies the teaching resources; the learning state comprises the user's watching learning status and the user's learning identity confirmation status.
The resource learning monitoring module comprises a user watching learning status monitoring module and a user learning identity confirmation status monitoring module.
The user watching learning status monitoring module acquires image information of the user through a camera on the user's learning equipment, analyzes the acquired image information, and obtains the user's watching learning status from the analysis result.
The user learning identity confirmation status monitoring module extracts a face image of the user through the camera on the user's learning equipment, superposes the extracted image on a prefabricated user image in a common plane rectangular coordinate system, calculates the sum of the distances between corresponding pixel points of the two images, and compares that sum with a first threshold value to obtain the user's learning identity confirmation status:
when the obtained distance sum is smaller than the first threshold value, the user's learning identity confirmation status is that the identity is correct;
when the obtained distance sum is greater than or equal to the first threshold value, the user's learning identity confirmation status is that the identity is wrong.
When the resource learning monitoring module monitors the user's learning state, the user watching learning status monitoring module and the user learning identity confirmation status monitoring module monitor, respectively, the watching learning status and the learning identity confirmation status. The latter module treats the sum of the distances between corresponding coordinate points of the extracted image and the prefabricated image as an error, compares it with the first threshold value, and thereby confirms whether the user's identity is correct.
The user watching learning status monitoring module acquires image information of the user through the camera on the user's learning equipment and analyzes it in two respects: on one hand, the total t1 of the intervals between each automatic pause of the teaching video and the user's manual operation to resume playback during learning; on the other hand, the time t2 during which the user's face image appears in the acquired image information while the user watches the teaching resource, together with the total duration t of the teaching resource.
The time t3 during which the user's face image does not appear in the acquired image information is t3 = t − t2.
The user watching learning status monitoring module thus analyzes both the pause-interval total t1 and the face-presence time t2 against the total duration t. The larger t1 is, the longer the accumulated pause time during watching, the higher the probability that the user is watching inattentively, and the lower the user's interest in the teaching resource.
The user watching learning state monitoring module further collects the image information corresponding to the time t2 during which the user's face image appears, takes a screenshot of this image information every second unit time, and obtains, for each screenshot, the position coefficient c1 of the user relative to the screenshot and the position coefficient c2 of the black eyeball relative to the eye to which it belongs, together with the total number n1 of screenshots taken within t2.
the method for acquiring the position coefficient c1 of the user relative to the screenshot in the screenshot comprises the following steps:
s1.1, acquiring the central point of the screenshot and taking it as the origin; with the screenshot placed upright, taking the horizontal rightward direction through the origin as the positive x axis and the vertical upward direction through the origin as the positive y axis, thereby establishing a plane rectangular coordinate system;
s1.2, acquiring all pixel points in the screenshot whose RGB values match the preset skin-color values, marking the acquired pixel points, and acquiring the outline of the area where the marked pixel points are located;
s1.3, comparing the contour obtained in step S1.2 with a prefabricated human body contour and calculating the similarity between the two: first superposing the two contours, then calculating, in the plane rectangular coordinate system, the sum of the distances between each pixel point on the contour obtained in step S1.2 and the corresponding pixel point on the prefabricated human body contour, and dividing the obtained sum by a first preset value; the quotient is the similarity between the contour obtained in step S1.2 and the prefabricated human body contour;
s1.4, comparing the similarity in the step S1.3 with a second preset value,
when the similarity is larger than or equal to a second preset value, the contour obtained in the step S1.2 is judged to be the human contour,
when the similarity is smaller than a second preset value, judging that the contour obtained in the step S1.2 is not a human contour;
s1.5, calculating the abscissa x1, in the plane rectangular coordinate system, of the most central pixel point in the region enclosed by the human contour, the abscissa x2 of the lower left corner point of the screenshot, and the abscissa x3 of the lower right corner point of the screenshot,
then dividing x1 by the difference between x3 and x2; the quotient is the position coefficient c1 of the user relative to the screenshot, i.e.
c1 = x1 / (x3 - x2)
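Steps S1.1 to S1.5 reduce to a one-line ratio once x1, x2, and x3 are known. A minimal Python sketch, assuming the reconstructed formula c1 = x1 / (x3 - x2) and hypothetical example values:

```python
def position_coefficient_c1(x1, x2, x3):
    """Position coefficient of the user relative to the screenshot.

    x1: abscissa of the most central pixel of the human contour,
    x2: abscissa of the screenshot's lower-left corner,
    x3: abscissa of the screenshot's lower-right corner,
    all in a coordinate system whose origin is the screenshot centre.
    c1 is 0 when the user is centred and negative when the user is
    to the left of centre, matching the sign analysis in the text.
    """
    return x1 / (x3 - x2)

# Hypothetical 640-pixel-wide screenshot centred on the origin:
print(position_coefficient_c1(0, -320, 320))     # 0.0   (user centred)
print(position_coefficient_c1(-160, -320, 320))  # -0.25 (user left of centre)
```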
The method by which the user watching learning condition monitoring module acquires the position coefficient c2 of the black eyeball relative to the eye to which it belongs in the screenshot comprises the following steps:
s2.1, acquiring all pixel points corresponding to an area surrounded by the human outline in the screenshot;
s2.2, carrying out image binarization processing on all the obtained pixel points to obtain a processed image;
s2.3, acquiring pixel points corresponding to all black areas in the processed image, and calculating coordinates of the acquired pixel points in a plane rectangular coordinate system established in the screenshot;
s2.4, respectively comparing the corresponding outlines of all the black areas with the prefabricated eyebrow outline or the prefabricated black eyeball outline, and calculating the similarity of the two outlines;
when the similarity is more than or equal to a third preset value, the black area is judged to be the human eyebrow or the human black eyeball,
when the similarity is smaller than a third preset value, judging that the black area is not the eyebrow or the black eyeball of the person;
s2.5, calculating the abscissas x4 and x5 corresponding respectively to the leftmost and rightmost points of the black area corresponding to the eyebrow, and calculating the abscissa x6 of the center point of the black area corresponding to the black eyeball in the eye below that eyebrow,
calculating the abscissa of the midpoint of the black area corresponding to the eyebrow, i.e. (x4 + x5)/2,
then dividing the difference between x6 and (x4 + x5)/2 by the difference between x5 and x4; the quotient is the position coefficient c2 of the user's black eyeball relative to the eye to which it belongs, i.e.
c2 = (x6 - (x4 + x5)/2) / (x5 - x4)
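Step S2.5 can likewise be sketched directly from the reconstructed formula. The function name and coordinate values below are hypothetical illustrations, not from the patent:

```python
def position_coefficient_c2(x4, x5, x6):
    """Position coefficient of the black eyeball relative to its eye.

    x4, x5: abscissas of the leftmost and rightmost points of the
    black area corresponding to the eyebrow,
    x6: abscissa of the centre of the black-eyeball area.
    c2 is 0 when the eyeball sits under the eyebrow midpoint and
    positive when it is deflected to the right, matching the sign
    analysis in the text.
    """
    midpoint = (x4 + x5) / 2
    return (x6 - midpoint) / (x5 - x4)

print(position_coefficient_c2(100, 180, 140))  # 0.0  (eyeball centred)
print(position_coefficient_c2(100, 180, 160))  # 0.25 (deflected right)
```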
The user watching learning status monitoring module calculates the absolute value of the sum of c1 and c2 and compares the obtained absolute value |c1 + c2| with a fourth preset value:
when |c1 + c2| is greater than or equal to the fourth preset value, it is judged that the user is not carefully watching the learning teaching resource in that screenshot;
when |c1 + c2| is smaller than the fourth preset value, it is judged that the user is carefully watching the learning teaching resource in that screenshot.
The number of screenshots judged as the user not carefully watching is counted and recorded as n2. The time t4 during which the user's face image appears in the image information acquired by the camera on the user learning equipment but the user is not learning carefully is obtained by multiplying t2 by the quotient of n2 divided by n1, i.e.
t4 = t2 × n2 / n1
Adding t3 and t4 gives the time t5 during which the user is not learning while the teaching resource plays, i.e.
t5 = t3 + t4
t1 is compared with a fifth preset value and t5 is compared with a sixth preset value:
when t1 is greater than or equal to the fifth preset value or t5 is greater than or equal to the sixth preset value, it is judged that the user's watching learning condition is poor;
otherwise, it is judged that the user's watching learning condition is good.
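The time bookkeeping behind t3, t4, and t5 can be sketched in a few lines of Python; the function name and example numbers are hypothetical:

```python
def unlearned_time(t, t2, n1, n2):
    """Compute the times defined above.

    t:  total duration of the teaching resource,
    t2: time the user's face appears in the camera image,
    n1: total screenshots taken during t2,
    n2: screenshots judged as careless watching.
    Returns (t3, t4, t5): face-absent time, face-present-but-careless
    time, and total time not spent learning.
    """
    t3 = t - t2            # face absent from the camera image
    t4 = t2 * n2 / n1      # face present but watching carelessly
    t5 = t3 + t4           # total time not spent learning
    return t3, t4, t5

# Hypothetical 60-minute video, face visible for 50 minutes,
# 10 of 100 screenshots judged careless:
print(unlearned_time(t=60, t2=50, n1=100, n2=10))  # (10, 5.0, 15.0)
```

The resulting t1 and t5 are then each compared against their preset values to classify the watching learning condition.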
The user watching learning state monitoring module takes a screenshot every second unit time and judges whether the learning state of the user in each screenshot is careful learning. By calculating the ratio of the number of screenshots showing careless learning to the total number of screenshots and multiplying this ratio by t2, the time t4 during which the user's face image appears in the image information but the user is not learning carefully is obtained; adding t3 to t4 gives the time t5 during which the user is not learning while the teaching resource plays, from which the user watching learning state is judged. In judging each screenshot, two aspects are analyzed: the position coefficient c1 of the user relative to the screenshot and the position coefficient c2 of the user's black eyeball relative to the eye to which it belongs. When the value of c1 is negative, the user is positioned toward the left of the screenshot; for the user to be learning normally, the black eyeball must therefore deflect to the right, making the value of c2 positive. Calculating |c1 + c2| thus confirms whether the user in each screenshot is learning carefully: the smaller the value of |c1 + c2|, the closer the user's line of sight is to the midpoint of the screenshot.
The resource learning monitoring module respectively acquires the user watching learning state obtained by the user watching learning state monitoring module and the user learning identity confirmation state obtained by the user learning identity confirmation state monitoring module:
when the user's watching learning condition is poor or the user learning identity confirmation state indicates a user identity error, the learning state of the user on the teaching resource is judged to be bad;
otherwise, the learning state of the user on the teaching resource is judged to be good.
The resource learning monitoring module comprehensively judges the user watching learning state obtained by the user watching learning state monitoring module and the user learning identity confirmation state obtained by the user learning identity confirmation state monitoring module, and judges the learning state of the user on the teaching resource.
The shared resource recommending module acquires the label corresponding to the teaching resource learned by the user in the resource learning monitoring module and the learning state of the teaching resource by the user,
when the learning state of the user on the teaching resource is good, the shared resource recommending module continues to recommend other teaching resources with the same labels as the teaching resources;
and when the learning state of the teaching resource by the user is not good, the shared resource recommending module does not continuously recommend other teaching resources with the same labels as the teaching resources.
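The label-based adjustment above amounts to filtering a catalog by shared labels and gating on the learning state. A minimal Python sketch; the catalog shape, names, and data are hypothetical:

```python
def adjust_recommendations(catalog, learned_resource, learning_state_good):
    """Recommend other resources sharing a label with the learned one
    only while the user's learning state on it is good; otherwise
    stop recommending same-label resources.

    catalog: mapping of resource name -> set of labels.
    """
    labels = catalog[learned_resource]
    if not learning_state_good:
        return []  # learning state bad: stop same-label recommendations
    return [name for name, tags in catalog.items()
            if name != learned_resource and labels & tags]

catalog = {
    "calculus-1": {"math", "video"},
    "calculus-2": {"math", "video"},
    "poetry-101": {"literature"},
}
print(adjust_recommendations(catalog, "calculus-1", True))   # ['calculus-2']
print(adjust_recommendations(catalog, "calculus-1", False))  # []
```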
An internet-based teaching resource sharing method comprises the following steps:
s1, retrieving teaching resources required to be queried by a user through a shared resource query module;
s2, monitoring, in the resource learning monitoring module, the learning state of the user on the teaching resource;
s3, the shared resource recommending module marks each teaching resource in the form of a label, acquires the label corresponding to the teaching resource learned by the user in the resource learning monitoring module and the learning state of the user on the teaching resource, judges the acquired result, and adjusts the recommending result of the user teaching resource according to the judging result.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. An internet-based teaching resource sharing system, comprising:
the shared resource query module is used for retrieving teaching resources required to be queried by a user;
the resource learning monitoring module is used for monitoring the learning state of the teaching resources by the user;
a shared resource recommending module, which marks each teaching resource in the form of a label, acquires the label corresponding to the teaching resource learned by the user in the resource learning monitoring module and the learning state of the teaching resource by the user, judges the acquired result, adjusts the recommending result of the teaching resource of the user according to the judging result,
the teaching resources shared in the teaching resource sharing system are teaching videos, the teaching videos automatically pause every other unit time in the playing process, and the teaching videos can be continuously played only by manual operation of a user;
the resource learning monitoring module monitors the learning state of the user when learning the teaching resources, the learning state comprises the learning state of watching by the user and the confirmation state of learning identity of the user,
the resource learning monitoring module comprises a user watching learning state monitoring module and a user learning identity confirmation state monitoring module,
the user watching learning state monitoring module is used for acquiring image information of a user through a camera on user learning equipment, analyzing the acquired image information and obtaining the user watching learning state according to an analysis result,
the user learning identity confirmation status monitoring module is used for extracting a face image of the user through a camera on the user learning equipment, superposing the extracted image with a prefabricated user image in a common plane rectangular coordinate system, calculating the sum of the distances between corresponding pixel points of the extracted image and the prefabricated user image in that coordinate system, and comparing the sum of the distances with a first threshold value to obtain the user learning identity confirmation status,
when the obtained distance sum is smaller than the first threshold value, the user learning identity confirmation status being that the user identity is correct,
when the obtained distance sum is greater than or equal to the first threshold value, the user learning identity confirmation status being that the user identity is wrong;
the user watching learning state monitoring module acquires image information of the user through a camera on the user learning equipment and analyzes the acquired image information in two respects: first, the total t1 of the interval times between each automatic pause of the teaching video and the user's manual operation to resume playing during the learning of the teaching resource; second, the time t2 during which the user's face image appears in the acquired image information, considered against the total duration t of the teaching resource,
the time t3 during which the face image of the user does not appear in the image information acquired by the camera on the user learning equipment being t3 = t - t2;
the user watching learning state monitoring module further collects the image information corresponding to the time t2 during which the user's face image appears, takes a screenshot of this image information every second unit time, and obtains, for each screenshot, the position coefficient c1 of the user relative to the screenshot and the position coefficient c2 of the black eyeball relative to the eye to which it belongs, together with the total number n1 of screenshots taken within t2,
the method for acquiring the position coefficient c1 of the user relative to the screenshot in the screenshot comprises the following steps:
s1.1, acquiring the central point of the screenshot and taking it as the origin; with the screenshot placed upright, taking the horizontal rightward direction through the origin as the positive x axis and the vertical upward direction through the origin as the positive y axis, thereby establishing a plane rectangular coordinate system;
s1.2, acquiring all pixel points in the screenshot whose RGB values match the preset skin-color values, marking the acquired pixel points, and acquiring the outline of the area where the marked pixel points are located;
s1.3, comparing the contour obtained in step S1.2 with a prefabricated human body contour and calculating the similarity between the two: first superposing the two contours, then calculating, in the plane rectangular coordinate system, the sum of the distances between each pixel point on the contour obtained in step S1.2 and the corresponding pixel point on the prefabricated human body contour, and dividing the obtained sum by a first preset value; the quotient is the similarity between the contour obtained in step S1.2 and the prefabricated human body contour;
s1.4, comparing the similarity in the step S1.3 with a second preset value,
when the similarity is larger than or equal to a second preset value, the contour obtained in the step S1.2 is judged to be the human contour,
when the similarity is smaller than a second preset value, judging that the contour obtained in the step S1.2 is not a human contour;
s1.5, calculating the abscissa x1, in the plane rectangular coordinate system, of the most central pixel point in the region enclosed by the human contour, the abscissa x2 of the lower left corner point of the screenshot, and the abscissa x3 of the lower right corner point of the screenshot,
then dividing x1 by the difference between x3 and x2, the quotient being the position coefficient c1 of the user relative to the screenshot, i.e.
c1 = x1 / (x3 - x2)
The method by which the user watching learning condition monitoring module acquires the position coefficient c2 of the black eyeball relative to the eye to which it belongs in the screenshot comprises the following steps:
s2.1, acquiring all pixel points corresponding to an area surrounded by the human outline in the screenshot;
s2.2, carrying out image binarization processing on all the obtained pixel points to obtain a processed image;
s2.3, acquiring pixel points corresponding to all black areas in the processed image, and calculating coordinates of the acquired pixel points in a plane rectangular coordinate system established in the screenshot;
s2.4, respectively comparing the corresponding outlines of all the black areas with the prefabricated eyebrow outline or the prefabricated black eyeball outline, and calculating the similarity of the two outlines;
when the similarity is more than or equal to a third preset value, the black area is judged to be the human eyebrow or the human black eyeball,
when the similarity is smaller than a third preset value, judging that the black area is not the eyebrow or the black eyeball of the person;
s2.5, calculating the abscissas x4 and x5 corresponding respectively to the leftmost and rightmost points of the black area corresponding to the eyebrow, and calculating the abscissa x6 of the center point of the black area corresponding to the black eyeball in the eye below that eyebrow,
calculating the abscissa of the midpoint of the black area corresponding to the eyebrow, i.e. (x4 + x5)/2,
then dividing the difference between x6 and (x4 + x5)/2 by the difference between x5 and x4, the quotient being the position coefficient c2 of the user's black eyeball relative to the eye to which it belongs, i.e.
c2 = (x6 - (x4 + x5)/2) / (x5 - x4)
2. The internet-based instructional resource sharing system of claim 1, wherein: the shared resource query module automatically acquires the shared teaching resource names, extracts the keywords in the teaching resource names, respectively binds the extracted keywords in the same teaching resource name with the teaching resource links, uniformly stores the extracted keywords of each teaching resource name and the teaching resource links bound by each keyword into a keyword database,
when the user inquires the shared teaching resources, the keyword a of the teaching resources required to be inquired can be directly input in the shared resource inquiry module,
the shared resource query module matches the keyword a with the keyword database, further matches all teaching resource links bound with the keyword a in the keyword database, and the user learns the corresponding teaching resources by selecting the matched teaching resource links.
3. The internet-based instructional resource sharing system of claim 1, wherein: the user watching learning status monitoring module calculates the absolute value of the sum of c1 and c2 and compares the obtained absolute value |c1 + c2| with a fourth preset value:
when |c1 + c2| is greater than or equal to the fourth preset value, it is judged that the user is not carefully watching the learning teaching resource in that screenshot;
when |c1 + c2| is smaller than the fourth preset value, it is judged that the user is carefully watching the learning teaching resource in that screenshot;
the number of screenshots judged as the user not carefully watching is counted and recorded as n2, and the time t4 during which the user's face image appears in the image information acquired by the camera on the user learning equipment but the user is not learning carefully is obtained by multiplying t2 by the quotient of n2 divided by n1, i.e.
t4 = t2 × n2 / n1
adding t3 and t4 gives the time t5 during which the user is not learning while the teaching resource plays, i.e.
t5 = t3 + t4
t1 is compared with a fifth preset value and t5 is compared with a sixth preset value:
when t1 is greater than or equal to the fifth preset value or t5 is greater than or equal to the sixth preset value, it is judged that the user's watching learning condition is poor;
otherwise, it is judged that the user's watching learning condition is good.
4. An internet-based educational resource sharing system according to claim 3, wherein: the resource learning monitoring module respectively acquires the user watching learning state obtained by the user watching learning state monitoring module and the user learning identity confirmation state obtained by the user learning identity confirmation state monitoring module:
when the user's watching learning condition is poor or the user learning identity confirmation state indicates a user identity error, the learning state of the user on the teaching resource is judged to be bad;
otherwise, the learning state of the user on the teaching resource is judged to be good.
5. The internet-based instructional resource sharing system of claim 4, wherein: the shared resource recommending module acquires the label corresponding to the teaching resource learned by the user in the resource learning monitoring module and the learning state of the teaching resource by the user,
when the learning state of the user on the teaching resource is good, the shared resource recommending module continues to recommend other teaching resources with the same labels as the teaching resources;
and when the learning state of the teaching resource by the user is not good, the shared resource recommending module does not continuously recommend other teaching resources with the same labels as the teaching resources.
6. The internet-based tutorial resource sharing method of the internet-based tutorial resource sharing system according to any of claims 1-5, characterized in that the method comprises the steps of:
s1, retrieving teaching resources required to be queried by a user through a shared resource query module;
s2, monitoring, in the resource learning monitoring module, the learning state of the user on the teaching resource;
s3, the shared resource recommending module marks each teaching resource in the form of a label, acquires the label corresponding to the teaching resource learned by the user in the resource learning monitoring module and the learning state of the user on the teaching resource, judges the acquired result, and adjusts the recommending result of the user teaching resource according to the judging result.
CN202110893773.7A 2021-08-05 2021-08-05 Teaching resource sharing system and method based on Internet Active CN113342761B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110893773.7A CN113342761B (en) 2021-08-05 2021-08-05 Teaching resource sharing system and method based on Internet
PCT/CN2022/071606 WO2023010813A1 (en) 2021-08-05 2022-01-12 Internet-based teaching resource sharing system and method
ZA2022/03115A ZA202203115B (en) 2021-08-05 2022-03-15 Internet-based teaching resource sharing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110893773.7A CN113342761B (en) 2021-08-05 2021-08-05 Teaching resource sharing system and method based on Internet

Publications (2)

Publication Number Publication Date
CN113342761A CN113342761A (en) 2021-09-03
CN113342761B true CN113342761B (en) 2021-11-02

Family

ID=77480706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110893773.7A Active CN113342761B (en) 2021-08-05 2021-08-05 Teaching resource sharing system and method based on Internet

Country Status (3)

Country Link
CN (1) CN113342761B (en)
WO (1) WO2023010813A1 (en)
ZA (1) ZA202203115B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342761B (en) * 2021-08-05 2021-11-02 深圳启程智远网络科技有限公司 Teaching resource sharing system and method based on Internet
CN116304315B (en) * 2023-02-27 2024-02-06 广州兴趣岛信息科技有限公司 Intelligent content recommendation system for online teaching

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945624A (en) * 2012-11-14 2013-02-27 南京航空航天大学 Intelligent video teaching system based on cloud calculation model and expression information feedback
KR101563736B1 (en) * 2013-12-24 2015-11-06 전자부품연구원 Apparatus and Method for Mapping Position Information to Virtual Resources
US20150281783A1 (en) * 2014-03-18 2015-10-01 Vixs Systems, Inc. Audio/video system with viewer-state based recommendations and methods for use therewith
JP2017068576A (en) * 2015-09-30 2017-04-06 パナソニックIpマネジメント株式会社 State determination apparatus, eye-close determination apparatus, state determination method, state determination program, and recording medium
CN105426850B (en) * 2015-11-23 2021-08-31 深圳市商汤科技有限公司 Associated information pushing device and method based on face recognition
WO2019180652A1 (en) * 2018-03-21 2019-09-26 Lam Yuen Lee Viola Interactive, adaptive, and motivational learning systems using face tracking and emotion detection with associated methods
CN110378812A (en) * 2019-05-20 2019-10-25 北京师范大学 A kind of adaptive on-line education system and method
CN110197169B (en) * 2019-06-05 2022-08-26 南京邮电大学 Non-contact learning state monitoring system and learning state detection method
CN111310560A (en) * 2019-12-31 2020-06-19 华中师范大学 Learning state monitoring system based on big data
CN113342761B (en) * 2021-08-05 2021-11-02 深圳启程智远网络科技有限公司 Teaching resource sharing system and method based on Internet

Also Published As

Publication number Publication date
ZA202203115B (en) 2022-05-25
WO2023010813A1 (en) 2023-02-09
CN113342761A (en) 2021-09-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant