CN114936952A - Digital education internet learning system - Google Patents

Digital education internet learning system

Info

Publication number
CN114936952A
CN114936952A
Authority
CN
China
Prior art keywords
teacher
student
image
audio
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210486188.XA
Other languages
Chinese (zh)
Inventor
曹栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Micro Cool Intelligent Technology Co ltd
Original Assignee
Shenzhen Micro Cool Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Micro Cool Intelligent Technology Co ltd filed Critical Shenzhen Micro Cool Intelligent Technology Co ltd
Priority to CN202210486188.XA priority Critical patent/CN114936952A/en
Publication of CN114936952A publication Critical patent/CN114936952A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The invention provides a digital education internet learning system, comprising: the teacher end monitoring module is used for monitoring the teacher end and determining the teaching state of the teacher corresponding to the teacher end; the student end monitoring modules are used for correspondingly monitoring the student ends one by one and determining the lecture listening state of the students corresponding to each student end; and the interactive module is used for receiving the interactive requests of the teacher end and/or the student end, establishing interactive connection and entering interactive connection display. According to the digital education internet learning system, rapid interaction between teachers and students is achieved through the interaction module, and teaching quality and learning efficiency are improved.

Description

Digital education internet learning system
Technical Field
The invention relates to the technical field of internet, in particular to a digital education internet learning system.
Background
At present, digital education means that teachers use digital technology to convert traditional classroom teaching into network teaching, and internet learning means that students learn over the internet through terminals such as computers, tablets and mobile phones. However, current digital education and internet learning generally rely on pre-recorded videos for teaching and learning, so learning interaction is insufficient, which seriously affects teaching quality and learning efficiency.
Disclosure of Invention
One of the purposes of the invention is to provide a digital education internet learning system, which realizes the rapid interaction between teachers and students through an interaction module, and improves the teaching quality and the learning efficiency.
The embodiment of the invention provides a digital education internet learning system, which comprises:
the teacher end monitoring module is used for monitoring the teacher end and determining the teaching state of the teacher corresponding to the teacher end;
the student end monitoring modules are used for correspondingly monitoring the student ends one by one and determining the lecture listening state of the students corresponding to each student end;
and the interactive module is used for receiving the interactive requests of the teacher end and/or the student end, establishing interactive connection and entering interactive connection display.
Preferably, the teacher-end monitoring module monitors the teacher end, determines the teaching state of the teacher corresponding to the teacher end, and executes the following operations:
acquiring a first image of a current teacher through first camera equipment at a teacher end;
acquiring a second image displayed on the current student terminal;
matching the first image with the second image to determine whether the teacher gives lessons on site;
and/or,
acquiring a second image of the teacher after the teaching starts through first camera equipment at the teacher end;
acquiring a first audio of a teacher after the teaching starts through first audio acquisition equipment of the teacher end;
performing lip language identification on the second image to acquire first information;
performing audio identification on the first audio to acquire second information;
matching the first information with the second information to determine whether the teacher gives lessons on site;
and/or,
when the teacher gives a class on site, acquiring second audio of the teacher within a first time, at intervals of a preset first time, through the first audio acquisition equipment at the teacher end;
performing audio identification on the second audio to acquire third information;
extracting keywords from the third information based on a preset keyword extraction template;
determining a teaching state based on the extracted keywords and a preset teaching state trigger library;
the teaching state comprises one or more of an explanation state, an interaction state, a question answering state, a summary state and a homework assignment state.
Preferably, the student end monitoring module monitors the student end, determines the lecture listening state of the student corresponding to the student end, and executes the following operations:
after the time point corresponding to the keyword for the teaching state conversion, acquiring a third image of the student through the second camera equipment of the student end at intervals of a preset second time; and acquiring third audio of the student through the second audio acquisition equipment of the student end;
recalling a preset lesson-listening state recognition library corresponding to the converted teaching state;
determining a class attending state based on the third image, the third audio and the class attending state recognition library;
wherein, based on the third image, the third audio and the lesson-attending state recognition library, determining a lesson-attending state comprises:
extracting the features of the third image to obtain a plurality of first feature values;
extracting the features of the third audio to obtain a plurality of second feature values;
constructing an identification feature set based on the plurality of first characteristic values and the plurality of second characteristic values;
matching the recognition feature set with each class-attending state set in a class-attending state recognition base one by one, and determining the class-attending state corresponding to the class-attending state set matched with the recognition feature set in the class-attending state recognition base.
Preferably, the interactive module receives an interactive request from the teacher end and/or the student end, establishes an interactive connection, and enters an interactive connection display to perform the following operations:
when the teaching state is the interactive state, acquiring a third image through second camera equipment of each student end;
determining a student list participating in interaction based on each third image and a preset interaction participation identification library;
outputting the student list to a teacher end;
acquiring fourth audio of the teacher in a preset second time through first audio acquisition equipment at the teacher end;
performing audio identification on the fourth audio to acquire fourth information;
determining whether the fourth information contains the names of students in the student list based on the student list in class;
when the name is contained, establishing an interactive connection between the student end corresponding to that student name and the teacher end, and displaying, in a floating manner on the display screens of the teacher end and each student end, the third image acquired by the second camera equipment of the student end in interactive connection and the first image acquired by the first camera equipment of the teacher end;
or,
when the teaching state is a non-interactive state, acquiring a third image through second camera equipment of each student end;
determining target students initiated by interaction based on the third images and a preset interaction participation identification library;
sending the third image of the target student to a teacher end;
acquiring fourth audio of the teacher in a preset second time through first audio acquisition equipment at the teacher end;
performing audio identification on the fourth audio to acquire fourth information;
determining whether the fourth information contains a student name of the target student;
when the student name of the target student is contained, establishing an interactive connection between the student end corresponding to the target student and the teacher end, and displaying, side by side in a floating manner on the display screens of the teacher end and each student end, the third image acquired by the second camera equipment of the student end in interactive connection and the first image acquired by the first camera equipment of the teacher end.
Preferably, the digital education internet learning system further comprises:
the first triggering module is used for analyzing a third image acquired by second camera equipment at the student end, and triggering timing when the student leaves a corresponding class listening area in the third image;
the cache module is used for caching, once the timing reaches a preset first time threshold, the second images transmitted to the student end after the timing was triggered;
the processing module is used for labeling the second images, during caching, with the character information obtained by audio recognition of the audio corresponding to the second images;
the second triggering module is used for analyzing a third image acquired by second camera equipment at the student end, and triggering cache playing judgment when the student enters a corresponding class listening area in the third image again; determining whether to trigger cache playing according to the situation of teaching courses by teachers and the cache situation;
and the cache playing module is used for playing the cached content according to a preset playing mode when the cache playing is determined to be triggered, and reminding the student, by means of a floating reminder box during playback, that cached content is being played.
Preferably, when the caching module performs caching processing on the second image, the following operations are further performed:
acquiring a fifth audio corresponding to the second image;
determining blank segments with an interval time greater than a preset second time threshold in the fifth audio;
deleting the second image corresponding to the blank segment;
and/or,
identifying the fifth audio to obtain character information;
determining characters to be deleted from the character information based on a preset deleted character table;
and deleting the second image corresponding to the character to be deleted.
Preferably, the second triggering module determines whether to trigger the cache play according to the situation of the course taught by the teacher and the cache situation, and executes the following operations:
when the situation of the teacher teaching the course is that the course has entered the teaching summary, homework assignment or question answering stage, the cache play is not triggered;
and/or,
when the situation of the teacher teaching the course is that the course has entered the interaction stage and the time since the interaction question was posed has not reached a preset third time threshold, triggering the cache play;
and/or,
when the situation of a teacher teaching course is an explanation stage, determining the remaining time of the explanation stage;
determining the playing time of the cache content;
and when the playing time is greater than or equal to the preset multiple of the remaining time, not triggering the cache play, otherwise triggering the cache play.
Preferably, the second triggering module determines the remaining time of the explanation phase and performs the following operations:
acquiring character information corresponding to the speech already explained by the teacher in the explanation stage as information to be processed;
acquiring a plurality of historical explanation records of a teacher, which are the same as the explanation content of an explanation stage, from a big data platform based on information to be processed;
analyzing a plurality of historical explanation records, and determining the remaining time corresponding to the current explanation position in each historical explanation record;
and determining the remaining time of the explanation stage of the current teaching course based on the remaining time corresponding to the current explanation position in each historical explanation record.
Preferably, when the buffer playing is triggered in the explanation stage, the buffer playing module uses a preset speed to play the buffer content, and the processing module synchronously performs buffer processing on the received second image until the buffer playing module finishes playing the buffer content and then switches to the normal playing mode.
Preferably, when the teacher teaching course enters the interaction stage and then triggers the cache play, the cache play module plays the cache content according to a preset play mode, and executes the following operations:
analyzing the cache content, and determining a second image corresponding to the interaction problem in the interaction stage;
playing a second image corresponding to the interaction problem in the interaction stage at a preset speed;
the analyzing the cache content and determining the second image corresponding to the interaction problem in the interaction stage includes:
acquiring a sixth audio of the teacher, starting from the keyword that triggers the conversion to the interaction stage and ending at the first pause longer than a preset fourth time threshold;
and determining a second image corresponding to the sixth audio, wherein the second image corresponds to the interaction problem in the interaction stage.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a digital education Internet learning system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of another digital education internet learning system according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
An embodiment of the present invention provides a digital education internet learning system, as shown in fig. 1, including:
the teacher-side monitoring module 1 is used for monitoring the teacher side 4 and determining the teaching state of a teacher corresponding to the teacher side 4;
the student end monitoring modules 2 are used for correspondingly monitoring the student ends 5 one by one and determining the lecture listening state of students corresponding to each student end 5;
and the interactive module 3 is used for receiving an interactive request of the teacher end 4 and/or the student end 5, establishing interactive connection and entering interactive connection display.
The working principle and the beneficial effects of the technical scheme are as follows:
the digital education internet learning system disclosed by the invention adopts a mode of teaching by teachers, and can be used for team and group teaching, namely, one teacher corresponds to a plurality of students to carry out teaching so as to replace online teachers who cannot enter a school to give lessons; specifically, the method comprises the following steps: the method comprises the steps that a virtual classroom is developed on a server, a teacher is permitted to enter the virtual classroom through a teacher end 4, a plurality of students enter the virtual classroom through a student end 5, the teacher places a small blackboard at home, the acquisition of images and audios of the teacher in class is achieved through first camera equipment and first audio acquisition equipment of the classroom end, and the acquired images and audios are transmitted to the student end 5 of each student; the second camera device and the second audio acquisition device of the student end 5 of each student acquire images and audio of the students, so that the interaction between teachers and students in the course teaching process of teachers and students is realized; the teaching system adopts a teacher monitoring module to monitor a teacher end 4, monitors teaching conditions of a teacher through a first camera device and a first audio acquisition device of the teacher end 4, and further determines teaching states of the teacher, wherein the teaching states comprise: explanation state, question answering state, interaction state, etc.; whether the student is authenticated to attend the class is monitored through the student end monitoring module 2, and whether the student is willing to participate in the interaction can be determined when the student interacts with the class; the interactive judgment is executed by the interactive module 3; the request can be initiated by the teacher at the teacher end 4, and the student at the student end 5 responds, so that the interactive connection is established; or the teacher at the teacher end 4 responds according to the request of the student at the student end 5, so as to establish interactive connection; through the interactive module 3, the interaction between the teacher and the students is realized during online teaching, and the teaching quality and the learning efficiency are improved.
In one embodiment, the teacher-end monitoring module 1 monitors the teacher end 4, determines a teaching state of the teacher corresponding to the teacher end 4, and performs the following operations:
acquiring a first image of a current teacher through a first camera device of the teacher end 4;
acquiring a second image displayed on the current student terminal 5;
matching the first image with the second image to determine whether the teacher gives lessons on site;
and/or,
acquiring a second image of the teacher after the teaching starts through the first camera device of the teacher end 4;
acquiring a first audio of a teacher after the teaching starts through a first audio acquisition device of the teacher end 4;
performing lip language identification on the second image to acquire first information;
performing audio identification on the first audio to acquire second information;
matching the first information with the second information to determine whether the teacher gives lessons on site;
and/or,
when the teacher gives a class on site, acquiring second audio of the teacher within a first time (for example, 15 seconds), at intervals of a preset first time, through the first audio acquisition equipment of the teacher end 4;
performing audio identification on the second audio to acquire third information;
extracting keywords from the third information based on a preset keyword extraction template;
determining a teaching state based on the extracted keywords and a preset teaching state trigger library;
the teaching state comprises one or more of an explanation state, an interaction state, a question answering state, a summary state and a homework assignment state.
The working principle and the beneficial effects of the technical scheme are as follows:
Whether the teacher is giving the class on site (that is, live, rather than by playing a recording) is determined by judging whether the first image acquired by the first camera equipment is consistent with the second image displayed at the student end 5, and whether the lip movements of the teacher in the second image match the first audio. Only live teaching allows interaction; interaction cannot be carried out by playing a video, and this also ensures that interactions triggered by students during the teacher's explanation can be handled, thereby improving the teacher's teaching quality and the students' learning effect. The teaching state is determined based on the extracted keywords and the preset teaching state trigger library. For example, "Would a classmate below please explain the solution idea of this problem" contains a keyword that triggers the interaction state, and in "Today we mainly explained ..." the phrase "mainly explained" corresponds to a keyword that triggers the summary state.
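The keyword-driven state switching described above could look roughly like the following sketch; the trigger phrases and the function name are illustrative assumptions, since the patent does not specify the contents of the keyword extraction template or the trigger library.

```python
# Hypothetical trigger library: keyword -> teaching state. The actual keyword
# extraction template and trigger library contents are not given in the patent.
STATE_TRIGGERS = {
    "explain the problem solving idea": "interaction",
    "any questions": "question answering",
    "today we mainly explained": "summary",
    "homework for today": "homework assignment",
}


def detect_teaching_state(transcript: str, current_state: str = "explanation") -> str:
    """Return the new teaching state if a trigger keyword appears in the
    recognized second-audio text, otherwise keep the current state."""
    text = transcript.lower()
    for keyword, state in STATE_TRIGGERS.items():
        if keyword in text:
            return state
    return current_state


# Example: a 15-second audio slice recognized as text.
print(detect_teaching_state("Would a classmate please explain the problem solving idea?"))
# -> "interaction"
```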
In one embodiment, the student end monitoring module 2 monitors the student end 5, determines the lecture listening state of the student corresponding to the student end 5, and performs the following operations:
after the time point corresponding to the keyword triggering the teaching state conversion, acquiring a third image of the student through the second camera equipment of the student end 5 at intervals of a preset second time (for example, 3 minutes), and acquiring third audio of the student through the second audio acquisition equipment of the student end 5;
recalling a preset lesson-listening state recognition library corresponding to the converted teaching state;
determining a class attending state based on the third image, the third audio and the class attending state recognition library;
wherein, based on the third image, the third audio and the lesson-attending state recognition library, determining a lesson-attending state comprises:
extracting the features of the third image to obtain a plurality of first feature values;
performing feature extraction on the third audio to obtain a plurality of second feature values;
constructing an identification feature set based on the plurality of first characteristic values and the plurality of second characteristic values;
matching the recognition feature set with each class-attending state set in a class-attending state recognition base one by one, and determining the class-attending state corresponding to the class-attending state set matched with the recognition feature set in the class-attending state recognition base.
The working principle and the beneficial effects of the technical scheme are as follows:
By monitoring the student's lecture-listening state, the system can remind the student to listen attentively. The lecture-listening state recognition library differs between teaching states, so to ensure accurate recognition the library is switched according to the current teaching state. For example, in the explanation state, the time a student spends watching the student end 5 is relatively longer than the time spent with head down taking notes; when the teacher asks students to work out a problem themselves during the interaction stage, the head-down time is longer than the time spent watching the student end 5. Of course, if a student watches areas other than the display screen and the desktop for too long (for example, more than 3 minutes), the lecture-listening state can be regarded as poor. Feature extraction is performed on the third image to obtain a plurality of first feature values, including: time spent watching the display screen, time spent watching the desktop, time spent watching other areas, maximum eye-closing duration, nodding frequency, maximum nodding speed, nodding amplitude, and the like. Feature extraction is performed on the third audio to obtain a plurality of second feature values, for example: whether audio of the student is present (the second feature value representing this feature is 1 if present and 0 otherwise); whether the background sound contains the sound of a television or the like (1 if present, 0 otherwise); and whether the audio is the student repeating the teacher's words (1 if so, 0 otherwise). The first feature values and second feature values are arranged in order to form the recognition feature set. The recognition feature set is matched one by one against each lecture-listening state set in the lecture-listening state recognition library, and the lecture-listening state corresponding to the matched set is determined; the lecture-listening states in the library are associated one to one with the lecture-listening state sets. The matching can use the similarity between the recognition feature set and a lecture-listening state set: the recognition feature set is determined to match the state set whose similarity is the largest in the library and greater than 0.90. The similarity is calculated as follows:
(The similarity formula appears as an image in the original publication.) In the formula, XSD denotes the similarity between the recognition feature set and the lecture-listening state set; S_i is the i-th data value in the recognition feature set; T_i is the i-th data value in the lecture-listening state set; and n is the total number of data values.
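The formula itself is not reproduced in the text of the publication. Given the variables defined above (paired data values S_i and T_i, a count n, and a best-match threshold of 0.90), one plausible reconstruction is a cosine similarity; this is an assumption, not the formula actually claimed:

```latex
% Assumed form only: the original formula image is not reproduced in the text.
% XSD: similarity between the recognition feature set S and a lecture-listening state set T.
\[
XSD = \frac{\sum_{i=1}^{n} S_i \, T_i}
           {\sqrt{\sum_{i=1}^{n} S_i^{2}} \, \sqrt{\sum_{i=1}^{n} T_i^{2}}}
\]
```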
In one embodiment, the interactive module 3 receives an interactive request from the teacher end 4 and/or the student end 5, establishes an interactive connection, and enters an interactive connection display to perform the following operations:
when the teaching state is the interactive state, acquiring a third image through second camera equipment of each student terminal 5;
determining a student list participating in the interaction based on each third image and a preset interaction participation recognition library; for example, students who want to participate in the interaction raise their hands, and the students who raise their hands are counted to form the list of students wishing to participate;
outputting the student list to the teacher end 4; the teacher then selects a student from the list, either with the mouse or by speaking the student's name directly into the first audio acquisition equipment;
acquiring fourth audio of the teacher within a preset second time (for example: 1 minute) through a first audio acquisition device of the teacher end 4;
performing audio identification on the fourth audio to acquire fourth information;
determining whether the fourth information contains the names of students in the student list based on the student list in class;
when the student name is contained, establishing an interactive connection between the student end 5 corresponding to that student name and the teacher end 4, and displaying, side by side in a floating manner on the display screens of the teacher end 4 and each student end 5, the third image acquired by the second camera equipment of the student end 5 in interactive connection and the first image acquired by the first camera equipment of the teacher end 4;
or,
when the teaching state is a non-interactive state, acquiring a third image through second camera equipment of each student terminal 5;
and determining the target student who initiated the interaction based on the third images and a preset interaction participation recognition library, for example: in a non-interactive state, identifying whether a student raises a hand; when a student raises a hand, that student is the target student;
sending the third image of the target student to the teacher end 4; to help the teacher identify the target student, the student's name can be overlaid on the third image as a floating label;
acquiring a fourth audio of the teacher in a preset second time through a first audio acquisition device of the teacher end 4;
performing audio identification on the fourth audio to acquire fourth information;
determining whether the fourth information contains a student name of the target student;
when the student name of the target student is contained, establishing an interactive connection between the student end 5 corresponding to the target student and the teacher end 4, and displaying, side by side in a floating manner on the display screens of the teacher end 4 and each student end 5, the third image acquired by the second camera equipment of the student end 5 in interactive connection and the first image acquired by the first camera equipment of the teacher end 4.
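The name-matching step that closes both branches above can be sketched as follows; the function and variable names are assumptions, and real speech-recognition output would of course be noisier than a plain substring match.

```python
from typing import List, Optional


def pick_student_for_interaction(fourth_information: str,
                                 candidate_names: List[str]) -> Optional[str]:
    """Return the first candidate student name that appears in the recognized
    teacher speech, or None if no candidate is named (no connection is made)."""
    for name in candidate_names:
        if name in fourth_information:
            return name
    return None


# Example: the teacher says "Zhang San, please share your idea".
chosen = pick_student_for_interaction("Zhang San, please share your idea",
                                      ["Li Si", "Zhang San"])
if chosen is not None:
    print(f"Establish interactive connection with the student end of {chosen}")
```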
In one embodiment, the digital education internet learning system, as shown in fig. 2, further comprises:
the first triggering module 11 is configured to analyze a third image acquired by a second camera device at the student end 5, and trigger timing when the student leaves a corresponding lecture listening area in the third image;
the caching module 12 is configured to cache, once the timing reaches a preset first time threshold, the second images transmitted to the student end 5 after the timing was triggered;
the processing module 13 is configured to label the second images, during caching, with the character information obtained by audio recognition of the audio corresponding to the second images;
the second triggering module 14 is configured to analyze a third image acquired by a second camera device at the student end 5, and trigger a cache play judgment when the student enters a corresponding lecture listening area in the third image again; determining whether to trigger cache playing according to the situation of teaching courses by teachers and the cache situation;
and the cache playing module 15 is configured to play the cached content according to a preset playing mode when the cache playing is determined to be triggered, and to remind the student, by means of a floating reminder box during playback, that cached content is being played.
The working principle and the beneficial effects of the technical scheme are as follows:
When the student leaves the lecture-listening area in the third image (for example, the distance between the student and the student end 5 is determined by recognizing the third image, and the student is regarded as having left the lecture-listening area when that distance exceeds 2 meters), the timing of the first triggering module 11 is triggered. When the student has been away from the lecture-listening area for a first time threshold or longer (for example, 1 minute), the second images transmitted to the student end 5 are cached. When the student returns, the cached content is played back, so that a student who has to step away mid-class can quickly catch up with the teaching progress. The cached playback can be performed at 3x, 2x or 1.5x speed so as to quickly catch up with the current teaching progress at the teacher end 4. A floating reminder box is used during playback to remind the student that cached content is being played; the floating reminder box is marked with text such as "playing cached content ...".
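A minimal sketch of the leave-detection and caching trigger described above is given below, assuming an upstream component already estimates the student's distance from the screen; the thresholds, names, and the simplification that caching starts only once the threshold is reached are all assumptions.

```python
import time

LEAVE_DISTANCE_M = 2.0        # example threshold from the description
FIRST_TIME_THRESHOLD_S = 60   # example: 1 minute


class LeaveCacheTrigger:
    """Start caching the second images once the student has been away from the
    lecture-listening area for longer than the first time threshold."""

    def __init__(self) -> None:
        self.left_at = None
        self.caching = False
        self.cache = []

    def on_frame(self, distance_to_screen_m: float, second_image) -> None:
        now = time.monotonic()
        if distance_to_screen_m > LEAVE_DISTANCE_M:
            if self.left_at is None:
                self.left_at = now                      # first triggering module: start timing
            if not self.caching and now - self.left_at >= FIRST_TIME_THRESHOLD_S:
                self.caching = True                     # cache module: begin buffering
        else:
            self.left_at = None                          # student is back in the listening area
        if self.caching:
            # (stopping the cache once cached playback begins is omitted from this sketch)
            self.cache.append(second_image)
```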
In one embodiment, the caching module 12 further performs the following operations when performing the caching process on the second image:
acquiring a fifth audio corresponding to the second image;
determining blank segments in the fifth audio whose duration is greater than a preset second time threshold (for example, any value from 5 to 10 seconds); blank segments in the audio correspond to pauses in the teacher's speech;
deleting the second image corresponding to the blank segment;
and/or,
identifying the fifth audio to obtain character information;
determining characters to be deleted from the character information based on a preset deleted-character table; the deleted-character table contains words that have no practical meaning for teaching, such as "wait a moment"; deleting such meaningless words further reduces the cached content;
and deleting the second image corresponding to the character to be deleted.
The working principle and the beneficial effects of the technical scheme are as follows:
By deleting such content from the cache, the actual playing time of the cached content is reduced, which, together with speed-up playback, makes it easier to catch up with the live teaching progress of the teacher at the teacher end 4.
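The cache-trimming rules above (dropping long blank segments and filler phrases) can be sketched as follows; the segment representation, threshold value and filler list are assumptions.

```python
from dataclasses import dataclass
from typing import List

SECOND_TIME_THRESHOLD_S = 5.0                   # example: any value from 5 to 10 s
FILLER_PHRASES = {"wait a moment", "um", "uh"}  # hypothetical deleted-character table


@dataclass
class CacheSegment:
    start_s: float
    end_s: float
    text: str            # "" for a silent (blank) segment


def trim_cache(segments: List[CacheSegment]) -> List[CacheSegment]:
    """Drop blank segments longer than the second time threshold and segments
    whose recognized text is only a filler phrase; the second images covering
    these time ranges would be deleted from the cache as well."""
    kept = []
    for seg in segments:
        duration = seg.end_s - seg.start_s
        if seg.text == "" and duration > SECOND_TIME_THRESHOLD_S:
            continue                                # blank segment: drop
        if seg.text.strip().lower() in FILLER_PHRASES:
            continue                                # meaningless filler: drop
        kept.append(seg)
    return kept
```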
In one embodiment, the second triggering module 14 determines whether to trigger the cache play according to the situation of the teacher teaching the course and the cache situation, and performs the following operations:
when the teacher's course has entered the teaching summary, homework assignment or question answering stage, the cache play is not triggered; the teaching summary, homework assignment and question answering stages belong to the end of the course, so cached playback is not needed, and students can watch the cached content when reviewing the course after it ends;
and/or,
when the teacher's course has entered the interaction stage and the time since the interaction question was posed has not reached a preset third time threshold (for example, any value from 1 minute to 5 minutes), the cache play is triggered; cached playback is only triggered near the start of the interaction stage, and once the interaction stage has been under way for a while it no longer needs to be triggered, because the interaction stage usually follows the explanation of a knowledge point and taking part in the interaction helps students better understand that knowledge point; in that case, synchronous playback of the current teaching progress at the teacher end 4 starts directly without triggering cached playback, and the students can play the cached content after the course ends;
and/or,
when the situation that a teacher teaches a course is an explanation stage, determining the remaining time of the explanation stage;
determining the playing time of the cache content;
and when the playing time is greater than or equal to the preset multiple of the remaining time, the cache play is not triggered; otherwise, the cache play is triggered. For example, if the playing time is 1 minute and the preset speed is 2x, cached playback is carried out when the remaining time is more than 30 seconds and not carried out otherwise; in other words, if the remaining time were exactly 30 seconds, the cached content played at the preset 2x speed would finish just as the explanation stage ends, so the student would still have seen all of the cached content by the end of the explanation stage.
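The trigger rule for the explanation stage reduces to a simple comparison, sketched below with assumed parameter names; it matches the 1-minute / 2x / 30-second example given above.

```python
def should_trigger_cache_play(play_time_s: float,
                              remaining_time_s: float,
                              speed_multiple: float = 2.0) -> bool:
    """Trigger cached playback only if the cached content, played at the preset
    speed multiple, can finish before the explanation stage ends."""
    # Equivalent to: do not trigger when play_time >= speed_multiple * remaining_time.
    return play_time_s < speed_multiple * remaining_time_s


# Example from the description: 60 s of cached content at 2x speed needs 30 s,
# so playback is triggered only while more than 30 s of the explanation remain.
print(should_trigger_cache_play(60, 40))   # True
print(should_trigger_cache_play(60, 30))   # False
```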
In one embodiment, the second triggering module 14 determines the remaining time of the explanation phase by:
acquiring character information corresponding to the speech explained by the teacher in the explanation stage as information to be processed;
acquiring a plurality of historical explanation records of a teacher, which have the same explanation content as that of an explanation stage, from a big data platform based on information to be processed;
analyzing a plurality of historical explanation records, and determining the remaining time corresponding to the current explanation position in each historical explanation record;
and determining the remaining time of the explanation stage of the current course based on the remaining time corresponding to the current explanation position in each historical explanation record; for example, the average of the remaining times in the historical explanation records can be taken as the current remaining time.
The working principle and the beneficial effects of the technical scheme are as follows:
By comprehensively analyzing the historical data of the same explanation content taught by the teacher, the remaining time is predicted, which ensures the accuracy of cache-play triggering. In addition, another way of retrieving the historical data is to number the knowledge point corresponding to the explanation stage, for example 6-1-1-1, denoting the first knowledge point of the first unit of the first semester of grade six; assigning numbers to the explanation stages enables fast lookup.
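A minimal sketch of the remaining-time estimate, using the averaging rule mentioned above; the function name and the list-of-seconds input format are assumptions.

```python
from statistics import mean
from typing import List


def estimate_remaining_time(historical_remaining_s: List[float]) -> float:
    """Average the remaining time observed at the same explanation position in
    the teacher's historical records of the same content (one possible rule
    mentioned in the description)."""
    if not historical_remaining_s:
        raise ValueError("no matching historical explanation records")
    return mean(historical_remaining_s)


# Example: three earlier lectures of the same knowledge point had
# 310 s, 295 s and 330 s left at the current explanation position.
print(estimate_remaining_time([310.0, 295.0, 330.0]))  # ~311.7 s
```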
In order to implement switching between cached playback and real-time playback, in one embodiment, when the cache play is triggered in the explanation stage, the cache playing module 15 plays the cached content at a preset speed (for example, 3x), while the processing module 13 synchronously caches the newly received second images, until the cache playing module 15 finishes playing the cached content and then switches to the normal playing mode.
In one embodiment, when the teacher teaching course enters the interactive phase and triggers the cache playing, the cache playing module 15 plays the cache content according to the preset playing mode, and performs the following operations:
analyzing the cache content, and determining a second image corresponding to the interaction problem in the interaction stage;
playing a second image corresponding to the interaction problem in the interaction stage at a preset speed (for example, 3 times speed);
the analyzing the cache content and determining the second image corresponding to the interaction problem in the interaction stage includes:
acquiring a sixth audio of the teacher, starting from the keyword that triggered the conversion to the interaction stage and ending at the first pause longer than a preset fourth time threshold (for example, 20 seconds);
and determining a second image corresponding to the sixth audio, wherein the second image corresponds to the interaction problem in the interaction stage.
The working principle and the beneficial effects of the technical scheme are as follows:
When cached playback is triggered in the interaction stage, the playback content is screened to determine the interaction question, and the interaction question is played first, so that the student can quickly join the interaction; this guarantees the student's participation in the interaction and improves learning efficiency. The rest of the cached content is played after the interaction stage ends.
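The screening of cached content around the interaction question can be sketched as follows; the segment format, threshold constant and function name are assumptions, and a real implementation would work on timestamped video frames rather than text segments.

```python
from typing import List, Tuple

FOURTH_TIME_THRESHOLD_S = 20.0   # example pause threshold from the description


def split_interaction_question(cache: List[Tuple[float, float, str]],
                               trigger_time_s: float):
    """cache: ordered (start_s, end_s, recognized_text) segments of cached content.
    The interaction-question part runs from the trigger keyword's time until the
    first pause longer than the fourth time threshold; it is returned first so it
    can be played back before the rest of the cache."""
    question, rest = [], []
    collecting = False
    prev_end = None
    for start, end, text in cache:
        if not collecting and not question and start >= trigger_time_s:
            collecting = True
            prev_end = None          # pauses before the trigger keyword do not count
        if collecting and prev_end is not None and start - prev_end > FOURTH_TIME_THRESHOLD_S:
            collecting = False       # long pause: the question statement has ended
        (question if collecting else rest).append((start, end, text))
        prev_end = end
    return question, rest


# Usage: play `question` at the preset speed first, then `rest` after the
# interaction stage finishes.
```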
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A digital education internet learning system, comprising:
the teacher end monitoring module is used for monitoring a teacher end and determining the teaching state of a teacher corresponding to the teacher end;
the student end monitoring modules are used for correspondingly monitoring student ends one by one and determining the lecture listening state of students corresponding to each student end;
and the interactive module is used for receiving the interactive requests of the teacher end and/or the student end, establishing interactive connection and entering interactive connection display.
2. The digital education internet learning system as claimed in claim 1, wherein the teacher side monitoring module monitors the teacher side, determines the teaching status of the teacher corresponding to the teacher side, and performs the following operations:
acquiring a current first image of the teacher through first camera equipment at the teacher end;
acquiring a second image displayed on the student terminal at present;
matching the first image with the second image to determine whether the teacher gives lessons on site;
and/or,
acquiring a second image of the teacher after the teaching starts through the first camera equipment of the teacher end;
acquiring a first audio of the teacher after the teaching starts through first audio acquisition equipment of the teacher end;
performing lip language identification on the second image to acquire first information;
performing audio identification on the first audio to acquire second information;
matching the first information with the second information to determine whether the teacher gives lessons on site or not;
and/or,
when the teacher gives lessons on site, second audio of the teacher in the first time is acquired at intervals of preset first time through first audio acquisition equipment at the teacher end;
performing audio identification on the second audio to acquire third information;
extracting keywords from the third information based on a preset keyword extraction template;
determining the teaching state based on the extracted keywords and a preset teaching state trigger library;
the teaching state comprises one or more of an explanation state, an interaction state, a question answering state, a summary state and a homework assignment state.
3. The internet learning system of digital education as claimed in claim 2, wherein the student end monitoring module monitors the student end, determines the lecture listening status of the student corresponding to the student end, and performs the following operations:
after the time point corresponding to the keyword for the teaching state conversion, acquiring a third image of the student through the second camera equipment of the student end at intervals of a preset second time; and acquiring third audio of the student through the second audio acquisition equipment of the student end;
recalling a preset lesson listening state recognition library corresponding to the converted lesson giving state;
determining the lecture attending state based on the third image, the third audio and the lecture attending state identification library;
wherein determining the lecture attending state based on the third image, the third audio and the lecture attending state recognition library comprises:
extracting features of the third image to obtain a plurality of first feature values;
performing feature extraction on the third audio to obtain a plurality of second feature values;
constructing an identification feature set based on a plurality of the first feature values and a plurality of the second feature values;
matching the identification feature set with each class-attending state set in the class-attending state identification library one by one, and determining the class-attending state corresponding to the class-attending state set matched with the identification feature set in the class-attending state identification library.
4. The digital education internet learning system as claimed in claim 3, wherein the interactive module receives the interactive request from the teacher end and/or the student end, establishes interactive connection and enters interactive connection display, and performs the following operations:
when the teaching state is an interactive state, acquiring a third image through second camera equipment of each student end;
determining a student list participating in interaction based on each third image and a preset interaction participation identification library;
outputting the student list to the teacher end;
acquiring a fourth audio of the teacher in the preset second time through a first audio acquisition device of the teacher end;
performing audio recognition on the fourth audio to acquire fourth information;
determining whether the fourth information contains names of students in a student list based on the student list in class;
when the name of the student is contained, establishing interactive connection between the student end corresponding to the name of the student and the teacher end, and displaying a third image acquired by second camera equipment of the student end in interactive connection and a first image acquired by first camera equipment of the teacher end in a suspended mode on display pictures of the teacher end and the student ends;
or,
when the teaching state is a non-interactive state, acquiring a third image through second camera equipment of each student end;
determining target students initiated by interaction based on the third images and a preset interaction participation identification library;
sending the third image of the target student to the teacher end;
acquiring a fourth audio of the teacher in the preset second time through a first audio acquisition device of the teacher end;
performing audio recognition on the fourth audio to acquire fourth information;
determining whether the fourth information contains a student name of the target student;
when the student name of the target student is contained, establishing the interactive connection between the student end corresponding to the target student and the teacher end, and displaying, side by side in a floating manner on the display screens of the teacher end and each student end, the third image acquired by the second camera equipment of the student end in interactive connection and the first image acquired by the first camera equipment of the teacher end.
5. The digital education internet learning system as claimed in claim 1, further comprising:
the first triggering module is used for analyzing a third image acquired by second camera equipment at the student end, and triggering timing when the student leaves a corresponding class listening area in the third image;
the cache module is used for caching, once the timing reaches a preset first time threshold, the second images transmitted to the student end after the timing was triggered;
the processing module is used for labeling the second images, during caching, with the character information obtained by audio recognition of the audio corresponding to the second images;
the second triggering module is used for analyzing a third image acquired by second camera equipment at the student end, and triggering cache playing judgment when the student enters a corresponding class listening area in the third image again; determining whether to trigger cache playing according to the condition of teaching courses by teachers and the cache condition;
and the cache playing module is used for playing the cached content according to a preset playing mode when the cache playing is determined to be triggered, and reminding the student, by means of a floating reminder box during playback, that cached content is being played.
6. The digital education internet learning system of claim 5, wherein the buffering module, in buffering the second image, further performs:
acquiring a fifth audio corresponding to the second image;
determining blank segments with an interval time greater than a preset second time threshold in the fifth audio;
deleting the second image corresponding to the blank segment;
and/or,
identifying the fifth audio to obtain character information;
determining characters to be deleted from the character information based on a preset deleted character table;
and deleting the second image corresponding to the character to be deleted.
7. The digital education internet learning system as claimed in claim 6, wherein the second triggering module determines whether to trigger the cache play according to the situation of the teacher's teaching lesson and the cache situation, and performs the following operations:
when the situation of the teacher teaching the course is that a teaching summary, homework assignment or question answering stage has been entered, the cache play is not triggered;
and/or,
when the situation of the teacher teaching the course is that the course has entered an interaction stage and the time since the interaction question was posed has not reached a preset third time threshold, triggering the cache play;
and/or,
when the situation that the teacher teaches the course is an explanation stage, determining the remaining time of the explanation stage;
determining the playing time of the cache content;
and when the playing time is greater than or equal to the preset multiple of the remaining time, not triggering the cache play, otherwise triggering the cache play.
8. The digital educational internet learning system of claim 7, wherein a second trigger module determines the remaining time of the explanation phase by:
acquiring character information corresponding to the speeches explained by the teacher in the explanation stage as information to be processed;
acquiring a plurality of historical explanation records of the teacher, which are the same as the explanation content of the explanation stage, from a big data platform based on the information to be processed;
analyzing a plurality of historical explanation records, and determining the remaining time corresponding to the current explanation position in each historical explanation record;
and determining the residual time of the explanation stage of the current teaching course based on the residual time corresponding to the current explanation position in each historical explanation record.
9. The internet learning system of claim 7, wherein when the buffer playing is triggered in the explanation stage, the buffer playing module plays the buffer content at a preset speed, and the processing module synchronously performs the buffer processing on the received second image until the buffer playing module finishes playing the buffer content and then switches to the normal playing mode.
10. The digital education internet learning system as claimed in claim 7, wherein when the teacher teaching lesson enters the interactive stage and triggers the cache play, the cache play module plays the cache contents according to a preset play mode, and performs the following operations:
analyzing the cache content, and determining a second image corresponding to the interaction problem in the interaction stage;
playing a second image corresponding to the interaction problem in the interaction stage at a preset speed;
analyzing the cache content, and determining a second image corresponding to the interaction problem in the interaction stage, including:
acquiring a sixth audio of the teacher, starting from the keyword that triggers the conversion to the interaction stage and ending at the first pause longer than a preset fourth time threshold;
and determining the second image corresponding to the sixth audio, wherein the second image is the second image corresponding to the interaction problem in the interaction stage.
CN202210486188.XA 2022-05-06 2022-05-06 Digital education internet learning system Withdrawn CN114936952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210486188.XA CN114936952A (en) 2022-05-06 2022-05-06 Digital education internet learning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210486188.XA CN114936952A (en) 2022-05-06 2022-05-06 Digital education internet learning system

Publications (1)

Publication Number Publication Date
CN114936952A true CN114936952A (en) 2022-08-23

Family

ID=82864490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210486188.XA Withdrawn CN114936952A (en) 2022-05-06 2022-05-06 Digital education internet learning system

Country Status (1)

Country Link
CN (1) CN114936952A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116259004A (en) * 2023-01-09 2023-06-13 盐城工学院 Student learning state detection method and system applied to online education
CN116259004B (en) * 2023-01-09 2023-08-15 盐城工学院 Student learning state detection method and system applied to online education

Similar Documents

Publication Publication Date Title
CN107203953B (en) Teaching system based on internet, expression recognition and voice recognition and implementation method thereof
WO2018072390A1 (en) Classroom teaching recording and requesting method and system
US20190340944A1 (en) Multimedia Interactive Teaching System and Method
WO2019095450A1 (en) Teacher-student multi-dimensional matching method and system for use in online teaching
CN209980508U (en) Wisdom blackboard, and wisdom classroom's teaching system
JP2002132964A (en) Education guidance system
CN113225575B (en) Intelligent classroom question answering interaction system and method
CN110619771A (en) On-site interactive question answering device and method for two teachers in classroom
CN112581813A (en) Intelligent interactive remote education system based on big data
CN111383493A (en) English auxiliary teaching system based on social interaction and data processing method
CN111564066A (en) Intelligent English teaching system for English teaching
CN109754653B (en) Method and system for personalized teaching
KR20000072429A (en) Realtime, interactive multimedia education system and method in online environment
CN114841841A (en) Intelligent education platform interaction system and interaction method for teaching interaction
KR20070006742A (en) Language teaching method
CN114936952A (en) Digital education internet learning system
CN111369408A (en) Hospital home intern teaching management system and method
CN113301369B (en) Interaction system and method for recorded and broadcast videos in intelligent classroom
CN114155755A (en) System for realizing follow-up teaching by using internet and realization method thereof
CN112118490A (en) Cloud classroom online education interaction system
CN112185195A (en) Method and device for controlling remote teaching classroom by AI (Artificial Intelligence)
CN111050111A (en) Online interactive learning communication platform and learning device thereof
WO2022203123A1 (en) Video education content providing method and device on basis of artificially intelligent natural language processing using character
CN115278272A (en) Education practice online guidance system and method
US20160372154A1 (en) Substitution method and device for replacing a part of a video sequence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220823