CN114125537B - Discussion method, device, medium and electronic equipment for live broadcast teaching - Google Patents

Info

Publication number
CN114125537B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN202111440260.7A
Other languages
Chinese (zh)
Other versions
CN114125537A (en)
Inventor
王珂晟
黄劲
黄钢
许巧龄
Current Assignee
Oook Beijing Education Technology Co ltd
Original Assignee
Oook Beijing Education Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Oook Beijing Education Technology Co ltd filed Critical Oook Beijing Education Technology Co ltd
Priority to CN202111440260.7A priority Critical patent/CN114125537B/en
Publication of CN114125537A publication Critical patent/CN114125537A/en
Application granted granted Critical
Publication of CN114125537B publication Critical patent/CN114125537B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/338Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/08Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The disclosure provides a discussion method, device, medium and electronic equipment for live teaching. The method performs semantic analysis on the current speaking audio of a discussion between the lecturing teacher and a speaking student to obtain at least two current keywords; prioritizes those current keywords to obtain a current keyword queue; and uses the current keyword queue to retrieve the corresponding knowledge text content. The first video of the teacher, the second video of the speaking student, and the knowledge text content are then displayed simultaneously on the multimedia blackboard. Students attending the live class can thus obtain the knowledge text content under discussion promptly and accurately, which helps them grasp the knowledge points of the discussion and deepens their understanding.

Description

Discussion method, device, medium and electronic equipment for live broadcast teaching
Technical Field
The disclosure relates to the field of artificial intelligence, and in particular to a discussion method, device, medium, and electronic equipment for live broadcast teaching.
Background
With the development of computer technology, Internet-based online teaching has emerged.
Online teaching is a teaching mode that uses the network as the communication tool between teachers and students. It includes live-broadcast teaching and recorded-broadcast teaching. Live teaching resembles the traditional classroom: students listen to the teacher's lecture at the same time, and teachers and students can communicate simply. Recorded teaching uses Internet services to store courses recorded in advance by the teacher on a server, and students can order and watch them at any time to learn. Its advantage is that teaching can take place around the clock; each student can set his or her own learning time, content, and pace, and can download the learning content from the Internet at any time. In online teaching, each course may have a large number of students attending.
At present there is a live teaching mode in which the attending students gather in a live classroom and participate in the teaching activities of a remote lecturing teacher through the display screen of a multimedia blackboard. When the teacher and the students in the live classroom discuss a question, the students can only see the teacher's image through the multimedia blackboard. Such a discussion can proceed only as video; the knowledge involved must be looked up on the spot by leafing through books to find the knowledge points that support the points being argued. This is not only time-consuming but also makes the discussion cumbersome.
Accordingly, the present disclosure provides a discussion method of live teaching to solve one of the above technical problems.
Disclosure of Invention
The disclosure aims to provide a discussion method, a device, a medium and electronic equipment for live teaching, which can solve at least one technical problem mentioned above. The specific scheme is as follows:
according to a specific embodiment of the present disclosure, in a first aspect, the present disclosure provides a discussion method of live teaching, including:
receiving a first video and a second video of live teaching, wherein the first video is a teaching video of the lecturing teacher collected by a teaching terminal, and the second video is a video, collected by a multimedia blackboard, of a speaker in a remote teaching classroom discussing a problem with the lecturing teacher;
acquiring current speaking audio based on the first video or the second video;
performing semantic analysis on the current speaking audio to obtain at least two current keywords;
prioritizing the at least two current keywords to obtain a current keyword queue;
performing similarity matching between the current keyword queue and a plurality of first keyword groups in a knowledge information set, and obtaining at least one second keyword group that meets a preset similarity condition together with the knowledge text content corresponding to each second keyword group, wherein the first keyword groups are prioritized in the same manner as the current keyword queue is generated;
and transmitting the first video, the second video and the at least one knowledge text content to the multimedia blackboard, and triggering the multimedia blackboard to display the first video, the second video and the at least one knowledge text content.
According to a second aspect of the present disclosure, there is provided a discussion device for live teaching, including:
a receiving unit, configured to receive a first video and a second video of live teaching, wherein the first video is a teaching video of the lecturing teacher collected by a teaching terminal, and the second video is a video, collected by a multimedia blackboard, of a speaker in a remote teaching classroom discussing a problem with the lecturing teacher;
a first acquisition unit, configured to acquire current speaking audio based on the first video or the second video;
a second acquisition unit, configured to perform semantic analysis on the current speaking audio to obtain at least two current keywords;
an ordering unit, configured to prioritize the at least two current keywords to obtain a current keyword queue;
a matching unit, configured to perform similarity matching between the current keyword queue and a plurality of first keyword groups in a knowledge information set, and to obtain at least one second keyword group that meets a preset similarity condition together with the knowledge text content corresponding to each second keyword group, wherein the first keyword groups are prioritized in the same manner as the current keyword queue is generated;
and a transmission unit, configured to transmit the first video, the second video and the at least one knowledge text content to the multimedia blackboard, and to trigger the multimedia blackboard to display the first video, the second video and the at least one knowledge text content.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a discussion method of live teaching as defined in any of the above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: one or more processors; storage means for storing one or more programs that when executed by the one or more processors cause the one or more processors to implement the discussion method of live teaching as claimed in any preceding claim.
Compared with the prior art, the scheme of the embodiment of the disclosure has at least the following beneficial effects:
the disclosure provides a discussion method, a discussion device, a discussion medium and electronic equipment for live teaching. The method comprises the steps of performing semantic analysis on current speaking audio discussed between a teaching teacher and speaking students to obtain at least two current keywords; then, priority ranking is carried out based on at least two current keywords, and a current keyword queue is obtained; and obtaining the knowledge text content corresponding to the current keyword queue through the current keyword queue. And then the first video of the teaching teacher, the second video of the speaking student and the knowledge text content are simultaneously displayed on the multimedia blackboard. The students in live class can timely and accurately obtain the knowledge text content of the discussion, thereby assisting the students in class to accurately obtain the knowledge points of the discussion content and increasing the understanding of the students in class to the knowledge.
Drawings
Fig. 1 shows a schematic diagram of the composition of a device for live teaching;
FIG. 2 illustrates a flow chart of a discussion method of live teaching in accordance with an embodiment of the present disclosure;
FIG. 3 shows a block diagram of a unit of discussion device of live teaching in accordance with an embodiment of the present disclosure;
fig. 4 illustrates a schematic diagram of an electronic device connection structure according to an embodiment of the present disclosure;
11-multimedia blackboard, 12-teaching terminal, and 13-server;
111-first display module, 112-second display module, 113-third display module, 114-first display area, 115-second display area, 116-third display area.
Detailed Description
For the purpose of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the disclosure. Based on the embodiments in this disclosure, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of protection of this disclosure.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used in the embodiments of this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "plurality" generally means at least two.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure, these descriptions should not be limited to these terms. These terms are only used to distinguish one from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of embodiments of the present disclosure.
The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a product or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such product or apparatus. Without further limitation, an element preceded by "comprising a(n) ..." does not exclude the presence of additional identical elements in the product or apparatus comprising that element.
In particular, the symbols and/or numerals present in the description, if not marked in the description of the figures, are not numbered.
Alternative embodiments of the present disclosure are described in detail below with reference to the drawings.
Example 1
This is an embodiment provided by the present disclosure, namely an embodiment of a discussion method of live teaching.
Embodiments of the present disclosure are described in detail below in conjunction with fig. 1 and 2.
As shown in fig. 1, the apparatus for live teaching includes: a teaching terminal 12, a multimedia blackboard 11 and a server 13. The teaching terminal 12 collects the teaching video of the lecturing teacher; the multimedia blackboard 11 collects the video of a speaker in the remote teaching classroom who discusses problems with the teacher. The discussion method of live teaching described in this application is applied to the server 13.
As shown in fig. 2, in step S101, a first video and a second video of live teaching are received.
The first video, collected by the teaching terminal 12, shows the lecturing teacher; the second video, collected by the multimedia blackboard 11, shows the speaking student in the remote classroom who participates in the discussion with the teacher.
Step S102, acquiring the current speech audio based on the first video or the second video.
Because the lecturing teacher and the speaking students are discussing a problem, no matter how many speaking students participate on the multimedia blackboard 11 side, the audio in the first video and the audio in the second video always alternate in a question-and-answer fashion; that is, only one speaker's audio exists at any time point, and multiple parties never speak simultaneously. The embodiment of the disclosure takes the speaking audio in one of the videos as the current speaking audio: at any moment, the speech is either in the audio of the first video or in the audio of the second video.
Step S103, performing semantic analysis on the current speaking audio to obtain at least two current keywords.
The current keyword is a word capable of characterizing semantic features of the audio of the current utterance.
Performing semantic analysis on the current speaking audio to obtain at least two current keywords can be understood as extracting, from the current speaking audio, current keywords capable of representing its semantic features. Because a single current keyword can hardly represent those features, the embodiment of the disclosure extracts at least two current keywords from the current speaking audio.
In some specific embodiments, the semantic analysis is performed on the current speech audio to obtain at least two current keywords, including the following steps:
and step S103-1, performing sentence breaking analysis on the current speaking audio to acquire at least one first audio.
The first audio carries one complete semantic unit; that is, a first audio is a stretch of audio that expresses a complete meaning. If the first audio were transcribed as text, that text would end with a period, question mark, or exclamation mark.
Performing sentence-breaking analysis on the current speaking audio means dividing it into several segments, each of which is a first audio. If a stretch of the current speaking audio cannot express a complete meaning, no first audio is generated from it. In this way, at least one first audio is acquired.
In some specific embodiments, the sentence breaking analysis is performed on the current speech audio to obtain at least one first audio, including the following steps:
and step S103-1a, inputting the current speaking audio into a trained sentence-breaking analysis model to obtain at least one first audio.
The sentence-breaking analysis model may be obtained from previous historical speaking audio; for example, it may be trained using historical speaking audio as training samples. The process of performing sentence-breaking analysis on the current speaking audio with this model is not described in detail in this embodiment and may be implemented with reference to various implementations in the prior art.
Of course, other sentence-breaking analysis methods may also be used on the current speaking audio; the embodiments of the present disclosure are not limited in this respect.
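As an illustrative sketch only (the disclosure uses a trained sentence-breaking model; this punctuation-based splitter and its names are assumptions), dividing a transcript of the current speaking audio into complete-meaning segments could look like:

```python
import re

def split_into_first_audios(transcript: str) -> list[str]:
    """Split a speech transcript into complete-sentence segments.

    Each returned segment stands in for one "first audio": a span ending
    with a period, question mark, or exclamation mark (ASCII or CJK).
    A trailing fragment with no terminator is dropped, mirroring the rule
    that no first audio is generated from an incomplete utterance.
    """
    segments = re.findall(r'[^.?!。？！]+[.?!。？！]', transcript)
    return [s.strip() for s in segments]
```

A trained model would instead segment on acoustic pauses and semantics; this sketch only shows the shape of the interface.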
Step S103-2, performing semantic analysis on the first audio to obtain the current keyword group of the first audio.
The current keyword group includes at least two current keywords and preserves the way they combine, i.e., the logical relation of the current keywords within the first audio. Likewise, the interrelationship among the current keyword groups of the individual first audios preserves the logical relation of those groups within the current speaking audio.
In some specific embodiments, the semantic analysis is performed on the first audio to obtain a current keyword group of the first audio, including the following steps:
step S103-2a, inputting the first audio into the trained semantic analysis model to obtain at least two current keywords.
The semantic analysis model may be obtained from previous historical first audio; for example, it may be trained using historical first audio as training samples. The process of performing semantic analysis on the first audio with this model is not described in detail in this embodiment and may be implemented with reference to various implementations in the prior art.
Of course, other semantic analysis methods may also be used on the first audio, which is not limited in the embodiments of the present disclosure.
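As a crude stand-in for the trained semantic analysis model (the stop-word list, function name, and frequency heuristic are all assumptions, not the disclosure's method), extracting a current keyword group from one first audio's transcript might be sketched as:

```python
from collections import Counter

# Hypothetical stop-word list; a real system would rely on a trained model.
STOP_WORDS = {"the", "is", "of", "a", "an", "to", "and", "in", "it", "that", "has"}

def extract_keyword_group(first_audio_text: str, top_n: int = 3) -> list[str]:
    """Pick the most frequent non-stop-words of one first audio as its
    current keywords. Ties keep first-occurrence order (Counter semantics)."""
    words = [w.lower().strip(".?!,") for w in first_audio_text.split()]
    counts = Counter(w for w in words if w and w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(top_n)]
```

The point is only the contract: one first audio in, at least two semantically salient keywords out.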
Step S104, prioritizing the at least two current keywords to obtain a current keyword queue.
The purpose of generating the current keyword queue is to obtain knowledge text content with a high degree of matching, so that students attending the live class can obtain the knowledge text content under discussion promptly and accurately; this helps the attending students grasp the knowledge points of the discussion and deepens their understanding.
In some specific embodiments, the prioritizing the at least two current keywords to obtain a current keyword queue includes the following steps:
and step S104-1, carrying out weight value analysis on the at least two current keywords to obtain the current weight value of each current keyword.
The current weight value is used for representing the importance degree of the current keyword in the current speaking audio.
Further, taking the above specific embodiment as an example, if at least one first audio is obtained and a current keyword group of the first audio is obtained, the weight value analysis is performed on the at least two current keywords to obtain a current weight value of each current keyword, which includes the following steps:
and step S104-1-1, analyzing the combination weight value of all the current keywords in the current keyword group to obtain the combination weight value of the current keyword group.
And the combined weight value analysis is to take a first audio with complete meaning as a unit, and perform weight value analysis on all current keywords in the first audio as a whole. And determining the importance degree of the current keyword group in the current speaking audio through the combination mode of the current keywords in the current keyword group.
Step S104-1-2, obtaining an intermediate weight value of the current keyword based on the combined weight value of the current keyword group and a preset weight value of the current keyword in the current keyword group.
The preset weight value is a preconfigured measure of the importance of a current keyword in the current course.
Because the combined weight value only characterizes the importance of the current keyword group within the current speaking audio, it cannot yet characterize the importance of an individual current keyword. Therefore, the embodiment of the disclosure determines the intermediate weight value of a current keyword by combining the group's combined weight value with the keyword's preset weight value.
Optionally, the obtaining the intermediate weight value of the current keyword based on the combined weight value of the current keyword group and the preset weight value of the current keyword in the current keyword group includes:
step S104-1-2a, calculating the product of the combined weight value of the current keyword group and the preset weight value of the current keyword in the current keyword group, and obtaining the intermediate weight value of the current keyword.
For example, suppose the combined weight value of the current keyword group is 3.2 and the group includes a first, a second, and a third current keyword. If the preset weight value of the first current keyword is 1.3, its intermediate weight value is 3.2 × 1.3 = 4.16; if the preset weight value of the second current keyword is 1.5, its intermediate weight value is 3.2 × 1.5 = 4.8; if the preset weight value of the third current keyword is 1.25, its intermediate weight value is 3.2 × 1.25 = 4.
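The multiplication of step S104-1-2a can be sketched as a small helper (a hypothetical function, not code from the disclosure):

```python
def intermediate_weights(combined_weight: float,
                         preset_weights: dict[str, float]) -> dict[str, float]:
    """Intermediate weight of each keyword in one group: the group's
    combined weight value times the keyword's preset weight value."""
    return {kw: round(combined_weight * w, 4) for kw, w in preset_weights.items()}
```

Feeding in the example's numbers (combined weight 3.2, presets 1.3, 1.5, 1.25) reproduces 4.16, 4.8, and 4.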
Step S104-1-3, classifying all the current keywords in the current speaking audio and calculating the current weight value of each class of current keywords.
The current weight value is the sum of the intermediate weight values of the current keywords in a class.
If the current speaking audio contains several first audios, the same keyword may appear in more than one of them; for example, the keyword "acceleration" may appear in every sentence of a class. The embodiment of the disclosure groups keywords that appear multiple times in the current speaking audio. For example, if three first audios in the current speaking audio all contain the keyword "acceleration", those occurrences are classified as the same keyword, and if the intermediate weight values of "acceleration" in the three first audios are 3.26, 4.25 and 3.6 respectively, the current weight value equals 3.26 + 4.25 + 3.6 = 11.11.
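The grouping-and-summing step can be sketched as follows (function and input names are assumptions):

```python
from collections import defaultdict

def current_weight_values(groups: list[dict[str, float]]) -> dict[str, float]:
    """Group identical keywords across all first audios and sum their
    intermediate weight values to obtain each keyword's current weight."""
    totals: defaultdict[str, float] = defaultdict(float)
    for group in groups:          # one dict of keyword -> intermediate weight
        for keyword, w in group.items():
            totals[keyword] += w  # same keyword accumulates across first audios
    return {k: round(v, 4) for k, v in totals.items()}
```

With "acceleration" carrying 3.26, 4.25 and 3.6 across three first audios, this yields the example's 11.11.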
Step S104-2, sorting all the current keywords by priority according to the current weight value of each current keyword, to generate the current keyword queue.
All the current keywords are prioritized based on their current weight values; the ranking may be in ascending order of the current weight value, or in descending order.
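A minimal sketch of this sorting step, supporting either order (the function name is an assumption):

```python
def build_keyword_queue(current_weights: dict[str, float],
                        descending: bool = True) -> list[str]:
    """Order keywords by current weight value to form the keyword queue;
    the disclosure allows either ascending or descending order."""
    return sorted(current_weights, key=current_weights.get, reverse=descending)
```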
Step S105, performing similarity matching between the current keyword queue and the plurality of first keyword groups in the knowledge information set, to obtain at least one second keyword group that meets a preset similarity condition, together with the knowledge text content corresponding to the second keyword group.
The first keyword groups are prioritized in the same manner as the current keyword queue is generated.
Because language is diverse, the current keyword queue might fail to find an exactly matching first keyword group in the knowledge information set; the embodiments of the present disclosure therefore provide similarity-based matching. When the matching degree value of a first keyword group meets the preset similarity condition, that first keyword group is determined to be a qualifying second keyword group, and the knowledge text content corresponding to it in the knowledge information set is the required content. For example, if the matching degree values of the current keyword queue against three first keyword groups in the knowledge information set are 82%, 76% and 48%, and the preset similarity condition is a matching degree value of at least 80%, then only the first keyword group with the 82% matching degree supplies the required knowledge text content.
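A sketch of the similarity matching, using Jaccard overlap as a stand-in for the disclosure's unspecified matching-degree computation (all names and the scoring choice are assumptions):

```python
def match_knowledge(queue: list[str],
                    knowledge_set: list[tuple[list[str], str]],
                    threshold: float = 0.80) -> list[str]:
    """Score each first keyword group against the current keyword queue and
    return the knowledge text content of every group whose matching degree
    meets the preset similarity condition (here: score >= threshold)."""
    results = []
    for first_group, knowledge_text in knowledge_set:
        overlap = len(set(queue) & set(first_group))
        union = len(set(queue) | set(first_group))
        score = overlap / union if union else 0.0
        if score >= threshold:
            results.append(knowledge_text)
    return results
```

Any similarity measure over ordered keyword groups could be substituted; the threshold plays the role of the 80% condition in the example above.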
Step S106, transmitting the first video, the second video and the at least one knowledge text content to the multimedia blackboard 11, and triggering the multimedia blackboard 11 to execute the operation of displaying the first video, the second video and the at least one knowledge text content.
The first video, the second video and the at least one knowledge text content are displayed on the multimedia blackboard 11 to present the teacher-student discussion to the students attending the live classroom. At the same time, following the course of the interaction, knowledge text content matching the discussion is provided to the attending students, helping them locate the knowledge points under discussion as soon as possible. This guides the attending students quickly into the discussion scenario and improves their experience.
In one specific embodiment, the multimedia blackboard 11 includes: a first display module 111, a second display module 112, and a third display module 113.
Accordingly, the triggering the multimedia blackboard 11 to perform the operation of displaying the first video, the second video and the at least one knowledge text content includes:
triggering the multimedia blackboard 11 to perform an operation of displaying the first video and the second video in a first display module 111 and a second display module 112, respectively;
and triggering the multimedia blackboard 11 to perform an operation of displaying the at least one knowledge text content in the third display module 113.
In the multimedia blackboard 11 of this embodiment of the disclosure, the first display module 111 and the second display module 112 may be two side-by-side vertical screens (for example, with a screen aspect ratio greater than 1) displaying the whole-body images of the teaching teacher and the speaking student carried in the first video and the second video, so as to give the attending students the on-site realism of teacher-student discussion; the third display module 113 is a horizontal screen (for example, with a screen aspect ratio smaller than 1), which improves content display efficiency and better suits people's reading habits.
In another embodiment, the multimedia blackboard 11 includes a display module.
Accordingly, the triggering the multimedia blackboard 11 to perform the operation of displaying the first video, the second video and the at least one knowledge text content includes:
triggering the multimedia blackboard 11 to perform operations of displaying the first video and the second video in a first display area 114 and a second display area 115 in the display module, respectively;
and triggering the multimedia blackboard 11 to perform an operation of displaying the at least one knowledge text content in a third display area 116 in the display module.
The multimedia blackboard 11 of this embodiment of the disclosure adopts a single display module, in which a first display area 114 and a second display area 115, each with an aspect ratio greater than 1, display the whole-body images of the teaching teacher and the speaking student respectively, providing the attending students with the on-site realism of teacher-student discussion; the third display area 116 is an area with an aspect ratio smaller than 1, which improves content display efficiency and better suits people's reading habits.
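The region-assignment rule shared by both embodiments can be sketched as follows. The helper is hypothetical (the patent names no rendering API), and the aspect ratio is taken as height divided by width, so a value greater than 1 denotes a vertical region as in the text above.

```python
def assign_layout(regions, first_video, second_video, knowledge_texts):
    """regions: list of (name, width, height) tuples.
    Vertical regions (height/width > 1) receive the teacher and student
    videos; horizontal regions receive the knowledge text content."""
    vertical = [name for name, w, h in regions if h / w > 1]
    horizontal = [name for name, w, h in regions if h / w <= 1]
    layout = dict(zip(vertical, [first_video, second_video]))
    for name in horizontal:
        layout[name] = knowledge_texts
    return layout
```

The same function covers both the three-module blackboard and the single-module blackboard with three display areas, since only the regions list changes.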
Of course, the size and/or shape of the display areas may also change with the teaching process; the specific embodiments of the disclosure are not limited in this respect.
According to the embodiments of the present disclosure, semantic analysis is performed on the current speaking audio of the discussion between the teaching teacher and the speaking student to obtain at least two current keywords; the at least two current keywords are then prioritized to obtain a current keyword queue, through which the corresponding knowledge text content is obtained. The first video of the teaching teacher, the second video of the speaking student, and the knowledge text content are then displayed simultaneously on the multimedia blackboard 11. Students in the live class can thus obtain the knowledge text content of the discussion timely and accurately, which helps them grasp the knowledge points of the discussion content and deepens their understanding.
Example 2
The disclosure further provides a device embodiment adapted to the above method embodiment, configured to implement the method steps described there. Terms with the same names have the same meanings as in the above embodiment, the technical effects are likewise the same, and the details are not repeated here.
As shown in fig. 3, the present disclosure provides a discussion device 300 for live teaching, including:
a receiving unit 301, configured to receive a first video and a second video of live teaching, where the first video is a teaching video of the teaching teacher collected by a teaching terminal, and the second video is a video, collected by a multimedia blackboard, of a speaking student discussing a problem with the teaching teacher in a remote teaching classroom;
a first acquiring unit 302, configured to acquire a current speech audio based on the first video or the second video;
a second obtaining unit 303, configured to perform semantic analysis on the current speech audio, and obtain at least two current keywords;
a ranking unit 304, configured to prioritize the at least two current keywords to obtain a current keyword queue;
the matching unit 305 is configured to perform similarity matching of the current keyword queue against each of a plurality of first keyword groups in the knowledge information set, and obtain at least one second keyword group that meets a preset similarity condition together with the knowledge text content corresponding to the second keyword group, where the first keyword groups are prioritized in the same manner as the current keyword queue is generated;
and the transmitting unit 306 is configured to transmit the first video, the second video, and the at least one knowledge text content to the multimedia blackboard, and trigger the multimedia blackboard to perform an operation of displaying the first video, the second video, and the at least one knowledge text content.
Optionally, the sorting unit 304 includes:
the weight analysis subunit is used for carrying out weight value analysis on the at least two current keywords to obtain the current weight value of each current keyword;
the first acquisition subunit is configured to prioritize all the current keywords according to the current weight value of each current keyword, and generate the current keyword queue.
Optionally, the second obtaining unit 303 includes:
the sentence-breaking analysis subunit is used for performing sentence-breaking analysis on the current speaking audio to obtain at least one first audio, wherein the first audio comprises a complete semantic meaning;
the second obtaining subunit is configured to perform semantic analysis on the first audio to obtain a current keyword group of the first audio, where the current keyword group includes at least two current keywords.
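A minimal sketch of these two subunits follows. Two stand-ins are assumed because the patent specifies neither: the trained sentence-breaking model is replaced by a pause-gap heuristic over speech-recognition segments, and semantic keyword extraction by a stop-word filter.

```python
STOP_WORDS = {"the", "a", "is", "of", "what", "how"}  # illustrative only

def break_sentences(segments, pause_gap=0.7):
    """segments: (text, start_s, end_s) triples from speech recognition.
    Consecutive segments separated by less than pause_gap seconds are merged
    into one first audio carrying a complete semantic meaning."""
    first_audios, current, prev_end = [], [], None
    for text, start, end in segments:
        if prev_end is not None and start - prev_end >= pause_gap:
            first_audios.append(" ".join(current))
            current = []
        current.append(text)
        prev_end = end
    if current:
        first_audios.append(" ".join(current))
    return first_audios

def keyword_group(first_audio):
    """Extract the current keyword group of one first audio."""
    return [w for w in first_audio.lower().split() if w not in STOP_WORDS]
```

In the patent the first step is performed by a trained sentence-breaking analysis model; the heuristic above only illustrates the interface between the two subunits.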
Optionally, the weight analysis subunit includes:
the combination analysis subunit is used for carrying out combination weight value analysis on all current keywords in the current keyword group to obtain a combination weight value of the current keyword group;
a third obtaining subunit, configured to obtain an intermediate weight value of the current keyword based on a combined weight value of the current keyword group and a preset weight value of the current keyword in the current keyword group;
the classifying sub-unit is used for classifying all the current keywords in the current speaking audio, and calculating and obtaining the current weight value of the classified current keywords, wherein the current weight value is the sum of the intermediate weight values of the classified current keywords.
Optionally, the sentence-breaking analysis subunit includes:
and the fourth obtaining subunit is used for inputting the current speaking audio into the trained sentence-breaking analysis model to obtain at least one first audio.
Optionally, the multimedia blackboard includes: the display device comprises a first display module, a second display module and a third display module;
the transfer unit 306 includes:
the first triggering subunit is used for triggering the multimedia blackboard to execute the operation of displaying the first video and the second video in the first display module and the second display module respectively;
and a second triggering subunit, configured to trigger the multimedia blackboard to perform an operation of displaying the at least one knowledge text content in a third display module.
Optionally, the multimedia blackboard comprises a display module;
the transfer unit 306 includes:
a third triggering subunit, configured to trigger the multimedia blackboard to perform an operation of displaying the first video and the second video in a first display area and a second display area in the display module, respectively;
and a fourth triggering subunit, configured to trigger the multimedia blackboard to perform an operation of displaying the at least one knowledge text content in a third display area in the display module.
According to the embodiments of the present disclosure, semantic analysis is performed on the current speaking audio of the discussion between the teaching teacher and the speaking student to obtain at least two current keywords; the at least two current keywords are then prioritized to obtain a current keyword queue, through which the corresponding knowledge text content is obtained. The first video of the teaching teacher, the second video of the speaking student, and the knowledge text content are then displayed simultaneously on the multimedia blackboard. Students in the live class can thus obtain the knowledge text content of the discussion timely and accurately, which helps them grasp the knowledge points of the discussion content and deepens their understanding.
Example 3
As shown in fig. 4, the present embodiment provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method steps described in the embodiments above.
Example 4
The disclosed embodiments provide a non-transitory computer storage medium storing computer executable instructions that perform the method steps described in the embodiments above.
Example 5
Referring now to fig. 4, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device may include a processing means (e.g., a central processor, a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic device are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device 309, or installed from a storage device 308, or installed from a ROM 302. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.

Claims (10)

1. A discussion method of live teaching, comprising:
receiving a first video and a second video of live teaching, wherein the first video is a teaching video of the teaching teacher collected by a teaching terminal, and the second video is a video, collected by a multimedia blackboard, of a speaking student discussing a problem with the teaching teacher in a remote teaching classroom;
acquiring current speaking audio based on the first video or the second video;
carrying out semantic analysis on the current speaking audio to obtain at least two current keywords;
the priority ranking is carried out on the at least two current keywords to obtain a current keyword queue;
respectively carrying out similarity matching according to the current keyword queue and a plurality of first keyword groups in the knowledge information set, and obtaining at least one second keyword group meeting a preset similarity condition and knowledge text content corresponding to the second keyword group, wherein the first keyword groups are subjected to priority ordering based on the same mode of generating the current keyword queue;
and transmitting the first video, the second video and the at least one knowledge text content to the multimedia blackboard, and triggering the multimedia blackboard to execute the operation of displaying the first video, the second video and the at least one knowledge text content.
2. The method of claim 1, wherein prioritizing the at least two current keywords to obtain a current keyword queue comprises:
carrying out weight value analysis on the at least two current keywords to obtain a current weight value of each current keyword;
and sorting the priorities of all the current keywords according to the current weight value of each current keyword, and generating the current keyword queue.
3. The method of claim 2, wherein the performing semantic analysis on the current speech audio to obtain at least two current keywords comprises:
performing sentence breaking analysis on the current speaking audio to obtain at least one first audio, wherein the first audio comprises a complete semantic meaning;
and carrying out semantic analysis on the first audio to obtain a current keyword group of the first audio, wherein the current keyword group comprises at least two current keywords.
4. A method according to claim 3, wherein said performing weight value analysis on said at least two current keywords to obtain a current weight value for each current keyword comprises:
analyzing the combination weight value of all the current keywords in the current keyword group to obtain the combination weight value of the current keyword group;
obtaining an intermediate weight value of the current keyword based on the combined weight value of the current keyword group and a preset weight value of the current keyword in the current keyword group;
classifying all the current keywords in the current speech audio, and calculating to obtain the current weight value of the classified current keywords, wherein the current weight value is the sum of the intermediate weight values of the classified current keywords.
5. The method of claim 3, wherein the performing a sentence-breaking analysis on the current speech audio to obtain at least one first audio comprises:
and inputting the current speaking audio into a trained sentence-breaking analysis model to obtain at least one first audio.
6. The method of claim 1, wherein the multimedia blackboard comprises: the display device comprises a first display module, a second display module and a third display module;
the triggering the multimedia blackboard to perform an operation of displaying the first video, the second video, and the at least one knowledge text content includes:
triggering the multimedia blackboard to execute the operation of displaying the first video and the second video in a first display module and a second display module respectively;
and triggering the multimedia blackboard to perform an operation of displaying the at least one knowledge text content in a third display module.
7. The method of claim 1, wherein the multimedia blackboard comprises a display module;
the triggering the multimedia blackboard to perform an operation of displaying the first video, the second video, and the at least one knowledge text content includes:
triggering the multimedia blackboard to execute the operation of displaying the first video and the second video in a first display area and a second display area in the display module respectively;
and triggering the multimedia blackboard to execute the operation of displaying the at least one knowledge text content in a third display area in the display module.
8. A discussion device for live teaching, comprising:
the receiving unit is used for receiving a first video and a second video of live teaching, wherein the first video is a teaching video of the teaching teacher collected by a teaching terminal, and the second video is a video, collected by a multimedia blackboard, of a speaking student discussing a problem with the teaching teacher in a remote teaching classroom;
the first acquisition unit is used for acquiring current speaking audio based on the first video or the second video;
the second acquisition unit is used for carrying out semantic analysis on the current speaking audio to acquire at least two current keywords;
the ordering unit is used for carrying out priority ordering on the at least two current keywords to obtain a current keyword queue;
the matching unit is used for respectively carrying out similarity matching according to the current keyword queue and a plurality of first keyword groups in the knowledge information set, and obtaining at least one second keyword group meeting the preset similarity condition and knowledge text content corresponding to the second keyword group, wherein the first keyword groups are subjected to priority ranking based on the same mode of generating the current keyword queue;
and the transmission unit is used for transmitting the first video, the second video and the at least one knowledge text content to the multimedia blackboard, and triggering the multimedia blackboard to execute the operation of displaying the first video, the second video and the at least one knowledge text content.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the method of any of claims 1 to 7.
CN202111440260.7A 2021-11-29 2021-11-29 Discussion method, device, medium and electronic equipment for live broadcast teaching Active CN114125537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111440260.7A CN114125537B (en) 2021-11-29 2021-11-29 Discussion method, device, medium and electronic equipment for live broadcast teaching


Publications (2)

Publication Number Publication Date
CN114125537A CN114125537A (en) 2022-03-01
CN114125537B true CN114125537B (en) 2023-07-25

Family

ID=80368362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111440260.7A Active CN114125537B (en) 2021-11-29 2021-11-29 Discussion method, device, medium and electronic equipment for live broadcast teaching

Country Status (1)

Country Link
CN (1) CN114125537B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118855A (en) * 2017-06-22 2019-01-01 格局商学教育科技(深圳)有限公司 A kind of net work teaching system of huge screen holography reduction real scene
CN109118854A (en) * 2017-06-22 2019-01-01 格局商学教育科技(深圳)有限公司 A kind of panorama immersion living broadcast interactive teaching system
CN109862375A (en) * 2019-01-07 2019-06-07 北京汉博信息技术有限公司 Cloud recording and broadcasting system
CN110647613A (en) * 2018-06-26 2020-01-03 上海谦问万答吧云计算科技有限公司 Courseware construction method, courseware construction device, courseware construction server and storage medium
CN111611434A (en) * 2020-05-19 2020-09-01 深圳康佳电子科技有限公司 Online course interaction method and interaction platform
CN111783687A (en) * 2020-07-03 2020-10-16 佛山市海协科技有限公司 Teaching live broadcast method based on artificial intelligence
CN111813889A (en) * 2020-06-24 2020-10-23 北京安博盛赢教育科技有限责任公司 Method, device, medium and electronic equipment for sorting question information
CN112367526A (en) * 2020-10-26 2021-02-12 联想(北京)有限公司 Video generation method and device, electronic equipment and storage medium
CN113610682A (en) * 2021-08-20 2021-11-05 汇正(广州)企业管理咨询有限公司 Online remote education method and system based on big data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11544590B2 (en) * 2019-07-12 2023-01-03 Adobe Inc. Answering questions during video playback


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Discussion on the Feasibility of Applying Speech-Recognition-Based Real-Time Subtitles to Online Teaching; Zhao Shengjie; Campus English (Issue 14); full text *

Also Published As

Publication number Publication date
CN114125537A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
Gromik Cell phone video recording feature as a language learning tool: A case study
CN111813889B (en) Question information ordering method and device, medium and electronic equipment
US20240079002A1 (en) Minutes of meeting processing method and apparatus, device, and medium
AlShareef The importance of using mobile learning in supporting teaching and learning of English language in the secondary stage
Paciga et al. Better start before kindergarten: Computer technology, interactive media and the education of preschoolers
Maja Teachers’ perceptions of integrating technology in rural primary schools to enhance the teaching of English first additional language
Ismail et al. Teachers' Acceptance of Mobile Technology Use towards Innovative Teaching in Malaysian Secondary Schools.
CN112287659B (en) Information generation method and device, electronic equipment and storage medium
CN112863277B (en) Interaction method, device, medium and electronic equipment for live broadcast teaching
Al-Jarf Text-to-Speech Software as a Resource for Independent Interpreting Practice by Undergraduate Interpreting Students.
CN111260975A (en) Method, device, medium and electronic equipment for multimedia blackboard teaching interaction
CN114125537B (en) Discussion method, device, medium and electronic equipment for live broadcast teaching
CN114095747B (en) Live broadcast interaction system and method
CN109191958B (en) Information interaction method, device, terminal and storage medium
CN114328839A (en) Question answering method, device, medium and electronic equipment
CN114328999A (en) Interaction method, device, medium and electronic equipment for presentation
CN114297420B (en) Note generation method and device for network teaching, medium and electronic equipment
Tuan et al. Mobile learning in non-English speaking countries: Designing a smartphone application of English mathematical terminology for students of mathematics teacher education
CN114120729B (en) Live teaching system and method
CN114038255B (en) Answering system and method
CN114327170B (en) Alternating current group generation method and device, medium and electronic equipment
Osman et al. Paper versus screen: Assessment of basic literacy skill of Indigenous people
CN115757807B (en) Course standard association map generation method, device, electronic equipment and medium
US20220319152A1 (en) Methods for generating cognitive building blocks
CN116415001A (en) Student and teacher interaction method and device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant