CN114120729B - Live teaching system and method - Google Patents

Live teaching system and method

Info

Publication number
CN114120729B
CN114120729B
Authority
CN
China
Prior art keywords
video
current
type
remote
current scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111428397.0A
Other languages
Chinese (zh)
Other versions
CN114120729A (en)
Inventor
王珂晟
黄劲
黄钢
许巧龄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oook Beijing Education Technology Co ltd
Original Assignee
Oook Beijing Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oook Beijing Education Technology Co ltd filed Critical Oook Beijing Education Technology Co ltd
Priority to CN202111428397.0A priority Critical patent/CN114120729B/en
Publication of CN114120729A publication Critical patent/CN114120729A/en
Application granted granted Critical
Publication of CN114120729B publication Critical patent/CN114120729B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/08Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure provides a live teaching system and method. The system comprises a server, a first video acquisition terminal, a second video acquisition terminal, a third video acquisition terminal, a fourth video acquisition terminal, a fifth video acquisition terminal, and a multi-scene blackboard. The server identifies the current scene type from first audio feature information of the teaching teacher, obtains an interactive current scene video according to that scene type, and transmits it to the multi-scene blackboard, so that students in class can clearly follow the teaching process and teaching intent of the teaching teacher through the current scene video. This improves the interactivity of teaching and the class experience of students.

Description

Live teaching system and method
Technical Field
The disclosure relates to the field of information processing, in particular to a live lecture system and a live lecture method.
Background
With the development of computer technology, Internet-based network teaching has emerged.
Network teaching is a teaching mode in which the network serves as the communication tool between teachers and students. It includes live teaching and recorded teaching. Live teaching resembles the traditional classroom: students listen to the teacher's lecture at the same time, and teachers and students can have some simple exchanges. Recorded teaching uses Internet services to store courses recorded in advance by the teacher on a server, and students can order and watch them at any time to learn. Its characteristic is that teaching activity can proceed around the clock: each student can set learning time, content, and pace according to his or her own situation, and learning content can be downloaded from the Internet at any time. In network teaching, each course may have a large number of students attending.
There is currently a teaching mode in which students are gathered in a classroom and participate in the teaching activities of a remote teaching teacher through the display screen of a multimedia blackboard. The multimedia blackboard can only display a single teaching video: for example, it shows the teaching teacher sitting at a fixed position in front of the camera and lecturing by voice throughout; when necessary, a presentation image of the lesson material is inserted into the video. However, this mode lacks the teacher-student interaction of on-site teaching, increases the sense of distance in teaching activities, often makes the teaching process dull and tedious, and gives students a less than ideal learning experience.
Accordingly, the present disclosure provides a live lecture system to solve one of the above-mentioned technical problems.
Disclosure of Invention
The disclosure aims to provide a live teaching system, method, medium, and electronic device, which can solve at least one of the above technical problems. The specific scheme is as follows:
according to a specific embodiment of the present disclosure, in a first aspect, the present disclosure provides a live lecture system, including:
the server is arranged in the data center;
the first video acquisition terminal is in electrical communication with the server and is arranged in the remote teaching room and used for acquiring panoramic videos of the remote teaching room;
The second video acquisition terminal is electrically communicated with the server and is arranged in a remote laboratory and used for acquiring panoramic videos of the remote laboratory;
the third video acquisition terminal is electrically communicated with the server and is arranged in a remote classroom and used for acquiring panoramic videos of the remote classroom;
the fourth video acquisition terminal is electrically communicated with the server and is arranged in the remote classroom and used for acquiring close-up videos of speaking students in the remote classroom;
the fifth video acquisition terminal is in electrical communication with the server, is arranged in the remote laboratory in cooperation with the second video acquisition terminal, and is used for acquiring a close-up video of the teaching teacher demonstrating an experiment in the remote laboratory;
the multi-scene blackboard is in electrical communication with the server and is arranged in the remote classroom in cooperation with the third video acquisition terminal and the fourth video acquisition terminal; the multi-scene blackboard comprises a plurality of display modules, wherein the aspect ratio (height to width) of each display module is greater than 1;
the server side is configured to:
receiving a first video collected by a first video collection terminal in a remote teaching room and a second video collected by a second video collection terminal in a remote laboratory, wherein the first video is a panoramic video of the remote teaching room, and the second video is a panoramic video of the remote laboratory;
Performing image recognition of a teaching teacher on the video image of the first video and the video image of the second video, and determining the first video or the second video comprising the teaching teacher image as a current main video;
acquiring a plurality of current first audio feature information based on the current main video;
performing type analysis on teaching scenes on the plurality of first audio feature information to acquire the current scene type;
responding to the trigger of the current scene type, and acquiring at least one current scene video related to the current scene type, wherein the current scene video is acquired by one of the first video acquisition terminal to the fifth video acquisition terminal;
and transmitting the current scene video to at least one display module in the multi-scene blackboard for display.
According to a second aspect of the present disclosure, the present disclosure provides a live lecture method, which is applied to a server of the system according to the first aspect, including:
receiving a first video collected by a first video collection terminal in a remote teaching room and a second video collected by a second video collection terminal in a remote laboratory, wherein the first video is a panoramic video of the remote teaching room, and the second video is a panoramic video of the remote laboratory;
Performing image recognition of a teaching teacher on the video image of the first video and the video image of the second video, and determining the first video or the second video comprising the teaching teacher image as a current main video;
acquiring a plurality of current first audio feature information based on the current main video;
performing type analysis on teaching scenes on the plurality of first audio feature information to acquire the current scene type;
responding to the trigger of the current scene type, and acquiring at least one current scene video related to the current scene type, wherein the current scene video is acquired by one of the first video acquisition terminal to the fifth video acquisition terminal;
and transmitting the current scene video to at least one display module in the multi-scene blackboard for display.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a live lecture method as defined in any one of the above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a live lecture method as claimed in any one of the preceding claims.
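The server-side steps of the first and second aspects can be summarized as one processing cycle. The following is a minimal illustrative sketch only; the callables `detect_teacher`, `extract_audio_features`, `classify_scene`, and `fetch_scene_videos` are hypothetical placeholders standing in for the recognition and routing techniques described in this disclosure, not part of the claimed implementation.

```python
def select_main_video(first_video, second_video, detect_teacher):
    """Return whichever video currently contains the teaching teacher.

    `detect_teacher` is a hypothetical predicate over a video stream.
    """
    return first_video if detect_teacher(first_video) else second_video


def serve_one_cycle(first_video, second_video, detect_teacher,
                    extract_audio_features, classify_scene, fetch_scene_videos):
    # Receive both panoramic videos and determine the current main video.
    main_video = select_main_video(first_video, second_video, detect_teacher)
    # Acquire first audio feature information from the current main video.
    features = extract_audio_features(main_video)
    # Perform type analysis of the teaching scene on the audio features.
    scene_type = classify_scene(features)
    # In response to the scene-type trigger, collect the current scene video(s).
    scene_videos = fetch_scene_videos(scene_type, main_video)
    # The caller then transmits these to the multi-scene blackboard for display.
    return scene_type, scene_videos
```

A usage sketch would wire in the actual terminals and a trained scene recognition model in place of the placeholder callables.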
Compared with the prior art, the scheme of the embodiment of the disclosure has at least the following beneficial effects:
the service end in the system identifies the current scene type according to the first audio characteristic information of the teaching teacher, then obtains the interactive current scene video through the current scene type, and transmits the interactive current scene video to the multi-scene blackboard, so that students on the class can clearly know the teaching process and the teaching intention of the teaching teacher through the current scene video. The interactivity of teaching and the experience of students in class are improved.
Drawings
Fig. 1 illustrates a composition schematic of a live lecture system according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a multi-scene blackboard according to an embodiment of the present disclosure;
fig. 3 illustrates a flowchart of a live lecture method according to an embodiment of the present disclosure;
fig. 4 illustrates a schematic diagram of an electronic device connection structure according to an embodiment of the present disclosure;
description of the reference numerals
11-a first video acquisition terminal, 12-a second video acquisition terminal, 13-a third video acquisition terminal, 14-a fourth video acquisition terminal, 15-a fifth video acquisition terminal, 16-a multi-scene blackboard, 17-a server and 18-a presentation terminal;
161-first display, 162-second display, 163-third display.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the disclosure. Based on the embodiments in this disclosure, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of protection of this disclosure.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used in the embodiments of this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "plurality" generally means at least two.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, "A and/or B" may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure, these descriptions should not be limited to these terms. These terms are only used to distinguish one from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of embodiments of the present disclosure.
The word "if" as used herein may be interpreted, depending on the context, as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted, depending on the context, as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)".
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such product or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a commodity or device comprising such element.
In particular, the symbols and/or numerals present in the description, if not marked in the description of the figures, are not numbered.
Alternative embodiments of the present disclosure are described in detail below with reference to the drawings.
Example 1
The embodiment provided by the present disclosure is an embodiment of a live lecture system.
Embodiments of the present disclosure are described in detail below in conjunction with fig. 1 and 2.
A live lecture system comprising: the system comprises a first video acquisition terminal, a second video acquisition terminal, a third video acquisition terminal, a fourth video acquisition terminal, a fifth video acquisition terminal, a multi-scene blackboard and a server.
The components of the live teaching system in the embodiment of the disclosure are arranged in a remote teaching room, a remote laboratory, a remote classroom, and a data center, respectively.
As shown in fig. 1, the remote lecture room is mainly a place where a lecturer gives lectures.
The server 17 is arranged in the data center.
The first video acquisition terminal 11 is in electrical communication with the server 17 and is arranged in the remote teaching room to acquire panoramic video of the remote teaching room. For example, when the teaching teacher lectures in the remote teaching room, the panoramic video includes a whole-body image of the teaching teacher. To improve the display effect of the panoramic video, a matting technique may be used to extract the whole-body image of the teaching teacher from the panoramic video and composite it onto a virtual background.
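The background-replacement idea can be illustrated with a naive per-pixel chroma key. A real system would likely use a segmentation model or hardware keyer; the green-backdrop test below is a purely illustrative assumption, not the disclosure's matting method.

```python
def chroma_key(frame, background, is_backdrop=None):
    """Replace backdrop pixels of `frame` with pixels from `background`.

    `frame` and `background` are equally sized 2-D lists of (r, g, b)
    tuples; `is_backdrop` decides which pixels belong to the backdrop
    (default: a crude green-screen test).
    """
    if is_backdrop is None:
        # Assumed green backdrop: strong green, weak red and blue.
        is_backdrop = lambda r, g, b: g > 200 and r < 100 and b < 100
    return [
        [bg_px if is_backdrop(*px) else px
         for px, bg_px in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]
```

Pixels classified as backdrop are swapped for the virtual background; all teacher pixels pass through unchanged.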
The remote laboratory is mainly a place where teaching teachers demonstrate the experimental process. In order to facilitate the teaching of teaching teachers and the demonstration of the experimental process, the remote laboratory can be closely adjacent to the remote teaching room. For example, a remote laboratory is only one spatial division from a remote teaching room, and the remote laboratory and the remote teaching room are actually located in the same room.
The second video acquisition terminal 12 is in electrical communication with the server 17, and is disposed in a remote laboratory, and is configured to acquire panoramic video of the remote laboratory. For example, if the teacher gives lessons to demonstrate experiments, the panoramic video of the remote laboratory includes the whole body image of the teacher given lessons in the remote laboratory and the images of all the devices.
The fifth video acquisition terminal 15 is in electrical communication with the server 17 and is arranged in the remote laboratory in cooperation with the second video acquisition terminal 12, so as to acquire a close-up video of the teaching teacher's demonstration experiment in the remote laboratory. For example, when the teaching teacher demonstrates an experiment, this close-up video includes a local image of the teaching teacher, such as the operating hands, and images of experimental phenomena, such as displayed data, physical changes, or chemical changes.
The remote classroom is a place where students listen to the class in a concentrated manner.
The third video acquisition terminal 13 is in electrical communication with the server 17, and is disposed in a remote classroom, and is configured to acquire panoramic video of the remote classroom. For example, the panoramic video of the remote classroom includes the lecture images of all lecture students in the remote classroom and the teaching aids in the remote classroom.
And a fourth video acquisition terminal 14, which is in electrical communication with the server 17, is disposed in the remote classroom, and is configured to acquire a close-up video of a speaking student in the remote classroom. For example, if a student speaks, the close-up video records an image of the upper body of the speaking student.
The multi-scene blackboard 16 is in electrical communication with the server 17, and is configured in the remote classroom in cooperation with the third video acquisition terminal 13 and the fourth video acquisition terminal 14, and the multi-scene blackboard 16 comprises a plurality of display modules, and the aspect ratio of each display module is greater than 1.
It can be understood that the third video acquisition terminal 13, the fourth video acquisition terminal 14, and the multi-scene blackboard 16 are arranged in the remote classroom in a matching way. To improve the class experience of students in the remote classroom, the multi-scene blackboard includes a plurality of display modules that display different videos according to the current stage of teaching, so that images of people in different spaces are shown simultaneously on the same blackboard, improving the visual interactivity of teaching and the students' class experience.
The aspect ratio (height to width) of each display module is greater than 1; that is, each display module is a portrait display. For example, as shown in fig. 2, the aspect ratios of the first display screen 161 and the second display screen 162 are greater than 1, so both are portrait displays, while the aspect ratio of the third display screen 163 is less than 1, making it a landscape display.
The server 17 is configured to:
receiving a first video acquired by a first video acquisition terminal 11 in a remote teaching room and a second video acquired by a second video acquisition terminal 12 in a remote laboratory, wherein the first video is a panoramic video of the remote teaching room, and the second video is a panoramic video of the remote laboratory;
performing image recognition of a teaching teacher on the video image of the first video and the video image of the second video, and determining the first video or the second video comprising the teaching teacher image as a current main video;
acquiring a plurality of current first audio feature information based on the current main video;
performing type analysis on teaching scenes on the plurality of first audio feature information to acquire the current scene type;
Acquiring at least one current scene video related to the current scene type in response to the triggering of the current scene type, the current scene video being acquired by one of the first video acquisition terminal 11 to the fifth video acquisition terminal 15;
the current scene video is transmitted to at least one display module in the multi-scene blackboard 16 for display.
Since only one teaching teacher is teaching, and the active space of the teaching teacher in the teaching process is limited to one of the remote teaching room and the remote laboratory, the image of the teaching teacher appears in either the first video of the remote teaching room or the second video of the remote laboratory.
The determining that the first video or the second video including the image of the teaching teacher is the current main video in the embodiments of the present disclosure may be understood that, for the first video and the second video, as long as one of the videos shows the image of the teaching teacher, the video showing the image of the teaching teacher is determined as the current main video. The current main video is used for analyzing the current teaching scene so as to find a video suitable for multi-angle display teaching process from a plurality of videos from different sources.
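The image-recognition determination can be made robust by scoring each candidate stream over a short window of recent frames and selecting the stream where the teaching teacher appears most often. This is an illustrative sketch: `frames_of` and `teacher_in_frame` are hypothetical stand-ins for the frame source and the teacher recognizer.

```python
def pick_main_video(streams, frames_of, teacher_in_frame, window=30):
    """Pick the stream whose recent frames most often contain the teacher.

    `streams` lists candidate stream ids (e.g. first and second video),
    `frames_of(stream)` yields that stream's recent frames, and
    `teacher_in_frame(frame)` is a hypothetical recognizer predicate.
    """
    def score(stream):
        frames = list(frames_of(stream))[:window]
        if not frames:
            return 0.0
        # Fraction of sampled frames in which the teacher is detected.
        return sum(teacher_in_frame(f) for f in frames) / len(frames)

    return max(streams, key=score)
```

Because the teacher occupies only one of the two rooms at a time, the stream with the higher detection rate is taken as the current main video.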
Video captured during live broadcast includes both images and audio. The first audio feature information refers to feature information in the current audio of the current main video. For example, if the current audio contains the lecture content of the teaching teacher, it includes several pieces of key course information, such as proper nouns of the course; this key lecture information is the audio feature information. If the current audio contains experiment information from the teaching teacher, it includes several pieces of key experiment information, such as device names; this key experiment information is the audio feature information.
Performing type analysis of teaching scenes on the plurality of first audio feature information to obtain the current scene type may be understood as: inputting the plurality of first audio feature information into a trained teaching scene recognition model, which outputs the current scene type after recognition.
The lecture scenario recognition model may be trained based on a plurality of sets of history samples of lectures (each set of history samples including a plurality of history audio feature information) as training samples. The process of analyzing the types of the teaching situations of the plurality of first audio feature information according to the teaching situation recognition model will not be described in detail in this embodiment, and may be implemented with reference to various implementation manners in the prior art.
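The disclosure leaves the recognition model itself to known techniques. Purely for illustration, a trivial stand-in could map keyword-style audio features to a scene type; the keyword sets below are invented examples, and a trained model would replace this lookup entirely.

```python
def classify_scene(audio_features, silence=False):
    """Map first audio feature information to a current scene type.

    `audio_features` is a list of keyword strings extracted from the
    current audio; `silence=True` signals the long-silence condition.
    The cue sets are illustrative assumptions only.
    """
    if silence or not audio_features:
        return "first_mute"          # teacher not speaking for a long time
    experiment_cues = {"beaker", "reagent", "apparatus", "observe"}
    qa_cues = {"question", "answer", "please reply"}
    features = set(audio_features)
    if features & experiment_cues:
        return "experiment"          # teacher is demonstrating an experiment
    if features & qa_cues:
        return "question_answer"     # teacher-student dialogue
    return "lecture"                 # default: ordinary lecturing
```

In the actual system this decision would come from the trained teaching scene recognition model rather than fixed keyword sets.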
The current scene type includes: a first mute type, an experiment type, a question-answer type, and a lecture type. The application of various current scene types is specifically described below according to some embodiments.
In some specific embodiments, the server 17 is configured to obtain at least one current scene video related to the current scene type in response to the triggering of the current scene type, and specifically configured to:
receiving a third video acquired by a third video acquisition terminal 13 in a remote classroom in response to the triggering that the current scene type is a first mute type, wherein the third video is a panoramic video of the remote classroom;
acquiring a plurality of current second audio feature information based on the third video;
performing type analysis of classroom scenes on the plurality of second audio feature information to acquire the current classroom type;
responding to the trigger that the current classroom type is the second mute type, and determining that the third video is the current scene video;
and in response to the trigger that the current classroom type is the speaking type, instructing the fourth video acquisition terminal 14 in the remote classroom to acquire a fourth video, receiving the fourth video, and determining that the third video and the fourth video are both the current scene video, wherein the fourth video is a close-up video of a speaking student in the remote classroom.
The first mute type means that mute information appears in the audio of the current main video (i.e., the audio level remains below a preset mute threshold for a preset time); that is, the teaching teacher has not spoken for a long time.
To avoid abrupt changes in the audio data caused by occasional events (e.g., coughing sounds), interfering with the identification of the first silence type, the audio data may be data filtered prior to processing the audio data to remove the interfering data therein.
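The silence test with interference filtering can be sketched as follows. The threshold, the despiking window, and the per-sample loudness representation are illustrative choices, not values from the disclosure.

```python
def is_first_mute(levels, threshold=0.05, spike_len=3):
    """Return True when the audio stays below `threshold` for the whole
    window, ignoring brief spikes (e.g. a cough) no longer than
    `spike_len` consecutive samples.

    `levels` is a sequence of per-sample loudness values in [0, 1].
    """
    run = 0  # length of the current above-threshold run
    for level in levels:
        if level >= threshold:
            run += 1
            if run > spike_len:
                return False  # sustained sound: the teacher is speaking
        else:
            run = 0  # spike ended; treat it as interference
    return True
```

A short cough registers as a few loud samples and is filtered out; only sustained sound defeats the mute determination.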
When the current scene type is the first mute type, the teaching teacher is not currently explaining the course. Embodiments of the present disclosure therefore shift the display focus of the multi-scene blackboard 16 to the remote classroom: a plurality of current second audio feature information is acquired from the third video, and type analysis of classroom scenes is performed on it to obtain the current classroom type.
The second audio feature information refers to feature information in the current audio in the third video. For example, the current audio includes speech information of students in class or mute information in a remote classroom (i.e., audio data is below a preset mute threshold for a preset time).
Performing type analysis of classroom scenes on the plurality of second audio feature information to obtain the current classroom type may be understood as: inputting the plurality of second audio feature information into a trained classroom scene recognition model, which outputs the current classroom type after recognition.
The classroom scene recognition model may be trained on multiple sets of historical classroom samples as training samples (each set including a plurality of historical audio feature information). The process of analyzing the classroom scene type from the plurality of second audio feature information with this model is not described in detail in this embodiment and may be implemented with reference to various existing techniques.
The current class room type includes a second mute type and a floor type.
The second mute type indicates that no student in the remote classroom is currently speaking. Therefore, in response to the trigger that the current classroom type is the second mute type, the third video is determined to be the current scene video; that is, only the panoramic video of the remote classroom is displayed in the multi-scene blackboard 16.
If the current classroom type is the speaking type, the present embodiment will send an instruction to focus the speaking student to the fourth video capture terminal 14 in the remote classroom, and the fourth video capture terminal 14 focuses the speaking student to capture a close-up video of the speaking student. Thus, the present embodiment transmits the close-up video of the speaking student and the panoramic video of the remote classroom simultaneously to the multi-scene blackboard 16, and the multi-scene blackboard 16 displays the video on the two display modules simultaneously. For example, as shown in fig. 2, a third video is displayed on the first display screen 161, and a fourth video is displayed on the second display screen 162.
Transmitting the third video and the fourth video to the multi-scene blackboard 16 at the same time displays the close-up video of the speaking student alongside the panoramic video of the remote classroom. Especially when lessons are given in a large remote classroom, students can intuitively see both the speaking student and the overall classroom through the multi-scene blackboard 16.
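Under the first mute type, the classroom-side routing described above amounts to a small decision function. This is an illustrative sketch; `capture_closeup()` stands in for instructing the fourth video acquisition terminal to focus on the speaking student.

```python
def classroom_scene_videos(classroom_type, third_video, capture_closeup):
    """Choose the current scene video(s) once the first mute type holds.

    `third_video` is the classroom panorama; `capture_closeup()` is a
    hypothetical hook that triggers and returns the fourth video.
    """
    if classroom_type == "second_mute":
        # No student is speaking: show only the classroom panorama.
        return [third_video]
    if classroom_type == "speaking":
        # Show the panorama plus a close-up of the speaking student.
        fourth_video = capture_closeup()
        return [third_video, fourth_video]
    raise ValueError(f"unknown classroom type: {classroom_type!r}")
```

The returned list maps one-to-one onto display modules of the multi-scene blackboard.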
In other specific embodiments, the server 17 is configured to obtain at least one current scene video related to the current scene type in response to the triggering of the current scene type, and specifically configured to:
responding to the trigger that the current scene type is an experiment type, instructing the fifth video acquisition terminal 15 in the remote laboratory to acquire a fifth video and receiving the fifth video, wherein the fifth video is a close-up video of the teaching teacher's demonstration experiment in the remote laboratory; and determining that the second video and the fifth video are both the current scene video.
In this embodiment, the current scene type is the experiment type, which indicates that the teaching teacher is demonstrating the experimental process to the students. Therefore, the server sends a focusing instruction to the fifth video acquisition terminal 15 in the remote laboratory, and the fifth video acquisition terminal 15 acquires a close-up video of the teaching teacher's demonstration experiment.
In other specific embodiments, the server 17 is configured to obtain at least one current scene video related to the current scene type in response to the triggering of the current scene type, and specifically configured to:
responding to the trigger that the current scene type is a question-answer type, instructing the fourth video acquisition terminal 14 in the remote classroom to acquire a sixth video and receiving the sixth video, wherein the sixth video is a close-up video of a speaking student in the remote classroom; and determining that the current main video and the sixth video are both the current scene video.
In this embodiment, the current scene type is a question-answer type, which indicates that a question-and-answer exchange is taking place between the teaching teacher and the students. Therefore, the panoramic video of the remote teaching room (including the whole-body image of the teaching teacher) and the close-up video of the speaking student in the remote classroom are transmitted to the multi-scene blackboard 16 and displayed synchronously on two display modules, achieving an online interactive display effect and improving the students' class experience. For example, as shown in fig. 2, the first video is displayed on the first display screen 161 and the sixth video on the second display screen 162.
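The routing of current scene videos to the blackboard's display modules can be sketched as a simple in-order assignment. This is an illustration only: the embodiment specifies the layout no further than the fig. 2 example (panoramic video on screen 161, close-up on screen 162), and the string video names and numeric screen identifiers below are assumptions.

```python
# Illustrative sketch: assign current scene videos to the display
# modules of the multi-scene blackboard, in order. The layout policy
# beyond the fig. 2 example is an assumption, not the embodiment's.

def assign_to_displays(scene_videos, display_ids):
    """Pair each current scene video with a display module, in order."""
    if len(scene_videos) > len(display_ids):
        raise ValueError("more scene videos than display modules")
    return dict(zip(display_ids, scene_videos))

# Question-answer scene as in fig. 2: panorama on 161, close-up on 162.
layout = assign_to_displays(["first_video", "sixth_video"], [161, 162])
```

With a single scene video (e.g. the mute type), only the first display module is used and the remaining modules are left free for other content.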
Optionally, the system further comprises a presentation terminal 18;
the demonstration terminal 18 is configured in the remote teaching room in cooperation with the first video acquisition terminal 11, and is used for demonstrating the current demonstration picture played by the teaching teacher in combination with the teaching content;
the multi-scene blackboard 16 further comprises a demonstration display module, wherein the aspect ratio of the demonstration display module is smaller than 1;
the server 17 is configured to obtain at least one current scene video related to the current scene type in response to the triggering of the current scene type, and specifically configured to:
in response to a trigger that the current scene type is a lecture type, determining that the first video is the current scene video, and receiving a current presentation picture transmitted by the presentation terminal 18;
and transmitting the current demonstration picture to the multi-scene blackboard 16 in coordination with the teaching content of the first video, wherein the current demonstration picture is displayed by the demonstration display module, and the first video is displayed by one of the display modules.
Since the aspect ratio (height to width) of the demonstration display module is smaller than 1, the demonstration display module is a landscape-orientation display screen.
This embodiment provides a presentation terminal 18 for a lecturer teacher in a remote lecture room. The presentation terminal 18 is used for presenting a current presentation picture played by a lecture teacher in combination with lecture contents, for example, a presentation picture in a presentation file (a PowerPoint file, abbreviated as PPT file). The presentation terminal 18 is in electrical communication with the server 17 and transmits the currently displayed presentation picture to the server 17 in real time.
When the current scene type is the lecture type, it indicates that the teaching teacher is teaching the course content. This embodiment therefore focuses on the first video, that is, the video of the teaching teacher giving the lesson in the remote teaching room, together with the presentation picture that the teacher plays in combination with the teaching content.
Transmitting the current presentation picture to the multi-scene blackboard 16 in coordination with the teaching content of the first video can be understood as follows: while the teaching teacher is lecturing, the current presentation picture A, which starts to play at a time point T, is transmitted to the multi-scene blackboard 16 together with a mark of T in the first video. Using the correspondence between picture A and the time point T marked in the first video, the multi-scene blackboard 16 displays the first video and the current presentation picture A synchronously in two display modules, showing picture A on the presentation display module. For example, as shown in fig. 2, the first video is displayed on the second display screen 162 and the current presentation picture on the presentation display screen. In this way, students can clearly follow the teaching intention of the teaching teacher.
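The time-point correspondence described above amounts to looking up, for any playback time t of the first video, the presentation picture whose marked start time T is the latest one not exceeding t. A minimal sketch, assuming a hypothetical sorted list of (start_time, picture) marks that the embodiment does not specify:

```python
# Sketch of synchronizing presentation pictures with the first video by
# the time point T marked in the video. The (start_time, picture) mark
# list is a hypothetical data structure, not part of the embodiment.
import bisect

def picture_at(presentation_marks, t):
    """presentation_marks: list of (start_time, picture) pairs sorted by
    start_time. Return the picture on screen at playback time t, i.e.
    the one with the latest start time T <= t."""
    times = [mark[0] for mark in presentation_marks]
    i = bisect.bisect_right(times, t) - 1
    if i < 0:
        return None  # before the first picture started playing
    return presentation_marks[i][1]

marks = [(0.0, "picture_A"), (42.5, "picture_B")]
```

For instance, `picture_at(marks, 10.0)` yields `"picture_A"`, since picture A started at T = 0.0 and picture B has not yet begun.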
Example 2
The disclosure further provides a method embodiment corresponding to the above system embodiment, for implementing the steps described in the above embodiment. Terms with the same names have the same meanings and achieve the same technical effects as in the above embodiment, and are not repeated here.
As shown in fig. 3, the present disclosure provides a live lecture method, which is applied to a server of the system as described in embodiment 1, and includes the following steps:
step S301, a first video collected by a first video collection terminal in a remote teaching room and a second video collected by a second video collection terminal in a remote laboratory are received, wherein the first video is a panoramic video of the remote teaching room, and the second video is a panoramic video of the remote laboratory;
step S302, performing image recognition of a teaching teacher on the video image of the first video and the video image of the second video, and determining the first video or the second video comprising the teaching teacher image as a current main video;
step S303, acquiring a plurality of current first audio feature information based on the current main video;
step S304, analyzing the types of teaching scenes of the plurality of first audio feature information to obtain the current scene type;
Step S305, responding to the trigger of the current scene type, acquiring at least one current scene video related to the current scene type, wherein the current scene video is acquired by one of the first video acquisition terminal to the fifth video acquisition terminal;
and step S306, transmitting the current scene video to at least one display module in the multi-scene blackboard for display.
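The server-side flow of steps S301 through S306 can be sketched as follows. This is a minimal illustration: `detect_teacher`, `extract_audio_features`, and `classify_scene` are hypothetical stand-ins for the image-recognition and audio-analysis components, whose internals the embodiment does not specify, and the dictionary video representation is likewise an assumption.

```python
# Minimal sketch of the server-side flow in steps S301-S306.
# detect_teacher, extract_audio_features and classify_scene are
# hypothetical placeholders for the recognition components.

def detect_teacher(video):
    """Hypothetical image recognition: does the teacher appear?"""
    return video.get("has_teacher", False)

def extract_audio_features(video):
    """Hypothetical audio analysis: the current first audio features."""
    return video.get("audio_features", [])

def classify_scene(features):
    """Hypothetical type analysis of the teaching scene; defaults to
    the lecture type when no features are available."""
    return features[-1] if features else "lecture"

def select_main_video(first_video, second_video):
    """Steps S301-S302: the video containing the teaching teacher's
    image becomes the current main video."""
    return first_video if detect_teacher(first_video) else second_video

def current_scene_type(main_video):
    """Steps S303-S304: derive the current scene type from the audio
    feature information of the current main video."""
    return classify_scene(extract_audio_features(main_video))
```

Steps S305 and S306 then fetch the scene videos associated with the resulting type and push them to the multi-scene blackboard.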
Optionally, the step of acquiring at least one current scene video related to the current scene type in response to the triggering of the current scene type includes the following steps:
step S305a-1, responding to the trigger that the current scene type is the first mute type, and receiving a third video acquired by a third video acquisition terminal in a remote classroom, wherein the third video is a panoramic video of the remote classroom;
step S305a-2, acquiring a plurality of second audio feature information based on the third video;
step S305a-3, analyzing the types of classroom scenes for the plurality of second audio feature information to obtain the current classroom type;
step S305a-4, responding to the trigger that the current classroom type is the second mute type, and determining that the third video is the current scene video;
Step S305a-5, responding to the trigger that the current classroom type is the speaking type, instructing the fourth video acquisition terminal in the remote classroom to acquire a fourth video, receiving the fourth video, and determining that the third video and the fourth video are both the current scene video, wherein the fourth video is a close-up video of a speaking student in the remote classroom.

Optionally, the step of acquiring at least one teaching scene video related to the current scene type in response to the triggering of the current scene type includes the following steps:
step S305b-1, responding to the trigger that the current scene type is the experiment type, instructing the fifth video acquisition terminal in the remote laboratory to acquire a fifth video, and receiving the fifth video, wherein the fifth video is a close-up video of the teaching teacher's demonstration experiment in the remote laboratory;
step S305b-2, determining that the second video and the fifth video are both the current scene video.
Optionally, the responding to the triggering of the current scene type obtains at least one teaching scene video related to the current scene type, and the method comprises the following steps:
Step S305c-1, responding to the trigger that the current scene type is a question-answer type, instructing the fourth video acquisition terminal in the remote classroom to acquire a sixth video, and receiving the sixth video, wherein the sixth video is a close-up video of a speaking student in the remote classroom;
and step S305c-2, determining that the current main video and the sixth video are both the current scene video.
Optionally, the system further comprises a presentation terminal;
the demonstration terminal is matched with the first video acquisition terminal and arranged in the remote teaching room and is used for demonstrating the current demonstration picture played by a teaching teacher in combination with teaching contents;
the multi-scene blackboard also comprises a demonstration display module, wherein the aspect ratio of the demonstration display module is smaller than 1;
the method further comprises the steps of:
step S305d-1, responding to the trigger that the current scene type is the lecture type, determining that the first video is the current scene video, and receiving a current demonstration picture transmitted by the demonstration terminal;
and step S305d-2, transmitting the current demonstration picture to a multi-scene blackboard in cooperation with teaching contents of the first video, wherein the current demonstration picture is displayed by the demonstration display module, and the first video is displayed by one of the display modules.
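Taken together, steps S305a through S305d amount to a dispatch on the current scene type (with a nested classification of the classroom type in the mute branch). The sketch below shows which videos become the current scene video in each branch; `request_video` and the string video names are hypothetical placeholders for instructing a capture terminal and receiving its stream.

```python
# Sketch of the scene-type dispatch in steps S305a-S305d.
# request_video is a hypothetical helper standing in for instructing a
# video acquisition terminal and receiving its stream.

def request_video(terminal_id):
    return f"video_from_terminal_{terminal_id}"

def scene_videos(scene_type, main_video, classroom_type=None):
    """Return the list of current scene videos for a scene type."""
    if scene_type == "mute":
        third = request_video(3)              # classroom panorama
        if classroom_type == "speaking":
            # add the speaking-student close-up (fourth video)
            return [third, request_video(4)]
        return [third]                        # classroom still silent
    if scene_type == "experiment":
        # lab panorama (second video) + experiment close-up (fifth video)
        return ["second_video", request_video(5)]
    if scene_type == "question_answer":
        # current main video + speaking-student close-up (sixth video)
        return [main_video, request_video(4)]
    if scene_type == "lecture":
        return [main_video]                   # first video only
    raise ValueError(f"unknown scene type: {scene_type}")
```

In the lecture branch the server additionally receives the current demonstration picture from the presentation terminal, as in step S305d-1.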
According to the embodiment of the disclosure, the current scene type is identified according to the first audio characteristic information of the teaching teacher, then the interactive current scene video is obtained through the current scene type and is transmitted to the multi-scene blackboard, so that the teaching students can clearly know the teaching process and the teaching intention of the teaching teacher through the current scene video. The interactivity of teaching and the experience of students in class are improved.
Example 3
As shown in fig. 4, the present embodiment provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method steps described in the embodiments above.
Example 4
The disclosed embodiments provide a non-transitory computer storage medium storing computer-executable instructions that perform the method steps described in the embodiments above.
Example 5
Referring now to fig. 4, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device may include a processing means (e.g., a central processor, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM403, various programs and data required for the operation of the electronic device are also stored. The processing device 401, the ROM 402, and the RAM403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.

Claims (10)

1. A live lecture system, comprising:
the server is arranged in the data center;
the first video acquisition terminal is in electrical communication with the server and is arranged in the remote teaching room and used for acquiring panoramic videos of the remote teaching room;
the second video acquisition terminal is electrically communicated with the server and is arranged in a remote laboratory and used for acquiring panoramic videos of the remote laboratory;
the third video acquisition terminal is electrically communicated with the server and is arranged in a remote classroom and used for acquiring panoramic videos of the remote classroom;
the fourth video acquisition terminal is electrically communicated with the server and is arranged in the remote classroom and used for acquiring close-up videos of speaking students in the remote classroom;
the fifth video acquisition terminal is in electric communication with the server and is matched with the second video acquisition terminal to be arranged in the remote laboratory, and is used for acquiring a close-up video of a teaching teacher demonstration experiment in the remote laboratory;
The multi-scene blackboard is electrically communicated with the server and is matched with the third video acquisition terminal and the fourth video acquisition terminal to be arranged in the remote classroom, and the multi-scene blackboard comprises a plurality of display modules, wherein the height-width ratio of each display module is larger than 1;
the server side is configured to:
receiving a first video collected by a first video collection terminal in a remote teaching room and a second video collected by a second video collection terminal in a remote laboratory, wherein the first video is a panoramic video of the remote teaching room, and the second video is a panoramic video of the remote laboratory;
performing image recognition of a teaching teacher on the video image of the first video and the video image of the second video, and determining the first video or the second video comprising the teaching teacher image as a current main video;
acquiring a plurality of current first audio feature information based on the current main video;
performing type analysis on teaching scenes on the plurality of first audio feature information to acquire the current scene type;
responding to the trigger of the current scene type, and acquiring at least one current scene video related to the current scene type, wherein the current scene video is acquired by one of the first video acquisition terminal to the fifth video acquisition terminal;
And transmitting the current scene video to at least one display module in the multi-scene blackboard for display.
2. The system according to claim 1, wherein the server is configured to obtain at least one current scene video related to the current scene type in response to the triggering of the current scene type, and specifically configured to:
responding to the trigger that the current scene type is the first mute type, and receiving a third video acquired by a third video acquisition terminal in a remote classroom, wherein the third video is a panoramic video of the remote classroom;
acquiring a plurality of current second audio feature information based on the third video;
performing type analysis of classroom scenes on the plurality of second audio feature information to acquire the current classroom type;
responding to the trigger that the current classroom type is the second mute type, and determining that the third video is the current scene video;
and responding to the trigger that the current classroom type is the speaking type, instructing the fourth video acquisition terminal in the remote classroom to acquire a fourth video, receiving the fourth video, and determining that the third video and the fourth video are both the current scene video, wherein the fourth video is a close-up video of a speaking student in the remote classroom.
3. The system according to claim 1, wherein the server is configured to obtain at least one current scene video related to the current scene type in response to the triggering of the current scene type, and specifically configured to:
responding to the trigger that the current scene type is the experiment type, instructing the fifth video acquisition terminal in the remote laboratory to acquire a fifth video, and receiving the fifth video, wherein the fifth video is a close-up video of the teaching teacher's demonstration experiment in the remote laboratory;
and determining that the second video and the fifth video are both the current scene video.
4. The system according to claim 1, wherein the server is configured to obtain at least one current scene video related to the current scene type in response to the triggering of the current scene type, and specifically configured to:
responding to the trigger that the current scene type is a question-answer type, instructing the fourth video acquisition terminal in the remote classroom to acquire a sixth video, and receiving the sixth video, wherein the sixth video is a close-up video of a speaking student in the remote classroom;
And determining that the current main video and the sixth video are both the current scene video.
5. The system of claim 1, further comprising a presentation terminal;
the demonstration terminal is matched with the first video acquisition terminal and arranged in the remote teaching room and is used for demonstrating the current demonstration picture played by a teaching teacher in combination with teaching contents;
the multi-scene blackboard also comprises a demonstration display module, wherein the aspect ratio of the demonstration display module is smaller than 1;
the server is configured to respond to the trigger of the current scene type and acquire at least one current scene video related to the current scene type, and specifically configured to:
responding to the trigger that the current scene type is the lecture type, determining that the first video is the current scene video, and receiving a current demonstration picture transmitted by the demonstration terminal;
and transmitting the current demonstration picture to a multi-scene blackboard in cooperation with teaching contents of the first video, wherein the current demonstration picture is displayed by the demonstration display module, and the first video is displayed by one of the display modules.
6. A live lecture method applied to the server of the system as set forth in claim 1, comprising:
Receiving a first video collected by a first video collection terminal in a remote teaching room and a second video collected by a second video collection terminal in a remote laboratory, wherein the first video is a panoramic video of the remote teaching room, and the second video is a panoramic video of the remote laboratory;
performing image recognition of a teaching teacher on the video image of the first video and the video image of the second video, and determining the first video or the second video comprising the teaching teacher image as a current main video;
acquiring a plurality of current first audio feature information based on the current main video;
performing type analysis on teaching scenes on the plurality of first audio feature information to acquire the current scene type;
responding to the trigger of the current scene type, and acquiring at least one current scene video related to the current scene type, wherein the current scene video is acquired by one of the first video acquisition terminal to the fifth video acquisition terminal;
and transmitting the current scene video to at least one display module in the multi-scene blackboard for display.
7. The method of claim 6, wherein said obtaining at least one current context video associated with said current context type in response to a trigger of said current context type comprises:
Responding to the trigger that the current scene type is the first mute type, and receiving a third video acquired by a third video acquisition terminal in a remote classroom, wherein the third video is a panoramic video of the remote classroom;
acquiring a plurality of current second audio feature information based on the third video;
performing type analysis of classroom scenes on the plurality of second audio feature information to acquire the current classroom type;
responding to the trigger that the current classroom type is the second mute type, and determining that the third video is the current scene video;
and responding to the trigger that the current classroom type is the speaking type, instructing the fourth video acquisition terminal in the remote classroom to acquire a fourth video, receiving the fourth video, and determining that the third video and the fourth video are both the current scene video, wherein the fourth video is a close-up video of a speaking student in the remote classroom.
8. The method of claim 6, wherein said obtaining at least one teaching scene video associated with said current scene type in response to a trigger of said current scene type comprises:
responding to the trigger that the current scene type is the experiment type, instructing the fifth video acquisition terminal in the remote laboratory to acquire a fifth video, and receiving the fifth video, wherein the fifth video is a close-up video of the teaching teacher's demonstration experiment in the remote laboratory;
And determining that the second video and the fifth video are both the current scene video.
9. The method of claim 6, wherein said obtaining at least one teaching scene video associated with said current scene type in response to a trigger of said current scene type comprises:
responding to the trigger that the current scene type is a question-answer type, instructing the fourth video acquisition terminal in the remote classroom to acquire a sixth video, and receiving the sixth video, wherein the sixth video is a close-up video of a speaking student in the remote classroom;
and determining that the current main video and the sixth video are both the current scene video.
10. The method of claim 6, wherein,
the system also comprises a demonstration terminal;
the demonstration terminal is matched with the first video acquisition terminal and arranged in the remote teaching room and is used for demonstrating the current demonstration picture played by a teaching teacher in combination with teaching contents;
the multi-scene blackboard also comprises a demonstration display module, wherein the aspect ratio of the demonstration display module is smaller than 1;
the method further comprises the steps of:
responding to the trigger that the current scene type is the lecture type, determining that the first video is the current scene video, and receiving a current demonstration picture transmitted by the demonstration terminal;
And transmitting the current demonstration picture to a multi-scene blackboard in cooperation with teaching contents of the first video, wherein the current demonstration picture is displayed by the demonstration display module, and the first video is displayed by one of the display modules.
CN202111428397.0A 2021-11-29 2021-11-29 Live teaching system and method Active CN114120729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111428397.0A CN114120729B (en) 2021-11-29 2021-11-29 Live teaching system and method

Publications (2)

Publication Number Publication Date
CN114120729A CN114120729A (en) 2022-03-01
CN114120729B true CN114120729B (en) 2023-09-12

Family

ID=80370747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111428397.0A Active CN114120729B (en) 2021-11-29 2021-11-29 Live teaching system and method

Country Status (1)

Country Link
CN (1) CN114120729B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394363A (en) * 2014-11-21 2015-03-04 苏州阔地网络科技有限公司 Online class directing method and system
CN204360609U (en) * 2015-01-26 2015-05-27 熊才平 A kind of strange land synchronization video interactive network tutoring system
CN205545683U (en) * 2016-02-19 2016-08-31 绵阳千帆渡科技有限公司 Video acquisition system gives lessons in internet
CN106327929A (en) * 2016-08-23 2017-01-11 北京汉博信息技术有限公司 Visualized data control method and system for informatization
CN109118854A (en) * 2017-06-22 2019-01-01 格局商学教育科技(深圳)有限公司 A kind of panorama immersion living broadcast interactive teaching system
CN210466804U (en) * 2019-08-26 2020-05-05 深圳市受业堂教育科技有限公司 Remote interactive education system
CN111182250A (en) * 2019-11-29 2020-05-19 安徽文香信息技术有限公司 Audio and video teaching recording and playing system and control method thereof
CN111564066A (en) * 2020-05-29 2020-08-21 江苏海事职业技术学院 Intelligent English teaching system for English teaching
CN112330996A (en) * 2020-11-13 2021-02-05 北京安博盛赢教育科技有限责任公司 Control method, device, medium and electronic equipment for live broadcast teaching
CN112330997A (en) * 2020-11-13 2021-02-05 北京安博盛赢教育科技有限责任公司 Method, device, medium and electronic equipment for controlling demonstration video

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060228692A1 (en) * 2004-06-30 2006-10-12 Panda Computer Services, Inc. Method and apparatus for effectively capturing a traditionally delivered classroom or a presentation and making it available for review over the Internet using remote production control
CN106056996B (en) * 2016-08-23 2017-08-29 深圳市鹰硕技术有限公司 A kind of multimedia interactive tutoring system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Classroom shot-type techniques for recording-and-broadcasting classrooms; Yang Shichun; Laboratory and Special Classroom Construction (Issue 282); pp. 20-21 *

Also Published As

Publication number Publication date
CN114120729A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
US20190340944A1 (en) Multimedia Interactive Teaching System and Method
CN113055624B (en) Course playback method, server, client and electronic equipment
WO2022089192A1 (en) Interaction processing method and apparatus, electronic device, and storage medium
CN112652200A (en) Man-machine interaction system, man-machine interaction method, server, interaction control device and storage medium
CN109982134B (en) Video teaching method based on diagnosis equipment, diagnosis equipment and system
CN110675674A (en) Online education method and online education platform based on big data analysis
CN111260975B (en) Method, device, medium and electronic equipment for multimedia blackboard teaching interaction
CN112330997A (en) Method, device, medium and electronic equipment for controlling demonstration video
CN111862705A (en) Method, device, medium and electronic equipment for prompting live broadcast teaching target
CN114095747B (en) Live broadcast interaction system and method
CN111161592B (en) Classroom supervision method and supervising terminal
CN109191958B (en) Information interaction method, device, terminal and storage medium
CN114120729B (en) Live teaching system and method
CN110675669A (en) Lesson recording method
CN105913698B (en) Method and device for playing course multimedia information
CN112863277B (en) Interaction method, device, medium and electronic equipment for live broadcast teaching
CN114038255B (en) Answering system and method
CN104504948A (en) Method and device for displaying pushed course for intelligent teaching system
CN111787226B (en) Remote teaching method, device, electronic equipment and medium
CN110933510B (en) Information interaction method in control system
US20220150290A1 (en) Adaptive collaborative real-time remote remediation
CN111081101A (en) Interactive recording and broadcasting system, method and device
CN114328839A (en) Question answering method, device, medium and electronic equipment
CN210072615U (en) Immersive training system and wearable equipment
CN113570227A (en) Online education quality evaluation method, system, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant