CN112218024A - Courseware video generation and channel combination information determination method and device - Google Patents


Info

Publication number
CN112218024A
Authority
CN
China
Prior art keywords
channel
video
time period
identification information
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010979638.XA
Other languages
Chinese (zh)
Other versions
CN112218024B (en)
Inventor
徐培培 (Xu Peipei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010979638.XA
Publication of CN112218024A
Application granted
Publication of CN112218024B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/08Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method and device for courseware video generation and for determining channel combination information. The method determines, according to time period information contained in a received generation instruction, the video segment recorded by each channel within that time period; for each video segment, it acquires identification information indicating whether a preset event occurs in each video frame and determines the sub-video segments that contain the preset event; it then generates the courseware video for the time period from the determined sub-video segments in chronological order. Because the sub-video segments containing preset events are determined directly from the recorded video segments of each channel, no extra directing channel is needed when generating the courseware video, which solves the excessive channel resource consumption of prior-art courseware video generation.

Description

Courseware video generation and channel combination information determination method and device
Technical Field
The invention relates to the technical field of intelligent teaching, in particular to a courseware video generation and channel combination information determining method and device.
Background
With social progress and the development of education, demand for sharing educational resources is growing, yet courseware videos of open lessons shared over traditional networks have usually required manual participation to produce.
With advances in technology, intelligent recording-and-broadcasting systems have made automatic courseware video production possible without manual intervention. However, existing intelligent recording-and-broadcasting systems usually live-broadcast classroom content while recording the courseware video.
Because a prior-art intelligent recording-and-broadcasting system must broadcast classroom content while generating the courseware, the cameras must send all recorded video to the system over their transmission channels, and the system must additionally add a directing channel that encodes the directed video in real time, sends it to the display device, and stores it. Generating a courseware video therefore requires additional channel resources, which over-consumes channel resources and makes the stored video data redundant.
Disclosure of Invention
The embodiments of the invention provide a courseware video generation method, a channel combination information determination method, and corresponding devices, equipment, and media, to solve the prior-art problems that generating courseware videos requires extra channel resources, over-consuming channel resources and producing redundant stored video data.
The embodiment of the invention provides a courseware video generation method, which comprises the following steps:
determining the video segments of each channel recorded in the time period according to the information of the time period contained in the received generation instruction;
for each video segment, determining each sub-video segment containing a preset event in the video segment according to the identification information of whether the preset event occurs in each video frame in the video segment;
and generating the courseware videos in the time period according to the determined each sub-video segment according to the time sequence.
Further, the acquiring the identification information of whether the preset event occurs in each video frame in the video segment includes:
determining channel combination information corresponding to the time period;
and acquiring the identification information of whether each video frame in the video segment has the preset event according to the acquisition time of each video frame recorded in the channel combination information and the identification information of whether the video frame corresponding to each channel has the preset event.
Further, the generating the courseware video of the time period according to each determined sub-video segment according to the time sequence includes:
determining each sub-video segment corresponding to target identification information of a preset event in the channel combination information according to the time sequence and the channel combination information corresponding to each moment in the time period, and generating a target video frame of the moment according to the analyzed video frame in each sub-video segment, wherein the channel combination information corresponding to each moment comprises the identification information of the video frame corresponding to the moment, and the identification information is sequenced according to the channel sequence;
and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
Further, the generating the target video frame at the time includes:
counting the number of target identification information of a preset event corresponding to the moment, and determining a picture segmentation mode corresponding to the number;
determining the display area of each video frame in the target video frames according to the picture segmentation mode;
and displaying each corresponding video frame in each display area of the target video frame.
Further, the determining the display area of each video frame in the target video frames according to the picture segmentation mode includes:
determining each display area obtained by adopting the picture segmentation mode according to the picture segmentation mode;
and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
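The counting and priority-based assignment described above can be sketched as follows. The concrete layouts (regions of a 1920×1080 canvas) and the priority ordering are assumptions for illustration only; the claims do not fix any concrete values.

```python
# Hypothetical sketch of the claimed steps: count the channels whose frames
# carry the "event occurred" flag, pick a picture-segmentation layout for
# that count, then place each frame in a display region by priority.
# Layouts and priorities below are illustrative assumptions.

# number of event channels -> regions (x, y, w, h) on a 1920x1080 canvas,
# ordered from highest-priority (largest) region to lowest.
LAYOUTS = {
    1: [(0, 0, 1920, 1080)],
    2: [(0, 0, 960, 1080), (960, 0, 960, 1080)],
    3: [(0, 0, 1280, 1080), (1280, 0, 640, 540), (1280, 540, 640, 540)],
}

def assign_regions(event_frames):
    """event_frames: list of (channel_priority, frame) whose flag is 1.

    Returns a list of (region, frame): the highest-priority frame is shown
    in the highest-priority region, and so on down the list.
    """
    n = len(event_frames)  # number of target identification infos at this moment
    regions = LAYOUTS[n]   # picture-segmentation mode chosen by the count
    # Lower priority number = more important; that frame gets region 0.
    ranked = sorted(event_frames, key=lambda cf: cf[0])
    return list(zip(regions, (frame for _, frame in ranked)))

placed = assign_regions([(2, "student-closeup"), (1, "teacher-closeup")])
```

With two event channels, the two-way split is chosen and the teacher close-up (priority 1) lands in the left, higher-priority region.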
Correspondingly, an embodiment of the present invention provides a method for determining channel combination information, where the method includes:
if an event trigger instruction sent by image acquisition equipment is received, recording event starting time and a channel corresponding to the image acquisition equipment, and if an event ending instruction sent by the image acquisition equipment is received, recording event ending time and a channel corresponding to the image acquisition equipment;
if a packing instruction is received, determining, according to the information of the packing time period carried in the packing instruction, at least one target event whose recorded event start time and event end time fall within the packing time period;
and determining the identification information of the video frame of the channel corresponding to each image acquisition device in the packaging time period according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device.
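Under simplified assumptions (integer-second timestamps, one identification flag per second per channel), the recording-and-packing steps above might look like this sketch; the data shapes are illustrative, not taken from the patent.

```python
# Hedged sketch of the claim above: event start/end instructions are recorded
# per channel, and on a packing instruction the per-second identification
# information (1 = event occurring, 0 = not) is derived for each channel,
# keeping only the portion of each event inside the packing period.

def build_channel_flags(events, pack_start, pack_end):
    """events: list of (channel, event_start, event_end) records.

    Returns {channel: [0/1 per second over [pack_start, pack_end)]}.
    """
    length = pack_end - pack_start
    channels = {ch for ch, _, _ in events}
    flags = {ch: [0] * length for ch in channels}
    for ch, start, end in events:
        lo = max(start, pack_start)   # clip the event to the packing period
        hi = min(end, pack_end)
        for t in range(lo, hi):
            flags[ch][t - pack_start] = 1
    return flags

flags = build_channel_flags([(1, 2, 5), (2, 8, 12)], pack_start=0, pack_end=10)
```

Channel 2's event runs past the packing period, so only its first two seconds are marked.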
Further, the method further comprises:
acquiring identification information of video frames of each channel collected within a preset classroom time period;
judging whether the number of the identification information of the preset event corresponding to the video frame of each channel is consistent with the preset number in the classroom time period or not according to the video frame of each channel, and if not, performing a supplementing operation on the identification information of the preset event corresponding to the channel; when the number of the identification information of whether the preset events exist or not corresponding to the video frames of the channel is consistent with the preset number in the classroom time period, generating channel information of the channel according to each corresponding identification information;
and generating channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
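A minimal sketch of the supplement and combination steps above, assuming that missing identification information is padded with 0 ("no event") and that the combination packs channel 1 into the lowest bit of one hexadecimal byte per moment; the padding value and bit order are assumptions for illustration.

```python
# Illustrative sketch: if a channel's flag list is shorter than the expected
# one-flag-per-second count for the classroom period, supplement it with 0s;
# then join the per-channel lists into the channel combination information.

def complete_channel_info(flags, expected):
    if len(flags) < expected:                  # supplement missing entries
        flags = flags + [0] * (expected - len(flags))
    return flags[:expected]

def combine(channel_flags, expected):
    # One two-character entry per second; each entry packs all channels'
    # flags into a byte, channel 1 in the lowest bit (assumed order).
    completed = {ch: complete_channel_info(f, expected)
                 for ch, f in channel_flags.items()}
    entries = []
    for t in range(expected):
        byte = 0
        for ch, f in completed.items():
            byte |= f[t] << (ch - 1)
        entries.append("%02X" % byte)
    return "".join(entries)

combo = combine({1: [1, 0], 4: [1, 1, 1]}, expected=3)
```

Here channel 1's list is one entry short and gets padded; at second 0 both channels have events, giving the byte 0x09.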
Correspondingly, an embodiment of the present invention provides an apparatus for generating a courseware video, where the apparatus includes:
the determining module is used for determining the video segments of each channel recorded in the time period according to the information of the time period contained in the received generation instruction; for each video segment, determining each sub-video segment containing a preset event in the video segment according to the identification information of whether the preset event occurs in each video frame in the video segment;
and the generation module is used for generating the courseware video in the time period according to the time sequence and each determined sub-video segment.
Further, the determining module is specifically configured to determine channel combination information corresponding to the time period; and acquiring the identification information of whether each video frame in the video segment has the preset event according to the acquisition time of each video frame recorded in the channel combination information and the identification information of whether the video frame corresponding to each channel has the preset event.
Further, the generating module is specifically configured to determine, according to a time sequence, each sub-video segment corresponding to target identification information where a preset event occurs in the channel combination information according to the channel combination information corresponding to each time in the time period, and generate a target video frame of the time according to an analyzed video frame in each sub-video segment, where the channel combination information corresponding to each time includes identification information of a video frame corresponding to the time, and the identification information is sorted according to the channel sequence; and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
Further, the generating module is specifically configured to count the number of target identification information of a preset event occurring corresponding to the time, and determine a picture segmentation manner corresponding to the number; determining the display area of each video frame in the target video frames according to the picture segmentation mode; and displaying each corresponding video frame in each display area of the target video frame.
Further, the generating module is specifically configured to determine, according to the picture segmentation mode, each display region obtained by dividing by using the picture segmentation mode; and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
Accordingly, an embodiment of the present invention provides an apparatus for determining channel combination information, where the apparatus includes:
the recording module is used for recording event starting time and a channel corresponding to the image acquisition equipment if an event triggering instruction sent by the image acquisition equipment is received, and recording event ending time and the channel corresponding to the image acquisition equipment if an event ending instruction sent by the image acquisition equipment is received;
the determining module is used for determining at least one target event located in the packing time period from the event starting time to the event ending time according to the information of the packing time period carried in the packing instruction if the packing instruction is received; and determining the identification information of the video frame of the channel corresponding to each image acquisition device in the packaging time period according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device.
Further, the apparatus further comprises:
the acquisition module is used for acquiring the identification information of the video frame of each channel collected in a preset classroom time period;
the judging module is used for judging whether the number of the identification information of the preset event corresponding to the video frame of each channel is consistent with the preset number in the classroom time period or not according to the video frame of each channel, and if not, the identification information of the preset event corresponding to the channel is supplemented; when the number of the identification information of whether the preset events exist or not corresponding to the video frames of the channel is consistent with the preset number in the classroom time period, generating channel information of the channel according to each corresponding identification information;
and the generating module is used for generating channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
Accordingly, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory is used to store program instructions, and the processor is used to implement the steps of any one of the methods for courseware video generation described above when executing a computer program stored in the memory.
Accordingly, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory is used to store program instructions, and the processor is used to implement the steps of any one of the above methods for determining channel combination information when executing a computer program stored in the memory.
Accordingly, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of any one of the methods for courseware video generation described above.
Accordingly, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of any one of the methods for determining channel combination information described above.
The embodiments of the invention provide a courseware video generation method, a channel combination information determination method, and corresponding devices, equipment, and media. The method determines the video segment recorded by each channel within a time period according to the time period information contained in a received generation instruction; for each video segment, it acquires identification information indicating whether a preset event occurs in each video frame and determines the sub-video segments containing the preset event; and it generates the courseware video for the time period from the determined sub-video segments in chronological order. Because the sub-video segments containing preset events are determined directly from the recorded video segments of each channel, generating the courseware video needs no additional directing channel, solving the prior-art problems of excessive channel resource consumption and redundant stored video data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a process diagram of a method for generating a courseware video according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a frame segmentation according to an embodiment of the present invention;
fig. 3 is a schematic process diagram of another courseware video generation method provided by the embodiment of the present invention;
fig. 4 is a schematic process diagram of a method for determining channel combination information according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a courseware video generation apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for determining channel combination information according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To record a courseware video, classroom content in the embodiments of the invention is captured in a recording-and-broadcasting classroom, and the courseware video is generated from the captured video in which preset events occur. Multiple cameras are installed in the recording-and-broadcasting classroom, and each camera may be single-lens or multi-lens. A multi-lens camera has several lenses, each capturing a defined area or defined target; for example, a three-lens camera can capture a podium panorama, a teacher close-up, and the blackboard writing.
To process the captured video into a courseware video, the video must be transmitted over a channel, whether it comes from the single lens of a single-lens camera or from one lens of a multi-lens camera. Preferably, each lens of a camera corresponds to one channel for video transmission.
Example 1:
fig. 1 is a schematic diagram of a process of generating a courseware video according to an embodiment of the present invention, where the process includes the following steps:
s101: and determining the video segments of each channel recorded in the time period according to the information of the time period contained in the received generation instruction.
The courseware video generation method provided by the embodiment of the invention is applied to an electronic device capable of processing video frames, such as a server, a PC, or a mobile terminal.
In the embodiment of the invention, after receiving the generation instruction, the electronic device determines the time period for which a courseware video is to be generated according to the time period information contained in the instruction. The time period may be the specific time period of a lesson or any other time period, which is not limited in the embodiments of the invention. For example, the time period corresponding to one lesson is 13:00:00 to 14:00:00 on October 11, 2019.
The generation instruction may be sent to the electronic device by another electronic device, or triggered by the user on the electronic device itself. In the first case, the time period information is read from the received instruction. In the second case, the electronic device first receives a trigger operation, then displays selectable start and end times together with a confirmation button; after the user selects the start and end times and presses the button, the electronic device generates a generation instruction containing them.
In the embodiment of the invention, videos of the scenes in a classroom are recorded by multiple cameras in the recording-and-broadcasting classroom, and each camera's video is transmitted over its corresponding channel; therefore, to generate the courseware video for the time period, the video segment recorded by each channel within that period must be acquired. The video segments of the channels are those recorded by cameras at different positions, for example the camera at the podium and a camera facing the students.
Since the time range of the video recorded by the camera includes the time period, in order to generate the courseware video in the time period, the video segment of each channel recorded in the time period needs to be determined according to the information of the time period contained in the received generation instruction.
S102: and acquiring identification information of whether each video frame in the video segment has a preset event or not aiming at the video segment of each channel, and determining each sub-video segment containing the preset event in the video segment.
The video recorded in the time period includes a video segment from every channel. If the courseware video were generated from all channels' segments over the whole period, its content would be cluttered and unfocused. For example, if only the teacher speaks during a certain sub-period, also displaying the student panorama or student close-ups in that sub-period would shrink the teacher's picture and fail to highlight the teacher's lecture.
Therefore, in order to ensure that the generated courseware video in the time period is emphasized, each sub-video segment of the video segment in which the preset event occurs needs to be determined for the video segment of each channel.
Each video frame of each channel's video segment in the time period corresponds to identification information indicating whether a preset event occurs. A preset event is an event defined in advance, for example the teacher standing at the podium, the teacher writing on the blackboard, or a student standing up.
To record whether a preset event occurs in the content of each video frame of each channel, the embodiment of the invention records corresponding identification information for each video frame, identifying whether the frame contains the preset event. For convenience of query, the identification information of all video frames in a video segment is combined into an identification information group, which is stored in correspondence with the video segment.
Specifically, when a preset event occurs in a video frame of the video segment, the frame's identification information may be set to 1; otherwise it is set to 0. For example, when a student stands to answer a question, the identification information of every video frame recorded by that channel from the moment the student stands up until the student sits down is set to 1. The sub-video segments of a video segment in which a preset event occurs can therefore be determined from the per-frame identification information.
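The sub-segment determination described above amounts to finding the maximal runs of 1s in the per-frame identification information; a minimal sketch (the list-of-flags representation is an assumption for illustration):

```python
# Given one channel's per-frame identification information (1 = preset event
# occurred in that frame, 0 = not), each maximal run of 1s is one sub-video
# segment, returned as (start_index, end_index) with the end exclusive.

def find_sub_segments(flags):
    segments = []
    start = None
    for i, flag in enumerate(flags):
        if flag == 1 and start is None:
            start = i                      # event begins at frame i
        elif flag == 0 and start is not None:
            segments.append((start, i))    # event ended before frame i
            start = None
    if start is not None:                  # event still active at segment end
        segments.append((start, len(flags)))
    return segments

subs = find_sub_segments([0, 1, 1, 1, 0, 0, 1, 1])
```

The example yields two sub-video segments: frames 1-3 and frames 6-7.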
S103: and generating the courseware videos in the time period according to the determined each sub-video segment according to the time sequence.
Since each video segment in the time period may contain one or more sub-video segments in which a preset event occurs, and the duration and position of each sub-video segment within the time period are not fixed, after the sub-video segments containing preset events have been determined for every video segment, they must be sorted chronologically to generate the courseware video for the time period.
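Under the simplifying assumption that each determined sub-video segment carries its start time and channel number, the chronological ordering of S103 can be sketched as:

```python
# Sketch of S103: sub-video segments gathered from all channels are sorted
# chronologically and then concatenated into the courseware timeline. Real
# segments would carry frame data; here a segment is just an illustrative
# (start_time, channel, label) tuple.

def order_for_courseware(sub_segments):
    # Sort by start time first, with channel number as a tie-breaker.
    return sorted(sub_segments, key=lambda s: (s[0], s[1]))

timeline = order_for_courseware([
    (30, 2, "student answers"),
    (0, 1, "teacher lectures"),
    (55, 3, "blackboard writing"),
])
```

The courseware video is then assembled by appending the segments in this order.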
In the embodiment of the invention, when the courseware video is generated, according to the time period information contained in the received generation instruction, each sub-video segment containing the preset event can be determined according to the recorded video segment of each channel and the identification information of whether the preset event occurs in each video frame in the video segment, so that the courseware video in the time period is generated according to the time sequence, and the problem of excessive channel resource consumption caused by the generation of the courseware video in the prior art is solved.
Example 2:
in order to obtain the identification information of each video frame in each video segment, on the basis of the above embodiment, in an embodiment of the present invention, the obtaining of the identification information of whether a preset event occurs in each video frame in the video segment includes:
determining channel combination information corresponding to the time period;
and acquiring the identification information of whether each video frame in the video segment has the preset event according to the acquisition time of each video frame recorded in the channel combination information and the identification information of whether the video frame corresponding to each channel has the preset event.
The video collected by each camera is transmitted through the corresponding channel, so that the channel information of the channel can be generated according to whether each video frame of the channel has a preset event or not according to each channel, and the combination of the channel information corresponding to each camera positioned in the same classroom is determined as channel combination information.
Since each classroom has its corresponding channel combination information for all video content recorded in that classroom, in addition, for convenience of recording, its corresponding channel combination information can be generated for each class in that classroom. Therefore, after the time period is determined, the channel combination information corresponding to the time period can be determined.
Because the time information corresponding to each lesson is predetermined, the channel combination information corresponding to the time period is determined in the channel combination information corresponding to each lesson according to the time period for generating the courseware video and the channel combination information corresponding to the video content of each lesson.
The channel combination information corresponding to the time period records, for the acquisition time of each video frame, the identification information of the video frame corresponding to each channel. Therefore, according to the identification information recorded in the channel combination information, it can be determined whether a preset event occurs in each video frame of the video segment of each channel within the time period.
For example, taking the time period 13:00:00-14:00:00 on October 11, 2019 as an example, the channel combination information for this time period is: 00……00 (20 00s), 10……10 (10 10s), 00……00 (30 00s), 01……01 (10 01s), 03……03 (20 03s), 01……01 (30 01s), 24……24 (20 24s), 00……00 (160 00s), 08……08 (30 08s), 00……00 (3270 00s), 7200 characters in total.
In the channel combination information, every two characters form one hexadecimal value giving the identification information of the video frames of all channels at one instant. For example, 00 converts to the binary value 00000000, which is the identification information of the video frames of the 8 channels at a certain instant: each bit, from channel 8 down to channel 1, is the identification information of the corresponding channel at that instant. From 00000000 it can be seen that the identification information of the video frames of all channels at that instant is 0, that is, no preset event occurs in any channel's video frame at that instant.
The identification information of the video frames at the first 20 time instants in the channel combination information is 00, that is, no preset event occurs in the video frames of all the channels at the first 20 time instants.
Hexadecimal 10 converts to binary 00010000, the identification information of the video frames of the 8 channels at a certain instant: the first 3 bits (0) are the identification information of channel 8, channel 7 and channel 6, the 4th bit (1) is the identification information of channel 5, and the last 4 bits (0) are the identification information of channel 4, channel 3, channel 2 and channel 1. From 00010000 it can be seen that the identification information of the video frame of channel 5 at that instant is 1, that is, a preset event occurs in the video frame of channel 5, while the identification information of channels 1, 2, 3, 4, 6, 7 and 8 is 0, that is, no preset event occurs. There are 10 consecutive 10s, that is, the identification information at the next 10 instants is 10: the preset event occurs in the video frame of channel 5 at those 10 instants, and no preset event occurs in the video frames of the other channels.
01 converts to binary 00000001, the identification information of the video frames of the 8 channels at a certain instant: the first 7 bits (0) are the identification information of channel 8, channel 7 …… and channel 2, and the last bit (1) is the identification information of channel 1. From 00000001, the identification information of the video frame of channel 1 at that instant is 1, that is, the preset event occurs in the video frame of channel 1, while the identification information of channels 8, 7 …… and 2 is 0, that is, no preset event occurs. There are 10 consecutive 01s, that is, the identification information at the next 10 instants is 01: the preset event occurs in the video frame of channel 1 at those 10 instants, and no preset event occurs in the video frames of the other channels.
03 converts to binary 00000011, the identification information of the video frames of the 8 channels at a certain instant: the first 6 bits (0) are the identification information of channel 8, channel 7 …… and channel 3, and the last two bits (1) are the identification information of channel 2 and channel 1. From 00000011, the identification information of the video frames of channel 1 and channel 2 at that instant is 1, that is, the preset event occurs in the video frames of channel 1 and channel 2, while the identification information of channels 8, 7 …… and 3 is 0, that is, no preset event occurs. There are 20 consecutive 03s, that is, the identification information at the next 20 instants is 03: the preset event occurs in the video frames of channel 1 and channel 2 at those 20 instants, and no preset event occurs in the video frames of the other channels.
24 converts to binary 00100100, the identification information of the video frames of the 8 channels at a certain instant: the first 2 bits (0) are the identification information of channel 8 and channel 7, the 3rd and 6th bits (1) are the identification information of channel 6 and channel 3, and the 4th, 5th, 7th and 8th bits (0) are the identification information of channel 5, channel 4, channel 2 and channel 1. From 00100100, the identification information of the video frames of channel 3 and channel 6 at that instant is 1, that is, the preset event occurs in the video frames of channel 3 and channel 6, while the identification information of channels 8, 7, 5, 4, 2 and 1 is 0, that is, no preset event occurs. There are 20 consecutive 24s, that is, the identification information at the next 20 instants is 24: the preset event occurs in the video frames of channel 3 and channel 6 at those 20 instants, and no preset event occurs in the video frames of the other channels.
08 converts to binary 00001000, the identification information of the video frames of the 8 channels at a certain instant: the first 4 bits (0) are the identification information of channel 8, channel 7, channel 6 and channel 5, the 5th bit (1) is the identification information of channel 4, and the last 3 bits (0) are the identification information of channel 3, channel 2 and channel 1. From 00001000, the identification information of the video frame of channel 4 at that instant is 1, that is, the preset event occurs in the video frame of channel 4, while the identification information of channels 8, 7, 6, 5, 3, 2 and 1 is 0, that is, no preset event occurs. There are 30 consecutive 08s, that is, the identification information at the next 30 instants is 08: the preset event occurs in the video frame of channel 4 at those 30 instants, and no preset event occurs in the video frames of the other channels.
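As a minimal illustrative sketch (not part of the patent text), the hexadecimal decoding walked through above can be written in Python; the bit order, with channel 1 in the least significant bit, is inferred from the examples (hex 10 marks channel 5, hex 01 marks channel 1, hex 24 marks channels 3 and 6):

```python
def decode_instant(hex_pair: str) -> dict:
    """Decode one two-character hexadecimal entry of the channel
    combination information into per-channel 0/1 identification flags.

    Bit 0 (least significant) is channel 1, bit 7 is channel 8.
    """
    value = int(hex_pair, 16)
    return {channel: (value >> (channel - 1)) & 1 for channel in range(1, 9)}

# Hex 24 = binary 00100100: the preset event occurs on channels 3 and 6.
flags = decode_instant("24")
active = [ch for ch, bit in flags.items() if bit == 1]
```

With this bit order, `decode_instant("10")` flags channel 5 and `decode_instant("08")` flags channel 4, matching the walkthrough above.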
In the channel combination information corresponding to the time period, the podium panorama channel 0, the student close-up channel 1, the teacher close-up channel 2, the blackboard-writing channel 3, the student panorama channel 4 and the PPT channel 5 correspond to channel 1, channel 2, channel 3, channel 4, channel 5 and channel 6, respectively.
In the above channel combination information, since the interval at which a video frame is captured is fixed and is the same for every image acquisition device, the number of identification information entries corresponding to the time period is also fixed.
Since the identification information of the video frames of channel 1 records the occurrence of the preset event in the time ranges 13:01:00-13:01:10, 13:01:10-13:01:30 and 13:01:30-13:02:00, the identification information of the video frames in the video segment of the podium panorama channel 0 within the time period 13:00:00-14:00:00 is 00……00 (60 0s), 11……11 (60 1s), 00……00 (3480 0s), 3600 characters in total.
The identification information of the video frames of channel 2 records the occurrence of the preset event in the time range 13:01:10-13:01:30. Therefore, for the student close-up channel 1, the identification information of its video segment within the time period 13:00:00-14:00:00 is 00……00 (70 0s), 11……11 (20 1s), 00……00 (3510 0s), 3600 characters in total.
The identification information of the video frames of channel 3 records the occurrence of the preset event in the time range 13:02:00-13:02:20. Therefore, for the teacher close-up channel 2, the identification information of its video segment within the time period 13:00:00-14:00:00 is 00……00 (120 0s), 11……11 (20 1s), 00……00 (3460 0s), 3600 characters in total.
The identification information of the video frames of channel 4 records the occurrence of the preset event in the time range 13:05:00-13:05:30. Therefore, for the blackboard-writing channel 3, the identification information of its video segment within the time period 13:00:00-14:00:00 is 00……00 (300 0s), 11……11 (30 1s), 00……00 (3270 0s), 3600 characters in total.
Since the identification information of the video frames of channel 5 records the occurrence of the preset event in the time range 13:00:20-13:00:30, for the student panorama channel 4, the identification information of its video segment within the time period 13:00:00-14:00:00 is 00……00 (20 0s), 11……11 (10 1s), 00……00 (3570 0s), 3600 characters in total.
The identification information of the video frames of channel 6 records the occurrence of the preset event in the time range 13:02:00-13:02:20. Therefore, for the PPT channel 5, the identification information of its video segment within the time period 13:00:00-14:00:00 is 00……00 (120 0s), 11……11 (20 1s), 00……00 (3460 0s), 3600 characters in total.
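The per-channel identification strings listed above can be derived mechanically from the channel combination information. Below is a sketch; the run-length input format and the function names are assumptions for illustration, with the run counts taken from the example:

```python
def expand_runs(runs):
    """Expand [(hex_pair, count), ...] into one hex pair per instant."""
    return [pair for pair, count in runs for _ in range(count)]

def channel_column(pairs, channel):
    """Extract one channel's 0/1 identification string
    (channel 1 is the least significant bit of each hex pair)."""
    return "".join(str((int(p, 16) >> (channel - 1)) & 1) for p in pairs)

# Channel combination information for 13:00:00-14:00:00 from the example.
runs = [("00", 20), ("10", 10), ("00", 30), ("01", 10), ("03", 20),
        ("01", 30), ("24", 20), ("00", 160), ("08", 30), ("00", 3270)]
pairs = expand_runs(runs)       # 3600 instants, one per second
ch1 = channel_column(pairs, 1)  # podium panorama channel 0: 60 0s, 60 1s, 3480 0s
```

Under these assumptions the extracted column for channel 1 contains 60 ones starting at instant 60, matching the 00……00 (60 0s), 11……11 (60 1s) string above.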
Example 3:
In order to generate the courseware video in the time period, on the basis of the above embodiments, in an embodiment of the present invention, the generating the courseware video in the time period according to the determined sub-video segments in chronological order includes:
determining each sub-video segment corresponding to target identification information of a preset event in the channel combination information according to the time sequence and the channel combination information corresponding to each moment in the time period, and generating a target video frame of the moment according to the analyzed video frame in each sub-video segment, wherein the channel combination information corresponding to each moment comprises the identification information of the video frame corresponding to the moment, and the identification information is sequenced according to the channel sequence;
and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
In the embodiment of the present invention, after the channel combination information corresponding to each instant in the time period is determined, since the video segment of each channel may contain a plurality of sub-video segments in which a preset event occurs, in order to generate the courseware video of the time period it is further necessary to determine, in chronological order, the video frame of the courseware video corresponding to each instant in the time period.
Therefore, according to the time sequence, the channel combination information corresponding to each time in the time period is determined. The channel combination information corresponding to each moment comprises the identification information of the video frame corresponding to the moment, the video frame corresponding to the moment comprises the video frame of each channel corresponding to the moment, and the identification information of the video frame corresponding to the moment is sequenced according to the channel sequence.
According to the target identification information of the preset event occurring in the channel combination information corresponding to each moment, each sub-video segment corresponding to the target identification information can be determined, and each sub-video segment is analyzed to determine the video frame in each sub-video segment. And determining the target video frame at the moment according to the video frame corresponding to the moment.
And aiming at each moment in the time period, according to the target identification information of the preset event in the channel combination information corresponding to each moment, determining the video frame in each sub-video segment corresponding to the target identification information. When a plurality of preset events occur at the moment, each preset event is recorded in a corresponding sub-video segment, each sub-video segment belongs to a video segment of a different channel, and a corresponding video frame exists in each sub-video segment at the moment. The electronic equipment downloads the sub-video segments which are corresponding to the moment and have the preset events and belong to different channels from a video storage module of the image acquisition equipment, performs frame analysis on the sub-video segments, determines video frames in the sub-video segments, and combines the video frames to generate a target video frame at the moment.
After the target video frame of each moment is determined, the target video frames are sequenced according to the generated target video frame of each moment and the time sequence, and the courseware video in the time period is generated by the sequenced target video frames according to the video generation method in the prior art.
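The chronological assembly just described can be sketched as follows; `get_frame` and `combine` are hypothetical placeholders for the storage download and picture-composition steps (the patent does not prescribe such an interface):

```python
def build_courseware(pairs, get_frame, combine):
    """Walk the per-instant hex pairs of the channel combination
    information in chronological order and collect one combined
    target frame per instant at which a preset event occurs."""
    target_frames = []
    for t, pair in enumerate(pairs):
        value = int(pair, 16)
        # Channels whose identification bit is 1 at this instant.
        active = [ch for ch in range(1, 9) if (value >> (ch - 1)) & 1]
        if not active:
            continue  # no preset event at this instant
        frames = [get_frame(ch, t) for ch in active]
        target_frames.append(combine(frames))
    return target_frames
```

Instants whose channel combination entry is 00 contribute nothing to the courseware video, which is how the scheme avoids re-encoding the full recordings.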
Example 4:
In order to generate the target video frame at the instant, on the basis of the above embodiments, in an embodiment of the present invention, the generating the target video frame at the instant includes:
counting the number of target identification information of a preset event corresponding to the moment, and determining a picture segmentation mode corresponding to the number;
determining the display area of each video frame in the target video frames according to the picture segmentation mode;
and displaying each corresponding video frame in each display area of the target video frame.
In order to generate the target video frame at the instant, it is necessary to determine the display area, within the target video frame, of each video frame corresponding to the instant; since the number of video frames corresponding to each instant differs, the size of the display area in which each video frame is displayed also differs.
In order to determine the display area of each video frame in the target video frame, the picture segmentation mode of the target video frame at the instant must first be determined, and this mode is related to the number of target identification information entries, corresponding to the instant, that indicate the occurrence of the preset event. Therefore, when the target video frame at the instant is generated, the picture segmentation mode can be determined according to that number.
And determining the picture segmentation mode corresponding to the number according to the number of the target identification information of the occurrence preset event corresponding to the moment. The screen division method is predetermined, and includes 2 division, 3 division, 4 division, 5 division, 6 division, and the like. For example, when the number of target identification information of the occurrence of the preset event corresponding to the time is 3, the screen division manner is 3 division.
And determining the display area of each video frame in the target video frame according to the determined picture segmentation mode at the moment. The number of video frames corresponding to the moment is the same as the number of display areas in the target video frame in the corresponding picture division mode. Specifically, when the display area of each video frame is determined, any one of the video frames may correspond to any one of the display areas divided by the picture segmentation manner, or the display area of each video frame in the target video frame may be determined according to the order of importance of the preset event corresponding to the video frame.
And after the display area of each video frame corresponding to the moment is determined, displaying each corresponding video frame in each display area of the target video frame, thereby generating the target video frame.
Fig. 2 is a schematic diagram of picture segmentation according to an embodiment of the present invention. As shown in fig. 2, in order from left to right and from top to bottom, the picture segmentation modes are single, 2-split, 3-split, 4-split, 5-split and 6-split.
Example 5:
In order to determine the display area of each video frame in the target video frame, on the basis of the foregoing embodiments, in an embodiment of the present invention, the determining the display area of each video frame in the target video frame according to the picture segmentation mode includes:
determining each display area obtained by adopting the picture segmentation mode according to the picture segmentation mode;
and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
In order to better determine the display area of each video frame in the target video frame, the importance of each video frame needs to be considered. Therefore, after the display areas obtained by the picture segmentation mode are determined, the display area corresponding to each video frame needs to be determined.
The priority of each video frame at the instant is related to the channel corresponding to that video frame and can be determined according to the preset priority of the channel.
For example, the preset priority of a channel may be associated with its channel number, e.g. the larger the channel number, the higher the priority. The priority ranking may be: PPT channel, blackboard-writing channel, teacher close-up channel, student close-up channel, podium panorama channel, student panorama channel. This ranking is only one possible ordering of channel priorities; in the embodiment of the present invention other orderings may also be used, which is not limited in this embodiment.
According to the preset priority of each display area and the determined priority of each video frame in the sub-video segments at the instant, the display area corresponding to each video frame can be determined. Specifically, the video frame of the first priority is displayed in the display area of the first priority, the video frame of the second priority in the display area of the second priority, and so on. That is, each video frame is displayed in the display area having the same priority as the frame itself.
As shown in fig. 2, the smaller the number of each display region divided by the screen division method, the higher the priority of the display region.
For example, in the upper middle of fig. 2, the picture segmentation mode is 2-split, which yields display area 0 and display area 1; the priority of display area 0 is the first priority and that of display area 1 is the second priority. Two video frames are displayed in the 2-split mode. When the channel of one video frame is the blackboard-writing channel and the channel of the other is the podium panorama channel, since the priority of the blackboard-writing channel is higher than that of the podium panorama channel, the video frame of the blackboard-writing channel has the first priority and the video frame of the podium panorama channel has the second priority. Therefore, the video frame of the blackboard-writing channel is displayed in display area 0, and the video frame of the podium panorama channel is displayed in display area 1.
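The priority-based area assignment of this example can be sketched as follows; the channel ranking is taken from the example above, while the function name and data shapes are illustrative assumptions:

```python
# Channel priority from the example: PPT highest, student panorama lowest.
CHANNEL_PRIORITY = ["PPT", "blackboard-writing", "teacher close-up",
                    "student close-up", "podium panorama", "student panorama"]

def assign_display_areas(frame_channels):
    """Map each video frame's channel to a display area number;
    display area 0 has the highest priority, so the highest-ranked
    channel is assigned to area 0, the next to area 1, and so on."""
    ordered = sorted(frame_channels, key=CHANNEL_PRIORITY.index)
    return {channel: area for area, channel in enumerate(ordered)}

# 2-split example: blackboard-writing outranks podium panorama.
areas = assign_display_areas(["podium panorama", "blackboard-writing"])
```

Here `areas` maps the blackboard-writing channel to display area 0 and the podium panorama channel to display area 1, as in the text.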
Example 6:
The following describes the process of the courseware video generation method by taking a specific embodiment as an example. The courseware video is generated for the time range 13:00:00-14:00:00 on October 11, 2019, and the channel combination information corresponding to this time period is: 00……00 (20 00s), 10……10 (10 10s), 00……00 (30 00s), 01……01 (10 01s), 03……03 (20 03s), 01……01 (30 01s), 24……24 (20 24s), 00……00 (160 00s), 08……08 (30 08s), 00……00 (3270 00s). 00 indicates that no preset event occurs in the video frames at the corresponding instant; 10 indicates that the preset event occurs in the video frame of channel 5; 01 indicates that the preset event occurs in the video frame of channel 1; 03 indicates that the preset event occurs in the video frames of channel 1 and channel 2; 24 indicates that the preset event occurs in the video frames of channel 3 and channel 6; 08 indicates that the preset event occurs in the video frame of channel 4 at the corresponding instant.
Fig. 3 is a schematic process diagram of another courseware video generation method provided by the embodiment of the present invention, where the process includes the following steps:
S301: Determining the video segments of each channel among the 6 channels recorded in the time period according to the information of the time period contained in the received generation instruction.
S302: and determining channel combination information corresponding to the time periods of 13:00:00-14:00:00 in 10, 11 and 2019.
S303: and acquiring the identification information of whether the preset event occurs in each video frame in the video segment of each channel according to the identification information of whether the preset event occurs in the video frame corresponding to each channel at the acquisition time of each video frame recorded in the channel combination information.
S304: and determining each sub-video segment with the preset event in the video segment of each channel according to the identification information of whether the preset event occurs in each video frame in the video segment of each channel.
S305: according to the time sequence, determining the video frames in each sub-video segment corresponding to the target identification information of the preset event in the channel combination information according to the channel combination information corresponding to each time in the time period.
S306: and counting the number of the video frames corresponding to the moment, and selecting the picture segmentation mode corresponding to the number.
For example, 10, 01, and 08 in the channel combination information represent that the picture division manner is single division, and 03 and 24 represent that the picture division manner is 2 division.
S307: and determining each display area obtained by adopting the picture division mode according to the picture division mode.
S308: and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
S309: and displaying each corresponding video frame in each display area of the target video frame.
S310: and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
Example 7:
fig. 4 is a schematic process diagram of a method for determining channel combination information according to an embodiment of the present invention, where the process includes the following steps:
s401: if an event trigger instruction sent by image acquisition equipment is received, recording event starting time and a channel corresponding to the image acquisition equipment, and if an event ending instruction sent by the image acquisition equipment is received, recording event ending time and a channel corresponding to the image acquisition equipment.
The method for determining the channel combination information provided by the embodiment of the invention can be applied to another electronic device connected with the electronic device for generating the courseware video and the image acquisition device, such as a server, a PC, a mobile terminal and the like.
In order to facilitate event recording, the video storage module of the image acquisition device records and stores the video. When the image acquisition device monitors that a preset event is triggered, it sends an event trigger instruction to the video analysis module of the electronic device, where the event trigger instruction carries the identification information of the channel. After the electronic device receives the event trigger instruction, it records the event start time of the preset event and the channel corresponding to the image acquisition device.
And when the image acquisition equipment monitors that the preset event is ended, sending an event ending instruction to a video analysis module of the electronic equipment. The event ending instruction carries identification information of a channel, and after the electronic equipment receives the event ending instruction, the event ending time of the preset event and the channel corresponding to the image acquisition equipment are recorded.
S402: and if a packing instruction is received, determining at least one target event located in the packing time period from the event starting time to the event ending time according to the information of the packing time period carried in the packing instruction.
The packing instruction received by the electronic device carries information of a packing time period. The packing time period is generally a determined classroom time period; of course, it may also be any time period, which is not limited in the embodiment of the present invention.
The electronic device determines the target events within the packing time period according to the information of the packing time period carried in the packing instruction. For a preset event, the event start time may lie before the packing time period while the event end time lies within it; the event start time may lie within the packing time period while the event end time lies after it; or both the event start time and the event end time may lie within the packing time period. Therefore, when the electronic device determines the target events occurring in the packing time period, a preset event is determined to be a target event as long as at least one of its recorded event start time and event end time lies within the packing time period.
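The criterion just stated can be expressed directly; times are plain numbers (e.g. seconds) for illustration. The sketch follows the text literally, so an event that starts before and ends after the packing time period would not qualify:

```python
def is_target_event(start, end, pack_start, pack_end):
    """A preset event is a target event if at least one of its event
    start time and event end time lies within the packing time period."""
    return pack_start <= start <= pack_end or pack_start <= end <= pack_end
```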
S403: and determining the identification information of the video frame of the channel corresponding to each image acquisition device in the packaging time period according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device.
In order to determine the identification information of the video frames in the packing time period, the electronic device needs to determine the occurrence time period of each target event occurring in the packing time period.
Specifically, if the event start time and the event end time of the target event are both located in the packaging time period, the electronic device determines that the occurrence time period of the target event is a time period between the event start time and the event end time; if the event starting time of the target event is located before the packaging time period and the event ending time is located in the packaging time period, the electronic equipment determines that the occurrence time period of the target event is a time period between the starting time and the event ending time of the packaging time period; if the event start time of the target event is located in the packaging time period and the event end time is located after the packaging time period, the electronic device determines that the occurrence time period of the target event is a time period between the event start time and the end time of the packaging time period.
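The three cases above amount to clipping the event interval to the packing time period, which a short sketch makes explicit (times again as plain numbers):

```python
def occurrence_period(start, end, pack_start, pack_end):
    """Clip a target event's [start, end] interval to the packing
    time period, covering all three cases described above."""
    return max(start, pack_start), min(end, pack_end)
```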
According to the determined occurrence time period of each target event and the recorded channel corresponding to the image acquisition device monitoring the target event, the electronic device determines at which times the channel corresponding to each image acquisition device in the packaging time period has a preset event and at which times the channel does not have the preset event, so as to determine the identification information corresponding to the video frame of each channel.
Specifically, in order to conveniently record whether a preset event occurs in the video frame of each channel at each instant, different identification information is used for distinction in the embodiment of the present invention. For example, the identification information may be 0 and 1, where 0 identifies that no preset event occurs and 1 identifies that a preset event occurs. If a student stands up to answer a question, the identification information of the corresponding channel at that instant is 1, and after the student sits down it is 0 again; within the time range in which writing appears on the blackboard, the identification information of the corresponding channel is 1, and otherwise it is 0.
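Building a channel's 0/1 identification string from the clipped occurrence periods of its target events can be sketched as follows, assuming one identification entry per second as in the earlier example:

```python
def channel_identification(periods, pack_start, pack_end):
    """Build a channel's identification string for the packing time
    period: one '0'/'1' character per second, '1' where any of the
    given occurrence periods covers that second."""
    bits = ["0"] * (pack_end - pack_start)
    for start, end in periods:
        for t in range(max(start, pack_start), min(end, pack_end)):
            bits[t - pack_start] = "1"
    return "".join(bits)
```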
In the embodiment of the present invention, in order to determine the channel combination information of each lesson more accurately, the method further includes:
acquiring identification information of video frames of each channel collected within a preset classroom time period;
for the video frame of each channel, judging whether the number of pieces of identification information indicating whether the preset event occurs is consistent with the preset number for the classroom time period, and if not, performing a supplementing operation on the identification information of the channel; when the number of pieces of identification information corresponding to the video frames of the channel is consistent with the preset number for the classroom time period, generating channel information of the channel according to each piece of corresponding identification information;
and generating channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
Because the embodiment of the invention records classroom content, channel combination information corresponding to the video content of each lesson can be generated according to the time period corresponding to that lesson. Therefore, when a video is recorded for each lesson, the channel combination information at each moment can be determined in time order, and the channel combination information at each moment can be combined in time order to form the channel combination information corresponding to the video of the lesson. Specifically, during recording, the time information corresponding to each lesson and the information of the corresponding classroom may be recorded together with the channel combination information corresponding to the video content of that lesson.
When classroom content is recorded, the cameras corresponding to different channels may not start recording at the same time; that is, the videos collected by all channels do not cover classroom content over the entire time range from the beginning to the end of the lesson.
In order to ensure that the channel combination information records the identification information of the video frame of each channel at each moment, in the embodiment of the present invention it is necessary to judge, for the video frame of each channel, whether the number of pieces of identification information indicating whether a preset event occurs is consistent with the preset number for the classroom time period. The preset number for the classroom time period refers to the number of pieces of identification information determined according to a preset standard for that time period. If the two are inconsistent, the identification information of the channel needs to be supplemented.
Specifically, the duration of each lesson is fixed, the time taken to collect one video frame is fixed, and every image collection device collects video frames at the same interval. Because each video frame corresponds to one piece of identification information indicating whether a preset event occurs, the number of pieces of identification information contained in each channel of each lesson is also fixed. The comparison can therefore be performed against this fixed number, which is set in advance as the preset number of pieces of identification information.
After the identification information contained in each channel within the preset classroom time period is obtained, the number of pieces of identification information corresponding to the video frames of each channel is obtained for that channel, and it is judged whether this number is consistent with the preset number for the classroom time period. If not, the difference between the preset standard number and the number of pieces of identification information corresponding to the video frames of the channel is determined, that many pieces of identification information are prepended before the identification information contained in the channel, and each supplemented piece of identification information identifies that no preset event occurred at the corresponding moment.
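The supplementing operation described above amounts to prepending "no event" marks until the sequence reaches the preset number. A hypothetical helper, assuming 0 marks "no preset event" as in the earlier example:

```python
def pad_flags(flags, preset_count):
    """Prepend 'no event' (0) marks to a channel that started
    recording late, so that its flag sequence has the preset number
    of entries for the classroom time period."""
    missing = preset_count - len(flags)
    if missing <= 0:
        return list(flags)  # already complete
    return [0] * missing + list(flags)
```

A channel that only produced 3 flags when 5 are expected thus gains two leading zeros, aligning its flags with the lesson start.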
If the numbers are consistent, the subsequent process of generating the channel information of the channel according to each piece of identification information corresponding to the channel is performed. The channel information of a channel is the combination of the identification information of the video frames collected by the image collection device of the channel within the time range in which it collected video frames.
And generating channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
Specifically, because the lenses of the image capturing devices that capture the classroom content are fixed, the number of channels included is also fixed. To facilitate determining each video frame in which a preset event occurs, the position of the channel information of each channel in the channel combination information may be preset, and the channel combination information corresponding to the preset classroom time period is generated according to the channel information of each channel and the preset position of that channel information in the channel combination.
For example, with 8 channels in total and positions counted from front to back, the channel information of channel 8 occupies the 1st bit in the channel combination information, that of channel 7 the 2nd bit, that of channel 6 the 3rd bit, that of channel 5 the 4th bit, that of channel 4 the 5th bit, that of channel 3 the 6th bit, that of channel 2 the 7th bit, and that of channel 1 the 8th bit.
If, at a certain moment, the identification information of the video frames corresponding to channels 1, 2, 3, and 4 is 1 and that of the video frames corresponding to channels 5, 6, 7, and 8 is 0, then the binary identification information of the video frames at that moment is 00001111, which after conversion to hexadecimal is 0F. This identification information indicates that a preset event occurs in the video frames of channels 1, 2, 3, and 4 at that moment; channels 1, 2, 3, and 4 may also be called the effective channels at that moment.
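Under the position scheme above (channel 8 in the most significant bit, channel 1 in the least), packing one moment's flags into hexadecimal identification information could look like this; the patent fixes only the bit order, so the function and data layout are assumptions:

```python
def combine_channels(flags_by_channel):
    """Pack per-channel 0/1 flags at one moment into a two-digit hex
    string. flags_by_channel maps channel number (1..8) to its flag;
    channel 8 occupies the most significant bit and channel 1 the
    least, matching the preset position scheme above."""
    value = 0
    for channel in range(8, 0, -1):  # channel 8 first, channel 1 last
        value = (value << 1) | flags_by_channel.get(channel, 0)
    return format(value, "02X")
```

With channels 1-4 active and 5-8 inactive this reproduces the 0F of the worked example.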
Example 8:
on the basis of the foregoing embodiments, fig. 5 is a schematic structural diagram of a courseware video generation apparatus according to an embodiment of the present invention, where the apparatus includes:
a determining module 501, configured to determine, according to information of a time period included in the received generation instruction, a video segment of each channel recorded in the time period; for each video segment, determining each sub-video segment containing a preset event in the video segment according to the identification information of whether the preset event occurs in each video frame in the video segment;
the generating module 502 is configured to generate the courseware video in the time period according to each determined sub-video segment according to the time sequence.
The determining module 501 is specifically configured to determine channel combination information corresponding to the time period; and acquiring the identification information of whether each video frame in the video segment has the preset event according to the acquisition time of each video frame recorded in the channel combination information and the identification information of whether the video frame corresponding to each channel has the preset event.
The generating module 502 is specifically configured to determine, according to a time sequence, each sub-video segment corresponding to target identification information where a preset event occurs in the channel combination information according to channel combination information corresponding to each time in the time period, and generate a target video frame of the time according to an analyzed video frame in each sub-video segment, where the channel combination information corresponding to each time includes identification information of a video frame corresponding to the time, and the identification information is sorted according to the channel sequence; and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
The generating module 502 is further specifically configured to count the number of target identification information of a preset event occurring at the time, and determine a picture segmentation manner corresponding to the number; determining the display area of each video frame in the target video frames according to the picture segmentation mode; and displaying each corresponding video frame in each display area of the target video frame.
The generating module 502 is further specifically configured to determine, according to the picture segmentation manner, each display region obtained by dividing by using the picture segmentation manner; and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
Example 8:
on the basis of the foregoing embodiments, fig. 6 is a schematic structural diagram of an apparatus for determining channel combination information according to an embodiment of the present invention, where the apparatus includes:
the recording module 601 is configured to record event start time and a channel corresponding to an image acquisition device if an event trigger instruction sent by the image acquisition device is received, and record event end time and a channel corresponding to the image acquisition device if an event end instruction sent by the image acquisition device is received;
a determining module 602, configured to determine, if a packaging instruction is received, according to the information of the packaging time period carried in the packaging instruction, at least one target event located in the packaging time period from the recorded event start times and event end times; and determine the identification information of the video frame of the channel corresponding to each image acquisition device in the packaging time period according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device.
Further, the apparatus further comprises:
an obtaining module 603, configured to obtain identification information of a video frame of each channel collected within a preset classroom time period;
a determining module 604, configured to judge, for the video frame of each channel, whether the number of pieces of identification information indicating whether the preset event occurs is consistent with the preset number for the classroom time period, and if not, perform a supplementing operation on the identification information of the channel; and when the number of pieces of identification information corresponding to the video frames of the channel is consistent with the preset number for the classroom time period, generate channel information of the channel according to each piece of corresponding identification information;
the generating module 605 is configured to generate channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
Example 9:
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and on the basis of the foregoing embodiments, an electronic device according to an embodiment of the present invention is further provided, where the electronic device includes a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702 and the memory 703 complete mutual communication through the communication bus 704;
the memory 703 has stored therein a computer program which, when executed by the processor 701, causes the processor 701 to perform the steps of:
determining the video segments of each channel recorded in the time period according to the information of the time period contained in the received generation instruction;
for each video segment, determining each sub-video segment containing a preset event in the video segment according to the identification information of whether the preset event occurs in each video frame in the video segment;
and generating the courseware videos in the time period according to the determined each sub-video segment according to the time sequence.
Further, the processor 701 is specifically configured to acquire the identification information of whether a preset event occurs in each video frame of the video segment, which includes:
determining channel combination information corresponding to the time period;
and acquiring the identification information of whether each video frame in the video segment has the preset event according to the acquisition time of each video frame recorded in the channel combination information and the identification information of whether the video frame corresponding to each channel has the preset event.
Further, the processor 701 is specifically configured to generate, according to a time sequence, the courseware video of the time period according to each determined sub-video segment, and includes:
determining each sub-video segment corresponding to target identification information of a preset event in the channel combination information according to the time sequence and the channel combination information corresponding to each moment in the time period, and generating a target video frame of the moment according to the analyzed video frame in each sub-video segment, wherein the channel combination information corresponding to each moment comprises the identification information of the video frame corresponding to the moment, and the identification information is sequenced according to the channel sequence;
and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
Further, the processor 701 is specifically configured to generate a target video frame at the time, and includes:
counting the number of target identification information of a preset event corresponding to the moment, and determining a picture segmentation mode corresponding to the number;
determining the display area of each video frame in the target video frames according to the picture segmentation mode;
and displaying each corresponding video frame in each display area of the target video frame.
Further, the processor 701 is specifically configured to determine, according to the picture segmentation manner, a display area of each video frame in the target video frame, including:
determining each display area obtained by adopting the picture segmentation mode according to the picture segmentation mode;
and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
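The two steps above — choosing a picture segmentation mode from the number of active channels, then pairing frames with display areas by priority — can be sketched as follows. The concrete grid mapping and the dict-based layout are assumptions; the patent leaves both to the implementation:

```python
def pick_grid(active_count):
    """Choose a (rows, cols) picture segmentation mode from the number
    of channels in which a preset event occurs; this mapping is an
    assumed example, not prescribed by the patent."""
    if active_count <= 1:
        return (1, 1)
    if active_count == 2:
        return (1, 2)
    if active_count <= 4:
        return (2, 2)
    return (3, 3)  # up to 9 regions covers 5-8 active channels

def assign_display_areas(channels_by_priority, areas_by_priority):
    """Pair each active channel's video frame with a display area:
    the highest-priority frame goes to the highest-priority area."""
    return dict(zip(areas_by_priority, channels_by_priority))
```

For four active channels this yields a 2x2 split, with the highest-priority frame placed in the highest-priority quadrant.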
The processor 701 further performs the following steps:
if an event trigger instruction sent by image acquisition equipment is received, recording event starting time and a channel corresponding to the image acquisition equipment, and if an event ending instruction sent by the image acquisition equipment is received, recording event ending time and a channel corresponding to the image acquisition equipment;
if a packaging instruction is received, determining, according to the information of the packaging time period carried in the packaging instruction, at least one target event located in the packaging time period from the recorded event start times and event end times;
and determining the identification information of the video frame of the channel corresponding to each image acquisition device in the packaging time period according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device.
Further, the processor 701 is further configured to acquire identification information of a video frame of each channel acquired within a preset classroom time period;
for the video frame of each channel, judging whether the number of pieces of identification information indicating whether the preset event occurs is consistent with the preset number for the classroom time period, and if not, performing a supplementing operation on the identification information of the channel; when the number of pieces of identification information corresponding to the video frames of the channel is consistent with the preset number for the classroom time period, generating channel information of the channel according to each piece of corresponding identification information;
and generating channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 702 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
Example 10:
on the basis of the foregoing embodiments, an embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to perform the following steps:
determining the video segments of each channel recorded in the time period according to the information of the time period contained in the received generation instruction;
for each video segment, determining each sub-video segment containing a preset event in the video segment according to the identification information of whether the preset event occurs in each video frame in the video segment;
and generating the courseware videos in the time period according to the determined each sub-video segment according to the time sequence.
The acquiring the identification information of whether a preset event occurs in each video frame in the video segment includes:
determining channel combination information corresponding to the time period;
and acquiring the identification information of whether each video frame in the video segment has the preset event according to the acquisition time of each video frame recorded in the channel combination information and the identification information of whether the video frame corresponding to each channel has the preset event.
The generating the courseware video of the time period according to each determined sub-video segment according to the time sequence comprises the following steps:
determining each sub-video segment corresponding to target identification information of a preset event in the channel combination information according to the time sequence and the channel combination information corresponding to each moment in the time period, and generating a target video frame of the moment according to the analyzed video frame in each sub-video segment, wherein the channel combination information corresponding to each moment comprises the identification information of the video frame corresponding to the moment, and the identification information is sequenced according to the channel sequence;
and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
The generating of the target video frame at the moment includes:
counting the number of target identification information of a preset event corresponding to the moment, and determining a picture segmentation mode corresponding to the number;
determining the display area of each video frame in the target video frames according to the picture segmentation mode;
and displaying each corresponding video frame in each display area of the target video frame.
The determining the display area of each video frame in the target video frames according to the picture segmentation mode comprises:
determining each display area obtained by adopting the picture segmentation mode according to the picture segmentation mode;
and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
The processor further performs the steps of:
if an event trigger instruction sent by image acquisition equipment is received, recording event starting time and a channel corresponding to the image acquisition equipment, and if an event ending instruction sent by the image acquisition equipment is received, recording event ending time and a channel corresponding to the image acquisition equipment;
if a packaging instruction is received, determining, according to the information of the packaging time period carried in the packaging instruction, at least one target event located in the packaging time period from the recorded event start times and event end times;
and determining the identification information of the video frame of the channel corresponding to each image acquisition device in the packaging time period according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device.
Acquiring identification information of video frames of each channel collected within a preset classroom time period;
for the video frame of each channel, judging whether the number of pieces of identification information indicating whether the preset event occurs is consistent with the preset number for the classroom time period, and if not, performing a supplementing operation on the identification information of the channel; when the number of pieces of identification information corresponding to the video frames of the channel is consistent with the preset number for the classroom time period, generating channel information of the channel according to each piece of corresponding identification information;
and generating channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (11)

1. A method of courseware video generation, the method comprising:
determining the video segments of each channel recorded in the time period according to the information of the time period contained in the received generation instruction;
for each video segment, determining each sub-video segment containing a preset event in the video segment according to the identification information of whether the preset event occurs in each video frame in the video segment;
and generating the courseware videos in the time period according to the determined each sub-video segment according to the time sequence.
2. The method of claim 1, wherein obtaining identification information of whether a predetermined event occurs in each video frame of the video segment comprises:
determining channel combination information corresponding to the time period;
and acquiring the identification information of whether each video frame in the video segment has the preset event according to the acquisition time of each video frame recorded in the channel combination information and the identification information of whether the video frame corresponding to each channel has the preset event.
3. The method according to claim 1, wherein the generating the courseware video of the time period according to each determined sub-video segment according to the time sequence comprises:
determining each sub-video segment corresponding to target identification information of a preset event in the channel combination information according to the time sequence and the channel combination information corresponding to each moment in the time period, and generating a target video frame of the moment according to the analyzed video frame in each sub-video segment, wherein the channel combination information corresponding to each moment comprises the identification information of the video frame corresponding to the moment, and the identification information is sequenced according to the channel sequence;
and generating courseware videos in the time period according to the generated target video frames at each moment and the time sequence.
4. The method of claim 3, wherein the generating the target video frame at the time comprises:
counting the number of target identification information of a preset event corresponding to the moment, and determining a picture segmentation mode corresponding to the number;
determining the display area of each video frame in the target video frames according to the picture segmentation mode;
and displaying each corresponding video frame in each display area of the target video frame.
5. The method according to claim 4, wherein the determining the display area of each of the target video frames according to the picture segmentation mode comprises:
determining each display area obtained by adopting the picture segmentation mode according to the picture segmentation mode;
and determining the display area of each video frame in the target video frame according to the preset priority of each display area and the priority of each video frame at the moment.
6. A method for determining channel combination information, the method comprising:
if an event trigger instruction sent by image acquisition equipment is received, recording event starting time and a channel corresponding to the image acquisition equipment, and if an event ending instruction sent by the image acquisition equipment is received, recording event ending time and a channel corresponding to the image acquisition equipment;
if a packaging instruction is received, determining, according to the information of the packaging time period carried in the packaging instruction, at least one target event located in the packaging time period from the recorded event start times and event end times;
and determining the identification information of the video frame of the channel corresponding to each image acquisition device in the packaging time period according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device.
7. The method of claim 6, further comprising:
acquiring identification information of video frames of each channel collected within a preset classroom time period;
for the video frame of each channel, judging whether the number of pieces of identification information indicating whether the preset event occurs is consistent with the preset number for the classroom time period, and if not, performing a supplementing operation on the identification information of the channel; when the number of pieces of identification information corresponding to the video frames of the channel is consistent with the preset number for the classroom time period, generating channel information of the channel according to each piece of corresponding identification information;
and generating channel combination information corresponding to the preset classroom time period according to the channel information of each channel.
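Claim 7's consistency check and supplementing step could look like the sketch below. The padding policy (filling missing entries with a "no event" flag) and every name here are assumptions for illustration; the patent does not specify how the supplementing operation fills the gap.

```python
def build_channel_info(channel_flags, expected_count, fill_value=0):
    """Complete one channel's per-frame event flags for the classroom period.

    channel_flags: flags collected so far (1 = preset event occurred).
    expected_count: the preset number of flags for the classroom period.
    If fewer flags than expected were collected, pad with fill_value.
    """
    flags = list(channel_flags)
    if len(flags) < expected_count:
        # Supplementing operation: assumed here to append "no event" flags.
        flags += [fill_value] * (expected_count - len(flags))
    return flags

def build_channel_combination(per_channel_flags, expected_count):
    """Channel combination info: channel id -> completed flag sequence."""
    return {ch: build_channel_info(flags, expected_count)
            for ch, flags in per_channel_flags.items()}
```
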
8. An apparatus for courseware video generation, the apparatus comprising:
the determining module is used for determining the video segments of each channel recorded in the time period according to the information of the time period contained in the received generation instruction; for each video segment, determining each sub-video segment containing a preset event in the video segment according to the identification information of whether the preset event occurs in each video frame in the video segment;
and the generation module is used for generating the courseware video in the time period according to the time sequence and each determined sub-video segment.
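The apparatus of claim 8, read as a procedure, extracts from each channel's video segment the maximal runs of frames flagged as containing the preset event, then orders those sub-video segments by time. A minimal sketch, with every name (`generate_courseware`, the segment dict layout) assumed rather than taken from the patent:

```python
def generate_courseware(segments):
    """Collect sub-video segments containing the preset event, in time order.

    segments: list of dicts like
        {"start": t, "frames": [(timestamp, has_event_flag), ...]}.
    Returns a list of (first_ts, last_ts) spans, sorted by time, which a
    downstream step would concatenate into the courseware video.
    """
    subs = []
    for seg in segments:
        run = []
        for ts, flag in seg["frames"]:
            if flag:
                run.append(ts)          # extend the current event run
            elif run:
                subs.append((run[0], run[-1]))  # close a finished run
                run = []
        if run:                          # run still open at segment end
            subs.append((run[0], run[-1]))
    return sorted(subs)                  # concatenation order = time order
```
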
9. An apparatus for determining channel combination information, the apparatus comprising:
the recording module is used for recording the event start time and the channel corresponding to the image acquisition device if an event trigger instruction sent by the image acquisition device is received, and recording the event end time and the channel corresponding to the image acquisition device if an event end instruction sent by the image acquisition device is received;
the determining module is used for determining, if a packaging instruction is received, at least one target event whose event start time and event end time fall within the packaging time period according to the packaging time period information carried in the packaging instruction; and determining, according to the occurrence time period of each target event and the recorded channel corresponding to each image acquisition device, the identification information of the video frames of the channel corresponding to each image acquisition device within the packaging time period.
10. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein the memory is used for storing program instructions, and the processor is used for implementing, when executing a computer program stored in the memory, the steps of the method of courseware video generation according to any one of claims 1-5 and the steps of the method for determining channel combination information according to any one of claims 6-7.
11. A computer-readable storage medium, characterized in that it stores a computer program which, when being executed by a processor, carries out the steps of the method for courseware video generation according to any one of claims 1 to 5 and the steps of the method for determining channel combination information according to any one of claims 6 to 7.
CN202010979638.XA 2020-09-17 2020-09-17 Courseware video generation and channel combination information determination method and device Active CN112218024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010979638.XA CN112218024B (en) 2020-09-17 2020-09-17 Courseware video generation and channel combination information determination method and device


Publications (2)

Publication Number Publication Date
CN112218024A true CN112218024A (en) 2021-01-12
CN112218024B CN112218024B (en) 2023-03-17

Family

ID=74049918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010979638.XA Active CN112218024B (en) 2020-09-17 2020-09-17 Courseware video generation and channel combination information determination method and device

Country Status (1)

Country Link
CN (1) CN112218024B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003196767A (en) * 2001-12-25 2003-07-11 Toshiba Corp System for monitoring and distributing network video
CN105117381A (en) * 2015-08-28 2015-12-02 上海第九城市教育科技股份有限公司 Method and system for generating interactive multimedia courseware
US20150373306A1 (en) * 2014-06-20 2015-12-24 OnDeck Digital LLC Real-time video capture of field sports activities
US20160286244A1 (en) * 2015-03-27 2016-09-29 Twitter, Inc. Live video streaming services
CN206400819U (en) * 2016-10-12 2017-08-11 北京新晨阳光科技有限公司 Course recording arrangement and system
US20180061064A1 (en) * 2014-10-15 2018-03-01 Comcast Cable Communications, Llc Generation of event video frames for content
US20190138795A1 (en) * 2017-11-07 2019-05-09 Ooma, Inc. Automatic Object Detection and Recognition via a Camera System
US10349001B1 (en) * 2012-10-11 2019-07-09 SportSight LLC Venue based event capture organizational and distribution system and method
CN110753256A (en) * 2019-09-18 2020-02-04 深圳壹账通智能科技有限公司 Video playback method and device, storage medium and computer equipment



Similar Documents

Publication Publication Date Title
CN109698920B (en) Follow teaching system based on internet teaching platform
CN111611434B (en) Online course interaction method and interaction platform
CN105245960A (en) Live comment display method and device for videos
CN111131876B (en) Control method, device and terminal for live video and computer readable storage medium
CN112672219B (en) Comment information interaction method and device and electronic equipment
CN110675674A (en) Online education method and online education platform based on big data analysis
CN114267213A (en) Real-time demonstration method, device, equipment and storage medium for practical training
CN111741247B (en) Video playback method and device and computer equipment
CN112218024B (en) Courseware video generation and channel combination information determination method and device
CN111161592B (en) Classroom supervision method and supervising terminal
CN109523844B (en) Virtual live broadcast simulation teaching system and method
CN114025185A (en) Video playback method and device, electronic equipment and storage medium
CN110136500A (en) Full-automatic more picture live teaching broadcast systems
CN112040249A (en) Recording and broadcasting method and device and single camera
CN105228026B (en) The more equipment display methods of teletext, apparatus and system
CN107291441A (en) Method for education learning and computer program product thereof
CN108966007B (en) Method and device for distinguishing video scenes under HDMI
CN110166825B (en) Video data processing method and device and video playing method and device
CN111901351A (en) Remote teaching system, method and device and voice gateway router
CN113554904A (en) Intelligent processing method and system for multi-mode collaborative education
CN108156529B (en) Data display method, device and system
CN114745521B (en) Video recording device for novel remote online education
CN115331504B (en) Interactive teaching system based on wireless recording and broadcasting system
CN114554234B (en) Method, device, storage medium, processor and system for generating live record
CN107426566B (en) A kind of the Online Judge method and terminal of video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant