CN112601129B - Video interaction system, method and receiving terminal - Google Patents


Info

Publication number
CN112601129B
Authority
CN
China
Prior art keywords
video
information
marking
marked
receiving end
Prior art date
Legal status
Active
Application number
CN202011432671.7A
Other languages
Chinese (zh)
Other versions
CN112601129A (en)
Inventor
李审霖
Current Assignee
Shenzhen Fangduoduo Network Technology Co., Ltd.
Original Assignee
Shenzhen Fangduoduo Network Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Fangduoduo Network Technology Co., Ltd.
Priority to CN202011432671.7A
Publication of CN112601129A
Application granted
Publication of CN112601129B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/654 Transmission by server directed to the client
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection

Abstract

The embodiment of the invention relates to the technical field of video processing, and in particular to a video interaction system, a video interaction method and a receiving end. By recording the timestamp information and the marking information of a video frame to be marked, the marking information is associated with the timestamp information to generate a marking file; the marking file is associated with the video at playback, so that the marking information for different timestamps can be presented while the video plays. This greatly improves the efficiency with which a user reviews the marking information. Meanwhile, because the receiving end and the sending end only need to transmit the marking file and do not need to carry the video content itself, the transmitted file size is greatly reduced and the transmission efficiency is improved.

Description

Video interaction system, method and receiving terminal
Technical Field
The embodiment of the invention relates to the technical field of video processing, and in particular to a video interaction system, a video interaction method and a receiving end.
Background
At present, when users learn art creation or software programming, they often watch video courses uploaded by video bloggers. Many bloggers teach by demonstration, and videos that rely on demonstration, or that show an artistic creation process, call for frequent interaction and in-depth communication.
However, the applicant found in research that, for lack of an effective video interaction means, a user can only communicate through chat in the comment area, or capture a video frame of interest, send it to the blogger in the comment area, and attach a text or voice description in order to state the question clearly. It is very difficult to express a clear question from the content of a single frame alone, and if the questions concern multiple frames, sending them all to the blogger clutters the comment area and makes them hard for readers to find, which causes great inconvenience to communication.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a video interaction system, a video interaction method and a receiving end, so as to solve the above problems in the prior art.
According to an aspect of the embodiments of the present invention, there is provided a video interaction system, the system including: a sending end, a server and a receiving end;
the sending end is configured to send, to the server, a marking instruction for a video frame to be marked in the video;
the server is configured to receive the marking instruction, acquire the timestamp information of the video frame to be marked according to the marking instruction, receive the marking information sent by the sending end, and associate the marking information with the timestamp information of the video frame to be marked; generate a marking file according to the association between the marking information and the timestamp information of the video frame to be marked; and send the marking file to the receiving end;
the receiving end is configured to receive the marking file and acquire the timestamp information of the video, and to associate the marking information with the video according to the timestamp information of the video and the timestamp information of the video frame to be marked in the marking file, so that the receiving end presents the marking information when playing the video.
Further, the markup file includes address information of the video;
the receiving end is used for acquiring the video according to the address information of the video, and analyzing the video to acquire the timestamp information of the video.
Further, the markup file includes attribute information of the video;
the receiving end is used for acquiring the video from a local storage space of the receiving end according to the attribute information of the video, and analyzing the video to acquire the timestamp information of the video.
Further, the receiving end is further used for verifying the received marking file and the acquired video;
if the time stamp information of the video comprises the time stamp information of the video frame to be marked in the mark file, associating the mark information with the video according to the time stamp information of the video and the time stamp information of the video frame to be marked in the mark file, so that the receiving end presents the mark information when playing the video;
otherwise, the receiving end displays verification failure information.
Further, the receiving end is further configured to display the video in a first display area and display the marking information in a second display area after associating the marking information with the video;
the second display area is further configured to display timestamp information of the video frame to be marked, when the receiving end receives a timestamp selection instruction of the video frame to be marked in the second display area, the receiving end directly displays a video frame corresponding to a timestamp of the video frame to be marked in the first display area according to the timestamp selection instruction of the video frame to be marked, and displays marking information corresponding to the timestamp of the video frame to be marked in the second display area.
Further, the receiving end is further configured to obtain a time difference value between time stamps of the adjacent video frames to be marked according to the marking file;
and the receiving end determines the display time length of the marking information corresponding to the time stamp of the current video frame to be marked according to the time difference value of the time stamp of the current video frame to be marked and the time stamp of the next video frame to be marked.
Further, the receiving end is further configured to determine a shortest presentation duration of the marking information according to marking information corresponding to a timestamp of the video frame to be marked currently;
if the shortest presentation time length of the marking information is greater than the time difference between the time stamp of the current video frame to be marked and the time stamp of the next video frame to be marked, the receiving end pauses the playing of the video information in the first display area until the time difference between the time stamp of the current video frame to be marked and the current time is greater than or equal to the shortest presentation time length.
Further, the receiving end is further configured to receive a plurality of the markup files;
acquiring videos corresponding to the plurality of mark files;
determining the mark files with the same video corresponding to the mark files;
and merging the marking information in the marking files with the same video according to the time stamp information of the video frames to be marked.
The embodiment of the invention also provides a video interaction method, applied to a video interaction system, the system including: a sending end, a server and a receiving end; the method includes the following steps:
the sending end sends a marking instruction aiming at a video frame to be marked in the video to the server;
the server receives the marking instruction, acquires the time stamp information of the video frame to be marked according to the marking instruction, receives the marking information sent by the sending end, and associates the marking information with the time stamp information of the video frame to be marked; generating a marking file according to the marking information and the associated information of the timestamp information of the video frame to be marked; the marking file is sent to a receiving end;
the receiving end receives the marking file and acquires the time stamp information of the video; and associating the marking information with the video according to the time stamp information of the video and the time stamp information of the video frame to be marked in the marking file, so that the receiving end presents the marking information when playing the video.
The embodiment of the invention also provides a receiving end which is applied to the video interaction system and is used for receiving the marking file and acquiring the time stamp information of the video; the marking file comprises time stamp information of a video frame to be marked and marking information associated with the time stamp information; the receiving end is further used for associating the marking information with the video according to the time stamp information of the video and the time stamp information of the video frame to be marked in the marking file, so that the receiving end presents the marking information when playing the video.
In summary, in the embodiment of the present invention, the timestamp information and the marking information of the video frame to be marked are recorded, and the marking information is associated with the timestamp information to generate a marking file. At playback, the marking file is associated with the video, so that the marking information for different timestamps can be presented while the video plays, which greatly improves the efficiency with which a user reviews the marking information. Meanwhile, because the receiving end and the sending end only need to transmit the marking file and no video content needs to be carried, the transmitted file size is greatly reduced and the transmission efficiency is improved.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments of the present invention can be understood more clearly and implemented according to the content of the specification, specific embodiments of the present invention are set forth below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a schematic diagram of a video interaction system provided by an embodiment of the present invention;
fig. 2 shows a schematic diagram of a transmitting end provided by an embodiment of the present invention;
fig. 3 shows a schematic diagram of a transmitting end operation provided by an embodiment of the present invention;
FIG. 4 shows a schematic diagram of a server provided by an embodiment of the present invention;
fig. 5 shows a schematic diagram of a receiving end provided by an embodiment of the present invention;
fig. 6 shows a schematic diagram of a receiver operation provided by an embodiment of the present invention;
fig. 7 shows a schematic flow chart of a video interaction method according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
As shown in fig. 1, an embodiment of the present invention provides a video interaction system. The system includes a transmitting end, a server and a receiving end. The transmitting end is typically a mobile phone, a computer, an iPad or the like on which the video interaction system is installed; when a user watches a video through the transmitting end, the user annotates the video on it. The server is a blade server or another type of server on which the video interaction system is deployed; the server may be located in a remote machine room or in the cloud, and the transmitting end connects to the server through a network to watch and annotate videos. The receiving end is a mobile phone, a computer, an iPad or the like on which the video interaction system is installed; through the receiving end, a user views the files sent by the transmitting end via the server.
Specifically, to make it convenient for a user to raise questions about the video being watched, and for the video publisher to view the questions raised by viewers, the video interaction system provided by the embodiment of the present invention adopts, as shown in fig. 1, a network architecture of the sending end 100, the server 200 and the receiving end 300. The sending end is configured to send, to the server, a marking instruction for a video frame to be marked in the video. The server is configured to receive the marking instruction, acquire the timestamp information of the video frame to be marked according to the marking instruction, receive the marking information sent by the sending end, associate the marking information with the timestamp information of the video frame to be marked, generate a marking file according to this association, and send the marking file to the receiving end. The receiving end is configured to receive the marking file, acquire the timestamp information of the video, and associate the marking information with the video according to the timestamp information of the video and the timestamp information of the video frame to be marked in the marking file, so that the receiving end presents the marking information when playing the video.
The video interaction system provided by the embodiment of the invention records the timestamp information and the marking information of the video frame to be marked, associates the marking information with the timestamp information to generate the marking file, associates the marking file with the video at playback, and can present the marking information for different timestamps while the video plays, thereby greatly improving the efficiency with which users review the marking information.
Further, the transmitting end, the server and the receiving end involved in the video interaction system provided by the embodiment of the invention are respectively described in detail as follows:
as shown in fig. 2, the sending end 100 is configured to send a marking instruction for a video frame to be marked in the video to the server, where the marking instruction may be a shortcut key, a specific operation command, or a certain action, for example: double clicking on a certain area, etc., is not limited herein. Specifically, as shown in fig. 2, the transmitting end 100 includes a marking instruction transmitting module 101 and a marking module 102.
The marking instruction sending module 101 is configured to receive an operation instruction from the user and send a marking instruction to the server according to that instruction. For example, while watching part of the video, the user becomes interested in a certain video frame and activates the annotation function through a shortcut key; when the user presses the shortcut key, the sending end sends to the server a marking instruction for the video frame to be marked selected by the user, indicating that the user wishes to mark that video frame.
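As a rough illustration of how the sending end might package such a marking instruction, the following minimal Python sketch sends the identifier of the video being watched and the playback position of the selected frame to the server. The endpoint URL, the field names and the use of the requests library are assumptions for illustration, not details defined by this embodiment.

    import requests  # assumed HTTP transport; the embodiment does not prescribe a protocol

    SERVER_URL = "https://example-server/api/mark-instruction"  # hypothetical endpoint

    def send_marking_instruction(video_id: str, timestamp_ms: int) -> dict:
        """Notify the server that the user wants to mark the frame at timestamp_ms."""
        payload = {
            "video_id": video_id,          # which video is being watched
            "timestamp_ms": timestamp_ms,  # playback position of the frame to be marked
        }
        response = requests.post(SERVER_URL, json=payload, timeout=5)
        response.raise_for_status()
        return response.json()  # e.g. an identifier the marking information can later refer to

    # Example: the user presses the shortcut key 70 seconds into video "v123"
    # send_marking_instruction("v123", 70_000)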
The marking module 102 is configured to receive the marking information input by the user and send it to the server. As shown in fig. 3, when the sending end presents a video frame to be marked for the user to annotate, its screen is divided into a first area and a second area: the first area displays the video content, the second area displays the user's marking information, and the user inputs the marking information through a marking-information input window.
The marking information may be text information, voice information, image information, or the like. When it is text information, the content of the marking information is a piece of text, an emoticon, or the like; when it is voice information, the marking information is a segment of speech; when it is image information, the marking information is a picture. Preferably, the marking information may also be recorded as an executable program that records the user's operation track on the video frame to be marked. For example, to highlight a certain part of the video frame to be marked, the user circles that part with a brush tool and at the same time inputs text to describe the key point; the transmitting end records this operation track, generates a small executable program, and transmits it to the server.
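The embodiment does not define a concrete format for such an operation track; one simple way to picture it is as a replayable list of drawing operations plus the accompanying text, as in the following sketch, where the class names and fields are illustrative assumptions.

    import json
    from dataclasses import dataclass, asdict, field
    from typing import List, Tuple

    @dataclass
    class BrushStroke:
        """One brush operation on the frame: a polyline in frame coordinates."""
        points: List[Tuple[int, int]]
        color: str = "#ff0000"
        width: int = 3

    @dataclass
    class OperationTrack:
        """A replayable record of what the user drew on the video frame to be marked."""
        timestamp_ms: int                       # timestamp of the marked frame
        strokes: List[BrushStroke] = field(default_factory=list)
        note: str = ""                          # accompanying text description

        def to_json(self) -> str:
            return json.dumps(asdict(self))

    # Example: circle an area of the frame and attach a text note
    track = OperationTrack(
        timestamp_ms=70_000,
        strokes=[BrushStroke(points=[(100, 80), (140, 60), (180, 80), (140, 100), (100, 80)])],
        note="How is this gradient achieved?",
    )
    print(track.to_json())  # this string could be carried as the marking information

At presentation time, the receiving end would simply iterate over the recorded strokes and redraw them over the frame, which matches the replay behaviour described later for executable marking information.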
The server 200 is configured to receive the marking instruction, obtain timestamp information of the video frame to be marked according to the marking instruction, receive marking information sent by the sending end, and associate the marking information with the timestamp information of the video frame to be marked; generating a marking file according to the marking information and the associated information of the timestamp information of the video frame to be marked; and sending the mark file to a receiving end.
The structure of the server is shown in fig. 4, and includes: a marking instruction receiving module 201, a to-be-marked video frame obtaining module 202, a marking information receiving module 203, a marking file generating module 204 and a marking file sending module 205.
The marking instruction receiving module 201 is configured to receive the marking instruction.
The to-be-marked video frame obtaining module 202 obtains the timestamp information of the video frame to be marked when the marking instruction is received, so as to record the exact position in the video of the frame selected by the user for marking.
When the marking instruction is received, the marking information receiving module 203 may pause video playback and hold the picture at the video frame to be marked selected by the user, so that the user can mark it; alternatively, playback may continue while a window pops up showing the selected video frame, and the user marks it in the new window. The user may input the marking information through a pop-up input window, or a creation button may be provided for the user to create the marking information. The marking information receiving module 203 then receives the marking information sent by the sending end and associates it with the timestamp information of the video frame to be marked, thereby recording that the user entered this marking information at a certain moment of the video. The process may be repeated: the user may mark multiple video frames, producing multiple pieces of marking information, each associated with the timestamp information of its video frame to be marked.
The marking file generating module 204 is configured to generate, after the user finishes marking the video, a marking file from all the marking information and its association with the timestamp information of the video frames to be marked. The marking file includes the timestamp information of the video frames to be marked, the marking information associated with each timestamp, the video address information, the type of each piece of marking information, and information about the marker. The marking information types include text, voice, picture, executable file and so on, as described for the marking module of the transmitting end; they are not repeated here.
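Concretely, such a marking file could be serialized as a small JSON document. The schema below is an assumption made for illustration, not a format defined by this embodiment; the field values are invented example data.

    import json

    # A hypothetical marking file for one video, assuming a JSON serialization.
    marking_file = {
        "video": {
            "address": "https://example-server/videos/v123.mp4",  # video address information
            "name": "Watercolor tutorial, episode 3",             # optional attribute information
            "duration_ms": 180_000,
        },
        "marker": {"user_id": "user-1"},                          # who created the marks
        "marks": [
            {"timestamp_ms": 70_000,  "type": "text",  "content": "beautiful"},
            {"timestamp_ms": 110_000, "type": "voice", "content": "voice/clip-01.ogg"},
        ],
    }

    print(json.dumps(marking_file, indent=2, ensure_ascii=False))

Because only timestamps, marking content and a reference to the video are stored, the file stays small regardless of the length of the video, which is the point made above about transmission efficiency.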
The marking file sending module 205 is configured to send the marking file to the receiving end 300.
The receiving end 300 is configured to receive the markup file sent by the server, and obtain timestamp information of the video; and associating the marking information with the video according to the time stamp information of the video and the time stamp information of the video frame to be marked in the marking file, so that the receiving end presents the marking information when playing the video.
As shown in fig. 5, the receiving end 300 includes a markup file receiving module 301, a video obtaining module 302, a verification module 303, a markup file associating module 304, and a presenting module 305.
The markup file receiving module 301 is configured to receive a markup file sent by a server.
The video obtaining module 302 is configured to obtain, according to the marking file, the video file that the marking file corresponds to. The address information of the video is recorded in the marking file; the video obtaining module 302 obtains the video according to this address information and parses it to obtain the timestamp information of the video. The video address information may be a URL on a server; the receiving end may download the video locally from that URL, or play the video corresponding to the URL directly.
Optionally, if the marking file records the attribute information of the video, the video obtaining module 302 may instead obtain the video from the local storage space of the receiving end according to that attribute information and parse it to obtain the timestamp information of the video. The attribute information of the video includes the video name, the publisher, the duration, the creation time and the like; searching for the video locally by this information avoids the slow download that may occur via the URL address. Of course, the marking file may include both the address information and the attribute information of the video: the receiving end first searches locally according to the attribute information and, if the local search fails, connects to the server through the address information to download the video.
By recording only the address information or the attribute information of the video in the marking file, the video file itself does not have to be embedded in the marking file, which would inflate its size; this avoids slowing down transmission and wasting resources.
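A minimal sketch of this local-first, then-download lookup might look as follows; the directory layout, helper names and use of urllib are assumptions layered on top of the hypothetical marking-file schema above.

    import os
    import urllib.request

    LOCAL_VIDEO_DIR = "videos"  # hypothetical local storage space of the receiving end

    def locate_video(marking_file: dict) -> str:
        """Return a local path to the video referenced by the marking file.

        Prefer a local copy identified by the video's attribute information;
        fall back to downloading from the recorded address information.
        """
        attrs = marking_file.get("video", {})
        name = attrs.get("name")
        if name:
            candidate = os.path.join(LOCAL_VIDEO_DIR, f"{name}.mp4")
            if os.path.exists(candidate):        # local search succeeded
                return candidate

        url = attrs.get("address")
        if not url:
            raise ValueError("marking file has neither a usable name nor an address")
        target = os.path.join(LOCAL_VIDEO_DIR, os.path.basename(url))
        urllib.request.urlretrieve(url, target)  # local search failed: download via the URL
        return target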
Further, the receiving end 300 also includes a verification module 303. To ensure that a marking file can be correctly matched to its video, before the marking file and the video are associated, the verification module 303 verifies the obtained marking file and video: if the timestamp information of the video covers the timestamp information of the video frames to be marked in the marking file, that is, the total duration of the video is greater than or equal to the largest to-be-marked timestamp in the marking file, the marking information is associated with the video according to the timestamp information of the video and the timestamp information of the video frames to be marked in the marking file, so that the receiving end presents the marking information when playing the video; otherwise, the receiving end displays verification failure information. For example, the total duration of a video is 3 minutes, while one of its marking files contains marking information at 1 minute 50 seconds, at 2 minutes 01 seconds and at 3 minutes 10 seconds; the largest to-be-marked timestamp in the marking file, 3 minutes 10 seconds, exceeds the total duration of the video, so it can be concluded that the marking file and the video do not match and verification fails. By adding the verification module, the embodiment of the invention improves the accuracy of video interaction.
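Under the hypothetical JSON layout sketched earlier, this verification reduces to comparing the largest mark timestamp with the video duration:

    def verify(marking_file: dict, video_duration_ms: int) -> bool:
        """Return True when every mark's timestamp falls within the video's duration.

        Mirrors the check described above: the total duration of the video must be
        greater than or equal to the largest to-be-marked timestamp in the file.
        """
        marks = marking_file.get("marks", [])
        if not marks:
            return True  # nothing to verify
        largest_ts = max(m["timestamp_ms"] for m in marks)
        return video_duration_ms >= largest_ts

    # The example from the text: a 3-minute video and a mark at 3 min 10 s fails verification
    assert verify({"marks": [{"timestamp_ms": 190_000}]}, video_duration_ms=180_000) is False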
The markup file association module 304 is configured to associate the marking information with the video according to the timestamp information of the video and the timestamp information of the video frames to be marked in the marking file, so that the receiving end presents the marking information when playing the video. Associating here means attaching each piece of marking information to the corresponding timestamp of the video.
The presentation module 305 is configured to present the video and the marking information of its marking file at the same time. The presentation module 305 includes a first display area for displaying the video and a second display area for displaying the marking information; as shown in fig. 6, the left side is the first display area displaying the video, and the right side is the second display area displaying the marking information. The second display area also displays the timestamp information of the video frames to be marked; when marking information exists at a certain timestamp, the presentation module highlights it to indicate the time and position of that marking information. When the user clicks a highlighted timestamp mark, the receiving end treats this as receiving, in the second display area, a timestamp selection instruction for a video frame to be marked. According to that instruction, the receiving end directly displays, in the first display area, the video frame corresponding to the selected timestamp, that is, the video in the first display area jumps straight to that frame, and at the same time the marking information corresponding to the selected timestamp is displayed in the second display area. In this way, the user can jump directly between the marking information of different timestamps, which improves the efficiency of reading the marking information; meanwhile, because the presentation module also jumps the video to the frame corresponding to the selected timestamp, the user can easily combine the video frame with its marking information, which improves the efficiency of video interaction.
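A minimal sketch of that jump behaviour, assuming a generic player object exposing a seek operation (the callback signatures are assumptions):

    from typing import Callable

    def on_mark_selected(mark: dict,
                         seek: Callable[[int], None],
                         show_mark: Callable[[str], None]) -> None:
        """Handle a timestamp selection instruction from the second display area.

        seek      -- jumps the video in the first display area to a position (ms)
        show_mark -- renders marking information in the second display area
        """
        seek(mark["timestamp_ms"])   # first display area jumps to the marked frame
        show_mark(mark["content"])   # second display area shows its marking information

    # Example wiring with stand-in callbacks:
    on_mark_selected({"timestamp_ms": 70_000, "content": "beautiful"},
                     seek=lambda ms: print(f"seek to {ms} ms"),
                     show_mark=print)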
Still further, the presentation module may present the video and the marking information in multiple ways, for example: statically displaying a marked video frame together with its marking information, or playing the video normally and, whenever a marked video frame is reached, displaying its marking information until the next marked video frame arrives. Preferably, when the marking information is voice information, the presentation module actively mutes the audio of the video and plays the voice of the marking information instead; when the marking information is an executable file, the user's operation can be reproduced directly on the video frame. For example, if the user marked a certain area of the original video frame with a brush, the executable file replays that operation at presentation time.
When the video is played in the first display area, the presentation module 305 may also obtain, from the marking file, the time difference between the timestamps of each pair of adjacent video frames to be marked. The presentation module then determines the display duration of the marking information corresponding to the current to-be-marked timestamp from the time difference between the current to-be-marked timestamp and the next one, and displays the marking information in the second display area for that duration.
Assume that the user's marking information for a video is as follows:
TABLE 1
(Table 1 appears as an image in the original publication; it lists User 1's marks, mark 1 through mark 4, with their timestamps and marking information, in the same layout as Table 2 below.)
The presentation module calculates that the time difference between mark 1 and mark 2 is 40 seconds, between mark 2 and mark 3 is 5 seconds, and between mark 3 and mark 4 is 10 seconds. When the video has played for 1 minute 10 seconds in the first display area, the presentation module presents the marking information corresponding to mark 1, "beautiful", in the second display area for a duration of 40 seconds; likewise, the presentation duration of the marking information corresponding to mark 2 is 5 seconds.
Further, when the video is played in the first display area, a problem arises for marking information that is long: if two marked frames are close together in time and the marking information of the earlier frame is long, determining its presentation duration only from the time difference would leave the user no time to read it.
To solve this problem, the embodiment of the present invention further proposes that the presentation module also determine a shortest presentation duration for the marking information corresponding to the current to-be-marked timestamp. In general, the presentation module estimates the time needed to read the marking information according to its type. For example, if the marking information is voice, the duration of the user's recording is taken as the shortest presentation duration; if it is text, the shortest presentation duration is determined by the number of characters. If the shortest presentation duration of the marking information is greater than the time difference between the current to-be-marked timestamp and the next one, the receiving end pauses playback of the video in the first display area until the difference between the current time and the current to-be-marked timestamp is greater than or equal to the shortest presentation duration.
Taking Table 1 as an example: for mark 1, the presentation module determines from its marking information that the shortest presentation duration is 3 seconds; for mark 2, it determines a shortest presentation duration of 10 seconds. The time difference between mark 1 and mark 2 is 40 seconds, and between mark 2 and mark 3 is 5 seconds. For mark 1, since its shortest presentation duration of 3 seconds is smaller than the 40-second gap to mark 2, there is enough time to display its marking information; it is shown for the normal 40 seconds and the video plays on normally. For mark 2, its shortest presentation duration of 10 seconds is greater than the 5-second gap to mark 3, so to give the user enough time to read the marking information of mark 2, the presentation module pauses playback in the first display area after playing for those 5 seconds, until the difference between the current time and the timestamp of the current to-be-marked frame equals the shortest presentation duration of 10 seconds. When the shortest presentation duration has elapsed, playback continues and the next marking information is displayed in the second display area. Of course, the pause time can be customized to the user's needs; it may equal the shortest presentation duration or exceed it, which is not limited here, the point being simply to give the user enough time to read the marking information.
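The duration rule worked through above can be written down compactly. The sketch below computes, for each mark, how long to show it and how long to pause the video; the reading-speed constant and field names are assumptions, and the example data only reproduces the gaps and shortest durations discussed for marks 1 and 2.

    CHARS_PER_SECOND = 5  # assumed reading speed used to estimate text reading time

    def shortest_presentation_s(mark: dict) -> float:
        """Estimate the shortest presentation duration of one piece of marking information."""
        if mark["type"] == "voice":
            return mark["voice_duration_s"]      # length of the user's recording
        if mark["type"] == "text":
            return max(1.0, len(mark["content"]) / CHARS_PER_SECOND)
        return 3.0                               # fallback for pictures and other types

    def presentation_plan(marks: list, video_duration_s: float) -> list:
        """For each mark, compute its display duration and any pause needed before the next mark."""
        plan = []
        for i, mark in enumerate(marks):
            next_ts = marks[i + 1]["timestamp_s"] if i + 1 < len(marks) else video_duration_s
            gap = next_ts - mark["timestamp_s"]   # time until the next marked frame
            shortest = shortest_presentation_s(mark)
            plan.append({
                "timestamp_s": mark["timestamp_s"],
                "display_s": max(gap, shortest),      # show at least the shortest duration
                "pause_s": max(0.0, shortest - gap),  # pause the video if the gap is too small
            })
        return plan

    # Mark 2 carries a 10-second voice note but is only 5 seconds before mark 3,
    # so the plan pauses the video for 5 seconds; mark 1's 40-second gap needs no pause.
    marks = [
        {"timestamp_s": 70,  "type": "text",  "content": "beautiful"},
        {"timestamp_s": 110, "type": "voice", "voice_duration_s": 10},
        {"timestamp_s": 115, "type": "text",  "content": "ok"},
    ]
    for row in presentation_plan(marks, video_duration_s=180):
        print(row)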
Further, in practical applications, very popular video content may be marked by many users. If the blogger had to view each marking file separately it would waste a great deal of time, and some marking information may be duplicated. The receiving end provided in this embodiment of the present application therefore further includes a marking file merging module 306, configured to receive multiple marking files, obtain the videos corresponding to those marking files, determine which marking files correspond to the same video, and merge the marking information of the marking files of the same video according to the timestamp information of the video frames to be marked.
As shown in Table 2, assume that the contents of Table 2 are a marking file for the same video sent by another user.
TABLE 2
User   Sequence number   Timestamp              Mark type   Marking information
2      Mark 1            1 minute 10 seconds    Text        So-so
2      Mark 2            1 minute 40 seconds    Text        What does this mean?
2      Mark 3            1 minute 49 seconds    Text        Good
2      Mark 4            2 minutes 10 seconds   Text        Nice result
When the receiving end receives the marking file sent by user 1 and the marking file sent by user 2, it first determines whether the two marking files concern the same video; if so, the two marking files are merged. For example, merging Tables 1 and 2 gives Table 3:
TABLE 3
(Table 3 appears as images in the original publication; it contains the merged marks of User 1 and User 2, ordered by timestamp, with the user and timestamp information of each mark retained.)
As can be seen from Table 3, the receiving end merges the different marking files while retaining the corresponding timestamp information and the user information, and presents the content of Table 3 through the presentation module, which greatly improves the efficiency of presenting the marking information.
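A minimal sketch of such a merge, assuming each marking file carries a user identifier and a list of marks in the hypothetical JSON layout used in the earlier sketches:

    def merge_marking_files(marking_files: list) -> dict:
        """Group marking files by video and merge their marks in timestamp order.

        Each merged mark keeps the identifier of the user who created it, as in Table 3.
        """
        merged = {}
        for f in marking_files:
            video_key = f["video"]["address"]            # files for the same video share a key
            user = f["marker"]["user_id"]
            for mark in f["marks"]:
                merged.setdefault(video_key, []).append({**mark, "user": user})
        for marks in merged.values():
            marks.sort(key=lambda m: m["timestamp_ms"])  # order by to-be-marked timestamp
        return merged

    # Example: two users' files for the same video fold into one timestamp-ordered list
    files = [
        {"video": {"address": "v123"}, "marker": {"user_id": "1"},
         "marks": [{"timestamp_ms": 70_000, "type": "text", "content": "beautiful"}]},
        {"video": {"address": "v123"}, "marker": {"user_id": "2"},
         "marks": [{"timestamp_ms": 100_000, "type": "text", "content": "What does this mean?"}]},
    ]
    print(merge_marking_files(files))

Deduplicating identical marking information, which the text mentions as another motivation for merging, could be layered on top of this by dropping marks whose timestamp and content repeat an already-kept entry.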
The embodiment of the invention also provides a video interaction method, which is applied to the video interaction system provided in the above embodiment, as shown in fig. 7, and includes:
701: a sending end sends a marking instruction aiming at a video frame to be marked in the video to the server;
702: the server receives the marking instruction and acquires the time stamp information of the video frame to be marked according to the marking instruction;
703: the server receives marking information sent by the sending end and associates the marking information with the timestamp information of the video frame to be marked;
704: the server generates a mark file according to the association information of the mark information and the timestamp information of the video frame to be marked, and sends the mark file to a receiving end;
705: the receiving end receives the marking file and acquires the time stamp information of the video; and associating the marking information with the video according to the time stamp information of the video and the time stamp information of the video frame to be marked in the marking file, so that the receiving end presents the marking information when playing the video.
According to the video interaction method provided by the embodiment of the invention, the timestamp information and the marking information of the video frame to be marked are recorded, the marking information is associated with the timestamp information to generate a marking file, and the marking file is associated with the video at playback so that the marking information for different timestamps can be presented while the video plays. This greatly improves the efficiency with which a user reviews the marking information; meanwhile, because the receiving end and the sending end only need to transmit the marking file and no video content needs to be carried, the transmitted file size is greatly reduced and the transmission efficiency is improved.
The embodiment of the invention also provides a computer program which can be called by a processor to enable the sending end, the server and the receiving end to execute the video interaction method in any of the method embodiments.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when run on a computer, cause the computer to perform the video interaction method of any of the method embodiments described above.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component, and they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (8)

1. A video interaction system, the system comprising: the system comprises a sending end, a server and a receiving end;
the sending end is used for sending a marking instruction aiming at a video frame to be marked in the video to the server;
the server is used for receiving the marking instruction, acquiring the time stamp information of the video frame to be marked according to the marking instruction, receiving the marking information sent by the sending end, and associating the marking information with the time stamp information of the video frame to be marked; generating a marking file according to the marking information and the associated information of the timestamp information of the video frame to be marked; the marking file is sent to a receiving end;
the receiving end is used for receiving the marking file and acquiring the time stamp information of the video; according to the time stamp information of the video and the time stamp information of the video frame to be marked in the marking file, the marking information is associated with the video, so that the receiving end presents the marking information when playing the video;
the receiving end is also used for displaying the video in a first display area and displaying the marking information in a second display area after the marking information is associated with the video information;
the second display area is further configured to display timestamp information of the video frame to be marked, when the receiving end receives a timestamp selection instruction of the video frame to be marked in the second display area, the receiving end directly displays a video frame corresponding to a timestamp of the video frame to be marked in the first display area according to the timestamp selection instruction of the video frame to be marked, and displays marking information corresponding to the timestamp of the video frame to be marked in the second display area;
the receiving end is further used for obtaining time difference values between time stamps of the adjacent video frames to be marked according to the marking file;
and the receiving end determines the display time length of the marking information corresponding to the time stamp of the current video frame to be marked according to the time difference value of the time stamp of the current video frame to be marked and the time stamp of the next video frame to be marked.
2. The video interaction system of claim 1, wherein the markup file includes address information of the video;
the receiving end is used for acquiring the video according to the address information of the video, and analyzing the video to acquire the timestamp information of the video.
3. The video interaction system of claim 1, wherein the markup file includes attribute information of the video;
the receiving end is used for acquiring the video from a local storage space of the receiving end according to the attribute information of the video, and analyzing the video to acquire the timestamp information of the video.
4. A video interaction system according to claim 2 or 3, wherein the receiving end is further configured to verify the received markup file and the acquired video;
if the time stamp information of the video comprises the time stamp information of the video frame to be marked in the mark file, associating the mark information with the video according to the time stamp information of the video and the time stamp information of the video frame to be marked in the mark file, so that the receiving end presents the mark information when playing the video;
otherwise, the receiving end displays verification failure information.
5. The video interaction system of claim 1, wherein the receiving end is further configured to determine a shortest presentation duration of the marking information according to marking information corresponding to a timestamp of the current video frame to be marked;
if the shortest presentation time length of the marking information is greater than the time difference between the time stamp of the current video frame to be marked and the time stamp of the next video frame to be marked, the receiving end pauses the playing of the video information in the first display area until the time difference between the time stamp of the current video frame to be marked and the current time is greater than or equal to the shortest presentation time length.
6. A video interactive system according to claim 2 or 3, wherein said receiving end is further configured to receive a plurality of said markup files;
acquiring videos corresponding to the plurality of mark files;
determining the mark files with the same video corresponding to the mark files;
and merging the marking information in the marking files with the same video according to the time stamp information of the video frames to be marked.
7. A video interaction method, characterized in that it is applied to a video interaction system, said system comprising: the system comprises a sending end, a server and a receiving end; the method comprises the following steps:
the sending end sends a marking instruction aiming at a video frame to be marked in the video to the server;
the server receives the marking instruction, acquires the time stamp information of the video frame to be marked according to the marking instruction, receives the marking information sent by the sending end, and associates the marking information with the time stamp information of the video frame to be marked; generating a marking file according to the marking information and the associated information of the timestamp information of the video frame to be marked; the marking file is sent to a receiving end;
the receiving end receives the marking file and acquires the time stamp information of the video; according to the time stamp information of the video and the time stamp information of the video frame to be marked in the marking file, the marking information is associated with the video, so that the receiving end presents the marking information when playing the video;
the receiving end is also used for displaying the video in a first display area and displaying the marking information in a second display area after the marking information is associated with the video information;
the second display area is further configured to display timestamp information of the video frame to be marked, when the receiving end receives a timestamp selection instruction of the video frame to be marked in the second display area, the receiving end directly displays a video frame corresponding to a timestamp of the video frame to be marked in the first display area according to the timestamp selection instruction of the video frame to be marked, and displays marking information corresponding to the timestamp of the video frame to be marked in the second display area;
the receiving end is further used for obtaining time difference values between time stamps of the adjacent video frames to be marked according to the marking file;
and the receiving end determines the display time length of the marking information corresponding to the time stamp of the current video frame to be marked according to the time difference value of the time stamp of the current video frame to be marked and the time stamp of the next video frame to be marked.
8. A receiving end, characterized in that the receiving end is applied in a video interaction system according to any of the claims 1-6;
the receiving end is used for receiving the marked file and acquiring the time stamp information of the video; the marking file comprises time stamp information of a video frame to be marked and marking information associated with the time stamp information;
the receiving end is further used for associating the marking information with the video according to the time stamp information of the video and the time stamp information of the video frame to be marked in the marking file, so that the receiving end presents the marking information when playing the video.
CN202011432671.7A 2020-12-09 2020-12-09 Video interaction system, method and receiving terminal Active CN112601129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011432671.7A CN112601129B (en) 2020-12-09 2020-12-09 Video interaction system, method and receiving terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011432671.7A CN112601129B (en) 2020-12-09 2020-12-09 Video interaction system, method and receiving terminal

Publications (2)

Publication Number Publication Date
CN112601129A CN112601129A (en) 2021-04-02
CN112601129B 2023-06-13

Family

ID=75191486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011432671.7A Active CN112601129B (en) 2020-12-09 2020-12-09 Video interaction system, method and receiving terminal

Country Status (1)

Country Link
CN (1) CN112601129B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099260B (en) * 2021-04-21 2022-02-01 北京沃东天骏信息技术有限公司 Live broadcast processing method, live broadcast platform, system, medium and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930779A (en) * 2010-07-29 2010-12-29 华为终端有限公司 Video commenting method and video player
CN106375870A (en) * 2016-08-31 2017-02-01 北京旷视科技有限公司 Video marking method and device
KR102101963B1 (en) * 2019-09-16 2020-04-17 주식회사 산타 Method for making a memo in a vedio player and server using the same

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103517158B (en) * 2012-06-25 2017-02-22 华为技术有限公司 Method, device and system for generating videos capable of showing video notations
CN104967908B (en) * 2014-09-05 2018-07-24 腾讯科技(深圳)有限公司 Video hotspot labeling method and device
CN109274999A (en) * 2018-10-08 2019-01-25 腾讯科技(深圳)有限公司 A kind of video playing control method, device, equipment and medium
CN111654749B (en) * 2020-06-24 2022-03-01 百度在线网络技术(北京)有限公司 Video data production method and device, electronic equipment and computer readable medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930779A (en) * 2010-07-29 2010-12-29 华为终端有限公司 Video commenting method and video player
CN106375870A (en) * 2016-08-31 2017-02-01 北京旷视科技有限公司 Video marking method and device
KR102101963B1 (en) * 2019-09-16 2020-04-17 주식회사 산타 Method for making a memo in a vedio player and server using the same

Also Published As

Publication number Publication date
CN112601129A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
JP7069778B2 (en) Methods, systems and programs for content curation in video-based communications
CN108153551B (en) Method and device for displaying business process page
CN107066619B (en) User note generation method and device based on multimedia resources and terminal
US7809773B2 (en) Comment filters for real-time multimedia broadcast sessions
US9380410B2 (en) Audio commenting and publishing system
CN109981711B (en) Document dynamic playing method, device and system and computer readable storage medium
US20080281783A1 (en) System and method for presenting media
US7711722B1 (en) Webcast metadata extraction system and method
US20130097484A1 (en) Method and system of operation retrieval for web application
Russell Digital communication networks and the journalistic field: The 2005 French riots
US20050223315A1 (en) Information sharing device and information sharing method
US20090150784A1 (en) User interface for previewing video items
US20110161308A1 (en) Evaluating preferences of content on a webpage
WO2017015114A1 (en) Media production system with social media feature
US11172006B1 (en) Customizable remote interactive platform
US8931002B2 (en) Explanatory-description adding apparatus, computer program product, and explanatory-description adding method
CN104065979A (en) Method for dynamically displaying information related with video content and system thereof
US9525896B2 (en) Automatic summarizing of media content
CN103530320A (en) Multimedia file processing method and device and terminal
CN112329403A (en) Live broadcast document processing method and device
CN112131361A (en) Method and device for pushing answer content
CN112601129B (en) Video interaction system, method and receiving terminal
CN111629267B (en) Audio labeling method, device, equipment and computer readable storage medium
Carter et al. Tools to support expository video capture and access
CN110659006A (en) Cross-screen display method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant