WO2021057693A1 - Processing and playback control of interactive video - Google Patents

Processing and playback control of interactive video

Info

Publication number
WO2021057693A1
WO2021057693A1 · PCT/CN2020/116677 · CN2020116677W
Authority
WO
WIPO (PCT)
Prior art keywords
client
time
playback
interactive video
video
Prior art date
Application number
PCT/CN2020/116677
Other languages
English (en)
French (fr)
Inventor
卢俊瑞
Original Assignee
广州虎牙科技有限公司 (Guangzhou Huya Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201910906907.7A external-priority patent/CN112637657A/zh
Priority claimed from CN201910907638.6A external-priority patent/CN112637612B/zh
Application filed by 广州虎牙科技有限公司 (Guangzhou Huya Technology Co., Ltd.)
Priority to US17/762,282 priority Critical patent/US20220417619A1/en
Publication of WO2021057693A1 publication Critical patent/WO2021057693A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8541Content authoring involving branching, e.g. to different story endings
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/10Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L43/106Active monitoring, e.g. heartbeat, ping or trace-route using time related information in packets, e.g. by adding timestamps
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • The present disclosure relates to the field of computer software technology, and in particular to the processing and playback control of interactive video.
  • Interactive video refers to a new type of video that integrates an interactive experience into linear video through various technical means. Figure 1 is a schematic diagram of an interactive video in an actual application scenario. When the interactive video is played to a certain point, several branch options can be presented on the playback interface for the user to choose from. When watching interactive video content on a live broadcast platform, the user can independently select different branches to watch different plot developments.
  • A live broadcast platform includes a node editor and a video editor. The node editor is used to receive node editing instructions, create play nodes according to the node editing instructions, and set the play time offset corresponding to each play node, where the play nodes include at least one bifurcation node and at least two child nodes of the bifurcation node. The video editor is used to receive multiple video files and associate each video file with one of the multiple play sub-paths between adjacent play nodes to generate an interactive video, where the play sub-paths of the same bifurcation node correspond to multiple plot branches in the interactive video.
  • The play time offset is used to control the playback progress of the interactive video on each client in the live room, so that the difference in play time of the interactive video across clients is less than a preset value.
  • An interactive video processing method includes: receiving a node editing instruction, creating play nodes according to the node editing instruction, and setting the play time offset corresponding to each play node, where the play nodes include at least one bifurcation node and at least two child nodes of the bifurcation node; and receiving multiple video files and associating each video file with one of the multiple play sub-paths between the corresponding adjacent play nodes to generate an interactive video, where the play sub-paths of the same bifurcation node correspond to multiple plot branches in the interactive video, and the play time offset is used to control the playback progress of the interactive video on each client in the live room so that the difference in play time across clients is less than a preset value.
  • A method for controlling the playback of an interactive video includes: receiving a fork message sent by the server when the current script time of the interactive video reaches the execution time of the plot branching operation, where the current script time is the time offset of the play time of the current video frame in the interactive video relative to the initial play time of the interactive video; determining whether the difference between the client's current play time of the interactive video and the execution time is greater than or equal to a preset time threshold; and, if the difference is greater than or equal to the preset time threshold, responding to the fork message by displaying fork options on the client.
  • An interactive video playback control device includes: a receiving module configured to receive the fork message sent by the server when the current script time of the interactive video reaches the execution time of the plot branching operation, where the current script time is the time offset of the play time of the current video frame relative to the initial play time of the interactive video; a judgment module configured to determine whether the difference between the client's current play time of the interactive video and the execution time is greater than or equal to a preset time threshold; and a display module configured to, if the difference is greater than or equal to the preset time threshold, respond to the fork message by displaying fork options on the client.
  • A computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, the processor implements the interactive video playback control method.
  • A client includes a processor and a memory for storing a computer program executable by the processor; when the processor executes the computer program, the interactive video playback control method of any one of the embodiments is implemented.
  • An interactive video playback control system includes a server and a plurality of clients. Each client includes a processor and a memory for storing an executable computer program; when the processor executes the computer program, it implements the interactive video playback control method of any one of the embodiments. The server is configured to send a fork message to each client when the current script time of the interactive video reaches the execution time of the plot fork operation.
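For illustration, the client-side check described in the method above can be sketched as follows. This is a minimal sketch, not the patented implementation; all names and the time unit (seconds of script time) are illustrative assumptions.

```python
# Hypothetical sketch of the fork-message check: the client compares its
# current play time against the execution time of the branching operation
# and displays the fork options when the gap reaches the preset threshold.

def handle_fork_message(current_play_time, execution_time, threshold):
    """Return True if the client should display the fork options.

    current_play_time: the client's current play time (seconds)
    execution_time: script time at which the plot branching operation runs
    threshold: the preset time threshold (seconds)
    """
    difference = abs(current_play_time - execution_time)
    return difference >= threshold

# A client 2 s away from the execution time, with a 1 s threshold,
# displays the fork options; a client 0.5 s away does not.
print(handle_fork_message(118.0, 120.0, 1.0))   # True
print(handle_fork_message(119.5, 120.0, 1.0))   # False
```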
  • Fig. 1 is a schematic diagram of an interactive video in an actual application scenario.
  • Fig. 2 is a schematic diagram of a live broadcast platform of an embodiment of the present disclosure.
  • Fig. 3 is a schematic diagram of a play node according to an embodiment of the present disclosure.
  • Fig. 4a is a schematic diagram of an interactive video editing interface according to an embodiment of the present disclosure.
  • Fig. 4b is a schematic diagram of an interactive video editing interface according to another embodiment of the present disclosure.
  • Fig. 5 is a schematic diagram of the interactive video playing time and playing scenes according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram of a barrage displayed on a client according to an embodiment of the present disclosure.
  • Fig. 7a is a schematic diagram of a barrage displayed on a client according to another embodiment of the present disclosure.
  • Fig. 7b is a schematic diagram of a barrage displayed on a client according to another embodiment of the present disclosure.
  • Fig. 8 is a schematic diagram of a live broadcast platform according to another embodiment of the present disclosure.
  • Fig. 9 is a partial functional architecture diagram of a live broadcast platform according to an embodiment of the present disclosure.
  • Fig. 10 is a schematic diagram of an interactive video creation process according to an embodiment of the present disclosure.
  • Fig. 11 is a schematic diagram of an interactive video playing process according to an embodiment of the present disclosure.
  • Fig. 12 is an overall functional architecture diagram of a live broadcast platform according to an embodiment of the present disclosure.
  • Fig. 13 is a flowchart of an interactive video processing method according to an embodiment of the present disclosure.
  • Fig. 14 is a flowchart of an interactive video playback control method according to an embodiment of the present disclosure.
  • Fig. 15 is a schematic diagram of the playback process of clients with different playback delays according to an embodiment of the present disclosure.
  • Fig. 16 is a block diagram of an interactive video playback control device according to an embodiment of the present disclosure.
  • Fig. 17 is a schematic structural diagram of a computer device for implementing a method for controlling the playback of interactive videos according to an embodiment of the present disclosure.
  • Fig. 18 is a schematic diagram of an interactive video playback control system according to an embodiment of the present disclosure.
  • The terms first, second, third, etc. may be used in this disclosure to describe various information, but such information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • The word "if" as used herein can be interpreted as "when", "upon", or "in response to determining".
  • Play nodes: nodes that are arranged and associated according to a certain organizational structure.
  • The nodes can be pre-defined when the interactive video is made.
  • An interactive video requires at least 3 nodes, namely a bifurcation node and at least two child nodes of that bifurcation node (the bifurcation node being the parent node of the child nodes). The interactive video may also include other nodes, such as aggregation nodes and common nodes.
  • A node with multiple child nodes is a bifurcation node.
  • A node with multiple parent nodes is an aggregation node.
  • A node with at most one parent node and at most one child node is a non-fork node.
  • Nodes can also be divided into start nodes, intermediate nodes, and end nodes.
  • The start node is the root node and has no parent node; an end node is a leaf node and has no child nodes.
  • A node that has both a parent node and child nodes is an intermediate node.
  • A schematic diagram of play nodes in one or more embodiments is shown in Fig. 3.
  • The figure includes 8 playback nodes: S, R, A, B, E1, E2, E3, and E4.
  • Node S is the starting node.
  • Node R is an intermediate node.
  • Nodes A and B are child nodes of node R.
  • Nodes E1 and E2 are child nodes of node A.
  • Nodes E3 and E4 are child nodes of node B.
  • Node S is a non-fork node.
  • Nodes R, A, and B are all bifurcation nodes.
  • Nodes E1, E2, E3, and E4 are all end nodes.
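As a minimal sketch, the node relationships of Fig. 3 can be represented in memory as follows. The node names match the figure; the class, field, and method names are illustrative and not part of the disclosure.

```python
# Hypothetical in-memory representation of the play nodes in Fig. 3.
from dataclasses import dataclass, field

@dataclass
class PlayNode:
    name: str
    children: list = field(default_factory=list)

    def is_fork(self):
        # bifurcation node: a node with multiple child nodes
        return len(self.children) > 1

    def is_end(self):
        # end node: a leaf node with no child nodes
        return not self.children

# Build the tree from Fig. 3: S -> R -> (A -> E1/E2, B -> E3/E4)
E1, E2, E3, E4 = (PlayNode(n) for n in ("E1", "E2", "E3", "E4"))
A = PlayNode("A", [E1, E2])
B = PlayNode("B", [E3, E4])
R = PlayNode("R", [A, B])
S = PlayNode("S", [R])

print([n.name for n in (S, R, A, B) if n.is_fork()])   # the bifurcation nodes
```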
  • Adjacent play nodes: a play node and its parent node are adjacent play nodes to each other; similarly, a play node and its child nodes are adjacent play nodes to each other.
  • Playback path: the path connecting two playback nodes.
  • For example, the path connecting node S and node R is the playback path SR between node S and node R.
  • The path connecting node S and node A is the playback path SR→RA between node S and node A.
  • The playback path between two adjacent playback nodes is also called a playback sub-path.
  • SR is the playback sub-path between playback node S and playback node R.
  • RA is the playback sub-path between playback node R and playback node A.
  • Bifurcation nodes and aggregation nodes both include multiple playback paths, while a common node includes one playback path.
  • The playback path between a node and its child nodes is called a playback path subordinate to that node.
  • A playback path subordinate to a child node of a node is also a playback path subordinate to that node.
  • An interactive video can include multiple video files.
  • Each play sub-path corresponds to a video file of the interactive video.
  • the video files corresponding to all play sub-paths together constitute the video content of the interactive video.
  • The play nodes and the playback paths between them constitute the playback logic of the interactive video.
  • Multiple playback sub-paths that belong to the same bifurcation node are parallel playback sub-paths.
  • The parallel playback sub-paths correspond to multiple branch plots in the interactive video, and the user can choose one of them to play.
  • The user's independent choice of sub-path determines the plot development of the interactive video.
  • For example, at node R, the user can select the option corresponding to RA to play the branch plot corresponding to RA in the interactive video; at node A, the user can select the option corresponding to AE1 to play the branch plot corresponding to AE1.
  • Historical selection path: the playback paths that the user has selected while watching the interactive video. For example, if at node R the user selects the option corresponding to RA, the user's historical selection path is SR→RA. If the user has not yet selected a playback path, the historical selection path is empty.
  • Current playback path: the playback sub-path being played at the current moment. For example, when the play path of the interactive video is SR→RA and the video file corresponding to RA is currently being played, RA is the current playback path.
  • Default playback path: each branch node corresponds to a default playback path. If the user does not select a playback path within the preset time period, the system automatically selects the preset playback path (i.e., the default playback path) to play, until the user selects a playback path again. For example, in Fig. 3, the default playback path of node R can be set to RA, and the default playback path of node B can be set to BE3. The path between two adjacent playback nodes on the default playback path is the default playback sub-path.
  • Target playback path: the next playback sub-path to be played.
  • The target playback path can be independently selected by the user. If the user does not select a target playback path within the preset time, the default playback path is taken as the target playback path. For example, at node R, if the user selects RB, RB is the target playback path; if the user makes no selection at node R, the default playback sub-path RA of node R is used as the target playback path.
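The target-path rule above reduces to a simple fallback: the user's choice wins, and otherwise the bifurcation node's default sub-path applies. A sketch, with illustrative names:

```python
# Hypothetical sketch of target playback path selection.

def target_sub_path(user_choice, default_sub_path):
    """user_choice: the sub-path selected by the user within the preset
    time window, or None if no selection was made;
    default_sub_path: the bifurcation node's default playback sub-path."""
    return user_choice if user_choice is not None else default_sub_path

# At node R (default RA): an explicit choice of RB wins,
# and no choice falls back to the default RA.
print(target_sub_path("RB", "RA"))    # RB
print(target_sub_path(None, "RA"))    # RA
```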
  • Play time offset: the time offset between the current play time of the interactive video and the start play time of the interactive video. For example, if the interactive video starts playing at 20:00:00 and the current play time is 20:10:00, the play time offset is 10 minutes.
  • The live broadcast platform may include a node editor 201 and a video editor 202.
  • The node editor 201 is configured to receive a node editing instruction, create play nodes according to the node editing instruction, and set the play time offset corresponding to each play node, where the play nodes include at least one bifurcation node and at least two child nodes of the bifurcation node.
  • The video editor 202 is configured to receive multiple video files and associate each video file with one of the multiple playback sub-paths between adjacent playback nodes to generate an interactive video, where the playback sub-paths of the same bifurcation node correspond to multiple plot branches in the interactive video, and the play time offset is used to control the playback progress of the interactive video on each client in the live room so that the difference in play time across clients is less than a preset value.
  • The live broadcast platform of this embodiment can directly or indirectly receive instructions sent by operation and maintenance personnel or editors to implement the creation, editing, publishing, and playing of interactive videos.
  • The node editor 201 can receive a node editing instruction sent by the operation and maintenance personnel or the editor, where the node editing instruction can carry the play time offset of a play node, that is, the time difference between the moment the interactive video is played to that play node and the start play time of the interactive video. For example, if the initial play time of the interactive video is 20:00:00 and the time when it is played to the first play node is 20:10:00, the play time offset of the first play node is 10 minutes. Here, the interactive video being played to a certain play node means that the interactive video starts to play the video file associated with the play sub-path formed by that play node and its child nodes.
  • The node editing instruction may also carry the node type of the play node; node types may include bifurcation nodes, aggregation nodes (also called convergence nodes), and common nodes.
  • A bifurcation node may be a play node that includes multiple child nodes.
  • An aggregation node may be a play node that includes multiple parent nodes.
  • A common node may be a play node that has exactly one parent node and one child node, a play node with only a parent node and no child node (i.e., an end node), or a play node with only a child node and no parent node (i.e., a start node).
  • The node editing instruction can also carry the number of forks of the play node.
  • Each fork of the play node corresponds to a child node.
  • Each fork of the same bifurcation node corresponds to a plot branch in the interactive video.
  • The play nodes in the interactive video include at least one bifurcation node and its two corresponding forks; that is, the interactive video includes at least 3 nodes (a bifurcation node and the at least two child nodes of that bifurcation node).
  • The interactive video may also include other types of nodes, for example, aggregation nodes and/or common nodes.
  • The play nodes shown in Fig. 3 include common nodes S, E1, E2, E3, and E4, and bifurcation nodes R, A, and B.
  • The video editor 202 can receive video files uploaded by operation and maintenance personnel or editors, and associate each video file with one of the multiple play sub-paths between adjacent play nodes.
  • The video editor 202 may also obtain the URL (Uniform Resource Locator) of a video file entered by the operation and maintenance personnel or the editor, and obtain the corresponding video file by accessing the URL.
  • Fig. 4a shows a schematic diagram of a node editing interface of an embodiment.
  • Fig. 4a includes a node creation component.
  • By sending an instruction to the node creation component (for example, clicking the component with a mouse), a play node can be added.
  • The created play nodes can be displayed in order in a play node list.
  • The play node list may include the sequence number, type, node code, and play time offset of each play node.
  • Each play node can correspond to a node modification component and a node deletion component.
  • By sending an instruction to the node modification component, the type, node code, and play time offset of the corresponding play node can be modified; by sending an instruction to the node deletion component, the corresponding play node can be deleted.
  • Fig. 4b shows a schematic diagram of a video upload interface of an embodiment.
  • The path between adjacent nodes is a playback sub-path.
  • A video file can be uploaded for each playback sub-path.
  • For example, a video file can be uploaded for the path between node R and node A in Fig. 4a, and the code of the video file can be recorded as 2_3, where 2 is the node number of the head node (i.e., node R) on the playback sub-path and 3 is the node number of the tail node (i.e., node A).
  • the head node is the start node on the playback sub-path
  • the tail node is the end node on the playback sub-path.
  • For a bifurcation node, the code of the video file corresponding to each fork may also include a number corresponding to that fork.
  • For example, 3_5_1 represents one of the forks between the play node with sequence number 3 and the play node with sequence number 5.
  • Each bifurcation node can also correspond to a new-fork component, which is used to add a fork to the bifurcation node.
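The coding scheme above (2_3, 3_5_1) can be sketched as a small helper. The exact format is inferred from the two examples in the text, so treat this as an illustrative assumption rather than the disclosed scheme:

```python
# Hypothetical sketch of the video-file coding scheme: head-node number,
# tail-node number, and an optional fork number for bifurcation nodes.

def sub_path_code(head_no, tail_no, fork_no=None):
    """head_no/tail_no: sequence numbers of the head and tail play nodes;
    fork_no: fork number, included only for the fork of a bifurcation node."""
    code = "{}_{}".format(head_no, tail_no)
    if fork_no is not None:
        code += "_{}".format(fork_no)
    return code

print(sub_path_code(2, 3))        # "2_3"   (node R -> node A)
print(sub_path_code(3, 5, 1))     # "3_5_1" (a fork between nodes 3 and 5)
```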
  • On this basis, the organizational structure and playback paths of the video files in the interactive video can be determined.
  • For the play nodes shown in Fig. 3, there are 4 interactive video playback paths, namely SR→RA→AE1, SR→RA→AE2, SR→RB→BE3, and SR→RB→BE4.
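The four playback paths above can be enumerated by a depth-first walk over the node tree of Fig. 3. A minimal sketch (the adjacency map mirrors the figure; function and variable names are illustrative):

```python
# Hypothetical sketch: enumerate all playback paths of the Fig. 3 tree.

def all_paths(children, node):
    """children: mapping node name -> list of child node names.
    Returns every complete path from `node` to an end node, as a
    list of sub-path names such as "RA"."""
    kids = children.get(node, [])
    if not kids:                  # end node: one complete (empty-tail) path
        return [[]]
    paths = []
    for child in kids:
        sub = node + child        # sub-path name, e.g. "R" + "A" -> "RA"
        for rest in all_paths(children, child):
            paths.append([sub] + rest)
    return paths

children = {"S": ["R"], "R": ["A", "B"], "A": ["E1", "E2"], "B": ["E3", "E4"]}
for p in all_paths(children, "S"):
    print("→".join(p))
# SR→RA→AE1, SR→RA→AE2, SR→RB→BE3, SR→RB→BE4
```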
  • A virtual time axis (i.e., a script time axis) can be established for the interactive video.
  • The significance of the script time axis is that the time offset (time difference) between the play time of each video frame in the interactive video and the initial play time of the interactive video can be determined.
  • This time offset is determined by the duration of each video file in the interactive video and the playback path of the interactive video, and has nothing to do with whether the client's network status is good.
  • In other words, which plot appears at each minute and second of the interactive video is fixed.
  • The reserved time of the playback path is used to control the playback progress of the interactive video on the client, so as to reduce the playback delay of the interactive video on the client. In this way, the playback progress of the interactive video across clients can be relatively unified, so that the clients can discuss the plot on a relatively uniform time axis, and the interactivity of the live broadcast is improved.
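On the script time axis described above, the script time of a frame is simply the sum of the durations of the earlier video files on the playback path, plus the frame's offset within the current file. A sketch — the durations below are made up for illustration and are not from the patent:

```python
# Hypothetical sketch of script time: cumulative file durations along
# the playback path, independent of network conditions.

durations = {"SR": 600.0, "RA": 540.0, "AE1": 660.0}   # seconds, illustrative

def script_time(path, current_sub_path, offset_in_file):
    """path: ordered sub-path names of the playback path;
    offset_in_file: seconds into the video file of current_sub_path."""
    t = 0.0
    for sub in path:
        if sub == current_sub_path:
            return t + offset_in_file
        t += durations[sub]
    raise ValueError("current sub-path not on the playback path")

# 30 s into RA on the path SR→RA→AE1 is script time 630 s,
# regardless of the client's network status.
print(script_time(["SR", "RA", "AE1"], "RA", 30.0))
```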
  • Fig. 8 is a schematic diagram of a live broadcast platform according to another embodiment of the present disclosure.
  • The live broadcast platform also includes a video manager 203, which is configured to receive the play time information and play duration information of each interactive video, and associate the play time information and play duration information of each interactive video with the corresponding interactive video.
  • A schematic diagram of the play time and play duration of interactive videos in an embodiment of the present disclosure is shown in Fig. 5.
  • Interactive video 1 is played at 13:00 and at 17:00, its play duration is 90 minutes, and the 13:00 showing is the premiere (that is, the first play).
  • Interactive video 2 is played at 15:00, and its play duration is 100 minutes.
  • The live broadcast platform further includes a director console 204 for, in response to an interactive video editing instruction received during the playback of the interactive video, inserting a play node into the interactive video, deleting a play node from the interactive video, and/or inserting an advertisement into the interactive video.
  • Part of the control authority over the interactive video may be opened to the operation and maintenance personnel or the director, and the operation and maintenance personnel or the director may manually send instructions to edit the interactive video.
  • Operation and maintenance personnel or editors can send interactive video editing instructions to the live broadcast platform.
  • The interactive video editing instructions can be play node insertion instructions, play node deletion instructions, advertisement insertion instructions, or other types of instructions.
  • Accordingly, the live broadcast platform can temporarily insert a play node, delete a play node, or insert an advertisement in the interactive video.
  • The director console 204 is further configured to count the number of command operations performed by the clients playing the interactive video, and/or count the number of clients on each playback sub-path.
  • The director console 204 can count the number of command operations performed by each client.
  • The command operations mentioned here may include command operations for selecting a playback path, command operations for enabling or disabling the barrage isolation function, command operations for enabling or disabling the barrage spoiler-prevention function, and so on.
  • The director console 204 can also count the number of clients on each playback sub-path, that is, count how many clients are playing the video file corresponding to path SR, how many clients are playing the video file corresponding to path RA, and so on.
  • The live broadcast platform further includes an instruction controller 205 for obtaining the server time. If the server time reaches the play time indicated by the play time information of an interactive video, the instruction controller sends an interactive video play start instruction to the client over the long connection pre-established with the client, where the play start instruction is used to instruct the client to access the video address of the interactive video and start playing the interactive video.
  • The live broadcast platform can maintain a video playlist (i.e., a program list), which is used to record the play time information of each interactive video, and load the play time information of each interactive video into the instruction controller 205.
  • The client can establish a long connection with the live broadcast platform when entering the live broadcast room, listen for long-connection messages, and wait for a unified broadcast instruction. The instruction controller 205 executes in a loop; when it judges that the server's current time has reached the broadcast time, it sends a start-up instruction to each client on time.
  • The start-up instruction includes the address of the video file that needs to be played (for example, a video file in m3u8 format); the client accesses this address and then starts playing the corresponding video file.
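A rough sketch of the instruction controller's loop described above: poll the server time and, at each video's scheduled play time, push a start instruction carrying the video address to every connected client. The schedule format, message shape, and `send_to_client` callback are assumptions for illustration:

```python
# Hypothetical sketch of the instruction controller's broadcast loop.
import time

def run_controller(schedule, clients, send_to_client, now=time.time):
    """schedule: sorted list of (play_time_epoch, video_url) pairs;
    clients: connected client ids; send_to_client(client, message)
    pushes a message over the client's pre-established long connection."""
    pending = list(schedule)
    while pending:
        play_time, url = pending[0]
        if now() >= play_time:
            # broadcast time reached: send the start-up instruction on time
            for c in clients:
                send_to_client(c, {"cmd": "start", "url": url})
            pending.pop(0)
        else:
            time.sleep(0.5)   # loop until the broadcast time arrives
```

Injecting `now` keeps the loop testable; a real controller would also handle clients joining mid-broadcast via the short-connection play request described below.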
  • The instruction controller 205 is further configured to, in response to a short connection establishment request sent by the client, establish a short connection with the client and receive, through the short connection, a play request instruction sent by the client for requesting to start playing the interactive video.
  • In response, the instruction controller 205 may send a play start instruction to the client, so that the client starts to play the interactive video.
  • The client can also actively send a play request instruction to the live broadcast platform through a short connection; in response to receiving the play request instruction, the instruction controller 205 of the live broadcast platform can send a continue-playing instruction to the client, so that the client continues to play the interactive video.
  • The live broadcast platform further includes a barrage processor 206, configured to obtain a barrage sent by a client, obtain the current playback path of the interactive video at the time the client sent the barrage, and forward the barrage to other clients on the same current playback path.
  • For example, if the current playback paths of client 1 and client 2 are both RA and client 1 sends the barrage "Today's weather is really good", the barrage processor 206 can forward the barrage "Today's weather is really good" to client 2 for display.
  • Conversely, if the current playback paths of client 1 and client 2 differ when the barrage processor 206 receives the barrage sent by client 1, it isolates the barrage from client 2; that is, the barrage processor 206 filters and shields the barrage sent by client 1 to prevent it from being forwarded to client 2.
  • Fig. 6 and Fig. 7a are schematic diagrams of barrages displayed on clients.
  • Suppose user 1 on playback path RA sends the barrage "Today's weather is really good".
  • User 2 on playback path RB sends the barrages "Hello everyone!" and "Quick assembly".
  • User 3 on playback path RB sends the barrage "Who else is watching?".
  • The barrages displayed on user 1's playback interface shield the barrages sent by user 2 and user 3, as shown in Fig. 6; the barrages displayed on the playback interfaces of user 2 and user 3 shield the barrages sent by user 1, as shown in Fig. 7a.
  • The aforementioned barrage isolation function shields barrages sent by clients on different playback paths, so that a client only displays barrages sent by clients on the same playback path. Shielding barrages from clients on different playback paths, on the one hand, reduces the number of barrages and avoids interference with video playback caused by too many barrages; on the other hand, because clients on different playback paths are playing different plot branches of the interactive video, barrage isolation prevents users watching different plot branches from being spoiled, while making it convenient for users watching the same plot branch on the same playback path to communicate through barrages. This improves the interactivity of interactive video playback and enhances the user experience.
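The path-based isolation rule above amounts to forwarding a barrage only to other clients whose current playback path matches the sender's. A minimal sketch, with illustrative data shapes:

```python
# Hypothetical sketch of path-based barrage isolation.

def forward_targets(sender, barrage_path, client_paths):
    """sender: id of the client that sent the barrage;
    barrage_path: the sender's current playback sub-path at send time;
    client_paths: mapping client id -> current playback sub-path.
    Returns the clients that should receive the barrage."""
    return [c for c, path in client_paths.items()
            if c != sender and path == barrage_path]

# Users 2 and 3 are on RB, user 1 is on RA: a barrage from user 2
# reaches only user 3.
paths = {"user1": "RA", "user2": "RB", "user3": "RB"}
print(forward_targets("user2", "RB", paths))   # ['user3']
```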
  • the barrage processor 206 is further configured to: obtain a second time offset between the moment when the second client sends the barrage and the start play time of the interactive video, and isolate the barrage from the first client when a first time offset between the current play time of the first client and the start play time of the interactive video is less than the second time offset.
  • when the second client sends the barrage, it associates the sending time point with the barrage and then sends the barrage to the server. After receiving the bullet screen, the server can parse the time associated with the bullet screen to obtain the second time offset.
  • the second time offset may indicate the minute and second of the interactive video when the second client sends the bullet screen.
  • the server may also obtain the first time offset of the interactive video currently being played by the first client in the same live broadcast room.
  • the first time offset may indicate the minute and second of the interactive video currently played by the first client.
  • the second time offset is greater than the first time offset, it means that the playback progress of the interactive video on the second client is faster than the playback progress of the interactive video on the first client.
  • for example, the first time offset is 00:20:00 and the second time offset is 00:30:00; that is, the first client has only played to 00:20:00 of the interactive video, while the second client has played to 00:30:00 of the interactive video. In other words, the playback progress of the second client is 10 minutes faster than that of the first client.
  • the barrage sent by the second client may cause spoilers to the user of the first client.
  • the barrage sent by the second client can be isolated on the first client; that is, the server intercepts the barrage sent by the second client to prevent it from being displayed on the first client.
  • the solution of this embodiment obtains the second time offset of the second client and the first time offset of the first client, and isolates the barrage sent by the second client on the first client whose first time offset is less than the second time offset. This prevents the barrage sent by the second client, whose playback progress is faster, from spoiling users of the first client, whose playback progress is slower, which improves the user's experience of watching interactive videos.
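The progress comparison that drives this spoiler protection reduces to a single inequality; a hedged sketch (the function name is illustrative):

```python
def should_isolate(sender_offset_s, receiver_offset_s):
    """Shield the sender's barrage on the receiver when the receiver's
    playback (offset from the start play time, in seconds) lags behind
    the sender's, i.e. the barrage could spoil upcoming plot."""
    return receiver_offset_s < sender_offset_s

# first client at 00:20:00, second client at 00:30:00
first, second = 20 * 60, 30 * 60
assert should_isolate(second, first)      # lagging viewer is protected
assert not should_isolate(first, second)  # faster viewer may see it
```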
  • the live broadcast platform further includes a player 207, configured to receive a playback path selection instruction from the client, and determine the target playback sub-path of the interactive video according to the playback path selection instruction, and The URL of the video file associated with the target playback subpath is sent to the client.
  • the playback node R includes a playback sub-path RA and a playback sub-path RB
  • the target playback sub-path determined by the playback path selection instruction is RB
  • the URL of the video file associated with the RB is sent to the client.
  • if the playback path selection instruction is not received from the client, the URL of the video file associated with the default play sub-path is sent to the client.
  • for example, the playback node R includes a playback sub-path RA and a playback sub-path RB, where RA is the default playback sub-path of node R. If the playback path selection instruction sent by the client is not received within the preset time period, the URL of the video file associated with RA is sent to the client.
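Falling back to the default sub-path when no selection arrives can be sketched as follows; the URL values and names are placeholders, not actual platform data:

```python
def resolve_subpath(selection, default_subpath, urls):
    """Return the video URL for the next segment: the client's selection
    if one arrived within the preset period, else the node's default."""
    target = selection if selection is not None else default_subpath
    return urls[target]

# hypothetical URLs for the sub-paths of node R
urls = {"RA": "https://cdn.example.com/ra.mp4",
        "RB": "https://cdn.example.com/rb.mp4"}
assert resolve_subpath("RB", "RA", urls) == "https://cdn.example.com/rb.mp4"
assert resolve_subpath(None, "RA", urls) == "https://cdn.example.com/ra.mp4"
```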
  • the player 207 is further configured to: send the current script time of the interactive video to the client; and the client is configured to: play the interactive video at the current script time, and chase frames from the server when the difference between the current script time and the actual playing time is greater than a preset time threshold.
  • the script time is the time on the script timeline.
  • the time offset between the playback time of each video frame in the interactive video and the initial playback time of the interactive video determines the script timeline; the script time is equivalent to the playback time of a client in an ideal state.
  • the client in the ideal state has no network delay, and the playback is not stuck.
  • the actual playback time of the client is generally later than the script time, and the actual playback time of different clients is generally different.
  • the interactive video should start playing at 18:00:00 and progress to the bifurcation node R at 18:01:00. If the playback path RB is selected, it progresses via the playback path RB to the bifurcation node B at 18:04:00. If the playback path RA is selected, it progresses via the playback path RA to the bifurcation node A at 18:06:00.
  • suppose the network condition of client 1 is normal with occasional freezes, and it starts to play at 18:00:08, lagging 8 seconds behind.
  • client 1 receives the bifurcation message of playback node R sent by the live broadcast platform at 18:01:00.
  • the selection component used to select the subsequent playback path of node R will be displayed on the playback interface of the client at 18:01:08. If client 1 selects the RB playback path, it receives the bifurcation message of playback node B sent by the live broadcast platform at 18:04:00, and the selection component for selecting the subsequent playback path of node B will be displayed on the playback interface of the client at 18:04:08. If client 1 selects the RA playback path, it receives the bifurcation message of playback node A sent by the live broadcast platform at 18:06:00, and the selection component for selecting the subsequent playback path of node A will be displayed on the playback interface of the client at 18:06:08.
  • suppose the network condition of client 2 is extremely poor and often needs buffering; it starts to play at 18:00:20, lagging 20 seconds behind (assuming the preset time threshold is 10 seconds). Client 2 receives the bifurcation message of playback node R sent by the live broadcast platform at 18:01:00. To reduce the playback delay, client 2 can chase frames after receiving the bifurcation message, and the selection component for selecting the subsequent playback path of node R will be displayed immediately on the playback interface of the client at 18:01:10 (at this time, only 10 seconds of playback delay remain thanks to frame chasing). If client 2 selects the RB playback path, it receives the bifurcation message of playback node B sent by the live broadcast platform at 18:04:00, and the selection component for selecting the subsequent playback path of node B will be displayed on the playback interface of the client at 18:04:10. If client 2 selects the RA playback path, it receives the bifurcation message of playback node A sent by the live broadcast platform at 18:06:00, and the selection component for selecting the subsequent playback path of node A will be displayed on the playback interface of the client at 18:06:10.
  • frame chasing can mean that after the client receives the node message, the client actively chases frames. Specifically, the client can obtain the current script time t1 and the current playback time t2 of the client. If the difference between t1 and t2 is greater than a certain threshold (i.e., the tolerance duration), the client player immediately continues playing from time t1. If the freeze duration is less than the tolerance duration, no frame chasing is performed.
  • Frame tracking can also be passive frame tracking.
  • the live broadcast platform sends a heartbeat packet to the client regularly (for example, every 3 seconds), and the client, upon receiving the heartbeat packet, detects whether the difference between the current playback time of the client and the current script time exceeds the tolerance duration. If it does, the client chases frames; otherwise, the client does not chase frames.
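Both the active check (after a node message) and the passive check (on each heartbeat) compare the same two times; a minimal sketch, assuming times in seconds:

```python
def chase_target(script_time_s, play_time_s, tolerance_s):
    """Return the script time to jump to when the client lags behind it
    by more than the tolerance duration, or None to keep playing."""
    lag = script_time_s - play_time_s
    return script_time_s if lag > tolerance_s else None

assert chase_target(60, 40, 10) == 60    # 20 s behind: jump to t1
assert chase_target(60, 55, 10) is None  # 5 s behind: no frame chasing
```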
  • the player 207 is further configured to: obtain a bullet screen whitelist of the first client, where the bullet screen whitelist is used to store a list of preset playback paths, and the barrage sent by any client on a preset playback path is allowed to be displayed on the first client.
  • when the barrage processor 206 receives the barrage sent by the second client, the player 207 determines whether the current playback path of the second client is in the barrage whitelist; if the current playback path of the second client is not in the barrage whitelist, the barrage sent by the second client is isolated, so that the first client does not display it.
  • if the current playback path of the second client is in the barrage whitelist, the first client may display the barrage sent by the second client.
  • in this way, in addition to barrage from clients on the same playback path (that is, the current playback path of the first client), the first client may also display barrage sent by a second client whose current playback path differs from that of the first client, as long as that path is in the whitelist.
  • the preset playback paths can be stored in the bullet screen whitelist in advance.
  • the preset playback path may be a playback sub-path belonging to the same bifurcation node as the current playback path of the first client, or a playback sub-path under other bifurcation nodes.
  • the preset playback path can be customized by the user, or the default preset playback path can be used.
  • the default preset playback path may be a playback sub-path that belongs to the same bifurcation node as the current playback path of the first client.
  • the first client can enable the bullet screen whitelist function. After the bullet screen whitelist function is enabled, if the preset playback path has not been edited, the default preset playback path will be used; if the preset playback path has been edited, the edited preset playback path will be used.
  • this embodiment enables the user to independently select the source of the barrage. For example, when the user wants to see the comments of other users on different playback sub-paths corresponding to the same bifurcation node, each playback sub-path corresponding to that bifurcation node can be added to the barrage whitelist.
  • This barrage isolation method is more flexible and further improves the user experience.
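The whitelist check can be sketched in a few lines; the path names and the helper function are illustrative:

```python
def allow_barrage(sender_path, receiver_path, whitelist):
    """Display the barrage when the sender is on the receiver's own
    playback path or on a preset path in the receiver's whitelist."""
    return sender_path == receiver_path or sender_path in whitelist

whitelist = {"RB"}  # the user added the sibling sub-path RB
assert allow_barrage("RA", "RA", whitelist)      # same path: shown
assert allow_barrage("RB", "RA", whitelist)      # whitelisted: shown
assert not allow_barrage("RC", "RA", whitelist)  # isolated
```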
  • the player 207 is further used to store the barrage scope of each playback node; when the current playback path of the first client is within the barrage scope, the barrage sent by a second client whose current playback path is not within the barrage scope is shielded on the first client.
  • each node corresponds to a barrage scope
  • the barrage scope is used to store a list of a set of playback paths, and the playback paths in the list satisfy: the barrage sent by any client on these playback paths can be displayed on the other clients on these playback paths.
  • for example, assuming client 1 and client 2 are within the barrage scope and client 3 is not, the barrage sent by client 1 and client 2 can be displayed on client 1, and similarly the barrage sent by client 1 and client 2 can also be displayed on client 2. Since client 3 is outside the barrage scope, only the barrage sent by client 3 is displayed on client 3, and the barrage sent by client 1 and client 2 is not displayed.
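The scope behavior in the example above can be sketched as a filter over (path, text) pairs; all names are illustrative:

```python
def visible_barrage(receiver_path, scope, barrages):
    """barrages: list of (sender_path, text). A receiver whose path is
    inside the node's barrage scope sees barrage from every path in the
    scope; a receiver outside it sees only its own path's barrage."""
    if receiver_path in scope:
        return [text for path, text in barrages if path in scope]
    return [text for path, text in barrages if path == receiver_path]

scope = {"RA", "RB"}
msgs = [("RA", "hi from 1"), ("RB", "hi from 2"), ("RC", "hi from 3")]
assert visible_barrage("RA", scope, msgs) == ["hi from 1", "hi from 2"]
assert visible_barrage("RC", scope, msgs) == ["hi from 3"]
```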
  • the player 207 is further used to update the playback path of the client.
  • the default playback path may be updated according to the current default playback path. Assuming that the original default playback path is SR → RA, then when the plot progresses to the default playback sub-path AE1 corresponding to RA, that is, when the video file corresponding to RA finishes playing and the video file corresponding to AE1 starts to play, the default playback path is updated to SR → RA → AE1.
  • in this way, the currently played path can be continuously recorded even when the user does not select a playback path, so that the next time the user enters the live broadcast room, playback can start directly from the end of the recorded path with the follow-up content of the interactive video.
  • the target playback path selected by the client can also be obtained; after the video file corresponding to the target playback path is played, the playback path of the client is updated according to the target playback path .
  • the original historical selection path is SR → RB
  • the play path is updated to SR → RB → BE4.
  • the currently played path can be continuously recorded while the user is watching the interactive video, so that the next time the user enters the live broadcast room, the playback can be started on the basis of the currently played path.
  • the player 207 is further configured to update the barrage displayed on the client after the playback path is updated. Since the barrage scope of each playback node may differ, after the playback path is updated, the barrage displayed on the client needs to be updated as well. Specifically, the target barrage scope of the most recently played node on the updated play path can be obtained; when the current play path of the first client is within the target barrage scope, the barrage sent by a second client whose current playback path is within the target barrage scope is displayed on the first client, and the barrage sent by a third client whose current playback path is not within the target barrage scope is shielded.
  • the node editor, video editor, video manager, guide console, command controller, barrage processor, player, etc. shown in Figure 8 can be implemented by software, or by hardware or a combination of software and hardware.
  • Fig. 9 is a partial functional architecture diagram of the live broadcast platform shown in Fig. 8.
  • the operation specialist or the director can perform video editing and node editing on the live broadcast platform, and then upload the interactive video after the editing is completed to the live broadcast platform.
  • when there are multiple interactive videos, the multiple interactive videos can be managed, and the play time and play count of each video can be managed through the program list management.
  • Fig. 10 is a schematic diagram of the interactive video creation process of the live broadcast platform shown in Fig. 8.
  • in the process of creating an interactive video, playback nodes are first created according to the received node creation instruction, and the time offset of each playback node is determined.
  • the time offsets of the playback nodes constitute the script timeline.
  • Fig. 11 is a schematic diagram of an interactive video playing process according to an embodiment of the present disclosure.
  • after the client enters the live broadcast room, it can establish a long connection with the live broadcast platform, monitor the long connection for a start instruction, and wait for the interactive video to start broadcasting. If the client receives the start instruction, it plays the interactive video of the current session.
  • after the client receives a new instruction (for example, a fork instruction), the live broadcast platform obtains the current play time (video_time) of the interactive video played by the client, and obtains the current script time (script_time) of the interactive video. If the current playback time is greater than or equal to the current script time (video_time ≥ script_time), the type of the instruction is determined.
  • for a fork instruction, a selection interaction UI (User Interface) pops up on the client for the user to select the playback path, or the default option is selected automatically when the selection times out; for a merge instruction, the next video is played directly; for an end instruction, playback ends.
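The instruction handling described above might dispatch as follows; the action strings are illustrative stand-ins for the client's real behavior:

```python
def handle_instruction(kind, video_time, script_time):
    """Dispatch a platform instruction once the client has caught up
    (video_time >= script_time); before that, keep waiting."""
    if video_time < script_time:
        return "wait"               # not yet at the script point
    if kind == "fork":
        return "show_selection_ui"  # let the user pick a playback path
    if kind == "merge":
        return "play_next_video"    # branches converge
    if kind == "end":
        return "end_playback"
    raise ValueError(f"unknown instruction: {kind}")

assert handle_instruction("fork", 62, 60) == "show_selection_ui"
assert handle_instruction("merge", 60, 60) == "play_next_video"
assert handle_instruction("end", 10, 60) == "wait"
```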
  • Fig. 12 is an overall functional architecture diagram of a live broadcast platform according to an embodiment of the present disclosure.
  • the live broadcast platform of this embodiment can also be compatible with the general functions of the live broadcast platform, such as presenting gifts in the live broadcast room, sending host barrage, playing advertisements in the live broadcast room, and paying virtual currency.
  • some animation effects can be displayed when progressing to branch nodes or aggregation nodes, for example, the thinking countdown effect when the user selects the playback path, the montage transition effect when switching to play video files on different playback sub-paths, etc.
  • FIG. 13 is a flowchart of an interactive video processing method according to an embodiment of the present disclosure. The method is based on the live broadcast platform described in any embodiment, and the method includes step S1301 to step S1302.
  • Step S1301: Receive a node editing instruction, create play nodes according to the node editing instruction, and set the play time offset corresponding to each play node; wherein the play nodes include at least one bifurcation node and at least two child nodes.
  • Step S1302: Receive multiple video files, and respectively associate each video file with multiple playback sub-paths between adjacent playback nodes to generate an interactive video; wherein each playback sub-path of the same bifurcation node corresponds to one of multiple juxtaposed plot branches in the interactive video, and the play time offset is used to control the play progress of each client in the live room when playing the interactive video, so that the play time difference between the clients is less than a preset value.
  • in this way, each playback node can be edited on the live broadcast platform and the playback time offset corresponding to each playback node can be set, so that when the live broadcast platform plays interactive videos, it can control the playback progress of each client based on the playback time offset.
  • various clients can discuss video content with each other on a relatively uniform time axis, thereby improving the interactivity of each user in the live broadcast room when watching interactive videos.
  • the interactive video playback control method can be executed on the client connected to the server of the live broadcast platform.
  • Fig. 14 is a flowchart of an interactive video playback control method according to an embodiment of the present disclosure. The method may include step S1401 to step S1403.
  • Step S1401: Receive the fork message sent by the server when the current script time of the interactive video reaches the execution time for the interactive video to perform the scenario fork operation; wherein the current script time is the time offset of the playback time of the current video frame in the interactive video relative to the start playing time of the interactive video.
  • Step S1402 Determine whether the difference between the current play time of the interactive video played by the client and the execution time is greater than or equal to a preset time threshold.
  • Step S1403 If the difference is greater than or equal to the preset time threshold, respond to the fork message to display fork options on the client.
  • the server continuously detects the current script time of the interactive video.
  • when the script time reaches the execution time for the interactive video to perform the plot bifurcation operation, the server sends a bifurcation message to the client over the long connection to notify the client that the plot of the interactive video is currently progressing to the fork node.
  • after the client receives the fork message, it first determines whether the difference between the current play time of the interactive video played by the client and the execution time is greater than or equal to a preset time threshold. If the difference is greater than or equal to the preset time threshold, the current playback delay of the client exceeds the tolerable duration, which is not conducive to clients exchanging views on the interactive video plot in a unified time dimension. Therefore, to reduce the playback delay of the client, after receiving the fork message, the client immediately responds to the fork message and displays the fork option. At the same time, since the current play time has not reached the execution time, the interactive video still continues to play; in other words, the fork option is displayed while the interactive video continues to play. The fork option is used for the client to select the playback path of the interactive video, and each fork option corresponds to a playback path.
  • if the difference between the current playback time and the execution time is less than the preset time threshold, it is determined whether the current playback time has reached the execution time; if the current playback time reaches the execution time, the fork message is responded to in order to display fork options on the client.
  • in this case, the fork option can be displayed on the client after the current play time reaches the execution time. That is to say, if the difference is less than the preset time threshold, after the client receives the fork message, it does not immediately respond to the fork message to display the fork option; instead, it waits until the content of the interactive video before the fork node has finished playing and then displays the fork option on the client. If it is determined that the current play time has not reached the execution time, the interactive video continues to play until the current play time reaches the execution time, and then the fork option is displayed on the client.
  • the tolerance time can be set to 10 seconds.
  • the 10-second tolerance time is just an example, and this value can be adjusted based on actual experience summaries and different business needs.
  • the tolerance time on some playback nodes can be 5 seconds, and some can be 20 seconds or 30 seconds.
  • the value of the tolerance duration can be a very large value.
  • the client may also count down the display time of the fork option, and after the countdown ends, the playback path corresponding to the default fork option (ie, the default playback path) is automatically selected.
  • where T is the display countdown of the bifurcation option, T1 is the execution time, T2 is the current play time, and T3 is the time threshold (i.e., the tolerance duration).
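Steps S1401 to S1403 reduce to one threshold comparison; a hedged sketch using seconds and the T1/T2/T3 notation from the text:

```python
def on_fork_message(t1_execution_s, t2_play_s, t3_threshold_s):
    """After a fork message arrives: show the fork options immediately
    if the lag (T1 - T2) is at least the tolerance T3, otherwise wait
    until playback reaches the execution time T1."""
    if t1_execution_s - t2_play_s >= t3_threshold_s:
        return "show_now"  # display while the video keeps playing
    return "show_at_execution_time"

assert on_fork_message(65, 50, 10) == "show_now"                # 15 s lag
assert on_fork_message(65, 60, 10) == "show_at_execution_time"  # 5 s lag
```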
  • in some embodiments, the client may chase frames from the server. From the user's point of view, frame chasing feels like the video suddenly "jumps": for example, the 50th-second frame was shown at one moment, and the 60th-second frame is suddenly shown at the next. The user feels that part of the content is "lost" and the video skips ahead to continue playing from a later point. Frame chasing can further reduce the playback delay.
  • the client may also chase the frame from the server after receiving the heartbeat packet sent by the server.
  • the heartbeat packet can be sent periodically by the server at a preset time interval, for example, every 3 seconds.
  • the process of chasing frames is specifically: obtaining, through the pre-established interface between the client and the content delivery network (Content Delivery Network, CDN), the video content of the interactive video starting from a target playback moment, wherein the difference between the target playback moment and the execution time is less than the time threshold (tolerance duration); the client then plays the video content. For example, if the current playback time is the 50th second, the execution time is the 65th second, and the time threshold is 10 seconds, the client can obtain the video content corresponding to the 60th second of the interactive video through the pre-established interface with the content distribution network, and then the client starts playing the interactive video from the 60th second.
  • the content played from the 60th second of the interactive video is the video content starting from the target playback time.
  • the value here is only an example, and it is not limited to this in practical applications.
  • obtaining the video content of the interactive video starting from the target playback moment includes: searching the video buffer area for the video content of the interactive video starting from the target playback moment; if the video content is found, obtaining it from the video buffer area; if the video content is not found, pulling the video content of the interactive video starting from the target playback moment from the content distribution network through the interface.
  • for an interactive video in MP4 format, during playback the client first pulls a piece of video content from the CDN and caches it locally. For example, when the video content from the 0th second to the 20th second of the interactive video is playing, the video content from the 20th second to the 40th second may be pulled and cached. Therefore, when the client is chasing frames, it can first search the local cache of the client for the video content of the interactive video starting from the target playback time; if the video content is found, it jumps directly to that content and starts playing. If the video content is not found, it means the content has not been cached, so the video content is pulled from the CDN into the local cache, then read from the local cache and played starting from the target playback time.
  • the video in M3U8 format is similar to the video in MP4 format.
  • the video file of the interactive video is split into multiple video clips.
  • the client first pulls several video clips from the CDN and caches them locally. For example, when the first video clip is playing, the second to fourth video clips can be cached. Since the duration of each video clip is fixed, when the client is chasing frames, the video clip containing the target playback time can be determined first and searched for in the cache.
  • if the video clip is found, it is played directly from the target playback time; if the video clip is not found, it is pulled from the CDN and cached, then read from the cache and played from the target playback time.
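The cache-first, CDN-fallback lookup used while chasing frames can be sketched like this; the segment keys and the pull callback are hypothetical:

```python
def fetch_segment(target_s, cache, pull_from_cdn):
    """Return the segment covering the target playback moment: serve it
    from the local buffer on a hit, else pull from the CDN and cache it."""
    if target_s not in cache:
        cache[target_s] = pull_from_cdn(target_s)  # cache miss
    return cache[target_s]

cache = {40: b"segment-40"}  # already buffered ahead of playback
pulled = []

def pull_from_cdn(start_s):
    pulled.append(start_s)
    return b"segment-%d" % start_s

assert fetch_segment(40, cache, pull_from_cdn) == b"segment-40"
assert pulled == []  # served from the local buffer
assert fetch_segment(60, cache, pull_from_cdn) == b"segment-60"
assert pulled == [60]  # pulled once from the CDN, now cached
```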
  • a user's selection instruction of the fork option may be received; in response to the selection instruction, the video corresponding to the selected fork option in the interactive video is played content. For example, for the interactive video shown in FIG. 1, if the user sends a selection instruction for option 1, the video content corresponding to option 1 will start to be played.
  • a unified time axis (referred to as a script time axis) is adopted for each client.
  • the script timeline, also known as the "script axis", is an ideal time: from the perspective of the content creator, it specifies what happens at each relative time with respect to the zero point of the interactive video playback. For example, the creator hopes that a plot bifurcation starts at the 10th minute on the script, a plot aggregation appears at the 15th minute, and all content ends at the 30th minute. The emphasis here is on "relative time", that is, the time offset.
  • the time when the user sees the video is an absolute time, which is related to the network conditions of each user and other factors.
  • the server sends a fork message to the client according to the script time to inform the client whether it should perform the fork operation.
  • the client compares the current playback time of the interactive video with the execution time to learn the current delay of the client. When the delay exceeds the tolerance duration, the fork option is displayed on the client immediately after the fork message is received, and then the client chases frames. This reduces the delay of the client playing the interactive video and allows different clients to play interactive videos on a relatively uniform time axis, facilitating interaction between clients.
  • the content of the interactive video will be officially launched at 18:00:00 on July 25, 2019.
  • assume Zhang San makes a choice immediately whenever he receives a fork choice interaction. Then he receives the fork message of fork node R at the absolute time 2019-07-25 18:01:00 and the fork option of node R is displayed; if Zhang San chooses the path toward node B, he receives the fork message of fork node B at 2019-07-25 18:04:00 and the fork option of node B is displayed; if Zhang San chooses the path toward node A, he receives the fork message of fork node A at 2019-07-25 18:06:00 and the fork option of node A is displayed.
  • Li Si was delayed by 8 seconds, and Wang Wu was delayed by 20 seconds at the beginning (assuming Wang Wu lagged 20 seconds at the start and did not freeze again while the interactive video was playing).
  • FIG. 16 is a block diagram of an interactive video playback control device according to an embodiment of the present disclosure.
  • the device may include a receiving module 501, a judgment module 502, and a display module 503.
  • the receiving module 501 is configured to receive a fork message sent by the server when the current script time of the interactive video reaches the execution time of the interactive video to perform the scenario fork operation; wherein, the current script time is the current video frame in the interactive video The time offset of the play time relative to the initial play time of the interactive video.
  • the determining module 502 is configured to determine whether the difference between the current play time of the interactive video played by the client and the execution time is greater than or equal to a preset time threshold.
  • the display module 503 is configured to respond to the fork message to display fork options on the client if the difference is greater than or equal to the preset time threshold.
  • for related details, reference can be made to the corresponding description in the method embodiments.
  • the device embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the present disclosure. Those of ordinary skill in the art can understand and implement this without creative work.
  • the embodiments of the apparatus of the present disclosure can be applied to a computer device, such as a server or a terminal device.
  • the device embodiments can be implemented by software, or can be implemented by hardware or a combination of software and hardware.
  • as a logical device, it is formed by the processor of the computer device in which it resides reading the corresponding computer program instructions from non-volatile memory into memory and running them.
  • FIG. 17 is a hardware structure diagram of the computer device in which the interactive video playback control apparatus of the embodiment of the present disclosure resides. In addition to the processor 601, the memory 602, the network interface 603, and the non-volatile storage 604 shown in FIG. 17, the server or electronic device in which the apparatus of the embodiment resides may also include other hardware according to the actual functions of the computer device, which will not be described in detail here.
  • an embodiment of the present disclosure also provides a computer storage medium in which a program is stored, and the program is executed by a processor to implement the method in any of the foregoing embodiments.
  • an embodiment of the present disclosure also provides a client, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of the foregoing embodiments when the program is executed.
  • an embodiment of the present disclosure also provides an interactive video playback control system.
  • the system includes: a server; and a plurality of clients, each client including a processor and a memory storing a computer program executable by the processor, wherein the processor implements the method of any one of the embodiments when the computer program is executed; and wherein the server is configured to send a fork message to each client when the current script time of the interactive video reaches the execution time at which the interactive video performs the plot fork operation.
  • any live broadcast platform according to the foregoing embodiments may be implemented on the server.
  • the fork option is displayed on the client immediately, instead of waiting until the current playback time reaches the execution time at which the interactive video performs the plot fork operation.
  • the time delay for the client to play the interactive video is reduced, so that different clients can play the interactive video on a relatively uniform time axis, which facilitates the interaction between the clients.
  • the embodiments of the present disclosure may adopt the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing program codes.
  • Computer usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present disclosure provide an interactive video playback control method, apparatus, and system. After a fork message sent by a server is received, it is determined whether the difference between the current playback time at which a client plays the interactive video and the execution time is greater than or equal to a preset time threshold; if the difference is greater than or equal to the preset time threshold, a fork option is displayed on the client immediately.

Description

Processing and Playback Control of Interactive Video  Technical Field
The present disclosure relates to the field of computer software technology, and in particular to the processing and playback control of interactive video.
Background
Interactive video is a new type of video in which an interactive experience is integrated into linear video through various technical means. FIG. 1 is a schematic diagram of an interactive video in a practical application scenario. When an interactive video reaches a certain point, several branch options can be provided on the playback interface for the user to choose from; when watching the interactive video on a live streaming platform, the user can independently select different branches to watch different plot directions.
Summary
According to a first aspect of the embodiments of the present disclosure, a live streaming platform is provided, including a node editor and a video editor. The node editor is configured to receive a node editing instruction, create playback nodes according to the node editing instruction, and set a playback time offset corresponding to each playback node, where the playback nodes include at least one fork node and at least two child nodes of the fork node. The video editor is configured to receive a plurality of video files and associate each video file with one of a plurality of playback sub-paths between adjacent playback nodes to generate an interactive video, where the playback sub-paths of a same fork node correspond to a plurality of parallel plot branches in the interactive video, and the playback time offsets are used to control the playback progress of the interactive video on each client in a live room, so that the difference in playback time of the interactive video among the clients is less than a preset value.
According to a second aspect of the embodiments of the present disclosure, an interactive video processing method based on the live streaming platform of any embodiment is provided. The method includes: receiving a node editing instruction, creating playback nodes according to the node editing instruction, and setting a playback time offset corresponding to each playback node, where the playback nodes include at least one fork node and at least two child nodes of the fork node; and receiving a plurality of video files and associating each video file with one of a plurality of playback sub-paths between adjacent playback nodes to generate an interactive video, where the playback sub-paths of a same fork node correspond to a plurality of parallel plot branches in the interactive video, and the playback time offsets are used to control the playback progress of the interactive video on each client in a live room, so that the difference in playback time of the interactive video among the clients is less than a preset value.
According to a third aspect of the embodiments of the present disclosure, an interactive video playback control method is provided. The method includes: receiving a fork message sent by a server when the current script time of the interactive video reaches the execution time at which the interactive video performs a plot fork operation, where the current script time is the time offset of the playback time of the current video frame in the interactive video relative to the initial playback time of the interactive video; determining whether the difference between the current playback time at which the client plays the interactive video and the execution time is greater than or equal to a preset time threshold; and if the difference is greater than or equal to the preset time threshold, responding to the fork message to display fork options on the client.
According to a fourth aspect of the embodiments of the present disclosure, an interactive video playback control apparatus is provided, including: a receiving module configured to receive a fork message sent by a server when the current script time of the interactive video reaches the execution time at which the interactive video performs a plot fork operation, where the current script time is the time offset of the playback time of the current video frame in the interactive video relative to the initial playback time of the interactive video; a judgment module configured to determine whether the difference between the current playback time at which the client plays the interactive video and the execution time is greater than or equal to a preset time threshold; and a display module configured to, if the difference is greater than or equal to the preset time threshold, respond to the fork message to display fork options on the client.
According to a fifth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the processor is caused to implement the interactive video playback control method of any embodiment.
According to a sixth aspect of the embodiments of the present disclosure, a client is provided, including a processor and a memory for storing a computer program executable by the processor, where the processor implements the interactive video playback control method of any embodiment when executing the computer program.
According to a seventh aspect of the embodiments of the present disclosure, an interactive video playback control system is provided, including a server and a plurality of clients, each client including a processor and a memory for storing a computer program executable by the processor, where the processor implements the interactive video playback control method of any embodiment when executing the computer program, and the server is configured to send a fork message to each client when the current script time of the interactive video reaches the execution time at which the interactive video performs a plot fork operation.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an interactive video in a practical application scenario.
FIG. 2 is a schematic diagram of a live streaming platform according to an embodiment of the present disclosure.
FIG. 3 is a schematic diagram of playback nodes according to an embodiment of the present disclosure.
FIG. 4a is a schematic diagram of an interactive video editing interface according to an embodiment of the present disclosure.
FIG. 4b is a schematic diagram of an interactive video editing interface according to another embodiment of the present disclosure.
FIG. 5 is a schematic diagram of interactive video playback times and showings according to an embodiment of the present disclosure.
FIG. 6 is a schematic diagram of bullet-screen comments displayed on a client according to an embodiment of the present disclosure.
FIG. 7a is a schematic diagram of bullet-screen comments displayed on a client according to another embodiment of the present disclosure.
FIG. 7b is a schematic diagram of bullet-screen comments displayed on a client according to yet another embodiment of the present disclosure.
FIG. 8 is a schematic diagram of a live streaming platform according to another embodiment of the present disclosure.
FIG. 9 is a partial functional architecture diagram of a live streaming platform according to an embodiment of the present disclosure.
FIG. 10 is a schematic diagram of an interactive video creation process according to an embodiment of the present disclosure.
FIG. 11 is a schematic diagram of an interactive video playback process according to an embodiment of the present disclosure.
FIG. 12 is an overall functional architecture diagram of a live streaming platform according to an embodiment of the present disclosure.
FIG. 13 is a flowchart of an interactive video processing method according to an embodiment of the present disclosure.
FIG. 14 is a flowchart of an interactive video playback control method according to an embodiment of the present disclosure.
FIG. 15 is a schematic diagram of the playback process of clients with different playback delays according to an embodiment of the present disclosure.
FIG. 16 is a block diagram of an interactive video playback control apparatus according to an embodiment of the present disclosure.
FIG. 17 is a schematic structural diagram of a computer device for implementing the interactive video playback control method according to an embodiment of the present disclosure.
FIG. 18 is a schematic diagram of an interactive video playback control system according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as recited in the appended claims.
The terms used in the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. The singular forms "a", "said", and "the" used in the present disclosure and the appended claims are also intended to include the plural forms unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used in the present disclosure to describe various pieces of information, this information should not be limited by these terms, which are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be called second information, and similarly second information may be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The terms used in the present disclosure are defined as follows.
Playback node (node for short): a node arranged and associated according to a certain organizational structure. Nodes can be predefined when the interactive video is produced. An interactive video requires at least three nodes: one fork node and at least two child nodes of that fork node (the fork node being the parent of those child nodes). The interactive video may also include other nodes, such as aggregation nodes and ordinary nodes. A node with multiple child nodes is a fork node; a node with multiple parent nodes is an aggregation node; a node with at most one parent and at most one child is a non-fork node. According to position, nodes can also be divided into start nodes, intermediate nodes, and end nodes: a start node is a root node with no parent; an end node is a leaf node with no children; a node with both a parent and children is an intermediate node.
A schematic diagram of the playback nodes of one or more embodiments is shown in FIG. 3. The figure includes eight playback nodes S, R, A, B, E1, E2, E3, and E4, where node S is the start node, node R is an intermediate node, nodes A and B are children of node R, nodes E1 and E2 are children of node A, nodes E3 and E4 are children of node B, node S is a non-fork node, nodes R, A, and B are fork nodes, and nodes E1, E2, E3, and E4 are end nodes.
Adjacent playback nodes: a playback node and its parent node are adjacent playback nodes; likewise, a playback node and its child node are adjacent playback nodes.
Playback path: the path connecting two playback nodes. For example, the path connecting node S and node R is the playback path SR between them; the path connecting node S and node A is the playback path SR→RA. The playback path between two adjacent playback nodes is also called a playback sub-path; in FIG. 3, SR is the playback sub-path between nodes S and R, and RA is the playback sub-path between nodes R and A. Fork nodes and aggregation nodes each involve multiple playback paths, while an ordinary node involves one. The playback path between a node and a child of that node is said to be subordinate to that node; playback paths subordinate to its children are also subordinate to that node.
An interactive video may include multiple video files: each playback sub-path corresponds to one video file, the video files of all playback sub-paths together constitute the video content of the interactive video, and the playback nodes together with the playback paths between them constitute its playback logic. Multiple playback sub-paths subordinate to the same fork node are parallel playback sub-paths, and they correspond to parallel fork plots in the interactive video; the user can select one of them to determine the plot direction of the interactive video. For example, for the playback nodes shown in FIG. 3, at node R the user can select the option corresponding to RA to play the branch plot corresponding to RA; at node A, the user can select the option corresponding to AE1 to play the branch plot corresponding to AE1.
History selection path: the playback paths the user has already selected while watching the interactive video. For example, if at node R the user selected the option corresponding to RA, the user's history selection path is SR→RA. If the user has never selected a playback path, the history selection path is empty.
Current playback path: the playback sub-path being played at the current moment. For example, when the playback path of the interactive video is SR→RA and the video file corresponding to RA is currently playing, RA is the current playback path.
Default playback path: each fork node corresponds to one default playback path. If the user does not select a playback path within a preset period, the system automatically selects a preset playback path (the default playback path) for the user and plays it until the user selects a playback path again. For example, in FIG. 3, the default playback path of node R may be set to RA, and that of node B to BE3. The path between two adjacent playback nodes on a default playback path is a default playback sub-path.
Target playback path: the next playback sub-path to be played. The target playback path can be selected by the user; if the user does not select one within the preset period, the default playback path is used as the target playback path. For example, at node R, if the user selects RB, RB is the target playback path; if the user makes no selection at node R, the default playback sub-path RA of node R is used as the target playback path.
Playback time offset: the time offset between the current playback time of the interactive video and the initial playback time of the interactive video. For example, if the interactive video starts playing at 20:00:00 and the current playback time is 20:10:00, the playback time offset is 10 minutes.
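The node structure defined above is a small directed graph, and enumerating its root-to-leaf routes yields the playback paths. The following is a minimal sketch of the graph from FIG. 3 (the dictionary layout and helper name are illustrative, not part of the disclosure):

```python
# Hypothetical sketch of the playback-node graph from FIG. 3.
children = {
    "S": ["R"],
    "R": ["A", "B"],      # fork node: parallel sub-paths RA and RB
    "A": ["E1", "E2"],    # fork node
    "B": ["E3", "E4"],    # fork node
}

def playback_paths(node="S", prefix=()):
    """Enumerate all full playback paths as sequences of sub-paths."""
    kids = children.get(node, [])
    if not kids:                      # leaf (end node): one full path done
        yield "->".join(prefix)
        return
    for child in kids:
        yield from playback_paths(child, prefix + (node + child,))

print(sorted(playback_paths()))
# the four paths: SR->RA->AE1, SR->RA->AE2, SR->RB->BE3, SR->RB->BE4
```

Each sub-path in such an enumeration would be associated with one video file, matching the definition of parallel fork plots.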
FIG. 2 is a schematic diagram of a live streaming platform according to an embodiment of the present disclosure. The live streaming platform may include a node editor 201 and a video editor 202.
The node editor 201 is configured to receive a node editing instruction, create playback nodes according to the node editing instruction, and set a playback time offset corresponding to each playback node; the playback nodes include at least one fork node and at least two child nodes of the fork node.
The video editor 202 is configured to receive a plurality of video files and associate each video file with one of the playback sub-paths between adjacent playback nodes to generate an interactive video; the playback sub-paths of a same fork node correspond to parallel plot branches in the interactive video, and the playback time offsets are used to control the playback progress of the interactive video on each client in the live room, so that the difference in playback time of the interactive video among the clients is less than a preset value.
The live streaming platform of this embodiment can directly or indirectly receive instructions sent by operations staff or directors to implement the creation, editing, publishing, and playback of interactive videos.
The node editor 201 can receive a node editing instruction sent by operations staff or a director. The node editing instruction may carry the playback time offset of a playback node, that is, the time difference between the moment the interactive video is played to that node and the video's initial playback time. For example, if the interactive video starts playing at 20:00:00 and reaches the first playback node at 20:10:00, the playback time offset of the first playback node is 10 minutes. Playing the interactive video to a certain playback node means starting to play the video file associated with a playback sub-path formed by that node and one of its child nodes.
Further, the node editing instruction may also carry the node type, which may be fork-type, aggregation-type (also called convergence-type), or ordinary-type. A fork-type node is a playback node with multiple child nodes; an aggregation-type node is a playback node with multiple parent nodes; an ordinary-type node is a playback node with exactly one parent and one child, or with only a parent and no child (an end node), or with only a child and no parent (a start node). For a fork-type node, the node editing instruction may also carry the number of forks of the node; each fork corresponds to one child node, and the forks of a same fork node correspond to that node's parallel plot branches in the interactive video.
The playback nodes of the interactive video include at least one fork node and its corresponding two forks; that is, the interactive video includes at least three nodes (a fork node and its at least two child nodes). In practice, the interactive video may also include other types of nodes, such as aggregation nodes and/or ordinary nodes; the playback nodes shown in FIG. 3, for example, include ordinary nodes S, E1, E2, E3, and E4 as well as fork nodes R, A, and B.
After the playback nodes are defined, the video editor 202 can receive video files uploaded by operations staff or directors and associate each video file with one of the playback sub-paths between adjacent playback nodes. The video editor 202 can also obtain the URL (Uniform Resource Locator) of a video file entered by operations staff or a director and fetch the corresponding video file by accessing that URL.
Schematic diagrams of the interactive video editing interface of one or more embodiments are shown in FIGS. 4a and 4b, for an interactive video containing only two playback paths, SR→RA→AM→MT and SR→RB→BM→MT. FIG. 4a shows a node editing interface of one embodiment, which includes a node creation component; sending an instruction to this component (for example, clicking it with the mouse) adds a playback node. Created playback nodes can be displayed in order in a playback node list, which may include the serial number, type, node code, and playback time offset of each node. Each playback node can correspond to a node modification component and a node deletion component: an instruction sent to the modification component changes the node's type, node code, or playback time offset, and an instruction sent to the deletion component removes the node.
FIG. 4b shows a video upload interface of one embodiment. For the nodes created during node editing, the path between adjacent nodes is a playback sub-path, and one video file can be uploaded for each playback sub-path. For example, a video file can be uploaded for the path between node R and node A in FIG. 4a; its code can be written as 2_3, where 2 is the serial number of the head node of the sub-path (node R) and 3 is the serial number of the tail node (node A). The head node is the starting node of a playback sub-path and the tail node is its ending node. For a fork node, the code of the video file of each fork can further include a number corresponding to the fork: for example, 3_5_1 denotes the video file of the first fork between the playback nodes with serial numbers 3 and 5. Each fork node can also correspond to a fork-adding component used to add a new fork to that node.
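The file-naming rule described above (head node, tail node, optional fork number) can be illustrated with a tiny helper; the function name is hypothetical and only mirrors the 2_3 and 3_5_1 examples given in the text:

```python
def video_file_code(head_no, tail_no, fork_index=None):
    """Build the video-file code for a playback sub-path.

    head_no and tail_no are the serial numbers of the head and tail
    playback nodes; fork_index (1-based) is appended only for the
    branches of a fork node, e.g. 3_5_1.
    """
    code = f"{head_no}_{tail_no}"
    if fork_index is not None:
        code += f"_{fork_index}"
    return code

print(video_file_code(2, 3))      # 2_3   (path between node R and node A)
print(video_file_code(3, 5, 1))   # 3_5_1 (first fork between nodes 3 and 5)
```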
By defining playback nodes, this embodiment can determine the organizational structure and playback paths of the video files in the interactive video. For the playback nodes shown in FIG. 3, the interactive video has four playback paths: SR→RA→AE1, SR→RA→AE2, SR→RB→BE3, and SR→RB→BE4. In addition, by setting the playback time offset corresponding to each playback node, a virtual timeline (the script timeline) is formed. The significance of the script timeline is that it determines the time offset (time difference) between the playback time of every video frame in the interactive video and the video's initial playback time; this offset depends only on the durations of the video files in the interactive video and on the playback path, not on whether the client's network conditions are good. In other words, once the interactive video has been edited, which plot plays at which minute and second of the video is already fixed.
If, due to network stalling or similar causes, a client's actual playback moment differs significantly from the playback time offset of the interactive video on the script timeline, the client's playback progress can be controlled by frame chasing, by adjusting the time reserved for the client to select a playback path, and so on, to reduce that client's playback delay. This keeps the playback progress of the interactive video relatively uniform across clients, allowing the clients to discuss the plot on a relatively unified timeline and improving the interactivity of the live broadcast.
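The progress check implied here reduces to comparing a client's playback position against the script timeline; a minimal sketch, with an illustrative tolerance value and function name:

```python
def needs_catch_up(script_time, video_time, tolerance=10):
    """Return True when the client's playback lags the script
    timeline by more than the tolerated duration (seconds)."""
    return script_time - video_time > tolerance

# A client stalled for 20 s is 20 s behind the script timeline:
print(needs_catch_up(script_time=60, video_time=40))   # True
print(needs_catch_up(script_time=60, video_time=52))   # False (an 8 s lag is tolerated)
```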
FIG. 8 is a schematic diagram of a live streaming platform according to another embodiment of the present disclosure. The live streaming platform further includes a video manager 203 configured to receive the playback time information and showing information of each interactive video and associate them with the corresponding interactive video. FIG. 5 is a schematic diagram of interactive video playback times and showings according to an embodiment of the present disclosure. In FIG. 5, interactive video 1 is played at 13:00 and 17:00 with a duration of 90 minutes, the 13:00 showing being the premiere (the first playback); interactive video 2 is played at 15:00 with a duration of 100 minutes. By playing the same interactive video at different times (different showings), a user who arrives late can watch previously missed plot content from the beginning in another showing, improving the user experience. For example, if a user enters the live room at 13:10, the user missed the first 10 minutes of the plot at the premiere of interactive video 1 and can therefore enter the live room again before 17:00 to rewatch the first 10 minutes.
In one or more embodiments, the live streaming platform further includes a director console 204 configured to, during playback of the interactive video and in response to a received interactive video editing instruction, insert playback nodes into the interactive video, delete playback nodes from the interactive video, and/or insert advertisements into the interactive video.
In this embodiment, part of the control over the interactive video can be opened to operations staff or directors, who edit the interactive video by manually sending instructions. Operations staff or a director can send an interactive video editing instruction to the live streaming platform; the instruction may be a playback node insertion instruction, a playback node deletion instruction, an advertisement insertion instruction, or an instruction of another type. After receiving the editing instruction, the platform can, on the fly, insert a playback node, delete a playback node, or insert an advertisement into the interactive video accordingly.
In one or more embodiments, the director console 204 is further configured to count the instruction operations performed by the clients playing the interactive video and/or count the number of clients on each playback sub-path. In this embodiment, the console 204 can count the instruction operations performed by each client; the instruction operations here may include operations for selecting a playback path, operations for enabling or disabling the bullet-screen isolation function, operations for enabling or disabling the bullet-screen anti-spoiler function, and so on. The console 204 can also count the clients on each playback sub-path, that is, how many clients are currently playing the video file corresponding to path SR, how many are playing the file corresponding to path RA, and so on.
In one or more embodiments, the live streaming platform further includes an instruction controller 205 configured to obtain the server time and, when the server time reaches the playback time indicated by the playback time information of the interactive video, send an interactive video playback start instruction to the client through a long connection pre-established with the client; the start instruction directs the client to access the video address of the interactive video and start playing it.
The live streaming platform can maintain a video playlist (a program guide) recording the playback time information and showing information of each interactive video and load this information into the instruction controller 205. A client that is already in the live room before the interactive video starts can establish a long connection with the platform when entering the room, listen for messages on that connection, and wait for the unified start instruction. The instruction controller 205 runs in a loop and, on determining that the current server time has reached the broadcast time, sends the start instruction to each client on time; the instruction carries the address of the video file to be played (for example, a file in m3u8 format), and the client accesses that address and starts playing the corresponding file.
In one or more embodiments, the instruction controller 205 is further configured to respond to a short connection establishment request sent by the client, establish a short connection with the client, and receive over it a playback request instruction sent by the client to request that playback of the interactive video be started.
A client that enters the live room only after the interactive video has started playing there (a late user) can, upon entering, send a short connection establishment request to the platform; the instruction controller 205 can establish the short connection and receive the playback request instruction over it, and, in response, send the client a start instruction so that it begins playing the interactive video. In addition, if a client's network is disconnected while it is playing the interactive video, the client can likewise actively send a playback request instruction over a short connection, and the instruction controller 205, in response, can send a continue-playing instruction so that the client resumes playback.
In one or more embodiments, the live streaming platform further includes a bullet-screen processor 206 configured to obtain a bullet-screen comment sent by a client, obtain the current playback path of the interactive video at the time the client sent the comment, and forward the comment to the other clients on that current playback path.
In this embodiment, the bullet-screen processor 206 can forward a comment sent by one client to every client on the same current playback path. For example, if clients 1 and 2 are both on the current playback path RA and client 1 sends the comment "What a nice day today", the processor 206 can forward "What a nice day today" to client 2 for display.
Further, if the current playback paths of client 1 and client 2 differ, the processor 206, upon receiving a comment from client 1, isolates that comment from client 2; that is, it filters and blocks the comment sent by client 1 to prevent it from being forwarded to client 2.
FIGS. 6 and 7a are schematic diagrams of bullet-screen comments displayed on clients. Suppose user 1 on playback path RA sends "What a nice day today", user 2 on playback path RB sends "Hello everyone!" and "Assemble quickly", and user 3 on playback path RB sends "Who else is watching?". Then the comments shown on user 1's playback interface block those sent by users 2 and 3, as shown in FIG. 6, while the comments shown on the playback interfaces of users 2 and 3 block the one sent by user 1, as shown in FIG. 7a.
By blocking comments sent by clients on different playback paths, the bullet-screen isolation function lets a client display only the comments sent by clients on the same playback path. Blocking comments from clients on different playback paths, on the one hand, reduces the number of comments and prevents excessive comments from interfering with video playback; on the other hand, since clients on different playback paths are playing different plot branches of the interactive video, isolation keeps users watching different branches from being spoiled and makes it easier for users watching the same branch on the same playback path to communicate through comments, improving the interactivity of interactive video playback and the user experience.
In one or more embodiments, the bullet-screen processor 206 is further configured to obtain a second time offset between the moment a second client sends a comment and the initial playback time of the interactive video, and forward the comment to a first client whose first time offset, between the first client's current playback time and the initial playback time of the interactive video, is less than the second time offset.
When sending a comment, the second client associates the sending moment with the comment and then sends the comment to the server. After receiving the comment, the server can parse the associated time to obtain the second time offset, which indicates how many minutes and seconds into the interactive video the second client had played at the moment it sent the comment.
In addition, the server can obtain the first time offset at which a first client in the same live room is currently playing the interactive video; the first time offset indicates how many minutes and seconds into the interactive video the first client has currently played.
If the second time offset is greater than the first time offset, the second client's playback progress is ahead of the first client's. For example, with a first time offset of 00:20:00 and a second time offset of 00:30:00, the first client has only played to 00:20:00 of the interactive video while the second client has already played to 00:30:00; that is, the second client is 10 minutes ahead of the first client.
In this case, because the second client has already played plot content that comes after what the first client is currently watching, comments sent by the second client could spoil the plot for the first client's user. To prevent this, the comments sent by the second client can be isolated on the first client; that is, the server intercepts them and prevents them from being displayed on the first client.
Suppose user 1 sends "What a nice day today" through client 1, user 2 sends "Hello everyone!" and "Assemble quickly" through client 2, and user 3 sends "Who else is watching?" through client 3, with client 1 currently playing at 00:10:00 of the interactive video, client 2 at 00:20:00, and client 3 at 00:30:00. Then client 1's playback interface isolates the comments sent by clients 2 and 3, as shown in FIG. 6, while client 2's interface isolates only the comment sent by client 3, as shown in FIG. 7b.
By obtaining the second client's second time offset and the first client's first time offset, and isolating the second client's comments on a first client whose first time offset is smaller than the second, the solution of this embodiment prevents comments from a second client with faster playback progress from spoiling the plot for the user of a slower first client, improving the experience of watching the interactive video.
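The anti-spoiler rule described above reduces to comparing the sender's time offset with the viewer's; a minimal sketch (the data layout and names are illustrative):

```python
def visible_comments(viewer_offset, comments):
    """Keep only comments sent at a playback offset no later than
    the viewer's own progress (anti-spoiler isolation)."""
    return [text for sender_offset, text in comments
            if sender_offset <= viewer_offset]

comments = [(600, "What a nice day today"),     # sent at 00:10:00
            (1200, "Hello everyone!"),          # sent at 00:20:00
            (1800, "Who else is watching?")]    # sent at 00:30:00

# A viewer at 00:20:00 sees only the first two comments:
print(visible_comments(1200, comments))
```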
In one or more embodiments, the live streaming platform further includes a player 207 configured to receive a playback path selection instruction from a client, determine the target playback sub-path of the interactive video according to the instruction, and send the URL of the video file associated with the target playback sub-path to the client.
Suppose playback node R includes playback sub-paths RA and RB and the playback path selection instruction determines RB as the target sub-path; the URL of the video file associated with RB is then sent to the client.
Further, if no playback path selection instruction is received from the client within a preset period, the URL of the video file associated with the default playback sub-path is sent to the client. Suppose node R includes sub-paths RA and RB, with RA as node R's default sub-path; if no selection instruction from the client arrives within the preset period, the URL of the video file associated with RA is sent to the client.
In one embodiment, the player 207 is further configured to send the current script time of the interactive video to the client, and the client is configured to chase frames toward the server when the difference between the current script time and the actual playback time at which it is currently playing the interactive video exceeds a preset time threshold.
The script time is the time on the script timeline; as described above, the time offsets between the playback time of every video frame in the interactive video and the video's initial playback time determine the script timeline. The script time is equivalent to a client's playback time under ideal conditions, that is, with no network delay and no stalling. In practice, because of network delay and stalling, a client's actual playback time generally lags the script time, and the actual playback times of different clients generally differ.
For example, according to the configured script time, the interactive video should start playing at 18:00:00 and reach fork node R at 18:01:00; if playback path RB is selected, it reaches fork node B via RB at 18:04:00, and if RA is selected, it reaches fork node A via RA at 18:06:00.
Client 1 has an average network with occasional stalls; it starts playing at 18:00:08 after an 8-second stall. Client 1 receives the fork message for playback node R from the platform at 18:01:00 but only displays the selection component for choosing node R's subsequent playback path on its playback interface at 18:01:08. If client 1 selects path RB, it receives node B's fork message at 18:04:00 and only displays the selection component for node B's subsequent path at 18:04:08; if it selects RA, it receives node A's fork message at 18:06:00 and only displays the selection component for node A's path at 18:06:08.
Client 2 has an extremely poor network and frequently needs to buffer; it starts playing at 18:00:20 after a 20-second stall (assume the preset time threshold is 10 seconds). Client 2 receives node R's fork message at 18:01:00 and, to reduce its playback delay, can chase frames after receiving the message, immediately displaying the selection component for node R's subsequent path on its playback interface at 18:01:10 (at which point only a 10-second playback delay remains thanks to frame chasing). If client 2 selects path RB, it receives node B's fork message at 18:04:00 and only displays the selection component for node B's subsequent path at 18:04:10; if it selects RA, it receives node A's fork message at 18:06:00 and only displays the selection component for node A's subsequent path at 18:06:10.
Frame chasing may be active, performed by the client itself after it receives a node message. Specifically, the client can obtain the current script time t1 and its own current playback time t2; if the gap between t1 and t2 exceeds a certain threshold (the tolerance duration), the client's player immediately resumes playback from time point t1. If the stall duration is shorter than the tolerance duration, no frame chasing is performed.
Frame chasing may also be passive: for example, the live streaming platform periodically sends heartbeat packets to the client (say, once every 3 seconds), and on receiving a heartbeat the client checks whether the difference between its current playback time and the current script time exceeds the tolerance duration; if it does, the client chases frames, otherwise it does not.
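Both the active check and the heartbeat-driven check boil down to the same comparison followed by a seek; a hedged sketch (the seek callback is hypothetical, standing in for the player's jump-to-time operation):

```python
def on_heartbeat(script_time, video_time, tolerance, seek):
    """Passive frame chasing: on each heartbeat, jump the player
    to the script time when the lag exceeds the tolerance."""
    if script_time - video_time > tolerance:
        seek(script_time)   # player resumes from the script time
        return True
    return False

jumps = []
on_heartbeat(script_time=60, video_time=40, tolerance=10, seek=jumps.append)
on_heartbeat(script_time=60, video_time=52, tolerance=10, seek=jumps.append)
print(jumps)   # only the 20-second lag triggered a jump: [60]
```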
In one embodiment, the player 207 is further configured to obtain a bullet-screen whitelist of a first client; the whitelist stores a list of preset playback paths, and comments sent by any client on a preset playback path are allowed to be displayed on the first client. When the bullet-screen processor 206 receives a comment sent by a second client, the player 207 determines whether the second client's current playback path is in the whitelist; if it is not, the comment sent by the second client is isolated so that the first client does not display it.
Further, if the second client's current playback path is in the whitelist, the first client may display the comment sent by the second client.
In this embodiment, the first client can also display comments sent by second clients whose current playback paths differ from the first client's; the current playback path of such a second client (that is, a preset playback path configured in the first client's whitelist) can be stored in the whitelist in advance. A preset playback path can be a playback sub-path subordinate to the same fork node as the first client's current playback path, or a sub-path under another fork node. Preset playback paths can be customized by the user, or default preset playback paths can be used; the default preset playback paths can be the playback sub-paths subordinate to the same fork node as the first client's current playback path.
The first client can enable the whitelist function; after it is enabled, the default preset playback paths are used if the preset paths have not been edited, and the edited preset paths are used otherwise.
This embodiment lets users choose their comment sources themselves: for example, a user who wants to see the comments of other users on the different playback sub-paths of the same fork node can add all of that fork node's sub-paths to the whitelist. This form of bullet-screen isolation is more flexible and further improves the user experience.
In one embodiment, the player 207 is further configured to store a bullet-screen scope for each playback node and, when a first client's current playback path is within the scope, block on the first client the comments sent by second clients whose current playback paths are not within the scope.
In this embodiment, every node corresponds to a bullet-screen scope storing a list of playback paths; for the paths in the list, the comments sent by the clients on a given playback path can be displayed on the other clients on that playback path.
Taking the playback nodes of FIG. 3 as an example, suppose node R's bullet-screen scope is RA, clients 1 and 2 are both on current playback path RA, and client 3 is on current playback path RB. Since the current playback paths of clients 1 and 2 are within the same scope, client 1 can display the comments sent by clients 1 and 2, and so can client 2. Client 3 is outside the scope, so it displays only the comments sent by client 3 itself and not those sent by clients 1 and 2.
In one embodiment, the player 207 is further configured to update the client's playback path. Specifically, the default playback path can be updated according to the current default playback path. Suppose the original default playback path is SR→RA; when the plot advances to RA's default playback sub-path AE1, that is, when the video file corresponding to RA finishes and the file corresponding to AE1 starts playing, the default playback path can be updated to SR→RA→AE1. By updating the default playback path, this embodiment continuously records the path already played while the user has not selected a playback path, so that the next time the user enters the live room, playback of the interactive video's subsequent content can continue directly after the path already played.
In one or more embodiments, the target playback path selected by the client can also be obtained, and after the video file corresponding to the target playback path is played, the client's playback path is updated according to the target path. Suppose the original history selection path is SR→RB; when the plot advances to node B and the user selects playback sub-path BE4, the playback path is updated to SR→RB→BE4 after the video file corresponding to BE4 starts playing. By updating the playback path, the path already played is continuously recorded while the user watches the interactive video, so that the next time the user enters the live room, subsequent content can be played on the basis of the path already played.
In one embodiment, the player 207 is further configured to update the comments displayed on the client after the playback path is updated. Because the bullet-screen scope may differ between playback nodes, the comments shown on the client must be updated at the same time as the playback path. Specifically, the target bullet-screen scope of the most recent playback node on the updated path can be obtained; when a first client's current playback path is within the target scope, the first client displays comments sent by second clients whose current playback paths are within the target scope and blocks comments sent by third clients whose current playback paths are not.
The node editor, video editor, video manager, director console, instruction controller, bullet-screen processor, player, and other components shown in FIG. 8 may be implemented in software, in hardware, or in a combination of software and hardware. Taking software implementation as an example, each is, as a logical device, formed by the processor of the device it resides on reading the corresponding computer program instructions from non-volatile memory into memory and running them.
FIG. 9 is a partial functional architecture diagram of the live streaming platform shown in FIG. 8. As shown in FIG. 9, operations specialists or directors can edit videos and nodes on the platform and then upload the finished interactive video to it. When there are multiple interactive videos, they can be managed together, with the playback times and showings of each video managed through the program guide. In addition, the instruction operations performed by clients playing the interactive video and/or the number of clients on each playback sub-path can be counted.
FIG. 10 is a schematic diagram of the interactive video creation process of the live streaming platform shown in FIG. 8. As shown, when creating an interactive video, playback nodes are first created according to the received node creation instructions, and the time offset of each playback node is determined; these time offsets constitute the script timeline. Video files are then uploaded for the playback paths between the playback nodes to fill in the plot time between the nodes.
FIG. 11 is a schematic diagram of an interactive video playback process according to an embodiment of the present disclosure. After entering the live room, a client can establish a long connection with the live streaming platform, listen for the start instruction on that connection, and wait for the interactive video to start. When it receives the start instruction, the client plays the interactive video of the current showing. After the client receives a new instruction (for example, a fork instruction), the platform obtains the current playback time at which the client is playing the interactive video (video_time) and the current script time of the interactive video (script_time). If the current playback time is greater than or equal to the current script time (video_time ≥ script_time), the type of the instruction is checked: for a fork instruction, a selection UI (User Interface) pops up on the client so that the user can choose a playback path, or the default option is selected automatically when the selection times out; for a merge instruction, the next video segment is played directly; and for an end instruction, playback ends.
FIG. 12 is an overall functional architecture diagram of the live streaming platform according to an embodiment of the present disclosure. In addition to the functions of the foregoing embodiments, the platform of this embodiment can also support the general functions of a live streaming platform, such as sending gifts in the live room, sending host bullet-screen comments, playing advertisements in the room, and paying with virtual currency. Animation effects can also be shown when the plot advances to a branch node or an aggregation node, such as a thinking countdown while the user selects a playback path and montage transition effects when switching between the video files of different playback sub-paths.
FIG. 13 is a flowchart of an interactive video processing method according to an embodiment of the present disclosure. The method is based on the live streaming platform of any embodiment and includes steps S1301 and S1302.
Step S1301: receive a node editing instruction, create playback nodes according to the node editing instruction, and set a playback time offset corresponding to each playback node; the playback nodes include at least one fork node and at least two child nodes of the fork node.
Step S1302: receive a plurality of video files and associate each video file with one of the playback sub-paths between adjacent playback nodes to generate an interactive video; the playback sub-paths of a same fork node correspond to parallel plot branches in the interactive video, and the playback time offsets are used to control the playback progress of the interactive video on each client in the live room, so that the difference in playback time of the interactive video among the clients is less than a preset value.
For other embodiments of the above method, refer to the other embodiments of the live streaming platform; they are not repeated here.
According to the embodiments of the present disclosure, the playback nodes can be edited on the live streaming platform and the playback time offset corresponding to each node can be set, so that when playing the interactive video the platform can control the playback progress of each client based on those offsets. The clients can then discuss the video content with one another on a relatively unified timeline, improving the interactivity among the users in the live room while they watch the interactive video.
According to some embodiments of the present disclosure, the interactive video playback control method can be executed on a client connected to the server of the live streaming platform.
FIG. 14 is a flowchart of an interactive video playback control method according to an embodiment of the present disclosure. The method may include steps S1401 to S1403.
Step S1401: receive a fork message sent by the server when the current script time of the interactive video reaches the execution time at which the interactive video performs a plot fork operation; the current script time is the time offset of the playback time of the current video frame in the interactive video relative to the video's initial playback time.
Step S1402: determine whether the difference between the current playback time at which the client plays the interactive video and the execution time is greater than or equal to a preset time threshold.
Step S1403: if the difference is greater than or equal to the preset time threshold, respond to the fork message to display fork options on the client.
While the interactive video is playing, the server continuously checks its current script time; when the script time reaches the execution time of the plot fork operation, the server sends a fork message to the client over the long connection to notify it that the plot of the interactive video has now advanced to a fork node.
Users usually experience network delay and stalling while playing the interactive video, so there is a difference (a delay time) between a user's "video experience axis" and the "script timeline". In some cases, however, unbounded delay cannot be tolerated.
This is because different users need to watch the interactive video and interact (for example, exchange bullet-screen comments) on a relatively unified timeline. Only then can users watching the interactive video at the same time communicate with one another in both directions, analyzing the plot together and sharing their emotions and impressions. This requires that the video content the users see not diverge too much; the larger the divergence between what users see, the harder their two-way communication becomes. For example, suppose a very funny gag appears in the plot: Zhang San sees it and immediately bursts out laughing, while Li Si sees it only five minutes later and so starts laughing five minutes after Zhang San. In such a scenario, it is very hard for Zhang San and Li Si to exchange their viewing experience and impressions through bullet-screen comments. The delay between different users therefore needs to be limited.
Accordingly, after receiving the fork message, the client first determines whether the difference between the current playback time at which it plays the interactive video and the execution time is greater than or equal to the preset time threshold. If the difference is greater than or equal to the threshold, the client's current playback delay already exceeds the tolerable tolerance duration, which would hinder the clients from discussing the plot of the interactive video on a unified timeline. To reduce the client's playback delay, the client therefore responds to the fork message immediately upon receipt and displays the fork options. Meanwhile, because the current playback time has not yet reached the execution time, the interactive video continues to play; that is, the fork options are displayed while the video keeps playing. The fork options are used by the client to select a playback path of the interactive video, and each fork option corresponds to one playback path.
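Steps S1401 to S1403 can be sketched as a single decision on message receipt (names and the display callback are illustrative, not part of the disclosure):

```python
def on_fork_message(execute_time, video_time, threshold, show_options):
    """Handle a fork message: show the fork options immediately
    when the client's lag has reached the preset threshold;
    otherwise wait until playback reaches the execution time."""
    if execute_time - video_time >= threshold:
        show_options()              # show at once; the video keeps playing
        return "shown_immediately"
    return "wait_until_execute_time"

print(on_fork_message(60, 40, 10, lambda: None))   # 20 s lag -> shown immediately
print(on_fork_message(60, 52, 10, lambda: None))   # 8 s lag  -> wait for execute time
```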
Further, if the difference between the current playback time and the execution time is less than the preset time threshold, it is determined whether the current playback time has reached the execution time; if the current playback time has reached the execution time, the client responds to the fork message to display the fork options.
In this embodiment, if the difference between the current playback time and the execution time is less than the preset time threshold, the corresponding client's playback delay is shorter than the maximum tolerable duration (the tolerance duration). The fork options can therefore be displayed on the client only after the current playback time reaches the execution time. In other words, when the difference is below the preset threshold, the client, after receiving the fork message, does not respond to it by displaying the fork options immediately, but waits until the content of the interactive video before the fork node has finished playing before displaying them. If it is determined that the current playback time has not reached the execution time, the interactive video continues to play, and the fork options are displayed on the client only when the current playback time reaches the execution time.
The tolerance duration can be set to 10 seconds. In practice, a 10-second tolerance is only an example; the value can be adjusted based on practical experience and different business needs. For instance, the tolerance duration at some playback nodes can be 5 seconds, and at others 20 or 30 seconds. In particular, if the playback paths do not need to merge into a single unified ending, that is, if the subsequent content of the interactive video does not need to be synchronized among users, the tolerance duration can be an extremely large value.
In one embodiment, the client can also count down the display of the fork options and, when the countdown ends, automatically select the playback path corresponding to the default fork option (the default playback path). Let T be the display countdown of the fork options, T1 the execution time, T2 the current playback time, and T3 the time threshold (the tolerance duration); then T = T3 - (T1 - T2).
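The countdown formula can be checked numerically; a hedged sketch whose variable names mirror the text (T1 the execution time, T2 the current playback time, T3 the tolerance):

```python
def fork_countdown(t1_execute, t2_video, t3_tolerance):
    """Display countdown for the fork options: T = T3 - (T1 - T2)."""
    return t3_tolerance - (t1_execute - t2_video)

# With a 10 s tolerance and an execution time of 60 s:
print(fork_countdown(60, 60, 10))   # no lag  -> the full 10 s countdown
print(fork_countdown(60, 52, 10))   # 8 s lag -> only a 2 s countdown
```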
In one embodiment, if the difference between the current playback time and the execution time is greater than the preset time threshold, the client can chase frames toward the server after receiving the fork message. From the user's point of view, "frame chasing" feels like the video suddenly "jumping": for example, the picture at second 50 is playing one moment and the picture at second 60 the next, so the user feels that part of the content has been "lost" and that the video has skipped ahead to continue from a later point. Frame chasing can further reduce the playback delay.
Optionally, in addition to chasing frames after receiving a fork message, the client can also chase frames toward the server after receiving a heartbeat packet sent by the server. Heartbeat packets can be sent by the server periodically at a preset interval, for example once every 3 seconds.
The frame chasing process is specifically as follows: through an interface pre-established between the client and a Content Delivery Network (CDN), the client obtains the video content of the interactive video starting from a target playback moment, where the difference between the target playback moment and the execution time is less than the time threshold (the tolerance duration); the client then plays that video content. For example, if the current playback time is second 50, the execution time is second 65, and the time threshold is 10 seconds, the client can obtain the content of the interactive video corresponding to second 60 through the pre-established interface with the CDN and then start playing from second 60 of the interactive video. The content played from second 60 onward is the video content starting from the target playback moment. These values are only examples, and practical applications are not limited to them.
In one embodiment, obtaining the video content of the interactive video starting from the target playback moment includes: searching the video buffer for the content of the interactive video starting from the target playback moment; if the content is found, obtaining it from the video buffer; and if it is not found, pulling the content of the interactive video starting from the target playback moment from the CDN through the interface.
For an interactive video in MP4 format, the client first pulls a segment of video content from the CDN during playback and caches it locally. For example, while playing the content from second 0 to second 20 of the interactive video, it can pull and cache the content from second 20 to second 40. When chasing frames, the client can therefore first search its local cache for the content of the interactive video starting from the target playback moment: if the content is found, the client jumps directly to it and starts playing; if it is not found, the content has not yet been cached, so the client first pulls it from the CDN into the local cache and then reads and plays the content starting from the target playback moment from the cache.
M3U8-format video is similar to MP4-format video. For an interactive video in M3U8 format, the video file is split into multiple video segments; during playback, the client first pulls several segments from the CDN and caches them locally, for example caching segments 2 through 4 while playing segment 1. Since the duration of each segment is fixed, the client, when chasing frames, can first determine the segment starting from the target playback moment and search the cache for it: if the segment is found, it is played directly from the target playback moment; if it is not found, the segment is pulled from the CDN and cached, then read from the cache and played from the target playback moment.
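The cache-first lookup for the chase target can be sketched as follows; the CDN fetch is simulated by a callback, and the segment layout is illustrative:

```python
def chase_to(target_time, cache, fetch_from_cdn):
    """Return the video content covering target_time, preferring the
    local cache and pulling from the CDN only on a cache miss."""
    for (start, end), content in cache.items():
        if start <= target_time < end:
            return content, "cache"
    segment = fetch_from_cdn(target_time)      # miss: pull and cache it
    cache[segment["range"]] = segment["data"]
    return segment["data"], "cdn"

cache = {(40, 60): "segment-40-60"}
fake_cdn = lambda t: {"range": (60, 80), "data": "segment-60-80"}
print(chase_to(50, cache, fake_cdn))   # ('segment-40-60', 'cache')
print(chase_to(60, cache, fake_cdn))   # ('segment-60-80', 'cdn')
```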
In one embodiment, after the fork options are displayed on the client, the client can also receive the user's selection instruction for the fork options and, in response to the selection instruction, play the video content in the interactive video corresponding to the selected fork option. For example, for the interactive video shown in FIG. 1, if the user sends a selection instruction for option 1, the video content corresponding to option 1 starts playing.
In the embodiments of the present disclosure, when the live room plays an interactive video, a unified timeline (called the script timeline) is used for every client. The script timeline, also called the "scenario axis", is an ideal time. From the content creator's point of view, it describes what happens at relative times measured from the time zero at which the interactive video starts playing: for example, the creator wants a plot fork to begin at the 10th minute of the script, a plot aggregation to appear at the 15th minute, and all content to end at the 30th minute. The emphasis here is on the ordinal, that is, on relative time, or the time offset. The time at which a user actually sees the video, by contrast, is an absolute time that depends on factors such as each user's network conditions.
The server sends fork messages to the client according to the script time, informing the client whether a fork operation should currently be performed. By comparing the current playback time of the interactive video with the execution time, the client learns its own current delay and, when the delay exceeds the tolerance duration, displays the fork options on the client immediately after receiving the fork message and then chases frames. This reduces the client's delay in playing the interactive video, allows different clients to play the interactive video on a relatively unified timeline, and facilitates interaction among the clients.
Referring to FIG. 15, a numerical embodiment is described below as an example.
For example, take a "script timeline" of 30 minutes, with the content of the interactive video officially starting at 18:00:00 on July 25, 2019.
In the ideal case, suppose user Zhang San's network is excellent, with no stalling and no delay in receiving long-connection messages, and that Zhang San makes his choice immediately when the fork selection interaction appears. He would then receive the fork message for fork node R and display node R's fork options at the absolute time 2019-07-25 18:01:00; if Zhang San selects fork node B, he receives fork node B's fork message and displays node B's fork options at the absolute time 2019-07-25 18:04:00; if he selects fork node A, he receives fork node A's fork message and displays node A's fork options at the absolute time 2019-07-25 18:06:00.
In reality the ideal case does not exist: the network conditions of Li Si, Wang Wu, and others differ, so each of their fork selection moments may be several seconds, or even ten or twenty seconds, later than in Zhang San's ideal example (2019-07-25 18:00:08, 2019-07-25 18:00:20, and so on).
Suppose that when user Li Si plays this interactive video, the network stalls for 8 seconds (lag=8); this 8-second stall occurs right at the start of playback, after Li Si's client receives the start instruction, and assume there is no stalling during the 52 seconds from the absolute time 2019-07-25 18:00:08 to 2019-07-25 18:01:00. Then, at the absolute time 2019-07-25 18:01:00, the video Li Si sees has reached second 52 of Li Si's "video experience axis" (video_time=52).
At the same moment, the server-side "instruction controller", after checking the server time, determines that the "script timeline/scenario axis" stored on the server has advanced to second 60 (script_time=60), so the instruction controller sends a "fork" message over the long connection (this message records that the receiver should display the fork selection interface and the corresponding options when its video_time reaches (is greater than or equal to) second 60, that is, the current script time has reached the execution time of the plot fork operation, execute_time=60).
Assume user Li Si receives this "fork" instruction without delay. At the absolute time 2019-07-25 18:01:00, Li Si's client receives the instruction but does not display anything, because at this moment Li Si's video_time=52 is less than execute_time=60 (60 here being the relative time, measured from the interactive video's time zero, at which this fork instruction executes), meaning the content Li Si is watching has not yet reached the condition for triggering the display of the branch options. Nothing happens on the player interface, and the player continues playing the current video.
Time advances: at the absolute time 2019-07-25 18:01:01, video_time has accumulated to 53, but still video_time < execute_time, so Li Si's client still does not display the fork-option interaction interface and continues playing the current video. Second by second, time advances to the absolute time 2019-07-25 18:01:08, when Li Si's video_time has accumulated to 60; now video_time ≥ execute_time is satisfied, and the fork-option interaction interface pops up in the player on Li Si's client, asking Li Si to choose A or B.
Further, suppose the allowed tolerance duration is 10 seconds (waiting_duration=10; that is, the difference between what different users see must not exceed 10 seconds). Suppose Li Si was delayed by 8 seconds and Wang Wu was delayed by 20 seconds at the start (assuming that after the 20-second stall, Wang Wu experiences no further stalls for the rest of the interactive video). At the absolute time 2019-07-25 18:01:00, Li Si's video_time=52 while Wang Wu's video_time=40 (because of the 20-second stall, lag=20, Wang Wu has only seen the first 40 seconds of the video).
Second by second, time advances to the absolute time 2019-07-25 18:01:10, when Wang Wu's video_time has only accumulated to 50. Although the condition video_time ≥ script_time is not satisfied, there is another condition: the gap between video_time and script_time must not exceed 10 seconds (waiting_duration=10). So at this moment (video_time + waiting_duration ≥ script_time), the player on Wang Wu's client does not wait for video_time=60; instead, at video_time=50, that is, at 2019-07-25 18:01:10, it immediately performs the following actions:
(1) The fork-option interaction interface pops up immediately in the player, asking user Wang Wu to choose A or B.
(2) Frame chasing: the playing video immediately jumps 10 seconds ahead. Through frame chasing, the difference between what Wang Wu and Zhang San see no longer exceeds 10 seconds.
In another embodiment, a countdown can be displayed after the fork options pop up. Still using the values in FIG. 15, suppose Zhang San's client pops up the fork options at 18:01:00 with a normal 10-second countdown (for example, count_down=10); after choosing a fork option and watching the 10-second countdown, Zhang San starts playing the next video segment at 18:01:10. Li Si's client was delayed by 8 seconds at the start of playback; after popping up the fork options at 18:01:08, it has a 2-second countdown (count_down - lag = 10 - 8 = 2), and even if Li Si makes a choice within those 2 seconds, the next video segment corresponding to his chosen branch plays only after the countdown reaches 0 (that is, at 2019-07-25 18:01:10). Wang Wu's client was delayed by 20 seconds at the start of playback; so that the difference between what Wang Wu and Zhang San or Li Si see does not exceed 10 seconds, Wang Wu's client can pop up the fork options at 18:01:10 and immediately chase 10 seconds (waiting_duration - lag = 10 - 20 = -10) to start playing the next segment without displaying a countdown. In this case, Wang Wu's video content is out of sync with the fork options popped up on the interaction interface: the content Wang Wu is watching has not yet reached 18:01:20, when the fork options should appear, but the options are forced up at 18:01:10 anyway. Alternatively, Wang Wu's client can pop up the fork options in advance at 18:01:00 (at which point Wang Wu's video_time=40 and video_time + count_down + waiting_duration ≥ script_time) and display a 10-second countdown; likewise, Wang Wu's video content is then out of sync with the popped-up fork options, since the content he is watching has not reached 18:01:20, when the options should appear, yet they are forced up at 18:01:00, and his client immediately chases 10 seconds at 18:01:10 (Wang Wu's video_time=50). While the fork options with a countdown from 10 appear on Wang Wu's client at 18:01:00, seconds 40 to 50 of the previous video segment continue to play. Through frame chasing, the difference between what Wang Wu and Zhang San see no longer exceeds 10 seconds.
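The three clients in the worked example can be replayed numerically. The sketch below is only one reading of the example: a client whose startup lag is within the tolerance waits until its own playback reaches the execution time, while a client lagging beyond the tolerance pops the options up as soon as video_time + waiting_duration reaches the execution time (and then chases frames). The function name is hypothetical:

```python
def fork_popup_second(lag, execute_time=60, waiting_duration=10):
    """Absolute second (counted from the 18:00:00 start) at which
    the fork options appear for a client that stalled `lag` seconds
    at startup, per the numerical example."""
    if lag >= waiting_duration:
        # solve (now - lag) + waiting_duration >= execute_time for now
        return execute_time + lag - waiting_duration
    # within tolerance: wait until own playback reaches the execution time
    return execute_time + lag

print(fork_popup_second(lag=0))    # Zhang San: 60 -> 18:01:00
print(fork_popup_second(lag=8))    # Li Si:    68 -> 18:01:08
print(fork_popup_second(lag=20))   # Wang Wu:  70 -> 18:01:10
```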
The various technical features of the above embodiments can be combined arbitrarily as long as there is no conflict or contradiction between them; for reasons of space, not every combination is described, but any combination of these technical features also falls within the scope of the present disclosure.
FIG. 16 is a block diagram of an interactive video playback control apparatus according to an embodiment of the present disclosure. The apparatus may include a receiving module 501, a judgment module 502, and a display module 503.
The receiving module 501 is configured to receive a fork message sent by the server when the current script time of the interactive video reaches the execution time at which the interactive video performs a plot fork operation; the current script time is the time offset of the playback time of the current video frame in the interactive video relative to the video's initial playback time.
The judgment module 502 is configured to determine whether the difference between the current playback time at which the client plays the interactive video and the execution time is greater than or equal to a preset time threshold.
The display module 503 is configured to respond to the fork message by displaying fork options on the client if the difference is greater than or equal to the preset time threshold.
For the implementation of the functions and roles of the modules of the above apparatus, refer to the implementation of the corresponding steps of the above method; details are not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the present disclosure, which those of ordinary skill in the art can understand and implement without creative work.
The apparatus embodiments of the present disclosure can be applied on a computer device, such as a server or a terminal device. The apparatus embodiments can be implemented in software, in hardware, or in a combination of software and hardware. Taking software implementation as an example, each apparatus is, as a logical device, formed by the processor of the computer device it resides on reading the corresponding computer program instructions from non-volatile memory into memory and running them. At the hardware level, FIG. 17 is a hardware structure diagram of the computer device in which the interactive video playback control apparatus of an embodiment of the present disclosure resides; in addition to the processor 601, the memory 602, the network interface 603, and the non-volatile storage 604 shown in FIG. 17, the server or electronic device in which the apparatus of the embodiment resides may also include other hardware according to the actual functions of the computer device, which is not described further here.
Correspondingly, an embodiment of the present disclosure also provides a computer storage medium storing a program which, when executed by a processor, implements the method of any of the above embodiments.
Correspondingly, an embodiment of the present disclosure also provides a client including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method of any of the above embodiments when executing the program.
As shown in FIG. 18, an embodiment of the present disclosure also provides an interactive video playback control system. The system includes a server and a plurality of clients, each client including a processor and a memory storing a computer program executable by the processor, where the processor implements the method of any of the embodiments when executing the computer program; the server is configured to send a fork message to each client when the current script time of the interactive video reaches the execution time at which the interactive video performs the plot fork operation.
In some examples, any live streaming platform according to the foregoing embodiments can be implemented on the server.
The various technical features of the above embodiments can be combined arbitrarily as long as there is no conflict or contradiction between them; for reasons of space, not every combination is described, but any combination of these technical features also falls within the scope of the present disclosure.
According to the embodiments of the present disclosure, after the fork message sent by the server is received, it is determined whether the difference between the current playback time at which the client plays the interactive video and the execution time is greater than or equal to the preset time threshold; if the difference is greater than or equal to the preset time threshold, the fork options are displayed on the client immediately, rather than only after the current playback time reaches the execution time of the plot fork operation. In this way, the client's delay in playing the interactive video is reduced, different clients can play the interactive video on a relatively unified timeline, and interaction among the clients is facilitated.
The embodiments of the present disclosure may take the form of a computer program product implemented on one or more storage media containing program code (including but not limited to disk storage, CD-ROM, and optical storage). Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the disclosure herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the scope and spirit of the present disclosure being defined by the following claims.

Claims (28)

  1. 一种交互视频的播放控制方法,包括:
    接收服务器在交互视频的当前脚本时间达到所述交互视频执行剧情分叉操作的执行时间时发送的分叉消息;其中,所述当前脚本时间是所述交互视频中当前视频帧的播放时间相对于所述交互视频的起始播放时间的时间偏移量;
    判断客户端播放所述交互视频的当前播放时间与所述执行时间的差值是否大于或等于预设的时间阈值;
    若所述差值大于或等于所述预设的时间阈值,响应所述分叉消息,以在客户端上显示分叉选项。
  2. 根据权利要求1所述的方法,还包括:
    若所述差值小于所述预设的时间阈值,判断所述当前播放时间是否达到所述执行时间;
    若所述当前播放时间达到所述执行时间,响应所述分叉消息,以在客户端上显示分叉选项。
  3. 根据权利要求2所述的方法,还包括:
    若判断所述当前播放时间未达到所述执行时间,继续播放所述交互视频,直到所述当前播放时间达到所述执行时间时,在客户端上显示分叉选项。
  4. 根据权利要求3所述的方法,其中,所述分叉选项的显示倒计时被设置为:
    T=T3-(T1-T2);
    其中,T为所述分叉选项的显示倒计时,T1为所述执行时间,T2为所述当前播放时间,T3为所述时间阈值。
  5. 根据权利要求1所述的方法,还包括:
    若所述差值大于所述预设的时间阈值,在接收到所述分叉消息之后,向所述服务器追帧。
  6. 根据权利要求1所述的方法,还包括:
    若所述差值大于所述预设的时间阈值,在接收到所述服务器发送的心跳包之后,向所述服务器追帧。
  7. 根据权利要求6所述的方法,其中,所述心跳包由所述服务器按照预设的时间间隔周期性发送。
  8. 根据权利要求5或6所述的方法,其中,向所述服务器追帧包括:
    通过所述客户端与内容分发网络之间预先建立的接口,获取所述交互视频中的从目标播放时刻开始的视频内容;其中,所述目标播放时刻与所述执行时间的差值小于所述时间阈值;
    播放所述视频内容。
  9. 根据权利要求8所述的方法,其中,获取所述视频内容包括:
    在视频缓存区中查找所述交互视频中的从所述目标播放时刻开始的视频内容;
    若查找到所述视频内容,则从所述视频缓存区获取所述视频内容;
    若未查找到所述视频内容,则通过所述接口从所述内容分发网络拉取所述交互视频中的从所述目标播放时刻开始的视频内容。
  10. 根据权利要求1所述的方法,还包括:
    接收用户对所述分叉选项的选择指令;
    响应所述选择指令,播放所述交互视频中与被选中的分叉选项对应的视频内容。
  11. 一种交互视频的播放控制装置,包括:
    接收模块,用于接收服务器在交互视频的当前脚本时间达到所述交互视频执行剧情分叉操作的执行时间时发送的分叉消息;其中,所述当前脚本时间是所述交互视频中当前视频帧的播放时间相对于所述交互视频的起始播放时间的时间偏移量;
    判断模块,用于判断客户端播放所述交互视频的当前播放时间与所述执行时间的差值是否大于或等于预设的时间阈值;
    显示模块,用于若所述差值大于或等于所述预设的时间阈值,响应所述分叉消息,以在客户端上显示分叉选项。
  12. 一种计算机可读存储介质,其上存储有计算机程序,当所述计算机程序被处理器执行时,使所述处理器实现根据权利要求1至10中任意一项所述的方法。
  13. 一种客户端,包括处理器以及用于存储处理器可执行的计算机程序的存储器,其中,所述处理器执行所述计算机程序时实现根据权利要求1至10中任意一项所述的方法。
  14. 一种交互视频的播放控制系统,包括:
    服务器;以及
    多个客户端,每个所述客户端包括处理器以及用于存储处理器可执行的计算机程序的存储器,其中,所述处理器执行所述计算机程序时实现根据权利要求1至10中任意一项所述的方法,
    其中,所述服务器用于在交互视频的当前脚本时间达到所述交互视频执行剧情分叉操作的执行时间时,向每个所述客户端发送分叉消息。
  15. 根据权利要求14所述的播放控制系统,其中,
    在所述服务器上实现直播平台,所述直播平台包括节点编辑器和视频编辑器;
    所述节点编辑器用于接收节点编辑指令,并根据所述节点编辑指令创建播放节点以及设置各个播放节点对应的播放时间偏移量;其中,所述播放节点包括至少一个分叉节点以及所述分叉节点的至少两个子节点;
    所述视频编辑器用于接收多个视频文件,并分别将各个视频文件与相邻播放节点之间的多个播放子路径相关联,以生成交互视频;其中,同一分叉节点的各条播放子路径对应所述交互视频中并列的多个剧情分支,所述播放时间偏移量用于对直播间内各个客户端播放所述交互视频的播放进度进行控制,以使所述各个客户端播放所述交互视频的播放时间差小于预设值。
  16. 根据权利要求15所述的播放控制系统,其中,所述直播平台还包括:
    视频管理器,用于接收各个交互视频的播放时间信息和播放场次信息,并分别将所述各个交互视频的播放时间信息和播放场次信息与对应的交互视频进行关联。
  17. 根据权利要求15所述的播放控制系统,其中,所述直播平台还包括:
    导控台,用于在所述交互视频的播放过程中,响应接收到的交互视频编辑指令,在所述交互视频中插入播放节点、在所述交互视频中删除播放节点和/或在所述交互视频中插入广告。
  18. 根据权利要求17所述的播放控制系统,其中,所述导控台还用于:
    对播放所述交互视频的客户端执行的指令操作次数进行统计;和/或
    对各个播放子路径上的客户端的数量进行统计。
  19. 根据权利要求16所述的播放控制系统,其中,所述直播平台还包括:
    指令控制机,用于获取服务器时间,若所述服务器时间达到一个所述交互视频的播放时间信息指示的播放时间,通过与所述客户端预先建立的长连接向所述客户端发送交互视频播放开始指令,所述交互视频播放开始指令用于指示所述客户端访问所述交互视频的视频地址,以开始播放所述交互视频。
  20. 根据权利要求19所述的播放控制系统,其中,所述指令控制机还用于:
    响应所述客户端发送的短连接建立请求,建立与所述客户端的短连接,并通过所述短连接接收所述客户端发送的用于请求开始播放所述交互视频的播放请求指令。
  21. 根据权利要求15所述的播放控制系统,其中,所述直播平台还包括:
    弹幕处理器,用于获取所述客户端发送的弹幕,以及获取所述客户端发送所述弹幕 时播放所述交互视频的当前播放路径,并将所述弹幕转发至所述当前播放路径上的其他客户端。
  22. 根据权利要求21所述的播放控制系统,其中,所述弹幕处理器还用于:
    获取第二客户端发送所述弹幕的时刻与所述交互视频的起始播放时间之间的第二时间偏移量,并将所述弹幕转发至第一客户端,所述第一客户端的当前播放时间与所述交互视频的起始播放时间之间的第一时间偏移量小于所述第二时间偏移量。
  23. 根据权利要求15所述的播放控制系统,其中,所述直播平台还包括:
    播放器,用于接收来自所述客户端的播放路径选择指令,并根据所述播放路径选择指令确定所述交互视频的目标播放子路径,并将与所述目标播放子路径相关联的视频文件的统一资源定位符URL发送至所述客户端。
  24. 根据权利要求23所述的播放控制系统,其中,
    所述播放器还用于将所述交互视频的当前脚本时间发送至所述客户端;
    所述客户端被配置成:在所述当前脚本时间与当前播放所述交互视频的实际播放时间的差值大于第二预设时间阈值时,向所述服务器追帧。
  25. 根据权利要求23所述的播放控制系统,其中,所述播放器还用于:
    获取第一客户端的弹幕白名单,所述弹幕白名单用于存储预设播放路径的列表,所述预设播放路径上的任一客户端发送的弹幕被允许在所述第一客户端上显示;
    在接收到第二客户端发送的弹幕时,判断所述第二客户端的当前播放路径是否在所述弹幕白名单中;
    若所述第二客户端的当前播放路径不在所述弹幕白名单中,则在所述第一客户端对所述第二客户端发送的弹幕进行隔离。
  26. 根据权利要求23所述的播放控制系统,其中,所述播放器还用于:
    存储各个播放节点的弹幕作用域;
    当第一客户端的当前播放路径在所述弹幕作用域内时,在所述第一客户端上对当前播放路径不在所述弹幕作用域内的第二客户端发送的弹幕进行屏蔽。
  27. 根据权利要求23所述的播放控制系统,其中,所述播放器还用于:
    对所述客户端的播放路径进行更新。
  28. The playback control system according to claim 27, wherein the player is further configured to:
    update the bullet comments displayed on the client after the playback path is updated.
PCT/CN2020/116677 2019-09-24 2020-09-22 Processing and playing control over interactive video WO2021057693A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/762,282 US20220417619A1 (en) 2019-09-24 2020-09-22 Processing and playing control over interactive video

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910907638.6 2019-09-24
CN201910906907.7A CN112637657A (zh) 2019-09-24 2019-09-24 Interactive video playback control method, apparatus and system
CN201910906907.7 2019-09-24
CN201910907638.6A CN112637612B (zh) 2019-09-24 2019-09-24 Live streaming platform and interactive video processing method thereof

Publications (1)

Publication Number Publication Date
WO2021057693A1 true WO2021057693A1 (zh) 2021-04-01

Family

ID=75165611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116677 WO2021057693A1 (zh) 2019-09-24 2020-09-22 交互视频的处理与播放控制

Country Status (2)

Country Link
US (1) US20220417619A1 (zh)
WO (1) WO2021057693A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188225B (zh) * 2020-09-29 2022-12-06 上海哔哩哔哩科技有限公司 Bullet comment delivery method for live replay and bullet comment replay method for live video

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102917258A (zh) * 2012-10-12 2013-02-06 深圳Tcl新技术有限公司 Video playing method, terminal and system based on video content
KR20130061501A (ko) * 2011-12-01 2013-06-11 한국전자통신연구원 System and method for multi-scenario playback on heterogeneous devices
CN104883627A (zh) * 2015-06-22 2015-09-02 田志明 Plot-based film and television work, and screening apparatus and method therefor
CN105430509A (zh) * 2015-11-27 2016-03-23 北京奇艺世纪科技有限公司 Multimedia file playing method and apparatus
CN105472456A (zh) * 2015-11-27 2016-04-06 北京奇艺世纪科技有限公司 Video playing method and apparatus
CN105828125A (zh) * 2016-03-31 2016-08-03 北京奇艺世纪科技有限公司 Video pushing method and apparatus
CN105898394A (zh) * 2016-05-25 2016-08-24 腾讯科技(深圳)有限公司 Multimedia playing method and related device
CN106341713A (zh) * 2016-10-08 2017-01-18 广东欧珀移动通信有限公司 Multimedia synchronous playing method, apparatus, system and terminal
CN106385594A (zh) * 2016-09-18 2017-02-08 深圳市青柠互动科技开发有限公司 Method for optimizing live video streaming service
CN109985382A (zh) * 2019-04-03 2019-07-09 腾讯科技(深圳)有限公司 Script execution method, apparatus, device and storage medium for plot nodes


Also Published As

Publication number Publication date
US20220417619A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
US11960446B2 (en) Video content graph including enhanced metadata
US9077956B1 (en) Scene identification
US9253533B1 (en) Scene identification
KR101649385B1 Deletion of viewed portions of recorded programs
US8285121B2 (en) Digital network-based video tagging system
EP1239673B1 (en) Method and memory for storing content
US8640030B2 (en) User interface for creating tags synchronized with a video playback
US8539535B2 (en) Methods and apparatus for supporting VOD requests in a system with hierarchical content stores
US8156198B2 (en) Live custom media play lists
US20120114302A1 (en) Methods and systems for use in controlling playback of content in relation to recorded content
BRPI0821388A2 Streaming media playback control
CN108600850B Video sharing method, client, server and storage medium
KR20240055116A Techniques for advancing playback of interactive media titles in response to user selections
WO2021057693A1 Processing and playing control over interactive video
US20130136423A1 (en) Identifying series candidates for digital video recorder
CN112637612B (zh) Live streaming platform and interactive video processing method thereof
US10897643B2 (en) Content streaming platform with dynamically arranged media content
CA3104700A1 (en) Systems and methods for providing media content for continuous watching
CN112637611B (zh) Interactive video playing method, apparatus and system
JP5939914B2 (ja) Switching device and program
US11616996B2 (en) Systems and methods for providing media content for continuous watching
US10887652B2 (en) Systems and methods for providing media content for continuous watching
JP5101570B2 (ja) Recording medium on which video data and an application program are recorded, and playback apparatus and method therefor
CN112637657A (zh) Interactive video playback control method, apparatus and system
JP2009032342A (ja) Information storage medium, information playback apparatus, and information playback method

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20869288; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 20869288; Country of ref document: EP; Kind code of ref document: A1