CN114071215A - Video playing method, device, system and storage medium - Google Patents

Video playing method, device, system and storage medium

Info

Publication number
CN114071215A
CN114071215A
Authority
CN
China
Prior art keywords
video
video frame
special effect
target
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010758734.1A
Other languages
Chinese (zh)
Inventor
杨洋
蔡鼎
金剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010758734.1A priority Critical patent/CN114071215A/en
Publication of CN114071215A publication Critical patent/CN114071215A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the present application provide a video playing method, device, system, and storage medium. In these embodiments, a forwarding node caches the video frames already provided to a terminal device while pushing a video stream to that device; when video frames from the video source node cannot be obtained, the forwarding node obtains a target video frame from the cached video frames and provides it to the terminal device for playing. Thus, even if communication between the video source node and the forwarding node fails, the terminal device can keep playing video to the viewer, which reduces the probability of a video cut-off during playback and helps improve the viewing experience.

Description

Video playing method, device, system and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a video playing method, device, system, and storage medium.
Background
A director (video switcher) is a device or system that combines multiple video signals. During live production, all video signals are sent to the director, where editing is performed in real time. The director is equivalent to a bank of electronic switches that can turn each video signal circuit on or off at the press of a key. According to actual needs, the director can switch the received multiple video channels into a single output video stream, or mix multiple videos into one output video stream.
In practical applications, the video stream pushed to viewers by the director often cuts off because the director cannot obtain the video provided by a video source, such as a camera, in time, resulting in a poor user experience.
Disclosure of Invention
Aspects of the present application provide a video playing method, device, system, and storage medium to reduce the probability of a video stream cut-off and thereby help improve the user's viewing experience.
An embodiment of the present application provides a video playing system, including: the system comprises a first video source node, a forwarding node and terminal equipment;
the first video source node is used for providing a first video stream to the forwarding node;
the forwarding node is configured to cache a video frame of a second video stream provided to the terminal device in a process of providing the second video stream to the terminal device; the second video stream is generated based on the first video stream; under the condition that the video frame provided by the first video source node cannot be acquired, acquiring a first target video frame from the cached video frame; and providing the first target video frame for the terminal equipment to play.
An embodiment of the present application further provides a video playing method, including:
acquiring a first video stream provided by a first video source node;
in the process of providing a second video stream generated based on the first video stream to a terminal device, caching video frames of the second video stream which are provided to the terminal device;
under the condition that the video frame provided by the first video source node cannot be acquired, acquiring a first target video frame from the cached video frame;
and providing the first target video frame for the terminal equipment to play.
An embodiment of the present application further provides an electronic device, including: a memory, a processor, and a communications component; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the above-mentioned video playback method.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the above-mentioned video playing method.
In the embodiments of the present application, a forwarding node caches the video frames already provided to a terminal device while pushing a video stream to that device; when video frames from the video source node cannot be obtained, the forwarding node obtains a target video frame from the cached video frames and provides it to the terminal device for playing. Thus, even if communication between the video source node and the forwarding node fails, the terminal device can keep playing video to the viewer, which reduces the probability of a video cut-off during playback and helps improve the viewing experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic structural diagram of a video playing system according to an embodiment of the present application;
fig. 1b is a schematic structural diagram of another video playing system according to an embodiment of the present application;
fig. 1c is a schematic diagram of a layer processing process provided in an embodiment of the present application;
fig. 1d is a schematic structural diagram of another video playing system according to an embodiment of the present application;
fig. 1e is a schematic view of a video frame selection policy priority setting interface provided in an embodiment of the present application;
fig. 1f is a schematic view of a target video frame selection policy selection interface provided in an embodiment of the present application;
fig. 1g is a signaling diagram of a transition process provided in the embodiment of the present application;
fig. 1h is a block diagram of a logical structure of a director service provided in the embodiment of the present application;
fig. 2 and fig. 3 are schematic flow charts of a video playing method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, the forwarding node often cannot obtain the video provided by a video source, such as a camera, in time, so the video stream pushed to viewers cuts off and the user experience suffers. To address this technical problem, in some embodiments of the present application, a forwarding node caches the video frames provided to a terminal device while pushing a video stream to that device; when video frames from the video source node cannot be obtained, the forwarding node obtains a target video frame from the cached video frames and provides it to the terminal device for playing. Thus, even if communication between the video source node and the forwarding node fails, video can keep playing for the viewer, which reduces the probability of a cut-off during playback and helps improve the viewing experience.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
Fig. 1a is a schematic structural diagram of a video playing system according to an embodiment of the present application. As shown in fig. 1a, the video playing system includes: a video source node 11, a forwarding node 12, and a terminal device 13. The video source node 11, forwarding node 12, and terminal device 13 shown in fig. 1a are merely exemplary and do not limit the implementation forms of these nodes.
In the present embodiment, the video source node 11 refers to an electronic device that can provide video data to the forwarding node 12. In some embodiments, the video source node 11 may be an image capture device with a video capture function; for example, it may be a camera, a video camera, or a terminal device with a video capture function, such as a smartphone, tablet computer, or wearable device. An image capture device can capture video and provide the captured video to the forwarding node 12. That is, for an image capture device, the video stream provided to the forwarding node 12 may be a video stream captured by the device in real time.
If the video source node 11 is an image capture device, the video source node 11 and the forwarding node 12 may be connected wirelessly or by wire. Optionally, the video source node 11 may be communicatively connected to the forwarding node 12 through a mobile network; accordingly, the network standard of the mobile network may be any of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like. Alternatively, the video source node 11 may be communicatively connected to the forwarding node 12 via Bluetooth, WiFi, infrared, etc.
In other embodiments, the video source node 11 is a storage node that stores video data. The storage node may be any storage medium or electronic device with a data storage function. For example, the storage node may be a storage medium such as a hard disk or a USB flash drive, or a terminal device such as a computer, mobile phone, or wearable device. Alternatively, the storage node may be a single server device, a cloud server array, or a Virtual Machine (VM), container group, or the like with a storage function in a cloud server array. A storage node can provide the forwarding node 12 with video data stored in advance. That is, for a storage node, the video stream provided to the forwarding node 12 may be a video stream pre-stored at the storage node. The storage node may be a storage medium deployed on the physical machine where the forwarding node 12 is located, or on another physical machine.
If the video source node 11 is a storage medium deployed on another physical machine, the video source node 11 and the forwarding node 12 may be connected wirelessly or by wire; for the specific communication mode, refer to the communication modes described above for the case where the video source node 11 is an image capture device.
In the present embodiment, the number of video source nodes 11 may be 1 or more. Plural means 2 or more. The video streams provided by the plurality of video source nodes 11 may be video streams taken from different perspectives for the same scene, i.e., multi-perspective video streams.
In this embodiment, the forwarding node 12 refers to a computer device that can manage video data and provide video-forwarding services for a user, and generally has the capability to undertake and guarantee such services. The user of the forwarding node 12 is a provider of a live broadcast service. For example, if a company is about to live-stream an event, the user of the forwarding node 12 is that company; if an individual wants to do live-streamed sales, the user of the forwarding node 12 may be the host of the live sales session or the platform providing the live sales service; and so on.
In this embodiment, the number of forwarding nodes 12 may be 1 or more. Multiple forwarding nodes 12 may be deployed on the same physical machine or on different physical machines. As for the physical Machine where the forwarding node 12 is located, the physical Machine may be a single server device, or may also be a cloud server array, or a Virtual Machine (VM), a container group, or the like running in the cloud server array. In addition, the physical machine may also refer to other computing devices with corresponding service capabilities, for example, a terminal device (running a service program) such as a computer.
In the present embodiment, the implementation form of the forwarding node 12 is not limited. In some embodiments, forwarding node 12 may be a video source node other than the first video source node. In other embodiments, forwarding node 12 may be implemented as a director. Alternatively, the forwarding node 12 may be a private director of the video playing service provider, and may be deployed in any physical space specified by the video playing service provider, or may be deployed in a private cloud of the video playing service provider. The video playing service provider can provide live broadcast service, video on demand service or recorded broadcast service, etc. Accordingly, the video playing system provided by the embodiment of the present application can be applied to a live broadcast scene, a video on demand scene, or a video recording and playing scene, but is not limited thereto.
Alternatively, the forwarding node 12 may be a director provided by a cloud vendor and deployed in the vendor's public cloud as a cloud director. In this case, the video playing service provider may apply to the cloud director for directing services for its live broadcasts. Optionally, the video playing service provider may register with the cloud director in advance and log in with its pre-registered account, password, and other information in order to use the director services the cloud director provides.
Accordingly, the cloud director may provide director services to the video playing service provider. In some embodiments, the video playing service provider may access the cloud director through the browser of its director device. The cloud director returns a login page to the director device, and the video playing service provider can log in to the cloud director with its pre-registered account, password, and other information. The cloud director then returns a director interface to the director device, through which the video playing service provider can perform director control.
Alternatively, the video playing service provider may develop its own director interface and invoke the director services through an Application Programming Interface (API) provided by the cloud director, thereby docking its director interface with the API provided by the cloud director.
In this embodiment, the terminal device 13 refers to a computer device used by a user that has communication, video playing, and related functions; it may be, for example, a smartphone, tablet computer, personal computer, or wearable device. Alternatively, the terminal device 13 may be a video playback device formed by a Set Top Box (STB), i.e., a digital video converter box, and a television.
In this embodiment, the forwarding node 12 and the terminal device 13 may be connected wirelessly or through a wire, and the specific communication manner may refer to the communication manner between the video source node 11 and the forwarding node 12, which is not described herein again.
In this embodiment, the forwarding node 12 may provide the video stream acquired from the video source node 11 to the terminal device 13, and the terminal device 13 plays the video stream to the viewer. The forwarding node 12 may directly provide the video stream acquired from the video source node 11 to the terminal device 13, or may process the video stream acquired from the video source node 11 and provide the processed video stream to the terminal device 13. For convenience of description and distinction, in the following embodiments of the present application, a video stream provided by the video source node 11 to the forwarding node 12 is defined as a first video stream; and defines the video stream provided by the forwarding node 12 to the terminal device 13 as the second video stream.
The second video stream may be the first video stream itself, or another video stream generated based on the first video stream. For example, the second video stream may be obtained by the forwarding node 12 processing the first video stream. In some embodiments, as shown in fig. 1b, the video source nodes 11 are a plurality of image capture devices that shoot the same scene from different perspectives, producing a multi-view video stream. For the forwarding node 12, the first video stream is this multi-view video stream. The plurality of video source nodes 11 may provide the first video stream to the forwarding node 12 in the form of video frames at a set video transmission rate. Since each video source node 11 has a different viewing angle, the video frames it provides to the forwarding node 12 also differ in perspective. Each time the forwarding node 12 receives the video frames provided by the plurality of video source nodes 11, those frames together form a multi-view video frame. Accordingly, the forwarding node 12 may treat each received multi-view video frame as multiple layers; the number of layers may be determined by the number of video source nodes 11, with each constituent frame of the multi-view video frame serving as one layer. Further, the forwarding node 12 may render the layers onto a background layer according to a set rendering template to obtain one frame of the second video stream. Optionally, as shown in fig. 1c, the forwarding node 12 may merge the multiple layers according to the set rendering template and map the merged layers onto the background layer to obtain one video frame of the second video stream (i.e., the video frame in fig. 1c). Each received multi-view video frame is processed in the same manner to obtain the second video stream.
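The layer-merging step above can be sketched as follows; this is a minimal illustration using NumPy, where `composite_frame`, the `(x, y, w, h)` slot list, and the nearest-neighbour resize are all hypothetical stand-ins for whatever rendering template the forwarding node actually applies:

```python
import numpy as np

def composite_frame(layers, background, template):
    """Render per-source layers onto a background layer.

    layers: list of HxWx3 uint8 arrays, one per video source node.
    background: canvas array the layers are pasted onto (copied, not mutated).
    template: list of (x, y, w, h) slots, one per layer; an assumed
    encoding of the patent's "set rendering template".
    """
    canvas = background.copy()
    for layer, (x, y, w, h) in zip(layers, template):
        # Naive nearest-neighbour resize of the layer into its slot.
        ys = np.arange(h) * layer.shape[0] // h
        xs = np.arange(w) * layer.shape[1] // w
        canvas[y:y + h, x:x + w] = layer[ys][:, xs]
    return canvas

# Two 720p "camera" layers composited side by side onto a 1080p canvas,
# yielding one frame of the second video stream.
bg = np.zeros((1080, 1920, 3), dtype=np.uint8)
cams = [np.full((720, 1280, 3), c, dtype=np.uint8) for c in (50, 200)]
frame = composite_frame(cams, bg, [(0, 0, 960, 540), (960, 0, 960, 540)])
```

Repeating this for every multi-view video frame received, with the same template, produces the second video stream frame by frame.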
Further, the forwarding node 12 provides the second video stream to the terminal device 13, and the second video stream is played by the terminal device 13. Alternatively, the forwarding node 12 may provide the second video stream to the terminal device 13 in batches in the form of video frames according to the set video transmission rate.
In practical applications, communication between the video source node 11 and the forwarding node 12 may fail, causing video cut-off, stalling, and the like between them; the forwarding node 12 then cannot continuously acquire video frames of the first video stream, and therefore cannot continuously provide new video frames to the terminal device 13, so a cut-off occurs at the terminal device 13, which undoubtedly hurts the viewing experience of viewers. A communication failure between the video source node 11 and the forwarding node 12 includes one or more of: a capture or video-transmission failure at the video source node 11, a video-reception failure at the forwarding node 12, and a failure of the transmission link between the two. Here, "more" means two or more.
To solve the above problem, in this embodiment the forwarding node 12 may cache the video frames of the second video stream that have already been provided to the terminal device 13 while providing the second video stream to it. Then, when the forwarding node 12 cannot acquire video frames provided by the first video source node, it may acquire a target video frame from the cached video frames and provide the target video frame to the terminal device 13 for playing. Thus, even if a video cut-off occurs between the video source node 11 and the forwarding node 12, the terminal device 13 can still obtain video frames to play, which reduces the probability of a cut-off at the terminal device during forwarding, improves playback stability, and helps improve the viewing experience.
In this embodiment of the application, each time the forwarding node 12 receives a video frame provided by the video source node 11, it may time the interval between two adjacent received frames; if this interval reaches or exceeds a set duration (defined as the first duration) without the next video frame from the video source node 11 arriving, it may be determined that the forwarding node 12 cannot acquire video frames provided by the video source node 11. The first duration may be determined by the video transmission rate between the video source node 11 and the forwarding node 12 and that between the forwarding node 12 and the terminal device 13. Preferably, the first duration is less than or equal to the duration of the user's persistence of vision.
Further, when it cannot obtain video frames provided by the video source node 11, the forwarding node 12 may obtain a target video frame from the cached video frames and provide the target video frame to the terminal device 13 for playing. The target video frame may be one frame or multiple frames (two or more). The video frames cached by the forwarding node 12 include the video frames of the second video stream already forwarded to the terminal device 13, and may further include video frames of other video streams forwarded to the terminal device 13 before the second video stream.
In some embodiments, a video frame selection policy is set in the forwarding node 12, and the target video frame may be obtained from the video frames cached by the forwarding node 12 according to the video frame selection policy. In this embodiment, the number and specific implementation of the video frame selection strategies are not limited. The following is an exemplary description in connection with several alternative embodiments.
Selection strategy 1: the target video frames are the N video frames closest to the current time among the cached video frames of the second video stream, where N is a positive integer. Accordingly, the forwarding node 12 may take the N most recent cached frames of the second video stream as the target video frames and provide them to the terminal device 13. Preferably, N is 1, which prevents visible rewinding and helps further improve the viewing experience of viewers.
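The caching-and-fallback behaviour, combined with selection strategy 1, can be sketched as follows; `ForwardingCache`, its `maxlen` bound, and the string frame ids are illustrative assumptions, not details from the patent:

```python
from collections import deque

class ForwardingCache:
    """Bounded cache of video frames already forwarded to the terminal.

    While the source is healthy, every forwarded frame is cached; when
    the source stalls, the most recent cached frame(s) are replayed so
    playback never cuts off. The bound `maxlen` is an assumption.
    """
    def __init__(self, maxlen=256):
        self.frames = deque(maxlen=maxlen)

    def forward(self, frame):
        self.frames.append(frame)
        return frame                 # frame sent on to the terminal device

    def fallback(self, n=1):
        # Strategy 1: the N cached frames closest to the current time.
        return list(self.frames)[-n:]

cache = ForwardingCache()
for f in ("frame-1", "frame-2", "frame-3"):
    cache.forward(f)
assert cache.fallback() == ["frame-3"]             # N = 1: no rewinding
assert cache.fallback(n=2) == ["frame-2", "frame-3"]
```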
Selection strategy 2: the video frames cached by the forwarding node 12 carry tag information. The tag information may be set by the user of the director device 14 (the director). Optionally, the forwarding node 12 may provide a tagging interface to the director device 14, which the director device 14 can present so that the director can tag cached video frames. The tag information may describe characteristics of the cached frames; for example, a tag may mark a highlight or an important plot point. Accordingly, the forwarding node 12 may obtain the tag information of the cached video frames of the second video stream and take the frames whose tags match the set tag information (e.g., highlight or important plot point) as the target video frames.
Selection strategy 3: the forwarding node 12 may obtain the access counts of the cached video frames of the second video stream, and acquire the target video frame from those frames according to their access counts. The forwarding node 12 may take the frames whose access count reaches or exceeds a set threshold as target video frames; alternatively, it may sort the cached frames of the second video stream by access count and select the top M frames, in descending order of access count, as target video frames, where M is a positive integer.
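Selection strategy 3 can be sketched as a small helper; the function name and both parameters (`threshold`, `top_m`) are illustrative, standing in for the set access-amount threshold and the top-M ordering described above:

```python
def select_by_access(frames, access_counts, threshold=None, top_m=None):
    """Strategy 3 sketch: pick cached frames by access count.

    frames: cached frame ids in playback order.
    access_counts: mapping frame id -> access count.
    Either keep frames at/above `threshold`, or the `top_m` most
    accessed; both parameter names are assumptions for illustration.
    """
    if threshold is not None:
        return [f for f in frames if access_counts.get(f, 0) >= threshold]
    ranked = sorted(frames, key=lambda f: access_counts.get(f, 0), reverse=True)
    return ranked[:top_m]

counts = {"a": 5, "b": 12, "c": 9}
assert select_by_access(["a", "b", "c"], counts, threshold=9) == ["b", "c"]
assert select_by_access(["a", "b", "c"], counts, top_m=2) == ["b", "c"]
```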
Selection strategy 4: the forwarding node 12 may take cached video frames of the second video stream that contain a target object as the target video frames. The target object may be a designated person, character, or the like.
The target object may be specified by the user of the terminal device 13. The terminal device 13 may present a target object selection interface for the user to choose a target object, and provide the identifier of the chosen object to the forwarding node 12 in response to a selection-completed event. Based on this identifier, the forwarding node 12 may obtain the features of the target object; identify, from the cached video frames of the second video stream, the frames containing the target object according to those features; and, using the timestamp information of those frames, take the N frames closest to the current time as the target video frames.
Optionally, the target object selection interface may include a selection-completion control. Accordingly, the above selection-completed event may be the event generated by the terminal device 13 in response to a touch operation on that control.
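Strategy 4's timestamp-ordered selection can be sketched as follows; the `match` predicate stands in for whatever recognition model identifies the target object from its features, and all names here are hypothetical:

```python
def frames_with_target(cached, target_features, match, n=1):
    """Strategy 4 sketch: newest N cached frames containing a target object.

    cached: list of (timestamp, frame) pairs from the forwarding cache.
    match(frame, target_features) -> bool is an assumed hook for the
    recognition step; its name and signature are illustrative.
    """
    hits = [(ts, f) for ts, f in cached if match(f, target_features)]
    hits.sort(key=lambda pair: pair[0])   # order by timestamp
    return [f for _, f in hits[-n:]]      # the N frames closest to "now"

# Toy frames whose content is encoded in the id string.
cached = [(1, "A:alice"), (2, "B:bob"), (3, "C:alice")]
match = lambda frame, features: features in frame
assert frames_with_target(cached, "alice", match, n=2) == ["A:alice", "C:alice"]
```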
Selection strategy 5: the video frames cached by the forwarding node 12 further include frames of other video streams played before the second video stream, such as advertisements already played, a feature presentation related to the second video stream, or short videos. Accordingly, the forwarding node 12 may also take video frames of these other video streams as target video frames.
It should be noted that, in the embodiments of the present application, one or more video frame selection policies may be set in the forwarding node 12 ("more" meaning two or more); some or all of selection strategies 1-5 may be implemented in the forwarding node 12. If multiple selection policies are set, their priorities may be preset. Accordingly, the forwarding node 12 may obtain the priorities of the set video frame selection policies; determine a target video frame selection policy from among them according to those priorities; and acquire the target video frame from the cached frames according to the target policy. Optionally, the forwarding node 12 may take the highest-priority policy among the multiple video frame selection policies as the target video frame selection policy.
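Choosing the target policy by priority can be sketched as follows; encoding the priorities as a name-to-rank mapping (1 = highest, as in fig. 1e) is an assumed representation of the configured priorities:

```python
def pick_policy(policies, priorities):
    """Choose the target selection policy by configured priority.

    policies: names of the selection strategies enabled on the node.
    priorities: mapping name -> rank, 1 = highest; unknown policies
    sort last. Both encodings are assumptions for illustration.
    """
    return min(policies, key=lambda p: priorities.get(p, float("inf")))

prio = {"latest_n": 2, "tagged": 1, "by_access": 3}
assert pick_policy(["latest_n", "by_access"], prio) == "latest_n"
assert pick_policy(["tagged", "latest_n"], prio) == "tagged"
```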
The priorities of the multiple video frame selection policies may be set by the director of the director device 14 or by the user of the terminal device. If set by the user of the terminal device 13: as shown in fig. 1e, the terminal device 13 may present a selection information item listing the various video frame selection policies, and the user may set their priorities; the numbers 1-5 in fig. 1e may represent the priority order. The terminal device 13 may then provide the priorities of the policies to the forwarding node 12 in response to a setting-completed event. Optionally, as shown in fig. 1e, the terminal device 13 may also display a setting-completion control ("OK"); accordingly, the setting-completed event may be the event generated by the terminal device 13 in response to a touch operation on that control. Of course, the priorities of the video frame selection policies may instead be set by the user of the director device 14 in a similar manner, which is not repeated here.
Alternatively, the target video frame selection policy may be set directly by the director of the director device 14 or by a user of the terminal device 13. If set by the user of the terminal device 13: as shown in fig. 1f, the terminal device 13 may present a selection information item that lists the multiple video frame selection policies, and the user may select the policy to be employed from among them. In fig. 1f, the "√" after a selection policy may represent the selected target video frame selection policy. Accordingly, the terminal device 13 may, in response to a selection completion event for the multiple video frame selection policies, provide the identifier of the selected policy to the forwarding node 12 as the identifier of the target video frame selection policy. Of course, the target video frame selection policy may instead be selected by the director of the director device 14; the selection manner may refer to the above description of selection by the user of the terminal device 13 and is not repeated here.
Accordingly, the forwarding node 12 may obtain, according to the identifier of the target video frame selection policy, the target video frame selection policy selected by the user of the terminal device 13 (or by the director of the director device 14), and acquire the target video frame from the cached video frames according to that policy.
In addition to the above directing service for the same video stream, the forwarding node 12 provided in the embodiment of the present application also provides a transition service. A transition mainly includes scene switching, layout switching, advertisement insertion, and the like. The layout refers to the video stream layout in the preview area (PVW) of the director interface and the video stream layout in the program presentation area (PGM). The PVW is used together with the PGM because the effect needs to be previewed before the PGM signal is actually played out; if the effect is judged acceptable, the PVW content is switched to the PGM.
In this embodiment, the forwarding node 12 may provide a transition service. For the director, the director device 14 may be operated to switch video sources when a transition is required. In this embodiment, for convenience of description and distinction, the video source node 11 that provides the first video stream before the video source is switched is defined as the first video source node, and the video source that provides the video stream after the switch is defined as the second video source node, such as the second video source node 15 shown in fig. 1d. The implementation of the second video source node 15 may refer to the related description of the video source node 11 and is not repeated here. Fig. 1d illustrates the second video source node 15 as a desktop computer only by way of example; the present invention is not limited thereto.
The second video source node 15 and the first video source node 11 may be the same physical machine or different physical machines. If the first video source node 11 and the second video source node 15 are the same physical machine, the video source switching operation can be understood as switching the video stream acquisition path.
Accordingly, the forwarding node 12 may switch the video source from the first video source node 11 to the second video source node 15 in response to a video source switching operation. Optionally, the forwarding node 12 may provide to the director device 14 a director interface on which a video source switching control is presented, and the director may trigger this control to switch the video source. Further, the director device 14 may, in response to the video source switching control, present a selection control listing the video source nodes available for switching, and, in response to a selection operation on that control, regard the selected video source node as the second video source node 15. The director device 14 may then send to the forwarding node 12 a video source switching request that includes the identifier of the second video source node 15 to be switched to. For the forwarding node 12, the video source switching request may be received, and the video source switched from the first video source node 11 to the second video source node 15 according to the identifier of the second video source node 15.
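A minimal sketch of how such a switching request might be acted on, assuming (hypothetically) that the request carries the target node's identifier in a `target_node_id` field:

```python
def handle_switch_request(request, known_sources, active_source):
    """Switch the active video source to the node named in the request.

    `request` carries the identifier of the second video source node to
    switch to; an unknown identifier leaves the current source in place.
    """
    node_id = request.get("target_node_id")
    if node_id in known_sources:
        return known_sources[node_id]
    return active_source

# Hypothetical source table: node-11 is the first video source node,
# node-15 the second video source node of fig. 1d.
sources = {"node-11": "first video source", "node-15": "second video source"}
active = handle_switch_request(
    {"target_node_id": "node-15"}, sources, sources["node-11"]
)
print(active)  # second video source
```

A malformed or unknown identifier leaves the first source active rather than dropping the stream, which matches the document's emphasis on avoiding playback cutoff.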
In practical applications, during a transition, the forwarding node 12 may fail to obtain the video stream provided by the second video source node 15 in time due to network jitter, stream cutoff, or the like, which may result in video cutoff or a black screen on the terminal device 13. In this embodiment, to prevent this situation, the forwarding node 12 may monitor whether a video frame provided by the second video source node 15 is received in time after the video source switching operation occurs. Optionally, the forwarding node 12 may start timing when the video source switching operation occurs; if no video frame provided by the second video source node is received within a set time length (denoted as the second duration) after the video source switching operation, the forwarding node 12 obtains a target video frame from the cached video frames of the second video stream and provides it to the terminal device 13 for playing. In this embodiment, for convenience of description and distinction, the target video frame obtained from the cached video frames of the second video stream when the forwarding node 12 cannot obtain a video frame provided by the first video source node 11 is defined as the first target video frame; the target video frame obtained from the cached video frames of the second video stream when no video frame provided by the second video source node is received within the second duration after the video source switching operation is defined as the second target video frame. The second target video frame may be the same as or different from the first target video frame, depending specifically on the time at which the video source switching operation occurs.
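The timeout-and-fallback behaviour described above can be sketched as follows; this is an illustrative model under assumed names, with timestamps as plain numbers (seconds), not the patent's implementation:

```python
class TransitionGuard:
    """After a source switch, fall back to a cached frame of the old
    (second) video stream if the new source stays silent for the
    second duration."""

    def __init__(self, second_duration, cached_frames):
        self.second_duration = second_duration
        self.cached_frames = cached_frames  # frames already sent to the terminal
        self.switch_time = None

    def on_switch(self, now):
        # Start timing when the video source switching operation occurs.
        self.switch_time = now

    def frame_to_play(self, new_frame, now):
        if new_frame is not None:
            return new_frame              # new source delivered in time
        if now - self.switch_time >= self.second_duration:
            return self.cached_frames[-1] # the "second target video frame"
        return None                       # still within the second duration

guard = TransitionGuard(second_duration=2.0, cached_frames=["f1", "f2", "f3"])
guard.on_switch(now=10.0)
print(guard.frame_to_play(None, now=12.5))  # f3
```

Here the latest cached frame stands in for the second target video frame; any of the selection strategies described earlier could be substituted for the `[-1]` choice.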
For a specific implementation of the forwarding node 12 obtaining the second target video frame from the cached video frames of the second video stream, reference may be made to the above description of the forwarding node 12 obtaining the first target video frame from the cached video frames of the second video stream, which is not repeated here. In this way, even if communication between the second video source node 15 and the forwarding node 12 fails and the forwarding node 12 cannot obtain video frames of the second video source node 15 in time, the terminal device 13 can still display the second target video frame, so that video cutoff or a black screen on the terminal device 13 is prevented and the viewing experience of the audience is improved.
Accordingly, if the forwarding node 12 receives a video frame provided by the second video source node 15 within the second duration after the video source switching operation, the currently received first video frame and the second target video frame are provided to the terminal device 13 for playing. The video stream provided by the second video source node 15 is defined as the third video stream. The first video frame is the video frame to be played by the terminal device 13, and the second target video frame is the video frame that is to disappear from the display screen of the terminal device 13.
Optionally, to improve the transition effect, a transition special effect may be added. The forwarding node 12 may maintain special effect setting information corresponding to multiple special effects, where a plurality means 2 or more. The special effect setting information corresponding to the multiple special effects and the logic for performing special effect processing on video frames may be embedded in the forwarding node 12 as a plug-in (plugin). Extending special effects in plug-in form allows new special effects to be added without modifying the director logic already provided by the forwarding node 12, so the existing director logic can be reused, reducing development cost and workload; it also facilitates widening the range of special effects horizontally, improving the universality and flexibility of special effect configuration.
Optionally, the multiple special effects maintained by the forwarding node 12 may adopt a two-level classification, where the occurrence of the special effect (e.g., entry/exit, IN/OUT) serves as the first-level classification and the type of the special effect (e.g., Fade/Fly) serves as the second-level classification. The two-level classification may be registered, in a macro-defined manner, in the special effect management module (effect_manager) of the forwarding node 12, so that a new special effect can be added quickly without changing the main flow of the existing director logic. Optionally, this embodiment may provide special effect settings at the layer level: when scene switching is involved, each layer (Layer) may carry a complete special effect, and layers may be combined freely. Assuming there are currently n first-level classifications and m second-level classifications, each layer can have n × m special effects.
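The two-level registry can be sketched as a table keyed by (occurrence, type); this is a hedged illustration of the idea in Python rather than the macro-based C registration the patent implies, and all names are assumed:

```python
# Hypothetical registry mirroring the two-level classification:
# level 1 = occurrence (IN/OUT), level 2 = effect type (Fade/Fly, ...).
EFFECTS = {}

def register_effect(occurrence, kind, handler):
    """Register one effect; adding a new kind needs no change elsewhere."""
    EFFECTS[(occurrence, kind)] = handler

for occ, verb in (("IN", "in"), ("OUT", "out")):
    register_effect(occ, "Fade", lambda f, v=verb: f"fade-{v}({f})")
    register_effect(occ, "Fly", lambda f, v=verb: f"fly-{v}({f})")

# n first-level x m second-level classifications -> n * m effects per layer.
n = len({occ for occ, _ in EFFECTS})
m = len({kind for _, kind in EFFECTS})
print(n * m, len(EFFECTS))                # 4 4
print(EFFECTS[("IN", "Fade")]("frame1"))  # fade-in(frame1)
```

Registering a third type (say, "Spin") for both occurrences would grow the table to n × m = 2 × 3 without touching any lookup code, which is the extensibility the two-level scheme is meant to provide.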
For the forwarding node 12: in response to a special effect selection operation for the first video frame, special effect processing is performed on the first video frame according to the special effect setting information corresponding to the selected first special effect, to obtain the first video frame after special effect processing; and in response to a special effect selection operation for the second target video frame, special effect processing is performed on the second target video frame according to the special effect setting information corresponding to the selected second special effect, to obtain the second target video frame after special effect processing. The first special effect is a special effect associated with screen entry, that is, it determines the manner in which the first video frame appears on the display screen of the terminal device 13. For example, the first special effect may be a fade-in, fly-in, float-in, flash, spin, or bounce, among others. Correspondingly, the special effect setting information corresponding to the first special effect includes: the start time and end time of the first special effect, the identifier of the first special effect, the duration and number of repetitions of each action in the first special effect, and the like.
Accordingly, the second special effect is a special effect associated with screen exit, that is, it determines the manner in which the second target video frame disappears from the display screen of the terminal device 13. For example, the second special effect may be a fade-out, fly-out, float-out, flash, spin, or bounce, among others. The special effect setting information corresponding to the second special effect includes: the start time and end time of the second special effect, the identifier of the second special effect, the duration and number of repetitions of each action in the second special effect, and the like.
Optionally, the director interface provided by the forwarding node 12 to the director device 14 may present a special effect setting control, which the director can trigger to configure special effects. Further, the director device 14 may present multiple special effect information items in response to the special effect setting control, where a plurality means 2 or more. The director device 14 may then, in response to a selection operation on the multiple special effect information items, regard the selected special effect as the first special effect, and send to the forwarding node 12 a special effect setting request that includes the identifier of the first video frame and the identifier of the first special effect. Accordingly, the forwarding node 12 parses the identifier of the first video frame and the identifier of the first special effect from the special effect setting request, and obtains the special effect setting information of the first special effect according to the identifier of the first special effect. Further, the forwarding node 12 may perform special effect processing on the first video frame according to the special effect setting information corresponding to the selected first special effect, to obtain the first video frame after special effect processing. In the same way, the forwarding node 12 may receive a further special effect setting request that includes the identifier of the second target video frame and the identifier of the second special effect. Accordingly, the forwarding node 12 parses the identifier of the second target video frame and the identifier of the second special effect from that request, and obtains the special effect setting information of the second special effect according to the identifier of the second special effect.
Further, the forwarding node 12 may perform special effect processing on the second target video frame according to the special effect setting information corresponding to the selected second special effect, so as to obtain the second target video frame after the special effect processing.
In this embodiment, the specific implementation of the special effect processing on the first video frame and the second target video frame is not limited. Optionally, the forwarding node 12 may perform special effect processing separately on the multiple layers included in the first video frame according to the special effect setting information corresponding to the first special effect, to obtain the first video frame after special effect processing. Correspondingly, the forwarding node 12 may also perform special effect processing separately on the multiple layers included in the second target video frame according to the special effect setting information corresponding to the second special effect, to obtain the second target video frame after special effect processing. In this way, special effect processing at the layer level, and special effect interaction between layers, can be realized. Optionally, the same special effect may be set for different layers of the same video frame, or different special effects may be set; the specific manner can be chosen autonomously by the director.
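Per-layer processing can be sketched by pairing each layer with its own effect function; a hedged toy model with string "layers" standing in for image data:

```python
def apply_layer_effects(layers, effects):
    """Apply one effect per layer. The same callable may be reused for
    several layers, or each layer may get a different effect, as the
    director chooses."""
    return [effect(layer) for layer, effect in zip(layers, effects)]

fade = lambda layer: f"fade({layer})"
fly = lambda layer: f"fly({layer})"

# Different effects on two layers of the same video frame:
print(apply_layer_effects(["background", "overlay"], [fade, fly]))
# ['fade(background)', 'fly(overlay)']
```

Passing `[fade, fade]` instead would illustrate the "same special effect for different layers" case mentioned above.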
Alternatively, the forwarding node 12 may configure corresponding special effect algorithms for the multiple layers included in the first video frame, and process those layers with the configured algorithms to obtain the first video frame after special effect processing; the algorithms configured for the layers of the first video frame are special effect algorithms associated with screen entry. Similarly, the forwarding node 12 may configure corresponding special effect algorithms for the multiple layers included in the second target video frame, and process those layers with the configured algorithms to obtain the second target video frame after special effect processing; the algorithms configured for the layers of the second target video frame are special effect algorithms associated with screen exit.
Further, the forwarding node 12 may provide the first video frame after special effect processing and the second target video frame after special effect processing to the terminal device 13 for playing. Accordingly, on the terminal device 13, the second target video frame disappears from the display screen in the manner of the second special effect, and the first video frame appears on the display screen in the manner of the first special effect. A transition special effect is thus added to the transition process, further improving the viewing experience of the audience.
In order to more clearly illustrate the transition process, the following description is made with reference to the signaling diagram of the transition process shown in fig. 1 g. As shown in fig. 1g, the transition process mainly includes:
step 1: the first video stream (video stream 1) is bound to layer 1. Wherein, layer 1 is used to process video stream 1 to obtain a second video stream. Wherein, the processing of the layer 1 to the video stream 1 includes: layer merging, layer mapping, and the like.
Step 2: the second video stream is input to the current playlist.
And step 3: and acquiring the video frame of the second video stream from the current play list, and pushing the acquired video frame of the second video stream to the terminal device 13. Played by the terminal device 13.
And 4, step 4: switching a video source from a first video stream (video stream 1) to a third video stream (video stream 2) in response to a video source switching operation; and binds video stream 2 to layer 2. Layer 2 is used to process video stream 2 to obtain a fourth video stream. Wherein, the processing of the video stream 2 by the layer 2 includes: layer merging, layer mapping, and the like.
Optionally, layer 2 may also perform special effects processing on the video frames of video stream 2. Optionally, the layer 2 may obtain, according to the special effect setting information of the first special effect, a first video frame after the special effect processing for the first video frame in the video stream 2. The first effect is an effect associated with the entry. The first video frame may be a first frame of video to be played in video stream 2.
And 5: the fourth video stream is input to the current playlist.
Step 6: and updating the cache playlist, that is, in the process of the video source switching operation, caching the video frames of the second video stream pushed to the terminal device 13 by the forwarding node 12 into the cache playlist.
And 7: and canceling the video frame of the second video stream pushed by the layer 1 in the current play list.
And 8: and caching the video frames of the second video stream pushed by the layer 1 to a cache playlist.
Optionally, the layer 1 may further perform special effect processing on the video frame of the second video stream, that is, perform special effect processing according to the special effect setting information of the second special effect, so as to obtain a video frame after the special effect processing. The second effect is an effect associated with the screen-out.
And step 9: acquiring a video frame of a fourth video stream from the current play list, and acquiring a target video frame (the second target video frame) from the buffer play queue; and pushing the acquired video frame of the fourth video stream and the second target video frame to the terminal device 13. Played by the terminal device 13.
Step 10: and canceling the video frame of the second video stream pushed by the layer 1 in the cache play list.
Step 11: the process of logout layer 1 on video frames of the first video stream (video stream 1) proceeds.
In the embodiment of the present application, steps 1 to 11 are not necessarily executed sequentially; they may be executed in parallel, or partly sequentially and partly in parallel. The execution order of the steps is not limited to the numbered order above.
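The playlist bookkeeping in steps 5-10 can be sketched with a toy model; names, data layout, and the choice of the latest cached frame as the target are all assumptions for illustration:

```python
class TransitionSession:
    """Toy model of steps 5-10: during the switch, frames of the old
    (second) stream move to a cache playlist while the new (fourth)
    stream takes over the current playlist."""

    def __init__(self, current_frames):
        self.current = list(current_frames)  # current playlist (old stream)
        self.cache = []                      # cache playlist

    def begin_switch(self, new_frames):
        self.cache = self.current       # steps 6-8: cache old-stream frames
        self.current = list(new_frames) # step 5: new stream enters the playlist

    def push(self):
        # Step 9: push new-stream frames plus the cached target frame.
        target = self.cache[-1] if self.cache else None
        return self.current, target

session = TransitionSession(["s2-f1", "s2-f2"])
session.begin_switch(["s4-f1"])
frames, target = session.push()
print(frames, target)  # ['s4-f1'] s2-f2
```

Steps 10-11 would then correspond to clearing `session.cache` and discarding the old layer once the new stream is playing steadily.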
It should be further noted that the transition special effect provided by the embodiment of the present application may be embedded in the forwarding node 12 in plug-in form. The director service provided by the forwarding node 12 to a video playing service provider may include: a first mode in which the transition special effect service is provided, and a second mode in which it is not. Optionally, the forwarding node 12 may present a director service mode selection control on the director interface provided to the director device 14, through which the video playing service provider can autonomously select which director service mode to adopt. For the forwarding node 12, in response to a selection operation for the director service mode, the selected director service mode may be provided to the video playing service provider.
As shown in fig. 1h, the director service architecture provided in the embodiment of the present application mainly includes: a director application layer, a layer conversion layer, a functional module layer, a material layer, a management layer, an algorithm logic layer, and an atomic capability layer. The layers are called in sequence from top to bottom. The director application layer is mainly used for parsing commands from the director device, managing director tasks, scheduling tasks, and the like. The layer conversion layer performs layer processing on the video streams provided by the video source nodes, converting them into layers of a unified architecture type. The functional module layer provides the processing functions; its functional modules mainly include: the layout function, mainly used for layout processing between layers; the preprocessing function, which mainly converts layers into the unified architecture type; the rendering function, mainly used for rendering layers to obtain video frames; and the special effect function, mainly used for performing special effect processing on layers. The material layer defines the material types supported by the director service, mainly including: video, audio, pictures, text, animation, and the like. The management layer is mainly used for managing video streams and special effects: the stream management module manages video streams, such as playing, pausing, and closing; the special effect management module manages special effects, such as determining a special effect's duration, start and stop times, and which special effect is adopted. The algorithm logic layer is mainly used for providing the related algorithms.
For example, multiplexing algorithms, encoding and decoding algorithms, observation algorithms, registration modes, synchronization modes, notification modes, special effect algorithms, and the like may be provided. The special effect algorithms may include: screen-entry special effect algorithms and screen-exit special effect algorithms. The atomic capability layer is the minimum calling unit of the director service and may include: configuration modification, communication protocols, underlying filtering algorithms, and the like. Configuration modification is mainly used for modifying the configuration parameters of each algorithm in the algorithm logic layer; the communication protocols provide the protocols supporting the director service, such as the RTMP protocol and the HTTP protocol; the filtering algorithms may perform filtering processing on video frames.
In the embodiment of the present application, the functional modules involved in the transition special effect may be embedded in the forwarding node 12 in a plug-in form. For example, the special effects processing function, animation material, special effects management, and special effects algorithm shown in fig. 1h may be embedded in the forwarding node 12 in a plug-in form.
It should be noted that, in order to improve transition efficiency and reduce transition delay, the forwarding node 12 may also utilize hardware acceleration and/or optimize a special effect algorithm to improve transition efficiency. The hardware acceleration can be performed by a high-speed processor. For example, Field-Programmable Gate Array (FPGA) supporting parallel processing may be used instead of the CPU to perform transition special effect processing and the like.
In addition to the above video playing system embodiment, the embodiment of the present application also provides a video playing method. The following describes an exemplary video playing method provided by the embodiment of the present application from the perspective of the forwarding node.
Fig. 2 is a schematic flowchart of a video playing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
201. Acquire a first video stream provided by a first video source node.
202. In the process of providing a second video stream generated based on the first video stream to a terminal device, cache the video frames of the second video stream that have been provided to the terminal device.
203. In a case where a video frame provided by the first video source node cannot be acquired, acquire a first target video frame from the cached video frames.
204. Provide the first target video frame to the terminal device for playing.
In this embodiment, regarding the implementation forms of the first video source node and the forwarding node, the communication manner between the first video source node and the forwarding node, and the implementation forms of the video stream, reference may be made to the relevant contents of the above system embodiment, and details are not repeated herein.
In this embodiment, the number of the first video source nodes may be 1 or more. Plural means 2 or more. The video streams provided by the plurality of video source nodes may be video streams taken from different perspectives for the same scene, i.e., multi-perspective video streams.
In this embodiment, the forwarding node may provide the video stream obtained from the video source node to the terminal device, and the terminal device plays the video stream to the viewer. For the forwarding node, the video stream acquired from the video source node can be directly provided to the terminal device, or the video stream acquired from the video source node can be provided to the terminal device after being processed. For convenience of description and distinction, in the following embodiments of the present application, a video stream provided by a video source node to a forwarding node is defined as a first video stream; and defining the video stream provided by the forwarding node to the terminal equipment as a second video stream.
The second video stream may be the first video stream itself, or another video stream generated based on the first video stream. For example, the second video stream may be a video stream obtained by the forwarding node processing the first video stream. In some embodiments, the first video source node is a plurality of image capturing devices that capture the same scene from different perspectives, producing a multi-view video stream; for the forwarding node, the first video stream is this multi-view video stream. The plurality of video source nodes may provide the first video stream to the forwarding node in the form of video frames at the set video transmission rate. Since each video source node has a different perspective, the video frames provided to the forwarding node also differ in perspective. For the forwarding node, the video frames provided by the plurality of video source nodes in each round together form a multi-view video frame. Accordingly, the forwarding node may treat each received multi-view video frame as multiple layers, where the number of layers is determined by the number of first video source nodes: one frame of the multi-view video frame serves as one layer. Further, the forwarding node may render the layers onto the background layer according to the set rendering template, obtaining one video frame of the second video stream. Optionally, as shown in fig. 1c, the forwarding node may merge the multiple layers according to the set rendering template and map the merged layers onto the background layer to obtain one video frame of the second video stream. Each received multi-view video frame is processed in the same manner, yielding the second video stream.
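The per-round compositing can be sketched as placing one layer per source onto a background according to a template; a hedged toy model in which "frames" are strings and the template is just a list of region names:

```python
def compose_frame(multiview_frames, template, background="background"):
    """Merge one frame per source (one layer per source) onto the
    background layer, placing each layer per the rendering template."""
    canvas = {"background": background}
    for region, frame in zip(template, multiview_frames):
        canvas[region] = frame
    return canvas

# Three camera angles of the same scene -> three layers in one output frame.
template = ["left", "right", "top"]
frame = compose_frame(["cam1-f0", "cam2-f0", "cam3-f0"], template)
print(frame["left"], frame["top"])  # cam1-f0 cam3-f0
```

Repeating this for every round of received multi-view frames yields the second video stream, one composed frame per round.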
Further, the forwarding node provides the second video stream to the terminal device, and the second video stream is played by the terminal device. Alternatively, the forwarding node may provide the second video stream to the terminal device in batches in the form of video frames according to the set video transmission rate.
In practical applications, communication between a video source node and the forwarding node may fail, causing video cutoff, stalling, and the like between the video source and the forwarding node; the forwarding node then cannot continue to acquire video frames of the first video stream and, consequently, cannot continue to provide new video frames to the terminal device, causing video cutoff on the terminal device, which undoubtedly affects the viewing experience of the audience. A failure of the communication between the video source node and the forwarding node includes one or more of: a capture or video transmission failure at the video source node, a video reception failure at the forwarding node, and a failure of the transmission link between the two. A plurality means 2 or more.
In order to solve the above problem, in this embodiment, in step 202, the forwarding node may buffer video frames of the second video stream that have been provided to the terminal device in the process of providing the second video stream to the terminal device. Further, in step 203, in a case that the forwarding node cannot acquire the video frame provided by the first video source node, a target video frame may be acquired from the cached video frame of the second video stream; and in step 204, the target video frame is provided to the terminal device for playing. Therefore, even if video cutoff occurs between the first video source node and the forwarding node, the terminal equipment can normally acquire video frames to play, the probability of video cutoff occurring in the terminal equipment in the video playing process is further reduced, the video playing stability is improved, and the user watching experience is favorably improved.
In the embodiment of the present application, each time the forwarding node receives a video frame provided by the video source node, it may time the interval until the next frame; if the next video frame provided by the video source node has not been received when the interval reaches a set time length (defined as the first duration), it may be determined that the forwarding node cannot acquire video frames provided by the video source node. The first duration may be determined by the video transmission rate between the video source node and the forwarding node and the video transmission rate between the forwarding node and the terminal device. Preferably, the first duration is less than or equal to the duration of the user's persistence of vision.
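The inter-frame timing check can be sketched as a small watchdog; an illustrative model with assumed names, using plain numeric timestamps in seconds:

```python
class FrameWatchdog:
    """Declare the source lost when the gap since the last received
    frame reaches the first duration."""

    def __init__(self, first_duration):
        self.first_duration = first_duration
        self.last_frame_time = None

    def on_frame(self, now):
        # Restart the interval timer on every received frame.
        self.last_frame_time = now

    def source_lost(self, now):
        return (self.last_frame_time is not None
                and now - self.last_frame_time >= self.first_duration)

# e.g. first duration derived from the frame interval; 0.2 s here is an
# arbitrary value chosen to stay within persistence of vision.
wd = FrameWatchdog(first_duration=0.2)
wd.on_frame(now=1.00)
print(wd.source_lost(now=1.05), wd.source_lost(now=1.25))  # False True
```

When `source_lost` returns true, the forwarding node would fall back to the cached second-stream frames as described in steps 203-204.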
Further, the forwarding node may obtain a target video frame from the cached video frames of the second video stream under the condition that the video frame provided by the video source node cannot be obtained; and providing the target video frame for the terminal equipment to play. The target video frame is 1 frame or a plurality of frames of video frames, and the plurality of frames refer to 2 frames or more than 2 frames.
In some embodiments, a video frame selection policy is set in the forwarding node, and the target video frame may be obtained from the video frames cached by the forwarding node according to the video frame selection policy. In this embodiment, the number and specific implementation of the video frame selection strategies are not limited. The following is an exemplary description in connection with several alternative embodiments.
Selection strategy 1: the target video frame is the N frames closest to the current time among the cached video frames of the second video stream, where N is a positive integer. Accordingly, the forwarding node may acquire the N video frames closest to the current time from the cached video frames of the second video stream as the target video frame and provide them to the terminal device. Preferably, N is 1, which prevents the playback from visibly rewinding and helps further improve the viewing experience.
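A minimal sketch of selection strategy 1, under the assumption that cached frames are stored as (timestamp, frame) pairs (this data layout is illustrative, not the patent's):

```python
def select_most_recent(cached_frames, n=1):
    """Selection strategy 1 sketch: return the n cached frames closest to
    the current time. cached_frames is a list of (timestamp, frame) pairs."""
    # Sorting by timestamp makes the sketch robust to out-of-order caching;
    # n=1 avoids visibly rewinding the playback.
    ordered = sorted(cached_frames, key=lambda tf: tf[0])
    return ordered[-n:]
```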
Selection strategy 2: the video frames cached by the forwarding node carry tag information. The tag information may be set by the user of the director device (the director). Optionally, the forwarding node may provide a tag setting interface to the director device; the director device may present this interface, through which the director sets tag information for the cached video frames. The tag information may represent characteristics of the cached video frames; for example, it may mark a highlight or an important plot. Accordingly, the forwarding node may acquire the tag information of the cached video frames of the second video stream, and acquire, from those frames, the video frames whose tag information matches the set tag information as the target video frame. The set tag information may be, for example, a highlight or an important plot.
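Selection strategy 2 reduces to filtering on the director-assigned tag. In this sketch the frame representation and field name `"tag"` are assumptions for illustration:

```python
def select_by_tag(cached_frames, wanted_tag):
    """Selection strategy 2 sketch: keep cached frames whose director-set
    tag matches the set tag information (e.g. "highlight")."""
    return [f for f in cached_frames if f.get("tag") == wanted_tag]
```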
Selection strategy 3: the forwarding node may acquire the access counts of the cached video frames of the second video stream, and acquire the target video frame from the cached video frames according to those access counts. Optionally, the forwarding node may take, as the target video frame, the video frames whose access count is greater than or equal to a set access threshold. Alternatively, the forwarding node may sort the cached video frames of the second video stream by access count and select the top M video frames, in order from highest to lowest access count, as the target video frames, where M is a positive integer.
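Both variants of selection strategy 3 can be sketched in one helper. The (access_count, frame) layout is an assumption for the example:

```python
def select_by_access(cached_frames, threshold=None, top_m=None):
    """Selection strategy 3 sketch over (access_count, frame) pairs:
    either keep frames whose access count meets a set threshold, or take
    the top-M frames ordered from most to least accessed."""
    if threshold is not None:
        return [f for f in cached_frames if f[0] >= threshold]
    ranked = sorted(cached_frames, key=lambda f: f[0], reverse=True)
    return ranked[:top_m]
```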
Selection strategy 4: the forwarding node may acquire, from the cached video frames of the second video stream, the video frames containing a target object as the target video frame. The target object may be a designated person, character, or the like.
The target object may be specified by the user of the terminal device. The terminal device may display a target object selection interface on which the user selects a target object, and, in response to a selection completion event, provide an identification of the target object to the forwarding node. Using this identification, the forwarding node may acquire the features of the target object; identify, from the cached video frames of the second video stream, the video frames containing the target object according to those features; and, according to the timestamp information of the frames containing the target object, acquire the N frames closest to the current time as the target video frames.
Optionally, the target object selection interface may include a selection completion control. Accordingly, the selection completion event may be implemented as an event generated by the terminal device in response to a touch operation on the selection completion control.
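Selection strategy 4 combines object matching with the recency rule of strategy 1. In this sketch, the predicate `contains_object` stands in for the feature-based recognition step, and the (timestamp, frame) layout is an assumption:

```python
def select_by_object(cached_frames, contains_object, n=1):
    """Selection strategy 4 sketch: among cached (timestamp, frame) pairs,
    keep the frames that contain the target object (per the recognition
    predicate), then return the n matching frames closest to the current
    time using their timestamps."""
    matches = [tf for tf in cached_frames if contains_object(tf[1])]
    matches.sort(key=lambda tf: tf[0])
    return matches[-n:]
```

In a real deployment, `contains_object` would be backed by a recognition model keyed on the features fetched via the target object's identification.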
Selection strategy 5: the video frames cached by the forwarding node further include video frames of other video streams that were played before the second video stream, such as advertisements that have already been played, a feature related to the second video stream, short videos, and the like. Accordingly, the forwarding node may also use the video frames of these other video streams as the target video frame.
It should be noted that, in the embodiment of the present application, one or more video frame selection strategies may be set in the forwarding node, where a plurality means two or more; some or all of selection strategies 1-5 may be provided in the forwarding node. If multiple video frame selection strategies are set in the forwarding node, their priorities may be preset. Correspondingly, the forwarding node may acquire the priorities of the set video frame selection strategies; determine a target video frame selection strategy from the multiple strategies according to those priorities; and acquire the target video frame from the cached video frames according to the target video frame selection strategy. Optionally, the forwarding node may select the strategy with the highest priority from among the multiple video frame selection strategies as the target video frame selection strategy.
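The priority-based dispatch can be sketched as follows, assuming each registered strategy is a callable over the cached frames (the registry shape is illustrative, not the patent's):

```python
def pick_strategy(strategies):
    """Return the (name, callable) of the highest-priority strategy from a
    mapping of strategy name -> (priority, callable)."""
    name, (_prio, fn) = max(strategies.items(), key=lambda kv: kv[1][0])
    return name, fn

def select_target_frames(cached_frames, strategies):
    """Apply the chosen target video frame selection strategy to the cache."""
    _, fn = pick_strategy(strategies)
    return fn(cached_frames)
```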
The priorities of the video frame selection strategies may be set by the director of the director device or by the user of the terminal device. If they are set by the user of the terminal device, the terminal device may present a selection information item listing the various video frame selection strategies, and the user may set the priority of each strategy. Further, in response to a setting completion event, the terminal device may provide the priorities of the multiple video frame selection strategies to the forwarding node. Optionally, the terminal device may also display a setting completion control; accordingly, the setting completion event may be implemented as an event generated by the terminal device in response to a touch operation on the setting completion control. Of course, the priorities of the multiple video frame selection strategies may instead be set by the director of the director device, in a manner similar to that described above for the user of the terminal device, which is not repeated here.
Alternatively, the target video frame selection strategy itself may be chosen by the director of the director device or by the user of the terminal device. If it is chosen by the user of the terminal device, the terminal device may present a selection information item listing the various video frame selection strategies, from which the user selects the strategy to be employed. Accordingly, in response to a selection completion event for the multiple video frame selection strategies, the terminal device may provide an identification of the selected strategy, as the target video frame selection strategy, to the forwarding node. Of course, the target video frame selection strategy may instead be selected by the director of the director device, in a manner similar to that described above for the user of the terminal device, which is not repeated here.
Accordingly, the forwarding node may determine, according to the identification of the target video frame selection strategy, the strategy selected by the user of the terminal device (or by the director of the director device 14), and acquire the target video frame from the cached video frames according to that strategy.
In addition to the above directing services for the same video stream, the forwarding node provided by the embodiment of the application also provides a transition service. A transition mainly includes scene switching, layout switching, advertisement insertion, and the like. The layout refers to the video stream layout in the Preview area (PVW) of the director interface and the video stream layout in the Program presentation area (PGM). PVW is used together with PGM mainly because the effect needs to be previewed before the signal is actually played out in PGM: if the previewed effect is deemed acceptable, the PVW content is switched to PGM.
In this embodiment, the forwarding node may provide a transition service. When a transition is required, the director may operate the director device to switch video sources. In this embodiment, for convenience of description and distinction, the video source node that provides the first video stream before the switch is defined as the first video source node, and the video source node that provides the video stream after the switch is defined as the second video source node. The implementation of the second video source node can refer to the related contents of the above system embodiments, and is not described herein again.
The second video source node and the first video source node may be the same physical machine or different physical machines. If they are the same physical machine, the video source switching operation can be understood as switching the video stream acquisition path.
The following describes an exemplary transition method provided in this embodiment with reference to the transition process shown in fig. 3. As shown in fig. 3, the transition method includes:
301. Switching the video source from the first video source node to the second video source node in response to a video source switching operation.
302. If no video frame provided by the second video source node is received within a second duration after the video source switching operation occurs, acquiring a second target video frame from the cached video frames of the second video stream.
303. Providing the second target video frame to the terminal device for playing.
In this embodiment, the forwarding node may switch the video source from the first video source node to the second video source node in response to a video source switching operation. For a specific implementation of the forwarding node responding to the video source switching operation to switch the video source from the first video source node to the second video source node, reference may be made to relevant contents of the above system embodiment, which is not described herein again.
In practical applications, during a transition the forwarding node may fail to acquire the video stream provided by the second video source node in time, due to network jitter or a stream cut-off, which would cause a cut-off or black screen on the terminal device. In this embodiment, to prevent this, the forwarding node may monitor whether a video frame provided by the second video source node is received in time after the video source switching operation occurs. Optionally, the forwarding node may start timing when the video source switching operation occurs; if no video frame provided by the second video source node is received within a set duration (denoted as the second duration), it acquires a target video frame from the cached video frames of the second video stream and provides it to the terminal device for playing. For convenience of description and distinction, the target video frame acquired from the cached second video stream when the forwarding node cannot acquire video frames provided by the first video source node is defined as the first target video frame; the target video frame acquired from the cached second video stream when no frame from the second video source node arrives within the second duration after the switching operation is defined as the second target video frame. The second target video frame may be the same as or different from the first target video frame, depending specifically on when the video source switching operation occurs.
For the specific implementation of the forwarding node acquiring the second target video frame from the cached video frames of the second video stream, reference may be made to the related content above on acquiring the first target video frame, which is not repeated here. In this way, during a transition, even if communication between the second video source node and the forwarding node fails and the forwarding node cannot acquire the second video source node's frames in time, the second target video frame can still be displayed on the terminal device, preventing a cut-off or black screen and improving the viewing experience of the audience.
Correspondingly, if the forwarding node does receive a video frame provided by the second video source node within the second duration after the video source switching operation occurs, the currently received first video frame and the second target video frame are both provided to the terminal device for playing. The first video frame is the video frame about to be played by the terminal device, and the second target video frame is the video frame about to disappear from the terminal device's display screen.
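The two outcomes of step 302 can be sketched together as follows. The function names and the idea of passing the fallback selection as a callable are assumptions for illustration:

```python
def frames_for_transition(new_frame_or_none, cached_frames, fallback):
    """Transition sketch: after switching to the second video source,
    if no frame arrived within the second duration (new_frame_or_none is
    None), serve only a second target frame drawn from the cached second
    video stream; otherwise provide the newly received first video frame
    together with the outgoing second target frame."""
    second_target = fallback(cached_frames)
    if new_frame_or_none is None:
        return [second_target]
    return [new_frame_or_none, second_target]
```

Here `fallback` would be the target video frame selection strategy discussed earlier (e.g. most-recent-frame selection).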
Optionally, to improve the transition experience, a transition special effect may be added. The forwarding node may maintain special effect setting information corresponding to multiple special effects, where multiple means two or more. The special effect setting information for the various special effects and the logic for applying them to video frames can be embedded into the forwarding node as plug-ins (plugin). Extending special effects via plug-ins allows effects to be added without modifying the directing logic already provided by the forwarding node, so the existing directing logic can be reused, reducing development cost and workload; it also facilitates broadening the range of special effects horizontally and improves the generality and flexibility of special effect configuration.
Optionally, the multiple special effects maintained by the forwarding node may adopt a two-level classification: the occasion on which a special effect occurs (e.g., entry/exit, IN/OUT) serves as the first-level classification, and the type of the special effect (e.g., Fade/Fly) serves as the second-level classification. The two-level classes may be registered in the special effect management module (effect_manager) of the forwarding node by means of macro definitions, so a new special effect can be added quickly without changing the main flow of the existing directing logic. Optionally, this embodiment may provide special effect settings at the layer level: when scene switching is involved, each layer (Layer) may carry a complete special effect, and layers may be combined at will. Assuming there are currently n first-level classes and m second-level classes, each layer can have n × m special effects.
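A minimal sketch of the two-level registry described above, with IN/OUT as the first-level classes and Fade/Fly as the second-level classes. The registry API is an assumption, not the real effect_manager interface:

```python
# Registry keyed by (occasion, effect_type); registering every pair yields
# n * m selectable effects per layer.
EFFECT_REGISTRY = {}

def register_effect(occasion, effect_type, handler):
    # Analogous to macro-based registration in the effect management module.
    EFFECT_REGISTRY[(occasion, effect_type)] = handler

for occasion in ("IN", "OUT"):            # n first-level classes (occurrence)
    for effect_type in ("Fade", "Fly"):   # m second-level classes (type)
        register_effect(
            occasion, effect_type,
            # Default args bind the loop variables into each handler; the
            # handler body is a placeholder for real effect processing.
            lambda frame, o=occasion, t=effect_type: (o, t, frame),
        )
```

Adding a new effect type then only requires one more `register_effect` call, leaving the main directing flow untouched.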
For the forwarding node, in response to a special effect selection operation for the first video frame, it performs special effect processing on the first video frame according to the special effect setting information corresponding to the selected first special effect, to obtain the processed first video frame; and, in response to a special effect selection operation for the second target video frame, it performs special effect processing on the second target video frame according to the special effect setting information corresponding to the selected second special effect, to obtain the processed second target video frame. The first special effect is an effect associated with entering the screen, that is, it determines the manner in which the first video frame appears on the terminal device's display screen; for example, it may be a fade-in, fly-in, float-in, flash, spin, or bounce. Correspondingly, the special effect setting information for the first special effect includes: the start time and end time of the first special effect, the identification of the first special effect, the duration and number of repetitions of each action in the first special effect, and the like.
Accordingly, the second special effect is an effect associated with leaving the screen, that is, it determines the manner in which the second target video frame disappears from the terminal device's display screen; for example, it may be a fade-out, fly-out, float-out, flash, spin, or bounce. The special effect setting information for the second special effect includes: the start time and end time of the second special effect, the identification of the second special effect, the duration and number of repetitions of each action in the second special effect, and the like.
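The special effect setting information listed above can be captured in a simple container. The field names here mirror the description but are assumptions, not the patent's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EffectSettings:
    """Illustrative container for one special effect's setting information."""
    effect_id: str           # identification of the effect, e.g. "fade_in"
    start_time: float        # when the effect begins (seconds)
    end_time: float          # when the effect ends (seconds)
    # Per-action duration and repetition count within the effect.
    action_durations: dict = field(default_factory=dict)
    repetitions: dict = field(default_factory=dict)

    def duration(self) -> float:
        return self.end_time - self.start_time
```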
Optionally, the director interface provided by the forwarding node to the director device may display a special effect setting control, which the director can trigger to configure special effects. In response, the director device may present multiple special effect information items, where multiple means two or more. The director device may take the special effect selected from these items as the first special effect, and then send a special effect setting request to the forwarding node that includes the identification of the first video frame and the identification of the first special effect. Correspondingly, the forwarding node parses the identification of the first video frame and the identification of the first special effect from the request, and obtains the special effect setting information of the first special effect according to its identification. The forwarding node may then perform special effect processing on the first video frame according to that setting information, to obtain the processed first video frame. In the same way, the forwarding node may receive a further special effect setting request that includes the identification of the second target video frame and the identification of the second special effect; it parses these identifications from the request and obtains the special effect setting information of the second special effect according to the second special effect's identification.
Further, the forwarding node may perform special effect processing on the second target video frame according to special effect setting information corresponding to the selected second special effect, so as to obtain the second target video frame after the special effect processing.
In this embodiment, the specific implementation of the special effect processing on the first video frame and the second target video frame is not limited. Optionally, the forwarding node may perform special effect processing separately on each of the layers included in the first video frame according to the setting information of the first special effect, to obtain the processed first video frame; likewise, it may process each of the layers included in the second target video frame according to the setting information of the second special effect, to obtain the processed second target video frame. This enables special effect processing at the layer level, as well as special effect interaction between layers. Optionally, the same special effect or different special effects may be set for different layers of the same video frame; the specific configuration can be chosen autonomously by the director.
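Layer-level processing amounts to applying a per-layer transform and recomposing the frame. This sketch uses plain callables as stand-ins for real effect handlers; all names are illustrative:

```python
def apply_effect_to_layers(layers, effect_for_layer):
    """Apply a (possibly different) effect to each layer of one video frame.

    layers: list of layer payloads making up the frame.
    effect_for_layer(i): returns the transform for layer i, so the director
    can assign the same effect or different effects per layer.
    The processed layers together form the effect-processed frame."""
    return [effect_for_layer(i)(layer) for i, layer in enumerate(layers)]
```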
Further, the forwarding node may provide the processed first video frame and the processed second target video frame to the terminal device for playing. Accordingly, on the terminal device, the second target video frame disappears from the display screen in the manner of the second special effect, and the first video frame appears on the display screen in the manner of the first special effect. A transition special effect is thus added to the transition process, further improving the audience's viewing experience.
It should be noted that the alternative embodiments of fig. 2 described above and the embodiments related to fig. 3 can be implemented individually or in combination. When the embodiments shown in fig. 2 and fig. 3 are implemented in combination, their order of execution is not limited.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 201 and 202 may be device a; for another example, the execution subject of step 201 may be device a, and the execution subject of step 202 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 201, 202, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to execute the steps in the video playing method.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 4, the electronic device includes: a memory 40a, a processor 40b, and a communication component 40 c. The memory 40a is used for storing a computer program.
The processor 40b is coupled to the memory 40a for executing a computer program for performing: obtaining a first video stream provided by a first video source node through a communication component 40 c; in providing the second video stream generated based on the first video stream to the terminal device through the communication component 40c, buffering video frames of the second video stream that have been provided to the terminal device; under the condition that the video frame provided by the first video source node cannot be acquired, acquiring a first target video frame from the cached video frame; and provides the first target video frame to the terminal device for playback through the communication component 40 c.
In some embodiments, the processor 40b is further configured to: timing the interval time between receiving two adjacent video frames each time a video frame provided by the first video source node is received through the communication component 40 c; and if the next video frame provided by the first video source node is not received when the interval time is greater than or equal to the set first time, determining that the video frame provided by the first video source node cannot be acquired.
In other embodiments, when the processor 40b obtains the first target video frame from the buffered video frames, it is specifically configured to: and acquiring a first target video frame from the cached video frames according to a set video frame selection strategy.
Further, when the processor 40b obtains the first target video frame from the buffered video frames, it is specifically configured to: acquiring the priority of a set video frame selection strategy; determining a target video frame selection strategy from a plurality of video frame selection strategies according to the priority of the video frame selection strategies; according to a target video frame selection strategy, acquiring a first target video frame from the cached video frames; or acquiring a target video frame selection strategy selected by a user of the terminal equipment; and acquiring a first target video frame from the cached video frames according to the target video frame selection strategy.
Further, when the processor 40b obtains the first target video frame from the buffered video frames, it is specifically configured to: acquiring N frames of video frames closest to the current time from the cached video frames of the second video stream to serve as first target video frames; wherein N is a positive integer; or, obtaining tag information of a video frame of the cached second video stream; according to the tag information of the video frames of the cached second video stream, acquiring the video frames with the tag information being set tag information from the video frames of the cached second video stream as a first target video frame; or acquiring the access quantity of the video frames of the cached second video stream; acquiring a first target video frame from the cached video frames of the second video stream according to the access amount of the cached video frames of the second video stream; or, acquiring a video frame containing a target object from the cached video frames of the second video stream as a first target video frame; alternatively, the buffered video frames include: other video streams that have been played before the second video stream; the video frames of the other video streams are taken as the first target video frame.
In other embodiments, when the processor 40b obtains the first target video frame from the buffered video frames of the second video stream, it is specifically configured to: acquiring N frames of video frames closest to the current time from the cached video frames of the second video stream to serve as first target video frames; wherein N is a positive integer.
In still other embodiments, the number of the first video source nodes is plural; a plurality of first video source nodes acquire a first scene from different perspectives. Accordingly, when the processor 40b acquires the first video stream provided by the first video source node, it is specifically configured to: the multi-view video streams provided by the plurality of first video source nodes are acquired as the first video stream by the communication component 40 c.
Accordingly, the processor 40b is further configured to: taking the multi-view video frames provided by the plurality of first video source nodes received through the communication component 40c at a time as a plurality of image layers; and rendering the layers to the background layer according to a set rendering template to obtain a frame of video frame in the second video stream.
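The compositing step described above can be sketched as follows. The data structures (dict-based frame, index-to-region template) are assumptions for illustration, not the patent's rendering implementation:

```python
def compose_frame(background, multiview_frames, render_template):
    """Render multi-view frames received at the same instant as layers onto
    a background layer, per a set rendering template, producing one frame
    of the second video stream.

    render_template maps a view index to its placement region."""
    frame = dict(background)  # start from the background layer
    for i, view in enumerate(multiview_frames):
        region = render_template[i]   # e.g. "top_left"
        frame[region] = view          # render this view's layer into place
    return frame
```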
In some other embodiments, the processor 40b is further configured to: switching a video source from a first video source node to a second video source node in response to a video source switching operation; if the video frames provided by the second video source node are not received within a second time length set by the video source switching operation, acquiring second target video frames from the cached video frames of the second video stream; and provides the second target video frame to the terminal device for playback through the communication component 40 c. Correspondingly, if the video frame provided by the second video source node is received within the second duration set by the occurrence of the video source switching operation, the currently received first video frame and the second target video frame are provided to the terminal device for playing through the communication component 40 c.
Optionally, when the processor 40b provides the currently received first video frame and the second target video frame to the terminal device for playing, the processor is specifically configured to: responding to a special effect selection operation aiming at a first video frame, and carrying out special effect processing on the first video frame according to special effect setting information corresponding to the selected first special effect to obtain a first video frame after the special effect processing; responding to a special effect selection operation aiming at a second target video frame, and carrying out special effect processing on the second target video frame according to special effect setting information corresponding to the selected second special effect to obtain a second target video frame after the special effect processing; and the first video frame after the special effect processing and the second target video frame after the special effect processing are provided for the terminal equipment to be played through the communication component 40 c.
Wherein the first special effect is a special effect associated with screen entry; the second effect is an effect associated with the screen-out.
Further, when performing special effect processing on the first video frame, the processor 40b is specifically configured to: according to the special effect setting information corresponding to the first special effect, the multiple image layers included in the first video frame are respectively subjected to special effect processing, so that the first video frame after the special effect processing is obtained.
Accordingly, when performing special effect processing on the second target video frame, the processor 40b is specifically configured to: and respectively carrying out special effect processing on a plurality of image layers included in the second target video frame according to special effect setting information corresponding to the second special effect so as to obtain the second target video frame after the special effect processing.
In some optional embodiments, as shown in fig. 4, the electronic device may further include: power supply assembly 40d, etc. In some embodiments, the electronic device is a terminal device such as a computer, and the electronic device may further include: display 40e and audio component 40 f. Only some of the components are schematically shown in fig. 4, and it is not meant that the electronic device must include all of the components shown in fig. 4, nor that the electronic device can include only the components shown in fig. 4.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store other various data to support operations on the device on which it is located. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above-described method logic. Alternatively, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Micro Controller Unit (MCU); a programmable device such as a Field-Programmable Gate Array (FPGA), a Programmable Array Logic (PAL) device, a Generic Array Logic (GAL) device, or a Complex Programmable Logic Device (CPLD); an Advanced RISC Machine (ARM) processor; or a System on Chip (SoC), etc., but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In the embodiment of the present application, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, a power supply component is configured to provide power to various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory or transmitted via the communication component. In some embodiments, the audio component further comprises a speaker for outputting audio signals. For example, for devices with voice interaction functionality, voice interaction with a user may be enabled through the audio component.
The electronic device provided by this embodiment can serve as a forwarding node. While pushing a video stream to the terminal device, it caches the video frames provided to the terminal device; when video frames provided by the video source node cannot be obtained, it obtains a target video frame from the cached video frames and provides the target video frame to the terminal device for playing. Therefore, for the terminal device, even if communication between the video source node and the forwarding node fails, video can still be played to the audience continuously, which reduces the probability of the video feed being cut off during playback and helps improve the viewing experience.
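The forwarding-node behavior described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation: the class name `ForwardingNode`, its methods, and the use of a fixed-size deque as the frame cache are all assumptions made for clarity.

```python
import collections
import time


class ForwardingNode:
    """Illustrative sketch: cache frames pushed to the terminal device and
    fall back to a cached frame when the video source stops delivering."""

    def __init__(self, cache_size=100, timeout=2.0):
        self.cache = collections.deque(maxlen=cache_size)  # recent frames
        self.timeout = timeout             # assumed "first time length", seconds
        self.last_recv = time.monotonic()  # arrival time of last source frame

    def on_source_frame(self, frame):
        # A frame arrived from the video source node: cache and forward it.
        self.last_recv = time.monotonic()
        self.cache.append(frame)
        return frame

    def next_frame_for_terminal(self):
        # If the source has been silent for at least the set time length,
        # serve a target frame from the cache instead of cutting the stream.
        if time.monotonic() - self.last_recv >= self.timeout and self.cache:
            return self.cache[-1]  # e.g. the frame closest to the current time
        return None
```

With `timeout=0` the fallback path triggers immediately, which makes the cache behavior easy to exercise in a test.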
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (30)

1. A video playback system, comprising: the system comprises a first video source node, a forwarding node and terminal equipment;
the first video source node is used for providing a first video stream to the forwarding node;
the forwarding node is configured to cache a video frame of a second video stream provided to the terminal device in a process of providing the second video stream to the terminal device; the second video stream is generated based on the first video stream; under the condition that the video frame provided by the first video source node cannot be acquired, acquiring a first target video frame from the cached video frame; and providing the first target video frame for the terminal equipment to play.
2. The system of claim 1, wherein the first video source node is an image capture device, and the first video stream is a video stream captured by the image capture device in real time;
or, the first video source node is a storage node, and the first video stream is a video stream pre-stored in the storage node.
3. The system of claim 1, wherein the forwarding node is further configured to:
timing the interval between two adjacent received video frames each time a video frame provided by the first video source node is received;
and if the next video frame provided by the first video source node has not been received when the interval is greater than or equal to a set first time length, determining that the video frame provided by the first video source node cannot be acquired.
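The interval-timing check in claim 3 can be sketched as follows. The class and method names are assumptions introduced for illustration; the patent specifies only the behavior, not an implementation.

```python
import time


class FrameTimeoutDetector:
    """Illustrative sketch of claim 3: time the interval between two adjacent
    video frames and declare the source unavailable once the interval reaches
    a set first time length without the next frame arriving."""

    def __init__(self, first_time_length):
        self.first_time_length = first_time_length  # threshold, in seconds
        self.last_frame_at = None                   # time of last received frame

    def on_frame_received(self):
        # Restart the interval timer on every frame from the source node.
        self.last_frame_at = time.monotonic()

    def source_unavailable(self, now=None):
        # True once the interval since the last frame reaches the threshold.
        if self.last_frame_at is None:
            return False  # nothing received yet; no interval to time
        now = time.monotonic() if now is None else now
        return now - self.last_frame_at >= self.first_time_length
```

Passing `now` explicitly makes the threshold logic testable without real waiting.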
4. The system according to claim 1, wherein the forwarding node, when acquiring the first target video frame from the cached video frames, is specifically configured to:
and acquiring a first target video frame from the cached video frames according to a set video frame selection strategy.
5. The system of claim 4, wherein there are multiple video frame selection strategies; when acquiring the first target video frame from the cached video frames according to the set video frame selection strategy, the forwarding node is specifically configured to:
acquiring the set priorities of the video frame selection strategies; determining a target video frame selection strategy from the multiple video frame selection strategies according to the priorities of the video frame selection strategies; and acquiring a first target video frame from the cached video frames according to the target video frame selection strategy;
alternatively,
acquiring a target video frame selection strategy selected by a user of the terminal equipment; and acquiring a first target video frame from the cached video frames according to the target video frame selection strategy.
6. The system of claim 5, wherein the terminal device is configured to: presenting a selection information item, the selection information item comprising: a plurality of video frame selection strategies for the user to set priorities of the plurality of video frame selection strategies; providing priorities of the plurality of video frame selection policies to the forwarding node in response to a setup complete event;
alternatively,
the terminal device is configured to: presenting a selection information item, the selection information item comprising: a plurality of video frame selection strategies for the user to select the target video frame selection strategy to be adopted; and in response to a selection completion event for the plurality of video frame selection strategies, providing an identification of the selected video frame selection strategy to the forwarding node as the target video frame selection strategy.
7. The system according to claim 4, wherein the forwarding node, when acquiring the first target video frame from the cached video frames, is specifically configured to:
acquiring N frames of video frames closest to the current time from the cached video frames of the second video stream to serve as the first target video frame; wherein N is a positive integer;
alternatively,
obtaining tag information of the cached video frames of the second video stream; and acquiring, according to the tag information of the cached video frames of the second video stream, video frames whose tag information is set tag information from the cached video frames of the second video stream as the first target video frame;
alternatively,
obtaining the access amount of the video frames of the cached second video stream; acquiring the first target video frame from the video frame of the cached second video stream according to the access amount of the video frame of the cached second video stream;
alternatively,
acquiring a video frame containing a target object from the cached video frames of the second video stream to serve as the first target video frame;
alternatively,
the cached video frames include: video frames of another video stream played before the second video stream; and a video frame of the other video stream is taken as the first target video frame.
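Several of the selection strategies enumerated in claim 7 can be sketched as one dispatch function. This is an illustrative sketch only; the function name, the strategy labels, and the dict-based frame representation (`data`, `tag`, `views`) are assumptions, not the patent's data model.

```python
def select_target_frames(cached, strategy, n=1, wanted_tag=None):
    """Illustrative sketch of claim 7's selection strategies.

    `cached` is a list of frame records ordered oldest to newest, each a
    dict with assumed keys 'data', 'tag' (label information), and 'views'
    (access amount).
    """
    if strategy == "latest":
        # The N frames closest to the current time.
        return cached[-n:]
    if strategy == "tag":
        # Frames whose tag information matches the set tag information.
        return [f for f in cached if f.get("tag") == wanted_tag]
    if strategy == "most_viewed":
        # Frames chosen by access amount, highest first.
        return sorted(cached, key=lambda f: f.get("views", 0), reverse=True)[:n]
    raise ValueError(f"unknown strategy: {strategy}")
```

The claim also lists object-based selection (frames containing a target object), which would need a detector and is omitted here.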
8. The system of claim 1, wherein there are a plurality of the first video source nodes; the plurality of first video source nodes are configured to capture a first scene from different viewing angles to obtain a multi-view video stream as the first video stream, and to provide the first video stream to the forwarding node frame by frame according to a set video transmission rate;
the forwarding node is further configured to: taking multi-view video frames provided by a plurality of first video source nodes received each time as a plurality of image layers; and rendering the layers to a background layer according to a set rendering template so as to obtain a frame of video frame in the second video stream.
9. The system of claim 1, further comprising: a second video source node;
the forwarding node is further configured to: switching a video source from the first video source node to the second video source node in response to a video source switching operation;
if the video frames provided by the second video source node are not received within a second time length set by the video source switching operation, acquiring second target video frames from the cached video frames of the second video stream; and providing the second target video frame for the terminal equipment to play.
10. The system of claim 9, wherein the forwarding node is further configured to:
and if the video frame provided by the second video source node is received within a second time length set by the video source switching operation, providing the currently received first video frame and the second target video frame for the terminal equipment to play.
11. The system according to claim 10, wherein when the forwarding node provides the currently received first video frame and the second target video frame to the terminal device for playing, the forwarding node is specifically configured to:
responding to a special effect selection operation aiming at a first video frame, and carrying out special effect processing on the first video frame according to special effect setting information corresponding to the selected first special effect to obtain a first video frame after the special effect processing;
responding to a special effect selection operation aiming at a second target video frame, and carrying out special effect processing on the second target video frame according to special effect setting information corresponding to the selected second special effect to obtain a second target video frame after the special effect processing;
and providing the first video frame after the special effect processing and the second target video frame after the special effect processing for the terminal equipment to play.
12. The system according to claim 11, wherein the forwarding node, when performing special effects processing on the first video frame, is specifically configured to:
according to the special effect setting information corresponding to the first special effect, respectively carrying out special effect processing on a plurality of image layers included in the first video frame to obtain a first video frame after the special effect processing;
when the forwarding node performs special effect processing on the second target video frame, the forwarding node is specifically configured to:
and respectively performing special effect processing on a plurality of image layers included in the second target video frame according to special effect setting information corresponding to the second special effect to obtain the second target video frame after the special effect processing.
13. The system of claim 11, wherein the first effect is an effect associated with screen entry; the second effect is an effect associated with the screen-out.
14. The system of claim 11, wherein the forwarding node maintains special effects setting information corresponding to a plurality of special effects; the plurality of effects includes the first effect and the second effect;
and the special effect setting information corresponding to the various special effects is embedded into the forwarding node in a plug-in mode.
15. The system according to any of claims 1-14, wherein the forwarding node is a cloud director.
16. A video playback method, comprising:
acquiring a first video stream provided by a first video source node;
in the process of providing a second video stream generated based on the first video stream to a terminal device, caching video frames of the second video stream which are provided to the terminal device;
under the condition that the video frame provided by the first video source node cannot be acquired, acquiring a first target video frame from the cached video frame;
and providing the first target video frame for the terminal equipment to play.
17. The method of claim 16, further comprising:
timing the interval between two adjacent received video frames each time a video frame provided by the first video source node is received;
and if the next video frame provided by the first video source node has not been received when the interval is greater than or equal to a set first time length, determining that the video frame provided by the first video source node cannot be acquired.
18. The method of claim 16, wherein the obtaining the first target video frame from the buffered video frames of the second video stream comprises:
and acquiring a first target video frame from the cached video frames according to a set video frame selection strategy.
19. The method of claim 18, wherein there are multiple video frame selection strategies; the acquiring a first target video frame from the cached video frames according to the set video frame selection strategy comprises:
acquiring priorities of the video frame selection strategies set by a user of the terminal device; determining a target video frame selection strategy from the multiple video frame selection strategies according to the priorities of the video frame selection strategies; and acquiring a first target video frame from the cached video frames according to the target video frame selection strategy;
alternatively,
acquiring a target video frame selection strategy selected by a user of the terminal equipment; and acquiring a first target video frame from the cached video frames according to the target video frame selection strategy.
20. The method according to claim 18, wherein the acquiring a first target video frame from the cached video frames according to the set video frame selection strategy comprises:
acquiring N frames of video frames closest to the current time from the cached video frames of the second video stream to serve as the first target video frame; wherein N is a positive integer;
alternatively,
obtaining tag information of the video frame of the cached second video stream; according to the tag information of the video frame of the cached second video stream, acquiring the video frame of which the tag information is a set tag from the video frame of the cached second video stream as the first target video frame;
alternatively,
obtaining the access amount of the video frame of the cached second video stream; acquiring the first target video frame from the video frame of the cached second video stream according to the access amount of the video frame of the cached second video stream;
alternatively,
acquiring a video frame containing a target object from the cached video frames of the second video stream to serve as the first target video frame;
alternatively,
the cached video frames include: video frames of another video stream played before the second video stream; and a video frame of the other video stream is taken as the first target video frame.
21. The method of claim 16, wherein the obtaining the first target video frame from the buffered video frames of the second video stream comprises:
acquiring N frames of video frames closest to the current time from the cached video frames of the second video stream to serve as the first target video frame; wherein N is a positive integer.
22. The method of claim 16, wherein there are a plurality of the first video source nodes, and the plurality of first video source nodes capture a first scene from different viewing angles;
the acquiring a first video stream provided by a first video source node comprises:
and acquiring the multi-view video streams provided by the plurality of first video source nodes as the first video streams.
23. The method of claim 22, further comprising:
taking multi-view video frames provided by a plurality of first video source nodes received each time as a plurality of image layers;
and rendering the layers to a background layer according to a set rendering template so as to obtain a frame of video frame in the second video stream.
24. The method of claim 16, further comprising:
switching a video source from the first video source node to the second video source node in response to a video source switching operation;
if the video frames provided by the second video source node are not received within a second time length set by the video source switching operation, acquiring second target video frames from the cached video frames of the second video stream; and providing the second target video frame for the terminal equipment to play.
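The switching behavior of claims 24 and 25 can be sketched as a single decision function. The function name and list-based frame handling are assumptions made for illustration; the patent defines only the fallback rule, not an API.

```python
def frames_after_switch(new_source_frame, cached_frames):
    """Illustrative sketch of claims 24-25: after switching video sources,
    fall back to a cached second target frame if the new source has not
    delivered within the set second time length; if it has, provide the new
    first video frame together with the cached second target frame.

    `new_source_frame` is None when the new source stayed silent past the
    second time length; `cached_frames` is ordered oldest to newest.
    """
    target = cached_frames[-1] if cached_frames else None
    if new_source_frame is None:
        # New source timed out: play the cached second target frame only.
        return [target] if target is not None else []
    # New source responded in time: provide both frames to the terminal device.
    return [new_source_frame, target] if target is not None else [new_source_frame]
```

Choosing the last cached frame as the second target frame mirrors the "N frames closest to the current time" strategy; any of the other selection strategies could be substituted.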
25. The method of claim 24, further comprising:
and if the video frame provided by the second video source node is received within a second time length set by the video source switching operation, providing the currently received first video frame and the second target video frame for the terminal equipment to play.
26. The method of claim 25, wherein providing the currently received first video frame and the second target video frame to the terminal device for playing comprises:
responding to a special effect selection operation aiming at a first video frame, and carrying out special effect processing on the first video frame according to special effect setting information corresponding to the selected first special effect to obtain a first video frame after the special effect processing;
responding to a special effect selection operation aiming at a second target video frame, and carrying out special effect processing on the second target video frame according to special effect setting information corresponding to the selected second special effect to obtain a second target video frame after the special effect processing;
and providing the first video frame after the special effect processing and the second target video frame after the special effect processing for the terminal equipment to play.
27. The method of claim 26, wherein performing special effect processing on the first video frame according to the special effect setting information corresponding to the selected first special effect comprises:
according to the special effect setting information corresponding to the first special effect, respectively carrying out special effect processing on a plurality of image layers included in the first video frame to obtain a first video frame after the special effect processing;
the performing special effect processing on the second target video frame according to the special effect setting information corresponding to the selected second special effect includes:
and respectively performing special effect processing on a plurality of image layers included in the second target video frame according to special effect setting information corresponding to the second special effect to obtain the second target video frame after the special effect processing.
28. The method of claim 26, wherein the first effect is an effect associated with screen entry; the second effect is an effect associated with the screen-out.
29. An electronic device, comprising: a memory, a processor, and a communications component; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps of the method of any of claims 16-28.
30. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 16-28.
CN202010758734.1A 2020-07-31 2020-07-31 Video playing method, device, system and storage medium Pending CN114071215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010758734.1A CN114071215A (en) 2020-07-31 2020-07-31 Video playing method, device, system and storage medium


Publications (1)

Publication Number Publication Date
CN114071215A true CN114071215A (en) 2022-02-18

Family

ID=80227584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010758734.1A Pending CN114071215A (en) 2020-07-31 2020-07-31 Video playing method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN114071215A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101420316A (en) * 2007-10-19 2009-04-29 株式会社日立制作所 Video distribution system for switching video streams
JP2009105482A (en) * 2007-10-19 2009-05-14 Denso Corp Image display system for vehicle
CN109963163A (en) * 2017-12-26 2019-07-02 阿里巴巴集团控股有限公司 Internet video live broadcasting method, device and electronic equipment
CN111182322A (en) * 2019-12-31 2020-05-19 北京达佳互联信息技术有限公司 Director control method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US11871093B2 (en) Socially annotated audiovisual content
US10049702B2 (en) Digital video recorder options for editing content
US10225613B2 (en) Method and apparatus for video playing processing and television
CN111083515B (en) Method, device and system for processing live broadcast content
JP5567851B2 (en) Method, system, and computer program for displaying a secondary media stream within a primary media stream
US9003452B2 (en) Systems, methods, and apparatus for recording broadband content
US20130279879A1 (en) Playback device, control method for playback device, generating device, control method for generating device, recording medium, data structure, control program, and recording medium recording the program
CA2991631A1 (en) Media production system with scheduling feature
US9271046B2 (en) Switching method of different display windows of a TV
CN110213661A (en) Control method, smart television and the computer readable storage medium of full video
US10516911B1 (en) Crowd-sourced media generation
US11166073B2 (en) Dynamically adjusting video merchandising to reflect user preferences
US20220248080A1 (en) Synchronization of multi-viewer events containing socially annotated audiovisual content
CN114154012A (en) Video recommendation method and device, electronic equipment and storage medium
CN114189696B (en) Video playing method and device
US9973813B2 (en) Commercial-free audiovisual content
US20230074478A1 (en) Video distribution device, video distribution method, and video distribution program
JP2019504517A (en) Method, system, and medium for presenting content items while buffering video
CN110679153B (en) Method for providing time placement of rebuffering events
CN105142003B (en) Television program playing method and device
US20140115032A1 (en) Preserving a consumption context for a user session
US20140359668A1 (en) Method, electronic device, and computer program product
CN113473165A (en) Live broadcast control system, live broadcast control method, device, medium and equipment
US11157146B2 (en) Display apparatus and control method thereof for providing preview content
CN114071215A (en) Video playing method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40066421

Country of ref document: HK