CN110784750B - Video playing method and device and computer equipment - Google Patents

Video playing method and device and computer equipment

Info

Publication number
CN110784750B
CN110784750B CN201910745018.7A
Authority
CN
China
Prior art keywords
video
file
video file
playing
playing time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910745018.7A
Other languages
Chinese (zh)
Other versions
CN110784750A (en)
Inventor
彭浩 (Peng Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910745018.7A priority Critical patent/CN110784750B/en
Publication of CN110784750A publication Critical patent/CN110784750A/en
Application granted granted Critical
Publication of CN110784750B publication Critical patent/CN110784750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 … involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 … for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44004 … involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04N21/44008 … involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44016 … involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4402 … involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218 … by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 … by decomposing the content in the time domain, e.g. in time segments

Abstract

An embodiment of the application provides a video playing method in which, while any one of a plurality of video files to be loaded and played (denoted as a first video file) is being loaded or played, the video file played adjacent to it (denoted as a second video file) can be preloaded, and the starting playing time of the second video file is obtained from attribute information of the first video file (such as its playing progress or the playing time of its last frame image). Compared with determining the starting playing time of each segmented video file directly from division points of the total duration of the unsegmented video file, this improves the accuracy of each file's starting playing time.

Description

Video playing method and device and computer equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a video playing method and apparatus, and a computer device.
Background
With the development of computer and multimedia technologies, current browsers can play audio and video files without installing a dedicated plug-in, which greatly benefits video websites such as video-on-demand services. During the loading and playing of a video file, segmented loading and playing can be adopted to avoid a long buffering time before playback, which would otherwise degrade the user's viewing experience.
However, in existing playback of multiple segmented video files, the starting playing time of each file is usually determined by dividing the total duration of the entire video file, which often causes problems such as stuttering and screen flicker and reduces the playback fluency of the segmented video files.
Disclosure of Invention
In view of this, the present application provides a video playing method, apparatus and computer device that determine the starting playing time of the next video file from attribute information of the video file currently being loaded or played, so as to achieve seamless spliced playback among multiple video files and ensure the fluency of video playback.
In order to achieve the above object, the present application provides a video playing method, including:
determining a plurality of video files to be played;
in the process of loading or playing a first video file, preloading a second video file, and obtaining the starting playing time of the second video file from attribute information of the first video file; the first video file and the second video file are any two adjacently played video files among the plurality of video files, and the second video file follows on from the last frame image of the first video file;
and switching to the second video file from the first video file to play according to the initial playing time of the second video file.
Optionally, the obtaining the initial playing time of the second video file by using the attribute information of the first video file includes:
parsing a header file of the first video file to obtain the playing time of the last frame image of the first video file and a first starting playing time;
and performing offset accumulation on the first starting playing time using the playing time of the last frame image of the first video file, to obtain a second starting playing time of the second video file.
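The offset-accumulation step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: timestamps are plain numbers (e.g. seconds), and the function name and inputs are hypothetical.

```javascript
// Offset accumulation: each segment's starting playing time is the
// previous segment's starting playing time plus the playing time of
// the previous segment's last frame image (a segment-relative value).
// computeStartTimes and lastFrameTimes are illustrative names.
function computeStartTimes(lastFrameTimes) {
  const startTimes = [0]; // the first segment starts at time 0
  for (let i = 0; i < lastFrameTimes.length - 1; i++) {
    startTimes.push(startTimes[i] + lastFrameTimes[i]);
  }
  return startTimes;
}
```

For three segments whose last frames play at 10 s, 8 s and 12 s into their respective segments, the starting times come out as 0, 10 and 18.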
Optionally, preloading the second video file in the process of loading or playing the first video file, obtaining the starting playing time of the second video file from the attribute information of the first video file, and playing the second video file includes:
pre-activating a first video file and a second video file;
playing the first video file in a current playing window, and monitoring the playing progress of the first video file;
if the playing progress meets a preset condition, preloading the second video file;
when the playing of the first video file is finished, playing the loaded video data in the second video file in the current playing window;
and before the playing of the first video file is finished, outputting at least one frame image of the second video file in the background.
Optionally, in the process of preloading the second video file, the method further includes:
switching the display attribute of a second video tag of the second video file to a hidden state, and outputting at least one frame image of the second video file;
when the playing of the first video file is finished, playing the loaded video data in the second video file in the current playing window includes:
when the playing of the first video file is finished, switching the display attribute of the first video tag of the first video file from the display state to the hidden state, and switching the display attribute of the second video tag from the hidden state to the display state, so that the loaded video data in the second video file is played in the current playing window;
the display attribute of a video tag being in the hidden state means that the height and width of the corresponding video file's playing window are both 0 and the video file is output muted.
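The tag-switching logic above can be sketched as follows. Video tags are modeled as plain objects so the logic runs outside a browser; in a real page these would be `HTMLVideoElement` instances whose size and `muted` properties are adjusted. All names here are illustrative.

```javascript
// Hidden state per the description: a zero-sized playing window, muted.
function hide(tag) {
  tag.height = 0;
  tag.width = 0;
  tag.muted = true;
}

// Display state: restore the current playing window's size and sound.
function show(tag, height, width) {
  tag.height = height;
  tag.width = width;
  tag.muted = false;
}

// When the first video file finishes, swap the two tags' display states
// so the second file's preloaded data continues in the same window.
function switchTags(firstTag, secondTag, height, width) {
  hide(firstTag);
  show(secondTag, height, width);
}
```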
An embodiment of the present application further provides a video playing device, where the device includes:
the video file determining module is used for determining a plurality of video files to be played;
a starting playing time acquisition module, configured to preload a second video file in the process of loading or playing a first video file and obtain the starting playing time of the second video file from attribute information of the first video file; the first video file and the second video file are any two adjacently played video files among the plurality of video files, and the second video file follows on from the last frame image of the first video file;
and the video playing module is used for playing the second video file according to the initial playing time of the second video file.
An embodiment of the present application further provides a computer device, where the computer device includes:
the communication interface is used for receiving a video file sent by a video source server;
a memory for storing a program for implementing the video playback method as described above;
and the processor is used for loading and executing the program stored in the memory to realize the steps of the video playing method.
As can be seen from the foregoing technical solutions, compared with the prior art, while any one of a plurality of video files to be loaded and played (denoted as a first video file) is being loaded or played, the video file played adjacent to it (denoted as a second video file) can be preloaded, so that the starting playing time of the second video file is obtained from attribute information of the first video file (such as its playing progress or the playing time of its last frame image) rather than by directly using division points of the total duration of the unsegmented video file, which improves the accuracy of each segmented file's starting playing time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system structure for implementing a video playing method provided in the present application;
fig. 2 is a flowchart of an alternative example of a video playing method provided in the present application;
FIG. 3 is a flow chart of yet another alternative example of a video playback method provided herein;
fig. 4 is a schematic diagram illustrating that a data cache space records a plurality of video stream files in the video playing method provided by the present application;
FIG. 5 is a flow chart of yet another alternative example of a video playback method provided herein;
fig. 6 is a schematic diagram illustrating a method for switching a plurality of video tags in a video playing method provided in the present application;
FIG. 7 is a flow chart of yet another alternative example of a video playback method provided herein;
fig. 8 is a block diagram of an alternative example of a video playback device provided in the present application;
fig. 9 is a block diagram of still another alternative example of a video playback device provided in the present application;
FIG. 10 is a block diagram of yet another alternative example of a video playback device provided herein;
fig. 11 is a schematic hardware structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
As noted in the background above, existing video playing methods cannot achieve seamless spliced playback between different video files, so abnormal situations such as brief stalls and screen flicker occur when switching between video files, affecting the fluency of the whole playback process. It is therefore desirable to achieve seamless splicing among video files so as to guarantee the fluency of the video playing process.
It has been observed that there are many types of browsers on the market, and the video playing standards that different browsers support may differ; for example, not all browsers support the MSE (Media Source Extensions) standard.
The MSE standard provides interfaces that allow a developer to precisely control buffering time, media instructions, and memory release. The developer can replace the src attribute of a video tag, which normally specifies the URL (Uniform Resource Locator) of the video to be played, with an object URL created from a MediaSource object, and can create multiple SourceBuffer objects to progressively append data to the video tag; the specific appending process is not described in detail here.
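The MediaSource/SourceBuffer flow described above can be sketched as follows. MediaSource and URL.createObjectURL are browser-only APIs, so the wiring is wrapped in functions and not executed here; the MIME/codec string is one common choice, not something the patent mandates.

```javascript
// Build the MIME string a SourceBuffer is created with,
// e.g. mimeFor('avc1.42E01E', 'mp4a.40.2').
function mimeFor(videoCodec, audioCodec) {
  return `video/mp4; codecs="${videoCodec}, ${audioCodec}"`;
}

// Canonical MSE wiring: replace the video tag's src with an object URL
// backed by a MediaSource, then append fMP4 data via a SourceBuffer.
function attachMediaSource(videoElement, mime, onReady) {
  const mediaSource = new MediaSource();
  videoElement.src = URL.createObjectURL(mediaSource); // replaces the plain URL
  mediaSource.addEventListener('sourceopen', () => {
    const sourceBuffer = mediaSource.addSourceBuffer(mime);
    onReady(sourceBuffer); // caller appends segments with appendBuffer()
  });
  return mediaSource;
}
```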
In practical application, a browser supporting the MSE standard can play video by repackaging it as fMP4, while a browser that does not support the MSE standard can play video by switching among multiple video tags corresponding to the multiple video files. Therefore, to achieve seamless spliced playback of multiple video files for these two types of browsers, the process must be combined with the video loading mode corresponding to the playing mode adopted by that type of browser; for the specific implementation, refer to the description of the corresponding embodiments below.
Whichever playing mode is adopted, while one video file is being loaded or played, the next video file is preloaded and its starting playing time is determined from attribute information of the loaded video file, instead of directly using division points of the total duration of the whole pre-segmentation video file. This greatly improves the accuracy of each segmented file's starting playing time and thereby guarantees seamless spliced playback of the multiple video files.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, which is a schematic diagram of a system architecture for implementing the video playing method provided in the present application, the system may include at least one client 11 and at least one streaming media server 12, where:
the client 11 may be an application installed in a terminal device used by a user, such as a browser, and the user may log in a certain video playing website through the browser, such as various video on demand/live websites and the like.
In practical applications, the same streaming media resource may be shared by multiple video websites, that is, multiple clients 11 may play the same video file, and a user may select a client to be used for watching a video according to his/her preference. Thus, the system may include a plurality of different types of clients, such as client 1, client 2, …, and so on, as shown in fig. 1, but is not limited to the system architecture shown in fig. 1.
The streaming media server 12 may be a service device that provides streaming media data. It is matched with the client 11 and provides the client with the streaming media data of each video file the client can play. Thus, for different types of clients, the streaming media server providing the data may differ, and where the system includes multiple different types of clients it may include streaming media servers of the corresponding types; the type and working process of the streaming media server are not described in detail in this application.
The different types of streaming media servers can communicate with each other, so that the client 1 downloads streaming media data from the streaming media server 2 through the streaming media server 1 to play, and thus, the cross-application platform playing of the same video file is realized, but the present invention is not limited to the implementation manner described in this embodiment.
It should be understood that, in practical applications of the present application, the system architecture proposed in this embodiment is not limited to the structural components shown in fig. 1, and may further include a browser server as needed, so that the web client obtains streaming media data and the like sent by the corresponding streaming media server through the browser server; the system can further comprise a data server for recording various intermediate data generated in the video playing process, such as video downloading record and the like, and the configuration can be carried out according to actual needs, and the detailed description is omitted in the application.
Referring to fig. 2 in conjunction with the system architecture shown in fig. 1, a flowchart of an alternative example of the video playing method provided in the present application is shown. The method may be applied to a terminal device, specifically to a client on the terminal device; the client in the present application mainly refers to a web page client and suits scenes in which video is played through a browser. As shown in fig. 2, the method may include, but is not limited to, the following steps:
s11, acquiring running environment information of the client;
s12, selecting a target video playing mode matched with the client running environment information;
as described above, not all browser environments support the MSE standard used for video playing. A browser supporting the MSE standard may process a video file acquired from the streaming media server according to the MSE standard and progressively append the resulting data stream to a video tag for playback; a browser that does not support the MSE standard acquires the video directly from the streaming media server and plays it by switching video tags. The way video playing is implemented therefore differs between browser types.
Therefore, before playing multiple video files, the browser environment may be detected to determine whether it supports the MSE standard, and hence which video playing mode should be used to process the video files downloaded from the streaming media server. The obtained client running environment information may thus include client configuration information, mainly indicating whether the client supports the MSE standard; the specific information content is not limited in this application.
Optionally, the application may pre-establish a correspondence between different client running environment information and different video playing modes, such as a correspondence table, so that after the running environment information of the client currently in use is obtained, the matching video playing mode can be looked up from that correspondence; the specific implementation of step S12 is not limited in this application.
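The environment-to-mode correspondence can be sketched as a simple feature-detect-and-select step. The mode names and function names below are illustrative, not from the patent; in a browser the MSE check would be `typeof MediaSource !== 'undefined' && MediaSource.isTypeSupported(mime)`, carried here as a flag on the environment object so the logic is runnable anywhere.

```javascript
// Two target video playing modes as described: MSE (fMP4 repackaging with
// SourceBuffer appends) and video-tag switching. Names are illustrative.
const PlayMode = {
  MSE: 'mse',
  VIDEO_TAG: 'videoTag',
};

// env carries the client running environment information; mseSupported is
// an assumed field standing in for the in-browser MediaSource check.
function supportsMse(env) {
  return Boolean(env && env.mseSupported);
}

// Step S12: select the target video playing mode matching the environment.
function selectPlayMode(env) {
  return supportsMse(env) ? PlayMode.MSE : PlayMode.VIDEO_TAG;
}
```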
Step S13, executing the target video playing mode and determining a plurality of video files to be played;
step S14, preloading a second video file in the loading or playing process of the first video file, and obtaining the initial playing time of the second video file by utilizing the attribute information of the first video file;
and S15, switching from the first video file to the second video file to play according to the initial playing time of the second video file.
The first video file may be any one of the video files loaded from the video source server, and the second video file is the video file that follows on from the last frame image of the first video file. That is, among the video files to be played, any two adjacently played files can serve as the first and second video files; they are used here to illustrate how two adjacently played files achieve seamless spliced playback, and the seamless splicing process for any other two adjacent files is similar and is not described in detail in this application.
In addition, the attribute information of the first video file may be the playing time of the last frame image of the first video file, or a parameter such as the playing progress of the first video file; which is used may be determined by the current client running environment, that is, by the selected target video playing mode, as described in the corresponding parts of the embodiments below.
As analysed above, different client running environments, such as different browser environments, are adapted to different target video playing modes, which may be denoted a first video playing mode, a second video playing mode, and so on. The video playing method provided by the application can thus be applied in various browser environments, expanding its range of application.
The first video playing mode may be an MSE playing mode, and the second a video-tag playing mode. In this application, the MSE playing mode suits splitting a video into multiple video files that are loaded progressively: the starting playing time of the currently loaded video file (the one played adjacent to the previous file) is obtained by cumulatively offsetting the video's initial playing time with the playing time of the last frame image of the previous file; adding the playing time of the last frame image of the currently loaded file on this basis yields the starting playing time of the next file; and so on, the starting playing time of each video file is obtained, so that seamless spliced playback is achieved according to these starting playing times.
In this case, the starting playing time of a video file may be its starting DTS (decode timestamp), i.e. the base media decode time; the specific method of acquiring the DTS is not limited in this application.
The video-tag playing mode suits, but is not limited to, scenes of interactive playback of multiple videos: the video tags of the multiple video files to be played can be known in advance, preloading can be adopted, and seamless spliced playback is achieved by adjusting the display attributes of the video tags of the two files being switched.
Regarding the specific implementation process for implementing seamless splicing and playing of multiple video files according to the two video playing modes, reference may be made to the description of the corresponding embodiments below, but the implementation method is not limited to the description of the embodiments below, and the implementation process may be flexibly and appropriately adjusted according to actual requirements, and the detailed description is not given in this application.
It can be seen that, whatever type of client a user uses to play video, the method can automatically select a suitable video playing mode and carry out video loading and playback; and in playing multiple video files, while a first video file is being loaded or played a second video file is preloaded and its starting playing time is obtained from attribute information of the first video file, such as its playing progress or the playing time of its last frame image.
The following describes, for a client running environment that supports the MSE standard, the process in which the client executes the first video playing mode to play multiple video files with seamless splicing. Fig. 3 shows a flowchart of another optional example of the video playing method proposed by the present application; the steps may be executed by the client and may include, but are not limited to, the following:
step S21, sending a video segment loading request to a video source server;
step S22, loading a plurality of video files from the video source server according to a progressive loading mode;
in this embodiment, the MSE standard is used to play the video, and a progressive loading manner may be adopted: for example, a complete video file is divided into a plurality of small video files, which are then downloaded sequentially according to the download address and play order of each file and played in turn.
In this application, the loading of the video files can be realized with a single loading window, for which a maximum buffered-data threshold and an offset pointer that drives data to be loaded continuously can be configured in advance; how the buffered-data threshold and the offset pointer of the loading window are used to load the plurality of video files is not detailed here.
The loaded video files can be video files associated with target video tags, which may be tags of the current browser page; the content of the video files is not limited in this application.
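One plausible shape for the loading window just described is sketched below. The class name `LoadingWindow` and its interface are assumptions for illustration: the offset pointer walks the ordered segment list, and the maximum buffered-data threshold gates further loading until the player consumes data.

```javascript
// Sketch of a single loading window with an offset pointer and a
// maximum buffered-data threshold (names are illustrative).
class LoadingWindow {
  constructor(urls, maxBuffered) {
    this.urls = urls;               // download addresses, in play order
    this.offset = 0;                // offset pointer: next file to request
    this.maxBuffered = maxBuffered; // max bytes allowed in the buffer
    this.buffered = 0;              // bytes loaded but not yet consumed
  }
  // Returns the next URL to load, or null if the buffer is full or the
  // segment list is exhausted.
  next(pendingBytes) {
    if (this.offset >= this.urls.length) return null;
    if (this.buffered + pendingBytes > this.maxBuffered) return null;
    this.buffered += pendingBytes;
    return this.urls[this.offset++];
  }
  // Called when the player consumes data, freeing buffer space.
  consume(bytes) { this.buffered = Math.max(0, this.buffered - bytes); }
}
```

Loading stalls when the threshold is reached and resumes as playback drains the buffer, which is what drives the progressive, step-wise download.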
Step S23, analyzing the header file of the loaded first video file to obtain first video configuration information;
a video file loaded from the video source server is usually a file in the mp4 format (a set of compression coding standards for audio and video information). According to the principle of video stream playing in MSE, a video file in mp4 format needs to be converted into a video stream file in fmp4 (fragmented mp4) format. The fmp4 format is a video format encoded with H.264/AAC; unlike a non-fragmented mp4 file, fmp4 divides the video into a plurality of fragments, so that progressive loading and adaptive playing can be performed more conveniently.
Since the video configuration information of an mp4 file loaded from the video source server is usually recorded in its header file (the moov box), when the header file of the loaded first video file (any one of the multiple video files to be played) is parsed during loading, the first video configuration information of that file can be obtained, such as its audio and video information and frame information, specifically the width, duration, bitrate, encoding format, frame list, and key frame list of the video file, together with the corresponding timestamps and positions in the file.
Step S24, obtaining the playing time and the first starting playing time of the last frame image of the first video file by utilizing the first video configuration information;
Step S25, assembling the first video configuration information and the first starting playing time to obtain a first video stream header file;

step S26, utilizing the playing time of the last frame image of the first video file to perform offset accumulation on the first starting playing time to obtain a second starting playing time of the second video file, so as to write a second video stream header file corresponding to the second video file;
the first video file may be any one of video files loaded from a video source server, and the second video file is a video file linked with a last frame image of the first video file. The method and the device only explain how to realize seamless splicing of two adjacent video files, namely the first video file and the second video file, and have similar seamless splicing processing procedures of any two adjacent video files, and detailed description is not given in the application.
The video stream header file may be a header file in a standard fmp4 format, and the assembly manner of the header file is not limited in this application, and in this embodiment, for the header file of each loaded video file, the header file in the fmp4 format needs to be assembled in the above manner, and details are not described.
In this embodiment, to ensure seamless splicing playing between a video file and the next adjacently played video file, the starting playing time of the next file may be obtained by accumulating an offset onto the starting playing time according to the playing time of the last frame image of the current file. The following code may be used for implementation, but is not limiting:
lastDts(n)=getLastDts(boxes stsz);
startDts(n)=startDts(n-1)+lastDts(n-1);
Here lastDts(n) may represent the playing time of the last frame image of the nth video file, that is, its DTS time (baseMediaDecodeTime), obtained from the header file of the mp4 file (e.g., the stsz box) by the getLastDts() function; the specific implementation is not described in detail. startDts(n) may indicate the starting playing time of the nth video file, i.e., the decoding time stamp of the first frame image of video file n. As the code above shows, the starting playing time startDts(n-1) of the adjacent previous video file (n-1) and the playing time lastDts(n-1) of its last frame image are summed, that is, startDts(n-1) is offset by lastDts(n-1), to obtain the starting playing time of video file n.
On this basis, the starting playing time of the next video file n+1 adjacent to the currently loaded video file n can be obtained as startDts(n+1) = startDts(n) + lastDts(n). Thus, in the process of parsing the currently loaded video file, after the starting playing time of the adjacent next video file is determined, when the header file of that next file is parsed and assembled into the corresponding standard fmp4 header file (i.e., the initialization segment), the obtained starting playing time can be written into it; the specific assembling manner is determined by the differences between mp4 and fmp4 and is not described in detail here.
Therefore, after the second starting playing time of the second video file is obtained, the header file of the second video file may be parsed in the manner described above to obtain the second video configuration information, which is then assembled into the second video stream header file with the second starting playing time written into it. The playing time of the last frame image of the second video file can then be accumulated to obtain the third starting playing time of the third video file (i.e., the video file linked to the last frame image of the second video file), and so on, so that the respective starting playing times of the plurality of video files are obtained and added to the corresponding video stream header files; this is not described in further detail.
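The offset-accumulation rule can be restated as a small executable helper. The lastDts values would in practice come from parsing each mp4 header file as described above; here they are passed in directly so the rule itself can be checked, and the function name is illustrative.

```javascript
// startDts(n) = startDts(n-1) + lastDts(n-1), with startDts(0) = 0.
// lastDtsList[n] is the playing time (in DTS units) of the last frame
// image of video file n, as read from its header file.
function computeStartTimes(lastDtsList) {
  const startDts = [0]; // the first video file starts playing at time 0
  for (let n = 1; n < lastDtsList.length; n++) {
    startDts[n] = startDts[n - 1] + lastDtsList[n - 1];
  }
  return startDts;
}
```

For three files whose last-frame times are 9000, 12000, and 6000 DTS units, the starting times come out as 0, 9000, and 21000, which is exactly the end-to-end splice the text describes.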
Step S27, the video data of the loaded video file is converted and packaged into a corresponding video stream file, and the corresponding video stream header file is added into the video stream file;
step S28, writing the video stream file into a task queue;
after the conversion of the header files from the mp4 format to the fmp4 format is completed in the above manner, the original mp4 data downloaded from the video source server (i.e., the video data in a video file) may be converted and encapsulated into video stream files in the fmp4 format (i.e., media segments) according to the key frame interval; the fmp4 encapsulation method is not described in detail.
For a video file progressively downloaded from the video source server, after the corresponding video stream file is obtained according to the above processing, it may be written into a padding queue (filling queue), that is, the task queue of this embodiment.
Step S29, sequentially reading the stored video stream files from the task queue, and writing the video stream files into a data cache space associated with the target video tag;
in this embodiment, in the process of progressively loading video files from the video source server, each video file may be converted in the manner described above to obtain the corresponding video stream file, which is written into the padding queue in sequence, so that the MSE can directly take a video stream segment from the padding queue and append it to the data cache space associated with the video tag.
In this way, the MSE can continuously acquire video stream data from the padding queue as new video files to be played, and newly loaded video stream files are continuously written into the positions vacated in the padding queue, so that the video can be played continuously.
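The queue-draining step can be sketched as below. A real MSE SourceBuffer only accepts `appendBuffer()` while its `updating` flag is false and fires an `updateend` event when an append completes; the `sourceBuffer` argument here is modeled as any object honoring that contract, so the drain logic stays testable outside a browser, and the helper name is invented for illustration.

```javascript
// Sketch: drain the padding queue into a SourceBuffer-like object,
// appending one video stream segment per completed update.
function makeQueueDrainer(sourceBuffer) {
  const queue = []; // the padding (task) queue of fmp4 segments
  function drain() {
    if (queue.length === 0 || sourceBuffer.updating) return;
    sourceBuffer.appendBuffer(queue.shift()); // append the next segment
  }
  sourceBuffer.addEventListener('updateend', drain); // keep draining
  return {
    push(segment) { queue.push(segment); drain(); }, // newly loaded file
    pending() { return queue.length; },
  };
}
```

Each progressively loaded video stream file is pushed in, and segments flow into the data cache space one at a time as the previous append finishes.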
The video tag is a tag defined in HTML5 for playing (loading) videos; the use of the video tag in traditional video playing and the playing process of HTML5 video in a browser are not described in detail in this application.
It should be noted that, in the process of obtaining the respective starting playing times of the plurality of video files, processing them into corresponding video stream files, and writing those files into the data cache space of the target video tag, the video stream file of one video file may first be successfully written into the data cache space before the next loaded video file is processed, and so on, so that the video stream files are obtained progressively. Alternatively, at certain stages, for example up to writing into the task queue, the video stream files of multiple video files may all be written into the task queue first and then written into the data cache area. In other words, in the process of writing the video stream files of the loaded video files into the data cache space, the processing interval between files is not limited: they may be processed synchronously, one by one, or at preset time intervals, and this is not described in further detail.
Step S210, playing the corresponding video stream file according to the initial playing time of the video stream file cached in the data caching space.
According to the above analysis, this embodiment can obtain the starting playing time of each video stream file (i.e., each fmp4-format video file), and the MSE may set the mode of the SourceBuffer to "segments". The mode field determines how the video file is played; its value is usually "segments" or "sequence", where "segments" indicates that the video file is played according to the playing timestamps (PTS, in this embodiment the starting playing time) carried in the video stream.
Thus, each video stream file cached in the data cache space can be cached in the manner shown in fig. 4: the storage order and playing order of the video stream files (such as stream0, stream1, stream2, etc. in fig. 4) are determined according to their starting playing times, ensuring seamless splicing between two adjacently played video stream files. The viewing experience for the user is that of watching one complete loaded video, and the fluency of video playing is guaranteed.
In practical application, some browsers do not support video playing under the MSE standard. For such browser clients, the following method can be adopted to realize seamless splicing playing of multiple video files and ensure the fluency of video playing, although the implementation is not limited to the video playing method described in this embodiment. Specifically, referring to the flowchart of another optional example of the video playing method provided by this application shown in fig. 5, the method may include:
step S31, activating a first video file and a second video file in advance;
the first video file may be the video file the user plays first, and the second video file may be the video file played next, that is, the video file linked to the last frame image of the first video file; in other words, the first video file and the second video file are any two adjacently played video files among the files to be played.
In practical applications, the client usually needs to determine in advance which video files are available for the user to play, and the video files can be activated in advance in sequence. For example, in a scene where a browser page is used to play a video, each video file may be configured with a video tag, where the video tag includes a download address URL of the video file, and the browser may pre-activate the video tag to obtain a video file corresponding to the video tag.
Step S32, playing a first video file in a current playing window, and monitoring the playing progress of the first video file;
in practical application of this embodiment, for the video file played in the current display interface, the display attribute of its video tag can be switched to the display state. As shown in fig. 6, the width and height of the playing window in the display attributes of the video tag can be set to fill the page, such as width=100% and height=100%, or to a full-screen state with another ratio, so that the video file is displayed in the browser page, that is, the first video file is played in the playing window currently in the full-screen state.
Step S33, preloading a second video file when the playing progress meets a preset condition to obtain cached video data;
since seamless splicing playing of the plurality of video files is required, the playing progress of the first video file can be monitored during its playing. The playing progress may be a playing duration, a played-time ratio, or the like, so that when the first video file is about to finish, or has played for a certain duration or proportion, the adjacently played second video file is preloaded in time.
It can be seen that the preset condition may be whether the current playing time of the video file reaches a preset playing time, whether the current played-time ratio reaches a preset ratio, and so on; it may be determined according to the specific content of the playing progress, and the content of the preset condition is not limited by this application.
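Both forms of the preset condition mentioned above can be captured in one small check. This is a sketch; the function name and option names are invented, and the thresholds would be chosen per application.

```javascript
// Sketch: does the playing progress meet the preset preload condition?
// Supports both threshold forms from the text: seconds remaining before
// the end, or the fraction of the total duration already played.
function shouldPreloadNext(currentTime, duration, opts) {
  if (opts.remainingSeconds !== undefined &&
      duration - currentTime <= opts.remainingSeconds) return true;
  if (opts.playedFraction !== undefined &&
      currentTime / duration >= opts.playedFraction) return true;
  return false;
}
```

A client would evaluate this on each timeupdate of the first video file and, once it returns true, start preloading the second video file in the background.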
According to the above analysis, this application does not require the video files to be loaded simultaneously; preloading the adjacent video file shortly before one video file finishes playing does not affect the seamless splicing effect, and it avoids the situation in which loading multiple video files at the same time degrades the playing of the currently played video file, for example causing stuttering.
Step S34, switching the display attribute of the second video label of the second video file into a hidden state, and outputting at least one frame of image of the second video file;
step S35, when the playing of the first video file is finished, switching the display attribute of the first video tag of the first video file from the display state to the hidden state, and switching the display attribute of the second video tag of the second video file from the hidden state to the display state, so that the current playing window plays the cached video data of the second video file.
Following the above analysis, switching the display attribute of the video tag of a video file to the hidden state may mean setting the width and height in its display attributes to 0, for example setting both the height and the width of the playing window for that video file to 0, so that the video file is hidden in the browser page; at the same time, the video file can be set to output in a mute state with the video tag sized 0x0, that is, the video file preloaded in the background is output muted at a size of 0x0. Correspondingly, switching the display attribute of the video tag of a video file to the display state may mean setting the width and height of the playing window to 100% (although not limited to this setting), so that the video file played in the foreground is output overlaid on the player.
In this embodiment, in the process of preloading the second video file, the cached video data may be played in a silent manner in the background, that is, the cached video data of the second video file is played under the condition that the display attribute of the video tag is switched to the hidden state, and the watching of the video played in the current playing window by the user is not affected.
When the first video file played in the foreground finishes playing, the second video tag of the second video file played in the background can be switched to the display state, that is, the display attribute of the second video file is switched to the display state, so that the current display interface plays the cached video of the second video file. Meanwhile, the first video tag of the first video file can be switched back to the background, that is, hidden and muted. In the same manner, the background can continue to silently load and play a third video file, and so on, realizing seamless splicing playing of a plurality of video files, where the third video file is the video file linked to the last frame image of the second video file.
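The show/hide swap between the two video tags can be sketched as follows. The video elements are modeled here as plain objects with `style` and `muted` fields so the swap itself can be verified; the helper names are illustrative, and a real implementation would operate on actual HTMLVideoElement instances.

```javascript
// Hide a tag: zero-size playing window plus mute, per the text above.
function hideTag(video) {
  video.style.width = '0';
  video.style.height = '0';
  video.muted = true;
}
// Show a tag: full-size playing window with sound.
function showTag(video) {
  video.style.width = '100%';
  video.style.height = '100%';
  video.muted = false;
}
// When the foreground file ends, swap the roles of the two tags so the
// next swap (for the third video file, and so on) reuses the same logic.
function swapOnEnded(foreground, background) {
  hideTag(foreground);
  showTag(background);
  return { foreground: background, background: foreground };
}
```

Because the roles are returned swapped, the same pair of tags can alternate indefinitely across the whole playlist.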
Optionally, in practical application, to ensure that switching is performed only after the video tag of the pre-cached video file in the background has rendered a picture, before the display attributes of the video tags of the different video files are switched it may be verified whether the video file to be played has been loaded successfully. If not, loading continues; if so, playing proceeds in the manner described in this embodiment. For the specific process, reference may be made to, but is not limited to, the flowchart shown in fig. 7.
In fig. 7, the foreground video may refer to the first video file and the background video to the second video file. In combination with the above embodiment, while the first video file plays in the foreground, the background may preload the second video file, play at least one frame of image so that the play time (playtime) becomes greater than 0, and then pause playing (for example, by executing the pause() function), waiting either for the player to finish playing the first video file or for the user to trigger a playNext operation to play the next video. The foreground may first detect whether the second video file in the background is ready; if not, it waits until the preloaded second video file has played at least one frame of image, that is, until the background video file is ready, and then switches from the first video file to the second video file. Detecting whether the background second video file is ready may include sending a seek request to the background; if seeked (ready) is fed back, the second video file in the background is determined to be ready.
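The readiness gate from fig. 7 can be sketched as below. The background video is modeled as a plain object; the readiness criterion (at least one frame rendered, so currentTime > 0, and playback paused awaiting the switch) follows the text, while the function names and the 'waiting'/'switched' return values are invented for illustration.

```javascript
// Sketch: the background video counts as ready once it has output at
// least one frame (currentTime > 0) and is paused awaiting the switch.
function backgroundReady(video) {
  return video.currentTime > 0 && video.paused;
}
// On playback end or a user playNext operation, switch only if ready;
// otherwise keep waiting for the preload to produce a frame.
function tryPlayNext(foreground, background, doSwitch) {
  if (!backgroundReady(background)) return 'waiting';
  doSwitch(foreground, background); // e.g. swap the tag display attributes
  return 'switched';
}
```

If 'waiting' is returned, the caller retries after the background video reports progress (e.g. on its next timeupdate or seeked event), which matches the wait loop in the flowchart.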
Thus, by switching the plurality of video tags in the above manner, this embodiment realizes seamless splicing playing of a plurality of video files and ensures that, as perceived by the user, the videos join without stuttering, flickering, or similar phenomena.
Referring to fig. 8, a flowchart of an alternative example of a video playing apparatus provided in the present application, where the apparatus may be applied to a computer device, such as a terminal device, the apparatus may include:
a video determining module 21, configured to determine a plurality of video files to be played;
an initial playing time obtaining module 22, configured to preload a second video file in a process of loading or playing a first video file, and obtain an initial playing time of the second video file by using attribute information of the first video file; the first video file and the second video file are two video files which are played in any adjacent mode in the plurality of video files, and the second video file is connected with the last frame of image of the first video file;
and the video playing module 23 is configured to play the second video file according to the initial playing time of the second video file.
Optionally, as shown in fig. 9, the start playing time obtaining module 22 may include:
a file parsing unit 2211, configured to parse a header file of the first video file to obtain a playing time and a first starting playing time of a last frame image of the first video file;
an initial playing time obtaining unit 2212, configured to perform offset accumulation on the first initial playing time by using the playing time of the last frame image of the first video file, so as to obtain a second initial playing time of a second video file.
As an alternative embodiment of the present application, the apparatus may further include:
the header file analysis module is used for analyzing the header file of the loaded second video file to obtain second video configuration information;
the video stream header file assembling module is used for assembling the second video configuration information and the second initial playing time to obtain a second video stream header file;
the package conversion module is used for converting and packaging the video data of the second video file into a second video stream file and adding the second video stream header file into the second video stream file;
correspondingly, the video playing module may be specifically configured to perform video playing according to the starting playing time of the second video stream file.
Optionally, on the basis of the foregoing embodiment, the apparatus may further include:
the writing module is used for writing the second video stream file into a task queue;
the reading module is used for sequentially reading the stored video stream files from the task queue and writing the video stream files into a data cache space associated with a target video tag;
correspondingly, the video playing module 23 may be further specifically configured to play the corresponding video stream file according to the starting playing time of the video stream file cached in the data caching space.
As another optional embodiment of the present application, as shown in fig. 10, the start playing time obtaining module 22 may include:
an activation unit 2221 configured to activate the first video file and the second video file in advance;
a playing monitoring unit 2222, configured to play the first video file in a current playing window, and monitor a playing progress of the first video file;
the preloading unit 2223 is configured to preload the second video file when the playing progress meets a preset condition.
Correspondingly, the video playing module 23 is specifically configured to play the loaded video data in the second video file in the current playing window when the playing of the first video file is finished.
Before the first video file is played, at least one frame of image in the second video file is output by a background so as to realize seamless switching of the video files and guarantee video playing fluency.
Optionally, as shown in fig. 10, the apparatus may further include:
a background output module 24, configured to switch a display attribute of a second video tag of the second video file to a hidden state, and output at least one frame of image of the second video file;
accordingly, the video playing module 23 may include:
an adjusting playing unit 231, configured to switch the display attribute of the first video tag of the first video file from the display state to the hidden state and switch the display attribute of the second video tag from the hidden state to the display state when the playing of the first video file is finished, so that the current playing window plays the loaded video data in the second video file;
the display attribute of the video label is in a hidden state, which means that the height and width of a playing window for playing a corresponding video file are both 0 and the video label is output in a mute state.
On the basis of the foregoing embodiments, the video playing apparatus may further include:
the information acquisition module is used for acquiring the running environment information of the client;
and the mode selection module is used for selecting a target video playing mode matched with the client running environment information, wherein the target video playing mode is a first video playing mode or a second video playing mode.
Therefore, the user can play videos by using various types of clients, loading and playing can be achieved, and the smoothness of the whole playing process can be guaranteed.
The embodiment of the present application further provides a storage medium, on which a program is stored, and the program is called and executed by a processor, so as to implement the steps of the video playing method described in the above method embodiment.
Referring to fig. 11, a hardware structure diagram of a computer device provided in an embodiment of the present application, the computer device may include: a communication interface 31, a memory 32, and a processor 33;
in the embodiment of the present invention, the communication interface 31, the memory 32, and the processor 33 may implement mutual communication through a communication bus, and the number of the communication interface 31, the memory 32, the processor 33, and the communication bus may be at least one.
Optionally, the communication interface 31 may be an interface of a communication module, such as an interface of a WIFI module, an interface of a GSM module, and the like, and is configured to receive a video file sent by the video source server; of course, the communication interface 31 may also include an interface for implementing data interaction inside the computer device, such as serial/parallel interface.
The processor 33 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
The memory 32 may comprise high-speed RAM memory and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The memory 32 stores a program, and the processor 33 calls the program stored in the memory 32 to implement the steps of the video playing method applied to the computer device, where the specific implementation process may refer to the description of the corresponding parts of the above method embodiments.
In practical applications, the computer device may be a terminal device of a user, such as a notebook computer, a mobile phone, and the like, and therefore, the computer device may further include components such as an input device, a display, a microphone, and the like, and a hardware component structure included in the computer device may be determined according to a specific product type of the computer device, and is not limited to the structure described above.
The embodiments in the present specification are described in a progressive or parallel manner, and each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device and the computer equipment disclosed by the embodiment correspond to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A video playback method, the method comprising:
determining a plurality of video files to be played;
in the process of loading or playing a first video file, preloading a second video file, and obtaining the initial playing time of the second video file by using the attribute information of the first video file; the first video file and the second video file are two video files which are played in any adjacent mode in the plurality of video files, the second video file is connected with the last frame image of the first video file, and the attribute information of the first video file is determined according to the selected target video playing mode;
analyzing the header file of the loaded second video file to obtain second video configuration information;
assembling the second video configuration information and the second initial playing time to obtain a second video stream header file;
the video data of the second video file is converted and packaged into a second video stream file, and the second video stream header file is added into the second video stream file;
switching from the first video file to the second video file for playing according to the initial playing time of the second video file, wherein switching from the first video file to the second video file for playing according to the initial playing time of the second video file comprises: video playing is carried out according to the initial playing time of the second video stream file;
wherein, the obtaining the initial playing time of the second video file by using the attribute information of the first video file comprises:
analyzing a header file of the first video file to obtain the playing time and the first starting playing time of the last frame of image of the first video file;
and performing offset accumulation on the first initial playing time by using the playing time of the last frame of image of the first video file to obtain a second initial playing time of a second video file.
2. The method of claim 1, further comprising:
writing the second video stream file into a task queue;
sequentially reading the stored video stream files from the task queue, and writing them into a data cache space associated with a target video tag;
wherein playing the second video file according to the initial playing time of the second video file comprises:
playing the corresponding video stream file according to the initial playing time of the video stream file cached in the data cache space.
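In a browser, the "data cache space associated with a target video tag" in claim 2 maps naturally onto a Media Source Extensions `SourceBuffer`. The sketch below shows only the producer/consumer pattern, with the cache space abstracted behind an interface so it stays self-contained; all names are assumptions, not from the patent.

```typescript
// Minimal sketch of claim 2's task queue: assembled stream files are
// enqueued, then drained in order into a cache space (e.g. an MSE
// SourceBuffer attached to the target video tag).

interface StreamFile {
  initialPlayingTimeMs: number; // carried in the assembled stream header
  data: Uint8Array;             // converted and encapsulated video data
}

interface CacheSpace {
  // Under MSE this would call SourceBuffer.appendBuffer(file.data).
  append(file: StreamFile): void;
}

class StreamTaskQueue {
  private tasks: StreamFile[] = [];

  // "writing the second video stream file into a task queue"
  enqueue(file: StreamFile): void {
    this.tasks.push(file);
  }

  // "sequentially reading the stored video stream files from the task
  //  queue, and writing them into the data cache space"
  drainInto(cache: CacheSpace): number {
    let written = 0;
    while (this.tasks.length > 0) {
      cache.append(this.tasks.shift()!);
      written++;
    }
    return written; // number of stream files written, preserving FIFO order
  }
}
```

A real MSE implementation would additionally have to wait for each `updateend` event before the next append; the FIFO queue is what makes that serialization straightforward.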
3. The method according to claim 1, wherein preloading the second video file during the loading or playing of the first video file, obtaining the initial playing time of the second video file by using the attribute information of the first video file, and playing the second video file according to the initial playing time of the second video file comprise:
pre-activating the first video file and the second video file;
playing the first video file in a current playing window, and monitoring a playing progress of the first video file;
if the playing progress meets a preset condition, preloading the second video file;
when the playing of the first video file ends, playing the loaded video data of the second video file in the current playing window;
and before the playing of the first video file ends, outputting at least one frame image of the second video file in the background.
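One plausible reading of claim 3's "playing progress meets a preset condition" is a remaining-time threshold, as sketched below. The threshold value, function name, and parameter names are assumptions for illustration only.

```typescript
// Sketch of the preload trigger in claim 3: start preloading the second
// video file once the remaining time of the first file drops below a
// threshold. The 5-second default is an assumed value, not from the patent.

function shouldPreloadNext(
  currentTimeSec: number,
  durationSec: number,
  remainingThresholdSec: number = 5
): boolean {
  if (durationSec <= 0) return false; // metadata not loaded yet
  return durationSec - currentTimeSec <= remainingThresholdSec;
}
```

In a browser this predicate would be evaluated inside a `timeupdate` listener on the first video tag, using its `currentTime` and `duration` properties.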
4. The method of claim 3, wherein, during the preloading of the second video file, the method further comprises:
switching a display attribute of a second video tag of the second video file to a hidden state, and outputting at least one frame image of the second video file;
wherein playing the loaded video data of the second video file in the current playing window when the playing of the first video file ends comprises:
when the playing of the first video file ends, switching a display attribute of a first video tag of the first video file from a display state to the hidden state, and switching the display attribute of the second video tag from the hidden state to the display state, so that the current playing window plays the loaded video data of the second video file;
wherein a video tag whose display attribute is in the hidden state means that the height and width of the playing window of the corresponding video file are both 0, and the video file is output in a muted state.
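Claim 4 defines the hidden state concretely: a 0x0 playing window with muted output. The sketch below models a video tag as a plain object standing in for an `HTMLVideoElement`, so the swap logic is self-contained and runnable outside a browser; the interface and function names are assumptions.

```typescript
// Sketch of claim 4's tag swap. In a browser, widthPx/heightPx would be
// the element's style width/height and `muted` the HTMLVideoElement.muted
// property; a plain object is used here to keep the sketch self-contained.

interface VideoTag {
  widthPx: number;
  heightPx: number;
  muted: boolean;
}

function setHidden(tag: VideoTag): void {
  tag.widthPx = 0;  // playing window width 0
  tag.heightPx = 0; // playing window height 0
  tag.muted = true; // output in a muted state
}

function setVisible(tag: VideoTag, widthPx: number, heightPx: number): void {
  tag.widthPx = widthPx;
  tag.heightPx = heightPx;
  tag.muted = false;
}

// When the first file ends, the two tags swap states so the current
// playing window seamlessly shows the preloaded second file.
function swapTags(first: VideoTag, second: VideoTag, w: number, h: number): void {
  setHidden(first);
  setVisible(second, w, h);
}
```

Keeping the hidden tag at 0x0 rather than `display: none` is what lets the background tag keep decoding and outputting frames, which is the point of the preload.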
5. The method of any one of claims 1 to 4, further comprising:
acquiring running environment information of a client;
and selecting a target video playing mode matching the client running environment information, wherein the target video playing mode is a first video playing mode or a second video playing mode.
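Claim 5's mode selection can be sketched as capability-based feature detection. The assumption here, not stated in the patent, is that the first playing mode requires Media Source Extensions (for the stream-file assembly of claim 1) and the second mode is a plain-tag fallback; the flag and mode names are illustrative.

```typescript
// Sketch of claim 5: pick the playing mode the client environment supports.
// In a browser the capability flag would come from feature detection,
// e.g. typeof MediaSource !== "undefined".

type PlayingMode = "first" | "second";

interface ClientEnvironment {
  supportsMediaSource: boolean; // assumed requirement of the first mode
}

function selectPlayingMode(env: ClientEnvironment): PlayingMode {
  return env.supportsMediaSource ? "first" : "second";
}
```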
6. A video playing apparatus, comprising:
a video file determining module, configured to determine a plurality of video files to be played;
an initial playing time obtaining module, configured to preload a second video file during the loading or playing of a first video file and obtain an initial playing time of the second video file by using attribute information of the first video file; wherein the first video file and the second video file are any two adjacently played video files among the plurality of video files, the second video file follows the last frame image of the first video file, and the attribute information of the first video file is determined according to a selected target video playing mode;
a header file parsing module, configured to parse a header file of the loaded second video file to obtain second video configuration information;
a video stream header file assembling module, configured to assemble the second video configuration information and a second initial playing time to obtain a second video stream header file;
an encapsulation conversion module, configured to convert and encapsulate the video data of the second video file into a second video stream file and add the second video stream header file to the second video stream file;
a video playing module, configured to switch from the first video file to the second video file for playing according to the initial playing time of the second video file, wherein the switching comprises: performing video playing according to the initial playing time of the second video stream file;
wherein the initial playing time obtaining module comprises:
a file parsing unit, configured to parse a header file of the first video file to obtain the playing time of the last frame image of the first video file and a first initial playing time;
and an initial playing time obtaining unit, configured to perform offset accumulation on the first initial playing time by using the playing time of the last frame image of the first video file, to obtain a second initial playing time of the second video file.
7. A computer device, characterized in that the computer device comprises:
the communication interface is used for receiving a video file sent by a video source server;
a memory for storing a program for implementing the video playing method according to any one of claims 1 to 5;
and a processor for loading and executing the program stored in the memory to implement the steps of the video playing method according to any one of claims 1 to 5.
CN201910745018.7A 2019-08-13 2019-08-13 Video playing method and device and computer equipment Active CN110784750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910745018.7A CN110784750B (en) 2019-08-13 2019-08-13 Video playing method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910745018.7A CN110784750B (en) 2019-08-13 2019-08-13 Video playing method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN110784750A CN110784750A (en) 2020-02-11
CN110784750B (en) 2022-11-11

Family

ID=69383959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910745018.7A Active CN110784750B (en) 2019-08-13 2019-08-13 Video playing method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN110784750B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111615002B (en) * 2020-04-30 2021-07-27 腾讯科技(深圳)有限公司 Video background playing control method, device and system and electronic equipment
CN112202751B (en) * 2020-09-25 2022-06-07 腾讯科技(深圳)有限公司 Animation processing method and device, electronic equipment and storage medium
CN112954431A (en) * 2021-01-29 2021-06-11 北京奇艺世纪科技有限公司 Video playing method and device, video playing equipment and readable storage medium
CN113014833A (en) * 2021-03-09 2021-06-22 湖南快乐阳光互动娱乐传媒有限公司 Video playing method and device
CN114401441B (en) * 2022-01-12 2024-04-02 深圳市酷开网络科技股份有限公司 Short video play processing method and device, intelligent terminal and storage medium
CN114051152A (en) * 2022-01-17 2022-02-15 飞狐信息技术(天津)有限公司 Video playing method and device, storage medium and electronic equipment
CN114615550B (en) * 2022-03-17 2023-12-08 北京奇艺世纪科技有限公司 Video acquisition method and device
CN114615548B (en) * 2022-03-29 2023-12-26 湖南国科微电子股份有限公司 Video data processing method and device and computer equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5953073A (en) * 1996-07-29 1999-09-14 International Business Machines Corp. Method for relating indexing information associated with at least two indexing schemes to facilitate the play-back of user-specified digital video data and a video client incorporating the same
CN103517131A (en) * 2012-08-14 2014-01-15 Tcl集团股份有限公司 Method for playing segmental videos in gapless mode in television set and television set
CN103634605A (en) * 2013-12-04 2014-03-12 百度在线网络技术(北京)有限公司 Processing method and device for video images
CN103873921A (en) * 2014-03-26 2014-06-18 北京奇艺世纪科技有限公司 Seamless video play method and player
CN104394380A (en) * 2014-12-09 2015-03-04 浙江省公众信息产业有限公司 Video monitoring management system and playback method of video monitoring record
CN108260014A (en) * 2018-04-12 2018-07-06 腾讯科技(上海)有限公司 A kind of video broadcasting method and terminal and storage medium
CN109688473A (en) * 2018-12-07 2019-04-26 广州市百果园信息技术有限公司 More video broadcasting methods and storage medium, computer equipment

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2340144A1 (en) * 2000-03-09 2001-09-09 Ethereal Minds, Inc. System and method for pre-loading still imagery data in an interactive multimedia presentation environment
US7418190B2 (en) * 2002-08-22 2008-08-26 Microsoft Corporation Accelerated access to frames from a compressed digital video stream without keyframes
JP4319094B2 (en) * 2004-06-11 2009-08-26 ソニー株式会社 Data processing apparatus, data processing method, program, program recording medium, and data recording medium
JP2006134520A (en) * 2004-11-08 2006-05-25 Toshiba Corp Information storage medium, and method and device for information reproduction
JP2006186842A (en) * 2004-12-28 2006-07-13 Toshiba Corp Information storage medium, information reproducing method, information decoding method, and information reproducing device
JP2007080357A (en) * 2005-09-13 2007-03-29 Toshiba Corp Information storage medium, information reproducing method, information reproducing apparatus
GB2441365B (en) * 2006-09-04 2009-10-07 Nds Ltd Displaying video data
US10713018B2 (en) * 2009-12-07 2020-07-14 International Business Machines Corporation Interactive video player component for mashup interfaces
CN103024456B (en) * 2011-09-27 2016-02-24 腾讯科技(深圳)有限公司 A kind of Online Video player method and video playback server
CN105208442B (en) * 2014-06-27 2018-06-26 贝壳网际(北京)安全技术有限公司 A kind of video broadcasting method and device of video playing application program
CN105992068A (en) * 2015-05-19 2016-10-05 乐视移动智能信息技术(北京)有限公司 Video file preview method and device
CN105681912A (en) * 2015-10-16 2016-06-15 乐视致新电子科技(天津)有限公司 Video playing method and device
CN105657523B (en) * 2016-01-28 2019-11-08 腾讯科技(深圳)有限公司 The method and apparatus that video preloads
CN106572358B (en) * 2016-11-11 2022-03-08 青岛海信宽带多媒体技术有限公司 Live broadcast time shifting method and client
CN108882018B (en) * 2017-05-09 2020-10-20 阿里巴巴(中国)有限公司 Video playing and data providing method in virtual scene, client and server

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A lightweight personalized image preloading method for IPTV system"; Wen-Chang Tsai; 2017 19th International Conference on Advanced Communication Technology; 2017-03-30; full text *
"HTML5-based Multimedia Playback Website"; Zheng Peichun; China Masters' Theses Full-text Database; 2015-06-15; full text *

Also Published As

Publication number Publication date
CN110784750A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
CN110784750B (en) Video playing method and device and computer equipment
US10187668B2 (en) Method, system and server for live streaming audio-video file
US10645465B2 (en) Video file universal identifier for metadata resolution
US8812735B2 (en) Content reproduction system, content reproduction apparatus, program, content reproduction method, and providing content server
CN108965397A (en) Cloud video editing method and device, editing equipment and storage medium
WO2017063399A1 (en) Video playback method and device
CN110062284B (en) Video playing method and device and electronic equipment
US9792363B2 (en) Video display method
US10149020B2 (en) Method for playing a media stream in a browser application
CA2943975C (en) Method for associating media files with additional content
US20110289108A1 (en) Assisted Hybrid Mobile Browser
US9666233B2 (en) Efficient video frame rendering in compliance with cross-origin resource restrictions
CN111669645B (en) Video playing method and device, electronic equipment and storage medium
JP2021508995A (en) Network playback method, device and storage medium for media files
AU2016410930A1 (en) Online television playing method and apparatus
US20150268808A1 (en) Method, Device and System for Multi-Speed Playing
CN112995698A (en) Video playing method, client, service platform and intelligent conference system
CN114040245A (en) Video playing method and device, computer storage medium and electronic equipment
CN108632644B (en) Preview display method and device
KR20140007893A (en) A method for optimizing a video stream
JP2016072858A (en) Media data generation method, media data reproduction method, media data generation device, media data reproduction device, computer readable recording medium and program
CN111918074A (en) Live video fault early warning method and related equipment
US10049158B1 (en) Analyzing user behavior relative to media content
US10547878B2 (en) Hybrid transmission protocol
US20160173551A1 (en) System and method for session mobility for adaptive bitrate streaming

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant