CN114979712A - Video playing starting method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114979712A
CN114979712A (application CN202210522521.8A)
Authority
CN
China
Prior art keywords
frame, frames, video, media, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210522521.8A
Other languages
Chinese (zh)
Other versions
CN114979712B (en)
Inventor
王磊
桂润祥
曾显华
李晨光
曾栩鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202210522521.8A
Priority claimed from CN202210522521.8A
Publication of CN114979712A
Application granted
Publication of CN114979712B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; operations thereof
    • H04N 21/23: Processing of content or additional data; elementary server operations; server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/234381: Processing of video elementary streams involving reformatting operations by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440281: Processing of video elementary streams involving reformatting operations by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the disclosure provide a video start-play method, apparatus, device, and storage medium. The method includes: determining the media frames to be delivered according to the start-play time, wherein the media frames comprise video frames and audio frames; determining a target frame count according to a minimum delay duration; if the number of media frames to be delivered is greater than the target frame count, performing frame dropping on the media frames to be delivered to obtain first remaining media frames; and updating the timestamps of the first remaining media frames and delivering the updated frames to the client, so that the client starts playing the video from the first remaining media frames. In the video start-play method provided by the embodiments of the disclosure, when the number of media frames to be delivered exceeds the target frame count, frame dropping is applied to those frames, which relieves decoding pressure, preserves the timeliness of video start-play, and reduces playback stutter.

Description

Video playing starting method, device, equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of video playing, and in particular, to a video playing starting method, device, equipment and storage medium.
Background
When video playback starts, the delivered video frame data contains redundancy. To render the first frame quickly, all delivered video frames, including the redundant ones, must be decoded in a short time. This places heavy performance pressure on the Central Processing Unit (CPU), affects the scheduling of other threads, causes problems such as stalled interface switching and delayed message responses, delays video playback, and seriously degrades the user experience.
Disclosure of Invention
The embodiments of the disclosure provide a video start-play method, apparatus, device, and storage medium. By applying a degree of frame dropping to the delivered video frames, decoding pressure can be relieved while the timeliness of video start-play is preserved.
In a first aspect, an embodiment of the present disclosure provides a video start-play method, including:
determining media frames to be delivered according to the start-play time, wherein the media frames comprise video frames and audio frames;
determining a target frame count according to a minimum delay duration;
if the number of media frames to be delivered is greater than the target frame count, performing frame dropping on the media frames to be delivered to obtain first remaining media frames; and
updating the timestamps of the first remaining media frames and delivering the updated frames to the client, so that the client starts playing the video from the first remaining media frames.
In a second aspect, an embodiment of the present disclosure further provides a video start-play method, including:
receiving media frames to be decoded delivered by a CDN, wherein the media frames comprise video frames and audio frames;
determining a target frame count according to a minimum delay duration;
if the number of media frames to be decoded is greater than the target frame count, performing frame dropping on the media frames to be decoded to obtain first remaining media frames; and
updating the timestamps of the first remaining media frames, decoding the updated frames, and starting video playback from the decoded first remaining media frames.
In a third aspect, an embodiment of the present disclosure further provides a video start-play apparatus, including:
a to-be-delivered media frame determining module, configured to determine the media frames to be delivered according to the start-play time, wherein the media frames comprise video frames and audio frames;
a target frame count determining module, configured to determine the target frame count according to the minimum delay duration;
a frame dropping module, configured to perform frame dropping on the media frames to be delivered to obtain first remaining media frames if the number of media frames to be delivered is greater than the target frame count; and
a timestamp updating module, configured to update the timestamps of the first remaining media frames and deliver the updated frames to the client, so that the client plays the video from the first remaining media frames.
In a fourth aspect, an embodiment of the present disclosure further provides a video start-play apparatus, including:
a to-be-decoded media frame receiving module, configured to receive the media frames to be decoded delivered by the CDN, wherein the media frames comprise video frames and audio frames;
a target frame count determining module, configured to determine the target frame count according to the minimum delay duration;
a frame dropping module, configured to perform frame dropping on the media frames to be decoded to obtain first remaining media frames if the number of media frames to be decoded is greater than the target frame count; and
a timestamp updating module, configured to update the timestamps of the first remaining media frames, decode the updated frames, and start video playback from the decoded first remaining media frames.
In a fifth aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processing devices;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processing devices, the one or more processing devices are caused to implement the video playing method according to the embodiment of the disclosure.
In a sixth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored; when executed by a processing device, the program implements the video start-play method described in the embodiments of the disclosure.
Drawings
Fig. 1 is a flowchart of a video start-play method in an embodiment of the present disclosure;
Fig. 2a is an exemplary diagram of frame dropping in an embodiment of the present disclosure;
Fig. 2b is an exemplary diagram of frame dropping in an embodiment of the present disclosure;
Fig. 2c is an exemplary diagram of frame dropping in an embodiment of the present disclosure;
Fig. 3a is an exemplary diagram of timestamp updating in an embodiment of the present disclosure;
Fig. 3b is an exemplary diagram of timestamp updating in an embodiment of the present disclosure;
Fig. 4 is a flowchart of a video start-play method in an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a video start-play apparatus in an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a video start-play apparatus in an embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In a video coding sequence there are three main types of coded frames: key frames (I-frames), forward reference frames (P-frames), and bidirectional reference frames (B-frames). A complete group of pictures (GOP) consists of the video frames from one I-frame up to the next I-frame. Normal decoding must start from an I-frame. Therefore, in the start-play stage, when the CDN delivers video frames, it must deliver the frames between the forward I-frame closest to the start-play time and the start-play time itself. To render the first frame quickly, all video frames delivered by the CDN, including the redundant ones, must be decoded in a short time, which places heavy performance pressure on the CPU, affects the scheduling of other threads, causes stalled UI switching and delayed message responses, and seriously degrades the user experience.
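The I-frame constraint described above can be sketched in Python (a minimal illustration with hypothetical frame tuples, not code from the patent):

```python
def nearest_forward_i_frame(frames, start_time):
    """Index of the latest I-frame at or before start_time.

    Decoding must begin at an I-frame, because P-frames and B-frames
    only encode differences relative to other frames.
    frames: list of (pts_seconds, frame_type) tuples in timestamp order.
    """
    candidates = [i for i, (pts, ftype) in enumerate(frames)
                  if ftype == 'I' and pts <= start_time]
    return max(candidates) if candidates else None

# With 2-second GOPs, a start-play time of 2.6 s maps back to the I-frame at 2.0 s.
frames = [(0.0, 'I'), (0.5, 'P'), (1.0, 'P'), (2.0, 'I'), (2.5, 'P')]
```

Delivery would then begin at the returned index rather than at the exact start-play position.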
Fig. 1 is a flowchart of a video start-play method according to an embodiment of the present disclosure. This embodiment is applicable to screening video frames during video start-play. The method may be executed by a video start-play apparatus, which may be composed of hardware and/or software and is typically integrated in a device with video start-play capability, such as a CDN server. As shown in Fig. 1, the method includes the following steps:
and S110, determining the media frame to be transmitted according to the broadcasting starting time.
A media frame is a multimedia frame, i.e., a video frame or an audio frame. The start-play time is the point in the video, chosen arbitrarily by the user, from which playback should begin, expressed as an offset from the start of the video. For example, a start-play time of 5 seconds means the video is played from the position 5 seconds after the start of the video.
In this embodiment, to ensure that the delivered video frames can be decoded normally, the video frames between the forward I-frame closest to the start-play time and the start-play time must be delivered.
Specifically, determining the media frames to be delivered according to the start-play time may proceed as follows: determine the complete GOP containing the start-play time, then take the media frames between that GOP's first-frame time and the start-play time as the media frames to be delivered.
In this embodiment, the GOP containing the start-play time can be found by dividing the start-play time by the GOP duration. For example, with a GOP duration of 2 seconds and a start-play time of 5 seconds, the start-play time falls in the 3rd GOP, whose first-frame time is 4 seconds, so the media frames between 4 and 5 seconds are the ones to deliver. The number of media frames to deliver is the interval between the GOP's first-frame time and the start-play time multiplied by the frame rate: with a frame rate FPS of 25, the number of media frames to deliver is 25 × 1 = 25. For video, the frames to deliver comprise the I-frame at 4 seconds and the video frames between 4 and 5 seconds; for audio, the frames between 4 and 5 seconds.
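The GOP lookup and frame-count arithmetic above can be sketched as follows (hypothetical helper name, seconds-based inputs assumed):

```python
def frames_to_deliver(start_time_s, gop_duration_s, fps):
    """Count the media frames between the forward I-frame and the start-play time."""
    gop_index = int(start_time_s // gop_duration_s)        # 0-based GOP containing the start time
    first_frame_time = gop_index * gop_duration_s          # timestamp of that GOP's I-frame
    return round((start_time_s - first_frame_time) * fps)  # interval length times frame rate

# Example from the text: GOP = 2 s, start-play time = 5 s, FPS = 25
# -> the start time lies in the 3rd GOP (first frame at 4 s)
```

The text's example works out to 25 frames to deliver for the 1-second interval at 25 fps.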
S120: determine the target frame count according to the minimum delay duration.
The minimum delay duration, in milliseconds, can be read from the video player's configuration and may be set dynamically. The target frame count is the maximum number of frames that still allows the first frame of the video to be rendered smoothly.
In this embodiment, the target frame count may be determined as follows: obtain the media frame rate, then compute the target frame count from the frame rate and the minimum delay duration.
Specifically, the target frame count is obtained by multiplying the minimum delay duration by the frame rate and dividing the result by a set value, which may be 1000 (converting milliseconds to seconds). The calculation can be expressed as:

N_target = (T_delay × FPS) / 1000

where T_delay is the minimum delay duration in milliseconds and FPS is the frame rate. With a minimum delay duration of 200 ms and a frame rate of 25, the target frame count is 5.
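This computation amounts to a single expression (a sketch; integer division is assumed for the set value of 1000):

```python
def target_frame_count(min_delay_ms, fps):
    """Maximum frames decodable within the minimum delay budget."""
    return (min_delay_ms * fps) // 1000  # ms * frames/s, divided by 1000 ms/s -> frames

# Example from the text: a 200 ms minimum delay at 25 fps gives 5 target frames.
```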
and S130, if the number of the media frames to be transmitted is greater than the number of the target frames, performing frame loss processing on the media frames to be transmitted to obtain a first residual media frame.
In this embodiment, if the number of the media frames to be transmitted is less than or equal to the number of the target frames, frame loss processing is not required to be performed on the media frames to be transmitted; and if the number of the media frames to be transmitted is greater than the number of the target frames, performing frame loss processing on the media frames to be transmitted. In this embodiment, in order to ensure normal decoding of a video frame, an I frame needs to be retained, and the remaining video frames are subjected to frame loss processing according to a certain rule.
Specifically, when the video frames contain no bidirectional reference frames, frame dropping may proceed as follows: determine a first drop count from the target frame count and the number of media frames to be delivered; discard that many latest-timestamp frames among the video frames to be delivered; and discard that many earliest-timestamp frames among the audio frames to be delivered.
When there are no B-frames, the video frames to be delivered consist of an I-frame and P-frames. Subtracting the target frame count from the number of media frames to be delivered gives the number of frames to discard, i.e., the first drop count. With 9 media frames to deliver and a target frame count of 5, 4 frames must be discarded, which ensures the remaining count does not exceed the target.
Specifically, after the first drop count is determined, that many latest-timestamp video frames and that many earliest-timestamp audio frames are discarded. Fig. 2a shows an example: with 10 media frames to deliver and a target frame count of 5, 5 frames must be dropped. For video, the frames with sequence numbers 1.5 to 1.9 are discarded; for audio, the frames with sequence numbers 1.0 to 1.4. Because only the later-timestamp video frames are discarded, the retained frames still begin with the I-frame and can be decoded normally.
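The no-B-frame branch (drop trailing video frames, leading audio frames) can be sketched with plain lists (a hypothetical representation, not the patent's data model):

```python
def drop_without_b_frames(video, audio, target):
    """Frame dropping when the video contains only I- and P-frames.

    video, audio: frame lists in timestamp order (video[0] is the I-frame).
    Drops the latest-timestamp video frames and the same number of
    earliest-timestamp audio frames.
    """
    n_drop = max(0, len(video) - target)      # first drop count
    kept_video = video[:len(video) - n_drop]  # discard latest video frames
    kept_audio = audio[n_drop:]               # discard earliest audio frames
    return kept_video, kept_audio
```

With the Fig. 2a numbers (10 frames, target 5), this keeps video frames 1.0 to 1.4 and audio frames 1.5 to 1.9.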
Specifically, when the video frames contain bidirectional reference frames, frame dropping may proceed as follows: discard all B-frames among the video frames to be delivered to obtain the second remaining video frames; if the number of second remaining video frames is less than or equal to the target frame count, take the number of discarded B-frames as the second drop count, and discard that many earliest-timestamp frames among the audio frames to be delivered.
In this embodiment, after all B-frames are discarded, if the number of remaining video frames is less than or equal to the target frame count, the requirement is already met and no further dropping is needed. The number of discarded B-frames becomes the second drop count, and that many earliest-timestamp audio frames are discarded. Fig. 2b shows an example: the B-frames (shown in bold italics, i.e., the frames with sequence numbers 1.1, 1.3, 1.5, 1.7 and 1.9) are discarded first, leaving 5 video frames, which meets the target, so dropping stops. Since 5 video frames were dropped in total, the audio frames with sequence numbers 1.0 to 1.4 must also be dropped. Discarding B-frames first does not affect normal decoding and also improves frame-dropping efficiency.
Optionally, if the number of second remaining video frames is greater than the target frame count: determine a third drop count from the number of second remaining video frames and the target frame count; discard that many latest-timestamp frames among the second remaining video frames; determine the total drop count from the third and second drop counts; and discard that many earliest-timestamp frames among the audio frames to be delivered.
In this embodiment, if the number of second remaining video frames is greater than the target frame count, the requirement is not yet met and dropping must continue. Subtracting the target frame count from the number of second remaining video frames gives the number of frames still to drop, i.e., the third drop count, and that many latest-timestamp frames are discarded from the second remaining video frames. The number of discarded B-frames plus the third drop count gives the total drop count, and finally that many earliest-timestamp audio frames are discarded. Fig. 2c shows an example: with a target frame count of 5 and 15 media frames to deliver, 7 of which are B-frames, 8 video frames remain after all B-frames are dropped, which is still more than the target, so 3 more frames must be dropped, namely the video frames with sequence numbers 2.0, 2.2 and 2.4. In total 10 video frames are dropped, so the audio frames with timestamps 1.0 to 1.9 must be dropped.
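Both B-frame branches (Figs. 2b and 2c) can be combined into one sketch (hypothetical (id, type) tuples; not the patent's data model):

```python
def drop_with_b_frames(video, audio, target):
    """Frame dropping when the video contains B-frames.

    video: list of (frame_id, frame_type) tuples in timestamp order.
    Step 1: discard every B-frame (second drop count).
    Step 2: if still above target, discard latest-timestamp frames
            (third drop count). Audio loses the total count from the front.
    """
    kept = [f for f in video if f[1] != 'B']
    dropped = len(video) - len(kept)  # second drop count (all B-frames)
    if len(kept) > target:
        extra = len(kept) - target    # third drop count
        kept = kept[:len(kept) - extra]
        dropped += extra              # total drop count
    return kept, audio[dropped:]
```

Run against the Fig. 2c shape (15 frames, 7 B-frames, target 5), this drops 10 video frames in total and the 10 earliest audio frames.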
S140: update the timestamps of the first remaining media frames and deliver the updated frames to the client, so that the client plays the video from the first remaining media frames.
In this embodiment, after frame dropping the video timestamps are no longer continuous, so to ensure normal decoding the timestamps of the remaining video frames must be updated. The audio timestamps need no update: because the earliest-timestamp audio frames were discarded, the timestamps of the remaining audio frames are still continuous.
Specifically, the timestamps of the first remaining media frames may be updated as follows: for each remaining video frame, determine the number of video frames dropped after its timestamp; compute the frame's timestamp offset from that number and the frame rate; and update the frame's timestamp by that offset.
In this embodiment, the number of video frames discarded after a remaining frame, multiplied by the frame duration (the reciprocal of the frame rate), gives that frame's timestamp offset; adding the offset to the frame's current timestamp yields the updated timestamp. Figs. 3a and 3b show examples. In Fig. 3a, for the remaining video frames 1.0 to 1.4, 5 video frames were discarded after each of them, so each timestamp is shifted back by the duration of 5 frames: the frame with sequence number 1.0 moves to the timestamp of the original frame 1.5, the frame with sequence number 1.1 to that of the original frame 1.6, and so on for each remaining frame. In Fig. 3b, the remaining frame with sequence number 1.0 has 5 frames discarded after it and shifts by 5 frame durations to the timestamp of the original frame 1.5; the remaining frame with sequence number 1.2 has 4 frames discarded after it and shifts to the timestamp of the original frame 1.6; the remaining frame with sequence number 1.4 has 3 frames discarded after it and shifts to the timestamp of the original frame 1.7; and so on, so that the timestamps of all remaining video frames are updated.
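The per-frame offset scheme can be sketched as follows (hypothetical PTS lists in seconds; a minimal illustration, not the patent's implementation):

```python
def update_timestamps(kept_pts, dropped_pts, fps):
    """Shift each remaining video frame forward by the duration of the
    frames dropped after it, restoring a continuous timeline."""
    frame_dur = 1.0 / fps
    updated = []
    for pts in kept_pts:
        dropped_after = sum(1 for d in dropped_pts if d > pts)  # frames dropped later than this one
        updated.append(pts + dropped_after * frame_dur)
    return updated
```

At 10 fps, dropping frames 1.5 to 1.9 moves the kept frames 1.0 to 1.4 onto the timestamps 1.5 to 1.9, which matches the Fig. 3a example.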
In this embodiment, after the timestamps of the remaining video frames are updated, the remaining media frames are delivered to the client, which decodes the received media frames and starts playing the video from the decoded frames.
In the technical solution of this embodiment, the media frames to be delivered are determined according to the start-play time, where the media frames comprise video frames and audio frames; the target frame count is determined according to the minimum delay duration; if the number of media frames to be delivered is greater than the target frame count, frame dropping is applied to obtain the first remaining media frames; and the timestamps of the first remaining media frames are updated before the frames are delivered to the client for playback. By dropping frames when the number of media frames to be delivered exceeds the target frame count, the method relieves decoding pressure, preserves the timeliness of video start-play, and reduces playback stutter.
Fig. 4 is a flowchart of a video play-start method provided by an embodiment of the present disclosure. This embodiment is applicable to the case where received video frames are filtered when starting to play a video. The method may be executed by a video play-start apparatus, which may be implemented in hardware and/or software and is generally integrated in a device with a video play-start function; here the device may be a client. Based on the above embodiment, as shown in Fig. 4, the method specifically includes the following steps:
S410, receiving the media frames to be decoded delivered by the CDN.
The media frames include video frames and audio frames. In this embodiment, a user triggers a play operation at an arbitrary position in a video through a video APP; a play-start request is generated according to the triggered play operation and sent to the CDN server. The CDN server determines the media frames to be issued according to the play start time in the play-start request and delivers them to the client. For the manner in which the CDN determines the media frames to be issued according to the play start time, reference may be made to the foregoing embodiment, and details are not repeated here.
S420, determining the target frame number according to the minimum delay duration.
The minimum delay duration may be obtained from the configuration information of the video player and may be set dynamically; it is expressed in milliseconds (ms). The target frame number can be understood as the maximum number of frames that still allows the first frame of the video to start playing smoothly. For the specific manner of determining the target frame number according to the minimum delay duration, reference may be made to the foregoing embodiment, and details are not repeated here.
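One plausible reading of this computation (a hypothetical sketch, not the formula stated in the foregoing embodiment) is that the target frame number is the number of frames that fit within the minimum delay budget at the media's frame rate:

```python
import math

def target_frame_count(frame_rate, min_delay_ms):
    """How many frames fit in the minimum delay budget.

    frame_rate   -- frames per second of the media
    min_delay_ms -- minimum delay duration in milliseconds

    Flooring keeps the estimate conservative, so decoding the
    retained frames never exceeds the delay budget.
    """
    return math.floor(frame_rate * min_delay_ms / 1000)

print(target_frame_count(30, 500))  # 15 frames in a 500 ms budget
```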
S430, if the number of the media frames to be decoded is larger than the number of the target frames, performing frame loss processing on the media frames to be decoded to obtain a first remaining media frame.
In this embodiment, if the number of media frames to be decoded is less than or equal to the target frame number, no frame dropping is required; if the number of media frames to be decoded is greater than the target frame number, frame dropping is performed on them. To ensure normal decoding of the video, the I-frame must be retained, and the other video frames are dropped according to certain rules.
Specifically, the method for performing frame loss processing on the media frame to be decoded may refer to the method for performing frame loss processing on the media frame to be sent down in the foregoing embodiment, which is not described herein again.
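A minimal sketch of the two frame-dropping branches described in the foregoing embodiment (no bidirectional reference frames: drop the trailing video frames and the same number of leading audio frames; with bidirectional reference frames: drop all B-frames first, then trailing frames if still over the target) — the tuple representation of frames here is an assumption for illustration:

```python
def drop_frames(video, audio, target):
    """video: list of (frame_type, timestamp) tuples, I-frame first;
    audio: list of timestamps; target: target frame number."""
    if len(video) <= target:
        return video, audio                    # within budget, keep all
    if not any(t == "B" for t, _ in video):
        n = len(video) - target                # first frame-drop quantity
        return video[:-n], audio[n:]           # drop video tail, audio head
    kept = [f for f in video if f[0] != "B"]   # drop every B-frame
    dropped = len(video) - len(kept)           # second frame-drop quantity
    if len(kept) > target:
        extra = len(kept) - target             # third frame-drop quantity
        kept = kept[:-extra]
        dropped += extra                       # total frame-drop quantity
    return kept, audio[dropped:]
```

For example, with `target=3` and a sequence `I P B P B P`, both B-frames are dropped first; since 4 frames still remain, one trailing frame is also dropped, and 3 leading audio frames are discarded in total.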
S440, updating the timestamp of the first remaining media frame, decoding the first remaining media frame after updating the timestamp, and starting playing the video based on the decoded first remaining media frame.
In this embodiment, after frame dropping is performed on the video frames, their timestamps are no longer continuous; to ensure normal decoding of the video frames, the timestamps of the remaining video frames need to be updated. For audio frames, since the audio frames with earlier timestamps are the ones discarded, the timestamps of the remaining audio frames are still continuous and do not need to be updated.
Specifically, the above embodiments may be referred to for updating the timestamp of the first remaining media frame, and details are not repeated here.
In this embodiment, after the timestamp of the video frame is updated, the remaining media frames are decoded, and video playing is performed based on the decoded remaining media frames.
According to the technical solution of this embodiment of the disclosure, the media frames to be decoded delivered by the CDN are received, where the media frames include video frames and audio frames; the target frame number is determined according to the minimum delay duration; if the number of media frames to be decoded is greater than the target frame number, frame dropping is performed on the media frames to be decoded to obtain first remaining media frames; and the timestamps of the first remaining media frames are updated, the first remaining media frames with updated timestamps are decoded, and the video starts playing based on the decoded first remaining media frames. In the video play-start method provided by this embodiment of the disclosure, when the number of media frames to be decoded is greater than the target frame number, frame dropping is performed on the media frames to be decoded, which relieves decoding pressure, ensures timely play start, and reduces playback stuttering.
Fig. 5 is a schematic structural diagram of a video play-start apparatus provided by an embodiment of the present disclosure; the apparatus is arranged in a content delivery network (CDN) server. As shown in Fig. 5, the apparatus includes:
a to-be-issued media frame determining module 510, configured to determine the media frames to be issued according to the play start time; wherein the media frames comprise video frames and audio frames;
a target frame number determining module 520, configured to determine the number of target frames according to the minimum delay duration;
a frame loss processing module 530, configured to perform frame loss processing on the to-be-sent media frame to obtain a first remaining media frame if the number of the to-be-sent media frames is greater than the number of the target frames;
and a timestamp updating module 540, configured to update the timestamp of the first remaining media frame, and send the first remaining media frame with the updated timestamp to the client, so that the client starts playing the video according to the first remaining media frame.
Optionally, the to-be-transmitted media frame determining module 510 is further configured to:
determining the complete video frame group corresponding to the play start time;
and determining the media frames between the start-frame time of the complete video frame group and the play start time as the media frames to be issued.
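One possible reading of this selection, using hypothetical timestamp lists (the complete video frame group containing the play start time is located by its most recent start frame, then all media frames from that start frame up to the play start time are selected):

```python
def frames_to_issue(video_ts, gop_starts, audio_ts, play_time):
    """video_ts / audio_ts: media frame timestamps (ms);
    gop_starts: timestamps of the first frame of each video frame group;
    play_time: the requested play start time (ms)."""
    # The start frame of the group containing play_time is the latest
    # group start at or before the requested position.
    gop_start = max(t for t in gop_starts if t <= play_time)
    video = [t for t in video_ts if gop_start <= t <= play_time]
    audio = [t for t in audio_ts if gop_start <= t <= play_time]
    return video, audio
```

Starting from the group's first frame (rather than from `play_time` itself) guarantees the issued video begins with a decodable I-frame.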
Optionally, the target frame number determining module 520 is further configured to:
acquiring the frame rate of the media;
determining a number of target frames based on the frame rate and the minimum delay duration.
Optionally, the frame loss processing module 530 is further configured to:
if the video frame does not contain the bidirectional reference frame, determining a first frame loss quantity according to the quantity of the target frames and the quantity of the media frames to be transmitted;
discarding the video frames with the first frame loss quantity behind the time stamps in the video frames to be issued;
and discarding the audio frames with the first frame loss number with the front time stamps in the audio frames to be sent down.
Optionally, the frame loss processing module 530 is further configured to:
if the video frame contains a bidirectional reference frame, discarding all bidirectional reference frames contained in the video frame to be issued to obtain a second residual video frame;
if the number of the second residual video frames is less than or equal to the number of the target frames, acquiring the second frame loss number of all discarded bidirectional reference frames;
and discarding the audio frames with the second frame loss quantity with the time stamps at the front in the audio frames to be transmitted.
Optionally, the frame loss processing module 530 is further configured to:
if the number of the second residual video frames is larger than the number of the target frames, determining a third frame loss number according to the number of the second residual video frames and the number of the target frames;
discarding the video frames with the third frame loss number behind the timestamp in the second remaining video frames;
determining the total frame loss quantity according to the third frame loss quantity and the second frame loss quantity;
and discarding the audio frames with the total frame loss quantity with the time stamps at the front in the audio frames to be issued.
Optionally, the timestamp updating module 540 is further configured to:
for each remaining video frame, determining a number of video frames dropped after a timestamp of the remaining video frame;
determining the timestamp offset of the residual video frames according to the number of the discarded video frames and the frame rate;
updating timestamps of the remaining video frames based on the timestamp offsets.
Fig. 6 is a schematic structural diagram of a video play-start apparatus provided by an embodiment of the present disclosure; the apparatus is arranged in a client. As shown in Fig. 6, the apparatus includes:
a to-be-decoded media frame receiving module 610, configured to receive a to-be-decoded media frame delivered by the CDN; wherein the media frames comprise video frames and audio frames;
a target frame number determining module 620, configured to determine the number of target frames according to the minimum delay duration;
a frame loss processing module 630, configured to perform frame loss processing on the to-be-decoded media frame to obtain a first remaining media frame if the number of the to-be-decoded media frames is greater than the number of the target frames;
the timestamp updating module 640 is configured to update a timestamp of the first remaining media frame, decode the first remaining media frame after the timestamp is updated, and start playing a video based on the decoded first remaining media frame.
The device can execute the methods provided by all the embodiments of the disclosure, and has corresponding functional modules and beneficial effects for executing the methods. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the disclosure.
Referring now to FIG. 7, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like, or various forms of servers such as a stand-alone server or a server cluster. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. Various programs and data necessary for the operation of the electronic device 300 are also stored in the RAM 303. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the video play-start method. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. When executed by the processing device 301, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine the media frames to be issued according to a play start time, where the media frames include video frames and audio frames; determine a target frame number according to a minimum delay duration; if the number of media frames to be issued is greater than the target frame number, perform frame dropping on the media frames to be issued to obtain first remaining media frames; and update the timestamps of the first remaining media frames and send the first remaining media frames with updated timestamps to the client, so that the client starts playing the video according to the first remaining media frames. Or: receive the media frames to be decoded delivered by the CDN, where the media frames include video frames and audio frames; determine a target frame number according to a minimum delay duration; if the number of media frames to be decoded is greater than the target frame number, perform frame dropping on the media frames to be decoded to obtain first remaining media frames; and update the timestamps of the first remaining media frames, decode the first remaining media frames with updated timestamps, and start playing the video based on the decoded media frames.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, a video play-start method is disclosed, where the method is executed by a content delivery network (CDN) server and includes:
determining the media frames to be issued according to a play start time; wherein the media frames comprise video frames and audio frames;
determining the number of target frames according to the minimum delay time;
if the number of the media frames to be issued is larger than the number of the target frames, performing frame loss processing on the media frames to be issued to obtain a first residual media frame;
and updating the timestamp of the first residual media frame, and issuing the first residual media frame with the timestamp updated to the client, so that the client starts playing the video according to the first residual media frame.
Further, determining the media frames to be issued according to the play start time includes:
determining the complete video frame group corresponding to the play start time;
and determining the media frames between the start-frame time of the complete video frame group and the play start time as the media frames to be issued.
Further, determining the number of target frames according to the minimum delay time length includes:
acquiring the frame rate of the media;
determining a number of target frames based on the frame rate and the minimum delay duration.
Further, performing frame loss processing on the media frame to be sent down to obtain a first remaining media frame, including:
if the video frame does not contain the bidirectional reference frame, determining a first frame loss quantity according to the quantity of the target frames and the quantity of the media frames to be transmitted;
discarding the video frames with the first frame loss quantity behind the time stamps in the video frames to be issued;
and discarding the audio frames with the first frame loss quantity, of which the time stamps are earlier, in the audio frames to be issued.
Further, performing frame loss processing on the media frame to be sent down to obtain a first remaining media frame, including:
if the video frame contains a bidirectional reference frame, discarding all bidirectional reference frames contained in the video frame to be issued to obtain a second residual video frame;
if the number of the second residual video frames is less than or equal to the number of the target frames, acquiring a second frame loss number of all discarded bidirectional reference frames;
and discarding the audio frames with the second frame loss quantity with the time stamps at the front in the audio frames to be transmitted.
Further, still include:
if the number of the second residual video frames is larger than the number of the target frames, determining a third frame loss number according to the number of the second residual video frames and the number of the target frames;
discarding the video frames with the third frame loss number behind the timestamp in the second remaining video frames;
determining the total frame loss quantity according to the third frame loss quantity and the second frame loss quantity;
and discarding the audio frames with the total frame discarding number which is in front of the time stamps in the audio frames to be sent.
Further, updating the timestamp of the first remaining media frame comprises:
for each remaining video frame, determining a number of video frames dropped after a timestamp of the remaining video frame;
determining the timestamp offset of the residual video frames according to the number of the discarded video frames and the frame rate;
updating timestamps of the remaining video frames based on the timestamp offsets.
The embodiment of the present disclosure further provides a video playing method, where the method is executed by a client, and includes:
receiving a media frame to be decoded issued by the CDN; wherein the media frames comprise video frames and audio frames;
determining the number of target frames according to the minimum delay time;
if the number of the media frames to be decoded is larger than the number of the target frames, performing frame loss processing on the media frames to be decoded to obtain a first residual media frame;
and updating the time stamp of the first residual media frame, decoding the first residual media frame after the time stamp is updated, and playing the video based on the decoded first residual media frame.
It will be appreciated that steps of the various flows shown above may be reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders; this is not limited herein as long as the desired results of the technical solutions of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (12)

1. A video play-start method, comprising:
determining media frames to be issued according to a play start time; wherein the media frames comprise video frames and audio frames;
determining the number of target frames according to the minimum delay time;
if the number of the media frames to be issued is larger than the number of the target frames, performing frame loss processing on the media frames to be issued to obtain a first residual media frame;
and updating the timestamp of the first residual media frame, and sending the first residual media frame with the timestamp updated to the client, so that the client plays the video according to the first residual media frame.
2. The method of claim 1, wherein determining the media frames to be issued according to the play start time comprises:
determining a complete video frame group corresponding to the play start time;
and determining the media frames between the start-frame time of the complete video frame group and the play start time as the media frames to be issued.
3. The method of claim 1, wherein determining the number of target frames based on the minimum delay period comprises:
acquiring the frame rate of the media;
determining a number of target frames based on the frame rate and the minimum delay duration.
4. The method of claim 1, wherein the step of performing frame loss processing on the to-be-transmitted media frame to obtain a first remaining media frame comprises:
if the video frame does not contain the bidirectional reference frame, determining a first frame loss quantity according to the quantity of the target frames and the quantity of the media frames to be transmitted;
discarding the video frames with the first frame loss quantity behind the time stamps in the video frames to be issued;
and discarding the audio frames with the first frame loss quantity, of which the time stamps are earlier, in the audio frames to be issued.
5. The method of claim 1, wherein the frame loss processing is performed on the to-be-transmitted media frame to obtain a first remaining media frame, and the method comprises:
if the video frame contains a bidirectional reference frame, discarding all bidirectional reference frames contained in the video frame to be issued to obtain a second residual video frame;
if the number of the second residual video frames is less than or equal to the number of the target frames, acquiring a second frame loss number of all discarded bidirectional reference frames;
and discarding the audio frames with the second frame loss quantity with the time stamps at the front in the audio frames to be transmitted.
6. The method of claim 5, further comprising:
if the number of the second residual video frames is larger than the number of the target frames, determining a third frame loss number according to the number of the second residual video frames and the number of the target frames;
discarding the video frames with the third frame loss number behind the timestamp in the second remaining video frames;
determining the total frame loss quantity according to the third frame loss quantity and the second frame loss quantity;
and discarding the audio frames with the total frame loss quantity with the time stamps at the front in the audio frames to be issued.
7. The method of claim 1, wherein updating the timestamp of the first remaining media frame comprises:
for each remaining video frame, determining a number of video frames dropped after a timestamp of the remaining video frame;
determining the timestamp offset of the residual video frames according to the number of the discarded video frames and the frame rate;
updating timestamps of the remaining video frames based on the timestamp offsets.
8. A video play-start method, comprising:
receiving a media frame to be decoded issued by the CDN; wherein the media frames comprise video frames and audio frames;
determining the number of target frames according to the minimum delay time;
if the number of the media frames to be decoded is larger than the number of the target frames, performing frame loss processing on the media frames to be decoded to obtain a first residual media frame;
and updating the timestamp of the first residual media frame, decoding the first residual media frame after the timestamp is updated, and starting playing the video based on the decoded residual media frame.
9. A video playback apparatus, comprising:
the to-be-issued media frame determining module is used for determining the media frames to be issued according to the play start time; wherein the media frames comprise video frames and audio frames;
the target frame number determining module is used for determining the number of target frames according to the minimum delay time length;
a frame loss processing module, configured to perform frame loss processing on the to-be-issued media frame to obtain a first remaining media frame if the number of the to-be-issued media frames is greater than the number of the target frames;
and the timestamp updating module is used for updating the timestamp of the first residual media frame and sending the first residual media frame with the updated timestamp to the client, so that the client plays the video according to the first residual media frame.
10. A video playback start apparatus, comprising:
a to-be-decoded media frame receiving module, configured to receive media frames to be decoded delivered by a CDN, wherein the media frames comprise video frames and audio frames;
a target frame number determining module, configured to determine a target frame number according to a minimum delay duration;
a frame-dropping processing module, configured to perform frame-dropping processing on the media frames to be decoded to obtain first remaining media frames if the number of the media frames to be decoded is greater than the target frame number; and
a timestamp updating module, configured to update timestamps of the first remaining media frames, decode the first remaining media frames with the updated timestamps, and start video playback based on the decoded first remaining media frames.
11. An electronic device, comprising:
one or more processing devices; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video playback start method of any one of claims 1-8.
12. A computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the video playback start method according to any one of claims 1-8.
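The claimed flow can be sketched as follows: derive a target frame count from the minimum delay duration and frame rate, drop the oldest surplus frames, then shift the remaining timestamps by an offset computed from the dropped-frame count and the frame rate. This is an illustrative sketch only, not the patent's implementation; the function names and the `(timestamp_ms, payload)` frame representation are assumptions.

```python
def target_frame_count(min_delay_ms, fps):
    """Number of frames that fit within the minimum delay duration (assumed interpretation)."""
    return int(min_delay_ms * fps / 1000)

def drop_and_retime(frames, target_count, fps):
    """Drop the oldest frames beyond target_count, then shift the remaining
    timestamps back so playback starts without a gap.

    frames: list of (timestamp_ms, payload) tuples, oldest first.
    """
    if len(frames) <= target_count:
        return list(frames)  # nothing to drop
    dropped = len(frames) - target_count
    # Offset derived from the number of discarded frames and the frame rate
    offset_ms = dropped * 1000.0 / fps
    remaining = frames[dropped:]
    return [(ts - offset_ms, payload) for ts, payload in remaining]
```

For example, with a 25 fps stream and an 80 ms minimum delay, the target is 2 frames; a 4-frame backlog is cut to the 2 newest frames and their timestamps are shifted back by 80 ms.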
CN202210522521.8A 2022-05-13 Video playing method, device, equipment and storage medium Active CN114979712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210522521.8A CN114979712B (en) 2022-05-13 Video playing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114979712A true CN114979712A (en) 2022-08-30
CN114979712B CN114979712B (en) 2024-07-26


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856812A (en) * 2014-03-25 2014-06-11 北京奇艺世纪科技有限公司 Video playing method and device
CN105933800A (en) * 2016-04-29 2016-09-07 联发科技(新加坡)私人有限公司 Video play method and control terminal
CN106817614A (en) * 2017-01-20 2017-06-09 努比亚技术有限公司 Audio and video frame dropping device and method
US10116989B1 (en) * 2016-09-12 2018-10-30 Twitch Interactive, Inc. Buffer reduction using frame dropping
CN109714634A (en) * 2018-12-29 2019-05-03 青岛海信电器股份有限公司 Decoding synchronization method, apparatus and device for live data streams
CN110113621A (en) * 2018-02-01 2019-08-09 腾讯科技(深圳)有限公司 Playing method and device, storage medium, the electronic device of media information
CN110392269A (en) * 2018-04-17 2019-10-29 腾讯科技(深圳)有限公司 Media data processing method and device, media data playing method and device
CN110572695A (en) * 2019-08-07 2019-12-13 苏州科达科技股份有限公司 media data encoding and decoding methods and electronic equipment
CN111436009A (en) * 2019-01-11 2020-07-21 厦门雅迅网络股份有限公司 Real-time video stream transmission and display method and transmission and play system
US10862944B1 (en) * 2017-06-23 2020-12-08 Amazon Technologies, Inc. Real-time video streaming with latency control
CN112135163A (en) * 2020-09-27 2020-12-25 京东方科技集团股份有限公司 Video playing starting method and device
CN113490055A (en) * 2021-07-06 2021-10-08 三星电子(中国)研发中心 Data processing method and device
CN114189711A (en) * 2021-11-16 2022-03-15 北京金山云网络技术有限公司 Video processing method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN102577272B (en) The cacheable media streaming of low latency
CN111147606B (en) Data transmission method, device, terminal and storage medium
WO2013044705A1 (en) Online video playing method and video playing server
CN112135169B (en) Media content loading method, device, equipment and medium
CN110545472B (en) Video data processing method and device, electronic equipment and computer readable medium
US9325765B2 (en) Multimedia stream buffer and output method and multimedia stream buffer module
CN112199174A (en) Message sending control method and device, electronic equipment and computer readable storage medium
CN112423140A (en) Video playing method and device, electronic equipment and storage medium
CN113891132A (en) Audio and video synchronization monitoring method and device, electronic equipment and storage medium
CN113794942B (en) Method, apparatus, system, device and medium for switching view angle of free view angle video
CN113364767B (en) Streaming media data display method and device, electronic equipment and storage medium
CN113542856A (en) Reverse playing method, device, equipment and computer readable medium for online video
CN111478916B (en) Data transmission method, device and storage medium based on video stream
CN114979712A (en) Video playing starting method, device, equipment and storage medium
CN114979712B (en) Video playing method, device, equipment and storage medium
CN113242446B (en) Video frame caching method, video frame forwarding method, communication server and program product
WO2022188618A1 (en) Resource preloading method, apparatus and device, and storage medium
CN112153322B (en) Data distribution method, device, equipment and storage medium
CN112637668B (en) Video playing method, device, equipment and medium
CN114979762A (en) Video downloading and transmission method, device, terminal equipment, server and medium
CN112887742B (en) Live stream processing method, device, equipment and storage medium
CN114257870A (en) Short video playing method, device, equipment and storage medium
CN115225917A (en) Recording plug-flow method, device, equipment and medium
CN112995780B (en) Network state evaluation method, device, equipment and storage medium
CN113556352B (en) Information pushing method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant