CN115460458B - Video frame loss method and device - Google Patents

Video frame loss method and device

Info

Publication number
CN115460458B
Authority
CN
China
Prior art keywords
video
frame
time
decoding
decoded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211066641.8A
Other languages
Chinese (zh)
Other versions
CN115460458A (en)
Inventor
牛俊慧
罗小伟
郭春磊
李�荣
Current Assignee
Spreadtrum Communications Tianjin Co Ltd
Original Assignee
Spreadtrum Communications Tianjin Co Ltd
Priority date
Filing date
Publication date
Application filed by Spreadtrum Communications Tianjin Co Ltd filed Critical Spreadtrum Communications Tianjin Co Ltd
Priority to CN202211066641.8A
Publication of CN115460458A
Application granted
Publication of CN115460458B


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784Data processing by the network
    • H04N21/64792Controlling the complexity of the content stream, e.g. by dropping packets
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to the field of video display technologies, and in particular to a video frame loss method and apparatus. The method comprises the following steps: a first time for decoding a single video frame is determined according to the time consumed by a video decoding module to decode the decoded video frames of a target video; whether to execute a frame loss strategy on the un-decoded video frames of the target video is determined according to the first time and a second time allowed by the video decoding module for decoding a single video frame, wherein the second time is determined according to the video processing capability of the video decoding module; and if it is determined that the frame loss strategy is to be executed on the un-decoded video frames of the target video, target video frames are determined from the un-decoded video frames of the target video, and the target video frames are not sent to the video decoding module for decoding. According to the scheme provided by the embodiment of the invention, frames are dropped so that the decoding time of the video frames to be decoded is adapted to the video processing capability of the video decoding module, thereby ensuring smooth playback of the target video.

Description

Video frame loss method and device
Technical Field
The present invention relates to the field of video display technologies, and in particular, to a method and apparatus for video frame loss.
Background
With the development of multimedia technology, more and more media service providers offer high-definition video to improve the viewing experience of users. High-definition video generally has high resolution and a high frame rate. For example, for 4K video, the picture resolution may reach 3840x2160, and the video frame rate may reach 60 frames/second or even 120 frames/second. On the one hand, these characteristics can provide users with a cinema-level visual experience; on the other hand, they place higher demands on the video processing capability of video playback devices. If the processing capability of a video playback device is limited, the device may not support, or may not be able to smoothly play, video with high resolution and a high frame rate. Therefore, how to improve the adaptive playback capability of video playback devices has become a problem to be solved.
Disclosure of Invention
In view of this, the embodiments of the present invention provide a video frame loss method and apparatus, which ensure smooth playback of a target video by dropping frames of the target video so that the decoding time of the video frames to be decoded is adapted to the video processing capability of the video decoding module.
In a first aspect, an embodiment of the present invention provides a video frame loss method, including:
Determining a first time for decoding a single video frame according to the time consumption of the video decoding module for decoding the decoded video frame of the target video;
determining whether to execute a frame loss strategy on the un-decoded video frames of the target video according to the first time and a second time allowed by the video decoding module to decode a single video frame, wherein the second time is determined according to the video processing capacity of the video decoding module;
and if the frame loss strategy is determined to be executed on the un-decoded video frames of the target video, determining target video frames from the un-decoded video frames of the target video, wherein the target video frames are not sent to the video decoding module for decoding.
Optionally, the determining the first time for decoding the single video frame according to the time consumed by the video decoding module for decoding the decoded video frame of the target video includes:
calculating real-time consumption of decoding a single video frame according to the decoded video frame number and the decoding time consumption of the video decoding module;
and calculating the first time according to the real-time consumption and the key frame time consumption of the key frame of the target video.
Optionally, the method further comprises: determining a plurality of target key frames from key frames of the target video before initially starting decoding of the target video;
Acquiring the pre-decoding time consumption of the video decoding module on the target key frames;
and calculating the key frame time consumption according to the pre-decoding time consumption.
Optionally, the second time is determined according to a video processing capability of the video decoding module, including:
the second time is determined according to the video frame rate of the target video and the frame sending time and the display sending time of the video decoding module.
Optionally, the first time is determined according to the decoding time consumption of each video frame falling into a first time window, and the first time window is a decoded window nearest to the current time;
the determining whether to execute the frame loss policy on the un-decoded video frame of the target video according to the first time and the second time allowed by the video decoding module to decode the single video frame comprises:
determining whether to execute a frame loss strategy on each un-decoded video frame falling into a second time window according to the first time and the second time; the second time window is the next adjacent time window of the first time window, the decoding time consumption of each video frame corresponding to the second time window is used for recalculating the first time, the recalculated first time is used for executing a frame loss strategy for the next adjacent time window of the second time window.
Optionally, the determining whether to execute the frame loss policy on the un-decoded video frame of the target video according to the first time and the second time allowed by the video decoding module to decode the single video frame includes:
and if the first time is greater than the second time and the difference between the first time and the second time is less than a first threshold, executing a frame loss strategy on the un-decoded video frames of the target video.
Optionally, the determining the target video frame from the un-decoded video frames of the target video includes:
determining the frame number to be lost according to the first time and the second time;
and determining the target video frame from the un-decoded video frame of the target video according to the video frame type of the un-decoded video frame, whether the un-decoded video frame is a reference frame of other frames and the frame number to be lost.
Optionally, the determining the frame number to be dropped according to the first time and the second time includes:
calculating a first frame rate according to the first time;
calculating a second frame rate according to the second time;
and calculating the frame number to be lost in the unit time according to the first frame rate and the second frame rate, wherein the frame number to be lost in the unit time is used for determining the frame number of the target video frame in at least one subsequent unit time.
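The frame-count computation in the steps above can be sketched as follows. This is a minimal illustration assuming the first and second frame rates are simply the reciprocals of the first and second times (in ms) and that the deficit is rounded up; the claim does not fix a rounding rule.

```python
import math

def frames_to_drop_per_second(first_time_ms, second_time_ms):
    """Number of frames to drop per unit time (here: per second)."""
    first_rate = 1000.0 / first_time_ms    # frames/s achievable at the current decode speed
    second_rate = 1000.0 / second_time_ms  # frames/s the decoding module must sustain
    # Frames that must be removed each second so decoding keeps up with playback
    return max(0, math.ceil(second_rate - first_rate))
```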
Optionally, the determining the target video frame from the un-decoded video frames of the target video according to the video frame type of the un-decoded video frame, whether the un-decoded video frame is a reference frame of other frames, and the frame number to be lost includes:
if the current un-decoded video frame is a key frame, transmitting the current un-decoded video frame to a video decoding module for decoding and judging whether the next un-decoded video frame of the current un-decoded video frame is the target video frame or not;
if the current un-decoded video frame is a forward predictive coding frame and is a reference frame of other video frames, transmitting the current un-decoded video frame to a video decoding module for decoding and judging whether the next un-decoded video frame of the current un-decoded video frame is the target video frame or not;
if the current un-decoded video frame is a forward predictive coding frame, the current un-decoded video frame is a non-reference frame, and the frame number of the lost frame does not reach the frame number to be lost, the current un-decoded video frame is the target video frame, and whether the next un-decoded video frame of the current un-decoded video frame is the target video frame is judged;
if the current un-decoded video frame is a bi-directional prediction interpolation coding frame and the frame number of the lost frame does not reach the frame number to be lost, the current un-decoded video frame is the target video frame and whether the next un-decoded video frame of the current un-decoded video frame is the target video frame is judged.
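The four per-frame rules above can be condensed into a small decision function. This is a sketch under assumed names (`frame_type`, `is_reference`); the claims describe the rules, not an API.

```python
def is_target_frame(frame_type, is_reference, frames_dropped, frames_to_drop):
    """Return True if the current un-decoded frame is a 'target video frame',
    i.e. it is dropped and not sent to the video decoding module.

    frame_type: 'I' (key frame), 'P' (forward-predicted) or 'B' (bi-directional).
    """
    if frame_type == 'I':
        return False                 # key frames are always sent for decoding
    if frame_type == 'P' and is_reference:
        return False                 # P-frames referenced by other frames are kept
    # Non-reference P-frames and B-frames may be dropped until the quota is met
    return frames_dropped < frames_to_drop
```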
Optionally, the method further comprises: before the decoding of the target video is started initially, determining an initial first time length in the target video, and giving an initial value to the first time according to the number of video frames contained in the initial first time length;
and determining whether to execute a frame loss strategy for each video frame in the initial first duration according to the initial value of the first time and the second time.
In a second aspect, an embodiment of the present invention provides a video playing device, including:
the feedback module is used for determining the first time for decoding a single video frame according to the time consumption of the video decoding module for decoding the decoded video frame of the target video;
the frame loss module is used for determining whether to execute a frame loss strategy on the un-decoded video frames of the target video according to the first time and a second time allowed by the video decoding module to decode single video frames, wherein the second time is determined according to the video processing capacity of the video decoding module; and if the frame loss strategy is determined to be executed on the un-decoded video frames of the target video, determining target video frames from the un-decoded video frames of the target video, wherein the target video frames are not sent to the video decoding module for decoding.
In a third aspect, an embodiment of the present invention provides a video playing device, including: at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of the first aspect or any of the above aspects.
In a fourth aspect, an embodiment of the present invention provides a chip, including: a processor for executing computer program instructions stored in a memory, wherein the computer program instructions, when executed by the processor, trigger the chip to perform the method of the first aspect or any of the first aspects.
In a fifth aspect, an embodiment of the present invention provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where the program when run controls a device in which the computer readable storage medium is located to perform the method according to the first aspect or any one of the first aspects.
According to the embodiment of the invention, the frame loss is executed according to the decoding time consumption of the decoded video frames and the decoding capability of the video playing device, so that the decoding time consumption of the video frames needing to be decoded after the frame loss can be adapted to the video processing capability of the video decoding module, and the normal playing of the target video in the video playing device is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a video playing device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of another video playing device according to an embodiment of the present invention;
fig. 3 is a flowchart of a video frame loss method according to an embodiment of the present invention;
fig. 4 is a flowchart of another video frame loss method according to an embodiment of the present invention;
fig. 5 is a flowchart of another video frame loss method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video playing device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
When playing a video, the video playback device needs to decode the target video to be played, and then outputs the picture based on the decoded data. If the video processing capability of the video playback device is limited, then when the device decodes a video file with a higher resolution and frame rate, the video decoding time may exceed the set decoding time threshold, i.e. decoding of video frames is delayed. This decoding delay accumulates frame by frame, which can cause problems such as picture stuttering and loss of audio-video synchronization, affecting the user's viewing experience.
To enable a video playback device to adapt to as many video files as possible, the related art proposes frame loss policies. A frame loss policy prevents some video frames of the target video from being decoded, or discards some decoded video frames without displaying them, so as to reduce the decoding delay of video frames as much as possible and keep the displayed pictures continuous and the audio and video synchronized. In the related art, when a frame loss policy is executed, the output effect of the target video is generally monitored: when picture stuttering or loss of audio-video synchronization occurs, decoding of the current frame is skipped and decoding of the next frame is performed directly, or the video frame currently to be displayed is discarded. Such a frame loss policy therefore lags behind the problem it addresses. Moreover, if the poor playback effect is caused by the limited video processing capability of the video playback device, then even after the current problem is overcome by dropping frames, subsequent decoding delay gradually accumulates again and the playback effect of the target video again becomes unsatisfactory.
In view of the problems of the related frame loss strategies, the embodiment of the invention provides a video frame loss method. In this method, frames are dropped according to the decoding time of the already-decoded video frames and the decoding capability of the video playback device, so that the decoding time of the frames still to be decoded after dropping is adapted to the video processing capability of the video playback device, ensuring normal playback of the target video.
The video frame loss method of the embodiment of the invention can be applied to a video playback device, that is, a device with video playback capability, for example a mobile phone, tablet, computer, wearable device, vehicle-mounted device, smart home device, or augmented reality (AR)/virtual reality (VR) device with video playback capability.
Referring to fig. 1, a schematic structural diagram of a video playing device according to an embodiment of the present invention is provided. The video playing device includes: the device comprises a pre-detection module 105, a judgment module 106, a feedback module 101, a frame loss module 102, a video decoding module 103 and a video output module 104. Wherein:
the pre-detection module 105 is configured to detect the video processing capability of the video decoding module 103 before the decoding of the target video is initially started. Key frame time T consisting essentially of acquiring key frames of the target video decoded by video decoding module 103 I Acquiring a second time T allowed by the video decoding module 103 to decode a single video frame max And initializing the real-time elapsed for the video decoding module 103 to decode the individual video frames.
In some embodiments, the pre-detection module 105 acquiring the key frame time T_I comprises: the pre-detection module 105 determines a number of target key frames from the key frames of the target video before decoding of the target video is initially started. The pre-detection module 105 sends the target key frames to the video decoding module 103 for pre-decoding, obtains the pre-decoding time of the video decoding module 103 for these target key frames, and calculates the key frame time T_I from the pre-decoding time. Optionally, T_I = T_pre-decoding / N_keyframes. In some embodiments, the pre-detection module 105 determining the target key frames from the key frames of the target video comprises: sorting the key frames contained in the target video in descending order of code stream length; determining the number N_keyframes of key frames to be selected according to the video frame rate and the video length of the target video; and selecting N_keyframes video frames from the reordered key frame sequence as the target key frames. Optionally, the key frame time T_I represents the time taken by the video decoding module 103 to decode the most complex video frames in the target video.
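As a rough illustration of the pre-decoding step, the sketch below sorts key frames by code-stream length and averages the pre-decoding time over the selected ones. The text does not specify how N_keyframes is derived from the frame rate and video length, so the sampling rule here (about one sampled key frame per thousand video frames) is purely an assumption, as is the `predecode` callback.

```python
def estimate_keyframe_time(keyframes, frame_rate, video_length_s, predecode):
    """Estimate T_I by pre-decoding the longest (most complex) key frames.

    keyframes: list of (frame_id, bitstream_length) pairs
    predecode: callable returning the decode time in ms for one frame id (hypothetical)
    """
    # Sort key frames in descending order of code-stream length
    ordered = sorted(keyframes, key=lambda kf: kf[1], reverse=True)
    # Assumed selection rule: roughly one sampled key frame per 1000 video frames
    n = max(1, min(len(ordered), frame_rate * video_length_s // 1000))
    total = sum(predecode(frame_id) for frame_id, _ in ordered[:n])
    return total / n  # T_I = total pre-decoding time / N_keyframes
```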
In some embodiments, the second time represents the decoding capability of the video decoding module 103 for a single video frame. Optionally, the second time is the maximum time allowed for decoding a single video frame, as determined by the video processing capability of the video decoding module 103, and may be denoted T_max. Optionally, the second time T_max may be preset in the feedback module 101, or may be calculated by the pre-detection module 105. In some embodiments, the time taken by the video decoding module 103 to process a single video frame includes not only the decoding time but also the frame-sending and display-sending time of the video decoding module 103, so the pre-detection module 105 takes the frame-sending and display-sending time into account when calculating T_max. The pre-detection module 105 calculating the second time T_max comprises: calculating T_max according to the video frame rate of the target video and the frame-sending and display-sending time of the video decoding module 103, i.e. T_max = [1000 / frame rate] - a, where the frame rate is the video frame rate of the target video in frames per second and a is the frame-sending and display-sending time of the video decoding module 103. In a specific example, if the frame rate of the target video is 60 fps and a = 2 ms, then T_max = [1000/60] - 2 = 16 - 2 = 14 ms.
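The T_max computation follows directly from the worked example; a minimal sketch (function name is ours):

```python
def second_time_ms(frame_rate, send_display_ms):
    """T_max = floor(1000 / frame_rate) - a, where a is the frame-sending
    and display-sending time of the video decoding module."""
    return 1000 // frame_rate - send_display_ms
```

For the values in the example, `second_time_ms(60, 2)` reproduces the 14 ms budget.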
In the embodiment of the present invention, the real time taken by the video decoding module 103 to decode a single video frame is referred to as the first time, which may optionally be denoted T. In some embodiments, upon initially starting decoding of the target video, the pre-detection module 105 assigns T an initial value, denoted T_initial. The pre-detection module 105 assigning an initial value to T comprises: determining an initial first duration elapsedTime in the target video before decoding of the target video is initially started, and assigning the initial value to the first time according to the number of video frames, frameNum, contained within the initial first duration, for example T_initial = elapsedTime / frameNum.
The pre-detection module 105 sends T_I, T_initial and T_max to the video decoding module 103. The video decoding module 103 sends T_initial and T_max to the judgment module 106, and sends T_I, T_initial and T_max to the feedback module 101.
The judgment module 106 is configured to judge, according to T_initial and T_max, whether to execute the frame loss strategy for each video frame within the initial first duration of the target video. Optionally: (1) if T_initial < T_max, the real time taken to decode a single video frame is within the video processing capability of the video decoding module 103, and no frames need to be dropped within the initial first duration; (2) if T_initial = T_max, the real time taken to decode a single video frame exactly matches the video processing capability of the video decoding module 103 and the video output module 104 can just smoothly play the target video, so the video frames within the first duration may be decoded by the video decoding module 103, and during decoding it is further determined, according to the playback effect of the target video, whether the un-decoded video frames within the first duration need to be dropped; (3) if T_initial > T_max, the real time taken to decode a single video frame exceeds the video processing capability of the video decoding module 103, and frames within the first duration need to be dropped. The judgment module 106 sends the judgment result to the feedback module 101.
The feedback module 101 is configured to (1) send the determination result of the determination module 106 to the frame loss module 102 before the decoding of the target video is started initially. (2) During the process of decoding the video frames by the video decoding module 103, the feedback module 101 is configured to obtain decoding parameters of the video frames by the video decoding module 103 in real time. The decoding parameters may for example comprise decoding time stamps. The feedback module 101 may obtain the decoding time consumption of the decoded video frame based on the decoding time stamp. Alternatively, the feedback module 101 may determine the real-time consumption of the video decoding module 103 for decoding a single video frame, i.e. obtain the first time T, according to the decoding time consumption of the decoded video frame. Optionally, the feedback module 101 may further determine a decoding delay of the decoded video frame according to the decoding time stamp of the decoded video frame, and determine whether the decoded video frame is already out of synchronization with the audio frame according to the decoding delay.
In some embodiments, the feedback module 101 calculating the first time T from the decoding time consumption of the decoded video frame comprises: the feedback module 101 calculates the first time T from the decoding elapsed time of each video frame falling within the first time window. Optionally, the first time window is a decoded window nearest to the current time.
In some embodiments, the feedback module 101 may determine the first time T based on the ratio of the decoding time to the number of frames of the decoded video frames. In some embodiments, the feedback module 101 may calculate the real time T_realtime taken to decode a single video frame as the ratio of the decoding time to the number of frames of the decoded video frames, and then calculate the first time T from the real time T_realtime and the key frame time T_I taken to decode key frames of the target video, for example T = 1/2 (T_I + T_realtime). Further, the feedback module 101 sends the calculated first time T, together with the acquired decoding delay and other parameters, to the frame loss module 102, so that the frame loss module 102 executes the frame loss strategy.
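A minimal sketch of this feedback computation, assuming the real-time cost is the plain mean of the per-frame decode times in the most recent window and that T blends it equally with T_I, i.e. T = (T_I + T_realtime) / 2:

```python
def first_time_ms(window_decode_times_ms, t_i_ms):
    """Compute the first time T from the latest decoded window and T_I."""
    # Real-time per-frame decoding cost over the most recent decoded window
    t_realtime = sum(window_decode_times_ms) / len(window_decode_times_ms)
    # Blend with the key-frame (worst-case) decode time: T = 1/2 (T_I + T_realtime)
    return 0.5 * (t_i_ms + t_realtime)
```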
The frame loss module 102 executes the frame loss strategy. Optionally, the frame loss module 102 is configured to: (1) when decoding of the target video is initially started, drop frames among the video frames within the first duration according to the judgment result from the judgment module 106 (the specific frame loss mode may be the same as the frame loss mode used during video decoding); (2) during decoding of video frames by the video decoding module 103, determine, according to the first time T and the second time T_max allowed by the video decoding module 103 for decoding a single video frame, whether the frame loss strategy needs to be executed on the un-decoded video frames of the target video, and, when it is determined that frames need to be dropped, drop frames among the un-decoded video frames of the target video.
During video decoding, the frame loss module 102 determining, according to the first time T and the second time T_max, whether frames need to be dropped among the un-decoded video frames of the target video comprises: the frame loss module 102 judging, according to the first time T and the second time T_max, whether the current frame needs to be dropped, or whether each un-decoded video frame falling within the second time window needs to be dropped. Optionally, the second time window is the time window immediately following the first time window.
In some embodiments, the specific way in which the frame loss module 102 determines, from the first time T and the second time T_max, whether an undecoded video frame needs to be dropped may include: (1) If T < T_max, the real-time consumption of decoding a single video frame is within the video processing capability of the video decoding module 103, and no frames need to be dropped. (2) If T = T_max, the real-time consumption of decoding a single video frame exactly matches the video processing capability of the video decoding module 103, and the video output module 104 can just play the target video smoothly; the frame loss module 102 may then decide whether to drop undecoded video frames according to the decoding delay parameter, the playing effect of the target video, and the like. (3) If T > T_max, the real-time consumption of decoding a single video frame exceeds the video processing capability of the video decoding module 103, and the frame loss policy can be adopted to ensure smooth playing of the target video.
In some embodiments, a first threshold M is also preset according to the video processing capability of the video decoding module 103. When the first time T is greater than the second time T_max and the difference between them is smaller than the first threshold M, the frame loss policy can still make the target video play smoothly, so frame dropping of the undecoded video frames of the target video is determined. When the first time T is greater than the second time T_max and the difference between them is greater than or equal to the first threshold M, the real-time consumption of decoding a single video frame exceeds the extreme of the video processing capability of the video decoding module 103; the target video cannot be played smoothly even with frame dropping, and the user can be prompted that the video playing device does not support playing the target video.
Correspondingly, when the first time T is greater than the second time T_max and their difference is smaller than the first threshold M, the frame loss module 102 determines that the current undecoded video frame needs to be dropped, or that each undecoded video frame falling within the second time window needs to be dropped. Alternatively, when decoding of the target video is initially started, if T_init is greater than T_max and the difference between T_init and T_max is smaller than the first threshold M, the frame loss module 102 drops frames within the initial first duration. Optionally, the initial first duration may have the same length as the first time window.
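The three-way comparison against T_max and the threshold M can be sketched as below (a hedged sketch; the identifiers are illustrative, and the borderline case T = T_max, which the embodiment leaves to the module's discretion, is folded into "decode" here for simplicity):

```python
def frame_drop_decision(t, t_max, m):
    """Decide how to handle undecoded frames given the per-frame time estimate.

    Returns one of:
      "decode"      - T <= T_max: the decoder keeps up, no dropping needed
      "drop"        - T_max < T < T_max + M: dropping frames restores smoothness
      "unsupported" - T >= T_max + M: even dropping cannot save playback
    """
    if t <= t_max:
        return "decode"
    if t - t_max < m:
        return "drop"
    return "unsupported"
```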
In some embodiments, the frame loss module 102 dropping the current video frame, or each undecoded video frame falling within the second time window, includes: determining the number of frames to be dropped according to the first time T and the second time T_max, and then determining, according to the video frame type of each undecoded video frame, whether it is a reference frame of other frames, and the number of frames to be dropped, the target video frames that need not be sent to the video decoding module 103 for decoding. Optionally, when decoding of the target video is initially started, the number of frames to be dropped is determined from T_init and the second time T_max; the other steps are the same as during video decoding and are not repeated.
In some embodiments, the manner in which the frame loss module 102 determines whether the current undecoded video frame is the target video frame described above includes:
If the current undecoded video frame is a key frame (I frame for short), it is not the target video frame: the frame loss module 102 sends it to the video decoding module 103 for decoding and then judges whether the next undecoded video frame is the target video frame.

Optionally, if the current undecoded video frame is a forward predictive coded frame (P frame for short) and is a reference frame of other video frames, it is not the target video frame: the frame loss module 102 sends it to the video decoding module 103 for decoding and then judges whether the next undecoded video frame is the target video frame.

Optionally, if the current undecoded video frame is a P frame, is a non-reference frame, and the number of frames already dropped has not reached the number of frames to be dropped, it is the target video frame: the frame loss module 102 does not send it to the video decoding module 103 for decoding, and then judges whether the next undecoded video frame is the target video frame. Otherwise, if the current undecoded video frame is a non-reference P frame but the number of frames already dropped has reached the number of frames to be dropped, it is not dropped and is sent to the video decoding module 103 for decoding.

Optionally, if the current undecoded video frame is a bi-directional predictive interpolation coded frame (B frame for short) and the number of frames already dropped has not reached the number of frames to be dropped, it is the target video frame: the frame loss module 102 does not send it to the video decoding module 103 for decoding, and then judges whether the next undecoded video frame is the target video frame. Otherwise, if the current undecoded video frame is a B frame and the number of frames already dropped has reached the number of frames to be dropped, it is not dropped and is sent to the video decoding module 103 for decoding.
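The frame-type rules above can be condensed into a small predicate (a sketch; the identifiers are illustrative, and B-frame reference status is not checked, matching the text):

```python
def is_target_frame(frame_type, is_reference, dropped, to_drop):
    """Return True if the frame may be skipped (not sent to the decoder).

    frame_type:   "I", "P", or "B"
    is_reference: True if other frames predict from this frame
    dropped:      number of frames already dropped in the current period
    to_drop:      number of frames that should be dropped in the period
    """
    if frame_type == "I":
        return False                 # key frames are always decoded
    if frame_type == "P" and is_reference:
        return False                 # reference P frames are always decoded
    # non-reference P frames and B frames may be dropped until the quota is met
    return dropped < to_drop
```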
In some embodiments, the above manner of calculating the number of frames to be dropped may include: calculating a first frame rate from the first time T, calculating a second frame rate from the second time T_max, and calculating, from the first frame rate and the second frame rate, the number of frames to be dropped per unit duration. Optionally, the number of frames to be dropped per unit duration is used for dropping undecoded video frames in at least one subsequent unit duration. Optionally, the above judgment of whether the number of frames already dropped has reached the number of frames to be dropped may be made unit duration by unit duration. Alternatively, the second time window may include a plurality of unit durations; within the second time window, whether the number of frames already dropped has reached the number of frames to be dropped per unit duration is determined unit duration by unit duration.
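Assuming the two frame rates are simply the reciprocals of the per-frame decoding times, the per-unit-duration drop count might be computed as follows (a sketch; the one-second unit length and the rounding are assumptions):

```python
def frames_to_drop(t_ms, t_max_ms, unit_s=1.0):
    """Number of frames to drop per unit duration.

    t_ms:     first time T, estimated real decoding time per frame (ms)
    t_max_ms: second time T_max, maximum allowed decoding time per frame (ms)
    unit_s:   length of one unit duration in seconds
    """
    first_rate = 1000.0 / t_ms        # real-time decoding frame rate (fps)
    second_rate = 1000.0 / t_max_ms   # maximum sustainable frame rate (fps)
    if first_rate >= second_rate:
        return 0                      # the decoder keeps up, nothing to drop
    # drop the shortfall between the needed and the achievable frame rate
    return int((second_rate - first_rate) * unit_s)
```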
The video decoding module 103 is configured to decode the video frames sent by the frame loss module 102. The video output module 104 is configured to display and output the decoded video frames produced by the video decoding module 103.
It should be understood that the division of the modules of the video playing device shown in fig. 1 is merely a division of logical functions; in actual implementation they may be fully or partially integrated into one physical entity, or physically separated. These modules may all be implemented in the form of software invoked by a processing element, or all in hardware; alternatively, some modules may be implemented as software invoked by a processing element and others in hardware. For example, the determining module 106 and the feedback module 101 may each be a separately established processing element, and optionally, the functions implemented by the determining module 106 and the feedback module 101 may be integrated into a chip of the electronic device. The implementation of the other modules is similar. In addition, all or part of the modules may be integrated together or implemented independently. In implementation, each step of the above method, or each of the above modules, may be implemented by integrated logic circuits of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (Application Specific Integrated Circuit; hereinafter ASIC), one or more digital signal processors (Digital Signal Processor; hereinafter DSP), or one or more field programmable gate arrays (Field Programmable Gate Array; hereinafter FPGA), etc. For another example, the modules may be integrated together and implemented in the form of a System-on-a-Chip (hereinafter SOC).
Referring to fig. 2, a schematic structural diagram of another video playing device according to an embodiment of the present invention is provided. As shown in fig. 2, the video playing device includes: a feedback module 101, a frame loss module 102, a video decoding module 103 and a video output module 104. The above T_I, T_init and T_max may be preset in the feedback module 101. Optionally, the feedback module 101 further includes the pre-detection module 105 in addition to the functions described for fig. 1, so that the feedback module 101 may calculate T_I, T_init and T_max by itself before initially starting to decode the target video. Further, the frame loss module 102 may also include the functions of the judging module 106 in addition to the functions described for fig. 1. For the function of each module in fig. 2, refer to the description of fig. 1, which is not repeated here.
Referring to fig. 3, a flowchart of a video frame loss method is provided in an embodiment of the present invention. The method shown in fig. 3 is an initial start-up decoding process of the target video. As shown in fig. 3, the processing steps of the method include:
201, the pre-detection module calculates T_I, T_max and T_init. The pre-detection module sends T_I, T_max and T_init to the video decoding module; the video decoding module sends T_init and T_max to the judging module, and sends T_I, T_init and T_max to the feedback module.
202, the judging module judges, according to T_init and T_max, whether to execute the frame loss policy on each video frame within the initial first duration, obtaining a judgment result. For the specific judgment process, see the description of fig. 1.
203, the judging module sends the judging result to the feedback module, and the feedback module sends the judging result to the frame loss module.
204, the frame loss module determines, according to the judgment result, whether each video frame within the initial first duration needs to be dropped. If not, step 205 is performed; if so, step 206 is performed.
205, the frame loss module sends each video frame within the initial first duration to the video decoding module for decoding, and the video output module displays and outputs the decoded video frames.
206, the frame loss module judges whether each video frame within the initial first duration is a target video frame that does not need to be decoded.
207, for the target video frames within the initial first duration that do not need to be decoded, the frame loss module does not send them to the video decoding module for decoding.
208, for the non-target video frames within the initial first duration that need to be decoded, the frame loss module sends them to the video decoding module for decoding, and they are displayed and output through the video output module. For the manner of determining the target video frames, see the description of fig. 1; here T_init plays the role of T, and the target video frames within the initial first duration are judged based on T_init and T_max.
Optionally, while the video decoding module decodes each video frame within the initial first duration, the feedback module obtains the decoding time consumption of the video decoding module for each of those video frames and updates the value of T accordingly. Optionally, the updated value of T is used for the frame loss decision of the video frames in the next duration. Optionally, the length of the initial first duration may be equal to the length of the first time window.
Referring to fig. 4, another flowchart of a video frame loss method is provided in an embodiment of the present invention. The method shown in fig. 4 is applied to the decoding flow of the target video. As shown in fig. 4, the processing steps of the method include:
301, the feedback module obtains the time consumed by the video decoding module to decode the decoded video frames. For example, the feedback module obtains the decoding time T_N of the video decoding module for N consecutive video frames, where the N consecutive frames are the N decoded video frames nearest to the current time.
302, the feedback module calculates the first time T for decoding a single video frame based on the decoding time consumption of the decoded video frames. Optionally, if decoding the N consecutive video frames took time T_N, then T = T_N / N. The feedback module sends the first time T to the frame loss module.
303, the frame loss module determines, according to the first time T and the second time T_max allowed by the video decoding module for decoding a single video frame, whether to execute the frame loss policy on the current undecoded video frame F_i. If T <= T_max, the frame loss policy is not executed on video frame F_i, and step 308 is performed. If T > T_max and T - T_max < M, it is determined that the frame loss policy is executed on video frame F_i, and step 304 is performed. If T > T_max and T - T_max >= M, the video playing device does not support playing the target video, and the method ends.
304, the frame loss module determines whether the current undecoded video frame F_i is the target video frame. Optionally, the frame loss module may determine whether F_i is the target video frame according to the video frame type of F_i, whether F_i is a reference frame of other undecoded video frames, and the number of frames to be dropped.
305, if the current undecoded video frame F_i is a target video frame, the frame loss module drops F_i; that is, the frame loss module does not send F_i to the video decoding module for decoding.
306, the frame loss module determines whether there is a next undecoded video frame F_i+1. If so, step 307 is performed; if not, the method ends.
307, the frame loss module takes the next undecoded video frame F_i+1 as the current undecoded video frame F_i and jumps to step 303.
308, the frame loss module sends the current undecoded video frame F_i to the video decoding module for decoding and jumps to step 306.
Optionally, while the video decoding module decodes video frame F_i, the feedback module also obtains the decoding time consumption of F_i from the video decoding module and uses it to update the first time T. The updated first time T is used to decide whether to execute the frame loss policy on video frame F_i+1.
In this embodiment of the invention, whether to execute the frame loss policy on the current video frame is thus decided frame by frame, according to the time consumed to decode the N most recent video frames.
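Steps 301-302 keep a rolling record of the decoding times of the N most recent decoded frames; a minimal sketch of such an estimator (the class and method names are assumptions):

```python
from collections import deque

class FirstTimeEstimator:
    """Track the decoding times of the N most recent decoded frames
    and expose their mean as the first time T (steps 301-302)."""

    def __init__(self, n):
        self.times = deque(maxlen=n)  # keeps only the N newest samples

    def record(self, decode_time_ms):
        self.times.append(decode_time_ms)

    def first_time(self):
        # T = T_N / N over the most recent N frames
        return sum(self.times) / len(self.times)
```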
Referring to fig. 5, a flowchart of another video frame loss method according to an embodiment of the present invention is provided. As shown in fig. 5, the processing steps of the method include:
401, the feedback module obtains the decoding time consumption W_time of the video decoding module for each video frame falling within the first time window. The first time window is the decoded window closest to the current time and contains W video frames.
402, the feedback module calculates, based on W_time and W, the real-time consumption T_W of decoding a single video frame.
403, the feedback module calculates the first time T for decoding a single video frame from T_W and T_I. The feedback module sends the first time T to the frame loss module.
404, the frame loss module determines, according to the first time T and the second time T_max, whether to execute the frame loss policy on each undecoded video frame falling within the second time window. If so, step 405 is performed; if not, step 409 is performed.
405, the number of frames to be dropped f_num is determined according to the first time T and the second time T_max. Specifically, a first frame rate is calculated from the first time T; it represents the real-time frame rate at which the video decoding module is currently decoding. A second frame rate is calculated from the second time; it represents the maximum frame rate the video decoding module can decode. From the first frame rate and the second frame rate, the number of frames to be dropped per unit duration f_num is calculated. Optionally, f_num may be determined from the difference between the first frame rate and the second frame rate; that is, when the real-time consumption of decoding a single video frame exceeds the maximum allowed time for decoding a single video frame, f_num may be determined from the difference between the maximum frame rate of the video decoding module and the real-time frame rate. Optionally, f_num is used to determine the number of frames dropped in at least one subsequent unit duration.
406, whether each undecoded video frame falling within the second time window is a target video frame is judged frame by frame according to its video frame type, whether it is a reference frame of other frames, and the number of frames to be dropped. Optionally, the second time window includes a plurality of unit durations; when judging whether the number of frames already dropped has reached the number of frames to be dropped, the frame loss module may make this determination unit duration by unit duration.
407, for the target video frames falling within the second time window that do not need to be decoded, the frame loss module does not send them to the video decoding module for decoding.
408, for the non-target video frames falling within the second time window that need to be decoded, the frame loss module sends them to the video decoding module for decoding, and they are displayed and output through the video output module.
409, the frame loss module sends each video frame falling within the second time window to the video decoding module for decoding, and they are displayed and output through the video output module.
Optionally, while the video decoding module decodes each video frame falling within the second time window, the feedback module obtains the decoding time consumption of the video decoding module for each video frame of the second time window and updates the value of T accordingly. Optionally, the updated value of T is used for the frame loss decision of the video frames in the next time window.
This embodiment of the invention adopts a sliding-window strategy: the frame loss policy for the current window is determined according to the real-time decoding consumption measured in the previous window.
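The sliding-window flow of fig. 5 can be sketched end to end as follows (a sketch under several assumptions: the first time is the average T = (T_I + T_W)/2 from fig. 1, frame rates are reciprocals of per-frame times in milliseconds, and all identifiers are illustrative):

```python
def sliding_window_drop(windows, t_key_ms, t_max_ms, m_ms):
    """Decide, window by window, how many frames to drop in the next window.

    windows:  list of lists; each inner list holds the per-frame decoding
              times (ms) measured for one decoded time window
    t_key_ms: key-frame decoding time T_I
    t_max_ms: maximum allowed per-frame decoding time T_max
    m_ms:     first threshold M
    Returns one decision per window for the window that follows it:
    ("decode", 0), ("drop", f_num) or ("unsupported", 0).
    """
    decisions = []
    for frame_times in windows:
        t_w = sum(frame_times) / len(frame_times)  # real-time cost T_W
        t = (t_key_ms + t_w) / 2                   # first time T for next window
        if t <= t_max_ms:
            decisions.append(("decode", 0))
        elif t - t_max_ms < m_ms:
            # drop the shortfall between maximum and real-time frame rates
            f_num = int(1000.0 / t_max_ms - 1000.0 / t)
            decisions.append(("drop", f_num))
        else:
            decisions.append(("unsupported", 0))
    return decisions
```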
Referring to fig. 6, a schematic structural diagram of a video playing device according to an embodiment of the present invention is provided. As shown in fig. 6, the video playback device is in the form of a general purpose computing device. Components of the video playback device may include, but are not limited to: one or more processors 510, a communication interface 520, a memory 530, and a communication bus 540 that connects the various system components (including the memory 530, the communication interface 520, and the processor 510).
Communication bus 540 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (hereinafter ISA) bus, the Micro Channel Architecture (hereinafter MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (hereinafter VESA) local bus, and the Peripheral Component Interconnect (hereinafter PCI) bus.
Electronic devices typically include a variety of computer system readable media. Such media can be any available media that can be accessed by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 530 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) and/or cache memory. The electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media. Memory 530 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the method steps of embodiments of the present invention.
A program/utility having a set (at least one) of program modules may be stored in the memory 530, such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules typically carry out the functions and/or methods of the embodiments described herein.
The processor 510 executes programs stored in the memory 530 to perform various functional applications and data processing, for example, to implement the video frame dropping method provided by the embodiment of the present application.
In a specific implementation, the embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a program, where the program may implement some or all of the steps in each embodiment provided by the present application when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random-access memory (random access memory, RAM), or the like.
In a specific implementation, the embodiment of the application further provides a chip, which comprises: and a processor for executing computer program instructions stored in the memory, wherein the computer program instructions, when executed by the processor, trigger the chip to execute the video frame loss method of the embodiments of the present application.
In a specific implementation, an embodiment of the present invention further provides a computer program product, where the computer program product contains executable instructions, where the executable instructions when executed on a computer cause the computer to perform some or all of the steps in the above method embodiments.
In the embodiments of the present invention, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relation of association objects, and indicates that there may be three kinds of relations, for example, a and/or B, and may indicate that a alone exists, a and B together, and B alone exists. Wherein A, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of the following" and the like means any combination of these items, including any combination of single or plural items. For example, at least one of a, b and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present invention, any of the functions, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely exemplary embodiments of the present invention. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A video frame loss method, characterized by comprising:
determining a first time for decoding a single video frame according to the time consumption of the video decoding module for decoding the decoded video frame of the target video; calculating real-time consumption of decoding a single video frame according to the decoded video frame number and the decoding time consumption of the video decoding module; calculating the first time according to the real-time consumption and the key frame time consumption of the key frame of the target video;
determining whether to perform a frame loss policy on the un-decoded video frames of the target video according to the first time and a second time allowed by the video decoding module to decode a single video frame, wherein the second time is a maximum time allowed by decoding the single video frame determined according to the video processing capability of the video decoding module;
if the first time is greater than the second time and the difference between the first time and the second time is less than a first threshold, determining to execute a frame loss policy on the undecoded video frames of the target video;
And if the frame loss strategy is determined to be executed on the un-decoded video frames of the target video, determining target video frames from the un-decoded video frames of the target video, wherein the target video frames are not sent to the video decoding module for decoding.
2. The method according to claim 1, wherein the method further comprises:
determining a plurality of target key frames from key frames of the target video before initially starting decoding of the target video;
acquiring the pre-decoding time consumption of the video decoding module on the target key frames;
and calculating the key frame time consumption according to the pre-decoding time consumption.
3. The method of claim 1, wherein the second time is determined based on video processing capabilities of the video decoding module, comprising:
the second time is determined according to the video frame rate of the target video and the frame sending time and the display sending time of the video decoding module.
4. A method according to any one of claims 1-3, wherein the first time is determined from the decoding time consumption of each video frame falling within a first time window, the first time window being the decoded window closest to the current time;
The determining whether to execute the frame loss policy on the un-decoded video frame of the target video according to the first time and the second time allowed by the video decoding module to decode the single video frame comprises:
determining whether to execute a frame loss strategy on each un-decoded video frame falling into a second time window according to the first time and the second time; the second time window is the next adjacent time window of the first time window, the decoding time consumption of each video frame corresponding to the second time window is used for recalculating the first time, the recalculated first time is used for executing a frame loss strategy for the next adjacent time window of the second time window.
5. The method of claim 1, wherein the determining a target video frame from the un-decoded video frames of the target video comprises:
determining the frame number to be lost according to the first time and the second time;
and determining the target video frame from the un-decoded video frame of the target video according to the video frame type of the un-decoded video frame, whether the un-decoded video frame is a reference frame of other frames and the frame number to be lost.
6. The method of claim 5, wherein determining the number of frames to be dropped based on the first time and the second time comprises:
calculating a first frame rate according to the first time;
calculating a second frame rate according to the second time;
and calculating the number of frames to be dropped per unit time according to the first frame rate and the second frame rate, wherein the number of frames to be dropped per unit time is used to determine how many target video frames are selected in at least one subsequent unit time.
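A minimal sketch of claim 6's rate arithmetic, assuming "unit time" is one second and that the frames to drop equal the gap between the two frame rates (the claim does not fix the exact formula, so this is one plausible reading):

```python
import math

def frames_to_drop_per_second(first_time_ms, second_time_ms):
    """Hypothetical sketch of claim 6: invert each per-frame time into
    a frame rate, then drop the shortfall between the rate the decoder
    actually achieves and the rate the budget requires."""
    first_fps = 1000.0 / first_time_ms    # rate actually achieved
    second_fps = 1000.0 / second_time_ms  # rate the budget requires
    return max(0, math.ceil(second_fps - first_fps))
```

For instance, decoding at 50 ms per frame (20 fps) against a 40 ms budget (25 fps) would call for dropping 5 frames per second; when decoding keeps up, nothing is dropped.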
7. The method of claim 5, wherein determining the target video frame from the undecoded video frames of the target video based on the video frame type of each undecoded video frame, whether the undecoded video frame is a reference frame for other frames, and the number of frames to be dropped comprises:
if the current undecoded video frame is a key frame, sending the current undecoded video frame to the video decoding module for decoding, and judging whether the next undecoded video frame is the target video frame;
if the current undecoded video frame is a forward predictive coded frame that is a reference frame for other video frames, sending the current undecoded video frame to the video decoding module for decoding, and judging whether the next undecoded video frame is the target video frame;
if the current undecoded video frame is a forward predictive coded frame that is a non-reference frame, and the number of frames already dropped has not reached the number of frames to be dropped, taking the current undecoded video frame as the target video frame, and judging whether the next undecoded video frame is the target video frame;
and if the current undecoded video frame is a bidirectionally predictive coded frame, and the number of frames already dropped has not reached the number of frames to be dropped, taking the current undecoded video frame as the target video frame, and judging whether the next undecoded video frame is the target video frame.
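The four cases of claim 7 amount to a per-frame walk that always decodes key frames and referenced P-frames, and drops non-reference P-frames and B-frames until the quota is met. A hypothetical sketch (the frame representation and names are assumptions):

```python
def select_drops(frames, frames_to_drop):
    """Hypothetical sketch of claim 7's selection walk.
    frames: list of (frame_type, is_reference) tuples, where frame_type
    is "I" (key frame), "P" (forward predictive), or "B" (bidirectional).
    Returns the indices of frames chosen as target (dropped) frames."""
    dropped = []
    for index, (frame_type, is_reference) in enumerate(frames):
        if len(dropped) >= frames_to_drop:
            break  # quota reached; stop dropping
        if frame_type == "I":
            continue  # key frames are always sent to the decoder
        if frame_type == "P" and is_reference:
            continue  # referenced P-frames must be decoded
        # Non-reference P-frame or B-frame: safe to drop.
        dropped.append(index)
    return dropped
```

Dropping only non-reference frames keeps the decoded stream self-consistent: no surviving frame predicts from a frame that was never decoded, so playback speeds up without visual corruption.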
8. The method according to claim 1, wherein the method further comprises:
before decoding of the target video initially starts, determining an initial first duration in the target video, and assigning the first time an initial value according to the number of video frames contained in the initial first duration;
and determining, according to the initial value of the first time and the second time, whether to execute the frame loss policy for each video frame within the initial first duration.
9. A video playback device, comprising:
a feedback module configured to determine the first time for decoding a single video frame according to the time the video decoding module takes to decode the decoded video frames of the target video; calculate the real-time consumption of decoding a single video frame from the number of decoded video frames and the decoding time consumption of the video decoding module; and calculate the first time from the real-time consumption and the key frame time consumption of the key frames of the target video;
a frame loss module configured to determine whether to execute a frame loss policy on the undecoded video frames of the target video according to the first time and a second time allowed by the video decoding module for decoding a single video frame, wherein the second time is the maximum time allowed for decoding a single video frame, determined according to the video processing capability of the video decoding module; determine to execute the frame loss policy on the undecoded video frames of the target video if the first time is greater than the second time and the difference between the first time and the second time is less than a first threshold; and, if it is determined to execute the frame loss policy, determine target video frames from the undecoded video frames of the target video, wherein the target video frames are not sent to the video decoding module for decoding.
10. A video playback device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1-8.
11. A chip, comprising: a processor for executing computer program instructions stored in a memory, wherein the computer program instructions, when executed by the processor, trigger the chip to perform the method of any one of claims 1 to 8.
12. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program, when run, controls a device in which the computer readable storage medium is located to perform the method according to any one of claims 1 to 8.
CN202211066641.8A 2022-09-01 2022-09-01 Video frame loss method and device Active CN115460458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211066641.8A CN115460458B (en) 2022-09-01 2022-09-01 Video frame loss method and device

Publications (2)

Publication Number Publication Date
CN115460458A CN115460458A (en) 2022-12-09
CN115460458B true CN115460458B (en) 2023-09-19

Family

ID=84300894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211066641.8A Active CN115460458B (en) 2022-09-01 2022-09-01 Video frame loss method and device

Country Status (1)

Country Link
CN (1) CN115460458B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117041669B (en) * 2023-09-27 2023-12-08 湖南快乐阳光互动娱乐传媒有限公司 Super-division control method and device for video stream and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856812A (en) * 2014-03-25 2014-06-11 北京奇艺世纪科技有限公司 Video playing method and device
CN114567796A (en) * 2022-03-04 2022-05-31 北京字节跳动网络技术有限公司 Frame loss method, device, server and medium

Similar Documents

Publication Publication Date Title
US10659847B2 (en) Frame dropping method for video frame and video sending apparatus
US8997160B2 (en) Variable bit video streams for adaptive streaming
US20170318323A1 (en) Video playback method and control terminal thereof
TWI511544B (en) Techniques for adaptive video streaming
EP2300928B1 (en) Client side stream switching
CN104967884B (en) A kind of bitstreams switching method and apparatus
US8483551B2 (en) Method for generating double-speed IDR-unit for trick play, and trick play system and method using the same
AU2012207151A1 (en) Variable bit video streams for adaptive streaming
US20070217505A1 (en) Adaptive Decoding Of Video Data
CN113287319B (en) Method and apparatus for optimizing encoding operations
JP5521940B2 (en) Encoding method, decoding method, encoding device, and decoding device
US20170180746A1 (en) Video transcoding method and electronic apparatus
EP3073754A1 (en) Systems and methods for video play control
CN115460458B (en) Video frame loss method and device
CN113490055A (en) Data processing method and device
US20100135417A1 (en) Processing of video data in resource contrained devices
CN104053002A (en) Video decoding method and device
CN115150610A (en) Image processing method, device and equipment
CN113286146B (en) Media data processing method, device, equipment and storage medium
CN115150611A (en) Image processing method, device and equipment
US8203619B2 (en) Target bit rate decision method for wavelet-based image compression
TWI439137B (en) A method and apparatus for restructuring a group of pictures to provide for random access into the group of pictures
CN105847822A (en) Video decoding method and device
CN115278308B (en) Media stream processing method, device, equipment and storage medium
JP7304419B2 (en) Transmission device, transmission method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant