CN116965026A - Video encoder, video decoder and corresponding methods - Google Patents


Info

Publication number
CN116965026A
CN116965026A (application CN202180095330.1A)
Authority
CN
China
Prior art keywords
slice
reference frame
frame
optimal reference
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180095330.1A
Other languages
Chinese (zh)
Inventor
于建华
张怡浩
左文明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN116965026A
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/164Feedback from the receiver or from the transmission channel

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a video encoder, a video decoder and corresponding methods. The video encoder comprises a slice encoder and an encoding-end optimal reference frame management module based on channel feedback. The slice encoder is configured to divide a video frame in video data into a plurality of slices, and to encode the plurality of slices respectively to obtain a plurality of slice code streams. The encoding-end optimal reference frame management module is configured to, when a first slice code stream among the plurality of slice code streams is successfully transmitted according to channel feedback, replace the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice, where the replaced optimal reference frame is used as the reference frame for encoding the next video frame. In this way, during encoding and decoding, data that failed to be sent is not added to the optimal reference frame and therefore does not affect the decoding of the next video frame. By adjusting the reference relation according to the optimal reference frame, the problem of continuous multi-frame stuttering caused by data loss is avoided, and the performance of wireless screen projection is improved.

Description

Video encoder, video decoder and corresponding methods
Technical Field
The present application relates to the field of video encoding and decoding technologies, and in particular, to a video encoder, a video decoder, and corresponding methods.
Background
Wireless screen projection technology can present conference content, multimedia files, game pictures, movies and videos on another screen, and supports interaction without any connecting cables. Wireless screen projection is increasingly widely used in people's daily life, bringing convenience to both work and entertainment.
The current mainstream wireless screen projection technology includes the Miracast technology. Its basic principle is that a transmitting-end device compresses audio and video, multiplexes the compressed audio and video code streams into a transport stream, and sends the transport stream over a WiFi channel using the real-time streaming protocol (RTSP) to a receiving-end device, which decodes and plays it. Miracast video coding adopts the standard H.264 protocol and, during coding, a standard group of pictures (GOP) structure comprising I frames and P frames, where an I frame is an intra-frame predicted frame that does not depend on previous frames, and a P frame is an inter-frame predicted frame that depends on previous frames. However, a WiFi channel cannot provide stable wireless bandwidth; when the channel fades, data packets are lost, so the compressed code stream of a certain frame is lost. Because decoding a P frame requires referring to the previous frame, whether an I frame or a P frame is lost, a series of subsequent frames cannot be decoded, causing frequent continuous multi-frame stuttering and poor wireless screen projection performance.
Disclosure of Invention
The application provides a video encoder, a video decoder and corresponding methods, which help solve the problem of frequent continuous multi-frame stuttering during wireless screen projection and improve the performance of wireless screen projection.
In a first aspect, the present application provides a video encoder, where the video encoder includes a slice encoder and an encoding-end optimal reference frame management module based on channel feedback. The slice encoder is configured to divide a first video frame in video data into a plurality of slices, and to encode the plurality of slices respectively to obtain a plurality of slice code streams. The encoding-end optimal reference frame management module is configured to, when a first slice code stream among the plurality of slice code streams is successfully transmitted according to channel feedback, replace the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice, where the replaced optimal reference frame is used as a reference frame for encoding a next video frame.
In the video encoder, according to channel feedback, the encoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the successfully transmitted first slice, and the replaced optimal reference frame is used as the reference frame for encoding the next video frame. That is, data that failed to be transmitted is not added to the optimal reference frame and therefore does not affect the decoding of the next video frame. The video encoder adjusts the reference relation according to the optimal reference frame, so the problem of continuous multi-frame stuttering caused by data loss is avoided and the performance of wireless screen projection is improved.
With reference to the first aspect, in a possible implementation manner of the first aspect, the slice encoder is further configured to: receive the replaced optimal reference frame output by the encoding-end optimal reference frame management module.
With reference to the first aspect, in a possible implementation manner of the first aspect, in terms of replacing the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice when a first slice code stream among the plurality of slice code streams is successfully transmitted according to channel feedback, the encoding-end optimal reference frame management module is specifically configured to: receive a first flag, where the first flag is used to indicate whether the first slice code stream is successfully transmitted; and when the first flag indicates that the first slice code stream is successfully transmitted, replace the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice.
With reference to the first aspect, in a possible implementation manner of the first aspect, the encoding-end optimal reference frame management module is further configured to: receive the reconstruction data of the first slice output by the slice encoder.
With reference to the first aspect, in a possible implementation manner of the first aspect, the video encoder further includes a frame type control module, where the frame type control module is configured to: set the 1st frame in the video data as an I frame; and when the plurality of slice code streams corresponding to the 1st frame are all successfully transmitted, set the video frames after the 1st frame as P frames.
With reference to the first aspect, in a possible implementation manner of the first aspect, the frame type control module is further configured to: when the transmission of the plurality of slice code streams corresponding to the 1st frame fails, set the video frames after the 1st frame as I frames until the plurality of slice code streams corresponding to an nth frame are all successfully transmitted, and then set the video frames after the nth frame as P frames, where n is an integer greater than 1.
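The frame type control logic described above can be sketched as follows. This is an illustrative Python sketch with hypothetical names (the patent does not specify an implementation): the encoder keeps emitting I frames until every slice code stream of some frame is acknowledged, and only then switches to P frames.

```python
class FrameTypeControl:
    """Illustrative sketch of the frame type control module (hypothetical
    names, not the patent's implementation): emit I frames until all slice
    code streams of some frame are successfully transmitted, then emit P
    frames that can reference the optimal reference frame."""

    def __init__(self):
        # Becomes True once every slice code stream of a frame was acknowledged.
        self.anchor_established = False

    def frame_type(self, frame_index: int) -> str:
        # The 1st frame is always an I frame; subsequent frames stay I frames
        # while no fully acknowledged frame exists yet.
        if frame_index == 0 or not self.anchor_established:
            return "I"
        return "P"

    def on_frame_feedback(self, all_slices_acked: bool) -> None:
        # Channel feedback for one whole frame's slice code streams.
        if all_slices_acked:
            self.anchor_established = True
```

For example, if the 1st frame's slices fail, the 2nd frame is again an I frame; once the 2nd frame's slices all succeed, the 3rd frame onward can be P frames.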
In a second aspect, the present application provides a video decoder, where the video decoder includes a slice decoder, a decoding-end optimal reference frame management module, and a multi-way switch (MUX) module. The slice decoder is configured to decode an acquired first slice code stream to obtain reconstruction data of the first slice. The decoding-end optimal reference frame management module is configured to replace the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice, where the replaced optimal reference frame is used as a reference frame for decoding a next video frame. The MUX module is configured to output the reconstruction data of the first slice, and to output the data corresponding to the second slice position in the optimal reference frame when reception of a second slice code stream fails.
In the video decoder, the decoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the successfully transmitted first slice, and the replaced optimal reference frame is used as the reference frame for decoding the next video frame. That is, data that failed to be transmitted is not added to the optimal reference frame and therefore does not affect the decoding of the next video frame. The video decoder adjusts the reference relation according to the optimal reference frame, so the problem of continuous multi-frame stuttering caused by data loss is avoided and the performance of wireless screen projection is improved.
With reference to the second aspect, in a possible implementation manner of the second aspect, the slice decoder is further configured to: receive the replaced optimal reference frame output by the decoding-end optimal reference frame management module.
With reference to the second aspect, in a possible implementation manner of the second aspect, the decoding-end optimal reference frame management module is further configured to: receive the reconstruction data of the first slice output by the slice decoder.
With reference to the second aspect, in a possible implementation manner of the second aspect, in terms of outputting the reconstruction data of the first slice, the MUX module is specifically configured to: when a second flag is received, output the reconstruction data of the first slice, where the second flag is used to indicate that the first slice code stream is successfully received.
With reference to the second aspect, in a possible implementation manner of the second aspect, in terms of outputting the data corresponding to the second slice position in the optimal reference frame when reception of the second slice code stream fails, the MUX module is specifically configured to: when a third flag is received, output the data corresponding to the second slice position in the optimal reference frame, where the third flag is used to indicate that reception of the second slice code stream failed.
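The MUX selection described in these implementation manners can be sketched as follows (an illustrative Python sketch with hypothetical names, not the patent's implementation): the flag for a successfully received slice selects the freshly decoded reconstruction data, while the reception-failure flag falls back to the co-located slice data held in the optimal reference frame.

```python
def mux_output(slice_index, received_ok, reconstructed_slice, optimal_reference_frame):
    """Sketch of the MUX module's per-slice decision (illustrative names).

    received_ok stands in for the second/third flags: True means the slice
    code stream was received and decoded; False means reception failed and
    the slice is concealed with the optimal reference frame's data."""
    if received_ok:
        # Second flag: output the slice just decoded by the slice decoder.
        return reconstructed_slice
    # Third flag: output the data at the same slice position in the
    # optimal reference frame, so the displayed frame has no gap.
    return optimal_reference_frame[slice_index]
```

In effect, every displayed frame is assembled slice by slice from either fresh reconstructions or the last slices known to be valid.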
In a third aspect, the present application provides a video encoding method, where the method is applied to a video encoder, and the video encoder includes a slice encoder and an encoding-end optimal reference frame management module based on channel feedback. The method includes: the slice encoder divides a first video frame in video data into a plurality of slices; the slice encoder encodes the plurality of slices respectively to obtain a plurality of slice code streams; and when a first slice code stream among the plurality of slice code streams is successfully transmitted according to channel feedback, the encoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice, where the replaced optimal reference frame is used as a reference frame for encoding a next video frame.
In the method, according to channel feedback, the encoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the successfully transmitted first slice, and the replaced optimal reference frame is used as the reference frame for encoding the next video frame. That is, data that failed to be transmitted is not added to the optimal reference frame and therefore does not affect the decoding of the next video frame. The video encoder adjusts the reference relation according to the optimal reference frame, so the problem of continuous multi-frame stuttering caused by data loss is avoided and the performance of wireless screen projection is improved.
With reference to the third aspect, in a possible implementation manner of the third aspect, the method further includes: the slice encoder receives the replaced optimal reference frame output by the encoding-end optimal reference frame management module.
With reference to the third aspect, in a possible implementation manner of the third aspect, the replacing, by the encoding-end optimal reference frame management module, of the data corresponding to the first slice position in an optimal reference frame with the reconstruction data of the first slice when a first slice code stream among the plurality of slice code streams is successfully transmitted according to channel feedback includes: the encoding-end optimal reference frame management module receives a first flag, where the first flag is used to indicate whether the first slice code stream is successfully transmitted; and when the first flag indicates that the first slice code stream is successfully transmitted, the encoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice.
With reference to the third aspect, in a possible implementation manner of the third aspect, the method further includes: the encoding-end optimal reference frame management module receives the reconstruction data of the first slice output by the slice encoder.
With reference to the third aspect, in a possible implementation manner of the third aspect, the video encoder further includes a frame type control module, and the method further includes: the frame type control module sets the 1st frame in the video data as an I frame; and when the plurality of slice code streams corresponding to the 1st frame are all successfully transmitted, the frame type control module sets the video frames after the 1st frame as P frames.
With reference to the third aspect, in a possible implementation manner of the third aspect, the method further includes: when the transmission of the plurality of slice code streams corresponding to the 1st frame fails, the frame type control module sets the video frames after the 1st frame as I frames until the plurality of slice code streams corresponding to an nth frame are all successfully transmitted, and then sets the video frames after the nth frame as P frames, where n is an integer greater than 1.
In a fourth aspect, the present application provides a video decoding method, where the method is applied to a video decoder, and the video decoder includes a slice decoder, a decoding-end optimal reference frame management module, and a multi-way switch (MUX) module. The method includes: the slice decoder decodes an acquired first slice code stream to obtain reconstruction data of the first slice; the decoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice, where the replaced optimal reference frame is used as a reference frame for decoding a next video frame; the MUX module outputs the reconstruction data of the first slice; and when reception of a second slice code stream fails, the MUX module outputs the data corresponding to the second slice position in the optimal reference frame.
In the method, the decoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the successfully transmitted first slice, and the replaced optimal reference frame is used as the reference frame for decoding the next video frame. That is, data that failed to be transmitted is not added to the optimal reference frame and therefore does not affect the decoding of the next video frame. The video decoder adjusts the reference relation according to the optimal reference frame, so the problem of continuous multi-frame stuttering caused by data loss is avoided and the performance of wireless screen projection is improved.
With reference to the fourth aspect, in a possible implementation manner of the fourth aspect, the method further includes: the slice decoder receives the replaced optimal reference frame output by the decoding-end optimal reference frame management module.
With reference to the fourth aspect, in a possible implementation manner of the fourth aspect, the method further includes: the decoding-end optimal reference frame management module receives the reconstruction data of the first slice output by the slice decoder.
With reference to the fourth aspect, in a possible implementation manner of the fourth aspect, the outputting, by the MUX module, of the reconstruction data of the first slice includes: when a second flag is received, the MUX module outputs the reconstruction data of the first slice, where the second flag is used to indicate that the first slice code stream is successfully received.
With reference to the fourth aspect, in a possible implementation manner of the fourth aspect, the outputting, by the MUX module, of the data corresponding to the second slice position in the optimal reference frame when reception of the second slice code stream fails includes: when a third flag is received, the MUX module outputs the data corresponding to the second slice position in the optimal reference frame, where the third flag is used to indicate that reception of the second slice code stream failed.
Drawings
FIG. 1 is a schematic diagram of the Miracast technology;
FIG. 2 is a schematic diagram of a standard GOP structure;
FIG. 3 is a schematic diagram of a system framework to which embodiments of the present application are applied;
FIG. 4 is a schematic diagram of a video encoder according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another video encoder according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a video decoder according to an embodiment of the present application;
FIG. 7 is a schematic diagram of slice structure division according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of video encoding and decoding according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of a video encoding method according to an embodiment of the present application;
FIG. 10 is a flowchart of a video decoding method according to an embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
In the following, some terms in the embodiments of the present application are explained for easy understanding by those skilled in the art.
Reference to "at least one" in embodiments of the application means one or more, and "plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
And, unless specified to the contrary, references to "first," "second," etc. ordinal words of embodiments of the present application are used for distinguishing between multiple objects and are not used for limiting the order, timing, priority, or importance of the multiple objects. For example, the first information and the second information are only for distinguishing different information, and are not indicative of the difference in content, priority, transmission order, importance, or the like of the two information.
In order to facilitate understanding of the present application, concepts related to the present application will be explained first:
Wireless screen projection technology: wireless screen projection can present conference content, multimedia files, game pictures, movies and videos on another screen, and supports interaction without connecting cables. Currently, the mainstream wireless screen projection technologies mainly include the digital living network alliance (DLNA), AirPlay, and Miracast technologies. DLNA is an older wireless screen projection technology with wide device support, and can send multimedia streams such as videos, music and photos from a mobile phone to a television. However, DLNA can only deliver multimedia streams; it cannot deliver game pictures, PPT presentations, and the like. AirPlay is a screen mirroring technology, i.e., an extension of the mobile phone screen; it has narrower device support, is prone to stuttering and screen corruption, and its projected image quality is poor. Miracast is a newer wireless screen projection technology based on the wireless direct connection (WiFi Direct) technology, but it also suffers from large delay, stuttering and screen corruption, and its projected image quality is poor.
The Miracast technology is a wireless display standard based on the wireless direct connection technology; devices supporting the standard can share video pictures wirelessly. For example, a mobile phone can play a movie or photos directly on a television or other device through Miracast without being constrained by the length of a connecting cable. Unlike DLNA, Miracast also has a mirror function similar to AirPlay and can put the screen content of a mobile phone directly on a television screen, so games and the like can be played on the television screen.
Referring to FIG. 1, FIG. 1 is a schematic diagram of the Miracast technology. As shown in FIG. 1, the basic principle of Miracast is that a transmitting-end device captures the system picture and compresses it into an H.264 standard code stream, and captures audio and compresses it into an advanced audio coding (AAC) code stream. The compressed audio and video code streams are then multiplexed into a transport stream (TS), and the TS is sent using the real-time streaming protocol (RTSP) over WiFi Direct. The receiving-end device receives it through the RTSP protocol, decodes the audio and video, and finally plays the picture and sound. Essentially, the receiving-end device can be seen as an RTSP live stream player.
Miracast video coding uses the standard H.264 protocol and, during coding, a standard group of pictures (GOP) structure including I frames and P frames.
Referring to FIG. 2, FIG. 2 is a schematic diagram of a standard GOP structure. As shown in FIG. 2, under the standard GOP structure, an I frame is inserted every 30 frames, followed by 29 P frames. An I frame is an intra-frame predicted frame that does not depend on a previous frame; a P frame is an inter-frame predicted frame that depends on a previous frame. For example, as shown in FIG. 2, the I frame does not depend on previous frames, whereas the P1 frame depends on the I frame, the P2 frame depends on the P1 frame, and so on.
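The standard GOP dependency chain described above can be sketched as follows (illustrative Python, assuming the 30-frame GOP of FIG. 2): the I frame has no reference, and each P frame references the immediately preceding frame, which is why one lost frame breaks decoding of every frame up to the next I frame.

```python
def standard_gop_frame(frame_index, gop_size=30):
    """Sketch of the standard GOP dependency (illustrative, assuming a
    30-frame GOP): returns (frame_type, reference_frame_index).

    One I frame every gop_size frames with no reference; each following
    P frame depends on the immediately preceding frame."""
    if frame_index % gop_size == 0:
        return ("I", None)            # intra-frame predicted: no reference
    return ("P", frame_index - 1)     # inter-frame predicted: previous frame
```

Under this chain, losing frame k makes frames k+1 onward undecodable until the next multiple of 30, which is the continuous multi-frame stuttering discussed below.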
Since the WiFi channel cannot provide stable wireless bandwidth, there is relatively obvious bandwidth jitter, and when the channel fades, data packets are lost, causing frequent continuous multi-frame stuttering or screen corruption. Specifically: after packet loss in the wireless channel, the Miracast technology adopts a retransmission mechanism. When the wireless channel degrades to a certain extent, retransmission also fails, and the compressed code stream of a certain frame is lost. An I frame uses only intra-frame compression, so its compression ratio is low and its code stream is large; the probability of I frame transmission failure in a wireless channel is therefore larger, while the probability of P frame transmission failure is relatively smaller. Since a P frame depends on the previous frame, whether an I frame or a P frame is lost, a series of subsequent frames cannot be decoded until the next I frame, causing frequent continuous multi-frame stuttering and poor wireless screen projection performance. On the other hand, because the I frame code rate is higher than that of the P frame, when the I frame interval is too large, the duration of continuous multi-frame stuttering becomes long; when the I frame interval is too small, the probability of stuttering increases because of the high code rate of I frames, screen corruption can occur, and wireless screen projection performance is poor.
Having described the background of the application as above, the technical features of the embodiments of the present application are described below.
Referring to FIG. 3, FIG. 3 is a schematic diagram of a system framework to which an embodiment of the present application is applied. As shown in FIG. 3, the embodiment of the application is mainly applied to the field of wireless screen projection. In a system to which embodiments of the present application are applied, a video encoder (VENC) at the transmitting end encodes video data from a display subsystem (DSS) or a graphics processing unit (GPU). During encoding, each frame is divided into a plurality of slices that are encoded independently, and each encoded slice code stream (stream) is transmitted within a fixed time window by a wireless transmitter. When the fixed timing arrives, the wireless transmitter stops the data transmission and feeds back to the VENC a flag indicating whether the transmission succeeded. When the transmission succeeds, the VENC replaces the data corresponding to the slice position in the optimal reference frame with the reconstruction data of the successfully transmitted slice, where the reconstruction data is identical to the video data that the video decoder outputs to the DSS; when the transmission fails, no replacement is performed. The wireless receiver at the receiving end passes each successfully received slice code stream to a video decoder (VDEC) for decoding, and feeds back a reception failure flag for each slice whose reception failed. The VDEC decodes the successfully received slice code streams and sends the video data to the DSS for processing and display; for a slice whose reception failed, the data corresponding to that slice position in the optimal reference frame is sent to the DSS for processing and display.
In the system, according to channel feedback, the video encoder replaces the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the successfully transmitted first slice, and the replaced optimal reference frame is used as the reference frame for encoding the next video frame. That is, data that failed to be transmitted is not added to the optimal reference frame and therefore does not affect the decoding of the next video frame in the video decoder. The video encoder adjusts the reference relation according to the optimal reference frame, so the problem of continuous multi-frame stuttering caused by data loss is avoided and the performance of wireless screen projection is improved.
Referring to FIG. 4, FIG. 4 is a schematic diagram of a video encoder according to an embodiment of the present application. As shown in FIG. 4, the video encoder includes a slice encoder 401 and an encoding-end optimal reference frame management module 402 based on channel feedback.
In the video encoder shown in FIG. 4, the slice encoder 401 is configured to divide a first video frame in video data into a plurality of slices, and to encode the plurality of slices respectively to obtain a plurality of slice code streams.
The encoding-end optimal reference frame management module 402 is configured to, when a first slice code stream among the plurality of slice code streams is successfully transmitted according to channel feedback, replace the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice, where the replaced optimal reference frame is used as a reference frame for encoding a next video frame.
Optionally, the video data is DSS or GPU data, and the code stream output by the slice encoder 401 is sent by a wireless transmitter.
Optionally, the slice encoder 401 is further configured to: the replaced optimal reference frame output by the encoding-side optimal reference frame management module 402 is received.
Optionally, in terms of replacing the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice when the first slice code stream of the plurality of slice code streams is successfully transmitted according to the channel feedback, the encoding end optimal reference frame management module 402 is specifically configured to: receiving a first mark, wherein the first mark is used for representing whether the first slice code stream is successfully transmitted or not; and when the first mark represents that the first slice code stream is successfully transmitted, replacing the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice.
Specifically, the first slice code stream output by the slice encoder 401 is sent through a wireless transmitter, and the wireless transmitter feeds back a first flag to the encoding-end optimal reference frame management module 402, where the first flag characterizes whether the first slice code stream is transmitted successfully. When the first flag indicates that the first slice code stream is transmitted successfully, the encoding-end optimal reference frame management module 402 replaces the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice; when the first flag indicates that transmission of the first slice code stream failed, no replacement is performed.
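The flag-driven replacement above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the class name `OptimalReferenceFrame` and the method `on_feedback` are assumptions.

```python
class OptimalReferenceFrame:
    """Hypothetical per-slice buffer standing in for the optimal reference frame."""

    def __init__(self, num_slices, slice_height, width):
        # One buffer per slice position; starts as all-zero placeholder data.
        self.slices = [bytearray(slice_height * width) for _ in range(num_slices)]

    def on_feedback(self, slice_index, sent_ok, reconstructed_slice):
        """Apply channel feedback (the first flag) for one slice position."""
        if sent_ok:
            # Successful transmission: refresh this slice position with the
            # encoder's reconstructed data.
            self.slices[slice_index] = reconstructed_slice
        # On failure, no replacement is performed, so the next frame still
        # references the last successfully transmitted data at this position.
```

With this rule, a failed slice simply leaves the previous successfully transmitted content in place at its position.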
Optionally, the encoding-end optimal reference frame management module 402 is further configured to: the reconstructed data of the first slice output by the slice encoder 401 is received.
In this video encoder, the encoding-end optimal reference frame management module replaces, according to the channel feedback, the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the successfully transmitted first slice, and the replaced optimal reference frame serves as the reference frame for encoding the next video frame. That is, data whose transmission failed is not added to the optimal reference frame and therefore does not affect decoding of the next video frame. The video encoder adjusts the reference relationship according to the optimal reference frame, which avoids stuttering across multiple consecutive frames caused by data loss and improves the performance of wireless screen projection.
Referring to fig. 5, fig. 5 is a schematic diagram of another video encoder according to an embodiment of the present application. As shown in fig. 5, the video encoder includes a slice encoder 501, an encoding-end optimal reference frame management module 502 based on channel feedback, and a frame type control module 503. In the video encoder shown in fig. 5, the slice encoder 501 is configured to divide a first video frame in video data into a plurality of slices, and to encode the plurality of slices respectively to obtain a plurality of slice code streams.
The encoding-end optimal reference frame management module 502 is configured to, when channel feedback indicates that a first slice code stream of the plurality of slice code streams is transmitted successfully, replace the data corresponding to the first slice position in an optimal reference frame with the reconstructed data of the first slice, where the replaced optimal reference frame serves as the reference frame for encoding the next video frame.
The frame type control module 503 is configured to set the 1st frame in the video data as an I frame, and to set the video frames after the 1st frame as P frames when the plurality of slice code streams corresponding to the 1st frame are transmitted successfully. The frame type control module 503 is further configured to, when transmission of the plurality of slice code streams corresponding to the 1st frame fails, set the video frames after the 1st frame as I frames until the plurality of slice code streams corresponding to an nth frame are transmitted successfully, and then set the video frames after the nth frame as P frames, where n is an integer greater than 1.
Specifically, the slice encoder 501 encodes video data from the DSS or GPU. During encoding, each frame is divided into a plurality of slices that are encoded independently, and the encoded code stream is transmitted by a wireless transmitter. The wireless transmitter feeds back a first flag to the encoding-end optimal reference frame management module 502 and the frame type control module 503, where the first flag characterizes whether the first slice code stream is transmitted successfully. When the first flag indicates that the first slice code stream is transmitted successfully, the encoding-end optimal reference frame management module 502 replaces the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice; when the first flag indicates that transmission of the first slice code stream failed, no replacement is performed. The encoding-end optimal reference frame management module 502 outputs the optimal reference frame to the slice encoder 501, and the slice encoder 501 adjusts the reference relationship according to the optimal reference frame when encoding the next video frame. The frame type control module 503 is responsible for frame type control, setting each video frame as an I frame or a P frame.
In this video encoder, the encoding-end optimal reference frame management module replaces, according to the channel feedback, the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the successfully transmitted first slice, and the replaced optimal reference frame serves as the reference frame for encoding the next video frame. That is, data whose transmission failed is not added to the optimal reference frame and therefore does not affect decoding of the next video frame. The video encoder adjusts the reference relationship according to the optimal reference frame, which avoids stuttering across multiple consecutive frames caused by data loss and improves the performance of wireless screen projection. In addition, the frame type control module improves the original GOP structure, in which one I frame followed by 29 P frames forms one cycle, into a GOP structure in which all frames after the first I frame are P frames, reducing the number of I frames. Because I frames have a higher bit rate, reducing their number lowers the probability of stuttering and further improves the performance of wireless screen projection.
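The frame type control described above can be sketched as follows, assuming one boolean per frame indicating whether all of that frame's slice code streams were transmitted successfully; the function name is hypothetical.

```python
def frame_types(per_frame_success):
    """Map per-frame transmission results to 'I'/'P' decisions.

    per_frame_success[k] is True when all slice code streams of frame k+1
    were transmitted successfully. Frame 1 is always an I frame; frames
    stay I until one frame's streams all succeed, after which every
    subsequent frame is a P frame.
    """
    types = []
    locked_to_p = False
    for success in per_frame_success:
        types.append('P' if locked_to_p else 'I')
        if not locked_to_p and success:
            locked_to_p = True  # from the next frame on, all P frames
    return types
```

For example, if frame 1 fails twice before frame 3 succeeds, the sequence is I, I, I, then P frames from frame 4 onward.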
Referring to fig. 6, fig. 6 is a schematic diagram of a video decoder according to an embodiment of the present application. As shown in fig. 6, the video decoder includes a slice decoder 601, a decoding-end optimal reference frame management module 602, and a multiplexer (MUX) module 603.
In the video decoder shown in fig. 6, the slice decoder 601 is configured to decode the acquired first slice code stream to obtain the reconstructed data of the first slice.
The decoding-end optimal reference frame management module 602 is configured to replace the data corresponding to the first slice position in an optimal reference frame with the reconstructed data of the first slice, where the replaced optimal reference frame serves as the reference frame for decoding the next video frame.
The MUX module 603 is configured to output the reconstructed data of the first slice, and to output the data corresponding to a second slice position in the optimal reference frame when a second slice code stream fails to be received.
Optionally, the slice decoder 601 is further configured to: the replaced optimal reference frame output by the decoding-end optimal reference frame management module 602 is received.
Optionally, the decoding-end optimal reference frame management module 602 is further configured to: the reconstructed data of the first slice output from the slice decoder 601 is received.
Optionally, in terms of outputting the reconstructed data of the first slice, the MUX module 603 is specifically configured to: output the reconstructed data of the first slice when a second flag is received, where the second flag characterizes that the first slice code stream is received successfully.
Optionally, in terms of outputting the data corresponding to the second slice position in the optimal reference frame when the second slice code stream fails to be received, the MUX module 603 is specifically configured to: output the data corresponding to the second slice position in the optimal reference frame when a third flag is received, where the third flag characterizes that the second slice code stream fails to be received.
Specifically, the slice decoder 601 is responsible for decoding slice code streams that are transmitted successfully; the first slice code stream is taken as an example for illustration. The slice decoder 601 decodes the first slice code stream and outputs the reconstructed data of the first slice. The decoding-end optimal reference frame management module 602 is responsible for maintaining the optimal reference frame: it replaces the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice output by the slice decoder 601, and provides the optimal reference frame to the slice decoder 601. The MUX module 603 is responsible for outputting the reconstructed data of the first slice output by the slice decoder 601, and for outputting the data corresponding to the second slice position in the optimal reference frame when the second slice code stream fails to be received.
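The decoding-end selection can be sketched as follows. This is a minimal sketch under assumed names (`mux_output`, a plain list standing in for the optimal reference frame), not the patent's implementation.

```python
def mux_output(received_ok, decoded_slice, optimal_reference, slice_index):
    """Select the MUX output for one slice position.

    received_ok corresponds to the second flag (success) vs. the third
    flag (failure) described above.
    """
    if received_ok:
        # Second flag: output the reconstructed data and refresh the
        # optimal reference frame at this slice position.
        optimal_reference[slice_index] = decoded_slice
        return decoded_slice
    # Third flag: conceal the loss with the co-located data already held
    # in the optimal reference frame.
    return optimal_reference[slice_index]
```

A lost slice is thus displayed using the most recent successfully received content at the same position, rather than stalling the whole frame.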
In this video decoder, the decoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the successfully received first slice, and the replaced optimal reference frame serves as the reference frame for decoding the next video frame. That is, data whose transmission failed is not added to the optimal reference frame and therefore does not affect decoding of the next video frame. The video decoder adjusts the reference relationship according to the optimal reference frame, which avoids stuttering across multiple consecutive frames caused by data loss and improves the performance of wireless screen projection.
In the above embodiment, the slice encoder in the video encoder is configured to divide a video frame in the video data into a plurality of slices and then encode the slices respectively to obtain a plurality of slice code streams. In the dividing process, a video frame is uniformly divided into M slices, where M may take a value of, for example, 1 to 8.
Referring to fig. 7, fig. 7 is a schematic diagram of slice structure division according to an embodiment of the present application. As shown in fig. 7, M is 4, and one 3840x2560 frame may be divided into four 3840x640 slices, namely slice0, slice1, slice2 and slice3. Fig. 7 is merely an example and does not limit the specific division of video frames.
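A minimal sketch of this uniform division, assuming the frame height is an exact multiple of M (as in the 3840x2560, M = 4 example of fig. 7); the function name is illustrative.

```python
def slice_rows(height, m):
    """Return the (top_row, bottom_row) bounds of each of the M slices."""
    assert height % m == 0, "uniform division assumed"
    h = height // m  # rows per slice, e.g. 2560 // 4 = 640
    return [(i * h, (i + 1) * h) for i in range(m)]
```

For a 3840x2560 frame and M = 4, this yields row ranges 0-640, 640-1280, 1280-1920 and 1920-2560, matching slice0 through slice3.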
Referring to fig. 8, fig. 8 is a schematic flowchart of video encoding and decoding according to an embodiment of the present application. The video encoding and decoding process provided by the embodiment of the present application is as follows:
1. The sender performs slice-level timing. For example, at a frame rate of 60 Hz, each frame occupies 16.6 ms and each slice occupies 4.15 ms.
2. The video encoder completes the encoding of one slice within each timing slot: the 0th slice of frame 0 (F0-S0) is encoded at timing 0, the 1st slice of frame 0 (F0-S1) at timing 1, the 2nd slice of frame 0 (F0-S2) at timing 2, and the 3rd slice of frame 0 (F0-S3) at timing 3. Subsequent frames are processed in the same manner as frame 0.
3. The wireless transmitter transmits the compressed data encoded by the video encoder at the next timing. If the data is transmitted successfully within that timing, a transmission-success flag is fed back to the transmitting end; if transmission is not completed within that timing, a transmission-failure flag is fed back. That is, the 0th slice of frame 0 is transmitted at timing 1, the 1st slice at timing 2, the 2nd slice at timing 3, and the 3rd slice at timing 4. Subsequent frames are processed in the same manner as frame 0.
4. The transmitting end completes maintenance of the optimal reference frame, according to the feedback of the wireless transmitter, at the timing following the wireless transmitter's transmission. That is, the reconstructed data of the 0th slice of frame 0 is refreshed into the optimal reference frame at timing 2, the 1st slice at timing 3, the 2nd slice at timing 4, and the 3rd slice at timing 5. Subsequent frames are processed in the same manner as frame 0. As shown in fig. 8, if transmission of the code streams of the 1st slice of frame 1 (F1-S1) and the 2nd slice of frame 2 (F2-S2) fails, the reconstructed data of F1-S1 and F2-S2 are not updated into the optimal reference frame.
5. The video encoder references the data of frame 0 when encoding frame 1. For example, when encoding the 0th slice of frame 1, the video encoder references the data of the 0th and 1st slices in the optimal reference frame; when encoding the 1st slice of frame 1, it references the data of the 0th, 1st and 2nd slices; when encoding the 2nd slice of frame 1, it references the data of the 1st, 2nd and 3rd slices; and when encoding the 3rd slice of frame 1, it references the data of the 2nd and 3rd slices. The subsequent frames are all P frames and are processed in the same manner as frame 1.
6. The wireless receiver sends each successfully received slice directly to the video decoder for decoding, and feeds back a failure flag to the video decoder for each slice whose reception failed.
7. For a successfully received slice, the video decoder decodes it, outputs the video data for display, and refreshes the reconstructed data of the slice into the optimal reference frame. For a slice whose reception failed, the video decoder outputs the data corresponding to that slice position in the optimal reference frame.
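The reference relationship in step 5 above can be sketched as a window of slices clamped to the frame boundary: slice i references slices i-1 through i+1 of the optimal reference frame, so the first and last slices reference only two slices each. The function name is illustrative.

```python
def reference_slices(i, num_slices):
    """Indices of optimal-reference-frame slices referenced when encoding slice i."""
    # Window [i-1, i+1], clamped to the valid slice range [0, num_slices-1].
    return list(range(max(0, i - 1), min(num_slices, i + 2)))
```

With four slices, this reproduces the example above: slice 0 references slices 0-1, slice 1 references slices 0-2, slice 2 references slices 1-3, and slice 3 references slices 2-3.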
In the above flow, the video encoder adjusts the reference relationship according to the optimal reference frame, the video encoder and the video decoder maintain the same optimal reference frame, and the reference frames used in encoding and decoding both point to the optimal reference frame rather than to a specific frame. This avoids stuttering across multiple consecutive frames caused by data loss and improves the performance of wireless screen projection.
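The synchronization property stated above — both ends applying the same per-slice update rule so that their optimal reference frames stay identical — can be illustrated with a small simulation of the fig. 8 scenario, in which the 1st slice of frame 1 (F1-S1) is lost. All names and the string-valued "data" are hypothetical.

```python
def update_reference(ref, slice_index, ok, reconstructed):
    """Shared update rule: refresh the slice position only on success."""
    if ok:
        ref[slice_index] = reconstructed
    return ref

# Both ends start from the same frame-0 content.
encoder_ref = ['F0-S0', 'F0-S1', 'F0-S2', 'F0-S3']
decoder_ref = list(encoder_ref)

flags = [True, False, True, True]            # per-slice feedback for frame 1
recon = ['F1-S0', 'F1-S1', 'F1-S2', 'F1-S3']  # reconstructed slices of frame 1

for i in range(4):
    update_reference(encoder_ref, i, flags[i], recon[i])
    update_reference(decoder_ref, i, flags[i], recon[i])

# The lost slice F1-S1 is skipped on both sides, so the two optimal
# reference frames remain identical.
assert encoder_ref == decoder_ref == ['F1-S0', 'F0-S1', 'F1-S2', 'F1-S3']
```

Because both ends observe the same success/failure outcome per slice, no explicit resynchronization message is needed.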
Referring to fig. 9, fig. 9 is a flowchart of a video encoding method according to an embodiment of the present application. The method is applied to a video encoder, the video encoder comprises a slice encoder and an encoding end optimal reference frame management module based on channel feedback, and the method comprises the following steps:
S901, the slice encoder divides a first video frame in video data into a plurality of slices.
S902, the slice encoder encodes the plurality of slices respectively to obtain a plurality of slice code streams.
S903, according to channel feedback, when a first slice code stream of the plurality of slice code streams is transmitted successfully, the encoding-end optimal reference frame management module replaces the data corresponding to the first slice position in an optimal reference frame with the reconstructed data of the first slice, where the replaced optimal reference frame serves as the reference frame for encoding the next video frame.
Optionally, the method further comprises: and the slice encoder receives the replaced optimal reference frame output by the optimal reference frame management module at the encoding end.
Optionally, when the first slice code stream of the plurality of slice code streams is transmitted successfully according to the channel feedback, the replacing, by the encoding-end optimal reference frame management module, the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice includes: receiving, by the encoding-end optimal reference frame management module, a first flag, where the first flag characterizes whether the first slice code stream is transmitted successfully; and when the first flag indicates that the first slice code stream is transmitted successfully, replacing, by the encoding-end optimal reference frame management module, the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice.
Optionally, the method further comprises: and the optimal reference frame management module of the coding end receives the reconstruction data of the first slice output by the slice encoder.
Optionally, the video encoder further includes a frame type control module, and the method further includes: the frame type control module sets a 1 st frame in the video data as an I frame; and when the plurality of slice code streams corresponding to the 1 st frame are successfully transmitted, the frame type control module sets the video frames after the 1 st frame as P frames.
Optionally, the method further comprises: and when the transmission of the plurality of slice code streams corresponding to the 1 st frame fails, the frame type control module sets the video frames after the 1 st frame as I frames until the transmission of the plurality of slice code streams corresponding to the n th frame succeeds, and sets the video frames after the n th frame as P frames, wherein n is an integer greater than 1.
In this method, the encoding-end optimal reference frame management module replaces, according to the channel feedback, the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the successfully transmitted first slice, and the replaced optimal reference frame serves as the reference frame for encoding the next video frame. That is, data whose transmission failed is not added to the optimal reference frame and therefore does not affect decoding of the next video frame. The video encoder adjusts the reference relationship according to the optimal reference frame, which avoids stuttering across multiple consecutive frames caused by data loss and improves the performance of wireless screen projection.
Referring to fig. 10, fig. 10 is a flowchart of a video decoding method according to an embodiment of the present application. The method is applied to a video decoder, the video decoder comprises a slice decoder, a decoding-end optimal reference frame management module and a multiplexer (MUX) module, and the method comprises the following steps:
S1001, the slice decoder decodes the acquired first slice code stream to obtain the reconstructed data of the first slice.
S1002, the optimal reference frame management module at the decoding end replaces the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice, wherein the replaced optimal reference frame is used as a reference frame for decoding a next video frame.
S1003, the MUX module outputs the reconstruction data of the first slice; and when the second slice code stream fails to be received, the MUX module outputs data corresponding to the second slice position in the optimal reference frame.
Optionally, the method further comprises: and the slice decoder receives the replaced optimal reference frame output by the optimal reference frame management module at the decoding end.
Optionally, the method further comprises: and the decoding end optimal reference frame management module receives the reconstruction data of the first slice output by the slice decoder.
Optionally, the outputting, by the MUX module, the reconstructed data of the first slice includes: outputting, by the MUX module, the reconstructed data of the first slice when a second flag is received, where the second flag characterizes that the first slice code stream is received successfully.
Optionally, the outputting, by the MUX module, the data corresponding to the second slice position in the optimal reference frame when the second slice code stream fails to be received includes: outputting, by the MUX module, the data corresponding to the second slice position in the optimal reference frame when a third flag is received, where the third flag characterizes that the second slice code stream fails to be received.
In this method, the decoding-end optimal reference frame management module replaces the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the successfully received first slice, and the replaced optimal reference frame serves as the reference frame for decoding the next video frame. That is, data whose transmission failed is not added to the optimal reference frame and therefore does not affect decoding of the next video frame. The video decoder adjusts the reference relationship according to the optimal reference frame, which avoids stuttering across multiple consecutive frames caused by data loss and improves the performance of wireless screen projection.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (22)

  1. A video encoder is characterized in that the video encoder comprises a slice encoder and an encoding end optimal reference frame management module based on channel feedback, wherein,
    the slice encoder is configured to divide a first video frame in video data into a plurality of slices, and to encode the plurality of slices respectively to obtain a plurality of slice code streams;
    and the optimal reference frame management module at the encoding end is used for replacing the data corresponding to the first slice position in the optimal reference frame with the reconstruction data of the first slice when the first slice code stream in the plurality of slice code streams is successfully transmitted according to channel feedback, wherein the replaced optimal reference frame is used as a reference frame for encoding a next video frame.
  2. The video encoder of claim 1, wherein the slice encoder is further configured to:
    And receiving the replaced optimal reference frame output by the optimal reference frame management module of the coding end.
  3. The video encoder according to claim 1 or 2, wherein, in terms of replacing the data corresponding to the first slice position in an optimal reference frame with the reconstructed data of the first slice when a first slice code stream of the plurality of slice code streams is transmitted successfully according to channel feedback, the encoding-end optimal reference frame management module is specifically configured to:
    receive a first flag, wherein the first flag is used for characterizing whether the first slice code stream is transmitted successfully;
    and when the first flag indicates that the first slice code stream is transmitted successfully, replace the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice.
  4. A video encoder as defined in any one of claims 1-3 wherein the encoding-side optimal reference frame management module is further configured to:
    and receiving the reconstruction data of the first slice output by the slice encoder.
  5. The video encoder of any of claims 1-4, further comprising a frame type control module configured to:
    Setting a 1 st frame in the video data as an I frame;
    and when the plurality of slice code streams corresponding to the 1 st frame are successfully transmitted, setting the video frames after the 1 st frame as P frames.
  6. The video encoder of claim 5, wherein the frame type control module is further configured to:
    and when the transmission of the plurality of slice code streams corresponding to the 1 st frame fails, setting the video frames after the 1 st frame as I frames until the transmission of the plurality of slice code streams corresponding to the n th frame succeeds, and setting the video frames after the n th frame as P frames, wherein n is an integer greater than 1.
  7. A video decoder, characterized in that the video decoder comprises a slice decoder, a decoding-end optimal reference frame management module and a multiplexer (MUX) module, wherein,
    the slice decoder is configured to decode the acquired first slice code stream to obtain reconstructed data of the first slice;
    the optimal reference frame management module at the decoding end is configured to replace the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice, where the replaced optimal reference frame is used as a reference frame for decoding a next video frame;
    The MUX module is used for outputting the reconstruction data of the first slice; and the data corresponding to the second slice position in the optimal reference frame is output when the second slice code stream fails to be received.
  8. The video decoder of claim 7, wherein the slice decoder is further configured to:
    and receiving the replaced optimal reference frame output by the optimal reference frame management module at the decoding end.
  9. The video decoder of claim 7 or 8, characterized in that the decoding-side optimal reference frame management module is further configured to:
    and receiving the reconstruction data of the first slice output by the slice decoder.
  10. The video decoder according to any of claims 7-9, characterized in that in outputting the reconstructed data of the first slice, the MUX module is specifically configured to:
    and when a second flag is received, outputting the reconstructed data of the first slice, wherein the second flag is used for characterizing that the first slice code stream is received successfully.
  11. The video decoder according to any of claims 7-10, wherein the MUX module is configured to, when the second slice stream fails to receive, output data corresponding to the second slice position in the optimal reference frame:
    and when a third flag is received, outputting the data corresponding to the second slice position in the optimal reference frame, wherein the third flag is used for characterizing that the second slice code stream fails to be received.
  12. A video encoding method, wherein the method is applied to a video encoder, the video encoder comprises a slice encoder and an encoding end optimal reference frame management module based on channel feedback, and the method comprises:
    the slice encoder divides a first video frame in video data into a plurality of slices;
    the slice encoder encodes the slices respectively to obtain a plurality of slice code streams;
    according to channel feedback, when the transmission of a first slice code stream in the plurality of slice code streams is successful, the optimal reference frame management module of the coding end replaces the data corresponding to the first slice position in the optimal reference frame with the reconstructed data of the first slice, wherein the replaced optimal reference frame is used as a reference frame for coding a next video frame.
  13. The method according to claim 12, wherein the method further comprises:
    and the slice encoder receives the replaced optimal reference frame output by the optimal reference frame management module at the encoding end.
  14. The method according to claim 12 or 13, wherein the encoder-side optimal reference frame management module replacing, according to the channel feedback, the data corresponding to the position of the first slice in the optimal reference frame with the reconstructed data of the first slice when transmission of the first slice bitstream of the plurality of slice bitstreams is successful comprises:
    the encoder-side optimal reference frame management module receives a first flag, wherein the first flag indicates whether the first slice bitstream has been successfully transmitted;
    when the first flag indicates that the first slice bitstream has been successfully transmitted, the encoder-side optimal reference frame management module replaces the data corresponding to the position of the first slice in the optimal reference frame with the reconstructed data of the first slice.
  15. The method according to any one of claims 12-14, further comprising:
    the encoder-side optimal reference frame management module receives the reconstructed data of the first slice output by the slice encoder.
  16. The method according to any one of claims 12-15, wherein the video encoder further comprises a frame type control module, and the method further comprises:
    the frame type control module sets the 1st frame in the video data as an I frame;
    when the plurality of slice bitstreams corresponding to the 1st frame are successfully transmitted, the frame type control module sets the video frames following the 1st frame as P frames.
  17. The method according to claim 16, further comprising:
    when transmission of the plurality of slice bitstreams corresponding to the 1st frame fails, the frame type control module sets the video frames following the 1st frame as I frames until the plurality of slice bitstreams corresponding to an nth frame are successfully transmitted, and then sets the video frames following the nth frame as P frames, where n is an integer greater than 1.
  18. A video decoding method, applied to a video decoder comprising a slice decoder, a decoder-side optimal reference frame management module, and a multiplexer (MUX) module, the method comprising:
    the slice decoder decodes an acquired first slice bitstream to obtain reconstructed data of the first slice;
    the decoder-side optimal reference frame management module replaces the data corresponding to the position of the first slice in the optimal reference frame with the reconstructed data of the first slice, wherein the replaced optimal reference frame serves as a reference frame for decoding a next video frame;
    the MUX module outputs the reconstructed data of the first slice, and when reception of a second slice bitstream fails, the MUX module outputs the data corresponding to the position of the second slice in the optimal reference frame.
  19. The method according to claim 18, further comprising:
    the slice decoder receives the replaced optimal reference frame output by the decoder-side optimal reference frame management module.
  20. The method according to claim 18 or 19, further comprising:
    the decoder-side optimal reference frame management module receives the reconstructed data of the first slice output by the slice decoder.
  21. The method according to any one of claims 18-20, wherein the MUX module outputting the reconstructed data of the first slice comprises:
    when a second flag is received, the MUX module outputs the reconstructed data of the first slice, wherein the second flag indicates that the first slice bitstream has been successfully received.
  22. The method according to any one of claims 18-21, wherein the MUX module outputting the data corresponding to the position of the second slice in the optimal reference frame when reception of the second slice bitstream fails comprises:
    when a third flag is received, the MUX module outputs the data corresponding to the position of the second slice in the optimal reference frame, wherein the third flag indicates that reception of the second slice bitstream has failed.
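The slice-level recovery scheme in the claims above can be illustrated with a small sketch. Everything below is a hypothetical illustration, not the patented implementation: a frame is modeled as a flat list of samples, and the names `OptimalReferenceFrameManager`, `mux_output`, and `frame_type` are invented for this example.

```python
# Hypothetical sketch of the claimed slice-level reference management.
# A "frame" is a flat list of samples; all names here are illustrative.

class OptimalReferenceFrameManager:
    """Keeps the optimal reference frame and patches it slice by slice."""

    def __init__(self, num_slices, slice_len):
        self.slice_len = slice_len
        # Start from an all-zero reference (stand-in for the first I frame).
        self.ref = [0] * (num_slices * slice_len)

    def on_feedback(self, slice_idx, transmitted_ok, reconstructed):
        # First flag (claim 14): only a slice whose bitstream the channel
        # acknowledges overwrites the co-located region of the reference.
        if transmitted_ok:
            start = slice_idx * self.slice_len
            self.ref[start:start + self.slice_len] = reconstructed
        # The (possibly) replaced frame is the reference for the next frame.
        return self.ref


def mux_output(received_ok, reconstructed, ref_mgr, slice_idx):
    # Decoder-side MUX (claims 21-22): emit the decoded slice when the
    # second flag signals successful reception; on the third flag (loss),
    # conceal with the co-located data of the optimal reference frame.
    if received_ok:
        return reconstructed
    start = slice_idx * ref_mgr.slice_len
    return ref_mgr.ref[start:start + ref_mgr.slice_len]


def frame_type(frame_idx, all_slices_ok_history):
    # Frame-type control (claims 16-17): the 1st frame is an I frame, and
    # frames stay I frames until every slice bitstream of some frame n has
    # been delivered, after which subsequent frames are coded as P frames.
    if frame_idx == 0:
        return "I"
    return "P" if any(all_slices_ok_history[:frame_idx]) else "I"
```

Because the encoder and decoder apply the same acknowledged-slice updates, their optimal reference frames stay in lockstep across losses, which is what allows a lost slice to be concealed from the reference frame without forcing a full I-frame refresh.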
CN202180095330.1A 2021-03-31 2021-03-31 Video encoder, video decoder and corresponding methods Pending CN116965026A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/084394 WO2022205064A1 (en) 2021-03-31 2021-03-31 Video encoder, video decoder and corresponding method

Publications (1)

Publication Number Publication Date
CN116965026A true CN116965026A (en) 2023-10-27

Family

ID=83455459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180095330.1A Pending CN116965026A (en) 2021-03-31 2021-03-31 Video encoder, video decoder and corresponding methods

Country Status (2)

Country Link
CN (1) CN116965026A (en)
WO (1) WO2022205064A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3068002B2 (en) * 1995-09-18 2000-07-24 沖電気工業株式会社 Image encoding device, image decoding device, and image transmission system
US8494049B2 (en) * 2007-04-09 2013-07-23 Cisco Technology, Inc. Long term reference frame management with error video feedback for compressed video communication
US10567756B2 (en) * 2018-05-31 2020-02-18 Agora Lab, Inc. Slice level reference picture reconstruction
US10567757B2 (en) * 2018-05-31 2020-02-18 Agora Lab, Inc. Dynamic reference picture reconstruction

Also Published As

Publication number Publication date
WO2022205064A1 (en) 2022-10-06

Similar Documents

Publication Publication Date Title
US10728594B2 (en) Method and apparatus for transmitting data of mobile terminal
TWI623225B (en) Video playback method and control terminal thereof
US8929297B2 (en) System and method of transmitting content from a mobile device to a wireless display
US8670437B2 (en) Methods and apparatus for service acquisition
CN101529907B (en) Device and method for reducing channel-change time
US20200213625A1 (en) Recovery From Packet Loss During Transmission Of Compressed Video Streams
US10771821B2 (en) Overcoming lost IP packets in streaming video in IP networks
US20150101003A1 (en) Data transmission apparatus, system and method
US20130263201A1 (en) Data transmission apparatus, system and method
JP2014531878A (en) Network streaming of media data
CN111372145B (en) Viewpoint switching method and system for multi-viewpoint video
CN110740380A (en) Video processing method and device, storage medium and electronic device
KR20100136999A (en) Staggercasting with temporal scalability
JP2012165380A (en) Fast channel change companion stream solution with bandwidth optimization
KR20150086110A (en) Apparatus and Method for Transmitting Encoded Video Stream
US10298975B2 (en) Communication apparatus, communication data generation method, and communication data processing method
US20140321556A1 (en) Reducing amount of data in video encoding
KR20140028059A (en) On-demand intra-refresh for end-to-end coded video transmission systems
CN116965026A (en) Video encoder, video decoder and corresponding methods
WO2009007508A1 (en) Method and apparatus for improving mobile broadcast quality
CN110798713A (en) Time-shifted television on-demand method, terminal, server and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination