CN110636333A - Frame loss processing method and device - Google Patents

Frame loss processing method and device

Info

Publication number
CN110636333A
CN110636333A (application CN201910764122.0A)
Authority
CN
China
Prior art keywords
frame
loss
frame loss
original image
receiving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910764122.0A
Other languages
Chinese (zh)
Inventor
唐春平
范志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd
Priority to CN201910764122.0A
Publication of CN110636333A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 - Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 - Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N 21/64784 - Data processing by the network
    • H04N 21/64792 - Controlling the complexity of the content stream, e.g. by dropping packets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a frame loss processing method and device, relating to the field of computer images. The method comprises: receiving encoded data, decoding the encoded data to obtain at least one frame of original image, and storing the at least one frame of original image; acquiring a frame loss notification message, wherein the frame loss notification message carries identification information indicating that a frame loss has occurred; acquiring the identifier of the frame-loss receiving device corresponding to the frame loss notification message; and, according to the identifier of the frame-loss receiving device, encoding the stored last frame of original image as a reference frame and sending it to the corresponding frame-loss device. The method and device can solve the following problems of an existing receiving end when a frame loss occurs: if the lost image frame is retransmitted, real-time image playback at the receiving end is delayed; and if the reference frame is switched immediately, the amount of concurrent data on the VGR side increases instantaneously.

Description

Frame loss processing method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a frame loss processing method and apparatus.
Background
One-to-many transmission of real-time pictures refers to distributing the same real-time picture source to a plurality of connected receiving terminals. In this scenario, a sending end (S end) encodes the captured images, an image router (VGR) distributes the encoded images, and each receiving end (R end) decodes and displays the encoded images it receives. The basic architecture is shown in Fig. 1. Three R terminals (R1, R2, and R3) are shown schematically in Fig. 1; those skilled in the art will appreciate that one or more R terminals may be provided in practice.
In the above framework, when the S end performs inter-frame prediction coding on the images, non-reference frames must be coded with reference to the content of a reference frame. In this case, once the receiving end loses a frame, subsequent non-reference frames may fail to decode normally, causing screen corruption or frame skipping. If the lost image frame is retransmitted, real-time image playback at the receiving end is delayed. If the S end switches to an I frame immediately, the amount of concurrent data on the VGR side increases instantaneously; in particular, if the receiving end frequently loses frames when the network condition is poor, the S end will switch I frames continuously, which greatly affects the bandwidth on the VGR side.
Disclosure of Invention
The embodiments of the present disclosure provide a frame loss processing method and device, which can solve the following problems when a frame loss occurs at a conventional receiving end: if the lost image frame is retransmitted, real-time image playback at the receiving end is delayed; and if the S end switches to an I frame immediately, the amount of concurrent data on the VGR side increases instantaneously. The technical scheme is as follows:
According to a first aspect of the embodiments of the present disclosure, a frame loss processing method is provided, the method including:
receiving encoded data, decoding the encoded data to obtain at least one frame of original image, and storing the at least one frame of original image;
acquiring a frame loss notification message, wherein the frame loss notification message carries identification information indicating that a frame loss has occurred;
acquiring the identifier of the frame-loss receiving device corresponding to the frame loss notification message; and
according to the identifier of the frame-loss receiving device, encoding the stored last frame of original image as a reference frame and sending it to the corresponding frame-loss device.
In one embodiment, the frame loss notification message further carries the frame number of the lost frame, and the method further includes:
judging, according to the frame number of the lost frame, whether the lost frame is a reference frame;
wherein encoding the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and sending it to the corresponding frame-loss device includes:
if the lost frame is a reference frame, encoding the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and sending it to the corresponding frame-loss device.
In one embodiment, performing frame loss processing according to the at least one frame of original image includes:
acquiring the identifier of the frame-loss receiving device corresponding to the frame loss notification message; and
according to the identifier of the frame-loss receiving device, encoding the stored last frame of original image as a reference frame and sending it to the corresponding frame-loss device.
In one embodiment, the method further includes:
if the lost frame is a non-reference frame, sending the received encoded data to all receiving devices.
In one embodiment, decoding the encoded data to obtain at least one frame of original image and storing the at least one frame of original image includes:
decoding the encoded data to obtain YUV data of the at least one frame of original image, and storing the YUV data of the at least one frame of original image.
In one embodiment, judging, according to the frame number of the lost frame, whether the lost frame is a reference frame includes:
querying whether the frame number of the lost frame exists in a preset reference frame list, and if it exists, judging that the lost frame is a reference frame;
if it does not exist, judging that the lost frame is a non-reference frame; the preset reference frame list stores the frame numbers of all reference frames.
According to a second aspect of the embodiments of the present disclosure, there is provided a frame loss processing apparatus, including:
a receiving module, configured to receive encoded data, decode the encoded data to obtain at least one frame of original image, and store the at least one frame of original image;
a notification obtaining module, configured to obtain a frame loss notification message, wherein the frame loss notification message carries identification information indicating that a frame loss has occurred;
an obtaining module, configured to obtain the identifier of the frame-loss receiving device corresponding to the frame loss notification message; and
a first sending module, configured to encode the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and send it to the corresponding frame-loss device.
In one embodiment, the apparatus further includes a judging module, configured to judge, according to the frame number of the lost frame, whether the lost frame is a reference frame;
and the first sending module is specifically configured to, if the lost frame is a reference frame, encode the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and send it to the corresponding frame-loss device.
In one embodiment, the apparatus further includes:
a second sending module, configured to send the received encoded data to all receiving devices if the lost frame is a non-reference frame.
In one embodiment, the frame loss notification message further carries the frame number of the lost frame, and the apparatus further includes a judging module, configured to judge, according to the frame number of the lost frame, whether the lost frame is a reference frame;
and a processing module, specifically configured to perform frame loss processing according to the at least one frame of original image if the lost frame is a reference frame.
In one embodiment, the processing module includes:
an identifier obtaining submodule, configured to obtain the identifier of the frame-loss receiving device corresponding to the frame loss notification message;
and a sending submodule, configured to encode the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and send it to the corresponding frame-loss device.
In one embodiment, the receiving module is specifically configured to:
receive encoded data, decode the encoded data to obtain YUV data of the at least one frame of original image, and store the YUV data of the at least one frame of original image.
In one embodiment, the judging module is specifically configured to:
query whether the frame number of the lost frame exists in a preset reference frame list, and if it exists, judge that the lost frame is a reference frame;
if it does not exist, judge that the lost frame is a non-reference frame; the preset reference frame list stores the frame numbers of all reference frames.
The scheme is applied to a scenario of one-to-many transmission of real-time pictures, and the main technical scheme is as follows: the R end reports a frame loss notification message to the VGR, and the VGR performs frame loss processing according to the frame loss notification message, specifically by encoding the stored YUV data of the last frame image into an I frame and sending it to the R end where the frame loss occurred. In this way, the instantaneous bandwidth at the VGR does not increase sharply; the images at the frame-loss R end and at the other R ends can all be decoded normally, and the decoding and playback of subsequent images are not affected; and no additional processing overhead is imposed on the S end.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a scene architecture diagram used in the frame loss processing method provided in the embodiment of the present disclosure;
fig. 2 is a flowchart of a frame loss processing method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a frame loss processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic view of a usage scenario of a frame loss processing method according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a frame loss processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a frame loss processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a block diagram of a frame loss processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Inter-frame predictive coding is a coding method that achieves image compression by exploiting the correlation between video image frames, i.e. temporal correlation. In the present disclosure, all image frames can be roughly divided into reference frames (I frames) and non-reference frames (P frames). A reference frame (I frame) is an image frame that contains complete information and can be decoded without relying on other image frames; more generally, a reference frame is an image frame that serves as a reference for at least one other image frame, that is, when an image frame is a reference frame, at least one other image frame is encoded depending on it.
Obviously, the loss of a reference frame will cause at least one, possibly several, or even all of the subsequent frames to fail to decode properly.
In real-time video transmission, at least the following inter-frame coding methods are commonly used:
Method 1: except for the first frame, every frame in the video is encoded with reference to its previous frame, until the whole video has been encoded;
Method 2: the video is divided into groups of pictures (GOPs), each GOP being structured as IPPPPPPPPP, and all P frames are encoded with reference to the first I frame; this method is used in H.264 coding;
Method 3: the video is divided into several GOPs, each GOP being structured as IPPPPPPPPP, and each P frame is encoded with reference to its previous frame (which may be an I frame or a P frame);
Method 4: each image frame is compared with a predetermined number (e.g. 16) of previous image frames, and the most similar one is used as its reference frame.
Except for Method 2, most image frames under the other coding methods are reference frames, so frame loss processing can be carried out directly once a frame loss occurs during video transmission. Under Method 2, the number of non-reference frames is greater than the number of reference frames, and the loss of a non-reference frame does not affect the decoding of subsequent images, so during video transmission whether to perform frame loss processing can be decided according to the specific type of the lost frame.
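As an illustration of the difference between these coding methods, the following Python sketch (not part of the patent; the function name, the GOP length of 10, and the use of Python are assumptions made only for illustration) computes, for each frame, which frame it is encoded against under Methods 1 to 3. Method 4 is omitted because its reference choice depends on the actual image content.

```python
# Illustrative only: which frame each frame references under Methods 1-3.
# A return value of None means the frame is an I frame with no reference.

def reference_of(frame_idx: int, method: int, gop_size: int = 10):
    """Return the index of the frame that frame_idx is encoded against."""
    pos = frame_idx % gop_size           # position inside the current GOP
    if method == 1:                      # Method 1: every frame references the previous frame
        return None if frame_idx == 0 else frame_idx - 1
    if method == 2:                      # Method 2: IPPP..., all P frames reference the GOP's I frame
        return None if pos == 0 else frame_idx - pos
    if method == 3:                      # Method 3: IPPP..., each P frame references its previous frame
        return None if pos == 0 else frame_idx - 1
    raise ValueError("Method 4 picks the most similar of the previous frames, "
                     "which requires the actual image content")

if __name__ == "__main__":
    for method in (1, 2, 3):
        print(f"Method {method}:", [reference_of(i, method) for i in range(10)])
```

Under Method 2 only the I frame at the head of each GOP is ever referenced, so most frames are non-reference frames; under Methods 1 and 3 almost every frame is referenced by the frame that follows it, which is why the two situations are treated differently above.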
Based on the above analysis, the scheme is described in detail below through two specific examples.
Example one
The scheme of this embodiment may be applied to the first, third, and fourth coding methods above, but is not limited to them: it can be used for frame loss processing whenever the inter-frame prediction coding method produces more reference frames than non-reference frames.
Fig. 2 is a flowchart of a frame loss processing method provided in an embodiment of the present disclosure, and as shown in fig. 2, the frame loss processing method includes the following steps:
Step 101, receiving encoded data, decoding the encoded data to obtain at least one frame of original image, and storing the at least one frame of original image.
the original image frame refers to YUV data of the last frame image.
In the data transmission process, the VGR continuously receives the coded data from the S end, and in the process of receiving the coded data, the VGR continuously locally decodes the coded data to obtain YUV data of each decoded frame of image; and locally caching the latest one or more frames of YUV data.
102, obtaining a frame loss notification message, wherein the frame loss notification message carries identification information indicating that frame loss occurs;
the R end carries out frame loss monitoring, judges whether a frame loss event occurs or not, and sends a frame loss notification message when the frame loss event occurs;
specifically, the R-side detects the continuity of the frame numbers in the received image frame information, and when a discontinuous condition occurs, it is determined that a frame loss event occurs.
In practical application, each frame of image coding information has a certain rule on the frame number, and each frame of image coding information is sent in sequence, so that whether the frame number is continuous or not can be detected at a receiving end according to the setting rule of the frame number, and if the frame number is discontinuous, frame loss can be determined;
in addition, whether a frame loss event occurs can also be comprehensively judged according to Round-Trip Time (RTT) and the message sending interval Time.
Step 103, acquiring the identifier of the frame-loss receiving device corresponding to the frame loss notification message.
Specifically, the identifier of the frame-loss receiving device may be obtained through the channel between the VGR and each R end, or from a report message received from the frame-loss device.
Step 104, according to the identifier of the frame-loss receiving device, encoding the stored last frame of original image as a reference frame and sending it to the corresponding frame-loss device.
When any R terminal sends a frame loss notification message, the VGR takes out the YUV data of the current latest frame (the last frame cached locally), encodes it into an I frame, and sends the I frame to that R terminal. After receiving the I frame, the corresponding R end can decode and display it, and can decode the subsequent data on the basis of the decoded frame.
In other words, when the VGR receives a frame loss notification message, it encodes the locally cached YUV data of the last frame image into an I frame and sends it to the R terminal where the frame loss occurred, which ensures both normal decoding and display of the subsequent video at that R terminal and the real-time performance of the video transmission.
Further, after the frame loss processing is performed according to the at least one frame of original image, the method further includes: receiving the encoded data sent by the S end and forwarding it to each R end.
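The VGR-side behaviour of steps 101 to 104 can be sketched as follows. This is a minimal illustration only: the class name and the decoder, encoder, and sender helpers (and their method names) are assumptions, not APIs defined by the patent.

```python
# Illustrative sketch of the VGR-side handling in Example 1.

class VgrFrameLossHandler:
    def __init__(self, decoder, encoder, sender):
        self.decoder = decoder        # decoder.decode(data) -> YUV of one frame (assumed interface)
        self.encoder = encoder        # encoder.encode_i_frame(yuv) -> encoded I frame (assumed interface)
        self.sender = sender          # sender.send(device_id, data) (assumed interface)
        self.latest_yuv = None        # YUV data of the last decoded frame (step 101)

    def on_encoded_frame_from_s(self, encoded_frame, receiver_ids):
        # Normal path: decode locally to keep the cache fresh, then forward the
        # encoded data unchanged to every R end.
        self.latest_yuv = self.decoder.decode(encoded_frame)
        for device_id in receiver_ids:
            self.sender.send(device_id, encoded_frame)

    def on_frame_loss_notification(self, device_id):
        # Steps 102-104: on a frame loss notification from one R end, re-encode
        # the cached last frame as an I frame and send it only to that device,
        # so the S end and the other R ends are not involved at all.
        if self.latest_yuv is None:
            return
        i_frame = self.encoder.encode_i_frame(self.latest_yuv)
        self.sender.send(device_id, i_frame)
```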
Example two
This embodiment is applicable to the following application scenario: when the adopted inter-frame prediction coding mode produces fewer reference frames than non-reference frames, whether to perform frame loss processing can be decided according to the type of the lost frame, and this decision is made at the VGR.
The embodiment discloses a frame loss processing method, as shown in fig. 3, which specifically includes the following steps:
Step 201, receiving encoded data, decoding the encoded data to obtain at least one frame of original image, and storing the at least one frame of original image.
the VGR built-in decoder in all embodiments of the present disclosure is configured to decode the encoded data received from the S-side continuously to obtain decoded YUV data, and buffer the YUV data of the latest one or more frames of images. The VGR can also store a reference frame list of the latest frame or multi-frame image in the decoding process, and can directly judge whether the frame with the corresponding frame number is a reference frame according to the reference frame list.
Step 202, acquiring a frame loss notification message, wherein the frame loss notification message carries identification information indicating that a frame loss has occurred and the frame number of the lost frame.
Step 203, judging, according to the frame number of the lost frame, whether the lost frame is a reference frame.
For example, judging, according to the frame number of the lost frame, whether the lost frame is a reference frame includes:
querying whether the frame number of the lost frame exists in a preset reference frame list, and if it exists, judging that the lost frame is a reference frame;
if it does not exist, judging that the lost frame is a non-reference frame; the preset reference frame list stores the frame numbers of all reference frames.
Step 204, if the lost frame is a reference frame, according to the identifier of the frame-loss receiving device, encoding the stored last frame of original image as a reference frame and sending it to the corresponding frame-loss device.
If the lost frame is a non-reference frame, the VGR simply continues to receive the encoded data sent by the S end and forwards it to each R end.
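Combining the hypothetical cache sketched above with the decision of steps 203 and 204, the VGR-side logic of this embodiment might look as follows; this is again a sketch with assumed names, and the encoder and sender interfaces are the same assumed ones as in the Example 1 sketch.

```python
# Illustrative sketch of the Example 2 decision at the VGR.

def handle_frame_loss_notification(notification, cache, encoder, sender):
    """notification is assumed to be {"device_id": ..., "lost_frame_number": ...};
    cache provides is_reference_frame() and latest_yuv() as sketched above."""
    lost_number = notification["lost_frame_number"]
    if not cache.is_reference_frame(lost_number):
        # Non-reference frame: its loss does not break later decoding, so no
        # frame loss processing is performed and normal forwarding continues.
        return
    yuv = cache.latest_yuv()
    if yuv is None:
        return
    i_frame = encoder.encode_i_frame(yuv)              # re-encode the cached last frame as an I frame
    sender.send(notification["device_id"], i_frame)    # send only to the frame-loss device
```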
The scheme is described in detail below by way of a specific example.
A typical application scenario is shown in Fig. 4: the encoded images of one real-time picture source S are distributed through the VGR to three different receiving terminals R. Here, S is any device capable of capturing, encoding, and transmitting video images, and R may be any device capable of receiving, decoding, and displaying video image data.
Referring to Fig. 4, while the S end is encoding and transmitting images, a frame loss occurs at R3 due to the network speed (frame 3 is lost), or R3 is newly connected (reception starts from frame 4, which can be regarded as a special case of frame loss).
At this time, according to the technical scheme provided by the present disclosure, there are two processing methods:
the first method comprises the following steps: and the R3 reports a frame loss notification message to the VGR, and after receiving the frame loss notification message, the VGR encodes YUV data of the last frame of original image (namely, the YUV data of the decoded 4 th frame of image and possibly the YUV data of the 5 th frame of image) buffered currently into an I frame (reference frame) and sends the I frame (reference frame) to the R3.
And the second method comprises the following steps: when detecting a frame loss event, R3 reports a frame loss notification message to the VGR, where the frame loss notification message includes: the identification information and the frame number of the frame loss are used for indicating the occurrence of the frame loss event; the VGR searches a preset reference frame list locally according to the frame number of the lost frame; judging whether the corresponding frame is a reference frame according to the result of searching the preset reference frame list; if yes, finding YUV data of the last frame of original image locally, coding the YUV data into an I frame, and sending the I frame to R3; if not, no frame loss processing is carried out.
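As a hypothetical walk-through of the second method for the Fig. 4 scenario, the snippet below reuses the DecodedFrameCache and handle_frame_loss_notification sketches introduced earlier (it assumes they are placed in the same module); the dummy encoder and sender merely stand in for real components.

```python
if __name__ == "__main__":
    class DummyEncoder:
        def encode_i_frame(self, yuv):
            return b"I-frame(" + yuv + b")"

    class DummySender:
        def send(self, device_id, data):
            print(f"send to {device_id}: {data!r}")

    cache = DecodedFrameCache()
    # Frames 1-4 have been decoded at the VGR; frame 3 is assumed to be a reference frame.
    for number, is_ref in [(1, True), (2, False), (3, True), (4, False)]:
        cache.add(number, f"yuv-frame-{number}".encode(), is_ref)

    # R3 reports that frame 3 was lost; since frame 3 is in the reference frame
    # list, the cached 4th frame is re-encoded as an I frame and sent to R3 only.
    handle_frame_loss_notification(
        {"device_id": "R3", "lost_frame_number": 3},
        cache, DummyEncoder(), DummySender(),
    )
```

Running this would print a single send to R3 containing the 4th frame re-encoded as an I frame, while the other R terminals receive nothing extra.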
Fig. 5 is a structural diagram of a frame loss processing apparatus according to an embodiment of the present disclosure. As shown in Fig. 5, the frame loss processing apparatus 50 includes a receiving module 501, a notification obtaining module 502, an obtaining module 503, and a first sending module 504. The receiving module 501 is configured to receive encoded data, decode the encoded data to obtain at least one frame of original image, and store the at least one frame of original image; the notification obtaining module 502 is configured to obtain a frame loss notification message, where the frame loss notification message carries identification information indicating that a frame loss has occurred; the obtaining module 503 is configured to obtain the identifier of the frame-loss receiving device corresponding to the frame loss notification message; and the first sending module 504 is configured to encode the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and send it to the corresponding frame-loss device.
In one embodiment, the receiving module 501 is specifically configured to:
receiving coded data, decoding the coded data to obtain YUV data of the at least one frame of original image, and storing the YUV data of the at least one frame of original image.
Fig. 6 is a structural diagram of a frame loss processing apparatus according to an embodiment of the present disclosure. As shown in Fig. 6, the frame loss processing apparatus 60 includes a receiving module 601, a notification obtaining module 602, a determining module 603, an obtaining module 604, and a first sending module 605. Here the frame loss notification message also carries the frame number of the lost frame; the determining module 603 is configured to determine, according to the frame number of the lost frame, whether the lost frame is a reference frame; and the first sending module 605 is configured to, if the lost frame is a reference frame, encode the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and send it to the corresponding frame-loss device.
In one embodiment, the determining module 603 is specifically configured to:
query whether the frame number of the lost frame exists in a preset reference frame list, and if it exists, judge that the lost frame is a reference frame;
if it does not exist, judge that the lost frame is a non-reference frame; the preset reference frame list stores the frame numbers of all reference frames.
Fig. 7 is a structural diagram of a frame loss processing apparatus according to an embodiment of the present disclosure. As shown in Fig. 7, the frame loss processing apparatus 70 includes a receiving module 701, a notification obtaining module 702, an obtaining module 703, a first sending module 704, and a second sending module 705, where the second sending module 705 is configured to send the received encoded data to all receiving devices if the lost frame is a non-reference frame.
Based on the frame loss processing method described in the embodiment corresponding to fig. 2 and fig. 3, an embodiment of the present disclosure further provides a computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be a Read Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The storage medium stores computer instructions for executing the frame loss processing method described in the embodiment corresponding to fig. 2 and fig. 3, which is not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A frame loss processing method, the method comprising:
receiving encoded data, decoding the encoded data to obtain at least one frame of original image, and storing the at least one frame of original image;
acquiring a frame loss notification message, wherein the frame loss notification message carries identification information indicating that a frame loss has occurred;
acquiring the identifier of the frame-loss receiving device corresponding to the frame loss notification message; and
according to the identifier of the frame-loss receiving device, encoding the stored last frame of original image as a reference frame and sending it to the corresponding frame-loss device.
2. The frame loss processing method of claim 1, wherein the frame loss notification message further carries the frame number of the lost frame, and the method further comprises:
judging, according to the frame number of the lost frame, whether the lost frame is a reference frame;
wherein encoding the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and sending it to the corresponding frame-loss device comprises:
if the lost frame is a reference frame, encoding the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and sending it to the corresponding frame-loss device.
3. The frame loss processing method of claim 2, further comprising:
if the lost frame is a non-reference frame, sending the received encoded data to all receiving devices.
4. The frame loss processing method of claim 1, wherein decoding the encoded data to obtain at least one frame of original image and storing the at least one frame of original image comprises:
decoding the encoded data to obtain YUV data of the at least one frame of original image, and storing the YUV data of the at least one frame of original image.
5. The frame loss processing method of claim 2, wherein judging, according to the frame number of the lost frame, whether the lost frame is a reference frame comprises:
querying whether the frame number of the lost frame exists in a preset reference frame list, and if it exists, judging that the lost frame is a reference frame;
if it does not exist, judging that the lost frame is a non-reference frame; wherein the preset reference frame list stores the frame numbers of all reference frames.
6. A frame loss processing apparatus, the apparatus comprising:
a receiving module, configured to receive encoded data, decode the encoded data to obtain at least one frame of original image, and store the at least one frame of original image;
a notification obtaining module, configured to obtain a frame loss notification message, wherein the frame loss notification message carries identification information indicating that a frame loss has occurred;
an obtaining module, configured to obtain the identifier of the frame-loss receiving device corresponding to the frame loss notification message; and
a first sending module, configured to encode the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and send it to the corresponding frame-loss device.
7. The frame loss processing apparatus of claim 6, wherein the frame loss notification message further carries the frame number of the lost frame, and the apparatus further comprises a judging module configured to judge, according to the frame number of the lost frame, whether the lost frame is a reference frame;
wherein the first sending module is specifically configured to, if the lost frame is a reference frame, encode the stored last frame of original image as a reference frame according to the identifier of the frame-loss receiving device and send it to the corresponding frame-loss device.
8. The frame loss processing apparatus of claim 6, wherein the apparatus further comprises:
a second sending module, configured to send the received encoded data to all receiving devices if the lost frame is a non-reference frame.
9. The frame loss processing apparatus of claim 6, wherein the receiving module is specifically configured to:
receive encoded data, decode the encoded data to obtain YUV data of the at least one frame of original image, and store the YUV data of the at least one frame of original image.
10. The frame loss processing apparatus of claim 7, wherein the judging module is specifically configured to:
query whether the frame number of the lost frame exists in a preset reference frame list, and if it exists, judge that the lost frame is a reference frame;
if it does not exist, judge that the lost frame is a non-reference frame; wherein the preset reference frame list stores the frame numbers of all reference frames.
CN201910764122.0A 2019-08-19 2019-08-19 Frame loss processing method and device Pending CN110636333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910764122.0A CN110636333A (en) 2019-08-19 2019-08-19 Frame loss processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910764122.0A CN110636333A (en) 2019-08-19 2019-08-19 Frame loss processing method and device

Publications (1)

Publication Number Publication Date
CN110636333A 2019-12-31

Family

ID=68970414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910764122.0A Pending CN110636333A (en) 2019-08-19 2019-08-19 Frame loss processing method and device

Country Status (1)

Country Link
CN (1) CN110636333A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0868086A1 (en) * 1997-03-24 1998-09-30 Oki Electric Industry Co., Ltd. Video decoder
US20060146830A1 (en) * 2004-12-30 2006-07-06 Microsoft Corporation Use of frame caching to improve packet loss recovery
CN101370139A (en) * 2007-08-17 2009-02-18 华为技术有限公司 Method and device for switching channels
CN101742271A (en) * 2008-11-10 2010-06-16 华为技术有限公司 Method, system and device for transmitting stream media data
CN102291561A (en) * 2010-06-18 2011-12-21 微软公司 Reducing use of periodic key frames in video conferencing
CN102209237A (en) * 2011-05-26 2011-10-05 杭州华三通信技术有限公司 Method for reducing overlapping of frame I in on demand of live media stream and video management server
CN102223539A (en) * 2011-06-24 2011-10-19 武汉长江通信产业集团股份有限公司 Processing method for splash screen caused by picture coding frame loss
CN103780971A (en) * 2012-10-23 2014-05-07 北京网动网络科技股份有限公司 RUDP-based real-time video transmission method under internet condition
CN105519121A (en) * 2014-06-27 2016-04-20 北京新媒传信科技有限公司 Method for routing key frame and media server
CN106162374A (en) * 2016-06-29 2016-11-23 成都赛果物联网技术有限公司 The intracoded frame robust transmission method of a kind of low complex degree and system
CN110113610A (en) * 2019-04-23 2019-08-09 西安万像电子科技有限公司 Data transmission method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191231)