CA2847028C - Resilient signal encoding - Google Patents

Resilient signal encoding

Info

Publication number
CA2847028C
CA2847028C
Authority
CA
Canada
Prior art keywords
frame
frames
difference
encoding
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA2847028A
Other languages
French (fr)
Other versions
CA2847028A1 (en)
Inventor
Christian Joseph Eric Montminy
Gaelle Christine Martin-Cocher
Dake He
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BlackBerry Ltd
Original Assignee
BlackBerry Ltd
2236008 Ontario Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/849,983 external-priority patent/US9774869B2/en
Priority claimed from EP13160824.2A external-priority patent/EP2785062B1/en
Application filed by BlackBerry Ltd, 2236008 Ontario Inc filed Critical BlackBerry Ltd
Publication of CA2847028A1 publication Critical patent/CA2847028A1/en
Application granted granted Critical
Publication of CA2847028C publication Critical patent/CA2847028C/en

Landscapes

  • Detection And Prevention Of Errors In Transmission (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A system and method for resilient signal encoding provide for encoding a data signal to reduce the bandwidth required to transmit the encoded signal while mitigating the impact of frames lost or corrupted during transmission. A first frame of the data signal is encoded as an independently decodable frame and is assigned as a reference frame. Subsequent frames of the data signal are encoded as difference frames relative to the reference frame. The independently decodable frame and the difference frames are transmitted to a receiver. The receiver decodes the frames and sends an acknowledgement for one or more successfully decoded difference frames. When an acknowledgement is received, a corresponding data signal frame is assigned as the reference frame. Subsequent difference frames are encoded relative to the newly assigned reference frame.

Description

RESILIENT SIGNAL ENCODING
BACKGROUND
1. Technical Field [0001] The present disclosure relates to the field of encoding digital signals to reduce bandwidth utilization. In particular, to a system and method for resilient signal encoding.
2. Related Art [0002] A digital signal that is composed of successive frames of information (a.k.a.
data) may be encoded using various mechanisms in order to reduce the bandwidth required to transmit the signal. One such mechanism is inter-frame encoding, wherein some frames may be encoded as independently decodable frames (a.k.a. i-frames) while the remaining frames each may be encoded relative to an independently decodable frame as difference frames (p-frames) or relative to another p-frame. The mechanism is susceptible to some p-frames becoming undecodable when the frame relative to which they were encoded is lost or corrupted in transmission. Some mechanisms (e.g.
Internet Engineering Task Force (IETF), Ott, J., Wenger, S., Sato, N., Burmeister, C., and J. Rey, "Extended RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/AVPF)", RFC 4585, DOI 10.17487/RFC4585, July 2006) are used whereby a receiver of the encoded frames may inform the encoder of the frames when some received frames are undecodable.
[0003] Typically a trade-off is made between resilience to frame loss or corruption and a degree of bandwidth utilization (i.e. compression) in either or both of a downlink channel and an uplink channel.
BRIEF DESCRIPTION OF DRAWINGS
[0004] The system and method may be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure.
Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
[0005] Other systems, methods, features and advantages will be, or will become, apparent to one with skill in the art upon examination of the following figures and detailed description.
[0006] Fig. 1 is a schematic representation of a time series of encoded frames of a digital signal.
[0007] Fig. 2 is a schematic representation of a method for resilient signal encoding.
[0008] Fig. 3 is a schematic representation of a system for resilient signal encoding.
DETAILED DESCRIPTION
[0009] A system and method for resilient signal encoding is described herein.
The system and method provide for encoding a digital signal to reduce bandwidth required to transmit the encoded signal while mitigating the impact of frames lost or corrupted during transmission. The system and method may be used in applications such as, for example, video conferencing, telepresence, streaming media, video editing and other similar applications.
[0010] Figure 1 is a schematic representation of a time series of encoded frames of an encoded signal. The encoded signal is derived from an input signal in accordance with the system and method for resilient signal encoding. The input signal may be a digital signal that comprises a sequence of data samples (e.g. frames) that represent, for example, a video stream or other similar content. For illustrative purposes the input signal described herein represents a video stream comprising a sequence of video frames S1-S7, but the illustration is not intended to be limiting in any way. In Figure 1 time is represented on the horizontal axis progressing from left (earlier time) to right (later time).
[0011] A first frame of the input signal S1 may be encoded as an independently decodable frame (I1) and placed in the encoded signal as encoded frame E1. The independently decodable frame E1 may be transmitted to a receiver for decoding. The receiver may decode the independently decodable frame without reference to any other frames in the encoded signal as decoded frame D1. A subsequent frame of the input signal S2 may be encoded as a difference frame (P2-1) that represents a content difference between the first frame S1 and the subsequent frame S2. The first frame acts as a reference frame for the encoding of the subsequent frame. The difference frame E2 may be transmitted to the receiver for decoding. The receiver may decode the difference frame E2 by referencing the independently decodable frame E1. The independently decodable frame E1, which corresponds to the first frame S1, acts as a reference frame for the decoding of the difference frame E2. Further frames of the input signal S3 and S4 may be encoded as difference frames E3 and E4 in a manner similar to that described above with reference to the subsequent frame S2.
[0012] When the receiver successfully decodes a difference frame, the receiver may send an acknowledgement (ACK) indicating that the difference frame was successfully decoded. In response to receiving the acknowledgement (ACK), the input signal frame associated with the acknowledgement, S3, may be assigned as the reference frame, and further subsequent frames of the input signal S5, S6 and S7 may be encoded as difference frames E5, E6 and E7 that represent a content difference between the frames S5, S6 and S7 and the reference frame S3. The receiver may send an acknowledgement for each difference frame that is successfully decoded. Alternatively, the receiver may only send an acknowledgement for difference frames that are marked as acknowledgement-requested and that have been successfully decoded. During encoding some difference frames such as, for example, every Nth frame may be marked as acknowledgement-requested before they are transmitted. Difference frames that are marked as acknowledgement-requested may be candidate reference frames. The interval N for marking candidate reference frames may be a number between 2 and a predetermined upper limit (e.g. 3000) that may be preconfigured or may be user specified. In a further alternative, a difference frame may be marked as acknowledgement-requested (e.g. a candidate reference frame) based on the content of the frame and/or one or more frames occurring before or after the frame, for example, to minimize the difference between the reference frame and subsequently encoded frames. In another alternative, frames designated as golden frames (e.g. as specified in Internet Engineering Task Force (IETF), Bankoski, J., Koleszar, J., Quillio, L., Salonen, J., Wilkins, P., and Y. Xu, "VP8 Data Format and Decoding Guide", RFC
6386, DOI 10.17487/RFC6386, November 2011) may be marked as acknowledgement-requested. In the example illustrated in Figure 1, encoded frame E3 was successfully decoded and an acknowledgement was sent. Subsequently, frames S5, S6 and S7 are encoded as difference frames E5, E6 and E7 relative to S3, which corresponds to the encoded frame E3 associated with the sent acknowledgement.
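To make the reference-frame bookkeeping concrete, the following Python sketch models only the encoder-side logic described above. It is a minimal illustration under stated assumptions: the class name ResilientEncoder, the ack_request_interval parameter (the interval N), the frame dictionary fields and the placeholder payload tuples are all hypothetical and do not come from the patent; a real implementation would call into an actual codec.

```python
class ResilientEncoder:
    """Minimal sketch of the encoder-side bookkeeping described in paragraphs
    [0011]-[0012]. All names and the placeholder payloads are illustrative
    stand-ins, not the patent's own data structures."""

    def __init__(self, ack_request_interval=30):
        self.ack_request_interval = ack_request_interval   # the interval "N" in the text
        self.reference_seq = None       # sequence number of the current reference frame
        self.sources = {}               # seq -> input frame, kept while it may become a reference
        self.seq = 0

    def encode(self, source_frame):
        """Encode one input frame: the first as an independently decodable frame,
        later ones as difference frames relative to the current reference frame."""
        self.seq += 1
        self.sources[self.seq] = source_frame
        if self.reference_seq is None:
            self.reference_seq = self.seq                   # first frame becomes the reference
            payload = ("I", source_frame)                   # placeholder intra coding
            ref = None
        else:
            reference_frame = self.sources[self.reference_seq]
            payload = ("P", source_frame, reference_frame)  # placeholder inter coding
            ref = self.reference_seq
        # Mark every Nth frame as acknowledgement-requested (a candidate reference frame).
        ack_requested = (self.seq % self.ack_request_interval == 0)
        return {"seq": self.seq, "reference_seq": ref,
                "ack_requested": ack_requested, "payload": payload}

    def on_acknowledgement(self, acked_seq):
        """A received acknowledgement promotes the corresponding input frame to be
        the new reference; older candidates can be discarded."""
        if acked_seq in self.sources:
            self.reference_seq = acked_seq
            for seq in [s for s in self.sources if s < acked_seq]:
                del self.sources[seq]
```

In this sketch, calling on_acknowledgement with the sequence number reported by the receiver promotes the corresponding input frame to be the reference for all subsequently encoded difference frames, mirroring the assignment of S3 in the example above.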
[0013] Alternatively, or in addition, the receiver may send an acknowledgement whether or not a difference frame was successfully decoded. The acknowledgement may include a binary variable, flag or other similar indicator that has one value or state to signify successful decoding and another value or state to signify a failure to successfully decode the difference frame. The acknowledgement may further include one or more indicators of the outcomes (e.g. success or failure) of decoding earlier frames. For example, the acknowledgement may also include indicators of the outcomes of decoding of K previous frames. The number of previous frames K may be a number (e.g. 7, 15, 23 or 31) that may be preconfigured or may be user specified. The acknowledgement may take the form of a binary bit mask in which a first bit represents the outcome of decoding a current frame and K further bits represent the outcome of decoding K
previous frames.
In a further alternative, the indicator of the outcome of decoding each frame may be assigned to a class where the class specifies the number of times (e.g. 32, 16, 8 or 0) that the indicator will be repeated (e.g. included in an acknowledgement). The inclusion of an indicator of the outcome of decoding previous frames may be limited to previous frames that are candidate reference frames (e.g. frames that are marked as acknowledgement-requested). The redundant inclusion of indicators of the outcome of decoding one or more previous frames in the acknowledgement allows the system and method to be resilient when one or more acknowledgements are lost during transmission. An indicator that fails to be returned in a first, lost acknowledgement may be successfully returned in a subsequent acknowledgement.
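As one way to picture the bit-mask form of the acknowledgement, the short sketch below packs the outcome of the current frame together with the outcomes of up to K previous frames into an integer, with the current frame in the least significant bit. The field layout, the default K of 7 and the function names are illustrative assumptions only.

```python
def pack_ack_mask(current_ok, previous_outcomes, k=7):
    """Pack decode outcomes into a (k + 1)-bit mask: bit 0 is the current frame,
    bits 1..k are the k most recent previous frames (1 = decoded successfully).
    The layout and the value of k are illustrative assumptions."""
    mask = 1 if current_ok else 0
    for i, ok in enumerate(previous_outcomes[:k]):
        if ok:
            mask |= 1 << (i + 1)
    return mask

def unpack_ack_mask(mask, k=7):
    """Recover the list of outcomes, current frame first."""
    return [bool(mask >> i & 1) for i in range(k + 1)]

# Example: current frame decoded, the frame before it failed, two earlier ones OK.
mask = pack_ack_mask(True, [False, True, True], k=7)
assert unpack_ack_mask(mask, k=7)[:4] == [True, False, True, True]
```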
[0014] A failure to successfully decode frame E6 is shown, resulting in no acknowledgement being sent for E6. Encoded frame E7 is successfully decoded despite the failure to decode E6 because E3 was successfully decoded to D3 and E7 was encoded relative to S3. Due to the failure to decode E6, no acknowledgement was sent for E6, but the system and method may receive an acknowledgement of the successful decoding of E7 instead. Difference frame E7 may, for example, be marked as acknowledgement-requested when no acknowledgement is received for E6. Subsequently encoded frames may be encoded relative to S7. Alternatively, when no acknowledgement is received for E6, subsequent frames may continue to be encoded relative to the current reference frame S3 until an acknowledgement is received for a subsequent candidate reference frame, for example the next Nth frame.
[0015] In the illustrated example of Figure 1, S4 is encoded relative to S1 even though an acknowledgement was sent for E3. After an acknowledgement is sent, encoding of subsequent frames may continue to use a previous reference frame for some time due to various factors including propagation delay, caching, pipelining and other similar latency related factors.
[0016] Alternatively, or in addition, encoding may operate in one of two possible modes: normal mode and safe mode. Selection of one of the two modes may be responsive to feedback from the receiver (e.g. acknowledgements received). In safe mode, encoding only uses as a reference frame a frame for which an acknowledgement, including a positive indication of successful decoding, has been received.
In normal mode, encoding may use any frame as a reference frame, including a frame for which no acknowledgement with a positive indication of successful decoding has been received. Encoding may switch from normal mode to safe mode when, for example, an acknowledgement is received that includes an indicator of unsuccessful decoding of a frame. Encoding may switch from safe mode to normal mode when an abatement criterion is met. The abatement criterion may include, for example, receipt of a positive indication of successful decoding for L successive frames, where L is a number that may be preconfigured or may be user specified. Alternatively, the abatement criterion may specify a minimum time duration during which encoding remains in safe mode to mitigate frequent switching between safe and normal mode. Encoding may be configured to start either in normal mode or safe mode.
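The mode switching can be pictured as a small state machine. The sketch below is one illustrative combination of the example criteria given above: it leaves safe mode only after a number of consecutive positive acknowledgements (the "L" in the text) and a minimum dwell time; the class name, the parameter defaults and the use of a monotonic clock are assumptions, not requirements of the text.

```python
import time

class EncodingModeController:
    """Illustrative normal/safe mode switch driven by acknowledgement feedback."""

    def __init__(self, abatement_successes=5, min_safe_seconds=2.0, start_in_safe=False):
        self.mode = "safe" if start_in_safe else "normal"
        self.abatement_successes = abatement_successes   # the "L" in the text
        self.min_safe_seconds = min_safe_seconds         # minimum dwell time in safe mode
        self._successes_in_safe = 0
        self._entered_safe_at = time.monotonic() if start_in_safe else None

    def on_acknowledgement(self, decoded_ok):
        if not decoded_ok:
            # Any reported decode failure forces (or keeps) safe mode.
            if self.mode != "safe":
                self.mode = "safe"
                self._entered_safe_at = time.monotonic()
            self._successes_in_safe = 0
            return
        if self.mode == "safe":
            self._successes_in_safe += 1
            dwelled = time.monotonic() - self._entered_safe_at >= self.min_safe_seconds
            if self._successes_in_safe >= self.abatement_successes and dwelled:
                self.mode = "normal"
                self._successes_in_safe = 0
```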
[0017] The independently decodable frame and each of the difference frames are uniquely identifiable using, for example, sequence numbers that may be included in the encoded frames. Each difference frame may also include an identification of the reference frame relative to which it was encoded and that may be used to decode the difference frame.
Each acknowledgement sent by the receiver may identify the frame that was successfully decoded using, for example, the sequence number of the frame.
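A minimal, assumed header for an encoded frame might therefore carry the three pieces of identification mentioned in this paragraph and in paragraph [0012]; the field names and the use of a Python dataclass below are purely illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncodedFrameHeader:
    sequence_number: int                       # uniquely identifies this encoded frame
    reference_sequence_number: Optional[int]   # None for an independently decodable frame
    acknowledgement_requested: bool = False    # marks a candidate reference frame

# An acknowledgement can then identify the successfully decoded frame by its sequence number.
example_ack = {"acknowledged_sequence_number": 42, "decoded_ok": True}
```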
[0018] The system and method may encode and transmit only a single independently decodable frame thereby mitigating the bandwidth required to transmit the encoded signal. Further independently decodable frames may be encoded and transmitted at any time responsive to input such as, for example, a force reset request or detection of the loss or failure to decode a previous independently decodable frame. When the receiver successfully decodes an independently decodable frame, the receiver may send an acknowledgement (ACK) indicating that the independently decodable frame was successfully decoded. In the absence of receiving an acknowledgement, a further independently decodable frame may be encoded and transmitted to the receiver.
[0019] The system and method for resilient signal encoding may use various forms of inter-frame compression, either lossy or lossless. The inter-frame compression may be used in combination with other compression techniques (e.g. predictive encoding). The system and method for resilient signal encoding may use encoding formats such as, for example, ITU-T H.264 (International Telecommunication Union, Telecommunication Standardization Sector of ITU, (05/2003), Series H, Advanced video coding for generic audiovisual services) or ITU-T H.265 (International Telecommunication Union, Telecommunication Standardization Sector of ITU, (04/2013), Series H, High Efficiency Video Coding), including scalable and multiview extensions of these formats such as scalable video coding (SVC), multiview video coding (MVC) and 3-dimensional coding (3D), and The WebM Project VP8 or VP9 coding.
[0020] The acknowledgements may be received using in-band signaling in accordance with a video codec syntax being used for the encoded frames (e.g. H.264), as part of an ITU-T Supplementary Enhancement Information (SEI) message, or using a user data extension mechanism. Alternatively or in addition, the acknowledgements may be sent out-of-band using mechanisms such as, for example, as part of an IETF Session Initiation Protocol (SIP) message, as part of a Moving Picture Expert Group (MPEG) green MPEG
set of metadata dedicated to resource saving, as part of an IETF Real Time Communication Web (RTCWeb) mechanism, as part of an IETF codec control message or as part of an IETF Audio-Visual Profile with Feedback (AVPF) message.
[0021] The acknowledgement-requested marking of a frame may be sent as a flag or as a syntax element using in-band signaling in accordance with a video codec syntax being used for the encoded frames (e.g. H.264), as part of an ITU-T SEI message or using a user data extension mechanism. Alternatively, or in addition, the acknowledgement-requested marking of a frame may be sent out-of-band using mechanisms such as, for example, as part of a MPEG green MPEG set of metadata dedicated to resource saving, as part of an IETF RTCWeb mechanism or as part of an IETF SIP message.
[0022] Fig. 2 is a representation of a method for resilient signal encoding.
The method 200 may, for example, be implemented using the system 300 described herein with reference to Figure 3. The method 200 includes the following acts. Receiving a signal as a sequence of frames 202. Encoding a first frame, of the received frames, as an independently decodable frame (a.k.a. an i-frame) and assigning or designating the first frame as a reference frame 204. Encoding subsequent frames, of the received signal, as difference frames (a.k.a. p-frames) representing a difference between a current frame, of the subsequent frames, and the reference frame 206. Transmitting the independently decodable frame and the difference frames, as they are encoded, to a receiver 208.
Receiving one or more acknowledgements (a.k.a. ACK) each indicating the outcome of frame decoding 210. An acknowledgement may be sent by the receiver when an independently decodable frame or a difference frame has been successfully decoded. An acknowledgement may be sent by the receiver when a golden frame has been successfully decoded. The receiver may only send an acknowledgement for difference frames that are marked as acknowledgement-requested. The acknowledgement may further include one or more indicators of the outcomes (e.g. success or failure) of decoding earlier frames. For example, the acknowledgement may also include indicators of the outcomes of decoding of K previous frames. Assigning or designating the received frame corresponding to a difference frame associated with a received acknowledgement as the reference frame 212. Subsequently encoded difference frames as described in act 206 may represent a difference between the current frame and the newly assigned reference frame. In an alternative embodiment, the method may further include switching between a safe mode, which operates according to act 212 described above, and a normal mode, in which encoding may use any frame as a reference frame, including a frame for which no acknowledgement with a positive indication of successful decoding has been received.
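For the receiver implied by acts 208-210, the sketch below models a decoder that keeps successfully decoded frames available as references, decodes each difference frame against the reference it names, and returns an acknowledgement only for frames marked acknowledgement-requested, carrying the outcomes of up to K previous frames. The frame dictionary layout matches the earlier encoder sketch and, like it, is an assumption rather than the patent's wire format.

```python
class ResilientDecoder:
    """Illustrative receiver-side counterpart to method 200 (acts 208-210): decode
    incoming frames and return an acknowledgement for frames marked as
    acknowledgement-requested."""

    def __init__(self, history_k=7):
        self.decoded = {}        # seq -> decoded frame, available as decoding references
        self.outcomes = []       # most recent outcome first, for the K-previous field
        self.history_k = history_k

    def receive(self, frame):
        ok = self._decode(frame)
        self.outcomes.insert(0, ok)
        if frame["ack_requested"]:
            # The acknowledgement identifies the frame by sequence number and
            # redundantly reports the outcomes of the K previous frames.
            return {"seq": frame["seq"], "decoded_ok": ok,
                    "previous_outcomes": self.outcomes[1:self.history_k + 1]}
        return None              # no acknowledgement required for this frame

    def _decode(self, frame):
        if frame["reference_seq"] is None:                # independently decodable frame
            self.decoded[frame["seq"]] = frame["payload"]
            return True
        if frame["reference_seq"] not in self.decoded:    # reference lost or undecoded
            return False
        self.decoded[frame["seq"]] = frame["payload"]     # placeholder for real decoding
        return True
```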
[0023] Figure 3 is a schematic representation of a system for resilient signal encoding 300. The system 300 comprises a processor 302, memory 304 (the contents of which are accessible by the processor 302) and an I/O interface 306. The memory 304 may store instructions which when executed using the processor 302 may cause the system 300 to render the functionality associated with a signal receiver 308, an encoder 310, a signal transmitter 312 and a decoder 314. Alternatively, or in addition, the instructions when executed using the processor 302 may configure the system 300 to implement the acts of method 200. In addition the memory 304 may store information in data structures including, for example, signal frames 316, encoded frames 318, reference frame 320, decoded frames 322 and golden frames.
[0024] The processor 302 may comprise a single processor or multiple processors that may be disposed on a single chip, on multiple devices or distributed over more than one system. The processor 302 may be hardware that executes computer executable instructions or computer code embodied in the memory 304 or in other memory to perform one or more features of the system. The processor 302 may include a general purpose processor, a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), a digital circuit, an analog circuit, a microcontroller, any other type of processor, or any combination thereof.
[0025] The memory 304 may comprise a device for storing and retrieving data, processor executable instructions, or any combination thereof. The memory 304 may include non-volatile and/or volatile memory, such as a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a flash memory. The memory 304 may comprise a single device or multiple devices that may be disposed on one or more dedicated memory devices or on a processor or other similar device. Alternatively or in addition, the memory 304 may include an optical, magnetic (hard-drive) or any other form of data storage device.
[0026] The memory 304 may store computer code, such as signal receiver 308, encoder 310, signal transmitter 312 and decoder 314 as described herein. The computer code may include instructions executable with the processor 302. The computer code may be written in any computer language, such as C, C++, assembly language, channel program code, and/or any combination of computer languages. The signal receiver 308 may receive a signal as a sequence of frames. The encoder 310 may encode a first frame, of the received frames, as an independently decodable frame and assign the first frame as a reference frame. The encoder 310 may encode subsequent frames, of the received signal, as difference frames representing a difference between a current frame, of the subsequent frames, and the reference frame. The signal transmitter 312 may transmit the independently decodable frame and the difference frames, as they are encoded, to a receiver. The signal receiver 308 may also receive acknowledgements associated with successfully decoded frames. When an acknowledgement is received, the received frame corresponding to the difference frame associated with a received acknowledgement may be assigned as the reference frame. Subsequently encoded frames of the received frames may be encoded relative to the newly assigned reference frame by the encoder 310. The system 300 may not include the decoder 314 when an encoded signal is only transmitted to the receiver (e.g. in one-way transmission). When encoded signals are transmitted to the receiver and also received by the system (e.g. in two-way transmission) the system 300 may include the decoder 314. The decoder 314 decodes the received encoded frames and when a frame is successfully decoded, the decoder 314 may send an acknowledgement to the transmitter of the encoded frames. The receiver may only send an acknowledgement for difference frames that are marked as acknowledgement-requested.
[0027] The I/O interface 306 may be used to connect devices such as, for example, data transmission media and other components of the system 300.
[0028] All of the disclosure, regardless of the particular implementation described, is exemplary in nature, rather than limiting. The system 300 may include more, fewer, or different components than illustrated in Figure 3. Furthermore, each one of the components of system 300 may include more, fewer, or different elements than is illustrated in Figure 3. Flags, data, databases, tables, entities, and other data structures may be separately stored and managed, may be incorporated into a single memory or database, may be distributed, or may be logically and physically organized in many different ways. The components may operate independently or be part of a same program or hardware. The components may be resident on separate hardware, such as separate removable circuit boards, or share common hardware, such as a same memory and processor for implementing instructions from the memory. Programs may be parts of a single program, separate programs, or distributed across several memories and processors.
[0029] The functions, acts or tasks illustrated in the figures or described may be executed in response to one or more sets of logic or instructions stored in or on computer readable media. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing, distributed processing, and/or any other type of processing. In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the logic or instructions are stored in a remote location for transfer through a computer network or over telephone lines. In yet other embodiments, the logic or instructions may be stored within a given computer such as, for example, a CPU.
[0030] While various embodiments of the system and method for resilient signal encoding have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the present invention. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents.

Claims (29)

1. A method of encoding a signal comprising:
receiving, over time, a sequence of input frames that comprise the signal;
encoding a first frame of the sequence of input frames as an independently decodable frame and assigning the first frame as a current reference frame;
encoding subsequently received input frames, in the sequence of input frames, each as a difference frame that represents a difference between the input frame and the current reference frame;
transmitting the independently decodable frame and each of the difference frames, after each is encoded, to a receiver;
receiving one or more acknowledgements from the receiver;
assigning an input frame corresponding to a difference frame associated with a received acknowledgment, of the one or more acknowledgements, as the current reference frame;
switching to a safe operating mode when one or more acknowledgements indicate that one of the independently decodable frame or the difference frames was unsuccessfully decoded; and switching to a normal operating mode when an abatement criterion is met;
where the one or more acknowledgements are received through an in-band or an out-of-band signalling.
2. The method of encoding a signal of claim 1, where the signal includes a video stream.
3. The method for encoding a signal of claim 1 or claim 2, where the received one or more acknowledgements includes one or more indicators each signifying an outcome of a decoding of an associated difference frame.
4. The method for encoding a signal of any one of claims 1-3, where the receiver decodes each difference frame using a frame previously decoded by the receiver that corresponds to the current reference frame from which the difference frame was derived without referencing any intervening frames in the sequence of frames between the current reference frame and the difference frame.
5. The method for encoding a signal of any one of claims 1-4, where each of the one or more acknowledgements is associated with one or more difference frames that are each marked as an acknowledgement-requested.
6. The method for encoding a signal of claim 5, where each N difference frame is marked as the acknowledgement-requested before being transmitted and where N is a predetermined value.
7. The method for encoding a signal of claim 5, where the acknowledgement-requested mark for each of the one or more difference frames is sent in-band using mechanisms including any of: in accordance with a video codec syntax being used for the encoding the input frames, in accordance with an ITU-T Supplementary Enhancement Information (SEI) message, and using a user data extension mechanism.
8. The method for encoding a signal of claim 5, where the acknowledgement-requested mark for each of the one or more difference frames is sent out-of-band using mechanisms including any of: a Moving Picture Expert Group (MPEG) green MPEG set of metadata, an IETF RTC web mechanism and an IETF Session Initiation Protocol (SIP) message.
9. The method for encoding a signal of any one of claims 1-8, where the independently decodable frame and each difference frame are encoded in accordance with an ITU-T Video Coding Experts Group H.264 standard.
10. The method for encoding a signal of any one of claims 1-9, where the one or more acknowledgements are received in-band using mechanisms in accordance with: a video codec syntax, an ITU-T Supplementary Enhancement Information (SEI) message, or a user data extension mechanism.
11. The method for encoding a signal of any one of claims 1-10, where the one or more acknowledgements are received out-of-band using mechanisms including any of: an IETF RTC web mechanism, an IETF codec control message, an IETF AVPF message, a Moving Picture Expert Group (MPEG) green MPEG set of metadata and an IETF Session Initiation Protocol (SIP) message.
12. The method for encoding a signal of any one of claims 1-11, where the method switches between operating modes and the method further comprises:
when in a safe mode, assigning an input frame corresponding to a difference frame associated with a received acknowledgment, of the one or more acknowledgements, as the current reference frame; and when in a normal mode, assigning an input frame as the current reference frame regardless of receiving an acknowledgement associated with a difference frame corresponding to the input frame.
13. The method for encoding a signal of claim 12, where the abatement criterion is met upon receipt of a positive indication of successful decoding of a predetermined number of successive independently decodable frames or the difference frames.
14. The method for encoding a signal of any one of claims 1-13, where the abatement criterion is met upon completion of a predetermined time duration of operation in the safe operating mode.
15. A system for encoding a signal comprising:
one or more processors; and memory containing instructions executable by the one or more processors to configure the system to implement the method of any one of claims 1 to 14.
16. A computer-readable medium storing instructions which, when executed by a processor in a system for encoding a signal, cause the processor to:
receive, over time, a sequence of input frames that comprise the signal;
encode a first frame of the sequence of input frames as an independently decodable frame and assign the first frame as a current reference frame;

encode subsequently received input frames, in the sequence of input frames, each as a difference frame that represents a difference between the input frame and the current reference frame;
transmit the independently decodable frame and each of the difference frames, after each is encoded, to a receiver;
receive one or more acknowledgements from the receiver;
assign an input frame corresponding to a difference frame associated with a received acknowledgment, of the one or more acknowledgements, as the current reference frame;
switch to a safe operating mode when one or more acknowledgements indicate that one of the independently decodable frame or the difference frames was unsuccessfully decoded; and switch to a normal operating mode when an abatement criterion is met;
where the one or more acknowledgements are received through an in-band or an out-of-band signalling.
17. The computer-readable medium of claim 16, where the signal includes a video stream.
18. The computer-readable medium of claim 16, where the received one or more acknowledgements includes one or more indicators each signifying an outcome of a decoding of an associated difference frame.
19. The computer-readable medium of claim 16, where the receiver decodes each difference frame using a frame previously decoded by the receiver that corresponds to the current reference frame from which the difference frame was derived without referencing any intervening frames in the sequence of frames between the current reference frame and the difference frame.
20. The computer-readable medium of claim 16, where each of the one or more acknowledgements is associated with one or more difference frames that are each marked as an acknowledgment-requested.
21. The computer-readable medium of claim 20, where each N difference frame is marked as the acknowledgment-requested before being transmitted and where N is a predetermined value.
22. The computer-readable medium of claim 20, where the acknowledgment-requested mark for each of the one or more difference frames is sent in-band using mechanisms including any of: in accordance with a video codec syntax being used for the encoding the input frames, in accordance with an ITU-T Supplementary Enhancement Information (SEI) message, and using a user data extension mechanism.
23. The computer-readable medium of claim 20, where the acknowledgment-requested mark for each of the one or more difference frames is sent out-of-band using mechanisms including any of: a Moving Picture Expert Group (MPEG) green MPEG set of metadata, an IETF RTC web mechanism and an IETF Session Initiation Protocol (SIP) message.
24. The computer-readable medium of claim 16, where the independently decodable frame and each difference frame are encoded in accordance with an ITU-T Video Coding Experts Group H.264 standard.
25. The computer-readable medium of claim 16, where the one or more acknowledgements are received in-band using mechanisms in accordance with: a video codec syntax, an ITU-T Supplementary Enhancement Information (SEI) message, or a user data extension mechanism.
26. The computer-readable medium of claim 16, where the one or more acknowledgements are received out-of-band using mechanisms including any of: an IETF RTC web mechanism, an IETF codec control message, an IETF AVPF message, a Moving Picture Expert Group (MPEG) green MPEG set of metadata and an IETF Session Initiation Protocol (SIP) message.
27. The computer-readable medium of claim 16, further comprising:
when in a safe mode, assigning an input frame corresponding to a difference frame associated with a received acknowledgment, of the one or more acknowledgements, as the current reference frame; and when in a normal mode, assigning an input frame as the current reference frame regardless of receiving an acknowledgement associated with a difference frame corresponding to the input frame.
28. The computer-readable medium of claim 27, where the abatement criterion is met upon receipt of a positive indication of successful decoding of a predetermined number of successive independently decodable frames or the difference frames.
29. The computer-readable medium of claim 27, where the abatement criterion is met upon completion of a predetermined time duration of operation in the safe operating mode.
CA2847028A 2013-03-25 2014-03-19 Resilient signal encoding Active CA2847028C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US13/849,983 US9774869B2 (en) 2013-03-25 2013-03-25 Resilient signal encoding
EP13160824.2A EP2785062B1 (en) 2013-03-25 2013-03-25 Resilient signal encoding
EP13160824.2 2013-03-25
US13/849,983 2013-03-25

Publications (2)

Publication Number Publication Date
CA2847028A1 CA2847028A1 (en) 2014-09-25
CA2847028C true CA2847028C (en) 2019-01-08

Family

ID=51610640

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2847028A Active CA2847028C (en) 2013-03-25 2014-03-19 Resilient signal encoding

Country Status (1)

Country Link
CA (1) CA2847028C (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4383711A1 (en) * 2022-12-05 2024-06-12 Matthias Auchmann Method for verifying image data encoded in an encoder unit
EP4383710A1 (en) * 2022-12-05 2024-06-12 Matthias Auchmann Method for verifying video data encoded in an encoder unit

Also Published As

Publication number Publication date
CA2847028A1 (en) 2014-09-25

Similar Documents

Publication Publication Date Title
CN107409229B (en) Method and computing system for syntactic structures
KR102058759B1 (en) Signaling of state information for a decoded picture buffer and reference picture lists
JP6017574B2 (en) Reference picture marking
CN103843341B (en) Decoder and its method for managing the picture in video decoding process
US8929443B2 (en) Recovering from dropped frames in real-time transmission of video over IP networks
US8259802B2 (en) Reference pictures for inter-frame differential video coding
US9380313B2 (en) Techniques for describing temporal coding structure
US11445223B2 (en) Loss detection for encoded video transmission
CN110392284B (en) Video encoding method, video data processing method, video encoding apparatus, video data processing apparatus, computer device, and storage medium
CN108141581B (en) Video coding
JP6672159B2 (en) Reference picture selection
TWI499306B (en) Sync frame recovery in real time video transmission system
TW200904194A (en) Feedback based scalable video coding
MXPA05011533A (en) Picture coding method.
US9774869B2 (en) Resilient signal encoding
US20130058409A1 (en) Moving picture coding apparatus and moving picture decoding apparatus
CA2847028C (en) Resilient signal encoding
CN112995214B (en) Real-time video transmission system, method and computer readable storage medium
EP2785062A1 (en) Resilient signal encoding
US20130101030A1 (en) Transmission of video data
CN103024374A (en) Transmission of video data
US9641907B2 (en) Image transmission system with finite retransmission and method thereof
TW201424384A (en) System and method for decoding a video
US20140233653A1 (en) Decoder and encoder for picture outputting and methods thereof
CN104243989A (en) Video encoding and decoding system and video stream transmission method