CN102946534A - Video coding - Google Patents

Video coding

Info

Publication number
CN102946534A
CN102946534A (application CN201210320556XA)
Authority
CN
China
Prior art keywords
frame
distortion
coding
encoder
acknowledgement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210320556XA
Other languages
Chinese (zh)
Inventor
D. Zhao
M. Nilsson
R. Vafin
A. Jefremov
S. V. Andersen
P. Carlsson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Skype Ltd Ireland
Original Assignee
Skype Ltd Ireland
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB1115201.4A (external priority: GB2495467B)
Application filed by Skype Ltd Ireland filed Critical Skype Ltd Ireland
Publication of CN102946534A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/58Motion compensation with long-term prediction, i.e. the reference frame for a current frame not being the temporally closest one
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/107Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/147Data rate or code amount at the encoder output according to rate distortion criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/164Feedback from the receiver or from the transmission channel
    • H04N19/166Feedback from the receiver or from the transmission channel concerning the amount of transmission errors, e.g. bit error rate [BER]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/19Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding using optimisation based on Lagrange multipliers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder

Abstract

A method of performing a rate-distortion optimization process comprising, for each of a plurality of target image portions to be encoded in each of a plurality of frames, selecting a preferred one of a set of encoding modes by optimizing a function comprising an estimate of distortion for the target image portion and a measure of bit rate required to encode the target image portion, wherein the estimate of distortion is based on source coding distortion and an estimate of a distortion that would be experienced due to possible loss over the channel; encoding the target image portion into the encoded video stream using the selected encoding mode; and transmitting the encoded video stream over the channel. The rate-distortion optimization process for a current one of the frames is performed in dependence on feedback received from the receiving terminal based on an earlier one of the frames.

Description

Video coding
Technical field
The present invention relates to balancing the trade-off between bit rate and distortion when selecting an encoding mode for encoding portions of a video signal. The present invention is particularly (but not exclusively) applicable to encoding a video stream in real time, i.e. a live video stream such as that of a video call, where the encoder must encode the stream dynamically as it is received from the camera and transmit it onwards as it is encoded.
Background
Fig. 1a schematically illustrates a video stream to be encoded. The stream comprises a plurality of frames (F), each representing the video image at a different respective moment in time. As will be familiar to a person skilled in the art, for the purpose of encoding, each frame (F) is divided into portions and each portion may also be subdivided into smaller sub-portions, each portion or sub-portion comprising a plurality of pixels. For example, according to one terminology, each frame of a video stream to be encoded is divided into macroblocks (MB) and each macroblock is subdivided into blocks or sub-blocks (b), each block or sub-block comprising multiple pixels. Each frame may also be divided into independently decodable slices, each slice comprising one or more macroblocks. Note that the divisions shown in Fig. 1a are only schematic for illustrative purposes and it will be appreciated that they are not necessarily meant to correspond to any actual encoding scheme, e.g. each frame is likely to contain a larger number of macroblocks.
An example communication system in which video coding may be employed is illustrated schematically in the block diagram of Fig. 2. The communication system comprises a first, transmitting terminal 12 and a second, receiving terminal 22. For example, each terminal 12, 22 may comprise a mobile phone or smartphone, a tablet, a laptop computer, a desktop computer, or another household appliance such as a television set, set-top box, stereo system, etc. The first and second terminals 12, 22 are each operatively coupled to a communication network 32 and the first, transmitting terminal 12 is thereby arranged to transmit signals which will be received by the second, receiving terminal 22. Of course the transmitting terminal 12 may also be capable of receiving signals from the receiving terminal 22 and vice versa, but for the purpose of discussion, transmission is described herein from the perspective of the first terminal 12 and reception from the perspective of the second terminal 22. The communication network 32 may comprise, for example, a packet-based network such as a wide area internet and/or a local area network, and/or a mobile cellular network.
The first terminal 12 comprises a storage medium 14 such as a flash memory or other electronic memory, a magnetic storage device, and/or an optical storage device. The first terminal 12 also comprises: a processing apparatus 16 in the form of a CPU having one or more cores; a transceiver such as a wired or wireless modem having at least a transmitter 18; and a video camera 15 which may or may not be housed within the same casing as the rest of the terminal 12. The storage medium 14, video camera 15 and transmitter 18 are each operatively coupled to the processing apparatus 16, and the transmitter 18 is operatively coupled to the network 32 via a wired or wireless link. Similarly, the second terminal 22 comprises: a storage medium 24 such as an electronic, magnetic and/or optical storage device; and a processing apparatus 26 in the form of a CPU having one or more cores. The second terminal comprises a transceiver such as a wired or wireless modem having at least a receiver 28; and a screen 25 which may or may not be housed within the same casing as the rest of the terminal 22. The storage medium 24, screen 25 and receiver 28 of the second terminal are each operatively coupled to the respective processing apparatus 26, and the receiver 28 is operatively coupled to the network 32 via a wired or wireless link.
The storage medium 14 on the first terminal 12 stores at least a video encoder arranged to be executed on the processing apparatus 16. When executed, the encoder receives a "raw" (unencoded) input video stream from the video camera 15, encodes the video stream so as to compress it into a lower-bit-rate stream, and outputs the encoded video stream for transmission via the transmitter 18 and communication network 32 to the receiver 28 of the second terminal 22. The storage medium on the second terminal 22 stores at least a video decoder arranged to be executed on its own processing apparatus 26. When executed, the decoder receives the encoded video stream from the receiver 28 and decodes it for output to the screen 25. A generic term that may be used to refer to an encoder and/or decoder is a codec.
A goal of a video codec is to reduce the bit rate needed to transmit a video signal while maintaining the highest possible quality. This goal is achieved by exploiting statistical redundancies (similarities in the video signal) and perceptual irrelevance (related to the sensitivity of the human visual system).
Most of today's video codecs are based on an architecture that includes prediction of pixel blocks from other pixel blocks, transform of prediction residuals, quantization of transform coefficients, and entropy coding of quantization indices. These steps contribute to reducing redundancies and irrelevance.
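By way of an informal illustration only (not part of the patent text), the sketch below mirrors that generic predict/transform/quantize structure; the function name, the use of SciPy's 2-D DCT and a single uniform quantization step are assumptions made purely for illustration, and the entropy coding stage is omitted.

```python
import numpy as np
from scipy.fftpack import dct

def encode_block(block, prediction, qstep):
    """Illustrative hybrid-codec block encoding: predict, transform, quantize.
    (Entropy coding of the resulting indices is omitted for brevity.)"""
    residual = block.astype(np.float64) - prediction       # prediction
    coeffs = dct(dct(residual, axis=0, norm='ortho'),      # 2-D transform
                 axis=1, norm='ortho')
    indices = np.round(coeffs / qstep).astype(np.int32)    # quantization indices
    return indices   # these would then be entropy coded into the bitstream
```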
Reference is made to the following documents:
[1] ITU-T Recommendation H.264, "Advanced video coding for generic audiovisual services" (the H.264/AVC standard).
[2] Zhang et al., "Error resilience video coding in H.264 encoder with potential distortion tracking", Proc. IEEE International Conference on Image Processing, 2004.
The prediction can typically be performed from pixels in video frames other than the current frame (inter prediction) or from pixels in the same frame (intra prediction). That is, if encoded using intra-frame coding, a block, sub-block or other portion of the frame (the target block or portion) is encoded relative to another block, sub-block or image portion in the same frame (the reference block or portion); and if encoded using inter-frame coding, the target block or portion is encoded relative to a reference block or portion in another frame. This process is commonly referred to as prediction or prediction coding. The inter- or intra-prediction module will thus generate a prediction, e.g. in the form of an indication of a neighbouring block or sub-block in the case of intra-frame encoding and/or a motion vector in the case of inter-frame encoding. Typically the encoder also generates a residual signal representing the "left-over" difference between the predicted block and the actual block (or the predicted and actual sub-blocks, etc.). The residual, the motion vectors and any required data associated with the intra prediction are then output into the encoded video stream, typically via further coding stages such as a quantizer and an entropy encoder. Hence most blocks in the video can be encoded in terms of a difference between blocks, which requires fewer bits to encode than absolute pixel values and hence saves bit rate. Intra prediction coding typically requires more bits than inter prediction, though it still represents a saving over encoding absolute values. Details of suitable inter and intra encoding techniques for video will be familiar to a person skilled in the art.
Modern codecs allow the use of different prediction encoding modes for different portions within a frame. The possibility of having different coding options increases the rate-distortion efficiency of a video encoder. The optimal coding representation has to be found for every frame region. Typically, such a region is a macroblock, e.g. of 16×16 pixels. That is, it is possible for an intra prediction or inter prediction mode to be selected individually for each macroblock, so that different macroblocks within the same frame can be encoded with different modes. In some codecs it is also possible to use different modes based on different levels of partitioning of macroblocks, e.g. selecting between a higher-complexity mode in which a separate prediction is performed for each 4×4 sub-block within a macroblock, or a lower-complexity mode in which prediction is performed based on only 8×8 or 8×16 blocks or even whole macroblocks. The available modes may also include different options for performing the prediction. For example, as illustrated schematically in Fig. 1b, in one intra mode the pixels of a 4×4 sub-block (b) may be determined by extrapolating down from the neighbouring pixels of the sub-block immediately above, or by extrapolating sideways from the sub-block immediately to the left. Another special prediction mode called "skip mode" may also be provided in some codecs, which may be considered as an alternative type of inter mode. In skip mode (PSkip) the target's motion vector is inferred based on the motion vectors to the top and to the left and there is no encoding of residual coefficients. The manner in which the motion vector is inferred is consistent with motion vector prediction, so the motion vector difference is zero and it is only required to signal that the macroblock is a skip block.
Fig. 3 is a high-level block diagram schematically illustrating an encoder such as might be implemented on the transmitting terminal 12. The encoder comprises: a discrete cosine transform (DCT) module 51, a quantizer 53, an inverse transform module 61, an inverse quantizer 63, an intra prediction module 41, an inter prediction module 43, and a subtraction stage (−). The encoder also comprises a switch 47 and a mode selection module 49. Each of the modules is preferably implemented as a portion of code stored on the transmitting terminal's storage medium 14 and arranged for execution on its processing apparatus 16, though the possibility of some or all of these being wholly or partially implemented in dedicated hardware circuitry is not excluded.
Each of the switch 47 and the mode selection module 49 is arranged to receive an instance of the input video stream comprising a plurality of macroblocks MB. The mode selection module 49 is arranged to select a coding mode "o" for each macroblock and is operatively coupled to the multiplexer 47 so as to control it to pass the output of the inverse quantizer 63 to the input of either the intra prediction module 41 or the inter prediction module 43, as appropriate to the selected mode. The mode selection module 49 may also be arranged to indicate the selected mode "o" to the relevant prediction module 41, 43 (e.g. to indicate a 4×4 partition mode, an 8×8 mode, skip mode, etc.), and to receive feedback from the prediction modules 41, 43 for use in selecting the modes for the next frame. The output of the intra prediction module 41 or inter prediction module 43 is then coupled to an input of the subtraction stage (−), which is arranged to receive the unencoded input video stream at its other input and to subtract the predicted blocks from their unencoded counterparts, thus generating the residual signal. The residual blocks are then passed through the transform (DCT) module 51, where their residual values are converted into the frequency domain, and then to the quantizer 53, where the transformed values are converted to discrete quantization indices. The quantized, transformed signal is fed back through the inverse quantizer 63 and inverse transform module 61 to generate a predicted version of the blocks or sub-blocks (as would be seen at the decoder) for use by the selected prediction module 41, 43. An indication of the predictions used in the prediction modules 41, 43, the motion vectors generated by the inter prediction module 43, and the quantized, transformed indices of the residuals generated by the transform and quantization modules 51, 53 are all output for inclusion in the encoded video stream; typically via a further, lossless encoding stage such as an entropy encoder (not shown), in which the prediction values and the transformed, quantized indices may be further compressed using lossless encoding techniques known in the art.
According to the above, a coded representation may thus include block partition information, prediction modes, motion vectors, quantization accuracy, etc. The optimal coding option depends on the video content, the bit rate, earlier coding decisions, etc. The quantization accuracy of the transform coefficients is typically chosen to meet a bit rate constraint. Furthermore, the distortion should be minimized.
For example, the H.264 video coder provides great flexibility in choosing the prediction mode [1]. For inter prediction of the luma component, a macroblock of 16×16 pixels can be represented as one block of 16×16 pixels, or two blocks of 16×8 pixels, or two blocks of 8×16 pixels, or four blocks of 8×8 pixels. Further, an 8×8 block can be represented as one block of 8×8 pixels, or two sub-blocks of 8×4 pixels, or two sub-blocks of 4×8 pixels, or four sub-blocks of 4×4 pixels. Inter prediction is tried for each allowed partitioning of a macroblock. Inter prediction of a block is represented by indexing the reference frame(s) and the motion vector(s) (the spatial shift with respect to the reference block in the respective reference frame), which are typically estimated with sub-pixel precision. For intra prediction of the luma component there are four possible modes for 16×16 blocks and nine possible modes for 4×4 sub-blocks. Further, there are four possible modes for the chroma components. The best prediction mode is chosen by comparing the performance of the inter and intra prediction modes.
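As a rough sketch only (the partition lists mirror the description above, while the cost function and the evaluation strategy are hypothetical placeholders rather than the H.264-specified procedure):

```python
# Allowed luma partitionings for inter prediction, per the description above
MACROBLOCK_PARTITIONS = [(16, 16), (16, 8), (8, 16), (8, 8)]
SUB_PARTITIONS_OF_8X8 = [(8, 8), (8, 4), (4, 8), (4, 4)]

def cheapest_partition(mb, lam, rd_cost):
    """Evaluate each allowed macroblock partitioning with a caller-supplied
    rate-distortion cost function and keep the cheapest one."""
    return min(MACROBLOCK_PARTITIONS, key=lambda shape: rd_cost(mb, shape, lam))
```

When the 8×8 partitioning is used, each 8×8 block could additionally be evaluated against the entries of SUB_PARTITIONS_OF_8X8 in the same way.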
The rate-distortion performance of a video codec such as H.264 AVC [1] depends to a large extent on the performance of the macroblock mode selection o, i.e. the procedure of determining, in terms of the rate-distortion trade-off, whether a macroblock is best encoded using e.g. intra mode or inter mode. From a robustness perspective, intra-coded macroblocks are beneficial since they stop temporal error propagation (assuming the use of constrained intra prediction, i.e. intra prediction from inter-predicted macroblocks is prohibited). However, intra-coded macroblocks are generally more expensive in terms of rate compared to inter-coded macroblocks, and it is therefore important to introduce intra-coded macroblocks systematically such that the distortion (e.g. the average distortion) at the decoder is minimized given a certain bit budget and channel condition. Zhang et al. [2] propose such a systematic framework to introduce intra-coded macroblocks based on the minimization of the expected mean squared difference (SSD) at the decoder. By tracking the potential distortion, Zhang et al. are able to compute a bias term related to the expected error-propagation distortion (at the decoder), which is added to the source coding distortion when computing the cost for inter macroblocks within the encoder's rate-distortion loop.
The rate-distortion performance optimization problem can be formulated in terms of minimizing distortion under a bit rate constraint R. A Lagrangian optimization framework is often used to solve the problem, according to which the optimization criterion may be formulated as:
J = D(m, o) + λ·R(m, o),    (1)
where J represents the Lagrange function, D represents a measure of distortion (a function of mode o and macroblock m or macroblock sub-partition), R is the bit rate, and λ is a parameter defining the trade-off between distortion and rate. Commonly used distortion measures are the sum of squared differences (SSD) between original and reconstructed pixels, or the sum of absolute differences (SAD) between original and predicted pixels.
In this application, solving the Lagrangian optimization problem means finding the encoding mode o which minimizes the Lagrange function J, where the Lagrange function J comprises at least a term representing distortion, a term representing bit rate, and a factor (the "Lagrange multiplier") representing the trade-off between the two. As the encoding mode o is varied towards more thorough or better-quality encoding modes, the distortion term D will decrease. However, at the same time the rate term R will increase, and at a certain point, dependent on λ, the increase in R will outweigh the decrease in D. Hence the expression J will have some minimum value, and the encoding mode o at which this occurs is considered the optimal encoding mode.
In this sense the bit rate R, or rather the term λR, places a constraint on the optimization, in that this term prevents the optimal encoding mode from tending indefinitely towards ever-increasing quality. The mode at which this optimal balance is found will depend on λ, and hence λ may be considered to represent the trade-off between bit rate and distortion.
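Purely as a minimal sketch of such a Lagrangian mode selection (not from the patent text; the mode list and the two cost functions are caller-supplied placeholders):

```python
def select_mode(macroblock, modes, lam, distortion_fn, rate_fn):
    """Pick the mode o that minimises J = D(m, o) + lambda * R(m, o)."""
    best_mode, best_cost = None, float("inf")
    for o in modes:                       # e.g. intra 4x4, intra 16x16, inter, skip
        j = distortion_fn(macroblock, o) + lam * rate_fn(macroblock, o)
        if j < best_cost:
            best_mode, best_cost = o, j
    return best_mode
```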
The Lagrangian optimization is commonly used in the process of choosing coding decisions, and is applied to each frame region (e.g. each macroblock of 16×16 pixels). Commonly, the distortion may be evaluated so as to take into account all the processing stages. These include prediction, transform, and quantization. Furthermore, in order to compute reconstructed pixels, steps of inverse quantization, inverse transform, and inverse prediction must be performed. SSD is often preferred as the distortion criterion since it results in higher quality compared to SAD. Commonly, the rate also takes into account all the parameters that need to be coded, including the parameters describing the prediction and the quantized transform coefficients [4].
In [2], Zhang et al. estimate the potential distortion in the decoder due not only to source coding but also to channel errors, i.e. also a likely distortion that would be experienced due to loss of data when the signal is transmitted over the channel. The estimated potential distortion is then indirectly used to bias the mode selection towards intra coding (if there is a probability of channel errors).
Zhang's "end-to-end" distortion expression is based on the sum of squared differences (SSD) distortion measure and assumes a Bernoulli distribution for losing macroblocks. The optimal macroblock mode o_opt is given by:
o_opt = argmin_o ( D_s(m, o) + D_ep_ref(m, o) + λ·R(m, o) ),    (2)
where D_s(m, o) denotes the SSD distortion between the original and reconstructed pixels for macroblock m and macroblock mode o, R the total rate, and λ the Lagrange multiplier relating the distortion and rate terms. D_ep_ref(m, o) denotes the expected distortion within the reference macroblock in the decoder due to error propagation. D_ep_ref(m, o) thus provides a bias term which biases the optimization towards intra coding if the error-propagation distortion becomes too large. D_ep_ref(m, o) is zero for the intra-coded macroblock modes. The expression D_s(m, o) + D_ep_ref(m, o) + λ·R(m, o) may be considered an instance of a Lagrange function J. Argmin_o outputs the value of the argument o for which the expression J takes its minimum value.
In [2], the term D_ep_ref(m, o) follows the motion of the objects and is calculated from the total distortion map using the current motion vectors. The total expected error-propagation distortion map D_ep is driven by the performance of the error concealment and is updated after each macroblock mode selection as:
D_ep(m(k), n+1) = (1 − p)·D_ep_ref(m(k), n, o_opt) + p·( D_ec_rec(m(k), n, o_opt) + D_ec_ep(m(k), n) ),    (3)
where n is the frame number, m(k) denotes the k-th sub-partition (i.e. block or sub-block) of macroblock m, p is the probability of packet loss, D_ec_rec denotes the SSD between the reconstructed and error-concealed pixels in the encoder, and D_ec_ep the expected SSD between the error-concealed pixels in the encoder and in the decoder.
In [2], D_ep is stored on a 4×4 grid over each macroblock of the frame, i.e. 16 values of D_ep per macroblock, so one value of D_ep per 4×4 pixel sub-block of each macroblock. As shown in Fig. 1c, the calculation of D_ep_ref(m(k), o), i.e. the expected error-propagation reference distortion for sub-block k within macroblock m of the frame at time n, is then performed as a weighted sum of the values of D_ep from four sub-blocks of the preceding frame at time n−1. The weights are determined from the motion vector for the block m in question. That is:
D_ep_ref(m(k), n) = Σ_{i=1..4} w_i · D_ep(q_i(k_i), n−1),    (4)
where the weights w_i are proportional to the area of overlap and where q_i(k_i) denotes sub-block k_i of macroblock q_i in the preceding frame n−1.
Fig. 1c provides an illustration of the calculation of the expected error-propagation reference distortion from a motion vector and the expected error-propagation distortion map, with reference to exemplary sub-blocks b1 … b4 (in this example k corresponds to b1 and i counts through b1 … b4).
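The following is an informal sketch of the tracking described by equations (3) and (4) (not taken from the patent; overlap_weights(), motion_vector_of(), conceal_rec_ssd() and conceal_ep_ssd() are hypothetical helpers, and the map is assumed to be indexed per macroblock and per 4×4 sub-block):

```python
def ep_ref_distortion(dep_map_prev, mb, k, motion_vector):
    """Eq. (4): weighted sum of D_ep over the (up to four) 4x4 sub-blocks of the
    previous frame overlapped by the motion-compensated prediction of sub-block k."""
    return sum(w * dep_map_prev[q][k_i]
               for w, (q, k_i) in overlap_weights(mb, k, motion_vector))

def update_ep_map(dep_map_prev, mb, k, o_opt, p):
    """Eq. (3): expected error-propagation distortion carried forward to frame n+1."""
    d_ep_ref = ep_ref_distortion(dep_map_prev, mb, k, motion_vector_of(mb, k, o_opt))
    d_ec_rec = conceal_rec_ssd(mb, k, o_opt)         # SSD: reconstruction vs. concealment
    d_ec_ep = conceal_ep_ssd(dep_map_prev, mb, k)    # expected SSD of the concealed pixels
    return (1.0 - p) * d_ep_ref + p * (d_ec_rec + d_ec_ep)
```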
Summary of the invention
The process of Zhang et al. [2] is based only on an a priori probabilistic assumption made purely at the encoder about the likelihood of loss over the channel.
However, some existing communication systems do provide a feedback mechanism for the purpose of reporting certain information back from the receiver to the transmitter and/or for control purposes. For example, the encoder may receive back information about which frames arrived correctly at the decoder and/or which frames were lost in transmission, and in response may generate an intra frame in order to stop error propagation. However, the existing mechanism is crude in that it only triggers the generation of a whole intra frame, and it does not incorporate any probabilistic estimate of the distortion that may be experienced due to loss of other frames or parts of frames which remain unacknowledged or unreported.
The algorithm of Zhang et al. [2] only considers the case of transmitting video over an error-prone channel without considering the potential use or availability of any feedback, and as such the Zhang process is not based on any actual a posteriori knowledge of the channel.
Put another way, conventional use of feedback only triggers the generation of a whole intra frame, with no mode selection at the level of individual portions within a frame (e.g. macroblock by macroblock). Conventional use of feedback also does not relate to an estimate of the distortion that would be experienced due to possible loss over the channel.
The inventors propose, in one aspect, to use information fed back from the decoder to the encoder (such as packet and/or frame arrival status) to further adapt the loss-adaptive rate-distortion optimization process in the encoder, and thereby improve the overall rate-distortion performance compared to the method of Zhang et al.
A first embodiment of the present invention may make use of a system of short-term and long-term references. For example, the H.264 AVC standard supports functionality to mark certain reference frames as so-called "long-term" references. These long-term references remain in the decoded picture buffer until explicitly removed. This is in contrast with "short-term" reference frames, where a new short-term reference overwrites the oldest short-term reference frame in the decoded picture buffer.
According to the first embodiment of the present invention, the feedback mechanism can be used to make known to the encoder which is the most recent acknowledged long-term reference available at the decoder (in addition to the information about which frames have been lost). In the following, an acknowledged reference preferably means an acknowledged error-free reference (i.e. an acknowledged reference without any error-propagation distortion), and not merely a reference that has itself been acknowledged. In other words, the reference should preferably be acknowledged according to the following strict definition: the reference is acknowledged as received and everything in the history of that reference has also been acknowledged as received, so that it can be known that there is no error propagation; as opposed to only acknowledging that the current reference was received without strictly acknowledging its history. Note that a reference which has only itself been acknowledged may nonetheless still be free from error (have no propagated error) in respect of those parts that are encoded relative to error-free parts of the frame.
Inter prediction based on an acknowledged long-term reference frame can be used, similarly to intra coding, to stop error propagation in the decoder. The benefit of using inter prediction from a long-term reference is that inter prediction generally results in a lower bit rate for a given level of distortion.
Thus, by performing inter prediction using acknowledged long-term references, this first embodiment of the present invention can use an additional macroblock coding mode, for use for example in the framework of Zhang [2], where generally speaking the additional macroblock coding mode can, similarly to intra coding, stop error propagation but with a lower associated bit rate.
The algorithm of Zhang et al. [2] only considers two different types of coding mode, intra and inter coding. The error-propagation reference distortion D_ep_ref(m, o) in formula (3) is in that case zero only for the intra-coded macroblock modes. However, the first embodiment of the present invention expands the set of available coding modes to include inter coding from acknowledged long-term references. D_ep_ref(m, o) is then set to zero not only for intra coding but also for inter coding from an acknowledged reference. The advantage of this coding mode is that, generally speaking, it can stop error propagation in a manner similar to intra coding but at a lower bit rate.
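The following informal sketch (again not part of the patent text, reusing the hypothetical helpers from the earlier sketch and adding placeholder mode labels and a source_distortion()/rate() pair) illustrates how the bias term of equation (2) might be handled with the extended mode set of this first embodiment:

```python
INTRA, INTER_SHORT_TERM, INTER_ACKED_LONG_TERM = "intra", "inter_st", "inter_acked_lt"

def loss_adaptive_cost(mb, o, lam, dep_map_prev, p):
    """J = D_s + D_ep_ref + lambda*R, with the bias D_ep_ref forced to zero for
    modes that stop error propagation (intra, and inter from an acknowledged reference)."""
    d_s = source_distortion(mb, o)
    if o in (INTRA, INTER_ACKED_LONG_TERM):
        d_ep_ref = 0.0                      # no propagated error can be inherited
    else:
        d_ep_ref = sum(ep_ref_distortion(dep_map_prev, mb, k,
                                         motion_vector_of(mb, k, o))
                       for k in range(16))  # 16 4x4 sub-blocks per macroblock
    return d_s + d_ep_ref + lam * rate(mb, o)
```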
A variant of the first embodiment of the present invention uses the following idea: for a certain reference for which no feedback has yet been received (e.g. a long-term reference), there is a certain probability that, on the basis of intermediate reports from the decoder, this reference will turn out to be an error-free acknowledged reference. For example, another available coding mode can be introduced which distinguishes between the use of inter prediction based on a non-acknowledged long-term reference and the use of inter prediction based on a non-acknowledged short-term reference. For the non-acknowledged long-term reference, the estimate of the error-propagation distortion D_ep_ref(m, o) is reduced in accordance with the a priori estimate of the loss probability and the time (or equivalently the number of frames) since the last (most recent) acknowledged long-term reference in the history of the non-acknowledged reference.
In another variant of the first embodiment, when the round-trip time (RTT) of packets is sufficiently low compared to the number of short-term references in the decoded picture buffer (the round-trip time being the time for a packet to travel from the transmitter to the receiver and back again), the same concept can be applied to short-term references. In other words, for a small enough RTT, an alternative or additional possibility is for short-term references to be characterized as acknowledged, which can then be used in a similar manner to the acknowledged long-term references discussed above. Again, the required change to the algorithm of formula (3) is to set D_ep_ref(m, o) to zero not only for intra coding but also for inter coding from an acknowledged reference.
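One plausible reading of the variant just described, given purely as a hedged sketch (the Bernoulli-style scaling rule is an assumption, not something stated explicitly in the text):

```python
def scaled_ep_ref(d_ep_ref_raw, p, frames_since_last_acked):
    """Attenuate the error-propagation bias for a not-yet-acknowledged reference:
    the reference is error free unless at least one of the frames since the last
    acknowledged reference was lost, which under an i.i.d. loss model has
    probability 1 - (1-p)^n."""
    prob_any_loss = 1.0 - (1.0 - p) ** frames_since_last_acked
    return prob_any_loss * d_ep_ref_raw
```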
Note furthermore that the acknowledgements need not be based on whole frames. It may instead be arranged to receive acknowledgement of only partial frames (e.g. slices), and to treat those different parts differently in dependence on the acknowledgement or its absence (or on an explicit report of non-receipt).
In a second embodiment of the present invention, information fed back from the decoder to the encoder, such as packet and/or frame arrival status, is used to adjust the potential distortion maps in the encoder, thereby improving the overall rate-distortion performance compared to the method of Zhang et al.
According to the second embodiment, the potential error-propagation distortion maps (together with the error-concealment reconstruction distortion maps, the error-concealment error-propagation maps, the corresponding mode decisions and the motion vector information) are stored in association with each frame or slice in the decoded picture buffer of the encoder. The second embodiment then uses the feedback information from the decoder to update the potential distortion maps. The feedback information facilitates a refined tracking of the potential distortion, which in turn yields better rate-distortion performance.
If the encoder receives feedback information signalling that a particular frame has arrived at the decoder, the error-concealment contributions can be removed from the error-propagation distortion map of formula (3). Conversely, if feedback information is received signalling that a particular frame or slice was lost at the decoder, the associated error-propagation distortion map is recomputed so as to include only the contributions from the error-concealment distortion, i.e. the second and third terms on the right-hand side of formula (3) (normalized with p).
Then, if the round-trip time (RTT) is small compared to the number of reference pictures in the decoded picture buffer, it is possible to apply formula (3) recursively so as to propagate the adjusted potential error-propagation map at time n−RTT into the error-propagation distortion map at time n−1. The updated error-propagation distortion map at time n−1 will then be the basis for computing D_ep_ref at time n for use in the mode selection process of formula (2). This results in a more accurate tracking of the potential distortion, and hence an improved overall rate-distortion performance of the system.
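Purely as an informal sketch of this second embodiment (not from the patent text: the per-sub-block map structure, its field names, and the overlap_weights() helper reused from the earlier sketch are all assumptions):

```python
def apply_feedback(ep_maps, n_fb, arrived, p):
    """Adjust the stored map for frame n_fb once its ACK/NACK arrives.
    Each entry keeps the individual terms of eq. (3) alongside d_ep itself."""
    for e in ep_maps[n_fb].values():
        if arrived:                                    # frame known to have arrived:
            e["d_ep"] = (1.0 - p) * e["d_ep_ref"]      #   drop the concealment terms
        else:                                          # frame known to be lost:
            e["d_ep"] = e["d_ec_rec"] + e["d_ec_ep"]   #   concealment terms only

def repropagate(ep_maps, n_adjusted, n_current, p):
    """Re-apply eq. (3) recursively from the adjusted frame (time n-RTT) up to
    time n-1, so that D_ep_ref for frame n is computed from the corrected history."""
    for n in range(n_adjusted, n_current - 1):
        for (q, k), e in ep_maps[n + 1].items():
            d_ep_ref = sum(w * ep_maps[n][(qi, ki)]["d_ep"]          # eq. (4)
                           for w, (qi, ki) in overlap_weights(q, k, e["mv"]))
            e["d_ep"] = (1.0 - p) * d_ep_ref + p * (e["d_ec_rec"] + e["d_ec_ep"])
```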
The above outlines certain specific exemplary embodiments; more generally, the present invention provides a method, a transmitting terminal and a computer program product in accordance with the following.
According to one aspect of the present invention, there is provided a method of encoding a video stream at an encoder of a transmitting terminal for transmission over a lossy channel to a decoder at a receiving terminal, the method comprising: performing a rate-distortion optimization process comprising, for each of a plurality of target image portions to be encoded in each of a plurality of frames, selecting a preferred one of a set of encoding modes by optimizing a function comprising an estimate of distortion for the target image portion and a measure of bit rate required to encode the target image portion, wherein the estimate of distortion is based on source coding distortion and an estimate of a distortion that would be experienced due to possible loss over the channel; encoding the target image portion into the encoded video stream using the selected encoding mode; and transmitting the encoded video stream over the channel; wherein the rate-distortion optimization process for a current one of said frames is performed in dependence on feedback received from the receiving terminal based on an earlier one of said frames.
Said feedback may comprise one of: an acknowledgement that at least part of said earlier frame has been received, and a report that at least part of said earlier frame has not been received.
In the first embodiment of the present invention, the set of encoding modes may include an acknowledged-reference inter prediction mode which encodes the target image portion relative to a corresponding reference portion in an acknowledged earlier frame or an acknowledged part of an earlier frame.
The acknowledged-reference inter prediction mode may be used in the encoding mode selection on condition that the reference portion has been acknowledged as received and that everything upon which the encoding of the reference portion depends has also been acknowledged as received, such that the reference portion can be known not to be subject to error propagation.
The performance of the loss-adaptive rate-distortion optimization process in dependence on said feedback may comprise: setting the estimate of distortion due to loss to zero, on condition that feedback comprising said acknowledgement has been received.
The method may comprise: running an encoder-side instance of the decoder at the encoder, and maintaining a decoded picture buffer at the encoder, the decoded picture buffer storing reference image data reconstructed by the encoder-side instance of the decoder in the form of short-term and long-term references, wherein the short-term references are overwritten automatically by successive frames, and the long-term references are removed conditionally on an explicit removal command; and wherein the acknowledged-reference inter prediction mode may encode the target image portion relative to a corresponding one of the long-term references in the decoded picture buffer that has been acknowledged as having been received at the receiving terminal.
The set of encoding modes may comprise at least an intra coding mode, at least one non-acknowledged inter coding mode, and said acknowledged-reference inter prediction mode.
The set of encoding modes may comprise a non-acknowledged long-term reference inter prediction mode which encodes the target image portion relative to a corresponding long-term reference stored in the decoded picture buffer in a non-acknowledged frame or part of a frame; wherein the performance of the loss-adaptive rate-distortion optimization process in dependence on said feedback may comprise: determining the estimate of distortion due to loss in dependence on an estimated probability of loss and a time since the last acknowledged earlier frame or acknowledged part of an earlier frame.
The set of encoding modes may also comprise a non-acknowledged short-term reference inter prediction mode.
The method may comprise: running an encoder-side instance of the decoder at the encoder, and maintaining a decoded picture buffer at the encoder, the decoded picture buffer storing reference image data reconstructed by the encoder-side instance of the decoder in the form of short-term and long-term references, wherein the short-term references are overwritten automatically by successive frames, and the long-term references are removed conditionally on an explicit removal command; and wherein the acknowledged-reference inter prediction mode may encode the target image portion relative to a corresponding one of the short-term references in the decoded picture buffer that has been acknowledged as having been received at the receiving terminal.
The set of encoding modes may comprise an unconstrained intra coding mode which allows intra encoding of the target image portion from an inter-encoded reference portion.
In the second embodiment of the present invention, the performance of the loss-adaptive rate-distortion optimization process in dependence on said feedback may comprise: adjusting the estimate of distortion for the earlier frame or a part of the earlier frame in dependence on said feedback; and propagating the adjusted estimate of distortion forward for use in relation to the current frame.
The performance of the loss-adaptive rate-distortion optimization process in dependence on said feedback may comprise: adjusting the estimate of distortion for the earlier frame or a part of the earlier frame in dependence on at least one of said acknowledgement and said report; and propagating the adjusted estimate of distortion forward for use in relation to the current frame.
The estimate of the distortion that would be experienced due to possible loss may be based on: a first contribution representing an estimate of the distortion that would be experienced, if the target portion does arrive over the channel, due to non-arrival of a reference portion in the target portion's history upon which the prediction of the target portion depends; and a second contribution representing an estimate of the distortion that would be experienced due to concealment.
The second contribution may comprise: a contribution representing a measure of concealment distortion of the target portion relative to an image portion that would be used to conceal loss of the target portion if the target portion is lost over the channel; and a contribution representing an estimate of the distortion that would be experienced due to loss of an image portion in the target portion's history upon which concealment of the target portion depends.
The performance of the loss-adaptive rate-distortion optimization process in dependence on said feedback may comprise one or both of: setting the second contribution to zero for the earlier frame, on condition that feedback comprising said acknowledgement has been received; and setting the first contribution to zero for the earlier frame, on condition that feedback comprising said report of non-receipt has been received.
According to another aspect of the present invention, there is provided a transmitting terminal for encoding a video stream for transmission over a lossy channel to a decoder at a receiving terminal, the transmitting terminal comprising: an encoder configured to perform a rate-distortion optimization process comprising, for each of a plurality of target image portions to be encoded in each of a plurality of frames, selecting a preferred one of a set of encoding modes by optimizing a function comprising an estimate of distortion for the target image portion and a measure of bit rate required to encode the target image portion, wherein the estimate of distortion is based on source coding distortion and an estimate of a distortion that would be experienced due to possible loss over the channel, the encoder being arranged to encode the target image portion into the encoded video stream using the selected encoding mode; and a transmitter arranged to transmit the encoded video stream over the channel; wherein the encoder is configured such that the rate-distortion optimization process for a current one of said frames is performed in dependence on feedback received from the receiving terminal based on an earlier one of said frames.
In embodiments, the encoder may be further configured to perform operations in accordance with any of the above method features.
According to another aspect of the present invention, there is provided a computer program product for encoding a video stream at a transmitting terminal for transmission over a lossy channel to a decoder at a receiving terminal, the computer program product being embodied on a computer-readable medium and comprising code configured so as, when executed on the transmitting terminal, to perform operations of: performing a rate-distortion optimization process comprising, for each of a plurality of target image portions to be encoded in each of a plurality of frames, selecting a preferred one of a set of encoding modes by optimizing a function comprising an estimate of distortion for the target image portion and a measure of bit rate required to encode the target image portion, wherein the estimate of distortion is based on source coding distortion and an estimate of a distortion that would be experienced due to possible loss over the channel; encoding the target image portion into the encoded video stream using the selected encoding mode; and transmitting the encoded video stream over the channel; wherein the rate-distortion optimization process for a current one of said frames is performed in dependence on feedback received from the receiving terminal based on an earlier one of said frames.
In embodiments, the code may be further configured so as when executed to perform operations in accordance with any of the above method features.
Brief description of the drawings
To assist understanding of the present invention and to show how it may be put into effect, reference is made by way of example to the accompanying drawings, in which:
Fig. 1a is a schematic representation of a video stream,
Fig. 1b is a schematic representation of some intra prediction coding modes,
Fig. 1c is a schematic representation of a calculation of error propagation distortion,
Fig. 2 is a schematic block diagram of a communication system,
Fig. 3 is a schematic block diagram of an encoder, and
Fig. 4 is a schematic block diagram of a system employing feedback from the decoder to the encoder.
Detailed description of embodiments
Described below are an encoding system and method which use information fed back from the decoder to the encoder (e.g. packet and/or frame arrival status) to adapt the loss-adaptive rate-distortion optimization process and thereby improve the overall rate-distortion performance. The encoder is similar to that described in relation to Fig. 3, but with a modified mode selection module 49. It may be used to encode a video stream of the kind illustrated in Fig. 1, and may be implemented in a communication system such as that of Fig. 2.
As mentioned, mode selection may involve optimizing (e.g. minimizing) a Lagrangian type function:
J = D(m, o) + λ·R(m, o),    (1)
where J represents the Lagrange function, D represents a measure of distortion (a function of mode o and macroblock m or macroblock sub-partition), R is the bit rate, and λ is a parameter defining the trade-off between distortion and rate.
In a conventional case the distortion term D only takes into account the source coding distortion, i.e. that due to imperfections in the encoder, such as the distortion introduced by quantization. It does not take into account the distortion that may additionally be introduced due to loss of data over the channel, e.g. due to packet loss in transmission over a packet-based network 32.
On the other hand, loss-adaptive techniques such as those of the present invention and of Zhang [2] attempt to define a measure of "end-to-end" distortion taking into account both the source encoding and the distortion due to loss of data over the channel. The end-to-end distortion for a given (target) block, macroblock or sub-block may be described as:
D = (1 − p)·D_arrival + p·D_loss,    (5)
where D_arrival is an estimate of the distortion that will be experienced if the target block does arrive at the decoder, and D_loss is an estimate of the distortion that will be experienced if the target block does not arrive at the decoder due to packet loss over the channel, e.g. due to loss of a packet comprising that block over a packet-based network 32. The parameter p is an estimate of the probability of a loss event occurring over the channel that results in the block or image portion in question being lost, e.g. an estimate of the probability of packet loss. For convenience, the term "block" may be used in places herein to refer generally to the relevant level of frame partition (e.g. a block or sub-block of certain standards such as H.264).
D_arrival represents not only the source coding distortion but also the distortion that will be introduced due to distortion of a block's past, i.e. distortion in one or more reference blocks from which the target block is predicted. Accordingly, D_arrival comprises both a source coding distortion term D_s and an error propagation distortion term D_ep_ref, which represents a distortion in the predicted target block's history (i.e. distortion in the target block's reference block which will carry forward into the target block):
D_arrival = D_s + D_ep_ref.    (6)
D_loss comprises a loss due to concealment. If a target block is not received, the decoder will apply a concealment algorithm, which could involve freezing a previously decoded block, or interpolating or extrapolating from one or more successfully decoded blocks (either from the current frame and/or a previous frame). Therefore D_loss can be identified with the distortion due to this concealment process:
D_loss = D_ec.    (7)
So, examining equation (5), the term D_s represents an estimate of the distortion that will be experienced if there is no loss at all, the term D_ec represents an estimate of the distortion that will be experienced if the target block is lost, and the term D_ep_ref represents an estimate of the distortion that will be experienced if the target block is successfully received but something in its history is lost (if the target block's reference block is lost, or the reference block's reference block is lost, etc.).
D_s and D_ep_ref are functions of the encoding mode selection o. D_ec is not a function of the mode selection o and so is dropped from the Lagrange expression (it is the same no matter how the lost block is encoded — it is still lost). Hence the optimization can be written as equation (2) above:
o_opt = argmin_o ( D_s(m, o) + D_ep_ref(m, o) + λ·R(m, o) ).    (2)
D_s is deterministic, as it is based on information available at the encoder, for example based on the difference between the raw input sample values s and the reconstructed sample values ŝ. The encoder runs a parallel instance of the decoder (or an approximation of it) at the encoder side — see the inset detailing the inter prediction module 43 in Fig. 3. The inter prediction module 43 comprises a motion compensated prediction (MCP) block 44 and an addition stage (+) arranged to determine the reconstructed samples ŝ by combining the predicted samples ŝ_pred and the reconstructed residual r̂, i.e. ŝ_i = r̂_i + ŝ_pred,i for each sample index i. In the case of inter encoding, the predicted samples ŝ_pred at the encoder may be the same as the samples of the reference block ŝ_ref (the reference block in the reference frame being merely offset by the motion vector relative to the target frame — see Fig. 1c, to be discussed again shortly).
Hence the encoder can determine the difference between the actual samples s and the reconstructed samples ŝ as would be seen at the encoder and decoder end (this so far ignores the possibility of loss, which would introduce further distortion experienced at the decoder). The difference in the samples may be calculated, for example, as the sum of squared differences (SSD) error over all sample indices i of the target block in question:
D_s = Σ_i ( s_i − ŝ_i )².    (8)
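As a trivial illustrative helper only (assuming numpy arrays of luma samples), equation (8) is a plain sum of squared differences:

```python
import numpy as np

def ssd(original, reconstructed):
    """Eq. (8): D_s, the SSD between the raw input samples s and the
    encoder-side reconstruction of the target block."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.sum(diff * diff))
```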
However, D_ep_ref remains to be estimated, which will be based on some modelling of the channel over which the encoded data is to be transmitted (e.g. over the packet-based network 32).
To achieve this, the mode selection module 49 in the encoder may be configured to maintain an error propagation distortion map D_ep describing the distortion of each macroblock or partition of a macroblock within the most recently encoded frame. The mode selection module 49 is also arranged to determine a probability p that the packet containing the reference block from which the target block is to be predicted will be lost over the channel (and therefore also to determine, implicitly or explicitly, the probability 1−p that the packet does arrive). The probability p may be predetermined at the design stage based on statistical modelling, in which case the mode selection module 49 determines p by retrieving a value from memory 14. However, another possibility would be for the mode selection module 49 to determine p based on feedback from the receiver 22.
The error propagation map may be expressed as:
D_ep = (1 − p)·D_ep_arrival + p·D_loss.    (9)
The error propagation map D_ep comprises a distortion estimate for macroblock m, or more preferably for each sub-partition (block or sub-block) m(k), within the most recently encoded frame. Hence it may be written more explicitly as:
D_ep(m(k)) = (1 − p)·D_ep_arrival(m(k), o) + p·D_loss(m(k)),    (10)
where m(k) denotes the k-th sub-partition (e.g. sub-block) of macroblock m, and p the probability of packet loss.
D_loss is equal to D_ec, as discussed above. D_ep_arrival represents the difference over the channel, i.e. the difference between the reconstructed samples at the encoder and the reconstructed samples at the decoder. For example, this could be quantified in terms of the sum of squared differences (SSD):

D_ep_arrival = Σ_i (ŝ_i - s̃_i)²

where s̃_i are the samples (of index i) received at the decoder, taking into account both the source coding distortion and the distortion due to the channel. That is, s_i are the original unencoded input samples, ŝ_i are the reconstructed samples at the encoder taking into account the source coding distortion (e.g. due to quantization), and s̃_i are the samples taking into account the total end-to-end distortion including the lossy effect of the channel; s_i → ŝ_i → s̃_i.

D_ep_arrival can be expanded to:

D_ep_arrival = Σ_i ((ŝ_pred,i + r̂_i) - (s̃_pred,i + r̂_i))² = Σ_i (ŝ_ref,i - s̃_ref,i)²

where r̂_i are the samples of the reconstructed residual. Therefore:

D_ep_arrival = D_ep_ref
Substituting this into formula (9), the error propagation map can be rewritten as:

D_ep = (1 - p)·D_ep_ref + p·D_ec

or

D_ep(m(k)) = (1 - p)·D_ep_ref(m(k)) + p·D_ec(m(k))

Considering the mode optimization problem, it can also be written as:

D_ep(m(k), n+1) = (1 - p)·D_ep_ref(m(k), n, o_opt) + p·D_ec(m(k), n, o_opt)

where n is the frame number, i.e. D_ep(n+1) is the error propagation map to be used in making the mode selection for the frame at time n+1, given the existing decisions o_opt and the distortion map D_ep(n) for the preceding frame at time n.
As in Zhang [2], the D_ec term can also be expanded as:

D_ec(m(k), n, o_opt) = D_ec_rec(m(k), n, o_opt) + D_ec_ep(m(k), n)

where D_ec_rec denotes the SSD between the reconstructed and error-concealed pixels in the encoder, and D_ec_ep the expected SSD between the error-concealed pixels in the encoder and in the decoder.
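As a non-authoritative illustration, the per-sub-block update of the error propagation map described by the formulas above might be sketched as follows; the data layout and the fact that the two concealment terms are passed in separately are assumptions made for the example:

```python
import numpy as np

def update_error_propagation_map(d_ep_ref: np.ndarray,
                                 d_ec_rec: np.ndarray,
                                 d_ec_ep: np.ndarray,
                                 p: float) -> np.ndarray:
    """
    One frame's update of the error propagation distortion map, per sub-block:
        D_ep(n+1) = (1 - p) * D_ep_ref(n) + p * (D_ec_rec(n) + D_ec_ep(n))
    All inputs are maps with one entry per sub-block of the frame just encoded.
    """
    return (1.0 - p) * d_ep_ref + p * (d_ec_rec + d_ec_ep)

# Hypothetical 4x4 grid of sub-blocks for a tiny frame.
p = 0.1
d_ep_ref = np.zeros((4, 4))          # no propagated error yet (e.g. first frame, intra coded)
d_ec_rec = np.full((4, 4), 50.0)     # concealment-vs-reconstruction distortion estimate
d_ec_ep = np.zeros((4, 4))           # concealment error propagation estimate

d_ep_next = update_error_propagation_map(d_ep_ref, d_ec_rec, d_ec_ep, p)
print(d_ep_next)  # 0.9*0 + 0.1*(50+0) = 5.0 per sub-block
```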
Examining formula (3), as explained above, the term D_ep_ref represents an estimate of the distortion that will be experienced if the target block is successfully received but something in its history is lost (if the target block's reference block is lost, or the reference block's reference block is lost, etc.). Further, D_ec_rec represents an estimate of the distortion due to the nature of the concealment algorithm itself (somewhat analogous to the intrinsic source coding distortion D_s for prediction). D_ec_ep then represents an estimate of the distortion that will be experienced if the target block is lost (and therefore needs to be concealed at the decoder) and something in the history of the concealed target block is also lost (if the block from which concealment is done is lost, or a block from which that block is predicted or concealed is lost, etc.).
The distortion map D_ep therefore comprises: a contribution due to new loss, arising from D_ec_rec and in part from D_ec_ep; and a contribution due to past loss, arising from D_ep_ref and in part also from D_ec_ep.
For the first frame in a sequence, the frame will be coded with intra coding, in which case D_ep_ref = 0 and hence D_ep = p·D_ec.
The error concealment distortion D_ec is calculated by the mode selection module 49. The term D_ec_rec is based on knowledge of the concealment algorithm, and may depend on the particular error concealment algorithm used. D_ec_ep is calculated based on the existing (most recent) distortion map in a manner analogous to D_ep_ref, e.g. by copying the distortion of a co-located block in the case of a basic concealment algorithm, or by calculating a weighted sum of the distortions from a plurality of previously encoded blocks b1-b4 if a more sophisticated concealment is used that attempts to extrapolate motion (see also the discussion below in relation to Fig. 1c). Other ways of calculating D_ec could also be used; it can be any estimate of the difference between the reconstructed samples in the encoder and the error-concealed samples as would be seen at the decoder (i.e. the samples copied, interpolated or extrapolated from a previously received frame, or from a received region of the same frame, in order to conceal the lost frame or region).
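As an illustration of the two cases just described, here is a sketch of how D_ec_ep might be read off the existing map; the choice between a co-located copy and a motion-extrapolated weighted sum is driven by a simple argument here, which is an assumption of the example rather than anything prescribed by the text:

```python
def dec_ep_estimate(block, dep_map, concealment="colocated",
                    extrapolated_overlaps=None):
    """
    Concealment error-propagation estimate D_ec_ep for one block, taken from the
    existing (most recent) error propagation map:
      - basic concealment: copy the co-located block's entry
      - motion-extrapolating concealment: weighted sum over the overlapped blocks
    """
    if concealment == "colocated":
        return dep_map[block]
    total = sum(w for _, w in extrapolated_overlaps)
    return sum(w * dep_map[b] for b, w in extrapolated_overlaps) / total

dep_map = {"b1": 2.0, "b2": 8.0, "b3": 0.0, "b4": 4.0}
print(dec_ep_estimate("b2", dep_map))                                   # 8.0
print(dec_ep_estimate("b2", dep_map, "extrapolated",
                      [("b1", 1.0), ("b2", 2.0), ("b4", 1.0)]))          # (2+16+4)/4 = 5.5
```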
The mode selection module 49 then maintains the error propagation map for each subsequent inter-predicted frame by updating it following each mode selection decision, now including a calculation of D_ep_ref from knowledge of the existing error map. In the case of inter prediction (motion estimation), according to Zhang [2], this is done using the motion vectors for the frame in question.
An example of this is illustrated in Fig. 1c. Four example blocks b1, b2, b3 and b4 are shown in a reference frame F_n-1 (at time n-1), the reference frame having already been encoded. The blocks of the target frame F_n (at subsequent time n) are to be predicted from the reference frame F_n-1. For example, consider a target block b1' in the target frame F_n. To this end the motion prediction module 44 determines a motion vector defining an offset between the target block in the target frame F_n and a reference block (shown by the dotted line) in the reference frame F_n-1, such that when the reference block is translated from its offset position in the reference frame F_n-1 to the position of the target block b1' in the target frame F_n, it provides the best estimate of the target block b1'. Note therefore that the dotted reference block is not necessarily an indexable block in the reference frame F_n-1, i.e. it is not necessarily a predetermined subdivision of the reference frame, and may be offset by any arbitrary amount (and in fact may even be offset by a fractional number of pixels). Hence the reference block is made up of contributions from four actual indexable blocks b1, b2, b3 and b4.
Accordingly, the calculation performed by the mode selection module 49 to determine D_ep_ref for use in the update of the error propagation map D_ep(n+1) comprises calculating a weighted sum of the distortions recorded for the blocks or sub-blocks b1 to b4 in the existing map D_ep(n):

D_ep_ref = Σ_i w_i·D_ep(i)

or more explicitly:

D_ep_ref(m(k), n) = Σ_i w_i·D_ep(b_i, n)

where w_i is the weight representing the contribution from block or sub-block b_i, and D_ep(i) is the error propagation map entry for block or sub-block b_i.
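A minimal sketch of this weighted sum follows, assuming (purely for illustration) that the weights are taken as the fraction of the motion-offset reference area overlapping each stored block; the overlap bookkeeping shown is an assumption, not a procedure specified in the text:

```python
def dep_ref_from_overlaps(dep_map: dict, overlaps: dict) -> float:
    """
    Weighted sum D_ep_ref = sum_i w_i * D_ep(b_i), where the weight w_i is the
    fraction of the (motion-offset) reference block area falling inside stored
    block b_i of the previous frame's error propagation map.
    """
    total_area = sum(overlaps.values())
    return sum((area / total_area) * dep_map[b] for b, area in overlaps.items())

# Hypothetical previous-frame map entries for the four overlapped blocks b1..b4.
dep_map = {"b1": 2.0, "b2": 8.0, "b3": 0.0, "b4": 4.0}
# Hypothetical overlap areas (in pixels) of the motion-offset reference block with b1..b4.
overlaps = {"b1": 96, "b2": 64, "b3": 64, "b4": 32}

print(dep_ref_from_overlaps(dep_map, overlaps))  # 0.375*2 + 0.25*8 + 0.25*0 + 0.125*4 = 3.25
```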
The above describes an existing process whereby an initial error propagation map D_ep is determined, the error propagation map is used to select an optimal coding mode decision o_opt for a subsequent coding, the coding decision is used to update the map D_ep, and the updated map is then used in making the next coding decision, and so forth, the error propagation map representing an end-to-end distortion including an estimate of the effect of loss over the channel. See again Zhang [2], for example. This may be referred to herein as loss-adaptive rate-distortion optimization (LARDO).
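To make the flow of that loop concrete, here is a schematic sketch; the candidate mode list and the distortion/rate callbacks are made up for the example (everything named here is an assumption used only to show the structure of the iteration, not the actual codec calculations):

```python
def lardo_mode_selection(blocks, modes, dep_map, p, lmbda,
                         source_distortion, dep_ref_for, rate_of,
                         dec_rec_for, dec_ep_for):
    """
    One frame of loss-adaptive rate-distortion optimization (schematic):
      1. for each block, pick the mode minimizing D_s + D_ep_ref + lambda * R
      2. update the error propagation map using the chosen mode
    The callbacks encapsulate the codec-specific calculations.
    """
    decisions, new_dep_map = {}, {}
    for b in blocks:
        best_mode, best_cost = None, float("inf")
        for o in modes:
            cost = (source_distortion(b, o)
                    + dep_ref_for(b, o, dep_map)
                    + lmbda * rate_of(b, o))
            if cost < best_cost:
                best_mode, best_cost = o, cost
        decisions[b] = best_mode
        # D_ep(n+1) = (1-p)*D_ep_ref + p*(D_ec_rec + D_ec_ep)
        new_dep_map[b] = ((1 - p) * dep_ref_for(b, best_mode, dep_map)
                          + p * (dec_rec_for(b, best_mode) + dec_ep_for(b, dep_map)))
    return decisions, new_dep_map

# Toy demonstration with constant callbacks (all numbers are arbitrary):
decisions, dep = lardo_mode_selection(
    blocks=["b0", "b1"],
    modes=["intra", "inter"],
    dep_map={"b0": 0.0, "b1": 10.0},
    p=0.1, lmbda=1.0,
    source_distortion=lambda b, o: 20.0 if o == "intra" else 25.0,
    dep_ref_for=lambda b, o, m: 0.0 if o == "intra" else m[b],
    rate_of=lambda b, o: 40.0 if o == "intra" else 10.0,
    dec_rec_for=lambda b, o: 50.0,
    dec_ep_for=lambda b, m: m[b])
print(decisions)
```

In an actual encoder the callbacks would be driven by the motion search, the residual transform and the entropy coder; they are left abstract here.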
However, the process of Zhang et al. [2] is based only on a priori probabilistic assumptions made purely at the encoder about the likelihood of loss over the channel.
The present invention provides an improvement over Zhang by using information fed back from the decoder to the encoder (e.g. packet and/or frame arrival status) to further adapt the loss-adaptive rate-distortion optimization process in the encoder, and thereby to improve the overall rate-distortion performance.
Fig. 4 is a schematic block diagram depicting a system for implementing an encoder of the present invention. The encoder is preferably embodied in the memory 14 and processing unit 16 of the transmitting terminal 12, and the decoder in the storage medium 24 and processing unit 26 of the receiving terminal 22. The encoder on the transmitting terminal 12 comprises an encoding module and an encoder-side instance of a decoding module, which mirrors or approximates the decoding as performed at the decoder. The encoding module comprises a forward transform module 51 and a quantizer 53, and potentially one or more further stages such as an entropy encoder (not shown). The encoder-side decoding module comprises an inverse quantizer 63 and an inverse transform module 61, and potentially further stages such as an entropy decoder. The encoder also comprises a motion compensated prediction (MCP) module 44 and a subtraction stage (-). Refer again to Fig. 3 for an explanation of the connections between these encoder elements.
In addition, not shown in Fig. 3, the encoder also comprises a decoded picture buffer 65 connected in the path between the encoder-side decoding modules 61, 63 and the motion compensated prediction module 44. The decoded picture buffer 65 comprises a plurality of constituent buffer regions, each of which can be marked as holding a short-term or a long-term reference. In H.264 there is only one actual buffer, in which markers are used to indicate long-term references (though separate dedicated short-term and long-term buffers in other implementations are not precluded). In Fig. 4 the decoded picture buffer 65 is shown holding one or more short-term references 66, one or more unacknowledged long-term references 67, and one or more acknowledged long-term references 68.
Each constituent buffer region is operable to store a reconstructed version of one or more previously encoded frames or slices (i.e. encoded and then decoded again by the encoder-side instance of the decoding modules 61, 63 so as to represent the frame or slice as it would be seen at the decoder). These reconstructed versions of previously encoded frames or slices are provided for use as references in the inter prediction coding of the current frame or slice, i.e. so that target blocks to be encoded can be coded relative to reference blocks in the buffer.
The decoded picture buffer 65 is arranged such that the short-term references 66 are updated automatically with each successive frame or slice that is encoded. That is, as each frame or slice is encoded, the decoded version of that new frame or slice automatically overwrites another, most recent reference frame or slice previously held in the short-term buffer. In a preferred embodiment the decoded picture buffer 65 may hold a plurality of short-term references 66, and it is always the oldest short-term reference that is overwritten in the buffer. No additional condition needs to occur for this to happen.
As mentioned, the H.264 AVC standard also allows certain reference frames or slices to be marked as long-term references 67, 68. These long-term references remain in the decoded picture buffer until explicitly removed. That is, they are not automatically overwritten by successively encoded frames or slices, but instead are only overwritten or otherwise removed when an additional condition is triggered by another action or element of the encoder (e.g. a control command such as a memory management command). Such a control command may be issued by a controller (not shown) which decides what action the encoder is to take. Commands for clearing the long-term buffer may be signaled to the decoder in headers of the encoded bitstream, such as slice headers. Similar functionality may also be incorporated into other standards. The decoder on the receiving terminal 22 comprises decoder-side instances 44', 61', 63', 65', 66', 67' and 68' of the motion compensated prediction module 44, the decoding modules 61, 63, and the decoded picture buffer 65 arranged to store corresponding short-term and long-term references 66, 67 and 68.
The decoder on the receiving terminal 22 is configured to communicate with the encoder on the transmitting terminal 12 via a feedback channel. The feedback is preferably sent via the same network 32 over which the video stream is transmitted to the receiving terminal 22, e.g. the same packet-based network such as the Internet, although the possibility of an alternative feedback mechanism is not excluded.
By way of example, the long-term references may be managed by the controller as follows. Say it is determined to keep two long-term references in the decoded picture buffer (e.g. at positions lt_pos0 and lt_pos1). The first frame to be encoded (at time t0) may be placed in lt_pos0. It can be assumed that the first frame will arrive at the decoder (feedback from the decoder arriving one RTT later), so lt_pos0 is initially marked as acknowledged error-free. The next frame to be marked as a long-term reference is the frame at time t0+RTT, which is placed at lt_pos1. If the encoder obtains feedback from the decoder indicating that the reference at lt_pos1 has arrived (and contains no error propagation), then lt_pos1 is marked as acknowledged error-free and the next long-term reference frame (at time t0+2·RTT) is placed in lt_pos0. The two positions thus form a ping-pong buffer in which there is always one acknowledged error-free position and one position being used provisionally. In this way there should always be a fairly recent acknowledged error-free reference available in the decoded picture buffer which can be used to generate a recovery frame in the event of loss. In principle, the closer in time the long-term reference frame is to the current position, the more efficient the inter coding, and the smaller (in bits) this recovery frame will be. However, this is only one strategy which the controller of the encoder may be configured to use to manage the long-term references (described for the sake of example), and it will be appreciated that other schemes for managing the long-term and short-term references in the picture buffer are possible (e.g. an even better approach may be to provide more long-term references).
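A sketch of that two-position ping-pong strategy, written as a tiny state machine; the class, the method names and the idea of driving it from per-frame acknowledgement callbacks are assumptions made purely for illustration:

```python
class LongTermRefController:
    """Two long-term reference slots managed ping-pong style (illustrative only)."""

    def __init__(self):
        self.slots = {0: None, 1: None}      # frame number stored in each LT position
        self.acked = {0: False, 1: False}    # acknowledged error-free?
        self.pending = 0                     # slot currently awaiting acknowledgement

    def mark_long_term(self, frame_no: int) -> int:
        """Store a newly encoded frame as the provisional long-term reference."""
        slot = self.pending
        self.slots[slot] = frame_no
        self.acked[slot] = False
        return slot

    def on_feedback(self, frame_no: int, arrived_clean: bool) -> None:
        """Feedback from the decoder: frame arrived with no error propagation."""
        for slot, stored in self.slots.items():
            if stored == frame_no and arrived_clean:
                self.acked[slot] = True
                self.pending = 1 - slot      # next provisional LT ref goes to the other slot

    def acknowledged_reference(self):
        """Most recent acknowledged error-free long-term reference, if any."""
        candidates = [f for s, f in self.slots.items() if self.acked[s] and f is not None]
        return max(candidates) if candidates else None

ctrl = LongTermRefController()
ctrl.mark_long_term(0)                # frame at t0 -> slot 0 (provisional)
ctrl.on_feedback(0, True)             # one RTT later: acknowledged error-free
ctrl.mark_long_term(30)               # frame at t0+RTT -> slot 1
print(ctrl.acknowledged_reference())  # 0, until frame 30 is acknowledged in turn
```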
With reference to the illustrative embodiment of Fig. 4, the present invention contemplates feedback comprising information about the decoded picture buffer 65' at the decoder. Given this feedback, the encoder knows, for example, which frames or slices in the decoder have been decoded without containing any error propagation distortion. In Fig. 4, the entries 68 in the decoded picture buffer 65 refer to such acknowledged error-free frames, and the entries 67 refer to unacknowledged frames. The feedback mechanism can be used to make the encoder aware of which is the most recent acknowledged long-term reference available at the decoder (in addition to information about which frames have been lost). In the following, an acknowledged reference preferably means an acknowledged error-free reference (i.e. a reference acknowledged to be without any error propagation distortion), and not merely a reference that is itself acknowledged. In other words, a reference should preferably be acknowledged according to the following strict definition: the reference is acknowledged to have been received, and everything in the history of that reference is also acknowledged to have been received, so that it can be known that there is no error propagation; as opposed to only acknowledging that the current reference was received without strictly acknowledging its history. Note that a reference which is only acknowledged in itself is nevertheless still error free (without propagated error) in respect of those parts that were encoded relative to error-free frames.
According to a first embodiment of the present invention, inter prediction based on acknowledged long-term reference frames (or slices) can be used to stop error propagation in the decoder in a manner similar to intra coding. The benefit of using inter prediction from a long-term reference is that inter prediction generally results in a lower bitrate for a given level of distortion.
By performing inter prediction using acknowledged long-term references, the first embodiment of the invention can use an additional macroblock coding mode, for example for use within the framework of Zhang [2], which, generally speaking, stops error propagation in a similar manner to intra coding but with a lower associated bitrate.
The algorithm of Zhang et al. [2] only considers two different types of coding mode, namely intra and inter coding. In that case, the error propagation reference distortion D_ep_ref(m, o) in formula (3) is zero only for intra-coded macroblock modes. The first embodiment of the present invention, however, enlarges the set of available coding modes to include inter coding from an acknowledged long-term reference. The advantage of this coding mode is that, generally speaking, it stops error propagation in a similar manner to intra coding, but at a lower bitrate.
The change required to the algorithm presented above is that the error propagation reference distortion in formulas (2) and (3) is also set to zero when the coding mode is inter coding from an acknowledged reference frame, that is:

D_ep_ref(m(k), n, o_opt) = 0 if o_opt is intra coding or inter coding from an acknowledged reference.
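A sketch of how the mode-dependent error propagation reference term might be computed with this extra mode added; the mode labels and the helper for the weighted sum are assumptions of the example, and only the rule that acknowledged-reference inter coding (like intra coding) contributes zero propagated distortion comes from the text:

```python
def dep_ref_for_mode(mode: str, block, dep_map, weighted_sum) -> float:
    """
    Error propagation reference distortion D_ep_ref(m, o) for a candidate mode.
      - "intra"          : 0 (stops propagation, higher rate)
      - "inter_acked"    : 0 (prediction from an acknowledged error-free reference)
      - "inter_shortterm": weighted sum over the overlapped entries of the current map
    """
    if mode in ("intra", "inter_acked"):
        return 0.0
    return weighted_sum(block, dep_map)
```

In the mode selection, the rate term would then normally favor "inter_acked" over "intra" whenever an acknowledged reference is available, which is the point of the added mode.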
In a variant of the first embodiment, the above example may be modified as follows. Consider the case where the encoder marks reference frames as long-term references at regular intervals related to the round-trip time (RTT). The decoded picture buffer 65 (as shown in Fig. 4) will therefore at a given instant contain short-term references 66, unacknowledged long-term references 67 and acknowledged long-term references 68. For an unacknowledged long-term reference 67 to become acknowledged at the decoder according to the strict definition, it is required that no loss occurred prior to the unacknowledged long-term reference, i.e. the acknowledged long-term reference is a decoded frame without any error propagation distortion. On the encoder side, feedback is received conveying information about which frames the decoder has received. Given that the encoder knows exactly which frames are marked as (unacknowledged) long-term references and has an estimate p of the packet or frame loss probability, it is possible to model the probability that an unacknowledged long-term reference will turn into an acknowledged long-term reference. If the frame loss probability is p and there is a known or predetermined interval of L frames between two long-term references, then the a priori probability of the unacknowledged long-term reference becoming acknowledged is (1 - p)^L. Gradually, as positive feedback is received, e.g. the decoder has so far received l of the L frames, the probability of the long-term reference becoming acknowledged changes from (1 - p)^L to (1 - p)^(L - l). This probabilistic model can now be used to generalize the D_ep_ref(m, o) formula above to:

D_ep_ref(m(k), n, o_opt) = (1 - (1 - p)^(L - l))·D_ep_LTref(m(k)) if o_opt is inter coding from an unacknowledged long-term reference

where D_ep_LTref(m(k)) denotes the expected error propagation of the unacknowledged long-term reference (simply a copy of the D_ep of that frame).
This variant of the first embodiment therefore introduces another coding mode, distinguishing the use of inter prediction based on an unacknowledged long-term reference from the use of inter prediction based on an unacknowledged short-term reference. For a long-term reference for which feedback has not yet been received, there is only a certain probability, based on intermediate reports from the decoder, of it becoming an acknowledged error-free reference. For an unacknowledged long-term reference, the estimate of the error propagation reference distortion D_ep_ref(m, o) can therefore be weighted according to the a priori loss probability estimate (that is, p itself, not based on feedback) and the time (or equivalently the number of frames) since the last (most recent) acknowledged long-term reference in the history of the unacknowledged reference. The weight attenuates the distortion estimate, i.e. reduces or scales down that estimate.
It may be noted that the above logic is only an example embodiment, and in the form given above it is somewhat conservative, since it assumes that the loss of any frame has an impact on the frames marked as long-term references. The logic above could be refined so as to consider only frame or slice losses that actually affect the long-term reference frame.
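For illustration, a sketch of the simple (conservative) probability weighting described above; the parameters L and l and the scalar map entry passed in are assumptions of the example rather than prescribed data structures:

```python
def dep_ref_unacked_longterm(dep_ltref: float, p: float, L: int, l: int) -> float:
    """
    Attenuated error propagation reference distortion for inter prediction from an
    UNacknowledged long-term reference:
        (1 - (1 - p)^(L - l)) * D_ep_LTref
    p : estimated frame/packet loss probability
    L : frames between two long-term references
    l : frames already positively acknowledged since that long-term reference
    """
    prob_never_acked = 1.0 - (1.0 - p) ** (L - l)
    return prob_never_acked * dep_ltref

# Example: p = 0.1, L = 10 frames between long-term references.
print(dep_ref_unacked_longterm(100.0, 0.1, 10, 0))   # no feedback yet  -> ~65.1
print(dep_ref_unacked_longterm(100.0, 0.1, 10, 8))   # 8 of 10 acked    -> 19.0
```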
In another generalization, consider a situation in which the decoder has received the long-term reference but there were losses between long-term references. The long-term reference is then not "acknowledged" according to the strict definition above. Nonetheless, if the mode decisions for all the macroblocks of the frame have been stored, then those parts of the frame for which the corresponding mode was set to intra, or to inter prediction from an acknowledged long-term reference, are associated with zero error propagation reference distortion, and inter prediction from those regions therefore stops error propagation while potentially having a lower rate-distortion cost than intra coding.
Although the example embodiments above are based on the concept of long-term references and feedback reports, the same concept can be applied to short-term references when the round-trip time (RTT) is sufficiently low compared with the number of short-term references in the decoded picture buffer. In other words, for a small enough RTT, an alternative or additional possibility is to characterize short-term references as acknowledged, which can then be used in a manner similar to the acknowledged long-term references discussed above. The change required to formula (3) of the preceding sections is to set D_ep_ref(m, o) to zero not only for intra coding but also for inter coding from an acknowledged reference.
The generalizations of the first embodiment discussed above improve the flexibility in making trade-offs between robustness and source coding.
A further point: by default, LARDO assumes constrained intra prediction, i.e. intra prediction from inter-predicted macroblocks is prohibited. However, the inventors have in fact observed that constrained intra prediction can cause severe coding distortion (especially on smooth gradient picture regions). Therefore, in a particularly preferred variant of the present invention, LARDO should operate without constrained intra prediction. The implication is that the intra coding mode (when predicting from inter-predicted macroblocks) is then also associated with an error propagation reference distortion, so that the only mode associated with zero error propagation reference distortion is inter prediction from an acknowledged error-free reference picture.
In a second embodiment of the present invention, information fed back from the decoder to the encoder, such as packet and/or frame arrival status, is used to adjust the potential distortion maps in the encoder, thereby improving the overall rate-distortion performance compared with the method of Zhang et al.
According to the second embodiment, the error propagation distortion map D_ep for each frame or frame slice is stored in association with that frame or slice in the decoded picture buffer 65 of the encoder. For each frame or slice, the corresponding error concealment reconstruction distortion map D_ec_rec, the error concealment error propagation map D_ec_ep, the corresponding mode decisions o and the motion vector information are also stored in the decoded picture buffer 65 at the encoder. The encoder then uses the feedback information from the decoder to update the distortion maps. Refer again to formula (3). The feedback information allows the estimated distortion tracking to be refined, which in turn yields better rate-distortion performance.
Preferably, this is achieved as follows. If the encoder receives feedback information signaling that a particular frame or slice arrived successfully at the decoder, then the error concealment contributions D_ec_rec and D_ec_ep can be removed from the error propagation distortion map D_ep of formula (3). Conversely, if feedback information is received signaling that a particular frame or slice was lost at the decoder, then the associated error propagation distortion map D_ep is recalculated so as to contain only the contributions from the error concealment distortion, i.e. the second and third terms on the right-hand side of formula (3), namely D_ec_rec and D_ec_ep (normalized with respect to the a priori loss probability estimate p).
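A sketch of that adjustment, using the same per-sub-block map layout as the earlier sketches; treating the acknowledgement as a simple per-frame boolean, and dropping the probability weights entirely once the frame's fate is known, are interpretations of the example (the latter consistent with the normalization mentioned for the lost case):

```python
import numpy as np

def adjust_map_on_feedback(d_ep_ref, d_ec_rec, d_ec_ep, frame_arrived: bool):
    """
    Recompute a stored frame's error propagation map once its fate is known from feedback:
      arrived -> drop the concealment contributions, keep the propagated-reference part
      lost    -> keep only the concealment contributions (no longer scaled by p)
    """
    return d_ep_ref.copy() if frame_arrived else d_ec_rec + d_ec_ep

d_ep_ref = np.full((4, 4), 2.0)
d_ec_rec = np.full((4, 4), 50.0)
d_ec_ep = np.full((4, 4), 5.0)
p = 0.1

blind_estimate = (1 - p) * d_ep_ref + p * (d_ec_rec + d_ec_ep)            # before feedback
print(blind_estimate[0, 0])                                                # 7.3
print(adjust_map_on_feedback(d_ep_ref, d_ec_rec, d_ec_ep, True)[0, 0])     # 2.0
print(adjust_map_on_feedback(d_ep_ref, d_ec_rec, d_ec_ep, False)[0, 0])    # 55.0
```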
Then, if the round-trip time (RTT) is small compared with the number of reference pictures in the decoded picture buffer, the adjusted potential error propagation map D_ep at time n-RTT can be propagated forward to the error propagation distortion map at time n-1 by applying formula (3) recursively. The error propagation distortion map D_ep updated at time n-1 is then the basis for calculating D_ep_ref at time n for use in the mode selection process of formula (2). This results in more accurate tracking of the potential distortion maps, and hence improves the overall rate-distortion performance of the system.
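A schematic sketch of that forward re-propagation over the feedback delay; the stored per-frame records (concealment maps and a function giving motion-derived weights) and the helper that re-applies the per-block update are assumptions used to show the recursion, not structures mandated by the text:

```python
def repropagate(frames, adjusted_map, p):
    """
    Re-apply the map update of formula (3) from the frame at time n-RTT (for which
    feedback-adjusted values are now known) forward to the most recent frame n-1.
    `frames` is a list of per-frame records, oldest first, each holding the stored
    concealment maps and a weights(b) function returning (source_block, weight)
    pairs derived from the stored motion vectors and mode decisions.
    Blocks coded intra or from an acknowledged reference would contribute zero
    propagated distortion here; that branch is omitted for brevity.
    """
    d_ep = adjusted_map                        # map at time n-RTT after feedback adjustment
    for rec in frames:                         # frames n-RTT+1 ... n-1, oldest first
        d_ep_ref = {b: sum(w * d_ep[src] for src, w in rec["weights"](b))
                    for b in d_ep}
        d_ep = {b: (1 - p) * d_ep_ref[b]
                   + p * (rec["d_ec_rec"][b] + rec["d_ec_ep"][b])
                for b in d_ep}
    return d_ep                                # refreshed map at time n-1, used for frame n
```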
It will be appreciated that the above embodiments have been described only by way of example.
More generally, while the above has been described in terms of slices, macroblocks and blocks or sub-blocks, these terms are not necessarily intended to be limiting, and the ideas described herein are not limited to any particular way of dividing or subdividing a frame. Further, the distortion map may cover a whole frame or a region within a frame, and the coding decision process may be applied over a whole frame or only to a region within a frame. Note also that the prediction block granularity does not have to be the same as, or even connected to, the distortion map granularity (though that possibility is not excluded).
The sum of squared differences (SSD) is often preferred as the measure of difference, since it results in higher quality than the sum of absolute differences (SAD), but the latter possibility or others are not excluded, and generally the invention could be implemented using any measure of the difference between samples as a basis for quantifying distortion.
Commonly, the measure of rate also accounts for the coding of all needed parameters, including parameters describing the prediction and the quantized transform coefficients. This kind of optimization may be referred to herein as full rate-distortion optimization (RDO). In lower-complexity embodiments, however, the distortion and/or the rate term may be approximated by considering only the effect of some, but not all, of the processing stages (e.g. only the effect of prediction).
Further, where the present invention has been described in terms of two frames n-1 and n, or n and n+1, or the like, these need not refer to two adjacent frames (though that may be the case in existing codecs) according to certain embodiments of the invention. In some embodiments it is possible that inter prediction could be performed relative to an even earlier frame, and as such n-1 and n, or n and n+1, may be used in relation to the present invention to refer respectively to any previously encoded frame or image portion and a subsequent frame or portion to be predicted from it.
Note again that where a contribution due to loss is mentioned in this application, or a statement is made about what happens "if" data is lost over the channel or the like, this only relates to a probabilistic assumption made by the encoder about what may be experienced by the decoder (e.g. p); the encoder of course does not know what will actually happen. The probabilistic assumption may be predetermined at the design stage based on statistical network modeling, and/or it may even be determined dynamically based on feedback from the decoder.
Other variants may become apparent to a person skilled in the art given the disclosure herein. The scope of the present invention is not limited by the described embodiments but only by the appended claims.

Claims (10)

1. A method of encoding a video stream at an encoder of a transmitting terminal for transmission over a lossy channel to a decoder at a receiving terminal, the method comprising:
performing a rate-distortion optimization process comprising, for each of a plurality of target image portions to be encoded in each of a plurality of frames, selecting a preferred coding mode out of a set of coding modes by optimizing a function comprising an estimate of distortion for the target image portion and a measure of bitrate required to encode the target image portion, wherein the estimate of distortion is based on source coding distortion and an estimate of a distortion that would be experienced due to possible loss over the channel;
encoding the target image portion into the encoded video stream using the selected coding mode; and
transmitting the encoded video stream over the channel;
wherein the rate-distortion optimization process is performed for a current one of said frames in dependence on feedback received from the receiving terminal based on an earlier one of said frames.
2. A transmitting terminal for encoding a video stream for transmission over a lossy channel to a decoder at a receiving terminal, the transmitting terminal comprising:
an encoder configured to perform a rate-distortion optimization process comprising, for each of a plurality of target image portions to be encoded in each of a plurality of frames, selecting a preferred coding mode out of a set of coding modes by optimizing a function comprising an estimate of distortion for the target image portion and a measure of bitrate required to encode the target image portion, wherein the estimate of distortion is based on source coding distortion and an estimate of a distortion that would be experienced due to possible loss over the channel, the encoder being arranged to encode the target image portion into the encoded video stream using the selected coding mode; and
a transmitter arranged to transmit the encoded video stream over the channel;
wherein the encoder is configured such that the rate-distortion optimization process is performed for a current one of said frames in dependence on feedback received from the receiving terminal based on an earlier one of said frames.
3. The method of claim 1 or the terminal of claim 2, wherein said feedback comprises one of: an acknowledgement that at least part of said earlier frame has been received, and a report that at least part of said earlier frame has not been received.
4. The method or terminal of claim 3, wherein the set of coding modes comprises an acknowledged-reference inter prediction mode, which encodes the target image portion relative to a corresponding reference portion in an acknowledged earlier frame or an acknowledged portion of an earlier frame; and
wherein the acknowledged-reference inter prediction mode is used in the coding mode selection on condition that the reference portion has been acknowledged as received and that everything relative to which the reference portion was encoded has also been acknowledged as received, such that the reference portion is known not to suffer from error propagation.
5. The method or terminal of claim 4, wherein an encoder-side instance of the decoder is run at the encoder, and a decoded picture buffer is maintained at the encoder, the decoded picture buffer storing reference image data reconstructed by the encoder-side instance of the decoder in the form of short-term and long-term references, the short-term references being overwritten automatically by successive frames, and the long-term references being removed conditionally on an explicit removal command; and
wherein the acknowledged-reference inter prediction mode encodes the target image portion relative to a corresponding one of the long-term references in the decoded picture buffer acknowledged as having been received at the receiving terminal, or relative to a corresponding one of the short-term references in the decoded picture buffer acknowledged as having been received at the receiving terminal.
6. The method or terminal of claim 5, wherein the set of coding modes comprises at least an intra coding mode, at least one unacknowledged inter coding mode and said acknowledged-reference inter prediction mode, and an unconstrained intra coding mode which allows the target image portion to be intra coded from an inter-coded reference portion.
7. The method or terminal of claim 6, wherein the set of coding modes comprises at least one of an unacknowledged long-term reference inter prediction mode and an unacknowledged short-term reference inter prediction mode, encoding the target image portion relative to a corresponding long-term or short-term reference, respectively, in an unacknowledged frame or unacknowledged portion of a frame stored in the decoded picture buffer; and
wherein performing the loss-adaptive rate-distortion optimization process in dependence on said feedback comprises: determining the estimate of distortion due to possible loss in dependence on an estimated probability of loss and a time since a last acknowledged earlier frame or acknowledged portion of an earlier frame.
8. The method or terminal of any preceding claim, wherein performing the loss-adaptive rate-distortion optimization process in dependence on said feedback comprises: adjusting a distortion estimate for the earlier frame or a portion of the earlier frame in dependence on said feedback; and propagating the adjusted distortion estimate forward for use in relation to the current frame.
9. The method or terminal of any preceding claim, wherein the estimate of the distortion that would be experienced due to possible loss is based on a first contribution and a second contribution, the first contribution representing an estimate of the distortion that would be experienced, if the target portion does arrive over the channel, due to non-arrival of a reference portion in the target portion's history upon which the prediction of the target portion depends, and the second contribution representing an estimate of the distortion that would be experienced due to concealment, said second contribution comprising: a contribution representing a measure of concealment distortion of the target portion relative to an image portion that would be used to conceal loss of the target portion if the target portion were lost over the channel; and a contribution representing an estimate of the distortion that would be experienced due to loss of an image portion in the target portion's history upon which concealment of the target portion depends.
10. A computer program product for encoding a video stream at a transmitting terminal for transmission over a lossy channel to a decoder at a receiving terminal, the computer program product being embodied on a computer-readable medium and comprising code configured so as, when executed on the transmitting terminal, to perform the operations of the method of any of claims 1 and 3 to 9.
CN201210320556XA 2011-09-02 2012-09-03 Video coding Pending CN102946534A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1115201.4A GB2495467B (en) 2011-09-02 2011-09-02 Video coding
GB1115201.4 2011-09-02
US13/274,881 US9338473B2 (en) 2011-09-02 2011-10-17 Video coding
US13/274881 2011-10-17

Publications (1)

Publication Number Publication Date
CN102946534A true CN102946534A (en) 2013-02-27

Family

ID=46852395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210320556XA Pending CN102946534A (en) 2011-09-02 2012-09-03 Video coding

Country Status (1)

Country Link
CN (1) CN102946534A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1759610A (en) * 2003-01-09 2006-04-12 加利福尼亚大学董事会 Video encoding methods and devices
CN101162930A (en) * 2006-08-23 2008-04-16 富士通株式会社 Wireless communication apparatus and wireless communication method
US20080247469A1 (en) * 2007-04-04 2008-10-09 Sarat Chandra Vadapalli Method and device for tracking error propagation and refreshing a video stream
EP2139138A1 (en) * 2008-06-24 2009-12-30 Alcatel Lucent Radio link adaption of a channel between a first network element and a second network element in a communication network

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014139069A1 (en) * 2013-03-11 2014-09-18 华为技术有限公司 Method and apparatus for repairing video file
US10136163B2 (en) 2013-03-11 2018-11-20 Huawei Technologies Co., Ltd. Method and apparatus for repairing video file
CN110149513A (en) * 2014-01-08 2019-08-20 微软技术许可有限责任公司 Select motion vector accuracy
CN110149513B (en) * 2014-01-08 2022-10-14 微软技术许可有限责任公司 Selecting motion vector precision
US11546629B2 (en) 2014-01-08 2023-01-03 Microsoft Technology Licensing, Llc Representing motion vectors in an encoded bitstream
US11638016B2 (en) 2014-01-08 2023-04-25 Microsoft Technology Licensing, Llc Selection of motion vector precision
CN109983775A (en) * 2016-12-30 2019-07-05 深圳市大疆创新科技有限公司 The system and method sent for the data based on feedback
CN110225338A (en) * 2016-12-30 2019-09-10 深圳市大疆创新科技有限公司 Image processing method, device, unmanned vehicle and receiving end
US10911750B2 (en) 2016-12-30 2021-02-02 SZ DJI Technology Co., Ltd. System and methods for feedback-based data transmission
US11070732B2 (en) 2016-12-30 2021-07-20 SZ DJI Technology Co., Ltd. Method for image processing, device, unmanned aerial vehicle, and receiver
CN109587488A (en) * 2018-11-07 2019-04-05 成都随锐云科技有限公司 A kind of choosing method for the long reference frame predicted based on rate-distortion optimization and frame losing
CN109587488B (en) * 2018-11-07 2022-08-05 成都随锐云科技有限公司 Long reference frame selection method based on rate distortion optimization and frame loss prediction

Similar Documents

Publication Publication Date Title
KR102064023B1 (en) Video refresh using error-free reference frames
KR102146583B1 (en) Video refresh with error propagation tracking and error feedback from receiver
US9854274B2 (en) Video coding
US8804836B2 (en) Video coding
EP2712481B1 (en) Mode decision with perceptual-based intra switching
EP2712482B1 (en) Low complexity mode selection
US9036699B2 (en) Video coding
CN102946534A (en) Video coding
CN102946533B (en) Video coding
CN102946532A (en) Video coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130227