WO2006128072A2 - Method and apparatus for coding motion and prediction weighting parameters - Google Patents

Method and apparatus for coding motion and prediction weighting parameters

Info

Publication number
WO2006128072A2
Authority
WO
WIPO (PCT)
Prior art keywords
weighting parameters
prediction
parameters
transformed
weighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2006/020633
Other languages
English (en)
French (fr)
Other versions
WO2006128072A3 (en)
Inventor
Frank Bossen
Alexandros Tourapis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NTT Docomo Inc
Original Assignee
NTT Docomo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT Docomo Inc filed Critical NTT Docomo Inc
Priority to CN200680016666XA priority Critical patent/CN101176350B/zh
Priority to EP06771414.7A priority patent/EP1891810B1/en
Priority to JP2008508011A priority patent/JP2008541502A/ja
Publication of WO2006128072A2 publication Critical patent/WO2006128072A2/en
Publication of WO2006128072A3 publication Critical patent/WO2006128072A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: using predictive coding
    • H04N19/503: involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: by compressing encoding parameters before transmission
    • H04N19/587: involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N19/60: using transform coding
    • H04N19/61: transform coding in combination with predictive coding
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention relates to the field of video coding and decoding; more particularly, the present invention relates to coding motion and prediction weighting parameters.
  • Motion compensated (inter-frame) prediction is a vital component of video coding schemes and standards such as MPEG-1/2 and H.264 (also known as JVT or MPEG-4 AVC), due to the considerable benefit it can provide in terms of coding efficiency.
  • Traditionally, motion compensation was performed by primarily, if not exclusively, considering temporal displacement. More specifically, standards such as MPEG-1, MPEG-2, and MPEG-4 consider two different picture types for inter-frame prediction: predictive (P) and bi-directionally predicted (B) pictures. Such pictures are partitioned into a set of non-overlapping blocks, each one associated with a set of motion parameters.
  • Figure 1A illustrates an example of motion compensation in P pictures.
  • Figure 1B illustrates an example of motion compensation in B pictures. In the latter case, the prediction is essentially formed by averaging the predictions from the two references, using equal weighting factors of (½, ½).
  • In weighted prediction, the predicted signal is also scaled and/or adjusted using two new parameters w and o.
  • The sample with brightness value I(x, y, t) at position (x, y) at time t is essentially constructed as w · I(x + dx, y + dy, t′) + o, where dx and dy are the spatial displacement parameters (motion vectors) and t′ is the reference time.
  • these new weighting parameters could also considerably increase the overhead bits required for representing motion information, therefore potentially reducing the benefit of such strategies.
  • Kamikura, et al., in "Global Brightness-Variation Compensation for Video Coding," IEEE Trans. on CSVT, vol. 8, pp. 988-1000, Dec. 1998, suggest the usage of only a single set of global parameters (w, o) for every frame. The use or not of these parameters is also signaled at the block level, thereby providing some additional benefit in the presence of local brightness variations.
  • The ITU-T H.264 (or JVT or ISO MPEG-4 AVC) video compression standard has adopted certain weighted prediction tools that can take advantage of temporal brightness variations and can improve performance.
  • the H.264 video coding standard is the first video compression standard to adopt weighted prediction (WP) for motion compensated prediction.
  • Motion compensated prediction may consider multiple reference pictures, with a reference picture index coded to indicate which of the multiple reference pictures is used.
  • For P pictures (or P slices), a single reference picture list, list 0, is used.
  • For B pictures (or B slices), two separate reference picture lists are managed, list 0 and list 1, and bi-prediction using both list 0 and list 1 is allowed.
  • In bi-prediction, the list 0 and the list 1 predictors are averaged together to form a final predictor.
  • The H.264 weighted prediction tool allows arbitrary multiplicative weighting factors and additive offsets to be applied to reference picture predictions in both P and B pictures. Use of weighted prediction is indicated in the picture parameter set for P and SP slices. There are two weighted prediction modes: explicit mode, which is supported in P, SP, and B slices, and implicit mode, which is supported in B slices only.
  • In explicit mode, weighted prediction parameters are coded in the slice header.
  • a multiplicative weighting factor and an additive offset for each color component may be coded for each of the allowable reference pictures in list 0 for P slices and list 0 and list 1 for B slices.
  • the syntax also allows different blocks in the same picture to make use of different weighting factors even when predicted from the same reference picture store. This can be made possible by using reordering commands to associate more than one reference picture index with a particular reference picture store.
  • the same weighting parameters that are used for single prediction are used in combination for bi-prediction.
  • The final inter prediction is formed for the samples of each macroblock or macroblock partition, based on the prediction type used. For a single directional prediction from list 0,
  • SampleP = Clip1(((SampleP0 · W0 + 2^(LWD−1)) >> LWD) + O0)   (1)
  • and for a single directional prediction from list 1,
  • SampleP = Clip1(((SampleP1 · W1 + 2^(LWD−1)) >> LWD) + O1)   (2)
  • and for bi-prediction,
  • SampleP = Clip1(((SampleP0 · W0 + SampleP1 · W1 + 2^LWD) >> (LWD + 1)) + ((O0 + O1 + 1) >> 1))   (3)
  • Clip1() is an operator that clips the sample value within the range [0, (1 << SampleBitDepth) − 1], with SampleBitDepth being the number of bits associated with the current sample. W0 and O0 are the weighting factor and offset associated with the current reference in list 0, W1 and O1 are the weighting factor and offset associated with the current reference in list 1, and LWD is the log weight denominator rounding factor, which essentially plays the role of a weighting factor quantizer.
  • SampleP0 and SampleP1 are the list 0 and list 1 initial predictor samples, while SampleP is the final weighted predicted sample.
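These three formulas can be sketched in Python as follows (function names and the default 8-bit sample depth are illustrative, not taken from the standard text):

```python
def clip1(x, bit_depth=8):
    """Clip1() operator: clip a sample to [0, 2**bit_depth - 1]."""
    return max(0, min(x, (1 << bit_depth) - 1))

def weighted_pred_uni(p, w, o, lwd, bit_depth=8):
    """Single-directional weighted prediction, equations (1) and (2)."""
    return clip1(((p * w + (1 << (lwd - 1))) >> lwd) + o, bit_depth)

def weighted_pred_bi(p0, w0, o0, p1, w1, o1, lwd, bit_depth=8):
    """Bi-predictive weighted prediction, equation (3)."""
    return clip1(((p0 * w0 + p1 * w1 + (1 << lwd)) >> (lwd + 1))
                 + ((o0 + o1 + 1) >> 1), bit_depth)
```

With LWD = 5 (a denominator of 32), a weight of 32 and zero offset reproduce the input sample, while the clipping keeps the result inside the valid sample range.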
  • In implicit mode, weighting factors are not explicitly transmitted in the slice header, but instead are derived based on relative distances between the current picture and its reference pictures. This mode is used only for bi-predictively coded macroblocks and macroblock partitions in B slices, including those using direct mode.
  • The same formula for bi-prediction as given in the preceding explicit mode section is used, except that the offset values O0 and O1 are equal to zero, and the weighting factors W0 and W1 are derived using the formulas:
  • W0 = 64 − W1,  W1 = (64 · TDB) / TDD   (4)
  • where TDD and TDB are the temporal distances, each clipped to the range [−128, 127], between the list 1 reference and the list 0 reference picture, and between the current picture and the list 0 reference picture, respectively.
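A minimal sketch of this implicit derivation, assuming picture-order-count style time stamps; plain integer division stands in for the standard's exact fixed-point arithmetic:

```python
def implicit_weights(poc_cur, poc_ref0, poc_ref1):
    """Derive implicit-mode weighting factors from picture distances.

    TDB is the (clipped) distance between the current picture and the
    list 0 reference; TDD the distance between the list 1 and list 0
    references. Offsets are always zero in implicit mode.
    """
    clip = lambda x: max(-128, min(127, x))
    tdb = clip(poc_cur - poc_ref0)
    tdd = clip(poc_ref1 - poc_ref0)
    w1 = (64 * tdb) // tdd
    w0 = 64 - w1
    return w0, w1
```

A current picture midway between its two references yields the default equal weights (32, 32); a picture closer to the list 1 reference weights that reference more heavily.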
  • the H.264 video coding standard enables the use of multiple weights for motion compensation, such use is considerably limited by the fact that the standard only allows signaling of up to 16 possible references for each list at the slice level. This implies that only a limited number of weighting factors could be considered. Even if this restriction did not apply, it could be potentially inefficient if not difficult to signal all possible weighting parameters that may be necessary for encoding a picture. Note that in H.264, weighting parameters for each reference are independently coded without the consideration of a prediction mechanism, while the additional overhead for signaling the reference indices may also be significant.
  • the encoding method comprises generating weighting parameters for multi-hypothesis partitions, transforming the weighting parameters and coding transformed weighting parameters.
  • Figure 1A illustrates an example of motion compensation in P pictures.
  • Figure 1B illustrates an example of motion compensation in B pictures.
  • Figure 2 is a flow diagram of one embodiment of an encoding process.
  • Figure 3 is a flow diagram of one embodiment of a decoding process.
  • Figure 4 illustrates a tree with parent-child relationships for encoding of bi-prediction motion information.
  • Figure 5 illustrates another tree with node information that may be encoded.
  • Figure 6 is a block diagram of one embodiment of an encoder.
  • Figure 7 is a block diagram of one embodiment of a decoder.
  • Figure 8 is an example of the relationship between the representation and the true value of the weighting parameter.
  • Figure 9 illustrates one embodiment of a flow diagram for a process of using global and local weighting jointly.
  • Figure 10 illustrates a transform process of the weighting parameters for coding frames using biprediction.
  • Figure 11 illustrates the process for weighting parameters for the multi-hypothesis case.
  • Figure 12 illustrates the transform process of the weighting parameters for biprediction with controlled fidelity.
  • Figure 13 is a block diagram of an exemplary computer system that may perform one or more of the operations described herein.
  • An efficient coding scheme for coding weighting parameters within bi-predicted (or multi-predicted) partitions of a video coding architecture is disclosed. This scheme improves performance of the video coding system and can be used to handle local brightness variations.
  • the coding scheme includes a transform process between pairs of weighting parameters associated with each reference for each block that is being subjected to motion compensated prediction. The transform process causes a transform to be applied to the weighting parameters.
  • The transformed weighting parameters are signaled for every block using a prediction method and are coded using a zero tree coding structure. This reduces the overhead required for coding these parameters. While the methods described herein are primarily aimed at bi-predictive partitions, some of the concepts could also apply to uni-predicted partitions.
  • The interactions between uni-predicted and bi-predicted partitions are used to further improve efficiency. Special consideration is also made for bi-predictive pictures or slices, where a block may be predicted from a different list or both lists.
  • the coding scheme is further extended by combining and considering both global and local weighting parameters. Additional weighting parameters transmitted at the sequence, picture, or slice level are also optionally considered.
  • Variable granularity and dynamic range of such parameters are also taken into account.
  • weighting parameters are finely adjustable, which provides additional benefits in terms of coding efficiency.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • A computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • a machine-readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.
  • Embodiments of the present invention include a coding scheme that uses motion compensation that includes weighted prediction.
  • the weighting parameters associated with a prediction are transformed and then coded with the offset associated with the prediction.
  • the decoding process is a reverse of the encoding process.
  • FIG. 2 is a flow diagram of one embodiment of an encoding process.
  • The process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.
  • the process begins by processing logic generating weighting parameters for multi-hypothesis partitions (e.g., bi-predictive partitions) as part of performing motion estimation (processing block 201).
  • the weighting parameters comprise at least one pair of a set of weighting parameters for each partition (block) of a frame being encoded.
  • the weights for multiple pairs of transformed weighting parameters sum to a constant.
  • different weighting factors for at least two partitions of the frame are used.
  • the weighting parameters compensate for local brightness variations.
  • the weighted prediction parameters have variable fidelity levels. The variable fidelity levels are either predefined or signaled as part of a bitstream that includes an encoded representation of the transformed weighting parameters.
  • In one embodiment, one group of weighting parameters has a higher fidelity than that of another group of weighting parameters.
  • processing logic applies a transform to the weighting parameters to create transformed weighting parameters (processing block 202).
  • transforming the weighting parameters comprises applying a transform to at least two weighting parameters associated with each reference.
  • processing logic codes transformed weighting parameters and an offset (processing block 203).
  • the transformed weighting parameters are coded with at least one offset using variable length encoding.
  • the variable length encoding may use a zero tree coding structure.
  • Variable length encoding may comprise Huffman coding or arithmetic coding.
  • coding the transformed weighting parameters comprises differentially coding using predictions based on one of a group consisting of transformed coefficients of neighboring partitions or predetermined weighting parameters.
  • The predetermined weighting parameters are used when no neighboring partition exists for a prediction.
  • the predetermined weighting parameters may be default weighting parameters or globally transmitted weighting parameters.
  • FIG. 3 is a flow diagram of one embodiment of a decoding process.
  • a decoding process is used to decode information encoded as part of the encoding process of Figure 2.
  • the process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.
  • Processing logic decodes a bitstream using variable length decoding to obtain decoded data that includes transformed weighting parameters for multi-hypothesis partitions (e.g., bi-predictive partitions) (processing block 301).
  • Variable length decoding may comprise Huffman or arithmetic decoding.
  • processing logic applies an inverse transform to the transformed weighting parameters (processing block 302).
  • the inverse transformed weighting parameters comprise at least two weighting parameters associated with a reference.
  • processing logic reconstructs frame data using motion compensation in conjunction with inverse transformed weighting parameters (processing block 303).
  • Weighted prediction is useful for encoding video data that comprises fades, cross-fades/dissolves, and simple brightness variations such as flashes, shading changes, etc.
  • Bi-prediction and multi-hypothesis prediction in general
  • Conventional bi-prediction tends to use equal weights (i.e., ½) for all predictions. In one embodiment, one prediction signal is more correlated with the current signal than another, for example due to temporal distance, or due to quantization or other noise introduced/existing within each signal.
  • I(x, y, t) ≈ weight0 · I(x + dx0, y + dy0, t0) + weight1 · I(x + dx1, y + dy1, t1) + offset   (5)
  • where weightk, dxk, dyk, and tk are the weight, horizontal and vertical displacements, and time corresponding to the prediction from list k, and offset is the offset parameter.
  • In one embodiment, the weights are constrained to sum to a constant c, and the constant c is equal to 1. This could basically also be seen as the computation of the weighted average of all prediction samples.
  • In one embodiment, N-bit quantized parameters wQ0 and wQ1 are used. In one embodiment, N is 6 bits. Using substitution, equation 5 above then becomes:
  • I(x, y, t) ≈ ((wQ0 · I(x + dx0, y + dy0, t0) + wQ1 · I(x + dx1, y + dy1, t1) + (1 << (N − 1))) >> N) + offset   (6)
  • Considering that the relation in equation 6 may hold for a majority of blocks/partitions (with wQ0 + wQ1 close to 1 << N), in one embodiment the parameters may be further transformed, e.g., to B_wQ0 = wQ0 − (1 << (N − 1)) and B_wQ1 = wQ0 + wQ1 − (1 << N), finally resulting in transformed parameters that are both zero for the common case of equal weights.
  • In one embodiment, the parameters B_wQ0 and B_wQ1 are coded as is.
  • In another embodiment, the parameters B_wQ0 and B_wQ1 are differentially coded by considering the values of B_wQ0 and B_wQ1 from neighboring bi-predicted partitions. If no such partitions exist, then these parameters could be predicted using default values dB_wQ0 and dB_wQ1 (e.g., both values could be set to 0).
  • Figure 10 illustrates a transform process of the weighting parameters for coding frames using bi-prediction.
  • Transform 1001 receives the quantized weighting parameters wQ0 and wQ1 and produces transformed weighting parameters B_wQ0 and B_wQ1, respectively.
  • These transformed weighting parameters are variable length encoded using variable length encoder 1002 along with the offset. Thereafter, the encoded weighting parameters are sent as part of the bitstream to a decoder.
  • At the decoder, variable length decoder 1003 decodes the bitstream and produces the transformed weighting parameters B_wQ0 and B_wQ1, along with the offset.
  • Inverse transform 1004 inverse transforms the transformed weighting parameters to produce the quantized versions of the weighting parameters wQ0 and wQ1.
  • Weighting parameter wQ0 is multiplied by a motion-compensated sample x0 by multiplier 1006 to produce an output which is input into adder 1007.
  • Weighting parameter wQ1 is multiplied by a motion-compensated sample x1 using multiplier 1005.
  • The result from multiplier 1005 is added to the output of multiplier 1006 and to the rounding term 2^(N−1) using adder 1007.
  • The output of adder 1007 is divided by 2^N by divider 1008. The result of the division is added to the offset using adder 1009 to produce a predicted sample x′.
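The datapath of Figure 10 can be sketched as follows. The exact transform equations are garbled in the source, so this sketch assumes the zero-centered form implied by the zero default values dB_wQ0 and dB_wQ1 (equal weights map to all-zero transformed parameters):

```python
N = 6  # weight quantization bits; equal weights correspond to wQ = 1 << (N - 1) = 32

def transform_weights(wq0, wq1):
    # Forward transform (transform 1001): both outputs are zero for the
    # default equal weights summing to one, which suits differential and
    # zero-tree coding.
    return wq0 - (1 << (N - 1)), wq0 + wq1 - (1 << N)

def inverse_transform_weights(b0, b1):
    # Inverse transform (inverse transform 1004).
    wq0 = b0 + (1 << (N - 1))
    wq1 = b1 + (1 << N) - wq0
    return wq0, wq1

def predict_sample(x0, x1, wq0, wq1, offset):
    # Multipliers 1005/1006, adder 1007 (with rounding term 2**(N-1)),
    # divider 1008 (by 2**N), and adder 1009.
    return ((wq0 * x0 + wq1 * x1 + (1 << (N - 1))) >> N) + offset
```

The transform is exactly invertible, so the decoder recovers the quantized weights bit-exactly from the coded parameters.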
  • The present invention is not limited to the bi-prediction case; it is equally applicable to the multi-hypothesis case where M different hypotheses are used, by again making the assumption that the most likely relationship among the M hypotheses is that their weights sum to a constant.
  • Figure 11 illustrates the process for weighting parameters for the multi-hypothesis case.
  • The same arrangement as shown in Figure 10 is shown in Figure 11, with the difference that sets of two or more weighting parameters are transformed by transform 1101, and variable length encoder 1102 encodes sets of quantized weighting parameters along with one offset for each set.
  • At the decoder, variable length decoder 1103 decodes the bitstream to produce sets of transformed weighting parameters.
  • Each set of transformed weighting parameters is inverse transformed by an inverse transform 1104, the outputs of which are combined to produce a predicted sample x′ in the same way as is done in Figure 10.
  • Adder 1107 adds the results of all multiplications for all of the quantized weighting parameters.
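Under the same assumptions, the Figure 11 datapath generalizes to M hypotheses with a single weighted sum, one rounding term, one shift, and one offset per set (a sketch, not the patent's exact arithmetic):

```python
def predict_sample_mh(samples, weights, offset, n=6):
    """Multi-hypothesis weighted prediction: M motion-compensated samples
    combined with M quantized weights and a single offset."""
    acc = sum(w * x for w, x in zip(weights, samples))
    return ((acc + (1 << (n - 1))) >> n) + offset
```

With M = 2 and equal weights this reduces to the bi-prediction datapath of Figure 10.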
  • The same picture or slice may have a combination of both bi-predicted and uni-predicted partitions.
  • It is possible for a bi-predicted partition to have a uni-predicted neighbor using either list0 or list1 prediction, or even for a uni-predicted partition using listX (where X could be 0 or 1) to have neighbors using a different list.
  • While motion vectors within a bi-predicted partition could still be correlated with motion vectors of a uni-predicted partition, this is less likely to be the case for motion vectors of different lists, and also for weighting parameters using different numbers of hypotheses (i.e., weights from bi-predicted partitions may have little relationship with weights from uni-predicted partitions, while a weight and motion vector from a uni-predicted partition using list 0 may have little relationship with these parameters from another partition using list 1).
  • In one embodiment, all weights from a given prediction mode (list0, list1, and bi-predicted) are restricted to be predicted only from adjacent partitions in the same mode. In one embodiment, if the current partition is a bi-predicted partition, and all R neighbors (Neigh0, Neigh1, ..., NeighR) are also bi-predicted, then a weighting parameter B_wQk is predicted as:
  • B_wQk = f(Neigh0, ..., NeighR, k) + dB_wQk   (13)
  • where f() corresponds to the relationship used to determine the prediction for the k-th weighting parameter. In one embodiment, the function used for f is, for example, the median value of all predictors.
  • Other functions could also be used, such as the mean, or consideration of only the top or left neighbors based on the reference frame they use and/or a predefined cost, etc. However, if Neighj uses a different prediction type, then it may be excluded from this prediction process. In one embodiment, if no predictions are available, then default values, as previously discussed, could be used as predictors. In one embodiment, after prediction, zero tree coding is employed to further efficiently encode the weighting parameters, and in general the motion parameters, for a block. Figures 4 and 5 represent two possible trees with the node information that may be encoded.
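A sketch of the prediction of equation (13) with f() chosen as the median; the helper names are hypothetical, and the neighbor list is assumed to be pre-filtered to partitions of the same prediction mode:

```python
from statistics import median

def predict_weight(neighbors, k, default=0):
    """Predictor for the k-th transformed weighting parameter: median over
    same-prediction-mode neighbors, falling back to a default value
    (e.g., 0) when no suitable neighbor exists."""
    vals = [n[k] for n in neighbors]
    return int(median(vals)) if vals else default

def code_weight(b_wq_k, neighbors, k):
    """Only the differential dB_wQk is entropy coded."""
    return b_wq_k - predict_weight(neighbors, k)
```

Because the transformed parameters are zero for the common equal-weight case, both the predictor and the differential tend to be zero, which is what makes zero-tree coding effective.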
  • Referring to Figure 4, an example parent-child relationship for encoding of bi-prediction motion information is shown. In this example, nodes are segmented based on lists, with the horizontal and vertical motion vector displacement differences for List0 having a common parent node and the horizontal and vertical motion vector displacement differences for List1 having a common parent node.
  • In Figure 5, motion is segmented based on motion vector component. In this case, the horizontal motion vector displacement differences for both List0 and List1 have a common parent node, while the vertical motion vector displacement differences for both List0 and List1 have a common parent node.
  • The trees described in Figures 4 and 5, containing both node type data and leaf indices data, may be encoded (and subsequently decoded) using tree coding (e.g., zero tree coding) as described in U.S. Patent Application No. 11/172,052, entitled "Method and Apparatus for Coding Positions of Coefficients," filed June 29, 2005.
  • Default values for a given prediction type do not have to be fixed. In one embodiment, a set of global/default parameters is used and coded, such as the ones used by the H.264 explicit mode; alternatively, these parameters may be derived using a mechanism similar to that of the H.264 implicit mode.
  • the method or parameters could be signaled at the picture or slice level.
  • FIG. 9 is a flow diagram of one embodiment for a process of using global and local weighting jointly.
  • the process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.
  • Referring to Figure 9, the process begins by setting default weighting parameters (processing block 901). In such a case, there are default scaling and offset parameters for List0, List1, and Bipred.
  • For List0, the default weighting parameters are {DScaleL0, DOffsetL0}.
  • For List1, the default weighting parameters are {DScaleL1, DOffsetL1}.
  • For bi-prediction, the default weighting parameters are {DScaleBi_L0, DScaleBi_L1, DOffset}.
  • Next, processing logic tests whether the block is a bi-prediction block (processing block 903). If so, processing logic transitions to processing block 904, where the processing logic tests whether there are any available bi-prediction predictors. If there are, processing logic computes the weighting parameter predictors using neighbors, using a function f_bipred(Neigh0, ..., NeighR) (processing block 905), and processing continues to processing block 914.
  • If there are no available bi-prediction predictors, processing logic sets the weighting parameters to the default {DScaleBi_L0, DScaleBi_L1, DOffset} (processing block 906) and the processing transitions to processing block 914.
  • At processing block 907, processing logic tests whether the block list is List0. If it is, processing logic transitions to processing block 908, where the processing logic tests whether there are any available List0 prediction predictors. If there are, processing logic computes the weighting parameter predictors using neighbors, f_L0(Neigh0, ..., NeighR) (processing block 909), and processing continues to processing block 914. If there are no available List0 predictors, processing logic sets the weighting parameters to the default values {DScaleL0, DOffsetL0} (processing block 910) and the processing transitions to processing block 914.
  • Otherwise, processing logic tests whether there are any available List1 prediction predictors. If there are, processing logic computes the weighting parameter predictors using neighbors, f_L1(Neigh0, ..., NeighR) (processing block 912), and processing continues to processing block 914. If there are no available List1 predictors, processing logic sets the weighting parameters to the default values {DScaleL1, DOffsetL1} (processing block 913) and the processing transitions to processing block 914. At processing block 914, processing logic decodes the weighting parameters. Thereafter, the weighting parameter predictors are added to the weighting parameters (processing block 915).
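The control flow of Figure 9 can be sketched as follows; the default values and the dictionary-based neighbor representation are illustrative placeholders, not values from the patent:

```python
# Hypothetical per-mode default parameter tuples (values are placeholders).
DEFAULTS = {
    "bipred": (0, 0, 0),   # (scale_l0, scale_l1, offset)
    "list0":  (32, 0),     # (scale, offset)
    "list1":  (32, 0),
}

def weight_predictor(mode, neighbors):
    """Figure 9 control flow: same-mode neighbors feed the predictor
    function f; otherwise the mode's defaults are used."""
    same = [n["params"] for n in neighbors if n["mode"] == mode]
    if not same:
        return DEFAULTS[mode]
    # Per-component median (upper median for even counts) as f().
    return tuple(sorted(col)[len(col) // 2] for col in zip(*same))

def decode_weights(mode, neighbors, diffs):
    """Processing blocks 914-915: decoded differentials plus predictor."""
    return tuple(d + p for d, p in zip(diffs, weight_predictor(mode, neighbors)))
```

Note how neighbors in a different prediction mode are excluded, so a List1 block surrounded only by List0 and bi-predicted neighbors falls back to the List1 defaults.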
  • In one embodiment, bi-predicted weights could have a different dynamic range, while the parameters luma_bipred_weight_type and chroma_bipred_weight_type are used to control the method that is to be used for deriving the weights. More specifically, for luma, if luma_bipred_weight_type is set to 0, then the default ½ weight is used. If 1, then implicit mode is used, while if 2, the uni-prediction weights are used instead, as is currently done in H.264 explicit mode. Finally, if 3, weighting parameters are transmitted explicitly within this header.
  • a weighting parameter is uniformly quantized to N bits
  • Non-uniform quantization may instead be more efficient and appropriate to better handle different variations within a frame.
  • In one embodiment, a finer fidelity is used for smaller weighting or offset parameters, whereas coarser quantization steps (lower fidelity) are used for larger parameters.
  • In one embodiment, encoding a parameter qwk that provides a mapping to the weighting parameters B_wQk, wQk, or even possibly to the differential value dB_wQk, is contemplated, using an equation of the form wQk = g(qwk), with qwk ∈ {−(1 << (N−1)), ..., (1 << (N−1)) − 1}   (14)
  • Equation 17 (or the more generic equation 18) allows for increased flexibility in terms of representing the fidelity of the weighting parameters. Note also that the operation a · x could indicate not only a multiply, but also an integer or floating point divide (i.e., assuming that a is less than 1). Figure 8 is an example of the relationship between the representation and the true value of the weighting parameter.
  • In one embodiment, the parameters are determined by performing a pre-analysis of the frame or sequence, generating the distribution of the weighting parameters, and approximating the distribution with g(x) (e.g., using a polynomial approximation).
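As an illustration of such a non-uniform representation, the following hypothetical g(x)/g⁻¹(x) pair keeps small parameters exact and uses coarser steps beyond a threshold; the threshold and step size are invented for the example, not taken from the patent:

```python
T, A = 8, 2  # fine-step range and coarse step size (illustrative values)

def g(x):
    """Non-uniform mapping: exact representation for small parameters,
    step size A beyond the threshold T."""
    s = -1 if x < 0 else 1
    m = abs(x)
    return x if m <= T else s * (T + (m - T) // A)

def g_inv(q):
    """Decoder-side inverse of g (lossy beyond the threshold)."""
    s = -1 if q < 0 else 1
    m = abs(q)
    return q if m <= T else s * (T + (m - T) * A)
```

Values inside [−T, T] round-trip exactly, while larger values are represented with fewer codes at the cost of precision, matching the finer-fidelity-near-zero behavior described above.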
  • Figure 12 illustrates the transform process of the weighting parameters for bi-prediction with controlled fidelity. Referring to Figure 12, the process is the same as in Figure 10, with the exception that each of the transformed weighting parameters output from transform 1001 is not input directly into variable length encoder 1002 along with the offset. Instead, they are input into functions, the outputs of which are input to variable length encoder 1002. More specifically, transformed weighting parameter B_wQ0 is input into function g0(x) to produce x0. Similarly, transformed weighting parameter B_wQ1 is input into function g1(x) to produce x1 (1202). The offset is also subjected to function g0(x) to produce x2 (1203).
  • The outputs of decoder 1003, which are x0, x1, and x2, are input into inverse functions g0⁻¹(x) (1204), g1⁻¹(x) (1205), and g0⁻¹(x) (1206), which produce the transformed weighting parameters and the offset. Thereafter, these are handled in the same manner as shown in Figure 10.
  • Figure 6 is a block diagram of one embodiment of an encoder.
  • the encoder first divides each incoming video frame into rectangular arrays of pixels, referred to as macroblocks. For each macroblock, the encoder then chooses whether to use intra-frame or inter-frame coding.
  • Intra-frame coding uses only the information contained in the current video frame and produces a compressed result referred to as an I-frame.
  • Inter-frame coding can use information from one or more other frames, occurring before or after the current frame. Compressed results that use only data from previous frames are called P-frames, while those that use data from both before and after the current frame are called B-frames.
  • video 601 is input into the encoder.
  • frames of video are input into DCT 603.
  • DCT 603 performs a 2D discrete cosine transform (DCT) to create DCT coefficients.
  • the coefficients are quantized by quantizer 604.
  • the quantization performed by quantizer 604 is weighted by a scaler.
  • the quantizer scaler parameter QP takes values from 1 to 31. The QP value can be modified at both the picture and macroblock levels.
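The QP-driven scaling can be illustrated with a simplified scalar quantizer. The linear step size of 2·QP below is an assumed model for illustration only; real codecs use more elaborate quantization and reconstruction rules.

```python
def quantize(coeff, qp):
    """Quantize a DCT coefficient with a step size controlled by QP (1..31)."""
    assert 1 <= qp <= 31
    step = 2 * qp              # larger QP -> coarser quantization (assumed model)
    return int(coeff / step)

def dequantize(level, qp):
    """Inverse-quantize a level back to an approximate coefficient."""
    return level * 2 * qp
```

Doubling QP roughly halves the number of levels used for a given coefficient range, which is how the scale parameter trades quality for bitrate.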
  • VLC 605 which generates bitstream 620.
  • VLC 605 performs entropy coding by using Huffman coding or arithmetic coding.
  • reordering may be performed in which quantized DCT coefficients are zigzag scanned so that a 2D array of coefficients is converted into a ID array of coefficients in a manner well-known in the art. This may be followed by run length encoding in which the array of reordered quantized coefficients corresponding to each block is encoded to better represent zero coefficients.
  • each nonzero coefficient is encoded as a triplet (last, run, level), where "last" indicates whether this is the final nonzero coefficient in the block, "run" signals the number of preceding zero coefficients, and "level" indicates the coefficient's sign and magnitude.
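The reorder and run-length steps above can be sketched as follows. A 4x4 block is assumed purely for brevity (the scan table for an 8x8 block is analogous), and all names are illustrative.

```python
# Zigzag scan order for a 4x4 block: low-frequency coefficients first.
ZIGZAG_4x4 = [(0,0),(0,1),(1,0),(2,0),(1,1),(0,2),(0,3),(1,2),
              (2,1),(3,0),(3,1),(2,2),(1,3),(2,3),(3,2),(3,3)]

def zigzag(block):
    """Convert a 2D coefficient array into a 1D array."""
    return [block[r][c] for r, c in ZIGZAG_4x4]

def run_level(coeffs):
    """Emit (last, run, level) triplets for the nonzero coefficients."""
    nonzero = [(i, v) for i, v in enumerate(coeffs) if v != 0]
    triplets, run_start = [], 0
    for k, (i, v) in enumerate(nonzero):
        last = 1 if k == len(nonzero) - 1 else 0
        triplets.append((last, i - run_start, v))  # run = zeros skipped
        run_start = i + 1
    return triplets
```

Because quantized high-frequency coefficients are usually zero, the zigzag scan groups the zeros into long runs that the triplets represent compactly.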
  • a copy of the frames may be saved for use as a reference frame. This is particularly the case for I and P frames.
  • the quantized coefficients output from quantizer 604 are inverse quantized by inverse quantizer 606.
  • An inverse DCT transform is applied to the inverse quantized coefficients using IDCT 607.
  • the resulting frame data is added to a motion-compensated prediction from motion compensation (MC) unit 609 in the case of a P frame, and then the resulting frame is filtered using loop filter 612 and stored in frame buffer 611 for use as a reference frame. In the case of I frames, the data output from IDCT 607 is not added to a motion compensation prediction from MC unit 609; it is filtered using loop filter 612 and stored in frame buffer 611.
  • the P frame is coded with inter-prediction from a previous I or P frame, which is commonly referred to in the art as the reference frame. In this case, the inter-prediction is performed by motion estimation (ME) unit 610 and motion compensation unit 609.
  • Using the reference frame from frame store 611 and the input video 601, motion estimation unit 610 searches for the location of a region in the reference frame that best matches the current macroblock in the current frame. As discussed above, this includes not only determining a displacement, but also determining weighting parameters (WP) and an offset. The displacement between the current macroblock and the compensation region in the reference frame, along with the weighting parameters and the offset, is called the motion vector.
  • the motion vectors from motion estimation unit 610 are sent to motion compensation unit 609. At motion compensation unit 609, the prediction is subtracted from the current macroblock to produce a residue macroblock using subtractor 602. The residue is then encoded using DCT 603, quantizer 604, and VLC 605 as described above.
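The search described above can be sketched as an exhaustive block-matching loop with a weight and offset applied to each candidate region. This is an illustrative sketch only: the function names, the SAD matching criterion, and the full-search strategy are assumptions; a practical encoder uses much faster search algorithms.

```python
def sad(block, ref, w=1.0, offset=0.0):
    """Sum of absolute differences against a weighted, offset prediction."""
    return sum(abs(b - (w * r + offset)) for b, r in zip(block, ref))

def full_search(cur, ref, block, pos, radius, w=1.0, offset=0.0):
    """Find the displacement in ref minimizing weighted-prediction SAD.

    cur/ref: 2D pixel arrays; block: (height, width); pos: (y, x) of the
    current macroblock; radius: search range in pixels.
    """
    bh, bw = block
    y0, x0 = pos
    target = [cur[y0 + i][x0 + j] for i in range(bh) for j in range(bw)]
    best = (None, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y and y + bh <= len(ref) and 0 <= x and x + bw <= len(ref[0]):
                cand = [ref[y + i][x + j] for i in range(bh) for j in range(bw)]
                cost = sad(target, cand, w, offset)
                if cost < best[1]:
                    best = ((dy, dx), cost)
    return best  # ((dy, dx), matching cost)
```

In the scheme of this document, the weight w and offset would themselves be estimated and transmitted alongside the displacement, rather than fixed as above.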
  • Motion estimation unit 610 outputs the weighting parameters to VLC 605 for variable length encoding.
  • the output of VLC 605 is bitstream 620.
  • Figure 7 is a block diagram of one embodiment of a decoder. Referring to
  • bitstream 701 is received by variable length decoder 702 which performs variable length decoding.
  • the output of variable length decoding is sent to inverse quantizer 703 which performs an inverse quantization operation that is the opposite of the quantization performed by quantizer 604.
  • the output of inverse quantizer 703 comprises coefficients that are inverse DCT transformed by IDCT 704 to produce image data. In the case of I frames, the output of IDCT 704 is simply filtered by loop filter 721, stored in frame buffer 722, and eventually output as output 760. In the case of P frames, the image data output from IDCT 704 is added to the prediction from motion compensation unit 710 using adder 705.
  • Motion compensation unit 710 uses the output from variable length decoder 702, which includes the weighting parameters discussed above, as well as reference frames from frame buffer 722.
  • the resulting image data output from adder 705 is filtered using loop filter 721 and stored in frame buffer 722 for eventual output as part of output 760.
  • Figure 13 is a block diagram of an exemplary computer system that may perform one or more of the operations described herein.
  • computer system 1300 may comprise an exemplary client or server computer system.
  • Computer system 1300 comprises a communication mechanism or bus 1311 for communicating information, and a processor 1312 coupled with bus 1311 for processing information.
  • Processor 1312 includes, but is not limited to, a microprocessor such as, for example, Pentium™, PowerPC™, Alpha™, etc.
  • System 1300 further comprises a random access memory (RAM), or other dynamic storage device 1304 (referred to as main memory) coupled to bus 1311 for storing information and instructions to be executed by processor 1312.
  • Main memory 1304 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 1312.
  • Computer system 1300 also comprises a read only memory (ROM) and/or other static storage device 1306 coupled to bus 1311 for storing static information and instructions for processor 1312, and a data storage device 1307, such as a magnetic disk or optical disk and its corresponding disk drive.
  • Data storage device 1307 is coupled to bus 1311 for storing information and instructions.
  • Computer system 1300 may further be coupled to a display device 1321, such as a cathode ray tube (CRT) or liquid crystal display (LCD), coupled to bus 1311 for displaying information to a computer user.
  • An alphanumeric input device 1322 may also be coupled to bus 1311 for communicating information and command selections to processor 1312.
  • Another device that may be coupled to bus 1311 is cursor control 1323, such as a mouse, trackball, trackpad, stylus, or cursor direction keys, for communicating direction information and command selections to processor 1312, and for controlling cursor movement on display 1321.
  • Another device that may be coupled to bus 1311 is hard copy device 1324, which may be used for marking information on a medium such as paper, film, or similar types of media.
  • Another device that may be coupled to bus 1311 is a wired/wireless communication capability 1325 for communicating with a phone or handheld palm device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
PCT/US2006/020633 2005-05-26 2006-05-25 Method and apparatus for coding motion and prediction weighting parameters Ceased WO2006128072A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN200680016666XA CN101176350B (zh) 2005-05-26 2006-05-25 对运动和预测加权参数进行编码的方法和装置
EP06771414.7A EP1891810B1 (en) 2005-05-26 2006-05-25 Method and apparatus for coding motion and prediction weighting parameters
JP2008508011A JP2008541502A (ja) 2005-05-26 2006-05-25 動き及び予測の重み付けパラメータを符号化する方法及び装置

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US68526105P 2005-05-26 2005-05-26
US60/685,261 2005-05-26
US11/440,604 US8457203B2 (en) 2005-05-26 2006-05-24 Method and apparatus for coding motion and prediction weighting parameters

Publications (2)

Publication Number Publication Date
WO2006128072A2 true WO2006128072A2 (en) 2006-11-30
WO2006128072A3 WO2006128072A3 (en) 2007-03-01

Family

ID=37462886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/020633 Ceased WO2006128072A2 (en) 2005-05-26 2006-05-25 Method and apparatus for coding motion and prediction weighting parameters

Country Status (5)

Country Link
US (1) US8457203B2 (en)
EP (1) EP1891810B1 (en)
JP (2) JP2008541502A (en)
CN (1) CN101176350B (en)
WO (1) WO2006128072A2 (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2444992A (en) * 2006-12-21 2008-06-25 Tandberg Television Asa Video encoding using picture division and weighting by luminance difference data
WO2009084340A1 (ja) * 2007-12-28 2009-07-09 Sharp Kabushiki Kaisha 動画像符号化装置、および、動画像復号装置
JP2013500661A (ja) * 2009-07-30 2013-01-07 トムソン ライセンシング 画像シーケンスを表す符号化されたデータのストリームを復号する方法および画像シーケンスを符号化する方法
WO2013057783A1 (ja) * 2011-10-17 2013-04-25 株式会社東芝 符号化方法及び復号方法
JP2014131360A (ja) * 2014-04-07 2014-07-10 Toshiba Corp 符号化方法、符号化装置及びプログラム
US8995526B2 (en) 2009-07-09 2015-03-31 Qualcomm Incorporated Different weights for uni-directional prediction and bi-directional prediction in video coding
JPWO2013057783A1 (ja) * 2011-10-17 2015-04-02 株式会社東芝 符号化方法及び復号方法
JP2015119499A (ja) * 2015-02-16 2015-06-25 株式会社東芝 復号方法、復号装置及びプログラム
US9161057B2 (en) 2009-07-09 2015-10-13 Qualcomm Incorporated Non-zero rounding and prediction mode selection techniques in video encoding
US9232223B2 (en) 2009-02-02 2016-01-05 Thomson Licensing Method for decoding a stream representative of a sequence of pictures, method for coding a sequence of pictures and coded data structure
JP2016096567A (ja) * 2015-12-22 2016-05-26 株式会社東芝 復号方法、復号装置及びプログラム
US9538176B2 (en) 2008-08-08 2017-01-03 Dolby Laboratories Licensing Corporation Pre-processing for bitdepth and color format scalable video coding
JP2017099016A (ja) * 2017-01-30 2017-06-01 株式会社東芝 電子機器、復号方法及びプログラム
JP2017121070A (ja) * 2017-02-23 2017-07-06 株式会社東芝 電子機器、復号方法及びプログラム
RU2638756C2 (ru) * 2016-05-13 2017-12-15 Кабусики Кайся Тосиба Устройство кодирования, устройство декодирования, способ кодирования и способ декодирования
JP2018042266A (ja) * 2017-10-20 2018-03-15 株式会社東芝 電子機器、符号化方法及びプログラム
JP2019009792A (ja) * 2018-08-22 2019-01-17 株式会社東芝 符号化方法、復号方法及び符号化データ
US10257516B2 (en) 2012-06-27 2019-04-09 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for coding efficiency
JP2020058073A (ja) * 2020-01-07 2020-04-09 株式会社東芝 符号化方法、復号方法及び符号化データ
JP2020129848A (ja) * 2020-05-29 2020-08-27 株式会社東芝 符号化データのデータ構造、記憶装置、送信装置および符号化方法
WO2022175625A1 (fr) * 2021-02-19 2022-08-25 Orange Prédiction pondérée d'image, codage et décodage d'image utilisant une telle prédiction pondérée
US12506872B2 (en) 2024-08-19 2025-12-23 Kabushiki Kaisha Toshiba Encoding method that encodes a first denominator for a luma weighting factor, transfer device, and decoding method

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007094792A1 (en) * 2006-02-17 2007-08-23 Thomson Licensing Localized weighted prediction handling video data brightness variations
KR100856411B1 (ko) * 2006-12-01 2008-09-04 삼성전자주식회사 조도 보상 방법 및 그 장치와 그 방법을 기록한 컴퓨터로 읽을 수 있는 기록매체
JP2008219100A (ja) * 2007-02-28 2008-09-18 Oki Electric Ind Co Ltd 予測画像生成装置、方法及びプログラム、並びに、画像符号化装置、方法及びプログラム
KR101408698B1 (ko) 2007-07-31 2014-06-18 삼성전자주식회사 가중치 예측을 이용한 영상 부호화, 복호화 방법 및 장치
US8254469B2 (en) * 2008-05-07 2012-08-28 Kiu Sha Management Liability Company Error concealment for frame loss in multiple description coding
US8711930B2 (en) * 2009-07-09 2014-04-29 Qualcomm Incorporated Non-zero rounding and prediction mode selection techniques in video encoding
EP2302933A1 (en) * 2009-09-17 2011-03-30 Mitsubishi Electric R&D Centre Europe B.V. Weighted motion compensation of video
US20110075724A1 (en) * 2009-09-29 2011-03-31 Qualcomm Incorporated Encoding parameters with unit sum
US9161058B2 (en) * 2010-03-27 2015-10-13 Texas Instruments Incorporated Method and system for detecting global brightness change for weighted prediction in video encoding
JP5393573B2 (ja) * 2010-04-08 2014-01-22 株式会社Nttドコモ 動画像予測符号化装置、動画像予測復号装置、動画像予測符号化方法、動画像予測復号方法、動画像予測符号化プログラム、及び動画像予測復号プログラム
JP5325157B2 (ja) * 2010-04-09 2013-10-23 株式会社エヌ・ティ・ティ・ドコモ 動画像符号化装置、動画像復号装置、動画像符号化方法、動画像復号方法、動画像符号化プログラム、及び、動画像復号プログラム
US8971400B2 (en) * 2010-04-14 2015-03-03 Mediatek Inc. Method for performing hybrid multihypothesis prediction during video coding of a coding unit, and associated apparatus
US9118929B2 (en) 2010-04-14 2015-08-25 Mediatek Inc. Method for performing hybrid multihypothesis prediction during video coding of a coding unit, and associated apparatus
US8908755B2 (en) * 2010-07-16 2014-12-09 Sony Corporation Multi-parameter motion for efficient prediction in video compression
JP5298140B2 (ja) * 2011-01-12 2013-09-25 株式会社エヌ・ティ・ティ・ドコモ 画像予測符号化装置、画像予測符号化方法、画像予測符号化プログラム、画像予測復号装置、画像予測復号方法、及び画像予測復号プログラム
WO2012122426A1 (en) * 2011-03-10 2012-09-13 Dolby Laboratories Licensing Corporation Reference processing for bitdepth and color format scalable video coding
CN104054338B (zh) * 2011-03-10 2019-04-05 杜比实验室特许公司 位深和颜色可伸缩视频编码
JPWO2012172668A1 (ja) * 2011-06-15 2015-02-23 株式会社東芝 動画像符号化方法及び装置並びに動画復号化方法及び装置
WO2012172668A1 (ja) * 2011-06-15 2012-12-20 株式会社 東芝 動画像符号化方法及び装置並びに動画復号化方法及び装置
KR20120140592A (ko) 2011-06-21 2012-12-31 한국전자통신연구원 움직임 보상의 계산 복잡도 감소 및 부호화 효율을 증가시키는 방법 및 장치
WO2012177052A2 (ko) 2011-06-21 2012-12-27 한국전자통신연구원 인터 예측 방법 및 그 장치
KR20140034292A (ko) * 2011-07-01 2014-03-19 모토로라 모빌리티 엘엘씨 움직임 벡터 예측 설계 간소화
PH12014500632B1 (en) 2011-09-29 2019-01-18 Sharp Kk Image decoding apparatus, image decoding method and image encoding apparatus
WO2013047811A1 (ja) * 2011-09-29 2013-04-04 シャープ株式会社 画像復号装置、画像復号方法および画像符号化装置
CN104041041B (zh) 2011-11-04 2017-09-01 谷歌技术控股有限责任公司 用于非均匀运动向量栅格的运动向量缩放
JP5485969B2 (ja) * 2011-11-07 2014-05-07 株式会社Nttドコモ 動画像予測符号化装置、動画像予測符号化方法、動画像予測符号化プログラム、動画像予測復号装置、動画像予測復号方法及び動画像予測復号プログラム
US9756353B2 (en) 2012-01-09 2017-09-05 Dolby Laboratories Licensing Corporation Hybrid reference picture reconstruction method for single and multiple layered video coding systems
US9420302B2 (en) 2012-01-24 2016-08-16 Dolby Laboratories Licensing Corporation Weighted multi-band cross color channel predictor
US9172970B1 (en) 2012-05-29 2015-10-27 Google Inc. Inter frame candidate selection for a video encoder
US11317101B2 (en) 2012-06-12 2022-04-26 Google Inc. Inter frame candidate selection for a video encoder
US9716892B2 (en) 2012-07-02 2017-07-25 Qualcomm Incorporated Video parameter set including session negotiation information
US9906786B2 (en) * 2012-09-07 2018-02-27 Qualcomm Incorporated Weighted prediction mode for scalable video coding
US9503746B2 (en) 2012-10-08 2016-11-22 Google Inc. Determine reference motion vectors
US9485515B2 (en) 2013-08-23 2016-11-01 Google Inc. Video coding using reference motion vectors
US9467700B2 (en) 2013-04-08 2016-10-11 Qualcomm Incorporated Non-entropy encoded representation format
CN104488271B (zh) * 2013-07-26 2019-05-07 北京大学深圳研究生院 一种基于p帧的多假设运动补偿方法
CN115118971B (zh) * 2016-05-13 2025-06-17 交互数字Vc控股公司 用于视频编码的通用式多假设预测的系统及方法
CN109804627B (zh) * 2016-08-11 2023-07-25 Lx 半导体科技有限公司 图像编码/解码方法和设备
CN106358041B (zh) 2016-08-30 2019-05-10 北京奇艺世纪科技有限公司 一种帧间预测编码方法及装置
CN106230578B (zh) * 2016-09-08 2019-09-27 哈尔滨工程大学 一种基于加权处理的三维Lorenz映射控制的二进制安全算术编码方法
CN106851389A (zh) * 2017-02-24 2017-06-13 深圳市安拓浦科技有限公司 一种使用wifi传输tv信号的设备
US20180332298A1 (en) * 2017-05-10 2018-11-15 Futurewei Technologies, Inc. Bidirectional Prediction In Video Compression
US10778978B2 (en) * 2017-08-21 2020-09-15 Qualcomm Incorporated System and method of cross-component dynamic range adjustment (CC-DRA) in video coding
US11153602B2 (en) 2018-01-24 2021-10-19 Vid Scale, Inc. Generalized bi-prediction for video coding with reduced coding complexity
US20190246114A1 (en) * 2018-02-02 2019-08-08 Apple Inc. Techniques of multi-hypothesis motion compensation
US11924440B2 (en) * 2018-02-05 2024-03-05 Apple Inc. Techniques of multi-hypothesis motion compensation
US20190285977A1 (en) * 2018-03-16 2019-09-19 Qingdao Hisense Laser Display Co., Ltd. Laser projection apparatus
EP3554080A1 (en) * 2018-04-13 2019-10-16 InterDigital VC Holdings, Inc. Methods and devices for picture encoding and decoding
WO2019229683A1 (en) * 2018-05-31 2019-12-05 Beijing Bytedance Network Technology Co., Ltd. Concept of interweaved prediction
WO2020058955A1 (en) 2018-09-23 2020-03-26 Beijing Bytedance Network Technology Co., Ltd. Multiple-hypothesis affine mode
CN111083485B (zh) * 2018-10-22 2024-08-02 北京字节跳动网络技术有限公司 仿射模式的运动信息的利用
CN117915081A (zh) 2019-01-02 2024-04-19 北京字节跳动网络技术有限公司 视频处理的方法
US11134246B2 (en) * 2019-01-02 2021-09-28 Shumaker & Sieffert, P.A. Weighted prediction for video coding
CN114731442B (zh) * 2019-09-16 2023-12-01 Lg电子株式会社 使用加权预测的图像编码/解码方法和装置以及发送比特流的方法

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5758257A (en) * 1994-11-29 1998-05-26 Herz; Frederick System and method for scheduling broadcast of and access to video programs and other data using customer profiles
WO2002065784A1 (en) * 2001-02-13 2002-08-22 Koninklijke Philips Electronics N.V. Motion information coding and decoding method
HK1037462A2 (en) * 2001-07-20 2002-01-04 Cityu Research Limited Image compression
JP2004007379A (ja) 2002-04-10 2004-01-08 Toshiba Corp 動画像符号化方法及び動画像復号化方法
CA2574047C (en) * 2002-01-18 2008-01-08 Kabushiki Kaisha Toshiba Video encoding method and apparatus and video decoding method and apparatus
US7003035B2 (en) * 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
CA2762936C (en) 2002-10-01 2015-04-14 Panasonic Corporation Picture coding apparatus, picture decoding apparatus and the methods
JP2006517364A (ja) * 2003-01-07 2006-07-20 トムソン ライセンシング マクロブロック・パーティションのインター/イントラ混在ビデオ符号化
JP2007525072A (ja) 2003-06-25 2007-08-30 トムソン ライセンシング 置換されたフレーム差を使用する重み付き予測推定の方法と装置
US8731054B2 (en) * 2004-05-04 2014-05-20 Qualcomm Incorporated Method and apparatus for weighted prediction in predictive frames
KR100654436B1 (ko) * 2004-07-07 2006-12-06 삼성전자주식회사 비디오 코딩 방법과 디코딩 방법, 및 비디오 인코더와디코더
EP1790168B1 (en) * 2004-09-16 2016-11-09 Thomson Licensing Video codec with weighted prediction utilizing local brightness variation
US8228994B2 (en) * 2005-05-20 2012-07-24 Microsoft Corporation Multi-view video coding based on temporal and view decomposition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAMIKURA ET AL.: "Global Brightness-Variation Compensation for Video Coding", IEEE TRANS ON CSVT, vol. 8, December 1998 (1998-12-01), pages 988 - 1000

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2444992A (en) * 2006-12-21 2008-06-25 Tandberg Television Asa Video encoding using picture division and weighting by luminance difference data
WO2009084340A1 (ja) * 2007-12-28 2009-07-09 Sharp Kabushiki Kaisha 動画像符号化装置、および、動画像復号装置
US9538176B2 (en) 2008-08-08 2017-01-03 Dolby Laboratories Licensing Corporation Pre-processing for bitdepth and color format scalable video coding
US9232223B2 (en) 2009-02-02 2016-01-05 Thomson Licensing Method for decoding a stream representative of a sequence of pictures, method for coding a sequence of pictures and coded data structure
US9161057B2 (en) 2009-07-09 2015-10-13 Qualcomm Incorporated Non-zero rounding and prediction mode selection techniques in video encoding
US9609357B2 (en) 2009-07-09 2017-03-28 Qualcomm Incorporated Non-zero rounding and prediction mode selection techniques in video encoding
US8995526B2 (en) 2009-07-09 2015-03-31 Qualcomm Incorporated Different weights for uni-directional prediction and bi-directional prediction in video coding
US9008178B2 (en) 2009-07-30 2015-04-14 Thomson Licensing Method for decoding a stream of coded data representative of a sequence of images and method for coding a sequence of images
JP2013500661A (ja) * 2009-07-30 2013-01-07 トムソン ライセンシング 画像シーケンスを表す符号化されたデータのストリームを復号する方法および画像シーケンスを符号化する方法
US9826247B2 (en) 2011-10-17 2017-11-21 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for efficient coding
RU2681359C1 (ru) * 2011-10-17 2019-03-06 Кабусики Кайся Тосиба Устройство кодирования, устройство декодирования, способ кодирования и способ декодирования
JPWO2013057783A1 (ja) * 2011-10-17 2015-04-02 株式会社東芝 符号化方法及び復号方法
US11153593B2 (en) 2011-10-17 2021-10-19 Kabushiki Kaisha Toshiba Decoding method, encoding method, and electronic apparatus for decoding/coding
US9521422B2 (en) 2011-10-17 2016-12-13 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for efficient coding
US10602173B2 (en) 2011-10-17 2020-03-24 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for efficient coding
WO2013057783A1 (ja) * 2011-10-17 2013-04-25 株式会社東芝 符号化方法及び復号方法
US11140405B2 (en) 2011-10-17 2021-10-05 Kabushiki Kaisha Toshiba Decoding method, encoding method, and transmission apparatus for efficient coding
US10271061B2 (en) 2011-10-17 2019-04-23 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for efficient coding
US11039159B2 (en) 2011-10-17 2021-06-15 Kabushiki Kaisha Toshiba Encoding method and decoding method for efficient coding
RU2681379C1 (ru) * 2011-10-17 2019-03-06 Кабусики Кайся Тосиба Устройство кодирования, устройство декодирования, способ кодирования и способ декодирования
AU2011379259B2 (en) * 2011-10-17 2015-07-16 Kabushiki Kaisha Toshiba Encoding method and decoding method
US11800111B2 (en) 2012-06-27 2023-10-24 Kabushiki Kaisha Toshiba Encoding method that encodes a first denominator for a luma weighting factor, transfer device, and decoding method
US11363270B2 (en) 2012-06-27 2022-06-14 Kabushiki Kaisha Toshiba Decoding method, encoding method, and transfer device for coding efficiency
US12088810B2 (en) 2012-06-27 2024-09-10 Kabushiki Kaisha Toshiba Encoding method that encodes a first denominator for a luma weighting factor, transfer device, and decoding method
US10257516B2 (en) 2012-06-27 2019-04-09 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for coding efficiency
US11202075B2 (en) 2012-06-27 2021-12-14 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for coding efficiency
US10277900B2 (en) 2012-06-27 2019-04-30 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for coding efficiency
US10609376B2 (en) 2012-06-27 2020-03-31 Kabushiki Kaisha Toshiba Encoding device, decoding device, encoding method, and decoding method for coding efficiency
JP2014131360A (ja) * 2014-04-07 2014-07-10 Toshiba Corp 符号化方法、符号化装置及びプログラム
JP2015119499A (ja) * 2015-02-16 2015-06-25 株式会社東芝 復号方法、復号装置及びプログラム
JP2016096567A (ja) * 2015-12-22 2016-05-26 株式会社東芝 復号方法、復号装置及びプログラム
RU2638756C2 (ru) * 2016-05-13 2017-12-15 Кабусики Кайся Тосиба Устройство кодирования, устройство декодирования, способ кодирования и способ декодирования
JP2017099016A (ja) * 2017-01-30 2017-06-01 株式会社東芝 電子機器、復号方法及びプログラム
JP2017121070A (ja) * 2017-02-23 2017-07-06 株式会社東芝 電子機器、復号方法及びプログラム
JP2018042266A (ja) * 2017-10-20 2018-03-15 株式会社東芝 電子機器、符号化方法及びプログラム
JP2019009792A (ja) * 2018-08-22 2019-01-17 株式会社東芝 符号化方法、復号方法及び符号化データ
JP2020058073A (ja) * 2020-01-07 2020-04-09 株式会社東芝 符号化方法、復号方法及び符号化データ
JP7000498B2 (ja) 2020-05-29 2022-01-19 株式会社東芝 記憶装置、送信装置および符号化方法
JP2020129848A (ja) * 2020-05-29 2020-08-27 株式会社東芝 符号化データのデータ構造、記憶装置、送信装置および符号化方法
WO2022175625A1 (fr) * 2021-02-19 2022-08-25 Orange Prédiction pondérée d'image, codage et décodage d'image utilisant une telle prédiction pondérée
FR3120174A1 (fr) * 2021-02-19 2022-08-26 Orange Prédiction pondérée d’image, codage et décodage d’image utilisant une telle prédiction pondérée
US12506872B2 (en) 2024-08-19 2025-12-23 Kabushiki Kaisha Toshiba Encoding method that encodes a first denominator for a luma weighting factor, transfer device, and decoding method

Also Published As

Publication number Publication date
JP5086422B2 (ja) 2012-11-28
WO2006128072A3 (en) 2007-03-01
JP2011091840A (ja) 2011-05-06
JP2008541502A (ja) 2008-11-20
CN101176350A (zh) 2008-05-07
EP1891810A2 (en) 2008-02-27
EP1891810B1 (en) 2020-03-18
CN101176350B (zh) 2011-02-23
US8457203B2 (en) 2013-06-04
US20060268166A1 (en) 2006-11-30

Similar Documents

Publication Publication Date Title
EP1891810B1 (en) Method and apparatus for coding motion and prediction weighting parameters
US8208564B2 (en) Method and apparatus for video encoding and decoding using adaptive interpolation
US8831087B2 (en) Efficient prediction mode selection
US8855202B2 (en) Flexible range reduction
KR101377883B1 (ko) 비디오 인코딩에서 넌-제로 라운딩 및 예측 모드 선택 기법들
KR100866293B1 (ko) 예측 프레임에서의 가중 예측을 위한 방법 및 장치
US8995526B2 (en) Different weights for uni-directional prediction and bi-directional prediction in video coding
CA2703775C (en) Method and apparatus for selecting a coding mode
EP1999958B1 (en) Method of reducing computations in intra-prediction mode decision processes in a digital video encoder
US10735746B2 (en) Method and apparatus for motion compensation prediction
US7577200B2 (en) Extended range variable length coding/decoding of differential motion vector information
WO2007115325A2 (en) Apparatus and method of enhanced frame interpolation in video compression
EP2105030A2 (en) Spatial sparsity induced temporal prediction for video compression
Schwarz et al. The emerging JVT/H. 26L video coding standard
US20120008687A1 (en) Video coding using vector quantized deblocking filters
JP7594364B2 (ja) 符号化装置、復号装置、及びプログラム
WO2022146215A1 (en) Temporal filter
Ismaeil Computation-distortion optimized DCT-based video coding
HK1103900A (en) Method and apparatus for weighted prediction in predictive frames

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200680016666.X

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2008508011

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2006771414

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU