WO2012043882A1 - System for nested entropy encoding - Google Patents

System for nested entropy encoding

Info

Publication number
WO2012043882A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vectors
motion vector
candidate
subset
candidate set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2011/073149
Other languages
English (en)
French (fr)
Inventor
Yeping Su
Christopher A. Segall
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Priority to JP2013514449A priority Critical patent/JP2013543286A/ja
Publication of WO2012043882A1 publication Critical patent/WO2012043882A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Definitions

  • the present invention relates to a method for trimming a candidate set of motion vectors.
  • the present invention also relates to a system for decoding received data.
  • Modern video transmission and display systems, particularly those that present high-definition content, require significant data compression in order to produce a visually acceptable motion picture, because transmission media simply cannot transmit an uncompressed sequence of video frames at a rate fast enough to appear as continuous motion to the human eye.
  • the compression technique used should not unduly sacrifice image quality by discarding too much frame data.
  • video compression and encoding standards such as MPEG and H.264 take advantage of temporal redundancy in the sequence of video frames.
  • adjacent frames typically show the same objects or features, which may move slightly from one frame to another due either to the movement of the object in the scene being shot (producing local motion in a frame), the movement of the camera shooting the scene (producing global motion), or both.
  • Video compression standards employ motion estimation to define regions in an image, which may correspond to objects, and associate with those regions a motion vector that describes the inter-frame movement of the content in each region so as to avoid redundant encoding and transmission of objects or patterns that appear in more than one sequential frame, despite appearing at slightly different locations in sequential frames.
  • Motion vectors may be represented by a translational model or many other models that approximate the motion of a real video camera, such as rotation, translation, or zoom. Accordingly, motion estimation is the process of calculating and encoding motion vectors as a substitute for duplicating the encoding of similar information in sequential frames.
  • although motion vectors may relate to the whole image, more often they relate to small regions of the image, such as rectangular blocks, arbitrary shapes, boundaries of objects, or even individual pixels.
  • One of the popular methods is block-matching, in which the current image is subdivided into rectangular blocks of pixels, such as 4x4 pixels, 4x8 pixels, 8x8 pixels, 16x16 pixels, etc., and a motion vector (or displacement vector) is estimated for each block by searching for the closest-matching block in the reference image, within a pre-defined search region of a subsequent frame.
  • motion vectors improve coding efficiency for any particular block of an image by permitting a block to be encoded only in terms of a motion vector pointing to a corresponding block in another frame, and a "residual" or differential between the target and reference blocks.
  • the goal is therefore to determine a motion vector for a block in a way that minimizes the differential that needs to be encoded.
  • numerous variations of block matching exist, differing in the definition of the size and placement of blocks, the method of searching, the criterion for matching blocks in the current and reference frame, and several other aspects.
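As a rough illustration of the block-matching estimation described above (block size, search range, and the SAD criterion are illustrative choices, not requirements of the description), a minimal sketch might look like this:

```python
import numpy as np

def block_match(current, reference, bx, by, block=8, search=16):
    """Estimate a motion vector for the block whose top-left corner is (bx, by) in
    `current` by exhaustively searching a +/-`search` pixel window in `reference`
    and minimizing the sum of absolute differences (SAD)."""
    target = current[by:by + block, bx:bx + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                continue
            candidate = reference[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - candidate).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    # The residual (target minus the matched reference block) is what is actually
    # encoded alongside the motion vector.
    return best_mv, best_sad
```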
  • in motion vector competition (MVC), the coding of motion vectors can exploit redundancies in situations where motion vectors between sequential frames do not change drastically, by identifying an optimal predictor, from a limited set of previously-encoded candidates, so as to minimize the bit length of the differential.
  • the predictor set usually contains both spatial motion vector neighbors and temporally co-located motion vectors, and possibly spatiotemporal vectors.
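The selection step at the heart of motion vector competition can be sketched as follows; the bit-cost estimate is a crude stand-in (an exponential-Golomb-like guess) rather than a codec's real rate model, and the candidate values are made up for the example:

```python
def differential_bits(mv, predictor):
    """Rough cost, in bits, of coding the differential between mv and a predictor;
    each component of magnitude m is charged about 2*bitlength(m) + 1 bits."""
    return sum(2 * abs(a - b).bit_length() + 1 for a, b in zip(mv, predictor))

def choose_predictor(mv, candidates):
    """Return (index, predictor) of the candidate that minimizes the differential cost."""
    best = min(range(len(candidates)), key=lambda i: differential_bits(mv, candidates[i]))
    return best, candidates[best]

# Example: a candidate set in the spirit of {Va, Vx, Vy, Vz}; the last candidate
# differs from the selected motion vector in only one component, so it wins.
candidates = [(5, 3), (-2, 4), (-2, 4), (1, 0)]
print(choose_predictor((1, 1), candidates))   # -> (3, (1, 0))
```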
  • a preferred embodiment is a method for decoding data, said method comprising:
  • Another preferred embodiment is a method for trimming a candidate set of motion vectors used to at least one of encode and decode a selected one of said candidate set, said method comprising:
  • Another preferred embodiment is a decoding system for decoding received data, said system comprising:
  • a syntax modeler for associating a sequence of first symbols with selective ones of a plurality of syntax elements in said data, where said syntax elements are used to index a plurality of VLC tables;
  • FIGS. 1A and 1B generally illustrate motion vector competition.
  • FIG. 2 shows an exemplary system for encoding and decoding motion vectors.
  • FIG. 3 shows a nested entropy encoding structure.
  • FIG. 4 shows a system using the nested entropy encoding structure depicted in FIG. 3.
  • FIG. 5A shows an exemplary encoder capable of trimming a candidate set of motion vectors.
  • FIG. 5B shows an exemplary method of trimming a candidate set of motion vectors used by the encoder of FIG. 5A.
  • FIG. 6 generally illustrates an alternate embodiment of encoding a temporally co-located motion vector in a candidate set of motion vectors.
  • This motion vector may be encoded with reference to a candidate set of motion vectors Va, Vx, Vy, and Vz.
  • FIG. 1A also shows the blocks A', X', Y', and Z' that the respective motion vectors would point to if used when encoding the candidate block.
  • motion vector Vz would be selected to minimize the code length of the differential Vd, which in that instance would only require a value of "1" in a single component (down) of the vector. All other differential motion vectors either would require encoding two components or would have a larger value for a single component.
  • each block may represent a single pixel, and many more motion vectors could be included in the candidate set.
  • all motion vectors previously calculated in the current frame could be included in the candidate set, as well as any motion vectors calculated for preceding frames.
  • the candidate set may include a desired number of arbitrary motion vectors useful to capture large and sudden motions in the scene.
  • the selected motion vector Vz will need to be encoded.
  • One straightforward approach is for an encoder 10 to assign a value to each candidate motion vector in a table 14 of symbols, which, assuming a variable-length entropy encoding method such as Huffman or arithmetic encoding, might look something like:
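The table itself is not reproduced in this excerpt. Purely as an illustration, and consistent with the "1110" symbol referred to a few paragraphs below, a four-candidate table might assign unary-style codes like this (the mapping is an assumption, not the patent's actual table):

```python
# Hypothetical table 14/16: candidate motion vector -> variable-length symbol.
# Shorter codes would go to the statistically more frequent candidates.
vlc_table = {
    "Va": "0",
    "Vx": "10",
    "Vy": "110",
    "Vz": "1110",
}
```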
  • the encoder and decoder will preferably collect statistics as the bitstream is encoded and decoded and rearrange the assignments of symbols to the motion vector candidates, in the respective tables 14 and 16, so that at any given time the motion vector having the highest frequency receives the shortest symbol, etc.
  • This process is generally referred to as entropy coding, and will usually result in significant, lossless compression of the bitstream.
  • the encoder 10 and the decoder 12 use the same methodology to construct and update the tables 14 and 16, respectively, initialized from the beginning of the bitstream, so that for every symbol the table 14 used to encode that symbol is identical to the table 16 used to decode it.
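A minimal sketch of that shared, adaptive table behaviour, assuming both sides start from the same initial order and update after every coded symbol (a real codec's update schedule and tie-breaking may differ):

```python
class AdaptiveVLC:
    """Keep encoder and decoder tables identical by sorting candidates by observed
    frequency; the most frequent candidate always gets the shortest code.
    For brevity this sketch assumes at most four candidates."""
    CODES = ["0", "10", "110", "1110"]

    def __init__(self, candidates):
        self.order = list(candidates)              # current candidate-to-code order
        self.counts = {c: 0 for c in candidates}

    def code_for(self, candidate):
        return self.CODES[self.order.index(candidate)]

    def candidate_for(self, code):
        return self.order[self.CODES.index(code)]

    def update(self, candidate):
        # Encoder and decoder both call this after every symbol, so the tables stay in sync.
        self.counts[candidate] += 1
        self.order.sort(key=lambda c: -self.counts[c])
```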
  • the system shown in FIG. 2 can result in significant overhead when signaling which predictor is chosen from the set of candidate motion vectors. This is particularly true if the number of predictors is large. However, the more predictors used, the more efficiency is gained when encoding the differential motion vector. In order to further reduce the overhead of signaling which predictor is chosen, additional techniques may be employed.
  • the set of candidate motion vector predictors may be trimmed to eliminate duplicate vectors.
  • two motion vectors are duplicate vectors when the two vectors have the same horizontal value, vertical value, and reference index.
  • the term duplicate vector is equivalent to duplicate motion vector, identical vector, or identical motion vector.
  • the vectors Vx and Vy are identical; hence one of the motion vectors can be trimmed, and as a result the largest symbol 1110 in the table above can be eliminated.
  • knowing the size of the trimmed motion predictor set means that the last bit of the last symbol in the trimmed set can be omitted; e.g. this symbol may simply be encoded as 11, given that this bit sequence distinguishes it from all the previous symbols in the table, and the decoder knows from the size of the trimmed set that there are no further symbols.
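The two ideas above, removing exact duplicates and then dropping the final bit of the last code because the decoder already knows the trimmed set size, can be sketched as follows (illustrative only):

```python
def trim_duplicates(candidates):
    """Keep the first occurrence of each (horizontal, vertical, reference_index) triple."""
    seen, trimmed = set(), []
    for mv in candidates:
        if mv not in seen:
            seen.add(mv)
            trimmed.append(mv)
    return trimmed

def truncated_unary(index, set_size):
    """Unary code for `index`, except that the last candidate's terminating bit is
    dropped: with the set size known, the all-ones prefix is already unambiguous."""
    if index == set_size - 1:
        return "1" * index          # e.g. '11' instead of '110' when set_size == 3
    return "1" * index + "0"
```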
  • coding efficiency gains could theoretically be achieved by signaling a selected one of a group of ordered candidate sets. This gain in coding efficiency could work not only in tandem with techniques such as motion vector trimming and using truncated unary codes, but actually as a substitute for those techniques, i.e. preserving spatial and temporal independence when parsing the bitstream by not trimming duplicate candidate motion vectors and not truncating the highest-bit-length symbol.
  • an encoder or a decoder may utilize a nested entropy encoding structure where one of a plurality of coded symbols 18 is assigned to each of a plurality of entropy-coded candidate sets of motion vectors, shown as separate VLC tables 20. It should be understood that any particular one of the VLC tables 20 may include a motion vector set that differs from that of another VLC table 20, meaning that a particular motion vector that appears in one VLC table 20 does not need to appear in all VLC tables 20.
  • the encoder may signal one of the symbols 18 that corresponds to that one of the VLC tables 20 (candidate sets) for which the signaled motion vector has the highest frequency and therefore the smallest code length. Coded symbols 18 identifying a respective candidate set can themselves be entropy-coded, if desired, or may alternatively be encoded with a fixed length code, or any other appropriate coding technique.
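One way to picture the nested structure (a simplified, static sketch; in the description the tables themselves are adaptive): each coded symbol 18 names a VLC table 20, and the encoder signals whichever table codes the selected motion vector most cheaply, followed by that table's symbol for the vector.

```python
def encode_nested(selected_mv, vlc_tables):
    """vlc_tables: list of dicts mapping a motion vector to its code string.
    Returns (table_index, mv_code) for the table in which the selected motion
    vector has the shortest code; the table index would itself be entropy-coded
    or fixed-length coded."""
    best = None
    for t, table in enumerate(vlc_tables):
        if selected_mv in table:            # a vector need not appear in every table
            code = table[selected_mv]
            if best is None or len(code) < len(best[1]):
                best = (t, code)
    return best

tables = [
    {(0, 1): "0", (2, 2): "10", (5, 3): "110"},
    {(5, 3): "0", (0, 1): "10"},
]
print(encode_nested((5, 3), tables))        # -> (1, '0'): table 1 codes (5, 3) most cheaply
```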
  • Implicit in the foregoing discussion is the assumption that there is some non-random distribution among the plurality of all possible candidate sets of motion vectors. If, for example, the respective individual candidate sets simply comprise all permutations of the symbols included in each, randomly distributed with respect to each other, there would be no reason to expect a net gain in coding efficiency, because the number of candidate sets of motion vectors needed to guarantee that a sufficient number of candidate motion vectors appear in a candidate set high enough in the table to benefit from a reduced code length would be too large. Essentially, whatever efficiency is gained in coding the selected one of the candidate motion vectors is lost in the overhead of coding the symbol associated with the particular candidate set.
  • the disclosed nested entropy encoding structure would be expected to further compress the bitstream only if some of the possible permutations of symbols in the candidate set are more likely than others, such that the higher-code-length candidate sets are not used as often as the lower-code-length candidate sets.
  • an encoder 10 may have access to syntax symbols from a syntax model 24 that defines a set of syntax elements in the encoded data to be used to differentiate multiple VLC tables of candidate sets of motion vectors, and therefore also defines a set of syntax elements used by the encoder and decoder to determine the VLC table with which to encode the selected ones of the candidate motion vectors with code symbols.
  • an encoder 10 (and hence a decoder 12) will include a learning agent that tries different combinations of syntax elements so as to intelligently maximize coding efficiency. Stated differently, the encoder 10 intelligently optimizes coding efficiency by iteratively choosing different combinations of the available syntax elements, measuring a change in coding efficiency following each chosen combination, and responding accordingly by replacing one or more syntax elements in the combination.
  • the encoder 10 may then use an applicable motion vector symbol for the selected motion vector for the current block from a VLC table 28a, 28b, 28c, 28d, etc., and encode the motion vector symbol in a bitstream to the decoder 12.
  • the encoder 10 also updates the order of the motion vector symbols in the applicable VLC table.
  • any change in the frequency distribution of symbols in a table results in the symbols being reordered.
  • the encoder 10 (and the decoder 12) keeps track of the most frequently-occurring symbol in the un-reordered set and ensures that that symbol is at the top of the table, i.e. that it has the smallest code length. Note that, in this example, because the syntax symbol is determined solely by the syntax of previously-encoded data, the encoder need not encode the syntax symbol along with the motion vector symbol, so long as the decoder 12 uses the same syntax model to determine the particular VLC table 30a, 30b, 30c, and 30d, from which to extract the received motion vector symbol. In other words, when the encoder 10 uses the syntax of the previously-encoded data to differentiate the VLC tables, updating the order of symbols in those tables in the process, a very high degree of coding efficiency can be achieved.
  • When the decoder 12 receives a coded bitstream from the encoder 10, the decoder parses the bitstream to determine the relevant VLC table for a received symbol, using a syntax model 26 if available, to decode the received symbols and identify the selected motion vector from the candidate set. The decoder also updates the respective VLC tables in the same manner as does the encoder 10.
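A self-contained sketch of this context-driven variant, assuming (hypothetically) that the syntax symbol is derived from the flags of the left and top neighbours of the current block, so the table choice itself is never written to the bitstream:

```python
class ContextCodedMV:
    """One adaptive code table per context; the context is derived from already-coded
    syntax, so only the motion-vector symbol is transmitted. At most four candidates
    are assumed for brevity."""
    CODES = ["0", "10", "110", "1110"]

    def __init__(self, num_contexts, candidates):
        self.order = [list(candidates) for _ in range(num_contexts)]
        self.counts = [{c: 0 for c in candidates} for _ in range(num_contexts)]

    @staticmethod
    def context(left_flag, top_flag):
        return (left_flag << 1) | top_flag      # hypothetical 4-context syntax model

    def encode(self, left_flag, top_flag, selected):
        ctx = self.context(left_flag, top_flag)
        code = self.CODES[self.order[ctx].index(selected)]
        self._update(ctx, selected)             # decoder performs the identical update
        return code

    def decode(self, left_flag, top_flag, code):
        ctx = self.context(left_flag, top_flag)
        selected = self.order[ctx][self.CODES.index(code)]
        self._update(ctx, selected)
        return selected

    def _update(self, ctx, selected):
        self.counts[ctx][selected] += 1
        self.order[ctx].sort(key=lambda c: -self.counts[ctx][c])
```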
  • the motion vector predictor set may contain candidate motion vectors spatially predictive of a selected motion vector (i.e. candidates in the same frame as the current block), candidate motion vectors temporally predictive of a selected motion vector (i.e. candidates at the co-located block in the frame preceding the current block), and candidate motion vectors spatiotemporally predictive of a selected motion vector (i.e. candidates in the frame preceding the current block spatially offset from the co-located block).
  • the disclosed nested entropy encoding structure permits a decoder to parse a bitstream without trimming candidate motion vectors or truncating code symbols, thereby preserving spatial and temporal independence in the parsing process, and preserving error resilience while at the same time achieving significant coding efficiencies.
  • the nested entropy encoding structure can be used in tandem with the techniques of trimming candidate motion vectors or truncating code symbols, while at least partially preserving error resilience.
  • an encoder 10 may include a candidate motion vector set construction module 40 that retrieves from one or more buffers 28 the full set of candidate motion vectors applicable to a current block being encoded.
  • a candidate motion vector set trimming module 42 then selectively trims the set of candidate motion vectors according to predefined rules, by applying a syntax model 24 to the set of candidate motion vectors, prior to encoding a selected motion vector with an encoding module 44, which in turn selects a symbol based on the trimmed set of candidates.
  • One potential predefined rule may prevent the candidate motion vector set module 42 from trimming motion vector predictors derived from previously reconstructed/ transmitted frames.
  • the two motion vector predictors are both included in the trimmed set.
  • Another potential predefined rule may prevent the candidate motion vector set module 42 from trimming motion vector predictors with different reference indices.
  • the two motion vector predictors are both included in the trimmed set.
  • two motion vectors that have the same reference index value but not the same horizontal value and/or vertical value are both included in the trimmed set.
  • if two motion vectors that have the same reference index value, horizontal value, and vertical value are in the set, one is removed by the candidate motion vector set module 42.
  • a predefined rule may prevent the candidate motion vector set trimming module 42 from trimming motion vector predictors derived from regions that are located in different slices, so as to preserve spatial independence.
  • a predefined rule may prevent the candidate motion vector set trimming module 42 from trimming motion vector predictors derived from regions that are located in different entropy slices, where an entropy slice is a unit of the bit-stream that may be parsed without reference to other data in the current frame.
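Taken together, the rules above amount to a predicate like the following (a sketch of one possible rule set; which rules are active is configurable in the description): a candidate is trimmed only when it exactly duplicates an earlier candidate that comes from the same frame, uses the same reference index, and lies in the same slice and entropy slice.

```python
def may_trim(mv, earlier, same_slice, same_entropy_slice):
    """mv and earlier are (dx, dy, ref_idx, is_temporal) tuples describing a candidate
    and an earlier candidate it might duplicate. Returns True only if removing mv
    cannot break spatial or temporal independence under the rules above."""
    dx, dy, ref, temporal = mv
    edx, edy, eref, etemporal = earlier
    if temporal or etemporal:             # never trim predictors from previously coded frames
        return False
    if ref != eref:                       # different reference indices: keep both
        return False
    if not (same_slice and same_entropy_slice):
        return False                      # predictors from other (entropy) slices stay
    return (dx, dy) == (edx, edy)         # only exact duplicates are candidates for trimming
```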
  • FIG. 5B shows a generalized technique for applying any one of a wide variety of trimming rule sets that are signaled using a novel flag.
  • an encoder 10 receives a candidate set of motion vector predictors from a buffer 28, for example.
  • a flag is signaled by the encoder (or received by the decoder) that is used at decision step 53 to indicate whether trimming is applied, and optionally a trimming rule set as well, which may be used to define which vectors will be trimmed. If the flag indicates that no trimming is to occur, the technique proceeds to step 60 and encodes the selected motion vector using the full set of candidate motion vectors.
  • the subset of duplicate motion vectors is identified in step 54.
  • the subset of duplicate motion vectors can be considered in one embodiment as a maximized collection of motion vectors for which each member of the subset has an identical motion vector not included in the subset.
  • the subset may be seen as one that excludes from the subset any motion vector in the full set of candidates that has no duplicate and also excludes from the subset exactly one motion vector in a collection of identical duplicates.
  • selected candidate motion vectors may be selectively removed from the subset of duplicates. It is this step that enables spatial and/or temporal independence to be preserved.
  • candidate motion vectors can also be added to the subset of duplicate motion vectors, for reasons explained in more detail below.
  • the purpose of steps 54 and 56 is simply to apply a rule set to identify those motion vectors that will be trimmed from the full candidate set. Once this subset has been identified, the candidate motion vectors in this subset are trimmed at step 58, and the encoder then encodes the selected motion vector, from those remaining, based on the size of the trimmed set at step 60.
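The flow of FIG. 5B might be summarised as below; the step numbers follow the description, while the helper names and the rule hook are placeholders for whatever rule set is signaled:

```python
def find_duplicate_subset(candidates):
    """Step 54: collect every candidate that duplicates an earlier one."""
    seen, duplicates = set(), []
    for mv in candidates:
        if mv in seen:
            duplicates.append(mv)
        else:
            seen.add(mv)
    return duplicates

def trim_and_index(selected, candidates, trimming_flag, adjust_rule):
    """Schematic of FIG. 5B. `adjust_rule` (step 56) may remove vectors from, or add
    vectors to, the duplicate subset before it is trimmed; the function returns the
    index of the selected vector and the trimmed set size, on which the code length
    ultimately depends (step 60)."""
    if not trimming_flag:                               # step 53: no trimming signaled
        working_set = list(candidates)
    else:
        to_remove = adjust_rule(find_duplicate_subset(candidates), candidates)  # steps 54 and 56
        working_set = []
        for mv in candidates:                           # step 58: trim the identified subset
            if mv in to_remove:
                to_remove.remove(mv)                    # drop one occurrence per subset entry
            else:
                working_set.append(mv)
    return working_set.index(selected), len(working_set)
```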
  • a temporal_mvp_flag is used by the encoder to signal into the bitstream a true/false condition of whether the selected motion vector, from the candidate set, is a temporally-located motion vector.
  • the applicable rule set for this flag is intended to preserve temporal independence. If the temporal_mvp_flag indicates that a temporal predictor is selected by the encoder, the temporal predictor subset in the candidate set will not be trimmed, because to do so would create temporal dependency. However, the spatial predictor subset of the candidate set can be trimmed because the decoder 12 has foreknowledge of the size of the temporal predictor subset.
  • the candidate set can not only be trimmed of duplicates, but in some embodiments can also be trimmed of temporal predictors, resulting in a drastically diminished candidate set that needs to be encoded. It should also be recognized that, if an applicable rule set permits both temporal and spatial dependencies, the temporal_mvp_flag can be used, regardless of its value, to trim duplicates of the temporal or spatial subset signaled by the flag and to trim the entire subset not signaled by the flag.
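As one concrete reading of the temporal_mvp_flag rule set (an interpretation for illustration, not a normative algorithm): when the flag signals a temporal predictor, the temporal subset is left untouched and only spatial duplicates are trimmed; when it does not, and the rule set tolerates it, the whole temporal subset may be dropped along with the spatial duplicates.

```python
def temporal_mvp_trim_subset(temporal_mvp_flag, candidates):
    """candidates: list of (mv, is_temporal) pairs. Returns the subset to trim."""
    trim, seen_spatial = [], set()
    for mv, is_temporal in candidates:
        if is_temporal:
            if not temporal_mvp_flag:
                trim.append((mv, is_temporal))   # temporal predictor not selected: subset may go
            # if the flag is set, temporal predictors are kept to avoid a temporal dependency
        elif mv in seen_spatial:
            trim.append((mv, is_temporal))       # exact spatial duplicate
        else:
            seen_spatial.add(mv)
    return trim
```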
  • the inventors have determined that there is a reasonable correlation between the value of the disclosed temporal_mvp_flag and the value of a constrained_intra_pred_flag, associated with a frame, and often used in an encoded video bit stream. Specifically, the inventors have determined that there is a strong correlation between these two flags when the value of the constrained_intra_pred_flag is 1, and a substantially less strong correlation when the value of the constrained_intra_pred_flag is 0.
  • the encoder may optionally be configured to not encode the disclosed temporal_mvp_flag when the constrained_intra_pred_flag is set to 1 for the frame of a current pixel, such that the decoder will simply insert or assume an equal value for the temporal_mvp_flag in that instance, and to otherwise encode the temporal_mvp_flag.
  • the disclosed temporal_mvp_flag may simply be assigned a value equal to the constrained_intra_pred_flag, but preferably in this latter circumstance a value of 0 should be associated in the defined rule set with the result of simply trimming duplicate vectors in the candidate set.
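The conditional signalling described in the last two paragraphs might look like this (a sketch; the bit-writing interface is a stand-in, and the inferred value follows the "equal to constrained_intra_pred_flag" behaviour described above):

```python
def write_temporal_mvp_flag(bits, temporal_mvp_flag, constrained_intra_pred_flag):
    """Encoder side: when constrained_intra_pred_flag is 1 the flag is not coded,
    because the decoder will infer it; otherwise it is written explicitly."""
    if constrained_intra_pred_flag != 1:
        bits.append(temporal_mvp_flag)

def read_temporal_mvp_flag(bits, constrained_intra_pred_flag):
    """Decoder side: infer the flag (equal to constrained_intra_pred_flag, i.e. 1)
    when it was not coded; otherwise read it from the bitstream."""
    if constrained_intra_pred_flag == 1:
        return 1
    return bits.pop(0)
```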
  • the disclosed nested entropy encoding structure can be additionally applied to this temporal_mvp_flag syntax.
  • top and left neighboring flags are used to determine the predictor set template used in the entropy coding of temporal_mvp_flag. This may be beneficial if, as is the usual case, the encoder and decoder exclusively assign entropy symbols to coded values, and also where the temporal_mvp_flag may take on many values.
  • the predictor set template for the coding of the selected motion vector for the candidate set is made to depend on the temporal_mvp_flag of the current block.
  • another embodiment of the invention signals whether the motion vector predictor is equal to motion vectors derived from the current frame or motion vectors derived from a previously reconstructed/transmitted frame, as was previously described with respect to the temporal_mvp_flag.
  • the flag is sent indexed by the number of unique motion vector predictors derived from the current frame.
  • a predictor set template in this embodiment could distinguish all possible combinations of a first code value that reflects the combination of flags in the two blocks to the left and above the current block, e.g.
  • a context template in this embodiment could identify all possible combinations of a first code value that reflects whether the flags in the two blocks to the left and above the current block are identical or not (e.g. 00 and 11 entropy coded as 0, and 01 and 10 entropy coded as 10, for example), and a second code value reflective of the number of unique motion vectors in the candidate set.
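A sketch of that context template, assuming (as the passage suggests) that the first code value reflects whether the top and left neighbouring flags agree and the second reflects the number of unique motion vector predictors; the exact packaging of the two values is illustrative:

```python
def neighbour_flag_code(left_flag, top_flag):
    """First code value: agreeing neighbours (00 or 11) coded as '0';
    disagreeing neighbours (01 or 10) coded as '10'."""
    return "0" if left_flag == top_flag else "10"

def temporal_mvp_context(left_flag, top_flag, num_unique_predictors):
    # Second code value: the number of unique motion vector predictors derived
    # from the current frame; the pair identifies the context/template to use.
    return (neighbour_flag_code(left_flag, top_flag), num_unique_predictors)
```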
  • An encoding scheme may include a candidate set of motion vectors that includes a large number of temporally co-located motion vectors from each of a plurality of frames, such as the one illustrated in FIG. 6. This means that, to encode the blocks 64 of a current frame, the encoder may have to access one or more buffers that contain a history of all the selected motion vectors in each of the prior frames from which a candidate motion vector is extracted. This can require an extensive amount of memory. As an alternative, the smaller-sized blocks of pixels used in the encoding scheme, e.g. 2x2 blocks, may be grouped into larger blocks 62, where the motion vectors stored in the buffer, and later used as co-located motion vectors when encoding subsequent blocks, may instead be the average motion vector 66 of all the selected vectors in the respective group.
  • This trades memory requirements for coding efficiency, as the averaging procedure tends to produce a larger differential to be encoded whenever the co-located motion vector is selected. Having said that, the reduction in coding efficiency is not all that great, given that the averaged co-located vector will only be chosen if it is more efficient to use that vector than any of the alternatives in the candidate set.
  • a vector median operation or a component-wise median operation may be used, as can any other standard operation such as maximum, minimum, or a combination of maximum and minimum operations, commonly called a dilate, erode, open, or close operation.
  • the motion vector from a predefined location may be used.
  • the motion vector corresponding to the N-th smaller-sized block of pixels in a larger block may be stored in the buffer for later use as a co-located motion vector for the larger block, where N is an integer corresponding to the location of the smaller-sized block in the larger block in raster scan order.
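A sketch of the buffer-compression idea from the last few paragraphs (the grouping factor and the set of operations are illustrative): the per-small-block motion vectors inside each larger block are collapsed to a single stored vector using an average, a component-wise median, or simply the vector of the N-th small block in raster-scan order.

```python
import numpy as np

def compress_mv_buffer(mv_grid, group=2, op="average", n=0):
    """mv_grid: H x W x 2 array holding one motion vector per smaller block.
    Returns an (H/group) x (W/group) x 2 array with one stored vector per larger
    block. The rounding of the averaged vector is left unspecified here."""
    h, w, _ = mv_grid.shape
    out = np.zeros((h // group, w // group, 2), dtype=mv_grid.dtype)
    for gy in range(h // group):
        for gx in range(w // group):
            tile = mv_grid[gy * group:(gy + 1) * group,
                           gx * group:(gx + 1) * group].reshape(-1, 2)
            if op == "average":
                out[gy, gx] = tile.mean(axis=0)
            elif op == "median":
                out[gy, gx] = np.median(tile, axis=0)   # component-wise median
            else:                                       # predefined location, raster-scan order
                out[gy, gx] = tile[n]
    return out
```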
  • the operation used to group smaller-sized blocks of pixels into larger blocks may be signaled in a bit-stream from an encoder to a decoder.
  • the operation may be signaled in a sequence parameter set, or alternatively, the operation may be signaled in the picture parameter set, slice header, or for any defined group of pixels.
  • the operation can be determined from a level or profile identifier that is signaled in the bit-stream.
  • the number of smaller-sized blocks that are grouped into larger blocks may be signaled in a bit-stream from an encoder to a decoder.
  • said number may be signaled in the sequence parameter set, or alternatively the number may be signaled in the picture parameter set, slice header, or for any defined group of pixels.
  • the number may be determined from a level or profile identifier that is signaled in the bit-stream.
  • the number may be expressed as a number of rows of smaller-sized blocks and a number of columns of smaller-sized blocks.
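The description deliberately leaves the signalling syntax open. Purely as an illustration (the field names below are hypothetical and not drawn from any standard), the grouping operation and block counts could be carried together in a parameter-set-like structure:

```python
from dataclasses import dataclass

@dataclass
class MvCompressionParams:
    # Hypothetical parameter-set fields; the description only says these values "may be
    # signaled" in a sequence or picture parameter set, a slice header, for any defined
    # group of pixels, or derived from a level/profile identifier.
    grouping_op: str = "average"   # e.g. "average", "median", "max", "min", "nth"
    rows_per_group: int = 2        # rows of smaller-sized blocks per larger block
    cols_per_group: int = 2        # columns of smaller-sized blocks per larger block
```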
  • an encoder and/or a decoder may be used in any one of a number of hardware, firmware, or software implementations.
  • an encoder may be used in a set-top recorder, a server, desktop computer, etc.
  • a decoder may be implemented in a display device, a set-top cable box, a set-top recorder, a server, desktop computer, etc.
  • the various components of the disclosed encoder and decoder may access any available processing device and storage to perform the described techniques.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
PCT/JP2011/073149 2010-10-01 2011-09-30 System for nested entropy encoding Ceased WO2012043882A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013514449A JP2013543286A (ja) 2010-10-01 2011-09-30 ネスト型エントロピー符号化システム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/896,800 2010-10-01
US12/896,800 US10104391B2 (en) 2010-10-01 2010-10-01 System for nested entropy encoding

Publications (1)

Publication Number Publication Date
WO2012043882A1 true WO2012043882A1 (en) 2012-04-05

Family

ID=45889821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/073149 Ceased WO2012043882A1 (en) 2010-10-01 2011-09-30 System for nested entropy encoding

Country Status (3)

Country Link
US (6) US10104391B2 (en)
JP (2) JP2013543286A (en)
WO (1) WO2012043882A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104937939A (zh) * 2012-04-11 2015-09-23 摩托罗拉移动有限责任公司 用信号发送用于时间预测的时间运动矢量预测符(mvp)标志
CN106664420A (zh) * 2014-06-20 2017-05-10 高通股份有限公司 用于帧内块复制的块向量译码
US9979968B2 (en) 2011-01-12 2018-05-22 Canon Kabushiki Kaisha Method, a device, a medium for video decoding that includes adding and removing motion information predictors
JP2019036993A (ja) * 2011-05-27 2019-03-07 サン パテント トラスト 動画像復号化方法、および動画像復号化装置
US10536712B2 (en) 2011-04-12 2020-01-14 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US10595023B2 (en) 2011-05-27 2020-03-17 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
US10645413B2 (en) 2011-05-31 2020-05-05 Sun Patent Trust Derivation method and apparatuses with candidate motion vectors
US10887585B2 (en) 2011-06-30 2021-01-05 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
CN113228676A (zh) * 2018-11-29 2021-08-06 交互数字Vc控股公司 在合并列表中运动矢量预测量候选排序
US11553202B2 (en) 2011-08-03 2023-01-10 Sun Patent Trust Video encoding method, video encoding apparatus, video decoding method, video decoding apparatus, and video encoding/decoding apparatus
US11647208B2 (en) 2011-10-19 2023-05-09 Sun Patent Trust Picture coding method, picture coding apparatus, picture decoding method, and picture decoding apparatus

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101279573B1 (ko) * 2008-10-31 2013-06-27 에스케이텔레콤 주식회사 움직임 벡터 부호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치
US10104391B2 (en) 2010-10-01 2018-10-16 Dolby International Ab System for nested entropy encoding
US20120082228A1 (en) 2010-10-01 2012-04-05 Yeping Su Nested entropy encoding
WO2012102021A1 (ja) 2011-01-25 2012-08-02 パナソニック株式会社 動画像符号化方法および動画像復号化方法
GB2491589B (en) * 2011-06-06 2015-12-16 Canon Kk Method and device for encoding a sequence of images and method and device for decoding a sequence of image
EP2781098B1 (en) 2011-11-18 2019-02-06 Google Technology Holdings LLC An explicit way for signaling a collocated picture for high efficiency video coding (hevc)
US9185408B2 (en) 2011-11-18 2015-11-10 Google Technology Holdings LLC Efficient storage of motion information for high efficiency video coding
US9392235B2 (en) 2011-11-18 2016-07-12 Google Technology Holdings LLC Explicit way for signaling a collocated reference picture for video coding
EP2783510A1 (en) 2011-11-21 2014-10-01 Motorola Mobility LLC Implicit determination and combined implicit and explicit determination of collocated picture for temporal prediction
WO2013107028A1 (en) * 2012-01-19 2013-07-25 Mediatek Singapore Pte. Ltd. Methods and apparatuses of amvp simplification
US9549177B2 (en) * 2012-04-11 2017-01-17 Google Technology Holdings LLC Evaluation of signaling of collocated reference picture for temporal prediction
KR101347263B1 (ko) * 2012-06-11 2014-01-10 한국항공대학교산학협력단 움직임 벡터 예측 장치 및 그 예측 방법
US9319681B2 (en) 2012-07-18 2016-04-19 Google Technology Holdings LLC Signaling of temporal motion vector predictor (MVP) enable flag
FR3029055B1 (fr) * 2014-11-24 2017-01-13 Ateme Procede d'encodage d'image et equipement pour la mise en oeuvre du procede
SG11202013028PA (en) 2018-06-29 2021-01-28 Beijing Bytedance Network Technology Co Ltd Interaction between lut and amvp
WO2020003261A1 (en) 2018-06-29 2020-01-02 Beijing Bytedance Network Technology Co., Ltd. Selection from multiple luts
EP3794824A1 (en) 2018-06-29 2021-03-24 Beijing Bytedance Network Technology Co. Ltd. Conditions for updating luts
TWI728388B (zh) 2018-06-29 2021-05-21 大陸商北京字節跳動網絡技術有限公司 Lut中的運動候選的檢查順序
CN110662053B (zh) * 2018-06-29 2022-03-25 北京字节跳动网络技术有限公司 使用查找表的视频处理方法、装置和存储介质
WO2020003279A1 (en) * 2018-06-29 2020-01-02 Beijing Bytedance Network Technology Co., Ltd. Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks
WO2020003266A1 (en) * 2018-06-29 2020-01-02 Beijing Bytedance Network Technology Co., Ltd. Resetting of look up table per slice/tile/lcu row
TWI752331B (zh) 2018-06-29 2022-01-11 大陸商北京字節跳動網絡技術有限公司 當向Merge/AMVP添加HMVP候選時的部分/完全修剪
CN114845108B (zh) 2018-06-29 2025-08-12 抖音视界(北京)有限公司 查找表的更新:fifo、约束的fifo
WO2020008353A1 (en) 2018-07-02 2020-01-09 Beijing Bytedance Network Technology Co., Ltd. Usage of luts
TW202021358A (zh) * 2018-07-14 2020-06-01 大陸商北京字節跳動網絡技術有限公司 用時間信息擴展基於查找表的運動向量預測
GB2590310B (en) 2018-09-12 2023-03-22 Beijing Bytedance Network Tech Co Ltd Conditions for starting checking HMVP candidates depend on total number minus K
JP7275286B2 (ja) 2019-01-10 2023-05-17 北京字節跳動網絡技術有限公司 Lut更新の起動
WO2020143824A1 (en) 2019-01-13 2020-07-16 Beijing Bytedance Network Technology Co., Ltd. Interaction between lut and shared merge list
WO2020147772A1 (en) 2019-01-16 2020-07-23 Beijing Bytedance Network Technology Co., Ltd. Motion candidates derivation
CN113615193B (zh) 2019-03-22 2024-06-25 北京字节跳动网络技术有限公司 Merge列表构建和其他工具之间的交互
EP4292283A1 (en) * 2021-04-26 2023-12-20 Huawei Technologies Co., Ltd. Parallel entropy coding
US12460522B2 (en) 2022-05-17 2025-11-04 Sc Asset Corporation Collet baffle, a tool incorporating same, and a system and method incorporating same, for perforating and fracking a wellbore not having initial ports or sliding sleeves

Family Cites Families (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5996728U (ja) 1982-12-21 1984-06-30 オムロン株式会社 光電スイツチ
GB2266023B (en) * 1992-03-31 1995-09-06 Sony Broadcast & Communication Motion dependent video signal processing
US6141446A (en) 1994-09-21 2000-10-31 Ricoh Company, Ltd. Compression and decompression system with reversible wavelets and lossy reconstruction
EP0731614B1 (en) * 1995-03-10 2002-02-06 Kabushiki Kaisha Toshiba Video coding/decoding apparatus
US7020671B1 (en) 2000-03-21 2006-03-28 Hitachi America, Ltd. Implementation of an inverse discrete cosine transform using single instruction multiple data instructions
JP3597107B2 (ja) * 2000-03-29 2004-12-02 沖電気工業株式会社 動きベクトル検出回路及び動きベクトル検出方法
US6958715B2 (en) * 2001-02-20 2005-10-25 Texas Instruments Incorporated Variable length decoding system and method
US7929610B2 (en) 2001-03-26 2011-04-19 Sharp Kabushiki Kaisha Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding
WO2003098939A1 (en) * 2002-05-22 2003-11-27 Matsushita Electric Industrial Co., Ltd. Moving image encoding method, moving image decoding method, and data recording medium
US20040001546A1 (en) * 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
EP1569460B1 (en) * 2002-11-25 2013-05-15 Panasonic Corporation Motion compensation method, image encoding method, and image decoding method
US7321625B2 (en) * 2002-12-13 2008-01-22 Ntt Docomo, Inc. Wavelet based multiresolution video representation with spatially scalable motion vectors
JP4536325B2 (ja) 2003-02-04 2010-09-01 ソニー株式会社 画像処理装置および方法、記録媒体、並びにプログラム
KR100510136B1 (ko) * 2003-04-28 2005-08-26 삼성전자주식회사 참조 픽처 결정 방법, 그 움직임 보상 방법 및 그 장치
US6917310B2 (en) * 2003-06-25 2005-07-12 Lsi Logic Corporation Video decoder and encoder transcoder to and from re-orderable format
US7599438B2 (en) * 2003-09-07 2009-10-06 Microsoft Corporation Motion vector block pattern coding and decoding
KR20050045746A (ko) * 2003-11-12 2005-05-17 삼성전자주식회사 계층 구조의 가변 블록 크기를 이용한 움직임 추정 방법및 장치
KR100586882B1 (ko) 2004-04-13 2006-06-08 삼성전자주식회사 모션 스케일러빌리티를 지원하는 코딩 방법 및 장치
US7953152B1 (en) 2004-06-28 2011-05-31 Google Inc. Video compression and encoding method
KR100654436B1 (ko) 2004-07-07 2006-12-06 삼성전자주식회사 비디오 코딩 방법과 디코딩 방법, 및 비디오 인코더와디코더
KR100679026B1 (ko) 2004-07-15 2007-02-05 삼성전자주식회사 비디오 코딩 및 디코딩을 위한 시간적 분해 및 역 시간적분해 방법과, 비디오 인코더 및 디코더
US7937271B2 (en) 2004-09-17 2011-05-03 Digital Rise Technology Co., Ltd. Audio decoding using variable-length codebook application ranges
KR100664929B1 (ko) 2004-10-21 2007-01-04 삼성전자주식회사 다 계층 기반의 비디오 코더에서 모션 벡터를 효율적으로압축하는 방법 및 장치
CN100469146C (zh) 2004-11-17 2009-03-11 展讯通信(上海)有限公司 视频图像运动补偿装置
KR100703746B1 (ko) 2005-01-21 2007-04-05 삼성전자주식회사 비동기 프레임을 효율적으로 예측하는 비디오 코딩 방법 및장치
WO2006096612A2 (en) 2005-03-04 2006-09-14 The Trustees Of Columbia University In The City Of New York System and method for motion estimation and mode decision for low-complexity h.264 decoder
US8913660B2 (en) * 2005-04-14 2014-12-16 Fastvdo, Llc Device and method for fast block-matching motion estimation in video encoders
US7199735B1 (en) 2005-08-25 2007-04-03 Mobilygen Corporation Method and apparatus for entropy coding
JP5061179B2 (ja) 2006-03-22 2012-10-31 韓國電子通信研究院 照明変化補償動き予測符号化および復号化方法とその装置
US20070268964A1 (en) 2006-05-22 2007-11-22 Microsoft Corporation Unit co-location-based motion estimation
US20080043832A1 (en) 2006-08-16 2008-02-21 Microsoft Corporation Techniques for variable resolution encoding and decoding of digital video
JP4592656B2 (ja) 2006-08-17 2010-12-01 富士通セミコンダクター株式会社 動き予測処理装置、画像符号化装置および画像復号化装置
KR101356735B1 (ko) 2007-01-03 2014-02-03 삼성전자주식회사 전역 움직임 벡터를 사용해서 움직임 벡터를 추정하기 위한방법, 장치, 인코더, 디코더 및 복호화 방법
JP2009055519A (ja) 2007-08-29 2009-03-12 Sony Corp 符号化処理装置、符号化処理方法、復号処理装置、及び、復号処理方法
JP2009159323A (ja) 2007-12-26 2009-07-16 Toshiba Corp 動画像符号化装置、動画像符号化方法及び動画像符号化プログラム
ES2812473T3 (es) 2008-03-19 2021-03-17 Nokia Technologies Oy Vector de movimiento combinado y predicción de índice de referencia para la codificación de vídeo
US20100027663A1 (en) 2008-07-29 2010-02-04 Qualcomm Incorporated Intellegent frame skipping in video coding based on similarity metric in compressed domain
KR20100027384A (ko) * 2008-09-02 2010-03-11 삼성전자주식회사 예측 모드 결정 방법 및 장치
KR101279573B1 (ko) * 2008-10-31 2013-06-27 에스케이텔레콤 주식회사 움직임 벡터 부호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치
US8675736B2 (en) 2009-05-14 2014-03-18 Qualcomm Incorporated Motion vector processing
US8891628B2 (en) * 2009-06-19 2014-11-18 France Telecom Methods for encoding and decoding a signal of images, corresponding encoding and decoding devices, signal and computer programs
US9036692B2 (en) 2010-01-18 2015-05-19 Mediatek Inc. Motion prediction method
PL2559166T3 (pl) 2010-04-13 2018-04-30 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Koder i dekoder dzielący interwał prawdopodobieństwa
KR20120016991A (ko) 2010-08-17 2012-02-27 오수미 인터 프리딕션 방법
US10104391B2 (en) 2010-10-01 2018-10-16 Dolby International Ab System for nested entropy encoding
US20120082228A1 (en) 2010-10-01 2012-04-05 Yeping Su Nested entropy encoding
US20120183047A1 (en) 2011-01-18 2012-07-19 Louis Joseph Kerofsky Video decoder with reduced dynamic range transform with inverse transform clipping
CN103748879B (zh) 2011-06-28 2018-03-06 Lg电子株式会社 设置运动矢量列表的方法及使用其的装置
US9992511B2 (en) 2011-10-21 2018-06-05 Nokia Technologies Oy Method for video coding and an apparatus
US9525861B2 (en) 2012-03-14 2016-12-20 Qualcomm Incorporated Disparity vector prediction in video coding
US9503720B2 (en) 2012-03-16 2016-11-22 Qualcomm Incorporated Motion vector coding and bi-prediction in HEVC and its extensions
JP6454468B2 (ja) 2013-12-26 2019-01-16 日東電工株式会社 延伸積層体の製造方法、該製造方法により得られる延伸積層体、該延伸積層体を用いた偏光膜の製造方法、および延伸装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10224800A (ja) * 1997-02-07 1998-08-21 Matsushita Electric Ind Co Ltd 動きベクトル符号化方法および復号化方法
JP2007532036A (ja) * 2003-09-07 2007-11-08 マイクロソフト コーポレーション インターレース・ビデオの進歩した双方向予測コーディング

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FRANK BOSSEN ET AL.: "Simplified motion vector coding method", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 2ND MEETING, 21 July 2010 (2010-07-21), GENEVA, CH *
JOEL JUNG ET AL.: "Competition-Based Scheme for Motion Vector Selection and Coding", ITU - TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6 VIDEO CODING EXPERTS GROUP (VCEG) 29TH MEETING, 17 July 2006 (2006-07-17), KLAGENFURT, AUSTRIA *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9979968B2 (en) 2011-01-12 2018-05-22 Canon Kabushiki Kaisha Method, a device, a medium for video decoding that includes adding and removing motion information predictors
US10165279B2 (en) 2011-01-12 2018-12-25 Canon Kabushiki Kaisha Video encoding and decoding with improved error resilience
US11146792B2 (en) 2011-01-12 2021-10-12 Canon Kabushiki Kaisha Video encoding and decoding with improved error resilience
US11012705B2 (en) 2011-04-12 2021-05-18 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US12238326B2 (en) 2011-04-12 2025-02-25 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US11917186B2 (en) 2011-04-12 2024-02-27 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US11356694B2 (en) 2011-04-12 2022-06-07 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US10536712B2 (en) 2011-04-12 2020-01-14 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US10609406B2 (en) 2011-04-12 2020-03-31 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US11895324B2 (en) 2011-05-27 2024-02-06 Sun Patent Trust Coding method and apparatus with candidate motion vectors
US11979582B2 (en) 2011-05-27 2024-05-07 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
US10708598B2 (en) 2011-05-27 2020-07-07 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
US10721474B2 (en) 2011-05-27 2020-07-21 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
US12375684B2 (en) 2011-05-27 2025-07-29 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
US12323616B2 (en) 2011-05-27 2025-06-03 Sun Patent Trust Coding method and apparatus with candidate motion vectors
JP2021083097A (ja) * 2011-05-27 2021-05-27 サン パテント トラスト 動画像復号化方法
JP2024113087A (ja) * 2011-05-27 2024-08-21 サン パテント トラスト ビットストリーム及びビットストリームを送信する方法
US11076170B2 (en) 2011-05-27 2021-07-27 Sun Patent Trust Coding method and apparatus with candidate motion vectors
JP7507407B2 (ja) 2011-05-27 2024-06-28 サン パテント トラスト 動画像符号化方法及び動画像符号化装置
US11115664B2 (en) 2011-05-27 2021-09-07 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
US10595023B2 (en) 2011-05-27 2020-03-17 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
JP7065354B2 (ja) 2011-05-27 2022-05-12 サン パテント トラスト 動画像復号化方法、および動画像復号化装置
JP2019036993A (ja) * 2011-05-27 2019-03-07 サン パテント トラスト 動画像復号化方法、および動画像復号化装置
JP2023041958A (ja) * 2011-05-27 2023-03-24 サン パテント トラスト 動画像符号化方法及び動画像符号化装置
JP7228851B2 (ja) 2011-05-27 2023-02-27 サン パテント トラスト 動画像復号化方法
US11570444B2 (en) 2011-05-27 2023-01-31 Sun Patent Trust Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus
US11575930B2 (en) 2011-05-27 2023-02-07 Sun Patent Trust Coding method and apparatus with candidate motion vectors
US10652573B2 (en) 2011-05-31 2020-05-12 Sun Patent Trust Video encoding method, video encoding device, video decoding method, video decoding device, and video encoding/decoding device
US11509928B2 (en) 2011-05-31 2022-11-22 Sun Patent Trust Derivation method and apparatuses with candidate motion vectors
US12348768B2 (en) 2011-05-31 2025-07-01 Sun Patent Trust Derivation method and apparatuses with candidate motion vectors
US10645413B2 (en) 2011-05-31 2020-05-05 Sun Patent Trust Derivation method and apparatuses with candidate motion vectors
US11917192B2 (en) 2011-05-31 2024-02-27 Sun Patent Trust Derivation method and apparatuses with candidate motion vectors
US11057639B2 (en) 2011-05-31 2021-07-06 Sun Patent Trust Derivation method and apparatuses with candidate motion vectors
US10887585B2 (en) 2011-06-30 2021-01-05 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US11979598B2 (en) 2011-08-03 2024-05-07 Sun Patent Trust Video encoding method, video encoding apparatus, video decoding method, video decoding apparatus, and video encoding/decoding apparatus
US11553202B2 (en) 2011-08-03 2023-01-10 Sun Patent Trust Video encoding method, video encoding apparatus, video decoding method, video decoding apparatus, and video encoding/decoding apparatus
US12120324B2 (en) 2011-10-19 2024-10-15 Sun Patent Trust Picture coding method, picture coding apparatus, picture decoding method, and picture decoding apparatus
US11647208B2 (en) 2011-10-19 2023-05-09 Sun Patent Trust Picture coding method, picture coding apparatus, picture decoding method, and picture decoding apparatus
CN104937939B (zh) * 2012-04-11 2018-10-23 谷歌技术控股有限责任公司 用于时间运动矢量预测符标志的编码器和解码器及其方法
CN104937939A (zh) * 2012-04-11 2015-09-23 摩托罗拉移动有限责任公司 用信号发送用于时间预测的时间运动矢量预测符(mvp)标志
CN106664420A (zh) * 2014-06-20 2017-05-10 高通股份有限公司 用于帧内块复制的块向量译码
CN113228676A (zh) * 2018-11-29 2021-08-06 交互数字Vc控股公司 在合并列表中运动矢量预测量候选排序

Also Published As

Publication number Publication date
US11659196B2 (en) 2023-05-23
JP2016040933A (ja) 2016-03-24
US12081789B2 (en) 2024-09-03
US20240373056A1 (en) 2024-11-07
US20200244985A1 (en) 2020-07-30
JP2013543286A (ja) 2013-11-28
US10104391B2 (en) 2018-10-16
US20120082229A1 (en) 2012-04-05
US20160037180A1 (en) 2016-02-04
US10587890B2 (en) 2020-03-10
US11032565B2 (en) 2021-06-08
US20210368197A1 (en) 2021-11-25
US20230379488A1 (en) 2023-11-23

Similar Documents

Publication Publication Date Title
US12081789B2 (en) System for nested entropy encoding
US12363300B2 (en) Nested entropy encoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11829432

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2013514449

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11829432

Country of ref document: EP

Kind code of ref document: A1