CN104349170B - Method for decoding a video signal - Google Patents

Method for decoding a video signal

Info

Publication number
CN104349170B
Authority
CN
China
Prior art keywords
candidate blocks
block
space
prediction
mer
Prior art date
Legal status
Active
Application number
CN201410571867.2A
Other languages
Chinese (zh)
Other versions
CN104349170A (en)
Inventor
李培根
权载哲
金柱英
Current Assignee
KT Corp
Original Assignee
KT Corp
Priority date
Filing date
Publication date
Priority claimed from KR1020120039500A (external priority: KR101197176B1)
Application filed by KT Corp
Publication of CN104349170A
Application granted
Publication of CN104349170B
Legal status: Active
Anticipated expiration


Abstract

The present invention relates to a method of deriving merge candidate blocks and an apparatus using the method. A picture decoding method includes: decoding motion estimation region (MER) related information; determining whether a prediction target block and a spatial merge candidate block are included in the same MER; and, when the prediction target block and the spatial merge candidate block are included in the same MER, determining the spatial merge candidate block to be an unavailable merge candidate block. Therefore, by performing the merge candidate derivation method in parallel, parallel processing can be achieved and the amount of computation and the implementation complexity can be reduced.

Description

Method for decoding a video signal
This application is a divisional application of the parent application with application No. 201280006137.7, filed on September 6, 2012, and entitled "Method of deriving merge candidate blocks and apparatus using the same".
Technical field
The present invention relates to a method of encoding and decoding video, and more particularly, to a method of deriving merge candidate blocks and an apparatus using the method.
Background technology
Recently, demand for video with high resolution and high quality, such as high definition (HD) video and ultra high definition (UHD) video, has been increasing in various application fields. As the resolution and quality of video become higher, the amount of video data increases relative to existing video. Therefore, when such video is transmitted over existing wired or wireless broadband networks or stored on existing storage media, transmission and storage costs increase. High-efficiency video compression techniques may be used to solve these problems caused by higher resolution and higher quality.
Video compression techniques include various techniques such as: an inter (picture) prediction technique for predicting pixel values included in the current picture from pictures before or after the current picture, an intra (picture) prediction technique for predicting pixel values included in the current picture by using pixel information within the current picture, and an entropy coding technique that assigns shorter codes to values with a higher frequency of occurrence and longer codes to values with a lower frequency of occurrence. By using such video compression techniques, video data can be compressed effectively to be transmitted or stored.
Summary of the invention
Technical problem
A first object of the present invention is to provide a method of deriving merge candidates using parallel processing.
A second object of the present invention is to provide an apparatus for performing a method of deriving merge candidates using parallel processing.
Technical solution
According to an aspect of the present invention for achieving the first object described above, a method of deriving merge candidate blocks is provided. The method may include decoding motion estimation region (MER) related information; determining whether a prediction target block and a spatial merge candidate block are included in the same MER; and, when the prediction target block and the spatial merge candidate block are included in the same MER, determining the spatial merge candidate block to be an unavailable merge candidate block so that it is not used as a merge candidate block. The method may further include: if the prediction target block and the spatial merge candidate block are included in the same MER, adaptively determining a spatial merge candidate block according to the size of the MER and the size of the prediction target block. If the size of the MER is 8×8 and the size of the prediction target block is 8×4 or 4×8, at least one of the spatial merge candidate blocks of the prediction target block may be replaced with a block including a point located outside the MER. The method may further include determining whether the spatial merge candidate block is included in an MER that has not yet been decoded. The method may further include: if the prediction target block and the spatial merge candidate block are included in the same MER, replacing the spatial merge candidate block with a block included in another MER. The replaced spatial merge candidate block may be a spatial merge candidate block that is adaptively replaced, according to the position of the spatial merge candidate block included in the same MER, so as to be included in an MER different from that of the prediction target block. The MER related information may be information on the size of the MER and may be transmitted in picture units. Determining whether the prediction target block and the spatial merge candidate block are included in the same MER may include determining whether the prediction target block and the spatial merge candidate block are included in the same MER according to a determination formula based on the position information of the prediction target block, the position information of the spatial merge candidate block, and the size information of the MER.
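For illustration only, the following C sketch outlines the core decision described above: a spatial merge candidate is treated as unavailable when it lies in the same MER as the prediction target block. This sketch is not part of the patent description; the function names, the integer-division MER test, and the MER-size representation are assumptions.

/* Illustrative sketch (not part of the patent text). MER size is assumed to
 * be a power of two, e.g. 8, 16, 32. */
#include <stdbool.h>

static bool in_same_mer(int xP, int yP, int xN, int yN, int mer_size)
{
    return (xP / mer_size == xN / mer_size) &&
           (yP / mer_size == yN / mer_size);
}

/* Returns true if the spatial candidate whose top-left sample is (xN, yN)
 * may be used for the prediction target block at (xP, yP). */
static bool spatial_candidate_available(int xP, int yP, int xN, int yN,
                                        int mer_size)
{
    if (in_same_mer(xP, yP, xN, yN, mer_size))
        return false; /* same MER: candidate is treated as unavailable */
    return true;
}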
According to another aspect of the present invention for achieving the second object described above, an image decoding apparatus is provided. The apparatus may include: an entropy decoding unit for decoding motion estimation region related information; and a prediction unit for determining whether a prediction target block and a spatial merge candidate block are included in the same MER and, if the prediction target block and the spatial merge candidate block are included in the same MER, determining the spatial merge candidate block to be an unavailable merge candidate block. The prediction unit may be a prediction unit that, when the prediction target block and the spatial merge candidate block are included in the same MER, adaptively determines a spatial merge candidate block according to the size of the MER and the size of the prediction target block. If the size of the MER is 8×8 and the size of the prediction target block is 8×4 or 4×8, the prediction unit may replace at least one of the spatial merge candidate blocks of the prediction target block with a block including a point located outside the MER. The prediction unit may determine whether the spatial merge candidate block is included in an MER that has not yet been decoded. The prediction unit may be a prediction unit that, when the prediction target block and the spatial merge candidate block are included in the same MER, replaces the spatial merge candidate block with a block included in another MER. The replaced spatial merge candidate block may be a spatial merge candidate block that is adaptively replaced, according to the position of the spatial merge candidate block included in the same MER, so as to be included in an MER different from that of the prediction target block. The MER related information may be information on the size of the MER and may be transmitted in picture units. The prediction unit may be a prediction unit that determines whether the prediction target block and the spatial merge candidate block are included in the same MER according to a determination formula based on the position information of the prediction target block, the position information of the spatial merge candidate block, and the size information of the MER.
Technical effects
According to the method of deriving merge candidate blocks and the apparatus using the method described in the exemplary embodiments of the present invention, parallel processing can be achieved by performing the merge candidate block derivation method in parallel, so that the amount of computation and the implementation complexity can be reduced.
Description of the drawings
Fig. 1 is a block diagram illustrating a video encoder according to an exemplary embodiment of the present invention.
Fig. 2 is a block diagram illustrating a video decoder according to another exemplary embodiment of the present invention.
Fig. 3 is a conceptual view illustrating candidate blocks for applying the merge mode and the skip mode according to an exemplary embodiment of the present invention.
Fig. 4 is a conceptual view illustrating a method of determining merge candidate blocks according to an exemplary embodiment of the present invention.
Fig. 5 is a conceptual view illustrating a method of determining merge candidate blocks according to the size of the MER according to an exemplary embodiment of the present invention.
Fig. 6 is a conceptual view illustrating a method of determining whether a spatial merge candidate block of a current block is available.
Fig. 7 is a flow chart illustrating a method of obtaining spatial merge candidate blocks in the merge mode according to an exemplary embodiment of the present invention.
Fig. 8 is a flow chart illustrating a method of inter prediction using the merge mode according to an exemplary embodiment of the present invention.
Detailed description
While various modifications and exemplary embodiments can be made, only specific exemplary embodiments are fully described herein with reference to the accompanying drawings. However, the present invention should not be construed as being limited only to the exemplary embodiments set forth herein, but should be understood to cover all modifications, equivalents, or alternatives falling within the scope and technical spirit of the present invention. Like reference numerals refer to like elements throughout the drawings.
It will be understood that, although the terms "first", "second", and the like may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the teachings of the present invention. The term "and/or" includes any and all combinations of a plurality of associated listed items.
It will be understood that when a feature or element is referred to as being "connected" or "coupled" to another feature or element, it can be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when a feature or element is referred to as being "directly connected" or "directly coupled" to another element, it will be understood that there are no intervening elements present.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments of the present invention. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be understood that the terms "comprises" or "comprising", when used herein, specify the presence of stated features, integers, steps, operations, elements, components, or any combination thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or any combination thereof.
The present invention will be described in detail below with reference to the accompanying drawings. Hereinafter, like reference numerals are used to refer to like parts throughout the drawings, and repeated descriptions of like parts are omitted.
Fig. 1 is a block diagram illustrating a video encoder according to an exemplary embodiment of the present invention.
Referring to Fig. 1, a video encoder 100 may include a picture partitioning module 110, an inter prediction module 120, an intra prediction module 125, a transform module 130, a quantization module 135, a rearrangement module 160, an entropy encoding module 165, a dequantization module 140, an inverse transform module 145, a filter module 150, and a memory 155.
Each module shown in Fig. 1 is shown independently in order to represent the different characteristic functions in the video encoder, and does not mean that each module is configured as a separate hardware or software component unit. That is, for convenience of description, each module is listed as a respective element; at least two of the modules may be combined into one element, or one module may be divided into a plurality of elements to perform functions. Embodiments in which modules are combined or divided are included in the scope of the claims of the present invention as long as they do not depart from the essence of the present invention.
In addition, some elements may not be indispensable elements for performing the essential functions of the present invention but may be merely optional elements for improving performance. The present invention may be implemented with only the elements essential to implementing the essence of the present invention, excluding elements used merely to improve performance, and a configuration including only the essential elements, excluding the optional elements used merely to improve performance, is also included in the scope of the claims of the present invention.
The picture partitioning module 110 may partition an input picture into at least one processing unit. Here, the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The picture partitioning module 110 may partition one picture into combinations of a plurality of coding units, prediction units, and transform units, and may encode the picture by selecting one combination of a coding unit, prediction units, and transform units based on a predetermined criterion (for example, a cost function).
For example, one picture may be partitioned into a plurality of coding units. In order to partition the coding units, a recursive tree structure such as a quad-tree structure may be used, and a coding unit, which is split into other coding units with the picture or the largest coding unit as a root, may be split to have as many child nodes as the number of split coding units. A coding unit that is no longer split according to a certain constraint becomes a leaf node. In other words, assuming that only square partitioning is available for one coding unit, one coding unit may be split into up to four different coding units.
Hereinafter, in exemplary embodiments of the present invention, the coding unit may refer not only to a unit for encoding but also to a unit for decoding.
A prediction unit is partitioned within one coding unit with a square or rectangular shape of the same size.
When generating a prediction unit for performing intra prediction based on a coding unit, if the coding unit is not a minimum coding unit, intra prediction may be performed without splitting the coding unit into a plurality of N×N prediction units.
The prediction module may include an inter prediction module 120 for performing inter prediction and an intra prediction module 125 for performing intra prediction. For a prediction unit, the prediction module may determine whether to perform inter prediction or intra prediction, and may determine specific information (for example, an intra prediction mode, a motion vector, a reference picture, and the like) according to each prediction method. Here, the processing unit on which prediction is performed and the processing unit for which the prediction method and its specific details are determined may be different. For example, the prediction method and the prediction mode may be determined in a prediction unit, and the prediction may be performed in a transform unit. A residual value (residual block) between the generated prediction block and the original block may be input to the transform module 130. In addition, the prediction mode information, the motion vector information, and the like used for prediction may be encoded in the entropy encoding module 165 together with the residual value to be transmitted to the decoder. When a specific encoding mode is used, the prediction block may not be generated by the prediction modules 120, 125, and the original block may instead be encoded as it is to be transmitted to the decoder.
The inter prediction module may perform prediction on a prediction unit based on information of at least one picture among pictures before or after the current picture. The inter prediction module may include a reference picture interpolation module, a motion prediction module, and a motion compensation module.
The reference picture interpolation module may be provided with reference picture information from the memory 155 and may generate pixel information in units of less than an integer pixel from the reference picture. In the case of luma pixels, a DCT-based 8-tap interpolation filter with varying filter coefficients may be used to generate pixel information in units of 1/4 pixel, smaller than an integer pixel. In the case of chroma signals, a DCT-based 4-tap interpolation filter with varying filter coefficients may be used to generate pixel information in units of 1/8 pixel, smaller than an integer pixel.
The motion prediction module may perform motion prediction based on the reference picture interpolated by the reference picture interpolation module. For a method of obtaining a motion vector, various methods such as FBMA (full search-based block matching algorithm), TSS (three step search), or NTS (new three-step search algorithm) may be used. The motion vector may have a motion vector value in units of 1/2 or 1/4 pixel based on the interpolated pixels. The motion prediction module may predict the current prediction unit by varying the motion prediction method. As the motion prediction method, various methods such as a skip mode, a merge mode, or an advanced motion vector prediction (AMVP) mode may be used.
According to an exemplary embodiment of the present invention, when performing inter prediction, a motion estimation region (MER) may be defined to perform the prediction in parallel. For example, when performing inter prediction using the merge mode or the skip mode, it may be determined whether the prediction target block and a spatial merge candidate block are included in the same MER, and when the prediction target block and the spatial merge candidate block are not included in the same MER, the spatial merge candidate block may be determined to be unavailable, or determined to be a merge candidate block, by determining whether the spatial merge candidate block is included in an MER that has not yet been decoded. Hereinafter, exemplary embodiments of the present invention describe the operation of the prediction unit when performing inter prediction.
The intra prediction module may generate a prediction unit based on information about reference pixels neighboring the current block, which are pixels within the current picture. If the neighboring block of the current prediction unit is a block on which inter prediction has been performed, so that a reference pixel is a pixel on which inter prediction has been performed, the reference pixels included in the block on which inter prediction has been performed may be replaced with reference pixels of a neighboring block on which intra prediction has been performed. In other words, when a reference pixel is not available, the unavailable reference pixel may be replaced with at least one of the available reference pixels.
Intra prediction may have directional prediction modes, which use reference pixel information according to a prediction direction, and non-directional modes, which do not use directional information when performing prediction. The mode for predicting information on luma samples and the mode for predicting information on chroma samples may be different. In addition, intra prediction mode information for luma samples or predicted luma signal information may be used to predict information on chroma samples.
When performing intra prediction, if the size of the prediction unit is the same as the size of the transform unit, intra prediction may be performed on the prediction unit based on the pixels on the left side, the pixel in the upper-left region, and the pixels in the upper region of the prediction unit. However, when performing intra prediction, if the size of the prediction unit and the size of the transform unit are different, intra prediction may be performed using reference pixels based on the transform unit. In addition, intra prediction using N×N partitioning may be used only for the minimum coding unit.
In the intra prediction method, a mode dependent intra smoothing (MDIS) filter may be applied to the reference pixels according to the prediction mode in order to generate the prediction block. The type of MDIS filter applied to the reference pixels may vary. In order to perform intra prediction, the intra prediction mode of the current prediction unit may be predicted from the intra prediction modes of the prediction units neighboring the current prediction unit. When predicting the prediction mode of the current prediction unit by using mode information predicted from a neighboring prediction unit, if the intra prediction modes of the current prediction unit and the neighboring prediction unit are the same, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are the same may be transmitted using predetermined flag information, and if the prediction modes of the current prediction unit and the neighboring prediction unit are different, the prediction mode information of the current block may be encoded by entropy encoding.
In addition, a residual block including residual value information, which is the difference between the prediction unit on which prediction is performed based on the prediction unit generated in the prediction modules 120, 125 and the original block of the prediction unit, may be generated. The generated residual block may be input to the transform module 130. The transform module 130 may transform the residual block, which includes the residual value information between the original block and the prediction unit generated in the prediction modules 120, 125, by using a transform method such as a discrete cosine transform (DCT) or a discrete sine transform (DST). Whether to apply DCT or DST to transform the residual block may be determined based on the intra prediction mode information of the prediction unit used to generate the residual block.
The quantization module 135 may quantize the values transformed to the frequency domain by the transform module 130. The quantization parameter may vary according to the importance of the image or the block. The values output by the quantization module 135 may be provided to the dequantization module 140 and the rearrangement module 160.
The rearrangement module 160 may rearrange the quantized coefficient values of the residual values.
The rearrangement module 160 may change the coefficients from the form of a two-dimensional block into the form of a one-dimensional vector through a coefficient scanning method. For example, in the rearrangement module 160, the coefficients may be scanned from the DC coefficient to coefficients in the high frequency domain using a diagonal scan pattern, to be rearranged into the form of a one-dimensional vector. According to the size of the transform unit and the intra prediction mode, a vertical scan pattern that scans the two-dimensional block-form coefficients in the column direction or a horizontal scan pattern that scans the two-dimensional block-form coefficients in the row direction may be used instead of the diagonal scan pattern. In other words, which scan pattern among the diagonal scan pattern, the vertical scan pattern, and the horizontal scan pattern is used may be determined according to the size of the transform unit and the intra prediction mode.
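As a rough illustration only, the following C sketch shows one way such a scan-pattern selection could be expressed. It is not part of the patent description; the transform-unit size threshold and the intra prediction mode ranges are assumptions chosen for illustration, not the normative rule.

/* Illustrative sketch: selecting a coefficient scan pattern from the
 * transform unit size and the intra prediction mode. The threshold and the
 * mode ranges are assumptions for illustration only. */
typedef enum { SCAN_DIAGONAL, SCAN_HORIZONTAL, SCAN_VERTICAL } ScanPattern;

static ScanPattern select_scan(int tu_size, int intra_mode, int is_intra)
{
    /* Assume mode-dependent scans apply only to small transform units. */
    if (is_intra && tu_size <= 8) {
        if (intra_mode >= 6 && intra_mode <= 14)   /* near-horizontal modes (assumed range) */
            return SCAN_VERTICAL;
        if (intra_mode >= 22 && intra_mode <= 30)  /* near-vertical modes (assumed range) */
            return SCAN_HORIZONTAL;
    }
    return SCAN_DIAGONAL; /* default: DC coefficient toward high frequencies */
}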
The entropy encoding module 165 performs entropy encoding based on the values output from the rearrangement module 160. The entropy encoding may use various encoding methods such as, for example, exponential Golomb coding and context-adaptive binary arithmetic coding (CABAC).
The entropy encoding unit 165 may encode various information from the rearrangement module 160 and the prediction modules 120, 125, such as residual coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transmission unit information, motion vector information, reference picture information, interpolation information of the block, filtering information, MER information, and the like.
The entropy encoding unit 165 may perform entropy encoding on the coefficient values of the coding unit input from the rearrangement module 160 by using an entropy encoding method such as CABAC.
The dequantization module 140 and the inverse transform module 145 dequantize the values quantized by the quantization module 135 and inversely transform the values transformed by the transform module 130. The residual values generated by the dequantization module 140 and the inverse transform module 145 may be added to the prediction unit predicted through the motion estimation module, the motion compensation module, and the intra prediction module included in the prediction modules 120, 125, to generate a reconstructed block.
The filter module 150 may include at least one of a deblocking filter, an offset correction module, and an adaptive loop filter (ALF).
The deblocking filter may remove block distortion generated due to boundaries between blocks in the reconstructed picture. In order to determine whether to perform deblocking filtering, whether to apply the deblocking filter to the current block may be determined based on the pixels included in several columns or rows of the block. When applying the deblocking filter to the block, a strong filter or a weak filter may be applied according to the required deblocking filtering strength. Also, in applying the deblocking filter, when performing vertical filtering and horizontal filtering, the horizontal direction filtering and the vertical direction filtering may be processed in parallel.
The offset correction module may correct, in pixel units, the offset of the deblocking-filtered image relative to the original picture. In order to perform the offset correction for a specific picture, a method of dividing the pixels included in the image into a predetermined number of regions, determining a region on which the offset is to be performed, and applying the offset to the corresponding region, or a method of applying the offset in consideration of edge information of each pixel, may be used.
The adaptive loop filter (ALF) may perform filtering based on a comparison of the filtered reconstructed image and the original image. After dividing the pixels included in the image into predetermined groups and determining the filter to be applied to the respective group, the filtering may be applied differentially to each group with the respective filter. Information on whether to apply the ALF may be transmitted per coding unit (CU), and the size and coefficients of the ALF to be applied may be different for each block. The ALF may have various shapes, and therefore the number of coefficients in the filter may be different for each filter. The filtering-related information of the ALF (filter coefficient information, ALF on/off information, filter shape information, and the like) may be included and transmitted in a predetermined parameter set in the bitstream.
The memory 155 may store the reconstructed block or picture output from the filter module 150, and the stored reconstructed block or picture may be provided to the prediction modules 120, 125 when performing inter prediction.
Fig. 2 is a block diagram illustrating a video decoder according to another exemplary embodiment of the present invention.
Referring to Fig. 2, the video decoder may include an entropy decoding module 210, a rearrangement module 215, a dequantization module 220, an inverse transform module 225, prediction modules 230, 235, a filter module 240, and a memory 245.
When a video bitstream is input from the video encoder, the input bitstream may be decoded in an order opposite to the processing order in the video encoder.
The entropy decoding module 210 may perform entropy decoding in an order opposite to the order in which entropy encoding was performed in the entropy encoding module of the video encoder. Among the information decoded by the entropy decoding module 210, information for generating the prediction block may be provided to the prediction modules 230, 235, and the residual values entropy-decoded in the entropy decoding module may be input to the rearrangement module 215.
The entropy decoding module 210 may decode information related to the intra prediction and the inter prediction performed by the encoder. As described above, when there are predetermined constraints for the intra prediction and the inter prediction in the video encoder, information related to the intra prediction and the inter prediction of the current block may be provided by performing entropy decoding based on the constraints.
The rearrangement module 215 may perform rearrangement of the bitstream entropy-decoded by the entropy decoding module 210, based on the rearrangement method of the encoder. Coefficients represented in the form of a one-dimensional vector may be reconstructed and rearranged in the form of a two-dimensional block.
The dequantization module 220 may perform dequantization based on the quantization parameter provided from the encoder and the rearranged coefficient block.
The inverse transform module 225 may perform inverse DCT and inverse DST, corresponding to the DCT and DST performed by the transform module, on the result of the quantization performed by the video encoder. The inverse transform may be performed based on the transmission unit determined by the video encoder. In the transform module of the video encoder, DCT and DST may be selectively performed according to a plurality of pieces of information such as the prediction method, the size of the current block, and the prediction direction, and the inverse transform module 225 of the video decoder may perform the inverse transform based on the transform information applied in the transform module of the video encoder.
The prediction modules 230, 235 may generate a prediction block based on information related to generating the prediction block provided from the entropy decoding module 210 and information of the previously decoded block or picture provided from the memory 245.
The prediction modules 230, 235 may include a prediction unit determination module, an inter prediction module, and an intra prediction module. The prediction unit determination module may receive various information, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, input from the entropy decoder, distinguish the prediction unit in the current coding unit based on the received information, and determine whether inter prediction or intra prediction is performed on the prediction unit. The inter prediction unit may perform inter prediction for the current prediction unit based on information included in at least one picture among the previous pictures and the subsequent pictures of the current picture including the current prediction unit, by using information necessary for inter prediction of the current prediction unit provided by the video encoder.
In order to perform inter prediction, it may be determined, based on the coding unit, whether the motion prediction method of the prediction unit included in the corresponding coding unit is the skip mode, the merge mode, or the AMVP mode.
According to an exemplary embodiment of the present invention, when performing inter prediction, a motion estimation region (MER) may be defined to perform the prediction in parallel. For example, when performing inter prediction using merge or skip, it may be determined whether the prediction target block and a spatial merge candidate block are included in the same MER. When the prediction target block and the spatial merge candidate block are not included in the same MER, the spatial merge candidate block may be determined to be unavailable, or determined to be a merge candidate block, by determining whether the spatial merge candidate block is included in an MER that has not yet been decoded. The operation of the prediction module is described in detail in the exemplary embodiments of the present invention.
The intra prediction module may generate a prediction block based on pixel information within the current picture. When the prediction unit is a prediction unit on which intra prediction is performed, intra prediction may be performed based on the intra prediction mode information of the prediction unit provided by the video encoder. The intra prediction module may include an MDIS filter, a reference pixel interpolation module, and a DC filter. The MDIS filter is a module that performs filtering on the reference pixels of the current block, and whether to apply the filter may be determined and applied according to the prediction mode of the current prediction unit. Filtering may be performed on the reference pixels of the current block by using the prediction mode of the prediction unit and the MDIS filter information provided by the video encoder. When the prediction mode of the current block is a mode in which filtering is not performed, the MDIS filter may not be applied.
When the prediction mode of the prediction unit is a prediction mode in which intra prediction is performed based on pixel values obtained by interpolating the reference pixels, the reference pixel interpolation module may generate reference pixels in pixel units of less than an integer value by interpolating the reference pixels. When the prediction mode of the current prediction unit is a prediction mode that generates the prediction block without interpolating the reference pixels, the reference pixels may not be interpolated. The DC filter may generate the prediction block through filtering when the prediction mode of the current block is the DC mode.
The reconstructed block or picture may be provided to the filter module 240. The filter module 240 may include a deblocking filter, an offset correction module, and an ALF.
Information on whether the deblocking filter is applied to the corresponding block or picture and, if the deblocking filter is applied, information on whether a strong filter or a weak filter is applied, may be provided from the video encoder. The deblocking filter of the video decoder may be provided with deblocking filter related information from the video encoder and may perform deblocking filtering on the corresponding block in the video decoder. As in the video encoder, vertical deblocking filtering and horizontal deblocking filtering are performed first, and at least one of vertical deblocking and horizontal deblocking may be performed in an overlapping region. In the overlapping region of the vertical deblocking filtering and the horizontal deblocking filtering, the vertical deblocking filtering or the horizontal deblocking filtering that was not performed previously may be performed. Through this deblocking filtering process, parallel processing of the deblocking filtering becomes possible.
The offset correction module may perform offset correction on the reconstructed picture based on the type of offset correction applied to the image and offset value information.
The ALF may perform filtering based on a value comparing the original image and the reconstructed image after filtering. The ALF may be applied to the coding unit based on information on whether to apply the ALF and information on the ALF coefficients provided from the encoder. Such ALF information may be included and provided in a specific parameter set.
The memory 245 may store the reconstructed picture or block to be used as a reference picture or reference block, and the reconstructed picture may be provided to an output module.
As described above, although the coding unit is used to refer to the unit of coding in the exemplary embodiments, the coding unit may also be a unit that performs decoding as well as encoding. Hereinafter, the prediction methods described with reference to Figs. 3 to 8 according to exemplary embodiments of the present invention may be performed by an element such as the prediction modules included in Figs. 1 and 2.
Fig. 3 is a conceptual view illustrating candidate blocks for applying the merge mode and the skip mode according to an exemplary embodiment of the present invention.
Hereinafter, for convenience of description, the merge mode is described in the exemplary embodiments of the present invention; however, the same method may be applied to the skip mode, and such an embodiment is also included in the scope of the claims of the present invention.
Referring to Fig. 3, in order to perform inter prediction through the merge mode, the spatial merge candidate blocks 300, 305, 310, 315, 320 and the temporal merge candidate blocks 350, 355 may be used.
When the point (xP, yP) at the upper-left of the prediction target block indicates the position of the prediction target block, with a width nPSW of the prediction target block and a height nPSH of the prediction target block, each block of the spatial merge candidate blocks 300, 305, 310, 315, 320 may be one of a first block 300 including the point (xP-1, yP+nPSH-MinPuSize), a second block 305 including the point (xP+nPSW-MinPuSize, yP-1), a third block 310 including the point (xP+nPSW, yP-1), a fourth block 315 including the point (xP-1, yP+nPSH), and a fifth block 320 including the point (xP-MinPuSize, yP-1).
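For illustration only, the five spatial candidate positions described above can be computed as in the following C sketch. The sketch is not part of the patent description; the struct and function names are assumptions, and MinPuSize is taken to be 4 for the example.

/* Illustrative sketch: the five spatial merge candidate positions for a
 * prediction target block with top-left sample (xP, yP), width nPSW and
 * height nPSH. MinPuSize is assumed to be 4. */
typedef struct { int x; int y; } Point;

enum { MIN_PU_SIZE = 4 };

static void spatial_candidate_points(int xP, int yP, int nPSW, int nPSH,
                                     Point out[5])
{
    out[0] = (Point){ xP - 1,                  yP + nPSH - MIN_PU_SIZE }; /* first block 300 */
    out[1] = (Point){ xP + nPSW - MIN_PU_SIZE, yP - 1 };                  /* second block 305 */
    out[2] = (Point){ xP + nPSW,               yP - 1 };                  /* third block 310 */
    out[3] = (Point){ xP - 1,                  yP + nPSH };               /* fourth block 315 */
    out[4] = (Point){ xP - MIN_PU_SIZE,        yP - 1 };                  /* fifth block 320 */
}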
The temporal merge candidate may use a plurality of candidate blocks, and the first Col block (collocated block) 350 may be a block including the point (xP+nPSW, yP+nPSH) located in the Col picture (collocated picture). If the first Col block 350 does not exist or is not available (for example, if the first Col block is not inter-predicted), a second Col block 355 including the point (xP+(nPSW>>1), yP+(nPSH>>1)) located in the Col picture may be used instead.
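The fallback from the first Col block to the second Col block can be sketched as below, for illustration only; the availability test is abstracted as a callback, and all names are assumptions rather than part of the patent description.

/* Illustrative sketch: choosing the temporal (Col) candidate position. */
#include <stdbool.h>

typedef struct { int x; int y; } ColPoint;
typedef bool (*col_available_fn)(int x, int y); /* e.g. exists and is inter coded (assumed) */

static ColPoint temporal_candidate_point(int xP, int yP, int nPSW, int nPSH,
                                         col_available_fn available)
{
    ColPoint first  = { xP + nPSW,        yP + nPSH };         /* first Col block 350 */
    ColPoint second = { xP + (nPSW >> 1), yP + (nPSH >> 1) };  /* second Col block 355 */
    return available(first.x, first.y) ? first : second;
}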
According to an exemplary embodiment of the present invention, in order to perform inter prediction using the merge mode in parallel when performing motion prediction, whether to use the merge candidate blocks may be determined relative to a certain region. For example, in order to determine the merge candidate blocks for performing the merge mode, it may be determined, relative to a predetermined region having a certain size, whether the merge candidate block is located within the predetermined region together with the prediction target block, to determine whether to use the merge candidate block or to replace it with another merge candidate block, thereby performing motion prediction in parallel with respect to the predetermined region. A parallel motion prediction method using the merge mode according to exemplary embodiments of the present invention is described below.
Fig. 4 is a conceptual view illustrating a method of determining merge candidate blocks according to an exemplary embodiment of the present invention.
Referring to Fig. 4, it is assumed that the largest coding unit (LCU) is split into four motion estimation regions (MERs).
If the first prediction block PU0 is included in the first MER (MER0), as in Fig. 4, when inter prediction is performed on the first prediction block PU0 using the merge mode, five spatial merge candidate blocks 400, 405, 410, 415, 420 may exist as spatial merge candidate blocks. The five merge candidate blocks 400, 405, 410, 415, 420 may be located at positions not included in the first MER (MER0) and may be blocks on which encoding/decoding has already been performed.
The second prediction block (PU1) is a prediction block included in the second MER (MER1), and among the spatial merge candidate blocks 430, 435, 440, 445, 450 for performing inter prediction using the merge mode, four merge candidate blocks 430, 435, 445, 450 may be blocks located within the second MER (MER1), belonging to the same MER on which prediction is currently performed. The remaining one merge candidate block 440 may be a block on the right side of the current MER, included in an LCU or MER on which encoding/decoding has not yet been performed.
According to an exemplary embodiment of the present invention, when a merge candidate block of the current block and the current block belong to the same MER, the merge candidate block of the current block is excluded, and motion information of at least one block at another position may be added as a merge candidate block according to the size of the current block and the size of the MER.
A block including a point in another MER in the vertical or horizontal direction may be added as a merge candidate block. Alternatively, a block in another MER at the position closest to the candidate block may be added as a merge candidate block. Alternatively, a block at a predetermined position according to the shape and size of the current block may be added as a merge candidate block.
For example, if the merge candidate block 435 is located at the upper side of the second prediction unit (PU1) and the merge candidate block 450 is located at the upper-left side of the second prediction unit, the blocks 455, 460 including points outside the second MER in the vertical direction may be used as replacement merge candidate blocks. For the merge candidate block 430 located at the left side of the second prediction unit and the merge candidate block 445 located at the lower-left side of the second prediction unit, the blocks 465, 470 including points outside the MER in the horizontal direction may be used as replacement merge candidate blocks. When a block is included in the same MER as the current prediction unit and therefore cannot be used as a merge candidate block, the merge candidate block may be replaced with another block including a point in another MER according to the position of the merge candidate block, as sketched below.
In the case of the third prediction block (PU2), the merge candidate block 475 included in the same MER as the third prediction block may be replaced with the block 480 located on the upper side in the vertical direction. In addition, as another exemplary embodiment of the present invention, the position of the spatial merge candidate block may be replaced with a block included in another MER in a direction other than the vertical or horizontal direction, and this exemplary embodiment is also included in the scope of the claims of the present invention.
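The replacement idea of Fig. 4 can be sketched as follows, for illustration only: candidates above the prediction unit are moved vertically out of the MER, and candidates to its left are moved horizontally out of the MER. The concrete rule below is one simple possibility consistent with the example above, not the normative rule, and all names are assumptions.

/* Illustrative sketch: replacing a spatial candidate that falls inside the
 * current MER with a block just outside the MER. */
typedef struct { int x; int y; } CandPos;

static CandPos replace_inside_mer(CandPos cand, int xP, int yP, int mer_size)
{
    int mer_x = (xP / mer_size) * mer_size; /* top-left corner of the current MER */
    int mer_y = (yP / mer_size) * mer_size;

    if ((cand.x / mer_size) == (xP / mer_size) &&
        (cand.y / mer_size) == (yP / mer_size)) {
        if (cand.y < yP)        /* upper / upper-left candidate: move up, out of the MER */
            cand.y = mer_y - 1;
        else                    /* left / lower-left candidate: move left, out of the MER */
            cand.x = mer_x - 1;
    }
    return cand;
}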
The following steps may be performed in order to carry out the method of determining merge candidate blocks.
1) A step of decoding motion estimation region (MER) related information
The MER related information may include information on the size of the MER. Whether the prediction target block is included in the MER may be determined based on the information on the size of the MER and the size of the prediction target block.
2) A step of determining whether the prediction target block and the spatial merge candidate block are included in the same MER
When the prediction target block and the spatial merge candidate block are included in the same MER, the following step may be performed to adaptively determine the spatial merge candidate block according to the size of the MER and the size of the prediction target block.
3) A step of determining that the spatial merge candidate block is unavailable when the prediction target block and the spatial merge candidate block are included in the same MER
When the prediction target block and the spatial merge candidate block are included in the same MER, the spatial merge candidate block may be determined to be unavailable, and the spatial merge candidate included in the same MER may be replaced with another candidate block. In addition, as described below, a merge candidate block determined to be unavailable may not be used in the inter prediction using the merge mode.
According to another exemplary embodiment of the present invention, a method of simply not using a merge candidate block included in the same MER as the prediction target block may also be applied.
For example, among the merge candidate blocks, a block on which encoding/decoding has already been performed and which is included in an MER different from the current MER on which prediction is currently performed is available for the inter prediction applying the merge mode in parallel. The block may be used as an inter prediction candidate block using the merge mode. However, a block belonging to the MER on which prediction is currently performed may not be used as an inter prediction candidate block for the inter prediction using the merge mode. A block on which encoding/decoding has not been performed may also not be used as an inter prediction candidate block. This exemplary embodiment is also included in the scope of the claims of the present invention.
Fig. 5 is a conceptual view illustrating a method of determining merge candidate blocks based on the size of the MER according to an exemplary embodiment of the present invention.
Referring to Fig. 5, the merge candidates may be adaptively determined according to the size of the MER and the size of the current prediction unit. For example, in a case where a merge candidate corresponding to one of the merge candidate positions A, B, C, D, E is included in the same MER as the current prediction unit, the merge candidate block is determined to be unavailable. Here, motion information of at least one block at another position may be added as a merge candidate block according to the size of the current block and the size of the MER.
In Fig. 5, it is assumed that the size of the MER is 8×8 and the prediction target block is 4×8. When the MER size is 8×8, merge candidate block A belongs to the same MER as the prediction target block, and blocks B, C, D, and E are included in MERs different from that of the prediction target block.
In the case of block A, the block may be replaced with the position of a block (for example, block A') included in a different MER. Therefore, according to an exemplary embodiment of the present invention, when a merge candidate block of the current block and the current block belong to the same MER, the merge candidate block of the current block may be excluded from the merge candidate blocks, so that motion information of at least one block at another position may be added as a merge candidate block according to the size of the current block and the size of the MER.
According to an exemplary embodiment of the present invention, the size information of the MER may be included in higher-level syntax information to be transmitted.
Table 1 below relates to a method of transmitting the size information of the MER in a higher-level syntax.
<Table 1>
Referring to Table 1, the size information of the MER may be obtained based on the syntax element log2_parallel_merge_level_minus2 included in a higher-level syntax structure such as the picture parameter set. The syntax element log2_parallel_merge_level_minus2 may also be included in a higher-level syntax structure other than the picture parameter set, and this exemplary embodiment is also included in the scope of the claims of the present invention.
Table 2 below describes the relationship between the value of log2_parallel_merge_level_minus2 and the size of the MER.
<Table 2>
Referring to Table 2, the value of log2_parallel_merge_level_minus2 may have a value from 0 to 4, and the size of the MER may be specified differently according to the value of the syntax element. When the MER is 0, it is the same as performing inter prediction using the merge mode without using the MER.
In exemplary embodiments of the present invention, the syntax element including the size information of the MER may be represented and used as the term "MER size information syntax element", and defining the MER size information syntax element as in Table 2 is one example; the MER size may be specified using various different methods, and such a syntax element representation is also included in the scope of the claims of the present invention.
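A minimal sketch of how a decoder might turn the parsed MER size information syntax element into an MER size is shown below, assuming the shift convention of Math Expressions 1 and 2 below (the value plus 2 gives the log2 of the MER width). The read_ue() bitstream reader is a placeholder assumption, not a real API, and the sketch is not part of the patent description.

/* Illustrative sketch: deriving the MER size from the parsed syntax element. */
#include <stdint.h>

typedef struct Bitstream Bitstream;      /* opaque placeholder type */
extern uint32_t read_ue(Bitstream *bs);  /* assumed ue(v) reader, placeholder only */

static int parse_mer_size(Bitstream *bs)
{
    uint32_t log2_parallel_merge_level_minus2 = read_ue(bs); /* 0 .. 4 */
    return 1 << (log2_parallel_merge_level_minus2 + 2);      /* 4, 8, 16, 32 or 64 */
}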
Fig. 6 is a conceptual view illustrating a method of determining whether a spatial merge candidate block of the current block is available.
Referring to Fig. 6, the availability of the spatial merge candidate block may be determined based on the positions of the prediction target block 600 and the spatial merge candidate block 650 neighboring the prediction target block 600, and the MER size information syntax element.
Assuming that (xP, yP) is the point at the upper-left of the prediction target block and (xN, yN) is the point at the upper-left of the merge candidate block, whether the spatial merge candidate block is available may be determined through Math Expression 1 and Math Expression 2 below.
<Math Expression 1>
(xP >> (log2_parallel_merge_level_minus2 + 2)) == (xN >> (log2_parallel_merge_level_minus2 + 2))
<Math Expression 2>
(yP >> (log2_parallel_merge_level_minus2 + 2)) == (yN >> (log2_parallel_merge_level_minus2 + 2))
The above Math Expression 1 and Math Expression 2 are exemplary formulas for determining whether the merge candidate block and the prediction target block are included in the same MER. In addition, whether the merge candidate block and the prediction target block are included in the same MER may be determined by using methods other than the above determination method, as long as they do not depart from the essence of the present invention.
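For illustration only, Math Expressions 1 and 2 can be evaluated directly with integer shifts as in the sketch below: a candidate for which both equalities hold lies in the same MER as the prediction target block and is therefore treated as unavailable. The function name is an assumption.

/* Illustrative sketch of Math Expressions 1 and 2. */
#include <stdbool.h>

static bool candidate_unavailable_by_mer(int xP, int yP, int xN, int yN,
                                         int log2_parallel_merge_level_minus2)
{
    int shift = log2_parallel_merge_level_minus2 + 2;
    return ((xP >> shift) == (xN >> shift)) &&   /* Math Expression 1 */
           ((yP >> shift) == (yN >> shift));     /* Math Expression 2 */
}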
Fig. 7 is a flow chart illustrating a method of obtaining spatial merge candidate blocks in the merge mode according to an exemplary embodiment of the present invention. A sketch of the overall flow is given after this description.
Referring to Fig. 7, the MER related information is decoded (step S700).
As described above, the MER related information may be syntax element information and may be included in a higher-level syntax structure. Based on the decoded MER related information, it may be determined whether the spatial merge candidate block and the prediction target block are included in the same MER or in different MERs.
It is determined whether the spatial merge candidate block and the prediction target block are included in the same MER (step S710).
According to an exemplary embodiment of the present invention, when a merge candidate block of the current block and the current block are included in the same MER, the merge candidate block of the current block may be excluded, and motion information of at least one block at a position different from that merge candidate block may be added as a merge candidate block according to the size of the current block and the size of the MER (step S720). According to another exemplary embodiment of the present invention, when the spatial merge candidate block and the prediction target block are included in the same MER, instead of using the spatial merge candidate block included in the MER as a merge candidate block, a block at another position included in another MER may replace the spatial merge candidate block in order to perform inter prediction.
In addition, in another exemplary embodiment, when the spatial merge candidate block and the prediction target block are included in the same MER, the spatial merge candidate block included in the MER may simply not be used as a merge candidate block, as described above.
When the spatial merge candidate block and the prediction target block are not included in the same MER, inter prediction is performed based on the corresponding spatial merge candidate block (step S730).
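The S700 to S730 flow above can be summarized as in the following sketch. It is illustrative only: the helper functions are placeholders standing in for the steps described in the text, not a real API, and the combination of marking a candidate unavailable and replacing it reflects the alternative embodiments described above rather than a single normative rule.

/* Illustrative sketch of the Fig. 7 flow (S700-S730). */
#include <stdbool.h>

typedef struct { int xN, yN; bool available; } SpatialCand;

extern int  decode_mer_related_info(void);                              /* S700: returns MER size (placeholder) */
extern bool same_mer(int xP, int yP, int xN, int yN, int mer_size);     /* S710 (placeholder) */
extern void replace_with_block_in_other_mer(SpatialCand *c, int mer_size); /* S720, one option (placeholder) */
extern void inter_predict_with_candidate(const SpatialCand *c);            /* S730 (placeholder) */

static void merge_candidate_flow(int xP, int yP, SpatialCand *cand)
{
    int mer_size = decode_mer_related_info();                 /* S700 */
    if (same_mer(xP, yP, cand->xN, cand->yN, mer_size)) {     /* S710 */
        /* S720: mark the candidate unavailable; one embodiment instead
         * replaces it with a block included in another MER. */
        cand->available = false;
        replace_with_block_in_other_mer(cand, mer_size);
    } else {
        inter_predict_with_candidate(cand);                   /* S730 */
    }
}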
Fig. 8 is a flow chart illustrating a method of inter prediction using the merge mode according to an exemplary embodiment of the present invention.
Referring to Fig. 8, motion prediction related information is derived from the spatial merge candidates (step S800).
The spatial merge candidate blocks may be derived from the neighboring prediction units of the prediction target block. In order to derive the spatial merge candidate blocks, width and height information of the prediction unit, MER information, single MCL flag (singleMCLFlag) information, and information on the position of the partition may be provided. Based on the above input information, information on the availability of the spatial merge candidate (availableFlagN), reference picture information (refIdxL0, refIdxL1), list utilization information (predFlagL0N, predFlagL1N), and motion vector information (mvL0N, mvL1N) may be derived according to the position of the spatial merge candidate. The spatial merge candidates may be a plurality of blocks neighboring the prediction target block.
According to an exemplary embodiment of the present invention, the spatial merge candidate blocks may be classified into the following three kinds: 1) a spatial merge candidate block that is not included in the same MER and is already encoded or decoded, 2) a spatial merge candidate block included in the same MER, and 3) a spatial merge candidate block on which encoding and decoding have not yet been performed.
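The three kinds of spatial candidate described above can be expressed as a small classification, as in this illustrative sketch; only the first kind is usable for parallel merge estimation within an MER. The query helpers and names are placeholder assumptions, not part of the patent description.

/* Illustrative sketch: classifying a spatial merge candidate. */
#include <stdbool.h>

typedef enum {
    CAND_CODED_OUTSIDE_MER,   /* 1) not in the same MER, already encoded/decoded */
    CAND_INSIDE_SAME_MER,     /* 2) inside the same MER as the prediction target block */
    CAND_NOT_YET_CODED        /* 3) encoding/decoding not yet performed */
} SpatialCandKind;

extern bool is_in_same_mer(int xP, int yP, int xN, int yN);   /* placeholder */
extern bool is_already_coded(int xN, int yN);                 /* placeholder */

static SpatialCandKind classify_spatial_candidate(int xP, int yP, int xN, int yN)
{
    if (is_in_same_mer(xP, yP, xN, yN))
        return CAND_INSIDE_SAME_MER;
    if (!is_already_coded(xN, yN))
        return CAND_NOT_YET_CODED;
    return CAND_CODED_OUTSIDE_MER;
}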
Exemplary embodiment according to the present invention, in order to predict between being performed in parallel in the unit of MER, for executing Between the space predicted merge in candidate blocks, be not included in identical MER and be encoded or decoded space merges and waits It selects block to be used as space and merges candidate blocks.In addition, merge the position of candidate blocks instead of the space being included in identical MER Space merges candidate blocks and is used as space merging candidate blocks.In other words, exemplary embodiment according to the present invention is being worked as When preceding piece of merging candidate blocks and current block are included in identical MER, the merging candidate blocks of current block can be excluded and It is candidate that merging can be added to according to the size and MER sizes of current block in the movable information of at least one of other positions block Block.As set forth above, it is possible to by including the steps that the relevant information of decoding MER (motion estimation regions), determining prediction object block Merge whether candidate blocks are included in the step in identical MER with space, and merges candidate blocks in prediction object block and space It is determined when being included in identical MER and is disabled step for merging candidate blocks using prediction space between merging patterns, come Execute the method for determining and merging candidate blocks.
According to another exemplary embodiment of the present invention, among the spatial merge candidate blocks used for performing inter prediction, only spatial merge candidate blocks that are not included in the same MER and have already been encoded or decoded may be used to perform inter prediction.
A reference picture index value of the temporal merge candidate is derived (step S810).
The reference picture index value of the temporal merge candidate is the index value of the Col picture including the temporal merge candidate (Col block), and it may be derived under the following specific conditions. For example, when the point at the upper left of the prediction target block is (xP, yP), the width of the prediction target block is nPSW, and the height of the prediction target block is nPSH, if 1) there exists a neighboring prediction unit of the prediction target block corresponding to position (xP-1, yP+nPSH-1), 2) the partition index value of the neighboring prediction unit for deriving the reference picture index is 0, 3) the neighboring prediction unit for deriving the reference picture index is not a block on which prediction is performed using an intra prediction mode, and 4) the prediction target block and the neighboring prediction unit for deriving the reference picture index are not included in the same MER (motion estimation region), then the reference picture index value of the temporal merge candidate block may be determined to be the same value as the reference picture index value of the neighboring prediction unit (hereinafter referred to as the "neighboring prediction unit for deriving the reference picture index"). If the above conditions are not met, the reference picture index value of the temporal merge candidate may be set to 0.
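A condensed sketch of this derivation is given below. It assumes a helper get_pu_info that reports whether a prediction unit covering a given position exists, together with its partition index, prediction mode, and reference index; these names, like NeighborPU, are illustrative and not taken from the specification.

    typedef struct {
        int part_idx;   /* partition index of the neighbouring prediction unit */
        int is_intra;   /* nonzero if the unit is predicted in intra mode      */
        int ref_idx;    /* its reference picture index                         */
    } NeighborPU;

    /* Hypothetical helpers assumed to exist elsewhere in the decoder. */
    int get_pu_info(int x, int y, NeighborPU *out);   /* 0 if no unit exists */
    int in_same_mer(int xN, int yN, int xP, int yP, int log2_mer_size);

    /* Reference picture index of the temporal merge candidate, derived from
     * the neighbouring prediction unit covering (xP - 1, yP + nPSH - 1). */
    int derive_temporal_merge_ref_idx(int xP, int yP, int nPSH, int log2_mer_size)
    {
        NeighborPU n;
        int xN = xP - 1, yN = yP + nPSH - 1;

        if (get_pu_info(xN, yN, &n)                         /* 1) unit exists     */
            && n.part_idx == 0                              /* 2) partition idx 0 */
            && !n.is_intra                                  /* 3) not intra coded */
            && !in_same_mer(xN, yN, xP, yP, log2_mer_size)) /* 4) different MER   */
            return n.ref_idx;   /* reuse the neighbour's reference picture index  */

        return 0;               /* otherwise the index is set to 0                */
    }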
The temporal merge candidate block is determined, and motion-prediction-related information is derived from the temporal merge candidate block (step S820).
In order to determine the temporal merge candidate block (Col block) and to derive motion-prediction-related information based on the determined temporal merge candidate block (Col block), the position of the Col block used for deriving the temporal prediction motion vector may be determined based on conditions such as, for example, whether the Col block is available to the prediction target block, or where the prediction target block lies relative to the LCU (for example, whether the position of the prediction target block is located at the bottom boundary or the right boundary of the LCU). The motion-prediction-related information may then be derived from the temporal merge candidate block (Col block) based on the reference picture information and the motion prediction vector information of the determined Col block.
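The following sketch shows one way such a position check could look; the specific fallback to a centre position when the bottom boundary of the LCU is crossed is an assumption made here for illustration, not a rule stated in the description.

    /* Select the position in the Col picture from which the temporal motion
     * vector predictor is read; lcu_size is the LCU side length. */
    void select_col_block_pos(int xP, int yP, int nPSW, int nPSH, int lcu_size,
                              int *xCol, int *yCol)
    {
        /* Preferred position: just outside the bottom-right corner of the
         * prediction target block. */
        *xCol = xP + nPSW;
        *yCol = yP + nPSH;

        /* If that position crosses the bottom boundary of the current LCU,
         * fall back to a centre position of the collocated block instead. */
        if (*yCol >= (yP / lcu_size + 1) * lcu_size) {
            *xCol = xP + (nPSW >> 1);
            *yCol = yP + (nPSH >> 1);
        }
    }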
A merge candidate block list is constructed (step S830).
The merge candidate block list may be constructed by including at least one of the spatial merge candidate and the temporal merge candidate. The spatial merge candidates and the temporal merge candidate included in the merge candidate list may be arranged with a fixed priority.
The merge candidate list may be constructed to include a fixed number of merge candidates. When the number of merge candidates is insufficient to reach the fixed number, a merge candidate may be generated by combining the motion-prediction-related information of existing merge candidates, or the merge candidate list may be completed by adding a zero vector as a merge candidate.
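A rough sketch of this construction is shown below: available spatial candidates are appended first, then the temporal candidate, and the list is padded with combined candidates and finally with zero-vector candidates until the fixed number is reached. The fixed list size of 5, the MergeCand structure, and the two helpers are assumptions for illustration.

    #define MAX_NUM_MERGE_CAND 5   /* assumed fixed number of merge candidates */

    typedef struct {
        int available;
        /* motion-prediction-related information (refIdx, predFlag, mv) omitted */
    } MergeCand;

    /* Hypothetical helpers assumed to exist elsewhere in the decoder. */
    int add_combined_candidates(MergeCand *list, int count, int max_count);
    MergeCand make_zero_mv_candidate(void);

    /* Build the merge candidate list with a fixed priority: spatial first,
     * then temporal, then padding candidates. Returns the list length. */
    int build_merge_list(MergeCand *list,
                         const MergeCand *spatial, int num_spatial,
                         const MergeCand *temporal, int temporal_available)
    {
        int n = 0, i;

        for (i = 0; i < num_spatial && n < MAX_NUM_MERGE_CAND; i++)
            if (spatial[i].available)
                list[n++] = spatial[i];
        if (temporal_available && n < MAX_NUM_MERGE_CAND)
            list[n++] = *temporal;

        /* Pad with combined candidates, then with zero-vector candidates. */
        n = add_combined_candidates(list, n, MAX_NUM_MERGE_CAND);
        while (n < MAX_NUM_MERGE_CAND)
            list[n++] = make_zero_mv_candidate();

        return n;
    }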
The above method of deriving merge candidates may be used not only in the inter prediction method using the merge mode as described above, but also in the inter prediction mode using the skip mode, and such an exemplary embodiment is also included within the scope of the claims of the present invention.
Although the present disclosure has been described with reference to exemplary embodiments, those skilled in the art will appreciate that various changes and modifications may be made without departing from the spirit and scope of the present invention as determined by the following claims.

Claims (5)

1. A method of decoding a video signal, comprising:
determining whether a spatial merge candidate block has not yet been decoded;
determining whether the spatial merge candidate block is included in the same motion estimation region as a current prediction block;
determining whether the spatial merge candidate block is unavailable, wherein, when the spatial merge candidate block and the current prediction block are included in the same motion estimation region, the spatial merge candidate block is determined to be a merge candidate block that is unavailable for inter prediction of the current prediction block;
generating a merge candidate list of the current prediction block, the merge candidate list including available merge candidate blocks; and
performing inter prediction of the current prediction block based on the merge candidate list;
wherein, when the size of the motion estimation region is equal to 8×8 and the size of the current prediction block is 8×4, at least one of the spatial merge candidate blocks is replaced using a block outside the motion estimation region in the vertical direction, and
wherein, when the size of the motion estimation region is equal to 8×8 and the size of the current prediction block is 4×8, at least one of the spatial merge candidate blocks is replaced using a block outside the motion estimation region in the horizontal direction.
2. The method of claim 1, wherein whether the spatial merge candidate block and the current prediction block are included in the same motion estimation region is determined by using position information of the current prediction block, position information of the spatial merge candidate block, and size information of the motion estimation region.
3. The method of claim 2, wherein inter prediction of each prediction block among the prediction blocks in the motion estimation region is performed in parallel.
4. The method of claim 2, wherein, when a value obtained by a shift operation on the position information of the current prediction block is not equal to a value obtained by a shift operation on the position information of the spatial merge candidate block, it is determined that the spatial merge candidate block and the current prediction block are not included in the same motion estimation region.
5. The method of claim 1, wherein the spatial merge candidate blocks include at least one of neighboring blocks adjacent to the current prediction block, the neighboring blocks including a left neighboring block, a top neighboring block, a top-right neighboring block, a bottom-left neighboring block, and a top-left neighboring block.
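For illustration only, and not as part of the claims, the replacement rule recited in claim 1 for an 8×8 motion estimation region could be sketched as follows; the helper names and the exact replacement positions are assumptions.

    typedef struct { int x, y; int available; } MergeCand;   /* as in the earlier sketches */

    /* Hypothetical helpers assumed to exist elsewhere in the decoder. */
    int  in_same_mer(int xN, int yN, int xP, int yP, int log2_mer_size);
    void replace_with_block_above_mer(MergeCand *cand, int yP, int log2_mer_size);
    void replace_with_block_left_of_mer(MergeCand *cand, int xP, int log2_mer_size);

    /* With an 8x8 MER, an 8x4 prediction block replaces a same-MER spatial
     * candidate with a block outside the MER in the vertical direction, and
     * a 4x8 prediction block does so in the horizontal direction. */
    void replace_candidate_for_8x8_mer(MergeCand *cand, int xP, int yP,
                                       int nPSW, int nPSH)
    {
        const int log2_mer_size = 3;   /* MER size equal to 8x8 */

        if (!in_same_mer(cand->x, cand->y, xP, yP, log2_mer_size))
            return;   /* candidate is already usable, nothing to replace */

        if (nPSW == 8 && nPSH == 4)
            replace_with_block_above_mer(cand, yP, log2_mer_size);
        else if (nPSW == 4 && nPSH == 8)
            replace_with_block_left_of_mer(cand, xP, log2_mer_size);
    }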
CN201410571867.2A 2011-09-23 2012-09-06 The method that vision signal is decoded Active CN104349170B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20110096138 2011-09-23
KR10-2011-0096138 2011-09-23
KR1020120039500A KR101197176B1 (en) 2011-09-23 2012-04-17 Methods of derivation of merge candidate block and apparatuses for using the same
KR10-2012-0039500 2012-04-17

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201280006137.7A Division CN103444172B (en) 2011-09-23 2012-09-06 Method for inducing a merge candidate block and device using same

Publications (2)

Publication Number Publication Date
CN104349170A CN104349170A (en) 2015-02-11
CN104349170B true CN104349170B (en) 2018-08-31

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1471320A (en) * 2002-06-03 2004-01-28 Spatio-temporal prediction for bi-directionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
CN101662631A (en) * 2008-08-26 2010-03-03 Sony Corporation Frame interpolation device and frame interpolation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Steffen Kamp et al., "Description of video coding technology proposal by RWTH Aachen University", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 1st Meeting: Dresden, 2010-04-23, pp. 1-23 *
Minhua Zhou, "Parallelized merge/skip mode for HEVC", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, 2011-07-22, pp. 1-13 *

Similar Documents

Publication Publication Date Title
CN103444172B (en) Method for inducing a merge candidate block and device using same
CN103096071B (en) The method for exporting movable information
CN104067622B (en) Method for encoding images, picture decoding method, image encoder and image decoder
RU2716231C2 (en) Video decoding method
CN103096072B (en) The coding/decoding method of video data
CN109417617A (en) Intra-frame prediction method and device
CN109479138A (en) Image coding/decoding method and device
CN108353185A (en) Method and apparatus for handling vision signal
CN109196864A (en) Image coding/decoding method and recording medium for the method
CN109792515A (en) The recording medium of image coding/decoding method and device and stored bits stream
CN108353164A (en) Method and apparatus for handling vision signal
CN109792529A (en) In image compiling system based on prediction technique between illuminance compensation and equipment
CN104255032B (en) Inter-layer prediction method and use its encoding device and decoding device
CN108111853A (en) Image reconstructing method under merging patterns
CN104349170B (en) The method that vision signal is decoded
KR102573577B1 (en) A method for processing a video, a method and an apparatus for frame rate conversion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant