CN103813163A - Image decoding method and image decoding device - Google Patents


Info

Publication number
CN103813163A
CN103813163A (Application No. CN201410051514.XA)
Authority
CN
China
Prior art keywords
block
motion information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410051514.XA
Other languages
Chinese (zh)
Other versions
CN103813163B (en)
Inventor
盐寺太一郎
浅香沙织
谷沢昭行
中條健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Priority to CN201410051514.XA priority Critical patent/CN103813163B/en
Priority claimed from CN201080066017.7A external-priority patent/CN102823248B/en
Publication of CN103813163A publication Critical patent/CN103813163A/en
Application granted granted Critical
Publication of CN103813163B publication Critical patent/CN103813163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an image decoding method and an image decoding device. The image decoding method includes: selecting at least one motion reference block from decoded pixel blocks having motion information; selecting at least one available block from the motion reference blocks, an available block being a candidate pixel block whose motion information is applicable to the decoding target block, each available block having different motion information; decoding the input coded data with reference to a code table preset according to the number of available blocks, thereby obtaining selection information that identifies a selection block; selecting one selection block from the available blocks according to the selection information; generating a predicted image of the decoding target block using the motion information of the selection block; decoding a prediction error of the decoding target block from the coded data; and obtaining a decoded image from the predicted image and the prediction error.

Description

Picture decoding method and picture decoding apparatus
This application is a divisional of Chinese application No. 201310142052.8, entitled "Picture decoding method and picture decoding apparatus," filed on April 23, 2013. Its original parent application is the application that entered the Chinese national stage on October 8, 2012 under Chinese national application No. 201080066017.7, entitled "Image encoding method and image decoding method."
Technical field
The present invention relates to encoding and decoding methods for moving images and still images.
Background art
In recent years, ITU-T and ISO/IEC have jointly recommended a video coding method that substantially improves coding efficiency, published as ITU-T Rec. H.264 and ISO/IEC 14496-10 (hereinafter, H.264). In H.264, prediction processing, transform processing, and entropy coding are performed in units of rectangular blocks (for example, 16×16 pixel blocks, 8×8 pixel blocks, and so on). In the prediction processing, motion compensation is performed on the rectangular block to be coded (the coding target block) by referring to an already coded frame (reference frame), making a prediction in the temporal direction. In such motion compensation, motion information including a motion vector must be coded and sent to the decoding side, the motion vector expressing the spatial displacement between the coding target block and the block referred to in the reference frame. Furthermore, when motion compensation is performed using a plurality of reference frames, the motion information must be coded together with reference frame numbers. Consequently, the amount of code relating to motion information and reference frame numbers sometimes increases.
In motion compensated prediction, one known method of obtaining a motion vector is the direct mode, in which the motion vector to be assigned to the coding target block is derived from motion vectors assigned to already coded blocks, and a predicted image is generated from the derived motion vector (see Patent Documents 1 and 2). In the direct mode, no motion vector is coded, so the amount of code for motion information can be reduced. The direct mode is adopted, for example, in H.264/AVC.
Patent documents
Patent Document 1: Japanese Patent No. 4020789
Patent Document 2: U.S. Patent No. 7,233,621
Summary of the invention
In the direct mode, the motion vector of the coding target block is generated by prediction with a fixed method: calculating the median of the motion vectors of the already coded blocks adjacent to the coding target block. The degree of freedom in motion vector calculation is therefore low.
To raise the degree of freedom in motion vector calculation, a method has been proposed in which one block is selected from a plurality of already coded blocks and its motion vector is assigned to the coding target block. In this method, selection information identifying the selected block must always be transmitted so that the decoding side can determine which already coded block was selected. Consequently, when the motion vector assigned to the coding target block is determined by selecting one of a plurality of already coded blocks, the amount of code relating to the selection information increases.
The present invention has been made to solve the above problem, and an object thereof is to provide an image encoding method and an image decoding method with high coding efficiency.
An image encoding method according to one embodiment of the present invention comprises: a first step of selecting at least one motion reference block from already coded pixel blocks having motion information; a second step of selecting at least one available block from the motion reference blocks, an available block being a candidate pixel block whose motion information is applicable to the coding target block, each available block having different motion information; a third step of selecting one selection block from the available blocks; a fourth step of generating a predicted image of the coding target block using the motion information of the selection block; a fifth step of coding the prediction error between the predicted image and the original image; and a sixth step of coding selection information identifying the selection block with reference to a code table preset according to the number of available blocks.
An image decoding method according to another embodiment of the present invention comprises: a first step of selecting at least one motion reference block from already decoded pixel blocks having motion information; a second step of selecting at least one available block from the motion reference blocks, an available block being a candidate pixel block whose motion information is applicable to the decoding target block, each available block having different motion information; a third step of decoding the input coded data with reference to a code table preset according to the number of available blocks, thereby obtaining selection information for identifying a selection block; a fourth step of selecting one selection block from the available blocks according to the selection information; a fifth step of generating a predicted image of the decoding target block using the motion information of the selection block; a sixth step of decoding the prediction error of the decoding target block from the coded data; and a seventh step of obtaining a decoded image from the predicted image and the prediction error.
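Both methods hinge on a code table whose number of entries equals the number of available blocks. As a rough illustration of why this saves bits, the following Python sketch builds a hypothetical truncated-unary table; the patent does not specify the actual codewords, so these are assumptions for illustration only:

```python
def selection_code_table(num_available):
    """Build a variable-length code table with exactly num_available
    entries (hypothetical truncated-unary codewords)."""
    if num_available <= 1:
        return {0: ""}  # a lone candidate needs no bits at all
    table = {}
    for idx in range(num_available):
        if idx < num_available - 1:
            table[idx] = "1" * idx + "0"
        else:
            table[idx] = "1" * idx  # last entry drops the terminating 0
    return table

# With three available blocks the table has three entries:
print(selection_code_table(3))  # {0: '0', 1: '10', 2: '11'}
```

Because the table never holds more entries than there are available blocks, the selection information is coded with the shortest codewords the candidate count allows.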
According to the present invention, coding efficiency can be improved.
Brief description of the drawings
Fig. 1 is a block diagram schematically showing the structure of the image encoding apparatus of the first embodiment.
Fig. 2A is a diagram showing an example of the size of a macroblock, the unit of encoding processing in the image encoding unit shown in Fig. 1.
Fig. 2B is a diagram showing another example of the size of the macroblock serving as the unit of encoding processing in the image encoding unit shown in Fig. 1.
Fig. 3 is a diagram showing the order in which the image encoding unit shown in Fig. 1 encodes the pixel blocks in the coding target frame.
Fig. 4 is a diagram showing an example of a motion information frame held by the motion information memory shown in Fig. 1.
Fig. 5 is a flowchart showing an example of the procedure for processing the input image signal of Fig. 1.
Fig. 6A is a diagram showing an example of the inter prediction processing performed by the motion compensation unit of Fig. 1.
Fig. 6B is a diagram showing another example of the inter prediction processing performed by the motion compensation unit of Fig. 1.
Fig. 7A is a diagram showing an example of the size of a motion compensation block used in inter prediction processing.
Fig. 7B is a diagram showing another example of the size of a motion compensation block used in inter prediction processing.
Fig. 7C is a diagram showing still another example of the size of a motion compensation block used in inter prediction processing.
Fig. 7D is a diagram showing yet another example of the size of a motion compensation block used in inter prediction processing.
Fig. 8A is a diagram showing an example of the arrangement of spatial-direction and temporal-direction motion reference blocks.
Fig. 8B is a diagram showing another example of the arrangement of spatial-direction motion reference blocks.
Fig. 8C is a diagram showing the relative positions of the spatial-direction motion reference blocks with respect to the coding target block shown in Fig. 8B.
Fig. 8D is a diagram showing another example of the arrangement of temporal-direction motion reference blocks.
Fig. 8E is a diagram showing still another example of the arrangement of temporal-direction motion reference blocks.
Fig. 8F is a diagram showing yet another example of the arrangement of temporal-direction motion reference blocks.
Fig. 9 is a flowchart showing an example of the method by which the available-block acquiring unit of Fig. 1 selects available blocks from the motion reference blocks.
Fig. 10 is a diagram showing an example of available blocks selected, according to the method of Fig. 9, from the motion reference blocks shown in Fig. 8.
Fig. 11 is a diagram showing an example of the available-block information output by the available-block acquiring unit of Fig. 1.
Fig. 12A is a diagram showing an example of the judgment, made by the available-block acquiring unit of Fig. 1, of whether the motion information of two blocks is identical.
Fig. 12B is a diagram showing another example of the inter-block motion information identity judgment made by the available-block acquiring unit of Fig. 1.
Fig. 12C is a diagram showing still another example of the inter-block motion information identity judgment made by the available-block acquiring unit of Fig. 1.
Fig. 12D is a diagram showing yet another example of the inter-block motion information identity judgment made by the available-block acquiring unit of Fig. 1.
Fig. 12E is a diagram showing a further example of the inter-block motion information identity judgment made by the available-block acquiring unit of Fig. 1.
Fig. 12F is a diagram showing a still further example of the inter-block motion information identity judgment made by the available-block acquiring unit of Fig. 1.
Fig. 13 is a block diagram schematically showing the structure of the prediction unit of Fig. 1.
Fig. 14 is a diagram showing the group of motion information output by the temporal-direction motion information acquiring unit of Fig. 13.
Fig. 15 is an explanatory diagram of the fractional-pixel-precision interpolation processing usable in the motion compensation processing of the motion compensation unit of Fig. 13.
Fig. 16 is a flowchart showing an example of the operation of the prediction unit of Fig. 13.
Fig. 17 is a diagram showing the motion compensation unit of Fig. 13 copying the motion information of a temporal-direction motion reference block to the coding target block.
Fig. 18 is a block diagram schematically showing the structure of the variable-length coding unit of Fig. 1.
Fig. 19 is a diagram showing an example of generating syntax according to the available-block information.
Fig. 20 is a diagram showing an example of the binarization of the selection-block-information syntax corresponding to the available-block information.
Fig. 21 is an explanatory diagram of the scaling of motion information.
Fig. 22 is a diagram of the syntax structure according to the embodiment.
Fig. 23A is a diagram of an example of the macroblock-layer syntax according to the first embodiment.
Fig. 23B is a diagram of another example of the macroblock-layer syntax according to the first embodiment.
Fig. 24A is a diagram showing the code table associating mb_type values in a B slice in H.264 with their codes.
Fig. 24B is a diagram showing an example of the code table of the embodiment.
Fig. 24C is a diagram showing the code table associating mb_type values in a P slice in H.264 with their codes.
Fig. 24D is a diagram showing another example of the code table of the embodiment.
Fig. 25A is a diagram showing an example, according to the embodiment, of the code table corresponding to mb_type in a B slice.
Fig. 25B is a diagram showing an example, according to the embodiment, of the code table corresponding to mb_type in a P slice.
Fig. 26 is a block diagram schematically showing the structure of the image encoding apparatus of the second embodiment.
Fig. 27 is a block diagram schematically showing the structure of the prediction unit of Fig. 26.
Fig. 28 is a block diagram schematically showing the structure of the second prediction unit of Fig. 27.
Fig. 29 is a block diagram schematically showing the structure of the variable-length coding unit of Fig. 26.
Fig. 30A is a diagram showing an example of the macroblock-layer syntax according to the second embodiment.
Fig. 30B is a diagram showing another example of the macroblock-layer syntax according to the second embodiment.
Fig. 31 is a block diagram schematically showing the image decoding apparatus of the third embodiment.
Fig. 32 is a block diagram showing in more detail the encoded-sequence decoding unit shown in Fig. 31.
Fig. 33 is a block diagram showing in more detail the prediction unit shown in Fig. 31.
Fig. 34 is a block diagram schematically showing the image decoding apparatus of the fourth embodiment.
Fig. 35 is a block diagram showing in more detail the encoded-sequence decoding unit shown in Fig. 34.
Fig. 36 is a block diagram showing in more detail the prediction unit shown in Fig. 34.
Explanation of reference numerals
10: input image signal; 11: predicted image signal; 12: prediction error image signal; 13: quantized transform coefficients; 14: coded data; 15: decoded prediction error signal; 16: local decoded image signal; 17: reference image signal; 18: motion information; 20: bit stream; 21: motion information; 25, 26: motion information frames; 30: available-block information; 31: selection-block information; 32: prediction switching information; 33: transform coefficient information; 34: prediction error signal; 35: predicted image signal; 36: decoded image signal; 37: reference image signal; 38: motion information; 39: reference motion information; 40: motion information; 50: encoding control information; 51: feedback information; 60: available-block information; 61: selection-block information; 62: prediction switching information; 70: decoding control information; 71: control information; 80: coded data; 100: image encoding unit; 101: prediction unit; 102: subtracter; 103: transform/quantization unit; 104: variable-length coding unit; 105: inverse-quantization/inverse-transform unit; 106: adder; 107: frame memory; 108: motion information memory; 109: available-block acquiring unit; 110: spatial-direction motion information acquiring unit; 111: temporal-direction motion information acquiring unit; 112: motion information selector switch; 113: motion compensation unit; 114: parameter coding unit; 115: transform coefficient coding unit; 116: selection block coding unit; 117: multiplexing unit; 118: motion information selection unit; 120: output buffer; 150: encoding control unit; 200: image encoding unit; 201: prediction unit; 202: second prediction unit; 203: prediction method selector switch; 204: variable-length coding unit; 205: motion information acquiring unit; 216: selection block coding unit; 217: motion information coding unit; 300: image decoding unit; 301: encoded-sequence decoding unit; 302: inverse-quantization/inverse-transform unit; 303: adder; 304: frame memory; 305: prediction unit; 306: motion information memory; 307: available-block acquiring unit; 308: output buffer; 310: spatial-direction motion information acquiring unit; 311: temporal-direction motion information acquiring unit; 312: motion information selector switch; 313: motion compensation unit; 314: motion information selection unit; 320: separation unit; 321: parameter decoding unit; 322: transform coefficient decoding unit; 323: selection block decoding unit; 350: decoding control unit; 400: image decoding unit; 401: encoded-sequence decoding unit; 405: prediction unit; 410: second prediction unit; 411: prediction method selector switch; 423: selection block decoding unit; 424: motion information decoding unit; 901: high-level syntax; 902: sequence parameter set syntax; 903: picture parameter set syntax; 904: slice-level syntax; 905: slice header syntax; 906: slice data syntax; 907: macroblock-level syntax; 908: macroblock-layer syntax; 909: macroblock prediction syntax
Embodiment
Hereinafter, image encoding and image decoding methods and apparatuses according to embodiments of the present invention will be described with reference to the drawings as necessary. In the following embodiments, parts given the same reference numerals perform the same operations, and repeated description is omitted.
(First embodiment)
Fig. 1 schematically shows the structure of the image encoding apparatus of the first embodiment of the present invention. As shown in Fig. 1, this image encoding apparatus comprises an image encoding unit 100, an encoding control unit 150, and an output buffer 120. The apparatus may be realized by hardware such as an LSI chip, or by having a computer execute an image encoding program.
The original image (input image signal) 10, a moving image or a still image, is input to the image encoding unit 100, for example, in units of the pixel blocks into which the original image has been divided. As described in detail later, the image encoding unit 100 compression-codes the input image signal 10 to generate coded data 14. The generated coded data 14 is temporarily stored in the output buffer 120 and, at an output timing managed by the encoding control unit 150, sent out to a storage system (storage medium) or transmission system (communication line), not shown.
The encoding control unit 150 controls the overall encoding processing of the image encoding unit 100, including feedback control of the generated code amount, quantization control, prediction mode control, and entropy coding control. Specifically, the encoding control unit 150 supplies encoding control information 50 to the image encoding unit 100 and receives feedback information 51 from the image encoding unit 100 as appropriate. The encoding control information 50 includes prediction information, motion information 18, and quantization parameter information. The prediction information includes prediction mode information and block size information. The motion information 18 includes a motion vector, a reference frame number, and a prediction direction (unidirectional or bidirectional prediction). The quantization parameter information includes a quantization parameter such as the quantization width (quantization step size) and a quantization matrix. The feedback information 51 includes the generated code amount of the image encoding unit 100 and is used, for example, when determining the quantization parameter.
The image encoding unit 100 encodes the input image signal 10 in units of the pixel blocks (for example, macroblocks, sub-blocks, or single pixels) into which the original image has been divided. Therefore, the input image signal 10 is input to the image encoding unit 100 sequentially in those pixel block units. In the present embodiment, the processing unit of encoding is a macroblock, and the pixel block (macroblock) that corresponds to the input image signal 10 and is subject to encoding is simply called the coding target block. The image frame containing the coding target block, i.e., the frame subject to encoding, is called the coding target frame.
Such a coding target block may be, for example, a 16×16 pixel block as shown in Fig. 2A or a 64×64 pixel block as shown in Fig. 2B. The coding target block may also be a 32×32 pixel block, an 8×8 pixel block, and so on. Moreover, the shape of a macroblock is not limited to the square shapes shown in Figs. 2A and 2B and may be an arbitrary shape such as a rectangle. The processing unit is likewise not limited to a pixel block such as a macroblock and may be a frame or a field.
The encoding of the pixel blocks within the coding target frame may be performed in any order. In the present embodiment, for simplicity of explanation, the pixel blocks are encoded in raster-scan order, row by row from the upper-left pixel block of the coding target frame to the lower right, as shown in Fig. 3.
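As an informal sketch (not part of the patent text), the raster-scan traversal of Fig. 3 can be expressed as a generator over block coordinates; the function name and 16×16 default are illustrative assumptions:

```python
def raster_scan_blocks(frame_width, frame_height, block_size=16):
    """Yield (x, y) top-left corners of the pixel blocks in raster-scan
    order: left to right within each row, rows from top to bottom."""
    for y in range(0, frame_height, block_size):
        for x in range(0, frame_width, block_size):
            yield (x, y)

# A 48x32 frame of 16x16 blocks is visited row by row:
order = list(raster_scan_blocks(48, 32))
# [(0, 0), (16, 0), (32, 0), (0, 16), (16, 16), (32, 16)]
```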
The image encoding unit 100 shown in Fig. 1 comprises a prediction unit 101, a subtracter 102, a transform/quantization unit 103, a variable-length coding unit 104, an inverse-quantization/inverse-transform unit 105, an adder 106, a frame memory 107, a motion information memory 108, and an available-block acquiring unit 109.
In the image encoding unit 100, the input image signal 10 is supplied to the prediction unit 101 and the subtracter 102. The subtracter 102 receives the input image signal 10 and also receives a predicted image signal 11 from the prediction unit 101 described later. The subtracter 102 computes the difference between the input image signal 10 and the predicted image signal 11 to generate a prediction error image signal 12.
The transform/quantization unit 103 receives the prediction error image signal 12 from the subtracter 102 and applies transform processing to it to generate transform coefficients. The transform processing is, for example, an orthogonal transform such as the discrete cosine transform (DCT: Discrete Cosine Transform). In another embodiment, the transform/quantization unit 103 may generate the transform coefficients using a method such as the wavelet transform or independent component analysis instead of the discrete cosine transform. The transform/quantization unit 103 then quantizes the generated transform coefficients according to the quantization parameter supplied by the encoding control unit 150. The quantized transform coefficients (transform coefficient information) 13 are output to the variable-length coding unit 104 and the inverse-quantization/inverse-transform unit 105.
The inverse-quantization/inverse-transform unit 105 inverse-quantizes the quantized transform coefficients 13 according to the same quantization parameter as that of the transform/quantization unit 103, supplied by the encoding control unit 150. The inverse-quantization/inverse-transform unit 105 then applies an inverse transform to the inverse-quantized transform coefficients to generate a decoded prediction error signal 15. The inverse transform processing of the inverse-quantization/inverse-transform unit 105 matches the inverse of the transform processing of the transform/quantization unit 103; for example, it is the inverse discrete cosine transform (IDCT: Inverse Discrete Cosine Transform) or the inverse wavelet transform.
The adder 106 receives the decoded prediction error signal 15 from the inverse-quantization/inverse-transform unit 105 and the predicted image signal 11 from the prediction unit 101. The adder 106 adds the decoded prediction error signal 15 and the predicted image signal 11 to generate a local decoded image signal 16. The generated local decoded image signal 16 is stored in the frame memory 107 as a reference image signal 17. The reference image signal 17 stored in the frame memory 107 is subsequently read and referenced by the prediction unit 101 when later coding target blocks are encoded.
The prediction unit 101 receives the reference image signal 17 from the frame memory 107 and receives available-block information 30 from the available-block acquiring unit 109 described later. The prediction unit 101 also receives reference motion information 19 from the motion information memory 108 described later. Based on the reference image signal 17, the reference motion information 19, and the available-block information 30, the prediction unit 101 generates the predicted image signal 11, the motion information 18, and selection-block information 31 for the coding target block. Specifically, the prediction unit 101 comprises a motion information selection unit 118, which generates the motion information 18 and the selection-block information 31 from the available-block information 30 and the reference motion information 19, and a motion compensation unit 113, which generates the predicted image signal 11 from the motion information 18. The predicted image signal 11 is fed to the subtracter 102 and the adder 106. The motion information 18 is stored in the motion information memory 108 for use in the prediction processing of later coding target blocks. The selection-block information 31 is fed to the variable-length coding unit 104. The prediction unit 101 is described in detail later.
The motion information memory 108 temporarily stores the motion information 18 as reference motion information 19. Fig. 4 shows an example of the structure of the motion information memory 108. As shown in Fig. 4, the motion information memory 108 holds the reference motion information 19 in frame units, the reference motion information 19 forming a motion information frame 25. The motion information 18 of already coded blocks is sequentially input to the motion information memory 108; as a result, the motion information memory 108 holds a plurality of motion information frames 25 for different coding times.
The reference motion information 19 is held in the motion information frame 25 in units of fixed blocks (for example, 4×4 pixel blocks). The motion vector block 28 shown in Fig. 4 denotes a pixel block of the same size as the coding target block, the available blocks, and the selection block; for example, it is a 16×16 pixel block. In the motion vector block 28, a motion vector is assigned, for example, to each 4×4 pixel block. Inter prediction processing that uses a motion vector block is called motion vector block prediction processing. When generating the motion information 18, the prediction unit 101 reads the reference motion information 19 held by the motion information memory 108. The motion information 18 possessed by an available block, described later, refers to the reference motion information 19 held for the region of the motion information memory 108 in which that available block is located.
The motion information memory 108 is not limited to holding the reference motion information 19 in units of 4×4 pixel blocks; it may hold the reference motion information 19 in other pixel block units. For example, the pixel block unit for the reference motion information 19 may be one pixel or a 2×2 pixel block. Furthermore, the shape of the pixel block for the reference motion information 19 is not limited to a square and may be an arbitrary shape.
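As a minimal sketch of this per-4×4 bookkeeping (names and the dictionary representation are assumptions, not the patent's data structure), a coded block's motion information can be fanned out to every 4×4 cell it covers:

```python
GRID = 4  # reference motion information is kept per 4x4 cell

def store_motion_info(frame_store, block_x, block_y, block_size, motion):
    """Write one coded block's motion information into every 4x4 cell
    the block covers, so later blocks can look it up cell by cell."""
    for cy in range(block_y // GRID, (block_y + block_size) // GRID):
        for cx in range(block_x // GRID, (block_x + block_size) // GRID):
            frame_store[(cx, cy)] = motion

store = {}
store_motion_info(store, 16, 0, 16, {"mv": (2, -1), "ref": 0})
# the 16x16 block at (16, 0) fills the 16 cells (4..7, 0..3)
```

Storing at a finer granularity than the coding block lets differently sized later blocks query exactly the region they overlap.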
The available-block acquiring unit 109 of Fig. 1 obtains the reference motion information 19 from the motion information memory 108 and, according to the obtained reference motion information 19, selects from a plurality of already coded blocks the available blocks usable in the prediction processing of the prediction unit 101. The selected available blocks are supplied as available-block information 30 to the prediction unit 101 and the variable-length coding unit 104. The already coded blocks that are candidates for selection as available blocks are called motion reference blocks. The motion reference blocks and the method of selecting available blocks are described in detail later.
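The selection rule can be sketched as a filter over the motion reference blocks. This is an illustrative reading of the two conditions stated earlier (the block carries motion information, and available blocks hold mutually different motion information); the dictionary fields are assumptions:

```python
def select_available_blocks(motion_reference_blocks):
    """Keep only motion reference blocks that (a) carry motion
    information at all (intra-coded blocks have none) and (b) differ
    from every candidate already kept, since available blocks must
    hold mutually different motion information."""
    available, seen = [], set()
    for block in motion_reference_blocks:
        info = block.get("motion")
        if info is None:
            continue  # no motion information: cannot be an available block
        key = (info["mv"], info["ref"])
        if key in seen:
            continue  # identical motion information: drop the duplicate
        seen.add(key)
        available.append(block)
    return available
```

Dropping duplicates shrinks the candidate count, which in turn shortens the code table used for the selection information.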
In addition to the transform coefficient information 13, the variable-length coding unit 104 receives the selection-block information 31 from the prediction unit 101, coding parameters such as prediction information and quantization parameters from the encoding control unit 150, and the available-block information 30 from the available-block acquiring unit 109. The variable-length coding unit 104 entropy-codes (for example, by fixed-length coding, Huffman coding, or arithmetic coding) the quantized transform coefficients 13, the selection-block information 31, the available-block information 30, and the coding parameters to generate the coded data 14. The coding parameters include the selection-block information 31 and the prediction information, as well as all parameters needed for decoding, such as information relating to the transform coefficients and information relating to quantization. The generated coded data 14 is temporarily stored in the output buffer 120 and fed to a storage system or transmission system, not shown.
Fig. 5 shows the processing procedure for the input image signal 10. As shown in Fig. 5, first, the prediction unit 101 generates the predicted image signal 11 (step S501). In generating the predicted image signal 11 in step S501, one of the available blocks is chosen as the selected block as described later, and the predicted image signal 11 is produced using the selected-block information 31, the motion information held by the selected block, and the reference image signal 17. The subtracter 102 calculates the difference between the predicted image signal 11 and the input image signal 10 to generate the prediction error image signal 12 (step S502).
Next, the transform/quantization unit 103 applies an orthogonal transform and quantization to the prediction error image signal 12 to generate the transform coefficient information 13 (step S503). The transform coefficient information 13 and the selected-block information 31 are supplied to the variable-length coding unit 104, which applies variable-length coding to generate the coded data 14 (step S504). In step S504, the code table is switched according to the selected-block information 31 so that the code table has as many entries as there are available blocks, and the selected-block information 31 is variable-length coded. The bit stream 20 of coded data is supplied to a storage system or transmission path (not shown).
The transform coefficient information 13 generated in step S503 is inversely quantized by the inverse quantization/inverse transform unit 105 and subjected to an inverse transform, yielding the decoded prediction error signal 15 (step S505). The decoded prediction error signal 15 is added to the reference image signal 17 used in step S501 to form the local decoded image signal 16 (step S506), which is stored in the frame memory 107 as a reference image signal (step S507).
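Steps S501 to S507 above can be outlined in code. This is a schematic sketch only: the orthogonal transform, quantization, and entropy-coding stages are collapsed into a simple scalar quantizer, and all function and variable names are illustrative assumptions, not part of the embodiment.

```python
def encode_block(input_block, predicted_block, quant_step=2):
    """Schematic outline of steps S502-S506: residual, (stand-in)
    transform/quantization, inverse quantization, reconstruction."""
    # S502: prediction error image signal 12
    error = [x - p for x, p in zip(input_block, predicted_block)]
    # S503: stand-in for orthogonal transform + quantization
    coeffs = [round(e / quant_step) for e in error]
    # S505: inverse quantization / inverse transform
    decoded_error = [c * quant_step for c in coeffs]
    # S506: local decoded image signal 16 = prediction + decoded error
    local_decoded = [p + d for p, d in zip(predicted_block, decoded_error)]
    return coeffs, local_decoded
```

The reconstructed block, not the original input, is what is stored as the reference image signal in step S507, which keeps the encoder and decoder synchronized.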
Next, each component of the image coding unit 100 described above is explained in detail.
The image coding unit 100 of Fig. 1 prepares a plurality of prediction modes in advance; the method of generating the predicted image signal 11 and the motion compensation block size differ from mode to mode. The methods by which the prediction unit 101 generates the predicted image signal 11 are broadly divided into intra prediction (intra-frame prediction), which generates a predicted image using the reference image signal 17 of the encoding target frame (or field), and inter prediction (inter-frame prediction), which generates a predicted image using the reference image signal 17 of one or more already-encoded reference frames (or reference fields). The prediction unit 101 selectively switches between intra prediction and inter prediction to generate the predicted image signal 11 of the encoding target block.
Fig. 6A shows an example of inter prediction performed by the motion compensation unit 113. In inter prediction, as shown in Fig. 6A, the predicted image signal 11 is generated from the reference image signal 17 of the block 24, which lies at a position spatially shifted by the motion vector 18a included in the motion information 18 from the block 23 (also called the prediction block) located in the already-encoded reference frame one frame earlier at the same position as the encoding target block. That is, in generating the predicted image signal 11, the position (coordinates) of the encoding target block and the reference image signal 17 of the block 24 in the reference frame determined by the motion vector 18a included in the motion information 18 are used. In inter prediction, motion compensation with fractional-pixel precision (for example, 1/2-pixel or 1/4-pixel precision) is possible, and the values of interpolated pixels are generated by filtering the reference image signal 17. For example, in H.264, interpolation processing up to 1/4-pixel precision can be performed on the luminance signal. When motion compensation with 1/4-pixel precision is performed, the information amount of the motion information 18 is four times that of integer-pixel precision.
Inter prediction is not limited to using the reference frame one frame earlier as shown in Fig. 6A; as shown in Fig. 6B, any already-encoded reference frame can be used. When reference image signals 17 of a plurality of reference frames with different temporal positions are held, the information indicating which temporal position's reference image signal 17 was used to generate the predicted image signal 11 is expressed by a reference frame number. The reference frame number is included in the motion information 18. The reference frame number can be changed in units of regions (pictures, blocks, and so on); that is, a different reference frame can be used for each pixel block. As an example, when the already-encoded reference frame one frame earlier is used for prediction, the reference frame number of that region is set to 0, and when the already-encoded reference frame two frames earlier is used, the reference frame number of that region is set to 1. As another example, when the frame memory 107 holds only one frame's worth of reference image signal 17 (when the number of reference frames is 1), the reference frame number is always set to 0.
In inter prediction, a block size suitable for the encoding target block can be selected from among a plurality of motion compensation block sizes. That is, the encoding target block may be divided into a plurality of small pixel blocks, and motion compensation may be performed for each small pixel block. Figs. 7A to 7C show motion compensation block sizes in macroblock units, and Fig. 7D shows motion compensation block sizes in sub-block (pixel blocks of 8×8 pixels or smaller) units. As shown in Fig. 7A, when the encoding target block is 64×64 pixels, a 64×64, 64×32, 32×64, or 32×32 pixel block, for example, can be selected as the motion compensation block. As shown in Fig. 7B, when the encoding target block is 32×32 pixels, a 32×32, 32×16, 16×32, or 16×16 pixel block, for example, can be selected as the motion compensation block. As shown in Fig. 7C, when the encoding target block is 16×16 pixels, the motion compensation block can be set to a 16×16, 16×8, 8×16, or 8×8 pixel block, for example. As shown in Fig. 7D, when the encoding target block is 8×8 pixels, an 8×8, 8×4, 4×8, or 4×4 pixel block, for example, can be selected as the motion compensation block.
As described above, since the small pixel blocks (for example, 4×4 pixel blocks) in the reference frame used in inter prediction have motion information 18, the shape and motion vector of the optimal motion compensation block can be used according to the local characteristics of the input image signal 10. The macroblocks of Figs. 7A to 7D and the sub-macroblocks can be combined arbitrarily. When the encoding target block is a 64×64 pixel block as shown in Fig. 7A, each of the block sizes shown in Fig. 7B can be selected for each of the four 32×32 pixel blocks obtained by dividing the 64×64 pixel block, so blocks from 64×64 down to 16×16 pixels can be used hierarchically. Similarly, when the block sizes shown in Fig. 7D can also be selected, block sizes from 64×64 down to 4×4 can be used hierarchically.
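The hierarchical combination of block sizes described above can be viewed as repeated four-way splitting. The following toy enumeration merely illustrates the reachable square sizes; the splitting rule and the function itself are assumptions made for illustration, not part of the embodiment.

```python
def partition_sizes(size, min_size=4):
    """List the square motion compensation block sizes reachable by
    repeatedly splitting a size x size block into four quadrants."""
    sizes = []
    while size >= min_size:
        sizes.append((size, size))
        size //= 2
    return sizes
```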
Next, the motion reference blocks are explained with reference to Figs. 8A to 8F.
A motion reference block is an already-encoded region (block) in the encoding target frame or the reference frame, selected by a method agreed upon between the image coding apparatus of Fig. 1 and the image decoding apparatus described later. Fig. 8A shows an example of the arrangement of motion reference blocks selected according to the position of the encoding target block. In the example of Fig. 8A, nine motion reference blocks A to D and TA to TE are selected from the already-encoded regions of the encoding target frame and the reference frame. Specifically, from the encoding target frame, the four blocks A, B, C, and D adjacent to the left, top, top-right, and top-left of the encoding target block are selected as motion reference blocks; from the reference frame, the block TA at the same position as the encoding target block and the four pixel blocks TB, TC, TD, and TE adjacent to its right, bottom, left, and top are selected as motion reference blocks. In the present embodiment, motion reference blocks selected from the encoding target frame are called spatial-direction motion reference blocks, and motion reference blocks selected from the reference frame are called temporal-direction motion reference blocks. The symbol p attached to each motion reference block in Fig. 8A represents the index of the motion reference block. The indices are numbered in a fixed order over the spatial-direction and temporal-direction motion reference blocks, but this is not limiting; as long as the indices do not overlap, any order may be used. For example, the temporal-direction and spatial-direction motion reference blocks may be numbered in mixed order.
The spatial-direction motion reference blocks are not limited to the example shown in Fig. 8A; as shown in Fig. 8B, they may be the blocks (for example, macroblocks or sub-macroblocks) to which the pixels a, b, c, and d adjacent to the encoding target block belong. In this case, the relative positions (dx, dy) from the top-left pixel e of the encoding target block to the pixels a, b, c, and d are set as shown in Fig. 8C. Here, in the examples of Figs. 8A and 8B, the macroblock is depicted as an N×N pixel block.
Alternatively, as shown in Fig. 8D, all of the blocks A1 to A4, B1, B2, C, and D adjacent to the encoding target block may be selected as spatial-direction motion reference blocks. In the example of Fig. 8D, the number of spatial-direction motion reference blocks is 8.
The temporal-direction motion reference blocks TA to TE may partially overlap one another as shown in Fig. 8E, or may be arranged apart from one another as shown in Fig. 8F. In Fig. 8E, the overlapping part of the temporal-direction motion reference blocks TA and TB is hatched. The temporal-direction motion reference blocks need not be the block at the position corresponding to the encoding target block (the collocated position) and the blocks around it; they may be arranged at arbitrary positions in the reference frame. For example, a block in the reference frame determined from the position of the reference block and the motion information 18 held by an arbitrary already-encoded block adjacent to the encoding target block may be chosen as the central block (for example, block TA), and this central block and the blocks around it may be chosen as the temporal-direction motion reference blocks. The temporal-direction reference blocks may also be arranged at unequal intervals from the central block.
In any of the above cases, as long as the number and positions of the spatial-direction and temporal-direction motion reference blocks are determined in advance in both the coding apparatus and the decoding apparatus, the number and positions of the motion reference blocks can be set arbitrarily. The size of a motion reference block need not be the same as that of the encoding target block. For example, as shown in Fig. 8D, the size of a motion reference block may be larger or smaller than that of the encoding target block. The motion reference block is not limited to a square shape and may be set to an arbitrary shape, such as a rectangle, and to an arbitrary size.
Motion reference blocks and available blocks may also be arranged in only one of the temporal direction and the spatial direction. In addition, temporal-direction motion reference blocks and available blocks may be arranged according to the slice type, such as P slice or B slice, and spatial-direction motion reference blocks and available blocks may likewise be arranged according to the slice type.
Fig. 9 shows the method by which the available-block acquiring unit 109 selects available blocks from the motion reference blocks. An available block is a block whose motion information can be applied to the encoding target block, and the available blocks have mutually different motion information. The available-block acquiring unit 109 refers to the reference motion information 19, determines for each motion reference block whether it is an available block according to the method shown in Fig. 9, and outputs the available-block information 30.
As shown in Fig. 9, first, the motion reference block with index p = 0 is selected (S800). In the explanation of Fig. 9, the motion reference blocks are assumed to be processed in order of index p from 0 to M-1 (M denotes the number of motion reference blocks). It is also assumed that the availability determination has already finished for the motion reference blocks with indices 0 to p-1, and that the motion reference block with index p is the one whose availability is currently being determined.
The available-block acquiring unit 109 determines whether the motion reference block p has motion information 18, that is, whether at least one motion vector is assigned to it (S801). When the motion reference block p has no motion vector, that is, when the temporal-direction motion reference block p is a block in an I slice having no motion information, or when all the small pixel blocks in the temporal-direction motion reference block p have been intra-prediction coded, the process proceeds to step S805. In step S805, the motion reference block p is determined not to be an available block.
When the motion reference block p is determined to have motion information in step S801, the process proceeds to step S802. The available-block acquiring unit 109 selects a motion reference block q that has already been determined to be an available block (available block q); here, q is a value smaller than p. The available-block acquiring unit 109 then compares the motion information 18 of the motion reference block p with the motion information 18 of the available block q, and determines whether they have the same motion information (S803). When the motion information 18 of the motion reference block p is determined to be identical to the motion information 18 of the motion reference block q already selected as an available block, the process proceeds to step S805, and the motion reference block p is determined not to be an available block.
When, for every available block q satisfying q &lt; p, the motion information 18 of the motion reference block p is determined in step S803 not to be identical to the motion information 18 of the available block q, the process proceeds to step S804. In step S804, the available-block acquiring unit 109 determines the motion reference block p to be an available block.
When the motion reference block p has been determined either to be or not to be an available block, the available-block acquiring unit 109 determines whether the availability determination has been performed for all motion reference blocks (S806). When there is a motion reference block for which the availability determination has not yet been performed, for example when p &lt; M-1, the process proceeds to step S807. The available-block acquiring unit 109 then increments the index p by 1 (step S807) and performs steps S801 to S806 again. When it is determined in step S806 that the availability determination has been performed for all motion reference blocks, the availability determination processing ends.
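The loop of steps S800 to S807 can be sketched as follows. The representation of each motion reference block as either None (no motion information) or a comparable motion-information value is an assumption made for illustration.

```python
def find_available_blocks(motion_ref_blocks):
    """Steps S800-S807: keep each motion reference block that has motion
    information differing from that of every block kept so far."""
    available = []  # indices p judged to be available blocks
    for p, info in enumerate(motion_ref_blocks):          # S800 / S807
        if info is None:                                  # S801 -> S805
            continue
        # S802 / S803: compare with every available block q (q < p)
        if any(info == motion_ref_blocks[q] for q in available):
            continue                                      # S805
        available.append(p)                               # S804
    return available
```

Blocks without motion information and blocks duplicating an earlier available block both drop out, so the surviving candidates all carry mutually different motion information.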
By performing the availability determination described above, each motion reference block is judged to be either an available block or an unavailable block. The available-block acquiring unit 109 generates the available-block information 30 containing the information on the available blocks. By selecting the available blocks from the motion reference blocks in this way, the amount of information in the available-block information 30 is reduced, and as a result the amount of coded data 14 can be reduced.
Fig. 10 shows an example of the result of performing the availability determination on the motion reference blocks shown in Fig. 8A. In Fig. 10, two spatial-direction motion reference blocks (p = 0, 1) and two temporal-direction motion reference blocks (p = 5, 8) are determined to be available blocks. Fig. 11 shows an example of the available-block information 30 corresponding to the example of Fig. 10. As shown in Fig. 11, the available-block information 30 contains the index of each motion reference block, its availability, and the motion reference block name. In the example of Fig. 11, the blocks with indices p = 0, 1, 5, 8 are available blocks, and the number of available blocks is 4. The prediction unit 101 selects one optimal available block from these as the selected block, and outputs the information on the selected block (selected-block information) 31. The selected-block information 31 contains the number of available blocks and the index value of the selected available block. For example, when the number of available blocks is 4, the variable-length coding unit 104 codes the corresponding selected-block information 31 using a code table with a maximum of 4 entries.
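To picture how a code table with exactly as many entries as available blocks keeps the selection information short, consider a truncated-unary table. This particular construction is an illustrative assumption; the embodiment does not fix the code table's design.

```python
def selection_codeword(index, num_available):
    """Truncated-unary codeword for the selected block's position among
    num_available candidates (illustrative code table only)."""
    assert 0 <= index < num_available
    if num_available == 1:
        return ""                      # a single candidate needs no bits
    if index < num_available - 1:
        return "1" * index + "0"
    return "1" * index                 # last entry omits the closing 0
```

With 4 available blocks the codewords are "0", "10", "110", and "111"; fewer available blocks mean shorter codewords on average, which is why pruning duplicates in the availability determination saves bits.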
In step S801 of Fig. 9, the available-block acquiring unit 109 may also determine the motion reference block p not to be an available block when at least one block among the blocks in the temporal-direction motion reference block p has been intra-prediction coded. That is, the process may proceed to step S802 only when all the blocks in the temporal-direction motion reference block p have been coded by inter prediction.
Figs. 12A to 12F show examples in which, in the motion information comparison of step S803, the motion information 18 of the motion reference block p is determined to be identical to the motion information 18 of the available block q. Each of these figures shows a plurality of hatched blocks and two white blocks. In Figs. 12A to 12F, for simplicity of explanation, only the motion information 18 of the two white blocks is compared, and the hatched blocks are not considered. One of the two white blocks is the motion reference block p, and the other is the motion reference block q already determined to be available (available block q). Unless otherwise specified, either of the two white blocks may be the motion reference block p.
Fig. 12A shows an example in which the motion reference block p and the available block q are both spatial-direction blocks. In the example of Fig. 12A, if the motion information 18 of blocks A and B is identical, the motion information 18 is determined to be identical. The sizes of blocks A and B need not be the same.
Fig. 12B shows an example in which one of the motion reference block p and the available block q is the spatial-direction block A and the other is the temporal-direction block TB. In Fig. 12B, the temporal-direction block TB contains one block having motion information. If the motion information 18 of the temporal-direction block TB is identical to the motion information 18 of the spatial-direction block A, the motion information 18 is determined to be identical. The sizes of blocks A and TB need not be the same.
Fig. 12C shows another example in which one of the motion reference block p and the available block q is the spatial-direction block A and the other is the temporal-direction block TB. Fig. 12C shows the case where the temporal-direction block TB is divided into a plurality of small blocks, several of which have motion information 18. In the example of Fig. 12C, all the small blocks having motion information 18 have identical motion information 18; if this motion information 18 is identical to the motion information 18 of the spatial-direction block A, the motion information 18 is determined to be identical. The sizes of blocks A and TB need not be the same.
Fig. 12D shows an example in which the motion reference block p and the available block q are both temporal-direction blocks. In this case, if the motion information 18 of blocks TB and TE is identical, the motion information 18 is determined to be identical.
Fig. 12E shows another example in which the motion reference block p and the available block q are both temporal-direction blocks. Fig. 12E shows the case where the temporal-direction blocks TB and TE are each divided into a plurality of small blocks, and each contains a plurality of small blocks having motion information 18. In this case, the motion information 18 is compared for each pair of corresponding small blocks in the two blocks, and if the motion information 18 is identical for all the small blocks, the motion information 18 of block TB is determined to be identical to the motion information 18 of block TE.
Fig. 12F shows yet another example in which the motion reference block p and the available block q are both temporal-direction blocks. Fig. 12F shows the case where the temporal-direction block TE is divided into a plurality of small blocks, several of which have motion information 18. When all the motion information 18 of block TE is identical and is also identical to the motion information 18 of block TD, the motion information 18 of blocks TD and TE is determined to be identical.
In this way, in step S803, it is determined whether the motion information 18 of the motion reference block p and the motion information 18 of the available block q are identical. In the examples of Figs. 12A to 12F, the number of available blocks q compared with the motion reference block p is 1 for ease of explanation, but when the number of available blocks q is 2 or more, the motion information 18 of the motion reference block p may be compared with the motion information 18 of each available block q. When the scaling described later is applied, the scaled motion information 18 serves as the motion information 18 in the above explanation.
The determination that the motion information of the motion reference block p and the motion information of the available block q are identical is not limited to the case where the motion vectors included in the motion information match exactly. For example, as long as the norm of the difference between the two motion vectors is within a prescribed range, the motion information of the motion reference block p and that of the available block q may be regarded as substantially identical.
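Such a relaxed comparison might look as follows. The choice of the Euclidean norm and the threshold value are assumptions, since the embodiment only requires the norm of the difference to lie within a prescribed range.

```python
def same_motion(mv_p, mv_q, ref_p, ref_q, threshold=1.0):
    """Regard two motion-information entries as identical when they use
    the same reference frame number and the norm of the motion-vector
    difference is within the prescribed threshold."""
    if ref_p != ref_q:
        return False
    dx, dy = mv_p[0] - mv_q[0], mv_p[1] - mv_q[1]
    return (dx * dx + dy * dy) ** 0.5 <= threshold
```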
Fig. 13 shows a more detailed structure of the prediction unit 101. As described above, the prediction unit 101 receives the available-block information 30, the reference motion information 19, and the reference image signal 17, and outputs the predicted image signal 11, the motion information 18, and the selected-block information 31. As shown in Fig. 13, the motion information selection unit 118 includes a spatial-direction motion information acquiring unit 110, a temporal-direction motion information acquiring unit 111, and a motion information switch 112.
The spatial-direction motion information acquiring unit 110 receives the available-block information 30 and the reference motion information 19 of the spatial-direction motion reference blocks. The spatial-direction motion information acquiring unit 110 outputs, as motion information 18A, the motion information held by each available block located in the spatial direction together with the index value of that available block. When the information shown in Fig. 11 is input as the available-block information 30, the spatial-direction motion information acquiring unit 110 generates two motion information outputs 18A, each containing an available block and the motion information 19 held by that available block.
The temporal-direction motion information acquiring unit 111 receives the available-block information 30 and the reference motion information 19 of the temporal-direction motion reference blocks. The temporal-direction motion information acquiring unit 111 outputs, as motion information 18B, the motion information 19 held by each available temporal-direction motion reference block determined by the available-block information 30, together with the index value of that available block. A temporal-direction motion reference block is divided into a plurality of small pixel blocks, each of which has motion information 19. As shown in Fig. 14, the motion information 18B output by the temporal-direction motion information acquiring unit 111 contains the group of motion information 19 held by the small pixel blocks in each available block. When the motion information 18B contains a group of motion information 19, motion-compensated prediction can be performed on the encoding target block in units of the small pixel blocks obtained by dividing the encoding target block. When the information shown in Fig. 11 is input as the available-block information 30, the temporal-direction motion information acquiring unit 111 generates two motion information outputs 18B, each containing an available block and the group of motion information 19 held by that available block.
The temporal-direction motion information acquiring unit 111 may also obtain the mean value or a representative value of the motion vectors included in the motion information 19 held by the small pixel blocks, and output that mean or representative value as the motion information 18B.
The motion information switch 112 of Fig. 13 selects one suitable available block as the selected block on the basis of the motion information 18A and 18B output from the spatial-direction motion information acquiring unit 110 and the temporal-direction motion information acquiring unit 111, and outputs the motion information 18 (or the group of motion information 18) corresponding to the selected block to the motion compensation unit 113. The motion information switch 112 also outputs the selected-block information 31 on the selected block. The selected-block information 31 contains the index p, the motion reference block name, or the like, and is also simply called selection information. The selected-block information 31 is not limited to the index p and the motion reference block name; it may be any information from which the position of the selected block can be determined.
The motion information switch 112 chooses as the selected block, for example, the available block that minimizes the coding cost derived from the cost equation shown in the following Equation 1.
[Expression 1]
J=D+λ×R  (1)
Here, J denotes the coding cost, and D denotes the coding distortion, which is the sum of squared errors between the input image signal 10 and the reference image signal 17. R denotes the code amount estimated by provisional coding, and λ denotes a Lagrange undetermined coefficient determined by the quantization width or the like. Instead of Equation 1, the coding cost J may be computed from the code amount R alone or the coding distortion D alone, or a cost function may be formed as Equation 1 using approximations of the code amount R or the coding distortion D. The coding distortion D is not limited to the sum of squared errors; it may be the sum of absolute differences (SAD) of the prediction errors. As the code amount R, only the code amount related to the motion information 18 may be used. Moreover, the choice is not limited to the available block that minimizes the coding cost; one available block whose coding cost lies within a certain range above the minimum may also be chosen as the selected block.
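A minimal sketch of the cost-based choice of Equation 1, assuming the distortion D and code amount R of each candidate have already been measured by provisional coding (the candidate representation is an assumption for illustration):

```python
def choose_selected_block(candidates, lam):
    """Return the index of the available block minimizing the coding
    cost J = D + lam * R (Equation 1).
    candidates: iterable of (index, distortion_D, code_amount_R)."""
    best_index, best_cost = None, float("inf")
    for index, d, r in candidates:
        j = d + lam * r
        if j < best_cost:
            best_index, best_cost = index, j
    return best_index
```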
The motion compensation unit 113 derives, from the motion information (or the group of motion information) held by the selected block chosen by the motion information selection unit 118, the position of the pixel block to be extracted from the reference image signal 17 as the predicted image signal 11. When a group of motion information is input to the motion compensation unit 113, the motion compensation unit 113 divides the pixel block to be extracted from the reference image signal 17 as the predicted image signal 11 into small pixel blocks (for example, 4×4 pixel blocks), applies the corresponding motion information 18 to each of these small pixel blocks, and thereby obtains the predicted image signal 11 from the reference image signal 17. The position of the block from which the predicted image signal 11 is obtained is, for example as shown in Fig. 4A, the position shifted in the spatial direction from the small pixel block according to the motion vector 18a included in the motion information 18.
For the motion compensation processing of the encoding target block, the same processing as the motion compensation processing of H.264 can be used. Here, the interpolation method with 1/4-pixel precision is explained as an example. In 1/4-pixel-precision interpolation, the motion vector points to an integer pixel position when each component of the motion vector is a multiple of 4. Otherwise, the motion vector points to a predicted position corresponding to an interpolated position with fractional precision.
[Expression 2]
x_pos=x+(mv_x/4)
y_pos=y+(mv_y/4)  (2)
Here, x and y denote the indices representing the vertical and horizontal position of the starting point (for example, the top-left vertex) of the prediction target block, and x_pos and y_pos represent the corresponding predicted position in the reference image signal 17. (mv_x, mv_y) represents a motion vector with 1/4-pixel precision. For the resulting pixel positions, predicted pixels are generated by fetching the corresponding pixel positions of the reference image signal 17 or by interpolation processing. Fig. 15 shows an example of predicted pixel generation in H.264. In Fig. 15, the squares labeled with capital Latin letters (hatched squares) represent pixels at integer positions, and the squares drawn with grid lines represent interpolated pixels at 1/2-pixel positions. The white squares represent interpolated pixels corresponding to 1/4-pixel positions. For example, in Fig. 15, the interpolation processing of the 1/2 pixels corresponding to the positions of the Latin letters b and h is computed by the following Equation 3.
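Equation 2 translates directly into code by splitting each 1/4-pel component into an integer part and a fractional phase; the helper below is an illustrative transcription, not part of the embodiment.

```python
def predicted_position(x, y, mv_x, mv_y):
    """Map a 1/4-pel motion vector (mv_x, mv_y) applied at the block
    start (x, y) to an integer reference position plus the fractional
    phase that selects the interpolation case (Equation 2)."""
    x_pos, x_frac = x + mv_x // 4, mv_x % 4
    y_pos, y_frac = y + mv_y // 4, mv_y % 4
    return (x_pos, y_pos), (x_frac, y_frac)
```

A phase of (0, 0) means the vector points at an integer pixel; any other phase selects a 1/2- or 1/4-pel interpolated position.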
[Expression 3]
b=(E-5×F+20×G+20×H-5×I+J+16)>>5
h=(A-5×C+20×G+20×M-5×R+T+16)>>5  (3)
Here, the Latin letters in Equation 3 and the following Equation 4 (for example, b, h, and G) denote the pixel values of the pixels given the same Latin letters in Fig. 15. Furthermore, ">>" denotes a right-shift operation, and ">>5" is equivalent to division by 32. That is, the interpolated pixel at a 1/2-pixel position is computed with a 6-tap FIR (Finite Impulse Response) filter (tap coefficients: (1, -5, 20, 20, -5, 1)/32).
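The half-pel filter of Equation 3 is a one-line computation per output pixel; the sketch below is a direct transcription (clipping of the result to the valid pixel range, which H.264 also performs, is omitted for brevity):

```python
def half_pel(p0, p1, p2, p3, p4, p5):
    """6-tap FIR half-pel interpolation with tap coefficients
    (1, -5, 20, 20, -5, 1)/32 and rounding offset 16 (Equation 3)."""
    return (p0 - 5 * p1 + 20 * p2 + 20 * p3 - 5 * p4 + p5 + 16) >> 5
```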
In Fig. 15, the interpolation processing of the 1/4 pixels corresponding to the positions of the Latin letters a and d is computed by the following Equation 4.
[Expression 4]
a=(G+b+1)>>1
d=(G+h+1)>>1  (4)
Thus, the interpolated pixel at a 1/4-pixel position is computed with a 2-tap averaging filter (tap coefficients: (1/2, 1/2)). The interpolated 1/2 pixel corresponding to the Latin letter j, located in the middle of four integer pixel positions, is generated using both directions: 6 taps in the vertical direction and 6 taps in the horizontal direction. Interpolated pixel values at the other pixel positions are generated by similar methods.
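Equation 4 is a two-tap rounding average; as a small sketch:

```python
def quarter_pel(p, q):
    """2-tap averaging filter with rounding for 1/4-pel positions
    (Equation 4): (p + q + 1) >> 1."""
    return (p + q + 1) >> 1
```

Applied to an integer pixel and an adjacent half-pel value (for example, G and b), this yields the quarter-pel samples such as a and d.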
The interpolation process is not limited to the examples of Equations 3 and 4; other interpolation coefficients may be used for generation. The interpolation coefficients may be fixed values provided by the encoding control unit 150, or the coefficients may be optimized for each frame based on the aforementioned coding cost, with the optimized coefficients used for generation.
In the present embodiment, the processing related to motion-vector-block prediction has been described for motion reference blocks in macroblock units (for example, 16×16 pixel blocks), but it is not limited to macroblocks; the prediction processing may also be performed in units of 16×8, 8×16, 8×8, 8×4, 4×8, or 4×4 pixel blocks. In that case, the information related to the motion vector block is derived per pixel block. The above prediction processing may also be performed in units larger than 16×16 pixel blocks, such as 32×32, 32×16, or 64×64 pixel blocks.
When a reference motion vector in the motion vector block is substituted as the motion vector of a small pixel block in the encoding target block, either (A) the negative of the reference motion vector (the reversed vector) may be substituted, or (B) a weighted average, median, maximum, or minimum computed from the reference motion vector corresponding to the small block and the reference motion vectors adjacent to it may be substituted.
Figure 16 outlines the operation of the prediction unit 101. As shown in Figure 16, first, the reference frame containing the temporal-direction motion reference blocks (the motion reference frame) is obtained (step S1501). The motion reference frame is typically the reference frame whose temporal distance to the encoding target frame is smallest, i.e., the temporally preceding reference frame; for example, it is the frame encoded immediately before the encoding target frame. Alternatively, any reference frame whose motion information 18 is saved in the motion information memory 108 may be obtained as the motion reference frame. Next, the spatial-direction motion information acquisition unit 110 and the temporal-direction motion information acquisition unit 111 each obtain the available-block information 30 output from the available-block acquisition unit 109 (step S1502). Next, the motion information selection switch 112 selects one of the available blocks as the selection block, for example according to Equation 1 (step S1503). The motion compensation unit 113 then copies the motion information of the selection block to the encoding target block (step S1504). At this time, if the selection block is a spatial-direction reference block, the motion information 18 it holds is copied to the encoding target block, as shown in Figure 17. If the selection block is a temporal-direction reference block, the group of motion information 18 it holds is copied to the encoding target block together with its position information. Motion compensation is then performed using the motion information 18 (or the group of motion information 18) copied by the motion compensation unit 113, and the predicted image signal 11 and the motion information 18 used in the motion-compensated prediction are output.
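Steps S1503 and S1504 above can be sketched schematically as follows. The data layout (a list of available blocks each carrying a motion record) and the cost function are illustrative assumptions, not the patent's exact definitions; the point is only that one available block is chosen and its motion information is copied to the target block.

```python
def select_and_copy(available_blocks, cost):
    """S1503: choose the selection block as the available block with the
    smallest cost (e.g. the coding cost of Equation 1).
    S1504: return a copy of its motion information for the target block.

    available_blocks: non-empty list of dicts with a 'motion' entry.
    cost: function mapping a block to a number."""
    selected = min(available_blocks, key=cost)
    return dict(selected["motion"])  # copy, not a shared reference
```

A caller would then hand the returned motion information to the motion-compensation step.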
Figure 18 shows the variable-length encoding unit 104 in more detail. As shown in Figure 18, the variable-length encoding unit 104 includes a parameter encoding unit 114, a transform-coefficient encoding unit 115, a selection-block encoding unit 116, and a multiplexing unit 117. The parameter encoding unit 114 encodes the parameters required for decoding other than the transform coefficient information 13 and the selection block information 31, such as prediction mode information, block size information, and quantization parameter information, and generates encoded data 14A. The transform-coefficient encoding unit 115 encodes the transform coefficient information 13 and generates encoded data 14B. The selection-block encoding unit 116 encodes the selection block information 31 with reference to the available-block information 30 and generates encoded data 14C.
As shown in Figure 19, when the available-block information 30 contains an index and the availability of the motion reference block corresponding to that index, the unavailable motion reference blocks are excluded from the predefined set of motion reference blocks, and only the available motion reference blocks are converted into the syntax element (stds_idx). In Figure 19, 5 of the 9 motion reference blocks are unavailable, so the syntax values stds_idx are assigned sequentially, starting from 0, to the 4 motion reference blocks remaining after these 5 are excluded. In this example the selection block information to be encoded is chosen not from 9 candidates but from 4 available blocks, so the amount of code (number of bins) assigned is reduced on average.
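The renumbering of Figure 19 can be sketched as below: unavailable candidates are skipped in the predefined order and the remaining blocks receive consecutive stds_idx values starting from 0. The function name and the dict return type are illustrative assumptions.

```python
def assign_stds_idx(availability):
    """availability: list of booleans over the predefined motion reference
    blocks, in their fixed order. Returns a mapping from the candidate's
    position in that order to its assigned stds_idx value; unavailable
    candidates get no entry."""
    mapping = {}
    next_idx = 0
    for pos, is_available in enumerate(availability):
        if is_available:
            mapping[pos] = next_idx
            next_idx += 1
    return mapping
```

With 9 candidates of which 5 are unavailable, as in Figure 19, this yields stds_idx values 0 through 3 for the 4 available blocks.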
Figure 20 shows an example of a code table giving the syntax element stds_idx and the binarization (bin) of stds_idx. As shown in Figure 20, the fewer available motion reference blocks there are, the smaller the average number of bins required to encode stds_idx. For example, when the number of available blocks is 4, stds_idx can be represented with at most 3 bins. The binarization of stds_idx may assign the same bin length to all stds_idx values for each available-block count, or may follow a binarization method determined by prior learning. Multiple binarization methods may also be prepared and switched adaptively for each encoding target block.
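One binarization consistent with the bound stated above (at most 3 bins for 4 available blocks) is truncated unary; since the code table of Figure 20 is not reproduced here, treating it as truncated unary is an illustrative assumption.

```python
def truncated_unary(stds_idx, num_available):
    """Truncated-unary binarization of stds_idx: a run of '1' bins
    terminated by '0', with the terminator dropped for the last symbol.
    With num_available == 4 the codes are 0, 10, 110, 111 (max 3 bins)."""
    if stds_idx < num_available - 1:
        return "1" * stds_idx + "0"
    return "1" * stds_idx
```

The resulting code is prefix-free, and its maximum length is num_available − 1 bins, so fewer available blocks directly shorten the longest codeword.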
Entropy coding (for example, fixed-length coding, Huffman coding, or arithmetic coding) can be applied in these encoding units 114, 115, and 116; the generated encoded data 14A, 14B, and 14C are multiplexed by the multiplexing unit 117 and output.
In the present embodiment, the description has assumed the example in which the frame encoded one frame before the encoding target frame is referred to as the reference frame. However, the motion vector may also be scaled (or normalized) using the motion vector and reference frame number in the reference motion information 19 held by the selection block, and the reference motion information 19 then applied to the encoding target block.
This scaling process is described concretely with reference to Figure 21. tc in Figure 21 denotes the temporal distance (the distance in POC, the number representing display order) between the encoding target frame and the motion reference frame, and is calculated by Equation 5 below. tr[i] in Figure 21 denotes the temporal distance between the motion reference frame and the frame i referred to by the selection block, and is calculated by Equation 6 below.
[Formula 5]
tc = Clip(−128, 127, DiffPicOrderCnt(curPOC, colPOC)) (5)
tr[i] = Clip(−128, 127, DiffPicOrderCnt(colPOC, refPOC)) (6)
Here, curPOC denotes the POC (Picture Order Count) of the encoding target frame, colPOC denotes the POC of the motion reference frame, and refPOC denotes the POC of the frame i referred to by the selection block. Clip(min, max, target) is a clipping function that outputs min when target is smaller than min, outputs max when target is larger than max, and outputs target otherwise. DiffPicOrderCnt(x, y) is a function that calculates the difference of two POC values.
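The POC-distance computation of Equations 5 and 6 can be written directly from the definitions just given. The POC values in the usage note are illustrative.

```python
def clip(lo, hi, target):
    """Clip(min, max, target): clamp target into [lo, hi]."""
    return max(lo, min(hi, target))

def diff_pic_order_cnt(x, y):
    """DiffPicOrderCnt(x, y): the difference of two POC values."""
    return x - y

def poc_distances(cur_poc, col_poc, ref_poc):
    """tc per Equation 5, tr per Equation 6, both clipped to [-128, 127]."""
    tc = clip(-128, 127, diff_pic_order_cnt(cur_poc, col_poc))
    tr = clip(-128, 127, diff_pic_order_cnt(col_poc, ref_poc))
    return tc, tr
```

For example, with curPOC = 8, colPOC = 4, and refPOC = 0, both distances are 4 pictures; distances beyond ±127 are clipped.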
If the motion vector of the selection block is MVr = (MVr_x, MVr_y) and the motion vector applied to the encoding target block is MV = (MV_x, MV_y), the motion vector MV is calculated by Equation 7 below.
[Formula 6]
MV_x = (MVr_x × tc + Abs(tr[i]/2)) / tr[i]
(7)
MV_y = (MVr_y × tc + Abs(tr[i]/2)) / tr[i]
Here, Abs(x) is a function that returns the absolute value of x. Through this motion-vector scaling, the motion vector MVr assigned to the selection block is transformed into the motion vector MV between the encoding target frame and the motion reference frame.
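Equation 7 can be sketched as below. Integer division with the rounding term Abs(tr/2) follows the formula as written; note that this sketch assumes nonnegative tr (Python's // floors toward negative infinity, which differs from truncating division for negative operands).

```python
def scale_mv(mvr_x, mvr_y, tc, tr):
    """Direct motion-vector scaling per Equation 7: the selection block's
    vector MVr, spanning temporal distance tr, is rescaled to the target
    block's distance tc, with rounding via Abs(tr/2). Assumes tr > 0."""
    rnd = abs(tr // 2)
    mv_x = (mvr_x * tc + rnd) // tr
    mv_y = (mvr_y * tc + rnd) // tr
    return mv_x, mv_y
```

For instance, halving the temporal distance (tc = 2, tr = 4) halves the vector.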
Another example related to motion-vector scaling is described below.
First, for each slice or each frame, a scaling coefficient (DistScaleFactor[i]) is obtained according to Equation 8 below for every temporal distance tr obtainable for the motion reference frame. The number of scaling coefficients equals the number of frames referred to by the selection blocks, that is, the number of reference frames.
[Formula 7]
tx = (16384 + Abs(tr[i]/2)) / tr[i]
(8)
DistScaleFactor[i] = Clip(−1024, 1023, (tc × tx + 32) >> 6)
The calculation of tx in Equation 8 may also be tabulated in advance.
At the scaling for each encoding target block, by using Equation 9 below, the motion vector MV can be calculated with only multiplication, addition, and shift operations.
[Formula 8]
MV_x = (DistScaleFactor[i] × MVr_x + 128) >> 8
(9)
MV_y = (DistScaleFactor[i] × MVr_y + 128) >> 8
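The table-based variant can be sketched as below: tx and DistScaleFactor are precomputed once per slice, after which each block's vector needs only a multiply, an add, and a shift. Placing the ">> 6" inside the clip follows the H.264-style definition of this coefficient, which is an assumption here; all values are illustrative.

```python
def clip(lo, hi, target):
    """Clamp target into [lo, hi]."""
    return max(lo, min(hi, target))

def dist_scale_factor(tc, tr):
    """Per-slice precomputation per Equation 8 (assumes tr > 0)."""
    tx = (16384 + abs(tr // 2)) // tr
    return clip(-1024, 1023, (tc * tx + 32) >> 6)

def scale_mv_fast(mvr_x, mvr_y, dsf):
    """Per-block scaling per Equation 9: multiply, add, shift only."""
    return ((dsf * mvr_x + 128) >> 8, (dsf * mvr_y + 128) >> 8)
```

When tc equals tr, the factor comes out to 256 and the vector passes through unchanged, matching the behavior of the direct formula.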
When such scaling has been applied, the prediction unit 101 and the available-block acquisition unit 109 both process the motion information 18 after scaling. When scaling has been applied, the reference frame referred to by the encoding target block becomes the motion reference frame.
Figure 22 shows the syntax structure used in the image encoding unit 100. As shown in Figure 22, the syntax mainly consists of three parts: high-level syntax 901, slice-level syntax 904, and macroblock-level syntax 907. The high-level syntax 901 holds the syntax information of the layers at and above the slice level. The slice-level syntax 904 holds the information needed for each slice, and the macroblock-level syntax 907 holds the data needed for each of the macroblocks shown in Figures 7A to 7D.
Each part contains more detailed syntax. The high-level syntax 901 contains sequence- and picture-level syntax such as the sequence parameter set syntax 902 and the picture parameter set syntax 903. The slice-level syntax 904 contains the slice header syntax 905, the slice data syntax 906, and so on. The macroblock-level syntax 907 contains the macroblock layer syntax 908, the macroblock prediction syntax 909, and so on.
Figures 23A and 23B show examples of the macroblock layer syntax. available_block_num in Figures 23A and 23B denotes the number of available blocks; when this value is greater than 1, the selection block information must be encoded. stds_idx denotes the selection block information, and stds_idx is encoded using the code table corresponding to the number of available blocks, as described above.
Figure 23A shows the syntax when the selection block information is encoded after mb_type. stds_idx is encoded when the mode indicated by mb_type is a predetermined size or a predetermined mode (TARGET_MODE) and available_block_num is greater than 1. For example, stds_idx is encoded when the block sizes for which the motion information of the selection block is usable are 64×64, 32×32, and 16×16 pixels, or in the case of direct mode.
Figure 23B shows the syntax when the selection block information is encoded before mb_type. When available_block_num is greater than 1, stds_idx is encoded. If available_block_num is 0, conventional motion compensation, as typified by H.264, is performed, so mb_type is encoded.
In the tables shown in Figures 23A and 23B, syntax elements not specified in the present invention may be inserted between the rows, and descriptions of other conditional branches may be included. Alternatively, the syntax table may be split into, or merged into, multiple tables. The same terms need not necessarily be used; they may be changed arbitrarily according to the mode of use. Furthermore, each syntax element described in this macroblock layer syntax may be changed so as to be explicitly specified in the macroblock data syntax described later.
Moreover, by using the information of stds_idx, the information of mb_type can be reduced. Figure 24A shows the code table corresponding to mb_type for a B slice in H.264. N in Figure 24A is a value representing the size of the encoding target block, such as 16, 32, or 64, and M is half of N. Thus, when mb_type is 4 to 21, the encoding target block is a rectangular block. L0, L1, and Bi in Figure 24A denote unidirectional prediction (List0 direction only), unidirectional prediction (List1 direction only), and bidirectional prediction, respectively. When the encoding target block is a rectangular block, mb_type contains, for each of the two rectangular blocks in the encoding target block, information indicating which of L0, L1, and Bi was used for prediction. B_Sub indicates that the above processing is performed for each of the four pixel blocks obtained by partitioning the macroblock. For example, when the encoding target block is a 64×64-pixel macroblock, an mb_type is further assigned and encoded for each of the four 32×32-pixel blocks obtained by partitioning this macroblock into four.
Here, when the selection block indicated by stds_idx is Spatial Left (the pixel block adjacent to the left of the encoding target block), the motion information of the left-adjacent pixel block is taken as the motion information of the encoding target block, so stds_idx has the same meaning as predicting the encoding target block with the horizontally long rectangular blocks of mb_type = 4, 6, 8, 10, 12, 14, 16, 18, 20 in Figure 24A. Similarly, when the selection block indicated by stds_idx is Spatial Up, the motion information of the block adjacent above the encoding target block is taken as the motion information of the encoding target block, so stds_idx has the same meaning as predicting with the vertically long rectangular blocks of mb_type = 5, 7, 9, 11, 13, 15, 17, 19, 21 in Figure 24A. Therefore, by using stds_idx, a reduced code table as shown in Figure 24B, in which the rows for mb_type = 4 to 21 of Figure 24A are removed, can be constructed. Similarly, starting from the code table corresponding to mb_type for a P slice in H.264 shown in Figure 24C, a code table with a reduced number of mb_type values, as shown in Figure 24D, can be constructed.
The information of stds_idx may also be included in the information of mb_type and encoded. Figure 25A shows a code table in which the information of stds_idx is included in the information of mb_type, as an example of the code table corresponding to mb_type for a B slice. B_STDS_X (X = 0, 1, 2) in Figure 25A denotes the modes corresponding to stds_idx; as many B_STDS_X entries as there are available blocks are appended (in Figure 25A, the number of available blocks is 3). Similarly, Figure 25B shows another example of mb_type, for a P slice. The explanation of Figure 25B is the same as for the B slice and is therefore omitted.
The order of mb_type values and the binarization method are not limited to the examples shown in Figures 25A and 25B; mb_type may be encoded according to another order and another binarization. B_STDS_X and P_STDS_X need not be contiguous and may be placed between the other mb_type values. The binarization may also be designed based on selection frequencies obtained by prior learning.
The present invention is also applicable to extended macroblocks, in which multiple macroblocks are grouped and motion-compensated prediction is performed. In the present embodiment, the scanning order of encoding may be arbitrary; for example, the present invention may be applied to line scanning, Z scanning, and so on.
As described above, the image encoding apparatus of the present embodiment selects available blocks from multiple motion reference blocks, generates information for identifying the motion reference block to be applied to the encoding target block, and encodes this information according to the number of selected available blocks. Therefore, with the image encoding apparatus of the present embodiment, even though the amount of code related to motion vector information is reduced, motion compensation can be performed in units of small pixel blocks finer than the encoding target block, so high coding efficiency can be achieved.
(Second Embodiment)
Figure 26 shows an image encoding apparatus according to the second embodiment of the present invention. In the second embodiment, mainly the parts and operations that differ from the first embodiment are described. As shown in Figure 26, in the image encoding unit 200 of the present embodiment, the structures of the prediction unit 201 and the variable-length encoding unit 204 differ from those of the first embodiment. As shown in Figure 27, the prediction unit 201 includes a first prediction unit 101 and a second prediction unit 202, and selectively switches between these first and second prediction units 101 and 202 to generate the predicted image signal 11. The first prediction unit 101 has the same structure as the prediction unit 101 of the first embodiment (Figure 1), and generates the predicted image signal 11 according to the prediction scheme (first prediction scheme) that performs motion compensation using the motion information 18 held by the selection block. The second prediction unit 202 generates the predicted image signal 11 according to a prediction scheme (second prediction scheme) that performs motion compensation on the encoding target block using one motion vector, as in H.264. The second prediction unit 202 generates the predicted image signal 11B from the input image signal 10 and the reference image signal 17 from the frame memory.
Figure 28 outlines the structure of the second prediction unit 202. As shown in Figure 28, the second prediction unit 202 has a motion information acquisition unit 205, which generates motion information 21 using the input image signal 10 and the reference image signal 17, and a motion compensation unit 113 (Figure 1), which generates the predicted image signal 11A using the reference image signal 17 and the motion information 21. The motion information acquisition unit 205 obtains the motion vector to be assigned to the encoding target block from the input image signal 10 and the reference image signal 17, for example by block matching. As the matching criterion, a value obtained by accumulating, for each pixel, the difference between the input image signal 10 and the interpolated image after matching is used.
The motion information acquisition unit 205 may also determine the optimal motion vector using a value obtained by transforming the difference between the predicted image signal 11 and the input image signal 10. The optimal motion vector may also be determined in consideration of the magnitude of the motion vector and the amount of code for the motion vector and the reference frame number, or by using Equation 1. The matching may be performed according to search range information provided from outside the image encoding apparatus, or may be performed hierarchically for each pixel precision. It is also possible to perform no search and to output the motion information provided by the encoding control unit 150 as the output 21 of the motion information acquisition unit 205.
The prediction unit 201 of Figure 27 further includes a prediction method selection switch 203, which selects and outputs one of the predicted image signal 11A from the first prediction unit 101 and the predicted image signal 11B from the second prediction unit 202. For example, the prediction method selection switch 203 obtains a coding cost for each of the predicted image signals 11A and 11B using the input image signal 10, for example according to Equation 1, selects whichever of 11A and 11B gives the smaller coding cost, and outputs it as the predicted image signal 11. The prediction method selection switch 203 also outputs, together with the motion information 18 and the selection block information 31, prediction switching information 32 indicating from which of the first prediction unit 101 and the second prediction unit 202 the output predicted image signal 11 came. The output motion information 18 is encoded by the variable-length encoding unit 204 and then multiplexed into the encoded data 14.
Figure 29 outlines the structure of the variable-length encoding unit 204. The variable-length encoding unit 204 shown in Figure 29 includes, in addition to the structure of the variable-length encoding unit 104 shown in Figure 18, a motion information encoding unit 217. The selection-block encoding unit 216 of Figure 29 differs from the selection-block encoding unit 116 of Figure 18 in that it encodes the prediction switching information 32 and generates encoded data 14D. When the first prediction unit 101 has performed the prediction, the selection-block encoding unit 216 further encodes the available-block information 30 and the selection block information 31; the encoded available-block information 30 and selection block information 31 are included in the encoded data 14D. When the second prediction unit 202 has performed the prediction, the motion information encoding unit 217 encodes the motion information 18 and generates encoded data 14E. The selection-block encoding unit 216 and the motion information encoding unit 217 each determine which of the first prediction unit 101 and the second prediction unit 202 performed the prediction from the prediction switching information 32, which indicates whether the predicted image was generated by motion-compensated prediction using the motion information of the selection block.
The multiplexing unit 117 receives the encoded data 14A, 14B, 14D, and 14E from the parameter encoding unit 114, the transform-coefficient encoding unit 115, the selection-block encoding unit 216, and the motion information encoding unit 217, and multiplexes the received encoded data 14A, 14B, 14D, and 14E.
Figures 30A and 30B each show an example of the macroblock layer syntax of the present embodiment. available_block_num in Figure 30A denotes the number of available blocks; when it is greater than 1, the selection-block encoding unit 216 encodes the selection block information 31. stds_flag is a flag indicating whether the motion information of the selection block was used as the motion information of the encoding target block in motion-compensated prediction, that is, a flag indicating which of the first prediction unit 101 and the second prediction unit 202 the prediction method selection switch 203 selected. When the number of available blocks is greater than 1 and stds_flag is 1, the motion information held by the selection block was used in the motion-compensated prediction. When stds_flag is 0, the motion information held by the selection block is not used; instead, as in H.264, the motion information 18 itself is encoded directly, or its prediction difference is encoded. stds_idx denotes the selection block information, and the code table corresponding to the number of available blocks is as described above.
Figure 30A shows the syntax when the selection block information is encoded after mb_type. stds_flag and stds_idx are encoded only when the mode indicated by mb_type is a predetermined mode or has a predetermined size. For example, stds_flag and stds_idx are encoded when the block sizes for which the motion information of the selection block is usable are 64×64, 32×32, and 16×16, or in the case of direct mode.
Figure 30B shows the syntax when the selection block information is encoded before mb_type. For example, when stds_flag is 1, mb_type need not be encoded; when stds_flag is 0, mb_type is encoded.
As described above, the image encoding apparatus of the second embodiment selectively switches between the first prediction unit 101 of the first embodiment and the second prediction unit 202, which uses a prediction scheme such as H.264, and compresses and encodes the input image signal so that the coding cost decreases. Therefore, the image encoding apparatus of the second embodiment further improves coding efficiency over the image encoding apparatus of the first embodiment.
(Third Embodiment)
Figure 31 outlines the image decoding apparatus of the third embodiment. As shown in Figure 31, this image decoding apparatus includes an image decoding unit 300, a decoding control unit 350, and an output buffer 308. The image decoding unit 300 is controlled by the decoding control unit 350. The image decoding apparatus of the third embodiment corresponds to the image encoding apparatus of the first embodiment; that is, the decoding process of the image decoding apparatus of Figure 31 is complementary to the encoding process of the image encoding apparatus of Figure 1. The image decoding apparatus of Figure 31 may be realized by hardware such as an LSI chip, or may be realized by causing a computer to execute an image decoding program.
The image decoding apparatus of Figure 31 includes an encoded sequence decoding unit 301, an inverse quantization/inverse transform unit 302, an adder 303, a frame memory 304, a prediction unit 305, a motion information memory 306, and an available-block acquisition unit 307. In the image decoding unit 300, encoded data 80 from a storage system or transmission system (not shown) is input to the encoded sequence decoding unit 301. This encoded data 80 corresponds, for example, to the encoded data 14 sent in multiplexed form from the image encoding apparatus of Figure 1.
In the present embodiment, the pixel block that is the decoding target (for example, a macroblock) is simply called the decoding target block. The image frame containing the decoding target block is called the decoding target frame.
The encoded sequence decoding unit 301 performs decoding based on syntax analysis according to the syntax, for every frame or every field. Specifically, the encoded sequence decoding unit 301 performs variable-length decoding of the encoded sequence of each syntax element in turn, and decodes the encoding parameters related to the decoding target block, including the transform coefficient information 33, the selection block information 61, the block size information, and the prediction mode information.
In the present embodiment, the decoding parameters include the transform coefficients 33, the selection block information 61, and the prediction information, and include all parameters required for decoding the information related to transform coefficients, the information related to quantization, and so on. The prediction information, the information related to transform coefficients, and the information related to quantization are input to the decoding control unit 350 as control information 71. The decoding control unit 350 provides the decoding control information 70, which includes the parameters required for decoding such as the prediction information and the quantization parameters, to each part of the image decoding unit 300.
As described later, the encoded sequence decoding unit 301 decodes the encoded data 80 to obtain the prediction information and the selection block information 61. The motion information 38, which includes the motion vector and the reference frame number, need not be decoded.
The transform coefficients 33 decoded by the encoded sequence decoding unit 301 are sent to the inverse quantization/inverse transform unit 302. The various pieces of information related to quantization decoded by the encoded sequence decoding unit 301, namely the quantization parameters and quantization matrices, are provided to the decoding control unit 350 and are loaded into the inverse quantization/inverse transform unit 302 at the time of inverse quantization. The inverse quantization/inverse transform unit 302 inversely quantizes the transform coefficients 33 according to the loaded information related to quantization, then performs an inverse transform (for example, an inverse discrete cosine transform) to obtain the prediction error signal 34. The inverse transform of the inverse quantization/inverse transform unit 302 of Figure 31 is the inverse of the transform of the transform/quantization unit of Figure 1. For example, if the image encoding apparatus (Figure 1) performed a wavelet transform, the inverse quantization/inverse transform unit 302 performs the corresponding inverse quantization and inverse wavelet transform.
The prediction error signal 34 restored by the inverse quantization/inverse transform unit 302 is input to the adder 303. The adder 303 adds the prediction error signal 34 to the predicted image signal 35 generated by the prediction unit 305 described later, and generates the decoded image signal 36. The generated decoded image signal 36 is output from the image decoding unit 300, temporarily stored in the output buffer 308, and then output according to the output timing managed by the decoding control unit 350. This decoded image signal 36 is also saved in the frame memory 304 as the reference image signal 37. The reference image signal 37 is read from the frame memory 304 sequentially for each frame or each field and input to the prediction unit 305.
The available-block acquisition unit 307 receives the reference motion information 39 from the motion information memory 306 described later, and outputs the available-block information 60. The operation of the available-block acquisition unit 307 is the same as that of the available-block acquisition unit 109 (Figure 1) described in the first embodiment.
The motion information memory 306 receives the motion information 38 from the prediction unit 305 and temporarily saves it as the reference motion information 39. Figure 4 shows an example of the motion information memory 306. The motion information memory 306 holds multiple motion information frames 26 of different times. The motion information 38, or groups of motion information 38, for which decoding has finished are stored as reference motion information 39 in the motion information frame 26 corresponding to the decoding time. In a motion information frame 26, the reference motion information 39 is saved, for example, in units of 4×4 pixel blocks. The reference motion information 39 held by the motion information memory 306 is read and referred to by the prediction unit 305 when generating the motion information 38 of the decoding target block.
Next, the motion reference blocks and available blocks of the present embodiment are described. A motion reference block is a candidate block selected from the already-decoded region according to a method predefined by both the image encoding apparatus described above and the image decoding apparatus. Figure 8A shows an example related to available blocks. In Figure 8A, 4 motion reference blocks in the decoding target frame and 5 motion reference blocks in the reference frame, 9 in total, are arranged. The motion reference blocks A, B, C, and D in the decoding target frame of Figure 8A are the blocks adjacent to the left, top, top-right, and top-left of the decoding target block, respectively. In the present embodiment, a motion reference block selected from the decoding target frame containing the decoding target block is called a spatial-direction motion reference block. The motion reference block TA in the reference frame is the pixel block at the same position as the decoding target block within the reference frame, and the pixel blocks TB, TC, TD, and TE adjoining this motion reference block TA are also selected as motion reference blocks. A motion reference block selected from the pixel blocks in the reference frame is called a temporal-direction motion reference block, and the frame in which the temporal-direction motion reference blocks lie is called the motion reference frame.
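The nine candidate positions of Figure 8A can be sketched in block coordinates as below. The spatial neighbours follow the left/top/top-right/top-left description in the text; the exact offsets of TB to TE around the collocated block TA are an assumption, since Figure 8A is not reproduced here.

```python
def motion_reference_blocks(bx, by):
    """Candidate motion reference blocks for a decoding target block at
    (bx, by), in block units. Returns (spatial, temporal) dicts mapping
    the labels of Fig. 8A to block coordinates; spatial positions are in
    the decoding target frame, temporal ones in the motion reference frame."""
    spatial = {"A": (bx - 1, by),      # left
               "B": (bx, by - 1),      # top
               "C": (bx + 1, by - 1),  # top-right
               "D": (bx - 1, by - 1)}  # top-left
    temporal = {"TA": (bx, by),        # collocated block
                "TB": (bx, by - 1), "TC": (bx - 1, by),
                "TD": (bx + 1, by), "TE": (bx, by + 1)}
    return spatial, temporal
```

Candidates falling outside the frame or lacking motion information would then be filtered out by the availability check, leaving the available blocks.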
Direction in space motion reference block is not limited to the example shown in Fig. 8 A, as shown in Figure 8 B, also can by with the pixel a of decoder object piece adjacency, b, c, the block of pixels under d is chosen as direction in space motion reference block.In this case, pixel a, b, c, d is with respect to the relative position (dx, dy) of the top left pixel in decoder object piece as shown in Figure 8 C.
In addition, as shown in Fig. 8 D, also can by with whole block of pixels A1~A4 of decoder object piece adjacency, B1, B2, C, D is chosen as direction in space motion reference block.In Fig. 8 D, the quantity of direction in space motion reference block is 8.
In addition, as shown in Fig. 8 E, time orientation motion reference block TA~TE both can be mutually partly overlapping, also can be separated from each other as shown in Figure 8 F.In addition, time orientation motion reference block does not need must be the piece of Collocate position and be positioned at its piece around, as long as can be the block of pixels of optional position in motion reference frame.For example, also can utilize and the movable information of the complete piece of decoding of decoder object piece adjacency, the indicated reference block of motion vector that movable information is comprised is chosen as the center (for example, piece TA) of motion reference block.In addition, the reference block of time orientation can not be also equally spaced to configure.
In the method for selection campaign reference block as described above, if the total relevant information of quantity and position with direction in space and time orientation motion reference block of the both sides of picture decoding apparatus and picture decoding apparatus, motion reference block can be from selecting quantity and position arbitrarily.In addition, not need must be the size identical with decoder object piece to the size of motion reference block.For example, as shown in Fig. 8 D, the size that the size of motion reference block both can ratio decoder object piece is large, also can ratio decoder object piece slight greatly, can be size arbitrarily.In addition, the shape of motion reference block is not limited to square shape, can be also rectangular shape.
Next, available blocks are described. An available block is a pixel block selected from the motion reference blocks, namely a pixel block whose motion information can be applied to the decoding target block. Available blocks have mutually different motion information. From, for example, the nine motion reference blocks in the decoding target frame and the reference frame shown in Fig. 8A, available blocks are selected by executing the available-block determination process shown in Fig. 9. Fig. 10 shows the result of executing this process: the hatched pixel blocks represent available blocks and the white blocks represent unavailable blocks. That is, two blocks are judged to be selected from the spatial-direction motion reference blocks and two from the temporal-direction motion reference blocks, four in total, as available blocks. The motion information selection unit 314 in the prediction unit 305 selects, according to the selection block information 61 received from the selected-block decoding unit 323, the single best available block as the selected block from among these available blocks arranged in the temporal and spatial directions.
Next, the available-block acquisition unit 307 is described. The available-block acquisition unit 307 has the same function as the available-block acquisition unit 109 of the 1st embodiment: it obtains the reference motion information 39 from the motion information memory 306 and outputs, for each motion reference block, available-block information 60 indicating whether the block is an available block or an unavailable block.
The operation of the available-block acquisition unit 307 is described with reference to the flowchart of Fig. 9. First, the available-block acquisition unit 307 determines whether the motion reference block (of index p) has motion information (step S801). That is, step S801 determines whether at least one small pixel block within motion reference block p has motion information. If motion reference block p is judged to have no motion information, that is, if it is a temporal-direction motion reference block in an I slice, which holds no motion information, or if all small pixel blocks within the temporal-direction motion reference block have been decoded by intra prediction, processing proceeds to step S805. In step S805, that motion reference block p is judged to be an unavailable block.
If motion reference block p is judged in step S801 to have motion information, the available-block acquisition unit 307 selects a motion reference block q that has already been judged available (called available block q) (step S802), where q is a value smaller than p. Then, for every such q, the available-block acquisition unit 307 compares the motion information of motion reference block p with the motion information of available block q and determines whether motion reference block p has the same motion information as available block q (step S803). If motion reference block p has the same motion information as some available block q, processing proceeds to step S805, where the available-block acquisition unit 307 judges motion reference block p to be an unavailable block. If motion reference block p has motion information different from that of all available blocks q, the available-block acquisition unit 307 judges motion reference block p to be an available block in step S804.
By performing the above available-block determination process on all motion reference blocks, it is judged for each motion reference block whether it is an available block or an unavailable block, and the available-block information 60 is generated. Fig. 11 shows an example of the available-block information 60. As shown in Fig. 11, the available-block information 60 contains the index p of each motion reference block and its availability. In Fig. 11, the motion reference blocks with indices p = 0, 1, 5 and 8 are selected as available blocks, and the number of available blocks is 4.
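The determination flow just described (steps S801 to S805) can be sketched as follows. Representing each motion reference block's motion information as a tuple, with `None` for a block carrying no motion information, is an illustrative assumption rather than the patent's data layout.

```python
# Sketch of the available-block determination of Fig. 9: a motion
# reference block is unavailable when it has no motion information
# (e.g. intra-coded, step S801 -> S805) or when its motion information
# duplicates that of a block already judged available (S803 -> S805);
# otherwise it is available (S804). Data shapes are assumptions.

def determine_available_blocks(motion_ref_blocks):
    """motion_ref_blocks: list of motion-info tuples, or None when a
    block has no motion information. Returns per-block availability."""
    availability = []
    available_infos = []
    for info in motion_ref_blocks:       # index p, in ascending order
        if info is None:                 # no motion information
            availability.append(False)
        elif info in available_infos:    # same as some available block q
            availability.append(False)
        else:                            # new, distinct motion information
            availability.append(True)
            available_infos.append(info)
    return availability

# Nine motion reference blocks as in Fig. 8A; four are judged available
# here (indices 0, 3, 5, 8 in this made-up input).
blocks = [(1, 0), (1, 0), None, (2, 3), (2, 3), (0, 0), None, (1, 0), (5, 5)]
print(determine_available_blocks(blocks))
```

Because duplicates are filtered out, every available block carries distinct motion information, which is what allows the selection information to be coded with a table that depends only on the number of available blocks.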
In step S801 of Fig. 9, the available-block acquisition unit 307 may also judge motion reference block p to be an unavailable block when at least one of the blocks within temporal-direction motion reference block p is an intra-prediction-coded block; in that case the process proceeds to step S802 only when all blocks within temporal-direction motion reference block p have been coded by inter prediction.
Figs. 12A to 12F show examples in which, in the motion information comparison of step S803, the motion information 38 of motion reference block p and the motion information 38 of available block q are judged identical. Each of Figs. 12A to 12F shows several hatched blocks and two white blocks. In Figs. 12A to 12F, for simplicity of explanation, the hatched blocks are disregarded and only the motion information 38 of the two white blocks is compared. One of the two white blocks is motion reference block p and the other is the motion reference block q already judged available (available block q). Unless otherwise stated, either of the two white blocks may be motion reference block p.
Fig. 12A shows an example in which motion reference block p and available block q are both spatial-direction blocks. In the example of Fig. 12A, if the motion information 38 of blocks A and B is the same, the motion information 38 is judged identical. The sizes of blocks A and B need not be equal.
Fig. 12B shows an example in which one of motion reference block p and available block q is the spatial-direction block A and the other is the temporal-direction block TB. In Fig. 12B, the temporal-direction block TB contains one block that has motion information. If the motion information 38 of the temporal-direction block TB is the same as the motion information 38 of the spatial-direction block A, the motion information 38 is judged identical. The sizes of blocks A and TB need not be equal.
Fig. 12C shows another example in which one of motion reference block p and available block q is the spatial-direction block A and the other is the temporal-direction block TB. Fig. 12C shows a case where the temporal-direction block TB is divided into a plurality of small blocks, several of which have motion information 38. In the example of Fig. 12C, all blocks having motion information 38 have the same motion information 38; if this motion information 38 is the same as the motion information 38 of the spatial-direction block A, the motion information 38 is judged identical. The sizes of blocks A and TB need not be equal.
Fig. 12D shows an example in which motion reference block p and available block q are both temporal-direction blocks. In this case, if the motion information 38 of blocks TB and TE is the same, the motion information 38 is judged identical.
Fig. 12E shows another example in which motion reference block p and available block q are both temporal-direction blocks. Fig. 12E shows a case where the temporal-direction blocks TB and TE are each divided into a plurality of small blocks, several of which have motion information 38. In this case, the motion information 38 is compared for each pair of corresponding small blocks within the blocks, and if it is identical for all small blocks, the motion information 38 of block TB is judged identical to the motion information 38 of block TE.
Fig. 12F shows yet another example in which motion reference block p and available block q are both temporal-direction blocks. Fig. 12F shows a case where the temporal-direction block TE is divided into a plurality of small blocks, several of which within TE have motion information 38. If all the motion information 38 of block TE is the same motion information 38 and is also the same as the motion information 38 held by block TD, blocks TD and TE are judged to have identical motion information 38.
In this way, step S803 judges whether the motion information 38 of motion reference block p and the motion information 38 of available block q are identical. In the examples of Figs. 12A to 12F the number of available blocks q compared with motion reference block p is one, but when there are two or more available blocks q, the motion information 38 of motion reference block p may be compared with the motion information 38 of each available block q. When the scaling described later is applied, the motion information 38 after scaling serves as the motion information 38 in the above explanation.
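The identity judgments of Figs. 12A to 12F can be sketched as one comparison function. Representing a block as a list of per-sub-block motion vectors (`None` where a sub-block has no motion information, a one-element list for an undivided block) is an assumed simplification; the patent does not prescribe this layout.

```python
# Sketch of the step-S803 identity judgment across the cases of
# Figs. 12A-12F. A block is a list of per-sub-block motion vectors,
# with None for sub-blocks lacking motion information; an undivided
# block is a one-element list. The representation is an assumption.

def motion_infos_match(block_p, block_q):
    infos_p = {mv for mv in block_p if mv is not None}
    infos_q = {mv for mv in block_q if mv is not None}
    # Identical when each side carries exactly one distinct motion
    # vector and the two agree (Figs. 12A-12D, 12F) ...
    if len(infos_p) == 1 and infos_p == infos_q:
        return True
    # ... or when the blocks match sub-block by sub-block (Fig. 12E).
    return len(block_p) == len(block_q) and block_p == block_q

print(motion_infos_match([(2, 1)], [None, (2, 1), (2, 1)]))    # Fig. 12C style
print(motion_infos_match([(2, 1), (0, 0)], [(2, 1), (3, 3)]))  # differs
```

A match sends the flow to step S805 (unavailable), since the candidate would add no new motion information to the available-block set.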
The judgment that the motion information of motion reference block p and the motion information of available block q are identical is not limited to the case where each motion vector contained in the motion information matches exactly. For example, if the norm of the difference between the two motion vectors is within a prescribed range, the motion information of motion reference block p and the motion information of available block q may be regarded as substantially identical.
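This relaxed comparison can be sketched as follows; the choice of an L1 norm and a threshold of 1 are illustrative assumptions, as the text fixes neither the norm nor the prescribed range.

```python
# Sketch of the relaxed identity test: two motion vectors are treated
# as the same when the norm of their difference is within a prescribed
# range. L1 norm and threshold are assumed values for illustration.

def nearly_same_mv(mv_a, mv_b, threshold=1):
    diff_norm = abs(mv_a[0] - mv_b[0]) + abs(mv_a[1] - mv_b[1])
    return diff_norm <= threshold

print(nearly_same_mv((4, 2), (4, 3)))  # norm 1 -> treated as identical
print(nearly_same_mv((4, 2), (6, 5)))  # norm 5 -> treated as different
```

Loosening the test this way prunes near-duplicate candidates, trading a slightly less precise candidate set for fewer bits spent on the selection information.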
Fig. 32 is a block diagram showing the encoded-sequence decoding unit 301 in more detail. As shown in Fig. 32, the encoded-sequence decoding unit 301 has: a separation unit 320 that separates the encoded data 80 into syntax units; a transform coefficient decoding unit 322 that decodes transform coefficients; a selected-block decoding unit 323 that decodes the selection block information; and a parameter decoding unit 321 that decodes parameters relating to prediction block size, quantization and the like.
The parameter decoding unit 321 receives from the separation unit the encoded data 80A containing the parameters relating to prediction block size and quantization, decodes the encoded data 80A, and generates control information 71. The transform coefficient decoding unit 322 receives the encoded transform coefficients 80B from the separation unit 320, decodes them, and obtains transform coefficient information 33. The selected-block decoding unit 323 receives the encoded data 80C relating to the selected block and the available-block information 60, and outputs selection block information 61. As shown in Fig. 11, the input available-block information 60 indicates, for each motion reference block, whether it is available.
Next, the prediction unit 305 is described in detail with reference to Fig. 33.
As shown in Fig. 33, the prediction unit 305 has a motion information selection unit 314 and a motion compensation unit 313, and the motion information selection unit 314 has a spatial-direction motion information acquisition unit 310, a temporal-direction motion information acquisition unit 311 and a motion information selector switch 312. The prediction unit 305 has substantially the same structure and function as the prediction unit 101 described in the 1st embodiment.
The prediction unit 305 receives the available-block information 60, the selection block information 61, the reference motion information 39 and the reference image signal 37, and outputs the predicted image signal 35 and the motion information 38. The spatial-direction motion information acquisition unit 310 and the temporal-direction motion information acquisition unit 311 have the same functions as the spatial-direction motion information acquisition unit 110 and the temporal-direction motion information acquisition unit 111 described in the 1st embodiment, respectively. The spatial-direction motion information acquisition unit 310 uses the available-block information 60 and the reference motion information 39 to generate motion information 38A containing, for each available block located in the spatial direction, its motion information and index. The temporal-direction motion information acquisition unit 311 uses the available-block information 60 and the reference motion information 39 to generate motion information (or a group of motion information) 38B containing, for each available block located in the temporal direction, its motion information and index.
The motion information selector switch 312 selects, according to the selection block information 61, one of the motion information 38A from the spatial-direction motion information acquisition unit 310 and the motion information (or group of motion information) 38B from the temporal-direction motion information acquisition unit 311, and obtains the motion information 38. The selected motion information 38 is sent to the motion compensation unit 313 and the motion information memory 306. The motion compensation unit 313 performs motion-compensated prediction based on the selected motion information 38 in the same manner as the motion compensation unit 113 described in the 1st embodiment, and generates the predicted image signal 35.
The motion-vector scaling function of the motion compensation unit 313 is the same as that described in the 1st embodiment, and its description is therefore omitted.
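The switching performed by the motion information selector switch 312 can be sketched as a single indexed selection over the combined candidate lists. The flat layout (spatial entries first, then temporal) and the tuple representation are illustrative assumptions.

```python
# Sketch of the motion information selector switch 312: the selection
# block information picks one entry out of the spatial-direction and
# temporal-direction candidates. Candidate ordering is an assumption.

def select_motion_info(spatial_infos, temporal_infos, selection_index):
    """Candidates are indexed with spatial entries first, then temporal,
    mirroring how each available block carries its own index."""
    candidates = spatial_infos + temporal_infos
    return candidates[selection_index]

print(select_motion_info([(1, 0), (0, 2)], [(4, 4), (5, 5)], 2))  # -> (4, 4)
```

The selected entry is what gets handed to motion compensation and written back to the motion information memory for use by later blocks.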
Fig. 22 shows the syntax structure used in the image decoding unit 300. As shown in Fig. 22, the syntax consists mainly of three parts: high-level syntax 901, slice-level syntax 904 and macroblock-level syntax 907. The high-level syntax 901 holds syntax information of the layers above the slice. The slice-level syntax 904 holds the information needed for each slice, and the macroblock-level syntax 907 holds the data needed for each macroblock shown in Figs. 7A to 7D.
Each part contains more detailed syntax. The high-level syntax 901 contains sequence-level and picture-level syntax such as the sequence parameter set syntax 902 and the picture parameter set syntax 903. The slice-level syntax 904 contains the slice header syntax 905, the slice data syntax 906 and so on. The macroblock-level syntax 907 contains the macroblock layer syntax 908, the macroblock prediction syntax 909 and so on.
Figs. 23A and 23B show examples of the macroblock layer syntax. The element available_block_num shown in Figs. 23A and 23B indicates the number of available blocks; when it is a value greater than 1, the selection block information must be decoded. The element stds_idx denotes the selection block information, and stds_idx is encoded using the code table corresponding to the number of available blocks described above.
Fig. 23A shows the syntax when the selection block information is decoded after mb_type. stds_idx is decoded when the prediction mode indicated by mb_type is a specified size or a specified mode (TARGET_MODE) and available_block_num is a value greater than 1. For example, stds_idx is encoded when the motion information of the selected block is usable, namely when the block size is 64 × 64 pixels, 32 × 32 pixels or 16 × 16 pixels, or in the case of direct mode.
Fig. 23B shows the syntax when the selection block information is decoded before mb_type. When available_block_num is a value greater than 1, stds_idx is decoded. If available_block_num is 0, the conventional motion compensation typified by H.264 is performed, so mb_type is encoded.
The tables shown in Figs. 23A and 23B may also include, between their rows, syntax elements not specified in the present invention, and may include descriptions relating to other conditional branches. Alternatively, the syntax table may be divided or merged into a plurality of tables. The terms used need not be identical and may be changed arbitrarily according to the manner of use. Furthermore, each syntax element described in this macroblock layer syntax may be changed so as to be explicitly described in the macroblock data syntax described later.
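The Fig. 23B decoding order can be sketched as follows. The stub reader and its method names are stand-ins for a real entropy decoder over the encoded data, not the patent's API; the case available_block_num == 1 is not specified in the excerpt and is left undecoded here.

```python
# Sketch of the Fig. 23B condition: stds_idx is decoded only when more
# than one available block exists; when available_block_num is 0, the
# conventional H.264 path decodes mb_type instead. Reader API is assumed.

class StubReader:
    """Stands in for a real entropy decoder over the encoded data."""
    def read_stds_idx(self, available_block_num):
        return 2          # index into the code table for this block count
    def read_mb_type(self):
        return "P_16x16"  # conventional macroblock type

def decode_mb_layer_before_mb_type(reader, available_block_num):
    decoded = {}
    if available_block_num > 1:
        # the code table depends on the number of available blocks, so
        # fewer candidates cost fewer bits for the selection information
        decoded["stds_idx"] = reader.read_stds_idx(available_block_num)
    elif available_block_num == 0:
        decoded["mb_type"] = reader.read_mb_type()
    return decoded

print(decode_mb_layer_before_mb_type(StubReader(), 4))  # stds_idx decoded
print(decode_mb_layer_before_mb_type(StubReader(), 0))  # falls back to mb_type
```

Gating the selection information on available_block_num is what lets the encoder skip it entirely when there is nothing to choose between.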
As described above, the image decoding apparatus of the present embodiment decodes images encoded by the image encoding apparatus of the aforementioned 1st embodiment. Therefore, the image decoding of the present embodiment can reproduce a high-quality decoded image from comparatively small encoded data.
(the 4th execution mode)
Fig. 34 schematically shows the image decoding apparatus of the 4th embodiment. As shown in Fig. 34, the image decoding apparatus comprises an image decoding unit 400, a decoding control unit 350 and an output buffer 308. The image decoding apparatus of the 4th embodiment corresponds to the image encoding apparatus of the 2nd embodiment. In the 4th embodiment, mainly the parts and operations different from those of the 3rd embodiment are described. As shown in Fig. 34, in the image decoding unit 400 of the present embodiment, the encoded-sequence decoding unit 401 and the prediction unit 405 differ from those of the 3rd embodiment.
The prediction unit 405 of the present embodiment selectively switches between the following two prediction schemes to generate the predicted image signal 35: a prediction scheme that performs motion compensation using the motion information held by the selected block (the 1st prediction scheme), and a prediction scheme that, as in H.264, performs motion compensation using one motion vector for the decoding target block (the 2nd prediction scheme).
Fig. 35 is a block diagram showing the encoded-sequence decoding unit 401 in more detail. The encoded-sequence decoding unit 401 shown in Fig. 35 further comprises, in addition to the structure of the encoded-sequence decoding unit 301 shown in Fig. 32, a motion information decoding unit 424. The selected-block decoding unit 423 shown in Fig. 35 differs from the selected-block decoding unit 323 shown in Fig. 32 in that it decodes the encoded data 80C relating to the selected block and obtains prediction switching information 62. The prediction switching information 62 indicates which of the 1st and 2nd prediction schemes was used by the prediction unit 101 in the image encoding apparatus of Fig. 1. When the prediction switching information 62 indicates that the prediction unit 101 used the 1st prediction scheme, that is, that the decoding target block was encoded using the 1st prediction scheme, the selected-block decoding unit 423 decodes the selection block information in the encoded data 80C and obtains the selection block information 61. When the prediction switching information 62 indicates that the prediction unit 101 used the 2nd prediction scheme, that is, that the decoding target block was encoded using the 2nd prediction scheme, the selected-block decoding unit 423 does not decode the selection block information; instead, the motion information decoding unit 424 decodes the encoded motion information 80D and obtains the motion information 40.
Fig. 36 is a block diagram showing the prediction unit 405 in more detail. The prediction unit 405 shown in Fig. 36 comprises the 1st prediction unit 305, a 2nd prediction unit 410 and a prediction method selector switch 411. The 2nd prediction unit 410 uses the motion information 40 decoded by the encoded-sequence decoding unit 401 and the reference image signal 37 to perform the same motion-compensated prediction as the motion compensation unit 313 of Fig. 33, and generates a predicted image signal 35B. The 1st prediction unit 305 is identical to the prediction unit 305 described in the 3rd embodiment and generates a predicted image signal 35A. The prediction method selector switch 411 selects, according to the prediction switching information 62, one of the predicted image signal 35B from the 2nd prediction unit 410 and the predicted image signal 35A from the 1st prediction unit 305, and outputs it as the predicted image signal 35 of the prediction unit 405. At the same time, the prediction method selector switch 411 sends the motion information used in whichever of the 1st prediction unit 305 and the 2nd prediction unit 410 was selected to the motion information memory 306 as the motion information 38.
Next, regarding the syntax structure of the present embodiment, mainly the points different from the 3rd embodiment are described.
Figs. 30A and 30B each show an example of the macroblock layer syntax of the present embodiment. The element available_block_num shown in Fig. 30A indicates the number of available blocks; when it is a value greater than 1, the selected-block decoding unit 423 decodes the selection block information in the encoded data 80C. The element stds_flag is a flag indicating whether the motion information of the selected block was used as the motion information of the decoding target block in motion-compensated prediction, in other words a flag indicating which of the 1st prediction unit 305 and the 2nd prediction unit 410 was selected by the prediction method selector switch 411. When the number of available blocks is greater than 1 and stds_flag is 1, the motion information held by the selected block was used in motion-compensated prediction. When stds_flag is 0, the motion information held by the selected block is not used; instead, as in H.264, the motion information itself is encoded directly, or its prediction difference is encoded. The element stds_idx denotes the selection block information, encoded with the code table corresponding to the number of available blocks as described above.
Fig. 30A shows the syntax when the selection block information is decoded after mb_type. stds_flag and stds_idx are decoded only when the prediction mode indicated by mb_type is a specified block size or a specified mode, for example when the block size is 64 × 64, 32 × 32 or 16 × 16, or in the case of direct mode.
Fig. 30B shows the syntax when the selection block information is decoded before mb_type. For example, when stds_flag is 1, mb_type need not be decoded; when stds_flag is 0, mb_type is decoded.
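The Fig. 30B ordering can be sketched as follows. The stub reader, its method names, and the returned values are illustrative assumptions standing in for a real entropy decoder.

```python
# Sketch of the Fig. 30B decoding order: stds_flag is read first (when
# more than one available block exists); if 1, stds_idx is decoded and
# mb_type is skipped; if 0, decoding falls back to mb_type and the
# conventional motion information path. Reader API is assumed.

class StubReader30B:
    def __init__(self, stds_flag):
        self.stds_flag = stds_flag
    def read_stds_flag(self):
        return self.stds_flag
    def read_stds_idx(self, available_block_num):
        return 1            # selection index from the adaptive code table
    def read_mb_type(self):
        return "P_16x16"    # conventional macroblock type

def decode_mb_layer_fig30b(reader, available_block_num):
    decoded = {"stds_flag": reader.read_stds_flag()
               if available_block_num > 1 else 0}
    if decoded["stds_flag"] == 1:
        decoded["stds_idx"] = reader.read_stds_idx(available_block_num)
    else:
        # motion information (or its prediction difference) is then
        # decoded as in conventional H.264
        decoded["mb_type"] = reader.read_mb_type()
    return decoded

print(decode_mb_layer_fig30b(StubReader30B(1), 4))  # uses selected block
print(decode_mb_layer_fig30b(StubReader30B(0), 4))  # conventional path
```

Placing stds_flag before mb_type lets the 1st prediction scheme skip mb_type entirely, which is where this embodiment saves bits over the Fig. 30A ordering.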
As described above, the image decoding apparatus of the present embodiment decodes images encoded by the image encoding apparatus of the aforementioned 2nd embodiment. Therefore, the image decoding of the present embodiment can reproduce a high-quality decoded image from comparatively small encoded data.
The invention is not limited to the above embodiments as they stand; in the implementation stage, the structural elements may be modified and embodied within a scope not departing from the gist of the invention. Various inventions can be formed by appropriately combining the plurality of structural elements disclosed in the above embodiments. For example, several structural elements may be deleted from all the structural elements shown in an embodiment, and structural elements of different embodiments may also be combined as appropriate.
For example, the same effects can be obtained even when the above 1st to 4th embodiments are modified as follows.
(1) In the 1st to 4th embodiments, as shown in Fig. 4, an example has been described in which the processing target frame is divided into rectangular blocks such as 16 × 16 pixel blocks and encoding or decoding proceeds in order from the pixel block at the upper left of the picture toward the lower right, but the encoding or decoding order is not limited to this example. For example, the order may run from the lower right of the picture to the upper left, or from the upper right downward to the left. The order may also run spirally from the center of the picture to the periphery, or from the periphery of the picture to the center.
(2) In the 1st to 4th embodiments, the description has been given, as an example, without distinguishing between luminance signals and color-difference signals and assuming a single color-signal component. However, different prediction processes may be used for the luminance signal and the color-difference signal, or the same prediction process may be used. When different prediction processes are used, the prediction method selected for the color-difference signal is encoded and decoded in the same manner as for the luminance signal.
Moreover, it goes without saying that various modifications made within a scope not departing from the gist of the present invention can likewise be implemented.
Industrial applicability
The image encoding/decoding method of the present invention improves coding efficiency and therefore has industrial applicability.

Claims (4)

1. An image decoding method, characterized by comprising:
a 1st step of receiving input of mode information relating to the prediction mode of a decoding target block;
a 2nd step of selecting, in accordance with information prescribed by the mode information, a plurality of motion reference blocks from decoded pixel blocks having motion information;
a 3rd step of selecting at least one available block from the plurality of motion reference blocks, the at least one available block being a pixel block that is a candidate for the motion information applied to the decoding target block, available blocks having mutually different motion information;
a 4th step of decoding the input encoded data on the basis of a code table set in advance according to the number of available blocks, thereby obtaining selection information for specifying a selected block;
a 5th step of selecting one selected block from the available blocks according to the selection information;
a 6th step of generating a predicted image of the decoding target block using the motion information of the selected block;
a 7th step of decoding the prediction error of the decoding target block from the encoded data; and
an 8th step of obtaining a decoded image from the predicted image and the prediction error.
2. The image decoding method according to claim 1, characterized in that
in the 3rd step, a motion reference block is judged to be an available block when the motion reference block has motion information and that motion information does not coincide with the motion information of a motion reference block already judged to be available.
3. An image decoding apparatus, characterized by comprising:
an input unit that receives input of mode information relating to the prediction mode of a decoding target block;
an available-block acquisition unit that selects, in accordance with information prescribed by the mode information, a plurality of motion reference blocks from decoded pixel blocks having motion information, and selects at least one available block from the plurality of motion reference blocks, the at least one available block being a pixel block that is a candidate for the motion information applied to the decoding target block, available blocks having mutually different motion information;
a 1st decoding unit that decodes the input encoded data on the basis of a code table set in advance according to the number of available blocks, thereby obtaining selection information for specifying a selected block;
a selection unit that selects one selected block from the available blocks according to the selection information;
a prediction unit that generates a predicted image of the decoding target block using the motion information of the selected block;
a 2nd decoding unit that decodes the prediction error of the decoding target block from the encoded data; and
an adder that obtains a decoded image from the predicted image and the prediction error.
4. The image decoding apparatus according to claim 3, characterized in that
in the available-block acquisition unit, a motion reference block is judged to be an available block when the motion reference block has motion information and that motion information does not coincide with the motion information of a motion reference block already judged to be available.
CN201410051514.XA 2010-04-08 2010-04-08 Picture decoding method and picture decoding apparatus Active CN103813163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410051514.XA CN103813163B (en) 2010-04-08 2010-04-08 Picture decoding method and picture decoding apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201080066017.7A CN102823248B (en) 2010-04-08 2010-04-08 Image encoding method and image decoding method
CN201410051514.XA CN103813163B (en) 2010-04-08 2010-04-08 Picture decoding method and picture decoding apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201080066017.7A Division CN102823248B (en) 2010-04-08 2010-04-08 Image encoding method and image decoding method

Publications (2)

Publication Number Publication Date
CN103813163A true CN103813163A (en) 2014-05-21
CN103813163B CN103813163B (en) 2017-03-01

Family

ID=50709296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410051514.XA Active CN103813163B (en) 2010-04-08 2010-04-08 Picture decoding method and picture decoding apparatus

Country Status (1)

Country Link
CN (1) CN103813163B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1692653A (en) * 2002-11-22 2005-11-02 株式会社东芝 Moving picture encoding/decoding method and device
CN1889687A (en) * 2006-06-02 2007-01-03 清华大学 Non-predicted circulation anti-code error video frequency coding method
CN1898964A (en) * 2003-12-22 2007-01-17 佳能株式会社 Motion image coding apparatus, and control method and program of the apparatus
CN101361370A (en) * 2005-11-30 2009-02-04 株式会社东芝 Image encoding/image decoding method and image encoding/image decoding apparatus
US20090074077A1 (en) * 2006-10-19 2009-03-19 Canon Kabushiki Kaisha Video source coding with decoder side information


Also Published As

Publication number Publication date
CN103813163B (en) 2017-03-01

Similar Documents

Publication Publication Date Title
CN102823248B (en) Image encoding method and image decoding method
JP6101327B2 (en) Method and apparatus for motion compensated prediction
CN103826129A (en) Image decoding method and image decoding device
JP5479648B1 (en) Image encoding method and image decoding method
CN103227922B (en) Picture decoding method and picture decoding apparatus
JP5444497B2 (en) Image encoding method and image decoding method
CN103813163A (en) Image decoding method and image decoding device
CN103826131A (en) Image decoding method and image decoding device
CN103813165A (en) Image decoding method and image decoding device
CN103826130A (en) Image decoding method and image decoding device
CN103813164A (en) Image decoding method and image decoding device
CN103813168A (en) Image coding method and image coding device
CN103747252A (en) Image decoding method and image decoding device
JP2004072732A (en) Coding apparatus, computer readable program, and coding method
JP6961781B2 (en) Image coding method and image decoding method
JP6795666B2 (en) Image coding method and image decoding method
JP6980889B2 (en) Image coding method and image decoding method
JP5571262B2 (en) Image encoding method and image decoding method
JP7547598B2 (en) Image encoding method and image decoding method
JP6609004B2 (en) Image encoding method and image decoding method
JP6370977B2 (en) Image encoding method and image decoding method
JP6367452B2 (en) Image encoding method and image decoding method
JP5659314B1 (en) Image encoding method and image decoding method
JP5509398B1 (en) Image encoding method and image decoding method
JP5571229B2 (en) Image encoding method and image decoding method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant