WO2003092301A1 - Coding device and method, decoding device and method, recording medium, and program - Google Patents
- Publication number: WO2003092301A1 (PCT/JP2003/005081)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- field
- macroblock
- frame
- encoding
- context model
- Prior art date
Classifications
- H—ELECTRICITY; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/16—Assigned coding mode for a given display mode, e.g. for interlaced or progressive display mode
- H04N19/176—Adaptive coding in which the coding unit is an image region that is a block, e.g. a macroblock
- H04N19/513—Processing of motion vectors
- H04N19/537—Motion estimation other than block-based
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL; H03M7/4006—Conversion to or from arithmetic code
Definitions
- Encoding device and method, decoding device and method, recording medium, and program
- The present invention relates to an encoding device and method, a decoding device and method, a recording medium, and a program, and is particularly suitable for encoding an image signal at a higher compression ratio than before and transmitting or storing the resulting signal.
- Background Art
- The MPEG-2 (ISO/IEC 13818-2) compression method is a standard defined as a general-purpose image compression method that covers both interlaced and progressive scan images, as well as standard-resolution and high-definition images. It is currently widely used for a broad range of professional and consumer applications, such as the DVD (Digital Versatile Disc) standard.
- By using the MPEG-2 compression method, good image quality can be realized by assigning a code amount (bit rate) of 4 to 8 Mbps to a standard-resolution interlaced image of 720 × 480 pixels, or 18 to 22 Mbps to a high-resolution interlaced image of 1,920 × 1,080 pixels.
- MPEG-2 mainly targets high-quality encoding suitable for broadcasting and does not support encoding methods with a higher compression rate. Therefore, standardization of the MPEG-4 coding system was carried out as an encoding method with a higher compression rate. This image coding method was approved as the international standard ISO/IEC 14496-2 in February 1998.
- H.26L is known to achieve higher coding efficiency than conventional encoding methods such as MPEG-2 and MPEG-4, although it requires more computation for encoding and decoding.
- FIG. 1 shows an example of a configuration of a conventional image information encoding device.
- An input image signal, which is an analog signal, is converted into a digital signal by the A/D conversion unit 1 and then supplied to the screen rearrangement buffer 2.
- The screen rearrangement buffer 2 rearranges the image information from the A/D conversion unit 1 according to the GOP (Group of Pictures) structure of the image compression information output by the image information encoding device.
- First, an image to be subjected to intra (intra-image) encoding will be described.
- The image information is supplied to the orthogonal transform unit 4 via the adder 3.
- There, the image information is subjected to an orthogonal transform (such as a discrete cosine transform or a Karhunen-Loève transform), and the obtained transform coefficients are supplied to the quantization unit 5.
- The quantization unit 5 performs quantization processing on the transform coefficients supplied from the orthogonal transform unit 4, under control from the rate control unit 8 based on the amount of transform coefficient data accumulated in the accumulation buffer 7.
- The lossless encoding unit 6 determines the encoding mode from the quantized transform coefficients and the quantization scale supplied from the quantization unit 5, and subjects the determined encoding mode to lossless encoding (variable-length coding or arithmetic coding) to form information to be inserted into the header part of the image encoding unit.
- The encoded encoding mode is supplied to the accumulation buffer 7 and accumulated.
- The encoded encoding mode accumulated in the accumulation buffer 7 is output to the subsequent stage as image compression information.
- The lossless encoding unit 6 also performs lossless encoding on the quantized transform coefficients and stores the encoded transform coefficients in the accumulation buffer 7.
- The encoded transform coefficients accumulated in the accumulation buffer 7 are likewise output to the subsequent stage as image compression information.
- The transform coefficients quantized by the quantization unit 5 are also inversely quantized.
- The inverse orthogonal transform unit 10 performs inverse orthogonal transform processing on the inversely quantized transform coefficients to generate decoded image information.
- The generated decoded image information is stored in the frame memory 11.
- For an image to be subjected to inter (inter-image) encoding, the image information is supplied to the adder 3 and the motion prediction/compensation unit 12.
- The motion prediction/compensation unit 12 reads from the frame memory 11 the image information to be referenced, corresponding to the image to be inter-coded from the screen rearrangement buffer 2, performs motion prediction/compensation processing, generates reference image information, and supplies it to the adder 3. The motion vector information obtained in the motion prediction/compensation processing is supplied to the lossless encoding unit 6.
- The adder 3 converts the image information of the image to be inter-coded from the screen rearrangement buffer 2 into a difference signal from the reference image information supplied by the motion prediction/compensation unit 12.
- The orthogonal transform unit 4 subjects the difference signal to an orthogonal transform, and the obtained transform coefficients are supplied to the quantization unit 5.
- The quantization unit 5 performs quantization processing on the transform coefficients supplied from the orthogonal transform unit 4 under the control of the rate control unit 8.
- The lossless encoding unit 6 determines the encoding mode based on the transform coefficients and quantization scale supplied from the quantization unit 5, the motion vector information supplied from the motion prediction/compensation unit 12, and so on.
- The determined encoding mode is then losslessly encoded to generate information to be inserted into the header part of the image encoding unit.
- The encoded encoding mode is accumulated in the accumulation buffer 7.
- The encoded encoding mode accumulated in the accumulation buffer 7 is output as image compression information.
- The lossless encoding unit 6 also performs lossless encoding on the motion vector information from the motion prediction/compensation unit 12 to generate information to be inserted into the header part of the image encoding unit.
- FIG. 2 shows an example of the configuration of a conventional image information decoding device.
- The input image compression information is temporarily stored in the accumulation buffer 21 and then transferred to the lossless decoding unit 22.
- The lossless decoding unit 22 performs lossless decoding (variable-length decoding, arithmetic decoding, and so on) on the image compression information based on its predetermined format, obtains the encoding mode information stored in the header part, and supplies it to the inverse quantization unit 23.
- The lossless decoding unit 22 likewise obtains the quantized transform coefficients and supplies them to the inverse quantization unit 23.
- The lossless decoding unit 22 also decodes the motion vector information stored in the header part of the image compression information and supplies it to the motion prediction/compensation unit 28.
- The inverse quantization unit 23 inversely quantizes the quantized transform coefficients supplied from the lossless decoding unit 22 and supplies the obtained transform coefficients to the inverse orthogonal transform unit 24.
- The inverse orthogonal transform unit 24 performs an inverse orthogonal transform (an inverse discrete cosine transform or an inverse Karhunen-Loève transform) on the transform coefficients based on the predetermined format of the image compression information.
- The image information subjected to the inverse orthogonal transform is stored in the screen rearrangement buffer 26 via the adder 25, converted into an analog signal by the D/A conversion unit 27, and output to the subsequent stage.
- The image information subjected to the inverse orthogonal transform is also stored in the frame memory 29.
- The motion prediction/compensation unit 28 generates a reference image based on the motion vector information from the lossless decoding unit 22 and the image information stored in the frame memory 29, and supplies it to the adder 25.
- The adder 25 combines this reference image with the output of the inverse orthogonal transform unit 24 to generate image information.
- The other processing is the same as for an intra-coded frame, and a description thereof is not repeated.
- In H.26L, two lossless encoding schemes are defined: UVLC (Universal Variable Length Code) and CABAC (Context-based Adaptive Binary Arithmetic Coding).
- The user can select and apply either UVLC or CABAC as the lossless encoding scheme.
- Information indicating whether the lossless encoding method is UVLC or CABAC is specified in a field called Entropy Coding included in the RTP Parameter Set Packet of the RTP layer in the image compression information.
- In arithmetic coding, any message (consisting of multiple alphabet symbols) is represented as a point on the half-open interval 0.0 ≤ x < 1.0, and a code is generated from the coordinates of this point.
- First, the half-open interval 0.0 ≤ x < 1.0 is divided into sub-intervals corresponding to each symbol, based on the appearance probabilities of the symbols constituting the alphabet.
- FIG. 3 shows an example of the appearance probabilities of symbols s1 through s7 and the corresponding sub-interval division.
- The upper and lower limits of the sub-intervals are determined based on the cumulative appearance probabilities of the symbols.
- The lower limit of the sub-interval for symbol s_i (i = 1, 2, ..., 7) is the upper limit of the sub-interval of the preceding symbol, and the upper limit of the sub-interval corresponding to symbol s_i is the value obtained by adding the appearance probability of symbol s_i to the lower limit of that sub-interval.
- Equation (2) gives a value representing a message included in the half-open interval 0.21164 ≤ x < 0.2117.
- The code length needed for the message is determined by the width of this final interval; a 12-bit code can represent values with a precision of up to 2^-12.
- As a result, the message (s2 s1 s3 s6 s7) is encoded as (001101100011).
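The interval-narrowing procedure above can be sketched in a few lines of Python: a message narrows [low, low + width) once per symbol, and just enough bits of the interval midpoint are emitted to identify it. The alphabet and probabilities below are illustrative assumptions, not the actual values of FIG. 3, which is not reproduced here.

```python
import math
from fractions import Fraction

# Illustrative symbol probabilities (assumed; FIG. 3 is not reproduced here).
PROBS = {"s1": Fraction(1, 2), "s2": Fraction(1, 4), "s3": Fraction(1, 4)}

def encode_interval(message):
    """Narrow [low, low + width) once per symbol; any point inside the
    final interval identifies the whole message."""
    low, width = Fraction(0), Fraction(1)
    for sym in message:
        cum = Fraction(0)
        for s, p in PROBS.items():      # cumulative probability below sym
            if s == sym:
                break
            cum += p
        low += cum * width
        width *= PROBS[sym]
    return low, low + width

def interval_to_bits(lo, hi):
    """Shannon-Fano-Elias style: truncate the midpoint of [lo, hi) to
    ceil(-log2(hi - lo)) + 1 bits; the truncated value stays inside."""
    n = math.ceil(-math.log2(hi - lo)) + 1
    z = (lo + hi) / 2
    bits = []
    for _ in range(n):
        z *= 2
        bit = int(z >= 1)
        bits.append(bit)
        z -= bit
    return bits

lo, hi = encode_interval(["s2", "s1"])  # -> [1/2, 5/8)
bits = interval_to_bits(lo, hi)         # a 4-bit code for this interval
```

The code length grows with -log2 of the interval width, which is the sum of the symbols' -log2 probabilities, so the output approaches the message entropy.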
- The first feature of CABAC is that, by using an appropriate context model for each symbol to be encoded and performing arithmetic coding based on an independent probability model, redundancy between symbols can be eliminated.
- The second feature is that arithmetic coding can assign a non-integer code amount (in bits) to each symbol, so that coding efficiency close to the entropy can be obtained.
- FIG. 5 shows a general configuration of a CABAC encoder.
- The context modeling unit 31 first converts the symbol of an arbitrary syntax element in the image compression information into an appropriate context model according to its past history. Such modeling is called context modeling. The context model for each syntax element in the image compression information will be described later.
- The binarization unit 32 binarizes any symbol that has not yet been binarized.
- Probability estimation is then performed on the binarized symbol by the probability estimation unit 34, and adaptive arithmetic coding based on this probability estimation is performed by the encoding engine 35.
- After the adaptive arithmetic coding, the related models are updated, so that each model can perform encoding processing according to the statistics of the actual image compression information.
- Here, the context models for the following syntax elements in the image compression information are described:
- MB_type: macroblock type
- MVD: motion vector information
- Ref_frame: reference frame parameter
- The generation of the MB_type context model is described separately for intra frames and inter frames.
- The context model ctx_mb_type_intra(C) corresponding to the MB_type of macroblock C is defined by the following equation (3). In an intra frame, the mode of a macroblock is either Intra4x4 or Intra16x16.
- ctx_mb_type_intra(C) = A + B ... (3)
- In equation (3), A is 0 when macroblock A is Intra4x4 and 1 when it is Intra16x16; similarly, B is 0 when macroblock B is Intra4x4 and 1 when it is Intra16x16. Therefore, the context model ctx_mb_type_intra(C) takes one of the values 0, 1, and 2.
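The definition in equation (3) can be sketched directly; the string mode names below are just stand-ins for the two intra macroblock modes.

```python
# Sketch of equation (3): the intra MB_type context is the sum of two
# one-bit decisions about the neighbouring macroblocks A and B.
def ctx_mb_type_intra(mode_a: str, mode_b: str) -> int:
    a = 0 if mode_a == "Intra4x4" else 1   # 1 means Intra16x16
    b = 0 if mode_b == "Intra4x4" else 1
    return a + b                           # one of 0, 1, 2
```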
- The context model ctx_mb_type_inter(C) corresponding to the MB_type of macroblock C is defined by the following equation (4) if the inter frame is a P picture,
- and by the following equation (5) if the inter frame is a B picture.
- The context model ctx_mb_type_inter(C) corresponding to the MB_type of macroblock C in an inter frame thus takes one of three values, both in the case of a P picture and in the case of a B picture.
- The motion vector information corresponding to the macroblock of interest included in the image compression information is encoded as a prediction error from the motion vectors corresponding to the adjacent macroblocks.
- The evaluation function e_k(C) for the macroblock C of interest is defined by the following equation (6), where k = 0 indicates the horizontal component and k = 1 indicates the vertical component:
- e_k(C) = |mvd_k(A)| + |mvd_k(B)| ... (6)
- Here, mvd_k(A) and mvd_k(B) are the motion vector prediction errors for the macroblocks A and B adjacent to macroblock C, respectively.
- The context model generation for the motion vector information (MVD) is performed as shown in FIG. 8. That is, the motion vector prediction error mvd_k(C) for macroblock C is separated into its absolute value |mvd_k(C)| and its sign, and the absolute value |mvd_k(C)| is binarized.
- The first bin (the leftmost value) of the binarized absolute value |mvd_k(C)| is encoded using the above-described context model ctx_mvd(C, k).
- The second bin (the second value from the left) is encoded using context model 3.
- The third and fourth bins are encoded using context models 4 and 5, respectively.
- The fifth and subsequent bins are encoded using context model 6.
- The sign of mvd_k(C) is encoded using context model 7.
- In this way, the motion vector information (MVD) is encoded using eight types of context models.
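The per-bin assignment above amounts to a small lookup. Note that ctx_mvd(C, k), whose defining equation (7) is not reproduced in this excerpt, supplies the context (0, 1, or 2) for the first bin; the function below only sketches the routing described in the text.

```python
# Sketch of the per-bin context assignment for MVD described above.
def mvd_bin_context(bin_index: int, first_bin_ctx: int) -> int:
    """bin_index counts 1, 2, 3, ... over the binarized |mvd_k(C)|;
    first_bin_ctx is the value of ctx_mvd(C, k), i.e. 0, 1, or 2."""
    if bin_index == 1:
        return first_bin_ctx
    if bin_index == 2:
        return 3
    if bin_index in (3, 4):
        return bin_index + 1      # context models 4 and 5
    return 6                      # fifth and subsequent bins

SIGN_CONTEXT = 7                  # the sign of mvd_k(C) uses model 7
```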
- In equation (8), A (respectively B) indicates 0 when the reference frame parameter of macroblock A (respectively B) is 0, and indicates 1 when that reference frame parameter is not 0.
- Equation (8) thus defines four types of context models for encoding the first bin of the reference frame parameter (Ref_frame); in addition, a context model for the second bin and a context model for the third and subsequent bins are defined. Next, the context models for the coded block pattern (CBP), intra prediction mode (IPRED), and (RUN, LEVEL) information, which are syntax elements related to the texture information included in the H.26L image compression information, are described.
- CBP: coded block pattern
- IPRED: intra prediction mode
- (RUN, LEVEL): run and level information
- First, the context model for the coded block pattern is described.
- The handling of coded block patterns other than for Intra16x16 macroblocks is defined as follows.
- The luminance signal contributes four CBP bits, one bit for each of the four 8x8 blocks included in the macroblock.
- The context model ctx_cbp_luma(C) corresponding to the luminance signal of macroblock C is defined by the following equation (9):
- ctx_cbp_luma(C) = A + 2B ... (9)
- In equation (9), A is the CBP bit of the luminance signal of macroblock A, and B is the CBP bit of the luminance signal of macroblock B.
- The remaining two bits of the CBP field relate to the color difference signals.
- The context model ctx_cbp_chroma_sig(C) corresponding to the color difference signal of macroblock C is defined by the following equation (10):
- ctx_cbp_chroma_sig(C) = A + 2B ... (10)
- In equation (10), A is the CBP bit of the color difference signal of macroblock A, and B is the CBP bit of the color difference signal of macroblock B.
- If ctx_cbp_chroma_sig(C) is not 0 (that is, if the AC component of the color difference signal exists), the context model ctx_cbp_chroma_ac(C) corresponding to the AC component of the color difference signal of macroblock C, defined by the following equation (11), also needs to be encoded:
- ctx_cbp_chroma_ac(C) = A + 2B ... (11)
- In equation (11), A is the cbp_chroma_ac decision corresponding to macroblock A, and B is the cbp_chroma_ac decision corresponding to macroblock B.
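Equations (9) to (11) share the same A + 2B form; a minimal sketch, where a and b are the relevant one-bit decisions for the left (A) and upper (B) neighbours:

```python
# Shared form of the CBP context models in equations (9)-(11): the two
# neighbour bits index one of four contexts.
def ctx_a_plus_2b(a: int, b: int) -> int:
    assert a in (0, 1) and b in (0, 1)
    return a + 2 * b              # one of 0, 1, 2, 3
```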
- FIG. 9 shows the pixels a to p existing in a 4x4 block obtained by dividing a macroblock, and the pixels A to I existing in the adjacent 4x4 blocks.
- The arrows labeled 1 to 5 in FIG. 10 indicate the directions of the intra prediction modes with labels 1 to 5, respectively.
- The intra prediction mode with label 0 is the DC prediction mode (DC Prediction).
- According to equation (12), when only four of the neighboring pixels exist in the image frame (in this case, pixels E to H), the average value of those remaining four pixels is used as the predicted value of pixels a to p.
- When none of the neighboring pixels exist in the image frame, a predetermined value (for example, 128) is used as the predicted value.
- The intra prediction mode with label 1 is called Vertical/Diagonal Prediction.
- The intra prediction mode of label 1 is used only when the four pixels A to D exist in the image frame. In this case, the pixels a to p are predicted according to the following equations (13-1) to (13-6).
- The intra prediction mode with label 2 is called Vertical Prediction.
- The intra prediction mode of label 2 is used only when the four pixels A to D exist in the image frame. In this case, for example, pixel A is used as the predicted value of pixels a, e, i, and m, and pixel B is used as the predicted value of pixels b, f, j, and n.
- The intra prediction mode with label 3 is called Diagonal Prediction.
- The intra prediction mode of label 3 is used only when the nine pixels A to I exist in the image frame.
- In this case, each of the pixels a to p is predicted according to the following equations (14-1) onward.
- The intra prediction mode with label 4 is called Horizontal Prediction.
- The intra prediction mode of label 4 is used only when the four pixels E to H exist in the image frame.
- In this case, for example, pixel E is used as the predicted value of pixels a, b, c, and d, and pixel F is used as the predicted value of pixels e, f, g, and h.
- The intra prediction mode with label 5 is called Horizontal/Diagonal Prediction.
- The intra prediction mode of label 5 is used only when the four pixels E to H exist in the image frame. In this case, the pixels a to p are predicted according to the following equations (15-1) onward, where "//" denotes division with rounding:
- pixel a = (E + F) // 2 ... (15-1)
- pixel b = F ... (15-2)
- pixels c, e = (F + G) // 2 ... (15-3)
- pixels f, d = G ... (15-4)
- pixels i, g = (G + H) // 2 ... (15-5)
- In this way, a total of 14 context models are defined for the intra prediction mode.
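As a small illustration of these modes, the following sketch implements the label-2 Vertical Prediction for one 4x4 block, the simplest case described above; the list-of-rows layout is an assumption of this sketch.

```python
# Label-2 Vertical Prediction: each column of the 4x4 block is predicted
# from the pixel directly above it (A, B, C, D in the layout of FIG. 9).
def predict_vertical(above):
    """above = [A, B, C, D]; returns the 4x4 block of predicted values."""
    assert len(above) == 4
    return [list(above) for _ in range(4)]   # four identical rows
```

Horizontal Prediction (label 4) is the transpose of this idea, copying E to H across the rows instead.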
- FIGS. 11A and 11B show the two scanning methods defined for rearranging the two-dimensional discrete cosine transform coefficients into one dimension.
- The single scan method shown in FIG. 11A is used except for the luminance signal of an intra macroblock whose quantization parameter QP is smaller than 24.
- The double scan method shown in FIG. 11B is used when the single scan method is not used.
- For an inter macroblock, and for an intra macroblock whose quantization parameter QP is 24 or more, there is on average only one non-zero coefficient per 4x4 block, so a 1-bit EOB (End Of Block) signal is sufficient; for the luminance signal of an intra macroblock whose quantization parameter QP is smaller than 24, however, there are two or more non-zero coefficients, so a 1-bit EOB signal is not sufficient. For this reason, the double scan method shown in FIG. 11B is used in that case.
- The context models for (RUN, LEVEL) are defined in nine types, according to the distinction between the scanning methods described above, the distinction between DC and AC block types, the distinction between luminance and color difference signals, and the distinction between intra and inter macroblocks.
- The LEVEL information is separated into a sign and an absolute value.
- Four context models are defined for the corresponding ctx_run_level: the first context model is defined for the sign, the second for the first bin, the third for the second bin, and the fourth for the subsequent bins.
- The following describes the context model for the quantization parameter Dquant, which can be set at the macroblock level in the H.26L image compression information.
- The parameter Dquant is set when the coded block pattern for the macroblock includes a non-zero orthogonal transform coefficient, or when the macroblock is 16x16 Intra Coded.
- The parameter Dquant can take values from -16 to 16.
- The quantization parameter QUANT_new for the macroblock is calculated by the following equation (16) using the parameter Dquant in the image compression information:
- QUANT_new = modulo32(QUANT_old + Dquant + 32) ... (16)
- Here, QUANT_old is the quantization parameter used for the previous encoding or decoding.
- The first context model ctx_dquant(C) for the parameter Dquant of macroblock C, with the macroblocks arranged as shown in FIG. 6, is defined by the following equation (17).
- In equation (17), A represents the value of the parameter Dquant of macroblock A.
- Further, a second context model is defined for the first bin, and another context model is defined for the second and subsequent bins.
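Equation (16) is a plain modulo-32 update; a minimal sketch (the 0 to 31 range of QUANT is an assumption of this sketch):

```python
# Sketch of the quantization-parameter update of equation (16):
# QUANT_new = modulo32(QUANT_old + Dquant + 32); the +32 keeps the
# operand non-negative for any Dquant in -16..16.
def quant_new(quant_old: int, dquant: int) -> int:
    return (quant_old + dquant + 32) % 32
```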
- The 10 types of MB_type defined for P pictures are binarized according to the correspondence shown in FIG. 14A. Likewise, the 17 types of MB_type defined for B pictures are binarized according to the correspondence shown in FIG. 14B.
- The registers corresponding to the various context models described above are initialized in advance with pre-calculated values; when each symbol is encoded, the bin occurrence frequency for the corresponding context model is updated successively and is used in the probability estimation for encoding subsequent symbols.
- When the accumulated frequency exceeds a predetermined value, the frequency counter is scaled down.
- In H.26L, field/frame adaptive coding processing can be performed at the macroblock level.
- In the current H.26L, seven types of modes (modes 1 to 7), as shown in FIG. 15, are defined as units of motion prediction/compensation in a macroblock.
- Reference 3 proposes to place a Frame/Field Flag between RUN and MB_type in the syntax corresponding to a macroblock of the image compression information, as shown in FIG. 16. A Frame/Field Flag value of 0 indicates that the macroblock is subjected to frame-based coding; a value of 1 indicates that it is subjected to field-based coding. When the value of the Frame/Field Flag is 1 (that is, when field-based coding is performed), the pixels in the macroblock are rearranged in units of rows, as shown in FIG. 17.
- Even when the value of the Frame/Field Flag is 1, the pixels a to p located in the 4x4 block shown in FIG. 9 are intra-predicted using the pixels A to I located in the adjacent 4x4 blocks; in this case, however, the pixels a to p and the pixels A to I all belong to the same field parity.
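The row rearrangement of FIG. 17 (not reproduced here) regroups the 16 rows of a macroblock by field parity; a minimal sketch, assuming rows are simply lists of pixel values:

```python
# When the Frame/Field Flag is 1, the macroblock rows are regrouped so
# that even rows (first field) come first and odd rows (second field)
# follow, matching the row rearrangement described for FIG. 17.
def frame_to_field(mb_rows):
    assert len(mb_rows) == 16
    return mb_rows[0::2] + mb_rows[1::2]
```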
- Pixels a to p existing in the 4x4 block 7, obtained by dividing the macroblock into 16, are subjected to intra prediction using the pixels A to I existing at the edges of the adjacent blocks 2, 3, and 6.
- FIG. 20A shows a case where the value of the Frame/Field Flag for the macroblock on the left of the macroblock to be processed and that for the macroblock above it are both 1.
- In this case, the intra prediction of the pixels present in the 4x4 block C, obtained by dividing the macroblock to be processed into 16, is performed using the pixels present in the 4x4 block A, obtained by dividing the left macroblock into 16, and the pixels present in the 4x4 block B, obtained by dividing the upper macroblock into 16.
- The intra prediction of the pixels existing in the 4x4 block C' is performed using the pixels existing in the 4x4 block A' and the pixels existing in the 4x4 block B'.
- FIG. 20B shows a case where the value of the Frame/Field Flag for the macroblock to be processed is 1, and the values of the Frame/Field Flag for the left and upper macroblocks are both 0.
- In this case, the intra prediction of the pixels existing in the 4x4 block C, obtained by dividing the macroblock to be processed into 16, is performed using the pixels existing in the 4x4 block A, obtained by dividing the left macroblock into 16, and the pixels existing in the 4x4 block B, obtained by dividing the upper macroblock into 16.
- The intra prediction of the pixels existing in the 4x4 block C' is performed using the pixels existing in the 4x4 block A' and the pixels existing in the 4x4 block B.
- Next, the intra prediction of the color difference signal will be described.
- When the value of the Frame/Field Flag is 1, only one type of intra prediction mode is defined for the color difference signal.
- A to D each indicate a 4x4 block of the color difference signal.
- Blocks A and B belong to the first field, and blocks C and D belong to the second field.
- s0 to s2 are the sums of the color difference signals present in the blocks that are adjacent to blocks A to D and belong to the first field parity.
- s3 to s5 are the sums of the color difference signals present in the blocks that are adjacent to blocks A to D and belong to the second field parity.
- The predicted values corresponding to the blocks A to D are predicted according to equations (18), provided that s0 to s5 all exist within the image frame.
- FIG. 22 shows a method for encoding the residual components of the chrominance signal after the intra prediction described above. That is, after orthogonal transform processing is performed on each 4x4 block, a 2x2 block as shown is generated using the DC components of the first field and the second field, and orthogonal transform processing is applied to it again.
- the motion prediction compensation mode is inter 16 x 16 mode, inter 8 x 16 mode, inter 8 x 8 mode, inter 4 x 8 mode, inter 4 x 4 mode
- the inter 16 X 16 mode is a mode in which the motion vector information for the first field, the motion vector information for the second field, and the reference frame in the inter 8 X 16 mode are equivalent.
- Code_Numbers 0 to 5 are assigned to these six types of motion prediction/compensation modes, respectively.
- Code_Number 0 is assigned to the first field of the frame encoded immediately before, and Code_Number 1 is assigned to the second field of that frame. Code_Number 2 is assigned to the first field of the frame encoded one frame before that, and Code_Number 3 to the second field of that frame. Further, Code_Number 4 is assigned to the first field of the frame encoded one frame earlier still, and Code_Number 5 to its second field.
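The Code_Number assignment just described is a simple interleaving of frame distance and field parity; a minimal sketch, assuming the pattern continues for earlier frames and that the function name and its `frames_back`/`field` parameters are hypothetical:

```python
def reference_field_code_number(frames_back, field):
    """Code_Number of a reference field (sketch of the scheme in the text).

    frames_back: 0 for the frame encoded immediately before, 1 for the
                 frame before that, and so on.
    field:       'first' or 'second' field of that frame.
    """
    # Even Code_Numbers are first fields, odd Code_Numbers second fields.
    return 2 * frames_back + (0 if field == 'first' else 1)
```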
- a reference field for the first field and a reference field for the second field are separately defined.
- the 16 × 16, 8 × 8, or 4 × 4 motion vector information corresponding to the 16 × 16 macroblock E shown in FIG. 24 is predicted using the median of the motion vector information of the adjacent macroblocks A to C.
- the median is calculated assuming that the value of the corresponding motion vector information is 0 for any of the macroblocks A to C that does not exist in the image frame. For example, when the macroblocks D, B, and C do not exist in the image frame, the motion vector information corresponding to macroblock A is used as the predicted value. If macroblock C does not exist in the image frame, the motion vector information of macroblock D is used instead.
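The median rule above, including the substitution of D for a missing C and the special case where only A is available, can be sketched as follows. The function name is hypothetical; the component-wise median over the three candidates follows the text.

```python
def predict_mv(mv_a, mv_b, mv_c, mv_d=None):
    """Median prediction of motion vector information (sketch of the text).

    mv_a, mv_b, mv_c are the (x, y) motion vectors of the neighbouring
    macroblocks A to C; None marks a neighbour outside the image frame.
    If C is outside the frame, D is substituted for it; any remaining
    missing neighbour is treated as the zero vector.
    """
    if mv_c is None:
        mv_c = mv_d
    # Special case from the text: only A available -> use A directly.
    if mv_a is not None and mv_b is None and mv_c is None:
        return mv_a
    vecs = [mv if mv is not None else (0, 0) for mv in (mv_a, mv_b, mv_c)]
    # Component-wise median of the three candidates.
    return tuple(sorted(comp)[1] for comp in zip(*vecs))
```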
- the reference frames of the macro blocks A to D do not necessarily have to be the same.
- the case where the block size of the macroblock is 8 × 16, 16 × 8, 8 × 4, or 4 × 8 will be described with reference to FIGS. 25A to 25D.
- the macroblock E of interest and the adjacent macroblocks A to D are arranged as shown in FIG. 24.
- FIG. 25A shows a case where the block sizes of the macroblocks E1 and E2 are 8 × 16.
- for the macroblock E1, the motion vector information of the macroblock A is used as the predicted value. If the macroblock A adjacent to the left refers to a frame different from the macroblock E1, the above-described median prediction is applied instead.
- for the macroblock E2, the motion vector information of the macroblock C is used as the predicted value. If the macroblock C adjacent to the upper right refers to a frame different from the macroblock E2, the above-described median prediction is applied instead.
- FIG. 25B shows a case where the block sizes of the macroblocks E1 and E2 are 16 × 8.
- for the macroblock E1, the motion vector information of the macroblock B is used as the predicted value. If the macroblock B adjacent above refers to a frame different from the macroblock E1, the above-described median prediction is applied instead.
- FIG. 25C shows a case where the block sizes of the macroblocks E1 to E8 are 8 × 4.
- the median prediction described above is applied to the left macroblocks E1 to E4, and for the right macroblocks E5 to E8 the motion vector information of the left macroblocks E1 to E4 is used as the predicted value.
- FIG. 25D shows a case where the block sizes of the macroblocks E1 to E8 are 4 × 8.
- the median prediction described above is applied to the upper macroblocks E1 to E4, and for the lower macroblocks E5 to E8 the motion vector information of the upper macroblocks E1 to E4 is used as the predicted value.
- the value of the vertical component of the motion vector information is divided by 2, and the quotient is treated as equivalent to a field-based motion vector when performing the prediction.
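When a frame-based motion vector must serve as a predictor for a field-based block, the text divides the vertical component by 2. A minimal sketch, assuming integer motion vector units and integer division (the function name is hypothetical, and the horizontal component is assumed unchanged since the text only mentions the vertical one):

```python
def frame_mv_to_field_prediction(mv_frame):
    """Use a frame-based motion vector as a field-based predictor (sketch).

    Per the text, the vertical component is divided by 2 and the quotient
    is treated as equivalent to a field-based motion vector, because a
    field has half the vertical resolution of the frame.
    """
    x, y = mv_frame
    return (x, y // 2)  # integer division assumed for this sketch
```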
- although the CABAC method requires a larger amount of calculation for the encoding process than the UVLC method, it is known to achieve higher encoding efficiency; when the input image information is in an interlaced scanning format, it is therefore desirable that macroblock-level field/frame encoding using the CABAC method can be realized. Disclosure of the invention
- the present invention has been made in view of such a situation, and has the goal of making it possible to perform macroblock-level field/frame encoding using the CABAC method even when the input image information is in an interlaced scanning format.
- the encoding apparatus of the present invention is characterized by including a lossless encoding unit that performs lossless encoding using a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is to be field-based or frame-based, a context model corresponding to the syntax elements for performing the encoding processing on a frame basis, and a context model corresponding to the syntax elements for performing the encoding processing on a field basis.
- the context models for the syntax elements for performing the field-based encoding can include at least one of the context models corresponding to MB_type for an I picture, MB_type for a P/B picture, motion vector information, a reference field parameter, and an intra prediction mode.
- the encoding method of the present invention is characterized by including a lossless encoding step of executing lossless encoding processing using a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is to be field-based or frame-based, a context model corresponding to the syntax elements for performing the encoding processing on a frame basis, and a context model corresponding to the syntax elements for performing the encoding processing on a field basis.
- the program of the first recording medium according to the present invention includes a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is to be field-based or frame-based.
- a first program according to the present invention includes a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is to be field-based or frame-based, and a context model corresponding to the syntax elements for performing the frame-based encoding processing.
- the decoding device of the present invention is characterized by including decoding means for decoding losslessly encoded image compression information using a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is field-based or frame-based, a context model corresponding to the syntax elements for performing the frame-based encoding processing, and a context model corresponding to the syntax elements for performing the field-based encoding processing.
- the decoding method of the present invention includes a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is field-based or frame-based, and a context model corresponding to the syntax elements for performing the frame-based encoding processing.
- the program of the second recording medium according to the present invention includes a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is to be field-based or frame-based, and a context model corresponding to the syntax elements for performing the frame-based encoding processing.
- a second program according to the present invention includes a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is to be field-based or frame-based, and a context model corresponding to the syntax elements for performing the frame-based encoding processing.
- it is characterized in that it causes a computer to execute a decoding step of decoding image compression information that has been losslessly encoded using the above context models and a context model corresponding to the syntax elements for performing the field-based encoding processing.
- in the present invention, the lossless encoding processing is performed using a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is field-based or frame-based, a context model corresponding to the syntax elements for performing the frame-based encoding processing, and a context model corresponding to the syntax elements for performing the field-based encoding processing.
- in the present invention, image compression information that has been losslessly encoded is decoded using a context model corresponding to a frame/field flag indicating whether the macroblock-level encoding processing is to be field-based or frame-based, a context model corresponding to the syntax elements for performing the frame-based encoding processing, and a context model corresponding to the syntax elements for performing the field-based encoding processing.
- the encoding device and the decoding device may be independent devices, or may be blocks that perform the encoding processing and decoding processing of a signal processing device.
- FIG. 1 is a block diagram showing a configuration of a conventional image information encoding device that achieves image compression by orthogonal transform and motion compensation.
- FIG. 2 is a block diagram showing a configuration of an image information decoding device corresponding to the image information encoding device of FIG.
- FIG. 3 is a diagram showing an example of a correspondence relationship between symbol occurrence probabilities and assigned sub-intervals in the arithmetic coding process.
- FIG. 4 is a diagram illustrating an example of the arithmetic coding process.
- FIG. 5 is a block diagram showing a general configuration of a CABAC encoder.
- FIG. 6 is a diagram for explaining an MB-type context model.
- FIG. 7 is a diagram for explaining a context model of the motion vector information MVD.
- FIG. 8 is a diagram for explaining a process of encoding motion vector information MVD based on a context model.
- FIG. 9 is a diagram for explaining the intra prediction mode defined in H.26L.
- FIG. 10 is a diagram for explaining the directions of the intra prediction modes of labels 1 to 5.
- FIG. 11A is a diagram for explaining the single scan method defined in H.26L.
- FIG. 11B is a diagram for explaining the double scan method defined in H.26L.
- FIG. 12 is a diagram showing a context model corresponding to (RUN, LEVEL) defined in H.26L.
- FIG. 13 is a diagram illustrating a process of binarizing syntax elements other than MB-type in H.26L.
- FIG. 14A is a diagram for explaining a process of binarizing the MB_type of a P picture in H.26L.
- FIG. 14B is a diagram for explaining the process of binarizing the MB_type of a B picture in H.26L.
- FIG. 15 is a diagram showing the seven modes defined in H.26L as units of motion prediction/compensation in a macroblock.
- FIG. 16 is a diagram illustrating the syntax of image compression information extended so as to obtain macroblock-level field / frame adaptive coding.
- FIG. 17 is a diagram for explaining the rearrangement of the pixels of the macroblock when the macroblock is encoded on a field basis.
- FIG. 18 is a diagram showing five types of modes defined as units of motion prediction / compensation when a macroblock is coded on a field basis.
- FIG. 19 is a diagram for explaining the operation principle of performing intra prediction in the macroblock when the macroblock is encoded on a field basis.
- FIG. 20A is a diagram for explaining an operation principle of performing intra prediction across macroblocks when a macroblock is encoded on a field basis.
- FIG. 20B is a diagram for explaining the operation principle of performing intra prediction across macroblocks when coding a macroblock on a field basis.
- FIG. 21 is a diagram for explaining the operation principle of performing intra prediction on a chrominance signal when a macroblock is coded on a field basis.
- FIG. 22 is a diagram for explaining an operation principle of encoding a residual component of a chrominance signal when a macroblock is encoded on a field basis.
- FIG. 23 is a diagram for describing multiple frame prediction defined in H.26L.
- FIG. 24 is a diagram for explaining a prediction method of motion vector information when a macroblock is encoded on a field basis.
- FIG. 25A is a diagram for explaining a process of generating a predicted value of motion vector information in each prediction mode defined in H.26L.
- FIG. 25B is a diagram for describing a process of generating a predicted value of motion vector information in each prediction mode defined in H.26L.
- FIG. 25C is a diagram for describing a process of generating a predicted value of motion vector information in each prediction mode defined in H.26L.
- FIG. 25D is a diagram for describing a process of generating a predicted value of motion vector information in each prediction mode defined in H.26L.
- FIG. 26 is a block diagram illustrating a configuration example of an image information encoding device according to an embodiment of the present invention.
- FIG. 27 is a block diagram showing a configuration example of the arithmetic coding unit 58 in FIG.
- FIG. 28A is a diagram showing a table for binarizing the MB_type of a macroblock belonging to a P picture when the macroblock is encoded on a field basis.
- FIG. 28B is a diagram showing a table for binarizing the MB_type of a macroblock belonging to a B picture when the macroblock is encoded on a field basis.
- FIG. 29 is a block diagram illustrating a configuration example of an image information decoding device according to an embodiment of the present invention, which corresponds to the image information encoding device of FIG. 26.
- the image information encoding apparatus can perform the encoding process using the CABAC method even when the input image information is in the interlaced scanning format.
- the A / D converter 51 converts an input image signal, which is an analog signal, into a digital signal, and outputs the digital signal to the screen rearrangement buffer 52.
- the screen rearrangement buffer 52 rearranges the input image information from the A/D converter 51 according to the GOP structure of the image compression information output from the image information encoding device.
- the field/frame determination unit 53 determines whether the coding efficiency of the macroblock of the image to be processed is higher when it is coded on a field basis or on a frame basis, generates the corresponding Frame/Field Flag, and outputs it to the field/frame conversion unit 55 and the arithmetic coding unit 58.
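The decision made by the determination unit 53 can be sketched as a simple cost comparison. This is an assumption-laden sketch: the text does not specify how efficiency is measured, so a generic cost (e.g. estimated bits) is assumed, as is the convention, stated elsewhere in the text, that a flag value of 1 selects field-based coding.

```python
def frame_field_flag(frame_cost, field_cost):
    """Frame/Field Flag decision (sketch under assumed semantics).

    The determination unit compares the coding efficiency of the two
    bases; lower cost is assumed to mean higher efficiency here, and a
    flag value of 1 is assumed to select field-based coding.
    """
    return 1 if field_cost < frame_cost else 0
```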
- the adder 54 generates a difference image between the input image passed via the field/frame determination unit 53 and the reference image from the motion prediction/compensation unit 64, and outputs it to the field/frame conversion unit 55 and the orthogonal transform unit 56.
- alternatively, the adder 54 outputs the input image passed through the field/frame determination unit 53 unchanged to the field/frame conversion unit 55 and the orthogonal transform unit 56.
- the field/frame conversion unit 55 converts the input image from the adder 54 into a field structure and outputs it to the orthogonal transform unit 56.
- the orthogonal transform unit 56 performs an orthogonal transform (discrete cosine transform, Karhunen-Loève transform, or the like) on the input image information, and supplies the obtained transform coefficients to the quantization unit 57.
- the quantization unit 57 performs a quantization process on the transform coefficient supplied from the orthogonal transformation unit 56 under the control of the rate control unit 65.
- the arithmetic coding unit 58 arithmetically encodes, based on the CABAC method, the syntax elements input from the quantization unit 57 and the motion prediction/compensation unit 64, as well as the Frame/Field Flag from the field/frame determination unit 53, and supplies the result to the accumulation buffer 59 for accumulation.
- the accumulation buffer 59 outputs the stored image compression information to the subsequent stage.
- the inverse quantization unit 60 inversely quantizes the quantized orthogonal transform coefficients and outputs the result to the inverse orthogonal transform unit 61.
- the inverse orthogonal transform unit 61 performs inverse orthogonal transform processing on the inversely quantized transform coefficients to generate decoded image information, and supplies it to the frame memory 62 for accumulation.
- the field/frame conversion unit 63 converts the decoded image information stored in the frame memory 62 into a field structure and outputs it to the motion prediction/compensation unit 64.
- the motion prediction/compensation unit 64 generates optimal prediction mode information and motion vector information through motion prediction processing and outputs them to the arithmetic coding unit 58, and also generates a predicted image and outputs it to the adder 54.
- the rate control unit 65 performs feedback control of the operation of the quantization unit 57 based on the amount of data stored in the accumulation buffer 59.
- the control unit 66 controls each unit of the image information encoding device according to a control program recorded on the recording medium 67.
- FIG. 27 shows a configuration example of the arithmetic coding unit 58.
- the frame/field flag is first encoded by the frame/field flag context model 91 shown in FIG. 27.
- when a macroblock is to be frame-based encoded, the frame-based context model 92 currently specified in the H.26L standard is applied. Syntax elements having a non-binarized value are binarized by the binarization unit 93 and then arithmetically encoded.
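The binarization step just mentioned can be illustrated with a simple unary code. This is a stand-in sketch: H.26L defines specific binarization tables per syntax element (some are shown in FIGS. 13 and 14), and the unary code below is assumed only for illustration of turning a multi-valued element into a bin string.

```python
def unary_binarize(value):
    """Unary binarization of a multi-valued syntax element (illustrative).

    Non-binary syntax elements are binarized before arithmetic coding;
    a simple unary code (`value` ones followed by a terminating zero) is
    assumed here, standing in for the standard's binarization tables.
    """
    return [1] * value + [0]
```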
- the field-based context model 94 is applied to the following syntax elements.
- the first syntax element is MB_type for an I picture
- the second syntax element is MB—type for a P / B picture
- the third syntax element is motion vector information
- the fourth syntax element is a reference field parameter
- the fifth syntax element is the intra prediction mode.
- ctx_mb_type_intra_field(C) = A + B ··· (22), where A and B in equation (22) are the same as those in equation (3).
- the adjacent macroblocks A and B may be either field-based or frame-based coded.
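The context derivation of equation (22) can be sketched as below. The mapping of each neighbour's MB_type to its 0/1 term is an assumption (taken as 0 for an Intra 4 × 4 neighbour and 1 otherwise, by analogy with the frame-based model of equation (3)); the function and argument names are hypothetical.

```python
def ctx_mb_type_intra_field(mb_type_a, mb_type_b):
    """Context index per equation (22): ctx = A + B (illustrative sketch).

    A and B are derived from the MB_type of the left (A) and upper (B)
    neighbours; each term is assumed here to be 0 when the neighbour is
    an Intra 4x4 macroblock and 1 otherwise.  The neighbours themselves
    may be field- or frame-coded, as the text notes.
    """
    a = 0 if mb_type_a == "Intra4x4" else 1
    b = 0 if mb_type_b == "Intra4x4" else 1
    return a + b
```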
- the context model for MB_type for P / B pictures is described.
- the context model ctx_mb_type_inter_field(C) corresponding to the MB_type of the macroblock C is defined by the following equation (23).
- the neighboring macroblocks A and B may be field-based coded or frame-based coded. Note that the MB_type of a P picture that has not been binarized is binarized according to the table shown in FIG. 28A, and the MB_type of a B picture that has not been binarized is binarized according to the table shown in FIG. 28B.
- probability estimation is performed on the binarized symbols by the probability estimation unit 97, and the coding engine 98 performs adaptive arithmetic coding based on this estimation.
- the related models are updated as symbols are coded, so that each model can perform the coding process according to the statistics of the actual image compression information.
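The adaptation described above can be illustrated with a toy probability model. This is not the H.26L probability state machine: a simple count-based estimator is assumed here, purely to show how a per-context model updated after every coded symbol tracks the statistics of the data it codes.

```python
class AdaptiveBinaryModel:
    """Toy context model with adaptive probability estimation (sketch)."""

    def __init__(self):
        self.counts = [1, 1]  # Laplace-smoothed counts of symbols 0 and 1

    def probability(self, symbol):
        """Current estimated probability of `symbol`."""
        return self.counts[symbol] / (self.counts[0] + self.counts[1])

    def encode_symbol(self, symbol):
        """'Code' a symbol: return its current probability, then update."""
        p = self.probability(symbol)
        self.counts[symbol] += 1  # model adapts after every symbol
        return p
```

After coding a run of 1s, the model assigns 1 a higher probability, so an arithmetic coder driven by it would spend fewer bits on the symbols that actually dominate the stream.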
- for a macroblock to be frame-based coded, ten types of MB_type are defined if it belongs to a P picture.
- for a macroblock to be field-based coded, the 16 × 16 mode and the 8 × 16 mode among those ten types are not defined. That is, for a macroblock to be field-based coded, eight types of MB_type are defined for P pictures.
- for a macroblock to be frame-based coded, 18 types of MB_type are defined if it belongs to a B picture.
- for a macroblock to be field-based coded that belongs to a B picture, the forward 16 × 16 mode, the backward 16 × 16 mode, the forward 8 × 16 mode, and the backward 8 × 16 mode among those 18 types are not defined. That is, for a macroblock to be field-based coded, 14 types of MB_type are defined for B pictures.
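The mode counts stated above follow directly from removing the listed modes from the frame-based sets; a small sketch of that arithmetic (the function name is hypothetical):

```python
def num_field_mb_types(picture_type):
    """Number of MB_type values for a field-coded macroblock (from the text).

    P pictures: 10 frame-based types minus the 16x16 and 8x16 modes.
    B pictures: 18 frame-based types minus the forward/backward 16x16
    and forward/backward 8x16 modes.
    """
    total = {"P": 10, "B": 18}      # frame-based MB_type counts
    removed = {"P": 2, "B": 4}      # modes not defined for field coding
    return total[picture_type] - removed[picture_type]
```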
- the context model for the reference field parameter is described.
- the context model ctx_ref_field_top(C) corresponding to the first field is defined by the following equation (29-1).
- the context model ctx_ref_field_bot(C) corresponding to the second field is defined by the following equation (29-2).
- the parameter bt relates to the first field of the adjacent macroblock B, and the parameter bb relates to the second field of the adjacent macroblock B; they are defined by the following equations (30-1) and (30-2).
- the context model ctx_intra_pred_field(C) for the intra prediction mode corresponding to the macroblock C is defined in the same way as the context model ctx_intra_pred(C).
- adjacent macroblocks A and B may be field-based coded or frame-based coded.
- in this way, macroblock-level field/frame encoding using the CABAC method can be performed.
- FIG. 29 illustrates a configuration example of an image information decoding device corresponding to the image information encoding device of FIG.
- the accumulation buffer 101 accumulates the input image compression information and outputs the information to the arithmetic decoding unit 102 as appropriate.
- the arithmetic decoding unit 102 performs arithmetic decoding processing on the image compression information encoded based on the CABAC method.
- the inverse quantization unit 103 inversely quantizes the quantized orthogonal transform coefficients decoded by the arithmetic decoding unit 102.
- the inverse orthogonal transform unit 104 performs an inverse orthogonal transform on the inversely quantized orthogonal transform coefficient.
- the field/frame conversion unit 105 converts the output image or difference image obtained as a result of the inverse orthogonal transform into a frame structure.
- when the macroblock to be processed is an inter macroblock, the adder 106 adds the difference image from the inverse orthogonal transform unit 104 and the reference image from the motion prediction/compensation unit 111 to generate an output image.
- the screen rearrangement buffer 107 rearranges the output image according to the GOP structure of the input image compression information and outputs the rearranged image to the D/A converter 108.
- the D/A conversion unit 108 converts the output image, which is a digital signal, into an analog signal and outputs it to the subsequent stage.
- the frame memory 109 stores image information which is generated by the adder 106 and is used as a reference image.
- the field/frame conversion unit 110 converts the image information stored in the frame memory 109 into a field structure.
- the motion prediction/compensation unit 111 generates a reference image based on the image information stored in the frame memory, according to the prediction mode information and motion vector information for each macroblock included in the image compression information, and outputs it to the adder 106.
- the image compression information output by the image information encoding device in FIG. 26 can be decoded to obtain the original image information.
- the series of processes described above can be executed by hardware, but can also be executed by software.
- when the series of processes is executed by software, the programs constituting the software are installed, for example from the recording medium 67 shown in FIG. 26, onto a computer built into dedicated hardware, or onto a general-purpose personal computer capable of executing various functions by installing various programs.
- this recording medium 67 is composed of packaged media distributed separately from the computer to provide the program to the user, such as a magnetic disk (including a flexible disk), an optical disc (including a CD-ROM (Compact Disc-Read Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disc (including an MD (Mini Disc)), or a semiconductor memory on which the program is recorded, or of a ROM or hard disk that stores the program and is provided to the user in a state pre-installed in the computer.
- the steps describing the program recorded on the recording medium include not only processing performed in chronological order according to the described order, but also processing executed in parallel or individually, not necessarily in chronological order. Industrial applicability
- according to the present invention, macroblock-level field/frame encoding using the CABAC method becomes possible even when the input image information is in the interlaced scanning format.
- further, by using the CABAC method to decode compressed image information in which field/frame coding has been performed at the macroblock level, it becomes possible to restore image information in the interlaced scanning format.
Description
Claims
Priority Applications (19)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN038094312A CN1650636B (zh) | 2002-04-26 | 2003-04-22 | 编码设备和编码方法、解码设备和解码方法 |
EP03717684.9A EP1501313A4 (en) | 2002-04-26 | 2003-04-22 | Coding device and method Decoding device and method, recording medium and program |
US10/509,682 US7778331B2 (en) | 2002-04-26 | 2003-04-22 | Coding device and method, decoding device and method, recording medium, and program |
KR1020047017169A KR100952649B1 (ko) | 2002-04-26 | 2003-04-22 | 부호화 장치 및 방법 |
US12/468,912 US8325820B2 (en) | 2002-04-26 | 2009-05-20 | Coding device and method, decoding device and method, recording medium, and program |
US12/468,917 US8320467B2 (en) | 2002-04-26 | 2009-05-20 | Coding device and method, decoding device and method, recording medium, and program |
US13/546,666 US8571116B2 (en) | 2002-04-26 | 2012-07-11 | Coding device and method, decoding device and method, recording medium, and program |
US13/559,117 US8649442B2 (en) | 2002-04-26 | 2012-07-26 | Coding device and method, decoding device and method, recording medium, and program |
US13/559,066 US8619875B2 (en) | 2002-04-26 | 2012-07-26 | Coding device and method, decoding device and method, recording medium, and program |
US13/558,554 US8611430B2 (en) | 2002-04-26 | 2012-07-26 | Coding device and method, decoding device and method, recording medium, and program |
US13/558,758 US8654862B2 (en) | 2002-04-26 | 2012-07-26 | Coding device and method, decoding device and method, recording medium, and program |
US13/558,712 US8693550B2 (en) | 2002-04-26 | 2012-07-26 | Coding device and method, decoding device and method, recording medium, and program |
US13/619,759 US8509311B2 (en) | 2002-04-26 | 2012-09-14 | Coding device and method, decoding device and method, recording medium, and program |
US13/619,975 US8509312B2 (en) | 2002-04-26 | 2012-09-14 | Coding device and method, decoding device and method, recording medium, and program |
US13/619,779 US8477854B2 (en) | 2002-04-26 | 2012-09-14 | Coding device and method, decoding device and method, recording medium, and program |
US13/725,519 US8488669B2 (en) | 2002-04-26 | 2012-12-21 | Coding device and method, decoding device and method, recording medium, and program |
US13/725,462 US8483280B2 (en) | 2002-04-26 | 2012-12-21 | Coding device and method, decoding device and method, recording medium, and program |
US14/176,826 US9088784B2 (en) | 2002-04-26 | 2014-02-10 | Coding device and method, decoding device and method, recording medium, and program |
US14/732,909 US9532068B2 (en) | 2002-04-26 | 2015-06-08 | Coding device and method, decoding device and method, recording medium, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002125295A JP2003319391A (ja) | 2002-04-26 | 2002-04-26 | 符号化装置および方法、復号装置および方法、記録媒体、並びにプログラム |
JP2002-125295 | 2002-04-26 |
Related Child Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10509682 A-371-Of-International | 2003-04-22 | ||
US10/509,682 A-371-Of-International US7778331B2 (en) | 2002-04-26 | 2003-04-22 | Coding device and method, decoding device and method, recording medium, and program |
US12/468,912 Continuation US8325820B2 (en) | 2002-04-26 | 2009-05-20 | Coding device and method, decoding device and method, recording medium, and program |
US12/468,917 Continuation US8320467B2 (en) | 2002-04-26 | 2009-05-20 | Coding device and method, decoding device and method, recording medium, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003092301A1 true WO2003092301A1 (en) | 2003-11-06 |
Family
ID=29267555
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/005081 WO2003092301A1 (en) | 2002-04-26 | 2003-04-22 | Coding device and method, decoding device and method, recording medium, and program |
Country Status (6)
Country | Link |
---|---|
US (16) | US7778331B2 (ja) |
EP (4) | EP1501313A4 (ja) |
JP (1) | JP2003319391A (ja) |
KR (2) | KR100952649B1 (ja) |
CN (3) | CN1650636B (ja) |
WO (1) | WO2003092301A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100455019C (zh) * | 2004-07-22 | 2009-01-21 | 三星电子株式会社 | 内容自适应二进制算术编码的方法和使用该方法的设备 |
US20100232496A1 (en) * | 2004-11-09 | 2010-09-16 | Masayoshi Tojima | Decoding-processing apparatus and method |
CN101686397B (zh) * | 2004-09-16 | 2013-07-31 | 汤姆逊许可证公司 | 用于快速视频帧和场编码的方法 |
Families Citing this family (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003319391A (ja) | 2002-04-26 | 2003-11-07 | Sony Corp | 符号化装置および方法、復号装置および方法、記録媒体、並びにプログラム |
US7769088B2 (en) * | 2003-05-28 | 2010-08-03 | Broadcom Corporation | Context adaptive binary arithmetic code decoding engine |
US7630440B2 (en) * | 2003-05-28 | 2009-12-08 | Broadcom Corporation | Context adaptive binary arithmetic code decoding engine |
US7463781B2 (en) * | 2004-04-14 | 2008-12-09 | Lsi Corporation | Low overhead context intializations for arithmetic video codecs |
KR100648258B1 (ko) | 2004-08-02 | 2006-11-23 | 삼성전자주식회사 | 고속의 디코딩을 수행하는 파이프라인 구조의 내용 기반적응적 이진 산술 디코더 |
JP4763422B2 (ja) * | 2004-12-03 | 2011-08-31 | パナソニック株式会社 | イントラ予測装置 |
KR101104828B1 (ko) * | 2004-12-09 | 2012-01-16 | 삼성전자주식회사 | 움직임 벡터 연산 장치 및 그 방법 |
US20060133494A1 (en) * | 2004-12-17 | 2006-06-22 | Rahul Saxena | Image decoder with context-based parameter buffer |
EP1836858A1 (en) | 2005-01-14 | 2007-09-26 | Sungkyunkwan University | Methods of and apparatuses for adaptive entropy encoding and adaptive entropy decoding for scalable video encoding |
KR100694098B1 (ko) | 2005-04-04 | 2007-03-12 | 한국과학기술원 | 산술 복호 방법 및 그 장치 |
WO2006109974A1 (en) * | 2005-04-13 | 2006-10-19 | Samsung Electronics Co., Ltd. | Method for entropy coding and decoding having improved coding efficiency and apparatus for providing the same |
KR100703773B1 (ko) * | 2005-04-13 | 2007-04-06 | 삼성전자주식회사 | 향상된 코딩 효율을 갖는 엔트로피 코딩 및 디코딩 방법과이를 위한 장치, 이를 포함하는 비디오 코딩 및 디코딩방법과 이를 위한 장치 |
EP1878253A1 (en) * | 2005-04-19 | 2008-01-16 | Samsung Electronics Co., Ltd. | Context-based adaptive arithmetic coding and decoding methods and apparatuses with improved coding efficiency and video coding and decoding methods and apparatuses using the same |
KR100703776B1 (ko) | 2005-04-19 | 2007-04-06 | 삼성전자주식회사 | 향상된 코딩 효율을 갖는 컨텍스트 기반 적응적 산술 코딩및 디코딩 방법과 이를 위한 장치, 이를 포함하는 비디오코딩 및 디코딩 방법과 이를 위한 장치 |
JP4856954B2 (ja) * | 2005-06-08 | 2012-01-18 | パナソニック株式会社 | 画像符号化装置 |
CN100461863C (zh) * | 2005-08-05 | 2009-02-11 | 上海富瀚微电子有限公司 | 基于上下文自适应二进制算术解码器 |
CN100466739C (zh) * | 2005-10-12 | 2009-03-04 | 华为技术有限公司 | Cabac解码系统及方法 |
JP4622804B2 (ja) * | 2005-10-26 | 2011-02-02 | ソニー株式会社 | 符号化装置、符号化方法およびプログラム |
KR100644713B1 (ko) | 2005-10-31 | 2006-11-10 | 삼성전자주식회사 | 컨텍스트 기반 적응적 이진 산술 코딩 복호기에서 원소구문을 복호화하는 방법 및 이를 위한 복호화 장치 |
KR100717052B1 (ko) * | 2005-11-08 | 2007-05-10 | 삼성전자주식회사 | Cabac 복호기에서 이진 산술 복호화와 이진 매칭을병렬 처리하는 원소 구문의 복호화 방법 및 이를 위한복호화 장치 |
KR100717055B1 (ko) * | 2005-11-18 | 2007-05-10 | 삼성전자주식회사 | Cabac 복호기에서 복수의 이진 값들을 파이프라인방식에 의하여 복호화하는 방법 및 이를 위한 복호화 장치 |
US7245242B2 (en) * | 2005-11-28 | 2007-07-17 | Conexant Systems, Inc. | Decoding systems and methods |
KR100750165B1 (ko) * | 2006-02-22 | 2007-08-17 | Samsung Electronics Co., Ltd. | CABAC encoding method and apparatus using improved context model selection for better compression ratio, and CABAC decoding method and apparatus |
KR101049258B1 (ko) * | 2006-02-22 | 2011-07-13 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding/decoding an interlaced video signal using lower-layer information of a mismatched type |
KR100809296B1 (ko) * | 2006-02-22 | 2008-03-04 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding/decoding an interlaced video signal using lower-layer information of a mismatched type |
US8976870B1 (en) * | 2006-08-30 | 2015-03-10 | Geo Semiconductor Inc. | Block and mode reordering to facilitate parallel intra prediction and motion vector prediction |
US8311120B2 (en) * | 2006-12-22 | 2012-11-13 | Qualcomm Incorporated | Coding mode selection using information of other coding modes |
US8554767B2 (en) * | 2008-12-23 | 2013-10-08 | Samsung Electronics Co., Ltd | Context-based interests in computing environments and systems |
KR20090129926A (ko) * | 2008-06-13 | 2009-12-17 | Samsung Electronics Co., Ltd. | Image encoding method and apparatus, and image decoding method and apparatus |
KR100936208B1 (ko) * | 2008-12-09 | 2010-01-12 | Seo Co., Ltd. | H.264/AVC encoder for lossless context-adaptive binary arithmetic coding, and context-adaptive binary arithmetic coding method of the encoder |
US8175902B2 (en) * | 2008-12-23 | 2012-05-08 | Samsung Electronics Co., Ltd. | Semantics-based interests in computing environments and systems |
US20100161380A1 (en) * | 2008-12-23 | 2010-06-24 | Samsung Electronics Co., Ltd. | Rating-based interests in computing environments and systems |
US20100198604A1 (en) * | 2009-01-30 | 2010-08-05 | Samsung Electronics Co., Ltd. | Generation of concept relations |
US9288494B2 (en) * | 2009-02-06 | 2016-03-15 | Thomson Licensing | Methods and apparatus for implicit and semi-implicit intra mode signaling for video encoders and decoders |
US8558724B2 (en) * | 2009-05-20 | 2013-10-15 | Nippon Telegraph And Telephone Corporation | Coding method, coding apparatus, decoding method, decoding apparatus, program, and recording medium |
CN103826132B (zh) * | 2009-06-18 | 2017-03-01 | Kabushiki Kaisha Toshiba | Moving picture decoding device and moving picture decoding method |
SI3448031T1 (sl) | 2009-06-18 | 2021-03-31 | Kabushiki Kaisha Toshiba | Video decoding apparatus and video decoding method |
US9628794B2 (en) | 2009-06-18 | 2017-04-18 | Kabushiki Kaisha Toshiba | Video encoding apparatus and a video decoding apparatus |
CN102474601B (zh) * | 2009-06-29 | 2017-06-23 | Thomson Licensing | Methods and apparatus for adaptive probability update for non-coded syntax |
WO2011011052A1 (en) | 2009-07-20 | 2011-01-27 | Thomson Licensing | A method for detecting and adapting video processing for far-view scenes in sports video |
EP2312854A1 (de) * | 2009-10-15 | 2011-04-20 | Siemens Aktiengesellschaft | Method for coding symbols from a sequence of digitized images |
EP2493194A4 (en) * | 2009-10-22 | 2014-07-16 | Univ Zhejiang | Video and picture coding/decoding system based on spatial domain prediction |
JP5470405B2 (ja) * | 2009-12-28 | 2014-04-16 | Panasonic Corporation | Image encoding device and method |
MX2012010863A (es) * | 2010-04-01 | 2012-10-15 | Sony Corp | Image processing device and method |
US8306343B2 (en) * | 2010-04-09 | 2012-11-06 | Newport Media, Inc. | Optimized prediction based image compression |
CA2794261A1 (en) * | 2010-04-19 | 2011-10-27 | Research In Motion Limited | Methods and devices for reordered parallel entropy coding and decoding |
JP5134050B2 (ja) * | 2010-07-23 | 2013-01-30 | Sony Corporation | Encoding device and method, recording medium, and program |
JP5134047B2 (ja) * | 2010-07-23 | 2013-01-30 | Sony Corporation | Decoding device and method, recording medium, and program |
JP5134048B2 (ja) * | 2010-07-23 | 2013-01-30 | Sony Corporation | Decoding device and method, recording medium, and program |
JP5134049B2 (ja) * | 2010-07-23 | 2013-01-30 | Sony Corporation | Encoding device and method, recording medium, and program |
WO2012077349A1 (ja) * | 2010-12-09 | 2012-06-14 | Panasonic Corporation | Image encoding method and image decoding method |
US20120163457A1 (en) * | 2010-12-28 | 2012-06-28 | Viktor Wahadaniah | Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus |
JP5988071B2 (ja) * | 2011-02-07 | 2016-09-07 | Sony Corporation | Image processing device and method, and program |
CN107529706B (zh) * | 2011-06-16 | 2020-11-17 | GE Video Compression, LLC | Decoder, encoder, methods for decoding and encoding video, and storage medium |
USRE47366E1 (en) | 2011-06-23 | 2019-04-23 | Sun Patent Trust | Image decoding method and apparatus based on a signal type of the control parameter of the current block |
MX2013013508A (es) | 2011-06-23 | 2014-02-27 | Panasonic Corp | Metodo de decodificacion de imagenes, metodo de codificacion de imagenes, aparato de decodificacion de imagenes, aparato de codificacion de imagenes y aparato de codificacion y decodificacion de imagenes. |
RU2603552C2 (ru) | 2011-06-24 | 2016-11-27 | Сан Пэтент Траст | Способ декодирования изображения, способ кодирования изображения, устройство декодирования изображения, устройство кодирования изображения и устройство кодирования и декодирования изображения |
TWI581615B (zh) | 2011-06-24 | 2017-05-01 | Sun Patent Trust | A decoding method, a coding method, a decoding device, an encoding device, and a coding / decoding device |
MX2013013483A (es) | 2011-06-27 | 2014-02-27 | Panasonic Corp | Metodo de decodificacion de imagenes, metodo de codificacion de imagenes, aparato de decodificacion de imagenes, aparato de codificacion de imagenes y aparato de codificacion y decodificacion de imagenes. |
CN103563377B (zh) | 2011-06-28 | 2017-05-10 | 太阳专利托管公司 | 解码方法及解码装置 |
US20130083856A1 (en) * | 2011-06-29 | 2013-04-04 | Qualcomm Incorporated | Contexts for coefficient level coding in video compression |
MX2013010892A (es) | 2011-06-29 | 2013-12-06 | Panasonic Corp | Metodo de decodificacion de imagenes, metodo de codificacion de imagenes, aparato de decodificacion de imagenes, aparato de codificacion de imagenes y aparato de codificacion y decodificacion de imagenes. |
US8798139B1 (en) | 2011-06-29 | 2014-08-05 | Zenverge, Inc. | Dual-pipeline CABAC encoder architecture |
US9258565B1 (en) * | 2011-06-29 | 2016-02-09 | Freescale Semiconductor, Inc. | Context model cache-management in a dual-pipeline CABAC architecture |
MX339141B (es) | 2011-06-30 | 2016-05-13 | Panasonic Ip Corp America | Image decoding method, image encoding method, image decoding apparatus, image encoding apparatus, and image encoding and decoding apparatus |
WO2013001769A1 (ja) | 2011-06-30 | 2013-01-03 | Panasonic Corporation | Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device |
MY164252A (en) | 2011-07-01 | 2017-11-30 | Samsung Electronics Co Ltd | Method and apparatus for entropy encoding using hierarchical data unit, and method and apparatus for decoding |
AU2012281918C1 (en) | 2011-07-11 | 2016-11-17 | Sun Patent Trust | Decoding Method, Coding Method, Decoding Apparatus, Coding Apparatus, And Coding and Decoding Apparatus |
BR112014001026B1 (pt) | 2011-07-15 | 2022-05-03 | Ge Video Compression, Llc | Low-delay sample array coding |
US11184623B2 (en) * | 2011-09-26 | 2021-11-23 | Texas Instruments Incorporated | Method and system for lossless coding mode in video coding |
EP2777258B1 (en) | 2011-11-04 | 2017-01-11 | Huawei Technologies Co., Ltd. | Binarization of prediction residuals for lossless video coding |
CN109120930B (zh) * | 2011-11-04 | 2021-03-26 | Sharp Kabushiki Kaisha | Image decoding device, image encoding device, and methods thereof |
US9088796B2 (en) | 2011-11-07 | 2015-07-21 | Sharp Kabushiki Kaisha | Video decoder with enhanced CABAC decoding |
US9237358B2 (en) * | 2011-11-08 | 2016-01-12 | Qualcomm Incorporated | Context reduction for context adaptive binary arithmetic coding |
US10616581B2 (en) | 2012-01-19 | 2020-04-07 | Huawei Technologies Co., Ltd. | Modified coding for a transform skipped block for CABAC in HEVC |
US9654139B2 (en) | 2012-01-19 | 2017-05-16 | Huawei Technologies Co., Ltd. | High throughput binarization (HTB) method for CABAC in HEVC |
US9743116B2 (en) | 2012-01-19 | 2017-08-22 | Huawei Technologies Co., Ltd. | High throughput coding for CABAC in HEVC |
US20130188736A1 (en) | 2012-01-19 | 2013-07-25 | Sharp Laboratories Of America, Inc. | High throughput significance map processing for cabac in hevc |
US9860527B2 (en) | 2012-01-19 | 2018-01-02 | Huawei Technologies Co., Ltd. | High throughput residual coding for a transform skipped block for CABAC in HEVC |
CN108347619B (zh) | 2012-04-11 | 2020-09-15 | Dolby International AB | Methods for encoding and decoding a bitstream associated with transform coefficients |
CN107809645B (zh) * | 2012-06-22 | 2021-12-03 | Velos Media International Limited | Image decoding method and image decoding device |
EP2869563B1 (en) | 2012-07-02 | 2018-06-13 | Samsung Electronics Co., Ltd. | Method for entropy decoding of a video |
US9584804B2 (en) | 2012-07-10 | 2017-02-28 | Qualcomm Incorporated | Coding SEI NAL units for video coding |
CN102801974B (zh) * | 2012-07-19 | 2014-08-20 | Xidian University | CABAC-based entropy encoder for image compression |
WO2014049981A1 (ja) * | 2012-09-28 | 2014-04-03 | Mitsubishi Electric Corporation | Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method |
US9794558B2 (en) | 2014-01-08 | 2017-10-17 | Qualcomm Incorporated | Support of non-HEVC base layer in HEVC multi-layer extensions |
US11025934B2 (en) * | 2014-12-16 | 2021-06-01 | Advanced Micro Devices, Inc. | Methods and apparatus for decoding video using re-ordered motion vector buffer |
TWI651042B (zh) * | 2015-01-06 | 2019-02-11 | SK Hynix Inc. | Electromagnetic interference suppression structure and electronic device having the same |
GB2563936A (en) * | 2017-06-30 | 2019-01-02 | Canon Kk | Method and apparatus for encoding or decoding a flag during video data encoding |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10229564A (ja) * | 1996-12-12 | 1998-08-25 | Matsushita Electric Ind Co Ltd | Image encoding device, image decoding device, image encoding method, and image decoding method |
JPH11164303A (ja) * | 1997-12-01 | 1999-06-18 | Nippon Hoso Kyokai <Nhk> | Lossless compression encoding device and lossless decompression decoding device for moving images |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5471207A (en) * | 1994-02-23 | 1995-11-28 | Ricoh Company Ltd. | Compression of palettized images and binarization for bitwise coding of M-ary alphabets therefor |
US5654702A (en) * | 1994-12-16 | 1997-08-05 | National Semiconductor Corp. | Syntax-based arithmetic coding for low bit rate videophone |
JP3855286B2 (ja) * | 1995-10-26 | 2006-12-06 | Sony Corporation | Image encoding device and image encoding method, image decoding device and image decoding method, and recording medium |
JP3466058B2 (ja) | 1996-07-31 | 2003-11-10 | Matsushita Electric Industrial Co., Ltd. | Image decoding device and image decoding method |
JP3263807B2 (ja) | 1996-09-09 | 2002-03-11 | Sony Corporation | Image encoding device and image encoding method |
JP3774954B2 (ja) * | 1996-10-30 | 2006-05-17 | Hitachi, Ltd. | Moving picture encoding method |
DE69735437T2 (de) * | 1996-12-12 | 2006-08-10 | Matsushita Electric Industrial Co., Ltd., Kadoma | Image encoder and image decoder |
US6005980A (en) * | 1997-03-07 | 1999-12-21 | General Instrument Corporation | Motion estimation and compensation of video object planes for interlaced digital video |
US5974184A (en) * | 1997-03-07 | 1999-10-26 | General Instrument Corporation | Intra-macroblock DC and AC coefficient prediction for interlaced digital video |
US5857035A (en) * | 1997-05-19 | 1999-01-05 | Hewlett-Packard Company | Arithmetic coding compressor for encoding multiple bit values |
JP3349957B2 (ja) * | 1997-07-09 | 2002-11-25 | Hynix Semiconductor Inc. | Device and method for interpolating binary image information using a context probability table |
EP0940774A3 (en) * | 1998-03-05 | 2000-07-05 | Matsushita Electric Industrial Co., Ltd. | Motion vector coding and decoding apparatus and method |
EP1142343A1 (en) * | 1999-10-29 | 2001-10-10 | Koninklijke Philips Electronics N.V. | Video encoding method |
US6538583B1 (en) * | 2001-03-16 | 2003-03-25 | Analog Devices, Inc. | Method and apparatus for context modeling |
EP1384376A4 (en) * | 2001-04-11 | 2010-08-25 | Nice Systems Ltd | DIGITAL VIDEO PROTECTION FOR AUTHENTICITY VERIFICATION |
US6856701B2 (en) * | 2001-09-14 | 2005-02-15 | Nokia Corporation | Method and system for context-based adaptive binary arithmetic coding |
JP2003319391A (ja) | 2002-04-26 | 2003-11-07 | Sony Corporation | Encoding device and method, decoding device and method, recording medium, and program |
CN1190755C (zh) * | 2002-11-08 | 2005-02-23 | Beijing University of Technology | Perceptron-based lossless compression method for color images |
JP2006352911A (ja) | 2006-08-18 | 2006-12-28 | Sony Corporation | Encoding device and method, and decoding device and method |
JP2008104205A (ja) | 2007-10-29 | 2008-05-01 | Sony Corporation | Encoding device and method |
JP2008092593A (ja) | 2007-10-29 | 2008-04-17 | Sony Corporation | Decoding device and method |
JP5134047B2 (ja) | 2010-07-23 | 2013-01-30 | Sony Corporation | Decoding device and method, recording medium, and program |
JP5234870B1 (ja) | 2013-02-08 | 2013-07-10 | Sony Corporation | Encoding device and method, recording medium, and program |
- 2002
- 2002-04-26 JP JP2002125295A patent/JP2003319391A/ja active Pending
- 2003
- 2003-04-22 EP EP03717684.9A patent/EP1501313A4/en not_active Ceased
- 2003-04-22 EP EP20130164839 patent/EP2624461A3/en not_active Ceased
- 2003-04-22 EP EP20130166278 patent/EP2629522A1/en not_active Ceased
- 2003-04-22 CN CN038094312A patent/CN1650636B/zh not_active Expired - Fee Related
- 2003-04-22 KR KR1020047017169A patent/KR100952649B1/ko active IP Right Grant
- 2003-04-22 KR KR1020097022376A patent/KR100969345B1/ko active IP Right Grant
- 2003-04-22 WO PCT/JP2003/005081 patent/WO2003092301A1/ja active Application Filing
- 2003-04-22 CN CN2009102228100A patent/CN101800898B/zh not_active Expired - Lifetime
- 2003-04-22 CN CN200910222809.8A patent/CN101800897B/zh not_active Expired - Lifetime
- 2003-04-22 US US10/509,682 patent/US7778331B2/en active Active
- 2003-04-22 EP EP20130166271 patent/EP2627089A3/en not_active Ceased
- 2009
- 2009-05-20 US US12/468,912 patent/US8325820B2/en not_active Expired - Lifetime
- 2009-05-20 US US12/468,917 patent/US8320467B2/en not_active Expired - Lifetime
- 2012
- 2012-07-11 US US13/546,666 patent/US8571116B2/en not_active Expired - Lifetime
- 2012-07-26 US US13/559,066 patent/US8619875B2/en not_active Expired - Lifetime
- 2012-07-26 US US13/558,758 patent/US8654862B2/en not_active Expired - Fee Related
- 2012-07-26 US US13/558,554 patent/US8611430B2/en not_active Expired - Fee Related
- 2012-07-26 US US13/559,117 patent/US8649442B2/en not_active Expired - Fee Related
- 2012-07-26 US US13/558,712 patent/US8693550B2/en not_active Expired - Fee Related
- 2012-09-14 US US13/619,975 patent/US8509312B2/en not_active Expired - Lifetime
- 2012-09-14 US US13/619,779 patent/US8477854B2/en not_active Expired - Lifetime
- 2012-09-14 US US13/619,759 patent/US8509311B2/en not_active Expired - Lifetime
- 2012-12-21 US US13/725,462 patent/US8483280B2/en not_active Expired - Lifetime
- 2012-12-21 US US13/725,519 patent/US8488669B2/en not_active Expired - Lifetime
- 2014
- 2014-02-10 US US14/176,826 patent/US9088784B2/en not_active Expired - Fee Related
- 2015
- 2015-06-08 US US14/732,909 patent/US9532068B2/en not_active Expired - Lifetime
Non-Patent Citations (4)
Title |
---|
MARPE D. ET AL.: "Adaptive codes for H.26L", ITU-TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6 VIDEO CODING EXPERTS GROUP (VCEG) VCEG-L13, TWELFTH MEETING, 9 January 2001 (2001-01-09) - 12 January 2001 (2001-01-12), EIBSEE, GERMANY, pages 1 - 8, XP002971071 * |
MARPE D. ET AL.: "Video compression using context-based adaptive arithmetic coding", IMAGE PROCESSING 2001. PROCEEDINGS. INTERNATIONAL CONFERENCE, IEEE, vol. 3, 7 October 2001 (2001-10-07) - 10 October 2001 (2001-10-10), THESSALONIKI, GREECE, pages 558 - 561, XP001077781 * |
See also references of EP1501313A4 * |
WANG L. ET AL.: "Interlace coding tools for H.26L video coding", ITU-TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6 VIDEO CODING EXPERTS GROUP (VCEG) VCEG-O37, 15TH MEETING, 4 December 2001 (2001-12-04) - 6 December 2001 (2001-12-06), PATTAYA, THAILAND, pages 1 - 20, XP002240263 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100455019C (zh) * | 2004-07-22 | 2009-01-21 | Samsung Electronics Co., Ltd. | Method of context-adaptive binary arithmetic coding and apparatus using the same |
CN101686397B (zh) * | 2004-09-16 | 2013-07-31 | Thomson Licensing | Method for fast video frame and field coding |
US20100232496A1 (en) * | 2004-11-09 | 2010-09-16 | Masayoshi Tojima | Decoding-processing apparatus and method |
US8406308B2 (en) * | 2004-11-09 | 2013-03-26 | Panasonic Corporation | Encoding apparatus for encoding binary signals generated from binarized multivalued syntax elements and method for performing the same |
US8867612B2 (en) | 2004-11-09 | 2014-10-21 | Panasonic Corporation | Decoding method for decoding an incoming bitstream and method for performing the same |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2003092301A1 (en) | Coding device and method, decoding device and method, recording medium, and program | |
JP5134049B2 (ja) | Encoding device and method, recording medium, and program | |
JP5134047B2 (ja) | Decoding device and method, recording medium, and program | |
JP2006352911A (ja) | Encoding device and method, and decoding device and method | |
JP2008104205A (ja) | Encoding device and method | |
JP2008092593A (ja) | Decoding device and method | |
JP5234870B1 (ja) | Encoding device and method, recording medium, and program | |
JP5215495B2 (ja) | Decoding device and method, recording medium, and program | |
JP5134050B2 (ja) | Encoding device and method, recording medium, and program | |
JP5134048B2 (ja) | Decoding device and method, recording medium, and program | |
JP5234872B1 (ja) | Encoding device and method, recording medium, and program | |
JP5234871B2 (ja) | Decoding device and method, recording medium, and program | |
JP5234874B1 (ja) | Encoding device and method, recording medium, and program | |
JP5234873B2 (ja) | Decoding device and method, recording medium, and program | |
JP2013123263A (ja) | Decoding device and method, recording medium, program, and decoded image information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN KR US |
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 10509682 Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 2003717684 Country of ref document: EP Ref document number: 1020047017169 Country of ref document: KR |
WWE | Wipo information: entry into national phase |
Ref document number: 20038094312 Country of ref document: CN |
WWP | Wipo information: published in national office |
Ref document number: 1020047017169 Country of ref document: KR |
WWP | Wipo information: published in national office |
Ref document number: 2003717684 Country of ref document: EP |