WO2007114368A1 - Image encoding apparatus and method, and image decoding apparatus and method - Google Patents

Image encoding apparatus and method, and image decoding apparatus and method

Info

Publication number
WO2007114368A1
WO2007114368A1 PCT/JP2007/057197
Authority
WO
WIPO (PCT)
Prior art keywords
image
bit
pixel
accuracy
information
Prior art date
Application number
PCT/JP2007/057197
Other languages
English (en)
Japanese (ja)
Inventor
Reiko Noda
Takeshi Chujoh
Original Assignee
Kabushiki Kaisha Toshiba
Priority date
Filing date
Publication date
Priority claimed from PCT/JP2006/320876 external-priority patent/WO2007116551A1/fr
Priority to MX2008012516A priority Critical patent/MX2008012516A/es
Priority to CN2007800112005A priority patent/CN101411202B/zh
Priority to EP20070740633 priority patent/EP2003897A4/fr
Priority to CA 2645931 priority patent/CA2645931A1/fr
Priority to BRPI0709853-7A priority patent/BRPI0709853A2/pt
Application filed by Kabushiki Kaisha Toshiba filed Critical Kabushiki Kaisha Toshiba
Priority to AU2007232828A priority patent/AU2007232828B2/en
Priority to JP2008508675A priority patent/JP5254004B2/ja
Publication of WO2007114368A1 publication Critical patent/WO2007114368A1/fr
Priority to US12/242,181 priority patent/US8606028B2/en
Priority to NO20084509A priority patent/NO20084509L/no
Priority to HR20080553A priority patent/HRP20080553A2/xx


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • Image encoding apparatus and method and image decoding apparatus and method
  • the present invention relates to an image encoding apparatus and method for moving images or still images, and an image decoding apparatus and method.
  • In H.264, which is a typical moving picture coding standard, an orthogonal transform such as the discrete cosine transform (DCT) is applied to the prediction error signal between the input image signal and a prediction image signal created by intra prediction or motion compensation; after this transformation onto the frequency axis, the transform coefficients are compressed by quantization and coding to obtain an encoded image. This is a lossy compression method.
  • In H.264, the interpolation filters used for intra prediction and motion compensation apply filter coefficients to a plurality of pixels and then round the result to the same precision as the pixel bit length of the input image.
  • There is also a known technique in which an encoder and a decoder are provided integrally, for example in a digital camera, and the bit length of the input image is shifted so that the encoder matches the DCT calculation accuracy shared with the decoder. In this method, when the encoder and the decoder are provided separately and their DCT calculation accuracies differ, the bits are shifted in accordance with the calculation accuracy.
  • An object of the present invention is to provide an image encoding device, an image encoding method, an image decoding device, and an image decoding method that improve encoding efficiency by sufficiently increasing the accuracy of intra prediction and motion compensation.
  • According to a first aspect of the present invention, there is provided an image encoding device in which a converted input image with a different bit precision is generated by converting the bit precision of each pixel of the input image, and bit conversion information representing the number of bits changed by the conversion is generated.
  • A second aspect of the present invention provides an image encoding device including: a pixel bit accuracy converter that converts the bit accuracy of each pixel of an input image composed of a plurality of pixels each having N-bit accuracy to (N+M)-bit accuracy, larger by M bits; a prediction image generator that generates a prediction image of (N+M)-bit accuracy for the (N+M)-bit-accuracy input image from a reference image of (N+M)-bit accuracy; a subtractor that obtains a difference signal between the (N+M)-bit-accuracy input image and the (N+M)-bit-accuracy prediction image; an encoder that encodes the difference signal and outputs image encoding information; a decoding controller that outputs a decoded differential image based on the image encoding information; an adder that adds the (N+M)-bit-accuracy prediction image and the decoded differential image to output a decoded image of (N+M)-bit accuracy; and a reference image storage memory that stores the (N+M)-bit-accuracy decoded image as the reference image.
  • A third aspect of the present invention provides an image decoding apparatus including: a demultiplexer that receives image information obtained by multiplexing bit conversion information and image encoded information and separates it into the bit conversion information and the image encoded information; an image decoder that decodes the image encoded information and outputs a decoded image; and a pixel bit accuracy converter that converts the value of each pixel of the decoded image to a different bit accuracy based on the bit conversion information.
  • A further aspect of the present invention provides an image decoding apparatus including: a decoder that decodes input encoded image information into a decoded differential image of (N+M)-bit accuracy; a prediction image generator that creates a predicted image of (N+M)-bit accuracy from a reference image of (N+M)-bit accuracy using the encoded image information; an adder that obtains a decoded image of (N+M)-bit accuracy by adding the decoded differential image and the predicted image; a reference image storage memory that stores the (N+M)-bit-accuracy decoded image as the reference image; and a pixel bit accuracy converter that converts each pixel of the (N+M)-bit-accuracy decoded image to N-bit accuracy and outputs a decoded image of N-bit accuracy.
  • FIG. 1A is a block diagram showing a configuration example of an image encoding device according to a first embodiment of the present invention.
  • FIG. 1B is a flowchart showing the operation of the image encoding device in FIG. 1A.
  • FIG. 2 is a block diagram showing a configuration of a pixel bit length expander.
  • FIG. 3 is a diagram showing an example of a syntax structure used in the embodiment.
  • FIG. 4 is a diagram showing a structure of sequence parameter set syntax used in the same embodiment.
  • FIG. 5 is a diagram showing the structure of supplemental syntax used in the same embodiment.
  • FIG. 6 is a diagram showing the structure of supplemental syntax used in the same embodiment.
  • FIG. 7A is a block diagram showing a configuration example of an image decoding apparatus according to the embodiment.
  • FIG. 7B is a flowchart showing an operation of the image decoding apparatus in FIG. 7A.
  • FIG. 8 is a block diagram showing a configuration of the pixel bit length reducer in FIG. 7A.
  • FIG. 9A is a block diagram showing a configuration example of an image encoding device according to the second embodiment of the present invention.
  • FIG. 9B is a flowchart showing the operation of the image encoding device in FIG. 9A.
  • FIG. 10A is a block diagram showing a configuration example of an image decoding device according to the embodiment.
  • FIG. 10B is a flowchart showing the operation of the image decoding apparatus in FIG. 10A.
  • FIG. 11A is a block diagram showing a configuration example of the image encoding device according to the third embodiment.
  • FIG. 11B is a flowchart showing an operation of the image encoding device in FIG. 11A.
  • FIG. 11C is a diagram showing an outline of the operation of the pixel bit length converter of FIG. 11A.
  • FIG. 11D is a block diagram showing an example of the configuration of the pixel bit length converter of FIG. 11A.
  • FIG. 11E is a block diagram showing an example of the configuration of the pixel bit length converter of FIG. 11A
  • FIG. 11F is a block diagram showing an example of the configuration of the frame memory in FIG. 11A.
  • FIG. 12 is a block diagram showing a configuration of the pixel bit length converter of FIG. 11A.
  • FIG. 13 is a diagram showing the structure of sequence parameter set syntax used in the embodiment.
  • FIG. 14 shows the structure of the picture parameter set syntax used in the embodiment.
  • FIG. 15 is a diagram showing a structure of slice level syntax used in the embodiment.
  • FIG. 16 is a diagram showing a structure of macroblock level syntax used in the embodiment.
  • FIG. 17A is a block diagram showing a configuration example of the image decoding apparatus according to the embodiment.
  • FIG. 17B is a flowchart showing the operation of the image decoding apparatus in FIG. 17A.
  • FIG. 18A is a block diagram showing a configuration example of an image encoding device according to the fourth embodiment of the present invention.
  • FIG. 18B is a flowchart showing an operation of the image encoding device in FIG. 18A.
  • FIG. 19A is a block diagram showing a configuration example of the image decoding apparatus according to the embodiment.
  • FIG. 19B is a flowchart showing the operation of the image decoding apparatus in FIG. 19A.
  • FIG. 20A is a block diagram showing a configuration example of the image encoding device according to the fifth embodiment.
  • FIG. 20B is a flowchart showing an operation of the image encoding device in FIG. 20A.
  • FIG. 21A is a block diagram showing a configuration example of the image decoding apparatus according to the embodiment.
  • FIG. 21B is a flowchart showing the operation of the image decoding apparatus in FIG. 21A.
  • FIG. 22 is a block diagram of a predicted image creator on the encoder side.
  • FIG. 23 is a block diagram of a prediction image creator on the decoder side.
  • FIG. 24 is a block diagram showing a configuration of an inter-frame predictor.
  • FIG. 25 is a block diagram showing another configuration of the inter-frame predictor.
  • FIG. 26 is a block diagram showing another configuration of the inter-frame predictor.
  • FIG. 27 is a block diagram showing a configuration of a loop filter on the encoder side.
  • FIG. 28 is a block diagram showing a configuration of a decoding-side loop filter.
  • FIG. 29 is a block diagram showing another configuration of the pixel bit length expander.
  • FIG. 30 is a block diagram of a filter processing unit.
  • FIG. 31 is a flowchart of the operation of the filter processing unit.
  • FIG. 32 is a block diagram of a pixel bit length reducer.
  • FIG. 33A is a block diagram of an image coding apparatus according to the sixth embodiment.
  • FIG. 33B is a flowchart showing the operation of the image encoding device in FIG. 33A.
  • FIG. 34A is a block diagram of an image decoding apparatus according to the sixth embodiment.
  • FIG. 34B is a flowchart showing an operation of the image decoding apparatus in FIG. 34A.
  • FIG. 35A is a block diagram of an image coding apparatus according to the seventh embodiment.
  • FIG. 35B is a flowchart showing an operation of the image encoding device in FIG. 35A.
  • FIG. 36 is a diagram showing sequence parameter set syntax.
  • FIG. 37 is a diagram showing sequence parameter set syntax.
  • FIG. 38 is a diagram showing a setting example of control flags used in the processing of the image encoding device.
  • FIG. 39A is a block diagram showing a configuration of an image decoding apparatus according to the seventh embodiment.
  • FIG. 39B is a flowchart showing the operation of the image decoding apparatus in FIG. 39A.
  • FIG. 40 is a diagram showing setting of control flags included in bit conversion information.
  • FIG. 41 is a diagram for explaining a difference in rounding error caused by the presence or absence of bit precision extension.
  • FIG. 42 is a diagram showing predicted pixel values with half-pixel accuracy.
  • FIG. 43 is a diagram showing conversion characteristics in pixel bit length extension and pixel bit length reduction according to the embodiment of the present invention.
  • This image encoding apparatus includes a pixel bit length extender (that is, a pixel bit precision converter that converts the pixel bit precision) 1001 to which the input image signal 100 is supplied and which extends the pixel bit length, an image encoder 10 connected to the output of the pixel bit length extender 1001, a multiplexer 12 connected to the output of the image encoder 10, and a bit length conversion controller 1002 connected to another output of the pixel bit length extender 1001 to supply bit extension information to the multiplexer 12.
  • First, a moving image signal is input to the image encoding device in units of frames as the input image signal 100 (S11).
  • The pixel bit length expander 1001 performs a process of converting each pixel of the input N-bit precision image signal 100 to (N+M)-bit precision, larger than N bits by M bits. That is, in the pixel bit length extender 1001 shown in FIG. 2, whether to extend the bit length of each pixel of the input N-bit precision image signal 100 is selected by the switch E0 based on the bit length extension information 1003 (S12). The switch E0 is connected to the ON side when expanding and to the OFF side when not expanding.
  • When the switch E0 is ON, the image signal is input to the pixel bit length extension converter (pixel bit precision converter) E01, where the pixel bit length conversion described later is performed (S13). When the switch E0 is OFF, the image signal is output as it is without pixel bit length conversion. For example, if the value of a pixel in the input image signal is K, the pixel value K' after extending it by M bits is calculated according to formula (1).
  • Alternatively, the pixel value K' is calculated as shown in the following equation (1-1), where Min and Max are the minimum and maximum pixel values of the input image and γ is the gamma value:
  • K' = INT[{(K − Min) / (Max − Min)}^γ × ((1 << (N+M)) − 1) + offset]    ... (1-1)
  • INT indicates processing for rounding to an integer.
  • The offset in formulas (1-1) and (1-2) indicates the offset used when rounding and is an arbitrary value between 0 and 1.
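  • As an illustration only (not part of the disclosed embodiments), the extension variants above can be sketched in Python roughly as follows; the function names, the rounding offset of 0.5, and the combined scaled/gamma form are assumptions made for the sketch:

    def extend_pixel_shift(k, m):
        """Simple bit extension (BIT_EXT_TRANS): shift an N-bit pixel value up by M bits."""
        return k << m

    def extend_pixel_scaled(k, n, m, vmin, vmax, gamma=1.0, offset=0.5):
        """Rescale [Min, Max] onto the full (N+M)-bit range, optionally with a
        gamma exponent (GAMMA_TRANS / DR_TRANS style), cf. equations (1-1)/(1-2)."""
        norm = (k - vmin) / (vmax - vmin)
        return int(norm ** gamma * ((1 << (n + m)) - 1) + offset)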
  • When the input image is a color image signal composed of a plurality of components such as RGB, the bit length of each pixel of each component is expanded to a bit accuracy larger by M bits, and each component may then be converted to a different color space. For example, each pixel R, G, B of each component of the N-bit input image is converted to Y, Co, Cg after extending the value of each pixel to a bit precision larger by M bits. Round(A) denotes the process of rounding A to an integer. If the number of bits M to be expanded is set to 2 or more, this color conversion can be performed without causing a rounding error in the rounding process.
  • the color conversion shown here is merely an example, and any process may be performed as long as it is a process for performing color conversion.
  • The above is merely an example of the conversion performed by the pixel bit length expander 1001; the process of extending the value of each pixel to a bit precision larger by M bits is not limited to the above, and any processing that expands the bit precision may be used.
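  • For illustration, one common RGB-to-YCoCg conversion combined with the M-bit extension is sketched below (see also the inverse sketched later for the decoding side); the exact conversion matrix and the chroma centring offset are assumptions, not taken from the patent text:

    def rgb_to_ycocg_extended(r, g, b, n, m):
        """Extend each N-bit RGB component by M bits, then convert to YCoCg.
        With M >= 2 the divisions by 4 below are exact, so this particular
        conversion introduces no rounding error (cf. the remark on M above)."""
        half = 1 << (n + m - 1)            # centre value for the chroma components
        r, g, b = r << m, g << m, b << m   # M-bit extension first
        y  = (r + 2 * g + b) >> 2
        co = ((r - b) >> 1) + half
        cg = ((-r + 2 * g - b) >> 2) + half
        return y, co, cg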
  • The bit extended input image signal 1009 obtained as described above is guided to the image encoder 10.
  • In addition, bit conversion information 1010, such as the number of bits M to be expanded, the bit length N of the input image signal, and the gamma value, pixel maximum value, minimum value, histogram and other information necessary for the conversion, is generated (S14), and the bit length conversion controller (bit precision conversion controller) 1002 guides it to the multiplexer 12 as bit extension information 1003.
  • the image encoder 10 encodes the input bit-extended input image signal 1009, and outputs it to the multiplexing unit 12 as image encoded data 11 (S15).
  • the multiplexing unit 12 multiplexes the encoded image data 11 and the bit extension information 1003 (S16), and sends them to the transmission system or storage system (not shown) as the encoded data 117 (S17).
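  • A minimal sketch of this overall flow (steps S11-S17), with the image encoder 10 and the multiplexer 12 represented as placeholder callables, might look as follows; the dictionary keys are illustrative:

    def encode_frame(frame, m, image_encoder, multiplexer):
        """Extend the pixel bit length, encode, then multiplex the coded data
        with the bit extension information (sketch of steps S11-S17)."""
        extended = [[k << m for k in row] for row in frame]                # S12-S13
        bit_extension_info = {"ex_seq_shift_bits": m,
                              "ex_bit_transform_type": "BIT_EXT_TRANS"}    # S14
        coded_data = image_encoder(extended)                               # S15
        return multiplexer(coded_data, bit_extension_info)                 # S16-S17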
  • FIG. 3 shows an example of the syntax structure used in the present embodiment.
  • In the high-level syntax (401), syntax information of layers higher than the slice is described.
  • In the slice level syntax (402), information required for each slice is specified, and in the macroblock level syntax (403), the quantization parameter change value, mode information, and other data required for each macroblock are specified.
  • Each syntax includes a more detailed syntax.
  • The high-level syntax (401) is composed of sequence-level and picture-level syntaxes such as the sequence parameter set syntax (404) and the picture parameter set syntax (405).
  • The slice level syntax (402) consists of the slice header syntax (406), the slice data syntax (407), and so on.
  • The macroblock level syntax (403) is composed of the macroblock header syntax (408), the macroblock data syntax (409), and the like.
  • The syntax described above is an indispensable component at the time of decoding; if this syntax information is missing, the data cannot be correctly restored at the time of decoding.
  • In addition, there is a supplemental syntax (410) as an auxiliary syntax for multiplexing information that is not necessarily required at the time of decoding.
  • Supplemental syntax is prepared to send information indicating instructions for processing that can be executed independently on the decoding side.
  • ex_seq_bit_extention_flag shown in the sequence parameter set syntax of Fig. 4 is a flag indicating whether or not to perform bit extension.
  • When the flag is TRUE, whether or not to perform bit extension can be switched in sequence units. If the flag is FALSE, no bit expansion is performed in the sequence.
  • When ex_seq_bit_extention_flag is TRUE, ex_seq_shift_bits indicating how many bits to extend is transmitted. For example, ON/OFF of the switch E0 in the pixel bit length extender 1001 shown in FIG. 2 is determined by TRUE/FALSE of this flag.
  • The sequence parameter set syntax also includes ex_bit_transform_type, which indicates what kind of transformation has been performed. In ex_bit_transform_type, for example, a value is stored such as BIT_EXT_TRANS indicating the simple bit extension of equation (1), GAMMA_TRANS indicating that the gamma conversion of equation (1-1) has been performed, or DR_TRANS indicating that the dynamic range conversion of equation (1-2) has been performed.
  • When ex_bit_transform_type is GAMMA_TRANS, gamma_value indicating what gamma value is used for the conversion is transmitted.
  • When ex_bit_transform_type is DR_TRANS, max_value and min_value indicating the maximum and minimum values of the pixels of the input image signal are also transmitted.
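  • As an illustration of how an encoder might populate these sequence-level syntax elements, a sketch follows; the element names match those above, but gathering them into a Python dictionary is merely an assumption for the example:

    def build_seq_bit_extension_fields(shift_bits, transform_type,
                                       gamma=None, vmin=None, vmax=None):
        """Collect the sequence-level bit-extension syntax elements described above."""
        fields = {"ex_seq_bit_extention_flag": shift_bits > 0}
        if fields["ex_seq_bit_extention_flag"]:
            fields["ex_seq_shift_bits"] = shift_bits
            fields["ex_bit_transform_type"] = transform_type  # BIT_EXT_TRANS / GAMMA_TRANS / DR_TRANS
            if transform_type == "GAMMA_TRANS":
                fields["gamma_value"] = gamma
            elif transform_type == "DR_TRANS":
                fields["max_value"], fields["min_value"] = vmax, vmin
        return fields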
  • It is also possible to use data multiplexed in the supplemental syntax (410).
  • Figure 5 shows an example of transmitting the bit accuracy of the output image signal on the decoding side using supplemental syntax.
  • The bit extension flag shown in the supplemental syntax is a flag indicating whether or not the bit accuracy of the output image signal is to be changed. When the flag is TRUE, it indicates that the bit accuracy of the output image signal is changed; when the flag is FALSE, the bit precision of the output image signal is not changed. When the flag is TRUE, bit_depth_of_decoded_image is further transmitted.
  • bit_depth_of_decoded_image is a value indicating the bit accuracy of the output image signal.
  • bit_depth_of_decoded_image does not necessarily have to be used; it is also possible to output the decoded image with the decoded bit accuracy as it is.
  • ex_color_transform_flag indicated in the supplemental syntax is a flag indicating whether or not to transform the color space of the output image signal.
  • When the flag is TRUE, it indicates that the color space of each component of the output image signal is changed. On the other hand, when the flag is FALSE, it indicates that the color space of each component of the output image signal is not changed. If the flag is TRUE, color_space_of_decoded_image is also transmitted.
  • color_space_of_decoded_image is a value indicating the color space of the output image signal.
  • On the decoding side, described later, each component of the output image signal is converted to the color space specified by the color_space_of_decoded_image value, and the decoded image can be output with the bit accuracy indicated by bit_depth_of_decoded_image by expanding or reducing the bit accuracy of the decoded image.
  • The image decoding apparatus includes a demultiplexer 21 to which encoded data is input, an image decoder 20 connected to the output of the demultiplexer 21, a pixel bit length reducer 2001 connected to the output of the image decoder 20, and a bit length conversion controller (bit precision conversion controller) 2002.
  • the encoded data 117 encoded by the image encoding device of FIG. 1A is input to the demultiplexer 21 (S21).
  • the encoded data 117 is separated into the bit extension information 2004 and the image encoded data 11 (S22).
  • the bit extension information 2004 is input to the bit length conversion controller 2002, and the image encoded data 11 is input to the image decoding device 20.
  • the encoded image data 11 is decoded by a procedure reverse to the encoding performed by the image encoder 10 to be a bit extended decoded image signal 203 (S23).
  • the bit extended decoded image signal 203 is input to the pixel bit length reducer 2001.
  • When the bit extension information 2004 is input to the bit length conversion controller 2002, the bit length conversion controller 2002 outputs bit conversion information 2003 indicating how many bits the decoded image has been extended and the information necessary for the conversion.
  • In the pixel bit length reducer 2001, whether or not to reduce the bit length of each pixel is selected by the switch E2 based on the bit conversion information 2003 (S24).
  • The switch E2 is connected to the ON side when performing the reduction, and to the OFF side when not reducing.
  • the image signal is input to the pixel bit length reduction converter E02, and pixel bit length conversion described later is performed (S25).
  • When the switch E2 is OFF, the image signal is output as it is without pixel bit length conversion. For example, if ex_seq_bit_extention_flag is TRUE and the bit length of each pixel has been extended, the switch is set to the ON side; if it is FALSE, the switch is set to the OFF side.
  • When the bit conversion information 2003 indicates, for example by ex_seq_shift_bits, that the image has been extended by M bits, and ex_bit_transform_type indicates BIT_EXT_TRANS, that is, that the transformation of equation (1) has been performed, the pixel bit length reducer 2001 reduces the value of each pixel of the input bit extended decoded image signal 203 by M bits. For example, when the value of a certain pixel in the bit extended decoded image signal 203 is K, the pixel value K' after being reduced by M bits is calculated according to equation (2).
  • The reduction conversion method here may be any conversion method that reduces the bit length, for example with offset set to an arbitrary value from 0 to (1 << M).
  • When the bit conversion information 2003 indicates, for example by ex_seq_shift_bits, that the image has been extended by M bits, and ex_bit_transform_type indicates GAMMA_TRANS, that is, that the gamma conversion of equation (1-1) has been performed, the pixel bit length reducer 2001 reduces the value of each pixel of the input bit extended decoded image signal 203 by M bits. For example, if the value of a certain pixel in the bit extended decoded image signal 203 is K, the pixel value K' after the M-bit reduction is calculated according to the following equation (2-1):
  • K' = INT[{K / ((1 << (N+M)) − 1)}^(1/γ) × ((1 << N) − 1) + offset]    ... (2-1)
  • When the bit conversion information 2003 indicates, for example by ex_seq_shift_bits, that the image has been extended by M bits, ex_bit_transform_type indicates DR_TRANS, that is, that the dynamic range conversion of equation (1-2) has been performed, and min_value and max_value indicate that the minimum and maximum pixel values of the input image are Min and Max, respectively, the pixel bit length reducer 2001 reduces the value of each pixel of the input bit extended decoded image signal 203 by M bits. For example, when the value of a certain pixel in the bit extended decoded image signal 203 is K, the pixel value K' after being reduced by M bits is calculated according to the following equation (2-2):
  • K' = INT[{(K − (Min << M)) / ((Max − Min) << M)} × ((1 << N) − 1) + offset]    ... (2-2)
  • INT indicates processing for rounding to an integer.
  • The offset in formulas (2-1) and (2-2) indicates the offset used when rounding, and is an arbitrary value between 0 and 1.
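  • For illustration, the reduction variants above can be sketched as follows; as before, the function names and default offsets are assumptions, and equation (2) is taken to be the plain M-bit right shift that inverts the simple extension:

    def reduce_pixel_shift(k, m, offset=0):
        """Simple M-bit reduction (inverse of the shift extension), cf. equation (2)."""
        return (k + offset) >> m

    def reduce_pixel_gamma(k, n, m, gamma, offset=0.5):
        """Gamma-style reduction back to N-bit precision, cf. equation (2-1)."""
        norm = k / ((1 << (n + m)) - 1)
        return int(norm ** (1.0 / gamma) * ((1 << n) - 1) + offset)

    def reduce_pixel_dynamic_range(k, n, m, vmin, vmax, offset=0.5):
        """Dynamic-range reduction using the signalled Min/Max values, cf. equation (2-2)."""
        norm = (k - (vmin << m)) / ((vmax - vmin) << m)
        return int(norm * ((1 << n) - 1) + offset)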
  • When the bit conversion information 2003 indicates, for example by ex_seq_shift_bits, that the image has been extended by M bits, and the color space of the decoded image signal differs from the color space specified by color_space_of_decoded_image in the supplemental syntax shown in FIG. 5, each pixel of each component of the decoded image is converted into the color space specified by color_space_of_decoded_image, and then the bit length is reduced by M bits.
  • For example, if the input image has been converted from RGB to YCoCg and encoded, and RGB is specified in color_space_of_decoded_image, then for the pixels Y, Co, Cg of each component of the decoded image, bit reduction and color space conversion are performed according to formula (2-3), and each pixel of each component is converted to R, G, and B, respectively.
  • R' = Clip1Y(B' + (Co − (1 << (N+M−1))))
  • Clip1Y(A) outputs A as it is when A satisfies 0 ≤ A < (1 << (N+M)), outputs 0 when A < 0, and outputs (1 << (N+M)) − 1 when A ≥ (1 << (N+M)).
  • The reduction conversion method here may be any method that reduces the bits, for example with offset set to an arbitrary value from 0 to (1 << M).
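  • Since only part of formula (2-3) appears above, the following sketch shows one possible YCoCg-to-RGB inverse combined with the M-bit reduction; it inverts the forward sketch given earlier and respects the Clip1Y definition and the (1 << (N+M−1)) chroma offset, but the exact matrix is an assumption:

    def clip1y(a, n, m):
        """Clip to the valid (N+M)-bit range, as Clip1Y described above."""
        return max(0, min(a, (1 << (n + m)) - 1))

    def ycocg_to_rgb_reduced(y, co, cg, n, m, offset=0):
        """Invert the YCoCg conversion sketched earlier, then reduce each
        component back to N-bit precision by an M-bit right shift."""
        half = 1 << (n + m - 1)
        co, cg = co - half, cg - half          # remove the chroma centring offset
        g = clip1y(y + cg, n, m)
        tmp = y - cg
        r = clip1y(tmp + co, n, m)
        b = clip1y(tmp - co, n, m)
        return tuple((c + offset) >> m for c in (r, g, b))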
  • The above is an example in which the color space of the decoded image signal is specified by color_space_of_decoded_image in the supplemental syntax. Even if there is no such specification, or the color space of the decoded image signal is specified in the high-level syntax 401 instead of the supplemental syntax, when color conversion processing is performed in the process of outputting the decoded image, the value of each pixel can be reduced to a bit length smaller by M bits after the color conversion processing has been performed.
  • the color conversion shown here is only an example, and any process that performs color conversion can be performed.
  • the decoded image 202 having the same N-bit accuracy as the input image input to the image encoding device is output from the pixel bit length reducer 2001 (S25).
  • image encoding and image decoding can be performed with a bit accuracy that is M bits larger than the bit accuracy of the input image, and encoding efficiency can be improved.
  • This image encoding device has a pixel bit length extender (pixel bit precision converter) 1001, a bit length conversion controller (bit precision conversion controller) 1002, a subtractor 101, an orthogonal transformer 104, a quantizer 106, an inverse quantizer 109, an inverse orthogonal transformer 110, an adder 111, a loop filter 113, a frame memory 114, a predicted image creator 115, and an entropy encoder 108.
  • the output of the pixel bit length extender 1001 that receives the input image signal is connected to the entropy encoder 108 via the subtractor 101, the orthogonal transformer 104, and the quantizer 106.
  • The output of the quantizer 106 is also connected to the predicted image creator 115 via the inverse quantizer 109, the inverse orthogonal transformer 110, the adder 111, the loop filter 113, and the frame memory 114.
  • The predicted image signal from the predicted image creator 115 is input to the subtractor 101 and the adder 111.
  • the motion vector / prediction mode information from the prediction image generator 115 is input to the entropy encoder 108.
  • The bit length conversion controller 1002 inputs the bit conversion information to the pixel bit length extender 1001 and inputs the bit extension information to the entropy encoder 108.
  • When a moving image signal, for example in units of frames, is input as the input image signal 100 to the image encoding device (S31), the pixel bit length expander 1001 expands the value of each pixel of the input N-bit precision image signal 100 to (N+M)-bit precision, larger than N bits by M bits (S32). For example, if the value of a pixel in the input image signal is K, the pixel value K' after the M-bit expansion is calculated by, for example, equation (1), (1-1), (1-2), or (1-3).
  • Conversion information such as the number of bits M to be extended is guided to the entropy encoder 108 by the bit length conversion controller 1002 as the bit extension information 1003 (S33).
  • As the configuration of the pixel bit length expander 1001, the configuration of FIG. 2 may be used, and processing for controlling with a flag whether to extend the bit length of each pixel may be performed as in the first embodiment.
  • In the subtractor 101, a difference from the predicted image signal 102 created with (N+M)-bit accuracy is taken, and a prediction error signal 103 with (N+M)-bit accuracy is generated (S34).
  • The generated prediction error signal 103 is subjected to orthogonal transform (for example, discrete cosine transform (DCT)) by the orthogonal transformer 104 (S35), and orthogonal transform coefficient information 105 (for example, DCT coefficient information) is obtained.
  • the orthogonal transform coefficient information 105 is quantized by the quantizer 106, and the quantized orthogonal transform coefficient information 107 is input to the entropy code encoder 108 and the inverse quantizer 109.
  • The quantized orthogonal transform coefficient information 107 is subjected, by the inverse quantizer 109 and the inverse orthogonal transformer 110, to processing reverse to that of the quantizer 106 and the orthogonal transformer 104, that is, it is locally decoded into a signal similar to the prediction error signal, i.e. a local decoding error signal.
  • This local decoding error signal is added by the adder 111 to the predicted image signal 102 with (N+M)-bit accuracy.
  • As a result, a locally decoded image signal 112 with (N+M)-bit accuracy is generated. That is, the quantized orthogonal transform coefficient information 107 is locally decoded (S36).
  • the locally decoded image signal 112 is stored in the frame memory 114 after being filtered by the loop filter 113 as necessary.
  • The predicted image creator 115 generates a predicted image signal of (N+M)-bit accuracy based on certain prediction mode information from the input image signal 100 whose bit length has been extended to (N+M)-bit accuracy and the (N+M)-bit-accuracy locally decoded image signal 112 stored in the frame memory 114. At this time, the locally decoded image signal 112 with (N+M)-bit accuracy from the adder 111 is temporarily stored in the frame memory 114. A motion vector is detected by matching (for example, block matching) between the (N+M)-bit-precision input image signal 100 and the (N+M)-bit-precision locally decoded image signal 112 stored in the frame memory 114 for each block in the frame (S37).
  • a predicted image signal of (N + M) bit accuracy is created using the local image signal of (N + M) bit accuracy compensated by this motion vector (S38).
  • the predicted image signal 102 with (N + M) bit accuracy generated here is output from the predicted image generator 115 together with the motion vector information / prediction mode information 116 of the selected predicted image signal.
  • The quantized orthogonal transform coefficient information 107, the motion vector information / prediction mode information 116, and the bit extension information 1003 are entropy-encoded (S39), and the encoded data 117 generated thereby is sent to a transmission system or storage system (not shown).
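  • A structural sketch of this encoding loop (steps S31-S39) follows; every internal signal is kept at (N+M)-bit precision, and all callables are placeholders rather than the patent's actual interfaces:

    def encode_frame_extended(frame, m, predictor, transform_quantize,
                              inverse_transform, entropy_encode, frame_memory):
        """Encode one frame with all internal processing at (N+M)-bit precision."""
        extended = [[k << m for k in row] for row in frame]               # S32
        predicted, mv_mode = predictor(extended, frame_memory)            # S37-S38
        residual = [[x - p for x, p in zip(xr, pr)]                       # S34
                    for xr, pr in zip(extended, predicted)]
        coeffs = transform_quantize(residual)                             # S35
        local_error = inverse_transform(coeffs)                           # S36 (local decoding)
        local_decoded = [[p + e for p, e in zip(pr, er)]
                         for pr, er in zip(predicted, local_error)]
        frame_memory.append(local_decoded)                                # reference kept at (N+M) bits
        return entropy_encode(coeffs, mv_mode, {"ex_seq_shift_bits": m})  # S39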
  • This image decoding apparatus includes an entropy decoder 200, an inverse quantizer 109, an inverse orthogonal transformer 110, an adder 111, a loop filter 113, a frame memory 114, a predicted image creator 115, a pixel bit length reducer (pixel bit precision converter) 2001, and a bit length conversion controller (bit precision conversion controller) 2002.
  • The coefficient information output of the entropy decoder 200, which receives the encoded data, is connected to the pixel bit length reducer 2001 via the inverse quantizer 109, the inverse orthogonal transformer 110, the adder 111, and the loop filter 113.
  • the output of the loop filter 113 is connected to one input of the prediction image generator 115 via the frame memory 114.
  • The other input of the predicted image creator 115 receives the motion vector / prediction mode information from the entropy decoder 200.
  • the output of the predicted image creator 115 is connected to the adder 111.
  • The bit extension information output of the entropy decoder 200 is connected to the pixel bit length reducer 2001 via the bit length conversion controller 2002.
  • the operation of the image decoding apparatus will be described with reference to FIG. 10B.
  • The entropy decoder 200 decodes the encoded data 117 by a procedure reverse to the entropy encoding (S42), and quantized orthogonal transform coefficient information 107, motion vector / prediction mode information 116, and bit extension information 2004 are obtained.
  • The quantized orthogonal transform coefficient information 107 is sequentially processed by the inverse quantizer 109 and the inverse orthogonal transformer 110, in the reverse order of the processing of the quantizer 106 and the orthogonal transformer 104, and is converted into a residual signal 201 with (N+M)-bit accuracy (S43).
  • The motion vector / prediction mode information 116 is input to the predicted image creator 115, and the predicted image signal 102 with (N+M)-bit accuracy is generated from the (N+M)-bit-accurate bit extended decoded image signal 203 stored in the frame memory 114 based on the motion vector / prediction mode information 116 (S44).
  • the (N + M) bit precision residual signal 201 and the (N + M) bit precision prediction image signal 102 are added by an adder 111 to generate a decoded image signal (S45).
  • the decoded image signal is subjected to filter processing as necessary by the loop filter 113, output as a bit extended decoded image signal 203 with (N + M) bit precision, and stored in the frame memory 114.
  • When the bit extension information 2004 output from the entropy decoder 200 is input to the bit length conversion controller 2002, bit conversion information 2003 indicating how many bits the decoded image has been extended is output from the bit length conversion controller 2002.
  • For the bit extended decoded image signal 203 input to the pixel bit length reducer 2001, the same processing as in the first embodiment, for example according to equation (2), (2-1), (2-2), or (2-3), is performed based on the bit conversion information 2003, and the value of each pixel is reduced by M bits (S46).
  • The pixel bit length reducer 2001 may have the configuration shown in FIG. 8, and, as in the first embodiment, control may be performed to switch with a flag whether or not to reduce the bit length of each pixel.
  • a decoded image signal 202 having the same N-bit accuracy as the input image input to the image encoding device is output from the pixel bit length reducer 2001 (S47).
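  • A structural sketch of the corresponding decoding loop (steps S41-S47) follows; as before, the callables are placeholders and the dictionary key is illustrative:

    def decode_frame_extended(coded_data, entropy_decode, dequantize_itransform,
                              predictor, frame_memory):
        """Decode at (N+M)-bit precision and reduce to N bits only at the output."""
        coeffs, mv_mode, bit_info = entropy_decode(coded_data)       # S42
        residual = dequantize_itransform(coeffs)                     # S43
        predicted = predictor(mv_mode, frame_memory)                 # S44
        decoded = [[p + r for p, r in zip(pr, rr)]                   # S45
                   for pr, rr in zip(predicted, residual)]
        frame_memory.append(decoded)                                 # reference kept at (N+M) bits
        m = bit_info["ex_seq_shift_bits"]
        return [[k >> m for k in row] for row in decoded]            # S46-S47: back to N bits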
  • Since the predicted image signal is generated with an accuracy that is M bits larger than that of the input image signal, it is possible to improve the accuracy of motion compensation filters, loop filters, intra prediction, and the like. As a result, the prediction error signal can be reduced, so that the coding efficiency can be improved.
  • FIG. 11A shows the configuration of an image encoding device for video encoding according to the third embodiment of the present invention.
  • This image encoding device has substantially the same configuration as the configuration example shown in the second embodiment (FIG. 9A), but a pixel bit length converter 1005 is provided in the stage preceding the frame memory 114, and a pixel bit length converter (pixel bit precision converter) 1006 is provided in the stage following the frame memory 114.
  • a moving image signal is input to the image encoding apparatus in units of frames as the image signal 100 (S51).
  • The pixel bit length extender (pixel bit precision converter) 1001 performs the same processing as in the second embodiment, so that the value of each pixel of the input N-bit precision image signal 100 is expanded to (N+M)-bit precision, larger than N bits by M bits (S52).
  • the extended bit number M is guided to the entropy encoder 108 as bit extension information 1003 by the bit length conversion controller 1002.
  • the pixel bit length expander 1001 may be configured as shown in FIG. 2, and may perform a process of controlling the pixel bit length by switching with a flag whether to extend the bit length of each pixel, as in the first embodiment.
  • a difference from the predicted image signal 102 created with (N + M) bit accuracy is taken, and a prediction error signal 103 with (N + M) bit accuracy is generated (S53).
  • The generated prediction error signal 103 is subjected to orthogonal transform (for example, discrete cosine transform (DCT)) by the orthogonal transformer 104, and orthogonal transform coefficient information 105 (for example, DCT coefficient information) is obtained.
  • The orthogonal transform coefficient information 105 is quantized by the quantizer 106. That is, the prediction error signal 103 is orthogonally transformed and quantized (S54).
  • the quantized orthogonal transform coefficient information 107 is guided to an entropy encoder 108 and an inverse quantizer 109.
  • The quantized orthogonal transform coefficient information 107 is converted into a local decoding error signal by the inverse quantizer 109 and the inverse orthogonal transformer 110, which sequentially perform processing reverse to that of the quantizer 106 and the orthogonal transformer 104. The adder 111 then adds the predicted image signal 102 with (N+M)-bit accuracy, and a locally decoded image signal 112 with (N+M)-bit accuracy is generated. That is, the quantized orthogonal transform coefficient information 107 is locally decoded (S55).
  • The locally decoded image signal 112 with (N+M)-bit accuracy is filtered by the loop filter 113 as necessary and then input to the pixel bit length converter 1005, where the value of each pixel is reduced to a value smaller by L bits, or expanded to a value larger by L bits (S56). For example, when the value of a certain pixel in the locally decoded image signal 112 with (N+M)-bit accuracy is K, the pixel value K' after being reduced by L bits is calculated according to equation (3).
  • Here, the number of bits L to be reduced is an integer satisfying 0 ≤ L ≤ M.
  • the bit length of each pixel of the locally decoded image signal 112 is (N + M ⁇ L) bits.
  • The reduction conversion method here may be any conversion method that reduces the bit length, for example with offset set to an arbitrary value from 0 to (1 << L).
  • FIG. 11C shows a pixel value histogram (left side) and a histogram after bit precision conversion (right side) of the encoded unit image.
  • For example, the locally decoded image 112 extended to 12 bits is processed in arbitrary units such as macroblock units.
  • For each encoding unit, the dynamic range D (= mb_max − mb_min) is calculated from the maximum pixel value mb_max and the minimum pixel value mb_min.
  • If the dynamic range D can be expressed in 8 bits, the block is converted to 8 bits and output to the frame memory 114.
  • The representative value determined using the maximum value mb_max and/or the minimum value mb_min of the pixel values of this block is also output to the frame memory 114.
  • If the dynamic range D is a value that cannot be expressed in 8 bits, the block is converted to 8 bits by shifting it, for example, 2 bits to the right.
  • The pixel bit length converter 1005 has a configuration as shown in FIG. 11D, for example.
  • The shift amount / representative value calculation unit 10052 calculates the shift amount Q_bit according to the dynamic range D using, for example, equation (3-1a), and mb_min is set as the representative value.
  • In the pixel bit precision converter 10053, when the value of a certain pixel of the bit extended decoded image signal is K, the pixel value K' after being reduced by L bits (L > 0) is calculated, for example, as follows:
  • Offset = 1 << (Q_bit − 1)
  • K' = (K − mb_min + Offset) >> Q_bit    ... (3-1b)
  • The locally decoded image signal 112 converted in this way is output to the frame memory 114 together with the representative value mb_min and the shift amount Q_bit (representative value / shift amount 10054).
  • As shown in FIG. 11F, the frame memory 114 is composed of a memory 114a for storing the reference image and a sub memory 114b that holds, for each coding unit (macroblock unit) of each reference image, information indicating how the bit precision has been converted, such as the representative value mb_min and the shift amount Q_bit.
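  • For illustration, the per-macroblock reduction described above can be sketched as follows; the rule for deriving Q_bit from the dynamic range is an assumption consistent with the 12-bit/8-bit example, not a formula quoted from the text:

    def reduce_block_for_frame_memory(block, target_bits=8):
        """Reduce one coding unit before storage: derive a shift amount from the
        block's dynamic range, subtract the representative value mb_min, and
        return the shifted samples plus the (mb_min, Q_bit) side information."""
        mb_min = min(min(row) for row in block)
        mb_max = max(max(row) for row in block)
        q_bit = max(0, (mb_max - mb_min).bit_length() - target_bits)
        offset = (1 << (q_bit - 1)) if q_bit > 0 else 0   # rounding offset, as in (3-1b)
        reduced = [[(k - mb_min + offset) >> q_bit for k in row] for row in block]
        return reduced, mb_min, q_bit                      # side info kept in the sub memory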
  • Equation (2) is used on the decoding side when a decoded image signal is obtained by converting the bit extended decoded image signal of (N+M)-bit accuracy to N-bit accuracy.
  • When the (N+M−L)-bit-accurate locally decoded image signal converted using equation (3-1) is stored in the frame memory, and a locally decoded image signal expanded back to (N+M)-bit accuracy is read out from the frame memory by reversing equation (3-1), a value different from the one obtained by converting the locally decoded image signal before the conversion to (N+M−L) bits directly using equation (2) may be calculated.
  • Conversion formulas taking this into consideration are the following formulas (3-2) to (3-6).
  • In the dynamic range calculation unit 10051, for each arbitrary encoding unit such as a macroblock, the maximum pixel value mb_max and the minimum pixel value mb_min in the locally decoded image signal 112 are obtained, and the dynamic range D (= mb_max − mb_min) is obtained.
  • The number of bits Q_bit to be reduced in accordance with the dynamic range D and the offset value Offset used when performing the shift operation are calculated using, for example, equations (3-2) to (3-4), for example:
  • Offset = 1 << (Q_bit − 1)    ... (3-4)
  • When Q_bit is 0 or L, the Q_bit and Offset of equation (3-2) are used as they are.
  • In the pixel bit precision converter 10053, when the value of a certain pixel of the bit extended decoded image signal is K, the pixel value K' after being reduced by L bits (L > 0) is calculated, for example, as in equation (3-5) below.
  • The locally decoded image signal 112 calculated and converted in this way is output to the frame memory together with the representative value mb_min and the shift amount Q_bit (representative value / shift amount 10054).
  • When Q_bit is 0 or L:
  • K' = (K − (mb_min << L) + Offset) >> Q_bit    ... (3-5)
  • The frame memory 114 has a sub memory for storing, for each coding unit of each frame, information indicating how the bit precision has been converted, and holds, for example, the representative value mb_min and the shift amount Q_bit.
  • By reducing each pixel value by L bits using equations (3-1) and (3-2) to (3-6), in consideration of the dynamic range of the pixels of each encoding unit such as a macroblock, the rounding error caused by the shift operation at the time of reduction can be suppressed compared with the case where the value is simply reduced by L bits as in equation (3), and the signal can be stored in the frame memory with higher accuracy.
  • Although the minimum pixel value mb_min is used here as the representative value, the representative value may instead be a value determined from the maximum value mb_max and/or the minimum value mb_min of the pixel values.
  • The number of bits L in this case is an integer satisfying L ≤ 0, that is, each pixel value is expanded by (−L) bits.
  • the bit length of each pixel of the locally decoded image signal 112 is (N + M ⁇ L) bits.
  • Figure 12 shows a configuration example of the pixel bit length converter 1005.
  • In the pixel bit length converter 1005, whether or not to convert the bit length of each pixel is selected by the switch E3 based on the bit length extension information 1003.
  • The switch E3 is connected to the ON side when converting, and to the OFF side when not converting.
  • When the switch E3 is ON, the image signal input to the pixel bit length converter 1005 is input to the pixel bit length expansion/reduction converter E03 and undergoes the pixel bit length conversion.
  • When the switch E3 is OFF, the image signal is output as it is without pixel bit length conversion.
  • the locally decoded image signal 112 with the bit length reduced or expanded is stored in the frame memory 114 (S57).
  • The (N+M−L)-bit locally decoded image signal 112 read from the frame memory 114 is input to the pixel bit length converter 1006 before being input to the predicted image creator 115, and when L > 0, the bit length of each pixel is extended and converted to a bit precision larger by L bits (S58).
  • the pixel value K ′ after L-bit expansion is calculated as follows, for example.
  • The shift amount / representative value reading unit 10061 reads out the representative value mb_min and the shift amount Q_bit (representative value / shift amount 10054) of the coding unit from the sub memory 114b in the frame memory 114, and the pixel bit accuracy conversion unit 10062 calculates the pixel value K' after the bit accuracy is extended by L bits using, for example, the following formula:
  • K' = (K << Q_bit) + mb_min    ... (5-1)
  • Alternatively, the shift amount / representative value reading unit 10061 reads the mb_min and Q_bit of the coding unit from the sub memory in the frame memory 114, and the pixel bit accuracy conversion unit 10062 calculates the pixel value K' after extending the bit accuracy by L bits using, for example, one of the following formulas:
  • K' = (K << Q_bit) + (mb_min << L)
  • K' = (K << Q_bit) + (mb_min << L) − (1 << (Q_bit − 1))
  • The conversion method here may be any conversion method, for example with the offset set to an arbitrary value from 0 to (1 << L).
  • The pixel bit length converter 1006 may have the configuration shown in FIG. 12, similar to that of the pixel bit length converter 1005, and control may be performed to switch with a flag whether or not to convert the bit length of each pixel. By performing such conversion, the bit accuracy of the locally decoded image signal 112 output to the predicted image creator 115 becomes (N+M)-bit accuracy.
  • the bit length L reduced / expanded before and after the frame memory 114 is an integer satisfying L ⁇ M, and the number of bits reduced / expanded before and after the frame memory 114 only needs to be equal.
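  • For illustration, the read-side expansion is the inverse of the sketch given for the write side; a minimal version following equation (5-1):

    def expand_block_from_frame_memory(block, mb_min, q_bit):
        """Expand one stored coding unit back toward (N+M)-bit precision by
        shifting up by Q_bit and adding the representative value mb_min."""
        return [[(k << q_bit) + mb_min for k in row] for row in block]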
  • The predicted image creator 115 generates a predicted image signal of (N+M)-bit accuracy based on certain prediction mode information from the input image signal 100 whose bit length has been extended to (N+M)-bit precision and the locally decoded image signal 112 with (N+M)-bit accuracy.
  • a motion vector is detected (for example, block matching) (S59).
  • a predicted image signal with (N + M) bit accuracy is created (S60).
  • the predicted image signal 102 with (N + M) bit accuracy generated here is output from the predicted image generator 115 together with the motion vector information / prediction mode information 116 of the selected predicted image signal.
  • The quantized orthogonal transform coefficient information 107, the motion vector information / prediction mode information 116, and the bit extension information 1003 are entropy-encoded (S61), and the generated encoded data 117 is sent to a transmission system or storage system (not shown).
  • The number of bits to be expanded can be included in the sequence parameter set syntax (404), the picture parameter set syntax (405), the slice level syntax (402), and the macroblock level syntax (403) for transmission. Each syntax is described below.
  • ex_seq_bit_extention_flag shown in the sequence parameter set syntax of FIG. 13 is a flag indicating whether or not to perform bit extension. When the flag is TRUE, whether or not to perform bit extension can be switched in sequence units. On the other hand, if the flag is FALSE, no bit expansion is performed in the sequence.
  • When ex_seq_bit_extention_flag is TRUE, ex_seq_shift_bits indicating how many bits to extend may be transmitted.
  • In addition, ex_framemem_bitdepth indicating the bit precision used when storing in the frame memory 114 may be transmitted; in this case, the value N + M − L is stored in ex_framemem_bitdepth and transmitted.
  • the number of bits to be extended can be further included in the picture parameter set syntax (405), slice level syntax (402), and macroblock level syntax (403) for transmission.
  • ex_pic_bit_extention_flag shown in the picture parameter set syntax of FIG. 14 is a flag indicating whether or not to change the bit extension for each picture.
  • When this flag is TRUE, it is possible to switch whether to perform bit expansion on a picture-by-picture basis.
  • When the flag is FALSE, whether or not to perform bit expansion cannot be changed for each picture.
  • When ex_pic_bit_extention_flag is TRUE, ex_pic_shift_bits indicating how many bits to extend is transmitted.
  • As the number of bits to extend, ex_seq_shift_bits may be given priority, ex_pic_shift_bits may be given priority, or ex_seq_shift_bits + ex_pic_shift_bits may be used. It is preferable to decide in advance which is given priority.
  • In addition, ex_bit_extention_in_slice_flag and ex_bit_extention_in_mb_flag may be present. These flags indicate whether there is a flag for changing whether or not to perform bit extension at the slice level and the macroblock level, respectively.
  • When ex_bit_extention_in_slice_flag is TRUE, ex_slice_shift_bits is transmitted, and the number of bits to be expanded can be switched and transmitted in units of slices.
  • As the number of bits to extend, the sequence- or picture-level value may be given priority, ex_slice_shift_bits may be given priority, or their sum may be used. It is preferable to decide in advance which is given priority.
  • When ex_bit_extention_in_slice_flag is FALSE, ex_slice_shift_bits is not transmitted.
  • When ex_bit_extention_in_mb_flag is TRUE, ex_mb_shift_bits is transmitted, and the number of bits to be expanded can be switched and transmitted in macroblock units.
  • As the number of bits to extend, the higher-level value may be given priority, ex_mb_shift_bits may be given priority, or their sum may be used. It is preferable to decide in advance which is given priority.
  • When ex_bit_extention_in_mb_flag is FALSE, ex_mb_shift_bits is not transmitted.
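  • For illustration, resolving the effective extension amount for a macroblock from this hierarchy of syntax elements could be sketched as follows; whether lower levels override or add to higher levels is a design choice the text leaves open, and the 'combine' parameter models the additive variant:

    def effective_shift_bits(seq, pic=None, slice_=None, mb=None, combine=False):
        """Resolve the per-macroblock shift from ex_seq_/ex_pic_/ex_slice_/ex_mb_shift_bits."""
        shift = seq
        for level in (pic, slice_, mb):
            if level is None:
                continue
            shift = shift + level if combine else level
        return shift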
  • This image decoding apparatus has almost the same configuration as the configuration example shown in the second embodiment (FIG. 10A), but differs in that the pixel bit length reducer 2001 following the adder 111 (and the loop filter 113) is replaced by a pixel bit length converter 2007, and a pixel bit length converter 2008 is connected to the stage following the frame memory 114.
  • the operation of the image decoding apparatus will be described with reference to the flowchart in FIG. 17B.
  • The entropy decoder 200 decodes the encoded data 117 by a procedure reverse to the entropy encoding, and quantized orthogonal transform coefficient information 107, motion vector / prediction mode information 116, and bit extension information 2004 are obtained (S72).
  • The quantized orthogonal transform coefficient information 107 is subjected, by the inverse quantizer 109 and the inverse orthogonal transformer 110 in order, to processing reverse to that of the quantizer 106 and the orthogonal transformer 104, and is converted into a residual signal 201 with (N+M)-bit accuracy (S73, S74).
  • When the bit extension information 2004 output from the entropy decoder 200 is input to the bit length conversion controller 2002, bit conversion information 2003 indicating the number of bits M by which the decoded image has been extended and the number of bits L by which it is reduced when stored in the frame memory is output from the bit length conversion controller 2002.
  • Based on the bit conversion information 2003, the pixel bit length converter 2008 converts the decoded image signal with (N+M−L)-bit precision stored in the frame memory 114 by the procedure described later: if L > 0, the pixel bit length is extended by L bits by processing equivalent to equations (5), (5-1), (5-2), etc.; if L < 0, the pixel bit length is reduced by (−L) bits by processing equivalent to equation (6). The conversion processing is performed, and the bit extended reference image signal 204 with (N+M)-bit precision is output (S75). Note that when processing equivalent to (5-1) and (5-2) is performed, the configuration of the pixel bit length converter 2008 may be the same as that in FIG. 11D.
  • the frame memory 114 may have the same configuration as that shown in FIG. 11F, for example.
  • The motion vector / prediction mode information 116 is input to the predicted image creator 115, and the predicted image signal 102 with (N+M)-bit accuracy is generated from the bit extended reference image signal 204 with (N+M)-bit accuracy based on the motion vector / prediction mode information 116 (S77).
  • the adder 111 adds the (N + M) bit precision residual signal 201 and the (N + M) bit precision predicted image signal 102.
  • the added signal is subjected to filter processing as necessary by the loop filter 113, and a bit extended decoded image signal 203 with (N + M) bit precision is output (S78).
  • The bit extended decoded image signal 203 is input to the pixel bit length converter 2007, which, on the basis of the bit conversion information 2003, applies to the value of each pixel processing equivalent to that of the image decoding apparatus of the first embodiment, such as Equation (2).
  • The pixel bit length is thereby reduced and converted to a value smaller by M bits (S79), and a decoded image 202 with the same N-bit accuracy as the input image input to the image encoding apparatus is obtained (S80).
  • The pixel bit length converter 2007 may be configured as shown in FIG. 12, and may perform control in which whether or not to reduce the bit length of each pixel is switched with a flag.
  • the decoded image signal 202 with N-bit accuracy is stored in the frame memory 114 as it is.
  • If L > 0, Expressions (3), (3-1), (3-2) to (3-6) are applied to the bit extended decoded image signal 203 with (N + M)-bit precision, so that a decoded image with (N + M − L)-bit precision, whose bit length is reduced by L bits, is created and stored in the frame memory 114.
  • If L < 0, the bit length of each pixel is extended by (−L) bits by processing equivalent to Equation (4), and the resulting decoded image with (N + M − L)-bit precision is created and stored in the frame memory 114. Note that when processing equivalent to (3-1) and (3-2) to (3-6) is performed, the configuration of the pixel bit length converter 2007 may be the same as that in FIG. 11D.
  • the predicted image signal has an accuracy that is M bits larger than the input image signal. Therefore, it is possible to improve the accuracy of motion compensation filters, loop filters, intra prediction, etc., and as a result, the prediction error signal can be reduced, so that the coding efficiency can be improved.
  • In addition, the reference image signal stored in the frame memory can be stored with a bit length smaller than that of the bit-extended input image signal or predicted image signal, so that the amount of frame memory used can be reduced while maintaining the accuracy-improving effect on motion compensation filters, loop filters, intra prediction, and the like.
  • This image encoding apparatus differs in that it does not include a pixel bit length expander that expands the pixel bit length of the input image; instead, a pixel bit length extender (pixel bit precision extension converter) 1001 is provided before the frame memory 114 (and the loop filter 113), and a pixel bit length reducer (pixel bit precision reduction converter) 1004 is provided after the prediction image generator 115.
  • A moving image signal in which the accuracy of each pixel is, for example, N bits is input to the image encoding apparatus as the input image signal 100 (S81).
  • the subtracter 101 calculates the difference between the input image signal 100 and the predicted image signal 102 with N-bit accuracy, and generates a prediction error signal 103 with N-bit accuracy (S82).
  • a method of generating the predicted image signal 102 with N-bit accuracy will be described later.
  • The generated prediction error signal 103 is subjected to an orthogonal transform (for example, a discrete cosine transform (DCT)) by the orthogonal transformer 104, and orthogonal transform coefficient information 105 (for example, DCT coefficient information) is obtained (S83).
  • The orthogonal transform coefficient information 105 is quantized by the quantizer 106, and the quantized orthogonal transform coefficient information 107 is guided to the entropy encoder 108 and the inverse quantizer 109.
  • The quantized orthogonal transform coefficient information 107 is converted into a local decoding error signal by being subjected, in order, to processing that is the reverse of that of the quantizer 106 and the orthogonal transformer 104 by the inverse quantizer 109 and the inverse orthogonal transformer 110.
  • This local decoding error signal is added to the predicted image signal 102 with N-bit accuracy by the adder 111, thereby generating a local decoded image signal 112 with N-bit accuracy (S84).
  • The locally decoded image signal 112 is filtered by the loop filter 113 as necessary and then input to the pixel bit length expander 1001, where the value of each pixel is extended and converted to a value that is larger by M bits (S85). For example, if the value of a certain pixel is K, the pixel value K′ after being extended by M bits is calculated in the same manner as Equation (1) of the first embodiment.
  • the locally decoded image signal 112 whose bit precision is expanded to (N + M) bits is stored in the frame memory 114 (S86).
  • the locally decoded image signal 112 with (N + M) bit accuracy stored in the frame memory 114 is input to the predicted image generator 115.
  • the pixel bit length expander 1001 may be configured as shown in FIG. 2, and may perform a process of controlling the bit length by switching with a flag whether or not to extend the bit length of each pixel, as in the first embodiment.
  • the predicted image generator 115 generates a predicted image signal with (N + M) bit accuracy based on certain prediction mode information from the local decoded image signal 112 with (N + M) bit accuracy.
  • For each block in the frame, matching (for example, block matching) is performed between an image signal obtained by extending each pixel of the input image signal 100 to (N + M)-bit accuracy and the locally decoded image signal 112 expanded to (N + M)-bit accuracy, and a motion vector is detected (S87).
  • an extended prediction image signal 117 with (N + M) bit accuracy is created (S89).
  • the (N + M) bit accuracy extended prediction image signal 117 generated here is output from the prediction image generator 115 together with the motion vector information / prediction mode information 116 of the selected prediction image signal.
  • the extended predicted image signal 117 with (N + M) bit accuracy is input to the pixel bit length reducer 1004, and is converted into an image signal in which the bit length of each pixel is reduced by M bits (S90).
  • This conversion is performed, for example, by the same processing as the formula (2) shown in the first embodiment.
  • The number of bits M to be reduced here and the number of bits M by which the pixel bit length extender 1001 extends are the same value, and this value is guided to the entropy encoder 108 as bit extension information 1003 by the bit length conversion controller 1002.
  • The pixel bit length reducer 1004 may be configured as shown in FIG. 8 and, as in the first embodiment, may perform a process of controlling the bit length by switching with a flag whether or not to reduce the bit length of each pixel.
  • The quantized orthogonal transform coefficient information 107, the motion vector information / prediction mode information 116, and the bit extension information 1003 are entropy-encoded (S91), and the encoded data 117 generated thereby is sent to a transmission system or storage system (not shown).
  • the encoding method for the number of bits M to be extended is the same as in the first embodiment. Further, in the present embodiment, it is possible to use data multiplexed in the supplemental syntax as in the first embodiment.
  • This image decoding apparatus includes a pixel bit length expander 2005 before the frame memory 114 (and the loop filter 113), and a pixel bit length reducer (pixel bit precision converter) 2006 is provided after the predicted image generator 115.
  • The motion vector / prediction mode information 116 is input to the prediction image generator 115, and an extended prediction image signal 205 with (N + M)-bit accuracy is generated from the bit extended decoded image signal 203 stored in the frame memory 114, on the basis of the motion vector / prediction mode information 116, by the procedure described later (S105).
  • On the basis of the bit conversion information 2003, the pixel bit length reducer 2006 applies to the extended predicted image signal 205 with (N + M)-bit accuracy processing equivalent to that of the pixel bit reduction converter of the image encoding apparatus of the first embodiment, such as Expression (2), thereby reducing the pixel bit length by M bits (S106) and outputting the predicted image signal 102 with N-bit accuracy.
  • the pixel bit length reducer 2006 may be configured as shown in FIG. 8, and may perform processing for controlling the bit length by switching with a flag whether or not to extend the bit length of each pixel, as in the first embodiment.
  • the N-bit accuracy residual signal 201 and the N-bit accuracy prediction image signal 102 are added by the adder 111 to generate a reduced decoded image signal (S107).
  • On the basis of the bit conversion information 2003, the pixel bit length expander 2005 performs processing equivalent to that of the pixel bit length expander of the image encoding apparatus of the first embodiment, such as Equation (1), thereby extending the pixel bit length by M bits (S108).
  • The signal output from the pixel bit length expander 2005 is filtered by the loop filter 113 as necessary, and an extended decoded image signal 203 with (N + M)-bit accuracy is output (S109) and stored in the frame memory 114.
  • The pixel bit length expander 2005 may be configured as shown in FIG. 2, and may perform a process of controlling the bit length by switching with a flag whether or not to extend the bit length of each pixel, as in the first embodiment.
  • The bit extended decoded image signal 203 with (N + M)-bit accuracy is input to the pixel bit length reducer 2001, and, on the basis of the bit conversion information 2003, the value of each pixel is reduced in bit length by, for example, Equation (2) (S110), so that a decoded image 202 having the same N-bit accuracy as the input image input to the image encoding apparatus is obtained (S111).
  • In this configuration, the transform and quantization of the prediction error signal and the residual signal can be performed with an accuracy corresponding to the bit accuracy of the input image; while the accuracy-improving effect on motion compensation filters, loop filters, intra-frame prediction, and the like is maintained, the bit precision of the transform and quantization can be lowered and the operation scale can be reduced.
  • This image encoding apparatus does not include a pixel bit length expander that expands the pixel bit length of the input image; instead, a pixel bit length extender (pixel bit precision extension converter) 1001 is provided before the predicted image creator 115 and a pixel bit length reducer (pixel bit precision reduction converter) 1004 is provided after the predicted image creator 115.
  • a moving image signal in which the accuracy of each pixel is, for example, an N-bit accuracy is input as an input image signal 100 to the image encoding device (S121).
  • the subtracter 101 calculates the difference between the input image signal 100 and the predicted image signal 102 with N-bit accuracy, and generates a prediction error signal 103 with N-bit accuracy (S122).
  • a method for generating the predicted image signal 102 with N-bit accuracy will be described later.
  • the generated prediction error signal 103 is subjected to orthogonal transform, for example, discrete cosine transform (DCT) by an orthogonal transformer 104, and the orthogonal transformer 104 obtains orthogonal transform coefficient information 105, for example, DCT coefficient information.
  • orthogonal transform coefficient information 105 is quantized by the quantizer 106, and the quantized orthogonal transform coefficient information 107 is guided to the entropy encoder 108 and the inverse quantizer 109.
  • The quantized orthogonal transform coefficient information 107 is converted into a local decoding error signal by being subjected, in order, to processing that is the reverse of that of the quantizer 106 and the orthogonal transformer 104 by the inverse quantizer 109 and the inverse orthogonal transformer 110.
  • This local decoding error signal is added to the predicted image signal 102 with N-bit accuracy by the adder 111, thereby generating a local decoded image signal 112 with N-bit accuracy (S124).
  • the locally decoded image signal 112 with N-bit accuracy is filtered by the loop filter 113 as necessary, and then stored in the frame memory 114 (S125).
  • The locally decoded image signal 112 with N-bit accuracy stored in the frame memory 114 is input to the pixel bit length expander 1001, and the value of each pixel is extended and converted to a value that is larger by M bits (S126). For example, if the value of a certain pixel is K, the pixel value K′ after being extended by M bits is calculated in the same manner as Expression (1) of the first embodiment.
  • the locally decoded image signal 112 with the bit accuracy extended to (N + M) bits is input to the predicted image generator 115.
  • the pixel bit length expander 1001 may be configured as shown in FIG. 2, and may perform a process of controlling the bit length by switching with a flag whether to extend the bit length of each pixel, as in the first embodiment. .
  • the prediction image generator 115 generates a prediction image signal with (N + M) bit accuracy based on certain prediction mode information from the (N + M) bit accuracy local decoded image signal 112.
  • For each block in the frame, matching (for example, block matching) is performed between an image signal in which each pixel of the input image signal 100 is expanded to (N + M)-bit accuracy and the locally decoded image signal 112 expanded to (N + M)-bit accuracy, and a motion vector is detected (S127).
  • an extended prediction image signal 117 with (N + M) bit accuracy is created (S128).
  • the (N + M) bit-accuracy extended prediction image signal 117 generated here is output from the prediction image generator 115 together with the motion vector information / prediction mode information 116 of the selected prediction image signal.
  • The extended predicted image signal 117 is input to the pixel bit length reducer 1004 and converted so that the bit length of each pixel is reduced by M bits (S129). This conversion is performed, for example, by processing similar to Equation (2) shown for the image decoder of the first embodiment.
  • The number of bits M to be reduced and the number of bits M by which the pixel bit length extender 1001 extends are the same value, and this value is led to the entropy encoder 108 as bit extension information 1003 by the bit length conversion controller 1002.
  • the pixel bit length reducer 1004 may be configured as shown in FIG. 8, and may perform a process of controlling whether to extend the bit length of each pixel with a flag, as in the first embodiment.
  • The quantized orthogonal transform coefficient information 107, the motion vector information / prediction mode information 116, and the bit extension information 1003 are entropy-encoded (S130), and the encoded data 117 generated thereby is sent to a transmission system or storage system (not shown).
  • the encoding method for the number of bits M to be extended is the same as in the first embodiment. In the present embodiment, it is possible to use data multiplexed in the supplemental syntax as in the first embodiment.
  • This image decoding apparatus does not include a pixel bit length reducer after the adder 111 (and the loop filter 113); the decoded image signal 202 is output directly from the adder 111 (and the loop filter 113).
  • Instead, a pixel bit length extender (pixel bit precision extension converter) 2005 is provided before the predictive image generator 115, and a pixel bit length reducer (pixel bit precision reduction converter) 2001 is provided after the predictive image generator 115.
  • The quantized orthogonal transform coefficient information 107 is converted into an N-bit accuracy residual signal 201 by being subjected, in order, to processing that is the reverse of that of the quantizer 106 and the orthogonal transformer 104 by the inverse quantizer 109 and the inverse orthogonal transformer 110 (S143).
  • The bit extension information 2004 output from the entropy decoder 200 is input to the bit length conversion controller 2002, and bit conversion information 2003 indicating how many bits the decoded image has been extended is output from the bit length conversion controller 2002 (S144).
  • Using the procedure described later, and on the basis of the bit conversion information 2003, the pixel bit length expander 2005 applies to the decoded image signal 202 with N-bit accuracy stored in the frame memory 114 processing equivalent to that of the pixel bit length extender of the first embodiment, such as Equation (1), so that the pixel bit length is extended by M bits and the bit extended reference image signal 204 with (N + M)-bit accuracy is output.
  • the pixel bit length extender 2005 may be configured as shown in FIG. 2, and may perform a process of switching whether or not to extend the bit length of each pixel with a flag as in the first embodiment.
  • The motion vector / prediction mode information 116 is input to the prediction image generator 115, and an extended prediction image signal 205 with (N + M)-bit accuracy is generated from the bit extended reference image signal 204 with (N + M)-bit accuracy on the basis of the motion vector / prediction mode information 116 (S145).
  • On the basis of the bit conversion information 2003, the pixel bit length reducer 2001 applies to the extended prediction image signal 205 with (N + M)-bit accuracy processing equivalent to that of the pixel bit reduction converter of the image decoding apparatus of the first embodiment, such as Formula (2), so that the pixel bit length is reduced by M bits and the predicted image signal 102 with N-bit accuracy is output (S146).
  • the pixel bit length reducer 2001 may be configured as shown in FIG. 8, and may perform a process of switching whether or not to extend the bit length of each pixel with a flag as in the first embodiment.
  • the N-bit precision residual signal 201 and the N-bit precision prediction image signal 102 are added by an adder 111.
  • the added signal is subjected to filter processing as necessary by the loop filter 113, and an N-bit precision decoded image signal 202 is output (S147) and stored in the frame memory 114.
  • the decoded image signal 202 obtained here has the same N-bit accuracy as the input image input to the image encoding device.
  • The predicted image signal is generated with an accuracy that is M bits larger than that of the input image signal, so the accuracy of motion compensation filters, loop filters, and intra prediction can be improved. As a result, the prediction error signal can be reduced and the coding efficiency can be improved.
  • the predicted image creator 115 can be configured as shown in FIG.
  • The intra-frame predictor B101 creates a predicted image signal based on intra-frame prediction from the locally decoded image signal 112 of the already-encoded area in the frame stored in the frame memory 114.
  • the inter-frame predictor B103 performs motion compensation on the locally decoded image signal 112 in the frame memory 114 based on the motion vector detected by the motion vector detector B102.
  • the adaptive filter information generation unit B 104 generates a predicted image signal 102 based on inter-frame prediction using adaptive filter information generated from the input image signal 101, the locally decoded image signal 112, and the motion vector.
  • the intra-frame predictor B101 has M (M is plural) intra-frame prediction modes, and the inter-frame predictor B103 has N (N is plural) inter-frame prediction modes.
  • a mode determiner B105 is connected to the outputs of the inter-frame predictor B103 and the intra-frame predictor B101.
  • The mode determiner B105 outputs, as the predicted image signal 102, either a prediction image signal based on one prediction mode selected from the N inter-frame prediction modes or a prediction image signal based on one prediction mode selected from the M intra-frame prediction modes.
  • The motion vector / prediction mode information / adaptive filter information 116, that is, the motion vector output from the motion vector detector B102, the prediction mode information indicating the prediction mode selected by the mode determiner, and the adaptive filter information, is sent to the entropy encoder, included in the encoded data 117, and sent to the decoding side.
  • the motion vector and adaptive filter information are output from the motion vector detector B102 only when the inter-frame prediction mode is selected.
  • the switch B201 selects a predictor based on the prediction mode information 116.
  • the intra-frame predictor B202 creates a predicted image signal 102 based on intra-frame prediction from the decoded image signal 203 of the area already encoded in the frame in the frame memory 114.
  • the inter-frame predictor B203 performs motion compensation on the decoded image signal 203 in the frame memory 114 based on the motion vector, and performs prediction based on inter-frame prediction using the adaptive filter information 116. Create image signal 102.
  • Fig. 24 illustrates a configuration example of the inter-frame predictor B103.
  • the inter-frame predictor B103 generates an inter-frame predicted image signal 102 from the motion vector, the locally decoded image signal, and adaptive filter coefficient information.
  • The integer pixel weight / offset correction unit B301 performs processing for so-called weighted prediction of pixel values. As a specific example, a predicted image signal 102 of integer pixels is created according to the following Formula (7).
  • Here, W is a weight coefficient, L is a shift coefficient, and O is an offset coefficient; these are included in the adaptive filter coefficient information.
  • Formula (8) is an example of the weight Z offset processing when bi-directional prediction is performed.
  • In Formula (8), the weighting factor for the one pixel value X0 is W0 and its offset factor is O0, the weighting factor for the other pixel value X1 is W1 and its offset factor is O1, and L is a shift factor; these are included in the adaptive filter coefficient information.
  • By this weight / offset correction, luminance correction and color correction can be performed (a sketch of such weighted prediction is given below).
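  • As a minimal sketch, one plausible form of the weight / offset correction is shown below. Formulas (7) and (8) are not reproduced in this excerpt, so the exact rounding and shift placement here are assumptions; only the roles of W, L, O (and W0, O0, W1, O1 for bidirectional prediction) follow the text.
```python
def weighted_pred_uni(x, W, L, O):
    """Unidirectional weighted prediction (Formula (7)-like):
    scale the reference pixel x by W, shift down by L with rounding, add O."""
    return ((x * W + (1 << (L - 1))) >> L) + O

def weighted_pred_bi(x0, x1, W0, O0, W1, O1, L):
    """Bidirectional weighted prediction (Formula (8)-like): combine the two
    weighted references, shift down by L + 1, and add the averaged offsets."""
    return ((x0 * W0 + x1 * W1 + (1 << L)) >> (L + 1)) + ((O0 + O1 + 1) >> 1)
```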
  • When the motion vector indicates a fractional pixel position, the switch B301 is switched so that a decimal pixel adaptive interpolation image is created.
  • Because the bit length of the encoding/decoding process is set longer than the pixel bit length of the input image signal, the calculation accuracy of the weight / offset correction process can be ensured, and more accurate weight / offset correction can be performed.
  • Interpolated pixel values at the fractional pixel positions are generated from the integer pixel values using the interpolation filter coefficients in the adaptive filter information, and a predicted image signal 102 is generated.
  • Because the bit length of the encoding/decoding processing is set longer than the pixel bit length of the input image signal, the calculation accuracy of the interpolation filter processing can be ensured, and better fractional pixel values can be obtained (a sketch of such interpolation is given below).
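  • The following minimal sketch shows what such adaptive interpolation could look like for a single fractional-pel position. The 6-tap filter, the rounding, and the shift value are assumptions for illustration; the actual interpolation filter coefficients are those carried in the adaptive filter information.
```python
def interpolate_fractional(row, pos, taps, shift):
    """Interpolate one fractional-pel value from integer pixels around 'pos'
    using the 1-D filter 'taps'; accumulation is done at full precision and
    the result is shifted down with rounding."""
    half = len(taps) // 2
    acc = sum(taps[i] * row[pos - half + 1 + i] for i in range(len(taps)))
    return (acc + (1 << (shift - 1))) >> shift

# usage with a hypothetical 6-tap filter normalised by 2**5
row = [10, 12, 20, 40, 42, 41, 30, 22]
print(interpolate_fractional(row, 3, taps=[1, -5, 20, 20, -5, 1], shift=5))
```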
  • FIG. 25 shows another configuration example of the inter-frame predictor B103.
  • the inter-frame predictor generates an inter-frame prediction image signal 102 from the motion vector, the locally decoded image signal, and adaptive filter coefficient information.
  • When the motion vector indicates a fractional pixel position, the decimal pixel adaptive interpolation image creation unit B401 generates interpolated pixel values at the fractional pixel positions from the integer pixel values using the interpolation filter coefficients in the adaptive filter information, and a predicted image signal 102 is generated.
  • Because the bit length of the encoding/decoding process is set longer than the pixel bit length of the input image signal, the calculation accuracy of the interpolation filter process can be ensured, and better fractional pixel values can be obtained.
  • the weight / offset correction unit B403 performs processing for so-called weighted prediction of the pixel value of the predicted image signal.
  • the predicted image signal 102 is created according to the above formula (7) or formula (8).
  • Because the bit length of the encoding/decoding process is set longer than the pixel bit length of the input image signal, the calculation accuracy of the weight / offset correction process can be ensured, and more accurate weight / offset correction can be performed.
  • FIG. 26 shows another configuration example of the inter-frame predictor.
  • This inter-frame predictor generates an inter-frame prediction image signal from the motion vector, the reconstructed image signal, and adaptive filter coefficient information.
  • When the motion vector indicates a fractional pixel position, the switch B501 is connected to the fractional pixel adaptive interpolation / offset correction image creation unit B502, and the reproduced image signal is processed by the fractional pixel adaptive interpolation / offset correction image creation unit B502.
  • Otherwise, the switch B501 is connected to the integer pixel weight / offset corrected image creation unit B503, and the reproduced image signal is processed by the integer pixel weight / offset corrected image creation unit B503.
  • When the motion vector indicates a fractional pixel position, the fractional pixel adaptive interpolation / offset correction image creation unit B502 generates interpolated pixel values at the fractional pixel positions from the integer pixel values using the interpolation filter coefficients and the offset correction coefficient in the adaptive filter information, and generates a predicted image signal.
  • Equation (9) is used when performing unidirectional prediction, and Equation (10) is used when performing bidirectional prediction.
  • In Equation (9), Wi,j is the interpolation filter coefficient for the reproduced image signal Xi,j, O is the offset coefficient, and Y is the predicted image signal.
  • In Equation (10), W(0)i,j and W(1)i,j are the interpolation filter coefficients for the reproduced image signals X(0)i,j and X(1)i,j, O(0) and O(1) are the respective offset coefficients, and Y is the predicted image signal.
  • the integer pixel weight / offset correction unit B503 performs processing for performing so-called weighted prediction of the pixel value of the predicted image signal.
  • a predicted image signal is created according to the above formula (7) or formula (8).
  • As the offset correction coefficient at this time, a coefficient with an accuracy equal to or greater than the input pixel bit length is used (a sketch of such adaptive interpolation with an offset is given below).
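  • A minimal sketch of an Equation (9)-style unidirectional prediction is given below; the rounding and shift are assumptions, and the coefficient array W and offset O stand for the interpolation filter coefficients and offset correction coefficient carried in the adaptive filter information.
```python
def adaptive_interp_with_offset(X, W, O, shift):
    """Y = ((sum_{i,j} W[i][j] * X[i][j] + rounding) >> shift) + O, where X is
    the window of reproduced integer pixels and W the adaptive 2-D filter.
    The offset O may carry more precision than the input pixel bit length."""
    acc = sum(W[i][j] * X[i][j]
              for i in range(len(W)) for j in range(len(W[0])))
    return ((acc + (1 << (shift - 1))) >> shift) + O
```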
  • the loop filter 113 can be configured as shown in FIG.
  • FIG. 27 shows the configuration of the loop filter 113 on the encoder side.
  • The filter coefficient determination unit C101 receives the input image signal 101 and the locally decoded image signal 112 as input signals, and calculates filter coefficients such that, when the locally decoded image signal 112 is filtered, for example, the least square error with respect to the input image signal 101 becomes smallest. The calculated filter coefficients are output to the filter processing unit C102, and are also output to the entropy encoding unit 108 as filter coefficient information C103.
  • The method for determining the filter coefficients is not limited to this example. For example, a plurality of filters such as a deblocking filter and a deringing filter may be prepared in advance and switched adaptively, and information on the selected filter may be output as the filter coefficient information C103; various modes are possible, such as deciding adaptively whether to apply each filter.
  • As the input image signal 101, a signal having the same pixel bit length as that of the locally decoded image signal 112 input to the loop filter is used, in accordance with each embodiment.
  • In the filter processing unit C102, the filter coefficients input from the filter coefficient determination unit C101 are used to filter each pixel of the locally decoded image signal, which is then output to the frame memory 114 and the like.
  • The filter coefficient information C103 output to the entropy encoding unit 108 is entropy encoded by the entropy encoding unit 108, included in the encoded data 117, and transmitted (a sketch of least-squares coefficient determination is given below).
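  • The following minimal sketch shows one way the least-square-error filter coefficients could be determined. The 1-D horizontal filter, the use of plain least squares over all pixels, and the NumPy solver are simplifying assumptions; the text only requires that the coefficients minimise, for example, the least square error against the input image.
```python
import numpy as np

def least_squares_filter_coeffs(decoded, original, taps=5):
    """Fit 'taps' filter coefficients so that filtering each row of the
    locally decoded image best matches the original image in the
    least-squares sense; the result corresponds to filter coefficient
    information C103."""
    h, w = decoded.shape
    half = taps // 2
    rows, targets = [], []
    for y in range(h):
        for x in range(half, w - half):
            rows.append(decoded[y, x - half:x + half + 1])
            targets.append(original[y, x])
    A = np.asarray(rows, dtype=np.float64)
    b = np.asarray(targets, dtype=np.float64)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```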
  • FIG. 28 shows a configuration example of the loop filter 113 on the decoding side.
  • On the basis of the filter coefficient information C103 output from the entropy decoding unit 200, the filter processing unit C102 performs filter processing on each pixel of the decoded image signal.
  • the decoded image signal subjected to the filtering process is output to the frame memory 114 or the pixel bit length reducer as it is or as a decoded image signal according to each embodiment.
  • the pixel bit length expander 1001 for converting the pixel bit length of the input image signal 101 may be configured as shown in FIG.
  • the input image signal input to the pixel bit length expander 1001 is expanded to a bit accuracy (N + M) that is M bits larger than N bits by the same procedure as in the first embodiment.
  • the signal is guided to the filter processing unit 7000, and the filter processing is performed on each pixel.
  • The filtering performed here uses, for example, a spatial filter in which adjacent pixels in the frame are multiplied by filter coefficients and then added up.
  • FIG. 30 shows a filter processing unit 7000 using, for example, an epsilon filter as a spatial filter.
  • In this filter processing unit 7000, the input pixel value is P[t] and the output pixel value is Q[t], where t indicates the pixel position.
  • The input pixel value P[t] and the previous output pixel value Q[t−1] stored in the temporary memory A04 are input to the adder A01 to obtain the difference d.
  • The lookup table A02 is consulted with this difference d to determine F(d).
  • This value F(d) is input to the adder A03 and subtracted from the input pixel value P[t]; that is, the operation is Q[t] = P[t] − F(P[t] − Q[t−1]).
  • The contents of the lookup table A02 are stored in advance as a table of values calculated by the function F. For example, if the pixel value has n bits, the value of d ranges from −2^n + 1 to +2^n − 1, and the lookup table needs 2^(n+1) − 1 entries.
  • Here u is a parameter of the filter: the larger u is, the more strongly the filter is applied. The computational cost of this filter is only two additions per pixel (a sketch of this filter is given below).
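  • A minimal sketch of one scan direction of this filter is shown below. The clipping form of F(d) (limiting the difference to ±u) and the initialisation of the temporary memory with the first pixel are assumptions; in the description F is realised by the lookup table A02, whose exact contents are not reproduced here.
```python
def epsilon_prefilter_1d(pixels, u):
    """Recursive epsilon-type filter over one line: Q[t] = P[t] - F(P[t] - Q[t-1]),
    with F(d) assumed here to clip d to the range [-u, u]."""
    out = []
    prev = pixels[0]                 # temporary memory A04 (assumed start value)
    for p in pixels:
        d = p - prev                 # adder A01
        f = max(-u, min(u, d))       # F(d) from lookup table A02 (clipping assumed)
        q = p - f                    # adder A03
        out.append(q)
        prev = q                     # store Q[t] for the next pixel
    return out
```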
  • FIG. 31 is a specific flowchart of the filter processing unit 7000 using the filter of FIG.
  • This filter processing unit 7000 performs filtering from the four directions of the frame. First, filter processing from the right of the frame toward the left is performed (S101). Next, filter processing from left to right, in the opposite direction, is performed (S102). Similarly, filter processing from top to bottom is performed (S103), and then filter processing from bottom to top, in the opposite direction, is performed. This process is performed for each of the Y, Cb, and Cr signals. By performing the filter processing from opposite directions in this way, the phase shift that is a drawback of a recursive (cyclic) filter can be cancelled.
  • a time filter may be used in which the corresponding pixels of a plurality of frames are multiplied by a filter coefficient and then added.
  • a time filter with motion compensation may be applied.
  • the time filter with motion compensation for example, a method described in Japanese Patent Application No. 2006-3620 6 is used.
  • the filtering process in this configuration is not limited to the filtering process described here, and any filtering process may be performed.
  • Even when the input image signal 101 is encoded without converting the pixel bit length, the input image signal 101 may be encoded after being directly subjected to the pre-filtering described here.
  • the pixel bit length reducers 2001 and 2007 can be configured as shown in FIG.
  • The bit extended decoded image signal 203 input to the pixel bit length reducers 2001 and 2007 is input to the filter processing unit 7001, and after the filter processing is applied to each pixel, the bit length of each pixel is reduced to an N-bit precision smaller than (N + M) bits by the same procedure as in the first embodiment and output as an N-bit decoded image signal.
  • As the filter, a spatial filter in which adjacent pixels in the frame are multiplied by filter coefficients and then added may be used, or a temporal filter in which the corresponding pixels of a plurality of frames are multiplied by filter coefficients and then added may be used.
  • It is also possible to combine the pixel bit length expander 1001 with a filter processing configuration, and to have the filter processing unit in the pixel bit length reducer 2001 perform the inverse of the filter processing performed by the filter processing unit 7000 in the pixel bit length expander 1001. For example, in the filter processing unit 7000, when the two pixel values corresponding to two frames that serve as input pixel values are x and y, and the corresponding output pixel values are a and b, the processing shown in Expression (14) is applied.
  • The processing of Formula (14) includes division by a 2-bit right shift; for example, when pixel bit extension by a 2-bit left shift is performed in the pixel bit length expander 1001, the filter processing expressed by Formula (14) does not cause rounding errors and does not lose the input image information.
  • In the filter processing unit 7001 on the decoding side, when the pixel values of the two points corresponding to two frames of the decoded image signal that serve as input pixels are a′ and b′, and the corresponding output pixel values are x′ and y′, the processing shown in Equation (15) is performed.
  • Equation (15) includes division by a 3-bit right shift; for example, when pixel bit extension by a 2-bit left shift is performed in the pixel bit length extender 1001, rounding errors do not occur, as with the filter processing expressed by Equation (14). That is, assuming that the quantization error caused by the encoding and decoding processes is zero, the filter and the inverse filter according to Equation (14) and Equation (15) form a reversible (lossless) filter pair. Equation (14) brings the two corresponding pixel values between two frames closer to each other, so motion-compensated prediction becomes more accurate and the prediction residual becomes smaller; therefore, the code amount can be reduced. Furthermore, by performing pixel bit expansion, a filtering process that returns to the original input image, apart from quantization errors, can be performed on the decoding side, so encoding efficiency can be improved.
  • The filter used in the filter processing units 7000 and 7001 is not limited to the example described here, and any filter may be used. Needless to say, the accuracy of prediction and the like can be improved further by using a reversible filter that corresponds to the number M of bits expanded in the pixel bit expansion unit 1001. Also, although the example described here filters two mutually corresponding points between two frames, the invention is not limited to this example: pixels of two or more frames may be used, two or more adjacent pixels within one frame may be filtered, and various other forms of filtering are possible, such as filtering the two pixel values corresponding to the top field and the bottom field of an interlaced image.
  • The coefficients of the lossless filter used in the filter processing units 7000 and 7001 can be output to the entropy encoding unit 108 and included in the encoded data 117. In this way, encoding can be performed by selecting, for each frame, a filter coefficient that is highly effective in improving the accuracy of prediction and the like (an illustrative sketch of a reversible filter pair is given below).
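  • To make the idea of a reversible filter pair concrete, the following sketch shows one lifting-style filter that brings the two corresponding pixel values closer together and that becomes exactly invertible once the pixels have been extended by M = 2 bits. It is only an illustration of the principle described above, not the patent's Expressions (14) and (15).
```python
M = 2  # assumed pixel bit extension (2-bit left shift), as in the example in the text

def forward_filter(x, y):
    """Bring two corresponding N-bit pixels of two frames closer together.
    After the M-bit extension, (Y - X) is a multiple of 2**M, so the 2-bit
    right shift loses no information."""
    X, Y = x << M, y << M
    d = (Y - X) >> M            # exact quarter of the (extended) difference
    return X + d, Y - d         # each value moves toward the other

def inverse_filter(a, b):
    """Exact inverse of forward_filter: recovers the original pixels when
    quantization error is ignored."""
    d = (b - a) >> 1            # equals the same d, still exact
    return (a - d) >> M, (b + d) >> M

# round trip: inverse_filter(*forward_filter(10, 3)) == (10, 3)
```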
  • This image encoding apparatus has substantially the same configuration as the configuration example shown in the second embodiment (FIG. 9), but differs in that a watermark data embedding unit 3000 is provided after the pixel bit length extender 1001.
  • a moving image signal is input to the image encoding device in units of frames as the image signal 100 (S151).
  • The pixel bit length extender (pixel bit precision converter) 1001 performs the same processing as in the second embodiment, expanding the value of each pixel of the input N-bit precision image signal 100 to an (N + M)-bit precision greater than N bits (S152).
  • the expanded bit number M is guided to the entropy encoder 108 as bit extension information 1003 by a bit length conversion controller (bit precision conversion controller) 1002.
  • The input image signal 100 with the extended bit length is input to the watermark data embedding unit 3000, where a data sequence generated on the basis of parameters set in advance is embedded, for example, in the lower bits of each pixel of the input image signal 100 (S153).
  • The generated data string has a word length of M bits or less, M being the extended bit length (a sketch of such embedding is given below).
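  • A minimal sketch of carrying a watermark data word in the extended lower bits of one pixel is shown below. How the data sequence is generated from the preset parameters is not reproduced here; the bit-masking form is an assumption.
```python
def embed_watermark_bits(pixel_ext, data_word, M):
    """Carry a data word of at most M bits in the lower M bits of a pixel
    that has already been extended by M bits."""
    assert 0 <= data_word < (1 << M)
    return (pixel_ext & ~((1 << M) - 1)) | data_word
```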
  • The subtracter 101 obtains the difference between the input image signal 100 whose bit length has been extended to (N + M)-bit precision and the predicted image signal 102 generated with (N + M)-bit accuracy, and a prediction error signal 103 with (N + M)-bit accuracy is generated (S154).
  • The generated prediction error signal 103 is subjected to an orthogonal transform (for example, a discrete cosine transform (DCT)) by the orthogonal transformer 104, and orthogonal transform coefficient information 105 (for example, DCT coefficient information) is obtained (S155).
  • The orthogonal transform coefficient information 105 is quantized by the quantizer 106, and the quantized orthogonal transform coefficient information 107 is guided to the entropy encoder 108 and the inverse quantizer 109.
  • The quantized orthogonal transform coefficient information 107 is subjected to processing that is the reverse of that of the quantizer 106 and the orthogonal transformer 104 by the inverse quantizer 109 and the inverse orthogonal transformer 110, and a signal similar to the prediction error signal, that is, a local decoding error signal, is generated.
  • This local decoding error signal is added to the predicted image signal 102 with (N + M)-bit accuracy by the adder 111, thereby generating a locally decoded image signal 112 with (N + M)-bit accuracy (S156). The (N + M)-bit precision locally decoded image signal 112 is filtered by the loop filter 113 as necessary and then stored in the frame memory 114.
  • the local decoded image signal 112 stored in the frame memory 114 is input to the predicted image generator 115.
  • The prediction image generator 115 generates a prediction image signal with (N + M)-bit accuracy, based on prediction mode information, from the input image signal 100 whose bit length has been extended to (N + M)-bit accuracy and the locally decoded image signal 112 with (N + M)-bit accuracy.
  • Matching (for example, block matching) is performed and a motion vector is detected (S157), and a predicted image signal with (N + M)-bit accuracy is created using the (N + M)-bit accuracy local image signal compensated by this motion vector (S157).
  • The (N + M)-bit accuracy predicted image signal 102 generated here is output from the predicted image creator 115 together with the motion vector information / prediction mode information 116 of the selected predicted image signal.
  • The quantized orthogonal transform coefficient information 107, the motion vector information / prediction mode information 116, and the bit extension information 1003 are entropy-encoded (S159), and the encoded data 117 generated thereby is sent to a transmission system or storage system (not shown).
  • This image decoding apparatus has substantially the same configuration as the configuration example shown in the second embodiment (FIG. 10), but differs in that a watermark detection unit 3002 is connected between the loop filter 113 and the pixel bit length reducer 2001 (pixel bit precision converter).
  • the operation of the image decoding apparatus will be described with reference to the flowchart in Fig. 34B.
  • The entropy decoder 200 decodes the encoded data 117 by a procedure that is the reverse of the entropy encoding, and quantized orthogonal transform coefficient information 107, motion vector / prediction mode information 116, and bit extension information 2004 are obtained (S162).
  • The quantized orthogonal transform coefficient information 107 is subjected, in order, to processing that is the reverse of that of the quantizer 106 and the orthogonal transformer 104 by the inverse quantizer 109 and the inverse orthogonal transformer 110, and is converted into a residual signal 201 with (N + M)-bit accuracy.
  • The bit extension information 2004 output from the entropy decoder 200 is input to the bit length conversion controller 2002, which outputs bit conversion information 2003 indicating the number of bits M by which the decoded image has been extended.
  • The motion vector / prediction mode information 116 is input to the prediction image generator 115, and a predicted image signal 102 with (N + M)-bit accuracy is generated from the bit extended reference image signal 204 with (N + M)-bit accuracy on the basis of the motion vector / prediction mode information 116 (S164).
  • The adder 111 adds the (N + M)-bit accuracy residual signal 201 and the (N + M)-bit accuracy predicted image signal 102.
  • The added signal is filtered as necessary by the loop filter 113, and a bit extended decoded image signal 203 with (N + M)-bit precision is output (S165).
  • Bit extended decoded image signal 203 is stored in frame memory 114 and input to watermark detection unit 3002.
  • The watermark detection unit 3002 determines whether, for example, the lower-order bits of each pixel of the bit extended decoded image 203 contain embedded watermark data, and outputs whether or not the data includes watermark data (S166). Whether the data is watermark data is determined, for example, by calculating the correlation coefficient between a known watermark pattern prepared in advance and, for example, the lower M bits of the bit extended decoded image 203, and judging that watermark data is present when the value is above a certain threshold (a sketch of such detection is given below).
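  • The following minimal sketch shows a correlation-based check of the kind described above. The normalised correlation coefficient, the flattening of the image, and the threshold value are assumptions; the known watermark pattern itself is whatever was agreed with the encoder.
```python
import numpy as np

def detect_watermark(decoded_ext, known_pattern, M, threshold=0.5):
    """Correlate the lower M bits of the bit extended decoded image with a
    known watermark pattern and report detection when the correlation
    coefficient exceeds the threshold."""
    low_bits = (decoded_ext & ((1 << M) - 1)).astype(np.float64).ravel()
    pattern = np.asarray(known_pattern, dtype=np.float64).ravel()
    if low_bits.std() == 0.0 or pattern.std() == 0.0:
        return False, 0.0
    corr = float(np.corrcoef(low_bits, pattern)[0, 1])
    return corr > threshold, corr
```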
  • The bit extended decoded image signal 203 is input to the pixel bit length converter 2007, which, on the basis of the bit conversion information 2003, applies to the value of each pixel processing equivalent to that of the image decoding apparatus of the first embodiment, such as Equation (2), thereby reducing the pixel bit length by M bits (S167). A decoded image 202 with the same N-bit precision as the input image input to the image encoding apparatus is thus obtained (S168).
  • Since the predicted image signal is generated with an accuracy that is M bits larger than the input image signal, the accuracy of motion compensation filters, loop filters, intra prediction, and the like can be improved; as a result, the prediction error signal can be reduced and the coding efficiency can be improved. Also, by embedding information such as a watermark in the extended lower bits, information can be embedded without changing the input image signal.
  • the number of bits M to be expanded or reduced is entropy-coded as bit expansion information and included in the encoded data.
  • However, when the bit length is expanded or reduced by a predetermined number in the image encoding apparatus and the image decoding apparatus, it is not always necessary to include the number of bits M in the encoded data.
  • When the bit length is expanded or reduced by a predetermined number, it is possible to include in the encoded data only a flag indicating whether or not to perform the extension or reduction.
  • When the flag is TRUE, the switch is connected to the ON side in the pixel bit length expander 1001, the pixel bit length reducer 2001, or the pixel bit length converter 2007 of the second to sixth embodiments; when the flag is FALSE, the switch is connected to the OFF side.
  • Extending the pixel bit length of the input image, the locally decoded image signal, and the decoded image is equivalent to improving the arithmetic accuracy of the integer operation in the encoding / decoding process. It is also possible to vary the calculation accuracy at any point in the processing process and include a flag or information indicating the calculation accuracy in the encoded data.
  • For example, the second to fifth embodiments may be combined so that a flag indicating whether or not to extend the pixel bit accuracy stored in the frame memory, and a flag indicating whether or not to extend the pixel bit length of the prediction residual input to the orthogonal transform / quantization, that is, whether or not to extend the operation accuracy of the orthogonal transform / quantization, are included in the encoded data.
  • According to the ON/OFF state of these flags, the encoder and decoder may switch whether to extend the pixel bit accuracy stored in the frame memory and whether to extend the pixel bit length of the prediction residual input to the orthogonal transform / quantization. Furthermore, when these flags are ON, data indicating by how many bits the signals are expanded or reduced may be included in the encoded data.
  • The flag and the data indicating the number of bits to be expanded are not limited to the frame memory and the orthogonal transform / quantization shown here; for example, a flag for the loop filter and data indicating its number of expansion bits may also be added. For any process that is part of the encoding/decoding process, whether taken individually or with multiple processes combined into one, a flag indicating whether to extend the operation accuracy and data indicating the number of extension bits may be set and included in the encoded data.
  • The configuration of the image encoding apparatus according to the seventh embodiment will be described with reference to FIG. 35A.
  • This embodiment basically has the same configuration as the embodiment of FIG. 11, except that a pixel bit length reducer (pixel bit precision reduction converter) 1004 is provided between the predicted image creation unit 115 and the subtracter 101, and a pixel bit length extender (pixel bit precision extension converter) 1007 is provided between the adder 111 and the loop filter 113.
  • a moving image signal is input to the image encoding device in units of frames as the image signal 100 (S171).
  • The pixel bit length extender 1001 is configured as shown in FIG. 2, and whether or not to extend the value of each pixel of the input N-bit precision image signal 100 to an (N + M)-bit accuracy larger than N bits is determined based on a control flag F1001 derived from syntax information, described later, included in the bit conversion information 1010 (S172).
  • When the control flag F1001 is ON, the switch E0 in the pixel bit length extender 1001 is set to ON, and the bit length of each pixel is extended by the same processing as that performed by the pixel bit length converter 1001 in the first embodiment (S173). For example, if the value of a pixel in the input image signal is K, the pixel value K′ after M-bit expansion is calculated by processing such as Equation (1), (1-1), (1-2), or (1-3).
  • The subtracter 101 calculates the difference between the input image signal 100 and the predicted image signal 102, and a prediction error signal 103 is generated (S174).
  • The generated prediction error signal 103 is subjected to an orthogonal transform (for example, a discrete cosine transform (DCT)) by the orthogonal transformer 104, and orthogonal transform coefficient information 105 (for example, DCT coefficient information) is obtained (S175).
  • The orthogonal transform coefficient information 105 is quantized by the quantizer 106, and the quantized orthogonal transform coefficient information 107 is guided to the entropy encoder 108 and the inverse quantizer 109.
  • The quantized orthogonal transform coefficient information 107 is subjected, in order, to processing that is the reverse of that of the quantizer 106 and the orthogonal transformer 104 by the inverse quantizer 109 and the inverse orthogonal transformer 110, producing a signal similar to the prediction error signal, that is, a local decoding error signal.
  • This local decoding error signal is added to the predicted image signal 102 by the adder 111, thereby generating a local decoded image signal 112 (S176).
  • The locally decoded image signal 112 is input to the pixel bit length extender 1007, and whether or not to extend the bit length of each pixel of the locally decoded image signal 112 is determined based on a control flag F1007 derived from syntax information, described later, included in the bit conversion information 1010.
  • the local decoded image signal 112 output from the pixel bit length expander 1007 is subjected to filter processing by the loop filter 113 as necessary, and then input to the pixel bit length converter 1005.
  • the pixel bit length converter 1005 determines whether or not to convert the bit length of each pixel of the locally decoded image signal 112 based on a control flag F1005 based on syntax information described later included in the bit conversion information 1010 ( S 179).
  • When the control flag F1005 is ON, the switch E3 in the pixel bit length converter 1005 is set to ON, and the bit length of each pixel of the locally decoded image signal 112 is converted by the same processing as that performed by the pixel bit length converter 1005 in the third embodiment (S180). When the control flag F1005 is OFF, the process of converting the bit length of the pixel is not performed.
  • The locally decoded image signal 112 output from the pixel bit length converter 1005 is stored in the frame memory 114 (S181).
  • The local image signal stored in the frame memory 114 is further input to the pixel bit length converter 1006.
  • the pixel bit length converter 1006 determines whether or not to convert the bit length of each pixel of the locally decoded image signal 112 based on a control flag F1006 that is included in the bit conversion information 1010 and is based on syntax information described later. Determine (S182).
  • When the control flag F1006 is ON, the switch E3 in the pixel bit length converter 1006 is set to ON, and the bit length of each pixel of the locally decoded image signal 112 is converted by the same processing as that performed by the pixel bit length converter 1006 in the third embodiment (S183).
  • When the control flag F1006 is OFF, the process of converting the bit length of the pixel is not performed.
  • the local decoded image signal 112 output from the pixel bit length converter 1006 is input to the predicted image generator 115.
  • The predicted image generator 115 generates a predicted image signal, based on certain prediction mode information, from the input image signal 100 and the locally decoded image signal 112 stored in the frame memory 114 (S184). At this time, the locally decoded image signal 112 from the adder 111 is temporarily stored in the frame memory 114, matching (for example, block matching) between the input image signal 100 and the locally decoded image signal 112 stored in the frame memory 114 is performed for each block in the frame, a motion vector is detected, and a predicted image signal is created using the local image signal compensated by the motion vector (a sketch of block matching is given below). The predicted image signal 102 generated here is output from the predicted image generator 115 together with the motion vector information / prediction mode information 116 of the selected predicted image signal.
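  • The block matching step can be pictured with the following minimal sketch. Full search over a ±search window with a sum-of-absolute-differences cost is an assumption for illustration; the description does not prescribe a particular search strategy or cost.
```python
import numpy as np

def block_matching(cur, ref, bx, by, bsize=16, search=8):
    """Find the motion vector (dx, dy) minimising the SAD between a block of
    the input image 'cur' at (bx, by) and the locally decoded reference 'ref'."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int64)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```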
  • the predicted image signal 102 output from the predicted image signal generator 115 is input to the pixel bit length reducer 1004.
  • The pixel bit length reducer 1004 determines whether or not to convert the bit length of each pixel of the predicted image signal 102 based on a control flag F1004 derived from syntax information, described later, included in the bit conversion information 1010 (S185).
  • When the control flag F1004 is ON, the switch E2 in the pixel bit length reducer 1004 is set to ON, and the bit length of each pixel of the predicted image signal 102 is reduced by the same process as in the fourth embodiment (S186). When the control flag F1004 is OFF, the process of converting the bit length of the pixel is not performed.
  • The entropy encoder 108 entropy-encodes the quantized orthogonal transform coefficient information 107, the motion vector information / prediction mode information 116, and the bit extension information 1003 (S187), and the encoded data 117 thus generated is sent to a transmission system or storage system (not shown).
  • ex_seq_all_bit_extention_flag shown in the sequence parameter set syntax of Fig. 36 is a flag indicating whether or not to perform bit extension.
  • When this flag is TRUE, it indicates that encoding and decoding are performed using data in which the bit length of each pixel is extended in all of the internal data paths of the encoder and decoder.
  • When the flag is FALSE, it indicates that encoding/decoding is performed using an image signal in which the bit length of each pixel is extended not in all of the internal data paths of the encoder and decoder, but only in some of the data paths.
  • When ex_seq_bit_extention_flag is TRUE, ex_seq_shift_bits, which indicates how many bits to extend, may additionally be sent.
  • When ex_seq_all_bit_extention_flag is FALSE, a flag ex_seq_partial_bit_extention_flag indicating whether or not to extend the pixel bit length of the image signal input to the prediction image creation unit in the encoder and decoder may further be transmitted.
  • When the flag is TRUE, it indicates that an image signal in which the bit length of each pixel is extended is used at the time of prediction image creation in the internal processing of the encoder and decoder.
  • When the flag is FALSE, it indicates that an image signal in which the bit length of each pixel is extended is used in all the data paths inside the encoder and decoder.
  • When ex_seq_partial_bit_extention_flag is TRUE, ex_seq_partial_shift_bits is further sent, indicating by how many bits the pixel bit length of the image signal input to the predicted image creation unit is to be expanded compared to the pixel bit length of the input image signal.
  • ex_seq_partial_bit_extention_flag force When TRUE, indicates the power to use an image signal in which the bit length of each pixel is extended in a certain part of the data path inside the encoding and decoding. You can send a flag. In the example shown in FIG. 36, a flag ex_seq_framemem_bit_extent ion_flag indicating whether or not to extend the pixel bit length of the reference image signal stored in the frame memory 114, and an orthogonal transform unit and a quantization unit (orthogonal transformer 104, quantizer 106).
  • a puffer ex_seq_trans_and_quant_bit_extention_flag indicating the force that stretches the pixel bit length of the prediction residual signal input to or output from the inverse quantizer 109 and the inverse orthogonal transformer 110) by 5 ⁇ m is transmitted.
  • When ex_seq_framemem_bit_extention_flag is TRUE, it indicates that the pixel bit length of the reference image signal stored in the frame memory is extended. When the flag is FALSE, it indicates that the pixel bit length of the reference image signal stored in the frame memory is the same as that of the input image. If ex_seq_framemem_bit_extention_flag is TRUE, ex_seq_framemem_shift_bits, indicating by how many bits the pixel bit length of the image signal stored in the frame memory is extended compared to the pixel bit length of the input image signal, may be transmitted.
  • For the orthogonal transform unit and the quantization unit, ex_seq_trans_and_quant_bit_extention_flag indicates whether the prediction residual signal that is input or output is the difference signal between a prediction image signal having the same bit accuracy as the input image signal and the input image signal. The conditional structure of these flags is sketched below.
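Read together, the flags above form a small conditional syntax tree. The following C sketch shows one plausible parsing order; the read_flag() and read_bits() helpers and the fixed 4-bit width of each shift-bits field are assumptions made only for illustration and do not come from the patent text.

```c
/* Placeholder bitstream readers (assumed, not part of the patent text). */
extern int read_flag(void);
extern int read_bits(int n);

typedef struct {
    int all_bit_extention_flag;             /* ex_seq_all_bit_extention_flag             */
    int seq_shift_bits;                     /* ex_seq_shift_bits                         */
    int partial_bit_extention_flag;         /* ex_seq_partial_bit_extention_flag         */
    int partial_shift_bits;                 /* ex_seq_partial_shift_bits                 */
    int framemem_bit_extention_flag;        /* ex_seq_framemem_bit_extention_flag        */
    int framemem_shift_bits;                /* ex_seq_framemem_shift_bits                */
    int trans_and_quant_bit_extention_flag; /* ex_seq_trans_and_quant_bit_extention_flag */
} BitExtentionSyntax;

/* One plausible reading of the Fig. 36 sequence parameter set extension. */
void parse_bit_extention_syntax(BitExtentionSyntax *s)
{
    s->all_bit_extention_flag = read_flag();
    if (s->all_bit_extention_flag) {
        s->seq_shift_bits = read_bits(4);
    } else {
        s->partial_bit_extention_flag = read_flag();
        if (s->partial_bit_extention_flag) {
            s->partial_shift_bits = read_bits(4);
            s->framemem_bit_extention_flag = read_flag();
            if (s->framemem_bit_extention_flag)
                s->framemem_shift_bits = read_bits(4);
            s->trans_and_quant_bit_extention_flag = read_flag();
        }
    }
}
```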
  • Fig. 38 shows an example of setting the control flags F1001, F1007, F1005, F1006, and F1004 used in the processing of the image encoding device shown in Fig. 35.
  • The all column indicates the value of ex_seq_all_bit_extention_flag.
  • The part, fm, and t/q columns indicate the values of ex_seq_partial_bit_extention_flag, ex_seq_framemem_bit_extention_flag, and ex_seq_trans_and_quant_bit_extention_flag, respectively.
  • The control flags F1001, F1007, F1005, F1006, and F1004 indicate the value of the control flag that is set based on the value of the corresponding syntax element; 1 indicates ON and 0 indicates OFF. For example, when ex_seq_all_bit_extention_flag is TRUE, only the control flag F1001 is ON and the remaining flags are OFF.
  • Fig. 40 shows an example of setting the control flags F2001, F2005, F2006, F2007, and F2008 used in the processing of the image decoding apparatus described later.
  • The meaning of the values is the same as in Fig. 38.
  • This image decoding apparatus includes an entropy decoder 200, an inverse quantizer 109, an inverse orthogonal transformer 110, an adder 111, a loop filter 113, a frame memory 114, and a prediction image creator.
  • It further includes pixel bit length reducers (pixel bit precision reduction converters) 2001 and 2006, a pixel bit length extender (pixel bit precision extension converter) 2005, pixel bit length converters (pixel bit precision converters) 2007 and 2008, and a bit length conversion controller 2002.
  • the coefficient information output of the entropy decoder 200 is connected to the adder 111 via the inverse quantizer 109 and the inverse orthogonal transformer 110.
  • The motion vector / prediction mode information output of the entropy decoder 200 is connected to the prediction image generator 115, and its bit extension information output is connected to the bit length conversion controller 2002.
  • The output of the predicted image generator 115 is connected to the other input of the adder 111 via the pixel bit length reducer 2006.
  • The output of the adder 111 is connected to the frame memory 114 via the pixel bit length extender 2005, the loop filter 113, and the pixel bit length converter 2007.
  • the other output of the loop filter 113 is connected to the input of the pixel bit length reducer 2001.
  • the output of the frame memory 114 is connected to the pixel bit length converter 2008 together with the output to the bit length conversion controller 2002.
  • the output of the pixel bit length converter 2008 is connected to the other input of the prediction image generator 115.
  • The output of the bit length conversion controller 2002 is connected to the other inputs of the pixel bit length reducer 2001, the pixel bit length extender 2005, the pixel bit length reducer 2006, the pixel bit length converter 2007, and the pixel bit length converter 2008.
  • The bit length conversion controller 2002 outputs bit conversion information 2003 including a control flag indicating whether or not to perform pixel bit length conversion and information indicating the bit length to be expanded or reduced.
  • The control flags included in the bit conversion information 2003 are set as shown in Fig. 40 based on, for example, the syntax values (Fig. 36) multiplexed as bit extension information 2004 in the encoded data 117.
  • The quantized orthogonal transform coefficient information 107 is sequentially subjected to processing reverse to that of the quantizer 106 and the orthogonal transformer 104 by the inverse quantizer 109 and the inverse orthogonal transformer 110, and is thereby converted into a residual signal 201.
  • The motion vector / prediction mode information 116 is input to the prediction image generator 115, and a predicted image signal 102 based on the motion vector / prediction mode information 116 is generated from the decoded image signal 203 that is stored in the frame memory 114 and whose pixel bit length has been converted by the pixel bit length converter 2008 as necessary (S204).
  • The predicted image signal 102 is input to the pixel bit length reducer 2006, and whether or not to convert the bit length of each pixel of the predicted image signal 102 is determined based on the control flag F2006 included in the bit conversion information 2003 (S205). When the control flag F2006 is ON, the switch E2 in the pixel bit length reducer 2006 is set to ON, and the bit length of each pixel of the predicted image signal 102 is reduced by the same process as that performed by the pixel bit length reducer 2006 in the fourth embodiment (S206). When the control flag F2006 is OFF, the bit length of the pixels is not converted.
  • The predicted image signal 102 output from the pixel bit length reducer 2006 and the residual signal 201 are added by the adder 111 to generate a decoded image signal 203 (S207).
  • The decoded image signal 203 is input to the pixel bit length extender 2005.
  • The pixel bit length extender 2005 determines whether to extend the bit length of each pixel of the decoded image signal 203 based on the control flag F2005 included in the bit conversion information 2003 (S208).
  • When the control flag F2005 is ON, the switch E1 in the pixel bit length extender 2005 is set to ON, and the bit length of each pixel of the decoded image signal 203 is extended by the same processing as that performed by the pixel bit length extender 2005 in the fourth embodiment (S209).
  • When the control flag F2005 is OFF, the pixel bit length is not converted. The combined S205 to S209 path is sketched below.
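Ignoring the control flags for a moment, steps S205 to S209 taken together amount to the following per-pixel path. The sketch reuses the illustrative extend_pixel / reduce_pixel helpers introduced earlier, assumes an extension of M bits, and assumes that clipping to the valid pixel range is handled elsewhere.

```c
#include <stdint.h>

/* Illustrative reconstruction of one pixel along the S205-S209 path. */
int32_t reconstruct_pixel(int32_t predicted_ext, int32_t residual, int M)
{
    int32_t predicted = reduce_pixel(predicted_ext, M);  /* S206: reduce the prediction    */
    int32_t decoded   = predicted + residual;            /* S207: add the residual         */
    return extend_pixel(decoded, M);                     /* S209: extend the decoded pixel */
}
```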
  • The decoded image signal 203 output from the pixel bit length extender 2005 is filtered by the loop filter 113 as necessary, and is then output to the pixel bit length reducer 2001 and the pixel bit length converter 2007.
  • When the decoded image signal 203 is input to the pixel bit length converter 2007, the pixel bit length converter 2007 determines whether the bit length of each pixel of the decoded image signal 203 is to be converted, based on the control flag F2007 included in the bit conversion information 2003 (S210). When the control flag F2007 is ON, the switch E3 in the pixel bit length converter 2007 is set to ON, and the bit length of each pixel of the decoded image signal 203 is converted by the same processing as that performed by the pixel bit length converter 2007 in the third embodiment (S211). When the control flag F2007 is OFF, the pixel bit length is not converted.
  • the decoded image signal 203 output from the pixel bit length converter 2007 is stored in the frame memory 114 (S212).
  • The pixel bit length converter 2008 determines whether or not the bit length of each pixel of the decoded image signal 203 is to be converted, based on the control flag F2008 included in the bit conversion information 2003 (S213).
  • When the control flag F2008 is ON, the switch E3 in the pixel bit length converter 2008 is set to ON, and the bit length of each pixel of the decoded image signal 203 is converted by the same processing as that performed by the pixel bit length converter 2008 in the third embodiment (S214).
  • When the control flag F2008 is OFF, the pixel bit length is not converted.
  • the image signal output from the pixel bit length converter 2008 is input to the predicted image generator 115 (S215).
  • the decoded image signal 203 is filtered by the loop filter 113 as necessary, and is output to the pixel bit length reducer 2001.
  • The pixel bit length reducer 2001 determines whether or not to convert the bit length of each pixel of the decoded image signal 203, based on the control flag F2001 included in the bit conversion information 2003 (S216).
  • When the control flag F2001 is ON, the switch E2 in the pixel bit length reducer 2001 is set to ON, and the bit length of each pixel of the decoded image signal 203 is reduced by the same processing as that performed by the pixel bit length reducer 2001 in the first embodiment (S217).
  • When the control flag F2001 is OFF, the pixel bit length is not converted, and the decoded image signal 202 is output as it is.
  • the decoded image 202 having the same N-bit accuracy as the input image input to the image encoding device is output from the pixel bit length reducer 2001 (S218).
  • Part of the pixel bit length extenders, pixel bit length reducers, and pixel bit length converters may be omitted, as in the configurations shown in the second to fifth embodiments, and whether bit length conversion is performed in the pixel bit length extenders, pixel bit length reducers, and pixel bit length converters is determined according to the value specified by ex_seq_all_bit_extention_flag in the syntax shown in Fig. 36.
  • The quantization / inverse quantization in the second and third embodiments may use a modified quantization step, calculated by a formula that takes into account that the accuracy of the residual is expanded to (N + M) bits when the bit precision is extended by M bits (see the note below).
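The formula itself is not reproduced in this text. One natural reading, stated here purely as an assumption, is that the quantization step is scaled by a factor of 2^M, so that quantizing the (N + M)-bit residual is equivalent to quantizing the original N-bit residual with the unmodified step, i.e. Qstep' = Qstep x 2^M.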
  • Fig. 41 is a table, and Fig. 42 a graph, showing the predicted pixel value of 8-bit accuracy according to H.264 and the predicted pixel value of 12-bit accuracy according to the present embodiment for the input pixel value (8 bits) at each pixel position (the interpolation filter for the half-pixel position is the 6-tap filter {1/32, -5/32, 20/32, 20/32, -5/32, 1/32}).
  • When the predicted image signal is created with the same 8 bits as the input image signal (represented by the rhombus plot in Fig. 42), the pixel value of the pixel interpolated at the half-pixel position has a rounding error.
  • When bit expansion as described in the present embodiment is performed to create a 12-bit predicted image (represented by the circular plot in Fig. 42), the rounding error of the pixel value of the pixel interpolated at the half-pixel position is reduced, as the sketch below illustrates.
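The effect can be seen directly in the arithmetic of the half-pixel filter quoted above. The following C sketch contrasts rounding the 6-tap result down to 8-bit accuracy with keeping four extra fractional bits (12-bit accuracy); it illustrates the rounding behaviour only and is not the patent's exact implementation.

```c
#include <stdint.h>

/* Half-pixel interpolation with the 6-tap filter {1, -5, 20, 20, -5, 1} / 32.
   p[] holds the six neighbouring integer-position pixel values. */

int32_t half_pel_8bit(const int32_t p[6])   /* rounds the fraction away */
{
    int32_t acc = p[0] - 5 * p[1] + 20 * p[2] + 20 * p[3] - 5 * p[4] + p[5];
    return (acc + 16) >> 5;                 /* result at 8-bit accuracy */
}

int32_t half_pel_12bit(const int32_t p[6])  /* keeps 4 fractional bits */
{
    int32_t acc = p[0] - 5 * p[1] + 20 * p[2] + 20 * p[3] - 5 * p[4] + p[5];
    return (acc + 1) >> 1;                  /* result at 12-bit accuracy */
}
```

Because the 12-bit variant defers most of the final division, the fractional part of the filtered value survives until the bit length is eventually reduced, which is why the rounding error at half-pixel positions shrinks.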
  • When bit extension is performed, the bit accuracy of the residual is expanded to 12 bits compared to before bit extension.
  • Since the predicted image signal is created with a bit accuracy that is M bits larger than that of the input image signal, the accuracy of the motion compensation filter, the loop filter, and intra-frame prediction can be improved, the prediction error signal can be made smaller, and the coding efficiency can be improved.
  • The present invention is not limited to the above-described embodiments as such; in the implementation stage, it can be embodied by modifying the constituent elements without departing from the spirit of the invention.
  • Various inventions can be formed by appropriately combining a plurality of the constituent elements disclosed in the embodiments. For example, some components may be deleted from all the components shown in an embodiment. Furthermore, constituent elements across different embodiments may be appropriately combined.
  • According to the present invention, it is possible to improve the coding efficiency by sufficiently increasing the accuracy of intra prediction and motion compensation.
  • In the configuration of the present invention, by making the bit accuracy of the reference image input to the prediction image creator larger than the bit accuracy of the input image, the accuracy of, for example, the intra-frame prediction applied in H.264 can be increased, so that the prediction error signal can be reduced and the coding efficiency can be improved.
  • Not only the bit accuracy related to the calculation accuracy of the fractional-pixel positions described above, but also the bit accuracy related to the calculation accuracy of integer-pixel positions, can be kept high.
  • When a reference image is stored with a bit accuracy higher than that of the input image and used for prediction, and a prediction image is created by the prediction image creator, the calculation accuracy can be improved and the coding efficiency can be improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to an image encoding apparatus comprising a pixel bit length extension converter (1001) for converting the bit precision of each pixel of an input image, outputting the converted input image, and outputting bit conversion information indicating the number of bits changed by the conversion; an image encoder (10) for encoding the converted input image and outputting image coding information; and a multiplexer (12) for multiplexing the bit conversion information and the image coding information.
PCT/JP2007/057197 2006-03-30 2007-03-30 appareil et procédé de codage d'image, et appareil et procédé de décodage d'image WO2007114368A1 (fr)

Priority Applications (10)

Application Number Priority Date Filing Date Title
JP2008508675A JP5254004B2 (ja) 2006-03-30 2007-03-30 画像符号化装置及び方法
CN2007800112005A CN101411202B (zh) 2006-03-30 2007-03-30 图像编码装置和方法以及图像解码装置和方法
EP20070740633 EP2003897A4 (fr) 2006-03-30 2007-03-30 Appareil et procede de codage d'image, et appareil et procede de decodage d'image
CA 2645931 CA2645931A1 (fr) 2006-03-30 2007-03-30 Appareillage et methode de codage d'images, et appareillage et methode de decodage d'images
BRPI0709853-7A BRPI0709853A2 (pt) 2006-03-30 2007-03-30 aparelho e mÉtodo de codificaÇço de imagem e aparelho e mÉtodo de decodificaÇço de imagem
MX2008012516A MX2008012516A (es) 2006-03-30 2007-03-30 Aparato y metodo de codificacion de imagen y aparato y metodo de descodificacion de imagen.
AU2007232828A AU2007232828B2 (en) 2006-03-30 2007-03-30 Image coding apparatus and method, and image decoding apparatus and method
US12/242,181 US8606028B2 (en) 2006-03-30 2008-09-30 Pixel bit depth conversion in image encoding and decoding
NO20084509A NO20084509L (no) 2006-03-30 2008-10-27 Bildekoding og bildedekoding
HR20080553A HRP20080553A2 (en) 2006-03-30 2008-10-29 Image coding apparatus and method, and image decoding apparatus and method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2006095597 2006-03-30
JP2006-095597 2006-03-30
JPPCT/JP2006/320876 2006-10-19
PCT/JP2006/320876 WO2007116551A1 (fr) 2006-03-30 2006-10-19 Appareil et procédé de codage d'image et appareil et procédé de décodage d'image

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/242,181 Continuation US8606028B2 (en) 2006-03-30 2008-09-30 Pixel bit depth conversion in image encoding and decoding

Publications (1)

Publication Number Publication Date
WO2007114368A1 true WO2007114368A1 (fr) 2007-10-11

Family

ID=38563636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/057197 WO2007114368A1 (fr) 2006-03-30 2007-03-30 appareil et procédé de codage d'image, et appareil et procédé de décodage d'image

Country Status (1)

Country Link
WO (1) WO2007114368A1 (fr)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009130931A (ja) * 2007-11-19 2009-06-11 Samsung Electronics Co Ltd 映像の解像度の調整を通じて動画を効率的に符号化/復号化する方法及び装置
WO2010038587A1 (fr) * 2008-10-01 2010-04-08 株式会社エヌ・ティ・ティ・ドコモ Appareil de codage d'images animées, appareil de décodage d'images animées, procédé de codage d'images animées, procédé de décodage d'images animées, programme de codage d'images animées, programme de décodage d'images animées et système de codage/décodage d'images animées
WO2011086777A1 (fr) * 2010-01-13 2011-07-21 株式会社 東芝 Appareil codeur d'image animée et appareil décodeur
WO2011086836A1 (fr) * 2010-01-12 2011-07-21 シャープ株式会社 Appareil codeur, appareil décodeur et structure de données
WO2011121716A1 (fr) * 2010-03-30 2011-10-06 株式会社 東芝 Procédé de codage d'image animée, procédé de décodage, appareil codeur et appareil décodeur
JP2011233972A (ja) * 2010-04-23 2011-11-17 Toshiba Corp 画像符号化装置及び画像符号化方法並びに画像復号化装置及び画像復号化方法
WO2012001818A1 (fr) * 2010-07-02 2012-01-05 株式会社 東芝 Dispositif de codage vidéo et dispositif de décodage vidéo
WO2012001833A1 (fr) * 2010-07-02 2012-01-05 株式会社 東芝 Appareil de codage et appareil de décodage d'image animée et procédé
WO2012042645A1 (fr) * 2010-09-30 2012-04-05 株式会社 東芝 Dispositif de codage d'image dynamique et dispositif de décodage d'image dynamique
JPWO2010064675A1 (ja) * 2008-12-03 2012-05-10 ソニー株式会社 画像処理装置および画像処理方法、並びにプログラム
JP2012195959A (ja) * 2012-06-15 2012-10-11 Toshiba Corp 画像符号化装置及び画像符号化方法並びに画像復号化装置及び画像復号化方法
JP2013509763A (ja) * 2009-10-29 2013-03-14 シコラ,トマス ビデオ・シーケンスを処理する方法及び装置
WO2013065678A1 (fr) * 2011-10-31 2013-05-10 三菱電機株式会社 Dispositif de codage d'image dynamique, dispositif de décodage d'image dynamique, procédé pour coder une image dynamique et procédé pour décoder une image dynamique
JP5323211B2 (ja) * 2010-01-13 2013-10-23 株式会社東芝 動画像符号化装置および復号装置
JP2013232974A (ja) * 2010-01-13 2013-11-14 Toshiba Corp 動画像符号化装置および復号装置
JP2014506442A (ja) * 2011-01-12 2014-03-13 シーメンス アクチエンゲゼルシヤフト ビデオコーダにおける参照イメージの圧縮及び伸長方法
JP2014060744A (ja) * 2010-01-13 2014-04-03 Toshiba Corp 動画像符号化装置および復号装置
JP2014131172A (ja) * 2012-12-28 2014-07-10 Canon Inc 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム
JP2014517631A (ja) * 2011-06-09 2014-07-17 クゥアルコム・インコーポレイテッド ビデオコーディングにおける内部ビット深度増加
JP2014520488A (ja) * 2011-06-24 2014-08-21 株式会社Nttドコモ 動き補償予測のための方法及び装置
JP2014520416A (ja) * 2011-08-31 2014-08-21 メディアテック インコーポレイテッド 制約付きフィルタ係数を有する適応ループフィルタの方法及び装置
JP2014521257A (ja) * 2011-07-01 2014-08-25 クゥアルコム・インコーポレイテッド 低減解像度ピクセル補間
KR101446771B1 (ko) 2008-01-30 2014-10-06 삼성전자주식회사 영상 부호화장치 및 영상 복호화장치
US9172956B2 (en) 2010-11-23 2015-10-27 Lg Electronics Inc. Encoding and decoding images using inter-prediction
JP2016005244A (ja) * 2014-06-19 2016-01-12 株式会社ニコン 画像符号化装置、画像復号装置、画像符号化方法および画像復号方法
RU2613728C1 (ru) * 2011-01-12 2017-03-21 Нтт Докомо, Инк. Способ кодирования изображения с предсказанием, устройство кодирования изображения с предсказанием, программа кодирования изображения с предсказанием, способ декодирования изображения с предсказанием, устройство декодирования изображения с предсказанием и программа декодирования изображения с предсказанием
JP2017200191A (ja) * 2017-05-25 2017-11-02 キヤノン株式会社 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム
CN108370444A (zh) * 2015-12-21 2018-08-03 汤姆逊许可公司 组合自适应分辨率和内部比特深度增加编码的方法和装置
JP2018142969A (ja) * 2018-04-11 2018-09-13 キヤノン株式会社 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム
JP2019106741A (ja) * 2019-04-08 2019-06-27 株式会社ニコン 画像処理装置
JP2019126077A (ja) * 2011-06-23 2019-07-25 華為技術有限公司Huawei Technologies Co.,Ltd. 画像フィルタ装置、フィルタ方法および動画像復号装置
WO2019159696A1 (fr) * 2018-02-14 2019-08-22 ソニー株式会社 Dispositif et procédé de traitement d'image
WO2019159697A1 (fr) * 2018-02-14 2019-08-22 ソニー株式会社 Dispositif et procédé de traitement d'image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04326667A (ja) * 1991-04-26 1992-11-16 Konica Corp 画像データの圧縮伸張装置
JPH06311506A (ja) * 1992-12-25 1994-11-04 Mitsubishi Electric Corp フレーム間符号化処理方式及びフレーム間符号化処理方法及び符号化制御方式
JPH09205647A (ja) * 1996-01-24 1997-08-05 Fuji Xerox Co Ltd 画像符号化装置、画像復号化装置および画像処理装置
JPH1127673A (ja) * 1997-06-27 1999-01-29 Victor Co Of Japan Ltd ディジタル画像信号圧縮装置及びディジタル画像信号記録装置
JP2003333603A (ja) * 2002-04-10 2003-11-21 Microsoft Corp コンピュータ実施される方法、システムおよびコンピュータ可読媒体
JP2005503734A (ja) * 2001-09-17 2005-02-03 ノキア コーポレイション サブピクセル値補間方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04326667A (ja) * 1991-04-26 1992-11-16 Konica Corp 画像データの圧縮伸張装置
JPH06311506A (ja) * 1992-12-25 1994-11-04 Mitsubishi Electric Corp フレーム間符号化処理方式及びフレーム間符号化処理方法及び符号化制御方式
JPH09205647A (ja) * 1996-01-24 1997-08-05 Fuji Xerox Co Ltd 画像符号化装置、画像復号化装置および画像処理装置
JPH1127673A (ja) * 1997-06-27 1999-01-29 Victor Co Of Japan Ltd ディジタル画像信号圧縮装置及びディジタル画像信号記録装置
JP2005503734A (ja) * 2001-09-17 2005-02-03 ノキア コーポレイション サブピクセル値補間方法
JP2003333603A (ja) * 2002-04-10 2003-11-21 Microsoft Corp コンピュータ実施される方法、システムおよびコンピュータ可読媒体

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2003897A4 *

Cited By (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101365444B1 (ko) 2007-11-19 2014-02-21 삼성전자주식회사 영상의 해상도의 조정을 통하여 동영상을 효율적으로부호화/복호화하는 방법 및 장치
JP2009130931A (ja) * 2007-11-19 2009-06-11 Samsung Electronics Co Ltd 映像の解像度の調整を通じて動画を効率的に符号化/復号化する方法及び装置
KR101446771B1 (ko) 2008-01-30 2014-10-06 삼성전자주식회사 영상 부호화장치 및 영상 복호화장치
WO2010038587A1 (fr) * 2008-10-01 2010-04-08 株式会社エヌ・ティ・ティ・ドコモ Appareil de codage d'images animées, appareil de décodage d'images animées, procédé de codage d'images animées, procédé de décodage d'images animées, programme de codage d'images animées, programme de décodage d'images animées et système de codage/décodage d'images animées
JP2010087984A (ja) * 2008-10-01 2010-04-15 Ntt Docomo Inc 動画像符号化装置、動画像復号装置、動画像符号化方法、動画像復号方法、動画像符号化プログラム、動画像復号プログラム、及び動画像符号化・復号システム
US8879627B2 (en) 2008-10-01 2014-11-04 Ntt Docomo, Inc. Moving image encoding apparatus, moving image decoding apparatus, moving image encoding method, moving image decoding method, moving image encoding program, moving image decoding program, and moving image encoding/ decoding system
CN104135665A (zh) * 2008-10-01 2014-11-05 株式会社Ntt都科摩 动态图像编码装置及方法、动态图像解码装置及方法
CN102172029A (zh) * 2008-10-01 2011-08-31 株式会社Ntt都科摩 动态图像编码装置、动态图像解码装置、动态图像编码方法、动态图像解码方法、动态图像编码程序、动态图像解码程序、以及动态图像编码解码系统
JPWO2010064675A1 (ja) * 2008-12-03 2012-05-10 ソニー株式会社 画像処理装置および画像処理方法、並びにプログラム
US9445119B2 (en) 2009-10-29 2016-09-13 Vestel Elektronik Sanayi Ve Ticaret A.S. Method and device for processing a video sequence
JP2013509763A (ja) * 2009-10-29 2013-03-14 シコラ,トマス ビデオ・シーケンスを処理する方法及び装置
WO2011086836A1 (fr) * 2010-01-12 2011-07-21 シャープ株式会社 Appareil codeur, appareil décodeur et structure de données
JPWO2011086836A1 (ja) * 2010-01-12 2013-05-16 シャープ株式会社 符号化装置、復号装置、および、データ構造
JP5323211B2 (ja) * 2010-01-13 2013-10-23 株式会社東芝 動画像符号化装置および復号装置
WO2011086777A1 (fr) * 2010-01-13 2011-07-21 株式会社 東芝 Appareil codeur d'image animée et appareil décodeur
WO2011086672A1 (fr) * 2010-01-13 2011-07-21 株式会社 東芝 Dispositif de codage et dispositif de décodage d'images animées
JP2014060744A (ja) * 2010-01-13 2014-04-03 Toshiba Corp 動画像符号化装置および復号装置
JP2013232974A (ja) * 2010-01-13 2013-11-14 Toshiba Corp 動画像符号化装置および復号装置
JP5323252B2 (ja) * 2010-03-30 2013-10-23 株式会社東芝 動画像符号化方法、復号化方法、符号化装置および復号化装置
US9100650B2 (en) 2010-03-30 2015-08-04 Kabushiki Kaisha Toshiba Video encoding method, decoding method, and apparatus
WO2011121716A1 (fr) * 2010-03-30 2011-10-06 株式会社 東芝 Procédé de codage d'image animée, procédé de décodage, appareil codeur et appareil décodeur
JP2011233972A (ja) * 2010-04-23 2011-11-17 Toshiba Corp 画像符号化装置及び画像符号化方法並びに画像復号化装置及び画像復号化方法
US8559734B2 (en) 2010-04-23 2013-10-15 Kabushiki Kaisha Toshiba Image encoding apparatus and method for the same and image decoding apparatus and method for the same
WO2012001818A1 (fr) * 2010-07-02 2012-01-05 株式会社 東芝 Dispositif de codage vidéo et dispositif de décodage vidéo
WO2012001833A1 (fr) * 2010-07-02 2012-01-05 株式会社 東芝 Appareil de codage et appareil de décodage d'image animée et procédé
WO2012042645A1 (fr) * 2010-09-30 2012-04-05 株式会社 東芝 Dispositif de codage d'image dynamique et dispositif de décodage d'image dynamique
US9369729B2 (en) 2010-11-23 2016-06-14 Lg Electronics Inc. Method for encoding and decoding images, and device using same
US11234013B2 (en) 2010-11-23 2022-01-25 Lg Electronics Inc. Method for encoding and decoding images, and device using same
US11627332B2 (en) 2010-11-23 2023-04-11 Lg Electronics Inc. Method for encoding and decoding images, and device using same
US10440381B2 (en) 2010-11-23 2019-10-08 Lg Electronics Inc. Method for encoding and decoding images, and device using same
US10148975B2 (en) 2010-11-23 2018-12-04 Lg Electronics Inc. Method for encoding and decoding images, and device using same
US10757436B2 (en) 2010-11-23 2020-08-25 Lg Electronics Inc. Method for encoding and decoding images, and device using same
US9800888B2 (en) 2010-11-23 2017-10-24 Lg Electronics Inc. Method for encoding and decoding images, and device using same
US9172956B2 (en) 2010-11-23 2015-10-27 Lg Electronics Inc. Encoding and decoding images using inter-prediction
US9621911B2 (en) 2010-11-23 2017-04-11 Lg Electronics Inc. Method for encoding and decoding images, and device using same
RU2676411C1 (ru) * 2011-01-12 2018-12-28 Нтт Докомо, Инк. Способ кодирования изображения с предсказанием, устройство кодирования изображения с предсказанием, программа кодирования изображения с предсказанием, способ декодирования изображения с предсказанием, устройство декодирования изображения с предсказанием и программа декодирования изображения с предсказанием
RU2650027C1 (ru) * 2011-01-12 2018-04-06 Нтт Докомо, Инк. Способ кодирования изображения с предсказанием, устройство кодирования изображения с предсказанием, программа кодирования изображения с предсказанием, способ декодирования изображения с предсказанием, устройство декодирования изображения с предсказанием и программа декодирования изображения с предсказанием
JP2014506442A (ja) * 2011-01-12 2014-03-13 シーメンス アクチエンゲゼルシヤフト ビデオコーダにおける参照イメージの圧縮及び伸長方法
RU2613728C1 (ru) * 2011-01-12 2017-03-21 Нтт Докомо, Инк. Способ кодирования изображения с предсказанием, устройство кодирования изображения с предсказанием, программа кодирования изображения с предсказанием, способ декодирования изображения с предсказанием, устройство декодирования изображения с предсказанием и программа декодирования изображения с предсказанием
US9723318B2 (en) 2011-01-12 2017-08-01 Siemens Aktiengesellschaft Compression and decompression of reference images in a video encoder
JP2014517631A (ja) * 2011-06-09 2014-07-17 クゥアルコム・インコーポレイテッド ビデオコーディングにおける内部ビット深度増加
US9521434B2 (en) 2011-06-09 2016-12-13 Qualcomm Incorporated Internal bit depth increase in video coding
JP7200320B2 (ja) 2011-06-23 2023-01-06 華為技術有限公司 画像フィルタ装置、フィルタ方法および動画像復号装置
JP2019126077A (ja) * 2011-06-23 2019-07-25 華為技術有限公司Huawei Technologies Co.,Ltd. 画像フィルタ装置、フィルタ方法および動画像復号装置
US11765350B2 (en) 2011-06-23 2023-09-19 Huawei Technologies Co., Ltd. Offset decoding device, offset coding device, image filtering device
JP2021005904A (ja) * 2011-06-23 2021-01-14 華為技術有限公司Huawei Technologies Co.,Ltd. 画像フィルタ装置、フィルタ方法および動画像復号装置
JP2022017267A (ja) * 2011-06-23 2022-01-25 華為技術有限公司 画像フィルタ装置、フィルタ方法および動画像復号装置
US9860532B2 (en) 2011-06-24 2018-01-02 Ntt Docomo, Inc. Method and apparatus for motion compensation prediction
JP2014520488A (ja) * 2011-06-24 2014-08-21 株式会社Nttドコモ 動き補償予測のための方法及び装置
US10171816B2 (en) 2011-06-24 2019-01-01 Ntt Docomo, Inc. Method and apparatus for motion compensation prediction
US10735746B2 (en) 2011-06-24 2020-08-04 Ntt Docomo, Inc. Method and apparatus for motion compensation prediction
US10721481B2 (en) 2011-06-24 2020-07-21 Ntt Docomo, Inc. Method and apparatus for motion compensation prediction
JP2014521257A (ja) * 2011-07-01 2014-08-25 クゥアルコム・インコーポレイテッド 低減解像度ピクセル補間
JP2014520416A (ja) * 2011-08-31 2014-08-21 メディアテック インコーポレイテッド 制約付きフィルタ係数を有する適応ループフィルタの方法及び装置
WO2013065678A1 (fr) * 2011-10-31 2013-05-10 三菱電機株式会社 Dispositif de codage d'image dynamique, dispositif de décodage d'image dynamique, procédé pour coder une image dynamique et procédé pour décoder une image dynamique
JP2012195959A (ja) * 2012-06-15 2012-10-11 Toshiba Corp 画像符号化装置及び画像符号化方法並びに画像復号化装置及び画像復号化方法
US10743017B2 (en) 2012-12-28 2020-08-11 Canon Kabushiki Kaisha Encoding including transform and quantization, or decoding including inverse-transform and inverse-quantization
JP2014131172A (ja) * 2012-12-28 2014-07-10 Canon Inc 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム
JP2016005244A (ja) * 2014-06-19 2016-01-12 株式会社ニコン 画像符号化装置、画像復号装置、画像符号化方法および画像復号方法
JP7340333B2 (ja) 2015-12-21 2023-09-07 インターデジタル ヴイシー ホールディングス, インコーポレイテッド 複合型の適応的解像度及び内部ビット深度増加コーディングのための方法及び機器
JP2019505110A (ja) * 2015-12-21 2019-02-21 トムソン ライセンシングThomson Licensing 複合型の適応的解像度及び内部ビット深度増加コーディングのための方法及び機器
CN108370444B (zh) * 2015-12-21 2024-03-26 交互数字Vc控股公司 组合自适应分辨率和内部比特深度增加编码的方法和装置
CN108370444A (zh) * 2015-12-21 2018-08-03 汤姆逊许可公司 组合自适应分辨率和内部比特深度增加编码的方法和装置
US11297323B2 (en) 2015-12-21 2022-04-05 Interdigital Vc Holdings, Inc. Method and apparatus for combined adaptive resolution and internal bit-depth increase coding
JP2017200191A (ja) * 2017-05-25 2017-11-02 キヤノン株式会社 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム
US11109046B2 (en) 2018-02-14 2021-08-31 Sony Corporation Image processing apparatus and image processing method
JPWO2019159696A1 (ja) * 2018-02-14 2021-01-28 ソニー株式会社 画像処理装置および方法
US11356701B2 (en) 2018-02-14 2022-06-07 Sony Corporation Image processing apparatus and image processing method
JPWO2019159697A1 (ja) * 2018-02-14 2021-01-28 ソニー株式会社 画像処理装置および方法
JP7235030B2 (ja) 2018-02-14 2023-03-08 ソニーグループ株式会社 画像処理装置および方法
JP7235031B2 (ja) 2018-02-14 2023-03-08 ソニーグループ株式会社 画像処理装置および方法
WO2019159697A1 (fr) * 2018-02-14 2019-08-22 ソニー株式会社 Dispositif et procédé de traitement d'image
WO2019159696A1 (fr) * 2018-02-14 2019-08-22 ソニー株式会社 Dispositif et procédé de traitement d'image
JP2018142969A (ja) * 2018-04-11 2018-09-13 キヤノン株式会社 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム
JP2019106741A (ja) * 2019-04-08 2019-06-27 株式会社ニコン 画像処理装置

Similar Documents

Publication Publication Date Title
WO2007114368A1 (fr) appareil et procédé de codage d'image, et appareil et procédé de décodage d'image
JP5254004B2 (ja) 画像符号化装置及び方法
JP6626295B2 (ja) 画像符号化装置、画像処理装置、画像符号化方法
US8260069B2 (en) Color image encoding and decoding method and apparatus using a correlation between chrominance components
JP6532467B2 (ja) ビデオ符号化および復号におけるシンタックス要素符号化方法および装置
US20070217703A1 (en) Image encoding apparatus, image decoding apparatus and control method therefor
JP2009260977A (ja) 不可逆圧縮及び可逆圧縮を組み合わせて用いたビデオデータ圧縮
CN113810694A (zh) 编码装置、解码装置、编码方法和解码方法
JP4360416B2 (ja) 画像圧縮方法、画像圧縮装置およびプログラム
KR102238058B1 (ko) 인코더, 디코더 및 방법
US8290041B2 (en) Communication terminal
JP4574444B2 (ja) 画像復号装置及び方法、画像符号化装置及び方法、コンピュータプログラム及び記憶媒体
JP6502739B2 (ja) 画像符号化装置、画像処理装置、画像符号化方法
JP4284265B2 (ja) 動画像符号化装置、動画像符号化方法、動画像復号化装置および動画像復号化方法
US7197078B2 (en) Video coding/decoding buffering apparatus and buffering method thereof
CN102215384B (zh) 图像压缩方法和系统
JP2005142898A (ja) 電子透かし埋め込み方式およびドリフト補償方式
JP2010045598A (ja) 可変長復号装置、及び、符号化装置
JP4521835B2 (ja) 画像処理装置
JPH08242446A (ja) 画像処理方法及び画像処理装置
JP4262144B2 (ja) 画像符号化装置及び方法
JP4133678B2 (ja) 画像符号化装置
JP2009095066A (ja) 動画像復号化装置、動画像復号化方法、動画像符号化装置、および動画像符号化方法
JP2009004878A (ja) 画像処理装置、画像処理方法および画像処理プログラム、並びに撮像装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07740633

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2008508675

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2007232828

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2645931

Country of ref document: CA

Ref document number: 7799/DELNP/2008

Country of ref document: IN

REEP Request for entry into the european phase

Ref document number: 2007740633

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 12008502090

Country of ref document: PH

Ref document number: 2007740633

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020087023440

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 200780011200.5

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2008138706

Country of ref document: RU

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2007232828

Country of ref document: AU

Date of ref document: 20070330

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: P20080553A

Country of ref document: HR

WWE Wipo information: entry into national phase

Ref document number: 1020117001539

Country of ref document: KR

ENP Entry into the national phase

Ref document number: PI0709853

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20080929