WO2012077533A1 - Image decoding device, motion vector decoding method, image encoding device, and motion vector encoding method - Google Patents

Image decoding device, motion vector decoding method, image encoding device, and motion vector encoding method Download PDF

Info

Publication number
WO2012077533A1
WO2012077533A1 (PCT/JP2011/077510)
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
vector information
information
prediction
block
Prior art date
Application number
PCT/JP2011/077510
Other languages
French (fr)
Japanese (ja)
Inventor
Kazushi Sato (佐藤 数史)
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Priority to US13/990,506 (publication US20130259134A1)
Priority to CN2011800576190A (publication CN103238329A)
Publication of WO2012077533A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding

Definitions

  • This technology relates to an image decoding device and a motion vector decoding method, and to an image encoding device and a motion vector encoding method. Specifically, it improves the efficiency of moving picture encoding.
  • MPEG-2 (ISO/IEC 13818-2) is defined as a general-purpose image encoding method and is currently widely used in a broad range of professional and consumer applications.
  • With the MPEG-2 compression method, for example, a standard-resolution interlaced image of 720 × 480 pixels can be assigned a code amount (bit rate) of 4 to 8 Mbps, realizing a high compression ratio and good image quality.
  • In H.264/AVC (Advanced Video Coding), the unit of motion prediction/compensation processing is 16 × 16 pixels in the frame motion compensation mode, while in the field motion compensation mode motion prediction/compensation processing is performed in units of 16 × 8 pixels for each of the first field and the second field.
  • In median prediction, block E is the target block to be encoded, and blocks A to D are already-encoded blocks adjacent to the target block E.
  • The predicted motion vector information pmvE for the target block E is generated by median prediction as shown in Expression (1).
  • pmvE = med(mvA, mvB, mvC) … (1)
  • When information about the adjacent block C cannot be obtained because it lies at the edge of the image frame, information about the adjacent block D is substituted.
  • The data mvdE encoded in the image compression information as the motion vector information for the target block E is generated using pmvE as shown in Expression (2).
  • mvdE = mvE − pmvE … (2) Note that in actual processing, the horizontal and vertical components of the motion vector information are processed independently.
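As a concrete illustration, the median prediction of Expression (1) and the differential of Expression (2) can be sketched as follows; the function and variable names are illustrative, and the horizontal and vertical components are handled independently, as the text notes:

```python
def median(a, b, c):
    """Median of three scalars."""
    return a + b + c - min(a, b, c) - max(a, b, c)

def median_predict(mv_a, mv_b, mv_c):
    """pmvE = med(mvA, mvB, mvC), Expression (1), applied per component."""
    return tuple(median(a, b, c) for a, b, c in zip(mv_a, mv_b, mv_c))

def mv_difference(mv_e, pmv_e):
    """mvdE = mvE - pmvE, Expression (2), applied per component."""
    return tuple(m - p for m, p in zip(mv_e, pmv_e))

pmv = median_predict((4, -2), (6, 0), (2, 2))   # -> (4, 0)
mvd = mv_difference((5, 1), pmv)                 # -> (1, 1)
```

Only mvd is written into the image compression information; the decoder recomputes the same pmv from its own decoded neighbors and adds the differential back.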
  • In H.264/AVC, a multi-reference frame method is also defined. The multi-reference frame method defined in H.264/AVC will be described below.
  • a mode called a direct mode is provided.
  • In the direct mode, motion vector information is not stored in the image compression information; the decoding apparatus derives the motion vector information of the block from the motion vector information of surrounding blocks or of the anchor block (co-located block).
  • The anchor block is the block having the same x-y coordinates as the target block in the reference image.
  • Two types of direct mode are defined: the spatial direct mode (Spatial Direct Mode) and the temporal direct mode (Temporal Direct Mode).
  • In the spatial direct mode, the predicted motion vector information pmvE generated by median prediction is used as the motion vector information mvE applied to the block, as shown in Expression (3).
  • mvE = pmvE … (3)
  • In the temporal direct mode, the block at the same spatial address as the target block in the L0 reference picture is the anchor block, and the motion vector information of the anchor block is "mvcol". The distance on the time axis between the current picture and the L0 reference picture is "TDB", and the distance on the time axis between the L0 reference picture and the L1 reference picture is "TDD".
  • The L0 motion vector information mvL0 and the L1 motion vector information mvL1 for the current picture are calculated as in Expressions (4) and (5).
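Expressions (4) and (5) are not reproduced in the text above; the sketch below assumes the standard H.264/AVC temporal direct scaling, in which mvcol is scaled by the temporal distances TDB and TDD. It uses floating-point arithmetic for clarity, whereas a real codec would use integer arithmetic:

```python
def temporal_direct(mv_col, td_b, td_d):
    """Temporal direct mode: derive mvL0 and mvL1 by scaling the anchor
    block's motion vector mvcol with TDB (current picture to L0 reference)
    and TDD (L0 reference to L1 reference). Standard-form assumption for
    Expressions (4) and (5)."""
    mv_l0 = tuple(td_b / td_d * c for c in mv_col)
    mv_l1 = tuple(-(td_d - td_b) / td_d * c for c in mv_col)
    return mv_l0, mv_l1

mv_l0, mv_l1 = temporal_direct((8, -4), td_b=2, td_d=4)
# mv_l0 = (4.0, -2.0), mv_l1 = (-4.0, 2.0)
```

No motion vector information needs to be transmitted for the block: both mvL0 and mvL1 are recoverable at the decoder from mvcol and the picture distances.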
  • the direct mode can be defined in units of 16 ⁇ 16 pixel macroblocks or in units of 8 ⁇ 8 pixel sub-macroblocks.
  • Non-Patent Document 1 proposes improving the encoding of motion vector information that uses median prediction, as shown in FIG.
  • In the scheme of Non-Patent Document 1, either temporal prediction motion vector information or spatio-temporal prediction motion vector information can be used adaptively, in addition to the spatial prediction motion vector information obtained by median prediction.
  • the temporal prediction motion vector information mvtm is generated from five pieces of motion vector information using, for example, Expression (6).
  • the temporal prediction motion vector information mvtm may be generated from nine motion vectors using equation (7).
  • mvtm5 = med(mvcol, mvt0, …, mvt3) … (6)
  • mvtm9 = med(mvcol, mvt0, …, mvt7) … (7)
  • Further, the spatio-temporal prediction motion vector information mvspt is generated from five pieces of motion vector information using Expression (8).
  • mvspt = med(mvcol, mvcol, mvA, mvB, mvC) … (8)
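The medians of Expressions (6)–(8) are taken component-wise over five or nine motion vectors. A minimal sketch, with illustrative names:

```python
import statistics

def median_mv(mvs):
    """Component-wise median of a list of (h, v) motion vectors, as used
    for mvtm5, mvtm9, and mvspt in Expressions (6)-(8)."""
    xs = [mv[0] for mv in mvs]
    ys = [mv[1] for mv in mvs]
    return (statistics.median(xs), statistics.median(ys))

# mvspt = med(mvcol, mvcol, mvA, mvB, mvC), Expression (8)
mv_col, mv_a, mv_b, mv_c = (2, 2), (0, 4), (6, 0), (4, 6)
mvspt = median_mv([mv_col, mv_col, mv_a, mv_b, mv_c])  # -> (2, 2)
```

Note that mvcol appears twice in Expression (8), which biases the median toward the co-located block's motion.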
  • a cost function value is calculated for each block when using each predicted motion vector information, and optimal predicted motion vector information is selected.
  • In the image compression information, for example, a flag that identifies which predicted motion vector information is used is transmitted for each block.
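The per-block selection described above can be sketched as picking, among the candidate predictors, the one whose differential is cheapest to code; `bit_cost` here is an illustrative stand-in for the real cost function, and the returned index plays the role of the transmitted flag:

```python
def select_predictor(candidates, mv,
                     bit_cost=lambda d: abs(d[0]) + abs(d[1])):
    """Choose the candidate predicted motion vector whose differential
    motion vector is cheapest (proxy cost: sum of absolute components).
    Returns (flag_index, chosen_predictor)."""
    best = min(range(len(candidates)),
               key=lambda i: bit_cost((mv[0] - candidates[i][0],
                                       mv[1] - candidates[i][1])))
    return best, candidates[best]

best, pmv = select_predictor([(0, 0), (4, 0), (2, 2)], (3, 1))
# best = 1 (the flag value), pmv = (4, 0)
```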
  • In HEVC (High Efficiency Video Coding), a coding unit (CU: Coding Unit) is defined as the unit of encoding processing.
  • FIG. 6 illustrates the hierarchical structure of the coding unit CU.
  • A prediction unit (PU: Prediction Unit), which is the basic unit for prediction obtained by dividing a coding unit, is also defined.
  • However, the scheme of Non-Patent Document 1 cannot hold prediction information independently for the horizontal and vertical components of the motion vector, so a sufficient improvement in coding efficiency cannot be realized. For example, when there are three types of candidates in the horizontal direction and three types in the vertical direction, there are nine (3 × 3) combinations of horizontal and vertical candidates. A method that assigns a flag to each combination and performs the encoding processing is also conceivable. However, when there are many combinations, the number of flag types increases and the code amount needed to indicate the flags increases.
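The combinatorial argument above is easy to quantify: a flag per (horizontal, vertical) combination needs h × v codeword types, while one flag per component needs only h + v, and the gap widens as the candidate count grows. A small illustrative calculation:

```python
def flag_types(h, v):
    """Number of distinct flag codewords needed: one flag per (h, v)
    combination versus one flag per component."""
    return {"combined": h * v, "per_component": h + v}

flag_types(3, 3)  # {'combined': 9, 'per_component': 6}
flag_types(5, 5)  # {'combined': 25, 'per_component': 10}
```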
  • An image decoding device according to this technology includes: a lossless decoding unit that acquires, from the image compression information, horizontal prediction block information indicating the block whose motion vector information is selected as horizontal prediction motion vector information from the decoded blocks adjacent to the target block, and vertical prediction block information indicating the block whose motion vector information is selected as vertical prediction motion vector information; a prediction motion vector information setting unit that sets the motion vector information of the block indicated by the horizontal prediction block information as the horizontal prediction motion vector information and sets the motion vector information of the block indicated by the vertical prediction block information as the vertical prediction motion vector information; and a motion vector information generation unit that generates the motion vector information of the target block using the horizontal prediction motion vector information and the vertical prediction motion vector information set by the prediction motion vector information setting unit.
  • This technology performs an image decoding process for decoding image compression information generated by dividing input image data into a plurality of pixel blocks, detecting motion vector information for each block, and performing motion-compensated predictive coding.
  • In this decoding process, horizontal prediction block information indicating the block whose motion vector information is selected as horizontal prediction motion vector information from the decoded blocks adjacent to the target block, and vertical prediction block information indicating the block whose motion vector information is selected as vertical prediction motion vector information, are acquired from the image compression information.
  • the motion vector information of the block indicated by the horizontal prediction block information is set as horizontal prediction motion vector information
  • the motion vector information of the block indicated by the vertical prediction block information is set as vertical prediction motion vector information.
  • Motion vector information of the target block is generated using the set horizontal predicted motion vector information and vertical predicted motion vector information.
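The decoder-side steps above can be sketched as follows; the block names and the dictionary of neighbor motion vectors are illustrative:

```python
def decode_mv(h_block_info, v_block_info, mvd, neighbor_mvs):
    """Decoder side: take the horizontal predictor from the block named by
    the horizontal prediction block information, the vertical predictor
    from the block named by the vertical prediction block information, and
    add the decoded differential motion vector mvd.
    neighbor_mvs maps a block name to its (h, v) motion vector."""
    pmv_h = neighbor_mvs[h_block_info][0]
    pmv_v = neighbor_mvs[v_block_info][1]
    return (pmv_h + mvd[0], pmv_v + mvd[1])

neighbors = {"A": (4, -2), "B": (6, 0), "C": (2, 2)}
decode_mv("B", "C", (1, -1), neighbors)  # -> (7, 1)
```

The two components may come from different adjacent blocks, which is exactly the independence that a single combined predictor cannot express.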
  • Alternatively, identification information indicating whether separate horizontal and vertical prediction motion vector information selected from the decoded blocks adjacent to the target block, or combined horizontal-vertical prediction motion vector information, is used for the horizontal and vertical components of the motion vector information of the target block is acquired from the image compression information.
  • Based on this identification information, horizontal prediction motion vector information and vertical prediction motion vector information, or horizontal-vertical prediction motion vector information, are set, and the motion vector information of the target block is generated.
  • That is, this technology provides a motion vector decoding method comprising the steps of acquiring, from the image compression information, horizontal prediction block information indicating the block whose motion vector information is selected as horizontal prediction motion vector information from the decoded blocks adjacent to the target block, and vertical prediction block information indicating the block whose motion vector information is selected as vertical prediction motion vector information, and generating the motion vector information of the target block from them.
  • On the encoding side, motion vector information is selected from the encoded blocks adjacent to the target block to generate horizontal prediction motion vector information and vertical prediction motion vector information.
  • A predicted motion vector information setting unit generates horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information is selected.
  • This technique divides input image data into a plurality of pixel blocks, detects motion vector information for each block, and performs motion compensation predictive coding.
  • motion vector information is selected from an encoded block adjacent to the target block, and horizontal prediction motion vector information and vertical prediction motion vector information are set.
  • horizontal prediction motion vector information and vertical prediction motion vector information are set for each component.
  • For the horizontal component, the motion vector information of the adjacent encoded block that provides the highest encoding efficiency is selected and set as the horizontal prediction motion vector information.
  • For the vertical component, the motion vector information of the adjacent encoded block that provides the highest encoding efficiency is selected and set as the vertical prediction motion vector information.
  • Using the set prediction motion vector information, the motion vector information of the target block is compressed. In addition, horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information is selected are generated, and the horizontal prediction block information and the vertical prediction block information are included in the image compression information.
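The encoder-side selection can be sketched as choosing, independently per component, the neighbor whose motion vector component minimizes the differential; the absolute difference is used here as an illustrative proxy for coding efficiency:

```python
def encode_mv(mv, neighbor_mvs):
    """Encoder side: per component, pick the adjacent encoded block whose
    motion vector component gives the smallest differential (proxy for the
    highest coding efficiency), then emit the block names plus the
    differential. neighbor_mvs maps a block name to its (h, v) vector."""
    h_block = min(neighbor_mvs, key=lambda b: abs(mv[0] - neighbor_mvs[b][0]))
    v_block = min(neighbor_mvs, key=lambda b: abs(mv[1] - neighbor_mvs[b][1]))
    mvd = (mv[0] - neighbor_mvs[h_block][0],
           mv[1] - neighbor_mvs[v_block][1])
    return h_block, v_block, mvd

neighbors = {"A": (4, -2), "B": (6, 0), "C": (2, 2)}
encode_mv((7, 2), neighbors)  # -> ('B', 'C', (1, 0))
```

The emitted triple corresponds to the horizontal prediction block information, the vertical prediction block information, and the compressed differential that are placed in the image compression information.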
  • The setting of whether the motion vector information selected from the encoded blocks adjacent to the target block is used as combined horizontal-vertical prediction motion vector information, or as separate horizontal prediction motion vector information and vertical prediction motion vector information, can be switched for each picture or each slice.
  • For example, horizontal prediction motion vector information and vertical prediction motion vector information are set for a P picture, while combined horizontal-vertical prediction motion vector information is set for a B picture.
  • Identification information indicating whether horizontal prediction motion vector information and vertical prediction motion vector information, or horizontal-vertical prediction motion vector information, is used is provided in the image compression information.
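The per-picture switching just described can be sketched as follows; the string values standing in for the identification information and flag names are illustrative, not part of the document:

```python
def predictor_signalling(picture_type):
    """Per-picture switching: separate horizontal and vertical predictors
    for P pictures, a single combined (horizontal-vertical) predictor for
    B pictures. The identification information carrying this choice is
    placed in the image compression information."""
    if picture_type == "P":
        return {"ident_info": "separate", "flags": ["h_block", "v_block"]}
    return {"ident_info": "combined", "flags": ["hv_block"]}
```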
  • Codes are assigned to the horizontal prediction block information and the vertical prediction block information, respectively, and the assigned codes are included in the image compression information.
  • Furthermore, this technology provides a motion vector encoding method that, when encoding the motion vector information detected from the generated image data, includes the steps of selecting motion vector information from the encoded blocks adjacent to the target block to set the horizontal prediction motion vector information and the vertical prediction motion vector information, and generating horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information is selected.
  • In this technology, motion vector information is selected from the encoded blocks adjacent to the target block, horizontal prediction motion vector information and vertical prediction motion vector information are each set, and the motion vector information of the target block is compressed using the set horizontal prediction motion vector information and vertical prediction motion vector information.
  • In addition, horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information is selected are generated, and the motion vector information is decoded based on the horizontal prediction block information and the vertical prediction block information.
  • Therefore, the horizontal prediction motion vector information and the vertical prediction motion vector information can be signalled with horizontal prediction block information and vertical prediction block information, whose data amount is smaller than that of a flag corresponding to each combination of horizontal and vertical prediction motion vector candidates, so that encoding efficiency can be improved.
  • Brief description of the drawings: a diagram showing blocks in H.264/AVC; a diagram for explaining median prediction; a diagram for explaining the multi-reference frame method.
  • FIG. 7 shows the configuration of the image encoding device.
  • The image encoding device 10 includes an analog/digital conversion unit (A/D conversion unit) 11, a screen rearrangement buffer 12, a subtraction unit 13, an orthogonal transform unit 14, a quantization unit 15, a lossless encoding unit 16, an accumulation buffer 17, and a rate control unit 18.
  • The image encoding device 10 further includes an inverse quantization unit 21, an inverse orthogonal transform unit 22, an addition unit 23, a deblocking filter 24, a frame memory 25, an intra prediction unit 31, a motion prediction/compensation unit 32, a predicted motion vector information setting unit 33, and a predicted image/optimum mode selection unit 35.
  • the A / D converter 11 converts an analog image signal into digital image data and outputs the digital image data to the screen rearrangement buffer 12.
  • the screen rearrangement buffer 12 rearranges the frames of the image data output from the A / D conversion unit 11.
  • The screen rearrangement buffer 12 rearranges the frames in accordance with the GOP (Group of Pictures) structure used in the encoding process, and outputs the rearranged image data to the subtraction unit 13, the intra prediction unit 31, and the motion prediction/compensation unit 32.
  • the subtraction unit 13 is supplied with the image data output from the screen rearrangement buffer 12 and the predicted image data selected by the predicted image / optimum mode selection unit 35 described later.
  • the subtraction unit 13 calculates prediction error data that is a difference between the image data output from the screen rearrangement buffer 12 and the prediction image data supplied from the prediction image / optimum mode selection unit 35, and sends the prediction error data to the orthogonal transformation unit 14. Output.
  • The orthogonal transform unit 14 performs orthogonal transform processing, such as the discrete cosine transform (DCT) or the Karhunen-Loève transform, on the prediction error data output from the subtraction unit 13.
  • the orthogonal transform unit 14 outputs transform coefficient data obtained by performing the orthogonal transform process to the quantization unit 15.
  • the quantization unit 15 is supplied with transform coefficient data output from the orthogonal transform unit 14 and a rate control signal from a rate control unit 18 described later.
  • the quantization unit 15 quantizes the transform coefficient data and outputs the quantized data to the lossless encoding unit 16 and the inverse quantization unit 21. Further, the quantization unit 15 changes the bit rate of the quantized data by switching the quantization parameter (quantization scale) based on the rate control signal from the rate control unit 18.
  • the lossless encoding unit 16 is supplied with quantized data output from the quantization unit 15, prediction mode information from the intra prediction unit 31 described later, prediction mode information from the motion prediction / compensation unit 32, and the like. Also, information indicating whether the optimal mode is intra prediction or inter prediction is supplied from the predicted image / optimum mode selection unit 35. Note that the prediction mode information includes prediction mode, block size information of a prediction unit, and the like according to intra prediction or inter prediction.
  • the lossless encoding unit 16 performs lossless encoding processing on the quantized data by, for example, variable length encoding or arithmetic encoding, generates image compression information, and outputs it to the accumulation buffer 17.
  • the lossless encoding part 16 performs the lossless encoding of the prediction mode information supplied from the intra prediction part 31, when the optimal mode is intra prediction.
  • the lossless encoding unit 16 performs lossless encoding of prediction mode information, prediction block information, difference motion vector information, and the like supplied from the motion prediction / compensation unit 32.
  • The lossless encoding unit 16 includes the losslessly encoded information in the image compression information. For example, the lossless encoding unit 16 adds it to the header information of the encoded stream that constitutes the image compression information.
  • the accumulation buffer 17 accumulates the compressed image information from the lossless encoding unit 16.
  • the accumulation buffer 17 outputs the accumulated image compression information at a transmission rate corresponding to the transmission path.
  • The rate control unit 18 monitors the free capacity of the accumulation buffer 17, generates a rate control signal according to the free capacity, and outputs it to the quantization unit 15.
  • the rate control unit 18 acquires information indicating the free capacity from the accumulation buffer 17, for example.
  • When the free capacity is low, the rate control unit 18 reduces the bit rate of the quantized data via the rate control signal. Conversely, when the free capacity of the accumulation buffer 17 is sufficiently large, the rate control unit 18 increases the bit rate of the quantized data via the rate control signal.
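The behavior of the rate control unit 18 can be sketched as adjusting the quantization parameter from buffer fullness; the thresholds and step size are illustrative assumptions, not values from the document:

```python
def rate_control(qp, free_capacity, buffer_size, low=0.2, high=0.8):
    """Sketch of rate control: raise QP (lower bit rate) when the
    accumulation buffer is nearly full, lower QP (raise bit rate) when it
    is nearly empty. QP is clamped to the H.264/AVC range [0, 51]."""
    fullness = 1.0 - free_capacity / buffer_size
    if fullness > high:
        qp += 1     # little free space: reduce bit rate
    elif fullness < low:
        qp -= 1     # ample free space: increase bit rate
    return max(0, min(51, qp))

rate_control(30, free_capacity=10, buffer_size=100)  # -> 31
```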
  • the inverse quantization unit 21 performs an inverse quantization process on the quantized data supplied from the quantization unit 15.
  • the inverse quantization unit 21 outputs transform coefficient data obtained by performing the inverse quantization process to the inverse orthogonal transform unit 22.
  • the inverse orthogonal transform unit 22 performs an inverse orthogonal transform process on the transform coefficient data supplied from the inverse quantization unit 21, and outputs the obtained data to the addition unit 23.
  • The addition unit 23 adds the data supplied from the inverse orthogonal transform unit 22 and the predicted image data supplied from the predicted image/optimum mode selection unit 35 to generate decoded image data, and outputs it to the deblocking filter 24 and the frame memory 25.
  • the decoded image data is used as image data for the reference image.
  • the deblocking filter 24 performs a filter process for reducing block distortion that occurs during image coding.
  • the deblocking filter 24 performs a filter process for removing block distortion from the decoded image data supplied from the adding unit 23, and outputs the decoded image data after the filter process to the frame memory 25.
  • the frame memory 25 holds the decoded image data before the filtering process supplied from the adding unit 23 and the decoded image data after the filtering process supplied from the deblocking filter 24.
  • the decoded image data held in the frame memory 25 is supplied as reference image data to the intra prediction unit 31 or the motion prediction / compensation unit 32 via the selector 26.
  • When the intra prediction unit 31 performs intra prediction, the selector 26 supplies the decoded image data before deblocking filter processing, held in the frame memory 25, to the intra prediction unit 31 as reference image data. When the motion prediction/compensation unit 32 performs inter prediction, the selector 26 supplies the decoded image data after deblocking filter processing, held in the frame memory 25, to the motion prediction/compensation unit 32 as reference image data.
  • The intra prediction unit 31 uses the input image data supplied from the screen rearrangement buffer 12 and the reference image data supplied from the frame memory 25 to perform prediction of the target block in all candidate intra prediction modes and determine the optimal intra prediction mode. For example, the intra prediction unit 31 calculates a cost function value in each intra prediction mode and, based on the calculated cost function values, sets the intra prediction mode with the best coding efficiency as the optimal intra prediction mode. The intra prediction unit 31 outputs the predicted image data generated in the optimal intra prediction mode and the cost function value of the optimal intra prediction mode to the predicted image/optimum mode selection unit 35. Further, the intra prediction unit 31 outputs prediction mode information indicating the optimal intra prediction mode to the lossless encoding unit 16.
  • The motion prediction/compensation unit 32 uses the input image data supplied from the screen rearrangement buffer 12 and the reference image data supplied from the frame memory 25 to perform prediction of the target block in all candidate inter prediction modes and determine the optimal inter prediction mode. For example, the motion prediction/compensation unit 32 calculates a cost function value in each inter prediction mode and, based on the calculated cost function values, sets the inter prediction mode with the best coding efficiency as the optimal inter prediction mode. In calculating the cost function values, the motion prediction/compensation unit 32 uses the prediction block information and the differential motion vector information generated by the predicted motion vector information setting unit 33. Further, the motion prediction/compensation unit 32 outputs the predicted image data generated in the optimal inter prediction mode and the cost function value of the optimal inter prediction mode to the predicted image/optimum mode selection unit 35, and outputs the prediction mode information, prediction block information, differential motion vector information, and the like for the optimal inter prediction mode to the lossless encoding unit 16.
  • the predicted motion vector information setting unit 33 sets the horizontal motion vector information of the encoded adjacent block as a candidate for predicted horizontal motion vector information for the target block. Also, the predicted motion vector information setting unit 33 generates differential motion vector information indicating the difference between the candidate horizontal predicted motion vector information and the horizontal motion vector information of the target block for each candidate. Further, the predicted motion vector information setting unit 33 sets horizontal motion vector information having the highest coding efficiency of the difference motion vector information among the candidates as the predicted horizontal motion vector information.
  • the prediction motion vector information setting unit 33 generates horizontal prediction block information indicating which adjacent block motion vector information the set horizontal prediction motion vector information is. For example, a flag (hereinafter referred to as “horizontal prediction block flag”) is generated as horizontal prediction block information.
  • the predicted motion vector information setting unit 33 sets the vertical motion vector information of the adjacent block that has been encoded for the target block as the candidate for the predicted vertical motion vector information. Also, the predicted motion vector information setting unit 33 generates differential motion vector information indicating the difference between the candidate vertical predicted motion vector information and the vertical motion vector information of the target block for each candidate. Further, the predicted motion vector information setting unit 33 sets the vertical motion vector information having the highest coding efficiency of the difference motion vector information among the candidates as the predicted vertical motion vector information. The predicted motion vector information setting unit 33 generates vertical predicted block information indicating which adjacent block motion vector information the set vertical predicted motion vector information is. For example, a flag (hereinafter referred to as “vertical prediction block flag”) is generated as the vertical prediction block information.
  • the predicted motion vector information setting unit 33 uses the motion vector information of the block indicated by the predicted block flag for the horizontal component and the vertical component, respectively, as the predicted motion vector information. Also, the predicted motion vector information setting unit 33 calculates difference motion vector information, which is the difference between the motion vector information of the target block and the predicted motion vector information, for each of the horizontal component and the vertical component, and sends it to the motion prediction / compensation unit 32. Output.
  • FIG. 8 shows the configuration of the motion prediction / compensation unit 32 and the predicted motion vector information setting unit 33.
  • the motion prediction / compensation unit 32 includes a motion search unit 321, a cost function value calculation unit 322, a mode determination unit 323, a motion compensation processing unit 324, and a motion vector information buffer 325.
  • the motion search unit 321 is supplied with the rearranged input image data supplied from the screen rearrangement buffer 12 and the reference image data read from the frame memory 25.
  • the motion search unit 321 performs motion search in all candidate inter prediction modes and detects a motion vector.
  • the motion search unit 321 outputs motion vector information indicating the detected motion vector to the cost function value calculation unit 322 together with the input image data and the reference image data when the motion vector is detected.
  • The cost function value calculation unit 322 is supplied with the motion vector information, input image data, and reference image data from the motion search unit 321, and with the prediction block information and differential motion vector information from the predicted motion vector information setting unit 33.
  • the cost function value calculation unit 322 calculates cost function values in all candidate inter prediction modes using the motion vector information, the input image data, the reference image data, the prediction block flag, and the difference motion vector information.
  • The cost function value is calculated based on either the High Complexity mode or the Low Complexity mode, as defined, for example, in the JM (Joint Model) reference software for the H.264/AVC system.
  • Cost(Mode ∈ Ω) = D + λ · R … (9)
  • Here, Ω indicates the entire set of prediction modes that are candidates for encoding the image of the block.
  • D indicates the differential energy (distortion) between the decoded image and the input image when encoding is performed in the prediction mode.
  • R is the generated code amount including orthogonal transform coefficients, prediction mode information, prediction block information, differential motion vector information, and the like, and λ is a Lagrange multiplier given as a function of the quantization parameter QP.
  • In the Low Complexity mode, the cost function value is calculated as in Expression (10).
  • Cost(Mode ∈ Ω) = D + QP2Quant(QP) · Header_Bit … (10)
  • Here, Ω indicates the entire set of prediction modes that are candidates for encoding the image of the block, D indicates the differential energy (distortion) between the predicted image and the input image for the prediction mode, Header_Bit is the header bit amount for the prediction mode, and QP2Quant is a function given as a function of the quantization parameter QP.
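The two JM cost functions described above are simple to state in code; the Low Complexity form Cost = D + QP2Quant(QP) × Header_Bit is assumed here from the standard JM definition, since the excerpt names its terms without reproducing the expression:

```python
def cost_high_complexity(d, r, lam):
    """Expression (9), High Complexity mode: Cost = D + lambda * R, with
    D the distortion, R the generated code amount, lambda the Lagrange
    multiplier derived from QP."""
    return d + lam * r

def cost_low_complexity(d, header_bit, qp2quant):
    """Low Complexity mode (standard JM form, assumed):
    Cost = D + QP2Quant(QP) * Header_Bit."""
    return d + qp2quant * header_bit

cost_high_complexity(100, 10, 0.5)   # -> 105.0
cost_low_complexity(100, 8, 2)       # -> 116
```

The mode determination unit then simply takes the mode with the minimum cost, as the text states.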
  • the cost function value calculation unit 322 outputs the calculated cost function value to the mode determination unit 323.
  • the mode determination unit 323 determines the mode having the minimum cost function value as the optimal inter prediction mode.
  • The mode determination unit 323 outputs optimal inter prediction mode information indicating the determined optimal inter prediction mode, together with the motion vector information, prediction block flag, differential motion vector information, and the like related to the optimal inter prediction mode, to the motion compensation processing unit 324.
  • the prediction mode information includes block size information and the like.
  • the motion compensation processing unit 324 performs motion compensation on the reference image data read from the frame memory 25 based on the optimal inter prediction mode information and the motion vector information, generates predicted image data, and outputs the predicted image data to the predicted image / optimum mode selection unit 35.
  • the motion compensation processing unit 324 outputs prediction mode information for optimal inter prediction, difference motion vector information in the mode, and the like to the lossless encoding unit 16.
  • the motion vector information buffer 325 holds motion vector information related to the optimal inter prediction mode. Also, the motion vector information buffer 325 outputs the motion vector information of the adjacent block that has been encoded with respect to the target block to be encoded, to the predicted motion vector information setting unit 33.
  • FIG. 9 is a diagram for explaining a motion prediction / compensation process with 1 ⁇ 4 pixel accuracy.
  • the position “A” is the position of an integer-precision pixel stored in the frame memory 25, the positions “b”, “c”, and “d” are positions with half-pixel precision, and the positions “e1”, “e2”, and “e3” are positions with quarter-pixel precision.
  • Clip1() is defined as shown in Expression (11): Clip1(a) = min(max(a, 0), max_pix)   (11)
  • In Expression (11), when the input image has 8-bit precision, the value of max_pix is 255.
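The clipping and interpolation steps can be sketched as follows, assuming the standard H.264/AVC 6-tap filter (1, -5, 20, 20, -5, 1) for half-pixel samples and rounded averaging for quarter-pixel samples; the function and variable names are illustrative, not taken from the document.

```python
# Hedged sketch of Clip1 (Expression (11)) and 1/4-pel interpolation as in H.264/AVC.

MAX_PIX = 255  # for 8-bit input image precision

def clip1(a, max_pix=MAX_PIX):
    """Expression (11): clamp a to the range [0, max_pix]."""
    return max(0, min(a, max_pix))

def half_pel(a_m2, a_m1, a0, a1, a2, a3):
    """Half-pixel sample via the 6-tap FIR filter (1, -5, 20, 20, -5, 1),
    with rounding offset 16 and a 5-bit shift, then clipping."""
    f = a_m2 - 5 * a_m1 + 20 * a0 + 20 * a1 - 5 * a2 + a3
    return clip1((f + 16) >> 5)

def quarter_pel(p, q):
    """Quarter-pixel sample: rounded average of two neighbouring samples."""
    return (p + q + 1) >> 1

# On a flat region all interpolated samples equal the integer samples.
print(half_pel(10, 10, 10, 10, 10, 10))  # -> 10
```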
  • the prediction motion vector information setting unit 33 includes a horizontal prediction motion vector information generation unit 331, a vertical prediction motion vector information generation unit 332, and an identification information generation unit 334.
  • the horizontal prediction motion vector information generation unit 331 sets horizontal prediction motion vector information that provides the highest encoding efficiency in the encoding process for the horizontal component of the motion vector information of the target block.
  • the horizontal predicted motion vector information generation unit 331 uses the horizontal motion vector information of the encoded adjacent blocks supplied from the motion prediction / compensation unit 32 as candidates for the horizontal predicted motion vector information. Further, the horizontal predicted motion vector information generation unit 331 generates horizontal difference motion vector information indicating the difference between the horizontal motion vector information of each candidate and the horizontal motion vector information of the target block supplied from the motion prediction / compensation unit 32. Further, the horizontal predicted motion vector information generation unit 331 sets the candidate horizontal motion vector information that minimizes the code amount of the horizontal difference motion vector information as the horizontal predicted motion vector information. The horizontal predicted motion vector information generation unit 331 outputs the set horizontal predicted motion vector information, and the horizontal difference motion vector information obtained when that horizontal predicted motion vector information is used, to the identification information generation unit 334 as the horizontal predicted motion vector information generation result.
  • the vertical prediction motion vector information generation unit 332 sets vertical prediction motion vector information that provides the highest encoding efficiency in the encoding process for the vertical component of the motion vector information of the target block.
  • the vertical motion vector predictor information generation unit 332 sets the vertical motion vector information of the encoded adjacent block supplied from the motion prediction / compensation unit 32 as a candidate for the vertical motion vector predictor information.
  • the vertical predicted motion vector information generation unit 332 generates vertical difference motion vector information indicating the difference between the vertical motion vector information of each candidate and the vertical motion vector information of the target block supplied from the motion prediction / compensation unit 32.
  • the vertical predicted motion vector information generation unit 332 sets the candidate vertical motion vector information that minimizes the code amount of the vertical difference motion vector information as the vertical predicted motion vector information.
  • the vertical predicted motion vector information generation unit 332 outputs the set vertical predicted motion vector information, and the vertical difference motion vector information obtained when that vertical predicted motion vector information is used, to the identification information generation unit 334 as the vertical predicted motion vector information generation result.
  • the identification information generation unit 334 generates horizontal prediction block information, for example, a horizontal prediction block flag indicating the block whose motion vector information is selected as the horizontal predicted motion vector information, based on the horizontal predicted motion vector information generation result.
  • the identification information generation unit 334 outputs the generated horizontal prediction block flag to the cost function value calculation unit 322 of the motion prediction / compensation unit 32 together with the horizontal difference motion vector information.
  • the identification information generation unit 334 generates vertical prediction block information, for example, a vertical prediction block flag indicating the block whose motion vector information is selected as the vertical predicted motion vector information, based on the vertical predicted motion vector information generation result.
  • the identification information generation unit 334 outputs the generated vertical prediction block flag to the cost function value calculation unit 322 of the motion prediction / compensation unit 32 together with the vertical difference motion vector information.
  • Note that the predicted motion vector information setting unit 33 may supply difference motion vector information indicating the difference between the horizontal (vertical) motion vector information of the target block and the motion vector information of each candidate, together with information indicating the candidate block, to the cost function value calculation unit 322. In this case, the candidate horizontal (vertical) motion vector information that minimizes the cost function value calculated by the cost function value calculation unit 322 is set as the horizontal (vertical) predicted motion vector information, and identification information indicating the candidate block having the smallest cost function value is used in inter prediction.
  • the predicted image / optimum mode selection unit 35 compares the cost function value supplied from the intra prediction unit 31 with the cost function value supplied from the motion prediction / compensation unit 32, and selects the mode with the smaller cost function value as the optimum mode with the best coding efficiency. Further, the predicted image / optimum mode selection unit 35 outputs the predicted image data generated in the optimum mode to the subtraction unit 13 and the addition unit 23. Further, the predicted image / optimum mode selection unit 35 outputs information indicating whether the optimum mode is the intra prediction mode or the inter prediction mode to the lossless encoding unit 16. Note that the predicted image / optimum mode selection unit 35 switches between intra prediction and inter prediction in units of slices.
  • FIG. 10 is a flowchart showing the operation of the image coding apparatus.
  • In step ST11, the A/D conversion unit 11 performs A/D conversion on the input image signal.
  • In step ST12, the screen rearrangement buffer 12 performs image rearrangement.
  • the screen rearrangement buffer 12 stores the image data supplied from the A / D conversion unit 11, and rearranges from the display order of each picture to the encoding order.
  • In step ST13, the subtraction unit 13 generates prediction error data.
  • the subtraction unit 13 calculates a difference between the image data of the images rearranged in step ST12 and the predicted image data selected by the predicted image / optimum mode selection unit 35, and generates prediction error data.
  • the prediction error data has a smaller data amount than the original image data. Therefore, the data amount can be compressed as compared with the case where the image is encoded as it is.
  • In step ST14, the orthogonal transform unit 14 performs an orthogonal transform process.
  • the orthogonal transformation unit 14 performs orthogonal transformation on the prediction error data supplied from the subtraction unit 13. Specifically, orthogonal transformation such as discrete cosine transformation and Karhunen-Loeve transformation is performed on the prediction error data, and transformation coefficient data is output.
  • In step ST15, the quantization unit 15 performs a quantization process.
  • the quantization unit 15 quantizes the transform coefficient data.
  • In this quantization, rate control is also performed, as described later in connection with the process of step ST25.
  • In step ST16, the inverse quantization unit 21 performs an inverse quantization process.
  • the inverse quantization unit 21 inversely quantizes the transform coefficient data quantized by the quantization unit 15 with characteristics corresponding to the characteristics of the quantization unit 15.
  • In step ST17, the inverse orthogonal transform unit 22 performs an inverse orthogonal transform process.
  • the inverse orthogonal transform unit 22 performs inverse orthogonal transform on the transform coefficient data inversely quantized by the inverse quantization unit 21 with characteristics corresponding to the characteristics of the orthogonal transform unit 14.
  • In step ST18, the adding unit 23 generates reference image data.
  • the adding unit 23 adds the predicted image data supplied from the predicted image / optimum mode selection unit 35 and the data after the inverse orthogonal transform at the position corresponding to the predicted image, and generates reference image data (decoded image data).
  • In step ST19, the deblocking filter 24 performs filter processing.
  • the deblocking filter 24 filters the decoded image data output from the adding unit 23 to remove block distortion.
  • In step ST20, the frame memory 25 stores reference image data.
  • the frame memory 25 stores reference image data (decoded image data) after filtering.
  • In step ST21, the intra prediction unit 31 and the motion prediction / compensation unit 32 each perform a prediction process. That is, the intra prediction unit 31 performs intra prediction processing in the intra prediction modes, and the motion prediction / compensation unit 32 performs motion prediction / compensation processing in the inter prediction modes.
  • the details of the prediction process will be described later with reference to FIG. 11.
  • The prediction process is performed in all candidate prediction modes, and the cost function values in all candidate prediction modes are calculated. Then, based on the calculated cost function values, the optimal intra prediction mode and the optimal inter prediction mode are selected, and the predicted images generated in the selected prediction modes, together with their cost function values and prediction mode information, are supplied to the predicted image / optimum mode selection unit 35.
  • In step ST22, the predicted image / optimum mode selection unit 35 selects predicted image data.
  • the predicted image / optimum mode selection unit 35 determines the optimum mode with the best coding efficiency based on the cost function values output from the intra prediction unit 31 and the motion prediction / compensation unit 32. Further, the predicted image / optimum mode selection unit 35 selects the predicted image data of the determined optimal mode and outputs it to the subtraction unit 13 and the addition unit 23. As described above, the predicted image data is used for the calculations in steps ST13 and ST18.
  • In step ST23, the lossless encoding unit 16 performs a lossless encoding process.
  • the lossless encoding unit 16 performs lossless encoding on the quantized data output from the quantization unit 15. That is, lossless encoding such as variable-length encoding or arithmetic encoding is performed on the quantized data, and the data is compressed. Further, the lossless encoding unit 16 also losslessly encodes the prediction mode information and the like corresponding to the predicted image data selected in step ST22, and includes the resulting lossless encoded data, such as the prediction mode information, in the image compression information generated by the lossless encoding of the quantized data.
  • In step ST24, the accumulation buffer 17 performs accumulation processing.
  • the accumulation buffer 17 accumulates the compressed image information output from the lossless encoding unit 16.
  • the compressed image information accumulated in the accumulation buffer 17 is read out as appropriate and transmitted to the decoding side via the transmission path.
  • In step ST25, the rate control unit 18 performs rate control.
  • the rate control unit 18 controls the rate of the quantization operation of the quantization unit 15 so that overflow or underflow does not occur in the accumulation buffer 17.
  • In step ST31, the intra prediction unit 31 performs an intra prediction process.
  • the intra prediction unit 31 performs intra prediction on the image of the target block in all candidate intra prediction modes.
  • The decoded image data before the deblocking filter processing by the deblocking filter 24 is used as the image data of the decoded image referred to in the intra prediction.
  • intra prediction is performed in all candidate intra prediction modes, and cost function values are calculated for all candidate intra prediction modes. Then, based on the calculated cost function value, one intra prediction mode with the best coding efficiency is selected from all the intra prediction modes.
  • In step ST32, the motion prediction / compensation unit 32 performs an inter prediction process.
  • the motion prediction / compensation unit 32 uses the decoded image data after the deblocking filter processing stored in the frame memory 25 to perform inter prediction processing in a candidate inter prediction mode.
  • In the inter prediction process, prediction processing is performed in all candidate inter prediction modes, and cost function values are calculated for all candidate inter prediction modes. Then, based on the calculated cost function values, the one inter prediction mode with the best coding efficiency is selected from all the inter prediction modes.
  • In step ST41, the intra prediction unit 31 performs intra prediction in each prediction mode.
  • the intra prediction unit 31 generates predicted image data for each intra prediction mode using the decoded image data before the deblocking filter processing.
  • In step ST42, the intra prediction unit 31 calculates a cost function value in each prediction mode.
  • The cost function value is calculated, as described above, based on either the High Complexity mode or the Low Complexity mode, as defined by JM (Joint Model), the reference software for the H.264/AVC format. In other words, in the High Complexity mode, as the process of step ST42, the processes up to the lossless encoding process are provisionally performed for all candidate prediction modes, and the cost function value represented by Expression (9) is calculated for each prediction mode.
  • In the Low Complexity mode, as the process of step ST42, generation of a predicted image and calculation of header bits such as motion vector information and prediction mode information are performed for all candidate prediction modes, and the cost function value represented by Expression (10) is calculated for each prediction mode.
  • In step ST43, the intra prediction unit 31 determines the optimal intra prediction mode. Based on the cost function values calculated in step ST42, the intra prediction unit 31 selects the intra prediction mode having the minimum cost function value and determines it as the optimal intra prediction mode.
  • In step ST51, the motion prediction / compensation unit 32 performs a motion prediction process.
  • the motion prediction / compensation unit 32 performs motion prediction for each prediction mode to detect a motion vector, and proceeds to step ST52.
  • In step ST52, the predicted motion vector information setting unit 33 performs a predicted motion vector information setting process.
  • the predicted motion vector information setting unit 33 generates a predicted block flag and difference motion vector information for the target block.
  • FIG. 14 is a flowchart showing the predicted motion vector information setting process.
  • In step ST61, the predicted motion vector information setting unit 33 selects candidates for the horizontal predicted motion vector information.
  • the predicted motion vector information setting unit 33 selects the horizontal motion vector information of the encoded blocks adjacent to the target block as candidates for the horizontal predicted motion vector information, and proceeds to step ST62.
  • In step ST62, the predicted motion vector information setting unit 33 performs a horizontal predicted motion vector information setting process.
  • the predicted motion vector information setting unit 33 detects the candidate horizontal motion vector information that minimizes the code amount of the horizontal difference motion vector information based on, for example, Expression (20):
  • arg min_i R(mvx − pmvx(i))   (20)
  • In Expression (20), mvx indicates the horizontal motion vector information of the target block, and pmvx(i) indicates the i-th candidate of the horizontal predicted motion vector information. R(mvx − pmvx(i)) indicates the code amount when the horizontal difference motion vector information, which indicates the difference between the i-th candidate of the horizontal predicted motion vector information and the horizontal motion vector information of the target block, is encoded.
  • the predicted motion vector information setting unit 33 generates a horizontal prediction block flag indicating the adjacent block whose horizontal motion vector information minimizes the code amount detected based on Expression (20). Further, the predicted motion vector information setting unit 33 generates the horizontal difference motion vector information obtained when that horizontal motion vector information is used, and the process proceeds to step ST63.
  • In step ST63, the predicted motion vector information setting unit 33 selects candidates for the vertical predicted motion vector information.
  • the predicted motion vector information setting unit 33 selects the vertical motion vector information of the encoded blocks adjacent to the target block as candidates for the vertical predicted motion vector information, and proceeds to step ST64.
  • In step ST64, the predicted motion vector information setting unit 33 performs a vertical predicted motion vector information setting process.
  • the predicted motion vector information setting unit 33 detects the candidate vertical motion vector information that minimizes the code amount of the vertical difference motion vector information based on, for example, Expression (21):
  • arg min_j R(mvy − pmvy(j))   (21)
  • In Expression (21), mvy indicates the vertical motion vector information of the target block, and pmvy(j) indicates the j-th candidate of the vertical predicted motion vector information. R(mvy − pmvy(j)) indicates the code amount when the vertical difference motion vector information, which indicates the difference between the j-th candidate of the vertical predicted motion vector information and the vertical motion vector information of the target block, is encoded.
  • the predicted motion vector information setting unit 33 generates a vertical prediction block flag indicating the adjacent block whose vertical motion vector information minimizes the code amount detected based on Expression (21). Also, the predicted motion vector information setting unit 33 generates the vertical difference motion vector information obtained when that vertical motion vector information is used, ends the predicted motion vector information setting process, and returns to step ST53.
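The per-component predictor selection of Expressions (20) and (21) can be sketched as follows. The code-amount model R() here is a crude stand-in (bit length of the magnitude) for the encoder's actual variable-length code, and all block names and vector values are illustrative.

```python
# Sketch of Expressions (20)/(21): pick, per component, the adjacent block whose
# motion vector component minimizes the code amount of the difference.

def rate(v):
    """Hypothetical code-amount model: roughly the bits needed to code |v|."""
    return abs(v).bit_length() + 1

def select_predictor(mv_component, candidates):
    """Return (index, difference) minimizing R(mv - pmv(i)) over the candidates."""
    i = min(range(len(candidates)), key=lambda k: rate(mv_component - candidates[k]))
    return i, mv_component - candidates[i]

# Target block motion vector (mvx, mvy) and adjacent blocks A, B, C.
mvx, mvy = 14, -3
pmvx = [12, 4, 15]   # horizontal components of blocks A, B, C
pmvy = [-2, 7, 0]    # vertical components of blocks A, B, C

ix, dvx = select_predictor(mvx, pmvx)  # horizontal prediction block flag = ix
iy, dvy = select_predictor(mvy, pmvy)  # vertical prediction block flag = iy
print(ix, dvx, iy, dvy)
```

Note that the horizontal and vertical flags can point at different adjacent blocks, which is exactly what setting the two predictors independently allows.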
  • In step ST53, the motion prediction / compensation unit 32 calculates a cost function value in each prediction mode.
  • the motion prediction / compensation unit 32 calculates the cost function value using the above-described Expression (9) or Expression (10). Also, the motion prediction / compensation unit 32 calculates the generated code amount using the difference motion vector information. Note that the calculation of cost function values for the inter prediction modes also includes evaluation of the cost function values in the Skipped Macroblock and Direct modes defined in the H.264/AVC format.
  • In step ST54, the motion prediction / compensation unit 32 determines the optimal inter prediction mode. Based on the cost function values calculated in step ST53, the motion prediction / compensation unit 32 selects the prediction mode having the minimum cost function value and determines it as the optimal inter prediction mode.
  • As described above, the image encoding device 10 individually sets the horizontal predicted motion vector and the vertical predicted motion vector for the target block. Further, the image encoding device 10 performs variable-length encoding on the horizontal difference motion vector information, which is the difference between the horizontal motion vector information of the target block and the horizontal predicted motion vector information. In addition, the image encoding device 10 performs variable-length encoding on the vertical difference motion vector information, which is the difference between the vertical motion vector information of the target block and the vertical predicted motion vector information. Which of the adjacent encoded blocks provides the horizontal predicted motion vector information and the vertical predicted motion vector information is indicated by the prediction block flags.
  • Note that the horizontal / vertical predicted motion vector information may also be set to the motion vector information of the adjacent block that minimizes the code amount obtained by adding the code amount of the horizontal difference motion vector information and the code amount of the vertical difference motion vector information, as in Expression (22):
  • arg min_k (R(mvx − pmvx(k)) + R(mvy − pmvy(k)))   (22)
  • In the present technology, it is sufficient to prepare six types (three types + three types) of flags.
  • In contrast, if a single flag were used to indicate the combination of horizontal and vertical predicted motion vector information, nine types (three types × three types) of flags would have to be prepared. That is, in the present technology, the number of flags to be prepared is reduced, so that the efficiency in encoding motion vector information can be improved.
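The flag-count comparison can be checked directly: with three candidate adjacent blocks, separate horizontal and vertical flags cover 3 + 3 = 6 cases, while a single joint flag over all combinations would need 3 × 3 = 9. The block labels below are illustrative.

```python
# Flag counts for three candidate adjacent blocks (illustrative labels A, B, C).
candidates = ["A", "B", "C"]

# Separate flags: one 3-valued horizontal flag plus one 3-valued vertical flag.
separate = len(candidates) + len(candidates)

# Joint flag: one value per (horizontal, vertical) combination.
joint = len([(h, v) for h in candidates for v in candidates])

print(separate, joint)  # -> 6 9
```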
  • <3. Configuration of Image Decoding Device> Next, the image decoding device will be described. Image compression information generated by encoding an input image is supplied to the image decoding device via a predetermined transmission path, recording medium, or the like and decoded.
  • FIG. 15 shows the configuration of the image decoding apparatus.
  • the image decoding device 50 includes an accumulation buffer 51, a lossless decoding unit 52, an inverse quantization unit 53, an inverse orthogonal transform unit 54, an addition unit 55, a deblocking filter 56, a screen rearrangement buffer 57, and a digital/analog conversion unit (D/A conversion unit) 58. Furthermore, the image decoding device 50 includes a frame memory 61, selectors 62 and 75, an intra prediction unit 71, a motion compensation unit 72, and a predicted motion vector information setting unit 73.
  • the accumulation buffer 51 accumulates the transmitted image compression information.
  • the lossless decoding unit 52 decodes the image compression information supplied from the accumulation buffer 51 by a method corresponding to the encoding method of the lossless encoding unit 16 in FIG.
  • the lossless decoding unit 52 outputs prediction mode information obtained by decoding the image compression information to the intra prediction unit 71 and the motion compensation unit 72. Further, the lossless decoding unit 52 outputs prediction block information (prediction block flag) and differential motion vector information obtained by decoding the image compression information to the motion compensation unit 72.
  • the inverse quantization unit 53 inversely quantizes the quantized data decoded by the lossless decoding unit 52 by a method corresponding to the quantization method of the quantization unit 15 of FIG.
  • the inverse orthogonal transform unit 54 performs inverse orthogonal transform on the output of the inverse quantization unit 53 by a method corresponding to the orthogonal transform method of the orthogonal transform unit 14 of FIG.
  • the addition unit 55 adds the data after inverse orthogonal transformation and the predicted image data supplied from the selector 75 to generate decoded image data, and outputs the decoded image data to the deblocking filter 56 and the frame memory 61.
  • the deblocking filter 56 performs deblocking filter processing on the decoded image data supplied from the addition unit 55 to remove block distortion, and supplies the result to the frame memory 61 and outputs it to the screen rearrangement buffer 57.
  • the screen rearrangement buffer 57 rearranges images. That is, the frames rearranged into the encoding order by the screen rearrangement buffer 12 in FIG. 7 are rearranged back into the original display order and output to the D/A conversion unit 58.
  • the D / A conversion unit 58 performs D / A conversion on the image data supplied from the screen rearrangement buffer 57 and outputs it to a display (not shown) to display an image.
  • the frame memory 61 stores the decoded image data before the filter processing by the deblocking filter 56 and the decoded image data after the filter processing by the deblocking filter 56.
  • Based on the prediction mode information supplied from the lossless decoding unit 52, the selector 62 supplies the decoded image data before the filter processing stored in the frame memory 61 to the intra prediction unit 71 in the case of decoding an intra prediction image. In the case of decoding an inter prediction image, the selector 62 supplies the decoded image data after the filter processing stored in the frame memory 61 to the motion compensation unit 72.
  • the intra prediction unit 71 generates predicted image data based on the prediction mode information supplied from the lossless decoding unit 52 and the decoded image data supplied from the frame memory 61 via the selector 62, and outputs the generated predicted image data to the selector 75.
  • the motion compensation unit 72 adds the difference motion vector information supplied from the lossless decoding unit 52 and the predicted motion vector information supplied from the predicted motion vector information setting unit 73 to generate the motion vector information of the block to be decoded. In addition, the motion compensation unit 72 performs motion compensation using the decoded image data supplied from the frame memory 61, based on the generated motion vector information and the prediction mode information supplied from the lossless decoding unit 52, generates predicted image data, and outputs it to the selector 75.
  • the prediction motion vector information setting unit 73 sets prediction motion vector information based on the prediction block information supplied from the lossless decoding unit 52.
  • the predicted motion vector information setting unit 73 sets the horizontal motion vector information of the block indicated by the horizontal prediction block flag among the decoded adjacent blocks as the horizontal predicted motion vector information for the target block.
  • Similarly, the vertical motion vector information of the block indicated by the vertical prediction block flag among the decoded adjacent blocks is set as the vertical predicted motion vector information.
  • the predicted motion vector information setting unit 73 outputs the set horizontal predicted motion vector information and vertical predicted motion vector information to the motion compensation unit 72.
  • FIG. 16 shows the configuration of the motion compensation unit 72 and the predicted motion vector information setting unit 73.
  • the motion compensation unit 72 includes a block size information buffer 721, a differential motion vector information buffer 722, a motion vector information generation unit 723, a motion compensation processing unit 724, and a motion vector information buffer 725.
  • the block size information buffer 721 stores block size information included in the prediction mode information supplied from the lossless decoding unit 52. Also, the block size information buffer 721 outputs the stored block size information to the motion compensation processing unit 724 and the predicted motion vector information setting unit 73.
  • the difference motion vector information buffer 722 stores the difference motion vector information supplied from the lossless decoding unit 52. Further, the differential motion vector information buffer 722 outputs the stored differential motion vector information to the motion vector information generation unit 723.
  • the motion vector information generation unit 723 adds the horizontal difference motion vector information supplied from the difference motion vector information buffer 722 and the predicted horizontal motion vector information set by the predicted motion vector information setting unit 73. In addition, the motion vector information generation unit 723 adds the vertical difference motion vector information supplied from the difference motion vector information buffer 722 and the predicted vertical motion vector information set by the predicted motion vector information setting unit 73. The motion vector information generation unit 723 outputs the motion vector information obtained by adding the difference motion vector information and the predicted motion vector information to the motion compensation processing unit 724 and the motion vector information buffer 725.
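The reconstruction performed by the motion vector information generation unit can be sketched as follows: the decoder adds each transmitted difference component to the predictor component taken from the adjacent block indicated by the corresponding prediction block flag. All names and values are illustrative, and the values mirror the encoder-side sketch.

```python
# Sketch of decoder-side motion vector reconstruction from difference motion
# vector information plus flag-selected predicted motion vector information.

def reconstruct_mv(dvx, dvy, flag_x, flag_y, neighbour_mvs):
    """neighbour_mvs: list of (mvx, mvy) of decoded adjacent blocks.

    flag_x / flag_y are the horizontal / vertical prediction block flags; note
    they may select different adjacent blocks for the two components.
    """
    pmvx = neighbour_mvs[flag_x][0]  # horizontal predictor from flagged block
    pmvy = neighbour_mvs[flag_y][1]  # vertical predictor from flagged block
    return (dvx + pmvx, dvy + pmvy)

neighbours = [(12, -2), (4, 7), (15, 0)]        # blocks A, B, C
mv = reconstruct_mv(-1, -1, 2, 0, neighbours)   # differences and flags from the stream
print(mv)  # -> (14, -3)
```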
  • the motion compensation processing unit 724 reads the image data of the reference image from the frame memory 61 based on the prediction mode information supplied from the lossless decoding unit 52.
  • the motion compensation processing unit 724 performs motion compensation based on the image data of the reference image, the block information supplied from the block size information buffer 721, and the motion vector information supplied from the motion vector information generation unit 723.
  • the motion compensation processing unit 724 outputs the predicted image data generated by the motion compensation to the selector 75.
  • the motion vector information buffer 725 stores the motion vector information supplied from the motion vector information generation unit 723. In addition, the motion vector information buffer 725 outputs the stored motion vector information to the predicted motion vector information setting unit 73.
  • the predicted motion vector information setting unit 73 includes a flag buffer 730, a horizontal predicted motion vector information generation unit 731 and a vertical predicted motion vector information generation unit 732.
  • the flag buffer 730 stores the prediction block flag supplied from the lossless decoding unit 52. Also, the flag buffer 730 outputs the stored prediction block flag to the horizontal prediction motion vector information generation unit 731 and the vertical prediction motion vector information generation unit 732.
  • the horizontal predicted motion vector information generation unit 731 selects the horizontal motion vector information of the block indicated by the horizontal prediction block flag from the horizontal motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72, and sets it as the horizontal predicted motion vector information.
  • the predicted horizontal motion vector information generation unit 731 outputs the set predicted horizontal motion vector information to the motion vector information generation unit 723 of the motion compensation unit 72.
  • the vertical predicted motion vector information generation unit 732 selects the vertical motion vector information of the block indicated by the vertical prediction block flag from the vertical motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72, and sets it as the vertical predicted motion vector information. The vertical predicted motion vector information generation unit 732 outputs the set vertical predicted motion vector information to the motion vector information generation unit 723 of the motion compensation unit 72.
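The reconstruction carried out by the motion vector information generation unit 723 is a per-component addition of difference and predicted motion vector information. The following is a minimal sketch of that operation; the function name, argument order, and tuple layout are illustrative assumptions, not taken from the embodiment:

```python
def reconstruct_motion_vector(diff_h, diff_v, pred_h, pred_v):
    """Rebuild motion vector information from difference and predicted
    components, handling the horizontal and vertical components
    independently, as units 723, 731, and 732 do in the embodiment."""
    mv_h = diff_h + pred_h  # horizontal difference + horizontal prediction
    mv_v = diff_v + pred_v  # vertical difference + vertical prediction
    return mv_h, mv_v

# Example: horizontal prediction 4, vertical prediction -2,
# transmitted differences 1 and 3 (illustrative values).
print(reconstruct_motion_vector(1, 3, 4, -2))  # → (5, 1)
```

Because the two components are reconstructed independently, the encoder is free to pick a different adjacent block as the predictor for each axis, which is exactly what the prediction block flags signal.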
  • the selector 75 selects the intra prediction unit 71 in the case of intra prediction, or the motion compensation unit 72 in the case of inter prediction, based on the prediction mode information supplied from the lossless decoding unit 52.
  • the selector 75 outputs the predicted image data generated by the selected intra prediction unit 71 or motion compensation unit 72 to the addition unit 55.
  • In step ST81, the accumulation buffer 51 accumulates the transmitted image compression information.
  • In step ST82, the lossless decoding unit 52 performs lossless decoding processing.
  • the lossless decoding unit 52 decodes the compressed image information supplied from the accumulation buffer 51. That is, the quantized data of each picture encoded by the lossless encoding unit 16 in FIG. 7 is obtained. Further, the lossless decoding unit 52 performs lossless decoding of the prediction mode information included in the image compression information, and outputs the obtained prediction mode information to the intra prediction unit 71 when it is information related to the intra prediction mode.
  • the lossless decoding unit 52 outputs the prediction mode information to the motion compensation unit 72 when it is information related to the inter prediction mode.
  • In step ST83, the inverse quantization unit 53 performs an inverse quantization process.
  • the inverse quantization unit 53 inversely quantizes the quantized data decoded by the lossless decoding unit 52 with characteristics corresponding to the characteristics of the quantization unit 15 in FIG.
  • In step ST84, the inverse orthogonal transform unit 54 performs an inverse orthogonal transform process.
  • the inverse orthogonal transform unit 54 performs inverse orthogonal transform on the transform coefficient data inversely quantized by the inverse quantization unit 53 with characteristics corresponding to the characteristics of the orthogonal transform unit 14 of FIG.
  • In step ST85, the addition unit 55 generates decoded image data.
  • the adder 55 adds the data obtained by performing the inverse orthogonal transform process and the predicted image data selected in step ST89 described later to generate decoded image data. As a result, the original image is decoded.
  • In step ST86, the deblocking filter 56 performs filter processing.
  • the deblocking filter 56 performs a deblocking filter process on the decoded image data output from the adding unit 55 to remove block distortion included in the decoded image.
  • In step ST87, the frame memory 61 stores the decoded image data.
  • In step ST88, the intra prediction unit 71 and the motion compensation unit 72 perform a predicted image generation process.
  • the intra prediction unit 71 and the motion compensation unit 72 perform a prediction image generation process corresponding to the prediction mode information supplied from the lossless decoding unit 52, respectively.
  • when prediction mode information for intra prediction is supplied from the lossless decoding unit 52, the intra prediction unit 71 generates predicted image data based on the prediction mode information.
  • the motion compensation unit 72 performs motion compensation based on the prediction mode information to generate predicted image data.
  • In step ST89, the selector 75 selects predicted image data.
  • the selector 75 selects either the predicted image data supplied from the intra prediction unit 71 or the predicted image data supplied from the motion compensation unit 72, and supplies the selected predicted image data to the addition unit 55, where, as described above, it is added to the output of the inverse orthogonal transform unit 54 in step ST85.
  • In step ST90, the screen rearrangement buffer 57 performs image rearrangement. That is, the screen rearrangement buffer 57 returns the frames, which were rearranged for encoding by the screen rearrangement buffer 12 of the image encoding device 10 of FIG. 7, to the original display order.
  • In step ST91, the D/A converter 58 performs D/A conversion of the image data from the screen rearrangement buffer 57. This image is output to a display (not shown), and the image is displayed.
  • In step ST101, the lossless decoding unit 52 determines whether or not the target block is intra-coded.
  • when the prediction mode information obtained by lossless decoding is prediction mode information for intra prediction, the lossless decoding unit 52 supplies the prediction mode information to the intra prediction unit 71 and proceeds to step ST102.
  • when the prediction mode information is inter prediction mode information, the lossless decoding unit 52 supplies the prediction mode information to the motion compensation unit 72 and proceeds to step ST103.
  • In step ST102, the intra prediction unit 71 performs intra prediction image generation processing.
  • the intra prediction unit 71 performs intra prediction using the decoded image data before deblocking filter processing and the prediction mode information stored in the frame memory 61, and generates predicted image data.
  • In step ST103, the motion compensation unit 72 performs inter prediction image generation processing.
  • the motion compensation unit 72 performs motion compensation of the reference image read from the frame memory 61 based on the prediction mode information and the difference motion vector information from the lossless decoding unit 52, and generates predicted image data.
  • FIG. 19 is a flowchart showing the inter predicted image generation processing in step ST103.
  • In step ST111, the motion compensation unit 72 acquires the prediction mode information from the lossless decoding unit 52, and proceeds to step ST112.
  • In step ST112, the motion compensation unit 72 and the predicted motion vector information setting unit 73 perform a motion vector information reconstruction process.
  • FIG. 20 is a flowchart showing the motion vector information reconstruction process.
  • In step ST121, the motion compensation unit 72 and the predicted motion vector information setting unit 73 acquire the prediction block flag and the difference motion vector information.
  • the motion compensation unit 72 acquires the difference motion vector information from the lossless decoding unit 52. Further, the predicted motion vector information setting unit 73 acquires the prediction block flag from the lossless decoding unit 52, and proceeds to step ST122.
  • In step ST122, the predicted motion vector information setting unit 73 performs horizontal predicted motion vector information setting processing.
  • the horizontal predicted motion vector information generation unit 731 selects the horizontal motion vector information of the block indicated by the horizontal prediction block flag from the horizontal motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72.
  • the horizontal predicted motion vector information generation unit 731 sets the selected horizontal motion vector information as the horizontal predicted motion vector information.
  • In step ST123, the motion compensation unit 72 reconstructs the horizontal motion vector information.
  • the motion compensation unit 72 reconstructs the horizontal motion vector information by adding the horizontal difference motion vector information and the horizontal predicted motion vector information, and proceeds to step ST124.
  • In step ST124, the predicted motion vector information setting unit 73 performs vertical predicted motion vector information setting processing.
  • the vertical predicted motion vector information generation unit 732 selects the vertical motion vector information of the block indicated by the vertical prediction block flag from the vertical motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72.
  • the vertical predicted motion vector information generation unit 732 sets the selected vertical motion vector information as the vertical predicted motion vector information.
  • In step ST125, the motion compensation unit 72 reconstructs the vertical motion vector information.
  • the motion compensation unit 72 reconstructs the vertical motion vector information by adding the vertical difference motion vector information and the predicted vertical motion vector information, and proceeds to step ST113 in FIG.
  • In step ST113, the motion compensation unit 72 generates predicted image data. Based on the prediction mode information acquired in step ST111 and the motion vector information reconstructed in step ST112, the motion compensation unit 72 reads reference image data from the frame memory 61, performs motion compensation, generates predicted image data, and outputs it to the selector 75.
  • the image decoding apparatus 50 sets the horizontal motion vector information of the adjacent block indicated by the horizontal prediction block flag as the horizontal predicted motion vector information, and the vertical motion vector information of the adjacent block indicated by the vertical prediction block flag as the vertical predicted motion vector information. Therefore, even if the horizontal predicted motion vector information and the vertical predicted motion vector information are individually set in order to improve the encoding efficiency in the image encoding device 10, the motion vector information can be correctly reconstructed.
  • the horizontal / vertical prediction motion vector information generation unit 333 sets the motion vector information of the encoded adjacent block supplied from the motion prediction / compensation unit 32 as a candidate for prediction motion vector information. Further, the horizontal / vertical prediction motion vector information generation unit 333 generates difference motion vector information indicating a difference between the motion vector information of each candidate and the motion vector information of the target block supplied from the motion prediction / compensation unit 32. Further, the horizontal / vertical prediction motion vector information generation unit 333 sets the motion vector information with the minimum coding amount detected based on the above equation (23) as the horizontal / vertical prediction motion vector information.
  • the horizontal/vertical predicted motion vector information generation unit 333 outputs, as its generation result, the horizontal/vertical predicted motion vector information and the difference motion vector information obtained when that predicted motion vector information is used to the identification information generation unit 334a.
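The candidate selection described above can be sketched as follows. This is a hedged illustration: equation (23) is not reproduced in this passage, so the coding cost of a candidate is approximated here by the sum of absolute component differences, and all names are assumptions rather than the embodiment's own:

```python
def select_predicted_mv(target_mv, candidate_mvs):
    """Choose, among the motion vectors of encoded adjacent blocks, the
    predicted motion vector whose difference from the target block's
    motion vector is cheapest to code. The cost |dh| + |dv| stands in
    for the coding-amount measure of equation (23)."""
    def cost(candidate):
        return (abs(target_mv[0] - candidate[0])
                + abs(target_mv[1] - candidate[1]))
    best = min(candidate_mvs, key=cost)
    # Difference motion vector information actually transmitted
    diff = (target_mv[0] - best[0], target_mv[1] - best[1])
    return best, diff

# Motion vectors of adjacent blocks A, B, C (illustrative values)
best, diff = select_predicted_mv((6, 2), [(5, 1), (0, 0), (6, 2)])
print(best, diff)  # → (6, 2) (0, 0)
```

In this toy run the third neighbor matches the target vector exactly, so the transmitted difference collapses to (0, 0), which is the situation in which predictive coding of motion vectors pays off most.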
  • the identification information generation unit 334a selects either the pair of horizontal predicted motion vector information and vertical predicted motion vector information, or the horizontal/vertical predicted motion vector information, and outputs the selected predicted motion vector information together with the difference motion vector information to the cost function value calculation unit 322.
  • For example, when the horizontal predicted motion vector information and the vertical predicted motion vector information are selected as the predicted motion vector information, the identification information generation unit 334a outputs the horizontal prediction block flag and the horizontal difference motion vector information to the cost function value calculation unit 322, as described above. Also, the identification information generation unit 334a outputs the vertical prediction block flag and the vertical difference motion vector information to the cost function value calculation unit 322.
  • when the horizontal/vertical predicted motion vector information is selected as the predicted motion vector information, the identification information generation unit 334a generates horizontal/vertical prediction block information indicating the block whose motion vector information is selected as the horizontal/vertical predicted motion vector information. For example, the identification information generation unit 334a generates a horizontal/vertical prediction block flag as the horizontal/vertical prediction block information. The identification information generation unit 334a outputs the generated horizontal/vertical prediction block flag to the cost function value calculation unit 322 together with the difference motion vector information.
  • the identification information generation unit 334a generates identification information indicating whether the pair of horizontal predicted motion vector information and vertical predicted motion vector information, or the horizontal/vertical predicted motion vector information, is selected. This identification information is supplied to the lossless encoding unit 16 via the motion prediction/compensation unit 32 and included in the picture parameter set or slice header of the image compression information.
  • the identification information generation unit 334a may switch between the pair of horizontal and vertical predicted motion vector information and the horizontal/vertical predicted motion vector information in units of pictures or slices. Further, when making this selection in units of pictures, the identification information generation unit 334a may perform the selection according to, for example, the picture type of the target block. That is, in a P picture, improving the efficiency of motion vector coding is important even if the flag information incurs some overhead.
  • Accordingly, in the case of a P picture, the horizontal prediction block flag and horizontal difference motion vector information, and the vertical prediction block flag and vertical difference motion vector information, are output to the cost function value calculation unit 322.
  • For a B picture, which would require a horizontal prediction block flag and a vertical prediction block flag for each of List0 prediction and List1 prediction, the added flag overhead does not necessarily realize optimal encoding efficiency, particularly at a low bit rate. Therefore, in the case of a B picture, it is possible to achieve optimal encoding efficiency by outputting the horizontal/vertical prediction block flag and the difference motion vector information to the cost function value calculation unit 322 as in the conventional art.
  • the flag buffer 730a switches the supply destination of the prediction block flag based on the identification information included in the image compression information. For example, when the horizontal predicted motion vector information and the vertical predicted motion vector information are selected, the flag buffer 730a outputs the prediction block flag to the horizontal predicted motion vector information generation unit 731 and the vertical predicted motion vector information generation unit 732. When the horizontal/vertical predicted motion vector information is selected, the flag buffer 730a outputs the prediction block flag to the horizontal/vertical predicted motion vector information generation unit 733. Further, for example, when the predicted motion vector information is switched according to the picture type, the flag buffer 730a switches the supply destination of the prediction block flag according to the picture type.
  • For example, suppose motion vector information is encoded using the horizontal predicted motion vector information and the vertical predicted motion vector information in the case of a P picture, and the horizontal/vertical predicted motion vector information in the case of a B picture.
  • In this case, the flag buffer 730a supplies the prediction block flag to the horizontal predicted motion vector information generation unit 731 and the vertical predicted motion vector information generation unit 732 in the case of a P picture, and to the horizontal/vertical predicted motion vector information generation unit 733 in the case of a B picture.
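The picture-type-dependent routing performed by flag buffer 730a reduces to a simple dispatch on the picture type. The sketch below assumes only the P/B distinction drives the routing; the dictionary keys are illustrative stand-ins for units 731, 732, and 733:

```python
def route_prediction_block_flag(picture_type, flag):
    """Mirror flag buffer 730a: for P pictures the flag feeds the
    separate horizontal (731) and vertical (732) generators; for B
    pictures it feeds the combined horizontal/vertical generator (733)."""
    if picture_type == "P":
        return {"unit_731": flag, "unit_732": flag}
    if picture_type == "B":
        return {"unit_733": flag}
    raise ValueError("unsupported picture type: %s" % picture_type)

print(sorted(route_prediction_block_flag("P", 1)))  # → ['unit_731', 'unit_732']
print(sorted(route_prediction_block_flag("B", 0)))  # → ['unit_733']
```

The decoder-side routing must agree with the encoder-side selection signaled in the picture parameter set or slice header, which is why the identification information is carried in the image compression information rather than derived independently.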
  • the lossless encoding unit 16 may perform different code assignments in the horizontal direction and the vertical direction. For example, assume that spatial predicted motion vector information and temporal predicted motion vector information can be used as the predicted motion vector information. In this case, in consideration of the imaging operation when the moving image to be encoded was generated, a code with a small amount of data is assigned to the predicted motion vector information with high prediction accuracy. For example, when a captured image is recorded by an imaging device described later and a panning operation of the imaging device moves the imaging direction horizontally, the motion vector information in the vertical direction is almost “0”.
  • In this case, in the vertical direction the prediction accuracy of the temporal predicted motion vector information is higher than that of the spatial predicted motion vector information, while in the horizontal direction the prediction accuracy of the spatial predicted motion vector information is higher than that of the temporal predicted motion vector information.
  • Therefore, in the horizontal prediction block information, code number “0” is assigned to the spatial predicted motion vector information block and code number “1” is assigned to the temporal predicted motion vector information block.
  • In the vertical prediction block information, code number “1” is assigned to the spatial predicted motion vector information block and code number “0” is assigned to the temporal predicted motion vector information block.
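The direction-dependent code assignment above amounts to a small lookup table: the shorter code number “0” goes to the predictor expected to be more accurate in each direction for a horizontally panning shot. A sketch (the table layout and names are illustrative):

```python
# Code number per (direction, predictor): "0" is the cheaper code, so it
# goes to spatial prediction horizontally and temporal prediction
# vertically, matching the assignment described in the text.
CODE_NUMBERS = {
    ("horizontal", "spatial"): 0,
    ("horizontal", "temporal"): 1,
    ("vertical", "spatial"): 1,
    ("vertical", "temporal"): 0,
}

def code_number(direction, predictor):
    return CODE_NUMBERS[(direction, predictor)]

print(code_number("horizontal", "spatial"),
      code_number("vertical", "temporal"))  # → 0 0
```

Swapping the assignment per axis costs nothing in signaling but lets the shorter codeword fall on the predictor that is statistically more likely to be chosen, which is the whole point of the asymmetric assignment.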
  • FIG. 23 is a diagram illustrating a configuration of a computer device that executes the above-described series of processing by a program.
  • the CPU 801 of the computer device 80 executes various processes according to programs recorded in the ROM 802 or the recording unit 808.
  • the RAM 803 appropriately stores programs executed by the CPU 801 and various data. These CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804.
  • An input / output interface 805 is also connected to the CPU 801 via the bus 804.
  • An input unit 806 such as a touch panel, a keyboard, a mouse, and a microphone, and an output unit 807 including a display are connected to the input / output interface 805.
  • the CPU 801 executes various processes in response to commands input from the input unit 806. Then, the CPU 801 outputs the processing result to the output unit 807.
  • the recording unit 808 connected to the input / output interface 805 includes, for example, a hard disk, and records programs executed by the CPU 801 and various data.
  • a communication unit 809 communicates with an external device via a wired or wireless communication medium, such as a network (for example, the Internet or a local area network) or digital broadcasting. Further, the computer device 80 may acquire a program via the communication unit 809 and record it in the ROM 802 or the recording unit 808.
  • when the removable medium 85 is mounted, the drive 810 drives it to acquire a recorded program or data.
  • the acquired program and data are transferred to the ROM 802, RAM 803, or recording unit 808 as necessary.
  • the CPU 801 reads and executes a program for performing the above-described series of processing, performs encoding processing on the image signals recorded in the recording unit 808 or the removable medium 85 and on the image signals supplied via the communication unit 809, and performs decoding processing on image compression information.
  • In the above description, H.264 is used as the encoding method and decoding method. However, the present technology can also be applied to an image encoding device and image decoding device using an encoding method and decoding method that perform other motion prediction and compensation processing.
  • the present technology can be applied, for example, when receiving image information (a bitstream) compressed by orthogonal transform such as discrete cosine transform and by motion compensation via network media such as satellite broadcasting, cable TV (television), the Internet, and mobile phones. Further, the present technology can be applied to an image encoding device and an image decoding device used when processing on storage media such as optical disks, magnetic disks, and flash memory.
  • the image encoding device 10 and the image decoding device 50 described above can be applied to any electronic device. Examples thereof will be described below.
  • FIG. 24 illustrates a schematic configuration of a television apparatus to which the present technology is applied.
  • the television apparatus 90 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, and an external interface unit 909. Furthermore, the television apparatus 90 includes a control unit 910, a user interface unit 911, and the like.
  • the tuner 902 selects a desired channel from the broadcast wave signal received by the antenna 901, performs demodulation, and outputs the obtained stream to the demultiplexer 903.
  • the demultiplexer 903 extracts video and audio packets of the program to be viewed from the stream, and outputs the extracted packet data to the decoder 904.
  • the demultiplexer 903 outputs a packet of data such as EPG (Electronic Program Guide) to the control unit 910. If scrambling is being performed, descrambling is performed by a demultiplexer or the like.
  • the decoder 904 performs packet decoding processing, and outputs video data generated by the decoding processing to the video signal processing unit 905 and audio data to the audio signal processing unit 907.
  • the video signal processing unit 905 performs noise removal, video processing according to user settings, and the like on the video data.
  • the video signal processing unit 905 generates video data of a program to be displayed on the display unit 906, image data by processing based on an application supplied via a network, and the like.
  • the video signal processing unit 905 generates video data for displaying a menu screen for selecting an item and the like, and superimposes the video data on the video data of the program.
  • the video signal processing unit 905 generates a drive signal based on the video data generated in this way, and drives the display unit 906.
  • the display unit 906 drives a display device (for example, a liquid crystal display element or the like) based on a drive signal from the video signal processing unit 905 to display a program video or the like.
  • the audio signal processing unit 907 performs predetermined processing such as noise removal on the audio data, performs D/A conversion processing and amplification processing on the processed audio data, and supplies the result to the speaker 908 for audio output.
  • the external interface unit 909 is an interface for connecting to an external device or a network, and transmits and receives data such as video data and audio data.
  • a user interface unit 911 is connected to the control unit 910.
  • the user interface unit 911 includes an operation switch, a remote control signal receiving unit, and the like, and supplies an operation signal corresponding to a user operation to the control unit 910.
  • the control unit 910 is configured using a CPU (Central Processing Unit), a memory, and the like.
  • the memory stores programs executed by the CPU, various data necessary for the CPU to perform processing, EPG data, data acquired via a network, and the like.
  • the program stored in the memory is read and executed by the CPU at a predetermined timing such as when the television device 90 is activated.
  • the CPU controls each unit so that the television device 90 operates according to the user operation by executing the program.
  • the television device 90 is provided with a bus 912 for connecting the tuner 902, the demultiplexer 903, the video signal processing unit 905, the audio signal processing unit 907, the external interface unit 909, and the control unit 910.
  • the decoder 904 is provided with the function of the image decoding apparatus (image decoding method) of the present application. Therefore, the television apparatus can correctly restore the motion vector information of the target block to be decoded based on the generated predicted motion vector information and the received difference motion vector information. Therefore, even if the broadcast station side sets the horizontal prediction motion vector information and the vertical prediction motion vector information individually and performs a process of increasing the encoding efficiency, the television apparatus can correctly decode.
  • FIG. 25 illustrates a schematic configuration of a mobile phone to which the present technology is applied.
  • the cellular phone 92 includes a communication unit 922, an audio codec 923, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording / reproducing unit 929, a display unit 930, and a control unit 931. These are connected to each other via a bus 933.
  • an antenna 921 is connected to the communication unit 922, and a speaker 924 and a microphone 925 are connected to the audio codec 923. Further, an operation unit 932 is connected to the control unit 931.
  • the mobile phone 92 performs various operations such as transmission / reception of voice signals, transmission / reception of e-mail and image data, image shooting, and data recording in various modes such as a voice call mode and a data communication mode.
  • the voice signal generated by the microphone 925 is converted into voice data and compressed by the voice codec 923 and supplied to the communication unit 922.
  • the communication unit 922 performs audio data modulation processing, frequency conversion processing, and the like to generate a transmission signal.
  • the communication unit 922 supplies a transmission signal to the antenna 921 and transmits it to a base station (not shown).
  • the communication unit 922 performs amplification, frequency conversion processing, demodulation processing, and the like of the reception signal received by the antenna 921, and supplies the obtained audio data to the audio codec 923.
  • the audio codec 923 decompresses the audio data, converts it into an analog audio signal, and outputs it to the speaker 924.
  • the control unit 931 receives character data input by operating the operation unit 932 and displays the input characters on the display unit 930.
  • the control unit 931 generates mail data based on a user instruction or the like in the operation unit 932 and supplies the mail data to the communication unit 922.
  • the communication unit 922 performs mail data modulation processing, frequency conversion processing, and the like, and transmits the obtained transmission signal from the antenna 921.
  • the communication unit 922 performs amplification, frequency conversion processing, demodulation processing, and the like of the reception signal received by the antenna 921, and restores mail data. This mail data is supplied to the display unit 930 to display the mail contents.
  • the mobile phone 92 can also store the received mail data in a storage medium by the recording / playback unit 929.
  • the storage medium is any rewritable storage medium.
  • For example, the storage medium is a semiconductor memory such as a RAM or a built-in flash memory, or a removable medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB memory, or a memory card.
  • the image data generated by the camera unit 926 is supplied to the image processing unit 927.
  • the image processing unit 927 performs image data encoding processing and generates image compression information.
  • the demultiplexing unit 928 multiplexes the image compression information generated by the image processing unit 927 and the audio data supplied from the audio codec 923 by a predetermined method, and supplies the multiplexed data to the communication unit 922.
  • the communication unit 922 performs modulation processing and frequency conversion processing of multiplexed data, and transmits the obtained transmission signal from the antenna 921.
  • the communication unit 922 performs amplification, frequency conversion processing, demodulation processing, and the like of the reception signal received by the antenna 921, and restores multiplexed data.
  • This multiplexed data is supplied to the demultiplexing unit 928.
  • the demultiplexing unit 928 performs demultiplexing of the multiplexed data, and supplies image compression information to the image processing unit 927 and audio data to the audio codec 923.
  • the image processing unit 927 performs a decoding process on the image compression information to generate image data.
  • the image data is supplied to the display unit 930 and the received image is displayed.
  • the audio codec 923 converts the audio data into an analog audio signal, supplies the analog audio signal to the speaker 924, and outputs the received audio.
  • the image processing unit 927 is provided with the functions of the image encoding device (image encoding method) and the image decoding device (image decoding method) of the present application. Therefore, when transmitting an image, for the target block, horizontal prediction motion vector information for the horizontal component of the motion vector information and vertical prediction motion vector information for the vertical component are individually set to improve encoding efficiency. be able to. Also, it is possible to correctly decode the compressed image information generated by the image encoding process.
  • FIG. 26 illustrates a schematic configuration of a recording / reproducing apparatus to which the present technology is applied.
  • the recording / reproducing apparatus 94 records, for example, audio data and video data of a received broadcast program on a recording medium, and provides the recorded data to the user at a timing according to a user instruction.
  • the recording / reproducing device 94 can also acquire audio data and video data from another device, for example, and record them on a recording medium.
  • the recording / reproducing device 94 decodes and outputs the audio data and video data recorded on the recording medium, thereby enabling image display and audio output on the monitor device or the like.
  • the recording / reproducing apparatus 94 includes a tuner 941, an external interface unit 942, an encoder 943, an HDD (Hard Disk Drive) unit 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) unit 948, a control unit 949, and a user interface unit 950.
  • Tuner 941 selects a desired channel from a broadcast signal received by an antenna (not shown).
  • the tuner 941 outputs image compression information obtained by demodulating the received signal of the desired channel to the selector 946.
  • the external interface unit 942 includes at least one of an IEEE 1394 interface, a network interface unit, a USB interface, a flash memory interface, and the like.
  • the external interface unit 942 is an interface for connecting to an external device, a network, a memory card, and the like, and receives data such as video data and audio data to be recorded.
  • the encoder 943 performs encoding by a predetermined method when the video data and audio data supplied from the external interface unit 942 are not encoded, and outputs image compression information to the selector 946.
  • the HDD unit 944 records content data such as video and audio, various programs, and other data on a built-in hard disk, and reads them from the hard disk during playback.
  • the disk drive 945 records and reproduces signals with respect to the mounted optical disk.
  • the optical disk is, for example, a DVD disk (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray disk.
  • the selector 946 selects any stream from the tuner 941 or the encoder 943 and supplies it to either the HDD unit 944 or the disk drive 945 when recording video or audio. In addition, the selector 946 supplies the stream output from the HDD unit 944 or the disk drive 945 to the decoder 947 when playing back video or audio.
  • the decoder 947 performs a stream decoding process.
  • the decoder 947 supplies the video data generated by performing the decoding process to the OSD unit 948.
  • the decoder 947 outputs audio data generated by performing the decoding process.
  • the OSD unit 948 generates video data for displaying a menu screen for selecting an item and the like, and superimposes it on the video data output from the decoder 947 and outputs the video data.
  • a user interface unit 950 is connected to the control unit 949.
  • the user interface unit 950 includes an operation switch, a remote control signal receiving unit, and the like, and supplies an operation signal corresponding to a user operation to the control unit 949.
  • the control unit 949 is configured using a CPU, a memory, and the like.
  • the memory stores programs executed by the CPU and various data necessary for the CPU to perform processing.
  • the program stored in the memory is read and executed by the CPU at a predetermined timing such as when the recording / reproducing apparatus 94 is activated.
  • the CPU executes the program to control each unit so that the recording / reproducing device 94 operates in accordance with the user operation.
  • the encoder 943 is provided with the function of the image encoding apparatus (image encoding method) of the present application.
  • the decoder 947 is provided with the function of the image decoding apparatus (image decoding method) of the present application. Therefore, when recording an image on a recording medium, encoding efficiency can be improved by individually setting, for the target block, the predicted horizontal motion vector information for the horizontal component of the motion vector information and the predicted vertical motion vector information for the vertical component. It is also possible to correctly decode the compressed image information generated by the image encoding process.
  • FIG. 27 illustrates a schematic configuration of an imaging apparatus to which the present technology is applied.
  • the imaging device 96 images a subject and displays an image of the subject on a display unit, or records it on a recording medium as image data.
  • the imaging device 96 includes an optical block 961, an imaging unit 962, a camera signal processing unit 963, an image data processing unit 964, a display unit 965, an external interface unit 966, a memory unit 967, a media drive 968, an OSD unit 969, and a control unit 970.
  • a user interface unit 971 and a motion detection sensor unit 972 are connected to the control unit 970.
  • the image data processing unit 964, the external interface unit 966, the memory unit 967, the media drive 968, the OSD unit 969, the control unit 970, and the like are connected via a bus 973.
  • the optical block 961 is configured using a focus lens, a diaphragm mechanism, and the like.
  • the optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962.
  • the imaging unit 962 is configured using a CCD or CMOS image sensor, generates an electrical signal corresponding to the optical image by photoelectric conversion, and supplies the electrical signal to the camera signal processing unit 963.
  • the camera signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the electrical signal supplied from the imaging unit 962.
  • the camera signal processing unit 963 supplies the image data after the camera signal processing to the image data processing unit 964.
  • the image data processing unit 964 performs an encoding process on the image data supplied from the camera signal processing unit 963.
  • the image data processing unit 964 supplies the compressed image information generated by performing the encoding process to the external interface unit 966 and the media drive 968. Further, the image data processing unit 964 performs a decoding process on the compressed image information supplied from the external interface unit 966 and the media drive 968.
  • the image data processing unit 964 supplies the image data generated by performing the decoding process to the display unit 965. The image data processing unit 964 also supplies the image data supplied from the camera signal processing unit 963 to the display unit 965, and superimposes display data acquired from the OSD unit 969 on that image data before supplying it to the display unit 965.
  • the OSD unit 969 generates display data such as a menu screen and icons made up of symbols, characters, or figures and outputs them to the image data processing unit 964.
  • the external interface unit 966 includes, for example, a USB input / output terminal, and is connected to a printer when printing an image.
  • a drive is connected to the external interface unit 966 as necessary, a removable medium such as a magnetic disk or an optical disk is appropriately mounted, and a program read from the medium is installed as necessary.
  • the external interface unit 966 has a network interface connected to a predetermined network such as a LAN or the Internet.
  • the control unit 970 can read the image compression information from the memory unit 967 according to an instruction from the user interface unit 971 and supply it from the external interface unit 966 to another device connected via the network.
  • the control unit 970 can also acquire image compression information and image data supplied from another device via the network through the external interface unit 966 and supply them to the image data processing unit 964.
  • as the recording medium driven by the media drive 968, any readable/writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory is used.
  • the recording medium may be any type of removable medium, and may be a tape device, a disk, or a memory card. Of course, a non-contact IC card or the like may be used.
  • the media drive 968 and the recording medium may be integrated and configured as a non-portable storage medium such as a built-in hard disk drive or an SSD (Solid State Drive).
  • the control unit 970 is configured using a CPU, a memory, and the like.
  • the memory stores programs executed by the CPU, various data necessary for the CPU to perform processing, and the like.
  • the program stored in the memory is read and executed by the CPU at a predetermined timing such as when the imaging device 96 is activated.
  • the CPU executes the program to control each unit so that the imaging device 96 operates according to the user operation.
  • the image data processing unit 964 is provided with the functions of the image encoding device (image encoding method) and the image decoding device (image decoding method) of the present application. Therefore, when recording a captured image, encoding efficiency can be improved by individually setting, for the target block, the horizontal predicted motion vector information for the horizontal component of the motion vector information and the vertical predicted motion vector information for the vertical component. It is also possible to correctly decode the compressed image information generated by the image encoding process.
  • a motion detection sensor unit 972 configured using a gyro or the like is provided in the imaging device 96, and based on the detected motion of the imaging device 96, such as panning or tilting, a code with a small amount of data is assigned to the predicted motion vector information that is predicted with high accuracy. In this way, coding efficiency can be further improved by dynamically allocating codes according to the motion detection result of the imaging apparatus.
  • for each of the horizontal component and the vertical component of the motion vector information of the target block, motion vector information is selected from the encoded blocks adjacent to the target block, horizontal predicted motion vector information and vertical predicted motion vector information are set respectively, and compression processing of the motion vector information of the target block is performed using the set horizontal predicted motion vector information and vertical predicted motion vector information.
  • horizontal prediction block information and vertical prediction block information, each indicating the block whose motion vector information was selected, are generated. Further, the motion vector information is decoded based on the horizontal prediction block information and the vertical prediction block information.
  • the present technology is suitable for apparatuses that transmit and receive image compression information (bitstreams) via network media such as satellite broadcasting, cable TV, the Internet, and cellular phones, or that record and reproduce images using storage media such as optical disks, magnetic disks, and flash memory.
  • Image decoding device, 52 ... Lossless decoding unit, 58 ... D/A conversion unit, 72 ... Motion compensation unit, 80 ... Computer device, 90 ... Television device, 92 ... Mobile phone, 94 ... Recording/reproducing device, 96 ... Imaging device, 321 ... Motion search unit, 322 ... Cost function value calculation unit, 323 ... Mode determination unit, 324 ... Motion compensation processing unit, 325 ... Motion vector buffer, 331, 731 ... Horizontal prediction motion vector information generation unit, 332, 732 ... Vertical prediction motion vector information generation unit, 333, 733 ... Horizontal and vertical prediction motion vector information generation unit, 334, 334a ... Identification information generation unit, 721 ... Block size information buffer, 722 ... Information buffer, 723 ... Motion vector information generation unit, 724 ... Motion compensation processing unit, 725 ... Motion vector information buffer, 730, 730a ... Flag buffer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

In the present invention, a lossless decoding unit (52) acquires, from compressed image information, horizontal predicted block information indicating the block, among the block to be decoded's adjacent decoded blocks, whose motion vector information has been selected as horizontal predicted motion vector information, and vertical predicted block information indicating the block whose motion vector information has been selected as vertical predicted motion vector information. A predicted motion vector information setting unit (73) sets the motion vector information of the block indicated by the horizontal predicted block information as the horizontal predicted motion vector information, and sets the motion vector information of the block indicated by the vertical predicted block information as the vertical predicted motion vector information. Using the set horizontal predicted motion vector information and vertical predicted motion vector information, the motion vector information generating unit of a motion compensation unit (72) generates the motion vector information of the block to be decoded. As a result, encoding efficiency is increased.

Description

Image decoding apparatus and motion vector decoding method, image encoding apparatus and motion vector encoding method
This technology relates to an image decoding device and motion vector decoding method, and to an image encoding device and motion vector encoding method. Specifically, it improves the efficiency of moving picture encoding.
In recent years, devices that handle image information digitally and transmit and store it with high efficiency, for example devices conforming to schemes such as MPEG that compress images using orthogonal transforms such as the discrete cosine transform together with motion compensation, have become widespread in broadcasting stations and general households.
In particular, MPEG2 (ISO/IEC 13818-2) is defined as a general-purpose image encoding method and is currently used widely in a broad range of professional and consumer applications. By using the MPEG2 compression method, a high compression ratio and good image quality can be realized by assigning a code amount (bit rate) of 4 to 8 Mbps to a standard-resolution interlaced image of, for example, 720 × 480 pixels. Likewise, for a high-resolution interlaced image of 1920 × 1088 pixels, a high compression ratio and good image quality can be realized by assigning a code amount of 18 to 22 Mbps.
Furthermore, although its encoding and decoding require a larger amount of computation than conventional methods such as MPEG2 and MPEG4, standardization achieving higher coding efficiency was carried out as the Joint Model of Enhanced-Compression Video Coding, and the result became an international standard under the names H.264 and MPEG-4 Part 10 (hereinafter "H.264/AVC (Advanced Video Coding)").
In H.264/AVC, as shown in FIG. 1(A), one macroblock composed of 16 × 16 pixels can be divided into blocks of 16 × 16, 16 × 8, 8 × 16, or 8 × 8, each having independent motion vector information. Furthermore, as shown in FIG. 1(B), an 8 × 8 pixel sub-macroblock can be divided into motion compensation block sizes of 8 × 8, 8 × 4, 4 × 8, or 4 × 4, each having independent motion vector information. In MPEG-2, by contrast, the unit of motion prediction/compensation processing is 16 × 16 pixels in the frame motion compensation mode, and 16 × 8 pixels for each of the first field and the second field in the field motion compensation mode.
Because such motion prediction/compensation processing is performed in H.264/AVC, a large amount of motion vector information is generated, and encoding it as-is would reduce coding efficiency.
As a technique for solving this problem, H.264/AVC reduces the amount of motion vector information using the following median prediction.
In FIG. 2, block E is the target block about to be encoded, and blocks A to D are already-encoded blocks adjacent to the target block E.
Letting X = A, B, C, D, E, the motion vector information for block X is denoted mvX.
Using the motion vector information of blocks A, B, and C, the predicted motion vector information pmvE for the target block E is generated by median prediction as in Equation (1).
  pmvE = med(mvA, mvB, mvC)   ... (1)
If the information about adjacent block C cannot be obtained, for example because block C lies at the edge of the picture frame, the information about adjacent block D is used instead.
The data mvdE encoded into the image compression information as the motion vector information for the target block E is generated using pmvE as in Equation (2).
  mvdE = mvE - pmvE   ... (2)
Note that the actual processing is performed independently for each of the horizontal and vertical components of the motion vector information.
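The median prediction of Equation (1) and the differential coding of Equation (2) can be sketched in Python as follows. This is a simplified illustration, not the standard's normative derivation: the neighbor vectors are hypothetical values, and the component-wise median mirrors the note above that each component is processed independently.

```python
def median_predict(mv_a, mv_b, mv_c):
    """Component-wise median of the three neighboring motion vectors (Eq. 1)."""
    return tuple(sorted(comps)[1] for comps in zip(mv_a, mv_b, mv_c))

def encode_mvd(mv_e, mv_a, mv_b, mv_c):
    """Differential motion vector mvdE actually written to the stream (Eq. 2)."""
    pmv_e = median_predict(mv_a, mv_b, mv_c)
    return tuple(m - p for m, p in zip(mv_e, pmv_e))

# Hypothetical neighbor vectors (x, y)
mv_a, mv_b, mv_c = (4, -2), (6, 0), (5, 8)
mv_e = (5, 1)  # motion vector of target block E
print(median_predict(mv_a, mv_b, mv_c))    # (5, 0)
print(encode_mvd(mv_e, mv_a, mv_b, mv_c))  # (0, 1)
```

Because the median is taken per component, the predictor's x component may come from one neighbor and its y component from another, which is what makes the prediction robust to a single outlier vector.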
H.264/AVC also defines a multi-reference frame (Multi-Reference Frame) scheme, which is described here with reference to FIG. 3.
In MPEG2 and the like, motion prediction/compensation for a P picture referred to only the single reference frame stored in the frame memory. In H.264/AVC, however, as shown in FIG. 3, a plurality of reference frames can be stored in memory, and each block can refer to a different one.
The amount of motion vector information in a B picture is enormous, but H.264/AVC provides a mode called the direct mode (Direct Mode). In the direct mode, motion vector information is not stored in the image compression information; instead, the decoding apparatus derives the motion vector information of the block from the motion vector information of surrounding blocks or of the anchor block (co-located block). The anchor block is the block in the reference picture whose xy coordinates are the same as those of the target block.
There are two types of direct mode, the spatial direct mode (Spatial Direct Mode) and the temporal direct mode (Temporal Direct Mode), and which one is used can be switched for each slice.
In the spatial direct mode, as shown in Equation (3), the motion vector information pmvE generated by median prediction is used as the motion vector information mvE applied to the block.
  mvE = pmvE   ... (3)
Next, the temporal direct mode (Temporal Direct Mode) is described with reference to FIG. 4. In FIG. 4, the block at the same spatial address as the current block in the L0 reference picture is taken as the anchor block, and the motion vector information of the anchor block is denoted mvcol. The distance on the time axis between the current picture and the L0 reference picture is denoted TDB, and the distance on the time axis between the L0 reference picture and the L1 reference picture is denoted TDD. In this case, the L0 motion vector information mvL0 and the L1 motion vector information mvL1 of the current picture are calculated as in Equations (4) and (5).
  mvL0 = (TDB / TDD) mvcol   ... (4)
  mvL1 = ((TDD - TDB) / TDD) mvcol   ... (5)
Since the image compression information contains no information expressing distance on the time axis, the calculations in Equations (4) and (5) are performed using the POC (Picture Order Count).
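The POC-based scaling of Equations (4) and (5) can be sketched as follows. The POC values and the anchor vector are hypothetical, and real implementations use fixed-point integer scaling with rounding rather than the floating-point division used here for clarity.

```python
def temporal_direct(mv_col, poc_cur, poc_l0, poc_l1):
    """Scale the anchor block's vector mv_col by POC distances (Eqs. 4 and 5)."""
    tdb = poc_cur - poc_l0  # time-axis distance: current picture to L0 reference
    tdd = poc_l1 - poc_l0   # time-axis distance: L0 reference to L1 reference
    mv_l0 = tuple(tdb / tdd * c for c in mv_col)            # Eq. (4)
    mv_l1 = tuple((tdd - tdb) / tdd * c for c in mv_col)    # Eq. (5)
    return mv_l0, mv_l1

# Current picture halfway (in POC terms) between the L0 and L1 references
mv_l0, mv_l1 = temporal_direct((8, -4), poc_cur=2, poc_l0=0, poc_l1=4)
print(mv_l0)  # (4.0, -2.0)
print(mv_l1)  # (4.0, -2.0)
```

With the current picture exactly halfway between the references, both derived vectors are half the anchor vector, which matches the intuition that the anchor motion is split proportionally across the two prediction directions.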
In AVC image compression information, the direct mode can be specified in units of 16 × 16 pixel macroblocks or 8 × 8 pixel sub-macroblocks.
Non-Patent Document 1 proposes an improvement to the encoding of motion vector information using median prediction as shown in FIG. 2. In Non-Patent Document 1, in addition to the spatial predicted motion vector information obtained by median prediction, either temporal predicted motion vector information or spatio-temporal predicted motion vector information can be used adaptively.
That is, in FIG. 5, the motion vector information mvcol is the motion vector information of the anchor block for the target block, and the motion vector information mvtk (k = 0 to 8) is the motion vector information of its surrounding blocks.
The temporal predicted motion vector information mvtm is generated from five pieces of motion vector information using, for example, Equation (6); it may also be generated from nine motion vectors using Equation (7).
  mvtm5 = med(mvcol, mvt0, ..., mvt3)   ... (6)
  mvtm9 = med(mvcol, mvt0, ..., mvt7)   ... (7)
The spatio-temporal predicted motion vector information mvspt is generated from five pieces of motion vector information using Equation (8).
  mvspt = med(mvcol, mvcol, mvA, mvB, mvC)   ... (8)
In an image processing apparatus that encodes image information, a cost function value is calculated for each block for each candidate predicted motion vector information, and the optimal predicted motion vector information is selected. In the image compression information, a flag, for example, is transmitted for each block so that it can be identified which predicted motion vector information was used.
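The encoder-side selection just described can be sketched as follows. The cost function here is a hypothetical stand-in (a rough bit count for the signed residual components); a real encoder would use a full rate-distortion cost, and the candidate names are only illustrative stand-ins for the spatial, temporal, and spatio-temporal predictors.

```python
def residual_bits(mv, pmv):
    """Hypothetical rate cost: rough bit count for each signed residual component."""
    return sum(2 * abs(m - p).bit_length() + 1 for m, p in zip(mv, pmv))

def select_predictor(mv, candidates):
    """Pick the candidate predicted MV (by flag name) minimizing the rate cost.

    `candidates` maps a flag value to a candidate predicted motion vector.
    """
    return min(candidates, key=lambda flag: residual_bits(mv, candidates[flag]))

mv = (7, 3)  # motion vector found by motion search for the block
candidates = {'spatial': (5, 0), 'temporal': (7, 2), 'spatio-temporal': (0, 0)}
print(select_predictor(mv, candidates))  # 'temporal'
```

The winning flag is what gets written to the image compression information; the decoder uses it to pick the same predictor and add back the transmitted residual.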
Also, for a large picture frame such as UHD (Ultra High Definition: 4000 × 2000 pixels), the 16 × 16 pixel macroblock size defined in MPEG2 and H.264/AVC may not be optimal. For example, in a large picture frame, it may be possible to increase coding efficiency by enlarging the macroblock size. Therefore, in HEVC (High Efficiency Video Coding), the next-generation coding scheme, a coding unit CU (Coding Unit) is defined, as shown in Non-Patent Document 2. In Non-Patent Document 2, the maximum size (LCU: Largest Coding Unit) and minimum size (SCU: Smallest Coding Unit) of the coding unit CU are specified in the SPS (Sequence Parameter Set) of the output image compression information. Furthermore, within each LCU, setting split-flag = 1 divides the unit into smaller coding units CU, within a range not falling below the SCU size.
FIG. 6 illustrates the hierarchical structure of coding units CU, for the case where the maximum size is 128 × 128 pixels and the hierarchy depth is "5". For example, at depth "0", a 2N × 2N (N = 64 pixels) block is coding unit CU0. When split flag = 1, coding unit CU0 is divided into four independent N × N blocks, each becoming a block one layer down: at depth "1", a 2N × 2N (N = 32 pixels) block is coding unit CU1. Similarly, each block with split flag = 1 is divided into four independent blocks. At the deepest layer, depth "4", a 2N × 2N (N = 4 pixels) block is coding unit CU4, so 8 × 8 pixels is the minimum coding unit size. HEVC also defines the prediction unit (PU: Prediction Unit), the basic unit for prediction, obtained by dividing a coding unit.
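The size/depth relation of FIG. 6 can be sketched as a simple loop: starting from the LCU, each increase in depth halves the block side, down to the SCU. This is only the size bookkeeping; in the actual bitstream each CU carries its own split-flag, and the quadtree shape varies per LCU.

```python
def cu_sizes(lcu_size=128, scu_size=8):
    """Coding-unit side length at each hierarchy depth, from LCU down to SCU."""
    sizes = {}
    depth, size = 0, lcu_size
    while size >= scu_size:
        sizes[depth] = size  # split-flag = 1 at this depth yields four blocks of size // 2
        depth, size = depth + 1, size // 2
    return sizes

print(cu_sizes())  # {0: 128, 1: 64, 2: 32, 3: 16, 4: 8}
```

With LCU = 128 and SCU = 8 this yields the five depth levels (0 through 4) of FIG. 6, with 8 × 8 pixels as the minimum coding unit size.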
However, Non-Patent Document 1 cannot hold prediction information independently for the horizontal and vertical components of the motion vector, so a sufficient improvement in coding efficiency cannot be realized. For example, if there are three types of candidates in the horizontal direction and three in the vertical direction, there are nine (3 × 3) combinations of horizontal and vertical candidates, so a method of preparing nine types of flags and encoding accordingly is also conceivable. With many combinations, however, the number of flag types grows, and the code amount needed to signal the flag increases.
Therefore, an object of this technology is to provide an image decoding device and motion vector decoding method, and an image encoding device and motion vector encoding method, that can improve coding efficiency.
A first aspect of this technology is an image decoding device including: a lossless decoding unit that acquires, from image compression information, horizontal prediction block information indicating the block, among decoded blocks adjacent to a target block, whose motion vector information was selected as horizontal predicted motion vector information, and vertical prediction block information indicating the block whose motion vector information was selected as vertical predicted motion vector information; a predicted motion vector information setting unit that sets the motion vector information of the block indicated by the horizontal prediction block information as the horizontal predicted motion vector information and sets the motion vector information of the block indicated by the vertical prediction block information as the vertical predicted motion vector information; and a motion vector information generation unit that generates the motion vector information of the target block using the horizontal predicted motion vector information and vertical predicted motion vector information set by the predicted motion vector information setting unit.
This technology applies to an image decoding device that decodes image compression information generated by dividing input image data into a plurality of pixel blocks, detecting motion vector information for each block, and performing motion-compensated predictive encoding. In this device, horizontal prediction block information indicating the block, among decoded blocks adjacent to the target block, whose motion vector information was selected as horizontal predicted motion vector information, and vertical prediction block information indicating the block whose motion vector information was selected as vertical predicted motion vector information, are acquired from the image compression information. The motion vector information of the block indicated by the horizontal prediction block information is set as the horizontal predicted motion vector information, and the motion vector information of the block indicated by the vertical prediction block information is set as the vertical predicted motion vector information. The motion vector information of the target block is then generated using the set horizontal predicted motion vector information and vertical predicted motion vector information.
 Identification information is also obtained from the image compression information; it indicates whether the horizontal and vertical predicted motion vector information are used, or whether horizontal-vertical predicted motion vector information (motion vector information selected from an adjacent decoded block and applied to both the horizontal and vertical components of the target block's motion vector information) is used. Based on this identification information, either the horizontal and vertical predicted motion vector information or the horizontal-vertical predicted motion vector information is set, and the motion vector information of the target block is generated.
 According to a second aspect of the present technology, there is provided a motion vector information decoding method including the steps of: obtaining, from image compression information, horizontal prediction block information indicating the decoded block, adjacent to the target block, whose motion vector information was selected as horizontal predicted motion vector information, and vertical prediction block information indicating the block whose motion vector information was selected as vertical predicted motion vector information; setting the motion vector information of the block indicated by the horizontal prediction block information as the horizontal predicted motion vector information and setting the motion vector information of the block indicated by the vertical prediction block information as the vertical predicted motion vector information; and generating the motion vector information of the target block using the set horizontal and vertical predicted motion vector information.
 According to a third aspect of the present technology, there is provided an image encoding device including a predicted motion vector information setting unit that, for each of the horizontal component and the vertical component of the motion vector information of the target block, selects motion vector information from the encoded blocks adjacent to the target block to set horizontal predicted motion vector information and vertical predicted motion vector information, and that generates horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information was selected.
 In this technology, an image encoding device divides input image data into a plurality of pixel blocks, detects motion vector information for each block, and performs motion-compensated predictive encoding. For each of the horizontal and vertical components of the motion vector information of the target block, motion vector information is selected from the encoded blocks adjacent to the target block to set horizontal predicted motion vector information and vertical predicted motion vector information. For example, for the horizontal component of the motion vector information obtained by motion search in the optimum prediction mode (the mode that minimizes the cost function value), the motion vector information of the adjacent encoded block that yields the highest encoding efficiency is selected and set as the horizontal predicted motion vector information. Likewise, for the vertical component of the motion vector information obtained by motion search in the optimum prediction mode, the motion vector information of the adjacent encoded block that yields the highest encoding efficiency is selected and set as the vertical predicted motion vector information. The motion vector information of the target block is then compressed using the horizontal and vertical predicted motion vector information. In addition, horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information was selected are generated and included in the image compression information.
 The setting can also be switched, for each picture or each slice, between setting horizontal and vertical predicted motion vector information and setting horizontal-vertical predicted motion vector information (motion vector information selected from an adjacent encoded block and applied to both the horizontal and vertical components of the target block's motion vector information). For example, horizontal and vertical predicted motion vector information may be set for P pictures, while horizontal-vertical predicted motion vector information is set for B pictures. Furthermore, identification information indicating which of the two is used is provided in the image compression information.
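The picture-level switching just described can be sketched as follows. The P/B mapping mirrors the example in the text and is not fixed by the technology; the function name and return values are illustrative assumptions:

```python
def select_prediction_scheme(picture_type):
    """Choose the predictor signaling scheme for a picture (or slice).

    Returns "separate" when horizontal and vertical predicted motion vector
    information are set independently, and "combined" when a single
    horizontal-vertical predicted motion vector information is used for both
    components. The chosen scheme is signaled to the decoder as
    identification information in the image compression information.
    """
    if picture_type == "P":
        return "separate"
    if picture_type == "B":
        return "combined"
    raise ValueError("unsupported picture type: %s" % picture_type)

print(select_prediction_scheme("P"))  # -> separate
print(select_prediction_scheme("B"))  # -> combined
```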
 Further, for example, codes are assigned to the horizontal prediction block information and to the vertical prediction block information, and the assigned codes are included in the image compression information. Moreover, when encoding motion vector information detected from image data generated by an imaging device, the code assignment is performed based on the motion detection result of the imaging device.
 According to a fourth aspect of the present technology, there is provided a motion vector information encoding method including the step of, for each of the horizontal component and the vertical component of the motion vector information of the target block, selecting motion vector information from the encoded blocks adjacent to the target block to set horizontal predicted motion vector information and vertical predicted motion vector information, and generating horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information was selected.
 According to this technology, for each of the horizontal and vertical components of the motion vector information of the target block, motion vector information is selected from the encoded blocks adjacent to the target block, horizontal predicted motion vector information and vertical predicted motion vector information are set, and the motion vector information of the target block is compressed using the set horizontal and vertical predicted motion vector information. Horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information was selected are also generated, and the motion vector information is decoded based on them. As a result, the horizontal and vertical predicted motion vector information can be signaled with horizontal and vertical prediction block information whose data amount is smaller than, for example, flags covering every combination of horizontal and vertical predictor candidates, so that encoding efficiency can be improved.
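A rough way to see the data-amount advantage is to count the distinct codewords the two signaling schemes must define for N predictor candidates per component: separate horizontal and vertical prediction block information needs N + N codewords, whereas a single flag covering every (horizontal, vertical) combination needs N × N. The sketch below only illustrates this counting argument and is not taken from the text:

```python
def codewords_separate(n_candidates):
    # One codeword set for the horizontal prediction block information
    # plus one for the vertical prediction block information.
    return n_candidates + n_candidates

def codewords_combined(n_candidates):
    # One codeword per (horizontal, vertical) candidate combination.
    return n_candidates * n_candidates

for n in (3, 4, 8):
    print(n, codewords_separate(n), codewords_combined(n))
# -> 3 6 9
#    4 8 16
#    8 16 64
```

With fewer codewords to define, shorter variable-length codes can be assigned to the more probable predictor choices.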
FIG. 1 is a diagram illustrating blocks in H.264/AVC. FIG. 2 is a diagram for explaining median prediction. FIG. 3 is a diagram for explaining the Multi-Reference Frame scheme. FIG. 4 is a diagram for explaining the temporal direct mode. FIG. 5 is a diagram for explaining temporal predicted motion vector information and spatio-temporal predicted motion vector information. FIG. 6 is a diagram illustrating the hierarchical structure of coding units (CU). FIG. 7 is a diagram showing the configuration of an image encoding device. FIG. 8 is a diagram showing the configurations of a motion prediction/compensation unit and a predicted motion vector information setting unit. FIG. 9 is a diagram for explaining quarter-pixel-accuracy motion prediction/compensation processing. FIG. 10 is a flowchart showing the operation of the image encoding device. FIG. 11 is a flowchart showing prediction processing. FIG. 12 is a flowchart showing intra prediction processing. FIG. 13 is a flowchart showing inter prediction processing. FIG. 14 is a flowchart showing predicted motion vector information setting processing. FIG. 15 is a diagram showing the configuration of an image decoding device. FIG. 16 is a diagram showing the configurations of a motion compensation unit and a predicted motion vector information setting unit. FIG. 17 is a flowchart showing the operation of the image decoding device. FIG. 18 is a flowchart showing predicted image generation processing. FIG. 19 is a flowchart showing inter predicted image generation processing. FIG. 20 is a flowchart showing motion vector information reconstruction processing. FIG. 21 is a diagram showing another configuration of the predicted motion vector information setting unit used in the image encoding device. FIG. 22 is a diagram showing another configuration of the predicted motion vector information setting unit used in the image decoding device. FIG. 23 is a diagram illustrating the schematic configuration of a computer device. FIG. 24 is a diagram illustrating the schematic configuration of a television device. FIG. 25 is a diagram illustrating the schematic configuration of a mobile phone. FIG. 26 is a diagram illustrating the schematic configuration of a recording/reproducing device. FIG. 27 is a diagram illustrating the schematic configuration of an imaging device.
 Hereinafter, embodiments for carrying out the present technology will be described in the following order.
 1. Configuration of the image encoding device
 2. Operation of the image encoding device
 3. Configuration of the image decoding device
 4. Operation of the image decoding device
 5. Other configurations of the predicted motion vector information setting unit
 6. Software processing
 7. Application to electronic equipment
 <1. Configuration of the Image Encoding Device>
 FIG. 7 shows the configuration of an image encoding device. The image encoding device 10 includes an analog/digital conversion unit (A/D conversion unit) 11, a screen rearrangement buffer 12, a subtraction unit 13, an orthogonal transform unit 14, a quantization unit 15, a lossless encoding unit 16, an accumulation buffer 17, and a rate control unit 18. The image encoding device 10 further includes an inverse quantization unit 21, an inverse orthogonal transform unit 22, an addition unit 23, a deblocking filter 24, a frame memory 25, an intra prediction unit 31, a motion prediction/compensation unit 32, a predicted motion vector information setting unit 33, and a predicted image/optimum mode selection unit 35.
 The A/D conversion unit 11 converts an analog image signal into digital image data and outputs the digital image data to the screen rearrangement buffer 12.
 The screen rearrangement buffer 12 rearranges the frames of the image data output from the A/D conversion unit 11. The screen rearrangement buffer 12 rearranges the frames in accordance with the GOP (Group of Pictures) structure used for the encoding process, and outputs the rearranged image data to the subtraction unit 13, the intra prediction unit 31, and the motion prediction/compensation unit 32.
 The subtraction unit 13 is supplied with the image data output from the screen rearrangement buffer 12 and with the predicted image data selected by the predicted image/optimum mode selection unit 35 described later. The subtraction unit 13 calculates prediction error data, which is the difference between the image data output from the screen rearrangement buffer 12 and the predicted image data supplied from the predicted image/optimum mode selection unit 35, and outputs it to the orthogonal transform unit 14.
 The orthogonal transform unit 14 performs orthogonal transform processing, such as a discrete cosine transform (DCT) or a Karhunen-Loève transform, on the prediction error data output from the subtraction unit 13, and outputs the resulting transform coefficient data to the quantization unit 15.
 The quantization unit 15 is supplied with the transform coefficient data output from the orthogonal transform unit 14 and with a rate control signal from the rate control unit 18 described later. The quantization unit 15 quantizes the transform coefficient data and outputs the quantized data to the lossless encoding unit 16 and the inverse quantization unit 21. The quantization unit 15 also switches the quantization parameter (quantization scale) based on the rate control signal from the rate control unit 18, thereby changing the bit rate of the quantized data.
 The lossless encoding unit 16 is supplied with the quantized data output from the quantization unit 15, with prediction mode information from the intra prediction unit 31 described later, and with prediction mode information and the like from the motion prediction/compensation unit 32. It is also supplied, from the predicted image/optimum mode selection unit 35, with information indicating whether the optimum mode is intra prediction or inter prediction. The prediction mode information includes the prediction mode, block size information of the prediction unit, and so on, depending on whether intra prediction or inter prediction is used. The lossless encoding unit 16 performs lossless encoding, for example variable-length coding or arithmetic coding, on the quantized data to generate image compression information, and outputs it to the accumulation buffer 17. When the optimum mode is intra prediction, the lossless encoding unit 16 losslessly encodes the prediction mode information supplied from the intra prediction unit 31. When the optimum mode is inter prediction, the lossless encoding unit 16 losslessly encodes the prediction mode information, prediction block information, difference motion vector information, and the like supplied from the motion prediction/compensation unit 32. The lossless encoding unit 16 then includes the losslessly encoded information in the image compression information, for example by adding it to the header information of the encoded stream that constitutes the image compression information.
 The accumulation buffer 17 accumulates the image compression information from the lossless encoding unit 16 and outputs the accumulated image compression information at a transmission rate corresponding to the transmission path.
 The rate control unit 18 monitors the free capacity of the accumulation buffer 17, generates a rate control signal according to the free capacity, and outputs it to the quantization unit 15. The rate control unit 18 obtains information indicating the free capacity from the accumulation buffer 17, for example. When the free capacity is running low, the rate control unit 18 lowers the bit rate of the quantized data via the rate control signal; when the free capacity of the accumulation buffer 17 is sufficiently large, it raises the bit rate of the quantized data via the rate control signal.
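The feedback loop between the accumulation buffer 17 and the quantization unit 15 can be sketched as follows; the thresholds and return values are illustrative assumptions, not values from the text:

```python
def rate_control_signal(free_capacity, low_threshold=0.2, high_threshold=0.8):
    """Map the accumulation buffer's free capacity (as a fraction of its
    total size) to a quantization adjustment for the quantization unit."""
    if free_capacity < low_threshold:
        # Buffer nearly full: coarsen the quantization scale to
        # lower the bit rate of the quantized data.
        return "raise_qp"
    if free_capacity > high_threshold:
        # Plenty of room: refine the quantization scale to raise the bit rate.
        return "lower_qp"
    return "keep_qp"

print(rate_control_signal(0.1))  # -> raise_qp
print(rate_control_signal(0.9))  # -> lower_qp
```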
 The inverse quantization unit 21 performs inverse quantization processing on the quantized data supplied from the quantization unit 15 and outputs the resulting transform coefficient data to the inverse orthogonal transform unit 22.
 The inverse orthogonal transform unit 22 performs inverse orthogonal transform processing on the transform coefficient data supplied from the inverse quantization unit 21 and outputs the resulting data to the addition unit 23.
 The addition unit 23 adds the data supplied from the inverse orthogonal transform unit 22 to the predicted image data supplied from the predicted image/optimum mode selection unit 35 to generate decoded image data, and outputs it to the deblocking filter 24 and the frame memory 25. The decoded image data is used as the image data of reference images.
 The deblocking filter 24 performs filter processing to reduce the block distortion that occurs during image encoding. The deblocking filter 24 removes block distortion from the decoded image data supplied from the addition unit 23 and outputs the filtered decoded image data to the frame memory 25.
 The frame memory 25 holds the pre-filtering decoded image data supplied from the addition unit 23 and the post-filtering decoded image data supplied from the deblocking filter 24. The decoded image data held in the frame memory 25 is supplied as reference image data to the intra prediction unit 31 or the motion prediction/compensation unit 32 via a selector 26.
 When the intra prediction unit 31 performs intra prediction, the selector 26 supplies the decoded image data held in the frame memory 25 before deblocking filter processing to the intra prediction unit 31 as reference image data. When the motion prediction/compensation unit 32 performs inter prediction, the selector 26 supplies the decoded image data held in the frame memory 25 after deblocking filter processing to the motion prediction/compensation unit 32 as reference image data.
 The intra prediction unit 31 predicts the target block in all candidate intra prediction modes using the input image data supplied from the screen rearrangement buffer 12 and the reference image data supplied from the frame memory 25, and determines the optimum intra prediction mode. For example, the intra prediction unit 31 calculates a cost function value for each intra prediction mode and selects, as the optimum intra prediction mode, the intra prediction mode with the best encoding efficiency based on the calculated cost function values. The intra prediction unit 31 outputs the predicted image data generated in the optimum intra prediction mode and the cost function value of the optimum intra prediction mode to the predicted image/optimum mode selection unit 35. It also outputs prediction mode information indicating the optimum intra prediction mode to the lossless encoding unit 16.
 The motion prediction/compensation unit 32 predicts the target block in all candidate inter prediction modes using the input image data supplied from the screen rearrangement buffer 12 and the reference image data supplied from the frame memory 25, and determines the optimum inter prediction mode. For example, the motion prediction/compensation unit 32 calculates a cost function value for each inter prediction mode and selects, as the optimum inter prediction mode, the inter prediction mode with the best encoding efficiency based on the calculated cost function values. The motion prediction/compensation unit 32 calculates the cost function values using the prediction block information and the difference motion vector information generated by the predicted motion vector information setting unit 33. It outputs the predicted image data generated in the optimum inter prediction mode and the cost function value of the optimum inter prediction mode to the predicted image/optimum mode selection unit 35, and outputs the prediction mode information, prediction block information, difference motion vector information, and the like for the optimum inter prediction mode to the lossless encoding unit 16.
 For the target block, the predicted motion vector information setting unit 33 takes the horizontal motion vector information of the encoded adjacent blocks as candidates for the horizontal predicted motion vector information. It generates, for each candidate, difference motion vector information indicating the difference between the candidate horizontal predicted motion vector information and the horizontal motion vector information of the target block. From among the candidates, it sets as the horizontal predicted motion vector information the horizontal motion vector information whose difference motion vector information has the highest encoding efficiency. The predicted motion vector information setting unit 33 then generates horizontal prediction block information indicating which adjacent block's motion vector information was set as the horizontal predicted motion vector information; for example, it generates a flag (hereinafter referred to as the "horizontal prediction block flag") as the horizontal prediction block information.
 Similarly, for the target block, the predicted motion vector information setting unit 33 takes the vertical motion vector information of the encoded adjacent blocks as candidates for the vertical predicted motion vector information. It generates, for each candidate, difference motion vector information indicating the difference between the candidate vertical predicted motion vector information and the vertical motion vector information of the target block. From among the candidates, it sets as the vertical predicted motion vector information the vertical motion vector information whose difference motion vector information has the highest encoding efficiency. The predicted motion vector information setting unit 33 then generates vertical prediction block information indicating which adjacent block's motion vector information was set as the vertical predicted motion vector information; for example, it generates a flag (hereinafter referred to as the "vertical prediction block flag") as the vertical prediction block information.
 Further, for the horizontal component and the vertical component, the predicted motion vector information setting unit 33 uses the motion vector information of the block indicated by the respective prediction block flag as the predicted motion vector information. It also calculates difference motion vector information, the difference between the motion vector information of the target block and the predicted motion vector information, for each of the horizontal and vertical components, and outputs it to the motion prediction/compensation unit 32.
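The per-component selection performed by the predicted motion vector information setting unit 33 can be sketched as follows. The cost here is simplified to the magnitude of the difference component, a stand-in for the actual coding-efficiency measure, and all names are illustrative assumptions:

```python
def select_predictors(target_mv, neighbor_mvs):
    """Choose, per component, the encoded neighbor whose MV component is
    cheapest to predict from, and return the prediction block flags and the
    difference motion vector information.

    target_mv    : (mv_x, mv_y) obtained by motion search in the optimum mode
    neighbor_mvs : motion vectors of the encoded adjacent blocks
    """
    # Horizontal: pick the candidate minimizing the horizontal difference
    # (a proxy for the code length of the difference motion vector information).
    h_flag = min(range(len(neighbor_mvs)),
                 key=lambda i: abs(target_mv[0] - neighbor_mvs[i][0]))
    # Vertical: chosen independently of the horizontal selection.
    v_flag = min(range(len(neighbor_mvs)),
                 key=lambda i: abs(target_mv[1] - neighbor_mvs[i][1]))
    diff = (target_mv[0] - neighbor_mvs[h_flag][0],
            target_mv[1] - neighbor_mvs[v_flag][1])
    return h_flag, v_flag, diff

mvs = [(4, 1), (3, 5), (6, 2)]
print(select_predictors((5, 4), mvs))
# -> (0, 1, (1, -1)): horizontal predictor from block 0, vertical from block 1
```

Because the two components are handled separately, each can use the neighbor that matches it best, which is the point of signaling two prediction block flags.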
 FIG. 8 shows the configurations of the motion prediction/compensation unit 32 and the predicted motion vector information setting unit 33. The motion prediction/compensation unit 32 includes a motion search unit 321, a cost function value calculation unit 322, a mode determination unit 323, a motion compensation processing unit 324, and a motion vector information buffer 325.
 The motion search unit 321 is supplied with the rearranged input image data from the screen rearrangement buffer 12 and with the reference image data read from the frame memory 25. The motion search unit 321 performs a motion search in all candidate inter prediction modes to detect motion vectors, and outputs motion vector information indicating the detected motion vectors to the cost function value calculation unit 322 together with the input image data and reference image data used for the detection.
 The cost function value calculation unit 322 is supplied with the motion vector information, input image data, and reference image data from the motion search unit 321, and with the prediction block information and difference motion vector information from the predicted motion vector information setting unit 33. Using the motion vector information, input image data, reference image data, prediction block flags, and difference motion vector information, the cost function value calculation unit 322 calculates cost function values for all candidate inter prediction modes.
The cost function value is calculated based on either the High Complexity mode or the Low Complexity mode technique, as defined, for example, in JM (Joint Model), the reference software for the H.264/AVC format.
In the High Complexity mode, a provisional encoding process, up to and including the lossless encoding, is performed for every candidate prediction mode, and the cost function value expressed by the following equation (9) is calculated for each prediction mode.

  Cost(Mode ∈ Ω) = D + λ·R   ... (9)
Here, Ω denotes the whole set of prediction modes that are candidates for encoding the image of the block. D denotes the differential energy (distortion) between the decoded image and the input image when encoding is performed in the prediction mode. R denotes the generated code amount, including the orthogonal transform coefficients, prediction mode information, prediction block information, differential motion vector information, and the like, and λ is a Lagrange multiplier given as a function of the quantization parameter QP.
That is, to perform encoding in the High Complexity mode, a provisional encoding process must be carried out once in every candidate prediction mode in order to calculate the parameters D and R, which requires a larger amount of computation.
In the Low Complexity mode, on the other hand, generation of a predicted image and of the header bits, including the prediction block information, the differential motion vector information, the prediction mode information, and the like, is performed for every candidate prediction mode, and the cost function value expressed by the following equation (10) is calculated.

  Cost(Mode ∈ Ω) = D + QP2Quant(QP)·Header_Bit   ... (10)
Here, Ω denotes the whole set of prediction modes that are candidates for encoding the image of the block. D denotes the differential energy (distortion) between the decoded image and the input image when encoding is performed in the prediction mode. Header_Bit is the number of header bits for the prediction mode, and QP2Quant is a function given as a function of the quantization parameter QP.
That is, in the Low Complexity mode, prediction processing must be performed for each prediction mode, but since no decoded image is required, the mode can be realized with a smaller amount of computation than the High Complexity mode.
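The mode decision driven by the two cost functions can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the per-mode distortion values, bit counts, and the Lagrange multiplier are hypothetical placeholders, and a real encoder would measure D and R by actually encoding each candidate mode.

```python
# Sketch of rate-distortion mode selection per equations (9) and (10).
# All numeric values below are hypothetical.

def high_complexity_cost(distortion, rate_bits, lam):
    # Equation (9): Cost(Mode) = D + lambda * R
    return distortion + lam * rate_bits

def low_complexity_cost(distortion, header_bits, qp2quant):
    # Equation (10): Cost(Mode) = D + QP2Quant(QP) * Header_Bit
    return distortion + qp2quant * header_bits

# Hypothetical per-mode measurements: (mode name, D, R)
candidates = [("16x16", 1200.0, 90),
              ("16x8", 1100.0, 130),
              ("8x8", 1050.0, 180)]

lam = 4.0  # in practice a function of the quantization parameter QP
best_mode = min(candidates,
                key=lambda m: high_complexity_cost(m[1], m[2], lam))
print(best_mode[0])
```

With these placeholder numbers the larger block size wins because the extra bits of the finer partitions outweigh their distortion savings, which is exactly the trade-off the Lagrangian cost expresses.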
The cost function value calculation unit 322 outputs the calculated cost function values to the mode determination unit 323.
The mode determination unit 323 determines the mode having the minimum cost function value to be the optimal inter prediction mode. The mode determination unit 323 then outputs optimal inter prediction mode information indicating the determined optimal inter prediction mode to the motion compensation processing unit 324, together with the motion vector information, the prediction block flag, the differential motion vector information, and the like relating to that optimal inter prediction mode. Note that the prediction mode information includes block size information and the like.
The motion compensation processing unit 324 performs motion compensation on the reference image data read out from the frame memory 25 based on the optimal inter prediction mode information and the motion vector information, generates predicted image data, and outputs it to the predicted image/optimal mode selection unit 35. The motion compensation processing unit 324 also outputs the prediction mode information of the optimal inter prediction, the differential motion vector information in that mode, and the like to the lossless encoding unit 16.
The motion vector information buffer 325 holds the motion vector information relating to the optimal inter prediction mode. The motion vector information buffer 325 also outputs the motion vector information of encoded blocks adjacent to the target block to be encoded to the predicted motion vector information setting unit 33.
Note that the motion prediction/compensation unit 32 performs motion prediction/compensation processing with 1/4 pixel accuracy, as defined, for example, in H.264/AVC. FIG. 9 is a diagram for explaining the motion prediction/compensation processing with 1/4 pixel accuracy. In FIG. 9, position "A" is the position of an integer-accuracy pixel stored in the frame memory 25, positions "b", "c", and "d" are positions with 1/2 pixel accuracy, and positions "e1", "e2", and "e3" are positions with 1/4 pixel accuracy.
In the following, Clip1() is defined as in equation (11).

  Clip1(a) = max(0, min(a, max_pix))   ... (11)

In equation (11), when the input image has 8-bit precision, the value of max_pix is 255.
The pixel values at positions "b" and "d" are generated with a 6-tap FIR filter, as in equations (12) and (13).

  F = A[-2] - 5·A[-1] + 20·A[0] + 20·A[1] - 5·A[2] + A[3]   ... (12)
  b, d = Clip1((F + 16) >> 5)   ... (13)
The pixel value at position "c" is generated with a 6-tap FIR filter, using either equation (14) or equation (15) together with equation (16).

  F = b[-2] - 5·b[-1] + 20·b[0] + 20·b[1] - 5·b[2] + b[3]   ... (14)
  F = d[-2] - 5·d[-1] + 20·d[0] + 20·d[1] - 5·d[2] + d[3]   ... (15)
  c = Clip1((F + 512) >> 10)   ... (16)

Note that the Clip1 processing is performed only once, at the end, after both the horizontal and vertical product-sum operations have been performed.
The pixel values at positions "e1" to "e3" are generated by linear interpolation, as in equations (17) to (19).

  e1 = (A + b + 1) >> 1   ... (17)
  e2 = (b + d + 1) >> 1   ... (18)
  e3 = (b + c + 1) >> 1   ... (19)

In this way, the motion prediction/compensation unit 32 performs motion prediction/compensation processing with 1/4 pixel accuracy.
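The interpolation above can be sketched in one dimension as follows. This is a minimal Python illustration of equations (11) through (13) and (17), assuming 8-bit samples; the sample row is hypothetical, and a real implementation operates on a 2-D frame with the same filter applied horizontally and vertically.

```python
# 1-D sketch of H.264/AVC-style half-pel and quarter-pel interpolation.

def clip1(a, max_pix=255):
    # Equation (11): clip the value to the range [0, max_pix].
    return max(0, min(a, max_pix))

def half_pel(row, x):
    # Equations (12)-(13): 6-tap FIR (1, -5, 20, 20, -5, 1) over the
    # integer positions x-2 .. x+3, then rounding shift and clip.
    f = (row[x - 2] - 5 * row[x - 1] + 20 * row[x]
         + 20 * row[x + 1] - 5 * row[x + 2] + row[x + 3])
    return clip1((f + 16) >> 5)

def quarter_pel(a, b):
    # Equation (17): linear interpolation with rounding.
    return (a + b + 1) >> 1

row = [10, 20, 30, 40, 50, 60, 70, 80]   # hypothetical integer-pel samples
b = half_pel(row, 3)          # half-pel value between row[3] and row[4]
e1 = quarter_pel(row[3], b)   # quarter-pel value between row[3] and b
print(b, e1)
```

On this linear ramp the half-pel value comes out midway between the two integer samples, as expected of an interpolation filter.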
The predicted motion vector information setting unit 33 includes a horizontal predicted motion vector information generation unit 331, a vertical predicted motion vector information generation unit 332, and an identification information generation unit 334.
The horizontal predicted motion vector information generation unit 331 sets, for the horizontal component of the motion vector information of the target block, the horizontal predicted motion vector information that yields the highest encoding efficiency in the encoding process. The horizontal predicted motion vector information generation unit 331 takes the horizontal motion vector information of the encoded adjacent blocks supplied from the motion prediction/compensation unit 32 as candidates for the horizontal predicted motion vector information. The horizontal predicted motion vector information generation unit 331 also generates horizontal differential motion vector information indicating the difference between the horizontal motion vector information of each candidate and the horizontal motion vector information of the target block supplied from the motion prediction/compensation unit 32. Furthermore, the horizontal predicted motion vector information generation unit 331 sets, as the horizontal predicted motion vector information, the candidate horizontal motion vector information that minimizes the code amount of the horizontal differential motion vector information. The horizontal predicted motion vector information generation unit 331 outputs the horizontal predicted motion vector information, and the horizontal differential motion vector information obtained when it is used, to the identification information generation unit 334 as the horizontal predicted motion vector information generation result.
The vertical predicted motion vector information generation unit 332 sets, for the vertical component of the motion vector information of the target block, the vertical predicted motion vector information that yields the highest encoding efficiency in the encoding process. The vertical predicted motion vector information generation unit 332 takes the vertical motion vector information of the encoded adjacent blocks supplied from the motion prediction/compensation unit 32 as candidates for the vertical predicted motion vector information. The vertical predicted motion vector information generation unit 332 also generates vertical differential motion vector information indicating the difference between the vertical motion vector information of each candidate and the vertical motion vector information of the target block supplied from the motion prediction/compensation unit 32. Furthermore, the vertical predicted motion vector information generation unit 332 sets, as the vertical predicted motion vector information, the candidate vertical motion vector information that minimizes the code amount of the vertical differential motion vector information. The vertical predicted motion vector information generation unit 332 outputs the vertical predicted motion vector information, and the vertical differential motion vector information obtained when it is used, to the identification information generation unit 334 as the vertical predicted motion vector information generation result.
Based on the horizontal predicted motion vector information generation result, the identification information generation unit 334 generates horizontal prediction block information, for example a horizontal prediction block flag, indicating the block whose motion vector information was selected as the horizontal predicted motion vector information. The identification information generation unit 334 outputs the generated horizontal prediction block flag, together with the horizontal differential motion vector information, to the cost function value calculation unit 322 of the motion prediction/compensation unit 32. Similarly, based on the vertical predicted motion vector information generation result, the identification information generation unit 334 generates vertical prediction block information, for example a vertical prediction block flag, indicating the block whose motion vector information was selected as the vertical predicted motion vector information. The identification information generation unit 334 outputs the generated vertical prediction block flag, together with the vertical differential motion vector information, to the cost function value calculation unit 322 of the motion prediction/compensation unit 32.
Alternatively, the predicted motion vector information setting unit 33 may supply the differential motion vector information indicating the difference between the horizontal (vertical) motion vector information of the target block and the motion vector information of each candidate, together with information indicating the candidate block, to the cost function value calculation unit 322. In this case, the candidate horizontal (vertical) motion vector information that minimizes the cost function value calculated by the cost function value calculation unit 322 is set as the horizontal (vertical) predicted motion vector information, and identification information indicating the candidate block with the minimum cost function value is used in the inter prediction.
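The key point of the units described above, that the horizontal and vertical predictors are chosen independently and may therefore come from different adjacent blocks, can be sketched as follows. This Python sketch is illustrative only: the neighbor motion vectors are hypothetical, and the absolute-value bit-cost proxy merely stands in for the actual code amount of encoding the differential component.

```python
# Sketch of selecting the horizontal and vertical predicted motion
# vector components independently from adjacent blocks A, B, C.

def pick_predictor(target, candidates, cost):
    # Return (block index, predictor, difference) minimizing the coding
    # cost of the differential component.
    i = min(range(len(candidates)),
            key=lambda k: cost(target - candidates[k]))
    return i, candidates[i], target - candidates[i]

def bits(d):
    # Crude proxy for code amount: larger magnitude -> more bits.
    return abs(d)

neighbors = [(4, 9), (3, -2), (12, -1)]  # hypothetical (mvx, mvy) of A, B, C
mvx, mvy = 5, -2                         # hypothetical target-block vector

hx = pick_predictor(mvx, [n[0] for n in neighbors], bits)
vy = pick_predictor(mvy, [n[1] for n in neighbors], bits)
print(hx, vy)
```

With these numbers the horizontal predictor comes from block A while the vertical predictor comes from block B, so each component's difference is smaller than it would be if a single block had to supply both components, which is the efficiency gain the separate horizontal/vertical prediction block flags pay for.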
Returning to FIG. 7, the predicted image/optimal mode selection unit 35 compares the cost function value supplied from the intra prediction unit 31 with the cost function value supplied from the motion prediction/compensation unit 32, and selects the one with the smaller cost function value as the optimal mode with the best encoding efficiency. The predicted image/optimal mode selection unit 35 also outputs the predicted image data generated in the optimal mode to the subtraction unit 13 and the addition unit 23. Furthermore, the predicted image/optimal mode selection unit 35 outputs information indicating whether the optimal mode is an intra prediction mode or an inter prediction mode to the lossless encoding unit 16. Note that the predicted image/optimal mode selection unit 35 switches between intra prediction and inter prediction in units of slices.
<2. Operation of the Image Encoding Device>
FIG. 10 is a flowchart showing the operation of the image encoding device. In step ST11, the A/D conversion unit 11 performs A/D conversion on the input image signal.
In step ST12, the screen rearrangement buffer 12 performs image rearrangement. The screen rearrangement buffer 12 stores the image data supplied from the A/D conversion unit 11 and rearranges the pictures from display order into encoding order.
In step ST13, the subtraction unit 13 generates prediction error data. The subtraction unit 13 calculates the difference between the image data of the images rearranged in step ST12 and the predicted image data selected by the predicted image/optimal mode selection unit 35 to generate the prediction error data. The prediction error data has a smaller data amount than the original image data, so the data amount can be compressed compared to encoding the image as it is.
In step ST14, the orthogonal transform unit 14 performs orthogonal transform processing. The orthogonal transform unit 14 orthogonally transforms the prediction error data supplied from the subtraction unit 13. Specifically, an orthogonal transform such as a discrete cosine transform or a Karhunen-Loeve transform is applied to the prediction error data, and transform coefficient data is output.
In step ST15, the quantization unit 15 performs quantization processing. The quantization unit 15 quantizes the transform coefficient data. During quantization, rate control is performed, as described later for step ST25.
In step ST16, the inverse quantization unit 21 performs inverse quantization processing. The inverse quantization unit 21 inversely quantizes the transform coefficient data quantized by the quantization unit 15, using characteristics corresponding to those of the quantization unit 15.
In step ST17, the inverse orthogonal transform unit 22 performs inverse orthogonal transform processing. The inverse orthogonal transform unit 22 inversely transforms the transform coefficient data inversely quantized by the inverse quantization unit 21, using characteristics corresponding to those of the orthogonal transform unit 14.
In step ST18, the addition unit 23 generates reference image data. The addition unit 23 adds the predicted image data supplied from the predicted image/optimal mode selection unit 35 to the inversely orthogonally transformed data at the position corresponding to that predicted image, generating the reference image data (decoded image data).
In step ST19, the deblocking filter 24 performs filter processing. The deblocking filter 24 filters the decoded image data output from the addition unit 23 to remove block distortion.
In step ST20, the frame memory 25 stores the reference image data. The frame memory 25 stores the filtered reference image data (decoded image data).
In step ST21, the intra prediction unit 31 and the motion prediction/compensation unit 32 each perform prediction processing. That is, the intra prediction unit 31 performs intra prediction processing in the intra prediction modes, and the motion prediction/compensation unit 32 performs motion prediction/compensation processing in the inter prediction modes. The details of the prediction processing will be described later with reference to FIG. 11; in this processing, prediction is performed in every candidate prediction mode, and a cost function value is calculated for every candidate prediction mode. Based on the calculated cost function values, the optimal intra prediction mode and the optimal inter prediction mode are selected, and the predicted image generated in each selected prediction mode, its cost function value, and the prediction mode information are supplied to the predicted image/optimal mode selection unit 35.
In step ST22, the predicted image/optimal mode selection unit 35 selects the predicted image data. Based on the cost function values output from the intra prediction unit 31 and the motion prediction/compensation unit 32, the predicted image/optimal mode selection unit 35 determines the optimal mode with the best encoding efficiency. The predicted image/optimal mode selection unit 35 then selects the predicted image data of the determined optimal mode and outputs it to the subtraction unit 13 and the addition unit 23. As described above, this predicted image data is used in the operations of steps ST13 and ST18.
In step ST23, the lossless encoding unit 16 performs lossless encoding processing. The lossless encoding unit 16 losslessly encodes the quantized data output from the quantization unit 15. That is, lossless encoding such as variable-length coding or arithmetic coding is applied to the quantized data to compress it. The lossless encoding unit 16 also losslessly encodes the prediction mode information and the like corresponding to the predicted image data selected in step ST22, so that the losslessly encoded data such as the prediction mode information is included in the compressed image information generated by losslessly encoding the quantized data.
In step ST24, the accumulation buffer 17 performs accumulation processing. The accumulation buffer 17 accumulates the compressed image information output from the lossless encoding unit 16. The compressed image information accumulated in the accumulation buffer 17 is read out as appropriate and transmitted to the decoding side via a transmission path.
In step ST25, the rate control unit 18 performs rate control. The rate control unit 18 controls the rate of the quantization operation of the quantization unit 15 so that neither overflow nor underflow occurs in the accumulation buffer 17 when it accumulates the compressed image information.
Next, the prediction processing in step ST21 of FIG. 10 will be described with reference to the flowchart of FIG. 11.
In step ST31, the intra prediction unit 31 performs intra prediction processing. The intra prediction unit 31 intra-predicts the image of the target block in every candidate intra prediction mode. Note that the decoded image data referred to in the intra prediction is the decoded image data before the deblocking filter 24 applies its filter processing. In this intra prediction processing, intra prediction is performed in every candidate intra prediction mode, and a cost function value is calculated for every candidate intra prediction mode. Then, based on the calculated cost function values, the one intra prediction mode with the best encoding efficiency is selected from all the intra prediction modes.
In step ST32, the motion prediction/compensation unit 32 performs inter prediction processing. Using the decoded image data after the deblocking filter processing stored in the frame memory 25, the motion prediction/compensation unit 32 performs inter prediction processing in the candidate inter prediction modes. In this inter prediction processing, prediction is performed in every candidate inter prediction mode, and a cost function value is calculated for every candidate inter prediction mode. Then, based on the calculated cost function values, the one inter prediction mode with the best encoding efficiency is selected from all the inter prediction modes.
Next, the intra prediction processing in step ST31 of FIG. 11 will be described with reference to the flowchart of FIG. 12.
In step ST41, the intra prediction unit 31 performs intra prediction in each prediction mode. The intra prediction unit 31 generates predicted image data for each intra prediction mode using the decoded image data before the deblocking filter processing.
In step ST42, the intra prediction unit 31 calculates a cost function value in each prediction mode. As described above, the cost function value is calculated based on either the High Complexity mode or the Low Complexity mode technique, as defined, for example, in JM (Joint Model), the reference software for the H.264/AVC format. That is, in the High Complexity mode, the processing of step ST42 provisionally performs everything up to the lossless encoding for every candidate prediction mode and calculates the cost function value of equation (9) for each prediction mode. In the Low Complexity mode, the processing of step ST42 generates, for every candidate prediction mode, the predicted image and the header bits such as motion vector information and prediction mode information, and calculates the cost function value of equation (10) for each prediction mode.
In step ST43, the intra prediction unit 31 determines the optimal intra prediction mode. Based on the cost function values calculated in step ST42, the intra prediction unit 31 selects the one intra prediction mode with the minimum cost function value and determines it to be the optimal intra prediction mode.
Next, the inter prediction processing in step ST32 of FIG. 11 will be described with reference to the flowchart of FIG. 13.
In step ST51, the motion prediction/compensation unit 32 performs motion prediction processing. The motion prediction/compensation unit 32 performs motion prediction for each prediction mode to detect a motion vector, and then proceeds to step ST52.
In step ST52, the predicted motion vector information setting unit 33 performs the predicted motion vector information setting processing. The predicted motion vector information setting unit 33 generates the prediction block flags and the differential motion vector information for the target block.
FIG. 14 is a flowchart showing the predicted motion vector information setting processing. In step ST61, the predicted motion vector information setting unit 33 selects candidates for the horizontal predicted motion vector information. The predicted motion vector information setting unit 33 selects the horizontal motion vector information of the encoded blocks adjacent to the target block as candidates for the horizontal predicted motion vector information, and then proceeds to step ST62.
In step ST62, the predicted motion vector information setting unit 33 performs the horizontal predicted motion vector information setting processing. The predicted motion vector information setting unit 33 detects the i-th horizontal motion vector information that minimizes the code amount of the horizontal differential motion vector information, for example based on equation (20).

  argmin_i R(mvx - pmvx(i))   ... (20)
Here, "mvx" denotes the horizontal motion vector information of the target block, and "pmvx(i)" denotes the i-th candidate for the horizontal predicted motion vector information. "R(mvx - pmvx(i))" denotes the code amount produced when the horizontal differential motion vector information, that is, the difference between the i-th candidate for the horizontal predicted motion vector and the horizontal motion vector information of the target block, is encoded.
The predicted motion vector information setting unit 33 generates a horizontal prediction block flag indicating the adjacent block whose horizontal motion vector information is detected, based on equation (20), to minimize the code amount. The predicted motion vector information setting unit 33 also generates the horizontal differential motion vector information obtained when that horizontal motion vector information is used, and then proceeds to step ST63.
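One concrete choice for the code amount R(·) in equation (20) can be sketched as follows, assuming the signed Exp-Golomb coding used for motion vector differences in H.264/AVC CAVLC; the candidate predictor values and the target component are hypothetical.

```python
# Sketch of the argmin of equation (20) with R(.) taken as the bit
# length of a signed Exp-Golomb (se(v)) codeword.

def signed_exp_golomb_bits(v):
    # Signed-value mapping: 0, 1, -1, 2, -2, ... -> code numbers
    # 0, 1, 2, 3, 4, ...
    code_num = 2 * abs(v) - (1 if v > 0 else 0)
    # ue(v) codeword length: 2 * floor(log2(code_num + 1)) + 1 bits.
    return 2 * (code_num + 1).bit_length() - 1

mvx = 6            # hypothetical target horizontal component
pmvx = [4, 5, 13]  # hypothetical candidate horizontal predictors
costs = [signed_exp_golomb_bits(mvx - p) for p in pmvx]
best = min(range(len(pmvx)), key=lambda i: costs[i])
print(costs, best)
```

The candidate closest to the target component wins because smaller differences map to shorter Exp-Golomb codewords; the same computation with mvy and pmvy(j) realizes equation (21).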
In step ST63, the predicted motion vector information setting unit 33 selects candidates for the vertical predicted motion vector information. The predicted motion vector information setting unit 33 selects the vertical motion vector information of the encoded blocks adjacent to the target block as candidates for the vertical predicted motion vector information, and then proceeds to step ST64.
In step ST64, the predicted motion vector information setting unit 33 performs vertical predicted motion vector information setting processing. The predicted motion vector information setting unit 33 detects the j-th vertical motion vector information that minimizes the code amount of the vertical difference motion vector information, based on, for example, Expression (21).
  argmin_j (R(mvy - pmvy(j)))   ... (21)
Here, "mvy" denotes the vertical motion vector information of the target block, and "pmvy(j)" denotes the j-th candidate of the vertical predicted motion vector information. "R(mvy - pmvy(j))" denotes the code amount obtained when the vertical difference motion vector information, which is the difference between the j-th candidate of the vertical predicted motion vector and the vertical motion vector information of the target block, is encoded.
The predicted motion vector information setting unit 33 generates a vertical prediction block flag indicating the adjacent block whose vertical motion vector information minimizes the code amount detected based on Expression (21). The predicted motion vector information setting unit 33 also generates the vertical difference motion vector information obtained when that vertical motion vector information is used, ends the predicted motion vector information setting processing, and returns to step ST53 in FIG. 13.
In step ST53, the motion prediction/compensation unit 32 calculates a cost function value for each prediction mode. The motion prediction/compensation unit 32 calculates the cost function values using Expression (9) or Expression (10) described above, and calculates the generated code amount using the difference motion vector information. Note that the calculation of the cost function values for the inter prediction modes also includes evaluation of the cost function values of the skipped macroblock and direct modes defined in the H.264/AVC format.
In step ST54, the motion prediction/compensation unit 32 determines the optimum inter prediction mode. Based on the cost function values calculated in step ST53, the motion prediction/compensation unit 32 selects the one prediction mode having the minimum cost function value and determines it as the optimum inter prediction mode.
In this way, the image encoding device 10 individually sets the horizontal predicted motion vector and the vertical predicted motion vector for the target block. The image encoding device 10 performs variable-length encoding on the horizontal difference motion vector information, which is the difference between the horizontal motion vector information of the target block and the horizontal predicted motion vector information, and on the vertical difference motion vector information, which is the difference between the vertical motion vector information of the target block and the vertical predicted motion vector information. Which of the adjacent encoded blocks supplies the horizontal predicted motion vector information and which supplies the vertical predicted motion vector information is indicated by the prediction block flags.
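The per-block motion vector signalling described above can be summarized as a small record. The field names and values below are illustrative only, not the actual bitstream syntax:

```python
from typing import NamedTuple

class MVPayload(NamedTuple):
    # Per-block motion vector signalling under this technique
    # (hypothetical field names for illustration).
    h_flag: int   # adjacent block supplying the horizontal predictor
    v_flag: int   # adjacent block supplying the vertical predictor
    dmvx: int     # mvx - pmvx, variable-length coded in the stream
    dmvy: int     # mvy - pmvy, variable-length coded in the stream

# The two flags are independent, so they may name different neighbors.
payload = MVPayload(h_flag=1, v_flag=2, dmvx=1, dmvy=-3)
```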
Therefore, the data amount of the prediction block flags can be reduced compared with the case of using the horizontal/vertical predicted motion vector information shown in Expression (22). As shown in Expression (22), the horizontal/vertical predicted motion vector information is the motion vector information of the adjacent block that minimizes the sum of the code amount of the horizontal difference motion vector information and the code amount of the vertical difference motion vector information.
  argmin_k (R(mvx - pmvx(k)) + R(mvy - pmvy(k)))   ... (22)
For example, when there are three candidates for the horizontal motion vector information and three candidates for the vertical motion vector information, the present technique only needs to prepare six (3 + 3) flag values. However, if a single block is determined based on the sum of the code amounts of the horizontal difference motion vector information and the vertical difference motion vector information, nine (3 × 3) flag values must be prepared. That is, since the present technique reduces the number of flag values to be prepared, the efficiency of encoding the motion vector information can be improved.
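The flag-count arithmetic above can be checked directly; the candidate counts are the ones from the example:

```python
# With n_h horizontal and n_v vertical predictor candidates, signalling
# the two components independently needs n_h + n_v distinguishable flag
# values, while joint selection per Expression (22) needs n_h * n_v.
n_h, n_v = 3, 3
separate = n_h + n_v   # this technique: one flag per component
joint = n_h * n_v      # joint horizontal/vertical selection

print(separate, joint)
```

The gap widens as the candidate lists grow, since the sum grows linearly while the product grows quadratically.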
<3. Configuration of Image Decoding Device>
Next, the image decoding device will be described. The image compression information generated by encoding an input image is supplied to the image decoding device via a predetermined transmission path, recording medium, or the like, and is decoded.
FIG. 15 shows the configuration of the image decoding device. The image decoding device 50 includes an accumulation buffer 51, a lossless decoding unit 52, an inverse quantization unit 53, an inverse orthogonal transform unit 54, an addition unit 55, a deblocking filter 56, a screen rearrangement buffer 57, and a digital/analog conversion unit (D/A conversion unit) 58. The image decoding device 50 further includes a frame memory 61, selectors 62 and 75, an intra prediction unit 71, a motion compensation unit 72, and a predicted motion vector information setting unit 73.
The accumulation buffer 51 accumulates the transmitted image compression information. The lossless decoding unit 52 decodes the image compression information supplied from the accumulation buffer 51 by a method corresponding to the encoding method of the lossless encoding unit 16 in FIG. 7.
The lossless decoding unit 52 outputs the prediction mode information obtained by decoding the image compression information to the intra prediction unit 71 and the motion compensation unit 72. The lossless decoding unit 52 also outputs the prediction block information (prediction block flags) and the difference motion vector information obtained by decoding the image compression information to the motion compensation unit 72.
The inverse quantization unit 53 inversely quantizes the quantized data decoded by the lossless decoding unit 52 by a method corresponding to the quantization method of the quantization unit 15 in FIG. 7. The inverse orthogonal transform unit 54 performs an inverse orthogonal transform on the output of the inverse quantization unit 53 by a method corresponding to the orthogonal transform method of the orthogonal transform unit 14 in FIG. 7, and outputs the result to the addition unit 55.
The addition unit 55 adds the data after the inverse orthogonal transform and the predicted image data supplied from the selector 75 to generate decoded image data, and outputs the decoded image data to the deblocking filter 56 and the frame memory 61.
The deblocking filter 56 performs deblocking filter processing on the decoded image data supplied from the addition unit 55 to remove block distortion, then supplies the result to the frame memory 61 for storage and outputs it to the screen rearrangement buffer 57.
The screen rearrangement buffer 57 rearranges the images. That is, the frames rearranged into encoding order by the screen rearrangement buffer 12 in FIG. 7 are rearranged back into the original display order and output to the D/A conversion unit 58.
The D/A conversion unit 58 performs D/A conversion on the image data supplied from the screen rearrangement buffer 57 and outputs it to a display (not shown) to display the images.
The frame memory 61 stores the decoded image data before the filter processing by the deblocking filter 56 and the decoded image data after the filter processing by the deblocking filter 56.
Based on the prediction mode information supplied from the lossless decoding unit 52, the selector 62 supplies the pre-filtering decoded image data stored in the frame memory 61 to the intra prediction unit 71 when an intra-predicted image is decoded. When an inter-predicted image is decoded, the selector 62 supplies the post-filtering decoded image data stored in the frame memory 61 to the motion compensation unit 72.
The intra prediction unit 71 generates predicted image data based on the prediction mode information supplied from the lossless decoding unit 52 and the decoded image data supplied from the frame memory 61 via the selector 62, and outputs the generated predicted image data to the selector 75.
The motion compensation unit 72 adds the difference motion vector information supplied from the lossless decoding unit 52 and the predicted motion vector information supplied from the predicted motion vector information setting unit 73 to generate the motion vector information of the block to be decoded. The motion compensation unit 72 also performs motion compensation using the decoded image data supplied from the frame memory 61, based on the generated motion vector information and the prediction mode information supplied from the lossless decoding unit 52, and generates predicted image data and outputs it to the selector 75.
The predicted motion vector information setting unit 73 sets the predicted motion vector information based on the prediction block information supplied from the lossless decoding unit 52. For the target block, the predicted motion vector information setting unit 73 takes the horizontal motion vector information of the decoded adjacent block indicated by the horizontal prediction block flag as the horizontal predicted motion vector information, and takes the vertical motion vector information of the decoded adjacent block indicated by the vertical prediction block flag as the vertical predicted motion vector information. The predicted motion vector information setting unit 73 outputs the set horizontal predicted motion vector information and vertical predicted motion vector information to the motion compensation unit 72.
FIG. 16 shows the configuration of the motion compensation unit 72 and the predicted motion vector information setting unit 73.
The motion compensation unit 72 includes a block size information buffer 721, a difference motion vector information buffer 722, a motion vector information generation unit 723, a motion compensation processing unit 724, and a motion vector information buffer 725.
The block size information buffer 721 stores the block size information included in the prediction mode information supplied from the lossless decoding unit 52, and outputs the stored block size information to the motion compensation processing unit 724 and the predicted motion vector information setting unit 73.
The difference motion vector information buffer 722 stores the difference motion vector information supplied from the lossless decoding unit 52, and outputs the stored difference motion vector information to the motion vector information generation unit 723.
The motion vector information generation unit 723 adds the horizontal difference motion vector information supplied from the difference motion vector information buffer 722 and the horizontal predicted motion vector information set by the predicted motion vector information setting unit 73. Likewise, the motion vector information generation unit 723 adds the vertical difference motion vector information supplied from the difference motion vector information buffer 722 and the vertical predicted motion vector information set by the predicted motion vector information setting unit 73. The motion vector information generation unit 723 outputs the motion vector information obtained by adding the difference motion vector information and the predicted motion vector information to the motion compensation processing unit 724 and the motion vector information buffer 725.
The motion compensation processing unit 724 reads the image data of the reference image from the frame memory 61 based on the prediction mode information supplied from the lossless decoding unit 52. The motion compensation processing unit 724 performs motion compensation based on the image data of the reference image, the block size information supplied from the block size information buffer 721, and the motion vector information supplied from the motion vector information generation unit 723, and outputs the predicted image data generated by the motion compensation to the selector 75.
The motion vector information buffer 725 stores the motion vector information supplied from the motion vector information generation unit 723, and outputs the stored motion vector information to the predicted motion vector information setting unit 73.
The predicted motion vector information setting unit 73 includes a flag buffer 730, a horizontal predicted motion vector information generation unit 731, and a vertical predicted motion vector information generation unit 732.
The flag buffer 730 stores the prediction block flags supplied from the lossless decoding unit 52, and outputs the stored prediction block flags to the horizontal predicted motion vector information generation unit 731 and the vertical predicted motion vector information generation unit 732.
The horizontal predicted motion vector information generation unit 731 selects, from the horizontal motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72, the motion vector information indicated by the horizontal prediction block flag, and sets it as the horizontal predicted motion vector information. The horizontal predicted motion vector information generation unit 731 outputs the set horizontal predicted motion vector information to the motion vector information generation unit 723 of the motion compensation unit 72.
The vertical predicted motion vector information generation unit 732 selects, from the vertical motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72, the motion vector information indicated by the vertical prediction block flag, and sets it as the vertical predicted motion vector information. The vertical predicted motion vector information generation unit 732 outputs the set vertical predicted motion vector information to the motion vector information generation unit 723 of the motion compensation unit 72.
Returning to FIG. 15, the selector 75 selects the intra prediction unit 71 for intra prediction or the motion compensation unit 72 for inter prediction, based on the prediction mode information supplied from the lossless decoding unit 52. The selector 75 outputs the predicted image data generated by the selected intra prediction unit 71 or motion compensation unit 72 to the addition unit 55.
<4. Operation of Image Decoding Device>
Next, the image decoding processing operation performed by the image decoding device 50 will be described with reference to the flowchart of FIG. 17.
In step ST81, the accumulation buffer 51 accumulates the transmitted image compression information. In step ST82, the lossless decoding unit 52 performs lossless decoding processing: it decodes the image compression information supplied from the accumulation buffer 51. That is, the quantized data of each picture encoded by the lossless encoding unit 16 in FIG. 7 is obtained. The lossless decoding unit 52 also performs lossless decoding of the prediction mode information included in the image compression information; when the obtained prediction mode information relates to an intra prediction mode, the lossless decoding unit 52 outputs the prediction mode information to the intra prediction unit 71, and when it relates to an inter prediction mode, the lossless decoding unit 52 outputs it to the motion compensation unit 72.
In step ST83, the inverse quantization unit 53 performs inverse quantization processing. The inverse quantization unit 53 inversely quantizes the quantized data decoded by the lossless decoding unit 52 with characteristics corresponding to those of the quantization unit 15 in FIG. 7.
In step ST84, the inverse orthogonal transform unit 54 performs inverse orthogonal transform processing. The inverse orthogonal transform unit 54 performs an inverse orthogonal transform on the transform coefficient data inversely quantized by the inverse quantization unit 53 with characteristics corresponding to those of the orthogonal transform unit 14 in FIG. 7.
In step ST85, the addition unit 55 generates decoded image data. The addition unit 55 adds the data obtained by the inverse orthogonal transform processing and the predicted image data selected in step ST89 described later to generate decoded image data. The original image is thereby decoded.
In step ST86, the deblocking filter 56 performs filter processing. The deblocking filter 56 performs deblocking filter processing on the decoded image data output from the addition unit 55 to remove the block distortion included in the decoded image.
In step ST87, the frame memory 61 stores the decoded image data.
In step ST88, the intra prediction unit 71 and the motion compensation unit 72 perform predicted image generation processing, each in accordance with the prediction mode information supplied from the lossless decoding unit 52.
That is, when prediction mode information for intra prediction is supplied from the lossless decoding unit 52, the intra prediction unit 71 generates predicted image data based on the prediction mode information. When prediction mode information for inter prediction is supplied from the lossless decoding unit 52, the motion compensation unit 72 performs motion compensation based on the prediction mode information and generates predicted image data.
In step ST89, the selector 75 selects predicted image data. The selector 75 selects between the predicted image data supplied from the intra prediction unit 71 and the predicted image data supplied from the motion compensation unit 72, and supplies the selected predicted image data to the addition unit 55, where, as described above, it is added to the output of the inverse orthogonal transform unit 54 in step ST85.
In step ST90, the screen rearrangement buffer 57 performs image rearrangement. That is, the screen rearrangement buffer 57 rearranges the frames, which were rearranged into encoding order by the screen rearrangement buffer 12 of the image encoding device 10 in FIG. 7, back into the original display order.
In step ST91, the D/A conversion unit 58 performs D/A conversion on the image data from the screen rearrangement buffer 57. The image is output to a display (not shown) and displayed.
Next, the predicted image generation processing in step ST88 of FIG. 17 will be described with reference to the flowchart of FIG. 18.
In step ST101, the lossless decoding unit 52 determines whether the target block is intra-coded. When the prediction mode information obtained by lossless decoding is prediction mode information for intra prediction, the lossless decoding unit 52 supplies the prediction mode information to the intra prediction unit 71, and the process proceeds to step ST102. When the prediction mode information is prediction mode information for inter prediction, the lossless decoding unit 52 supplies it to the motion compensation unit 72, and the process proceeds to step ST103.
In step ST102, the intra prediction unit 71 performs intra predicted image generation processing. The intra prediction unit 71 performs intra prediction using the prediction mode information and the decoded image data, before the deblocking filter processing, stored in the frame memory 61, and generates predicted image data.
In step ST103, the motion compensation unit 72 performs inter predicted image generation processing. The motion compensation unit 72 performs motion compensation on the reference image read from the frame memory 61, based on the prediction mode information and the difference motion vector information from the lossless decoding unit 52, and generates predicted image data.
FIG. 19 is a flowchart showing the inter predicted image generation processing in step ST103. In step ST111, the motion compensation unit 72 acquires the prediction mode information from the lossless decoding unit 52, and the process proceeds to step ST112.
In step ST112, the motion compensation unit 72 and the predicted motion vector information setting unit 73 perform motion vector information reconstruction processing. FIG. 20 is a flowchart showing the motion vector information reconstruction processing.
In step ST121, the motion compensation unit 72 and the predicted motion vector information setting unit 73 acquire the prediction block flags and the difference motion vector information. The motion compensation unit 72 acquires the difference motion vector information from the lossless decoding unit 52, the predicted motion vector information setting unit 73 acquires the prediction block flags from the lossless decoding unit 52, and the process proceeds to step ST122.
In step ST122, the predicted motion vector information setting unit 73 performs horizontal predicted motion vector information setting processing. The horizontal predicted motion vector information generation unit 731 selects, from the horizontal motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72, the horizontal motion vector information of the block indicated by the horizontal prediction block flag, and sets the selected horizontal motion vector information as the horizontal predicted motion vector information.
In step ST123, the motion compensation unit 72 reconstructs the horizontal motion vector information by adding the horizontal difference motion vector information and the horizontal predicted motion vector information, and the process proceeds to step ST124.
In step ST124, the predicted motion vector information setting unit 73 performs vertical predicted motion vector information setting processing. The vertical predicted motion vector information generation unit 732 selects, from the vertical motion vector information of the adjacent blocks stored in the motion vector information buffer 725 of the motion compensation unit 72, the vertical motion vector information of the block indicated by the vertical prediction block flag, and sets the selected vertical motion vector information as the vertical predicted motion vector information.
In step ST125, the motion compensation unit 72 reconstructs the vertical motion vector information by adding the vertical difference motion vector information and the vertical predicted motion vector information, and the process proceeds to step ST113 in FIG. 19.
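Steps ST122 through ST125 can be sketched as one reconstruction routine. The neighbor motion vectors, flag values, and differences below are illustrative inputs, not data from an actual stream:

```python
def reconstruct_mv(neighbor_mvs, h_flag, v_flag, dmvx, dmvy):
    """Sketch of the decoder-side motion vector reconstruction.

    neighbor_mvs: list of (mvx, mvy) pairs of decoded adjacent blocks;
    h_flag / v_flag: prediction block flags parsed from the stream;
    dmvx / dmvy: transmitted difference motion vector components.
    """
    pmvx = neighbor_mvs[h_flag][0]   # ST122: horizontal predictor
    pmvy = neighbor_mvs[v_flag][1]   # ST124: vertical predictor
    # ST123 / ST125: predictor plus transmitted difference, per component.
    return pmvx + dmvx, pmvy + dmvy

# The two flags may name different neighbors for the two components.
mvx, mvy = reconstruct_mv([(3, -2), (11, 0), (25, 7)], 1, 2, 1, -3)
# mvx == 12, mvy == 4
```

Because the encoder formed each difference against the same per-component predictor that the flag identifies, this addition recovers the original motion vector exactly.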
 ステップST113で動き補償部72は、予測画像データの生成を行う。動き補償部72はステップST111で取得した予測モード情報や、ステップST112で再構築した動きベクトル情報に基づき、フレームメモリ61から参照画像データを読み出して動き補償を行い、予測画像データを生成してセレクタ75に出力する。 In step ST113, the motion compensation unit 72 generates predicted image data. Based on the prediction mode information acquired in step ST111 and the motion vector information reconstructed in step ST112, the motion compensation unit 72 reads reference image data from the frame memory 61 to perform motion compensation, generates predicted image data, and selects a selector. Output to 75.
 As described above, in the image decoding device 50, the horizontal motion vector information of the adjacent block indicated by the horizontal prediction block flag is set as the horizontal predicted motion vector information, and the vertical motion vector information of the adjacent block indicated by the vertical prediction block flag is set as the vertical predicted motion vector information. Therefore, even when the image encoding device 10 individually sets the horizontal predicted motion vector information and the vertical predicted motion vector information to improve encoding efficiency, the motion vector information can be reconstructed correctly.
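As a rough illustration of steps ST123 through ST125 above, the following sketch reconstructs each motion vector component by selecting a per-component predictor from the adjacent blocks via its prediction block flag and adding the transmitted difference. All names and the data layout are hypothetical; the patent does not give code.

```python
# Hypothetical sketch of the horizontal/vertical motion vector reconstruction
# described in steps ST123-ST125. Names and data layout are illustrative only.

def reconstruct_mv(neighbor_mvs, h_pred_block, v_pred_block, dmv_h, dmv_v):
    """neighbor_mvs: dict mapping block id -> (mv_h, mv_v) of decoded neighbors.
    h_pred_block / v_pred_block: block ids carried by the prediction block flags.
    dmv_h / dmv_v: decoded difference motion vector components."""
    pmv_h = neighbor_mvs[h_pred_block][0]  # horizontal predictor component
    pmv_v = neighbor_mvs[v_pred_block][1]  # vertical predictor component
    # ST123 / ST125: difference + predictor = reconstructed component
    return (dmv_h + pmv_h, dmv_v + pmv_v)

neighbors = {"A": (4, -1), "B": (6, 0), "C": (5, 2)}
mv = reconstruct_mv(neighbors, h_pred_block="B", v_pred_block="A",
                    dmv_h=-2, dmv_v=1)
print(mv)  # (4, 0): horizontal from block B, vertical from block A
```

The point of the sketch is that the horizontal and vertical predictors may come from different adjacent blocks, which is what the separate flags make possible.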
 <5. Other Configurations of the Image Encoding Device and Image Decoding Device>
 The image encoding device and image decoding device described above individually set horizontal predicted motion vector information and vertical predicted motion vector information when encoding and decoding motion vector information. However, if, in addition to setting the horizontal and vertical predicted motion vector information individually, combined horizontal-vertical predicted motion vector information can also be set, optimum coding efficiency can be achieved. In this case, the predicted motion vector information setting unit 33a used in the image encoding device 10 has the configuration shown in FIG. 21, and the predicted motion vector information setting unit 73a used in the image decoding device 50 has the configuration shown in FIG. 22.
 In FIG. 21, the horizontal-vertical predicted motion vector information generation unit 333 treats the motion vector information of the encoded adjacent blocks supplied from the motion prediction/compensation unit 32 as candidates for the predicted motion vector information. The horizontal-vertical predicted motion vector information generation unit 333 also generates difference motion vector information indicating the difference between each candidate's motion vector information and the motion vector information of the target block supplied from the motion prediction/compensation unit 32. Further, the horizontal-vertical predicted motion vector information generation unit 333 sets, as the horizontal-vertical predicted motion vector information, the motion vector information for which the amount of code determined based on the above equation (23) is minimized. The horizontal-vertical predicted motion vector information generation unit 333 outputs the horizontal-vertical predicted motion vector information and the difference motion vector information obtained when that information is used, as the horizontal-vertical predicted motion vector information generation result, to the identification information generation unit 334a.
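The candidate search described above can be sketched as follows. The cost model stands in for equation (23), whose exact form is defined elsewhere in the specification; the function and variable names are hypothetical.

```python
# Hypothetical sketch of choosing the combined horizontal-vertical predictor:
# among the candidate neighbor motion vectors, pick the one whose resulting
# difference vector costs the fewest bits under a supplied cost model
# (a stand-in for equation (23) in the specification).

def pick_combined_predictor(candidates, target_mv, bit_cost):
    """candidates: dict block id -> (mv_h, mv_v); target_mv: (mv_h, mv_v).
    bit_cost: function estimating the code amount of a difference vector."""
    best = None
    for block, (ph, pv) in candidates.items():
        dmv = (target_mv[0] - ph, target_mv[1] - pv)
        cost = bit_cost(dmv)
        if best is None or cost < best[0]:
            best = (cost, block, dmv)
    return best  # (cost, chosen block, difference motion vector)

cost, block, dmv = pick_combined_predictor(
    {"A": (4, -1), "B": (6, 0)}, target_mv=(6, 1),
    bit_cost=lambda d: abs(d[0]) + abs(d[1]))
print(block, dmv)  # B (0, 1): the smallest difference wins under this toy cost
```

With a combined predictor, one block supplies both components, in contrast to the per-component selection of the earlier configuration.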
 The identification information generation unit 334a selects either the individually set horizontal and vertical predicted motion vector information or the horizontal-vertical predicted motion vector information, and outputs the selected predicted motion vector information together with the difference motion vector information to the cost function value calculation unit 322. For example, when the horizontal and vertical predicted motion vector information are selected as the predicted motion vector information, the identification information generation unit 334a outputs the horizontal prediction block flag and the horizontal difference motion vector information to the cost function value calculation unit 322, as described above. The identification information generation unit 334a also outputs the vertical prediction block flag and the vertical difference motion vector information to the cost function value calculation unit 322. Further, when the horizontal-vertical predicted motion vector information is selected as the predicted motion vector information, the identification information generation unit 334a generates horizontal-vertical prediction block information indicating the block whose motion vector information was selected as the horizontal-vertical predicted motion vector information. For example, the identification information generation unit 334a generates a horizontal-vertical prediction block flag as the horizontal-vertical prediction block information, and outputs the generated horizontal-vertical prediction block flag to the cost function value calculation unit 322 together with the difference motion vector information.
 The identification information generation unit 334a also generates identification information indicating which of the horizontal and vertical predicted motion vector information or the horizontal-vertical predicted motion vector information is selected. This identification information is supplied to the lossless encoding unit 16 via the motion prediction/compensation unit 32 and included in the picture parameter set or slice header of the compressed image information.
 When selecting the predicted motion vector information, the identification information generation unit 334a may switch between the horizontal and vertical predicted motion vector information and the horizontal-vertical predicted motion vector information in units of pictures or slices. When selecting in units of pictures, the identification information generation unit 334a may make the selection according to, for example, the picture type of the target block. That is, for a P picture, even at the cost of some overhead for flag information, it is important to improve the efficiency of motion vector encoding accordingly. Therefore, in the case of a P picture, the horizontal prediction block flag with the horizontal difference motion vector information and the vertical prediction block flag with the vertical difference motion vector information are output to the cost function value calculation unit 322. For a B picture, on the other hand, carrying a horizontal prediction block flag and a vertical prediction block flag for each of the List0 prediction and the List1 prediction does not necessarily achieve optimum coding efficiency, particularly at low bit rates. Therefore, in the case of a B picture, optimum coding efficiency can be achieved by outputting the horizontal-vertical prediction block flag and the difference motion vector information to the cost function value calculation unit 322, as in the conventional method.
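The picture-type-based switching just described amounts to a small decision rule. The following is a minimal sketch with hypothetical names; the patent describes this only as an example policy, not as fixed behavior.

```python
# Hypothetical sketch of picture-type-based selection of the predictor scheme:
# separate horizontal/vertical prediction block flags for P pictures, a single
# combined horizontal-vertical flag for B pictures, as described above.

def select_predictor_scheme(picture_type):
    if picture_type == "P":
        # flag overhead is acceptable; per-component predictors pay off
        return "separate_horizontal_vertical"
    if picture_type == "B":
        # two flags per List0/List1 prediction would hurt at low bit rates
        return "combined_horizontal_vertical"
    raise ValueError("unexpected picture type: " + picture_type)

print(select_predictor_scheme("P"))  # separate_horizontal_vertical
print(select_predictor_scheme("B"))  # combined_horizontal_vertical
```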
 In FIG. 22, the flag buffer 730a switches the supply destination of the prediction block flag based on the identification information included in the compressed image information. For example, when the horizontal and vertical predicted motion vector information are selected, the flag buffer 730a outputs the prediction block flags to the horizontal predicted motion vector information generation unit 731 and the vertical predicted motion vector information generation unit 732. When the horizontal-vertical predicted motion vector information is selected, the flag buffer 730a outputs the prediction block flag to the horizontal-vertical predicted motion vector information generation unit 733. Further, when the predicted motion vector information is switched according to the picture type, the flag buffer 730a switches the supply destination of the prediction block flag according to the picture type. For example, assume that the motion vector information is encoded using the horizontal and vertical predicted motion vector information for a P picture and the horizontal-vertical predicted motion vector information for a B picture. In this case, the flag buffer 730a supplies the prediction block flags to the horizontal predicted motion vector information generation unit 731 and the vertical predicted motion vector information generation unit 732 for a P picture, and supplies the prediction block flag to the horizontal-vertical predicted motion vector information generation unit 733 for a B picture.
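The decoder-side routing by the flag buffer can be sketched as a dispatch on the decoded scheme. Unit numbers follow FIG. 22; everything else (function name, flag representation) is assumed for illustration.

```python
# Hypothetical sketch of the flag buffer routing in FIG. 22: the supply
# destination of the prediction block flags depends on which predictor
# scheme the identification information (or the picture type) indicates.

def route_prediction_flags(scheme, flags):
    """scheme: 'separate' or 'combined', per the decoded identification info.
    flags: list of prediction block flags read from the stream."""
    if scheme == "separate":
        # unit 731 (horizontal) and unit 732 (vertical) each receive a flag
        return {"unit_731": flags[0], "unit_732": flags[1]}
    # combined horizontal-vertical predictor: a single flag goes to unit 733
    return {"unit_733": flags[0]}

print(route_prediction_flags("separate", ["B", "A"]))
print(route_prediction_flags("combined", ["C"]))
```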
 The lossless encoding unit 16 may also assign codes differently for the horizontal direction and the vertical direction. For example, suppose that spatial predicted motion vector information and temporal predicted motion vector information can both be used as the predicted motion vector information. In this case, taking into account the imaging operation performed when the moving image to be encoded was generated, codes with a small amount of data are assigned to the predicted motion vector information with high prediction accuracy. For example, when a captured image is recorded by an imaging device described later and a panning operation moves the imaging direction horizontally, the vertical motion vector information is almost "0". In this situation, the prediction accuracy of the temporal predicted motion vector information is often higher than that of the spatial predicted motion vector information in the vertical direction, while the prediction accuracy of the spatial predicted motion vector information is often higher than that of the temporal predicted motion vector information in the horizontal direction. Therefore, for the horizontal prediction block information, code number "0" is assigned to the block of the spatial predicted motion vector information and code number "1" to the block of the temporal predicted motion vector information. For the vertical prediction block information, code number "1" is assigned to the block of the spatial predicted motion vector information and code number "0" to the block of the temporal predicted motion vector information. By assigning codes differently for the horizontal prediction block information and the vertical prediction block information in this way, codes with a small amount of data can be used more often, so higher encoding efficiency can be realized.
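The direction-dependent code assignment above reduces to a small lookup table. The following sketch (hypothetical names; the table values follow the panning example in the text) shows how the shorter code number "0" goes to the predictor expected to be chosen more often in each direction.

```python
# Hypothetical sketch of the direction-dependent code number assignment:
# during horizontal panning the temporal predictor tends to be better
# vertically and the spatial predictor horizontally, so code number "0"
# (less data) is assigned to the likelier predictor in each direction.

CODE_NUMBERS = {
    "horizontal": {"spatial": 0, "temporal": 1},
    "vertical":   {"spatial": 1, "temporal": 0},
}

def code_number(direction, predictor_kind):
    return CODE_NUMBERS[direction][predictor_kind]

print(code_number("horizontal", "spatial"))  # 0: short code for likely choice
print(code_number("vertical", "temporal"))   # 0: likewise in the vertical case
```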
 <6. Software Processing>
 The series of processes described in this specification can be executed by hardware, software, or a combined configuration of both. When the processing is executed by software, a program in which the processing sequence is recorded is installed in a memory in a computer incorporated in dedicated hardware and executed. Alternatively, the program can be installed and executed on a general-purpose computer capable of executing various processes.
 FIG. 23 illustrates an example configuration of a computer device that executes the above-described series of processes by a program. The CPU 801 of the computer device 80 executes various processes according to programs recorded in the ROM 802 or the recording unit 808.
 The RAM 803 stores programs executed by the CPU 801, various data, and the like, as appropriate. The CPU 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804.
 An input/output interface 805 is also connected to the CPU 801 via the bus 804. An input unit 806 such as a touch panel, keyboard, mouse, or microphone, and an output unit 807 including a display and the like are connected to the input/output interface 805. The CPU 801 executes various processes in response to commands input from the input unit 806, and outputs the processing results to the output unit 807.
 The recording unit 808 connected to the input/output interface 805 includes, for example, a hard disk, and records programs executed by the CPU 801 and various data. The communication unit 809 communicates with external devices via wired or wireless communication media such as networks including the Internet and local area networks, or digital broadcasting. The computer device 80 may also acquire a program via the communication unit 809 and record it in the ROM 802 or the recording unit 808.
 When a removable medium 85 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory is mounted, the drive 810 drives it to acquire the recorded programs and data. The acquired programs and data are transferred to the ROM 802, the RAM 803, or the recording unit 808 as necessary.
 The CPU 801 reads and executes a program for performing the above-described series of processes, and performs encoding processing on image signals recorded in the recording unit 808 or on the removable medium 85 or supplied via the communication unit 809, and decoding processing on compressed image information.
 <7. Application to Electronic Devices>
 In the above description, the H.264/AVC format is used as the encoding/decoding method, but the present technology can also be applied to image encoding devices and image decoding devices that use other encoding/decoding methods for performing motion prediction and compensation processing.
 Furthermore, the present technology can be applied when image information (a bit stream) compressed by an orthogonal transform such as the discrete cosine transform and motion compensation is received via network media such as satellite broadcasting, cable TV (television), the Internet, and mobile phones. The present technology can also be applied to image encoding devices and image decoding devices used when processing on storage media such as optical disks, magnetic disks, and flash memory.
 The image encoding device 10 and the image decoding device 50 described above can be applied to any electronic device. Examples are described below.
 FIG. 24 illustrates a schematic configuration of a television device to which the present technology is applied. The television device 90 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, and an external interface unit 909. The television device 90 further includes a control unit 910, a user interface unit 911, and the like.
 The tuner 902 selects and demodulates a desired channel from the broadcast wave signal received by the antenna 901, and outputs the obtained stream to the demultiplexer 903.
 The demultiplexer 903 extracts video and audio packets of the program to be viewed from the stream, and outputs the data of the extracted packets to the decoder 904. The demultiplexer 903 also outputs packets of data such as an EPG (Electronic Program Guide) to the control unit 910. When scrambling has been applied, it is removed by the demultiplexer or the like.
 The decoder 904 performs decoding processing on the packets, outputs the video data generated by the decoding processing to the video signal processing unit 905, and outputs the audio data to the audio signal processing unit 907.
 The video signal processing unit 905 performs noise removal, video processing according to user settings, and the like on the video data. The video signal processing unit 905 generates video data of the program to be displayed on the display unit 906, image data produced by processing based on an application supplied via a network, and the like. The video signal processing unit 905 also generates video data for displaying menu screens for item selection and the like, and superimposes it on the video data of the program. The video signal processing unit 905 generates a drive signal based on the video data generated in this way, and drives the display unit 906.
 The display unit 906 drives a display device (for example, a liquid crystal display element) based on the drive signal from the video signal processing unit 905 to display the video of the program and the like.
 The audio signal processing unit 907 performs predetermined processing such as noise removal on the audio data, performs D/A conversion processing and amplification processing on the processed audio data, and supplies the result to the speaker 908 to output audio.
 The external interface unit 909 is an interface for connecting to external devices and networks, and transmits and receives data such as video data and audio data.
 A user interface unit 911 is connected to the control unit 910. The user interface unit 911 includes operation switches, a remote control signal receiving unit, and the like, and supplies operation signals corresponding to user operations to the control unit 910.
 The control unit 910 is configured using a CPU (Central Processing Unit), a memory, and the like. The memory stores programs executed by the CPU, various data necessary for the CPU to perform processing, EPG data, data acquired via a network, and the like. The programs stored in the memory are read out and executed by the CPU at predetermined timings, such as when the television device 90 is activated. By executing the programs, the CPU controls each unit so that the television device 90 operates according to user operations.
 In the television device 90, a bus 912 is provided to connect the tuner 902, the demultiplexer 903, the video signal processing unit 905, the audio signal processing unit 907, the external interface unit 909, and the like to the control unit 910.
 In the television device configured in this way, the decoder 904 is provided with the function of the image decoding device (image decoding method) of the present application. The television device can therefore correctly restore the motion vector information of the target block to be decoded, based on the generated predicted motion vector information and the received difference motion vector information. Accordingly, even when the broadcast station side individually sets the horizontal predicted motion vector information and the vertical predicted motion vector information to increase encoding efficiency, the television device can decode correctly.
 FIG. 25 illustrates a schematic configuration of a mobile phone to which the present technology is applied. The mobile phone 92 includes a communication unit 922, an audio codec 923, a camera unit 926, an image processing unit 927, a multiplexing/demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, and a control unit 931. These are connected to one another via a bus 933.
 An antenna 921 is connected to the communication unit 922, and a speaker 924 and a microphone 925 are connected to the audio codec 923. Further, an operation unit 932 is connected to the control unit 931.
 The mobile phone 92 performs various operations such as transmission and reception of audio signals, transmission and reception of e-mail and image data, image capture, and data recording in various modes such as a voice call mode and a data communication mode.
 In the voice call mode, the audio signal generated by the microphone 925 is converted into audio data and compressed by the audio codec 923, and supplied to the communication unit 922. The communication unit 922 performs modulation processing, frequency conversion processing, and the like on the audio data to generate a transmission signal. The communication unit 922 supplies the transmission signal to the antenna 921 for transmission to a base station (not shown). The communication unit 922 also performs amplification, frequency conversion processing, demodulation processing, and the like on the reception signal received by the antenna 921, and supplies the obtained audio data to the audio codec 923. The audio codec 923 decompresses the audio data, converts it into an analog audio signal, and outputs it to the speaker 924.
 When mail is transmitted in the data communication mode, the control unit 931 accepts character data input by operating the operation unit 932 and displays the input characters on the display unit 930. The control unit 931 also generates mail data based on user instructions and the like at the operation unit 932, and supplies the mail data to the communication unit 922. The communication unit 922 performs modulation processing, frequency conversion processing, and the like on the mail data, and transmits the obtained transmission signal from the antenna 921. The communication unit 922 also performs amplification, frequency conversion processing, demodulation processing, and the like on the reception signal received by the antenna 921 to restore the mail data. The mail data is supplied to the display unit 930 to display the mail contents.
 The mobile phone 92 can also store the received mail data on a storage medium by means of the recording/reproducing unit 929. The storage medium is any rewritable storage medium, for example, a semiconductor memory such as a RAM or built-in flash memory, a hard disk, or a removable medium such as a magnetic disk, magneto-optical disk, optical disk, USB memory, or memory card.
 When image data is transmitted in the data communication mode, the image data generated by the camera unit 926 is supplied to the image processing unit 927. The image processing unit 927 performs encoding processing on the image data to generate compressed image information.
 The multiplexing/demultiplexing unit 928 multiplexes the compressed image information generated by the image processing unit 927 and the audio data supplied from the audio codec 923 by a predetermined method, and supplies the result to the communication unit 922. The communication unit 922 performs modulation processing, frequency conversion processing, and the like on the multiplexed data, and transmits the obtained transmission signal from the antenna 921. The communication unit 922 also performs amplification, frequency conversion processing, demodulation processing, and the like on the reception signal received by the antenna 921 to restore the multiplexed data, and supplies the multiplexed data to the multiplexing/demultiplexing unit 928. The multiplexing/demultiplexing unit 928 separates the multiplexed data, and supplies the compressed image information to the image processing unit 927 and the audio data to the audio codec 923.
 The image processing unit 927 performs decoding processing on the compressed image information to generate image data. The image data is supplied to the display unit 930 to display the received image. The audio codec 923 converts the audio data into an analog audio signal, supplies it to the speaker 924, and outputs the received audio.
 In the mobile phone configured in this way, the image processing unit 927 is provided with the functions of the image encoding device (image encoding method) and the image decoding device (image decoding method) of the present application. Therefore, when an image is transmitted, the horizontal predicted motion vector information for the horizontal component and the vertical predicted motion vector information for the vertical component of the motion vector information of the target block can be set individually to improve encoding efficiency. The compressed image information generated by the image encoding processing can also be decoded correctly.
 FIG. 26 illustrates a schematic configuration of a recording/reproducing device to which the present technology is applied. The recording/reproducing device 94 records, for example, audio data and video data of a received broadcast program on a recording medium, and provides the recorded data to the user at a timing according to the user's instruction. The recording/reproducing device 94 can also acquire audio data and video data from another device, for example, and record them on a recording medium. Furthermore, the recording/reproducing device 94 decodes and outputs the audio data and video data recorded on the recording medium, enabling image display and audio output on a monitor device or the like.
 記録再生装置94は、チューナ941、外部インタフェース部942、エンコーダ943、HDD(Hard Disk Drive)部944、ディスクドライブ945、セレクタ946、デコーダ947、OSD(On-Screen Display)部948、制御部949、ユーザインタフェース部950を有している。 The recording/reproducing device 94 includes a tuner 941, an external interface unit 942, an encoder 943, an HDD (Hard Disk Drive) unit 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) unit 948, a control unit 949, and a user interface unit 950.
 チューナ941は、図示しないアンテナで受信された放送信号から所望のチャンネルを選局する。チューナ941は、所望のチャンネルの受信信号を復調して得られた画像圧縮情報をセレクタ946に出力する。 Tuner 941 selects a desired channel from a broadcast signal received by an antenna (not shown). The tuner 941 outputs image compression information obtained by demodulating the received signal of the desired channel to the selector 946.
 外部インタフェース部942は、IEEE1394インタフェース、ネットワークインタフェース部、USBインタフェース、フラッシュメモリインタフェース等の少なくともいずれかで構成されている。外部インタフェース部942は、外部機器やネットワーク、メモリカード等と接続するためのインタフェースであり、記録する映像データや音声データ等のデータ受信を行う。 The external interface unit 942 includes at least one of an IEEE 1394 interface, a network interface unit, a USB interface, a flash memory interface, and the like. The external interface unit 942 is an interface for connecting to an external device, a network, a memory card, and the like, and receives data such as video data and audio data to be recorded.
 エンコーダ943は、外部インタフェース部942から供給された映像データや音声データが符号化されていない場合所定の方式で符号化を行い、画像圧縮情報をセレクタ946に出力する。 The encoder 943 performs encoding by a predetermined method when the video data and audio data supplied from the external interface unit 942 are not encoded, and outputs image compression information to the selector 946.
 HDD部944は、映像や音声等のコンテンツデータ、各種プログラムやその他のデータ等を内蔵のハードディスクに記録し、また再生時等にそれらを当該ハードディスクから読み出す。 The HDD unit 944 records content data such as video and audio, various programs, and other data on a built-in hard disk, and reads them from the hard disk during playback.
 ディスクドライブ945は、装着されている光ディスクに対する信号の記録および再生を行う。光ディスクは、例えばDVDディスク(DVD-Video、DVD-RAM、DVD-R、DVD-RW、DVD+R、DVD+RW等)やBlu-rayディスク等である。 The disk drive 945 records and reproduces signals on the mounted optical disk. The optical disk is, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray disc.
 セレクタ946は、映像や音声の記録時には、チューナ941またはエンコーダ943からのいずれかのストリームを選択して、HDD部944やディスクドライブ945のいずれかに供給する。また、セレクタ946は、映像や音声の再生時に、HDD部944またはディスクドライブ945から出力されたストリームをデコーダ947に供給する。 The selector 946 selects any stream from the tuner 941 or the encoder 943 and supplies it to either the HDD unit 944 or the disk drive 945 when recording video or audio. In addition, the selector 946 supplies the stream output from the HDD unit 944 or the disk drive 945 to the decoder 947 when playing back video or audio.
 デコーダ947は、ストリームの復号化処理を行う。デコーダ947は、復号化処理を行うことで生成された映像データをOSD部948に供給する。また、デコーダ947は、復号化処理を行うことで生成された音声データを出力する。 The decoder 947 performs a stream decoding process. The decoder 947 supplies the video data generated by the decoding process to the OSD unit 948, and outputs the audio data generated by the decoding process.
 OSD部948は、項目の選択などのメニュー画面等を表示するための映像データを生成し、それをデコーダ947から出力された映像データに重畳して出力する。 The OSD unit 948 generates video data for displaying a menu screen for selecting an item and the like, and superimposes it on the video data output from the decoder 947 and outputs the video data.
 制御部949には、ユーザインタフェース部950が接続されている。ユーザインタフェース部950は、操作スイッチやリモートコントロール信号受信部等で構成されており、ユーザ操作に応じた操作信号を制御部949に供給する。 A user interface unit 950 is connected to the control unit 949. The user interface unit 950 includes an operation switch, a remote control signal receiving unit, and the like, and supplies an operation signal corresponding to a user operation to the control unit 949.
 制御部949は、CPUやメモリ等を用いて構成されている。メモリは、CPUで実行されるプログラムやCPUが処理を行う上で必要な各種のデータを記憶する。メモリに記憶されているプログラムは、記録再生装置94の起動時などの所定タイミングでCPUにより読み出されて実行される。CPUは、プログラムを実行することで、記録再生装置94がユーザ操作に応じた動作となるように各部を制御する。 The control unit 949 is configured using a CPU, a memory, and the like. The memory stores programs executed by the CPU and various data necessary for the CPU to perform processing. The program stored in the memory is read and executed by the CPU at a predetermined timing such as when the recording / reproducing apparatus 94 is activated. The CPU executes the program to control each unit so that the recording / reproducing device 94 operates in accordance with the user operation.
 このように構成された記録再生装置では、エンコーダ943に本願の画像符号化装置（画像符号化方法）の機能が設けられる。また、デコーダ947に本願の画像復号化装置（画像復号化方法）の機能が設けられる。したがって、画像を記録媒体に記録する際に、対象ブロックについて、動きベクトル情報の水平成分に対して水平予測動きベクトル情報、垂直成分に対して垂直予測動きベクトル情報を個々に設定して符号化効率を向上させることができる。また、画像符号化処理によって生成した画像圧縮情報の復号化を正しく行うことができる。 In the recording/reproducing device configured as described above, the encoder 943 is provided with the function of the image encoding device (image encoding method) of the present application, and the decoder 947 is provided with the function of the image decoding device (image decoding method) of the present application. Therefore, when recording an image on a recording medium, the horizontal predicted motion vector information for the horizontal component of the motion vector information and the vertical predicted motion vector information for the vertical component can be set individually for the target block, improving encoding efficiency. In addition, the compressed image information generated by the image encoding process can be decoded correctly.
 図27は、本技術を適用した撮像装置の概略構成を例示している。撮像装置96は、被写体を撮像し、被写体の画像を表示部に表示させたり、それを画像データとして、記録媒体に記録する。 FIG. 27 illustrates a schematic configuration of an imaging apparatus to which the present technology is applied. The imaging device 96 images a subject and displays an image of the subject on a display unit, or records it on a recording medium as image data.
 撮像装置96は、光学ブロック961、撮像部962、カメラ信号処理部963、画像データ処理部964、表示部965、外部インタフェース部966、メモリ部967、メディアドライブ968、OSD部969、制御部970を有している。また、制御部970には、ユーザインタフェース部971や動き検出センサ部972が接続されている。さらに、画像データ処理部964や外部インタフェース部966、メモリ部967、メディアドライブ968、OSD部969、制御部970等は、バス973を介して接続されている。 The imaging device 96 includes an optical block 961, an imaging unit 962, a camera signal processing unit 963, an image data processing unit 964, a display unit 965, an external interface unit 966, a memory unit 967, a media drive 968, an OSD unit 969, and a control unit 970. A user interface unit 971 and a motion detection sensor unit 972 are connected to the control unit 970. Furthermore, the image data processing unit 964, the external interface unit 966, the memory unit 967, the media drive 968, the OSD unit 969, the control unit 970, and the like are connected via a bus 973.
 光学ブロック961は、フォーカスレンズや絞り機構等を用いて構成されている。光学ブロック961は、被写体の光学像を撮像部962の撮像面に結像させる。撮像部962は、CCDまたはCMOSイメージセンサを用いて構成されており、光電変換によって光学像に応じた電気信号を生成してカメラ信号処理部963に供給する。 The optical block 961 is configured using a focus lens, a diaphragm mechanism, and the like. The optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962. The imaging unit 962 is configured using a CCD or CMOS image sensor, generates an electrical signal corresponding to the optical image by photoelectric conversion, and supplies the electrical signal to the camera signal processing unit 963.
 カメラ信号処理部963は、撮像部962から供給された電気信号に対してニー補正やガンマ補正、色補正等の種々のカメラ信号処理を行う。カメラ信号処理部963は、カメラ信号処理後の画像データを画像データ処理部964に供給する。 The camera signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the electrical signal supplied from the imaging unit 962. The camera signal processing unit 963 supplies the image data after the camera signal processing to the image data processing unit 964.
 画像データ処理部964は、カメラ信号処理部963から供給された画像データの符号化処理を行う。画像データ処理部964は、符号化処理を行うことで生成された画像圧縮情報を外部インタフェース部966やメディアドライブ968に供給する。また、画像データ処理部964は、外部インタフェース部966やメディアドライブ968から供給された画像圧縮情報の復号化処理を行う。画像データ処理部964は、復号化処理を行うことで生成された画像データを表示部965に供給する。また、画像データ処理部964は、カメラ信号処理部963から供給された画像データを表示部965に供給する処理や、OSD部969から取得した表示用データを、画像データに重畳させて表示部965に供給する。 The image data processing unit 964 encodes the image data supplied from the camera signal processing unit 963, and supplies the compressed image information generated by the encoding process to the external interface unit 966 and the media drive 968. The image data processing unit 964 also decodes compressed image information supplied from the external interface unit 966 or the media drive 968, and supplies the image data generated by the decoding process to the display unit 965. In addition, the image data processing unit 964 supplies the image data from the camera signal processing unit 963 to the display unit 965, and superimposes display data acquired from the OSD unit 969 on the image data before supplying it to the display unit 965.
 OSD部969は、記号、文字、または図形からなるメニュー画面やアイコンなどの表示用データを生成して画像データ処理部964に出力する。 The OSD unit 969 generates display data such as a menu screen and icons made up of symbols, characters, or figures and outputs them to the image data processing unit 964.
 外部インタフェース部966は、例えば、USB入出力端子などで構成され、画像の印刷を行う場合に、プリンタと接続される。また、外部インタフェース部966には、必要に応じてドライブが接続され、磁気ディスク、光ディスク等のリムーバブルメディアが適宜装着され、それらから読み出されたプログラムが、必要に応じて、インストールされる。さらに、外部インタフェース部966は、LANやインターネット等の所定のネットワークに接続されるネットワークインタフェースを有する。制御部970は、例えば、ユーザインタフェース部971からの指示にしたがって、メモリ部967から画像圧縮情報を読み出し、それを外部インタフェース部966から、ネットワークを介して接続される他の装置に供給させることができる。また、制御部970は、ネットワークを介して他の装置から供給される画像圧縮情報や画像データを、外部インタフェース部966を介して取得し、それを画像データ処理部964に供給したりすることができる。 The external interface unit 966 includes, for example, a USB input/output terminal, and is connected to a printer when printing an image. A drive is connected to the external interface unit 966 as necessary, removable media such as magnetic disks and optical disks are mounted as appropriate, and programs read from them are installed as necessary. Furthermore, the external interface unit 966 has a network interface connected to a predetermined network such as a LAN or the Internet. The control unit 970 can, for example, read compressed image information from the memory unit 967 in accordance with an instruction from the user interface unit 971 and supply it from the external interface unit 966 to another device connected via the network. The control unit 970 can also acquire, via the external interface unit 966, compressed image information and image data supplied from another device over the network and supply them to the image data processing unit 964.
 メディアドライブ968で駆動される記録メディアとしては、例えば、磁気ディスク、光磁気ディスク、光ディスク、または半導体メモリ等の、読み書き可能な任意のリムーバブルメディアが用いられる。また、記録メディアは、リムーバブルメディアとしての種類も任意であり、テープデバイスであってもよいし、ディスクであってもよいし、メモリカードであってもよい。もちろん、非接触ICカード等であってもよい。 As the recording medium driven by the media drive 968, any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory is used. The recording medium may be any type of removable medium, and may be a tape device, a disk, or a memory card. Of course, a non-contact IC card or the like may be used.
 また、メディアドライブ968と記録メディアを一体化し、例えば、内蔵型ハードディスクドライブやSSD(Solid State Drive)等のように、非可搬性の記憶媒体で構成されるようにしてもよい。 Further, the media drive 968 and the recording medium may be integrated, and may be configured by a non-portable storage medium such as a built-in hard disk drive or an SSD (Solid State Drive).
 制御部970は、CPUやメモリ等を用いて構成されている。メモリは、CPUで実行されるプログラムやCPUが処理を行う上で必要な各種のデータ等を記憶する。メモリに記憶されているプログラムは、撮像装置96の起動時などの所定タイミングでCPUにより読み出されて実行される。CPUは、プログラムを実行することで、撮像装置96がユーザ操作に応じた動作となるように各部を制御する。 The control unit 970 is configured using a CPU, a memory, and the like. The memory stores programs executed by the CPU, various data necessary for the CPU to perform processing, and the like. The program stored in the memory is read and executed by the CPU at a predetermined timing such as when the imaging device 96 is activated. The CPU executes the program to control each unit so that the imaging device 96 operates according to the user operation.
 このように構成された撮像装置では、画像データ処理部964に本願の画像符号化装置（画像符号化方法）と画像復号化装置（画像復号化方法）の機能が設けられる。したがって、撮像画像を記録する際に、対象ブロックについて、動きベクトル情報の水平成分に対して水平予測動きベクトル情報、垂直成分に対して垂直予測動きベクトル情報を個々に設定して符号化効率を向上させることができる。また、画像符号化処理によって生成した画像圧縮情報の復号化を正しく行うことができる。 In the imaging device configured as described above, the image data processing unit 964 is provided with the functions of the image encoding device (image encoding method) and the image decoding device (image decoding method) of the present application. Therefore, when recording a captured image, the horizontal predicted motion vector information for the horizontal component of the motion vector information and the vertical predicted motion vector information for the vertical component can be set individually for the target block, improving encoding efficiency. In addition, the compressed image information generated by the image encoding process can be decoded correctly.
 さらに、撮像装置96にジャイロ等を用いて構成された動き検出センサ部972を設けて、撮像装置96のパンニングやチルティング等の動きの検出結果に基づき、データ量の少ないコードを予測精度の高い予測動きベクトル情報に対して割り当てる。このように、撮像装置の動き検出結果に応じてコードの動的割り当てを行うことで、符号化効率をさらに向上させることができる。 Furthermore, the imaging device 96 is provided with a motion detection sensor unit 972 configured using a gyro or the like, and based on the detection result of motion such as panning or tilting of the imaging device 96, a code with a small data amount is assigned to the predicted motion vector information with high prediction accuracy. By dynamically assigning codes according to the motion detection result of the imaging device in this way, encoding efficiency can be further improved.
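As a rough sketch of this dynamic code allocation, the following assigns shorter variable-length codes to the predictor candidates expected to be accurate under the detected camera motion. The candidate names, the pan/tilt policy, and the unary codes are illustrative assumptions for exposition, not the allocation actually specified by the encoder.

```python
def assign_predictor_codes(camera_motion):
    """Order predictor candidates by expected accuracy for the detected
    camera motion, then give earlier candidates shorter unary codes."""
    if camera_motion in ("pan", "tilt"):
        # Global motion: spatial neighbors tend to share the camera's
        # motion, so spatial prediction is expected to be accurate.
        order = ["spatial", "temporal"]
    else:
        # Static camera: the co-located (temporal) vector is often near
        # zero and accurate, so favor temporal prediction.
        order = ["temporal", "spatial"]
    # Unary code: candidate i gets i zeros followed by a one.
    return {name: "0" * i + "1" for i, name in enumerate(order)}
```

During panning, the spatial candidate would thus cost one bit to signal instead of two, which is where the efficiency gain comes from when the motion sensor's prediction is right.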
 なお、本技術は、上述した実施の形態に限定して解釈されるべきではない。この実施の形態は、例示という形態で本技術を開示しており、本技術の要旨を逸脱しない範囲で当業者が実施の形態の修正や代用をなし得ることは自明である。すなわち、本技術の要旨を判断するためには、請求の範囲を参酌すべきである。 Note that the present technology should not be interpreted as being limited to the above-described embodiment. This embodiment discloses the present technology in the form of exemplification, and it is obvious that those skilled in the art can make modifications and substitutions of the embodiment without departing from the gist of the present technology. In other words, the scope of the claims should be considered in order to determine the gist of the present technology.
 この技術の画像符号化装置と動きベクトル符号化方法、画像復号化装置と動きベクトル復号化方法では、対象ブロックの動きベクトル情報の水平成分と垂直成分のそれぞれに対して、対象ブロックと隣接する符号化済みブロックから動きベクトル情報が選択されて水平予測動きベクトル情報と垂直予測動きベクトル情報がそれぞれ設定されて、設定された水平予測動きベクトル情報と垂直予測動きベクトル情報を用いて、対象ブロックの動きベクトル情報の圧縮処理が行われる。また、動きベクトル情報が選択されたブロックを示す水平予測ブロック情報と垂直予測ブロック情報が生成される。さらに、水平予測ブロック情報と垂直予測ブロック情報に基づいて動きベクトル情報の復号化が行われる。このため、例えば水平予測動きベクトル情報と垂直予測動きベクトル情報の候補の組み合わせ分のフラグよりも少ないデータ量である水平予測ブロック情報と垂直予測ブロック情報で水平予測動きベクトル情報と垂直予測動きベクトル情報の設定が可能となり、符号化効率を向上させることができる。したがって、高い符号化効率を実現できることから、画像圧縮情報（ビットストリーム）を、衛星放送、ケーブルTV、インターネット、携帯電話などのネットワークメディアを介して送受信する際に、または光ディスク、磁気ディスク、フラッシュメモリのような記憶メディアを用いて画像の記録再生を行う装置等に適している。 In the image encoding device and motion vector encoding method, and the image decoding device and motion vector decoding method of this technique, for each of the horizontal component and the vertical component of the motion vector information of a target block, motion vector information is selected from encoded blocks adjacent to the target block to set horizontal predicted motion vector information and vertical predicted motion vector information, and the motion vector information of the target block is compressed using the set horizontal and vertical predicted motion vector information. In addition, horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information was selected are generated, and the motion vector information is decoded based on this horizontal and vertical prediction block information. Consequently, the horizontal and vertical predicted motion vector information can be signaled with horizontal and vertical prediction block information whose data amount is smaller than, for example, a flag covering every combination of horizontal and vertical predicted motion vector candidates, improving encoding efficiency. Since high encoding efficiency can be realized, the technique is suitable for transmitting and receiving compressed image information (bitstreams) via network media such as satellite broadcasting, cable TV, the Internet, and mobile phones, and for apparatuses that record and reproduce images using storage media such as optical disks, magnetic disks, and flash memory.
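The component-wise scheme summarized above can be sketched in a few lines of Python. This is a minimal illustrative model, not the patent's actual bitstream syntax: the block labels, the tuple-based signaling, and the closest-component selection rule (a stand-in for the cost-function-based choice described for the encoder) are all assumptions.

```python
def encode_motion_vector(mv, neighbor_mvs):
    """Choose a horizontal predictor and a vertical predictor independently
    from adjacent encoded blocks, then signal only the two block identifiers
    plus the per-component differences (the compressed motion vector)."""
    # Horizontal predictor: the neighbor whose x-component is closest to the
    # target's (a proxy for "highest encoding efficiency" per component).
    h_block = min(neighbor_mvs, key=lambda b: abs(mv[0] - neighbor_mvs[b][0]))
    # Vertical predictor: chosen independently for the y-component.
    v_block = min(neighbor_mvs, key=lambda b: abs(mv[1] - neighbor_mvs[b][1]))
    dmv = (mv[0] - neighbor_mvs[h_block][0], mv[1] - neighbor_mvs[v_block][1])
    return h_block, v_block, dmv  # prediction block info + differential

def decode_motion_vector(h_block, v_block, dmv, neighbor_mvs):
    """Mirror of the encoder: rebuild each component from the signaled
    predictor block and the decoded difference."""
    return (neighbor_mvs[h_block][0] + dmv[0],
            neighbor_mvs[v_block][1] + dmv[1])
```

With neighbors A = (4, 0) and B = (0, 3) and a target vector of (5, 3), the encoder signals A for the horizontal component and B for the vertical component with a differential of (1, 0), smaller than the residual either single predictor would leave on its own.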
 10・・・画像符号化装置、11・・・A/D変換部、12,57・・・画面並べ替えバッファ、13・・・減算部、14・・・直交変換部、15・・・量子化部、16・・・可逆符号化部、17,51・・・蓄積バッファ、18・・・レート制御部、21,53・・・逆量子化部、22,54・・・逆直交変換部、23,55・・・加算部、24,56・・・デブロッキングフィルタ、25,61・・・フレームメモリ、26,62,75・・・セレクタ、31,71・・・イントラ予測部、32・・・動き予測・補償部、33,33a,73,73a・・・予測動きベクトル情報設定部、35・・・予測画像・最適モード選択部、50・・・画像復号化装置、52・・・可逆復号化部、58・・・D/A変換部、72・・・動き補償部、80・・・コンピュータ装置、90・・・テレビジョン装置、92・・・携帯電話機、94・・・記録再生装置、96・・・撮像装置、321・・・動き探索部、322・・・コスト関数値算出部、323・・・モード判定部、324・・・動き補償処理部、325・・・動きベクトルバッファ、331,731・・・水平予測動きベクトル情報生成部、332,732・・・垂直予測動きベクトル情報生成部、333,733・・・水平垂直予測動きベクトル情報生成部、334,334a・・・識別情報生成部、721・・・ブロックサイズ情報バッファ、722・・・差分動きベクトル情報バッファ、723・・・動きベクトル情報生成部、724・・・動き補償処理部、725・・・動きベクトル情報バッファ、730,730a・・・フラグバッファ DESCRIPTION OF REFERENCE SYMBOLS: 10: image encoding device; 11: A/D conversion unit; 12, 57: screen rearrangement buffer; 13: subtraction unit; 14: orthogonal transform unit; 15: quantization unit; 16: lossless encoding unit; 17, 51: accumulation buffer; 18: rate control unit; 21, 53: inverse quantization unit; 22, 54: inverse orthogonal transform unit; 23, 55: addition unit; 24, 56: deblocking filter; 25, 61: frame memory; 26, 62, 75: selector; 31, 71: intra prediction unit; 32: motion prediction/compensation unit; 33, 33a, 73, 73a: predicted motion vector information setting unit; 35: predicted image/optimal mode selection unit; 50: image decoding device; 52: lossless decoding unit; 58: D/A conversion unit; 72: motion compensation unit; 80: computer device; 90: television device; 92: mobile phone; 94: recording/reproducing device; 96: imaging device; 321: motion search unit; 322: cost function value calculation unit; 323: mode determination unit; 324: motion compensation processing unit; 325: motion vector buffer; 331, 731: horizontal predicted motion vector information generation unit; 332, 732: vertical predicted motion vector information generation unit; 333, 733: horizontal-vertical predicted motion vector information generation unit; 334, 334a: identification information generation unit; 721: block size information buffer; 722: differential motion vector information buffer; 723: motion vector information generation unit; 724: motion compensation processing unit; 725: motion vector information buffer; 730, 730a: flag buffer

Claims (16)

  1.  対象ブロックと隣接する復号化済みブロックから動きベクトル情報が水平予測動きベクトル情報として選択されたブロックを示す水平予測ブロック情報と、動きベクトル情報が垂直予測動きベクトル情報として選択されたブロックを示す垂直予測ブロック情報を画像圧縮情報から取得する可逆復号化部と、
     前記水平予測ブロック情報で示されたブロックの動きベクトル情報を水平予測動きベクトル情報として設定し、前記垂直予測ブロック情報で示されたブロックの動きベクトル情報を前記垂直予測動きベクトル情報として設定する予測動きベクトル情報設定部と、
     前記予測動きベクトル情報設定部で設定された前記水平予測動きベクトル情報と垂直予測動きベクトル情報を用いて前記対象ブロックの動きベクトル情報を生成する動きベクトル情報生成部と
    を有する画像復号化装置。
    An image decoding device comprising:
    a lossless decoding unit that acquires, from compressed image information, horizontal prediction block information indicating a block whose motion vector information was selected as horizontal predicted motion vector information from decoded blocks adjacent to a target block, and vertical prediction block information indicating a block whose motion vector information was selected as vertical predicted motion vector information;
    a predicted motion vector information setting unit that sets the motion vector information of the block indicated by the horizontal prediction block information as the horizontal predicted motion vector information and sets the motion vector information of the block indicated by the vertical prediction block information as the vertical predicted motion vector information; and
    a motion vector information generation unit that generates motion vector information of the target block using the horizontal predicted motion vector information and the vertical predicted motion vector information set by the predicted motion vector information setting unit.
  2.  前記可逆復号化部は、前記水平予測動きベクトル情報と前記垂直予測動きベクトル情報、または前記対象ブロックの動きベクトル情報の水平成分と垂直成分に対して前記隣接する復号化済みブロックから選択した動きベクトル情報を示す水平垂直予測動きベクトル情報のいずれが用いられているかを示す識別情報を、前記画像圧縮情報から取得し、
     前記予測動きベクトル情報設定部は、前記識別情報に基づき、前記水平予測動きベクトル情報と前記垂直予測動きベクトル情報、または前記水平垂直予測動きベクトル情報の設定を行い、
     前記動きベクトル情報生成部は、前記水平予測動きベクトル情報と垂直予測動きベクトル情報、または前記水平垂直予測動きベクトル情報を用いて前記対象ブロックの動きベクトル情報を生成する請求項1記載の画像復号化装置。
    The image decoding device according to claim 1, wherein the lossless decoding unit acquires, from the compressed image information, identification information indicating which of the horizontal predicted motion vector information and the vertical predicted motion vector information, or horizontal-vertical predicted motion vector information indicating motion vector information selected from the adjacent decoded blocks for the horizontal component and the vertical component of the motion vector information of the target block, is used,
    the predicted motion vector information setting unit sets the horizontal predicted motion vector information and the vertical predicted motion vector information, or the horizontal-vertical predicted motion vector information, based on the identification information, and
    the motion vector information generation unit generates the motion vector information of the target block using the horizontal predicted motion vector information and the vertical predicted motion vector information, or the horizontal-vertical predicted motion vector information.
  3.  前記可逆復号化部は、画像圧縮情報に含まれているコードの復号化を行い、前記水平予測ブロック情報と前記垂直予測ブロック情報を取得し、
     前記予測動きベクトル情報設定部は、前記水平予測ブロック情報と前記垂直予測ブロック情報に基づき、前記水平予測動きベクトルと前記垂直予測動きベクトル情報の設定を行う請求項1記載の画像復号化装置。
    The lossless decoding unit decodes a code included in image compression information, acquires the horizontal prediction block information and the vertical prediction block information,
    The image decoding apparatus according to claim 1, wherein the prediction motion vector information setting unit sets the horizontal prediction motion vector and the vertical prediction motion vector information based on the horizontal prediction block information and the vertical prediction block information.
  4.  対象ブロックと隣接する復号化済みブロックから動きベクトル情報が水平予測動きベクトル情報として選択されたブロックを示す水平予測ブロック情報と、動きベクトル情報が垂直予測動きベクトル情報として選択されたブロックを示す垂直予測ブロック情報を画像圧縮情報から取得する工程と、
     前記水平予測ブロック情報で示されたブロックの動きベクトル情報を水平予測動きベクトル情報として設定し、前記垂直予測ブロック情報で示されたブロックの動きベクトル情報を前記垂直予測動きベクトル情報として設定する工程と、
     前記設定された水平予測動きベクトル情報と垂直予測動きベクトル情報を用いて前記対象ブロックの動きベクトル情報を生成する工程とを
    設けた動きベクトル情報復号化方法。
    A motion vector information decoding method comprising:
    a step of acquiring, from compressed image information, horizontal prediction block information indicating a block whose motion vector information was selected as horizontal predicted motion vector information from decoded blocks adjacent to a target block, and vertical prediction block information indicating a block whose motion vector information was selected as vertical predicted motion vector information;
    a step of setting the motion vector information of the block indicated by the horizontal prediction block information as the horizontal predicted motion vector information and setting the motion vector information of the block indicated by the vertical prediction block information as the vertical predicted motion vector information; and
    a step of generating motion vector information of the target block using the set horizontal predicted motion vector information and vertical predicted motion vector information.
  5.  対象ブロックの動きベクトル情報の水平成分と垂直成分のそれぞれに対して、前記対象ブロックと隣接する符号化済みブロックから動きベクトル情報を選択して水平予測動きベクトル情報と垂直予測動きベクトル情報の設定を行い、該動きベクトル情報が選択されたブロックを示す水平予測ブロック情報と垂直予測ブロック情報を生成する予測動きベクトル情報設定部
    を有する画像符号化装置。
    An image encoding device comprising a predicted motion vector information setting unit that, for each of the horizontal component and the vertical component of motion vector information of a target block, selects motion vector information from encoded blocks adjacent to the target block to set horizontal predicted motion vector information and vertical predicted motion vector information, and generates horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information was selected.
  6.  前記予測動きベクトル情報設定部は、前記水平成分の符号化処理で最も符号化効率が高くなる動きベクトル情報を選択して前記水平予測動きベクトル情報として設定し、前記垂直成分の符号化処理で最も符号化効率が高くなる動きベクトル情報を選択して前記垂直予測動きベクトル情報として設定する請求項5記載の画像符号化装置。 The image encoding device according to claim 5, wherein the predicted motion vector information setting unit selects the motion vector information that yields the highest encoding efficiency in the encoding process of the horizontal component and sets it as the horizontal predicted motion vector information, and selects the motion vector information that yields the highest encoding efficiency in the encoding process of the vertical component and sets it as the vertical predicted motion vector information.
  7.  予測モード毎にコスト関数値を算出するコスト関数値算出部と、
     最適予測モードの判定を行うモード判定部とをさらに有し、
     前記モード判定部は、前記算出されたコスト関数値が最小となるモードを最適予測モードと判定する請求項6記載の画像符号化装置。
    A cost function value calculation unit for calculating a cost function value for each prediction mode;
    A mode determination unit for determining the optimal prediction mode;
    The image encoding device according to claim 6, wherein the mode determination unit determines a mode in which the calculated cost function value is a minimum as an optimal prediction mode.
  8.  前記水平予測ブロック情報と垂直予測ブロック情報は、画像圧縮情報に含めて伝送する請求項5記載の画像符号化装置。 The image encoding device according to claim 5, wherein the horizontal prediction block information and the vertical prediction block information are transmitted by being included in image compression information.
  9.  前記予測動きベクトル情報設定部は、前記対象ブロックの動きベクトル情報の水平成分と垂直成分に対して、前記対象ブロックと隣接する符号化済みブロックから選択した動きベクトル情報を水平垂直予測動きベクトル情報とする設定、または前記水平予測動きベクトル情報と前記垂直予測動きベクトル情報の設定を、ピクチャ毎またはスライス毎に切り替え可能とする請求項5記載の画像符号化装置。 The image encoding device according to claim 5, wherein the predicted motion vector information setting unit can switch, for each picture or each slice, between a setting in which motion vector information selected from an encoded block adjacent to the target block is used as horizontal-vertical predicted motion vector information for the horizontal component and the vertical component of the motion vector information of the target block, and the setting of the horizontal predicted motion vector information and the vertical predicted motion vector information.
  10.  前記予測動きベクトル情報設定部は、前記水平予測動きベクトル情報と前記垂直予測動きベクトル情報、または前記水平垂直予測動きベクトル情報のいずれが用いられているかを示す識別情報を生成する請求項9記載の画像符号化装置。 The image encoding device according to claim 9, wherein the predicted motion vector information setting unit generates identification information indicating which of the horizontal predicted motion vector information and the vertical predicted motion vector information, or the horizontal-vertical predicted motion vector information, is used.
  11.  前記生成した識別情報は、画像圧縮情報のピクチャパラメータセットまたはスライスヘッダに含める請求項10記載の画像符号化装置。 The image encoding device according to claim 10, wherein the generated identification information is included in a picture parameter set or slice header of image compression information.
  12.  前記予測動きベクトル情報設定部は、Pピクチャに対して前記水平予測動きベクトル情報と前記垂直予測動きベクトル情報の設定を行い、Bピクチャに対して前記水平垂直予測動きベクトル情報の設定を行う請求項9記載の画像符号化装置。 The image encoding device according to claim 9, wherein the predicted motion vector information setting unit sets the horizontal predicted motion vector information and the vertical predicted motion vector information for a P picture, and sets the horizontal-vertical predicted motion vector information for a B picture.
  13.  前記対象ブロックの動きベクトル情報の符号化を行う可逆符号化部を有し、
     前記可逆符号化部は、前記水平予測ブロック情報と前記垂直予測ブロック情報とで異なるコード割り当てを行い、前記水平予測ブロック情報と前記垂直予測ブロック情報に割り当てたコードを画像圧縮情報に含める請求項5記載の画像符号化装置。
    The image encoding device according to claim 5, further comprising a lossless encoding unit that encodes the motion vector information of the target block,
    wherein the lossless encoding unit performs different code assignment for the horizontal prediction block information and the vertical prediction block information, and includes the codes assigned to the horizontal prediction block information and the vertical prediction block information in compressed image information.
  14.  前記可逆符号化部は、動きベクトル情報が空間予測動きベクトル情報として選択されたブロックを示す予測ブロック情報と、動きベクトル情報が時間予測動きベクトル情報として選択されたブロックを示す予測ブロック情報に対して、前記水平予測ブロック情報と前記垂直予測ブロック情報で異なるコード割り当てを行う請求項13記載の画像符号化装置。 The image encoding device according to claim 13, wherein the lossless encoding unit performs different code assignment, in the horizontal prediction block information and the vertical prediction block information, between prediction block information indicating a block whose motion vector information was selected as spatial predicted motion vector information and prediction block information indicating a block whose motion vector information was selected as temporal predicted motion vector information.
  15.  前記可逆符号化部は、撮像装置で生成された画像データを用いて検出された前記対象ブロックの動きベクトル情報の符号化処理を行う場合、前記撮像装置の動き検出結果に基づいて前記コード割り当てを行う請求項14記載の画像符号化装置。 The image encoding device according to claim 14, wherein, when performing the encoding process of the motion vector information of the target block detected using image data generated by an imaging device, the lossless encoding unit performs the code assignment based on a motion detection result of the imaging device.
  16.  対象ブロックの動きベクトル情報の水平成分と垂直成分のそれぞれに対して、前記対象ブロックと隣接する符号化済みブロックから動きベクトル情報を選択して水平予測動きベクトル情報と垂直予測動きベクトル情報の設定を行い、該動きベクトル情報が選択されたブロックを示す水平予測ブロック情報と垂直予測ブロック情報を生成する工程を設けた動きベクトル情報符号化方法。 A motion vector information encoding method comprising a step of, for each of the horizontal component and the vertical component of motion vector information of a target block, selecting motion vector information from encoded blocks adjacent to the target block to set horizontal predicted motion vector information and vertical predicted motion vector information, and generating horizontal prediction block information and vertical prediction block information indicating the blocks whose motion vector information was selected.
PCT/JP2011/077510 2010-12-06 2011-11-29 Image decoding device, motion vector decoding method, image encoding device, and motion vector encoding method WO2012077533A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/990,506 US20130259134A1 (en) 2010-12-06 2011-11-29 Image decoding device and motion vector decoding method, and image encoding device and motion vector encoding method
CN2011800576190A CN103238329A (en) 2010-12-06 2011-11-29 Image decoding device, motion vector decoding method, image encoding device, and motion vector encoding method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-271769 2010-12-06
JP2010271769A JP2012124591A (en) 2010-12-06 2010-12-06 Image encoder and motion vector encoding method, image decoder and motion vector decoding method, and program

Publications (1)

Publication Number Publication Date
WO2012077533A1 true WO2012077533A1 (en) 2012-06-14

Family

ID=46207024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/077510 WO2012077533A1 (en) 2010-12-06 2011-11-29 Image decoding device, motion vector decoding method, image encoding device, and motion vector encoding method

Country Status (4)

Country Link
US (1) US20130259134A1 (en)
JP (1) JP2012124591A (en)
CN (1) CN103238329A (en)
WO (1) WO2012077533A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220157765A (en) * 2021-05-21 2022-11-29 삼성전자주식회사 Video Encoder and the operating method thereof

Citations (1)

Publication number Priority date Publication date Assignee Title
JP2008211697A (en) * 2007-02-28 2008-09-11 Sharp Corp Encoder and decoder

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US6983018B1 (en) * 1998-11-30 2006-01-03 Microsoft Corporation Efficient motion vector coding for video compression
KR100680452B1 (en) * 2000-02-22 2007-02-08 주식회사 팬택앤큐리텔 Method and apparatus for updating motion vector memory
US6687384B1 (en) * 2000-03-27 2004-02-03 Sarnoff Corporation Method and apparatus for embedding data in encoded digital bitstreams
KR100642043B1 (en) * 2001-09-14 2006-11-03 가부시키가이샤 엔티티 도코모 Coding method, decoding method, coding apparatus, decoding apparatus, image processing system, coding program, and decoding program
US7606427B2 (en) * 2004-07-08 2009-10-20 Qualcomm Incorporated Efficient rate control techniques for video encoding
CN100581245C (en) * 2004-07-08 2010-01-13 高通股份有限公司 Efficient rate control techniques for video encoding
CN101001383A (en) * 2006-01-12 2007-07-18 三星电子株式会社 Multilayer-based video encoding/decoding method and video encoder/decoder using smoothing prediction
JP4325708B2 (en) * 2007-07-05 2009-09-02 ソニー株式会社 Data processing device, data processing method and data processing program, encoding device, encoding method and encoding program, and decoding device, decoding method and decoding program
JP4990927B2 (en) * 2008-03-28 2012-08-01 三星電子株式会社 Method and apparatus for encoding / decoding motion vector information


Non-Patent Citations (2)

Title
JOEL JUNG ET AL.: "Competition- Based Scheme for Motion Vector Selection and Coding", ITU - TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6 VIDEO CODING EXPERTS GROUP (VCEG), VCEG-AC06, 29TH MEETING: KLAGENFURT, July 2006 (2006-07-01), AUSTRIA, pages 1 - 7 *
SUNG DEUK KIM ET AL.: "An efficient motion vector coding scheme based on minimum bitrate prediction", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 8, no. 8, August 1999 (1999-08-01), pages 1117 - 1120, XP011026355 *

Also Published As

Publication number Publication date
US20130259134A1 (en) 2013-10-03
CN103238329A (en) 2013-08-07
JP2012124591A (en) 2012-06-28

Similar Documents

Publication Publication Date Title
JP6477939B2 (en) Television apparatus, mobile phone, playback apparatus, camera, and image processing method
JP6057140B2 (en) Image processing apparatus and method, program, and recording medium
WO2012017858A1 (en) Image processing device and image processing method
WO2011155364A1 (en) Image decoder apparatus, image encoder apparatus and method and program thereof
WO2012063878A1 (en) Image processing device, and image processing method
JP2011050001A (en) Image processing apparatus and method
JPWO2010035734A1 (en) Image processing apparatus and method
WO2012063604A1 (en) Image processing device, and image processing method
JP2011151683A (en) Image processing apparatus and method
JP2011146980A (en) Image processor and image processing method
JP2011259093A (en) Image decoding apparatus and image encoding apparatus and method and program therefor
WO2012056924A1 (en) Image processing device and image processing method
JP5387520B2 (en) Information processing apparatus and information processing method
WO2012077533A1 (en) Image decoding device, motion vector decoding method, image encoding device, and motion vector encoding method
JP6268556B2 (en) Image processing apparatus and method, program, and recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11847461

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 13990506

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11847461

Country of ref document: EP

Kind code of ref document: A1