WO2013015118A1 - Image processing apparatus and method - Google Patents

Image processing apparatus and method

Info

Publication number
WO2013015118A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
motion vector
address
prediction
image
Prior art date
Application number
PCT/JP2012/067718
Other languages
English (en)
Japanese (ja)
Inventor
寿治 土屋
田中 潤一
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2013015118A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43: Hardware specially adapted for motion estimation or compensation
    • H04N19/433: Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access

Definitions

  • The present disclosure relates to an image processing apparatus and method, and more particularly to an image processing apparatus and method capable of reducing memory access.
  • MPEG2 (ISO/IEC 13818-2) is defined as a general-purpose image encoding system, and is a standard that covers both interlaced and progressive scanning images as well as standard-resolution and high-definition images.
  • MPEG2 is currently in wide use across a broad range of professional and consumer applications.
  • For example, a code amount (bit rate) of 4 to 8 Mbps is assigned to an interlaced image having a standard resolution of 720×480 pixels.
  • A high-resolution interlaced image of 1920×1088 pixels is assigned a code amount (bit rate) of 18 to 22 Mbps.
  • MPEG2 was mainly intended for high-quality encoding suitable for broadcasting and did not support encoding at a lower code amount (bit rate), that is, at a higher compression rate, than MPEG1. With the spread of mobile terminals, the need for such an encoding system was expected to grow, and the MPEG4 encoding system was standardized accordingly. Its image coding part was approved as the international standard ISO/IEC 14496-2 in December 1998.
  • H.264 and MPEG-4 Part 10 (Advanced Video Coding) are hereinafter referred to as H.264/AVC.
  • The present disclosure has been made in view of such a situation, and makes it possible to reduce memory access when reading out surrounding motion vectors that are temporally different.
  • According to one aspect of the present disclosure, when the lower-right pixel position in the divided region of the image indicated by the address calculated in obtaining the temporal correlation prediction vector is outside the region of the current decoding unit, the address is corrected so as to return the pixel position to the region of the current decoding unit.
  • In the image processing method of this aspect, when the lower-right pixel position in the divided region of the image indicated by the address calculated when the image processing apparatus obtains the temporal correlation prediction vector is outside the region of the current decoding unit, the address is corrected so as to return the pixel position to the region of the current decoding unit, a motion vector corresponding to the divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the image is generated by decoding the bitstream using a motion vector generated from the set temporal correlation prediction vector.
  • An image processing apparatus according to another aspect includes an address correction unit that, when the lower-right pixel position in the divided region of the image indicated by the address calculated in obtaining the temporal correlation prediction vector is outside the region of the current coding unit, corrects the address so as to return the pixel position to the region of the current coding unit, and a setting unit that sets a motion vector corresponding to the divided region indicated by the address corrected by the address correction unit as the temporal correlation prediction vector.
  • In the image processing method of this aspect, when the lower-right pixel position in the divided region of the image indicated by the address calculated when the image processing apparatus obtains the temporal correlation prediction vector is outside the region of the current coding unit, the address is corrected so as to return the pixel position to the region of the current coding unit, a motion vector corresponding to the divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the image is encoded using a prediction image predicted from a motion vector corresponding to the set temporal correlation prediction vector.
  • An image processing apparatus according to a further aspect includes an address correction unit that, when a first address calculated in obtaining the temporal correlation prediction vector and a second address, calculated for the case where the prediction unit region including the upper-left pixel position of the divided region of the image indicated by the first address is intra, are the same, corrects the second address so as to indicate an adjacent divided region adjacent to that divided region; a setting unit that sets the motion vector corresponding to the adjacent divided region indicated by the address corrected by the address correction unit as the temporal correlation prediction vector; and a decoding unit that generates the image by decoding the bitstream using a motion vector generated from the temporal correlation prediction vector set by the setting unit.
  • In the image processing method of this aspect, when a first address calculated when the image processing apparatus obtains the temporal correlation prediction vector and a second address, calculated for the case where the prediction unit region including the upper-left pixel position of the divided region of the image indicated by the first address is intra, are the same, the second address is corrected so as to indicate an adjacent divided region adjacent to that divided region, the motion vector corresponding to the adjacent divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the image is generated by decoding the bitstream using a motion vector generated from the set temporal correlation prediction vector.
  • An image processing apparatus according to a further aspect includes an address correction unit that, when a first address calculated in obtaining the temporal correlation prediction vector and a second address, calculated for the case where the prediction unit region including the upper-left pixel position of the divided region of the image indicated by the first address is intra, are the same, corrects the second address so as to indicate an adjacent divided region adjacent to that divided region; a setting unit that sets the motion vector corresponding to the adjacent divided region indicated by the address corrected by the address correction unit as the temporal correlation prediction vector; and an encoding unit that encodes the image using a prediction image predicted from a motion vector corresponding to the temporal correlation prediction vector set by the setting unit.
  • In the image processing method of this aspect, when a first address calculated when the image processing apparatus obtains the temporal correlation prediction vector and a second address, calculated for the case where the prediction unit region including the upper-left pixel position of the divided region of the image indicated by the first address is intra, are the same, the second address is corrected so as to indicate an adjacent divided region adjacent to that divided region, the motion vector corresponding to the adjacent divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the image is encoded using a prediction image predicted from the motion vector corresponding to the set temporal correlation prediction vector.
  • According to a still further aspect, when the prediction unit region including the upper-left pixel position of the current divided region of the image is intra, the image processing device selects either the motion vector of a different prediction unit region including an adjacent pixel position adjacent to that pixel position or the motion vector of another prediction unit region included in the current divided region, sets the selected motion vector as the motion vector corresponding to the current divided region, and generates the image by decoding the bitstream using a motion vector generated from the set temporal correlation prediction vector.
  • An image processing apparatus according to this aspect includes a setting unit that, when the prediction unit region including the upper-left pixel position of the current divided region of the image is intra, selects either the motion vector of a different prediction unit region including an adjacent pixel position adjacent to that pixel position or the motion vector of another prediction unit region included in the current divided region and sets the selected motion vector as the motion vector corresponding to the current divided region, and an encoding unit that encodes the image using a prediction image predicted from a motion vector corresponding to the temporal correlation prediction vector set by the setting unit.
  • In one aspect of the present disclosure, the address is corrected so that the pixel position is pulled back into the region of the current decoding unit, a motion vector stored corresponding to the divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the bitstream is decoded using a motion vector generated from the read temporal correlation prediction vector to generate the image.
  • In another aspect, when the lower-right pixel position in the divided region of the image indicated by the address calculated in obtaining the temporal correlation prediction vector is outside the region of the current coding unit, the address is corrected so that the pixel position is pulled back into the region of the current coding unit, a motion vector stored corresponding to the divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the image is encoded using a prediction image predicted from a motion vector corresponding to the set temporal correlation prediction vector.
  • In a further aspect, when a first address calculated in obtaining the temporal correlation prediction vector and a second address, calculated for the case where the prediction unit region including the upper-left pixel position of the divided region indicated by the first address is intra, are the same, the second address is corrected so as to indicate an adjacent divided region adjacent to that divided region, a motion vector stored corresponding to the adjacent divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the image is generated by decoding the bitstream using a motion vector generated from the set temporal correlation prediction vector.
  • Likewise, when a first address calculated in obtaining the temporal correlation prediction vector and a second address, calculated for the case where the prediction unit region including the upper-left pixel position of the divided region indicated by the first address is intra, are the same, the second address is corrected so as to indicate an adjacent divided region adjacent to that divided region, a motion vector stored corresponding to the adjacent divided region indicated by the corrected address is set as the temporal correlation prediction vector, and the image is encoded using a prediction image predicted from a motion vector corresponding to the set temporal correlation prediction vector.
  • In a still further aspect, when the prediction unit region including the upper-left pixel position of the current divided region is intra, either the motion vector of the different prediction unit region including the adjacent pixel position adjacent to that pixel position or the motion vector of another prediction unit region included in the current divided region is selected, the selected motion vector is set as the motion vector corresponding to the current divided region, and the image is generated by decoding the bitstream using a motion vector generated from the set temporal correlation prediction vector.
  • Note that the above-described image processing apparatus may be an independent apparatus, or may be an internal block constituting one image encoding apparatus or image decoding apparatus.
  • According to the present disclosure, an image can be processed.
  • In particular, memory access can be reduced.
  • FIG. 20 is a block diagram illustrating a main configuration example of a computer. The figures that follow are block diagrams illustrating examples of schematic configurations of a television apparatus, a mobile telephone, a recording/reproducing apparatus, and an imaging apparatus, respectively.
  • FIG. 1 illustrates a configuration of an embodiment of an image encoding device as an image processing device to which the present disclosure is applied.
  • The image encoding device 100 includes an A/D (Analog/Digital) conversion unit 101, a screen rearrangement buffer 102, a calculation unit 103, an orthogonal transform unit 104, a quantization unit 105, a lossless encoding unit 106, and an accumulation buffer 107.
  • The image encoding device 100 also includes an inverse quantization unit 108, an inverse orthogonal transform unit 109, a calculation unit 110, a deblocking filter 111, a frame memory 112, a selection unit 113, an intra prediction unit 114, a motion prediction/compensation unit 115, a selection unit 116, and a rate control unit 117.
  • the image encoding device 100 further includes a motion vector difference generation unit 121 and a motion vector storage memory 122.
  • the A / D conversion unit 101 performs A / D conversion on the input image data, outputs it to the screen rearrangement buffer 102, and stores it.
  • The screen rearrangement buffer 102 rearranges the stored frame images from display order into the order of frames for encoding, in accordance with the GOP (Group of Pictures) structure.
  • the screen rearrangement buffer 102 supplies the image with the rearranged frame order to the arithmetic unit 103.
  • the screen rearrangement buffer 102 also supplies the image in which the order of the frames is rearranged to the intra prediction unit 114 and the motion prediction / compensation unit 115.
  • The calculation unit 103 subtracts the prediction image supplied from the intra prediction unit 114 or the motion prediction/compensation unit 115 via the selection unit 116 from the image read from the screen rearrangement buffer 102, and outputs the difference information to the orthogonal transform unit 104.
  • the calculation unit 103 subtracts the prediction image supplied from the intra prediction unit 114 from the image read from the screen rearrangement buffer 102.
  • the arithmetic unit 103 subtracts the predicted image supplied from the motion prediction / compensation unit 115 from the image read from the screen rearrangement buffer 102.
  • The orthogonal transform unit 104 performs orthogonal transform, such as discrete cosine transform or Karhunen-Loeve transform, on the difference information supplied from the calculation unit 103, and supplies the transform coefficients to the quantization unit 105.
  • the quantization unit 105 quantizes the transform coefficient output from the orthogonal transform unit 104.
  • the quantization unit 105 supplies the quantized transform coefficient to the lossless encoding unit 106.
  • the lossless encoding unit 106 performs lossless encoding such as variable length encoding and arithmetic encoding on the quantized transform coefficient.
  • the lossless encoding unit 106 acquires information indicating the intra prediction mode or information indicating the inter prediction mode, motion vector difference information, and the like from the motion vector difference generation unit 121. Although not shown, reference picture information and the like are acquired from the motion prediction / compensation unit 115.
  • The lossless encoding unit 106 encodes the quantized transform coefficients, and also multiplexes information such as the intra prediction mode information, inter prediction mode information, reference picture information, and motion vector difference information as part of the header information of the encoded data.
  • the lossless encoding unit 106 supplies the encoded data obtained by encoding to the accumulation buffer 107 for accumulation.
  • The lossless encoding unit 106 performs lossless encoding processing such as variable length encoding or arithmetic encoding.
  • An example of variable length coding is CAVLC (Context-Adaptive Variable Length Coding), and an example of arithmetic coding is CABAC (Context-Adaptive Binary Arithmetic Coding).
  • the transform coefficient quantized by the quantization unit 105 is also supplied to the inverse quantization unit 108.
  • the inverse quantization unit 108 inversely quantizes the quantized transform coefficient by a method corresponding to the quantization by the quantization unit 105.
  • the inverse quantization unit 108 supplies the obtained transform coefficient to the inverse orthogonal transform unit 109.
  • the inverse orthogonal transform unit 109 performs inverse orthogonal transform on the supplied transform coefficient by a method corresponding to the orthogonal transform processing by the orthogonal transform unit 104.
  • the inversely orthogonal transformed output (restored difference information) is supplied to the calculation unit 110.
  • The calculation unit 110 adds the prediction image supplied from the intra prediction unit 114 or the motion prediction/compensation unit 115 via the selection unit 116 to the inverse orthogonal transform result supplied from the inverse orthogonal transform unit 109, that is, to the restored difference information, to obtain a locally decoded image (decoded image).
  • the calculation unit 110 adds the prediction image supplied from the intra prediction unit 114 to the difference information.
  • the calculation unit 110 adds the predicted image supplied from the motion prediction / compensation unit 115 to the difference information.
  • the addition result is supplied to the deblock filter 111 and the frame memory 112.
  • the deblocking filter 111 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
  • the frame memory 112 outputs the stored reference image to the intra prediction unit 114 or the motion prediction / compensation unit 115 via the selection unit 113 at a predetermined timing.
  • the frame memory 112 supplies the reference image to the intra prediction unit 114 via the selection unit 113.
  • the frame memory 112 supplies the reference image to the motion prediction / compensation unit 115 via the selection unit 113.
  • the selection unit 113 supplies the reference image to the intra prediction unit 114 when the reference image supplied from the frame memory 112 is an image to be subjected to intra coding. Further, when the reference image supplied from the frame memory 112 is an image to be subjected to inter coding, the selection unit 113 supplies the reference image to the motion prediction / compensation unit 115.
  • the intra prediction unit 114 performs intra prediction (intra-screen prediction) that generates a predicted image using pixel values in the screen.
  • the intra prediction unit 114 performs intra prediction in a plurality of modes (intra prediction modes).
  • the intra prediction unit 114 generates predicted images in all intra prediction modes, obtains cost function values, evaluates each predicted image, and selects an optimal intra mode.
  • When selecting the optimal intra mode, the intra prediction unit 114 supplies the prediction image generated in the optimal intra mode to the selection unit 116, and supplies the cost function value of the optimal intra mode (hereinafter referred to as the intra cost function value) to the motion vector difference generation unit 121.
  • For an image to be inter-coded, the motion prediction/compensation unit 115 performs motion prediction in all inter prediction modes, using the input image supplied from the screen rearrangement buffer 102 and the reference image supplied from the frame memory 112 via the selection unit 113. The motion prediction/compensation unit 115 then performs motion compensation processing according to the detected motion vectors and generates a prediction image (inter prediction image information).
  • The motion prediction/compensation unit 115 obtains cost function values for all inter prediction modes, evaluates each prediction image, and selects the optimal inter mode. When selecting the optimal inter mode, the motion prediction/compensation unit 115 supplies the prediction image generated in the optimal inter mode to the selection unit 116. In addition, the motion prediction/compensation unit 115 supplies the cost function value of the optimal inter mode (hereinafter referred to as the inter cost function value), motion vector information, division information of the coding unit region (CU) to be processed, address information, size information, and the like to the motion vector difference generation unit 121.
  • the selection unit 116 supplies the output of the intra prediction unit 114 to the calculation unit 103 and the calculation unit 110 in the case of an image on which intra coding is performed, corresponding to the information from the motion vector difference generation unit 121.
  • the selection unit 116 supplies the output of the motion prediction / compensation unit 115 to the calculation unit 103 and the calculation unit 110 in the case of an image to be inter-coded in response to the information from the motion vector difference generation unit 121.
  • the rate control unit 117 controls the quantization operation rate of the quantization unit 105 based on the compressed image stored in the storage buffer 107 so that overflow or underflow does not occur.
  • the motion vector difference generation unit 121 uses the information from the motion prediction / compensation unit 115 to generate spatial correlation prediction vector information and temporal correlation prediction vector information from the motion vector information of the peripheral region around the target prediction region.
  • At that time, the motion vector difference generation unit 121 uses the motion vector information stored in the motion vector storage memory 122 to generate the temporal correlation prediction vector information.
  • Hereinafter, the spatial correlation prediction vector is also referred to as a spatial prediction vector, and the temporal correlation prediction vector as a temporal prediction vector, as appropriate.
  • the motion vector difference generation unit 121 generates motion vector difference information that is a difference between the generated prediction vector information and the motion vector information from the motion prediction / compensation unit 115. In addition, the motion vector difference generation unit 121 determines an optimum mode using the intra cost function value from the intra prediction unit 114 and the inter cost function value from the motion prediction / compensation unit 115. The motion vector difference generation unit 121 supplies the determined optimal mode information to the selection unit 116.
  • the motion vector difference generation unit 121 stores the motion vector information of the optimal mode in the motion vector storage memory 122.
  • the motion vector difference generation unit 121 supplies the generated motion vector difference information to the lossless encoding unit 106 together with the optimal mode information.
  • The motion vector storage memory 122 stores the motion vector information saved by the motion vector difference generation unit 121, which is used when the motion vector difference generation unit 121 generates a temporal correlation prediction vector.
  • In the H.264/AVC format, one macroblock can be divided into a plurality of motion compensation blocks, each having different motion information. That is, the H.264/AVC format defines a hierarchical structure of macroblocks and sub-macroblocks, whereas, for example, the HEVC (High Efficiency Video Coding) format defines a coding unit (CU), as shown in FIG. 2.
  • A CU is also called a Coding Tree Block (CTB), and is a region (a partial area of an image in picture units) serving as an encoding (decoding) processing unit that plays the same role as a macroblock in the H.264/AVC format.
  • While the macroblock is fixed at a size of 16×16 pixels, the size of the CU is not fixed and is specified in the image compression information of each sequence.
  • For example, the maximum size (LCU (Largest Coding Unit)) and the minimum size (SCU (Smallest Coding Unit)) of the CU are specified.
  • In the example of FIG. 2, the LCU size is 128 and the maximum hierarchical depth is 5. When the value of split_flag is "1", a CU of size 2N×2N is divided into CUs of size N×N one level below, as sketched below.
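  • As an illustration of this recursive division, the following sketch in C enumerates the CUs produced when split_flag-driven splitting is applied from the LCU downward. The read_split_flag stub stands in for parsing split_flag from the bitstream; it is an assumption of this sketch and not part of the description above.

    #include <stdio.h>

    /* Illustrative stub: a real decoder would parse split_flag from
       the bitstream; here it simply splits down to 8x8. */
    static int read_split_flag(int size) {
        return size > 8;
    }

    /* Recursively divide a 2Nx2N CU into four NxN CUs one level below
       while split_flag is "1". */
    static void process_cu(int x, int y, int size, int depth) {
        if (depth < 5 && read_split_flag(size)) {
            int n = size / 2;
            process_cu(x,     y,     n, depth + 1);
            process_cu(x + n, y,     n, depth + 1);
            process_cu(x,     y + n, n, depth + 1);
            process_cu(x + n, y + n, n, depth + 1);
        } else {
            printf("CU at (%4d,%4d), size %3dx%-3d, depth %d\n",
                   x, y, size, size, depth);
        }
    }

    int main(void) {
        process_cu(0, 0, 128, 0);   /* LCU size 128, maximum depth 5 */
        return 0;
    }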
  • Further, a CU is divided into prediction units (PUs), which are regions (partial areas of an image in picture units) serving as processing units of intra or inter prediction, and into transform units (TUs), which are regions (partial areas of an image in picture units) serving as processing units of orthogonal transform.
  • In the case of an encoding method in which a CU is defined and various processes are performed in units of the CU, as in the HEVC method above, the macroblock of the H.264/AVC format can be considered to correspond to the LCU. However, since the CU has a hierarchical structure as shown in FIG. 2, the size of the LCU in the highest hierarchy, for example 128×128 pixels, is generally set larger than the H.264/AVC macroblock.
  • The present disclosure is not limited to coding schemes that use CUs, PUs, TUs, and the like, as in the HEVC scheme, and can also be applied to an encoding method using macroblocks, as in the H.264/AVC format. That is, since both a unit and a block indicate a region serving as a processing unit, the following description uses the term "processing unit region" so as to cover both as appropriate: in the HEVC scheme the unit indicates such a region, and in the H.264/AVC format it is the block.
  • In the merge mode, a total of five vectors, the vector information of these four spatial positions and the vector information of the colocated picture (the ColPU vector information), are set as candidates for the target PU, and the vector that is optimal in the RD (rate-distortion) sense is selected from the candidates.
  • Hereinafter, the colocated picture is also referred to as the ColPic.
  • The ColPU is the PU at spatially the same position as the target PU in a previously encoded or decoded picture, and its vector information is referred to as the time vector information of the target PU.
  • Among the six vectors, the vector information of these five spatial positions and the vector information of the ColPU, the vector having the smallest difference (mvd) from the motion vector obtained by motion search becomes the predicted motion vector.
  • The predicted motion vector is hereinafter also referred to as a prediction vector, as appropriate.
  • By transmitting in the stream the difference and the mvp index (mvp_idx) indicating which of the six vectors is used, the decoding side can reconstruct the motion vector, as sketched below. Thereby, encoding performance can be improved.
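  • For illustration, the reconstruction on the decoding side can be sketched as follows. This is a minimal sketch in C; the candidate list, mvp_idx, and mvd are assumed to have been parsed from the stream, and the function name is illustrative, not taken from this description.

    typedef struct { int x, y; } MV;

    /* The decoder assembles the same ordered candidate list (five
       spatial vectors plus the ColPU vector) as the encoder, then
       adds the transmitted difference to the indexed candidate. */
    static MV reconstruct_mv(const MV candidates[6], int mvp_idx, MV mvd) {
        MV mv;
        mv.x = candidates[mvp_idx].x + mvd.x;   /* mv = mvp + mvd */
        mv.y = candidates[mvp_idx].y + mvd.y;
        return mv;
    }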
  • FIG. 5 is a diagram for explaining a ColPU determination method.
  • FIG. 5A shows the first processing.
  • When the upper-left 4×4-pixel PU shown in A of FIG. 5 is the target PU, the upper-left pixel position of the target PU is set to (0,0), the size 4 of the target PU is added to it to give the position (4,4), and the PU including the pixel at the position (0,0) obtained by rounding (4,4) by 16 is determined as the ColPU.
  • When the ColPU cannot be determined in this way, the process is performed as shown in B of FIG. 5.
  • Again the upper-left pixel position of the target PU is set to (0,0). Adding 2, which is half of the target PU size 4, to (0,0) gives (2,2); decrementing by 1 gives (1,1); and rounding (1,1) by 16 gives (0,0). The PU including this pixel is determined as the ColPU.
  • In other words, the ColPU described above is a PU including the upper-left pixel of a divided area obtained by dividing the screen into 16×16-pixel units, and the motion vector of that PU is used as the ColMV (see the sketch below).
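  • The two derivations of FIG. 5 can be sketched as follows. This is a sketch in C, under the assumption that "rounding by 16" means snapping to the top-left pixel of the containing 16×16 divided area by masking the low four bits; the function and parameter names are illustrative.

    typedef struct { int x, y; } Pos;

    /* A of FIG. 5: add the PU size to the top-left position, then
       round by 16; e.g. (0,0)+(4,4) -> (4,4) -> (0,0). */
    static Pos colpu_addr_bottom_right(int xP, int yP, int nW, int nH) {
        Pos p = { (xP + nW) & ~15, (yP + nH) & ~15 };
        return p;
    }

    /* B of FIG. 5: add half the PU size, decrement by 1, then round
       by 16; e.g. (0,0) -> (2,2) -> (1,1) -> (0,0). */
    static Pos colpu_addr_center(int xP, int yP, int nW, int nH) {
        Pos p = { (xP + nW / 2 - 1) & ~15, (yP + nH / 2 - 1) & ~15 };
        return p;
    }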
  • ColMV will be described in detail with reference to FIGS. 6 and 7 below.
  • In FIG. 6, pictures N, N+1, and N+2 are shown sequentially along the time axis t, and a memory in which vector information is stored is shown below each picture.
  • The vector information stored during the processing of picture N is read from the ColPU position, according to the ColPU determination method described above, at the time of processing picture N+1.
  • The read vector information is used as the ColMV in picture N+1.
  • In the memory, the motion vector of the PU including the upper-left pixel of each divided area obtained by dividing the picture into 16×16 pixels is stored, regardless of the PU size.
  • In FIG. 7, a part of a picture divided into 16×16 pixels is shown, and below it the vectors stored in the memory are shown conceptually.
  • The circle at the upper left of each 16×16-pixel divided area indicates a pixel.
  • The upper-left 16×16-pixel divided area consists of four PUs, Pl0_0, Pl0_1, Pl0_2, and Pl0_3; of these, the vector mv00(Pl0_0) of Pl0_0, the PU including the upper-left pixel of the divided area, is stored in the memory.
  • The 16×16-pixel divided area located below the upper-left divided area is composed of two PUs, Pl2_0 and Pl2_1; of these, the vector mv01(Pl2_0) of Pl2_0, the PU including the upper-left pixel of the divided area, is stored in the memory.
  • The 16×16-pixel divided area located to the right of the upper-left divided area is composed of a single PU, Pl1_0, and its vector mv10(Pl1_0), the vector of the PU including the upper-left pixel of the divided area, is stored in the memory.
  • The 16×16-pixel divided area located below the divided area of Pl1_0 is composed of two PUs, Pl3_0 and Pl3_1; of these, the vector mv11(Pl3_0) of Pl3_0, the PU including the upper-left pixel of the divided area, is stored in the memory.
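  • The storage rule of FIG. 7 amounts to keeping one vector per 16×16 divided area. A minimal sketch in C follows; mv_of_pu_at() stands in for a lookup of the PU covering a given pixel and is an assumption of this sketch.

    typedef struct { int x, y; } MV;

    /* Hypothetical lookup of the motion vector of the PU that covers
       pixel (px, py) in the just-coded picture. */
    extern MV mv_of_pu_at(int px, int py);

    /* One vector per 16x16 divided area: regardless of the PU
       partitioning, only the vector of the PU containing the
       upper-left pixel of each area is written to the memory. */
    static void store_col_vectors(MV *memory, int stride16,
                                  int pic_w, int pic_h) {
        for (int y = 0; y < pic_h; y += 16)
            for (int x = 0; x < pic_w; x += 16)
                memory[(y >> 4) * stride16 + (x >> 4)] = mv_of_pu_at(x, y);
    }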
  • In FIG. 8, a part of a picture divided into 16×16 pixels is shown for the case of a 32×32 LCU.
  • The hatched small squares represent the data (motion vector information) that is read out as ColPU information, and the white small squares represent the data that is not read out as ColPU information.
  • In view of this, the motion vector difference generation unit 121 illustrated in FIG. 9 performs control so as to change the data read position and read the data when the read position exceeds the boundary of the LCU.
  • FIG. 9 is a block diagram illustrating a configuration example of the motion vector difference generation unit.
  • The motion vector difference generation unit 121 is configured to include a motion vector difference generation control unit 131, a temporal prediction vector generation unit 132, an in-picture prediction vector generation unit 133, a peripheral motion vector storage unit 134, an optimal mode determination unit 135, and an inter/intra determination unit 136.
  • the motion prediction / compensation unit 115 supplies CU size information, CU partition information, CU address information, motion vector information, and inter cost function values to the motion vector difference generation control unit 131.
  • the motion vector difference generation control unit 131 gives a prediction vector generation instruction to the in-picture prediction vector generation unit 133 and the temporal prediction vector generation unit 132. At that time, the motion vector difference generation control unit 131 supplies CU / PU address information and CU / PU size information to the 16 ⁇ 16 address correction unit 142. The motion vector difference generation control unit 131 supplies the optimal mode determination unit 135 with the inter cost function value and the motion vector information from the motion prediction / compensation unit 115 to determine the optimal mode.
  • the temporal prediction vector generation unit 132 is a unit that generates a temporal correlation prediction vector.
  • the temporal prediction vector generation unit 132 is configured to include a 16 ⁇ 16 address generation unit 141, a 16 ⁇ 16 address correction unit 142, and a memory access control unit 143.
  • When instructed to generate a prediction vector, the 16×16 address generation unit 141 calculates the address of the 16×16 area to be referenced, and supplies the calculated 16×16 address information to the 16×16 address correction unit 142.
  • When a re-reading instruction is received from the inter/intra determination unit 136, the 16×16 address generation unit 141 calculates the address of the 16×16 area for the intra case, and supplies the calculated 16×16 address information to the 16×16 address correction unit 142.
  • the 16 ⁇ 16 address correction unit 142 is supplied with CU / PU address information and CU / PU size information from the motion vector difference generation control unit 131.
  • the 16 ⁇ 16 address correction unit 142 is supplied with 16 ⁇ 16 address information from the 16 ⁇ 16 address generation unit 141.
  • The 16×16 address correction unit 142 uses the supplied information to determine whether a reference using the 16×16 address information from the 16×16 address generation unit 141 is a reference beyond the CU. When the reference is beyond the CU, the 16×16 address correction unit 142 corrects the 16×16 address information so that the reference is pulled back into the CU, and supplies the corrected 16×16 address information to the memory access control unit 143, as sketched below. If the reference is not beyond the CU, the 16×16 address correction unit 142 supplies the 16×16 address information from the 16×16 address generation unit 141 to the memory access control unit 143 as it is.
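  • A sketch of this correction in C follows. The exact comparison and re-alignment are assumptions based on the description of expression (3) below, where the vertical position is decremented to pull the reference back inside the CU.

    typedef struct { int x, y; } Addr16;

    /* If the referenced lower-right pixel position lies below the
       current CU, pull it back: decrement the vertical position into
       the CU and re-align it to the 16x16 grid. */
    static Addr16 correct_16x16_addr(Addr16 a, int cu_y, int cu_h) {
        if (a.y >= cu_y + cu_h) {          /* reference beyond the CU */
            a.y = (cu_y + cu_h - 1) & ~15;
        }
        return a;
    }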
  • The memory access control unit 143 issues a request to the motion vector storage memory 122 to read out the motion vector of the PU including the upper-left pixel position of the divided region corresponding to the 16×16 address information from the 16×16 address correction unit 142. That is, the memory access control unit 143 sets the motion vector of the PU including the upper-left pixel position of the divided region corresponding to the 16×16 address information from the 16×16 address correction unit 142 as the temporal correlation prediction vector, and requests that the set temporal correlation prediction vector be read out. In response to this request, motion vector information is supplied from the motion vector storage memory 122 to the inter/intra determination unit 136.
  • The in-picture prediction vector generation unit 133 reads the motion vector information stored in the peripheral motion vector storage unit 134 under the control of the motion vector difference generation control unit 131, and generates a spatial prediction vector.
  • The in-picture prediction vector generation unit 133 supplies the spatial prediction vector information to the optimal mode determination unit 135.
  • the peripheral motion vector storage unit 134 includes a memory and stores motion vector information in the processing target picture.
  • the optimal mode determination unit 135 determines whether or not the spatial prediction vector from the in-picture prediction vector generation unit 133 and the temporal prediction vector from the inter / intra determination unit 136 can be referred to.
  • the optimum mode determination unit 135 determines a mode in which the generated bit amount of the difference from the motion vector information calculated by the motion prediction / compensation unit 115 is the smallest among the referenceable motion vectors.
  • In addition, the inter cost function value and the motion vector information are supplied from the motion vector difference generation control unit 131 to the optimal mode determination unit 135.
  • The intra cost function value is supplied from the intra prediction unit 114 to the optimal mode determination unit 135.
  • the optimum mode determination unit 135 also uses these cost function values to determine whether the region to be processed is encoded as an intra region or an inter region.
  • the optimal mode determination unit 135 supplies the determined optimal mode information and motion vector difference information to the lossless encoding unit 106.
  • the optimal mode determination unit 135 also stores motion vector information corresponding to the optimal mode in the motion vector storage memory 122. Although not shown, the optimum mode determination unit 135 also causes the peripheral motion vector storage unit 134 to store the motion vector information.
  • When the motion vector information read from the motion vector storage memory 122 does not indicate an intra region, the inter/intra determination unit 136 supplies the motion vector information to the optimal mode determination unit 135 as temporal prediction vector information.
  • When it does indicate an intra region, the inter/intra determination unit 136 instructs the 16×16 address generation unit 141 to re-read. This re-reading instruction is performed only once.
  • FIG. 10 is a diagram illustrating the ColPU determination method in the HEVC scheme, which corresponds to the ColPU determination method described above with reference to FIG. 5.
  • xPRb in expression (1) shown in the example of FIG. 10 indicates the horizontal pixel position at the lower right of the PU.
  • yPRb in expression (2) indicates the vertical pixel position at the lower right of the PU.
  • Here, a 32×32 LCU is shown.
  • The hatched area located at the lower left is the PU (16×16) to be processed.
  • In this case, the PU that includes the pixel touching the lower right of the processing target PU becomes the ColPU.
  • Therefore, the 16×16 address correction unit 142 decrements the vertical pixel position by 1, as shown in expression (3) described above.
  • As a result, the pixel becomes a pixel included in the PU adjacent to the right of the processing target PU, that is, within the same LCU.
  • That is, the PU to the right of the processing target PU, which is the PU including that upper-left pixel, becomes the ColPU, and a reference relationship within the same LCU can be obtained.
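  • Expressions (1) to (3) themselves are not reproduced above. Based on the surrounding description, and assuming the usual HEVC-style derivation (an assumption, since the original expressions are not shown here), they plausibly take the following form, where (xP, yP) is the upper-left pixel position of the PU and nPSW × nPSH is its size:

    xPRb = xP + nPSW          ... (1)
    yPRb = yP + nPSH          ... (2)
    yPRb = yPRb - 1           ... (3)

  • Expression (3) is the vertical decrement that keeps the lower-right reference position inside the LCU, as described above.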
  • step S101 the A / D converter 101 performs A / D conversion on the input image.
  • step S102 the screen rearrangement buffer 102 stores the A / D converted image, and rearranges the picture from the display order to the encoding order.
  • step S103 the calculation unit 103 calculates the difference between the image rearranged by the process in step S102 and the predicted image.
  • the predicted image is supplied from the motion prediction / compensation unit 115 in the case of inter prediction and from the intra prediction unit 114 in the case of intra prediction to the calculation unit 103 via the selection unit 116.
  • As a result, the amount of the difference data is reduced compared with the original image data, so the data amount can be compressed compared with encoding the image as it is.
  • In step S104, the orthogonal transform unit 104 orthogonally transforms the difference information generated by the process in step S103. Specifically, orthogonal transform such as discrete cosine transform or Karhunen-Loeve transform is performed, and transform coefficients are output.
  • step S107 the inverse orthogonal transform unit 109 performs inverse orthogonal transform on the orthogonal transform coefficient obtained by the process in step S106 with characteristics corresponding to the characteristics of the orthogonal transform unit 104.
  • step S108 the calculation unit 110 adds the predicted image to the locally decoded difference information, and generates a locally decoded image (an image corresponding to the input to the calculation unit 103).
  • step S109 the deblock filter 111 performs a deblock filter process on the image generated by the process in step S108. As a result, block distortion (that is, area distortion of a processing unit) is removed.
  • step S110 the frame memory 112 stores an image from which block distortion has been removed by the process in step S109. It should be noted that an image that has not been filtered by the deblocking filter 111 is also supplied from the computing unit 110 and stored in the frame memory 112.
  • The intra prediction unit 114 performs intra prediction processing in the intra prediction modes. That is, the intra prediction unit 114 generates predicted images in all intra prediction modes, obtains the intra cost function values, evaluates each predicted image, and selects the optimal intra mode. When selecting the optimal intra mode, the intra prediction unit 114 supplies the prediction image generated in the optimal intra mode to the selection unit 116, and supplies the intra cost function value of the optimal intra mode to the motion vector difference generation unit 121.
  • The motion prediction/compensation unit 115 performs inter motion prediction processing, in which motion prediction and motion compensation are performed in the inter prediction modes. That is, the motion prediction/compensation unit 115 uses the input image supplied from the screen rearrangement buffer 102 and the reference image supplied from the frame memory 112 via the selection unit 113 to perform motion prediction in all inter prediction modes. The motion prediction/compensation unit 115 then performs motion compensation processing according to the detected motion vectors, generates predicted images, obtains the inter cost function values, evaluates each predicted image, and selects the optimal inter mode.
  • the motion prediction / compensation unit 115 supplies the prediction image generated in the optimal inter mode to the selection unit 116.
  • the motion prediction / compensation unit 115 supplies the inter-cost function value, motion vector information, information on the target region (CU), and the like of the optimal inter mode to the motion vector difference generation unit 121.
  • In step S113, the motion vector difference generation unit 121 uses the information from the motion prediction/compensation unit 115 to perform a motion vector difference generation process. Details of the motion vector difference generation process will be described later with reference to FIG. 13.
  • step S113 motion vector difference information is generated, and an optimal mode is determined using the intra cost function value from the intra prediction unit 114 and the inter cost function value from the motion prediction / compensation unit 115.
  • the motion vector difference generation unit 121 supplies the generated motion vector difference information to the lossless encoding unit 106 together with the optimal mode information. Further, the motion vector difference generation unit 121 supplies information on the determined optimum mode to the selection unit 116.
  • the motion prediction / compensation unit 115 transmits a reference image index or the like to the lossless encoding unit 106.
  • step S114 the selection unit 116 selects the prediction image in the optimal mode based on the information supplied from the motion vector difference generation unit 121. That is, the selection unit 116 selects either the prediction image generated by the intra prediction unit 114 or the prediction image generated by the motion prediction / compensation unit 115.
  • step S115 the lossless encoding unit 106 encodes the transform coefficient quantized by the process in step S105. That is, lossless encoding such as variable length encoding or arithmetic encoding is performed on the difference image (secondary difference image in the case of inter).
  • the lossless encoding unit 106 encodes information regarding the prediction mode of the prediction image selected by the process of step S114, and adds the encoded information to the encoded data obtained by encoding the difference image. That is, the lossless encoding unit 106 also encodes mode information, motion vector difference information, and the like supplied from the motion vector difference generation unit 121 and adds them to the encoded data.
  • step S116 the accumulation buffer 107 accumulates the encoded data output from the lossless encoding unit 106.
  • the encoded data stored in the storage buffer 107 is appropriately read out and transmitted to the decoding side via the transmission path.
  • In step S117, the rate control unit 117 controls the rate of the quantization operation of the quantization unit 105, based on the compressed image accumulated in the accumulation buffer 107 by the process in step S116, so that overflow or underflow does not occur.
  • step S117 ends, the encoding process ends.
  • CU size information, CU partition information, CU address information, motion vector information, inter cost function values, and the like are supplied from the motion prediction/compensation unit 115 to the motion vector difference generation control unit 131.
  • the motion vector difference generation control unit 131 gives a vector generation instruction to the in-picture prediction vector generation unit 133 and the temporal prediction vector generation unit 132.
  • In response, the in-picture prediction vector generation unit 133 generates a spatial correlation prediction vector in step S131.
  • That is, the in-picture prediction vector generation unit 133 reads the motion vector information stored in the peripheral motion vector storage unit 134 and generates a spatial prediction vector.
  • The in-picture prediction vector generation unit 133 supplies the spatial prediction vector information to the optimal mode determination unit 135.
  • In step S132, the temporal prediction vector generation unit 132 generates a temporal correlation prediction vector. Details of this temporal correlation prediction vector generation process will be described later with reference to FIG. 14; by the process in step S132, temporal prediction vector information is supplied to the optimal mode determination unit 135.
  • In addition, the inter cost function value and the motion vector information are supplied from the motion vector difference generation control unit 131 to the optimal mode determination unit 135.
  • The intra cost function value is supplied from the intra prediction unit 114 to the optimal mode determination unit 135.
  • In step S133, the optimal mode determination unit 135 determines whether none of the surrounding areas can be referred to. When at least one of the spatial prediction vector information and the temporal prediction vector information has been supplied, the optimal mode determination unit 135 determines that referable areas exist, and the process proceeds to step S134.
  • In step S134, when there are overlapping prediction vectors among the supplied spatial prediction vector information and temporal prediction vector information, the optimal mode determination unit 135 deletes them.
  • In step S135, the optimal mode determination unit 135 generates motion vector difference information. That is, for each piece of the supplied spatial prediction vector information and temporal prediction vector information, the optimal mode determination unit 135 generates motion vector difference information that is the difference between the motion vector information supplied from the motion vector difference generation control unit 131 and the prediction vector information.
  • In step S136, the optimal mode determination unit 135 determines the prediction vector giving the minimum motion vector difference, using the generated motion vector difference information.
  • On the other hand, when neither spatial prediction vector information nor temporal prediction vector information has been supplied, the optimal mode determination unit 135 determines that none of the surrounding areas can be referred to, and the process proceeds to step S137.
  • step S137 the optimum mode determination unit 135 generates motion vector difference information using 0 as a prediction vector.
  • step S138 the optimum mode determination unit 135 determines the optimum mode. That is, the optimal mode determination unit 135 determines an optimal mode by comparing the inter cost function value from the motion vector difference generation control unit 131 and the intra cost function value from the intra prediction unit 114.
  • When the optimal mode is determined, the optimal mode determination unit 135 stores the motion vector information in the motion vector storage memory 122 in step S139. Specifically, as described above with reference to FIG. 7, the optimal mode determination unit 135 saves, for each 16×16 divided area, the motion vector information of the PU including the upper-left pixel of the divided area in the motion vector storage memory 122. This motion vector information is also stored in the peripheral motion vector storage unit 134.
  • In step S140, the optimal mode determination unit 135 supplies the optimal mode information to the lossless encoding unit 106. When the optimal mode is the inter prediction mode, the motion vector difference information for the optimal mode and the index (mvp_idx) of the minimum prediction vector determined in step S136 are also supplied to the lossless encoding unit 106.
  • Note that the process of FIG. 13 is an example for the case of obtaining the predicted motion vector mvp. In the case of the merge mode, the processes of steps S135 and S136 are excluded from the process of FIG. 13, and the optimum one of the inter, intra, and merge modes is determined when determining the optimal mode in step S138. In this case, the merge index (merge_idx) is supplied to the lossless encoding unit 106 together with the optimal mode information.
  • Since the temporal correlation prediction vector generation process in step S132 and the motion vector storage process in step S139 perform basically the same processing in either case, duplicated description is omitted.
  • the 16 ⁇ 16 address generation unit 141 is supplied with a prediction vector generation instruction from the motion vector difference generation control unit 131. In step S141, the 16 ⁇ 16 address generation unit 141 calculates an address of a 16 ⁇ 16 area to be referred to. The 16 ⁇ 16 address generation unit 141 supplies the calculated 16 ⁇ 16 address information to the 16 ⁇ 16 address correction unit 142.
  • the 16 ⁇ 16 address correction unit 142 is further supplied with CU / PU address information and CU / PU size information from the motion vector difference generation control unit 131.
  • In step S142, the 16×16 address correction unit 142 uses the supplied information to determine whether a reference using the 16×16 address information from the 16×16 address generation unit 141 is a reference beyond the CU. That is, in step S142, it is determined whether or not the lower-right pixel position of the divided area indicated by the 16×16 address information exceeds the boundary of the processing target CU.
  • If it is determined that the reference is beyond the CU, in step S143 the 16×16 address correction unit 142 corrects the 16×16 address information so that the reference is pulled back into the CU. Specifically, the 16×16 address correction unit 142 pulls the 16×16 address information back in the vertical direction using expression (3).
  • the 16 ⁇ 16 address correcting unit 142 supplies the 16 ⁇ 16 address information corrected as a result of the pull back to the memory access control unit 143.
  • step S142 If it is determined in step S142 that the reference is not beyond the CU, the process in step S143 is skipped. That is, the 16 ⁇ 16 address correction unit 142 supplies the 16 ⁇ 16 address information from the 16 ⁇ 16 address generation unit 141 to the memory access control unit 143.
  • the memory access control unit 143 requests the motion vector storage memory 122 to read the motion vector indicated by the 16 ⁇ 16 address information from the 16 ⁇ 16 address correction unit 142. That is, the memory access control unit 143 sets the motion vector indicated by the 16 ⁇ 16 address information from the address correction unit 142 as a temporal correlation prediction vector, and makes a request to read the set temporal correlation prediction vector.
  • step S144 the motion vector storage memory 122 reads the motion vector information in response to the request from the memory access control unit 143, and supplies the motion vector information to the inter / intra determination unit 136.
  • In step S145, the inter/intra determination unit 136 determines whether or not the motion vector information read from the motion vector storage memory 122 indicates an intra region.
  • If it indicates an intra region, in step S146 the 16×16 address generation unit 141 calculates the address of the 16×16 area for the intra case.
  • the 16 ⁇ 16 address generation unit 141 supplies the calculated 16 ⁇ 16 address information to the 16 ⁇ 16 address correction unit 142.
  • In step S147, the 16×16 address correction unit 142 uses the supplied information to determine whether a reference using the 16×16 address information from the 16×16 address generation unit 141 is a reference beyond the CU.
  • If it is determined that the reference is beyond the CU, the 16×16 address correction unit 142 corrects the 16×16 address information in step S148 so that the reference is pulled back into the CU. Specifically, the 16×16 address correction unit 142 pulls the 16×16 address information back in the vertical direction using expression (3).
  • the 16 ⁇ 16 address correcting unit 142 supplies the 16 ⁇ 16 address information corrected as a result of the pull back to the memory access control unit 143.
  • step S147 If it is determined in step S147 that the reference does not exceed the CU, the process in step S148 is skipped. That is, the 16 ⁇ 16 address correction unit 142 supplies the 16 ⁇ 16 address information from the 16 ⁇ 16 address generation unit 141 to the memory access control unit 143.
  • the memory access control unit 143 requests the motion vector storage memory 122 to read the motion vector indicated by the 16 ⁇ 16 address information from the 16 ⁇ 16 address correction unit 142. That is, the memory access control unit 143 sets the motion vector indicated by the 16 ⁇ 16 address information from the address correction unit 142 as a temporal correlation prediction vector, and makes a request to read the set temporal correlation prediction vector.
  • In step S149, the motion vector storage memory 122 reads the motion vector information in response to the request from the memory access control unit 143 and supplies it to the inter/intra determination unit 136.
  • the inter / intra determination unit 136 supplies the motion vector information from the motion vector storage memory 122 to the optimum mode determination unit 135 as temporal prediction vector information.
  • On the other hand, if it is determined in step S145 that the read motion vector information does not indicate an intra region, steps S146 to S149 are skipped. That is, the motion vector information read in step S144 is supplied to the optimal mode determination unit 135 as temporal prediction vector information. The overall flow is sketched below.
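  • Putting steps S141 to S149 together, the flow can be sketched as follows in C. The helper names are illustrative and not taken from this description, and the one-time limit on re-reading mirrors the behavior of the inter/intra determination unit 136.

    typedef struct { int x, y; } Addr16;
    typedef struct { int x, y; int is_intra; } StoredMV;

    extern Addr16 gen_16x16_addr(int intra_case);   /* S141 / S146 */
    extern Addr16 correct_if_beyond_cu(Addr16 a);   /* S142-S143 / S147-S148 */
    extern StoredMV read_stored_mv(Addr16 a);       /* S144 / S149 */

    static StoredMV temporal_prediction_vector(void) {
        Addr16 a = correct_if_beyond_cu(gen_16x16_addr(0));
        StoredMV mv = read_stored_mv(a);     /* first read (S144) */
        if (mv.is_intra) {                   /* S145: intra region? */
            /* re-read exactly once, with the address for the intra case */
            a = correct_if_beyond_cu(gen_16x16_addr(1));
            mv = read_stored_mv(a);          /* second, final read (S149) */
        }
        return mv;                           /* temporal prediction vector */
    }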
  • As described above, in the configuration of FIG. 9, the address is corrected so that the reference is pulled back into the LCU.
  • As a result, the LCU is not straddled in the vertical direction, so the Col vector information does not need to be re-read across LCUs.
  • In some cases, however, both the first and the second processing point to the same PU.
  • In addition, when the PU size is 8×8 or less, the ColPU cannot be used for a PU that does not touch the right edge or the bottom edge of the 16×16-pixel division.
  • In that case, the vector (ColMV) is not a candidate for the merge mode described above with reference to FIG. 3 or for the prediction vector mvp described above, so the degree of freedom in selecting a vector is reduced.
  • the motion vector difference generation unit 121 shown in FIG. 15 controls to read the data by changing the data reading position.
  • FIG. 15 is a block diagram illustrating another configuration example of the motion vector difference generation unit.
  • The motion vector difference generation unit 121 in FIG. 15 is common with the motion vector difference generation unit 121 in FIG. 9 in that it includes the in-picture prediction vector generation unit 133, the peripheral motion vector storage unit 134, the optimum mode determination unit 135, and the inter / intra determination unit 136. A description of the common parts is omitted to avoid repetition.
  • The motion vector difference generation unit 121 in FIG. 15 differs from the motion vector difference generation unit 121 in FIG. 9 in that the motion vector difference generation control unit 131 is replaced with a motion vector difference generation control unit 151 and the temporal prediction vector generation unit 132 is replaced with a temporal prediction vector generation unit 152.
  • The motion vector difference generation control unit 151 gives a prediction vector generation instruction to the in-picture prediction vector generation unit 133 and the temporal prediction vector generation unit 152. In addition, the motion vector difference generation control unit 151 supplies the inter cost function value and the motion vector information from the motion prediction / compensation unit 115 to the optimum mode determination unit 135 to determine the optimum mode.
  • Unlike the motion vector difference generation control unit 131, however, the motion vector difference generation control unit 151 does not supply CU or PU information to the 16 × 16 address correction unit 162.
  • the temporal prediction vector generation unit 152 is common to the temporal prediction vector generation unit 132 of FIG. 9 in that it includes a memory access control unit 143.
  • The temporal prediction vector generation unit 152 differs from the temporal prediction vector generation unit 132 in FIG. 9 in that the 16 × 16 address generation unit 141 is replaced with a 16 × 16 address generation unit 161 and the 16 × 16 address correction unit 142 is replaced with a 16 × 16 address correction unit 162.
  • The 16 × 16 address generation unit 161 calculates the address of the 16 × 16 area to be referenced and supplies the calculated 16 × 16 address information to the 16 × 16 address correction unit 162.
  • When there is a re-reading instruction from the inter / intra determination unit 136, the 16 × 16 address generation unit 161 calculates the address of the 16 × 16 area for the intra case.
  • The 16 × 16 address generation unit 161 then determines whether the address calculated the first time and the address calculated the second time (for the intra case) are the same, and if so, generates a match flag indicating that the addresses are the same.
  • The 16 × 16 address generation unit 161 supplies the generated match flag to the 16 × 16 address correction unit 162.
  • The 16 × 16 address correction unit 162 normally supplies the 16 × 16 address information from the 16 × 16 address generation unit 161 to the memory access control unit 143. However, when it receives the match flag from the 16 × 16 address generation unit 161, the 16 × 16 address correction unit 162 corrects the second 16 × 16 address so that it differs from the first one, and supplies the corrected 16 × 16 address information to the memory access control unit 143.
  • FIG. 16 is a diagram showing a picture in the case of a 4 × 4 PU.
  • The lower right pixel position (xPtr, yPtr) of the processing target PU can be expressed by the following Expression (4), where (xP, yP) is the upper left pixel position of the PU and nPSW and nPSH are its width and height:

    xPtr = xP + nPSW - 1
    yPtr = yP + nPSH - 1   ... (4)
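  • As a numerical illustration (the values here are assumed for this example only): for a 4 × 4 PU whose upper left pixel is at (xP, yP) = (24, 8), Expression (4) gives (xPtr, yPtr) = (27, 11), and rounding each component down to a multiple of 16 then selects the 16 × 16 region whose address is (16, 0).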
  • The hatched pixel positions in FIG. 16 (hereinafter referred to as hatch positions) are the positions for which the first and second processes in the ColPU calculation method described above point to the same PU.
  • When the processing target PU is at a hatch position, the 16 × 16 address correction unit 162 corrects the address so as to indicate the 16 × 16 area to the right of the 16 × 16 area to which the processing target PU belongs.
  • When that 16 × 16 area on the right is outside the screen, the 16 × 16 address correcting unit 162 instead corrects the address so as to indicate the 16 × 16 area adjacent to the left of the 16 × 16 area to which the processing target PU belongs.
  • For example, the pixel position P1 shown in FIG. 17 has a remainder of 11 when divided by 16, and the conditional expression holds for both the x and y components of Expression (4). Therefore, the 16 × 16 address correcting unit 162 adds +16 only in the horizontal direction of the pixel position P1, moving it to the pixel position P2 in the 16 × 16 region adjacent on the right. Thereafter, the pixel position P2 becomes the hatched pixel position P3 by the rounding process at 16.
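  • In C the horizontal correction can be sketched as follows; the function name and the pic_width parameter are assumptions for illustration, not identifiers from this disclosure.

    /* Shift a hatch-position x coordinate into the neighbouring 16x16
       region and round down to the 16-pixel grid (e.g. 27 -> 43 -> 32). */
    static int correct_hatch_x(int xPtr, int pic_width)
    {
        int x = xPtr + 16;         /* 16x16 region adjacent on the right */
        if (x >= pic_width)        /* right neighbour is outside the screen */
            x = xPtr - 16;         /* use the region adjacent on the left */
        return x & ~15;            /* rounding process at 16 */
    }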
  • The 16 × 16 address generation unit 161 is supplied with a prediction vector generation instruction from the motion vector difference generation control unit 151.
  • In step S151, the 16 × 16 address generation unit 161 calculates the address of the 16 × 16 area to be referred to.
  • The 16 × 16 address generation unit 161 supplies the calculated 16 × 16 address information to the 16 × 16 address correction unit 162.
  • The 16 × 16 address correction unit 162 supplies the 16 × 16 address information from the 16 × 16 address generation unit 161 to the memory access control unit 143.
  • The memory access control unit 143 requests the motion vector storage memory 122 to read the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 162, that is, it sets that motion vector as the temporal correlation prediction vector and issues a read request for it.
  • In step S152, the motion vector storage memory 122 reads the motion vector information in response to the request from the memory access control unit 143 and supplies it to the inter / intra determination unit 136.
  • In step S153, the inter / intra determination unit 136 determines whether the read motion vector information indicates intra. When it does, the inter / intra determination unit 136 instructs the 16 × 16 address generation unit 161 to re-read. This re-reading instruction is issued only once.
  • In step S154, the 16 × 16 address generation unit 161 calculates the address of the 16 × 16 area for the intra case.
  • In step S155, the 16 × 16 address generation unit 161 determines whether the address calculated the first time is the same as the address calculated the second time (for the intra case).
  • If the 16 × 16 address generation unit 161 determines in step S155 that the addresses are the same, it generates a match flag indicating that the addresses are the same and supplies the generated match flag to the 16 × 16 address correction unit 162.
  • In that case, in step S156 the 16 × 16 address correcting unit 162 corrects the second 16 × 16 address so that it differs from the first address. That is, the 16 × 16 address correction unit 162 corrects the 16 × 16 address by adding +16 in the horizontal direction of the first 16 × 16 address so as to indicate the region to the right of the region to which that address belongs.
  • The 16 × 16 address correction unit 162 supplies the corrected 16 × 16 address information to the memory access control unit 143.
  • If it is determined in step S155 that the addresses are different, no match flag is generated and step S156 is skipped. That is, the 16 × 16 address correction unit 162 supplies the memory access control unit 143 with the second 16 × 16 address, which already differs from the first 16 × 16 address.
  • The memory access control unit 143 requests the motion vector storage memory 122 to read the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 162, that is, it sets that motion vector as the temporal correlation prediction vector and issues a read request for it.
  • In step S157, the motion vector storage memory 122 reads the motion vector information in response to the request from the memory access control unit 143 and supplies it to the inter / intra determination unit 136.
  • Then, the inter / intra determination unit 136 supplies the motion vector information from the motion vector storage memory 122 to the optimum mode determination unit 135 as temporal prediction vector information.
  • On the other hand, if it is determined in step S153 that the read motion vector information does not indicate intra, steps S154 to S157 are skipped. That is, the motion vector information read in step S152 is supplied to the optimum mode determination unit 135 as temporal prediction vector information.
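  • The flow of steps S151 to S157 can be summarized by the following sketch; the helper functions, the MvInfo type, and the treatment of the address as a single scalar with a +16 horizontal shift are hypothetical stand-ins for the units described above, not identifiers from this disclosure.

    typedef struct { int is_intra; int mv_x, mv_y; } MvInfo;

    extern MvInfo read_mv_memory(int addr16);             /* motion vector storage memory */
    extern int    calc_ref_addr(const void *pu);          /* first address, step S151 */
    extern int    calc_intra_case_addr(const void *pu);   /* second address, step S154 */

    MvInfo read_col_vector(const void *pu)
    {
        int first = calc_ref_addr(pu);
        MvInfo mv = read_mv_memory(first);           /* steps S151-S152 */
        if (!mv.is_intra)
            return mv;                               /* steps S154-S157 skipped */
        int second = calc_intra_case_addr(pu);       /* step S154 */
        if (second == first)                         /* match flag, step S155 */
            second = first + 16;                     /* +16 horizontally, step S156 */
        return read_mv_memory(second);               /* step S157; re-read happens once */
    }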
  • Next, the viewpoint is changed to the motion vector storage side. That is, when the motion vector difference generation unit 121 illustrated in FIG. 19 stores a motion vector and the processing target region is intra predicted, it stores an available motion vector taken from a predetermined nearby position.
  • FIG. 19 is a block diagram illustrating still another configuration example of the motion vector difference generation unit.
  • The motion vector difference generation unit 121 in FIG. 19 is common with the motion vector difference generation unit 121 in FIG. 9 in that it includes the in-picture prediction vector generation unit 133 and the peripheral motion vector storage unit 134. A description of the common parts is omitted to avoid repetition.
  • The motion vector difference generation unit 121 in FIG. 19 differs from the motion vector difference generation unit 121 in FIG. 9 in that the motion vector difference generation control unit 131 is replaced with a motion vector difference generation control unit 171 and the temporal prediction vector generation unit 132 is replaced with a temporal prediction vector generation unit 172.
  • The motion vector difference generation unit 121 in FIG. 19 also differs from the motion vector difference generation unit 121 in FIG. 9 in that the optimum mode determination unit 135 is replaced with an optimum mode determination unit 173.
  • Furthermore, the motion vector difference generation unit 121 in FIG. 19 differs from the motion vector difference generation unit 121 in FIG. 9 in that the inter / intra determination unit 136 is removed and a storage vector selection unit 174 is added.
  • The motion vector difference generation control unit 171 gives a prediction vector generation instruction to the in-picture prediction vector generation unit 133 and the temporal prediction vector generation unit 172. In addition, the motion vector difference generation control unit 171 supplies the inter cost function value and the motion vector information from the motion prediction / compensation unit 115 to the optimum mode determination unit 173 to determine the optimum mode.
  • the temporal prediction vector generation unit 172 is common to the temporal prediction vector generation unit 132 of FIG. 9 in that it includes a memory access control unit 143.
  • The temporal prediction vector generation unit 172 differs from the temporal prediction vector generation unit 132 in FIG. 9 in that the 16 × 16 address generation unit 141 is replaced with a 16 × 16 address generation unit 181 and the 16 × 16 address correction unit 142 is removed.
  • The 16 × 16 address generation unit 181 calculates the address of the 16 × 16 area to be referenced under the control of the motion vector difference generation control unit 171 and supplies the calculated 16 × 16 address information directly to the memory access control unit 143.
  • the optimal mode determination unit 173 determines whether the spatial prediction vector from the in-picture prediction vector generation unit 133 and the temporal prediction vector from the motion vector storage memory 122 can be referred to.
  • Among the referenceable motion vectors, the optimum mode determination unit 173 determines the mode in which the generated bit amount of the difference from the motion vector information calculated by the motion prediction / compensation unit 115 is smallest.
  • the inter-cost function value and the motion vector information are supplied from the motion vector difference generation control unit 171 to the optimum mode determination unit 173.
  • An intra cost function value is supplied from the intra prediction unit 114 to the optimal mode determination unit 173.
  • the optimal mode determination unit 173 also uses these cost function values to determine whether the region to be processed is encoded as an intra region or an inter region.
  • the optimal mode determination unit 173 supplies the determined optimal mode information and motion vector difference information to the lossless encoding unit 106. Further, the optimum mode determination unit 173 supplies the motion vector information and the intra area information to the storage vector selection unit 174.
  • The storage vector selection unit 174 stores the motion vector information of the processing target region in the motion vector storage memory 122. At that time, the storage vector selection unit 174 refers to the motion vector information and intra region information from the optimum mode determination unit 173, the vectors and mode information of the surrounding regions from the peripheral motion vector storage unit 134, and the like.
  • the storage vector selection unit 174 causes the peripheral motion vector storage unit 134 to store the motion vector information and the mode information.
  • In FIG. 20, a part of a screen divided in units of 16 × 16 pixels is shown.
  • In the motion vector storage memory 122, a motion vector is stored for each 16 × 16 divided region.
  • The lower right 16 × 16 region surrounded by a thick frame is the motion vector storage processing target region (divided region), and a pixel position P is shown at the upper left position of the processing target region.
  • Also shown are a pixel position A, a pixel position B, and a pixel position C adjacent to the pixel position P.
  • The pixel position A adjacent to the left of the pixel position P is the upper right pixel position of the 16 × 16 area adjacent to the left of the processing target area.
  • The pixel position B in contact with the upper left of the pixel position P is the lower right pixel position of the 16 × 16 area at the upper left of the processing target area.
  • The pixel position C adjacent above the pixel position P is the lower left pixel position of the 16 × 16 area above the processing target area.
  • Basically, the storage vector selection unit 174 stores the vector information of the PU including the pixel position P at the upper left of the 16 × 16 area in the motion vector storage memory 122, similarly to the processing described above with reference to FIG. 7. However, if that PU is an intra region and the motion vector information of a PU including one of the surrounding pixel positions is available, the storage vector selection unit 174 stores that motion vector information in the motion vector storage memory 122 as the vector information of the processing target 16 × 16 region.
  • Specifically, when the PU including the pixel position P at the upper left of the 16 × 16 region is intra, the storage vector selection unit 174 determines whether or not the motion vector information of the PU including the pixel position A is available. If it is available, the storage vector selection unit 174 sets that motion vector information as the vector information of the processing target 16 × 16 region and stores the set motion vector information.
  • If the motion vector information of the PU including the pixel position A is not available, the storage vector selection unit 174 determines whether or not the motion vector information of the PU including the pixel position B is available. If it is available, the storage vector selection unit 174 sets that motion vector information as the vector information of the processing target 16 × 16 region and stores the set motion vector information.
  • If the motion vector information of the PU including the pixel position B is not available, the storage vector selection unit 174 determines whether or not the motion vector information of the PU including the pixel position C is available. If it is available, the storage vector selection unit 174 sets that motion vector information as the vector information of the processing target 16 × 16 region and stores the set motion vector information.
  • If none of the motion vector information is available, the storage vector selection unit 174 sets intra as the vector information of the processing target 16 × 16 region and stores the information thus set.
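  • The selection just described amounts to a simple first-available fallback, sketched below in C with illustrative names (the PuVec type and the candidate ordering are assumptions for this sketch, not identifiers from this disclosure).

    /* Vector candidate for one PU; is_intra marks an unusable candidate. */
    typedef struct { int is_intra; int mv_x, mv_y; } PuVec;

    /* p: vector of the PU at pixel position P; cand: candidates in the
       order A, B, C. Returns the information to store for the region. */
    static PuVec select_store_vector(PuVec p, const PuVec cand[], int n)
    {
        if (!p.is_intra)
            return p;                   /* inter region: store its own vector */
        for (int i = 0; i < n; i++)
            if (!cand[i].is_intra)
                return cand[i];         /* first available neighbour wins */
        return p;                       /* nothing available: store "intra" */
    }

  • The same routine also covers the variant described next, which merely changes the candidate list; a usage example follows that description.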
  • In the example of FIG. 20, vector information outside the 16 × 16 area is used as the storage candidates, but vector information inside the 16 × 16 area, which is the motion vector storage unit, can also be used as storage candidates.
  • In FIG. 21, a 16 × 16 pixel divided region, which is the processing target region, is shown in the screen.
  • The processing target area is further divided into 8 × 8 pixel areas.
  • A pixel position A is shown at the upper left position of the 8 × 8 area at the upper left of the processing target area.
  • A pixel position B is shown at the upper left position of the 8 × 8 area at the upper right of the processing target area.
  • A pixel position C is shown at the upper left position of the 8 × 8 area at the lower left of the processing target area.
  • A pixel position D is shown at the upper left position of the 8 × 8 area at the lower right of the processing target area.
  • Basically, the storage vector selection unit 174 stores the vector information of the PU (PI0_0) including the upper left pixel position A of the 16 × 16 region in the motion vector storage memory 122, similarly to the processing described above with reference to FIG. 7.
  • However, if that PU is an intra region and the motion vector information of a PU including one of the other pixel positions is available, the storage vector selection unit 174 sets that motion vector information as the vector information of the processing target 16 × 16 region and stores the set motion vector information in the motion vector storage memory 122.
  • Specifically, when the PU including the pixel position A at the upper left of the 16 × 16 region is intra, the storage vector selection unit 174 determines whether or not the motion vector information of the PU (PI1_0) including the pixel position B is available. If it is available, the storage vector selection unit 174 sets that motion vector information as the vector information of the processing target 16 × 16 region and stores the set motion vector information.
  • If the motion vector information of the PU including the pixel position B is not available, the storage vector selection unit 174 determines whether or not the motion vector information of the PU (PI2_0) including the pixel position C is available. If it is available, the storage vector selection unit 174 sets that motion vector information as the vector information of the processing target 16 × 16 region and stores the set motion vector information.
  • If the motion vector information of the PU including the pixel position C is not available, the storage vector selection unit 174 determines whether or not the motion vector information of the PU (PI3_0) including the pixel position D is available. If it is available, the storage vector selection unit 174 sets that motion vector information as the vector information of the processing target 16 × 16 region and stores the set motion vector information.
  • If none of the motion vector information is available, the storage vector selection unit 174 sets intra as the vector information of the processing target 16 × 16 region and stores the information thus set.
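  • For this FIG. 21 variant the hypothetical routine above can simply be called with the interior candidates; puA through puD below stand for the (assumed) PuVec values of the PUs PI0_0 through PI3_0:

    PuVec interior[] = { puB, puC, puD };          /* PI1_0, PI2_0, PI3_0 */
    PuVec stored = select_store_vector(puA, interior, 3);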
  • FIG. 22 is a flowchart for explaining an example of the temporal correlation prediction vector generation process in step S132 of FIG. 13.
  • The 16 × 16 address generation unit 181 is supplied with a prediction vector generation instruction from the motion vector difference generation control unit 171. In step S161, the 16 × 16 address generation unit 181 calculates the address of the 16 × 16 area to be referred to. The 16 × 16 address generation unit 181 supplies the calculated 16 × 16 address information to the memory access control unit 143.
  • The memory access control unit 143 requests the motion vector storage memory 122 to read the motion vector indicated by the 16 × 16 address information from the 16 × 16 address generation unit 181.
  • In step S162, the motion vector storage memory 122 reads the motion vector information in response to the request from the memory access control unit 143 and supplies it to the optimum mode determination unit 173 as temporal correlation prediction vector information.
  • That is, when the motion vector is stored as described above with reference to FIGS. 20 and 21, a motion vector is stored even for an intra region.
  • Therefore, the process of reading the motion vector again can be omitted.
  • The conventional intra readout method needs to be used only when none of the motion vectors described above with reference to FIG. 20 or FIG. 21 is available and intra is stored.
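  • Compared with the read_col_vector sketch shown earlier, the read side then collapses to a single access; a minimal illustration with the same hypothetical helpers:

    typedef struct { int is_intra; int mv_x, mv_y; } MvInfo;

    extern MvInfo read_mv_memory(int addr16);
    extern int    calc_ref_addr(const void *pu);

    /* Steps S161-S162: one read, no intra retry, because the storage
       side (FIG. 20/21) already substituted an available vector. */
    MvInfo read_col_vector_simple(const void *pu)
    {
        return read_mv_memory(calc_ref_addr(pu));
    }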
  • FIG. 23 is a flowchart for explaining the flow of the storage process described above with reference to FIG. 20.
  • When the optimum mode of the processing target region is determined in step S138 of FIG. 13, the optimum mode determination unit 173 supplies the motion vector information and the intra region information of the optimum mode to the storage vector selection unit 174.
  • In step S171, the storage vector selection unit 174 determines, based on the intra region information from the optimum mode determination unit 173, whether or not the processing target region (the 16 × 16 region in the thick frame in FIG. 20) is inter. That is, in step S171, the storage vector selection unit 174 determines whether or not the PU including the pixel position P shown in FIG. 20 is inter. If it is determined in step S171 that the processing target region is inter, the process proceeds to step S172.
  • In step S172, the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position P as the motion vector information of the processing target region and stores the set motion vector information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If it is determined in step S171 that the processing target region is intra, the process proceeds to step S173.
  • The storage vector selection unit 174 obtains the motion vector information and mode information of the PU including the pixel position A shown in FIG. 20.
  • In step S173, the storage vector selection unit 174 determines whether or not the pixel position A is available, that is, whether or not the motion vector information of the PU including the pixel position A is available. If the mode information of the PU including the pixel position A is inter and it is determined in step S173 that the pixel position A is available, the process proceeds to step S174, where the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position A as the vector information of the processing target region and stores it in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If the mode information of the PU including the pixel position A is intra and it is determined in step S173 that the pixel position A is not available, the process proceeds to step S175.
  • The storage vector selection unit 174 obtains the motion vector information and mode information of the PU including the pixel position B shown in FIG. 20.
  • In step S175, the storage vector selection unit 174 determines whether or not the pixel position B is available, that is, whether or not the motion vector information of the PU including the pixel position B is available.
  • If the mode information of the PU including the pixel position B is inter and it is determined in step S175 that the pixel position B is available, the process proceeds to step S176.
  • In step S176, the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position B as the vector information of the processing target region and stores the set motion vector information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If the mode information of the PU including the pixel position B is intra and it is determined in step S175 that the pixel position B is not available, the process proceeds to step S177.
  • The storage vector selection unit 174 acquires the motion vector information and mode information of the PU including the pixel position C shown in FIG. 20.
  • In step S177, the storage vector selection unit 174 determines whether or not the pixel position C is available, that is, whether or not the motion vector information of the PU including the pixel position C is available.
  • If it is determined in step S177 that the pixel position C is available, in step S178 the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position C as the vector information of the processing target region and stores the set motion vector information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If the mode information of the PU including the pixel position C is intra and it is determined in step S177 that the pixel position C is not available, the process proceeds to step S179.
  • In step S179, the storage vector selection unit 174 sets intra as the vector information of the processing target region and stores the set information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • FIG. 24 is a flowchart for explaining the flow of the storage process described above with reference to FIG. 21.
  • When the optimum mode of the processing target region is determined in step S138 of FIG. 13, the optimum mode determination unit 173 supplies the motion vector information or the intra region information of the optimum mode to the storage vector selection unit 174.
  • In step S181, in order to store the motion vector of the processing target region (the 16 × 16 region shown in FIG. 21), the storage vector selection unit 174 determines, based on the motion vector information or the intra region information from the optimum mode determination unit 173, whether or not the pixel position A shown in FIG. 21 is available. That is, in step S181, the storage vector selection unit 174 determines whether or not the motion vector information of the PU including the pixel position A is available.
  • If it is determined in step S181 that the pixel position A is available, the process proceeds to step S182.
  • In step S182, the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position A (PI0_0 in FIG. 21) from the optimum mode determination unit 173 as the vector information of the processing target region and stores the set motion vector information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If it is determined in step S181 that the pixel position A is not available, in step S183 the storage vector selection unit 174 determines whether or not the pixel position B shown in FIG. 21 is available based on the motion vector information or the intra region information from the optimum mode determination unit 173. That is, in step S183, the storage vector selection unit 174 determines whether or not the motion vector information of the PU including the pixel position B is available.
  • If it is determined in step S183 that the pixel position B is available, the process proceeds to step S184.
  • In step S184, the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position B (PI1_0 in FIG. 21) from the optimum mode determination unit 173 as the vector information of the processing target region and stores the set motion vector information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If it is determined in step S183 that the pixel position B is not available, in step S185 the storage vector selection unit 174 determines whether or not the pixel position C shown in FIG. 21 is available based on the motion vector information or the intra region information from the optimum mode determination unit 173. That is, in step S185, the storage vector selection unit 174 determines whether or not the motion vector information of the PU including the pixel position C is available.
  • If it is determined in step S185 that the pixel position C is available, the process proceeds to step S186.
  • In step S186, the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position C (PI2_0 in FIG. 21) from the optimum mode determination unit 173 as the vector information of the processing target region and stores the set motion vector information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If it is determined in step S185 that the pixel position C is not available, the process proceeds to step S187.
  • In step S187, the storage vector selection unit 174 determines whether or not the pixel position D shown in FIG. 21 is available based on the motion vector information or the intra region information from the optimum mode determination unit 173. That is, in step S187, the storage vector selection unit 174 determines whether or not the motion vector information of the PU including the pixel position D is available.
  • If it is determined in step S187 that the pixel position D is available, the process proceeds to step S188.
  • In step S188, the storage vector selection unit 174 sets the motion vector information of the PU including the pixel position D (PI3_0 in FIG. 21) from the optimum mode determination unit 173 as the vector information of the processing target region and stores the set motion vector information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • If it is determined in step S187 that the pixel position D is not available, the process proceeds to step S189.
  • In step S189, the storage vector selection unit 174 sets intra as the motion vector information of the processing target region and stores the set information in the motion vector storage memory 122. Thereafter, the process returns to step S139 in FIG. 13.
  • FIG. 25 illustrates a configuration of an embodiment of an image decoding device as an image processing device to which the present disclosure is applied.
  • The image decoding apparatus 200 shown in FIG. 25 is a decoding apparatus corresponding to the image encoding device 100 of FIG. 1.
  • encoded data encoded by the image encoding device 100 is transmitted to the image decoding device 200 corresponding to the image encoding device 100 via a predetermined transmission path and decoded.
  • The image decoding apparatus 200 includes an accumulation buffer 201, a lossless decoding unit 202, an inverse quantization unit 203, an inverse orthogonal transform unit 204, a calculation unit 205, a deblock filter 206, a screen rearrangement buffer 207, and a D / A converter 208.
  • the image decoding apparatus 200 includes a frame memory 209, a selection unit 210, an intra prediction unit 211, a motion prediction / compensation unit 212, and a selection unit 213.
  • the image decoding apparatus 200 includes a motion vector generation unit 221 and a motion vector storage memory 222.
  • the accumulation buffer 201 accumulates the transmitted encoded data. This encoded data is encoded by the image encoding device 100.
  • The lossless decoding unit 202 decodes the encoded data read from the accumulation buffer 201 at a predetermined timing by a method corresponding to the encoding method of the lossless encoding unit 106 in FIG. 1.
  • The inverse quantization unit 203 inversely quantizes the coefficient data (quantized coefficients) obtained by decoding by the lossless decoding unit 202 by a method corresponding to the quantization method of the quantization unit 105 in FIG. 1. That is, the inverse quantization unit 203 uses the quantization parameter supplied from the image encoding device 100 to inversely quantize the quantized coefficients in the same manner as the inverse quantization unit 108 in FIG. 1.
  • the inverse quantization unit 203 supplies the inversely quantized coefficient data, that is, the orthogonal transform coefficient, to the inverse orthogonal transform unit 204. Further, the inverse quantization unit 203 supplies the quantization parameter obtained when the inverse quantization is performed to the deblocking filter 206 and the quantization parameter difference detection unit 221.
  • The inverse orthogonal transform unit 204 performs inverse orthogonal transform on the orthogonal transform coefficients by a method corresponding to the orthogonal transform method of the orthogonal transform unit 104 in FIG. 1, and obtains decoded residual data corresponding to the residual data before orthogonal transform in the image encoding device 100.
  • the decoded residual data obtained by the inverse orthogonal transform is supplied to the calculation unit 205.
  • a prediction image is supplied to the calculation unit 205 from the intra prediction unit 211 or the motion prediction / compensation unit 212 via the selection unit 213.
  • the calculation unit 205 adds the decoded residual data and the prediction image, and obtains decoded image data corresponding to the image data before the prediction image is subtracted by the calculation unit 103 of the image encoding device 100.
  • the arithmetic unit 205 supplies the decoded image data to the deblock filter 206.
  • the deblock filter 206 is configured basically in the same manner as the deblock filter 111 of the image encoding device 100.
  • the deblocking filter 206 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
  • the deblocking filter 206 supplies the filter processing result to the screen rearrangement buffer 207.
  • the screen rearrangement buffer 207 rearranges images. That is, the order of frames rearranged for the encoding order by the screen rearrangement buffer 102 in FIG. 1 is rearranged in the original display order.
  • the D / A conversion unit 208 D / A converts the image supplied from the screen rearrangement buffer 207, outputs it to a display (not shown), and displays it.
  • the output of the deblock filter 206 is further supplied to the frame memory 209.
  • The frame memory 209, the selection unit 210, the intra prediction unit 211, the motion prediction / compensation unit 212, and the selection unit 213 correspond to the frame memory 112, the selection unit 113, the intra prediction unit 114, the motion prediction / compensation unit 115, and the selection unit 116 of the image encoding device 100, respectively.
  • the selection unit 210 reads out the inter-processed image and the referenced image from the frame memory 209 and supplies them to the motion prediction / compensation unit 212. Further, the selection unit 210 reads an image used for intra prediction from the frame memory 209 and supplies the image to the intra prediction unit 211.
  • the intra prediction unit 211 is appropriately supplied from the lossless decoding unit 202 with information indicating the intra prediction mode obtained by decoding the header information. Based on this information, the intra prediction unit 211 generates a prediction image from the reference image acquired from the frame memory 209 and supplies the generated prediction image to the selection unit 213.
  • the motion prediction / compensation unit 212 is supplied with prediction mode information, motion vector information, reference frame information, flags, various parameters, and the like from the motion vector generation unit 221.
  • The motion prediction / compensation unit 212 generates a prediction image from the reference image acquired from the frame memory 209 based on the information supplied from the motion vector generation unit 221, and supplies the generated prediction image to the selection unit 213.
  • the selection unit 213 selects the prediction image generated by the motion prediction / compensation unit 212 or the intra prediction unit 211 and supplies the selected prediction image to the calculation unit 205.
  • Information obtained by decoding the header information is supplied from the lossless decoding unit 202 to the motion vector generation unit 221.
  • the motion vector generation unit 221 generates a prediction vector of spatial correlation and temporal correlation from the motion vector of the surrounding area.
  • the motion vector generation unit 221 generates temporal prediction vector information using the motion vector information stored in the motion vector storage memory 222.
  • The motion vector generation unit 221 generates (reconstructs) motion vector information by adding, among the generated prediction vectors, the prediction vector indicated by the prediction vector index to the motion vector difference information.
  • the motion vector generation unit 221 supplies the generated motion vector information to the motion prediction / compensation unit 212 together with other information obtained by decoding the header information.
  • the motion vector storage memory 222 stores the motion vector information stored by the motion vector generation unit 221 and used when the motion vector generation unit 221 generates a temporal prediction vector.
  • FIG. 26 is a block diagram illustrating a configuration example of the motion vector generation unit. The motion vector generation unit 221 in FIG. 26 corresponds to the motion vector difference generation unit 121 in FIG. 9.
  • The motion vector generation unit 221 includes a motion vector generation control unit 231, a temporal prediction vector generation unit 232, an in-picture prediction vector generation unit 233, a peripheral motion vector storage unit 234, a motion vector reconstruction unit 235, and an inter / intra determination unit 236.
  • From the lossless decoding unit 202, CU size information, CU partition information, CU address information, motion vector difference information, and prediction mode information are supplied to the motion vector generation control unit 231.
  • The motion vector generation control unit 231 corresponds to the motion vector difference generation control unit 131 of FIG. 9.
  • the motion vector generation control unit 231 gives a prediction vector generation instruction to the in-picture prediction vector generation unit 233 and the temporal prediction vector generation unit 232.
  • The motion vector generation control unit 231 supplies CU / PU address information and CU / PU size information to the 16 × 16 address correction unit 242.
  • the motion vector generation control unit 231 supplies the motion vector difference information from the lossless decoding unit 202 to the motion vector reconstruction unit 235 to reconstruct the motion vector.
  • the temporal prediction vector generation unit 232 is a unit corresponding to the temporal prediction vector generation unit 132 in FIG. 9 and is a unit that generates a temporal correlation prediction vector.
  • The temporal prediction vector generation unit 232 includes a 16 × 16 address generation unit 241, a 16 × 16 address correction unit 242, and a memory access control unit 243.
  • The 16 × 16 address generation unit 241 corresponds to the 16 × 16 address generation unit 141 of FIG. 9.
  • The 16 × 16 address generation unit 241 calculates the address of the 16 × 16 area to be referenced under the control of the motion vector generation control unit 231 and supplies the calculated 16 × 16 address information to the 16 × 16 address correction unit 242.
  • Further, when there is a re-reading instruction from the inter / intra determination unit 236, the 16 × 16 address generation unit 241 calculates the address of the 16 × 16 area for the intra case and supplies the calculated 16 × 16 address information to the 16 × 16 address correction unit 242.
  • The 16 × 16 address correction unit 242 is supplied with CU / PU address information and CU / PU size information from the motion vector generation control unit 231.
  • The 16 × 16 address correction unit 242 is also supplied with the 16 × 16 address information from the 16 × 16 address generation unit 241.
  • The 16 × 16 address correcting unit 242 corresponds to the 16 × 16 address correcting unit 142 in FIG. 9.
  • The 16 × 16 address correction unit 242 uses the supplied information to determine whether the reference indicated by the 16 × 16 address information from the 16 × 16 address generation unit 241 extends beyond the CU. If it does, the 16 × 16 address correction unit 242 corrects the 16 × 16 address information so that the reference is pulled back into the CU and supplies the corrected 16 × 16 address information to the memory access control unit 243. If the reference does not extend beyond the CU, the 16 × 16 address correction unit 242 supplies the 16 × 16 address information from the 16 × 16 address generation unit 241 to the memory access control unit 243 as it is.
  • The memory access control unit 243 corresponds to the memory access control unit 143 in FIG. 9.
  • The memory access control unit 243 requests the motion vector storage memory 222 to read the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 242. That is, the memory access control unit 243 sets that motion vector as the temporal correlation prediction vector and issues a request to read it. In response to this request, motion vector information is supplied from the motion vector storage memory 222 to the inter / intra determination unit 236.
  • The in-picture prediction vector generation unit 233 corresponds to the in-picture prediction vector generation unit 133 in FIG. 9.
  • The in-picture prediction vector generation unit 233 reads the motion vector information stored in the peripheral motion vector storage unit 234 under the control of the motion vector generation control unit 231 and generates spatial prediction vector information.
  • The in-picture prediction vector generation unit 233 supplies the spatial prediction vector information to the motion vector reconstruction unit 235.
  • the peripheral motion vector storage unit 234 includes a memory and stores motion vector information in the processing target picture.
  • the motion vector reconstruction unit 235 is supplied with motion vector difference information and a prediction vector index from the motion vector generation control unit 231.
  • the motion vector reconstruction unit 235 determines whether the spatial prediction vector from the in-picture prediction vector generation unit 233 and the temporal prediction vector from the inter / intra determination unit 236 can be referred to.
  • The motion vector reconstruction unit 235 generates motion vector information by adding the prediction vector indicated by the prediction vector index, among the referenceable motion vectors, to the motion vector difference information.
  • the motion vector reconstruction unit 235 supplies the generated motion vector information to the motion prediction / compensation unit 212 together with other decoded information (prediction mode information, reference frame information, flags, various parameters, and the like).
  • the motion vector reconstruction unit 235 stores the generated motion vector information in the motion vector storage memory 222. Although not shown, the motion vector reconstruction unit 235 causes the peripheral motion vector storage unit 234 to store the motion vector information.
  • The inter / intra determination unit 236 corresponds to the inter / intra determination unit 136 of FIG. 9.
  • When the motion vector information read from the motion vector storage memory 222 does not indicate intra, the inter / intra determination unit 236 supplies it to the motion vector reconstruction unit 235 as a temporal prediction vector.
  • When the read motion vector information indicates intra, the inter / intra determination unit 236 instructs the 16 × 16 address generation unit 241 to re-read. This re-reading instruction is issued only once.
  • In step S201, the accumulation buffer 201 accumulates the transmitted encoded data.
  • In step S202, the lossless decoding unit 202 decodes the encoded data supplied from the accumulation buffer 201. That is, the I pictures, P pictures, and B pictures encoded by the lossless encoding unit 106 in FIG. 1 are decoded.
  • At this time, the motion vector difference information, the prediction vector index (mvp_index), the reference frame information, the prediction mode information (intra prediction mode or inter prediction mode), and information such as flags and quantization parameters are also decoded.
  • When the prediction mode information is intra prediction mode information, the prediction mode information is supplied to the intra prediction unit 211.
  • When the prediction mode information is inter prediction mode information, the prediction mode information and the corresponding motion vector difference information are supplied to the motion vector generation unit 221.
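  • For illustration only, the decoded syntax elements routed here could be grouped as in the following C sketch (the field names are assumptions, not the bitstream syntax of this disclosure):

    typedef struct {
        int      mvd_x, mvd_y;   /* motion vector difference information */
        int      mvp_index;      /* prediction vector index (mvp_index) */
        int      ref_frame;      /* reference frame information */
        int      pred_mode;      /* intra or inter prediction mode */
        int      qp;             /* quantization parameter */
        unsigned flags;          /* other decoded flags */
    } DecodedParams;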
  • In step S203, the inverse quantization unit 203 inversely quantizes the quantized orthogonal transform coefficients obtained by decoding by the lossless decoding unit 202.
  • In step S204, the inverse orthogonal transform unit 204 performs inverse orthogonal transform on the orthogonal transform coefficients obtained by the inverse quantization unit 203 by a method corresponding to the orthogonal transform unit 104 in FIG. 1. As a result, the difference information corresponding to the input of the orthogonal transform unit 104 (the output of the calculation unit 103) in FIG. 1 is decoded.
  • In step S205, the calculation unit 205 adds the prediction image to the difference information obtained by the process in step S204. As a result, the original image data is decoded.
  • In step S206, the deblock filter 206 appropriately filters the decoded image obtained by the process in step S205. Thereby, block distortion is appropriately removed from the decoded image.
  • In step S207, the frame memory 209 stores the filtered decoded image.
  • In step S208, the intra prediction unit 211 or the motion vector generation unit 221 determines whether or not intra coding has been performed, in accordance with the prediction mode information supplied from the lossless decoding unit 202.
  • If it is determined in step S208 that intra coding has been performed, the intra prediction unit 211 acquires the intra prediction mode from the lossless decoding unit 202 in step S209. In step S210, the intra prediction unit 211 generates a prediction image according to the intra prediction mode acquired in step S209 and outputs the generated prediction image to the selection unit 213.
  • If it is determined in step S208 that intra coding has not been performed, in step S211 the motion vector generation unit 221 performs a motion vector generation process. This motion vector generation process will be described later with reference to FIG. 28.
  • By this process, a prediction vector is generated using the motion vectors of the peripheral regions of the processing target region, and a motion vector is generated (reconstructed) using the generated prediction vector and the motion vector difference information (difference value).
  • The motion vector generation unit 221 supplies the generated motion vector information, together with information such as the prediction mode information and other parameters decoded by the lossless decoding unit 202, to the motion prediction / compensation unit 212.
  • In step S212, the motion prediction / compensation unit 212 uses the information from the motion vector generation unit 221 to generate a prediction image from the reference image acquired from the frame memory 209, and supplies the generated prediction image to the selection unit 213.
  • In step S213, the selection unit 213 selects a prediction image. That is, the selection unit 213 is supplied with the prediction image generated by the intra prediction unit 211 or the prediction image generated by the motion prediction / compensation unit 212. The selection unit 213 selects the side from which the prediction image was supplied and supplies that prediction image to the calculation unit 205. This prediction image is added to the difference information by the process of step S205.
  • In step S214, the screen rearrangement buffer 207 rearranges the frames of the decoded image data. That is, the order of the frames of the decoded image data rearranged for encoding by the screen rearrangement buffer 102 (FIG. 1) of the image encoding device 100 is rearranged to the original display order.
  • In step S215, the D / A converter 208 D / A converts the decoded image data whose frames have been rearranged by the screen rearrangement buffer 207.
  • The decoded image data is output to a display (not shown), and the image is displayed.
  • the motion vector difference information corresponding to the prediction mode information is supplied to the motion vector generation control unit 231.
  • the motion vector generation control unit 231 acquires the supplied prediction mode information and the like. For example, the motion vector generation control unit 231 acquires prediction mode information, motion vector difference information, CU size information, CU partition information, CU address information, and the like. Furthermore, the motion vector generation control unit 231 also obtains a prediction vector index, a reference image index, and the like.
  • the motion vector generation control unit 231 gives a prediction vector generation instruction to the in-picture prediction vector generation unit 233 and the temporal prediction vector generation unit 232.
  • In step S232, the in-picture prediction vector generation unit 233 generates a spatial correlation prediction vector.
  • Specifically, the in-picture prediction vector generation unit 233 reads the motion vector information stored in the peripheral motion vector storage unit 234 and generates a spatial correlation prediction vector.
  • The in-picture prediction vector generation unit 233 supplies the generated spatial correlation prediction vector information to the motion vector reconstruction unit 235.
  • In step S233, the temporal prediction vector generation unit 232 generates a temporal correlation prediction vector.
  • the details of this temporal correlation prediction vector generation processing will be described later with reference to FIG. 29, but temporal correlation prediction vector information is supplied to the motion vector reconstruction unit 235 by the processing in step S233.
  • In step S234, the motion vector reconstruction unit 235 determines whether or not all the surrounding areas are unreferenceable. When at least one of the spatial prediction vector information and the temporal prediction vector information has been supplied, the motion vector reconstruction unit 235 determines that not all the surrounding areas are unreferenceable, and the process proceeds to step S235.
  • In step S235, the motion vector reconstruction unit 235 deletes overlapping prediction vectors, if any, from the supplied spatial prediction vector information and temporal prediction vector information.
  • In step S236, the motion vector reconstruction unit 235 acquires a prediction vector index from the motion vector generation control unit 231.
  • In step S237, when there are a plurality of prediction vectors, the motion vector reconstruction unit 235 determines the prediction vector indicated by the prediction vector index acquired in step S236 as the prediction vector.
  • On the other hand, when neither the spatial prediction vector information nor the temporal prediction vector information has been supplied, the motion vector reconstruction unit 235 determines that all the surrounding areas are unreferenceable, and the process proceeds to step S238.
  • In step S238, the motion vector reconstruction unit 235 sets 0 as the prediction vector.
  • In step S239, the motion vector reconstruction unit 235 generates a motion vector by adding the prediction vector determined in step S237 or S238 to the motion vector difference from the motion vector generation control unit 231.
  • In step S240, the motion vector reconstruction unit 235 stores the generated motion vector information in the motion vector storage memory 222. Specifically, as described above with reference to FIG. 7, the motion vector reconstruction unit 235 stores, for each 16 × 16 divided region, the motion vector information of the PU including the upper left pixel position of the divided region in the motion vector storage memory 222. The motion vector information stored in the motion vector storage memory 222 is used to generate prediction vectors for temporally later pictures. This motion vector information is also stored in the peripheral motion vector storage unit 234.
  • The motion vector reconstruction unit 235 supplies the generated motion vector information, together with information such as the prediction mode information and other parameters decoded by the lossless decoding unit 202, to the motion prediction / compensation unit 212.
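  • Steps S234 to S239 amount to the following small C routine, sketched here with illustrative names; the candidate list is assumed to already hold the deduplicated spatial and temporal prediction vectors, and mvp_index is assumed to be valid whenever a candidate exists.

    typedef struct { int x, y; } Mv;

    /* cand: deduplicated prediction vector candidates (steps S234-S235);
       mvp_index: index decoded from the stream (step S236);
       mvd: decoded motion vector difference. */
    static Mv reconstruct_mv(const Mv cand[], int n_cand, int mvp_index, Mv mvd)
    {
        Mv mvp = { 0, 0 };                 /* step S238: no candidate -> 0 */
        if (n_cand > 0)
            mvp = cand[mvp_index];         /* step S237: select by index */
        Mv mv = { mvp.x + mvd.x, mvp.y + mvd.y };   /* step S239 */
        return mv;
    }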
  • In the merge mode, step S239 is excluded from the processing of FIG. 28. In that case, the index acquired in step S236 of FIG. 28 becomes a merge index (merge_idx), and a motion vector, not a prediction vector, is determined in step S237 or S238.
  • In the merge mode as well, the process for generating the temporal correlation prediction vector in step S233 and the process for saving the motion vector in step S240 are basically the same, and thus their description is omitted.
  • The 16 × 16 address generation unit 241 is supplied with a prediction vector generation instruction from the motion vector generation control unit 231. In step S261, the 16 × 16 address generation unit 241 calculates the address of the 16 × 16 area to be referred to. The 16 × 16 address generation unit 241 supplies the calculated 16 × 16 address information to the 16 × 16 address correction unit 242.
  • The 16 × 16 address correction unit 242 is further supplied with CU / PU address information and CU / PU size information from the motion vector generation control unit 231.
  • In step S262, the 16 × 16 address correction unit 242 uses the supplied information to determine whether the reference indicated by the 16 × 16 address information from the 16 × 16 address generation unit 241 extends beyond the CU. That is, in step S262, it is determined whether the lower right pixel position of the divided area indicated by the 16 × 16 address information exceeds the boundary of the CU to be processed.
  • If it does, in step S263 the 16 × 16 address correcting unit 242 corrects the 16 × 16 address information so that the reference is pulled back into the CU. Specifically, the 16 × 16 address correction unit 242 pulls back the 16 × 16 address information in the vertical direction using Expression (3).
  • The 16 × 16 address correcting unit 242 supplies the 16 × 16 address information corrected as a result of the pull-back to the memory access control unit 243.
  • If it is determined in step S262 that the reference does not extend beyond the CU, the process in step S263 is skipped. That is, the 16 × 16 address correction unit 242 supplies the 16 × 16 address information from the 16 × 16 address generation unit 241 to the memory access control unit 243 as it is.
  • the memory access control unit 243 requests the motion vector storage memory 222 to read the motion vector indicated by the 16 ⁇ 16 address information from the 16 ⁇ 16 address correction unit 242. That is, the memory access control unit 243 sets a motion vector indicated by the 16 ⁇ 16 address information from the address correction unit 242 as a time correlation prediction vector, and makes a request to read the set time correlation prediction vector.
• In step S264, the motion vector storage memory 222 reads the motion vector information in response to the request from the memory access control unit 243, and supplies the motion vector information to the inter / intra determination unit 236.
• In step S265, the inter / intra determination unit 236 determines whether or not the motion vector information read from the motion vector storage memory 222 indicates intra.
• If it is determined in step S265 that the read motion vector information indicates intra, the inter / intra determination unit 236 instructs the 16 × 16 address generation unit 241 to re-read. This re-reading instruction is performed only once.
• In step S266, the 16 × 16 address generation unit 241 calculates the address of the 16 × 16 area for the intra case.
  • the 16 ⁇ 16 address generation unit 141 supplies the calculated 16 ⁇ 16 address information to the 16 ⁇ 16 address correction unit 242.
• In step S267, the 16 × 16 address correction unit 242 uses the supplied information to determine whether the reference using the 16 × 16 address information from the 16 × 16 address generation unit 241 is a reference beyond the CU.
  • the 16 ⁇ 16 address correcting unit 242 corrects the 16 ⁇ 16 address information so that the reference is pulled back into the CU in step S268. Specifically, the 16 ⁇ 16 address correction unit 242 pulls back the 16 ⁇ 16 address information in the vertical direction using Expression (3).
  • the 16 ⁇ 16 address correcting unit 242 supplies the 16 ⁇ 16 address information corrected as a result of the pull back to the memory access control unit 243.
• If it is determined in step S267 that the reference does not exceed the CU, the process in step S268 is skipped. That is, the 16 × 16 address correction unit 242 supplies the 16 × 16 address information from the 16 × 16 address generation unit 241 to the memory access control unit 243 as it is.
• The memory access control unit 243 requests the motion vector storage memory 222 to read the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 242. That is, the memory access control unit 243 sets the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 242 as the temporal correlation prediction vector, and requests reading of the set temporal correlation prediction vector.
• In step S269, the motion vector storage memory 222 reads the motion vector information in response to the request from the memory access control unit 243, and supplies the motion vector information to the inter / intra determination unit 236.
  • the inter / intra determination unit 236 supplies the motion vector information from the motion vector storage memory 222 to the motion vector reconstruction unit 235 as temporal correlation prediction vector information.
• If it is determined in step S265 that the read motion vector information does not indicate intra, steps S266 to S269 are skipped. That is, the motion vector information read in step S264 is supplied to the motion vector reconstruction unit 235 as temporal correlation prediction vector information.
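• A minimal sketch of the overall read flow of FIG. 29 is shown below, under assumed helper names (calc_16x16_addr, correct_if_beyond_cu, read_mv_memory are not from the patent; they stand in for units 241/242/243 and memory 222).

```c
#include <stdint.h>

typedef struct Ctx Ctx;   /* decoder context, assumed */
typedef struct { int16_t mvx, mvy; uint8_t is_intra; } StoredMv;

int      calc_16x16_addr(const Ctx *ctx);                 /* S261 */
int      calc_16x16_addr_intra(const Ctx *ctx);           /* S266 */
int      correct_if_beyond_cu(const Ctx *ctx, int addr);  /* S262-S263 / S267-S268 */
StoredMv read_mv_memory(const Ctx *ctx, int addr);        /* S264 / S269 */

StoredMv read_temporal_vector(const Ctx *ctx)
{
    int addr = correct_if_beyond_cu(ctx, calc_16x16_addr(ctx));
    StoredMv mv = read_mv_memory(ctx, addr);
    if (mv.is_intra) {   /* S265: the re-read is performed only once */
        addr = correct_if_beyond_cu(ctx, calc_16x16_addr_intra(ctx));
        mv = read_mv_memory(ctx, addr);
    }
    return mv;
}
```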
• As described above, the address is corrected so that the reference is pulled back into the LCU. Consequently, the LCU is not straddled in the vertical direction, and the Col vector information does not need to be re-read.
  • FIG. 30 is a block diagram illustrating another configuration example of the motion vector generation unit.
  • the motion vector generation unit in FIG. 30 corresponds to the motion vector difference generation unit in FIG.
• The motion vector generation unit 221 of FIG. 30 is common to the motion vector generation unit 221 of FIG. 26 in that it includes an in-picture prediction vector generation unit 233, a peripheral motion vector storage unit 234, a motion vector reconstruction unit 235, and an inter / intra determination unit 236. The description of these common units is therefore omitted.
• The motion vector generation unit 221 in FIG. 30 differs from the motion vector generation unit 221 in FIG. 26 in that the motion vector generation control unit 231 is replaced with a motion vector generation control unit 251 and the temporal prediction vector generation unit 232 is replaced with a temporal prediction vector generation unit 252.
  • the motion vector generation control unit 251 is a unit corresponding to the motion vector difference generation control unit 151 in FIG.
  • the motion vector generation control unit 251 gives a prediction vector generation instruction to the in-picture prediction vector generation unit 233 and the temporal prediction vector generation unit 252.
  • the motion vector generation control unit 251 supplies the motion vector difference information from the lossless decoding unit 202 to the motion vector reconstruction unit 235 to reconstruct the motion vector.
  • the motion vector generation control unit 251 does not supply CU or PU information to the 16 ⁇ 16 address correction unit 262.
  • the temporal prediction vector generation unit 252 is a unit corresponding to the temporal prediction vector generation unit 152 in FIG.
  • the temporal prediction vector generation unit 252 is common to the temporal prediction vector generation unit 232 of FIG. 26 in that it includes a memory access control unit 243.
  • the 16 ⁇ 16 address generation unit 241 is replaced with a 16 ⁇ 16 address generation unit 261
  • the 16 ⁇ 16 address correction unit 242 is replaced with a 16 ⁇ 16 address correction unit 262. This is different from the 26 temporal prediction vector generation units 232.
  • the 16 ⁇ 16 address generation unit 261 is a unit corresponding to the 16 ⁇ 16 address generation unit 161 in FIG.
  • the 16 ⁇ 16 address generation unit 261 calculates the address of the 16 ⁇ 16 area to be referenced under the control of the motion vector generation control unit 251, and sends the calculated 16 ⁇ 16 address information to the 16 ⁇ 16 address correction unit 262. Supply.
  • the 16 ⁇ 16 address generation unit 261 calculates the address of the 16 ⁇ 16 area in the case of intra when there is a reread instruction from the inter / intra determination unit 236.
  • the 16 ⁇ 16 address generation unit 261 determines whether the address calculated in the first time and the address calculated in the second time (in the case of intra) are the same, and if they are the same, the addresses are the same. A coincidence flag indicating that is generated.
  • the 16 ⁇ 16 address generation unit 261 supplies the generated match flag to the 16 ⁇ 16 address correction unit 262.
  • the 16 ⁇ 16 address correcting unit 262 is a unit corresponding to the 16 ⁇ 16 address correcting unit 162 in FIG.
  • the 16 ⁇ 16 address correction unit 262 supplies the 16 ⁇ 16 address information from the 16 ⁇ 16 address generation unit 261 to the memory access control unit 243. Further, when receiving the flag from the 16 ⁇ 16 address generation unit 261, the 16 ⁇ 16 address correction unit 262 corrects the first 16 ⁇ 16 address so that the address is different from the first time.
  • the 16 ⁇ 16 address correction unit 262 supplies the corrected 16 ⁇ 16 address information to the memory access control unit 243.
  • the 16 ⁇ 16 address generation unit 261 is supplied with a prediction vector generation instruction from the motion vector generation control unit 251. In step S271, the 16 ⁇ 16 address generation unit 261 calculates the address of the 16 ⁇ 16 area to be referred to. The 16 ⁇ 16 address generation unit 261 supplies the calculated 16 ⁇ 16 address information to the 16 ⁇ 16 address correction unit 262.
  • the 16 ⁇ 16 address correction unit 262 supplies the 16 ⁇ 16 address information from the 16 ⁇ 16 address generation unit 261 to the memory access control unit 243.
• The memory access control unit 243 requests the motion vector storage memory 222 to read the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 262. That is, the memory access control unit 243 sets the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 262 as the temporal correlation prediction vector, and requests reading of the set temporal correlation prediction vector.
• In step S272, the motion vector storage memory 222 reads the motion vector information in response to the request from the memory access control unit 243, and supplies the motion vector information to the inter / intra determination unit 236.
• In step S273, the inter / intra determination unit 236 determines whether or not the motion vector information read from the motion vector storage memory 222 indicates intra.
• If it is determined in step S273 that the read motion vector information indicates intra, the inter / intra determination unit 236 instructs the 16 × 16 address generation unit 261 to re-read. This re-reading instruction is performed only once.
• In step S274, the 16 × 16 address generation unit 261 calculates the address of the 16 × 16 area for the intra case.
• In step S275, the 16 × 16 address generation unit 261 determines whether the address calculated the first time and the address calculated the second time (for the intra case) are the same.
  • the 16 ⁇ 16 address generation unit 261 determines in step S275 that the addresses are the same, the 16 ⁇ 16 address generation unit 261 generates a match flag indicating that the addresses are the same, and uses the generated match flag as the 16 ⁇ 16 address correction unit 262. To supply.
  • the 16 ⁇ 16 address correcting unit 262 When receiving the coincidence flag, the 16 ⁇ 16 address correcting unit 262 corrects the first 16 ⁇ 16 address so as to be different from the first address in step S276. That is, the 16 ⁇ 16 address correcting unit 262 corrects the 16 ⁇ 16 address by performing +16 in the horizontal direction of the first 16 ⁇ 16 address so as to indicate an area to the right of the region to which the 16 ⁇ 16 address belongs. . The 16 ⁇ 16 address correction unit 262 supplies the corrected 16 ⁇ 16 address information to the memory access control unit 243.
• If it is determined in step S275 that the addresses are different, no coincidence flag is generated, and step S276 is skipped. That is, the 16 × 16 address correction unit 262 supplies the second 16 × 16 address, which differs from the first 16 × 16 address, to the memory access control unit 243.
• The memory access control unit 243 requests the motion vector storage memory 222 to read the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 262. That is, the memory access control unit 243 sets the motion vector indicated by the 16 × 16 address information from the 16 × 16 address correction unit 262 as the temporal correlation prediction vector, and requests reading of the set temporal correlation prediction vector.
• In step S277, the motion vector storage memory 222 reads the motion vector information in response to the request from the memory access control unit 243, and supplies the motion vector information to the inter / intra determination unit 236.
  • the inter / intra determination unit 236 supplies the motion vector information from the motion vector storage memory 222 to the motion vector reconstruction unit 235 as temporal correlation prediction vector information.
• If it is determined in step S273 that the read motion vector information does not indicate intra, steps S274 to S277 are skipped. That is, the motion vector information read in step S272 is supplied to the motion vector reconstruction unit 235 as temporal correlation prediction vector information.
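• A minimal sketch of the coincidence handling of steps S275/S276 follows, with assumed names: when the address recomputed for the intra case coincides with the first address, it is shifted +16 horizontally so that the divided region to the right is read instead of re-reading the same slot.

```c
typedef struct { int x, y; } Addr16;   /* pixel address of a 16x16 region */

static Addr16 second_read_addr(Addr16 first, Addr16 second)
{
    if (second.x == first.x && second.y == first.y)   /* S275: coincidence */
        second.x = first.x + 16;                      /* S276: region to the right */
    return second;   /* guaranteed to differ from the first address */
}
```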
  • FIG. 32 is a block diagram illustrating still another configuration example of the motion vector generation unit. Note that the motion vector generation unit in FIG. 32 corresponds to the motion vector difference generation unit in FIG.
• The motion vector generation unit 221 in FIG. 32 is common to the motion vector generation unit 221 in FIG. 26 in that it includes an in-picture prediction vector generation unit 233 and a peripheral motion vector storage unit 234. The description of these common units is therefore omitted.
• The motion vector generation unit 221 in FIG. 32 differs from the motion vector generation unit 221 in FIG. 26 in that the motion vector generation control unit 231 is replaced with a motion vector generation control unit 271 and the temporal prediction vector generation unit 232 is replaced with a temporal prediction vector generation unit 272.
  • the motion vector generation unit 221 in FIG. 32 is different from the motion vector generation unit 221 in FIG. 26 in that the motion vector reconstruction unit 235 is replaced with a motion vector reconstruction unit 273.
• Furthermore, the motion vector generation unit 221 in FIG. 32 is different from the motion vector generation unit 221 in FIG. 26 in that the inter / intra determination unit 236 is removed and a saved vector selection unit 274 is added.
  • the motion vector generation control unit 271 is a unit corresponding to the motion vector difference generation control unit 171 of FIG.
  • the motion vector generation control unit 271 gives an instruction to generate a prediction vector to the in-picture prediction vector generation unit 233 and the temporal prediction vector generation unit 272.
  • the motion vector generation control unit 271 supplies the motion vector difference information from the lossless decoding unit 202 to the motion vector reconstruction unit 273 to reconstruct the motion vector.
• The motion vector generation control unit 271 supplies the intra region information to the saved vector selection unit 274 and causes the motion vector storage memory 222 to store the motion vector.
  • the temporal prediction vector generation unit 272 is a unit corresponding to the temporal prediction vector generation unit 172 of FIG.
  • the temporal prediction vector generation unit 272 is common to the temporal prediction vector generation unit 232 in FIG. 26 in that the temporal prediction vector generation unit 272 includes a memory access control unit 243.
• The temporal prediction vector generation unit 272 differs from the temporal prediction vector generation unit 232 in FIG. 26 in that the 16 × 16 address generation unit 241 is replaced with a 16 × 16 address generation unit 281 and the 16 × 16 address correction unit 242 is removed.
• The motion vector reconstruction unit 273 is a unit corresponding to the motion vector reconstruction unit 173 of FIG. The motion vector reconstruction unit 273 determines whether the spatial prediction vector from the in-picture prediction vector generation unit 233 and the temporal prediction vector from the motion vector storage memory 222 can be referred to.
• The motion vector reconstruction unit 273 generates motion vector information by adding, among the referenceable motion vectors, the motion vector indicated by the prediction vector index, which serves as the prediction vector, to the motion vector difference information.
  • the motion vector reconstruction unit 273 supplies the generated motion vector information to the motion prediction / compensation unit 212 together with other decoded information (prediction mode information, reference frame information, flags, various parameters, and the like).
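• A minimal sketch of this reconstruction follows, with an assumed data layout (candidate array and referenceability flags are illustrative): the candidate selected by the decoded prediction vector index serves as the prediction vector, and the decoded motion vector difference is added to it.

```c
#include <stdbool.h>

typedef struct { int x, y; } Mv;

Mv reconstruct_mv(const Mv *candidates, const bool *referenceable,
                  int pred_idx, Mv mvd)
{
    Mv mv = { 0, 0 };
    if (referenceable[pred_idx]) {               /* spatial or temporal candidate */
        mv.x = candidates[pred_idx].x + mvd.x;   /* prediction vector             */
        mv.y = candidates[pred_idx].y + mvd.y;   /* plus decoded difference       */
    }
    return mv;
}
```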
• The saved vector selection unit 274 is a unit corresponding to the saved vector selection unit 174 of FIG.
• The saved vector selection unit 274 saves, in the motion vector storage memory 222, the motion vector information of the processing target region, which is a divided region obtained by dividing the screen into 16 × 16 regions.
• The saved vector selection unit 274 receives the motion vector information from the motion vector reconstruction unit 273, the intra region information from the motion vector generation control unit 271, and the motion vector and mode information of the peripheral regions from the peripheral motion vector storage unit 234.
• The saved vector selection unit 274 also causes the peripheral motion vector storage unit 234 to store motion vector information and mode information.
  • the 16 ⁇ 16 address generation unit 281 is supplied with a prediction vector generation instruction from the motion vector generation control unit 271. In step S281, the 16 ⁇ 16 address generation unit 281 calculates the address of the 16 ⁇ 16 area to be referred to. The 16 ⁇ 16 address generation unit 281 supplies the calculated 16 ⁇ 16 address information to the memory access control unit 243.
  • the memory access control unit 243 requests the motion vector storage memory 222 to read the motion vector indicated by the 16 ⁇ 16 address information from the 16 ⁇ 16 address generation unit 281.
• FIG. 34 is a flowchart illustrating the flow of the saving process described above with reference to FIG. 20. In each step of FIG. 34, basically the same processing as in each step of FIG. 23 described above is performed.
  • the motion vector reconstruction unit 273 supplies the generated motion vector information to the saved vector selection unit 274.
  • the motion vector generation control unit 271 supplies the intra area information to the saved vector selection unit 274.
• In step S301, based on the intra region information from the motion vector generation control unit 271, the saved vector selection unit 274 determines whether or not the processing target region (that is, the 16 × 16 region with a thick frame in FIG. 20) is inter. That is, in step S301, the saved vector selection unit 274 determines whether or not the PU including the pixel position P shown in FIG. 20 is inter. If it is determined in step S301 that the processing target region is inter, the process proceeds to step S302.
• In step S302, the saved vector selection unit 274 sets the motion vector information from the motion vector reconstruction unit 273 as the vector information of the processing target region, and stores the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If it is determined in step S301 that the processing target region is intra, the process proceeds to step S303.
• The saved vector selection unit 274 obtains the motion vector information and mode information of the PU including the pixel position A shown in FIG. 20.
• In step S303, the saved vector selection unit 274 determines whether or not the pixel position A is available. That is, in step S303, the saved vector selection unit 274 determines whether or not the motion vector information of the PU including the pixel position A is available.
• If it is determined in step S303 that the pixel position A is available, the process proceeds to step S304. In step S304, the saved vector selection unit 274 sets the motion vector information of the PU including the pixel position A as the vector information of the processing target region, and saves the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If the mode information of the PU including the pixel position A is intra, it is determined in step S303 that the pixel position A is not available, and the process proceeds to step S305.
• The saved vector selection unit 274 then obtains the motion vector information and mode information of the PU including the pixel position B shown in FIG. 20.
• In step S305, the saved vector selection unit 274 determines whether or not the pixel position B is available. That is, in step S305, the saved vector selection unit 274 determines whether or not the motion vector information of the PU including the pixel position B is available.
• If it is determined in step S305 that the pixel position B is available, the process proceeds to step S306. In step S306, the saved vector selection unit 274 sets the motion vector information of the PU including the pixel position B as the vector information of the processing target region, and saves the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If the mode information of the PU including the pixel position B is intra, it is determined in step S305 that the pixel position B is not available, and the process proceeds to step S307.
• The saved vector selection unit 274 then obtains the motion vector information and mode information of the PU including the pixel position C shown in FIG. 20.
• In step S307, the saved vector selection unit 274 determines whether or not the pixel position C is available. That is, in step S307, the saved vector selection unit 274 determines whether or not the motion vector information of the PU including the pixel position C is available.
• If it is determined in step S307 that the pixel position C is available, the process proceeds to step S308. In step S308, the saved vector selection unit 274 sets the motion vector information of the PU including the pixel position C as the vector information of the processing target region, and saves the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If the mode information of the PU including the pixel position C is intra, it is determined in step S307 that the pixel position C is not available, and the process proceeds to step S309.
• In step S309, the saved vector selection unit 274 saves information indicating intra in the motion vector storage memory 222 as the vector information of the processing target region. Thereafter, the process returns to step S240 in FIG. 28.
• FIG. 35 is a flowchart explaining the flow of the saving process described above with reference to FIG. 21. In each step of FIG. 35, basically the same processing as in each step of FIG. 24 described above is performed.
  • the motion vector reconstruction unit 273 supplies the generated motion vector information to the saved vector selection unit 274.
  • the motion vector generation control unit 271 supplies the intra area information to the saved vector selection unit 274.
• In step S311, in order to save the motion vector of the processing target region (the 16 × 16 region shown in FIG. 21), the saved vector selection unit 274 determines, based on the intra region information from the motion vector generation control unit 271, whether or not the pixel position A shown in FIG. 21 is available. That is, in step S311, the saved vector selection unit 274 determines whether or not the motion vector information of the PU including the pixel position A is available.
• If it is determined in step S311 that the pixel position A is available, the process proceeds to step S312.
• In step S312, the saved vector selection unit 274 sets the motion vector information of the PU including the pixel position A (Pl0_0 in FIG. 21) from the motion vector reconstruction unit 273 as the vector information of the processing target region, and saves the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If it is determined in step S311 that the pixel position A is not available, the process proceeds to step S313.
• In step S313, the saved vector selection unit 274 determines whether or not the pixel position B shown in FIG. 21 is available based on the intra region information from the motion vector generation control unit 271. That is, in step S313, the saved vector selection unit 274 determines whether or not the motion vector information of the PU including the pixel position B is available.
• If it is determined in step S313 that the pixel position B is available, the process proceeds to step S314.
• In step S314, the saved vector selection unit 274 sets the motion vector information of the PU including the pixel position B (Pl1_0 in FIG. 21) from the motion vector reconstruction unit 273 as the vector information of the processing target region, and saves the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If it is determined in step S313 that the pixel position B is not available, the process proceeds to step S315.
• In step S315, the saved vector selection unit 274 determines whether or not the pixel position C shown in FIG. 21 is available based on the intra region information from the motion vector generation control unit 271. That is, in step S315, the saved vector selection unit 274 determines whether or not the motion vector information of the PU including the pixel position C is available.
• If it is determined in step S315 that the pixel position C is available, the process proceeds to step S316. In step S316, the saved vector selection unit 274 sets the motion vector information of the PU including the pixel position C (Pl2_0 in FIG. 21) from the motion vector reconstruction unit 273 as the vector information of the processing target region, and saves the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If it is determined in step S315 that the pixel position C is not available, the process proceeds to step S317.
• In step S317, the saved vector selection unit 274 determines whether or not the pixel position D shown in FIG. 21 is available based on the intra region information from the motion vector generation control unit 271. That is, in step S317, the saved vector selection unit 274 determines whether or not the motion vector information of the PU including the pixel position D is available.
• If it is determined in step S317 that the pixel position D is available, the process proceeds to step S318.
• In step S318, the saved vector selection unit 274 sets the motion vector information of the PU including the pixel position D (Pl3_0 in FIG. 21) from the motion vector reconstruction unit 273 as the vector information of the processing target region, and saves the set motion vector information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• If it is determined in step S317 that the pixel position D is not available, the process proceeds to step S319.
• In step S319, the saved vector selection unit 274 sets information indicating intra as the vector information of the processing target region, and saves the set information in the motion vector storage memory 222. Thereafter, the process returns to step S240 in FIG. 28.
• As described above, the motion vector saving processing target region and the position of the ColPU are substantially determined on a one-to-one basis, so that re-reading of the Col vector information is suppressed.
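• A minimal sketch of the saving processes of FIGS. 34/35 follows, with assumed helper names declared only to keep the sketch self-contained: an inter target region keeps its own vector; an intra target region falls back to the neighbouring pixel positions in order, and "intra" is recorded only when none is available.

```c
#include <stdint.h>

typedef struct Ctx Ctx;   /* decoder context, assumed */
typedef struct { int16_t mvx, mvy; uint8_t is_intra; } StoredMv;

int      target_is_inter(const Ctx *ctx);                /* FIG. 34, S301 */
int      neighbour_available(const Ctx *ctx, int pos);   /* A, B, C (, D) */
StoredMv target_mv(const Ctx *ctx);
StoredMv neighbour_mv(const Ctx *ctx, int pos);

StoredMv select_vector_to_save(const Ctx *ctx, int num_positions)
{
    /* FIG. 34, step S301: an inter target region keeps its own vector. */
    if (target_is_inter(ctx))
        return target_mv(ctx);
    /* Otherwise try the neighbouring positions in order:
     * A, B, C in FIG. 34 (S303/S305/S307); A to D in FIG. 35 (S311-S317). */
    for (int pos = 0; pos < num_positions; pos++)
        if (neighbour_available(ctx, pos))
            return neighbour_mv(ctx, pos);
    /* No candidate available: record the region as intra (S309 / S319). */
    return (StoredMv){ 0, 0, 1 };
}
```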
• In the above description, the HEVC scheme is used as the encoding scheme. However, the present disclosure is not limited to this, and other encoding / decoding schemes that perform processing using temporally correlated motion vectors can also be applied.
• Note that the present disclosure can be applied, for example, to image encoding devices and image decoding devices used when receiving image information (bitstreams) compressed by orthogonal transform such as discrete cosine transform and by motion compensation, as in MPEG or H.26x, via network media such as satellite broadcasting, cable television, the Internet, or mobile phones.
• In addition, the present disclosure can be applied to image encoding devices and image decoding devices used when processing is performed on storage media such as optical disks, magnetic disks, and flash memories.
  • the present disclosure can also be applied to motion prediction / compensation devices included in such image encoding devices and image decoding devices.
• In a computer 500, a CPU (Central Processing Unit) 501 executes various processes according to a program stored in a ROM (Read Only Memory) 502 or a program loaded from a storage unit 513 into a RAM (Random Access Memory) 503.
  • the RAM 503 also appropriately stores data necessary for the CPU 501 to execute various processes.
  • the CPU 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504.
  • An input / output interface 510 is also connected to the bus 504.
• Connected to the input / output interface 510 are an input unit 511 including a keyboard and a mouse, an output unit 512 including a display such as a CRT (Cathode Ray Tube) or an LCD (Liquid Crystal Display) and a speaker, a storage unit 513 including a hard disk, and a communication unit 514 including a modem. The communication unit 514 performs communication processing via a network including the Internet.
• A drive 515 is connected to the input / output interface 510 as necessary; a removable medium 521 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted thereon as appropriate, and a computer program read therefrom is installed in the storage unit 513 as necessary.
• When the series of processes described above is executed by software, a program constituting the software is installed from a network or a recording medium.
• The recording medium that is distributed separately from the apparatus main body to deliver the program to the user is composed not only of removable media 521 such as a magnetic disk (including a flexible disk), an optical disk (including a CD-ROM (Compact Disc - Read Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disk (including an MD (Mini Disc)), or a semiconductor memory on which the program is recorded, but also of the ROM 502 on which the program is recorded and the hard disk included in the storage unit 513, which are distributed to the user in a state of being pre-installed in the apparatus main body.
  • a magnetic disk including a flexible disk
  • an optical disk It only consists of removable media 521 consisting of CD-ROM (compact disc -read only memory), DVD (including digital Versatile disc), magneto-optical disk (including MD (mini disc)), or semiconductor memory. Rather, it is composed of a ROM 502 on which a program is recorded and a hard disk included in the storage unit 513, which is
• The program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or may be a program in which processing is performed in parallel or at necessary timing such as when a call is made.
• In this specification, the steps describing the program recorded on the recording medium include not only processing performed in chronological order according to the described order, but also processing executed in parallel or individually, not necessarily in chronological order.
• In this specification, the term "system" represents an entire apparatus composed of a plurality of devices (apparatuses).
  • the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
  • the configurations described above as a plurality of devices (or processing units) may be combined into a single device (or processing unit).
  • a configuration other than that described above may be added to the configuration of each device (or each processing unit).
• Furthermore, a part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit). That is, the present technology is not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
• The image encoding device and the image decoding device according to the above-described embodiments can be applied to various electronic devices, such as transmitters or receivers in satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, and distribution to terminals by cellular communication, recording devices that record images on media such as optical disks, magnetic disks, and flash memories, or playback devices that reproduce images from these storage media.
  • FIG. 37 shows an example of a schematic configuration of a television apparatus to which the above-described embodiment is applied.
  • the television apparatus 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, And a bus 912.
  • Tuner 902 extracts a signal of a desired channel from a broadcast signal received via antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the encoded bit stream obtained by the demodulation to the demultiplexer 903. In other words, the tuner 902 serves as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the demultiplexer 903 separates the video stream and audio stream of the viewing target program from the encoded bit stream, and outputs each separated stream to the decoder 904. Further, the demultiplexer 903 extracts auxiliary data such as EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. Note that the demultiplexer 903 may perform descrambling when the encoded bit stream is scrambled.
  • the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. In addition, the decoder 904 outputs audio data generated by the decoding process to the audio signal processing unit 907.
  • the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display the video.
  • the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via a network.
  • the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting.
  • the video signal processing unit 905 may generate a GUI (Graphical User Interface) image such as a menu, a button, or a cursor, and superimpose the generated image on the output image.
• The display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays a video or an image on the video screen of a display device (for example, a liquid crystal display, a plasma display, or an OELD (Organic ElectroLuminescence Display) (organic EL display)).
  • the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on the audio data input from the decoder 904, and outputs audio from the speaker 908.
  • the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
  • the external interface 909 is an interface for connecting the television apparatus 900 to an external device or a network.
  • a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also has a role as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the control unit 910 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
  • the program stored in the memory is read and executed by the CPU when the television apparatus 900 is activated.
  • the CPU executes the program to control the operation of the television device 900 according to an operation signal input from the user interface 911, for example.
  • the user interface 911 is connected to the control unit 910.
  • the user interface 911 includes, for example, buttons and switches for the user to operate the television device 900, a remote control signal receiving unit, and the like.
  • the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
  • the bus 912 connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910 to each other.
  • the decoder 904 has the function of the image decoding apparatus according to the above-described embodiment. Therefore, when decoding an image by the television apparatus 900, it is possible to reduce memory access when reading out motion vectors that are temporally different from surrounding motion vectors.
  • FIG. 38 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied.
• The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording / reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
  • the antenna 921 is connected to the communication unit 922.
  • the speaker 924 and the microphone 925 are connected to the audio codec 923.
  • the operation unit 932 is connected to the control unit 931.
  • the bus 933 connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931 to each other.
• The mobile phone 920 has various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode, and performs operations such as sending and receiving audio signals, sending and receiving e-mail or image data, capturing images, and recording data.
  • the analog voice signal generated by the microphone 925 is supplied to the voice codec 923.
• The audio codec 923 converts the analog audio signal into audio data, A / D converts the converted audio data, and compresses it. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
  • the communication unit 922 encodes and modulates the audio data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. In addition, the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
  • the audio codec 923 decompresses the audio data and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the control unit 931 generates character data constituting the e-mail in response to an operation by the user via the operation unit 932.
  • the control unit 931 causes the display unit 930 to display characters.
  • the control unit 931 generates e-mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated e-mail data to the communication unit 922.
  • the communication unit 922 encodes and modulates email data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to restore the email data, and outputs the restored email data to the control unit 931.
  • the control unit 931 displays the content of the electronic mail on the display unit 930 and stores the electronic mail data in the storage medium of the recording / reproducing unit 929.
  • the recording / reproducing unit 929 has an arbitrary readable / writable storage medium.
• The storage medium may be a built-in storage medium such as a RAM or a flash memory, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB (Universal Serial Bus) memory, or a memory card.
  • the camera unit 926 images a subject to generate image data, and outputs the generated image data to the image processing unit 927.
• The image processing unit 927 encodes the image data input from the camera unit 926 and stores the encoded stream in the storage medium of the recording / reproducing unit 929.
• In the videophone mode, for example, the demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922.
  • the communication unit 922 encodes and modulates the stream and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
• These transmission and reception signals may include an encoded bit stream.
  • the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
  • the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
  • the image processing unit 927 decodes the video stream and generates video data.
  • the video data is supplied to the display unit 930, and a series of images is displayed on the display unit 930.
  • the audio codec 923 decompresses the audio stream and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the image processing unit 927 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. Accordingly, when the mobile phone 920 encodes and decodes an image, it is possible to reduce memory access when reading out temporally different motion vectors from the surrounding motion vectors.
  • FIG. 39 shows an example of a schematic configuration of a recording / reproducing apparatus to which the above-described embodiment is applied.
  • the recording / reproducing device 940 encodes audio data and video data of a received broadcast program and records the encoded data on a recording medium.
  • the recording / reproducing device 940 may encode audio data and video data acquired from another device and record them on a recording medium, for example.
  • the recording / reproducing device 940 reproduces data recorded on the recording medium on a monitor and a speaker, for example, in accordance with a user instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
  • the recording / reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface. 950.
  • Tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs the encoded bit stream obtained by the demodulation to the selector 946. That is, the tuner 941 has a role as a transmission unit in the recording / reproducing apparatus 940.
  • the external interface 942 is an interface for connecting the recording / reproducing apparatus 940 to an external device or a network.
  • the external interface 942 may be, for example, an IEEE1394 interface, a network interface, a USB interface, or a flash memory interface.
  • video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 serves as a transmission unit in the recording / reproducing device 940.
  • the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the encoded bit stream to the selector 946.
  • the HDD 944 records an encoded bit stream in which content data such as video and audio is compressed, various programs, and other data on an internal hard disk. Further, the HDD 944 reads out these data from the hard disk when reproducing video and audio.
  • the disk drive 945 performs recording and reading of data to and from the mounted recording medium.
• The recording medium mounted on the disk drive 945 may be, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD + R, DVD + RW, etc.) or a Blu-ray (registered trademark) disc.
  • the selector 946 selects an encoded bit stream input from the tuner 941 or the encoder 943 when recording video and audio, and outputs the selected encoded bit stream to the HDD 944 or the disk drive 945. In addition, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 during video and audio reproduction.
• The decoder 947 decodes the encoded bit stream and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948. Further, the decoder 947 outputs the generated audio data to an external speaker.
  • OSD 948 reproduces the video data input from the decoder 947 and displays the video. Further, the OSD 948 may superimpose a GUI image such as a menu, a button, or a cursor on the video to be displayed.
  • the control unit 949 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the recording / reproducing apparatus 940 is activated, for example.
  • the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from the user interface 950, for example, by executing the program.
  • the user interface 950 is connected to the control unit 949.
  • the user interface 950 includes, for example, buttons and switches for the user to operate the recording / reproducing device 940, a remote control signal receiving unit, and the like.
  • the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
  • the encoder 943 has the function of the image encoding apparatus according to the above-described embodiment.
  • the decoder 947 has the function of the image decoding apparatus according to the above-described embodiment.
  • FIG. 40 illustrates an example of a schematic configuration of an imaging apparatus to which the above-described embodiment is applied.
  • the imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
  • the imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus. 972.
  • the optical block 961 is connected to the imaging unit 962.
  • the imaging unit 962 is connected to the signal processing unit 963.
  • the display unit 965 is connected to the image processing unit 964.
  • the user interface 971 is connected to the control unit 970.
  • the bus 972 connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970 to each other.
  • the optical block 961 includes a focus lens and a diaphragm mechanism.
  • the optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962.
  • the imaging unit 962 includes an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), and converts an optical image formed on the imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
  • CCD Charge-Coupled Device
  • CMOS Complementary Metal-Oxide Semiconductor
  • the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
  • the signal processing unit 963 outputs the image data after the camera signal processing to the image processing unit 964.
  • the image processing unit 964 encodes the image data input from the signal processing unit 963 and generates encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965. In addition, the image processing unit 964 may display the image by outputting the image data input from the signal processing unit 963 to the display unit 965. Further, the image processing unit 964 may superimpose display data acquired from the OSD 969 on an image output to the display unit 965.
  • the OSD 969 generates a GUI image such as a menu, a button, or a cursor, and outputs the generated image to the image processing unit 964.
  • the external interface 966 is configured as a USB input / output terminal, for example.
  • the external interface 966 connects the imaging device 960 and a printer, for example, when printing an image.
  • a drive is connected to the external interface 966 as necessary.
  • a removable medium such as a magnetic disk or an optical disk is attached to the drive, and a program read from the removable medium can be installed in the imaging device 960.
  • the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
  • the recording medium mounted on the media drive 968 may be any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
• Alternatively, a recording medium may be fixedly attached to the media drive 968 to constitute a non-portable storage unit such as a built-in hard disk drive or an SSD (Solid State Drive).
  • the control unit 970 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the imaging device 960 is activated, for example.
  • the CPU controls the operation of the imaging device 960 according to an operation signal input from the user interface 971 by executing the program.
  • the user interface 971 is connected to the control unit 970.
  • the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
  • the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
  • the image processing unit 964 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. Thereby, when the image is encoded and decoded by the imaging device 960, memory access when reading out temporally different motion vectors from the surrounding motion vectors can be reduced.
• The method for transmitting such information from the encoding side to the decoding side is not limited to the above example.
  • these pieces of information may be transmitted or recorded as separate data associated with the encoded bitstream without being multiplexed into the encoded bitstream.
• Here, the term "associate" means that an image included in the bitstream (which may be a part of an image, such as a slice or a block) and information corresponding to the image can be linked at the time of decoding. That is, the information may be transmitted on a transmission path different from that of the image (or bit stream).
  • Information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream). Furthermore, the information and the image (or bit stream) may be associated with each other in an arbitrary unit such as a plurality of frames, one frame, or a part of the frame.
  • this technique can also take the following structures.
• An image processing apparatus comprising: an address correction unit that corrects the address so that the pixel position is pulled back into the region of the current decoding unit; a setting unit that sets a motion vector corresponding to the divided region indicated by the address corrected by the address correction unit as the temporal correlation prediction vector; and a decoding unit that decodes a bitstream and generates the image using a motion vector generated using the temporal correlation prediction vector set by the setting unit.
• (2) The image processing apparatus according to (1), wherein the setting unit reads, as the temporal correlation prediction vector, a motion vector corresponding to the divided region indicated by the address corrected by the address correction unit from a memory that stores a motion vector for each divided region.
  • (3) The image processing apparatus according to (1) or (2), wherein the address correction unit corrects the address by subtracting 1 from the address.
• (4) The address correction unit pulls back the y coordinate of the lower right pixel position into the region of the current decoding unit.
• (5) An image processing method, wherein an image processing apparatus: corrects, when the lower right pixel position in the divided region obtained by dividing the image indicated by the address calculated when obtaining the temporal correlation prediction vector is outside the region of the current decoding unit, the address so that the pixel position is pulled back into the region of the current decoding unit; sets a motion vector corresponding to the divided region indicated by the corrected address as the temporal correlation prediction vector; and decodes a bitstream to generate the image using a motion vector generated using the set temporal correlation prediction vector.
• (6) An image processing apparatus comprising: an address correction unit that corrects, when the lower right pixel position in the divided region obtained by dividing the image indicated by the address calculated when obtaining the temporal correlation prediction vector is outside the region of the current encoding unit, the address so that the pixel position is pulled back into the region of the current encoding unit; a setting unit that sets a motion vector corresponding to the divided region indicated by the address corrected by the address correction unit as the temporal correlation prediction vector; and an encoding unit that encodes the image using a prediction image predicted from a motion vector corresponding to the temporal correlation prediction vector set by the setting unit.
  • (7) An image processing method in which an image processing apparatus: when the lower right pixel position in a divided region obtained by dividing the image indicated by the address calculated when obtaining the temporal correlation prediction vector is outside the region of the current encoding unit, corrects the address so as to pull the pixel position back into the region of the current encoding unit; sets a motion vector corresponding to the divided region indicated by the corrected address as the temporal correlation prediction vector; and encodes the image using a predicted image predicted from a motion vector corresponding to the set temporal correlation prediction vector.
  • (8) An image processing apparatus comprising: an address correction unit that, when a first address calculated when obtaining the temporal correlation prediction vector is the same as a second address calculated for the case where the prediction unit region including the upper left pixel position of the divided region obtained by dividing the image indicated by the first address is intra, corrects the second address so as to indicate an adjacent divided region adjacent to that divided region; a setting unit that sets a motion vector corresponding to the adjacent divided region indicated by the address corrected by the address correction unit as the temporal correlation prediction vector; and a decoding unit that decodes a bitstream and generates the image using a motion vector generated using the temporal correlation prediction vector set by the setting unit.
  • (9) The image processing apparatus according to (8), wherein the setting unit reads, as the temporal correlation prediction vector, the motion vector corresponding to the adjacent divided region indicated by the address corrected by the address correction unit from a memory that stores a motion vector for each divided region.
  • An image processing apparatus comprising: an address correction unit that, when a first address calculated when obtaining the temporal correlation prediction vector is the same as a second address calculated for the case where the prediction unit region including the upper left pixel position of the divided region obtained by dividing the image indicated by the first address is intra, corrects the second address so as to indicate an adjacent divided region adjacent to that divided region; a setting unit that sets a motion vector corresponding to the adjacent divided region indicated by the address corrected by the address correction unit as the temporal correlation prediction vector; and an encoding unit that encodes the image using a predicted image predicted from a motion vector corresponding to the temporal correlation prediction vector set by the setting unit.
  • An image processing method in which an image processing apparatus: when a first address calculated when obtaining the temporal correlation prediction vector is the same as a second address calculated for the case where the prediction unit region including the upper left pixel position of the divided region obtained by dividing the image indicated by the first address is intra, corrects the second address so as to indicate an adjacent divided region adjacent to that divided region; sets a motion vector corresponding to the adjacent divided region indicated by the corrected address as the temporal correlation prediction vector; and encodes the image using a predicted image predicted from a motion vector corresponding to the set temporal correlation prediction vector.
  • (15) The image processing apparatus according to (14), wherein the setting unit stores the selected motion vector, as the motion vector corresponding to the current divided region, in a memory that stores, for each divided region, a motion vector used for obtaining the temporal correlation prediction vector.
  • (16) The image processing apparatus according to (14) or (15), wherein the adjacent pixel position is a pixel position adjacent to the left of, to the upper left of, or above the pixel position.
  • (17) The image processing apparatus in which, when the prediction unit region including the upper left pixel position of the current divided region is intra, the setting unit determines whether the region of a different prediction unit including the adjacent pixel position adjacent to the left of the pixel position is intra; when that region is intra, the motion vector of a different prediction unit region including the adjacent pixel position adjacent to the upper left of the pixel position is selected; and when that region is also intra, the motion vector of a different prediction unit region including the adjacent pixel position above the pixel position is selected.
  • (18) An image processing method in which an image processing apparatus: when the prediction unit region including the upper left pixel position of the current divided region obtained by dividing the image is intra, selects the motion vector of a different prediction unit region including an adjacent pixel position adjacent to the pixel position, or the motion vector of another prediction unit region included in the current divided region; sets the selected motion vector as the motion vector corresponding to the current divided region; and decodes a bitstream to generate the image using a motion vector generated using the set temporal correlation prediction vector.
  • (19) An image processing method in which an image processing apparatus: when the prediction unit region including the upper left pixel position of the current divided region obtained by dividing the image is intra, selects the motion vector of a different prediction unit region including an adjacent pixel position adjacent to the pixel position, or the motion vector of another prediction unit region included in the current divided region; sets the selected motion vector as the motion vector corresponding to the current divided region; and encodes the image using a predicted image predicted from a motion vector corresponding to the set temporal correlation prediction vector.
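
As a rough, non-authoritative illustration of the address correction described in structures (1) through (7) above, the following C sketch derives the 16×16-granularity address from the lower right pixel position of a coding unit and, when that position falls outside the coding unit, subtracts 1 from the offending coordinate to pull it back inside. The CU geometry, the 16×16 storage granularity, and all names are assumptions made for illustration, not the patent's implementation.

```c
#include <stdio.h>

typedef struct { int x16, y16; } Addr16; /* index of a 16x16 divided region */

/* Raw address: the 16x16 region containing the lower-right reference
 * position (cu_x + cu_size, cu_y + cu_size), one pixel outside the CU. */
static Addr16 lower_right_addr(int cu_x, int cu_y, int cu_size) {
    Addr16 a = { (cu_x + cu_size) >> 4, (cu_y + cu_size) >> 4 };
    return a;
}

/* Correction: if the referenced region lies outside the CU, subtract 1
 * from the offending coordinate (cf. structures (3) and (4)) so the
 * pixel position is pulled back into the CU. */
static Addr16 correct_addr(Addr16 a, int cu_x, int cu_y, int cu_size) {
    Addr16 last = { (cu_x + cu_size - 1) >> 4, (cu_y + cu_size - 1) >> 4 };
    if (a.x16 > last.x16) a.x16 -= 1;
    if (a.y16 > last.y16) a.y16 -= 1;
    return a;
}

int main(void) {
    /* hypothetical 32x32 CU whose upper-left pixel is at (64, 32) */
    Addr16 raw = lower_right_addr(64, 32, 32);
    Addr16 fix = correct_addr(raw, 64, 32, 32);
    printf("raw address (%d, %d) -> corrected (%d, %d)\n",
           raw.x16, raw.y16, fix.x16, fix.y16); /* (6, 4) -> (5, 3) */
    return 0;
}
```

With a 32×32 CU at pixel (64, 32), the raw address (6, 4) points outside the CU and is corrected to (5, 3), so the subsequent motion vector read stays within the current coding unit.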
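Structure (8) and the related structures describe a second correction for the intra case. A minimal sketch follows, assuming the collision between the first and second addresses is resolved by stepping to the divided region adjacent on the left; the choice of neighbour and the names are hypothetical.

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct { int x16, y16; } Addr16; /* index of a 16x16 divided region */

static bool same_addr(Addr16 a, Addr16 b) {
    return a.x16 == b.x16 && a.y16 == b.y16;
}

/* If the second address (computed because the prediction unit at the
 * upper left of the divided region is intra) is the same as the first
 * address, it would read back the same, unusable entry; correct it to
 * point at an adjacent divided region instead. */
static Addr16 correct_second_addr(Addr16 first, Addr16 second) {
    if (same_addr(first, second)) {
        if (second.x16 > 0) second.x16 -= 1; /* left neighbour (assumed) */
        else                second.y16 -= 1; /* or the region above      */
    }
    return second;
}

int main(void) {
    Addr16 first  = { 4, 3 };
    Addr16 second = { 4, 3 }; /* collides with the first address */
    Addr16 fixed  = correct_second_addr(first, second);
    printf("second address corrected to (%d, %d)\n", fixed.x16, fixed.y16);
    return 0;
}
```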
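For the save-vector selection of structures (15) through (19), the following sketch keeps motion vectors with intra flags on a small per-position grid; the fallback order (left, upper left, above) follows structures (16) and (17), while the grid size and all names are invented for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct { int x, y; bool intra; } MvEntry;

#define W 8
#define H 8
static MvEntry field[H][W]; /* hypothetical per-position motion field */

/* Select the motion vector saved for the current divided region whose
 * upper-left position is (x, y): if that prediction unit is intra, try
 * the left, upper-left, and upper neighbours in turn; if everything is
 * intra, fall back to the region's own (intra, zero) entry. */
static MvEntry select_save_vector(int x, int y) {
    const int off[4][2] = { {0, 0}, {-1, 0}, {-1, -1}, {0, -1} };
    for (int i = 0; i < 4; i++) {
        int nx = x + off[i][0], ny = y + off[i][1];
        if (nx >= 0 && ny >= 0 && !field[ny][nx].intra)
            return field[ny][nx];
    }
    return field[y][x];
}

int main(void) {
    field[2][3] = (MvEntry){ 0,  0, true  }; /* current position: intra */
    field[2][2] = (MvEntry){ 5, -2, false }; /* left neighbour: inter   */
    MvEntry mv = select_save_vector(3, 2);
    printf("saved motion vector: (%d, %d)\n", mv.x, mv.y); /* (5, -2) */
    return 0;
}
```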
  • Optimal mode determination unit 136 inter / intra determination unit, 141 16 ⁇ 16 address generation unit, 142 16 ⁇ 16 address correction unit, 143 memory access control unit, 151 motion vector difference generation control unit, 152 temporal prediction vector generation unit, 161 16 ⁇ 16 address generation unit, 162 16 ⁇ 16 address correction unit, 171 motion vector difference generation control unit, 172 temporal prediction vector generation unit, 173 optimal mode determination unit, 174 save vector Selection unit, 181 16 ⁇ 16 address generation unit, 200 image decoding device, 221 motion vector generation unit, 222 motion vector storage memory, 231 motion vector generation control unit, 232 temporal prediction vector generation unit, 233 intra prediction vector generation unit, 234 peripheral motion vector storage unit, 235 motion vector reconstruction unit, 236 inter / intra determination unit, 241 16 ⁇

Abstract

An image processing apparatus and method that make it possible to reduce memory accesses. A 16×16 address correction unit determines whether a reference using 16×16 address information from a 16×16 address generation unit extends beyond a CU. If it does, the 16×16 address correction unit corrects the 16×16 address information so that the reference is pulled back into the CU. The 16×16 address correction unit then supplies the corrected 16×16 address information to a memory access control unit. The memory access control unit requests a motion vector storage memory to read out the motion vector indicated by the 16×16 address information supplied from the 16×16 address correction unit. The present technique is applicable, for example, to an image processing apparatus.
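
Taken end to end, the flow in the abstract might look like the sketch below: generate the 16×16 address, pull the reference back into the CU when it extends beyond it, and only then issue the read request to the motion vector storage memory. The flat memory layout, the picture size, and every name here are assumptions, not the patent's actual units.

```c
#include <stdio.h>

typedef struct { int x, y; } Mv;

/* Motion vector storage memory: one entry per 16x16 divided region of a
 * hypothetical 128x128 picture (an 8x8 grid of regions). */
#define STRIDE16 8
static Mv mv_store[STRIDE16 * STRIDE16];

/* Memory access control unit: services a read for one 16x16 address. */
static Mv request_read(int x16, int y16) {
    return mv_store[y16 * STRIDE16 + x16];
}

/* Address generation + correction + read, as described in the abstract:
 * the reference derived from the CU's lower-right position is pulled
 * back into the CU before the memory request is made. */
static Mv fetch_temporal_mv(int cu_x, int cu_y, int cu_size) {
    int x16 = (cu_x + cu_size) >> 4;            /* address generation */
    int y16 = (cu_y + cu_size) >> 4;
    if ((x16 << 4) >= cu_x + cu_size) x16 -= 1; /* address correction */
    if ((y16 << 4) >= cu_y + cu_size) y16 -= 1;
    return request_read(x16, y16);              /* memory access      */
}

int main(void) {
    mv_store[3 * STRIDE16 + 5] = (Mv){ -7, 2 };  /* region (5, 3)       */
    Mv mv = fetch_temporal_mv(64, 32, 32);       /* 32x32 CU at (64,32) */
    printf("temporal motion vector: (%d, %d)\n", mv.x, mv.y); /* (-7, 2) */
    return 0;
}
```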
PCT/JP2012/067718 2011-07-28 2012-07-11 Image processing apparatus and method WO2013015118A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011165574A JP2013030978A (ja) Image processing apparatus and method
JP2011-165574 2011-07-28

Publications (1)

Publication Number Publication Date
WO2013015118A1 true WO2013015118A1 (fr) 2013-01-31

Family

ID=47600970

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/067718 WO2013015118A1 (fr) Image processing apparatus and method

Country Status (2)

Country Link
JP (1) JP2013030978A (fr)
WO (1) WO2013015118A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0220988A (ja) * 1988-07-08 1990-01-24 Fujitsu Ltd Motion vector detection system in a moving image encoding device
JP2004040575A (ja) * 2002-07-04 2004-02-05 Sony Corp Motion compensation device

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
BIN LI ET AL.: "On motion information compression", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
EDOUARD FRANCOIS ET AL.: "On memory compression for motion vector prediction", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
IL-KOO KIM ET AL.: "Improved motion vector decimation", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
SEUNGWOOK PARK ET AL.: "Modifications of temporal mv memory compression and temporal mv predictor", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
SHIGERU FUKUSHIMA ET AL.: "Partition size based selection for motion vector compression", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
SUNG-CHANG LIM ET AL.: "Dynamic range restriction of temporal motion vector", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
TAICHIRO SHIODERA ET AL.: "Modified motion vector memory compression", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
TOSHIYASU SUGIO ET AL.: "Modified motion vector compression method", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *
XUN GUO ET AL.: "Motion Vector Decimation for Temporal Prediction", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113170103B (zh) * 2018-12-07 2024-03-15 Panasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method, and decoding method

Also Published As

Publication number Publication date
JP2013030978A (ja) 2013-02-07

Similar Documents

Publication Publication Date Title
JP5741076B2 (ja) Image processing apparatus and image processing method
JP5979405B2 (ja) Image processing apparatus and method
JP6274103B2 (ja) Image processing apparatus and method
JP2018057009A (ja) Image processing apparatus and method, and program
JP2013150173A (ja) Image processing apparatus and method
US20230247217A1 Image processing apparatus and method
JP5982734B2 (ja) Image processing apparatus and method
JP6316346B2 (ja) Image processing apparatus, image processing method, program, and recording medium
WO2012176684A1 (fr) Image processing device and method
JP2013012995A (ja) Image processing apparatus and method
WO2013065570A1 (fr) Image processing device and method
US20140092979A1 Image processing apparatus and method
WO2014050731A1 (fr) Image processing device and method
JPWO2013108688A1 (ja) Image processing apparatus and method
WO2014103774A1 (fr) Image processing device and method
WO2012173022A1 (fr) Image processing device and method
WO2013065567A1 (fr) Image processing device and method
WO2013015118A1 (fr) Image processing apparatus and method
JP2013085096A (ja) Image processing apparatus and method
WO2014141899A1 (fr) Image processing device and method
JP6217997B2 (ja) Image processing apparatus and method
JP2018029347A (ja) Image processing apparatus and method
WO2013002105A1 (fr) Image processing device and method
JP2019146225A (ja) Image processing apparatus and method
JP2013012996A (ja) Image processing apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12817222

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12817222

Country of ref document: EP

Kind code of ref document: A1