WO2019150411A1 - Video encoding device, video encoding method, video decoding device, and video decoding method, and video encoding system - Google Patents


Info

Publication number
WO2019150411A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
component
candidates
target block
differential motion
Prior art date
Application number
PCT/JP2018/002811
Other languages
French (fr)
Japanese (ja)
Inventor
Koji Yamada (山田 幸二)
Akira Nakagawa (中川 章)
Original Assignee
Fujitsu Limited (富士通株式会社)
Priority date
Filing date
Publication date
Application filed by Fujitsu Limited (富士通株式会社)
Priority to PCT/JP2018/002811
Publication of WO2019150411A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: Processing of motion vectors by encoding by predictive encoding

Definitions

  • the present invention relates to a video encoding device, a video encoding method, a video decoding device, a video decoding method, and a video encoding system.
  • HEVC (High Efficiency Video Coding)
  • CABAC (context-adaptive binary arithmetic coding)
  • inter prediction is a prediction method that uses the pixel values of a block (reference block) that is temporally close to the encoding target block, while intra prediction is a prediction method that uses the pixel values of a block that is spatially close to the encoding target block.
  • a motion vector indicating a reference block is generated.
  • the motion vector includes a horizontal component (x component) and a vertical component (y component) in the image at each time included in the video.
  • the motion vector for the encoding target block often has a high correlation with the motion vectors of the blocks around it. Therefore, in HEVC, a predicted motion vector, which is a predicted value of the motion vector of the encoding target block, is obtained from the motion vectors of the surrounding blocks, and a differential motion vector is generated as the difference between the actual motion vector of the encoding target block and the predicted motion vector. By encoding this differential motion vector, the code amount of the motion vector can be compressed.
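As a minimal illustration of this idea (in Python, which the patent itself does not use; the function name is ours), the differential motion vector is simply the component-wise difference between the actual and predicted motion vectors:

```python
def differential_mv(mv, pred_mv):
    """Return (dx, dy) = mv - pred_mv, computed per component."""
    return (mv[0] - pred_mv[0], mv[1] - pred_mv[1])

# Example: actual motion vector (5, -3), predicted motion vector (4, -2)
# -> differential motion vector (1, -1); only this small residual is coded.
```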
  • FVC (Future Video Coding)
  • the syntax of a differential motion vector in inter prediction is encoded by CABAC.
  • the code amount of the flag indicating the sign (positive or negative) of each component of the differential motion vector is not sufficiently compressed.
  • an object of the present invention is to reduce a code amount accompanying a motion vector in video encoding.
  • the video encoding device includes a first encoding unit, a determination unit, a generation unit, and a second encoding unit.
  • the first encoding unit encodes the encoding target block in the image included in the video.
  • the determination unit generates a first differential motion vector from the motion vector for the encoding target block and the predicted motion vector for the encoding target block.
  • the determination unit generates a plurality of differential motion vector candidates including the first differential motion vector by changing a sign indicating whether the component of the first differential motion vector is positive or negative.
  • the second differential motion vector is determined from among the differential motion vector candidates.
  • the determination unit determines the second differential motion vector using the locally decoded pixel values of the encoded pixels adjacent to the encoding target block and the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates indicated by the plurality of differential motion vector candidates.
  • the generating unit generates coincidence information indicating whether or not the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector.
  • the second encoding unit encodes the absolute value of the first differential motion vector component and the coincidence information.
  • an occurrence probability model used for arithmetic coding is determined in accordance with other syntax elements or the values of syntax elements adjacent to the region to be encoded. In this case, the occurrence probability of each of the values logic “0” and logic “1” is variable. On the other hand, for bins whose occurrence probabilities are difficult to estimate, a bypass mode is selected in which the occurrence probabilities of the values logic “0” and logic “1” are fixed at 0.5.
  • in arithmetic coding, based on the occurrence probabilities of the symbols, the real-number interval from 0 (inclusive) to 1 (exclusive) is successively subdivided into sections, and finally a codeword in binary notation is generated from a real number identifying the final section.
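The interval-subdivision idea can be sketched as follows (an illustrative toy coder, not HEVC's actual CABAC engine, which operates on integer ranges with renormalization rather than floating point):

```python
def arithmetic_encode(bits, p0):
    """Toy binary arithmetic coder: subdivide [0, 1) once per symbol.
    p0 is the probability of bit 0. A context-adaptive coder would
    update p0 after each symbol; bypass mode fixes p0 = 0.5."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * p0
        if b == 0:
            high = split   # bit 0 takes the lower sub-interval
        else:
            low = split    # bit 1 takes the upper sub-interval
    return (low + high) / 2  # any real in [low, high) identifies the bit sequence
```

The more skewed p0 is toward the symbols that actually occur, the more slowly the interval shrinks and the fewer bits are needed to name the final sub-interval; with p0 fixed at 0.5 (bypass mode) every bin costs exactly one bit.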
  • the syntax of the differential motion vector in HEVC includes a flag indicating the sign (positive or negative) of the x component and the y component of the differential motion vector.
  • the flag indicating the sign of the x component is mvd_sign_flag[0], and the flag indicating the sign of the y component is mvd_sign_flag[1].
  • the values of these flags are specified as follows.
  • since it is difficult to predict whether the sign of the x component or the y component of the differential motion vector will be positive or negative, in CABAC the flags indicating the signs of the x component and the y component are encoded in the bypass mode. For this reason, the code amount of these flags is not compressed.
  • FIG. 1 shows an example of the functional configuration of the video encoding apparatus according to the embodiment.
  • the video encoding device 101 of FIG. 1 includes a first encoding unit 111, a determination unit 112, a generation unit 113, and a second encoding unit 114.
  • FIG. 2 is a flowchart showing an example of video encoding processing performed by the video encoding device 101 of FIG.
  • the first encoding unit 111 encodes an encoding target block in an image included in the video (step 201).
  • the determination unit 112 generates a first differential motion vector from the motion vector for the encoding target block and the predicted motion vector for the encoding target block (step 202).
  • the determination unit 112 generates a plurality of differential motion vector candidates including the first differential motion vector by changing the sign indicating whether each component of the first differential motion vector is positive or negative (step 203). Then, the determination unit 112 determines a second differential motion vector from among these differential motion vector candidates (step 204). At this time, the determination unit 112 determines the second differential motion vector using the locally decoded pixel values of the encoded pixels adjacent to the encoding target block and the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates indicated by the plurality of differential motion vector candidates.
  • the generation unit 113 generates coincidence information indicating whether or not the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector (step 205). Then, the second encoding unit 114 encodes the absolute value of each component of the first differential motion vector and the coincidence information (step 206).
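Steps 202 to 206 can be summarized in a short sketch (all helper names are ours, not the patent's; `estimate_sign` stands in for whatever determination method step 204 uses):

```python
def encode_mvd(mv, pred_mv, estimate_sign):
    """Illustrative sketch of encoder steps 202-206."""
    dmv = (mv[0] - pred_mv[0], mv[1] - pred_mv[1])            # step 202
    candidates = [(sx * abs(dmv[0]), sy * abs(dmv[1]))        # step 203
                  for sx in (1, -1) for sy in (1, -1)]
    estimated = estimate_sign(candidates)                     # step 204
    match = tuple(int((d < 0) != (e < 0))                     # step 205: 0 = signs agree
                  for d, e in zip(dmv, estimated))
    # step 206: only the absolute values plus the match flags are coded
    return (abs(dmv[0]), abs(dmv[1])), match
```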
  • FIG. 3 shows a functional configuration example of the video decoding apparatus according to the embodiment. The video decoding device 301 of FIG. 3 includes a first decoding unit 311, a determination unit 312, a generation unit 313, and a second decoding unit 314.
  • FIG. 4 is a flowchart showing an example of video decoding processing performed by the video decoding device 301 in FIG.
  • the first decoding unit 311 decodes the encoded video, and restores the absolute value of the first differential motion vector component for the decoding target block in the image included in the encoded video (step 401).
  • the first decoding unit 311 also restores, together with the absolute values of the components of the first differential motion vector, coincidence information indicating whether the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector.
  • the determination unit 312 generates a plurality of differential motion vector candidates by attaching signs to the absolute values of the components of the first differential motion vector (step 402). Then, the determination unit 312 determines a second differential motion vector from among these differential motion vector candidates (step 403). At this time, the determination unit 312 determines the second differential motion vector using the decoded pixel values of the pixels adjacent to the decoding target block and the decoded pixel values of the pixels included in each of the plurality of reference block candidates indicated by the plurality of differential motion vector candidates.
  • the generation unit 313 generates a first differential motion vector from the second differential motion vector based on the match information (step 404). Then, the generation unit 313 generates a motion vector for the decoding target block from the first differential motion vector and the predicted motion vector for the decoding target block (step 405).
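On the decoder side, the sign of each component can be recovered from the coincidence information and the estimated vector alone. An illustrative sketch (names are ours; a flag of 0 means "same sign as the estimate"):

```python
def recover_dmv(abs_dmv, match_flags, estimated_dmv):
    """Reattach signs to |dmv| using the coincidence information (steps 402-404)."""
    signed = []
    for a, flag, e in zip(abs_dmv, match_flags, estimated_dmv):
        est_sign = -1 if e < 0 else 1
        sign = est_sign if flag == 0 else -est_sign  # flip when flag says "differs"
        signed.append(sign * a)
    return tuple(signed)
```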
  • the second decoding unit 314 decodes the coefficient information of the decoding target block using the motion vector for the decoding target block (step 406).
  • the video decoding apparatus 301 in FIG. 3 can reduce the amount of code associated with a motion vector in video encoding.
  • FIG. 5 shows an example of the difference motion vector.
  • a reference image 501 in FIG. 5 is a locally decoded image of an image encoded before the encoding target image.
  • the reference image 501 includes a block 511 that exists at the same position as the encoding target block in the encoding target image, and a reference block 512 for the encoding target block.
  • the motion vector 521 for the encoding target block is a vector from the block 511 to the reference block 512, and is obtained by motion search processing.
  • the prediction vector 522 is obtained from the motion vectors for the blocks around the encoding target block, and the difference motion vector 523 represents the difference between the motion vector 521 and the prediction vector 522.
  • FIG. 6 shows examples of differential motion vector candidates and reference block candidates.
  • the direction from left to right is the positive direction of the x coordinate
  • the direction from top to bottom is the positive direction of the y coordinate.
  • Four differential motion vector candidates 611 to 614 are generated by changing the signs of the components of the differential motion vector 523.
  • the difference motion vector candidate 611 is a vector from the end point of the prediction vector 522 toward the reference block candidate 601, and the x component and the y component of the difference motion vector candidate 611 are positive.
  • the difference motion vector candidate 612 is a vector from the end point of the prediction vector 522 toward the reference block candidate 602, and the x component of the difference motion vector candidate 612 is positive and the y component is negative.
  • the difference motion vector candidate 613 is a vector from the end point of the prediction vector 522 toward the reference block candidate 603, and the x component and the y component of the difference motion vector candidate 613 are negative.
  • the difference motion vector candidate 614 is a vector from the end point of the prediction vector 522 toward the reference block candidate 604, and the x component of the difference motion vector candidate 614 is negative and the y component is positive.
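With the figure's axis convention (x positive to the right, y positive downward), each reference block candidate sits at the prediction vector's end point displaced by one sign variant of the differential motion vector. A hypothetical sketch of the geometry:

```python
def reference_block_positions(pred_mv_end, dmv):
    """Positions of the four reference block candidates (601-604 in FIG. 6).
    pred_mv_end: end point of the prediction vector; dmv: one differential MV."""
    ax, ay = abs(dmv[0]), abs(dmv[1])
    ex, ey = pred_mv_end
    # sign pairs in the order (+,+) -> 601, (+,-) -> 602, (-,-) -> 603, (-,+) -> 604
    return [(ex + sx * ax, ey + sy * ay)
            for sx, sy in ((1, 1), (1, -1), (-1, -1), (-1, 1))]
```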
  • FIG. 7 shows a specific example of the video encoding device 101 of FIG. 1. The video encoding device 701 of FIG. 7 includes a block division unit 711, a prediction error generation unit 712, an orthogonal transformation unit 713, a quantization unit 714, an arithmetic coding unit 715, and an encoding control unit 716. Furthermore, the video encoding device 701 includes an intra-frame prediction unit 717, an inter-frame prediction unit 718, a selection unit 719, an inverse quantization unit 720, an inverse orthogonal transform unit 721, a reconstruction unit 722, an in-loop filter 723, and a memory 724.
  • the in-loop filter 723 corresponds to the first encoding unit 111 in FIG. 1.
  • the video encoding device 701 can be implemented as a hardware circuit, for example.
  • each component of the video encoding device 701 may be implemented as an individual circuit or as one integrated circuit.
  • the video encoding device 701 encodes the input video to be encoded, and outputs the encoded video as an encoded stream.
  • the video encoding device 701 can transmit the encoded stream to the video decoding device 301 in FIG. 3 via a communication network.
  • the encoding target video includes a plurality of images corresponding to a plurality of times.
  • the image at each time corresponds to the image to be encoded and may be called a picture or a frame.
  • Each image may be a color image or a monochrome image.
  • the pixel value may be in RGB format or YUV format.
  • the block division unit 711 divides the encoding target image into a plurality of blocks, and outputs the original image of the encoding target block to the prediction error generation unit 712, the intra-frame prediction unit 717, and the inter-frame prediction unit 718.
  • the intra-frame prediction unit 717 performs intra prediction on the encoding target block, and outputs a prediction image of intra prediction to the selection unit 719.
  • the inter-frame prediction unit 718 performs inter prediction on the encoding target block, and outputs a predicted image of inter prediction to the selection unit 719. At this time, the inter-frame prediction unit 718 obtains a motion vector for the encoding target block by the motion search process, and outputs the obtained motion vector to the arithmetic coding unit 715.
  • the selection unit 719 selects a prediction image output by either the intra-frame prediction unit 717 or the inter-frame prediction unit 718, and outputs the prediction image to the prediction error generation unit 712 and the reconstruction unit 722.
  • the prediction error generation unit 712 outputs the difference between the prediction image output from the selection unit 719 and the original image of the encoding target block to the orthogonal transformation unit 713 as a prediction error.
  • the orthogonal transform unit 713 performs orthogonal transform on the prediction error output from the prediction error generation unit 712, and outputs a transform coefficient to the quantization unit 714.
  • the quantization unit 714 quantizes the transform coefficient and outputs the quantization coefficient to the arithmetic coding unit 715 and the inverse quantization unit 720.
  • the arithmetic encoding unit 715 encodes the quantized coefficient output from the quantizing unit 714 and the motion vector output from the inter-frame prediction unit 718 using CABAC, and outputs an encoded stream. Then, the arithmetic encoding unit 715 outputs the amount of information generated by CABAC to the encoding control unit 716.
  • the inverse quantization unit 720 performs inverse quantization on the quantization coefficient output from the quantization unit 714, generates an inverse quantization coefficient, and outputs the generated inverse quantization coefficient to the inverse orthogonal transform unit 721.
  • the inverse orthogonal transform unit 721 performs inverse orthogonal transform on the inverse quantization coefficient, generates a prediction error, and outputs the generated prediction error to the reconstruction unit 722.
  • the reconstruction unit 722 adds the prediction image output from the selection unit 719 and the prediction error output from the inverse orthogonal transform unit 721 to generate a reconstructed image, and outputs the generated reconstructed image to the in-loop filter 723 and the memory 724.
  • the in-loop filter 723 performs a filtering process such as a deblocking filter on the reconstructed image output from the reconstructing unit 722 to generate a local decoded image, and outputs the generated local decoded image to the memory 724.
  • the memory 724 stores the reconstructed image output from the reconstructing unit 722 as a locally decoded image and also stores the locally decoded image output from the in-loop filter 723.
  • the locally decoded image stored in the memory 724 is output to the intra-frame prediction unit 717, the inter-frame prediction unit 718, and the arithmetic coding unit 715.
  • the intra-frame prediction unit 717 uses the local decoded pixel value included in the local decoded image as a reference pixel value for the subsequent block
  • the inter-frame prediction unit 718 uses the local decoded image as a reference image for the subsequent image.
  • the encoding control unit 716 determines a quantization parameter (QP) so that the information amount output from the arithmetic encoding unit 715 becomes the target information amount, and outputs the determined QP to the quantization unit 714.
  • QP (quantization parameter)
  • FIG. 8 shows a first functional configuration example of the arithmetic encoding unit 715 of FIG. 7. The arithmetic encoding unit 715 of FIG. 8 includes a determination unit 801, a generation unit 802, and an encoding unit 803.
  • the determination unit 801 includes a difference motion vector calculation unit 811, a difference motion vector candidate calculation unit 812, and an estimated difference motion vector calculation unit 813.
  • the determination unit 801, the generation unit 802, and the encoding unit 803 correspond to the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG.
  • the difference motion vector calculation unit 811 calculates a difference motion vector representing a difference between the motion vector output from the inter-frame prediction unit 718 and the prediction motion vector for the encoding target block.
  • the predicted motion vector is obtained from the motion vectors for the blocks around the encoding target block by the inter-frame prediction unit 718 or the differential motion vector calculation unit 811.
  • the difference motion vector candidate calculation unit 812 calculates four differential motion vector candidates based on the four combinations of the positive and negative signs of the x component and the y component of the differential motion vector.
  • the estimated difference motion vector calculation unit 813 obtains four reference block candidates corresponding to the four differential motion vector candidates. Then, the estimated difference motion vector calculation unit 813 calculates an estimated differential motion vector using the locally decoded pixel values of the encoded pixels adjacent to the encoding target block and the locally decoded pixel values of the encoded pixels included in each of the four reference block candidates.
  • because the estimated differential motion vector is calculated by a method different from the motion search process in the inter-frame prediction unit 718, it does not always match the differential motion vector calculated by the difference motion vector calculation unit 811.
  • the generation unit 802 generates a sign flag indicating whether or not the sign of each component of the differential motion vector matches the sign of the corresponding component of the estimated differential motion vector.
  • the encoding unit 803 encodes the absolute value and sign flag of each component of the differential motion vector, together with the quantization coefficients output by the quantization unit 714, using CABAC context modeling with variable occurrence probabilities. The sign flag of each component corresponds to the coincidence information.
  • the sign flag does not directly indicate whether each component of the differential motion vector is positive or negative; instead, it indicates whether the sign of each component of the differential motion vector differs from the sign of the corresponding component of the estimated differential motion vector.
  • because the estimate is often correct, the occurrence probability of the value “0”, indicating that the two signs are the same, can be made higher than the occurrence probability of the value “1”, indicating that they differ. Therefore, the sign flag can be encoded by arithmetic coding using context modeling, and its code amount is reduced.
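The flag generation itself is trivial once the estimated vector is known (an illustrative sketch; the function name is ours):

```python
def sign_flags(dmv, estimated_dmv):
    """Per-component coincidence information: 0 = sign matches the estimate,
    1 = sign differs. Because the estimate is usually correct, value 0
    dominates, so the flag compresses well under context-modelled arithmetic
    coding, unlike a raw sign bit sent in bypass mode."""
    return tuple(int((d < 0) != (e < 0)) for d, e in zip(dmv, estimated_dmv))
```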
  • FIG. 9 shows an example of a first calculation method of the estimated difference motion vector.
  • An encoding target block 903 is included in the area 902 of the encoding target image 901.
  • the estimated difference motion vector calculation unit 813 acquires the local decoded pixel value of the encoded pixel 911 adjacent to the encoding target block 903 in the region 902.
  • the estimated difference motion vector calculation unit 813 arranges each reference block candidate 904 so as to overlap the position of the encoding target block 903 in the region 902, and locally decodes the encoded pixels included in the reference block candidate 904. Get the value.
  • the estimated difference motion vector calculation unit 813 can acquire the locally decoded pixel value of the encoded pixel 912 in the reference block candidate 904 adjacent to the encoded pixel 911.
  • for each reference block candidate i (i = 1 to 4), a statistical value Ai of the locally decoded pixel values is calculated.
  • as the statistical value A0 and the statistical values Ai, an average value, median value, mode value, or the like of a plurality of locally decoded pixel values can be used.
  • the estimated difference motion vector calculation unit 813 determines the estimated differential motion vector by comparing the statistical value A0 with the statistical values Ai. For example, it obtains the reference block candidate whose statistical value is closest to A0 among the statistical values A1 to A4, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector.
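A sketch of this first method (the names and the choice of the mean are our assumptions; the patent allows median or mode as well):

```python
from statistics import mean

def estimate_by_statistic(adjacent_pixels, candidate_boundary_pixels, candidates):
    """Pick the candidate whose boundary-pixel statistic Ai is closest to A0."""
    a0 = mean(adjacent_pixels)                            # A0: pixels next to the block
    stats = [mean(p) for p in candidate_boundary_pixels]  # A1..A4, one per candidate
    best = min(range(len(stats)), key=lambda i: abs(stats[i] - a0))
    return candidates[best]
```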
  • FIG. 10 shows an example of a second calculation method of the estimated difference motion vector.
  • the estimated difference motion vector calculation unit 813 calculates the absolute difference value of the local decoded pixel values of the two pixels for the pair 1001 of the encoded pixel 911 and the encoded pixel 912. Then, the estimated difference motion vector calculation unit 813 calculates the difference absolute value sum by accumulating the difference absolute values for a plurality of pairs on the upper boundary and the left boundary of the encoding target block 903.
  • the estimated difference motion vector calculation unit 813 determines the estimated difference motion vector by comparing the four difference absolute value sums with respect to each of the four reference block candidates. For example, the estimated difference motion vector calculation unit 813 can obtain a reference block candidate having the smallest sum of absolute differences, and can determine a difference motion vector candidate indicating the obtained reference block candidate as an estimated difference motion vector.
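A sketch of this second method (pairing is simplified to parallel lists; the patent pairs pixels along the top and left boundaries of the target block):

```python
def estimate_by_sad(adjacent_pixels, candidate_boundary_pixels, candidates):
    """Pick the candidate with the smallest sum of absolute differences
    between each adjacent pixel and its paired candidate boundary pixel."""
    sads = [sum(abs(a - b) for a, b in zip(adjacent_pixels, cand))
            for cand in candidate_boundary_pixels]
    return candidates[min(range(len(sads)), key=sads.__getitem__)]
```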
  • FIG. 11 shows an example of the third calculation method of the estimated difference motion vector.
  • the estimated difference motion vector calculation unit 813 selects a combination 1101 of four encoded pixels in the vicinity of the boundary of the encoding target block 903.
  • the combination 1101 includes an encoded pixel 1111 to an encoded pixel 1114.
  • the encoded pixel 1111 and the encoded pixel 1112 are pixels included in the region 902 of the encoding target image 901.
  • the encoded pixel 1112 is adjacent to the boundary of the encoding target block 903, and the encoded pixel 1111 is adjacent to the encoded pixel 1112.
  • the encoded pixel 1113 and the encoded pixel 1114 are pixels included in the reference block candidate 904; the encoded pixel 1113 is adjacent to the boundary of the encoding target block 903, and the encoded pixel 1114 is adjacent to the encoded pixel 1113.
  • the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block 903 from the locally decoded pixel values of the encoded pixel 1111 and the encoded pixel 1112. Further, the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block 903 from the locally decoded pixel values of the encoded pixel 1113 and the encoded pixel 1114.
  • FIG. 12 shows an example of a method for calculating a predicted pixel value on the boundary of the encoding target block 903.
  • the horizontal axis in FIG. 12 represents the y axis of the encoding target image 901, and the vertical axis represents the pixel value.
  • the estimated difference motion vector calculation unit 813 obtains, as a predicted pixel value, the pixel value p1 corresponding to the y coordinate y1 of the boundary of the encoding target block 903 on the straight line 1201 that passes through the locally decoded pixel values of the encoded pixel 1111 and the encoded pixel 1112.
  • similarly, the estimated difference motion vector calculation unit 813 obtains, as a predicted pixel value, the pixel value p2 corresponding to y1 on the straight line 1202 that passes through the locally decoded pixel values of the encoded pixel 1113 and the encoded pixel 1114.
  • the estimated difference motion vector calculation unit 813 calculates a difference absolute value 1203 between the predicted pixel value p1 and the predicted pixel value p2. Then, the estimated difference motion vector calculation unit 813 accumulates the difference absolute values 1203 for a plurality of combinations on the upper boundary and the left boundary of the encoding target block 903, and calculates a difference absolute value sum.
  • the estimated difference motion vector calculation unit 813 determines the estimated difference motion vector by comparing the four difference absolute value sums with respect to each of the four reference block candidates. For example, the estimated difference motion vector calculation unit 813 can obtain a reference block candidate having the smallest sum of absolute differences, and can determine a difference motion vector candidate indicating the obtained reference block candidate as an estimated difference motion vector.
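The extrapolation step of this third method can be sketched as follows (we assume a one-pixel extrapolation step; the exact position of the boundary coordinate y1 relative to the pixel grid is simplified, and the names are ours):

```python
def boundary_prediction(p_far, p_near):
    """Linearly extrapolate one step past p_near toward the block boundary,
    along the straight line through the two locally decoded values."""
    return p_near + (p_near - p_far)

def extrapolation_cost(outside_pairs, inside_pairs):
    """Sum of |p1 - p2| over boundary positions: p1 is extrapolated from the
    two pixels outside the block (1111, 1112), p2 from the two pixels inside
    the reference block candidate (1114, 1113). The candidate with the
    smallest cost yields the estimated differential motion vector."""
    return sum(abs(boundary_prediction(*o) - boundary_prediction(*i))
               for o, i in zip(outside_pairs, inside_pairs))
```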
  • with the first to third calculation methods, an estimated differential motion vector that is highly likely to match the differential motion vector can be obtained with a smaller amount of computation than the motion search process in the inter-frame prediction unit 718. As a result, the occurrence probability of the value “0” of the sign flag can be made higher than the occurrence probability of the value “1”.
  • FIG. 13 is a flowchart showing a specific example of the video encoding process performed by the video encoding device 701 in FIG.
  • the intra-frame prediction unit 717 performs intra prediction on the encoding target block (step 1301)
  • the inter-frame prediction unit 718 performs inter prediction on the encoding target block (step 1302).
  • the prediction error generation unit 712, the orthogonal transformation unit 713, and the quantization unit 714 encode the encoding target block using the prediction image output from either the intra-frame prediction unit 717 or the inter-frame prediction unit 718, and generate quantization coefficients (step 1303). Then, the determination unit 801 and the generation unit 802 of the arithmetic encoding unit 715 generate sign flags for the differential motion vector of the encoding target block (step 1304).
  • the video encoding device 701 determines whether or not the encoding of the encoding target image has been completed (step 1305). When an unprocessed block remains (step 1305, NO), the video encoding device 701 repeats the processing after step 1301 for the next block.
  • the encoding unit 803 of the arithmetic encoding unit 715 performs variable length encoding on the quantization coefficient and the prediction mode information (step 1306).
  • the prediction mode information includes the absolute value and sign flag of each component of the differential motion vector.
  • the video encoding device 701 determines whether or not the encoding of the encoding target video has been completed (step 1307). When an unprocessed image remains (step 1307, NO), the video encoding device 701 repeats the processing after step 1301 for the next image. Then, when encoding of the encoding target video is completed (step 1307, YES), the video encoding device 701 ends the process.
  • FIG. 14 is a flowchart showing an example of the first code flag generation process in step 1304 of FIG.
  • the estimated difference motion vector is calculated by the first calculation method shown in FIG.
  • the difference motion vector calculation unit 811 of the determination unit 801 calculates a difference motion vector from the motion vector and the predicted motion vector (step 1401).
  • the difference motion vector candidate calculation unit 812 then calculates four differential motion vector candidates based on the four sign combinations of the x component and the y component of the differential motion vector (step 1402).
  • the estimated difference motion vector calculation unit 813 obtains four reference block candidates indicated by each of the four difference motion vector candidates (step 1403).
  • the estimated difference motion vector calculation unit 813 calculates a statistical value A0 of the locally decoded pixel value of the encoded pixel adjacent to the encoding target block in the encoding target image (step 1404).
  • Next, the estimated difference motion vector calculation unit 813 calculates statistical values A1 to A4 of the locally decoded pixel values of the encoded pixels included in the four reference block candidates (step 1405). The estimated difference motion vector calculation unit 813 then obtains the reference block candidate whose statistical value is closest to the statistical value A0 among the statistical values A1 to A4, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 1406).
  • the generation unit 802 compares the code of each component of the differential motion vector with the code of the corresponding component of the estimated differential motion vector, and determines the value of the code flag of each component (step 1407). If the code of a component of the differential motion vector and the code of the same component of the estimated differential motion vector are the same, the value of that component's code flag is determined to be "0"; if the two codes differ, it is determined to be "1".
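The flow of FIG. 14 can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the embodiment's implementation: `stat_of_candidate` is a hypothetical helper standing in for the statistical value of step 1405 (for example, a mean over the locally decoded pixels of a reference block candidate), and a zero component is treated as non-negative when codes are compared.

```python
from itertools import product

def dmv_candidates(dmv):
    """Step 1402: four candidates from the sign combinations of |x| and |y|."""
    ax, ay = abs(dmv[0]), abs(dmv[1])
    return [(sx * ax, sy * ay) for sx, sy in product((1, -1), repeat=2)]

def estimate_dmv(candidates, pmv, stat_a0, stat_of_candidate):
    """Steps 1403-1406: choose the candidate whose reference block candidate
    (predicted motion vector plus candidate) has the statistic closest to A0.
    stat_of_candidate(x, y) is a hypothetical helper returning the statistic
    of the reference block candidate indicated by motion vector (x, y)."""
    return min(candidates,
               key=lambda c: abs(stat_of_candidate(pmv[0] + c[0],
                                                   pmv[1] + c[1]) - stat_a0))

def sign_flags(dmv, est_dmv):
    """Step 1407: per-component flag, 0 when the codes agree, 1 otherwise."""
    same = lambda a, b: (a >= 0) == (b >= 0)
    return tuple(0 if same(a, b) else 1 for a, b in zip(dmv, est_dmv))
```

For a differential motion vector (3, -2) whose estimate comes out as (3, 2), the flags are (0, 1); only the flags and the absolute values need to reach the encoded stream.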
  • FIG. 15 is a flowchart showing an example of the second code flag generation process in step 1304 of FIG.
  • the estimated difference motion vector is calculated by the second calculation method shown in FIG.
  • the processing in steps 1501 to 1503 and step 1506 in FIG. 15 is the same as the processing in steps 1401 to 1403 and step 1407 in FIG. 14.
  • In step 1504, when each reference block candidate is arranged at the position of the encoding target block, the estimated difference motion vector calculation unit 813 identifies the encoded pixels in the candidate that are adjacent to the surrounding encoded pixels in the encoding target image. Then, for each reference block candidate, the estimated difference motion vector calculation unit 813 calculates the sum of absolute differences between the locally decoded pixel values of the surrounding encoded pixels in the encoding target image and the locally decoded pixel values of the encoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 813 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 1505).
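The second calculation method (steps 1504-1505) amounts to a template-matching test across the block boundary. The sketch below is a simplified assumption: images are plain lists of rows of locally decoded pixel values, and only the row above and the column left of the block are compared, whereas the embodiment speaks generally of pixels adjacent across the boundary.

```python
def boundary_sad(cur, ref, bx, by, rx, ry, n):
    """SAD between the locally decoded pixels just outside the target block
    (top row and left column in the current image) and the pixels just inside
    the candidate block (its own top row and left column in the reference
    image).  (bx, by) is the target block's top-left corner, (rx, ry) the
    candidate's, and n is the block size."""
    top = sum(abs(cur[by - 1][bx + i] - ref[ry][rx + i]) for i in range(n))
    left = sum(abs(cur[by + j][bx - 1] - ref[ry + j][rx]) for j in range(n))
    return top + left

def pick_candidate(cur, ref, bx, by, n, pmv, dmv_cands):
    """Step 1505: pick the differential motion vector candidate whose
    reference block candidate minimizes the boundary SAD."""
    return min(dmv_cands,
               key=lambda c: boundary_sad(cur, ref, bx, by,
                                          bx + pmv[0] + c[0],
                                          by + pmv[1] + c[1], n))
```

The candidate that blends most smoothly with the already-encoded neighborhood wins, on the assumption that the true block correlates with its surroundings.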
  • FIG. 16 is a flowchart showing an example of the third code flag generation process in step 1304 of FIG.
  • the estimated difference motion vector is calculated by the third calculation method shown in FIG.
  • the processing in steps 1601 to 1603 and step 1606 in FIG. 16 is the same as the processing in steps 1401 to 1403 and step 1407 in FIG. 14.
  • In step 1604, the estimated difference motion vector calculation unit 813 identifies two columns of encoded pixels in the encoding target image that are adjacent to the outside of the boundary of the encoding target block. Then, the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block from the locally decoded pixel values of the two columns of encoded pixels.
  • Next, when the reference block candidates are arranged so as to overlap the position of the encoding target block, the estimated difference motion vector calculation unit 813 identifies the two columns of encoded pixels in each reference block candidate that are adjacent to the inside of the boundary of the encoding target block.
  • the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block from the locally decoded pixel values of the two columns of encoded pixels.
  • the estimated difference motion vector calculation unit 813 then calculates, for each reference block candidate, the sum of absolute differences between the predicted pixel values calculated from the encoded pixels in the encoding target image and the predicted pixel values calculated from the encoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 813 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 1605).
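The third calculation method (steps 1604-1605) compares two predictions of the same boundary pixels, one extrapolated from outside the block and one from inside each candidate. The embodiment does not specify the prediction formula; the sketch below assumes a simple linear extrapolation, 2*p_near - p_far, over the top edge only (the left edge would be handled the same way).

```python
def predict_top_boundary_outside(cur, bx, by, n):
    """Predicted pixel values on the top boundary of the target block,
    extrapolated from the two rows of encoded pixels just outside it.
    The 2*near - far extrapolation is an assumption for illustration."""
    return [2 * cur[by - 1][bx + i] - cur[by - 2][bx + i] for i in range(n)]

def predict_top_boundary_inside(ref, rx, ry, n):
    """The same boundary prediction computed from the two rows just inside
    the reference block candidate."""
    return [2 * ref[ry][rx + i] - ref[ry + 1][rx + i] for i in range(n)]

def boundary_prediction_sad(cur, ref, bx, by, rx, ry, n):
    """Step 1604: SAD between the outside and inside boundary predictions;
    the candidate minimizing this is chosen in step 1605."""
    out = predict_top_boundary_outside(cur, bx, by, n)
    ins = predict_top_boundary_inside(ref, rx, ry, n)
    return sum(abs(a - b) for a, b in zip(out, ins))
```

Unlike the second method, this variant rewards candidates whose pixel gradient, not just pixel level, continues smoothly across the block boundary.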
  • FIG. 17 shows a specific example of the video decoding device 301 in FIG. 3. The video decoding device 1701 in FIG. 17 includes an arithmetic decoding unit 1711, an inverse quantization unit 1712, an inverse orthogonal transform unit 1713, a reconstruction unit 1714, an in-loop filter 1715, an intra-frame prediction unit 1716, a motion compensation unit 1717, a selection unit 1718, and a memory 1719.
  • The inverse quantization unit 1712, the inverse orthogonal transform unit 1713, the reconstruction unit 1714, the in-loop filter 1715, the intra-frame prediction unit 1716, the motion compensation unit 1717, and the selection unit 1718 correspond to the second decoding unit 314 in FIG. 3.
  • the video decoding device 1701 can be implemented as a hardware circuit, for example.
  • each component of the video decoding device 1701 may be implemented as an individual circuit, or the components may be implemented as one integrated circuit.
  • the video decoding device 1701 decodes the input encoded stream and outputs the decoded video.
  • the video decoding device 1701 can receive the encoded stream from the video encoding device 701 in FIG. 7 via the communication network.
  • the arithmetic decoding unit 1711 decodes the encoded stream by the CABAC decoding method, outputs the quantization coefficient of the decoding target block in the decoding target image to the inverse quantization unit 1712, and outputs the motion vector for the decoding target block to the motion compensation unit 1717.
  • the inverse quantization unit 1712 performs inverse quantization on the quantization coefficient output from the arithmetic decoding unit 1711 to generate an inverse quantization coefficient, and outputs the generated inverse quantization coefficient to the inverse orthogonal transform unit 1713.
  • the inverse orthogonal transform unit 1713 performs inverse orthogonal transform on the inverse quantization coefficient to generate a prediction error, and outputs the generated prediction error to the reconstruction unit 1714.
  • the motion compensation unit 1717 performs motion compensation processing on the decoding target block using the motion vector output from the arithmetic decoding unit 1711 and the reference image output from the memory 1719, generates a predicted image of inter prediction, and outputs it to the selection unit 1718.
  • the selection unit 1718 selects a prediction image output by either the intra-frame prediction unit 1716 or the motion compensation unit 1717 and outputs the prediction image to the reconstruction unit 1714.
  • the reconstruction unit 1714 adds the prediction image output from the selection unit 1718 and the prediction error output from the inverse orthogonal transform unit 1713 to generate a reconstructed image, and outputs the generated reconstructed image to the in-loop filter 1715 and the intra-frame prediction unit 1716.
  • the intra-frame prediction unit 1716 performs intra prediction on the decoding target block using the reconstructed image of the decoded block output from the reconstruction unit 1714, and outputs a prediction image of intra prediction to the selection unit 1718.
  • the in-loop filter 1715 performs a filtering process such as a deblocking filter and a sample adaptive offset filter on the reconstructed image output from the reconstructing unit 1714 to generate a decoded image. Then, the in-loop filter 1715 outputs the decoded image for one frame as the decoded video and also outputs it to the memory 1719.
  • the memory 1719 stores the decoded image output from the in-loop filter 1715.
  • the decoded image stored in the memory 1719 is output to the motion compensation unit 1717 as a reference image for the subsequent image.
  • FIG. 18 shows a first functional configuration example of the arithmetic decoding unit 1711 in FIG.
  • the arithmetic decoding unit 1711 in FIG. 18 includes a decoding unit 1801, a determination unit 1802, and a generation unit 1803.
  • the determination unit 1802 includes a difference motion vector candidate calculation unit 1811 and an estimated difference motion vector calculation unit 1812.
  • the decoding unit 1801, the determination unit 1802, and the generation unit 1803 correspond to the first decoding unit 311, the determination unit 312, and the generation unit 313 in FIG. 3, respectively.
  • the decoding unit 1801 decodes the encoded stream using a variable occurrence probability by CABAC context modeling, and restores the quantization coefficient of the decoding target block. Further, the decoding unit 1801 restores the absolute values of the x component and the y component of the differential motion vector and the code flags of the x component and the y component of the differential motion vector. Decoding section 1801 then outputs the absolute value of each component of the difference motion vector to determination section 1802 and outputs the sign flag of each component of the difference motion vector to generation section 1803.
  • From the absolute values of the x component and the y component of the differential motion vector, the difference motion vector candidate calculation unit 1811 calculates four differential motion vector candidates based on the four combinations of the positive and negative signs of the x component and the positive and negative signs of the y component.
  • the estimated difference motion vector calculation unit 1812 obtains four reference block candidates corresponding to each of the difference motion vector candidates by using the prediction motion vector and the four difference motion vector candidates for the decoding target block.
  • the predicted motion vector is obtained from the motion vectors already calculated for the blocks around the decoding target block.
  • the estimated difference motion vector calculation unit 1812 uses the decoded pixel values of the decoded pixels adjacent to the decoding target block and the decoded pixel values of the decoded pixels included in each of the four reference block candidates to calculate the estimated differential motion vector.
  • the generation unit 1803 determines the code of each component of the differential motion vector from the code of each component of the estimated differential motion vector based on the code flag of each component of the differential motion vector, and generates a differential motion vector.
  • the generation unit 1803 calculates a motion vector for the decoding target block by adding the predicted motion vector for the decoding target block and the generated differential motion vector.
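Because the decoder can reproduce the same estimated differential motion vector as the encoder, recovering the true vector reduces to flipping the estimated code of each component when its flag is 1. A minimal sketch, assuming a zero component is treated as non-negative (matching the encoder-side comparison):

```python
def recover_dmv(abs_dmv, est_dmv, flags):
    """Recover the signed differential motion vector from per-component
    absolute values, the estimated differential motion vector, and the
    code flags (0: same code as the estimate, 1: opposite code)."""
    def component(a, est, flag):
        sign = 1 if est >= 0 else -1
        if flag:
            sign = -sign
        return sign * a
    return tuple(component(a, e, f) for a, e, f in zip(abs_dmv, est_dmv, flags))

def motion_vector(pmv, dmv):
    """The motion vector is the sum of the predicted motion vector and the
    recovered differential motion vector."""
    return (pmv[0] + dmv[0], pmv[1] + dmv[1])
```

For absolute values (3, 2), an estimate (3, 2), and flags (0, 1), this yields the differential motion vector (3, -2), which is then added to the predicted motion vector.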
  • FIG. 19 is a flowchart showing a specific example of video decoding processing performed by the video decoding device 1701 in FIG.
  • the arithmetic decoding unit 1711 performs variable length decoding on the encoded stream, and generates the quantization coefficient and prediction mode information of the decoding target block (step 1901). Then, the arithmetic decoding unit 1711 checks whether the prediction mode information of the decoding target block indicates inter prediction or intra prediction (step 1902).
  • When the prediction mode information indicates inter prediction, the arithmetic decoding unit 1711 generates a motion vector using the absolute value and the sign flag of each component of the differential motion vector included in the prediction mode information (step 1903). Then, the motion compensation unit 1717 performs motion compensation processing on the decoding target block using the generated motion vector (step 1904).
  • When the prediction mode information indicates intra prediction, the intra-frame prediction unit 1716 performs intra prediction on the decoding target block (step 1907).
  • the inverse quantization unit 1712 and the inverse orthogonal transform unit 1713 decode the quantization coefficient of the decoding target block and generate a prediction error (step 1905). Then, the selection unit 1718, the reconstruction unit 1714, and the in-loop filter 1715 generate a decoded image from the prediction error using the prediction image output by either the motion compensation unit 1717 or the intra-frame prediction unit 1716.
  • the video decoding device 1701 determines whether or not the decoding of the encoded stream has been completed (step 1906). If an unprocessed code string remains (step 1906, NO), the video decoding device 1701 repeats the processing from step 1901 on for the next code string. Then, when decoding of the encoded stream is completed (step 1906, YES), the video decoding device 1701 ends the process.
  • FIG. 20 is a flowchart showing an example of the first motion vector generation process in step 1903 of FIG.
  • the estimated difference motion vector is calculated by the first calculation method shown in FIG.
  • the difference motion vector candidate calculation unit 1811 of the determination unit 1802 calculates four difference motion vector candidates based on the four combinations of the x and y component codes of the difference motion vector (step 2001).
  • the estimated difference motion vector calculation unit 1812 obtains four reference block candidates indicated by each of the four difference motion vector candidates (step 2002).
  • the estimated difference motion vector calculation unit 1812 calculates the statistical value B0 of the decoded pixel value of the decoded pixel adjacent to the decoding target block in the decoding target image (step 2003).
  • Next, the estimated difference motion vector calculation unit 1812 calculates statistical values B1 to B4 of the decoded pixel values of the decoded pixels included in the four reference block candidates (step 2004). The estimated difference motion vector calculation unit 1812 then obtains the reference block candidate whose statistical value is closest to the statistical value B0 among the statistical values B1 to B4, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 2005).
  • the generation unit 1803 generates a differential motion vector using the sign flag of each component of the differential motion vector and the estimated differential motion vector, and calculates a motion vector by adding the predicted motion vector and the differential motion vector (step 2006).
  • FIG. 21 is a flowchart showing an example of the second motion vector generation process in step 1903 of FIG.
  • the estimated difference motion vector is calculated by the second calculation method shown in FIG.
  • the processing in step 2101, step 2102, and step 2105 in FIG. 21 is the same as the processing in step 2001, step 2002, and step 2006 in FIG. 20.
  • In step 2103, when each reference block candidate is arranged so as to overlap the position of the decoding target block, the estimated difference motion vector calculation unit 1812 identifies the decoded pixels in the reference block candidate that are adjacent to the surrounding decoded pixels in the decoding target image. Then, for each reference block candidate, the estimated difference motion vector calculation unit 1812 calculates the sum of absolute differences between the decoded pixel values of the surrounding decoded pixels in the decoding target image and the decoded pixel values of the decoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 1812 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 2104).
  • FIG. 22 is a flowchart showing an example of the third motion vector generation process in step 1903 of FIG.
  • the estimated difference motion vector is calculated by the third calculation method shown in FIG.
  • the processing in step 2201, step 2202, and step 2205 in FIG. 22 is the same as the processing in step 2001, step 2002, and step 2006 in FIG. 20.
  • In step 2203, the estimated difference motion vector calculation unit 1812 identifies two columns of decoded pixels in the decoding target image that are adjacent to the outside of the boundary of the decoding target block. Then, the estimated difference motion vector calculation unit 1812 calculates a predicted pixel value on the boundary of the decoding target block from the decoded pixel values of the two columns of decoded pixels.
  • Next, when each reference block candidate is arranged at the position of the decoding target block, the estimated difference motion vector calculation unit 1812 identifies the two columns of decoded pixels in the reference block candidate that are adjacent to the inside of the boundary of the decoding target block. Then, the estimated difference motion vector calculation unit 1812 calculates a predicted pixel value on the boundary of the decoding target block from the decoded pixel values of the two columns of decoded pixels.
  • the estimated difference motion vector calculation unit 1812 then calculates, for each reference block candidate, the sum of absolute differences between the predicted pixel values calculated from the decoded pixels in the decoding target image and the predicted pixel values calculated from the decoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 1812 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 2204).
  • The video encoding device 101 in FIG. 1 and the video decoding device 301 in FIG. 3 can also generate a plurality of motion vector candidates by changing the code of the motion vector component instead of the code of the differential motion vector component.
  • the first encoding unit 111 in FIG. 1 encodes the encoding target block in the image included in the video.
  • the determining unit 112 generates a plurality of motion vector candidates including the first motion vector by changing the code indicating whether the component of the first motion vector for the encoding target block is positive or negative. Then, the determination unit 112 determines the second motion vector from these motion vector candidates. At this time, the determination unit 112 determines the second motion vector using the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates indicated by the plurality of motion vector candidates and the locally decoded pixel values of the encoded pixels adjacent to the encoding target block.
  • the generation unit 113 generates coincidence information indicating whether or not the code of the first motion vector component matches the code of the second motion vector component, and the second encoding unit 114 encodes the absolute value of the first motion vector component and the coincidence information.
  • the first decoding unit 311 in FIG. 3 decodes the encoded video, and restores the absolute value of the first motion vector component for the decoding target block in the image included in the encoded video.
  • the first decoding unit 311 restores, together with the absolute value of the first motion vector component, coincidence information indicating whether or not the code indicating whether the component of the first motion vector is positive or negative matches the code of the component of the second motion vector.
  • the determining unit 312 generates a plurality of motion vector candidates by adding a sign to the absolute value of the component of the first motion vector, and determines the second motion vector from the motion vector candidates. At this time, the determination unit 312 uses the decoded pixel value of the pixel adjacent to the decoding target block and the decoded pixel value of the pixel included in each of the plurality of reference block candidates indicated by the plurality of motion vector candidates to perform the second motion. Determine the vector.
  • the generating unit 313 generates a first motion vector from the second motion vector based on the match information, and the second decoding unit 314 decodes the coefficient information of the decoding target block using the first motion vector.
  • the arithmetic encoding unit 715 uses the absolute value and the code flag of each component of the motion vector instead of the absolute value and the code flag of each component of the differential motion vector as the prediction mode information of the inter prediction.
  • the arithmetic encoding unit 715 generates four motion vector candidates based on the four combinations of the codes of the respective components of the motion vector output from the inter-frame prediction unit 718.
  • FIG. 23 shows examples of motion vector candidates and reference block candidates.
  • the reference image 2301 in FIG. 23 includes a block 2311 that exists at the same position as the encoding target block in the encoding target image.
  • In FIG. 23, the direction from left to right is the positive direction of the x coordinate, and the direction from top to bottom is the positive direction of the y coordinate.
  • motion vector candidates 2331 to 2334 are generated based on four combinations of the positive and negative signs of the x component of the motion vector and the positive and negative signs of the y component.
  • the motion vector candidate 2331 is a vector from the block 2311 to the reference block candidate 2321, and the x component and the y component of the motion vector candidate 2331 are positive.
  • the motion vector candidate 2332 is a vector from the block 2311 to the reference block candidate 2322, and the x component of the motion vector candidate 2332 is positive and the y component is negative.
  • the motion vector candidate 2333 is a vector from the block 2311 to the reference block candidate 2323, and the x component and the y component of the motion vector candidate 2333 are negative.
  • the motion vector candidate 2334 is a vector from the block 2311 to the reference block candidate 2324, and the x component of the motion vector candidate 2334 is negative and the y component is positive.
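The geometry of FIG. 23 can be sketched as follows. The ordering mirrors candidates 2331 to 2334 (x growing rightward, y growing downward), and the concrete magnitudes in the usage example are illustrative only.

```python
def mv_candidates(abs_mv):
    """Four motion vector candidates from the sign combinations of |mvx| and
    |mvy|, ordered like candidates 2331-2334 in FIG. 23:
    (+,+), (+,-), (-,-), (-,+)."""
    ax, ay = abs_mv
    return [(ax, ay), (ax, -ay), (-ax, -ay), (-ax, ay)]

def reference_block_positions(block_pos, abs_mv):
    """Top-left corners of the four reference block candidates (2321-2324),
    each offset from the co-located block (block 2311) by one candidate."""
    bx, by = block_pos
    return [(bx + mx, by + my) for mx, my in mv_candidates(abs_mv)]
```

With the co-located block at (10, 10) and absolute components (4, 3), the four candidate blocks sit at (14, 13), (14, 7), (6, 7), and (6, 13), one in each quadrant around block 2311.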
  • FIG. 24 shows a second functional configuration example of the arithmetic encoding unit 715 of FIG. 7. The arithmetic encoding unit 715 in FIG. 24 includes a determination unit 2401, a generation unit 2402, and an encoding unit 2403.
  • the determination unit 2401 includes a motion vector candidate calculation unit 2411 and an estimated motion vector calculation unit 2412.
  • the determination unit 2401, the generation unit 2402, and the encoding unit 2403 correspond to the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG. 1, respectively.
  • the motion vector candidate calculation unit 2411 calculates four motion vector candidates based on the four combinations of the positive and negative signs of the x component of the motion vector and the positive and negative signs of the y component.
  • the estimated motion vector calculation unit 2412 obtains four reference block candidates corresponding to the four motion vector candidates. Then, the estimated motion vector calculation unit 2412 uses the locally decoded pixel value of the encoded pixel adjacent to the encoding target block and the locally decoded pixel value of the encoded pixel included in each of the four reference block candidates. To calculate an estimated motion vector.
  • the generating unit 2402 generates a code flag indicating whether or not the code of each component of the motion vector matches the code of each component of the estimated motion vector.
  • the encoding unit 2403 encodes the absolute value and the sign flag of each component of the motion vector and the quantization coefficient output from the quantization unit 714 using CABAC context modeling using a variable occurrence probability.
  • the code amount of the code flag is reduced by arithmetic coding using context modeling, similarly to the code flag of the differential motion vector.
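Why context modeling reduces the code amount of the flag: a raw sign bit is equiprobable and always costs 1 bit, while a flag that is usually 0 (because the estimate is usually right) can be arithmetic-coded down toward its binary entropy. The 80% accuracy figure below is purely illustrative, not a figure from the embodiment.

```python
from math import log2

def binary_entropy(p):
    """Average bits per symbol for a binary source with P(flag = 1) = p,
    the lower bound that context-adaptive arithmetic coding approaches."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# An unbiased sign bit costs 1 bit; if the estimation were right 80% of the
# time, the flag distribution would be skewed and each flag would cost less:
print(round(binary_entropy(0.5), 3))  # → 1.0
print(round(binary_entropy(0.2), 3))  # → 0.722
```

The stronger the pixel-based estimation, the more biased the flag, and the closer its coded cost falls below one bit per component.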
  • FIG. 25 is a flowchart illustrating an example of a fourth code flag generation process performed by the arithmetic encoding unit 715 of FIG. 24 in Step 1304 of FIG.
  • the estimated motion vector is calculated by the first calculation method shown in FIG.
  • the processing in step 2503 and step 2504 in FIG. 25 is the same as the processing in step 1404 and step 1405 in FIG. 14.
  • the motion vector candidate calculation unit 2411 of the determination unit 2401 calculates four motion vector candidates based on the four combinations of the x and y component codes of the motion vector (step 2501).
  • the estimated motion vector calculation unit 2412 obtains four reference block candidates indicated by each of the four motion vector candidates (step 2502).
  • In step 2505, the estimated motion vector calculation unit 2412 obtains the reference block candidate whose statistical value is closest to the statistical value A0 among the statistical values A1 to A4, and determines the motion vector candidate indicating that reference block candidate as the estimated motion vector.
  • the generation unit 2402 compares the code of each component of the motion vector with the code of the corresponding component of the estimated motion vector, and determines the value of the code flag of each component (step 2506). If the code of a component of the motion vector and the code of the same component of the estimated motion vector are the same, the value of that component's code flag is determined to be "0"; if the two codes differ, it is determined to be "1".
  • FIG. 26 is a flowchart illustrating an example of a fifth code flag generation process performed by the arithmetic encoding unit 715 in FIG. 24 in step 1304 in FIG.
  • the estimated motion vector is calculated by the second calculation method shown in FIG.
  • the processing in step 2601, step 2602, and step 2605 in FIG. 26 is the same as the processing in step 2501, step 2502, and step 2506 in FIG. 25, and the processing in step 2603 is the same as the processing in step 1504 in FIG. 15.
  • In step 2604, the estimated motion vector calculation unit 2412 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the motion vector candidate indicating that reference block candidate as the estimated motion vector.
  • FIG. 27 is a flowchart illustrating an example of a sixth code flag generation process performed by the arithmetic encoding unit 715 of FIG. 24 in step 1304 of FIG.
  • the estimated motion vector is calculated by the third calculation method shown in FIG.
  • the processing of step 2701, step 2702, and step 2705 of FIG. 27 is the same as the processing of step 2501, step 2502, and step 2506 of FIG. 25, and the processing of step 2703 is the same as the processing of step 1604 of FIG. 16.
  • In step 2704, the estimated motion vector calculation unit 2412 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the motion vector candidate indicating that reference block candidate as the estimated motion vector.
  • FIG. 28 shows a second functional configuration example of the arithmetic decoding unit 1711 in FIG.
  • the arithmetic decoding unit 1711 in FIG. 28 includes a decoding unit 2801, a determination unit 2802, and a generation unit 2803.
  • the determination unit 2802 includes a motion vector candidate calculation unit 2811 and an estimated motion vector calculation unit 2812.
  • the decoding unit 2801, the determination unit 2802, and the generation unit 2803 correspond to the first decoding unit 311, the determination unit 312, and the generation unit 313 in FIG. 3, respectively.
  • the decoding unit 2801 decodes the encoded stream using a variable occurrence probability by CABAC context modeling, and restores the quantization coefficient of the decoding target block. Furthermore, the decoding unit 2801 restores the absolute values of the x and y components of the motion vector and the code flags of the x and y components of the motion vector. Then, decoding section 2801 outputs the absolute value of each component of the motion vector to determination section 2802, and outputs the sign flag of each component of the motion vector to generation section 2803.
  • From the absolute values of the x and y components of the motion vector, the motion vector candidate calculation unit 2811 calculates four motion vector candidates based on the four combinations of the positive and negative signs of the x component and the positive and negative signs of the y component.
  • the estimated motion vector calculation unit 2812 obtains four reference block candidates corresponding to the four motion vector candidates. Then, the estimated motion vector calculation unit 2812 uses the decoded pixel values of the decoded pixels adjacent to the decoding target block and the decoded pixel values of the decoded pixels included in each of the four reference block candidates to calculate the estimated motion vector.
  • the generation unit 2803 determines the code of each component of the motion vector from the code of each component of the estimated motion vector based on the code flag of each component of the motion vector, and generates a motion vector for the decoding target block.
  • FIG. 29 is a flowchart illustrating an example of a fourth motion vector generation process performed by the arithmetic decoding unit 1711 in FIG. 28 in step 1903 in FIG.
  • the estimated motion vector is calculated by the first calculation method shown in FIG.
  • the processing in step 2903 and step 2904 in FIG. 29 is the same as the processing in step 2003 and step 2004 in FIG. 20.
  • the motion vector candidate calculation unit 2811 of the determination unit 2802 calculates four motion vector candidates based on the four combinations of the x and y component codes of the motion vector (step 2901).
  • the estimated motion vector calculation unit 2812 obtains four reference block candidates indicated by each of the four motion vector candidates (step 2902).
  • In step 2905, the estimated motion vector calculation unit 2812 obtains the reference block candidate whose statistical value is closest to the statistical value B0 among the statistical values B1 to B4, and determines the motion vector candidate indicating that reference block candidate as the estimated motion vector.
  • the generation unit 2803 calculates a motion vector using the sign flag of each component of the motion vector and the estimated motion vector (step 2906).
  • FIG. 30 is a flowchart illustrating an example of a fifth motion vector generation process performed by the arithmetic decoding unit 1711 in FIG. 28 in Step 1903 in FIG.
  • the estimated motion vector is calculated by the second calculation method shown in FIG.
  • the processing in step 3001, step 3002, and step 3005 in FIG. 30 is the same as the processing in step 2901, step 2902, and step 2906 in FIG. 29, and the processing in step 3003 is the same as the processing in step 2103 in FIG. 21.
  • In step 3004, the estimated motion vector calculation unit 2812 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the motion vector candidate indicating that reference block candidate as the estimated motion vector.
  • FIG. 31 is a flowchart showing an example of sixth motion vector generation processing performed by the arithmetic decoding unit 1711 in FIG. 28 in step 1903 in FIG.
  • the estimated motion vector is calculated by the third calculation method shown in FIG.
  • the processing in step 3101, step 3102, and step 3105 in FIG. 31 is the same as the processing in step 2901, step 2902, and step 2906 in FIG. 29, and the processing in step 3103 is the same as the processing in step 2203 in FIG. 22. It is.
  • step 3104 the estimated motion vector calculation unit 2812 obtains a reference block candidate having the smallest sum of absolute differences from the four reference block candidates, and obtains a motion vector candidate indicating the obtained reference block candidate as an estimated motion vector. To decide.
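A minimal sketch of this minimum-SAD selection, with hypothetical names (in practice the compared pixel sets would be the decoded pixels adjacent to the target block and the corresponding pixels for each reference block candidate):

```python
def pick_min_sad_candidate(template, candidates):
    """template: decoded pixel values adjacent to the target block;
    candidates: list of (motion_vector_candidate, pixels) pairs, where
    pixels are the decoded values compared against the template.
    Returns the candidate with the smallest sum of absolute differences."""
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(candidates, key=lambda c: sad(template, c[1]))[0]
```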
  • FIG. 32 shows a functional configuration example of the video encoding system.
  • the video encoding system 3201 in FIG. 32 includes the video encoding device 701 in FIG. 7 and the video decoding device 1701 in FIG. 17, and is used for various purposes.
  • the video encoding system 3201 may be a video camera, a video transmission device, a video reception device, a videophone system, a computer, or a mobile phone.
  • the configuration of the video encoding device 701 in FIG. 7 is merely an example, and some components may be omitted or changed depending on the use or conditions of the video encoding device.
  • the configuration of the arithmetic encoding unit 715 in FIGS. 8 and 24 is merely an example, and some components may be omitted or changed according to the use or conditions of the video encoding device.
  • the video encoding apparatus may adopt an encoding method other than HEVC, or may adopt a variable length encoding method other than CABAC.
  • the video decoding apparatus may employ a decoding scheme other than HEVC or a variable length decoding scheme other than CABAC.
  • the configuration of the video encoding system 3201 in FIG. 32 is merely an example, and some components may be omitted or changed according to the use or conditions of the video encoding system 3201.
  • the flowcharts in FIGS. 2, 4, 13 to 16, 19 to 22, 25 to 27, and 29 to 31 are merely examples, and some processes may be omitted or changed according to the configuration or conditions of the video encoding device or the video decoding device.
  • in step 2603 of FIG. 26, step 2703 of FIG. 27, step 3003 of FIG. 30, and step 3103 of FIG. 31, another index indicating the degree of difference or similarity may be used instead of the sum of absolute differences.
  • the motion vectors, predicted motion vectors, differential motion vectors, differential motion vector candidates, and motion vector candidates shown in FIGS. 5, 6, and 23 are merely examples, and these vectors vary according to the video to be encoded.
  • the calculation methods shown in FIGS. 9 to 12 are merely examples, and the estimated differential motion vector or the estimated motion vector may be calculated by another calculation method using locally decoded pixel values or decoded pixel values.
  • the video encoding device 101 in FIG. 1, the video decoding device 301 in FIG. 3, the video encoding device 701 in FIG. 7, and the video decoding device 1701 in FIG. 17 can be implemented as hardware circuits, and can also be implemented using an information processing apparatus (computer) as shown in FIG. 33.
  • the information processing apparatus in FIG. 33 includes a Central Processing Unit (CPU) 3301, a memory 3302, an input device 3303, an output device 3304, an auxiliary storage device 3305, a medium driving device 3306, and a network connection device 3307. These components are connected to each other by a bus 3308.
  • the memory 3302 is a semiconductor memory such as a Read Only Memory (ROM), a Random Access Memory (RAM), or a flash memory, and stores programs and data used for processing.
  • the memory 3302 can be used as the memory 724 in FIG. 7 and the memory 1719 in FIG.
  • the CPU 3301 (processor) operates as, for example, the first encoding unit 111, the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG. 1 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the first decoding unit 311, the determination unit 312, the generation unit 313, and the second decoding unit 314 in FIG. 3 by executing the program using the memory 3302.
  • the CPU 3301 also operates as the block division unit 711, prediction error generation unit 712, orthogonal transform unit 713, quantization unit 714, arithmetic coding unit 715, and encoding control unit 716 in FIG. 7 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the intra-frame prediction unit 717, inter-frame prediction unit 718, selection unit 719, inverse quantization unit 720, inverse orthogonal transform unit 721, reconstruction unit 722, and in-loop filter 723 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the determination unit 801, the generation unit 802, and the encoding unit 803 in FIG. 8 by executing the program using the memory 3302.
  • the CPU 3301 also operates as a difference motion vector calculation unit 811, a difference motion vector candidate calculation unit 812, and an estimated difference motion vector calculation unit 813 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the arithmetic decoding unit 1711, the inverse quantization unit 1712, the inverse orthogonal transform unit 1713, the reconstruction unit 1714, and the in-loop filter 1715 in FIG. 17 by executing the program using the memory 3302.
  • the CPU 3301 also operates as an intra-frame prediction unit 1716, a motion compensation unit 1717, and a selection unit 1718 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the decoding unit 1801, the determination unit 1802, the generation unit 1803, the differential motion vector candidate calculation unit 1811, and the estimated differential motion vector calculation unit 1812 in FIG. 18 by executing the program using the memory 3302.
  • the CPU 3301 also operates as the determination unit 2401, the generation unit 2402, the encoding unit 2403, the motion vector candidate calculation unit 2411, and the estimated motion vector calculation unit 2412 in FIG. 24 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the decoding unit 2801, the determination unit 2802, the generation unit 2803, the motion vector candidate calculation unit 2811, and the estimated motion vector calculation unit 2812 in FIG. 28 by executing the program using the memory 3302.
  • the input device 3303 is, for example, a keyboard, a pointing device, or the like, and is used for inputting an instruction or information from a user or an operator.
  • the output device 3304 is, for example, a display device, a printer, a speaker, or the like, and is used to output an inquiry to a user or an operator or a processing result.
  • the processing result may be a decoded video.
  • the auxiliary storage device 3305 is, for example, a magnetic disk device, an optical disk device, a magneto-optical disk device, a tape device, or the like.
  • the auxiliary storage device 3305 may be a hard disk drive.
  • the information processing apparatus can store programs and data in the auxiliary storage device 3305 and load them into the memory 3302 for use.
  • the medium driving device 3306 drives the portable recording medium 3309 and accesses the recorded contents.
  • the portable recording medium 3309 is a memory device, a flexible disk, an optical disk, a magneto-optical disk, or the like.
  • the portable recording medium 3309 may be a Compact Disk Read Only Memory (CD-ROM), Digital Versatile Disk (DVD), or Universal Serial Bus (USB) memory.
  • CD-ROM Compact Disk Read Only Memory
  • DVD Digital Versatile Disk
  • USB Universal Serial Bus
  • the computer-readable recording media for storing the programs and data used for processing include physical (non-transitory) recording media such as the memory 3302, the auxiliary storage device 3305, and the portable recording medium 3309.
  • the network connection device 3307 is a communication interface circuit that is connected to a communication network such as a Local Area Network (LAN) or the Internet and performs data conversion accompanying communication.
  • the network connection device 3307 can transmit the encoded stream to the video decoding device and receive the encoded stream from the video encoding device.
  • the information processing apparatus can receive a program and data from an external apparatus via the network connection apparatus 3307 and can use them by loading them into the memory 3302.
  • the information processing apparatus does not have to include all the components shown in FIG. 33, and some of the components can be omitted depending on the application or conditions. For example, when an interface with a user or an operator is unnecessary, the input device 3303 and the output device 3304 may be omitted. When the information processing apparatus does not access the portable recording medium 3309, the medium driving device 3306 may be omitted.


Abstract

A first encoding unit encodes a block to be encoded in an image included in video. A determination unit generates a first difference motion vector from a motion vector and a predicted motion vector with respect to the block to be encoded. Next, the determination unit changes a sign indicating whether a component of the first difference motion vector is positive or negative so as to generate a plurality of difference motion vector candidates, and determines a second difference motion vector from among the difference motion vector candidates. In this case, the determination unit determines the second difference motion vector by using a local decoding pixel value of an encoded pixel adjacent to the block to be encoded and a local decoding pixel value of an encoded pixel included in each of a plurality of reference block candidates indicated by the plurality of difference motion vector candidates. A generation unit generates matching information indicating whether the sign of the component of the first difference motion vector matches the sign of the component of the second difference motion vector. A second encoding unit encodes the absolute value of the component of the first difference motion vector and the matching information.

Description

Video encoding device, video encoding method, video decoding device, video decoding method, and video encoding system

The present invention relates to a video encoding device, a video encoding method, a video decoding device, a video decoding method, and a video encoding system.

Recommendation H.265 of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) specifies High Efficiency Video Coding (HEVC), the latest video coding standard. HEVC adopts Context-Adaptive Binary Arithmetic Coding (CABAC) as its variable-length coding method; CABAC has a high processing load but high compression efficiency.

HEVC also adopts two prediction methods for encoding a target block using information on already encoded blocks: inter prediction and intra prediction. Inter prediction uses the pixel values of a block (reference block) that is temporally close to the encoding target block, while intra prediction uses the pixel values of blocks that are spatially close to the encoding target block.

In inter prediction, a motion vector indicating the reference block is generated. The motion vector includes a horizontal component (x component) and a vertical component (y component) within the image at each time instant of the video.

The motion vector of the encoding target block often has a high correlation with the motion vectors of the blocks surrounding it. In HEVC, therefore, a predicted motion vector, which is a prediction of the motion vector of the encoding target block, is derived from the motion vectors of the surrounding blocks, and a differential motion vector representing the difference between the actual motion vector of the encoding target block and the predicted motion vector is generated. Encoding this differential motion vector compresses the code amount of the motion vector.
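As a minimal illustration (not part of the specification; the tuple representation of vectors is an assumption), the differential motion vector is simply the component-wise difference:

```python
def differential_mv(mv, pmv):
    """Component-wise difference between the actual motion vector and
    the predicted motion vector, both given as (x, y) tuples."""
    return (mv[0] - pmv[0], mv[1] - pmv[1])
```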
In addition, standardization of Future Video Coding (FVC), the next video coding standard, is under way, and further improvements in compression efficiency are being studied.

A video encoding device that reduces the code amount of the difference vector encoded in motion vector prediction is also known (see, for example, Patent Document 1).

JP 2012-235278 A

In HEVC, the syntax of the differential motion vector in inter prediction is encoded by CABAC. However, the code amount of the flags indicating the sign (positive or negative) of each component of the differential motion vector is not sufficiently compressed.

Note that this problem arises not only in video coding that adopts HEVC but also in other video coding that uses inter prediction.

In one aspect, an object of the present invention is to reduce the code amount associated with motion vectors in video encoding.

In one proposal, a video encoding device includes a first encoding unit, a determination unit, a generation unit, and a second encoding unit.

The first encoding unit encodes an encoding target block in an image included in a video. The determination unit generates a first differential motion vector from the motion vector of the encoding target block and the predicted motion vector of the encoding target block.

Next, the determination unit generates a plurality of differential motion vector candidates, including the first differential motion vector, by changing the sign indicating whether each component of the first differential motion vector is positive or negative, and determines a second differential motion vector from among those candidates. In doing so, the determination unit uses the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of the plurality of reference block candidates indicated by the plurality of differential motion vector candidates.

The generation unit generates match information indicating whether the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector. The second encoding unit encodes the absolute values of the components of the first differential motion vector together with the match information.

According to the embodiments, the code amount associated with motion vectors in video encoding can be reduced.
FIG. 1 is a functional configuration diagram of a video encoding device.
FIG. 2 is a flowchart of a video encoding process.
FIG. 3 is a functional configuration diagram of a video decoding device.
FIG. 4 is a flowchart of a video decoding process.
FIG. 5 is a diagram illustrating a differential motion vector.
FIG. 6 is a diagram illustrating differential motion vector candidates and reference block candidates.
FIG. 7 is a functional configuration diagram showing a specific example of a video encoding device.
FIG. 8 is a first functional configuration diagram of an arithmetic coding unit.
FIG. 9 is a diagram illustrating a first calculation method of an estimated differential motion vector.
FIG. 10 is a diagram illustrating a second calculation method of an estimated differential motion vector.
FIG. 11 is a diagram illustrating a third calculation method of an estimated differential motion vector.
FIG. 12 is a diagram illustrating a calculation method of a predicted pixel value.
FIG. 13 is a flowchart showing a specific example of a video encoding process.
FIG. 14 is a flowchart of a first sign flag generation process.
FIG. 15 is a flowchart of a second sign flag generation process.
FIG. 16 is a flowchart of a third sign flag generation process.
FIG. 17 is a functional configuration diagram showing a specific example of a video decoding device.
FIG. 18 is a first functional configuration diagram of an arithmetic decoding unit.
FIG. 19 is a flowchart showing a specific example of a video decoding process.
FIG. 20 is a flowchart of a first motion vector generation process.
FIG. 21 is a flowchart of a second motion vector generation process.
FIG. 22 is a flowchart of a third motion vector generation process.
FIG. 23 is a diagram illustrating motion vector candidates and reference block candidates.
FIG. 24 is a second functional configuration diagram of an arithmetic coding unit.
FIG. 25 is a flowchart of a fourth sign flag generation process.
FIG. 26 is a flowchart of a fifth sign flag generation process.
FIG. 27 is a flowchart of a sixth sign flag generation process.
FIG. 28 is a second functional configuration diagram of an arithmetic decoding unit.
FIG. 29 is a flowchart of a fourth motion vector generation process.
FIG. 30 is a flowchart of a fifth motion vector generation process.
FIG. 31 is a flowchart of a sixth motion vector generation process.
FIG. 32 is a functional configuration diagram of a video encoding system.
FIG. 33 is a configuration diagram of an information processing apparatus.
Hereinafter, embodiments will be described in detail with reference to the drawings.
The processing procedure of CABAC adopted in HEVC is as follows.
(1) Binarization
Among the syntax elements to be encoded, multi-valued syntax elements are converted into binary signals (bins) to be arithmetically encoded.
(2) Context modeling
For each bin of a syntax element, an occurrence probability model used for arithmetic coding is determined according to other syntax elements or the values of syntax elements adjacent to the region being encoded. In this case, the occurrence probabilities of the values logic "0" and logic "1" are variable.
On the other hand, for bins whose occurrence probabilities are difficult to estimate, a bypass mode is selected in which the occurrence probabilities of logic "0" and logic "1" are fixed at 0.5.
(3) Arithmetic coding
Based on the occurrence probabilities of the symbols, the real-number line from 0 (inclusive) to 1 (exclusive) is successively subdivided into intervals, and a binary codeword is generated from a real number indicating the finally obtained interval.
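The interval-subdivision step in (3) can be sketched as follows. This is a toy model with hypothetical names, not a conforming CABAC engine: renormalization, context updates, and codeword emission are omitted.

```python
def arithmetic_encode_interval(bins, p0=0.5):
    """Toy interval subdivision: starting from [0, 1), each bin narrows
    the interval according to the probability p0 of the value 0.
    Returns the final (low, high) interval; a real CABAC engine would
    emit the shortest binary fraction inside it and renormalize."""
    low, width = 0.0, 1.0
    for b in bins:
        if b == 0:
            width *= p0              # keep the lower sub-interval
        else:
            low += width * p0        # skip past the '0' sub-interval
            width *= (1.0 - p0)
    return low, low + width
```

With p0 fixed at 0.5, as in the bypass mode, each bin halves the interval, so no compression occurs.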
In arithmetic coding in the bypass mode, the code amount is not compressed. However, since the probability estimation process is skipped and the amount of computation of the interval subdivision is reduced, the encoding process is sped up and the memory area for storing occurrence probabilities is reduced.

The syntax of the differential motion vector in HEVC includes flags indicating the sign (positive or negative) of the x component and the y component of the differential motion vector. The flag indicating the sign of the x component is mvd_sign_flag[0], and the flag indicating the sign of the y component is mvd_sign_flag[1]. The values of these flags are specified as follows.
mvd_sign_flag[0]
 0: the sign of the x component of the differential motion vector is positive
 1: the sign of the x component of the differential motion vector is negative
mvd_sign_flag[1]
 0: the sign of the y component of the differential motion vector is positive
 1: the sign of the y component of the differential motion vector is negative
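As a sketch, the flag values listed above can be derived as follows (in the actual HEVC syntax the flag is only present for nonzero components; that condition is omitted here):

```python
def mvd_sign_flags(mvd_x, mvd_y):
    """Returns [mvd_sign_flag[0], mvd_sign_flag[1]]: 0 for a positive
    component, 1 for a negative one, per the values quoted above."""
    return [0 if mvd_x > 0 else 1, 0 if mvd_y > 0 else 1]
```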
However, since it is difficult to predict the probability that the sign of the x component or the y component of the differential motion vector will be positive or negative, CABAC encodes the flags indicating the signs of the x and y components in the bypass mode. For this reason, the code amount of these flags is not compressed.

FIG. 1 shows a functional configuration example of the video encoding device of the embodiment. The video encoding device 101 of FIG. 1 includes a first encoding unit 111, a determination unit 112, a generation unit 113, and a second encoding unit 114.

FIG. 2 is a flowchart showing an example of the video encoding process performed by the video encoding device 101 of FIG. 1. First, the first encoding unit 111 encodes an encoding target block in an image included in the video (step 201). Next, the determination unit 112 generates a first differential motion vector from the motion vector of the encoding target block and the predicted motion vector of the encoding target block (step 202).

Next, the determination unit 112 generates a plurality of differential motion vector candidates, including the first differential motion vector, by changing the sign indicating whether each component of the first differential motion vector is positive or negative (step 203). The determination unit 112 then determines a second differential motion vector from among those candidates (step 204). In doing so, the determination unit 112 uses the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of the plurality of reference block candidates indicated by the plurality of differential motion vector candidates.

Next, the generation unit 113 generates match information indicating whether the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector (step 205). The second encoding unit 114 then encodes the absolute values of the components of the first differential motion vector together with the match information (step 206).
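The encoder-side derivation of the match information can be sketched in Python. This is illustrative only, with hypothetical names; the estimation of the second differential motion vector from locally decoded pixels is assumed to have already been performed, and zero components are treated as positive.

```python
def sign(v):
    """Sign convention assumed in this sketch: zero counts as positive."""
    return 1 if v >= 0 else -1

def encoder_side_signs(mvd1, mvd2):
    """mvd1: first (actual) differential motion vector (x, y);
    mvd2: second differential motion vector estimated from locally
    decoded pixels. Returns the component absolute values and the
    per-component match information that would be entropy-coded."""
    match = [sign(a) == sign(b) for a, b in zip(mvd1, mvd2)]
    abs_mvd = [abs(a) for a in mvd1]
    return abs_mvd, match
```

Because the decoder can repeat the same estimation, only the absolute values and the match bits, whose statistics are skewed toward "match" and hence cheap to code, need to be transmitted.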
FIG. 3 shows a functional configuration example of the video decoding device of the embodiment. The video decoding device 301 of FIG. 3 includes a first decoding unit 311, a determination unit 312, a generation unit 313, and a second decoding unit 314.

FIG. 4 is a flowchart showing an example of the video decoding process performed by the video decoding device 301 of FIG. 3. First, the first decoding unit 311 decodes the encoded video and restores the absolute values of the components of the first differential motion vector for a decoding target block in an image included in the encoded video (step 401). At this time, the first decoding unit 311 also restores, together with those absolute values, the match information indicating whether the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector.

Next, the determination unit 312 generates a plurality of differential motion vector candidates by attaching signs to the absolute values of the components of the first differential motion vector (step 402). The determination unit 312 then determines a second differential motion vector from among those candidates (step 403). In doing so, the determination unit 312 uses the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of the plurality of reference block candidates indicated by the plurality of differential motion vector candidates.

Next, the generation unit 313 generates the first differential motion vector from the second differential motion vector based on the match information (step 404). The generation unit 313 then generates the motion vector of the decoding target block from the first differential motion vector and the predicted motion vector of the decoding target block (step 405).
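The decoder-side reconstruction (steps 404 and 405) can be sketched as follows, with hypothetical names. The second differential motion vector is re-estimated by the decoder from decoded pixels; the match information then fixes the sign of each component, and zero components are treated as positive, mirroring the encoder sketch.

```python
def decoder_side_mv(abs_mvd, match, mvd2, pmv):
    """abs_mvd and match are decoded from the stream; mvd2 is the second
    differential motion vector re-estimated from decoded pixels; pmv is
    the predicted motion vector. Reconstructs the first differential
    motion vector and adds the prediction to recover the motion vector."""
    signs = [(1 if c >= 0 else -1) * (1 if m else -1)
             for c, m in zip(mvd2, match)]
    mvd1 = [a * s for a, s in zip(abs_mvd, signs)]
    return tuple(m + p for m, p in zip(mvd1, pmv))
```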
Next, the second decoding unit 314 decodes the coefficient information of the decoding target block using the motion vector of the decoding target block (step 406).

The video encoding device 101 of FIG. 1 and the video decoding device 301 of FIG. 3 can reduce the code amount associated with motion vectors in video encoding.

FIG. 5 shows an example of the differential motion vector. The reference image 501 of FIG. 5 is a locally decoded image of an image encoded before the encoding target image. The reference image 501 includes a block 511 located at the same position as the encoding target block in the encoding target image, and a reference block 512 for the encoding target block.

The motion vector 521 of the encoding target block is a vector pointing from the block 511 to the reference block 512, and is obtained by a motion search process. The prediction vector 522, on the other hand, is derived from the motion vectors of the blocks surrounding the encoding target block, and the differential motion vector 523 represents the difference between the motion vector 521 and the prediction vector 522.

FIG. 6 shows examples of differential motion vector candidates and reference block candidates. In the reference image 501 of FIG. 6, the direction from left to right is the positive direction of the x coordinate, and the direction from top to bottom is the positive direction of the y coordinate.
In this case, based on the four combinations of the positive and negative signs of the x component and the y component of the differential motion vector 523 in FIG. 5, differential motion vector candidates 611 to 614 are generated. The four combinations of the sign of the x component and the sign of the y component are as follows.
(sign of x component, sign of y component) = (positive, positive)
(sign of x component, sign of y component) = (positive, negative)
(sign of x component, sign of y component) = (negative, negative)
(sign of x component, sign of y component) = (negative, positive)
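The four sign combinations listed above can be enumerated directly from the component absolute values (an illustrative helper, not part of the specification):

```python
def mvd_candidates(abs_x, abs_y):
    """Attach every sign combination to the component absolute values;
    the order matches the list above: (+,+), (+,-), (-,-), (-,+)."""
    return [( abs_x,  abs_y),
            ( abs_x, -abs_y),
            (-abs_x, -abs_y),
            (-abs_x,  abs_y)]
```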
 差分動きベクトル候補611は、予測ベクトル522の終点から参照ブロック候補601へ向かうベクトルであり、差分動きベクトル候補611のx成分及びy成分は正である。差分動きベクトル候補612は、予測ベクトル522の終点から参照ブロック候補602へ向かうベクトルであり、差分動きベクトル候補612のx成分は正であり、y成分は負である。 The difference motion vector candidate 611 is a vector from the end point of the prediction vector 522 toward the reference block candidate 601, and the x component and the y component of the difference motion vector candidate 611 are positive. The difference motion vector candidate 612 is a vector from the end point of the prediction vector 522 toward the reference block candidate 602, and the x component of the difference motion vector candidate 612 is positive and the y component is negative.
 差分動きベクトル候補613は、予測ベクトル522の終点から参照ブロック候補603へ向かうベクトルであり、差分動きベクトル候補613のx成分及びy成分は負である。差分動きベクトル候補614は、予測ベクトル522の終点から参照ブロック候補604へ向かうベクトルであり、差分動きベクトル候補614のx成分は負であり、y成分は正である。 The difference motion vector candidate 613 is a vector from the end point of the prediction vector 522 toward the reference block candidate 603, and the x component and the y component of the difference motion vector candidate 613 are negative. The difference motion vector candidate 614 is a vector from the end point of the prediction vector 522 toward the reference block candidate 604, and the x component of the difference motion vector candidate 614 is negative and the y component is positive.
 図7は、図1の映像符号化装置101の具体例を示している。図7の映像符号化装置701は、ブロック分割部711、予測誤差生成部712、直交変換部713、量子化部714、算術符号化部715、及び符号化制御部716を含む。さらに、映像符号化装置701は、フレーム内予測部717、フレーム間予測部718、選択部719、逆量子化部720、逆直交変換部721、再構成部722、ループ内フィルタ723、及びメモリ724を含む。 FIG. 7 shows a specific example of the video encoding device 101 of FIG. 1. The video encoding device 701 of FIG. 7 includes a block division unit 711, a prediction error generation unit 712, an orthogonal transform unit 713, a quantization unit 714, an arithmetic coding unit 715, and an encoding control unit 716. The video encoding device 701 further includes an intra-frame prediction unit 717, an inter-frame prediction unit 718, a selection unit 719, an inverse quantization unit 720, an inverse orthogonal transform unit 721, a reconstruction unit 722, an in-loop filter 723, and a memory 724.
 予測誤差生成部712、直交変換部713、量子化部714、フレーム内予測部717、フレーム間予測部718、選択部719、逆量子化部720、逆直交変換部721、再構成部722、及びループ内フィルタ723は、図1の第1符号化部111に対応する。 The prediction error generation unit 712, the orthogonal transform unit 713, the quantization unit 714, the intra-frame prediction unit 717, the inter-frame prediction unit 718, the selection unit 719, the inverse quantization unit 720, the inverse orthogonal transform unit 721, the reconstruction unit 722, and the in-loop filter 723 correspond to the first encoding unit 111 in FIG. 1.
 映像符号化装置701は、例えば、ハードウェア回路として実装することができる。この場合、映像符号化装置701の各構成要素を個別の回路として実装してもよく、1つの集積回路として実装してもよい。 The video encoding device 701 can be implemented, for example, as a hardware circuit. In this case, each component of the video encoding device 701 may be implemented as an individual circuit, or the components may be implemented as a single integrated circuit.
 映像符号化装置701は、入力される符号化対象映像を符号化し、符号化映像を符号化ストリームとして出力する。映像符号化装置701は、符号化ストリームを、通信ネットワークを介して図3の映像復号装置301へ送信することができる。 The video encoding device 701 encodes the input video to be encoded, and outputs the encoded video as an encoded stream. The video encoding device 701 can transmit the encoded stream to the video decoding device 301 in FIG. 3 via a communication network.
 符号化対象映像は、複数の時刻それぞれに対応する複数の画像を含む。各時刻の画像は、符号化対象画像に対応し、ピクチャ又はフレームと呼ばれることもある。各画像は、カラー画像であってもよく、モノクロ画像であってもよい。カラー画像の場合、画素値はRGB形式であってもよく、YUV形式であってもよい。 The encoding target video includes a plurality of images corresponding to a plurality of times. The image at each time corresponds to the image to be encoded and may be called a picture or a frame. Each image may be a color image or a monochrome image. In the case of a color image, the pixel value may be in RGB format or YUV format.
 ブロック分割部711は、符号化対象画像を複数のブロックに分割し、符号化対象ブロックの原画像を、予測誤差生成部712、フレーム内予測部717、及びフレーム間予測部718へ出力する。 The block division unit 711 divides the encoding target image into a plurality of blocks, and outputs the original image of the encoding target block to the prediction error generation unit 712, the intra-frame prediction unit 717, and the inter-frame prediction unit 718.
 フレーム内予測部717は、符号化対象ブロックに対するイントラ予測を行って、イントラ予測の予測画像を選択部719へ出力する。フレーム間予測部718は、符号化対象ブロックに対するインター予測を行って、インター予測の予測画像を選択部719へ出力する。このとき、フレーム間予測部718は、動き探索処理によって符号化対象ブロックに対する動きベクトルを求め、求めた動きベクトルを算術符号化部715へ出力する。 The intra-frame prediction unit 717 performs intra prediction on the encoding target block, and outputs a prediction image of intra prediction to the selection unit 719. The inter-frame prediction unit 718 performs inter prediction on the encoding target block, and outputs a predicted image of inter prediction to the selection unit 719. At this time, the inter-frame prediction unit 718 obtains a motion vector for the encoding target block by the motion search process, and outputs the obtained motion vector to the arithmetic coding unit 715.
 選択部719は、フレーム内予測部717又はフレーム間予測部718のいずれかが出力する予測画像を選択して、予測誤差生成部712及び再構成部722へ出力する。予測誤差生成部712は、選択部719が出力する予測画像と、符号化対象ブロックの原画像との差分を、予測誤差として直交変換部713へ出力する。 The selection unit 719 selects a prediction image output by either the intra-frame prediction unit 717 or the inter-frame prediction unit 718, and outputs the prediction image to the prediction error generation unit 712 and the reconstruction unit 722. The prediction error generation unit 712 outputs the difference between the prediction image output from the selection unit 719 and the original image of the encoding target block to the orthogonal transformation unit 713 as a prediction error.
 直交変換部713は、予測誤差生成部712が出力する予測誤差に対して直交変換を行い、変換係数を量子化部714へ出力する。量子化部714は、変換係数を量子化し、量子化係数を算術符号化部715及び逆量子化部720へ出力する。 The orthogonal transform unit 713 performs orthogonal transform on the prediction error output from the prediction error generation unit 712, and outputs a transform coefficient to the quantization unit 714. The quantization unit 714 quantizes the transform coefficient and outputs the quantization coefficient to the arithmetic coding unit 715 and the inverse quantization unit 720.
 算術符号化部715は、量子化部714が出力する量子化係数とフレーム間予測部718が出力する動きベクトルとを、CABACによって符号化し、符号化ストリームを出力する。そして、算術符号化部715は、CABACによって発生する情報量を符号化制御部716へ出力する。 The arithmetic encoding unit 715 encodes the quantized coefficient output from the quantizing unit 714 and the motion vector output from the inter-frame prediction unit 718 using CABAC, and outputs an encoded stream. Then, the arithmetic encoding unit 715 outputs the amount of information generated by CABAC to the encoding control unit 716.
 逆量子化部720は、量子化部714が出力する量子化係数に対して逆量子化を行って、逆量子化係数を生成し、生成した逆量子化係数を逆直交変換部721へ出力する。逆直交変換部721は、逆量子化係数に対して逆直交変換を行って、予測誤差を生成し、生成した予測誤差を再構成部722へ出力する。 The inverse quantization unit 720 performs inverse quantization on the quantization coefficients output from the quantization unit 714 to generate inverse quantization coefficients, and outputs the generated inverse quantization coefficients to the inverse orthogonal transform unit 721. The inverse orthogonal transform unit 721 performs an inverse orthogonal transform on the inverse quantization coefficients to generate a prediction error, and outputs the generated prediction error to the reconstruction unit 722.
 再構成部722は、選択部719が出力する予測画像と、逆直交変換部721が出力する予測誤差とを加算して、再構成画像を生成し、生成した再構成画像をループ内フィルタ723及びメモリ724へ出力する。ループ内フィルタ723は、再構成部722が出力する再構成画像に対して、デブロッキングフィルタ等のフィルタ処理を行って、局所復号画像を生成し、生成した局所復号画像をメモリ724へ出力する。 The reconstruction unit 722 adds the predicted image output from the selection unit 719 and the prediction error output from the inverse orthogonal transform unit 721 to generate a reconstructed image, and outputs the generated reconstructed image to the in-loop filter 723 and the memory 724. The in-loop filter 723 applies filtering such as a deblocking filter to the reconstructed image output from the reconstruction unit 722 to generate a locally decoded image, and outputs the generated locally decoded image to the memory 724.
 メモリ724は、再構成部722が出力する再構成画像を局所復号画像として記憶するとともに、ループ内フィルタ723が出力する局所復号画像を記憶する。メモリ724が記憶する局所復号画像は、フレーム内予測部717、フレーム間予測部718、及び算術符号化部715へ出力される。フレーム内予測部717は、局所復号画像に含まれる局所復号画素値を、後続するブロックに対する参照画素値として用い、フレーム間予測部718は、局所復号画像を後続する画像に対する参照画像として用いる。 The memory 724 stores the reconstructed image output from the reconstructing unit 722 as a locally decoded image and also stores the locally decoded image output from the in-loop filter 723. The locally decoded image stored in the memory 724 is output to the intra-frame prediction unit 717, the inter-frame prediction unit 718, and the arithmetic coding unit 715. The intra-frame prediction unit 717 uses the local decoded pixel value included in the local decoded image as a reference pixel value for the subsequent block, and the inter-frame prediction unit 718 uses the local decoded image as a reference image for the subsequent image.
 符号化制御部716は、算術符号化部715が出力する情報量が目標情報量になるように、量子化パラメータ(QP)を決定し、決定したQPを量子化部714へ出力する。 The encoding control unit 716 determines a quantization parameter (QP) so that the information amount output from the arithmetic encoding unit 715 becomes the target information amount, and outputs the determined QP to the quantization unit 714.
 図8は、図7の算術符号化部715の第1の機能的構成例を示している。図8の算術符号化部715は、決定部801、生成部802、及び符号化部803を含む。決定部801は、差分動きベクトル計算部811、差分動きベクトル候補計算部812、及び推定差分動きベクトル計算部813を含む。決定部801、生成部802、及び符号化部803は、図1の決定部112、生成部113、及び第2符号化部114にそれぞれ対応する。 FIG. 8 shows a first functional configuration example of the arithmetic encoding unit 715 of FIG. 7. The arithmetic encoding unit 715 of FIG. 8 includes a determination unit 801, a generation unit 802, and an encoding unit 803. The determination unit 801 includes a differential motion vector calculation unit 811, a differential motion vector candidate calculation unit 812, and an estimated differential motion vector calculation unit 813. The determination unit 801, the generation unit 802, and the encoding unit 803 correspond to the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG. 1, respectively.
 差分動きベクトル計算部811は、フレーム間予測部718が出力する動きベクトルと符号化対象ブロックに対する予測動きベクトルとの差分を表す差分動きベクトルを計算する。予測動きベクトルは、フレーム間予測部718又は差分動きベクトル計算部811によって、符号化対象ブロックの周囲のブロックに対する動きベクトルから求められる。 The difference motion vector calculation unit 811 calculates a difference motion vector representing a difference between the motion vector output from the inter-frame prediction unit 718 and the prediction motion vector for the encoding target block. The predicted motion vector is obtained from the motion vectors for the blocks around the encoding target block by the inter-frame prediction unit 718 or the differential motion vector calculation unit 811.
 差分動きベクトル候補計算部812は、差分動きベクトルのx成分の正の符号及び負の符号と、y成分の正の符号及び負の符号との4通りの組み合わせに基づいて、4個の差分動きベクトル候補を計算する。 The differential motion vector candidate calculation unit 812 calculates four differential motion vector candidates based on the four combinations of the positive and negative signs of the x component and the positive and negative signs of the y component of the differential motion vector.
 推定差分動きベクトル計算部813は、4個の差分動きベクトル候補それぞれに対応する4個の参照ブロック候補を求める。そして、推定差分動きベクトル計算部813は、符号化対象ブロックに隣接する符号化済み画素の局所復号画素値と、4個の参照ブロック候補それぞれに含まれる符号化済み画素の局所復号画素値とを用いて、推定差分動きベクトルを計算する。 The estimated differential motion vector calculation unit 813 obtains four reference block candidates corresponding to the four differential motion vector candidates. Then, the estimated differential motion vector calculation unit 813 calculates an estimated differential motion vector using the locally decoded pixel values of the encoded pixels adjacent to the encoding target block and the locally decoded pixel values of the encoded pixels included in each of the four reference block candidates.
 この場合、推定差分動きベクトルは、フレーム間予測部718における動き探索処理とは異なる計算方法によって計算されるため、差分動きベクトル計算部811が計算する差分動きベクトルと必ずしも一致するとは限らない。 In this case, since the estimated difference motion vector is calculated by a calculation method different from the motion search process in the inter-frame prediction unit 718, it does not always match the difference motion vector calculated by the difference motion vector calculation unit 811.
 生成部802は、差分動きベクトルの各成分の符号が推定差分動きベクトルの各成分の符号と一致するか否かを示す符号フラグを生成する。符号化部803は、差分動きベクトルの各成分の絶対値及び符号フラグと、量子化部714が出力する量子化係数とを、CABACのコンテキストモデリングにより、可変の生起確率を用いて符号化する。各成分の符号フラグは、一致情報に対応する。 The generation unit 802 generates a sign flag indicating whether or not the sign of each component of the differential motion vector matches the sign of the corresponding component of the estimated differential motion vector. The encoding unit 803 encodes the absolute value and sign flag of each component of the differential motion vector, together with the quantization coefficients output from the quantization unit 714, by CABAC context modeling using variable occurrence probabilities. The sign flag of each component corresponds to the match information.
 符号フラグとしては、上述したmvd_sign_flag[0]及びmvd_sign_flag[1]を用いることができる。ただし、これらの符号フラグの値は、以下のように変更される。 As the sign flag, mvd_sign_flag [0] and mvd_sign_flag [1] described above can be used. However, the values of these sign flags are changed as follows.
mvd_sign_flag[0]
 0:差分動きベクトルのx成分の符号が、推定差分動きベクトルのx成分の符号と同じである場合
 1:差分動きベクトルのx成分の符号が、推定差分動きベクトルのx成分の符号と異なる場合
mvd_sign_flag[1]
 0:差分動きベクトルのy成分の符号が、推定差分動きベクトルのy成分の符号と同じである場合
 1:差分動きベクトルのy成分の符号が、推定差分動きベクトルのy成分の符号と異なる場合
mvd_sign_flag[0]
 0: the sign of the x component of the differential motion vector is the same as the sign of the x component of the estimated differential motion vector
 1: the sign of the x component of the differential motion vector differs from the sign of the x component of the estimated differential motion vector
mvd_sign_flag[1]
 0: the sign of the y component of the differential motion vector is the same as the sign of the y component of the estimated differential motion vector
 1: the sign of the y component of the differential motion vector differs from the sign of the y component of the estimated differential motion vector
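The flag values above can be sketched as follows. This is an illustrative sketch only; in particular, treating a zero component as having a positive sign is an assumption, since the text here does not spell out the zero case.

```python
def mvd_sign_flags(mvd, est_mvd):
    """Return [mvd_sign_flag[0], mvd_sign_flag[1]]: 0 when the sign of a
    component of the differential motion vector matches the sign of the
    corresponding component of the estimated differential motion vector,
    1 otherwise."""
    flags = []
    for c, e in zip(mvd, est_mvd):
        same = (c >= 0) == (e >= 0)  # zero treated as positive (assumption)
        flags.append(0 if same else 1)
    return flags

print(mvd_sign_flags((3, -4), (3, 4)))  # [0, 1]
```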
 このように、符号フラグは、差分動きベクトルの各成分の正又は負の符号を直接示すのではなく、差分動きベクトルの各成分の符号と推定差分動きベクトルの各成分の符号との同異を示している。これにより、2つの符号が同じであることを示す値“0”の生起確率を、2つの符号が異なることを示す値“1”の生起確率よりも高くすることが可能になる。したがって、コンテキストモデリングを用いた算術符号化により符号フラグを符号化することができ、符号フラグの符号量が削減される。 In this way, the sign flag does not directly indicate the positive or negative sign of each component of the differential motion vector, but instead indicates whether the sign of each component of the differential motion vector agrees with the sign of the corresponding component of the estimated differential motion vector. This makes it possible for the occurrence probability of the value "0", indicating that the two signs are the same, to be higher than the occurrence probability of the value "1", indicating that the two signs differ. Therefore, the sign flag can be encoded by arithmetic coding using context modeling, and the code amount of the sign flag is reduced.
 図9は、推定差分動きベクトルの第1の計算方法の例を示している。符号化対象画像901の領域902内には、符号化対象ブロック903が含まれている。まず、推定差分動きベクトル計算部813は、領域902内において符号化対象ブロック903に隣接する符号化済み画素911の局所復号画素値を取得する。 FIG. 9 shows an example of a first calculation method of the estimated difference motion vector. An encoding target block 903 is included in the area 902 of the encoding target image 901. First, the estimated difference motion vector calculation unit 813 acquires the local decoded pixel value of the encoded pixel 911 adjacent to the encoding target block 903 in the region 902.
 次に、推定差分動きベクトル計算部813は、各参照ブロック候補904を領域902内の符号化対象ブロック903の位置に重ねて配置し、参照ブロック候補904に含まれる符号化済み画素の局所復号画素値を取得する。例えば、推定差分動きベクトル計算部813は、符号化済み画素911に隣接する参照ブロック候補904内の符号化済み画素912の局所復号画素値を取得することができる。 Next, the estimated differential motion vector calculation unit 813 places each reference block candidate 904 so as to overlap the position of the encoding target block 903 in the region 902, and acquires the locally decoded pixel values of the encoded pixels included in the reference block candidate 904. For example, the estimated differential motion vector calculation unit 813 can acquire the locally decoded pixel value of the encoded pixel 912 in the reference block candidate 904 adjacent to the encoded pixel 911.
 次に、推定差分動きベクトル計算部813は、符号化済み画素911の局所復号画素値の統計値A0と、i番目(i=1~4)の参照ブロック候補に対する符号化済み画素912の局所復号画素値の統計値Aiとを計算する。統計値A0及び統計値Aiとしては、複数の局所復号画素値の平均値、中央値、最頻値等を用いることができる。 Next, the estimated differential motion vector calculation unit 813 calculates a statistic A0 of the locally decoded pixel values of the encoded pixels 911 and a statistic Ai of the locally decoded pixel values of the encoded pixels 912 for the i-th (i = 1 to 4) reference block candidate. As the statistics A0 and Ai, the mean, median, mode, or the like of a plurality of locally decoded pixel values can be used.
 そして、推定差分動きベクトル計算部813は、統計値A0と統計値Aiとを比較することで、推定差分動きベクトルを決定する。例えば、推定差分動きベクトル計算部813は、統計値A1~統計値A4のうち統計値A0に最も近い統計値を有する参照ブロック候補を求め、求めた参照ブロック候補を示す差分動きベクトル候補を、推定差分動きベクトルに決定することができる。 Then, the estimated differential motion vector calculation unit 813 determines the estimated differential motion vector by comparing the statistic A0 with the statistics Ai. For example, the estimated differential motion vector calculation unit 813 can obtain the reference block candidate whose statistic among A1 to A4 is closest to A0, and determine the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector.
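The first calculation method can be sketched as follows. This is an illustrative sketch only; the mean is used as the statistic, though the text notes that the median or mode could be used instead, and the pixel values are hypothetical.

```python
def pick_by_statistic(neighbor_values, candidate_values):
    """neighbor_values: locally decoded values of the pixels 911 adjacent to
    the encoding target block; candidate_values[i]: values of the pixels 912
    in the i-th reference block candidate. Returns the index of the candidate
    whose statistic Ai is closest to A0."""
    mean = lambda v: sum(v) / len(v)
    a0 = mean(neighbor_values)
    ai = [mean(v) for v in candidate_values]
    return min(range(len(ai)), key=lambda i: abs(ai[i] - a0))

idx = pick_by_statistic([100, 102, 98],
                        [[120, 118], [101, 99], [80, 82], [140, 138]])
print(idx)  # 1
```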
 図10は、推定差分動きベクトルの第2の計算方法の例を示している。推定差分動きベクトル計算部813は、符号化済み画素911と符号化済み画素912のペア1001について、2つの画素の局所復号画素値の差分絶対値を計算する。そして、推定差分動きベクトル計算部813は、符号化対象ブロック903の上側の境界及び左側の境界上の複数のペアに対する差分絶対値を累積して、差分絶対値和を計算する。 FIG. 10 shows an example of a second calculation method of the estimated difference motion vector. The estimated difference motion vector calculation unit 813 calculates the absolute difference value of the local decoded pixel values of the two pixels for the pair 1001 of the encoded pixel 911 and the encoded pixel 912. Then, the estimated difference motion vector calculation unit 813 calculates the difference absolute value sum by accumulating the difference absolute values for a plurality of pairs on the upper boundary and the left boundary of the encoding target block 903.
 次に、推定差分動きベクトル計算部813は、4個の参照ブロック候補それぞれに対する4個の差分絶対値和を比較することで、推定差分動きベクトルを決定する。例えば、推定差分動きベクトル計算部813は、最も小さい差分絶対値和を有する参照ブロック候補を求め、求めた参照ブロック候補を示す差分動きベクトル候補を、推定差分動きベクトルに決定することができる。 Next, the estimated difference motion vector calculation unit 813 determines the estimated difference motion vector by comparing the four difference absolute value sums with respect to each of the four reference block candidates. For example, the estimated difference motion vector calculation unit 813 can obtain a reference block candidate having the smallest sum of absolute differences, and can determine a difference motion vector candidate indicating the obtained reference block candidate as an estimated difference motion vector.
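The second calculation method can be sketched as follows, under the assumption that the boundary pixel values are given as flat lists in matching order; the values are hypothetical.

```python
def pick_by_sad(neighbor_values, candidate_values):
    """Return the index of the reference block candidate whose adjacent
    encoded pixels give the smallest sum of absolute differences (SAD)
    against the encoded pixels surrounding the encoding target block."""
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(range(len(candidate_values)),
               key=lambda i: sad(neighbor_values, candidate_values[i]))

idx = pick_by_sad([100, 102, 98],
                  [[120, 90, 70], [101, 103, 97], [0, 0, 0], [255, 255, 255]])
print(idx)  # 1
```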
 図11は、推定差分動きベクトルの第3の計算方法の例を示している。推定差分動きベクトル計算部813は、符号化対象ブロック903の境界付近における4個の符号化済み画素の組み合わせ1101を選択する。組み合わせ1101は、符号化済み画素1111~符号化済み画素1114を含む。 FIG. 11 shows an example of the third calculation method of the estimated difference motion vector. The estimated difference motion vector calculation unit 813 selects a combination 1101 of four encoded pixels in the vicinity of the boundary of the encoding target block 903. The combination 1101 includes an encoded pixel 1111 to an encoded pixel 1114.
 符号化済み画素1111及び符号化済み画素1112は、符号化対象画像901の領域902に含まれる画素であり、符号化済み画素1112は符号化対象ブロック903の境界に隣接し、符号化済み画素1111は符号化済み画素1112に隣接する。一方、符号化済み画素1113及び符号化済み画素1114は、参照ブロック候補904に含まれる画素であり、符号化済み画素1113は符号化対象ブロック903の境界に隣接し、符号化済み画素1114は符号化済み画素1113に隣接する。 The encoded pixels 1111 and 1112 are pixels included in the region 902 of the encoding target image 901; the encoded pixel 1112 is adjacent to the boundary of the encoding target block 903, and the encoded pixel 1111 is adjacent to the encoded pixel 1112. On the other hand, the encoded pixels 1113 and 1114 are pixels included in the reference block candidate 904; the encoded pixel 1113 is adjacent to the boundary of the encoding target block 903, and the encoded pixel 1114 is adjacent to the encoded pixel 1113.
 推定差分動きベクトル計算部813は、符号化済み画素1111及び符号化済み画素1112の局所復号画素値から、符号化対象ブロック903の境界上における予測画素値を計算する。また、推定差分動きベクトル計算部813は、符号化済み画素1113及び符号化済み画素1114の局所復号画素値から、符号化対象ブロック903の境界上における予測画素値を計算する。 The estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block 903 from the locally decoded pixel values of the encoded pixel 1111 and the encoded pixel 1112. Further, the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block 903 from the locally decoded pixel values of the encoded pixel 1113 and the encoded pixel 1114.
 図12は、符号化対象ブロック903の境界上における予測画素値の計算方法の例を示している。図12の横軸は、符号化対象画像901のy軸を表し、縦軸は、画素値を表す。 FIG. 12 shows an example of a method for calculating a predicted pixel value on the boundary of the encoding target block 903. The horizontal axis in FIG. 12 represents the y axis of the encoding target image 901, and the vertical axis represents the pixel value.
 推定差分動きベクトル計算部813は、符号化済み画素1111の局所復号画素値と符号化済み画素1112の局所復号画素値とを通る直線1201上で、符号化対象ブロック903の境界のy座標y1に対応する画素値p1を、予測画素値として求める。また、推定差分動きベクトル計算部813は、符号化済み画素1113の局所復号画素値と符号化済み画素1114の局所復号画素値とを通る直線1202上で、y1に対応する画素値p2を、予測画素値として求める。 The estimated differential motion vector calculation unit 813 obtains, as a predicted pixel value, the pixel value p1 corresponding to the y coordinate y1 of the boundary of the encoding target block 903 on the straight line 1201 passing through the locally decoded pixel value of the encoded pixel 1111 and the locally decoded pixel value of the encoded pixel 1112. Similarly, the estimated differential motion vector calculation unit 813 obtains, as a predicted pixel value, the pixel value p2 corresponding to y1 on the straight line 1202 passing through the locally decoded pixel value of the encoded pixel 1113 and the locally decoded pixel value of the encoded pixel 1114.
 次に、推定差分動きベクトル計算部813は、予測画素値p1と予測画素値p2との差分絶対値1203を計算する。そして、推定差分動きベクトル計算部813は、符号化対象ブロック903の上側の境界及び左側の境界上の複数の組み合わせに対する差分絶対値1203を累積して、差分絶対値和を計算する。 Next, the estimated difference motion vector calculation unit 813 calculates a difference absolute value 1203 between the predicted pixel value p1 and the predicted pixel value p2. Then, the estimated difference motion vector calculation unit 813 accumulates the difference absolute values 1203 for a plurality of combinations on the upper boundary and the left boundary of the encoding target block 903, and calculates a difference absolute value sum.
 次に、推定差分動きベクトル計算部813は、4個の参照ブロック候補それぞれに対する4個の差分絶対値和を比較することで、推定差分動きベクトルを決定する。例えば、推定差分動きベクトル計算部813は、最も小さい差分絶対値和を有する参照ブロック候補を求め、求めた参照ブロック候補を示す差分動きベクトル候補を、推定差分動きベクトルに決定することができる。 Next, the estimated difference motion vector calculation unit 813 determines the estimated difference motion vector by comparing the four difference absolute value sums with respect to each of the four reference block candidates. For example, the estimated difference motion vector calculation unit 813 can obtain a reference block candidate having the smallest sum of absolute differences, and can determine a difference motion vector candidate indicating the obtained reference block candidate as an estimated difference motion vector.
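The third calculation method can be sketched as below. The exact pixel geometry is an assumption not fixed by the text: pixel centers are taken to sit half a sample from the block boundary, so evaluating the straight line through the two pixels nearest the boundary at the boundary coordinate y1 amounts to a half-step extrapolation beyond the nearer pixel.

```python
def boundary_extrapolate(far, near):
    """Value at the block boundary of the straight line through the values
    of the two pixels nearest the boundary (e.g. 1111/1112 outside, or
    1114/1113 inside the candidate), assuming pixel centers lie half a
    sample from the boundary."""
    return (3 * near - far) / 2

def method3_cost(outer_pairs, inner_pairs):
    """Sum over boundary positions of |p1 - p2|, where p1 is extrapolated
    from the pixels outside the block and p2 from the pixels inside the
    reference block candidate."""
    cost = 0.0
    for (o_far, o_near), (i_far, i_near) in zip(outer_pairs, inner_pairs):
        cost += abs(boundary_extrapolate(o_far, o_near)
                    - boundary_extrapolate(i_far, i_near))
    return cost

print(method3_cost([(10, 20)], [(30, 20)]))  # 10.0
```

The candidate with the smallest such cost would then give the estimated differential motion vector, as in the second method.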
 推定差分動きベクトルの第1~第3の計算方法によれば、フレーム間予測部718における動き探索処理よりも少ない演算量で、差分動きベクトルと一致する可能性が高い推定差分動きベクトルを求めることができる。これにより、符号フラグにおける値“0”の生起確率を、値“1”の生起確率よりも高くすることが可能になる。 According to the first to third calculation methods, an estimated differential motion vector that is highly likely to match the differential motion vector can be obtained with a smaller amount of computation than the motion search processing in the inter-frame prediction unit 718. This makes it possible for the occurrence probability of the value "0" of the sign flag to be higher than the occurrence probability of the value "1".
 図13は、図7の映像符号化装置701が行う映像符号化処理の具体例を示すフローチャートである。まず、フレーム内予測部717は、符号化対象ブロックに対するイントラ予測を行い(ステップ1301)、フレーム間予測部718は、符号化対象ブロックに対するインター予測を行う(ステップ1302)。 FIG. 13 is a flowchart showing a specific example of the video encoding process performed by the video encoding device 701 in FIG. First, the intra-frame prediction unit 717 performs intra prediction on the encoding target block (step 1301), and the inter-frame prediction unit 718 performs inter prediction on the encoding target block (step 1302).
 次に、予測誤差生成部712、直交変換部713、及び量子化部714は、フレーム内予測部717又はフレーム間予測部718のいずれかが出力する予測画像を用いて、符号化対象ブロックを符号化し、量子化係数を生成する(ステップ1303)。そして、算術符号化部715の決定部801及び生成部802は、符号化対象ブロックの差分動きベクトルに対する符号フラグを生成する(ステップ1304)。 Next, the prediction error generation unit 712, the orthogonal transform unit 713, and the quantization unit 714 encode the encoding target block using the predicted image output from either the intra-frame prediction unit 717 or the inter-frame prediction unit 718, and generate quantization coefficients (step 1303). Then, the determination unit 801 and the generation unit 802 of the arithmetic encoding unit 715 generate sign flags for the differential motion vector of the encoding target block (step 1304).
 次に、映像符号化装置701は、符号化対象画像の符号化が終了したか否かを判定する(ステップ1305)。未処理のブロックが残っている場合(ステップ1305,NO)、映像符号化装置701は、次のブロックについてステップ1301以降の処理を繰り返す。 Next, the video encoding device 701 determines whether or not the encoding of the encoding target image has been completed (step 1305). When an unprocessed block remains (step 1305, NO), the video encoding device 701 repeats the processing after step 1301 for the next block.
 一方、符号化対象画像の符号化が終了した場合(ステップ1305,YES)、算術符号化部715の符号化部803は、量子化係数及び予測モード情報に対する可変長符号化を行う(ステップ1306)。予測モード情報には、差分動きベクトルの各成分の絶対値及び符号フラグが含まれる。 On the other hand, when the encoding of the encoding target image is finished (step 1305, YES), the encoding unit 803 of the arithmetic encoding unit 715 performs variable-length encoding of the quantization coefficients and the prediction mode information (step 1306). The prediction mode information includes the absolute value and sign flag of each component of the differential motion vector.
 次に、映像符号化装置701は、符号化対象映像の符号化が終了したか否かを判定する(ステップ1307)。未処理の画像が残っている場合(ステップ1307,NO)、映像符号化装置701は、次の画像についてステップ1301以降の処理を繰り返す。そして、符号化対象映像の符号化が終了した場合(ステップ1307,YES)、映像符号化装置701は、処理を終了する。 Next, the video encoding device 701 determines whether or not the encoding of the encoding target video has been completed (step 1307). When an unprocessed image remains (step 1307, NO), the video encoding device 701 repeats the processing after step 1301 for the next image. Then, when encoding of the encoding target video is completed (step 1307, YES), the video encoding device 701 ends the process.
 図14は、図13のステップ1304における第1の符号フラグ生成処理の例を示すフローチャートである。第1の符号フラグ生成処理では、図9に示した第1の計算方法によって、推定差分動きベクトルが計算される。 FIG. 14 is a flowchart showing an example of the first code flag generation process in step 1304 of FIG. In the first code flag generation process, the estimated difference motion vector is calculated by the first calculation method shown in FIG.
 まず、決定部801の差分動きベクトル計算部811は、動きベクトルと予測動きベクトルとから差分動きベクトルを計算する(ステップ1401)。そして、差分動きベクトル候補計算部812は、差分動きベクトルのx成分及びy成分の符号の4通りの組み合わせに基づいて、4個の差分動きベクトル候補を計算する(ステップ1402)。 First, the difference motion vector calculation unit 811 of the determination unit 801 calculates a difference motion vector from the motion vector and the predicted motion vector (step 1401). The difference motion vector candidate calculation unit 812 then calculates four difference motion vector candidates based on the four combinations of the x and y component codes of the difference motion vector (step 1402).
 次に、推定差分動きベクトル計算部813は、4個の差分動きベクトル候補それぞれが示す4個の参照ブロック候補を求める(ステップ1403)。 Next, the estimated difference motion vector calculation unit 813 obtains four reference block candidates indicated by each of the four difference motion vector candidates (step 1403).
 次に、推定差分動きベクトル計算部813は、符号化対象画像内において符号化対象ブロックに隣接する符号化済み画素の局所復号画素値の統計値A0を計算する(ステップ1404)。 Next, the estimated difference motion vector calculation unit 813 calculates a statistical value A0 of the locally decoded pixel value of the encoded pixel adjacent to the encoding target block in the encoding target image (step 1404).
 次に、推定差分動きベクトル計算部813は、各参照ブロック候補を符号化対象ブロックの位置に重ねて配置した場合に、ステップ1404で用いた符号化済み画素に隣接する参照ブロック候補内の符号化済み画素を特定する(ステップ1405)。そして、推定差分動きベクトル計算部813は、特定した符号化済み画素の局所復号画素値の統計値Ai(i=1~4)を計算する。 Next, when each reference block candidate is placed so as to overlap the position of the encoding target block, the estimated differential motion vector calculation unit 813 identifies the encoded pixels in the reference block candidate that are adjacent to the encoded pixels used in step 1404 (step 1405). Then, the estimated differential motion vector calculation unit 813 calculates the statistic Ai (i = 1 to 4) of the locally decoded pixel values of the identified encoded pixels.
 次に、推定差分動きベクトル計算部813は、統計値A1~統計値A4のうち統計値A0に最も近い統計値を有する参照ブロック候補を求め、求めた参照ブロック候補を示す差分動きベクトル候補を、推定差分動きベクトルに決定する(ステップ1406)。 Next, the estimated differential motion vector calculation unit 813 obtains the reference block candidate whose statistic among A1 to A4 is closest to A0, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 1406).
 次に、生成部802は、差分動きベクトルの各成分の符号と推定差分動きベクトルの各成分の符号とを比較して、各成分の符号フラグの値を決定する(ステップ1407)。差分動きベクトルの成分の符号と推定差分動きベクトルの成分の符号とが同じである場合、その成分の符号フラグの値は“0”に決定され、2つの符号が異なる場合、その成分の符号フラグの値は“1”に決定される。 Next, the generation unit 802 compares the sign of each component of the differential motion vector with the sign of the corresponding component of the estimated differential motion vector to determine the value of the sign flag for each component (step 1407). When the sign of a component of the differential motion vector is the same as the sign of the corresponding component of the estimated differential motion vector, the value of the sign flag for that component is set to "0"; when the two signs differ, the value of the sign flag for that component is set to "1".
 図15は、図13のステップ1304における第2の符号フラグ生成処理の例を示すフローチャートである。第2の符号フラグ生成処理では、図10に示した第2の計算方法によって、推定差分動きベクトルが計算される。図15のステップ1501~ステップ1503及びステップ1506の処理は、図14のステップ1401~ステップ1403及びステップ1407の処理と同様である。 FIG. 15 is a flowchart showing an example of the second code flag generation process in step 1304 of FIG. In the second code flag generation process, the estimated difference motion vector is calculated by the second calculation method shown in FIG. The processing in steps 1501 to 1503 and step 1506 in FIG. 15 is the same as the processing in steps 1401 to 1403 and step 1407 in FIG.
 ステップ1504において、推定差分動きベクトル計算部813は、各参照ブロック候補を符号化対象ブロックの位置に重ねて配置した場合に、符号化対象画像内における周囲の符号化済み画素に隣接する、参照ブロック候補内の符号化済み画素を特定する。そして、推定差分動きベクトル計算部813は、各参照ブロック候補について、符号化対象画像内における周囲の符号化済み画素の局所復号画素値と、参照ブロック候補内の符号化済み画素の局所復号画素値との差分絶対値和を計算する。 In step 1504, when each reference block candidate is placed so as to overlap the position of the encoding target block, the estimated differential motion vector calculation unit 813 identifies the encoded pixels in the reference block candidate that are adjacent to the surrounding encoded pixels in the encoding target image. Then, for each reference block candidate, the estimated differential motion vector calculation unit 813 calculates the sum of absolute differences between the locally decoded pixel values of the surrounding encoded pixels in the encoding target image and the locally decoded pixel values of the encoded pixels in the reference block candidate.
 次に、推定差分動きベクトル計算部813は、4個の参照ブロック候補のうち、最も小さい差分絶対値和を有する参照ブロック候補を求め、求めた参照ブロック候補を示す差分動きベクトル候補を、推定差分動きベクトルに決定する(ステップ1505)。 Next, the estimated differential motion vector calculation unit 813 obtains, among the four reference block candidates, the reference block candidate having the smallest sum of absolute differences, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 1505).
 図16は、図13のステップ1304における第3の符号フラグ生成処理の例を示すフローチャートである。第3の符号フラグ生成処理では、図11に示した第3の計算方法によって、推定差分動きベクトルが計算される。図16のステップ1601~ステップ1603及びステップ1606の処理は、図14のステップ1401~ステップ1403及びステップ1407の処理と同様である。 FIG. 16 is a flowchart showing an example of the third code flag generation process in step 1304 of FIG. In the third code flag generation process, the estimated difference motion vector is calculated by the third calculation method shown in FIG. The processing in steps 1601 to 1603 and step 1606 in FIG. 16 is the same as the processing in steps 1401 to 1403 and step 1407 in FIG.
 ステップ1604において、推定差分動きベクトル計算部813は、符号化対象ブロックの境界の外側に隣接する、符号化対象画像内の2列の符号化済み画素を特定する。そして、推定差分動きベクトル計算部813は、それらの2列の符号化済み画素の局所復号画素値から、符号化対象ブロックの境界上における予測画素値を計算する。 In step 1604, the estimated difference motion vector calculation unit 813 identifies two columns of encoded pixels in the encoding target image that are adjacent to the outside of the boundary of the encoding target block. Then, the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block from the locally decoded pixel values of the two columns of encoded pixels.
 次に、推定差分動きベクトル計算部813は、各参照ブロック候補を符号化対象ブロックの位置に重ねて配置した場合に、符号化対象ブロックの境界の内側に隣接する、参照ブロック候補内の2列の符号化済み画素を特定する。そして、推定差分動きベクトル計算部813は、それらの2列の符号化済み画素の局所復号画素値から、符号化対象ブロックの境界上における予測画素値を計算する。 Next, the estimated difference motion vector calculation unit 813 has two columns in the reference block candidate that are adjacent to the inside of the boundary of the encoding target block when the reference block candidates are arranged so as to overlap the position of the encoding target block. The encoded pixels are identified. Then, the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block from the locally decoded pixel values of the two columns of encoded pixels.
 次に、推定差分動きベクトル計算部813は、各参照ブロック候補について、符号化対象画像内の符号化済み画素から計算した予測画素値と、参照ブロック候補内の符号化済み画素から計算した予測画素値との差分絶対値和を計算する。 Next, the estimated difference motion vector calculation unit 813, for each reference block candidate, the predicted pixel value calculated from the encoded pixel in the encoding target image and the predicted pixel calculated from the encoded pixel in the reference block candidate Calculate the sum of absolute differences from the value.
 次に、推定差分動きベクトル計算部813は、4個の参照ブロック候補のうち、最も小さい差分絶対値和を有する参照ブロック候補を求め、求めた参照ブロック候補を示す差分動きベクトル候補を、推定差分動きベクトルに決定する(ステップ1605)。 Next, the estimated difference motion vector calculation unit 813 obtains a reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and obtains a difference motion vector candidate indicating the obtained reference block candidate as an estimated difference. A motion vector is determined (step 1605).
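As an illustrative sketch of the third calculation method (not part of the patent disclosure): the text states only that predicted pixel values on the block boundary are derived from two columns of previously processed pixels; linear extrapolation is one plausible choice and is an assumption here, as are all names below.

```python
# Hypothetical boundary-value extrapolation from two pixel columns, and
# the SAD comparison between the outside and inside extrapolations.
# Linear extrapolation is an assumption; the patent leaves the exact
# prediction formula to the referenced figure.

def boundary_prediction(col_near, col_far):
    # col_near: pixels in the column directly adjacent to the boundary
    # col_far:  pixels one column further away from the boundary
    # Linear extrapolation onto the boundary: p = near + (near - far)
    return [2 * n - f for n, f in zip(col_near, col_far)]

def boundary_sad(outside_pred, inside_pred):
    # Compare the boundary values predicted from outside the block with
    # those predicted from inside a reference block candidate.
    return sum(abs(a - b) for a, b in zip(outside_pred, inside_pred))
```

The candidate with the smallest `boundary_sad` value would be selected in step 1605.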
FIG. 17 shows a specific example of the video decoding device 301 in FIG. 3. The video decoding device 1701 in FIG. 17 includes an arithmetic decoding unit 1711, an inverse quantization unit 1712, an inverse orthogonal transform unit 1713, a reconstruction unit 1714, an in-loop filter 1715, an intra-frame prediction unit 1716, a motion compensation unit 1717, a selection unit 1718, and a memory 1719.
The inverse quantization unit 1712, the inverse orthogonal transform unit 1713, the reconstruction unit 1714, the in-loop filter 1715, the intra-frame prediction unit 1716, the motion compensation unit 1717, and the selection unit 1718 correspond to the second decoding unit 314 in FIG. 3.
The video decoding device 1701 can be implemented, for example, as a hardware circuit. In this case, each component of the video decoding device 1701 may be implemented as an individual circuit, or the components may be implemented as a single integrated circuit.
The video decoding device 1701 decodes an input encoded stream and outputs decoded video. The video decoding device 1701 can receive the encoded stream from the video encoding device 701 in FIG. 7 via a communication network.
The arithmetic decoding unit 1711 decodes the encoded stream by the CABAC decoding method, outputs the quantized coefficients of the decoding target block in the decoding target image to the inverse quantization unit 1712, and outputs the motion vector for the decoding target block to the motion compensation unit 1717.
The inverse quantization unit 1712 performs inverse quantization on the quantized coefficients output from the arithmetic decoding unit 1711 to generate inverse-quantized coefficients, and outputs the generated inverse-quantized coefficients to the inverse orthogonal transform unit 1713. The inverse orthogonal transform unit 1713 performs an inverse orthogonal transform on the inverse-quantized coefficients to generate a prediction error, and outputs the generated prediction error to the reconstruction unit 1714.
The motion compensation unit 1717 performs motion compensation processing on the decoding target block using the motion vector output from the arithmetic decoding unit 1711 and a reference image output from the memory 1719, generates a predicted image for inter prediction, and outputs it to the selection unit 1718. The selection unit 1718 selects the predicted image output from either the intra-frame prediction unit 1716 or the motion compensation unit 1717, and outputs the selected predicted image to the reconstruction unit 1714.
The reconstruction unit 1714 adds the predicted image output from the selection unit 1718 and the prediction error output from the inverse orthogonal transform unit 1713 to generate a reconstructed image, and outputs the generated reconstructed image to the in-loop filter 1715 and the intra-frame prediction unit 1716.
The intra-frame prediction unit 1716 performs intra prediction on the decoding target block using the reconstructed images of decoded blocks output from the reconstruction unit 1714, and outputs a predicted image for intra prediction to the selection unit 1718.
The in-loop filter 1715 applies filtering such as a deblocking filter and a sample adaptive offset filter to the reconstructed image output from the reconstruction unit 1714 to generate a decoded image. The in-loop filter 1715 then outputs one frame's worth of decoded image as decoded video, and also outputs it to the memory 1719.
The memory 1719 stores the decoded images output from the in-loop filter 1715. A decoded image stored in the memory 1719 is output to the motion compensation unit 1717 as a reference image for subsequent images.
FIG. 18 shows a first functional configuration example of the arithmetic decoding unit 1711 in FIG. 17. The arithmetic decoding unit 1711 in FIG. 18 includes a decoding unit 1801, a determination unit 1802, and a generation unit 1803. The determination unit 1802 includes a differential motion vector candidate calculation unit 1811 and an estimated differential motion vector calculation unit 1812. The decoding unit 1801, the determination unit 1802, and the generation unit 1803 correspond to the first decoding unit 311, the determination unit 312, and the generation unit 313 in FIG. 3, respectively.
The decoding unit 1801 decodes the encoded stream using variable occurrence probabilities through CABAC context modeling, and restores the quantized coefficients of the decoding target block. The decoding unit 1801 further restores the absolute values of the x and y components of the differential motion vector and the sign flags of the x and y components of the differential motion vector. The decoding unit 1801 then outputs the absolute value of each component of the differential motion vector to the determination unit 1802, and outputs the sign flag of each component of the differential motion vector to the generation unit 1803.
The differential motion vector candidate calculation unit 1811 calculates four differential motion vector candidates from the absolute values of the x and y components of the differential motion vector, based on the four combinations of a positive or negative sign for the x component and a positive or negative sign for the y component.
The estimated differential motion vector calculation unit 1812 uses the predicted motion vector for the decoding target block and the four differential motion vector candidates to find the four reference block candidates corresponding to those differential motion vector candidates. The predicted motion vector is obtained from the motion vectors already calculated for the blocks surrounding the decoding target block.
The estimated differential motion vector calculation unit 1812 then calculates the estimated differential motion vector using the decoded pixel values of the decoded pixels adjacent to the decoding target block and the decoded pixel values of the decoded pixels included in each of the four reference block candidates.
The generation unit 1803 determines the sign of each component of the differential motion vector from the sign of each component of the estimated differential motion vector, based on the sign flag of each component of the differential motion vector, and generates the differential motion vector.
When mvd_sign_flag[0] = 0, the generation unit 1803 does not change the sign of the x component of the estimated differential motion vector. On the other hand, when mvd_sign_flag[0] = 1, the generation unit 1803 changes the sign of the x component of the estimated differential motion vector. For example, if the sign of the x component of the estimated differential motion vector is positive, the generation unit 1803 generates the x component of the differential motion vector by changing that sign to negative. If the sign of the x component of the estimated differential motion vector is negative, the generation unit 1803 generates the x component of the differential motion vector by changing that sign to positive.
When mvd_sign_flag[1] = 0, the generation unit 1803 does not change the sign of the y component of the estimated differential motion vector. On the other hand, when mvd_sign_flag[1] = 1, the generation unit 1803 changes the sign of the y component of the estimated differential motion vector. For example, if the sign of the y component of the estimated differential motion vector is positive, the generation unit 1803 generates the y component of the differential motion vector by changing that sign to negative. If the sign of the y component of the estimated differential motion vector is negative, the generation unit 1803 generates the y component of the differential motion vector by changing that sign to positive.
Next, the generation unit 1803 calculates the motion vector for the decoding target block by adding the predicted motion vector for the decoding target block and the generated differential motion vector.
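For illustration only (names are hypothetical and not from the patent), the four differential motion vector candidates formed by the differential motion vector candidate calculation unit 1811 are simply the four signed combinations of the decoded absolute values:

```python
# Sketch of generating the four differential motion vector candidates
# from the absolute values of the x and y components.

def differential_mv_candidates(abs_x, abs_y):
    # One candidate per combination of (+/-) sign for each component.
    return [(sx * abs_x, sy * abs_y) for sx in (1, -1) for sy in (1, -1)]
```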
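The sign recovery described above can be sketched as follows (illustrative only; the function and parameter names are assumptions, not the patent's terminology). Each sign flag component keeps (0) or flips (1) the sign of the corresponding component of the estimated differential motion vector, and the motion vector is the predicted motion vector plus the recovered differential motion vector.

```python
# Sketch of the generation unit 1803 behaviour: apply mvd_sign_flag to
# the estimated differential motion vector, then add the predicted
# motion vector.

def recover_motion_vector(pmv, estimated_mvd, mvd_sign_flag):
    # Flag 0: keep the estimated sign; flag 1: invert it.
    mvd = tuple(c if f == 0 else -c
                for c, f in zip(estimated_mvd, mvd_sign_flag))
    # Motion vector = predicted motion vector + differential motion vector.
    return tuple(p + d for p, d in zip(pmv, mvd))
```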
FIG. 19 is a flowchart showing a specific example of the video decoding process performed by the video decoding device 1701 in FIG. 17. First, the arithmetic decoding unit 1711 performs variable-length decoding on the encoded stream to generate the quantized coefficients and prediction mode information of the decoding target block (step 1901). The arithmetic decoding unit 1711 then checks whether the prediction mode information of the decoding target block indicates inter prediction or intra prediction (step 1902).
When the prediction mode information indicates inter prediction (step 1902, YES), the arithmetic decoding unit 1711 generates a motion vector using the absolute value and sign flag of each component of the differential motion vector included in the prediction mode information (step 1903). The motion compensation unit 1717 then performs motion compensation processing on the decoding target block using the generated motion vector (step 1904).
On the other hand, when the prediction mode information indicates intra prediction (step 1902, NO), the intra-frame prediction unit 1716 performs intra prediction on the decoding target block (step 1907).
Next, the inverse quantization unit 1712 and the inverse orthogonal transform unit 1713 decode the quantized coefficients of the decoding target block to generate a prediction error (step 1905). The selection unit 1718, the reconstruction unit 1714, and the in-loop filter 1715 then generate a decoded image from the prediction error, using the predicted image output from either the motion compensation unit 1717 or the intra-frame prediction unit 1716.
Next, the video decoding device 1701 determines whether decoding of the encoded stream has been completed (step 1906). If an unprocessed code string remains (step 1906, NO), the video decoding device 1701 repeats the processing from step 1901 for the next code string. When decoding of the encoded stream has been completed (step 1906, YES), the video decoding device 1701 ends the process.
FIG. 20 is a flowchart showing an example of the first motion vector generation process in step 1903 of FIG. 19. In the first motion vector generation process, the estimated differential motion vector is calculated by the first calculation method shown in FIG. 9.
First, the differential motion vector candidate calculation unit 1811 of the determination unit 1802 calculates four differential motion vector candidates based on the four combinations of signs for the x and y components of the differential motion vector (step 2001).
Next, the estimated differential motion vector calculation unit 1812 finds the four reference block candidates indicated by the four differential motion vector candidates (step 2002).
Next, the estimated differential motion vector calculation unit 1812 calculates the statistic B0 of the decoded pixel values of the decoded pixels adjacent to the decoding target block in the decoding target image (step 2003).
Next, for the case where each reference block candidate is placed over the position of the decoding target block, the estimated differential motion vector calculation unit 1812 identifies the decoded pixels in the reference block candidate that are adjacent to the decoded pixels used in step 2003 (step 2004). The estimated differential motion vector calculation unit 1812 then calculates the statistic Bi (i = 1 to 4) of the decoded pixel values of the identified decoded pixels.
Next, the estimated differential motion vector calculation unit 1812 finds the reference block candidate whose statistic, among the statistics B1 to B4, is closest to the statistic B0, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 2005).
Next, the generation unit 1803 generates the differential motion vector using the sign flag of each component of the differential motion vector and the estimated differential motion vector, and calculates the motion vector by adding the predicted motion vector and the differential motion vector (step 2006).
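The closest-statistic selection in step 2005 can be sketched as follows (illustrative only; the names are hypothetical, and the statistic values B0 and B1 to B4 are assumed to have been computed in the preceding steps):

```python
# Sketch of step 2005: choose the candidate whose boundary-pixel
# statistic Bi is closest to the statistic B0 of the pixels adjacent to
# the target block.

def closest_statistic(b0, candidate_stats):
    # candidate_stats: maps each candidate to its statistic Bi.
    return min(candidate_stats, key=lambda c: abs(candidate_stats[c] - b0))
```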
FIG. 21 is a flowchart showing an example of the second motion vector generation process in step 1903 of FIG. 19. In the second motion vector generation process, the estimated differential motion vector is calculated by the second calculation method shown in FIG. 10. The processing in steps 2101, 2102, and 2105 of FIG. 21 is the same as the processing in steps 2001, 2002, and 2006 of FIG. 20.
In step 2103, for the case where each reference block candidate is placed over the position of the decoding target block, the estimated differential motion vector calculation unit 1812 identifies the decoded pixels in the reference block candidate that are adjacent to the surrounding decoded pixels in the decoding target image. Then, for each reference block candidate, the estimated differential motion vector calculation unit 1812 calculates the sum of absolute differences between the decoded pixel values of the surrounding decoded pixels in the decoding target image and the decoded pixel values of the decoded pixels in the reference block candidate.
Next, the estimated differential motion vector calculation unit 1812 finds the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 2104).
FIG. 22 is a flowchart showing an example of the third motion vector generation process in step 1903 of FIG. 19. In the third motion vector generation process, the estimated differential motion vector is calculated by the third calculation method shown in FIG. 11. The processing in steps 2201, 2202, and 2205 of FIG. 22 is the same as the processing in steps 2001, 2002, and 2006 of FIG. 20.
In step 2203, the estimated differential motion vector calculation unit 1812 identifies the two columns of decoded pixels in the decoding target image that are adjacent to the outside of the boundary of the decoding target block. Then, the estimated differential motion vector calculation unit 1812 calculates predicted pixel values on the boundary of the decoding target block from the decoded pixel values of those two columns of decoded pixels.
Next, for the case where each reference block candidate is placed over the position of the decoding target block, the estimated differential motion vector calculation unit 1812 identifies the two columns of decoded pixels in the reference block candidate that are adjacent to the inside of the boundary of the decoding target block. Then, the estimated differential motion vector calculation unit 1812 calculates predicted pixel values on the boundary of the decoding target block from the decoded pixel values of those two columns of decoded pixels.
Next, for each reference block candidate, the estimated differential motion vector calculation unit 1812 calculates the sum of absolute differences between the predicted pixel values calculated from the decoded pixels in the decoding target image and the predicted pixel values calculated from the decoded pixels in the reference block candidate.
Next, the estimated differential motion vector calculation unit 1812 finds the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector (step 2204).
The video encoding device 101 in FIG. 1 and the video decoding device 301 in FIG. 3 can also generate a plurality of motion vector candidates by changing the sign of a motion vector component instead of the sign of a differential motion vector component.
In this case, the first encoding unit 111 in FIG. 1 encodes the encoding target block in an image included in the video.
The determination unit 112 generates a plurality of motion vector candidates, including the first motion vector, by changing the sign indicating whether a component of the first motion vector for the encoding target block is positive or negative. The determination unit 112 then determines a second motion vector from among those motion vector candidates. In doing so, the determination unit 112 determines the second motion vector using the locally decoded pixel values of the encoded pixels adjacent to the encoding target block and the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates indicated by the plurality of motion vector candidates.
The generation unit 113 generates match information indicating whether the sign of a component of the first motion vector matches the sign of the corresponding component of the second motion vector, and the second encoding unit 114 encodes the absolute value of the component of the first motion vector together with the match information.
Similarly, the first decoding unit 311 in FIG. 3 decodes the encoded video and restores the absolute value of a component of the first motion vector for the decoding target block in an image included in the encoded video. In doing so, the first decoding unit 311 restores, together with the absolute value of the component of the first motion vector, match information indicating whether the sign indicating whether the component of the first motion vector is positive or negative matches the sign of the corresponding component of the second motion vector.
The determination unit 312 generates a plurality of motion vector candidates by attaching signs to the absolute values of the components of the first motion vector, and determines the second motion vector from among those motion vector candidates. In doing so, the determination unit 312 determines the second motion vector using the decoded pixel values of the pixels adjacent to the decoding target block and the decoded pixel values of the pixels included in each of the plurality of reference block candidates indicated by the plurality of motion vector candidates.
The generation unit 313 generates the first motion vector from the second motion vector based on the match information, and the second decoding unit 314 decodes the coefficient information of the decoding target block using the first motion vector.
The video encoding device 701 in FIG. 7 uses the absolute value and sign flag of each component of the motion vector, instead of the absolute value and sign flag of each component of the differential motion vector, as the prediction mode information for inter prediction. In this case, the arithmetic encoding unit 715 generates four motion vector candidates based on the four combinations of signs for the components of the motion vector output from the inter-frame prediction unit 718.
FIG. 23 shows examples of motion vector candidates and reference block candidates. The reference image 2301 in FIG. 23 includes a block 2311 located at the same position as the encoding target block in the encoding target image. In the reference image 2301, the direction from left to right is the positive direction of the x coordinate, and the direction from top to bottom is the positive direction of the y coordinate.
In this case, motion vector candidates 2331 to 2334 are generated based on the four combinations of the positive and negative signs of the x component of the motion vector and the positive and negative signs of the y component.
The motion vector candidate 2331 is a vector from the block 2311 toward the reference block candidate 2321; its x and y components are both positive. The motion vector candidate 2332 is a vector from the block 2311 toward the reference block candidate 2322; its x component is positive and its y component is negative.
The motion vector candidate 2333 is a vector from the block 2311 toward the reference block candidate 2323; its x and y components are both negative. The motion vector candidate 2334 is a vector from the block 2311 toward the reference block candidate 2324; its x component is negative and its y component is positive.
FIG. 24 shows a second functional configuration example of the arithmetic encoding unit 715 in FIG. 7. The arithmetic encoding unit 715 in FIG. 24 includes a determination unit 2401, a generation unit 2402, and an encoding unit 2403. The determination unit 2401 includes a motion vector candidate calculation unit 2411 and an estimated motion vector calculation unit 2412. The determination unit 2401, the generation unit 2402, and the encoding unit 2403 correspond to the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG. 1, respectively.
The motion vector candidate calculation unit 2411 calculates four motion vector candidates based on the four combinations of the positive and negative signs of the x component of the motion vector and the positive and negative signs of the y component.
The estimated motion vector calculation unit 2412 finds the four reference block candidates corresponding to the four motion vector candidates. The estimated motion vector calculation unit 2412 then calculates an estimated motion vector using the locally decoded pixel values of the encoded pixels adjacent to the encoding target block and the locally decoded pixel values of the encoded pixels included in each of the four reference block candidates.
The generation unit 2402 generates sign flags indicating whether the sign of each component of the motion vector matches the sign of the corresponding component of the estimated motion vector. The encoding unit 2403 encodes the absolute value and sign flag of each component of the motion vector, together with the quantized coefficients output from the quantization unit 714, using variable occurrence probabilities through CABAC context modeling.
The mvd_sign_flag[0] and mvd_sign_flag[1] described above can be used as the sign flags. However, the values of these sign flags are changed as follows.
mvd_sign_flag[0]
 0: the sign of the x component of the motion vector is the same as the sign of the x component of the estimated motion vector
 1: the sign of the x component of the motion vector differs from the sign of the x component of the estimated motion vector
mvd_sign_flag[1]
 0: the sign of the y component of the motion vector is the same as the sign of the y component of the estimated motion vector
 1: the sign of the y component of the motion vector differs from the sign of the y component of the estimated motion vector
 As with the sign flags of the differential motion vector, arithmetic coding with context modeling thereby reduces the code amount of the sign flags.
 FIG. 25 is a flowchart illustrating an example of the fourth sign flag generation process performed by the arithmetic encoding unit 715 of FIG. 24 in step 1304 of FIG. 13. In the fourth sign flag generation process, the estimated motion vector is calculated by the first calculation method shown in FIG. 9. Steps 2503 and 2504 in FIG. 25 are the same as steps 1404 and 1405 in FIG. 14.
 First, the motion vector candidate calculation unit 2411 of the determination unit 2401 calculates four motion vector candidates based on the four sign combinations of the x and y components of the motion vector (step 2501).
 Next, the estimated motion vector calculation unit 2412 obtains the four reference block candidates indicated by the four motion vector candidates (step 2502).
 In step 2505, the estimated motion vector calculation unit 2412 finds, among the statistical values A1 to A4, the reference block candidate whose statistical value is closest to the statistical value A0, and sets the motion vector candidate indicating that reference block candidate as the estimated motion vector.
 Next, the generation unit 2402 compares the sign of each component of the motion vector with the sign of the corresponding component of the estimated motion vector and determines the value of the sign flag for that component (step 2506). If the sign of a motion vector component is the same as the sign of the corresponding estimated motion vector component, the sign flag for that component is set to "0"; if the two signs differ, it is set to "1".
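 The steps above can be sketched as follows. This is an illustrative simplification under assumed names, not the patented implementation: `a0` stands for the statistical value A0 of the encoded pixels adjacent to the target block, and `candidate_stats` maps each of the four motion vector candidates to the statistical value (A1 to A4) of its reference block candidate.

```python
def sign(v):
    """Sign of a component; zero is treated as positive here."""
    return -1 if v < 0 else 1

def fourth_sign_flag_process(mv, a0, candidate_stats):
    """Sketch of steps 2501-2506: choose as the estimated motion vector the
    candidate whose statistic is closest to A0, then set one sign flag per
    component (0 = signs match, 1 = signs differ)."""
    # Step 2505: compare each candidate's statistic (A1..A4) against A0.
    estimated = min(candidate_stats,
                    key=lambda c: abs(candidate_stats[c] - a0))
    # Step 2506: derive mvd_sign_flag[0] and mvd_sign_flag[1].
    flags = [0 if sign(mv[i]) == sign(estimated[i]) else 1 for i in (0, 1)]
    return estimated, flags
```

Only the flags and the absolute values are encoded; the decoder repeats the same candidate comparison to recover the signs.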
 FIG. 26 is a flowchart illustrating an example of the fifth sign flag generation process performed by the arithmetic encoding unit 715 of FIG. 24 in step 1304 of FIG. 13. In the fifth sign flag generation process, the estimated motion vector is calculated by the second calculation method shown in FIG. 10. Steps 2601, 2602, and 2605 in FIG. 26 are the same as steps 2501, 2502, and 2506 in FIG. 25, and step 2603 is the same as step 1504 in FIG. 15.
 In step 2604, the estimated motion vector calculation unit 2412 finds, among the four reference block candidates, the one with the smallest sum of absolute differences, and sets the motion vector candidate indicating that reference block candidate as the estimated motion vector.
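 The SAD comparison in step 2604 can be sketched as follows; the array shapes and function names are assumptions for illustration, not the specification's data layout.

```python
import numpy as np

def select_by_sad(target_template, candidate_templates):
    """Return the index of the reference block candidate whose template has
    the smallest sum of absolute differences (SAD) from the template of
    encoded pixels adjacent to the encoding target block."""
    sads = [np.abs(target_template.astype(np.int64) - t.astype(np.int64)).sum()
            for t in candidate_templates]
    return int(np.argmin(sads))  # index of the smallest SAD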
 FIG. 27 is a flowchart illustrating an example of the sixth sign flag generation process performed by the arithmetic encoding unit 715 of FIG. 24 in step 1304 of FIG. 13. In the sixth sign flag generation process, the estimated motion vector is calculated by the third calculation method shown in FIG. 11. Steps 2701, 2702, and 2705 in FIG. 27 are the same as steps 2501, 2502, and 2506 in FIG. 25, and step 2703 is the same as step 1604 in FIG. 16.
 In step 2704, the estimated motion vector calculation unit 2412 finds, among the four reference block candidates, the one with the smallest sum of absolute differences, and sets the motion vector candidate indicating that reference block candidate as the estimated motion vector.
 FIG. 28 illustrates a second example of the functional configuration of the arithmetic decoding unit 1711 in FIG. 17. The arithmetic decoding unit 1711 in FIG. 28 includes a decoding unit 2801, a determination unit 2802, and a generation unit 2803. The determination unit 2802 includes a motion vector candidate calculation unit 2811 and an estimated motion vector calculation unit 2812. The decoding unit 2801, the determination unit 2802, and the generation unit 2803 correspond to the first decoding unit 311, the determination unit 312, and the generation unit 313 in FIG. 3, respectively.
 The decoding unit 2801 decodes the encoded stream with variable occurrence probabilities using CABAC context modeling and restores the quantization coefficients of the decoding target block. The decoding unit 2801 further restores the absolute values of the x and y components of the motion vector and the sign flags of those components. It then outputs the absolute value of each component of the motion vector to the determination unit 2802 and the sign flag of each component to the generation unit 2803.
 From the absolute values of the x and y components of the motion vector, the motion vector candidate calculation unit 2811 calculates four motion vector candidates based on the four combinations of the positive and negative signs of the x component with the positive and negative signs of the y component.
 The estimated motion vector calculation unit 2812 obtains the four reference block candidates corresponding to the four motion vector candidates. It then calculates an estimated motion vector using the decoded pixel values of the decoded pixels adjacent to the decoding target block and the decoded pixel values of the decoded pixels included in each of the four reference block candidates.
 Based on the sign flag of each component of the motion vector, the generation unit 2803 determines the sign of each component of the motion vector from the sign of the corresponding component of the estimated motion vector, and generates the motion vector for the decoding target block.
 When mvd_sign_flag[0] = 0, the generation unit 2803 leaves the sign of the x component of the estimated motion vector unchanged. When mvd_sign_flag[0] = 1, the generation unit 2803 inverts the sign of the x component of the estimated motion vector.
 When mvd_sign_flag[1] = 0, the generation unit 2803 leaves the sign of the y component of the estimated motion vector unchanged. When mvd_sign_flag[1] = 1, the generation unit 2803 inverts the sign of the y component of the estimated motion vector.
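 The decoder-side sign restoration described in the paragraphs above can be sketched as follows; the tuple and list representations are illustrative assumptions, not the specification's data layout.

```python
def restore_motion_vector(estimated_mv, mvd_sign_flag):
    """Apply the decoded sign flags to the estimated motion vector:
    a flag of 0 keeps the sign of that component, a flag of 1 inverts it."""
    return tuple(c if f == 0 else -c
                 for c, f in zip(estimated_mv, mvd_sign_flag))
```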
 FIG. 29 is a flowchart illustrating an example of the fourth motion vector generation process performed by the arithmetic decoding unit 1711 of FIG. 28 in step 1903 of FIG. 19. In the fourth motion vector generation process, the estimated motion vector is calculated by the first calculation method shown in FIG. 9. Steps 2903 and 2904 in FIG. 29 are the same as steps 2003 and 2004 in FIG. 20.
 First, the motion vector candidate calculation unit 2811 of the determination unit 2802 calculates four motion vector candidates based on the four sign combinations of the x and y components of the motion vector (step 2901).
 Next, the estimated motion vector calculation unit 2812 obtains the four reference block candidates indicated by the four motion vector candidates (step 2902).
 In step 2905, the estimated motion vector calculation unit 2812 finds, among the statistical values B1 to B4, the reference block candidate whose statistical value is closest to the statistical value B0, and sets the motion vector candidate indicating that reference block candidate as the estimated motion vector.
 Next, the generation unit 2803 calculates the motion vector using the sign flag of each component of the motion vector and the estimated motion vector (step 2906).
 FIG. 30 is a flowchart illustrating an example of the fifth motion vector generation process performed by the arithmetic decoding unit 1711 of FIG. 28 in step 1903 of FIG. 19. In the fifth motion vector generation process, the estimated motion vector is calculated by the second calculation method shown in FIG. 10. Steps 3001, 3002, and 3005 in FIG. 30 are the same as steps 2901, 2902, and 2906 in FIG. 29, and step 3003 is the same as step 2103 in FIG. 21.
 In step 3004, the estimated motion vector calculation unit 2812 finds, among the four reference block candidates, the one with the smallest sum of absolute differences, and sets the motion vector candidate indicating that reference block candidate as the estimated motion vector.
 FIG. 31 is a flowchart illustrating an example of the sixth motion vector generation process performed by the arithmetic decoding unit 1711 of FIG. 28 in step 1903 of FIG. 19. In the sixth motion vector generation process, the estimated motion vector is calculated by the third calculation method shown in FIG. 11. Steps 3101, 3102, and 3105 in FIG. 31 are the same as steps 2901, 2902, and 2906 in FIG. 29, and step 3103 is the same as step 2203 in FIG. 22.
 In step 3104, the estimated motion vector calculation unit 2812 finds, among the four reference block candidates, the one with the smallest sum of absolute differences, and sets the motion vector candidate indicating that reference block candidate as the estimated motion vector.
 FIG. 32 illustrates an example of the functional configuration of a video encoding system. The video encoding system 3201 in FIG. 32 includes the video encoding device 701 of FIG. 7 and the video decoding device 1701 of FIG. 17 and can be used for various purposes. For example, the video encoding system 3201 may be a video camera, a video transmission device, a video reception device, a videophone system, a computer, or a mobile phone.
 The configurations of the video encoding devices in FIGS. 1 and 7 are merely examples, and some components may be omitted or changed depending on the use or conditions of the video encoding device. The configurations of the arithmetic encoding unit 715 in FIGS. 8 and 24 are likewise merely examples, and some components may be omitted or changed depending on the use or conditions of the video encoding device. The video encoding device may adopt an encoding scheme other than HEVC and a variable-length coding scheme other than CABAC.
 The configurations of the video decoding devices in FIGS. 3 and 17 are merely examples, and some components may be omitted or changed depending on the use or conditions of the video decoding device. The configurations of the arithmetic decoding unit 1711 in FIGS. 18 and 28 are likewise merely examples, and some components may be omitted or changed depending on the use or conditions of the video decoding device. The video decoding device may adopt a decoding scheme other than HEVC and a variable-length decoding scheme other than CABAC.
 The configuration of the video encoding system 3201 in FIG. 32 is merely an example, and some components may be omitted or changed depending on the use or conditions of the video encoding system 3201.
 The flowcharts shown in FIGS. 2, 4, 13 to 16, 19 to 22, 25 to 27, and 29 to 31 are merely examples, and some steps may be omitted or changed depending on the configuration or conditions of the video encoding device or video decoding device.
 For example, in step 1504 of FIG. 15, step 1604 of FIG. 16, step 2103 of FIG. 21, and step 2203 of FIG. 22, another measure of dissimilarity or similarity (such as the sum of squared differences) may be used instead of the sum of absolute differences. Similarly, in step 2603 of FIG. 26, step 2703 of FIG. 27, step 3003 of FIG. 30, and step 3103 of FIG. 31, another measure of dissimilarity or similarity may be used instead of the sum of absolute differences.
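 As a hedged sketch of one such alternative, the sum of squared differences can stand in for SAD; the function name and array inputs are assumed for illustration.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences, one alternative dissimilarity
    measure to the sum of absolute differences."""
    d = a.astype(np.int64) - b.astype(np.int64)  # widen to avoid overflow
    return int((d * d).sum())
```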
 The motion vectors, predicted motion vectors, differential motion vectors, differential motion vector candidates, and motion vector candidates shown in FIGS. 5, 6, and 23 are merely examples; these vectors vary with the video to be encoded.
 The calculation methods shown in FIGS. 9 to 12 are merely examples; the estimated differential motion vector or the estimated motion vector may be calculated by another method that uses locally decoded pixel values or decoded pixel values.
 The video encoding device 101 of FIG. 1, the video decoding device 301 of FIG. 3, the video encoding device 701 of FIG. 7, and the video decoding device 1701 of FIG. 17 can be implemented as hardware circuits, or can be implemented using an information processing apparatus (computer) such as the one shown in FIG. 33.
 The information processing apparatus of FIG. 33 includes a Central Processing Unit (CPU) 3301, a memory 3302, an input device 3303, an output device 3304, an auxiliary storage device 3305, a medium driving device 3306, and a network connection device 3307. These components are connected to one another by a bus 3308.
 The memory 3302 is a semiconductor memory such as a Read Only Memory (ROM), a Random Access Memory (RAM), or a flash memory, and stores the programs and data used for processing. The memory 3302 can be used as the memory 724 in FIG. 7 and the memory 1719 in FIG. 17.
 The CPU 3301 (processor) operates as the first encoding unit 111, the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG. 1 by executing a program using the memory 3302.
 The CPU 3301 also operates as the first decoding unit 311, the determination unit 312, the generation unit 313, and the second decoding unit 314 in FIG. 3 by executing a program using the memory 3302.
 By executing a program using the memory 3302, the CPU 3301 also operates as the block division unit 711, the prediction error generation unit 712, the orthogonal transform unit 713, the quantization unit 714, the arithmetic encoding unit 715, and the encoding control unit 716 in FIG. 7, and as the intra-frame prediction unit 717, the inter-frame prediction unit 718, the selection unit 719, the inverse quantization unit 720, the inverse orthogonal transform unit 721, the reconstruction unit 722, and the in-loop filter 723.
 By executing a program using the memory 3302, the CPU 3301 also operates as the determination unit 801, the generation unit 802, and the encoding unit 803 in FIG. 8, and as the differential motion vector calculation unit 811, the differential motion vector candidate calculation unit 812, and the estimated differential motion vector calculation unit 813.
 By executing a program using the memory 3302, the CPU 3301 also operates as the arithmetic decoding unit 1711, the inverse quantization unit 1712, the inverse orthogonal transform unit 1713, the reconstruction unit 1714, and the in-loop filter 1715 in FIG. 17, and as the intra-frame prediction unit 1716, the motion compensation unit 1717, and the selection unit 1718.
 By executing a program using the memory 3302, the CPU 3301 also operates as the decoding unit 1801, the determination unit 1802, the generation unit 1803, the differential motion vector candidate calculation unit 1811, and the estimated differential motion vector calculation unit 1812 in FIG. 18.
 By executing a program using the memory 3302, the CPU 3301 also operates as the determination unit 2401, the generation unit 2402, the encoding unit 2403, the motion vector candidate calculation unit 2411, and the estimated motion vector calculation unit 2412 in FIG. 24.
 By executing a program using the memory 3302, the CPU 3301 also operates as the decoding unit 2801, the determination unit 2802, the generation unit 2803, the motion vector candidate calculation unit 2811, and the estimated motion vector calculation unit 2812 in FIG. 28.
 The input device 3303 is, for example, a keyboard or a pointing device, and is used to input instructions and information from a user or operator. The output device 3304 is, for example, a display device, a printer, or a speaker, and is used to output inquiries to the user or operator and to output processing results. A processing result may be a decoded video.
 The auxiliary storage device 3305 is, for example, a magnetic disk device, an optical disk device, a magneto-optical disk device, or a tape device. The auxiliary storage device 3305 may be a hard disk drive. The information processing apparatus can store programs and data in the auxiliary storage device 3305 and load them into the memory 3302 for use.
 The medium driving device 3306 drives a portable recording medium 3309 and accesses its recorded contents. The portable recording medium 3309 is a memory device, a flexible disk, an optical disk, a magneto-optical disk, or the like. The portable recording medium 3309 may be a Compact Disk Read Only Memory (CD-ROM), a Digital Versatile Disk (DVD), or a Universal Serial Bus (USB) memory. A user or operator can store programs and data on the portable recording medium 3309 and load them into the memory 3302 for use.
 Computer-readable recording media that store the programs and data used for processing thus include physical (non-transitory) recording media such as the memory 3302, the auxiliary storage device 3305, and the portable recording medium 3309.
 The network connection device 3307 is a communication interface circuit that connects to a communication network such as a Local Area Network (LAN) or the Internet and performs the data conversion required for communication. The network connection device 3307 can transmit an encoded stream to a video decoding device and receive an encoded stream from a video encoding device. The information processing apparatus can receive programs and data from an external device via the network connection device 3307 and load them into the memory 3302 for use.
 Note that the information processing apparatus need not include all the components shown in FIG. 33; some components may be omitted depending on the use or conditions. For example, if no interface with a user or operator is required, the input device 3303 and the output device 3304 may be omitted. If the information processing apparatus does not access the portable recording medium 3309, the medium driving device 3306 may be omitted.
 Although the disclosed embodiments and their advantages have been described in detail, those skilled in the art will be able to make various changes, additions, and omissions without departing from the scope of the invention as explicitly set forth in the claims.

Claims (20)

  1.  A video encoding device comprising:
      a first encoding unit that encodes an encoding target block in an image included in a video;
      a determination unit that generates a first differential motion vector from a motion vector for the encoding target block and a predicted motion vector for the encoding target block, generates a plurality of differential motion vector candidates including the first differential motion vector by changing a sign indicating whether a component of the first differential motion vector is positive or negative, and determines a second differential motion vector from among the plurality of differential motion vector candidates using locally decoded pixel values of encoded pixels adjacent to the encoding target block and locally decoded pixel values of encoded pixels included in each of a plurality of reference block candidates indicated by the plurality of differential motion vector candidates;
      a generation unit that generates match information indicating whether the sign of the component of the first differential motion vector matches the sign of the component of the second differential motion vector; and
      a second encoding unit that encodes the absolute value of the component of the first differential motion vector and the match information.
  2.  The video encoding device according to claim 1, wherein
      the first differential motion vector includes a first component in the horizontal direction of the image and a second component in the vertical direction of the image,
      the determination unit generates four differential motion vector candidates based on the four combinations of the positive and negative signs of the first component with the positive and negative signs of the second component, and
      the match information includes a first flag indicating whether the sign of the first component of the first differential motion vector matches the sign of the first component of the second differential motion vector, and a second flag indicating whether the sign of the second component of the first differential motion vector matches the sign of the second component of the second differential motion vector.
  3.  The video encoding device according to claim 1 or 2, wherein the second encoding unit encodes the match information using variable occurrence probabilities in context-adaptive binary arithmetic coding.
  4.  The video encoding device according to any one of claims 1 to 3, wherein the determination unit calculates a first statistical value of the locally decoded pixel values of the encoded pixels adjacent to the encoding target block, calculates a second statistical value of the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates, and determines the second differential motion vector by comparing the first statistical value with the second statistical value of each of the plurality of reference block candidates.
  5.  The video encoding device according to any one of claims 1 to 3, wherein the determination unit calculates a sum of absolute differences between the locally decoded pixel values of the encoded pixels adjacent to the encoding target block and the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates, and determines the second differential motion vector by comparing the sums of absolute differences of the plurality of reference block candidates.
  6.  The video encoding device according to any one of claims 1 to 3, wherein the determination unit calculates first predicted pixel values on a boundary of the encoding target block from the locally decoded pixel values of the encoded pixels adjacent to the encoding target block, calculates second predicted pixel values on the boundary from the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates, calculates a sum of absolute differences between the first predicted pixel values and the second predicted pixel values, and determines the second differential motion vector by comparing the sums of absolute differences of the plurality of reference block candidates.
  7.  A video encoding device comprising:
     a first encoding unit that encodes an encoding target block in an image included in a video;
     a determination unit that generates a plurality of motion vector candidates including a first motion vector for the encoding target block by changing a sign indicating whether a component of the first motion vector is positive or negative, and determines a second motion vector from among the plurality of motion vector candidates using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of a plurality of reference block candidates indicated by the plurality of motion vector candidates;
     a generation unit that generates match information indicating whether the sign of the component of the first motion vector matches the sign of the corresponding component of the second motion vector; and
     a second encoding unit that encodes the absolute value of the component of the first motion vector and the match information.
  8.  A video encoding method in which a video encoding device:
     encodes an encoding target block in an image included in a video;
     generates a first differential motion vector from a motion vector for the encoding target block and a predicted motion vector for the encoding target block;
     generates a plurality of differential motion vector candidates including the first differential motion vector by changing a sign indicating whether a component of the first differential motion vector is positive or negative;
     determines a second differential motion vector from among the plurality of differential motion vector candidates using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of a plurality of reference block candidates indicated by the plurality of differential motion vector candidates;
     generates match information indicating whether the sign of the component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector; and
     encodes the absolute value of the component of the first differential motion vector and the match information.
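The candidate-generation and match-flag steps of the method can be sketched for a two-component vector as follows. This is a simplified illustration under the assumption that a zero component is treated as non-negative; `select_candidate` is a hypothetical callback standing in for the template-matching step, not part of the claims.

```python
def encode_dmv_signs(dmv, select_candidate):
    """Generate all sign-flipped candidates of a differential motion vector
    (x, y), let template matching pick the "second" vector, and derive the
    per-component sign-match flags that are encoded together with |x|, |y|."""
    x, y = dmv
    # Four candidates from the sign combinations; the original dmv is among them.
    candidates = [(sx * abs(x), sy * abs(y))
                  for sx in (+1, -1) for sy in (+1, -1)]
    second = select_candidate(candidates)
    match_x = (x >= 0) == (second[0] >= 0)
    match_y = (y >= 0) == (second[1] >= 0)
    return abs(x), abs(y), match_x, match_y
```

When the template matching tends to pick the true signs, the flags are heavily biased toward "match", which is what makes their entropy coding cheap.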
  9.  A video encoding program for causing a computer to execute a process of:
     encoding an encoding target block in an image included in a video;
     generating a first differential motion vector from a motion vector for the encoding target block and a predicted motion vector for the encoding target block;
     generating a plurality of differential motion vector candidates including the first differential motion vector by changing a sign indicating whether a component of the first differential motion vector is positive or negative;
     determining a second differential motion vector from among the plurality of differential motion vector candidates using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of a plurality of reference block candidates indicated by the plurality of differential motion vector candidates;
     generating match information indicating whether the sign of the component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector; and
     encoding the absolute value of the component of the first differential motion vector and the match information.
  10.  A video decoding device comprising:
     a first decoding unit that decodes an encoded video to restore the absolute value of a component of a first differential motion vector for a decoding target block in an image included in the encoded video, and to restore match information indicating whether a sign indicating whether the component of the first differential motion vector is positive or negative matches the sign of the corresponding component of a second differential motion vector;
     a determination unit that generates a plurality of differential motion vector candidates by attaching signs to the absolute value of the component of the first differential motion vector, and determines the second differential motion vector from among the plurality of differential motion vector candidates using the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of a plurality of reference block candidates indicated by the plurality of differential motion vector candidates;
     a generation unit that generates the first differential motion vector from the second differential motion vector based on the match information, and generates a motion vector for the decoding target block from the first differential motion vector and a predicted motion vector for the decoding target block; and
     a second decoding unit that decodes coefficient information of the decoding target block using the motion vector for the decoding target block.
  11.  The video decoding device according to claim 10, wherein the first differential motion vector includes a first component in the horizontal direction of the image and a second component in the vertical direction of the image,
     the determination unit generates four differential motion vector candidates based on the four combinations of the positive and negative signs of the first component with the positive and negative signs of the second component, and
     the match information includes a first flag indicating whether the sign of the first component of the first differential motion vector matches the sign of the first component of the second differential motion vector, and a second flag indicating whether the sign of the second component of the first differential motion vector matches the sign of the second component of the second differential motion vector.
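The decoder-side candidate generation of claim 11 and the sign recovery via the two flags can be sketched as below. `select_candidate` again stands in for the deterministic template-matching step shared with the encoder (a hypothetical callback), and zero components are assumed non-negative.

```python
def recover_dmv(abs_x, abs_y, match_x, match_y, select_candidate):
    """Rebuild the four sign combinations from the decoded magnitudes,
    let template matching pick the "second" differential motion vector,
    then apply the two match flags to recover the transmitted vector."""
    candidates = [(sx * abs_x, sy * abs_y)
                  for sx in (+1, -1) for sy in (+1, -1)]
    second = select_candidate(candidates)
    sx2 = 1 if second[0] >= 0 else -1
    sy2 = 1 if second[1] >= 0 else -1
    x = (sx2 if match_x else -sx2) * abs_x
    y = (sy2 if match_y else -sy2) * abs_y
    return (x, y)
```

Because the decoder sees the same decoded pixels as the encoder's local decoder, both sides pick the same "second" vector, so the flags alone determine the signs.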
  12.  The video decoding device according to claim 10 or 11, wherein the first decoding unit restores the match information using a variable occurrence probability in context-adaptive binary arithmetic coding.
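Claim 12 exploits the bias of the match flags: coding them with an adaptive probability rather than a fixed 1/2 reduces their cost. The sketch below uses a generic exponential probability update and an ideal arithmetic-coding bit cost; it illustrates the principle only and is not the CABAC state machine itself (function name and `rate` parameter are assumptions).

```python
import math

def adaptive_flag_cost(flags, p_match=0.5, rate=0.05):
    """Estimate the total bit cost of a sequence of match flags under a
    simple adaptive model: each flag costs -log2(P(flag)), and the
    probability of "match" is then nudged toward the observed value."""
    total_bits = 0.0
    for f in flags:
        p = p_match if f else (1.0 - p_match)
        total_bits += -math.log2(p)
        p_match += rate * ((1.0 if f else 0.0) - p_match)
    return total_bits
```

For a flag stream that is mostly "match", the per-flag cost drops below one bit, whereas a fixed equiprobable model always pays exactly one bit per flag.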
  13.  The video decoding device according to any one of claims 10 to 12, wherein the determination unit calculates a first statistical value of the decoded pixel values of pixels adjacent to the decoding target block, calculates a second statistical value of the decoded pixel values of the pixels included in each of the plurality of reference block candidates, and determines the second differential motion vector by comparing the first statistical value with the second statistical value of each of the plurality of reference block candidates.
  14.  The video decoding device according to any one of claims 10 to 12, wherein the determination unit calculates a sum of absolute differences between the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of the pixels included in each of the plurality of reference block candidates, and determines the second differential motion vector by comparing the sums of absolute differences of the plurality of reference block candidates.
  15.  The video decoding device according to any one of claims 10 to 12, wherein the determination unit calculates first predicted pixel values on a boundary of the decoding target block from the decoded pixel values of pixels adjacent to the decoding target block, calculates second predicted pixel values on the boundary from the decoded pixel values of the pixels included in each of the plurality of reference block candidates, calculates a sum of absolute differences between the first predicted pixel values and the second predicted pixel values, and determines the second differential motion vector by comparing the sums of absolute differences of the plurality of reference block candidates.
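Claim 13's variant replaces the pixel-wise SAD of claim 14 with a comparison of summary statistics. A minimal sketch, assuming the statistic is the mean (the claims do not fix a particular statistic, and the function name is hypothetical):

```python
import numpy as np

def select_candidate_by_statistic(adjacent_pixels, candidate_pixel_sets):
    """Pick the reference block candidate whose pixel-value statistic
    (here: the mean) is closest to that of the pixels adjacent to the
    decoding target block, following claim 13's comparison rule."""
    target_stat = float(np.mean(adjacent_pixels))
    diffs = [abs(float(np.mean(c)) - target_stat)
             for c in candidate_pixel_sets]
    return int(np.argmin(diffs))
```

A statistic comparison is cheaper than a full SAD but less discriminative; claims 13 to 15 present the three costs as interchangeable selection rules.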
  16.  A video decoding device comprising:
     a first decoding unit that decodes an encoded video to restore the absolute value of a component of a first motion vector for a decoding target block in an image included in the encoded video, and to restore match information indicating whether a sign indicating whether the component of the first motion vector is positive or negative matches the sign of the corresponding component of a second motion vector;
     a determination unit that generates a plurality of motion vector candidates by attaching signs to the absolute value of the component of the first motion vector, and determines the second motion vector from among the plurality of motion vector candidates using the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of a plurality of reference block candidates indicated by the plurality of motion vector candidates;
     a generation unit that generates the first motion vector from the second motion vector based on the match information; and
     a second decoding unit that decodes coefficient information of the decoding target block using the first motion vector.
  17.  A video decoding method in which a video decoding device:
     decodes an encoded video to restore the absolute value of a component of a first differential motion vector for a decoding target block in an image included in the encoded video, and to restore match information indicating whether a sign indicating whether the component of the first differential motion vector is positive or negative matches the sign of the corresponding component of a second differential motion vector;
     generates a plurality of differential motion vector candidates by attaching signs to the absolute value of the component of the first differential motion vector;
     determines the second differential motion vector from among the plurality of differential motion vector candidates using the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of a plurality of reference block candidates indicated by the plurality of differential motion vector candidates;
     generates the first differential motion vector from the second differential motion vector based on the match information;
     generates a motion vector for the decoding target block from the first differential motion vector and a predicted motion vector for the decoding target block; and
     decodes coefficient information of the decoding target block using the motion vector for the decoding target block.
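The final step before decoding the block combines the recovered differential motion vector with the predictor. A trivial but complete sketch of that step (component-wise addition, as in standard motion vector difference coding):

```python
def reconstruct_motion_vector(first_dmv, predicted_mv):
    """Motion vector of the decoding target block = recovered differential
    motion vector + predicted motion vector, component-wise."""
    return (first_dmv[0] + predicted_mv[0],
            first_dmv[1] + predicted_mv[1])
```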
  18.  A video decoding program for causing a computer to execute a process of:
     decoding an encoded video to restore the absolute value of a component of a first differential motion vector for a decoding target block in an image included in the encoded video, and to restore match information indicating whether a sign indicating whether the component of the first differential motion vector is positive or negative matches the sign of the corresponding component of a second differential motion vector;
     generating a plurality of differential motion vector candidates by attaching signs to the absolute value of the component of the first differential motion vector;
     determining the second differential motion vector from among the plurality of differential motion vector candidates using the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of a plurality of reference block candidates indicated by the plurality of differential motion vector candidates;
     generating the first differential motion vector from the second differential motion vector based on the match information;
     generating a motion vector for the decoding target block from the first differential motion vector and a predicted motion vector for the decoding target block; and
     decoding coefficient information of the decoding target block using the motion vector for the decoding target block.
  19.  A video encoding system comprising:
     a first encoding unit that encodes an encoding target block in an image included in a video;
     a first determination unit that generates a first differential motion vector from a first motion vector for the encoding target block and a first predicted motion vector for the encoding target block, generates a plurality of first differential motion vector candidates including the first differential motion vector by changing a sign indicating whether a component of the first differential motion vector is positive or negative, and determines a second differential motion vector from among the plurality of first differential motion vector candidates using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of a plurality of first reference block candidates indicated by the plurality of first differential motion vector candidates;
     a first generation unit that generates first match information indicating whether the sign of the component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector;
     a second encoding unit that encodes the absolute value of the component of the first differential motion vector and the first match information;
     a first decoding unit that decodes an encoded video to restore the absolute value of a component of a third differential motion vector for a decoding target block in an image included in the encoded video, and to restore second match information indicating whether the sign of the component of the third differential motion vector matches the sign of the corresponding component of a fourth differential motion vector;
     a second determination unit that generates a plurality of second differential motion vector candidates by attaching signs to the absolute value of the component of the third differential motion vector, and determines the fourth differential motion vector from among the plurality of second differential motion vector candidates using the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of a plurality of second reference block candidates indicated by the plurality of second differential motion vector candidates;
     a second generation unit that generates the third differential motion vector from the fourth differential motion vector based on the second match information, and generates a second motion vector for the decoding target block from the third differential motion vector and a second predicted motion vector for the decoding target block; and
     a second decoding unit that decodes coefficient information of the decoding target block using the second motion vector.
  20.  A video encoding system comprising:
     a first encoding unit that encodes an encoding target block in an image included in a video;
     a first determination unit that generates a plurality of first motion vector candidates including a first motion vector for the encoding target block by changing a sign indicating whether a component of the first motion vector is positive or negative, and determines a second motion vector from among the plurality of first motion vector candidates using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of a plurality of first reference block candidates indicated by the plurality of first motion vector candidates;
     a first generation unit that generates first match information indicating whether the sign of the component of the first motion vector matches the sign of the corresponding component of the second motion vector;
     a second encoding unit that encodes the absolute value of the component of the first motion vector and the first match information;
     a first decoding unit that decodes an encoded video to restore the absolute value of a component of a third motion vector for a decoding target block in an image included in the encoded video, and to restore second match information indicating whether the sign of the component of the third motion vector matches the sign of the corresponding component of a fourth motion vector;
     a second determination unit that generates a plurality of second motion vector candidates by attaching signs to the absolute value of the component of the third motion vector, and determines the fourth motion vector from among the plurality of second motion vector candidates using the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of a plurality of second reference block candidates indicated by the plurality of second motion vector candidates;
     a second generation unit that generates the third motion vector from the fourth motion vector based on the second match information; and
     a second decoding unit that decodes coefficient information of the decoding target block using the third motion vector.
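The system claims rest on one property: encoder and decoder run the same deterministic candidate selection over the same decoded pixels, so the match flags alone suffice to carry the signs. The self-contained round trip below demonstrates that property; the `selector` callback is a stand-in for template matching and is purely illustrative.

```python
def roundtrip(dmv, selector):
    """Encode the signs of dmv as match flags against the candidate chosen
    by `selector`, then decode them with the same selector; the recovered
    vector equals the original regardless of which candidate is chosen."""
    x, y = dmv
    candidates = [(sx * abs(x), sy * abs(y))
                  for sx in (+1, -1) for sy in (+1, -1)]
    # Encoder side: per-component sign-match flags against the chosen candidate.
    second = selector(candidates)
    match_x = (x >= 0) == (second[0] >= 0)
    match_y = (y >= 0) == (second[1] >= 0)
    # Decoder side: same candidates, same selector, flags applied.
    second_d = selector(candidates)
    sx2 = 1 if second_d[0] >= 0 else -1
    sy2 = 1 if second_d[1] >= 0 else -1
    return ((sx2 if match_x else -sx2) * abs(x),
            (sy2 if match_y else -sy2) * abs(y))
```

The round trip is exact for any selector precisely because both sides call it on identical inputs; a selector that disagreed between encoder and decoder would corrupt the signs, which is why the claims restrict it to already-decoded pixels.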
PCT/JP2018/002811 2018-01-30 2018-01-30 Video encoding device, video encoding method, video decoding device, and video decoding method, and video encoding system WO2019150411A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/002811 WO2019150411A1 (en) 2018-01-30 2018-01-30 Video encoding device, video encoding method, video decoding device, and video decoding method, and video encoding system


Publications (1)

Publication Number Publication Date
WO2019150411A1 2019-08-08

Family

ID=67477955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/002811 WO2019150411A1 (en) 2018-01-30 2018-01-30 Video encoding device, video encoding method, video decoding device, and video decoding method, and video encoding system

Country Status (1)

Country Link
WO (1) WO2019150411A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011034148A1 (en) * 2009-09-18 2011-03-24 シャープ株式会社 Encoder apparatus, decoder apparatus, moving image encoder apparatus, moving image decoder apparatus, and encoding data
WO2012042646A1 (en) * 2010-09-30 2012-04-05 富士通株式会社 Motion-video encoding apparatus, motion-video encoding method, motion-video encoding computer program, motion-video decoding apparatus, motion-video decoding method, and motion-video decoding computer program
JP2012235278A (en) * 2011-04-28 2012-11-29 Jvc Kenwood Corp Moving image encoding apparatus, moving image encoding method and moving image encoding program
WO2012176450A1 (en) * 2011-06-24 2012-12-27 パナソニック株式会社 Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
WO2013006483A1 (en) * 2011-07-01 2013-01-10 Qualcomm Incorporated Video coding using adaptive motion vector resolution


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019140630A (en) * 2018-02-15 2019-08-22 日本放送協会 Video encoder, video decoder, and their program
JP7076221B2 (en) 2018-02-15 2022-05-27 日本放送協会 Video coding device, video decoding device, and their programs
JP2022103284A (en) * 2018-02-15 2022-07-07 日本放送協会 Video encoder, video decoder, and their program
JP7361838B2 (en) 2018-02-15 2023-10-16 日本放送協会 Video encoding device, video decoding device, and these programs
WO2023128548A1 (en) * 2021-12-28 2023-07-06 주식회사 케이티 Video signal encoding/decoding mehod, and recording medium on which bitstream is stored

Similar Documents

Publication Publication Date Title
US11178421B2 (en) Method and apparatus for encoding/decoding images using adaptive motion vector resolution
JP5277257B2 (en) Video decoding method and video encoding method
US11641481B2 (en) Method and apparatus for encoding/decoding images using adaptive motion vector resolution
KR102070719B1 (en) Method for inter prediction and apparatus thereof
CN111133759B (en) Method and apparatus for encoding or decoding video data
US11284087B2 (en) Image encoding device, image decoding device, and image processing method
JP2018511997A (en) Image prediction method and related apparatus
CN111418214A (en) Syntactic prediction using reconstructed pixel points
CN113383550A (en) Early termination of optical flow modification
JP7494403B2 (en) Decoding method, encoding method, apparatus, device and storage medium
CN117280691A (en) Enhanced motion vector prediction
JP6662123B2 (en) Image encoding apparatus, image encoding method, and image encoding program
WO2019150411A1 (en) Video encoding device, video encoding method, video decoding device, and video decoding method, and video encoding system
KR20110048004A (en) Motion vector encoding / decoding method and apparatus using motion vector resolution limitation and image encoding / decoding method using same
JP6019797B2 (en) Moving picture coding apparatus, moving picture coding method, and program
US20150237345A1 (en) Video coding device, video coding method, and video coding program
JP2019140630A (en) Video encoder, video decoder, and their program
WO2019150435A1 (en) Video encoding device, video encoding method, video decoding device, video decoding method, and video encoding system
KR20200126954A (en) Method for inter prediction and apparatus thereof
KR102173576B1 (en) Method for inter prediction and apparatus thereof
WO2021111595A1 (en) Filter generation method, filter generation device, and program
JP6853697B2 (en) Time prediction motion vector candidate generator, coding device, decoding device, and program
WO2011142221A1 (en) Encoding device and decoding device
CN111247804A (en) Image processing method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18904400

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18904400

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP