WO2013002150A1 - Video encoding method, video encoding apparatus, video decoding method, video decoding apparatus, and programs therefor - Google Patents
- Publication number: WO2013002150A1 (PCT/JP2012/066039)
- Authority: WIPO (PCT)
- Prior art keywords
- interpolation
- filter coefficient
- interpolation filter
- weighted
- decimal precision
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- The present invention relates to a video encoding/decoding technique that improves the performance of the interpolation filter in video encoding and thereby improves coding efficiency.
- This application claims priority based on Japanese Patent Application No. 2011-141725, filed in Japan on June 27, 2011, the contents of which are incorporated herein.
- In inter-frame prediction (motion compensation) coding, in which prediction is performed between different pictures, a motion vector that minimizes the prediction error energy is obtained by referring to an already decoded frame.
- The resulting prediction error signal (also called the residual signal) is orthogonally transformed and quantized, and entropy coding is finally applied to produce binary data, that is, a bitstream.
- a prediction method with high prediction accuracy is required.
- This tool is called multiple reference frame prediction.
- In addition to 16×16 and 8×8, the block size can be divided finely into 16×8, 8×16, 8×4, 4×8, 4×4, and so on. This tool is called variable block size prediction.
- A half-precision pixel is interpolated from the integer-precision pixels of the reference frame using a 6-tap filter, and quarter-precision pixels are generated by linear interpolation using those pixels. Prediction is thereby applied to motion of decimal (fractional) precision. This tool is called 1/4-pixel accuracy prediction.
- ISO/IEC "MPEG" (International Organization for Standardization / International Electrotechnical Commission "Moving Picture Experts Group") and ITU-T "VCEG" (International Telecommunication Union - Telecommunication Standardization Sector "Video Coding Experts Group") jointly established a study team (Joint Collaborative Team on Video Coding: JCT-VC) for the development of a next-generation video coding standard with higher coding efficiency than H.264/AVC.
- The next-generation standard is called High Efficiency Video Coding (HEVC), and various new coding technologies are currently being gathered from around the world and discussed at the JCT-VC meetings.
- The HEVC test model (HM) includes tools that improve the prediction efficiency of motion vectors and block sizes.
- For example, a tool that expands the block size to 16×16 or larger is employed.
- As the decimal precision pixel interpolation method, HM adopts a DCT-based interpolation filter (DCT-IF: Discrete Cosine Transform based Interpolation Filter).
- the conventional interpolation filter will be described in more detail.
- FIG. 8 is a diagram illustrating the decimal-precision pixel interpolation method of H.264/AVC.
- In H.264/AVC, as shown in FIG. 8, interpolation at a 1/2 pixel position in the horizontal direction uses a total of 6 integer pixels, three on each side of the target interpolation pixel. In the vertical direction, interpolation likewise uses a total of 6 integer pixels, three above and three below.
- the filter coefficients are [(1, -5, 20, 20, -5, 1) / 32], respectively.
- the 1 ⁇ 4 pixel position is interpolated using an average value filter of [1/2, 1/2].
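As an illustration, the half-pel and quarter-pel generation described above can be sketched as follows. This is a simplified Python sketch assuming 8-bit pixels and a single row of samples, not the normative H.264/AVC code; the function names are illustrative.

```python
def half_pel(row, i):
    """Half-pel sample between integer pixels row[i] and row[i+1],
    using the 6-tap filter (1, -5, 20, 20, -5, 1) / 32."""
    taps = (1, -5, 20, 20, -5, 1)
    acc = sum(t * row[i - 2 + k] for k, t in enumerate(taps))
    return min(max((acc + 16) >> 5, 0), 255)  # round, divide by 32, clip to 8 bits

def quarter_pel(a, b):
    """Quarter-pel sample as the rounded average of two neighbouring samples."""
    return (a + b + 1) >> 1
```

On a constant row the interpolated value equals the surrounding pixels, since the six coefficients sum to 32/32 = 1.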
- Since all 1/2 pixel positions must be interpolated first, the calculation complexity is high, but high-performance interpolation is possible, leading to improved coding efficiency.
- the above-described interpolation technique using the fixed filter is disclosed in Non-Patent Document 1 and the like.
- A filter that uses the same coefficient values for all input images and all frames, such as the one-dimensional 6-tap filter of H.264/AVC, is called a fixed interpolation filter (Fixed Interpolation Filter).
- the DCT-based interpolation filter (DCT-IF) is employed in the HEVC reference software HM.
- FIG. 9 shows a decimal pixel interpolation method of this DCT-based interpolation filter.
- Let the interpolation target pixel at the decimal precision position be p, the integer position pixels be p_x, and the parameter indicating the position of p be α (0 ≤ α ≤ 1).
- The number of integer-position pixels used for interpolation, that is, the tap length, is 2M (M is an integer of 1 or more). From the definition of the DCT transform, Equation (1) is established.
- equation (2) holds.
- the pixel interpolation formula at the decimal position ⁇ is given by the following formula (3).
- This DCT-based interpolation filter is compatible with an arbitrary filter length and interpolation accuracy, and is a high-performance interpolation filter. Therefore, this DCT-based interpolation filter is adopted in the HEVC test model HM.
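To make the construction concrete, the following Python sketch derives DCT-IF coefficients for an arbitrary decimal position α and tap length 2M by evaluating the inverse DCT-II at the fractional sample position M − 1 + α. This mirrors the derivation of Equations (1) to (3) in spirit, but it is an illustrative reconstruction, not the HM implementation; the function name is an assumption.

```python
import math

def dct_if_coefficients(alpha, M=3):
    """Interpolation weights for 2M integer pixels at fractional offset alpha:
    forward DCT-II of the samples, inverse transform evaluated at M-1+alpha."""
    N = 2 * M
    pos = M - 1 + alpha  # fractional sample position on the integer-pixel grid
    coeffs = []
    for n in range(N):
        w = 1.0 / N  # DC term of the inverse DCT-II
        for k in range(1, N):
            w += (2.0 / N) * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) \
                           * math.cos(math.pi * (2 * pos + 1) * k / (2 * N))
        coeffs.append(w)
    return coeffs
```

At α = 0 the weights reduce to selecting the integer pixel at index M − 1, and for any α the weights sum to 1, which is what makes arbitrary filter lengths and interpolation accuracies possible.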
- A characteristic of the fixed interpolation filter is that the filter coefficient values are constant regardless of the input image conditions (sequence type / image size / frame rate) and the encoding conditions (block size / GOP (Group of Pictures) structure / QP (Quantization Parameter)).
- Time-varying effects such as aliasing, quantization error, motion-estimation error, and camera noise are not considered, so the performance improvement in coding efficiency is considered limited. A method for adaptively changing the interpolation filter coefficients was therefore proposed in Non-Patent Document 3 and is called a non-separable adaptive interpolation filter.
- The method of Non-Patent Document 3 considers a two-dimensional interpolation filter (6×6 = 36 filter coefficients), and the filter coefficients are determined so as to minimize the prediction error energy. It achieved higher coding efficiency than the one-dimensional 6-tap fixed interpolation filter used in H.264/AVC, but the computational complexity of obtaining the filter coefficients is very high.
- Non-Patent Document 4 introduces a proposal for reducing this computational complexity.
- The method introduced in Non-Patent Document 4 is called a separable adaptive interpolation filter (SAIF); instead of a two-dimensional interpolation filter, it uses one-dimensional 6-tap interpolation filters.
- FIGS. 10A to 10C are diagrams illustrating a pixel interpolation method with decimal precision in the separable adaptive interpolation filter (SAIF).
- S is an original image
- P is a decoded reference image
- x and y are horizontal and vertical positions in the image, respectively.
- ~x (where ~ denotes a symbol above x; the same applies to other variables) is given by ~x = x + MV_x − FilterOffset,
- MV_x indicates the horizontal component of the motion vector obtained in advance, and
- FilterOffset indicates the offset for adjustment (the horizontal filter length divided by 2).
- ~y is given by ~y = y + MV_y.
- MV y represents the vertical component of the motion vector.
- w_ci represents the horizontal filter coefficient group c_i (0 ≤ c_i < 6) to be obtained.
- In Step 1, the horizontal filter coefficients that minimize the prediction error energy function E_h² of Equation (4) are determined analytically. The same number of linear equations as unknown filter coefficients is obtained, and the minimization is performed independently for each decimal pixel position in the horizontal direction. Through this minimization, three 6-tap filter coefficient groups are obtained, and the decimal precision pixels a, b, and c are interpolated using them.
- In Step 2 (FIG. 10C), the filter coefficients in the vertical direction are determined by solving a linear problem similar to that in the horizontal direction; specifically, the vertical filter coefficients that minimize the prediction error energy function E_V² of Equation (5) are determined analytically.
- S is an original image
- ⁇ P ( ⁇ is a symbol on P) is an image subjected to interpolation processing in the horizontal direction after decoding
- x and y indicate horizontal and vertical positions in the image, respectively.
- ~x is expressed as ~x = 4·(x + MV_x), where MV_x indicates the horizontal component of the rounded motion vector.
- w_cj represents the vertical filter coefficient group c_j (0 ≤ c_j < 6) to be obtained.
- The minimization process is performed independently for each decimal precision pixel, and 12 groups of 6-tap filter coefficients are obtained. The remaining decimal precision pixels are interpolated using these filter coefficients.
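The per-position minimization can be illustrated with a toy least-squares fit: each original sample is regressed on the 6 reference pixels that the filter would combine, and the resulting normal equations are solved directly. The synthetic one-dimensional data below is an assumption for illustration, not the SAIF implementation.

```python
import numpy as np

# Toy prediction-error-energy minimization: recover a 6-tap filter by
# least squares from (reference, original) sample pairs.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=500).astype(float)    # decoded reference line
true_w = np.array([1, -5, 20, 20, -5, 1]) / 32.0      # filter that "made" S
A = np.stack([ref[i - 2:i + 4] for i in range(2, len(ref) - 4)])
S = A @ true_w                                        # original samples (noiseless)
w, *_ = np.linalg.lstsq(A, S, rcond=None)             # minimize ||S - A w||^2
```

With noiseless data the least-squares solution recovers the generating filter exactly; with real aliasing, quantization error, and camera noise the solution becomes the energy-minimizing compromise described in the text.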
- the filter coefficients can be inverted and used. That is, if 6 coefficients of d are transmitted, the value can be applied to l.
- c(d)_1 = c(l)_6
- c(d)_2 = c(l)_5
- c(d)_3 = c(l)_4
- c(d)_4 = c(l)_3
- c(d)_5 = c(l)_2
- c(d)_6 = c(l)_1
- This symmetry is also available for e and m, f and n, and g and o. The same theory holds for a and c.
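Since the symmetry amounts to reversing the coefficient order, the reuse of one transmitted group for its mirrored position can be sketched in a single helper (the function name is illustrative):

```python
def mirror_coefficients(c_d):
    """Coefficients for the mirrored position: c(l)_k = c(d)_{7-k} for a
    6-tap group, i.e. the transmitted list is simply reversed."""
    return list(reversed(c_d))
```

This is why only 15 horizontal and 36 vertical coefficients need to be transmitted instead of the full set.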
- the filter coefficient to be transmitted for each frame is 51 (15 in the horizontal direction and 36 in the vertical direction).
- In Non-Patent Document 4, the unit of the prediction error energy minimization processing is fixed at the frame level.
- 51 filter coefficients are determined per frame. If the encoding target frame is divided into two (or more) large texture areas A and B, the optimum filter coefficients become a coefficient group that takes both (all textures) into account. In a situation where characteristic filter coefficients would appear only in the vertical direction in area A, and only in the horizontal direction in area B, the filter coefficients are derived as an average of both.
- To address this, Non-Patent Document 5 proposes a method for achieving a reduction in prediction error energy and improving coding efficiency.
- In Non-Patent Document 6, grouping is performed for each interpolation position, and for each group either the fixed interpolation filter or the adaptive interpolation filter is selected so that the prediction error energy is reduced; a technique for generating the interpolated image in this way has been proposed (see Non-Patent Document 6).
- Non-Patent Document 4 and Non-Patent Document 5 do not have a function of switching the interpolation filter for each interpolation position, and there is room for improvement in improving motion compensation performance.
- The interpolation filter with interpolation-position adaptability described in Non-Patent Document 6 determines, for each position to be interpolated, whether to use a fixed interpolation filter defined in advance or an adaptive interpolation filter derived in units of frames. The filter is selected from the viewpoint of minimizing the prediction error energy, and one of the two is always chosen.
- However, the selection in Non-Patent Document 6 is an either-or choice, and its performance improvement is considered limited. If an interpolation filter that further reduces the prediction error energy can be designed by including weighted sums of a plurality of interpolation filters in the selection range, coding efficiency can be improved.
- The purpose of the present invention is to solve the above-described problems and to provide a new method that reduces the prediction error energy in motion-compensated inter-picture prediction and improves coding efficiency by widening the selection of interpolation filters.
- Based on the assumption that a weighted sum of a plurality of interpolation filters can further reduce the prediction error energy, the present invention adds weighted interpolation filters to the candidates for interpolated image generation. If a weighted interpolation filter is determined to minimize the prediction error energy, the weighting factor is transmitted. By this means, an interpolated prediction image can be generated more flexibly, and coding efficiency can be improved by reducing the prediction error energy.
- the following processing is performed in video encoding using decimal precision motion compensation.
- Weighted filter coefficients are derived using a plurality of parameters from a plurality of interpolation filter sets, and the weighted filter coefficients that minimize the prediction error energy are selected from among them.
- decimal precision pixels are interpolated with respect to the reference picture, and the input picture is encoded by decimal precision motion compensation.
- a weight parameter that specifies the weight value of the selected weighted filter coefficient and the interpolation filter coefficient used for motion compensation are encoded and added to the encoded bitstream.
- the following processing is performed in video decoding using decimal precision motion compensation.
- a weight parameter for specifying a weight value of the interpolation filter coefficient and an interpolation filter coefficient for generating a decimal precision pixel are decoded.
- In the conventional technique, the switching of the interpolation filter is an either-or choice, and there is a limit to the performance improvement.
- a weighted sum of a plurality of filter coefficients is calculated for each interpolation position group to obtain an optimum weight coefficient, and motion compensation is performed by applying an interpolation filter using the weight coefficient.
- the interpolation filter can be expressed more flexibly, and the performance of motion compensation can be improved and the coding efficiency can be improved by generating a filter that reduces the prediction error energy.
- According to the present invention, it is possible to use weighted sums of interpolation filters that could not be considered by a conventional interpolation filter with interpolation-position adaptability; the wider selection range of interpolation filters reduces the prediction error energy and improves coding efficiency.
- FIG. 1 is a diagram illustrating a configuration example of a video encoding device according to an embodiment of the present invention.
- the interpolation filter coefficient calculation unit 11 calculates the interpolation filter coefficient of the decimal precision pixel used for the reference image in the predictive encoding.
- The interpolation filter coefficient determination unit 12 calculates a weighted sum of a plurality of filter coefficients using the motion vector MV detected by the motion detection unit 132, obtains the optimum weight coefficient, and outputs the weight parameter and the interpolation filter coefficients.
- the interpolation filter coefficient determination unit 12 outputs the interpolation position group information when the interpolation position group serving as a unit for switching the weighted filter coefficient is dynamically determined.
- the prediction signal generation unit 13 includes a reference image interpolation unit 131 and a motion detection unit 132.
- the reference image interpolation unit 131 applies an interpolation filter based on the interpolation filter coefficient selected by the interpolation filter coefficient determination unit 12 to the decoded reference image stored in the reference image memory 17.
- the motion detection unit 132 calculates a motion vector by performing a motion search on the interpolated reference image.
- the prediction signal generation unit 13 generates a prediction signal by motion compensation using a decimal precision motion vector calculated by the motion detection unit 132.
- the prediction encoding unit 14 calculates a residual signal between the input video signal and the prediction signal, orthogonally transforms the signal, and performs prediction encoding by quantization of a transform coefficient.
- the decoding unit 16 decodes the result of predictive encoding, and stores the decoded image in the reference image memory 17 for later predictive encoding.
- an in-loop filter for removing coding noise such as a deblocking filter or ALF (Adaptive Loop Filter) may be applied and stored.
- The variable length coding unit 15 performs variable length coding on the quantized transform coefficients and motion vectors, as well as on the weight parameter and interpolation filter coefficients output from the interpolation filter coefficient determination unit 12, and outputs these as the encoded bitstream.
- When interpolation position group information is used, the variable length coding unit 15 also encodes the interpolation position group information and adds it to the encoded bitstream.
- FIG. 2 is a diagram illustrating a first configuration example of the interpolation filter coefficient determination unit.
- The interpolation filter coefficient determination unit 12 is the part that differs significantly from the prior art.
- the interpolation filter information acquisition unit 121 outputs a fixed interpolation filter coefficient value prepared in advance at the time of encoding to the weighted filter coefficient generation unit 123.
- For example, the 6-tap coefficients defined in H.264/AVC, or the 8-tap or 12-tap coefficients of DCT-IF, may be used.
- The weighted filter coefficient generation unit 123 takes as input (a) the adaptive interpolation filter coefficient values generated in frame units or region units, f_A(x), (b) the weight parameter output from the weight parameter specifying unit 122, α, and (c) the fixed interpolation filter coefficient values output from the interpolation filter information acquisition unit 121, f_B(x), and performs the following calculation.
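Assuming the weighted sum takes the common two-filter form f_w(x) = α·f_A(x) + (1 − α)·f_B(x) (the exact expression is the Equation (6) referenced later in the text, which is not reproduced here), the calculation of unit 123 can be sketched as:

```python
def weighted_coefficients(f_a, f_b, alpha):
    """Per-tap weighted sum of an adaptive filter f_a and a fixed filter f_b:
    f_w(x) = alpha * f_a(x) + (1 - alpha) * f_b(x)."""
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(f_a, f_b)]
```

Setting α = 1 recovers the purely adaptive filter and α = 0 the fixed filter, so the either-or selection of the prior art is contained as the two endpoints of this weighting.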
- the weighted filter coefficient generation unit 123 outputs the obtained weighted filter coefficient to the MSE calculation unit 124 described later.
- The MSE calculation unit 124 takes as input (a) the motion vector MV obtained by the motion search process, (b) the decoded signal (local decoded image), and (c) the weighted filter coefficients output from the weighted filter coefficient generation unit 123. It generates the interpolated image at the position indicated by the motion vector MV from the integer-position pixels of the decoded image and the weighted filter coefficients, and calculates the MSE (Mean Square Error) with respect to the original image, that is, the prediction residual energy:
- MSE = Σ(original signal − predicted signal)² / number of pixels. The calculated MSE is output to the minimum MSE storage unit 125.
- the minimum MSE storage unit 125 uses the MSE obtained by the MSE calculation unit 124 as an input, and stores the value.
- The minimum MSE storage unit 125 compares the stored (or predefined) minimum value with the input MSE and determines the new minimum as (a) the input MSE value if it is smaller than the stored minimum, or (b) the stored minimum otherwise; the minimum value is then stored and updated. When a new minimum value is stored, the minimum MSE storage unit 125 also stores the weight parameter that realizes that MSE value.
- After processing the possible combinations related to the interpolation filter in the MSE calculation unit 124, the interpolation filter coefficient determination unit 12 outputs the combination that realizes the minimum MSE, that is, (1) the weight parameter and (2) the interpolation filter coefficients.
- FIG. 3 is a diagram illustrating a second configuration example of the interpolation filter coefficient determination unit.
- The interpolation filter coefficient determination unit 12′ shown in FIG. 3 differs from the interpolation filter coefficient determination unit 12 shown in FIG. 2 in that an interpolation target position grouping unit 126 is provided: MSE calculation is performed for each group of interpolation positions, and the interpolation filter coefficients are switched for each interpolation position group.
- Whereas in the first configuration example a weighted filter is used for all pixel interpolation positions, here the weighted filter coefficients used for interpolation are set for each specified interpolation position group (or each interpolation position group calculated by some method). It is also possible to select, for each interpolation position group, whether or not a weighted sum of filter coefficients is taken.
- For example, a weighted filter can be used for important interpolation positions or interpolation positions with a high selection probability, while an unweighted filter is used for the other positions; in this way, processing adaptive to the interpolation position group is possible.
- the functions of the interpolation filter information acquisition unit 121, the weight parameter specification unit 122, and the weighted filter coefficient generation unit 123 are the same as those in the configuration example 1 described above.
- the interpolation target position grouping unit 126 performs grouping for each interpolation position, which is a decimal pixel position, and outputs group information for each interpolation position.
- This interpolation position group may be predetermined with respect to the decimal pixel position, or may be determined dynamically using, for example, a probability distribution of motion vectors.
- The MSE calculation unit 124 uses, for each interpolation position group specified by the interpolation position group information output by the interpolation target position grouping unit 126, the weighted filter coefficients output from the weighted filter coefficient generation unit 123. An interpolated image at the position indicated by the motion vector MV is generated from the integer-position pixels of the decoded image and the weighted filter coefficients, and the MSE with respect to the original image, that is, the prediction residual energy, is calculated.
- the calculated MSE is output to the minimum MSE storage unit 125. Further, when the minimum value is stored, the minimum MSE storage unit 125 stores a weight parameter and interpolation position group information for realizing the MSE value.
- After evaluating the possible combinations for the interpolation filter, the interpolation filter coefficient determination unit 12′ outputs the combination that realizes the minimum MSE, that is, (1) the weight parameter, (2) the interpolation position group information, and (3) the interpolation filter coefficients.
- the interpolation position group is fixed and the interpolation position group information can be shared between the encoding side and the decoding side, it is not necessary to output the interpolation position group information.
- FIG. 4 is a process flowchart of the video encoding device shown in FIG.
- the flow of processing when the video encoding apparatus encodes one frame will be described with reference to FIG.
- an example of processing when the interpolation filter coefficient determination unit is configured in the configuration example 2 will be described, but the processing in the configuration example 1 is almost the same.
- processing with a luminance signal is assumed.
- In step S101, a frame of the original image necessary for the encoding process is input.
- In step S102, a motion vector (MV) is derived using a predefined interpolation filter, for example a fixed interpolation filter provided in the encoder such as the one-dimensional 6-tap filter used in H.264/AVC or the one-dimensional 8-tap/12-tap filters of the DCT-based interpolation filter.
- In step S103, the coefficient values of the adaptive interpolation filter for the encoding target frame are calculated using the motion vector obtained in step S102.
- A generally known prediction-error-energy minimization method (linear regression) is used to calculate the interpolation filter coefficients in this step.
- In step S104, the interpolation target pixels to which the interpolation filter is applied are grouped. Specifically, according to a predetermined table, the interpolation target pixels are divided into a plurality of groups according to the interpolation position. Note that the interpolation position groups may also be determined dynamically by some method; in that case, the interpolation position group information must be encoded and transmitted to the decoding side.
- In step S105, weighted filter coefficients are generated for each interpolation position group divided in step S104.
- Specifically, the weight parameters are read from a table and set, and weighted interpolation filter coefficients are generated from each weight parameter as shown in Equation (6) below.
- f A (x) is a predefined interpolation filter coefficient
- f B (x) is an interpolation filter coefficient obtained in the frame.
- x indicates the coefficient number and takes the range 0 ≤ x < n (n indicates the tap length).
- Here, weighted filter coefficients formed from two filter coefficient groups are assumed, but even in the case of m (> 2) interpolation filters, the coefficients can be expressed in the same form.
- In step S106, for each group determined in step S104, the interpolation filter actually applied to encoding is determined using the weighted filter coefficients generated in step S105.
- For example, the prediction error energy is set as the evaluation criterion (cost function): a prediction image is generated using the interpolation filter given by each weight parameter set in step S105, and the weight parameter that minimizes the sum of squared errors with respect to the original image is calculated.
- As the weight parameter, for example, the value of α is changed in increments of 0.1 or 0.01, and the α that achieves the lowest cost value is selected.
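The exhaustive sweep over α in step S106 can be sketched as a simple grid search. The cost function below is a stand-in for the MSE between the original image and the weighted-filter prediction described in the text; the function and parameter names are illustrative.

```python
def search_alpha(cost, step=0.1):
    """Sweep alpha over [0, 1] in fixed increments and keep the value
    that minimizes the given cost function (e.g. prediction MSE)."""
    best_alpha, best_cost = 0.0, float("inf")
    n = round(1.0 / step)
    for i in range(n + 1):
        alpha = i * step
        c = cost(alpha)
        if c < best_cost:
            best_alpha, best_cost = alpha, c
    return best_alpha, best_cost
```

With a step of 0.1, 11 candidate weights are evaluated; a step of 0.01 trades roughly ten times as many interpolation passes for finer weight resolution.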
- In step S107, interpolation processing is performed using the interpolation filter determined by the weighted filter coefficients of step S106.
- In step S108, the weight parameter calculated in step S106 is encoded.
- In step S109, interpolation filter information such as the interpolation filter coefficients calculated in step S103 is encoded. When interpolation position group information is transmitted, it is also encoded in step S109.
- In step S110, all the remaining information to be encoded, for example the prediction error signal (texture component information) and the motion vectors, is encoded.
- In step S111, it is determined whether the encoded frame is the final frame. If it is not, the process returns to step S101 to process the next frame; if it is, the encoding process ends.
- The function described in this embodiment, performing the encoding process while optimizing the weight parameter of the interpolation filter for each interpolation position group, can be applied not only to the luminance signal but also to the color difference signal.
- FIG. 5 is a diagram illustrating a configuration example of a video decoding apparatus according to an embodiment of the present invention.
- variable length decoding unit 21 receives the encoded bit stream and decodes the quantized transform coefficient, motion vector, weight parameter, interpolation filter coefficient, and the like. When the interpolation position group information is encoded, the variable length decoding unit 21 also decodes the interpolation position group information.
- the weighted filter coefficient generation unit 22 generates weighted filter coefficients from the weight parameters and interpolation filter coefficients decoded by the variable length decoding unit 21.
- the interpolation filter coefficient determination unit 23 determines an interpolation filter coefficient to be used for each grouped interpolation position using the weighted filter coefficient generated by the weighted filter coefficient generation unit 22.
- The reference image interpolation unit 241 in the prediction signal generation unit 24 applies the interpolation filter based on the interpolation filter coefficients received from the interpolation filter coefficient determination unit 23 to the decoded reference image stored in the reference image memory 26, and restores the decimal precision pixels of the reference image.
- the prediction signal generation unit 24 generates a prediction signal of the decoding target block from the reference image in which the decimal precision pixels are restored.
- The prediction decoding unit 25 performs inverse quantization, inverse orthogonal transformation, and the like on the quantized coefficients decoded by the variable length decoding unit 21; the prediction error signal thus obtained and the prediction signal generated by the prediction signal generation unit 24 are added to generate a decoded image, which is output as the output image.
- the decoded image decoded by the predictive decoding unit 25 is stored in the reference image memory 26 for later predictive decoding.
- an in-loop filter for removing coding noise such as a deblocking filter or ALF (Adaptive Loop Filter) may be applied and stored.
- FIG. 6 is a processing flowchart of the video decoding apparatus shown in FIG.
- the flow of processing when the video decoding apparatus decodes one frame will be described with reference to FIG.
- processing with a luminance signal is assumed.
- In step S201, information in the frame header (or slice header) is acquired. Subsequently, in step S202, the weight parameter is decoded and stored. In step S203, the interpolation filter coefficient information is decoded; when interpolation position group information was encoded, it is also decoded. Subsequently, in step S204, all other information necessary for decoding (for example, motion vectors and prediction error signals) is decoded.
- in step S205, the weighted filter coefficients needed for the interpolation of each interpolation position group are generated from the weight parameter decoded in step S202 and the interpolation filter coefficients decoded in step S203.
- in step S206, interpolation is performed using the motion vectors decoded in step S204 and the weighted filter coefficients obtained in step S205, and a prediction signal is generated.
- in step S207, the prediction error signal decoded in step S204 and the prediction signal obtained in step S206 are added to restore the decoded signal.
- in step S208, it is determined whether all frames to be decoded have been decoded. If not, the process returns to step S201 and proceeds to the decoding of the next frame; if all frames have been decoded, the decoding process ends.
- the above description assumes the luminance signal, but this flow is equally applicable to the chrominance signal.
- the video encoding and decoding processes described above can be realized by a computer and a software program, and the program can be recorded on a computer-readable recording medium or provided through a network.
- FIG. 7 shows a configuration example of a system when the embodiment of the present invention is implemented using a computer and a software program.
- This system is configured by connecting, via a bus: a CPU (Central Processing Unit) 50 that executes the program; a memory 51, such as a RAM (Random Access Memory), that stores the programs and data accessed by the CPU 50; a video signal storage unit 52 that stores the video signal to be encoded or the decoded video signal; a program storage device 53 that stores programs for causing the CPU 50 to execute the processing described in the embodiment of the present invention; and an encoded stream storage unit 54 that stores the bitstream resulting from encoding or the bitstream to be decoded.
- the program storage device 53 stores at least one of a video encoding program 531 for encoding a video signal using the embodiment of the present invention and a video decoding program 532 for decoding an encoded bitstream using the embodiment of the present invention; it may store both of these programs.
- when the present system is used as a video encoding apparatus, the video encoding program 531 is loaded into the memory 51, and the CPU 50 sequentially fetches and executes the instructions of the video encoding program 531 loaded into the memory 51;
- the video signal stored in the video signal storage unit 52 is encoded by the method described in the embodiment of the present invention, and the resulting bitstream is stored in the encoded stream storage unit 54.
- alternatively, the bitstream may be output to an external device via an interface such as a network adapter.
- when the present system is used as a video decoding apparatus, the video decoding program 532 is loaded into the memory 51, and the CPU 50 sequentially fetches and executes the instructions of the video decoding program 532 loaded into the memory 51;
- the bitstream stored in the encoded stream storage unit 54 is decoded by the method described in the embodiment of the present invention, and the decoded video signal is stored in the video signal storage unit 52 or output to an external playback device.
- the present invention is applicable, for example, to video encoding and video decoding that use motion-compensated inter-frame prediction. According to the present invention, coding efficiency can be improved by widening the selection range of the interpolation filter and thereby reducing the prediction error energy.
Abstract
Description
This application claims priority based on Japanese Patent Application No. 2011-141725, filed in Japan on June 27, 2011, the contents of which are incorporated herein by reference.
FIG. 8 illustrates the fractional-precision pixel interpolation method in H.264/AVC. In H.264/AVC, as shown in FIG. 8, a 1/2-pixel position is interpolated using a total of six integer pixels, three on each side (left and right) of the target interpolation pixel. For the vertical direction, a total of six integer pixels, three above and three below, are used. The filter coefficients are [(1, -5, 20, 20, -5, 1)/32]. After the 1/2-pixel positions have been interpolated, the 1/4-pixel positions are interpolated using a [1/2, 1/2] averaging filter. Because all 1/2-pixel positions must first be interpolated, the computational complexity is high, but high-performance interpolation is possible, leading to improved coding efficiency. This fixed-filter interpolation technique is described in Non-Patent Document 1 and elsewhere.
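The fixed-filter interpolation described above can be sketched in a few lines. The following is a minimal 1-D illustration of the 6-tap half-pel filter and the [1/2, 1/2] quarter-pel averaging, not the full 2-D H.264/AVC process; the names `row`, `half_pel_horizontal`, and `quarter_pel` are ours.

```python
import numpy as np

# H.264/AVC 6-tap half-pel filter coefficients; the result is divided by 32.
TAPS = np.array([1, -5, 20, 20, -5, 1])

def half_pel_horizontal(row, x):
    """Interpolate the half-pel sample between integer pixels x and x+1
    of a 1-D row, using three integer pixels on each side."""
    window = row[x - 2 : x + 4]                 # six integer pixels
    value = int(np.dot(TAPS, window))
    return int(np.clip((value + 16) >> 5, 0, 255))  # /32 with rounding, 8-bit clip

def quarter_pel(a, b):
    """Quarter-pel samples are the rounded average of two neighbours."""
    return (int(a) + int(b) + 1) >> 1

row = np.array([10, 12, 14, 100, 104, 108, 110, 112], dtype=np.int64)
h = half_pel_horizontal(row, 3)   # half-pel between row[3] and row[4]
q = quarter_pel(row[3], h)        # quarter-pel between row[3] and the half-pel
```

Note the slight overshoot of the half-pel value near the step edge in this toy row: it is the expected sharpening behaviour of the 6-tap filter, not an error.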
In H.264/AVC, the filter coefficient values are constant regardless of the input image conditions (sequence type / image size / frame rate) and the coding conditions (block size / GOP (Group of Pictures) structure / QP (Quantization Parameter)). When the filter coefficient values are fixed, time-varying effects such as aliasing, quantization error, motion-estimation error, and camera noise are not taken into account. Therefore, there appears to be a limit to the performance improvement achievable in terms of coding efficiency. Non-Patent Document 3 therefore proposes a scheme that adaptively changes the interpolation filter coefficients, called a non-separable adaptive interpolation filter.
~x = x + MVx - FilterOffset
where MVx is the horizontal component of the previously obtained motion vector, and FilterOffset is an offset for alignment (the horizontal filter length divided by 2). For the vertical direction, ~y = y + MVy, where MVy is the vertical component of the motion vector. wci denotes the horizontal filter coefficients ci (0 <= ci < 6) to be determined.
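As a small illustration of the index relation above, assuming an integer-pel motion vector component and a 6-tap filter (the function name `ref_x` is ours):

```python
def ref_x(x, mv_x_int, tap=6):
    """~x = x + MVx - FilterOffset, where FilterOffset is the horizontal
    filter length divided by 2; returns the first reference column used
    when filtering around pixel x."""
    filter_offset = tap // 2
    return x + mv_x_int - filter_offset

# e.g. pixel 10 with a horizontal motion component of 2 -> reference column 9
print(ref_x(10, 2))  # -> 9
```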
・Divide the interpolation positions of fractional-precision pixels into a plurality of groups by a pre-specified method or some other method.
・For each interpolation position group, derive weighted filter coefficients from a plurality of interpolation filter sets using a plurality of parameters.
・From among the weighted filter coefficients, select the weighted filter coefficient that minimizes the prediction error energy.
・Interpolate fractional-precision pixels of the reference image by applying an interpolation filter with the selected weighted filter coefficient, and encode the input image by fractional-precision motion compensation.
・Encode the weight parameter specifying the weight values of the selected weighted filter coefficient and the interpolation filter coefficients used for motion compensation, and add them to the encoded bitstream.
・For each interpolation position group, decode the weight parameter specifying the weight values of the interpolation filter coefficients and the interpolation filter coefficients for generating fractional-precision pixels.
・Generate weighted filter coefficients determined by the weighted sum of the decoded interpolation filter coefficients using the decoded weight parameter.
・Generate a prediction signal using the generated weighted filter coefficients.
・Decode a residual signal, and generate a decoded image using the residual signal and the prediction signal.
FIG. 1 shows a configuration example of a video encoding apparatus according to an embodiment of the present invention.
FIG. 2 shows a first configuration example of the interpolation filter coefficient determination unit. The interpolation filter coefficient determination unit 12 is the part that differs significantly from the conventional art.
(a) adaptive interpolation filter coefficient values generated per frame or per region: fA(x),
(b) the weight parameter output from the weight parameter specification unit 122: α, and
(c) fixed interpolation filter coefficient values output from the interpolation filter information acquisition unit 121: fB(x)
are taken as inputs, and the following calculation is performed.
(where 0 <= x < t; t is the tap length, and β = 1 - α)
A weighted sum of two filter coefficient sets is assumed here, but a weighted sum of three or more sets can be configured in the same way. The weighted filter coefficient generation unit 123 outputs the resulting weighted filter coefficients to the MSE calculation unit 124 described later.
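The weighted sum computed by the weighted filter coefficient generation unit 123 can be sketched as follows. This is a simplified illustration; the coefficient values used for the adaptive filter below are hypothetical example numbers, not values from the embodiment.

```python
def weighted_filter(f_a, f_b, alpha):
    """fW(x) = alpha * fA(x) + beta * fB(x) with beta = 1 - alpha,
    for all coefficient indices 0 <= x < t (t = tap length)."""
    assert len(f_a) == len(f_b)
    beta = 1.0 - alpha
    return [alpha * a + beta * b for a, b in zip(f_a, f_b)]

# fixed H.264/AVC half-pel filter vs. a hypothetical frame-adaptive filter
f_fixed    = [1/32, -5/32, 20/32, 20/32, -5/32, 1/32]
f_adaptive = [0.02, -0.14, 0.62, 0.62, -0.14, 0.02]
f_w = weighted_filter(f_adaptive, f_fixed, alpha=0.5)
```

Because both input filters have unit DC gain (their coefficients sum to 1), any convex combination of them also has unit DC gain, which keeps the interpolated signal level unchanged.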
(a) the motion vector MV obtained by the motion search process,
(b) the decoded signal (locally decoded image), and
(c) the weighted filter coefficients output from the weighted filter coefficient generation unit 123
are taken as inputs; the interpolated image at the position indicated by the motion vector MV is generated from the integer-position pixels of the decoded image and the weighted filter coefficients, and the MSE (Mean Square Error) with respect to the original image, i.e., the prediction residual energy, is calculated.
The calculated MSE is output to the minimum MSE storage unit 125.
(a) if the input MSE value is smaller than the stored minimum: the input MSE value is kept;
(b) if the stored minimum is smaller than the input MSE value: the stored minimum is kept.
In this way the minimum MSE is determined, and the minimum value is stored and updated. When storing the minimum value, the minimum MSE storage unit 125 also stores the weight parameter that achieves that MSE value.
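The minimisation loop implemented by the MSE calculation unit 124 and the minimum MSE storage unit 125 can be sketched as below. For brevity the candidate predictions are assumed to be precomputed per weight parameter, which collapses the actual interpolate-then-compare pipeline into a lookup; the names are ours.

```python
import numpy as np

def select_weight(original, candidates):
    """Keep the weight parameter whose interpolated prediction minimises
    the MSE (prediction residual energy) against the original block.
    `candidates` maps alpha -> predicted block (already interpolated)."""
    best_alpha, best_mse = None, float("inf")
    for alpha, pred in candidates.items():
        mse = float(np.mean((original.astype(np.float64) - pred) ** 2))
        if mse < best_mse:            # update the stored minimum and its alpha
            best_alpha, best_mse = alpha, mse
    return best_alpha, best_mse

orig = np.array([[100, 102], [104, 106]])
cands = {0.0: orig + 2, 0.5: orig + 1, 1.0: orig - 3}
alpha, mse = select_weight(orig, cands)
```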
FIG. 3 shows a second configuration example of the interpolation filter coefficient determination unit. The interpolation filter coefficient determination unit 12′ shown in FIG. 3 differs from the interpolation filter coefficient determination unit 12 shown in FIG. 2 in that it includes an interpolation target position grouping unit 126, performs the above-described setting of the weight parameter for each group of interpolation positions (i.e., fractional pixel positions), and switches the interpolation filter coefficients for each interpolation position group.
FIG. 4 is a processing flowchart of the video encoding apparatus shown in FIG. 1. The flow of processing when the video encoding apparatus encodes one frame will be described below with reference to FIG. 4. An example of processing in the case where the interpolation filter coefficient determination unit has the second configuration is described here, but the processing for the first configuration is almost the same. Unless otherwise noted, processing of the luminance signal is assumed below.
Here, α and β are weight parameters (α + β = 1), fA(x) is a predefined interpolation filter coefficient, and fB(x) is the interpolation filter coefficient obtained for the current frame. x denotes the coefficient index and takes values in the range 0 <= x < n (n is the tap length). Weighted filter coefficients derived from two filter coefficient sets are assumed here, but the case of m (> 2) interpolation filters can be expressed in the same form.
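The m-filter generalisation mentioned here is a direct extension of the two-filter weighted sum; a sketch under the assumption that the m weight parameters sum to 1 (the function name and the toy filter values are ours):

```python
def weighted_filter_m(filters, weights):
    """fW(x) = sum over k of w_k * f_k(x), for 0 <= x < n (n = tap length);
    generalises alpha*fA(x) + beta*fB(x) to m interpolation filters."""
    assert abs(sum(weights) - 1.0) < 1e-9   # weights form a convex combination
    n = len(filters[0])
    return [sum(w * f[x] for w, f in zip(weights, filters)) for x in range(n)]

# three toy 2-tap "filters" combined with weights summing to 1
f_w3 = weighted_filter_m([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
                         [0.2, 0.3, 0.5])
```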
FIG. 5 shows a configuration example of a video decoding apparatus according to an embodiment of the present invention.
FIG. 6 is a processing flowchart of the video decoding apparatus shown in FIG. 5. The flow of processing when the video decoding apparatus decodes one frame will be described below with reference to FIG. 6. Unless otherwise noted, processing of the luminance signal is assumed below.
The above video encoding and decoding processes can also be realized by a computer and a software program; the program can be recorded on a computer-readable recording medium or provided through a network.
11 interpolation filter coefficient calculation unit
12, 12′, 23 interpolation filter coefficient determination unit
121 interpolation filter information acquisition unit
122 weight parameter specification unit
123, 22 weighted filter coefficient generation unit
124 MSE calculation unit
125 minimum MSE storage unit
126 interpolation target position grouping unit
13, 24 prediction signal generation unit
131, 241 reference image interpolation unit
132 motion detection unit
14 predictive encoding unit
15 variable-length encoding unit
16 decoding unit
17, 26 reference image memory
20 video decoding apparatus
21 variable-length decoding unit
25 predictive decoding unit
Claims (10)
- A video encoding method using fractional-precision motion compensation, the method comprising:
generating a plurality of weighted filter coefficients from a plurality of interpolation filter sets using a weight parameter that specifies weight values of interpolation filter coefficients;
selecting, from among the weighted filter coefficients, a weighted filter coefficient that minimizes the prediction error energy in the fractional-precision motion compensation;
interpolating fractional-precision pixels of a reference image by applying an interpolation filter with the selected weighted filter coefficient, and encoding an input image by the fractional-precision motion compensation; and
encoding the weight parameter used to generate the selected weighted filter coefficient and the interpolation filter coefficients, and adding the encoded weight parameter and the encoded interpolation filter coefficients to an encoded bitstream.
- The video encoding method according to claim 1, wherein the selection of the weighted filter coefficient is performed for each interpolation position group of fractional-precision pixels calculated by a predetermined method, or for each specified interpolation position group of fractional-precision pixels, and the interpolation filter with the weighted filter coefficient is switched for each interpolation position group of the fractional-precision pixels.
- A video encoding apparatus using fractional-precision motion compensation, comprising:
a weighted filter coefficient generation unit that generates a plurality of weighted filter coefficients from a plurality of interpolation filter sets using a weight parameter that specifies weight values of interpolation filter coefficients;
a weighted filter coefficient selection unit that selects, from among the weighted filter coefficients, a weighted filter coefficient that minimizes the prediction error energy in the fractional-precision motion compensation;
a first encoding unit that interpolates fractional-precision pixels of a reference image by applying an interpolation filter with the selected weighted filter coefficient and encodes an input image by the fractional-precision motion compensation; and
a second encoding unit that encodes the weight parameter used to generate the selected weighted filter coefficient and the interpolation filter coefficients, and adds the encoded weight parameter and the encoded interpolation filter coefficients to an encoded bitstream.
- The video encoding apparatus according to claim 3, wherein the selection of the weighted filter coefficient is performed for each interpolation position group of fractional-precision pixels calculated by a predetermined method, or for each specified interpolation position group of fractional-precision pixels, and the interpolation filter with the weighted filter coefficient is switched for each interpolation position group of the fractional-precision pixels.
- A video decoding method using fractional-precision motion compensation, the method comprising:
decoding interpolation filter coefficients for generating fractional-precision pixels and a weight parameter that specifies weight values of the interpolation filter coefficients;
generating weighted filter coefficients determined by a weighted sum of the decoded interpolation filter coefficients using the decoded weight parameter;
generating a prediction signal using an interpolation filter determined by the generated weighted filter coefficients;
decoding a residual signal; and
generating a decoded image using the decoded residual signal and the generated prediction signal.
- The video decoding method according to claim 5, wherein the decoding of the interpolation filter coefficients for generating the fractional-precision pixels and of the weight parameter that specifies the weight values of the interpolation filter coefficients is performed for each interpolation position group specified by the encoding side or for each predetermined interpolation position group, and the interpolation filter determined by the weighted filter coefficients is switched for each interpolation position group.
- A video decoding apparatus using fractional-precision motion compensation, comprising:
a first decoding unit that decodes interpolation filter coefficients for generating fractional-precision pixels and a weight parameter that specifies weight values of the interpolation filter coefficients;
a weighted filter coefficient generation unit that generates weighted filter coefficients determined by a weighted sum of the decoded interpolation filter coefficients using the decoded weight parameter;
a prediction signal generation unit that generates a prediction signal using an interpolation filter determined by the generated weighted filter coefficients;
a second decoding unit that decodes a residual signal; and
a decoded image generation unit that generates a decoded image using the decoded residual signal and the generated prediction signal.
- The video decoding apparatus according to claim 7, wherein the decoding of the interpolation filter coefficients for generating the fractional-precision pixels and of the weight parameter that specifies the weight values of the interpolation filter coefficients is performed for each interpolation position group specified by the encoding side or for each predetermined interpolation position group, and the interpolation filter determined by the weighted filter coefficients is switched for each interpolation position group.
- A video encoding program for causing a computer to execute the video encoding method according to claim 1 or claim 2.
- A video decoding program for causing a computer to execute the video decoding method according to claim 5 or claim 6.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112013031911A BR112013031911A2 (pt) | 2011-06-27 | 2012-06-22 | método e aparelho para codificação de vídeo, método e aparelho para decodificação de vídeo e programas para os mesmos |
US14/125,463 US9667963B2 (en) | 2011-06-27 | 2012-06-22 | Method and apparatus for encoding video, method and apparatus for decoding video, and programs therefor |
KR1020137029324A KR20140029434A (ko) | 2011-06-27 | 2012-06-22 | 영상 부호화 방법, 장치, 영상 복호 방법, 장치 및 그 프로그램 |
EP12805330.3A EP2709364A4 (en) | 2011-06-27 | 2012-06-22 | METHOD AND DEVICE FOR ENCODING VIDEO IMAGES, METHOD AND DEVICE FOR DECODING VIDEO IMAGES, AND PROGRAM THEREOF |
RU2013154760/08A RU2013154760A (ru) | 2011-06-27 | 2012-06-22 | Способ и устройство кодирования видео, способ и устройство декодирования видео и программы для них |
CN201280028582.3A CN103597835B (zh) | 2011-06-27 | 2012-06-22 | 影像编码方法、装置、影像解码方法、装置 |
CA2839089A CA2839089A1 (en) | 2011-06-27 | 2012-06-22 | Video encoding and decoding methods and apparatuses using fractional-ac curacy motion compensation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-141725 | 2011-06-27 | ||
JP2011141725A JP5649524B2 (ja) | 2011-06-27 | 2011-06-27 | 映像符号化方法,装置,映像復号方法,装置およびそれらのプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013002150A1 true WO2013002150A1 (ja) | 2013-01-03 |
Family
ID=47424045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/066039 WO2013002150A1 (ja) | 2011-06-27 | 2012-06-22 | 映像符号化方法,装置,映像復号方法,装置およびそれらのプログラム |
Country Status (10)
Country | Link |
---|---|
US (1) | US9667963B2 (ja) |
EP (1) | EP2709364A4 (ja) |
JP (1) | JP5649524B2 (ja) |
KR (1) | KR20140029434A (ja) |
CN (1) | CN103597835B (ja) |
BR (1) | BR112013031911A2 (ja) |
CA (1) | CA2839089A1 (ja) |
RU (1) | RU2013154760A (ja) |
TW (1) | TWI504241B (ja) |
WO (1) | WO2013002150A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11596920B2 (en) | 2018-06-21 | 2023-03-07 | Battelle Memorial Institute | Enhanced microchannel or mesochannel devices and methods of additively manufacturing the same |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015149241A1 (zh) * | 2014-03-31 | 2015-10-08 | 北京大学深圳研究生院 | 一种针对色度的插值方法及滤波器 |
KR101982788B1 (ko) * | 2014-10-01 | 2019-05-27 | 엘지전자 주식회사 | 향상된 예측 필터를 이용하여 비디오 신호를 인코딩, 디코딩하는 방법 및 장치 |
KR20180009358A (ko) * | 2015-06-16 | 2018-01-26 | 엘지전자 주식회사 | 영상의 부호화/복호화 방법 및 이를 위한 장치 |
US10419755B2 (en) | 2016-05-16 | 2019-09-17 | Qualcomm Incorporated | Confusion of multiple filters in adaptive loop filtering in video coding |
US10419757B2 (en) * | 2016-08-31 | 2019-09-17 | Qualcomm Incorporated | Cross-component filter |
WO2019065537A1 (ja) * | 2017-09-28 | 2019-04-04 | シャープ株式会社 | 動き補償フィルタ装置、画像復号装置および動画像符号化装置 |
WO2019093919A1 (en) | 2017-11-13 | 2019-05-16 | Huawei Technologies Co., Ltd | In-loop filter apparatus and method for video coding |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011082725A (ja) | 2009-10-06 | 2011-04-21 | Nippon Telegr & Teleph Corp <Ntt> | 映像符号化方法,映像符号化装置,映像復号方法,映像復号装置,映像符号化・復号方法,およびプログラム |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2690551B1 (fr) * | 1991-10-15 | 1994-06-03 | Thomson Csf | Procede de quantification d'un filtre predicteur pour vocodeur a tres faible debit. |
US6968008B1 (en) | 1999-07-27 | 2005-11-22 | Sharp Laboratories Of America, Inc. | Methods for motion estimation with adaptive motion accuracy |
JP4241588B2 (ja) | 2004-11-30 | 2009-03-18 | 日本ビクター株式会社 | 動画像符号化装置、及び動画像復号化装置 |
KR100723403B1 (ko) * | 2005-02-28 | 2007-05-30 | 삼성전자주식회사 | 컬러 성분간 단일 부호화 모드를 이용한 예측영상 생성 방법 및 장치와, 그를 이용한 영상 및 비디오 부호화/복호화방법 및 장치 |
JP2007043651A (ja) | 2005-07-05 | 2007-02-15 | Ntt Docomo Inc | 動画像符号化装置、動画像符号化方法、動画像符号化プログラム、動画像復号装置、動画像復号方法及び動画像復号プログラム |
WO2007100128A1 (en) * | 2006-02-28 | 2007-09-07 | Sharp Kabushiki Kaisha | Method and system for picture up-sampling |
US7782401B1 (en) * | 2006-06-20 | 2010-08-24 | Kolorific, Inc. | Method and system for digital image scaling with sharpness enhancement and transient improvement |
US9014280B2 (en) | 2006-10-13 | 2015-04-21 | Qualcomm Incorporated | Video coding with adaptive filtering for motion compensated prediction |
US8942505B2 (en) * | 2007-01-09 | 2015-01-27 | Telefonaktiebolaget L M Ericsson (Publ) | Adaptive filter representation |
KR101369746B1 (ko) * | 2007-01-22 | 2014-03-07 | 삼성전자주식회사 | 적응적 보간 필터를 이용한 영상 부호화, 복호화 방법 및장치 |
WO2009046601A1 (en) * | 2007-10-09 | 2009-04-16 | Hong Kong Applied Science And Technology Research Institute Co. Ltd. | Method for motion compensation |
US8462842B2 (en) | 2008-04-10 | 2013-06-11 | Qualcomm, Incorporated | Symmetry for interpolation filtering of sub-pixel positions in video coding |
US20090323807A1 (en) | 2008-06-30 | 2009-12-31 | Nicholas Mastronarde | Enabling selective use of fractional and bidirectional video motion estimation |
EP2157799A1 (en) * | 2008-08-18 | 2010-02-24 | Panasonic Corporation | Interpolation filter with local adaptation based on block edges in the reference frame |
CN102224735A (zh) | 2008-10-03 | 2011-10-19 | 索尼公司 | 图像处理设备和方法 |
EP2182732A1 (en) | 2008-10-28 | 2010-05-05 | Panasonic Corporation | Switching between scans in image coding |
EP2393296A1 (en) | 2009-01-29 | 2011-12-07 | Panasonic Corporation | Image coding method and image decoding method |
-
2011
- 2011-06-27 JP JP2011141725A patent/JP5649524B2/ja active Active
-
2012
- 2012-06-22 EP EP12805330.3A patent/EP2709364A4/en not_active Ceased
- 2012-06-22 KR KR1020137029324A patent/KR20140029434A/ko active IP Right Grant
- 2012-06-22 US US14/125,463 patent/US9667963B2/en active Active
- 2012-06-22 RU RU2013154760/08A patent/RU2013154760A/ru not_active Application Discontinuation
- 2012-06-22 CA CA2839089A patent/CA2839089A1/en not_active Abandoned
- 2012-06-22 CN CN201280028582.3A patent/CN103597835B/zh active Active
- 2012-06-22 BR BR112013031911A patent/BR112013031911A2/pt not_active IP Right Cessation
- 2012-06-22 WO PCT/JP2012/066039 patent/WO2013002150A1/ja active Application Filing
- 2012-06-26 TW TW101122753A patent/TWI504241B/zh active
Non-Patent Citations (10)
Title |
---|
ALSHINA.E. ET AL.: "CE3: Experimental results of DCTIF by Samsung", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC)OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 4TH MEETING: JCTVC-D344, 20 January 2011 (2011-01-20), XP030047677 * |
FAOUZI KOSSENTINI; NADER MAHDI; HSAN GUERMAZI; MOHAMMED ALI BEN AYED: "An Adaptive Interpolation Filtering Technique", JCTVC-E284, 5TH JCT-VC MEETING, GENEVA, March 2011 (2011-03-01) |
KEN MCCANN; WOO-JIN HAN; IL-KOO KIM; JUNG-HYE MIN; ELENA ALSHINA; ALEXANDER ALSHIN; TAMMY LEE; JIANLE CHEN; VADIM SEREGIN; SUNIL L: "Samsung's Response to the Call for Proposals on Video Compression Technology", JCTVC-A124 R2, April 2010 (2010-04-01), pages 12 - 14 |
KOSSENTINI,F. ET AL.: "An Adaptive Interpolation Filtering Technique", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC)OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 JCTVC-E284, 16 March 2011 (2011-03-16), XP030048378 * |
MATSUO,S. ET AL.: "CE3: Region-based adaptive interpolation filter", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC)OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 JCTVC-E078, 10 March 2011 (2011-03-10), XP030008584 * |
S. WITTMANN; T. WEDI: "Separable adaptive interpolation filter for video coding", PROC. ICIP2008, IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, October 2008 (2008-10-01), pages 2500 - 2503, XP031374548 |
SAKAE OKUBO; SHINYA KADONO; YOSHIHIRO KIKUCHI; TERUHIKO SUZUKI: "H.264/AVC Textbook", 2009, IMPRESS R&D, pages: 119 - 123 |
See also references of EP2709364A4 * |
SHOHEI MATSUO; YUKIHIRO BANDOH; SEISHI TAKAMURA; HIROHISA JOZAWA: "Enhanced region-based adaptive interpolation filter", PROC. PCS2010, IEEE PICTURE CODING SYMPOSIUM, December 2010 (2010-12-01), pages 526 - 529, XP031885224, DOI: doi:10.1109/PCS.2010.5702554 |
Y. VATIS; B. EDLER; D. T. NGUYEN; J. OSTERMANN: "Motion-and aliasing-compensated prediction using a two-dimensional non-separable adaptive Wiener interpolation filter", PROC. ICIP2005, IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, September 2005 (2005-09-01), pages II 894 - 897 |
Also Published As
Publication number | Publication date |
---|---|
EP2709364A4 (en) | 2014-11-26 |
EP2709364A1 (en) | 2014-03-19 |
KR20140029434A (ko) | 2014-03-10 |
JP5649524B2 (ja) | 2015-01-07 |
CN103597835B (zh) | 2017-04-26 |
TWI504241B (zh) | 2015-10-11 |
BR112013031911A2 (pt) | 2016-12-13 |
CA2839089A1 (en) | 2013-01-03 |
JP2013009241A (ja) | 2013-01-10 |
RU2013154760A (ru) | 2015-08-10 |
TW201311010A (zh) | 2013-03-01 |
CN103597835A (zh) | 2014-02-19 |
US20140105308A1 (en) | 2014-04-17 |
US9667963B2 (en) | 2017-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2755889C (en) | Image processing device and method | |
JP5649523B2 (ja) | 映像符号化方法,装置,映像復号方法,装置およびそれらのプログラム | |
WO2013002150A1 (ja) | 映像符号化方法,装置,映像復号方法,装置およびそれらのプログラム | |
JP5563403B2 (ja) | 映像符号化方法,映像復号方法,映像符号化装置,映像復号装置およびそれらのプログラム | |
JP5485851B2 (ja) | 映像符号化方法,映像復号方法,映像符号化装置,映像復号装置およびそれらのプログラム | |
JP5552092B2 (ja) | 動画像符号化装置、動画像符号化方法及び動画像符号化プログラム | |
US9609318B2 (en) | Video encoding method, video decoding method, video encoding apparatus, video decoding apparatus, and programs thereof | |
JP5762243B2 (ja) | 映像符号化方法,装置,映像復号方法,装置およびそれらのプログラム | |
AU2015255215B2 (en) | Image processing apparatus and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12805330 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20137029324 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012805330 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2839089 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14125463 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112013031911 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2013154760 Country of ref document: RU Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112013031911 Country of ref document: BR Kind code of ref document: A2 Effective date: 20131211 |