KR20140124438A - Method for encoding and decoding video using weighted prediction, and apparatus thereof - Google Patents

Method for encoding and decoding video using weighted prediction, and apparatus thereof

Info

Publication number
KR20140124438A
KR20140124438A KR20130041289A
Authority
KR
South Korea
Prior art keywords
brightness compensation
prediction block
brightness
block
prediction
Prior art date
Application number
KR20130041289A
Other languages
Korean (ko)
Inventor
문주희
임성원
한종기
Original Assignee
인텔렉추얼디스커버리 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인텔렉추얼디스커버리 주식회사 filed Critical 인텔렉추얼디스커버리 주식회사
Priority to KR20130041289A priority Critical patent/KR20140124438A/en
Publication of KR20140124438A publication Critical patent/KR20140124438A/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video encoding method using brightness compensation according to an embodiment of the present invention includes encoding a temporal prediction block from an image signal, and performing brightness compensation on the temporal prediction block, wherein performing the brightness compensation comprises converting the encoded temporal prediction block to a frequency domain, and performing brightness compensation based on the transformed temporal prediction block.

Description

TECHNICAL FIELD The present invention relates to a video encoding/decoding method and apparatus using brightness compensation.

BACKGROUND OF THE INVENTION The present invention relates to a video codec, and more particularly, to a weighted prediction method for compensating the brightness difference between pictures in the video encoding process.

To date, video codecs (H.264, HEVC) have included a weighted prediction technique to compensate for the difference in brightness between images. The need for this technology has been recognized throughout the development of every codec, but its actual performance leaves much room for improvement.

Also, the conventional weighted prediction (WP) technique is applied to the pixel-value signal, and it is limited to using the same compensation parameters for the entire picture without considering the local characteristics of the image.

An embodiment of the present invention provides a weighted prediction method and apparatus that improves brightness compensation performance and compression efficiency for the image information to be encoded and reflects the local characteristics of the image.

It is to be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may be present.

According to a first aspect of the present invention, there are provided a method and an apparatus for weighted prediction in a frequency domain, which perform brightness compensation in the frequency domain of the image information to be encoded. Brightness compensation can also be performed at various levels, and neighboring blocks can be taken into account.

According to the embodiment of the present invention, since the accurate compensation value can be calculated efficiently by analyzing the frequency characteristic of the image to be coded and calculating the brightness compensation coefficient, the performance of the brightness compensation function can be improved.

In addition, the coding efficiency can be improved by using the brightness compensation technique considering the local image characteristic of the coding block.

In the future, codecs will compress video signals at much larger resolutions than the 4K images that HEVC currently encodes, and the disclosed technique is expected to be especially effective in that environment.

FIG. 1 is a block diagram showing an example of a configuration of a video encoding apparatus.
FIG. 2 is a block diagram showing an example of a configuration of a video decoding apparatus.
FIG. 3 and FIG. 4 are block diagrams for explaining an example of the brightness compensation method.
FIG. 5 is a block diagram illustrating a brightness compensation method according to an embodiment of the present invention.
FIG. 6 is a diagram for explaining a bitstream extracting structure in a slice header according to an embodiment of the present invention.
FIG. 7 is a diagram for explaining a bitstream extracting structure in a prediction block according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.

Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed therebetween.

Throughout this specification, when a member is located "on" another member, this includes not only the case where the member is in contact with the other member but also the case where another member is present between the two members.

Throughout this specification, when an element is described as "including" a component, this means that it may further include other components, not that it excludes them, unless specifically stated otherwise. The terms of degree "about" and "substantially" are used to allow for inherent manufacturing and material tolerances where exact or absolute figures are stated, and to prevent an unscrupulous infringer from unfairly exploiting the disclosure. The phrase "step of" used throughout the specification does not mean "step for."

Throughout this specification, the phrase "combination thereof" included in a Markush-type expression means one or more mixtures or combinations selected from the group consisting of the components recited in the Markush-type expression.

Hereinafter, a method and apparatus for providing weighted prediction in a frequency transform domain according to an embodiment of the present invention and a brightness compensation technique using the method and apparatus will be described.

The brightness compensation technique according to an embodiment of the present invention performs brightness compensation in the frequency domain rather than in the pixel domain for the image information to be encoded. In this case, DCT, DST, or a wavelet transform (WT) can be selectively used to convert to the frequency domain.

The brightness compensation technique according to an exemplary embodiment of the present invention can be performed at a picture level, a slice level, a tile level, a CU level, a PU level, a TU level, and a pixel level.

In the brightness compensation technique according to an embodiment of the present invention, whether to apply brightness compensation to the current block can be determined in consideration of the brightness compensation state of neighboring blocks.

The brightness compensation technique according to an embodiment of the present invention can determine the compensation method using the signal values of surrounding blocks when applying the brightness compensation technique on a block-by-block basis.

The brightness compensation technique according to an exemplary embodiment of the present invention can be used to compensate for brightness difference between frames in a 2D video signal compression process.

The brightness compensation technique according to an exemplary embodiment of the present invention can be used to compensate for a brightness signal difference between views in a 3D video signal compression process. This can improve the compression efficiency in the inter-view prediction mode.

The brightness compensation technique according to an embodiment of the present invention can be used in Scalable Video Codec. Accordingly, the brightness compensation function can be performed in the interlayer prediction mode.

FIG. 1 is a block diagram of an example of a configuration of a video encoding apparatus, and shows the coding structure of H.264.

Referring to FIG. 1, the unit for processing data in the H.264 coding scheme is a macroblock of 16 x 16 pixels; the apparatus receives an image, encodes it in an intra mode or an inter mode, and outputs a bitstream.

In the intra mode, the switch is switched to the intra mode, and in the inter mode, the switch is switched to the inter mode. The main flow of the encoding process is to generate a prediction block for the inputted block image, and then to obtain the difference between the input block and the prediction block and to code the difference.

First, the prediction block is generated according to the intra mode or the inter mode. In the intra mode, a prediction block is generated by spatial prediction using the already encoded neighboring pixel values of the current block in the intra prediction process. In the inter mode, a motion vector is obtained in the motion prediction process by searching the reference image for the region that best matches the current input block, and motion compensation is performed using the obtained motion vector to generate the prediction block.

As described above, the difference between the current input block and the prediction block is calculated to generate a residual block, which is then encoded. Block coding modes are roughly divided into intra and inter modes: the intra mode is divided into 16x16 and 4x4 intra modes, the inter mode is divided into 16x16, 16x8, 8x16, and 8x8 inter modes, and the 8x8 inter mode is further divided into 8x8, 8x4, 4x8, and 4x4 sub inter modes.

The encoding of the residual block is performed in the order of transform, quantization, and entropy encoding. First, for a block encoded in the 16x16 intra mode, a transform is applied to the residual block to output transform coefficients; the DC coefficients are then collected from the output transform coefficients and a Hadamard transform is performed on them to output Hadamard-transformed DC coefficients.
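The Hadamard transform of the collected DC coefficients described above can be sketched as follows. This is a minimal NumPy illustration of the 4x4 Hadamard matrix used by H.264 for the luma DC coefficients of a 16x16-intra macroblock; the function and variable names are ours, and codec-specific normalization is omitted.

```python
import numpy as np

# 4x4 Hadamard matrix applied to the 16 luma DC values (arranged as a
# 4x4 array) of a macroblock coded in the 16x16 intra mode.
H = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]])

def hadamard_4x4(dc):
    # Forward 2D transform: H * dc * H^T (normalization omitted here).
    return H @ dc @ H.T

dc = np.full((4, 4), 5)          # 16 identical DC values
coeffs = hadamard_4x4(dc)        # all energy gathers at coeffs[0, 0]
```

For a flat set of DC values, only the (0, 0) output is nonzero, which illustrates why a second-level transform of the DC coefficients improves compaction for smooth macroblocks.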

For a block encoded in a coding mode other than the 16x16 intra mode, the transform process receives the input residual block, transforms it, and outputs transform coefficients.

In the quantization process, the input transform coefficients are quantized according to a quantization parameter, and quantized coefficients are output. In the entropy encoding process, the input quantized coefficients are entropy-encoded according to a probability distribution and output as a bitstream. Since H.264 performs inter-frame predictive coding, the currently encoded image must be decoded and stored for use as a reference image for subsequent input images.

Therefore, the quantized coefficients are dequantized and inverse-transformed, and reconstructed blocks are generated through the prediction image and the adder. Blocking artifacts generated in the encoding process are then removed by the deblocking filter, and the reconstructed blocks are stored in the reference image buffer.

FIG. 2 is a block diagram of an example of the configuration of a video decoding apparatus, and shows the decoding structure of H.264.

Referring to FIG. 2, the unit for processing data in the H.264 decoding structure is a macroblock of 16 x 16 pixels; the apparatus receives a bitstream, performs decoding in an intra mode or an inter mode, and outputs a reconstructed image.

In the intra mode, the switch is switched to the intra mode, and in the inter mode, the switch is switched to the inter mode. The main flow of the decoding process is to generate a prediction block, decode the bitstream to obtain a residual block, and add the two to generate a reconstructed block.

First, the prediction block is generated according to the intra mode or the inter mode. In the intra mode, spatial prediction is performed using the already reconstructed neighboring pixel values of the current block in the intra prediction process to generate a prediction block.

In the inter mode, a motion vector is used to search for a region in a reference image stored in a reference image buffer, and motion compensation is performed to generate a prediction block.

In the entropy decoding process, the input bitstream is entropy-decoded according to a probability distribution to output a quantized coefficient. The quantized coefficients are dequantized and inverse transformed to generate a reconstructed block through a predictive image and an adder. Blocking artifacts are removed through a deblocking filter, and the reconstructed blocks are stored in a reference image buffer.

Another example of a method for encoding a real image and its depth information map is HEVC (High Efficiency Video Coding), which is being jointly standardized by MPEG (Moving Picture Experts Group) and VCEG (Video Coding Experts Group). It can provide high-quality images at lower bandwidths not only for HD and UHD images but also for 3D video and mobile communication networks.

HEVC includes various new algorithms such as coding unit and structure, inter prediction, intra prediction, interpolation, filtering, and transform.

Video codecs such as H.264 and HEVC include a weighted prediction method in the basic profile to compensate for the difference in brightness between images. The need for this technology has been recognized throughout the development of every codec, but its actual performance leaves much room for improvement.

The weighted prediction (WP) technique used to date calculates weight coefficients (weight, offset) for brightness compensation using the pixel values of the entire picture to be coded and the pixel values of the entire reference picture, and then performs inter coding using those coefficient values.

Accordingly, the current weighted prediction technique applies weighted prediction unconditionally to every block of the current image that is coded from a reference image for which weighted prediction has been selected, without considering the local characteristics of the image, and it is further limited to using the same weighting factor over an entire block of pixels.

Such weighted prediction methods take the pixels of the current picture to be coded and the pixels of the reference picture as samples, calculate the brightness difference between the samples, derive weight and offset values for brightness compensation from those pixels, and apply them in motion compensation.

These conventional brightness compensation methods operate in units of blocks, applying the same brightness weight and offset values to all pixel values within a block.

[Equation 1]
pred[x, y] = W * rec[x, y] + O

In Equation (1) above, pred[x, y] denotes the brightness-compensated prediction block, and rec[x, y] denotes the prediction block from the reference image. W and O denote the weight and the offset value, respectively.
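The per-block compensation of Equation (1) can be sketched as follows. This is a minimal NumPy illustration, not the reference implementation; the function name and the 8-bit clipping are our assumptions.

```python
import numpy as np

def weighted_prediction(rec, w, o, bit_depth=8):
    """Apply Equation (1): pred[x, y] = W * rec[x, y] + O, per block.

    rec: prediction block taken from the reference picture.
    w, o: single weight and offset shared by every pixel of the block.
    """
    pred = w * rec.astype(np.int32) + o
    # Clip to the valid sample range (8-bit video assumed here).
    return np.clip(pred, 0, (1 << bit_depth) - 1).astype(np.uint8)

block = np.full((4, 4), 100, dtype=np.uint8)   # flat 4x4 block
compensated = weighted_prediction(block, w=1.2, o=-10)
```

Every pixel of the block receives the same weight and offset, which is exactly the block-level limitation the frequency-domain scheme below is designed to relax.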

3 and 4 are block diagrams for explaining an example of a weight prediction method.

Referring to FIG. 3, a weighting coefficient is calculated using the current image and a reference image (S301), and the weighting coefficient is applied as shown in Equation (1) when inter coding is performed (S302).

FIG. 4 shows the method of obtaining and applying the weighting coefficient in detail. The contents of S301 are elaborated as S401 to S404, and S302 and S405 are the same.

Before encoding of the current image starts, DC and AC values are obtained using the current image and the reference images. Here, DC is the average value of the image, obtained by summing all pixel values and dividing by the total number of pixels. AC is the sum of absolute differences (SAD) between each pixel value and the DC (average) value of the image (S401). Then, the weight and offset are calculated using [Equation 2] and [Equation 3] (S402).

[Equation 2]
W = AC_cur / AC_ref

[Equation 3]
O = DC_cur - W * DC_ref
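The statistics of S401 and a common global weight/offset estimate (W = AC_cur / AC_ref, O = DC_cur - W * DC_ref; the exact formulas behind the original [Equation 2]/[Equation 3] images are not reproduced in the text, so this widely used estimate is our assumption) can be sketched as:

```python
import numpy as np

def dc_ac(img):
    """S401: DC is the mean pixel value; AC is the SAD of pixels to DC."""
    dc = img.mean()
    ac = np.abs(img - dc).sum()
    return dc, ac

def estimate_weight_offset(cur, ref):
    """S402 (assumed formulas): W = AC_cur / AC_ref,
    O = DC_cur - W * DC_ref."""
    dc_c, ac_c = dc_ac(cur)
    dc_r, ac_r = dc_ac(ref)
    w = ac_c / ac_r if ac_r != 0 else 1.0
    o = dc_c - w * dc_r
    return w, o

ref = np.array([[90., 110.], [80., 120.]])
cur = ref * 1.5 + 4.0          # brighter, higher-contrast version of ref
w, o = estimate_weight_offset(cur, ref)
```

For this synthetic pair the estimate recovers the true brightness change (w = 1.5, o = 4.0), since AC scales with the weight and DC absorbs the offset.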

Once the weight and offset have been obtained for each reference image using the current image, the weight and offset are applied to the entire reference image, and the sum of absolute differences between the compensated reference image and the current image (SAD_WP) and the sum of absolute differences between the original reference image and the current image (SAD_Org) are computed; the ratio of the two values is then obtained (S403). If the ratio indicates that the compensation sufficiently reduces the error relative to a threshold, the weight and offset are set to be applied unconditionally when inter coding is performed with that reference image; otherwise they are not applied (S404). Thereafter, weighted prediction is applied block by block during inter coding according to the items determined above (S405).

A method of compensating brightness in a frequency domain (encoding section)

According to an embodiment of the present invention, brightness compensation can be performed by calculating the brightness compensation coefficients in the frequency domain rather than in the pixel domain. In this case, before brightness compensation is performed, the inter-coded blocks are collected and the weighting coefficients are obtained, and brightness compensation is then carried out through a re-encoding process. The unit at which encoding ends may be a picture, or it may be set to any unit such as a slice.

In an embodiment of the present invention, weighted prediction is applied after encoding ends in units of slices for convenience of explanation, but the invention is not limited to slice units.

FIG. 5 is a block diagram illustrating the configuration of a brightness compensator according to an exemplary embodiment of the present invention. Referring to FIG. 5, an exemplary embodiment of the brightness compensator according to the present invention will be described in detail.

After inter coding is completed in units of slices (S501), the blocks are converted into the frequency domain to derive the weighting coefficients (S502), it is determined whether to apply the weighting coefficients to the prediction blocks (S503), and re-encoding is performed using the changed prediction blocks (S504).

The process of step S502 of deriving the weighting factor will be described in detail with reference to FIG.

The inter coded blocks and the prediction blocks are collected as shown in the figure. q_m,n denotes the original blocks and p_m,n denotes the corresponding prediction blocks, where m and n denote the positions of the pixels within each block. The collected blocks are converted into the frequency domain, and the number of iterations for optimizing the weighting coefficients is set to n (S601). Then, the least squares method is applied to the collected blocks using Equation (4).

[Equation 4]
D_m,n = sum_{k=1..K} ( Q_m,n(k) - ( W_m,n * P_m,n(k) + O ) )^2

Here, Q_m,n(k) is the transformed original block, P_m,n(k) is the transformed prediction block, W_m,n is the weight, O is the offset, K is the number of collected blocks, m and n are the positions of the coefficients within the collected blocks, and D_m,n is the error. To obtain the weight W_m,n that minimizes the error D_m,n, partial differentiation is performed as in Equation (5), which is rearranged as Equation (6).

[Equation 5]
∂D_m,n / ∂W_m,n = -2 * sum_{k=1..K} P_m,n(k) * ( Q_m,n(k) - W_m,n * P_m,n(k) - O ) = 0

[Equation 6]
W_m,n = ( sum_{k=1..K} Q_m,n(k) * P_m,n(k) ) / ( sum_{k=1..K} P_m,n(k)^2 ), with O = 0 for (m, n) != (0, 0)

Further, to obtain the offset O that minimizes D_m,n, partial differentiation is performed as in Equation (7), which is rearranged as Equation (8).

[Equation 7]
∂D_0,0 / ∂O = -2 * sum_{k=1..K} ( Q_0,0(k) - W_0,0 * P_0,0(k) - O ) = 0

[Equation 8]
O = (1/K) * sum_{k=1..K} ( Q_0,0(k) - W_0,0 * P_0,0(k) )

The offset is applied only at the (0, 0) position, because in the transform domain the energy is concentrated at the (0, 0) position, that is, the DC position. The most significant difference from the conventional technique is that the optimal weighting coefficients are obtained using the least squares method in the transformed frequency domain; for the weight W, an array is used instead of a single constant (S602). The compensated prediction blocks are then compared with the original blocks q_m,n(k), and if the number of blocks whose residual signal is reduced becomes smaller (S603), the optimization count is decreased by one (S604). If the optimization count is not yet 0 (S605), residual signals are collected only for the reduced set of blocks (S606), and the weighting coefficients are optimized again by applying [Equation 4] to [Equation 8].
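The least-squares solution of [Equation 4] to [Equation 8] can be sketched as follows: a per-coefficient weight array plus an offset at the DC position only. This is a minimal NumPy illustration with our own function names; for the DC coefficient the weight and offset are solved sequentially here, a simplification of the joint least squares.

```python
import numpy as np

def optimize_weights(Q, P):
    """Q, P: arrays of shape (K, N, N) - K transformed original blocks
    and K transformed prediction blocks collected over the slice.

    Returns the per-coefficient weight array W[m, n] minimizing
    D[m, n] = sum_k (Q[m,n](k) - W[m,n] * P[m,n](k) - O)^2
    and the offset O, applied only at the DC position (0, 0).
    """
    num = (Q * P).sum(axis=0)                 # sum_k Q * P
    den = (P * P).sum(axis=0)                 # sum_k P^2
    W = np.divide(num, den, out=np.ones_like(num), where=den != 0)
    # Offset at DC: mean over k of (Q[0,0](k) - W[0,0] * P[0,0](k)).
    O = (Q[:, 0, 0] - W[0, 0] * P[:, 0, 0]).mean()
    return W, O

rng = np.random.default_rng(0)
P = rng.normal(size=(8, 4, 4))    # 8 collected 4x4 prediction blocks
Q = 1.3 * P                       # originals differ by a pure gain
W, O = optimize_weights(Q, P)
```

When every coefficient is scaled by the same factor, the recovered weight array is constant and the DC offset vanishes; for real content each frequency position gets its own weight, which is what distinguishes this scheme from a single pixel-domain (W, O) pair.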

The step S503 of applying the weighting coefficients to the prediction block will be described with reference to FIG.

The weighting coefficients are applied to the transformed prediction block P_m,n to generate a compensated prediction block P'_m,n (S801). Then, the residual signal between q_m,n and p_m,n is compared with the residual signal between q_m,n and p'_m,n (S802). According to the result (S803), a flag indicating whether weighted prediction is used is set for the prediction block (S804, S805). This flag can be used to determine whether weighted prediction is applied to each prediction block in the entire image. When encoding is completed in units of slices, the cost of the original slice without weighted prediction is compared with the cost of the slice with weighted prediction applied (S806), and the final bitstream for the slice is generated according to the result (S808, S809).
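The per-block decision of S801-S805 can be sketched as follows. Using the sum of absolute residuals as the comparison cost is our assumption; the text only says the two residual signals are compared.

```python
import numpy as np

def choose_wp_flag(q, p, p_comp):
    """S801-S805: keep whichever prediction leaves the smaller residual
    against the original block q, and record the choice as a flag.

    q: original block, p: plain prediction, p_comp: compensated
    prediction. Sum of absolute residuals is the assumed cost metric.
    """
    cost_plain = np.abs(q - p).sum()
    cost_wp = np.abs(q - p_comp).sum()
    pu_wp_flag = cost_wp < cost_plain
    return pu_wp_flag

q = np.array([[120., 130.], [110., 140.]])
p = q - 12.0                    # prediction is uniformly too dark
p_comp = p + 12.0               # compensation corrects the brightness
flag = choose_wp_flag(q, p, p_comp)
```

The flag set here is what the slice-level comparison of S806 and the decoder-side pu_wp_flag parsing later rely on.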

In the re-encoding step S504, the existing prediction information and transform information may be reused as they are to reduce complexity, or the prediction information and transform information may be newly calculated.

Meanwhile, when calculating the brightness compensation coefficients in the frequency domain or applying the brightness compensation, various transforms such as DCT, DST, and DWT can be selectively used for conversion into the frequency domain.
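As one of the selectable transforms, a 2D DCT-II can be sketched in plain NumPy as follows. This is an orthonormal floating-point variant for illustration only; real codecs use integer approximations, and the function names are ours.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct2(block):
    """2D DCT: energy gathers at the (0, 0) (DC) position, which is
    why the offset in this scheme is applied only there."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

block = np.full((4, 4), 10.0)      # flat block: all energy is DC
coeffs = dct2(block)
```

A flat block transforms to a single DC coefficient, illustrating the energy-compaction property the text invokes when it restricts the offset to the (0, 0) position.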

A method of compensating brightness in a frequency domain (decoding unit)

FIG. 6 shows a bitstream extracting structure in a slice header according to the present embodiment, and FIG. 7 shows a bitstream extracting structure in a prediction block according to the present embodiment.

The slice_wp_flag bit indicating whether weighted prediction is used in the current slice is extracted from the slice header (S1401). If it is determined to be false, the process ends immediately; if it is determined to be true (S1402), the weighting coefficients are extracted (S1403). The bitstream extracting structure in the prediction block depends on the slice_wp_flag bit extracted from the slice header. If the slice_wp_flag bit is determined to be false, the process ends (S1501, S1502); if it is true (S1503), the pu_wp_flag bit indicating whether weighted prediction is used in the current block is extracted (S1504). Only when the pu_wp_flag bit is determined to be true, after all the bitstream elements necessary for weighted prediction have been decoded, the prediction blocks are collected as shown in FIG. 7 and converted into the frequency domain; the extracted weighting coefficients are then applied and the inverse transform is performed to generate the compensated prediction block, to which the residual signal is added for decoding.
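The flag-gated parsing of FIG. 6 and FIG. 7 can be sketched as follows. Only the control flow (slice_wp_flag gating the weighting coefficients, pu_wp_flag gating per-block compensation) is taken from the text; the bit layout, field widths, and helper names are our assumptions.

```python
def parse_slice_wp(bits):
    """Decoder-side sketch of the FIG. 6 / FIG. 7 logic.

    bits: sequence of 0/1 values. slice_wp_flag gates the (assumed
    4-bit) weighting-coefficient field; the remaining bits are read
    as one pu_wp_flag per prediction block.
    """
    it = iter(bits)
    slice_wp_flag = next(it)                 # S1401
    if not slice_wp_flag:                    # false -> stop immediately
        return {"slice_wp_flag": 0, "weights": None, "pu_flags": []}
    weights = [next(it) for _ in range(4)]   # S1403: assumed field width
    pu_flags = []
    try:
        while True:
            pu_flags.append(next(it))        # S1504: one flag per block
    except StopIteration:
        pass
    return {"slice_wp_flag": 1, "weights": weights, "pu_flags": pu_flags}

parsed = parse_slice_wp([1, 1, 0, 1, 1, 1, 0, 1])
```

The design point is that a false slice_wp_flag lets the decoder skip all weighted-prediction syntax for the slice, so blocks pay no per-block overhead when the tool is off.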

The brightness compensation method according to an embodiment of the present invention as described above can be used to compensate for brightness difference between frames in a 2D video signal compression process.

In addition, the brightness compensation method according to an embodiment of the present invention as described above can be used to compensate for the brightness signal difference between views in the 3D video signal compression process. This can improve the compression efficiency in the inter-view prediction mode.

In addition, the brightness compensation method according to an embodiment of the present invention as described above can be used in a Scalable Video Codec. Accordingly, the brightness compensation function can be performed in the interlayer prediction mode.

The method according to the present invention may be implemented as a program executed on a computer and stored in a computer-readable recording medium. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a floppy disk, and an optical data storage device, and it may also be implemented in the form of a carrier wave (for example, transmission over the Internet).

The computer-readable recording medium may also be distributed over networked computer systems so that the computer-readable code is stored and executed in a distributed manner. Functional programs, codes, and code segments for implementing the above method can be easily inferred by programmers in the technical field to which the present invention belongs.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, and that various modifications may be made by those skilled in the art without departing from the spirit and scope of the present invention.

Claims (8)

A video encoding method comprising:
encoding a temporal prediction block from a video signal; and
performing brightness compensation on the temporal prediction block,
wherein performing the brightness compensation comprises:
transforming the encoded temporal prediction block into a frequency domain; and
performing brightness compensation based on the transformed temporal prediction block.
The method according to claim 1,
Wherein the transforming into the frequency domain is performed by selectively using one of DCT, DST, and WT.
The method according to claim 1,
Wherein the performing the brightness compensation comprises performing brightness compensation using brightness compensation information of neighboring blocks corresponding to the temporal prediction block.
The method according to claim 1,
Wherein transforming to the frequency domain comprises deriving a weighting factor for brightness compensation.
A video encoding apparatus comprising:
An encoding unit for encoding a temporal prediction block from a video signal; And
And a brightness compensation unit for performing brightness compensation on the temporal prediction block,
wherein the brightness compensation unit transforms the encoded temporal prediction block into a frequency domain and performs brightness compensation based on the transformed temporal prediction block.
The apparatus according to claim 5,
Wherein the brightness compensation unit selectively uses one of DCT, DST, and WT to transform into the frequency domain.
The apparatus according to claim 5,
Wherein the brightness compensation unit performs brightness compensation using brightness compensation information of neighboring blocks corresponding to the temporal prediction block.
The apparatus according to claim 5,
Wherein the brightness compensation unit derives a weighting coefficient for brightness compensation when transforming into the frequency domain.
KR20130041289A 2013-04-15 2013-04-15 Method for encoding and decoding video using weighted prediction, and apparatus thereof KR20140124438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR20130041289A KR20140124438A (en) 2013-04-15 2013-04-15 Method for encoding and decoding video using weighted prediction, and apparatus thereof


Publications (1)

Publication Number Publication Date
KR20140124438A true KR20140124438A (en) 2014-10-27

Family

ID=51994648

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130041289A KR20140124438A (en) 2013-04-15 2013-04-15 Method for encoding and decoding video using weighted prediction, and apparatus thereof

Country Status (1)

Country Link
KR (1) KR20140124438A (en)


Legal Events

Date Code Title Description
N231 Notification of change of applicant
WITN Withdrawal due to no request for examination