WO2020263499A1 - Adaptive resolution change in video processing - Google Patents
Adaptive resolution change in video processing
- Publication number
- WO2020263499A1 (PCT/US2020/035079)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- reference picture
- picture
- version
- resolution
- resampling
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/172—Adaptive coding characterised by the coding unit, the unit being a picture, frame or field
- H04N19/30—Coding using hierarchical techniques, e.g. scalability
- H04N19/423—Implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
- H04N19/428—Recompression, e.g. by spatial or temporal decimation
- H04N19/573—Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
- H04N19/70—Syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present disclosure generally relates to video processing, and more particularly, to methods and systems for performing adaptive resolution change in video coding.
- a video is a set of static pictures (or “frames”) capturing the visual information.
- a video can be compressed before storage or transmission and decompressed before display.
- the compression process is usually referred to as encoding and the decompression process is usually referred to as decoding.
- There are various video coding formats which use standardized video coding technologies, most commonly based on prediction, transform, quantization, entropy coding and in-loop filtering.
- the video coding standards, such as the High Efficiency Video Coding (HEVC/H.265) standard, the Versatile Video Coding (VVC/H.266) standard, and the AVS standards, specify the specific video coding formats and are developed by standardization organizations. With more and more advanced video coding technologies being adopted in the video standards, the coding efficiency of the new video coding standards gets higher and higher.
- the embodiments of the present disclosure provide a method for performing adaptive resolution change during video encoding or decoding.
- in one exemplary embodiment, the method includes: comparing resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
- the embodiments of the present disclosure also provide a device for performing adaptive resolution change during video encoding or decoding.
- the device includes: one or more memories storing computer instructions; and one or more processors configured to execute the computer instructions to cause the device to: compare resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resample the first reference picture to generate a second reference picture; and encode or decode the target picture using the second reference picture.
- the embodiments of the present disclosure also provide a non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for adaptive resolution change.
- the method includes: comparing resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
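The claimed flow is the same across the method, device, and medium embodiments; a minimal sketch (with the resampling filter abstracted behind a function argument, so all names here are illustrative only):

```python
def encode_or_decode_with_arc(target, reference, resample):
    """Core ARC flow: resample the reference only when resolutions differ.

    `target` and `reference` are (width, height, payload) tuples, and
    `resample` is any picture-level resampling function (both hypothetical).
    """
    t_w, t_h, _ = target
    r_w, r_h, _ = reference
    if (t_w, t_h) != (r_w, r_h):
        # Generate the "second reference picture" at the target resolution.
        reference = resample(reference, t_w, t_h)
    # ... the target picture is then encoded or decoded using `reference` ...
    return reference
```

When resolutions already match, the first reference picture is used unchanged, which is the common case.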
- FIG. 1 is a schematic diagram illustrating an exemplary video encoder, consistent with embodiments of the disclosure.
- FIG. 2 is a schematic diagram illustrating an exemplary video decoder, consistent with embodiments of the disclosure.
- FIG. 3 illustrates an example in which resolutions of reference pictures are different from a current picture, consistent with embodiments of the disclosure.
- FIG. 4 is a table showing sub-pel motion compensation interpolation filter used for luma component in Versatile Video Coding (VVC), consistent with embodiments of the disclosure.
- VVC Versatile Video Coding
- FIG. 5 is a table showing sub-pel motion compensation interpolation filter used for chroma component in VVC, consistent with embodiments of the disclosure.
- FIG. 6 illustrates an exemplary reference picture buffer, consistent with embodiments of the disclosure.
- FIG. 7 is a table showing an example of a supported resolution set including three different resolutions, consistent with embodiments of the disclosure.
- FIG. 8 illustrates an exemplary decoded picture buffer (DPB) when both resampled and original reference pictures are stored, consistent with embodiments of the disclosure.
- DPB decoded picture buffer
- FIG. 9 illustrates progressive down-sampling, consistent with embodiments of the disclosure.
- FIG. 10 illustrates an exemplary video coding process with resolution change, consistent with embodiments of the disclosure.
- FIG. 11 is a table showing an exemplary down-sampling filter, consistent with embodiments of the disclosure.
- FIG. 12 illustrates an exemplary video coding process with peak signal-to-noise ratio (PSNR) computation, consistent with embodiments of the disclosure.
- PSNR peak signal-to-noise ratio
- FIG. 13 illustrates the frequency response of an exemplary low pass filter, consistent with embodiments of the disclosure.
- FIG. 14 is a table showing 6-tap filters, consistent with embodiments of the disclosure.
- FIG. 15 is a table showing 8-tap filters, consistent with embodiments of the disclosure.
- FIG. 16 is a table showing 4-tap filters, consistent with embodiments of the disclosure.
- FIG. 17 is a table showing interpolation filter coefficients for 2:1 down-sampling, consistent with embodiments of the disclosure.
- FIG. 18 is a table showing interpolation filter coefficients for 1.5:1 down-sampling, consistent with embodiments of the disclosure.
- FIG. 19 shows an exemplary luma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
- FIG. 20 shows an exemplary chroma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
- FIG. 21 shows an exemplary luma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
- FIG. 22 shows an exemplary chroma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
- FIG. 23 shows an exemplary luma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
- FIG. 24 shows an exemplary chroma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
- FIG. 25 shows an exemplary chroma fractional sample position calculation for reference down-sampling, consistent with embodiments of the disclosure.
- FIG. 26 is a table showing an 8-tap filter for MC interpolation with reference down-sampling at ratio 2:1, consistent with embodiments of the disclosure.
- FIG. 27 is a table showing an 8-tap filter for MC interpolation with reference down-sampling at ratio 1.5:1, consistent with embodiments of the disclosure.
- FIG. 28 is a table showing an 8-tap filter for MC interpolation with reference down-sampling at ratio 2:1, consistent with embodiments of the disclosure.
- FIG. 29 is a table showing an 8-tap filter for MC interpolation with reference down-sampling at ratio 1.5:1, consistent with embodiments of the disclosure.
- FIG. 30 is a table showing 6-tap filter coefficients for luma 4x4 block MC interpolation with reference down-sampling at ratio 2:1, consistent with embodiments of the disclosure.
- FIG. 31 is a table showing 6-tap filter coefficients for luma 4x4 block MC interpolation with reference down-sampling at ratio 1.5:1, consistent with embodiments of the disclosure.
DESCRIPTION
- a video is a set of static pictures (or “frames”) arranged in a temporal sequence to store visual information.
- a video capture device e.g., a camera
- a video playback device e.g., a television, a computer, a smartphone, a tablet computer, a video player, or any end-user terminal with a function of display
- a video capturing device can transmit the captured video to the video playback device (e.g., a computer with a monitor) in real-time, such as for surveillance, conferencing, or live broadcasting
- the video can be compressed.
- the video can be compressed before storage and transmission and decompressed before the display.
- the compression and decompression can be implemented by software executed by a processor (e.g., a processor of a generic computer) or specialized hardware.
- the module for compression is generally referred to as an “encoder,” and the module for decompression is generally referred to as a “decoder.”
- the encoder and the decoder can be collectively referred to as a “codec.”
- the encoder and the decoder can be implemented as any of a variety of suitable hardware, software, or a combination thereof.
- the hardware implementation of the encoder and the decoder can include circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, or any combinations thereof.
- the software implementation of the encoder and the decoder can include program codes, computer-executable instructions, firmware, or any suitable computer-implemented algorithm or process fixed in a computer- readable medium.
- Video compression and decompression can be implemented by various algorithms or standards, such as MPEG-1, MPEG-2, MPEG-4, the H.26x series, or the like.
- the codec can decompress the video from a first coding standard and re-compress the decompressed video using a second coding standard, in which case the codec can be referred to as a “transcoder.”
- the video encoding process can identify and keep useful information that can be used to reconstruct a picture. If information that was disregarded in the video encoding process cannot be fully reconstructed, the encoding process can be referred to as “lossy.” Otherwise, it can be referred to as “lossless.” Most encoding processes are lossy, which is a tradeoff to reduce the needed storage space and the transmission bandwidth.
- the useful information of a picture being encoded can include changes with respect to a reference picture (e.g., a picture previously encoded or reconstructed). Such changes can include position changes, luminosity changes, or color changes of the pixels, among which the position changes are mostly concerned. Position changes of a group of pixels that represent an object can reflect the motion of the object between the reference picture and the current picture.
- the JVET has been developing technologies beyond HEVC using the joint exploration model (“JEM”) reference software.
- JEM joint exploration model
- the JEM achieved substantially higher coding performance than HEVC.
- the VCEG and MPEG have also formally started the development of a next-generation video compression standard beyond HEVC: the Versatile Video Coding (VVC/H.266) standard.
- VVC Versatile Video Coding
- FIG. 1 is a schematic diagram illustrating an exemplary video encoder 100, consistent with the disclosed embodiments.
- video encoder 100 may perform intra- or inter-coding of blocks within video frames, including video blocks, or partitions or sub-partitions of video blocks.
- Intra-coding may rely on spatial prediction to reduce or remove spatial redundancy in video within a given video frame.
- Inter coding may rely on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence.
- Intra modes may refer to a number of spatial based compression modes and inter modes (such as uni-prediction or bi-prediction) may refer to a number of temporal-based compression modes.
- input video signal 102 may be processed block by block.
- the video block unit may be a 16x16 pixel block (e.g., a macroblock (MB)).
- extended block sizes e.g., a coding unit (CU)
- CU coding unit
- a CU may include up to 64x64 luma samples and corresponding chroma samples.
- the size of a CU may be further increased to include 128x128 luma samples and corresponding chroma samples.
- a CU may be partitioned into prediction units (PUs), for which separate prediction methods may be applied.
- Each input video block (e.g., MB, CU, PU, etc.) may be processed by using spatial prediction unit 160 or temporal prediction unit 162.
- Spatial prediction unit 160 performs spatial prediction (e.g., intra prediction) to the current CU using information on the same picture/slice containing the current CU. Spatial prediction may use pixels from the already coded neighboring blocks in the same video picture/slice to predict the current video block. Spatial prediction may reduce spatial redundancy inherent in the video signal. Temporal prediction (e.g., inter prediction or motion compensated prediction) may use samples from the already coded video pictures to predict the current video block. Temporal prediction may reduce temporal redundancy inherent in the video signal.
- Temporal prediction unit 162 performs temporal prediction (e.g., inter prediction) to the current CU using information from picture(s)/slice(s) different from the picture/slice containing the current CU.
- Temporal prediction for a video block may be signaled by one or more motion vectors.
- the motion vectors may indicate the amount and the direction of motion between the current block and one or more of its prediction block(s) in the reference frames. If multiple reference pictures are supported, one or more reference picture indices may be sent for a video block.
- the one or more reference indices may be used to identify from which reference picture(s) in the decoded picture buffer (DPB) 164 (also called reference picture store 164), the temporal prediction signal may come.
- DPB decoded picture buffer
- the mode decision and encoder control unit 180 in the encoder may choose the prediction mode, for example based on a rate-distortion optimization method.
- the prediction block may be subtracted from the current video block at adder 116.
- the prediction residual may be transformed by transformation unit 104 and quantized by quantization unit 106.
- the quantized residual coefficients may be inverse quantized at inverse quantization unit 110 and inverse transformed at inverse transform unit 112 to form the reconstructed residual.
- the reconstructed block may be added to the prediction block at adder 126 to form the reconstructed video block.
- the in-loop filtering, such as deblocking filter and adaptive loop filters 166, may be applied on the reconstructed video block before it is put in the reference picture store 164 and used to code future video blocks.
- coding mode (e.g., inter or intra), prediction mode information (e.g., motion information), and quantized residual coefficients may be sent to the entropy coding unit 108 to be compressed and packed to form the bitstream 120.
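The quantization step in this pipeline is where information is discarded; a toy scalar-quantization round trip (not the actual VVC quantizer, purely illustrative) shows why the reconstructed residual only approximates the original:

```python
def quantize(residual, step):
    # Scalar quantization: map each residual sample to a level index.
    return [round(r / step) for r in residual]

def inverse_quantize(levels, step):
    # Reconstruction: levels are scaled back; anything finer than the
    # quantization step is lost, which is what makes the coding "lossy".
    return [q * step for q in levels]

residual = [7, -3, 12, 0, -8]
levels = quantize(residual, step=4)
reconstructed = inverse_quantize(levels, step=4)
# `reconstructed` approximates `residual` but generally differs sample by sample
```

A larger step saves more bits at the cost of a larger reconstruction error, which is the rate-distortion tradeoff the mode decision unit balances.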
- the above-described units of video encoder 100 can be implemented as software modules (e.g., computer programs that fulfill different functions), hardware components (e.g., different circuitry blocks for performing the respective functions), or a hybrid of software and hardware.
- FIG. 2 is a schematic diagram illustrating an exemplary video decoder 200, consistent with the disclosed embodiments.
- a video bitstream 202 may be unpacked or entropy decoded at entropy decoding unit 208.
- the coding mode or prediction information may be sent to the spatial prediction unit 260 (e.g., if intra coded) or the temporal prediction unit 262 (e.g., if inter coded) to form the prediction block.
- the prediction information may comprise prediction block sizes, one or more motion vectors (e.g., which may indicate direction and amount of motion), or one or more reference indices (e.g., which may indicate from which reference picture the prediction signal is to be obtained).
- Motion compensated prediction may be applied by the temporal prediction unit 262 to form the temporal prediction block.
- the residual transform coefficients may be sent to inverse quantization unit 210 and inverse transform unit 212 to reconstruct the residual block.
- the prediction block and the residual block may be added together at 226.
- the reconstructed block may go through in-loop filtering (via loop filter 266) before it is stored in decoded picture buffer (DPB) 264 (also called reference picture store 264).
- DPB decoded picture buffer
- the reconstructed video in the DPB 264 may be used to drive a display device or used to predict future video blocks.
- Decoded video 220 may be displayed on a display.
- the above-described units of video decoder 200 can be implemented as software modules (e.g., computer programs that fulfill different functions), hardware components (e.g., different circuitry blocks for performing the respective functions), or a hybrid of software and hardware.
- One of the objectives of the VVC standard is to offer video conferencing applications the ability to tolerate diversity of networks and devices.
- the VVC standard needs to provide the ability of rapidly adapting to varying network environments, including rapidly reducing encoded bitrate when network conditions deteriorate, and rapidly increasing video quality when network conditions improve.
- in adaptive streaming, a video may be coded into multiple representations; each of the multiple representations may have different properties (e.g., spatial resolution or sample bit depth), and the video quality may vary from low to high.
- the VVC standard thus needs to support fast representation switching for the adaptive streaming services. During switching from one representation to another representation (such as switching from one resolution to another resolution), the VVC standard needs to enable the use of efficient prediction structure without compromising the fast and seamless switching capability.
- the encoder (e.g., encoder 100 in FIG. 1) can send an instantaneous-decoder-refresh (IDR) coded picture to clear the contents of the reference picture buffer (e.g., the DPB 164 in FIG. 1 and the DPB 264 in FIG. 2).
- upon receiving an IDR picture, the decoder (e.g., decoder 200 in FIG. 2) marks all pictures in the reference buffer as “unused for reference.” All subsequent transmitted pictures can be decoded without reference to any frame decoded prior to the IDR picture.
- the first picture in a coded video sequence is always an IDR picture.
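The IDR behavior described above can be modeled as a simple reference-buffer operation (an illustrative sketch, not the normative DPB management process):

```python
class ReferenceBuffer:
    """Toy model of a decoded picture buffer's reference list."""

    def __init__(self):
        self.pictures = []          # decoded pictures available for reference

    def add(self, picture):
        self.pictures.append(picture)

    def on_idr(self):
        # An IDR picture marks everything in the buffer "unused for
        # reference", so no subsequent picture can predict from anything
        # decoded before the IDR.
        self.pictures.clear()

buf = ReferenceBuffer()
buf.add("P0")
buf.add("P1")
buf.on_idr()                        # buffer is now empty
buf.add("IDR")
```

This reset is exactly what ARC avoids: with ARC, the resolution can change without discarding the prediction chain.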
- adaptive resolution change (ARC) technology can be used to allow a video stream to change spatial resolution between coding pictures within the same video sequence, without requiring a new IDR picture and without requiring multi-layers as in scalable video codec.
- ARC adaptive resolution change
- a currently coded picture is either predicted from reference pictures with the same resolution (if available), or predicted from reference pictures of a different resolution by resampling the reference pictures.
- An illustration of adaptive resolution change is shown in FIG. 3, where the resolution of reference picture “Ref 0” is the same as the resolution of the currently coded picture. However, the resolutions of reference pictures “Ref 1” and “Ref 2” are different from the resolution of the current picture.
- To generate the motion compensated prediction signal of the current picture, both “Ref 1” and “Ref 2” are resampled to the resolution of the current picture.
- one way to generate the motion compensated prediction signal is picture-based resampling, where the reference picture is first resampled to the same resolution as the current picture, and the existing motion compensation process with motion vectors can be applied.
- the motion vectors may be scaled (if they are sent in units before resampling is applied) or not scaled (if they are sent in units after resampling is applied).
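For the picture-based case, the scaling that would apply when motion vectors are sent in units of the pre-resampling grid can be sketched as follows (exact fractions are used for clarity; real codecs use fixed-point arithmetic, and the function name is illustrative):

```python
from fractions import Fraction

def scale_motion_vector(mv_x, mv_y, ref_w, ref_h, cur_w, cur_h):
    """Scale an MV coded against the original reference grid onto the
    resampled (current-picture) grid. Horizontal and vertical ratios
    may differ, so each component is scaled independently."""
    sx = Fraction(cur_w, ref_w)
    sy = Fraction(cur_h, ref_h)
    return (int(mv_x * sx), int(mv_y * sy))

# A 2:1 up-resampled reference doubles the vector in each dimension;
# vectors already expressed in the post-resampling grid need no scaling.
```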
- Another way is block-based resampling, where resampling is performed at the block level. This is done by examining the reference picture(s) used by the current block; if one or both of them have a resolution different from that of the current picture, resampling is performed in combination with the sub-pel motion compensated interpolation process.
- the present disclosure provides both picture-based resampling methods and block-based resampling methods for use with ARC. The following description first addresses the disclosed picture-based resampling methods, and then addresses the disclosed block-based resampling methods.
- the disclosed picture-based resampling methods can solve some problems caused by traditional block-based resampling methods.
- on-the-fly block level resampling can be performed when it is determined that the resolution of a reference picture is different from the resolution of the current picture.
- the block level resampling process could complicate the encoder design because the encoder may have to resample the block at each search point during motion search.
- Motion search is generally a time-consuming process for an encoder, and thus the on-the-fly requirement during motion search may complicate the motion search process.
- the encoder may resample the reference picture in advance such that the resampled reference picture has the same resolution as that of the current picture.
- this may degrade coding efficiency because it may cause the prediction signals during motion estimation and motion compensation to be different.
- Block level resampling is not compatible with some other useful coding tools, such as subblock-based temporal motion vector prediction (SbTMVP), affine motion compensated prediction, decoder side motion vector refinement (DMVR), etc.
- SbTMVP subblock-based temporal motion vector prediction
- DMVR decoder side motion vector refinement
- Although any sub-pel motion compensated interpolation filter known in the art may be used for block resampling, it may not be equally applicable to block up-sampling and block down-sampling.
- the interpolation filter used for motion compensation may be used for the up-sampling in cases where the reference picture has a lower resolution than the current picture.
- the interpolation filter is not suitable, however, for the down-sampling in cases where the reference picture has a higher resolution than the current picture, because the interpolation filter cannot filter the integer positions and thus may produce aliasing.
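The integer-position problem can be illustrated directly: at fractional phase 0, an MC interpolation filter reduces to a pass-through, so a higher-resolution reference is effectively point-sampled with no low-pass filtering to suppress aliasing. The sketch below uses the phase-0 taps and half-pel taps commonly attributed to the VVC 8-tap luma filter; treat the exact coefficients as an assumption rather than normative values:

```python
def apply_8tap(samples, center, taps):
    # 1-D MC interpolation: weighted sum of 8 neighbours with 6-bit
    # normalization (the taps sum to 64).
    window = samples[center - 3:center + 5]
    return sum(s * t for s, t in zip(window, taps)) >> 6

PHASE_0  = [0, 0, 0, 64, 0, 0, 0, 0]         # integer position: identity
HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]  # half-pel: a genuine smoothing filter

samples = [10, 20, 30, 40, 50, 60, 70, 80, 90]
# At phase 0 the filter simply copies the integer sample -- no smoothing,
# hence aliasing when the reference has a higher resolution than the target.
```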
- Table 1 (FIG. 4) shows exemplary interpolation filter coefficients used for luma component integer positions with various values of the fractional sample positions.
- Table 2 (FIG. 5) shows exemplary interpolation filter coefficients used for chroma component integer positions with various values of the fractional sample positions.
- the picture resampling process can involve either up-sampling or down-sampling.
- Up-sampling is the increasing of the spatial resolution while keeping the two-dimensional (2D) representation of an image.
- in up-sampling, the resolution of the reference picture is increased by interpolating the unavailable samples from the neighboring available samples.
- in down-sampling, the resolution of the reference image is reduced.
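A minimal one-dimensional illustration of both directions (bilinear, for intuition only; practical resamplers use longer filters such as those shown in FIGs. 11-18):

```python
def resample_1d(line, out_len):
    """Bilinear resampling of one row of samples to `out_len` samples.
    Works for both up-sampling (out_len > len(line)) and down-sampling."""
    in_len = len(line)
    out = []
    for i in range(out_len):
        # Map the output position back onto the input sample grid.
        pos = i * (in_len - 1) / (out_len - 1) if out_len > 1 else 0
        left = int(pos)
        frac = pos - left
        right = min(left + 1, in_len - 1)
        out.append(round(line[left] * (1 - frac) + line[right] * frac))
    return out

# Up-sampling interpolates the "unavailable" samples between neighbours;
# down-sampling keeps fewer samples (and, without a proper low-pass
# filter, may alias -- which is why dedicated down-sampling filters exist).
```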
- resampling can be performed at the picture level.
- picture level resampling if the resolution of a reference picture is different from the resolution of the current picture, the reference picture is resampled to the resolution of the current picture.
- the motion estimation and/or compensation of the current picture can be performed based on the resampled reference picture.
- the ARC can be implemented in a “transparent” manner at the encoder and the decoder, because the block-level operations are agnostic to the resolution change.
- the picture-level resampling can be performed on the fly, i.e., while the current picture is predicted.
- only original (un-resampled) reference pictures are stored in the decoded picture buffer (DPB), e.g., DPB 164 in the encoder 100 (FIG. 1) and DPB 264 in the decoder 200 (FIG. 2).
- the DPB can be managed in the same way as the current version of the VVC.
- the encoder or decoder resamples a reference picture in the DPB if its resolution is different from the resolution of the current picture.
- a resampled picture buffer can be used to store all the resampled reference pictures for the current picture.
- FIG. 6 shows an example of on-the-fly picture level resampling where the low-resolution reference picture is resampled and stored in the resampled picture buffer.
- the DPB of the encoder or decoder contains 3 reference pictures.
- the resolutions of reference pictures “Ref 0” and “Ref 2” are the same as the resolution of the current picture. Therefore, “Ref 0” and “Ref 2” do not need to be resampled.
- the resolution of reference picture “Ref 1” is different from the resolution of the current picture, and “Ref 1” therefore needs to be resampled. Accordingly, only the resampled “Ref 1” is stored in the resampled picture buffer, while “Ref 0” and “Ref 2” are not.
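The on-the-fly scheme of FIG. 6 can be sketched as a resolution-keyed cache sitting alongside the DPB (class and field names here are illustrative, not from the disclosure):

```python
class ResampledPictureBuffer:
    """Caches resampled versions of references for the current picture."""

    def __init__(self, resample):
        self._resample = resample       # picture-level resampling function
        self._cache = {}

    def get(self, ref, cur_res):
        """Return `ref` at the current picture's resolution.
        References already at `cur_res` bypass the buffer entirely."""
        if ref["res"] == cur_res:
            return ref                  # e.g. "Ref 0" / "Ref 2" in FIG. 6
        key = (ref["id"], cur_res)
        if key not in self._cache:      # e.g. "Ref 1": resample once, reuse
            self._cache[key] = self._resample(ref, cur_res)
        return self._cache[key]
```

Caching matters because the same reference may be used by many blocks of the current picture; resampling it once bounds the extra decoder work.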
- the number of resampled reference pictures that a picture can use is constrained.
- the maximum number of resampled reference pictures of a given current picture may be preset.
- the maximum number may be set to be one.
- a bitstream constraint may be imposed such that the encoder or decoder may allow at most one of the reference pictures to have a different resolution from that of the current picture, and all other reference pictures must have the same resolution.
- Because this maximum number indicates the size of the resampled picture buffer and the maximum number of resampling operations that can be performed for any picture in the current video sequence, it directly relates to the worst-case decoder complexity. Therefore, this maximum number may be signaled as part of the sequence parameter set (SPS) or the picture parameter set (PPS), and it may be specified as part of the profile/level definition.
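The bitstream constraint described above can be sketched as a simple conformance check (the function and parameter names are illustrative; in practice the limit would be signaled in the SPS/PPS or fixed by the profile/level):

```python
def conforms(current_res, reference_res_list, max_resampled=1):
    """Return True if at most `max_resampled` reference pictures have a
    resolution different from the current picture (here the default of
    one matches the example constraint in the text)."""
    differing = sum(1 for r in reference_res_list if r != current_res)
    return differing <= max_resampled

# One low-resolution reference is allowed; two are not.
ok = conforms((1920, 1080), [(1920, 1080), (960, 540), (1920, 1080)])
bad = conforms((1920, 1080), [(960, 540), (960, 540)])
```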
- two versions of a reference picture are stored in the DPB.
- One version has the original resolution and the other version has the maximum resolution. If the resolution of the current picture is different from both the original resolution and the maximum resolution, the encoder or decoder can perform on-the-fly down-sampling from the stored maximum-resolution picture. For picture output, the DPB always outputs the original (un-resampled) reference picture.
- the resampling ratios can be arbitrarily chosen, and vertical and horizontal scaling ratios are allowed to be different. Because picture-level resampling is applied, the block-level operations of the encoder/decoder are made agnostic to the resolutions of the reference pictures, allowing arbitrary resampling ratios to be enabled without further complicating the block-level design logic of the encoder/decoder.
- the signaling of the coded resolution and maximum resolution can be performed as follows.
- the maximum resolution of any picture in the video sequence is signaled in the SPS.
- the coded resolution of the picture can be signaled either in the PPS or in the slice header. In either case, one flag is signaled indicating whether the coded resolution is the same as the maximum resolution or not. If the coded resolution is not the same as the maximum resolution, the coded width and height of the current picture are additionally signaled. If PPS signaling is used, the signaled coded resolution is applied to all pictures that refer to this PPS. If slice header signaling is used, the signaled coded resolution is only applied to the current picture itself. In some embodiments, the difference between the current coded resolution and the maximum resolution may be signaled in the SPS.
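The flag-based signaling described above can be sketched as follows (the syntax element names are illustrative, not the normative VVC syntax):

```python
def write_coded_resolution(max_w, max_h, coded_w, coded_h):
    """Emit the syntax elements described above: a flag telling whether
    the coded resolution equals the maximum resolution, followed by the
    coded width/height only when it does not."""
    same = (coded_w, coded_h) == (max_w, max_h)
    syntax = {"same_as_max_flag": int(same)}
    if not same:
        syntax["coded_width"] = coded_w
        syntax["coded_height"] = coded_h
    return syntax

def read_coded_resolution(syntax, max_w, max_h):
    """Recover the coded resolution, falling back to the SPS-signaled
    maximum resolution when the flag is set."""
    if syntax["same_as_max_flag"]:
        return max_w, max_h
    return syntax["coded_width"], syntax["coded_height"]

s = write_coded_resolution(1920, 1080, 960, 540)
```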
- the resolution is restricted to a pre-defined set of A supported resolutions, where A is the number of supported resolutions within a sequence.
- the value of A and supported resolutions can be signaled in the SPS.
- the coded resolution of a picture is signaled either through the PPS or the slice header. In either case, instead of signaling the actual coded resolution, the corresponding resolution index is signaled.
- both resampled and un-resampled reference pictures are stored in the DPB.
- one original (i.e., un-resampled) picture and A-1 resampled copies are stored. If the resolution of the reference picture is different from the resolution of the current picture, the resampled pictures in the DPB are used to code the current picture.
- the DPB outputs the original (un- resampled) reference picture.
- The selection of the value of A depends on the application. Larger values of A make resolution selection more flexible, but increase the complexity and memory requirements. Smaller values of A are suitable for limited-complexity devices, but limit the resolutions that can be selected. Therefore, in some embodiments, flexibility can be given to the encoder to decide the value of A based on applications and device capabilities, and the value can be signaled through the SPS.
- the down-sampling of reference pictures can be performed progressively (i.e., gradually).
- In conventional (i.e., direct) down-sampling, the down-sampling of an input image is performed in one step. If the down-sampling ratio is high, single-step down-sampling requires a longer-tap down-sampling filter in order to avoid severe aliasing. However, longer-tap filters are computationally expensive.
- when the down-sampling ratio is higher than a threshold (e.g., higher than 2:1), progressive down-sampling is used, in which down-sampling is performed gradually.
- FIG. 9 shows an example of progressive down-sampling where down-sampling by 4, in both horizontal and vertical dimensions, is implemented over 2 pictures. The first picture is down-sampled by 2 (in both directions) and the second picture is down-sampled by 2 (in both directions).
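The two-step progressive down-sampling of FIG. 9 can be sketched as follows (a 2x2 box average stands in for the actual, longer-tap anti-aliasing filter; function names are illustrative):

```python
def down_by_2(picture):
    """One 2:1 down-sampling step in both directions; a 2x2 box
    average stands in for the normative down-sampling filter."""
    h, w = len(picture), len(picture[0])
    return [[(picture[2*y][2*x] + picture[2*y][2*x+1] +
              picture[2*y+1][2*x] + picture[2*y+1][2*x+1]) // 4
             for x in range(w // 2)]
            for y in range(h // 2)]

def progressive_down_by_4(picture):
    """4:1 down-sampling implemented as two successive 2:1 steps,
    as in the two-picture example above."""
    return down_by_2(down_by_2(picture))

src = [[x + y for x in range(8)] for y in range(8)]
out = progressive_down_by_4(src)   # 8x8 -> 4x4 -> 2x2
```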
- scaled motion vectors based on the resampled reference pictures can be used during temporal motion vector predictor (TMVP) or advanced temporal motion vector predictor (ATMVP) derivation.
- the test methodology to evaluate the coding tools for ARC is described. Since adaptive resolution change is mainly used for adapting to the network bandwidth, the following test conditions may be considered to compare coding efficiency of different ARC schemes during network bandwidth change.
- the resolution is changed to half (in both vertical and horizontal directions) at a specific time instance and later reverted back to the original resolution after a certain time period.
- FIG. 10 shows the example of the resolution change. At time t1, the resolution is changed to half and later reverted back to the full resolution at time t2.
- If the down-sampling ratio is too large (e.g., the down-sampling ratio in a given dimension is larger than 2:1), progressive downscaling is used over a certain time period. However, when up-sampling is used, progressive up-sampling is not used.
- the scalable HEVC test model (SHM) down-sampling filter can be used for source resampling.
- the number of pictures to be encoded is the same as current test conditions of VVC.
- the disclosed block-based resampling methods are described.
- combining the resampling and motion compensated interpolation into one filtering operation may reduce the information loss mentioned above.
- the motion vector of the current block has half-pel precision in one dimension, e.g., the horizontal dimension
- the reference picture’s width is 2x that of the current picture.
- the block-based resampling method will directly fetch the odd positions in the reference picture as the reference block at half-pel precision.
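For this half-pel example, the direct fetch can be sketched as follows (an illustrative one-row version; `ref_row` is one row of the 2x-wide reference picture, and the function name is an assumption):

```python
def fetch_half_pel_row(ref_row, block_w, start_x):
    """When the reference width is 2x the current picture width, a
    half-pel horizontal position in current-picture units falls on the
    odd integer positions of the reference, so samples can be fetched
    directly without interpolation."""
    # current-picture position (start_x + i + 0.5) maps to reference
    # position 2*(start_x + i) + 1, i.e., the odd positions.
    return [ref_row[2 * (start_x + i) + 1] for i in range(block_w)]

row = list(range(16))              # one row of a 2x-wide reference
fetched = fetch_half_pel_row(row, 4, 0)
```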
- In VVC motion compensation, for the luma component: if the half-pel AMVR mode is selected and the interpolated position is half-pel, a 6-tap filter [3, 9, 20, 20, 9, 3] is used; if the motion compensated block size is 4x4, the 6-tap filters shown in Table 5 (FIG. 13) are used; otherwise, the 8-tap filters shown in Table 6 (FIG. 14) are used.
- For the chroma component, the 4-tap filters shown in Table 7 (FIG. 15) are used.
- In VVC, the same filters are used for MC interpolation without reference resampling and for MC interpolation with reference resampling.
- Because the VVC MCIF is designed based on DCT up-sampling, it may not be appropriate to use it as a one-step filter combining reference down-sampling and MC interpolation.
- For phase-0 filtering (i.e., when the scaled MV is an integer), the VVC 8-tap MCIF coefficients are [0, 0, 0, 64, 0, 0, 0, 0], which means the prediction sample is directly copied from the reference sample. While this may not be a problem for MC interpolation without reference down-sampling or with reference up-sampling, it may cause aliasing artifacts in the reference down-sampling case, due to the lack of a low-pass filter before decimation.
- The present disclosure provides methods of using cosine windowed-sinc filters for MC interpolation with reference down-sampling.
- Windowed-sinc filters are band-pass filters that separate one band of frequencies from others.
- the windowed-sinc filter is a low-pass filter with a frequency response that allows all frequencies below a cutoff frequency to pass through with amplitude 1 and stops all frequencies above the cutoff frequency with zero amplitude, as shown in FIG. 16.
- the filter kernel, also known as the impulse response of the filter, is obtained by taking the Inverse Fourier Transform of the frequency response of the ideal low-pass filter.
- the impulse response of the low-pass filter is in the general form of a sinc function:
- fc is the cut-off frequency, with a value in [0, 1]
- r is the down-sampling ratio, i.e.,
- the sinc function is infinite.
- a window function is applied to truncate the filter kernel to L points.
- a cosine-windowed function is used, which is given by:
- the kernel of the cosine windowed-sinc filter is the product of the ideal response function h(n) and the cosine-window function w(n):
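Assuming the standard windowed-sinc construction, the relations above take approximately the following form (a hedged reconstruction; the patent's exact equations (1)-(4) and their normalization may differ):

```latex
% Ideal low-pass impulse response with cut-off f_c = 1/r:
h(n) = f_c \,\operatorname{sinc}(f_c n), \qquad
\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}
% Cosine window truncating the kernel to L points:
w(n) = \cos\!\left(\frac{\pi n}{L}\right), \qquad |n| \le \frac{L}{2}
% Cosine windowed-sinc kernel:
g(n) = h(n)\, w(n)
```

Note that with no down-sampling (r = 1, i.e., fc = 1), h(n) reduces to the unit impulse, so the filter degenerates to a direct sample copy, consistent with the phase-0 MCIF behavior described earlier.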
- Filter coefficients obtained in (4) are real numbers. Applying a filter is equivalent to calculating the weighted average of reference samples, with the weights being the filter coefficients. For efficient calculation in digital computers or hardware, the coefficients are normalized, multiplied by a scale factor, and rounded to integers, such that the sum of the coefficients is equal to 2^N, where N is an integer. The filtered sample is then divided by 2^N (equivalent to a right shift by N bits). For example, in VVC draft 6, the sum of the interpolation filter coefficients is 64.
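The normalization described above can be sketched as follows (the kernel form follows the standard windowed-sinc construction and is an assumption, not the patent's exact coefficient derivation):

```python
import math

def windowed_sinc_coeffs(num_taps, ratio, scale_bits=6):
    """Cosine windowed-sinc kernel, normalized so the integer
    coefficients sum to 2**scale_bits (64 here, matching the text)."""
    fc = 1.0 / ratio                      # cut-off for this ratio
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    # Tap positions centered on the kernel (half-integers for even taps).
    n = [i - (num_taps - 1) / 2 for i in range(num_taps)]
    real = [fc * sinc(fc * x) * math.cos(math.pi * x / num_taps)
            for x in n]
    s = sum(real)
    coeffs = [round(c * (1 << scale_bits) / s) for c in real]
    # Put any rounding residue on the largest tap so the sum is exact.
    coeffs[coeffs.index(max(coeffs))] += (1 << scale_bits) - sum(coeffs)
    return coeffs

c = windowed_sinc_coeffs(8, 2.0)   # 8-tap kernel for 2:1 down-sampling
```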
- the down-sampling filters used in SHM can be used for VVC motion compensation interpolation with reference down-sampling, for both luma and chroma components, while the existing MCIF is used for motion compensation without reference down-sampling.
- the filter length can be reduced to 12 without impacting the filter performance.
- the filter coefficients for 2:1 down-sampling and 1.5:1 down-sampling are shown in Table 8 (FIG. 17) and Table 9 (FIG. 18), respectively.
- the SHM filter requires filtering at integer sample positions as well as at fractional sample positions, while the MCIF only requires filtering at fractional sample positions.
- An example of modification to the VVC draft 6 luma sample interpolation filtering process for the reference down-sampling case is described in Table 10 (FIG. 19).
- the internal precision can be increased by 1 bit, and an additional 1-bit right shift can be used to convert the internal precision to the output precision.
- An example of modification to the VVC draft 6 luma sample interpolation filtering process for reference down-sampling case is shown in Table 14 (FIG. 23).
- The SHM filter has 12 taps. Therefore, to generate an interpolated sample, 11 neighboring samples (5 to the left and 6 to the right, or 5 above and 6 below) are needed. Compared to MCIF, additional neighboring samples are fetched. In VVC draft 6, the chroma mv precision is 1/32. However, the SHM filter has only 16 phases.
- the chroma mv can be rounded to 1/16 for reference down-sampling. This can be done by right shifting the last 5 bits of the chroma mv by 1 bit.
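The phase mapping described above can be sketched as follows (`mv` is in 1/32-sample units; a sketch of the described shift, not the normative derivation, which may also apply rounding):

```python
def chroma_phase_16(mv):
    """Map a chroma MV in 1/32-sample units onto the 16-phase SHM
    filter set by halving the 5-bit fractional part, as described
    above."""
    frac_32 = mv & 31      # 5-bit fractional part, 1/32 units
    return frac_32 >> 1    # 1/16-precision phase index, 0..15

phase = chroma_phase_16(16)   # half-sample offset
```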
- An example of modification to the VVC draft 6 chroma fractional sample position calculation for the reference down-sampling case is shown in Table 16 (FIG. 25).
- the sum of filter coefficients may be set to 64 for further alignment to the existing MCIF filters.
- Example filter coefficients for 2:1 and 1.5:1 ratios are shown in the following Table 17 (FIG. 26) and Table 18 (FIG. 27), respectively.
- a 32-phase cosine windowed-sinc filter set may be used in chroma motion compensation interpolation with reference down-sampling.
- Examples of filter coefficients for 2:1 and 1.5:1 ratios are shown in Table 19 (FIG. 28) and Table 20 (FIG. 29), respectively.
- a 6-tap cosine windowed-sinc filter may be used for MC interpolation with reference down-sampling.
- filter coefficients for 2:1 and 1.5:1 ratios are shown in the following Table 21 (FIG. 30) and Table 22 (FIG. 31), respectively.
- a non-transitory computer-readable storage medium including instructions is also provided, and the instructions may be executed by a device (such as the disclosed encoder and decoder), for performing the above-described methods.
- Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same.
- the device may include one or more processors (CPUs), an input/output interface, a network interface, and/or a memory.
- the relational terms herein such as “first” and “second” are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations.
- the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
- a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B.
- the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
- the above described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it may be stored in the above-described computer-readable media. The software, when executed by the processor, can perform the disclosed methods.
- the computing units and other functional units described in this disclosure can be implemented by hardware, or software, or a combination of hardware and software.
- One of ordinary skill in the art will also understand that multiple ones of the above described modules/units may be combined as one module/unit, and each of the above described modules/units may be further divided into a plurality of sub-modules/sub-units.
- a computer-implemented video processing method comprising:
- encoding or decoding the target picture comprises: signaling the predetermined number as part of a sequence parameter set or a picture parameter set.
- a device comprising:
- processors configured to execute the computer instructions to cause the device to:
- a non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for processing video content, the method comprising:
- a computer-implemented video processing method comprising:
- fc being a cutoff frequency of the cosine windowed-sinc filter
- L being a kernel length
- r being a down-sampling ratio
- a device comprising:
- processors configured to execute the computer instructions to cause the device to: in response to a target picture and a reference picture having different resolutions, apply a band-pass filter to the reference picture, to perform a motion compensated interpolation and generate a reference block;
- a non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for processing video content, the method comprising:
- a non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for processing video content, the method comprising:
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present disclosure provides systems and methods for performing adaptive resolution change during video encoding and decoding. The methods include: comparing resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
Description
ADAPTIVE RESOLUTION CHANGE IN VIDEO PROCESSING
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present disclosure claims the benefits of priority to U.S. Provisional Patent Application No. 62/865,927, filed on June 24, 2019, and U.S. Provisional Patent Application No. 62/900,439, filed on September 13, 2019, both of which are incorporated herein by reference in their entireties.
TECHNICAL FIELD
[0002] The present disclosure generally relates to video processing, and more particularly, to methods and systems for performing adaptive resolution change in video coding.
BACKGROUND
[0003] A video is a set of static pictures (or “frames”) capturing the visual information. To reduce the storage memory and the transmission bandwidth, a video can be compressed before storage or transmission and decompressed before display. The compression process is usually referred to as encoding and the decompression process is usually referred to as decoding. There are various video coding formats which use standardized video coding technologies, most commonly based on prediction, transform, quantization, entropy coding and in-loop filtering. The video coding standards, such as the High Efficiency Video Coding (HEVC/H.265) standard, the Versatile Video Coding (VVC/H.266) standard, and the AVS standards, specifying the specific video coding formats, are developed by standardization organizations. With more and more advanced video coding technologies being adopted in the video standards, the coding efficiency of the new video coding standards gets higher and higher.
SUMMARY OF THE DISCLOSURE
[0004] The embodiments of the present disclosure provide a method for performing adaptive resolution change during video encoding or decoding. In one exemplary
embodiment, the method includes: comparing resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
[0005] The embodiments of the present disclosure also provide a device for performing adaptive resolution change during video encoding or decoding. In one exemplary embodiment, the device includes: one or more memories storing computer instructions; and one or more processors configured to execute the computer instructions to cause the device to: compare resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resample the first reference picture to generate a second reference picture; and encode or decode the target picture using the second reference picture.
[0006] The embodiments of the present disclosure also provide a non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for adaptive resolution change. In one exemplary embodiment, the method includes: comparing resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Embodiments and various aspects of the present disclosure are illustrated in the following detailed description and the accompanying figures. Various features shown in the figures are not drawn to scale.
[0008] FIG. 1 is a schematic diagram illustrating an exemplary video encoder, consistent with embodiments of the disclosure.
[0009] FIG. 2 is a schematic diagram illustrating an exemplary video decoder, consistent with embodiments of the disclosure.
[0010] FIG. 3 illustrates an example in which resolutions of reference pictures are different from that of a current picture, consistent with embodiments of the disclosure.
[0011] FIG. 4 is a table showing sub-pel motion compensation interpolation filter used for luma component in Versatile Video Coding (VVC), consistent with embodiments of the disclosure.
[0012] FIG. 5 is a table showing sub-pel motion compensation interpolation filter used for chroma component in VVC, consistent with embodiments of the disclosure.
[0013] FIG. 6 illustrates an exemplary reference picture buffer, consistent with embodiments of the disclosure.
[0014] FIG. 7 is a table showing an example of a supported resolution set including three different resolutions, consistent with embodiments of the disclosure.
[0015] FIG. 8 illustrates an exemplary decoded picture buffer (DPB) when both resampled and original reference pictures are stored, consistent with embodiments of the disclosure.
[0016] FIG. 9 illustrates progressive down-sampling, consistent with embodiments of the disclosure.
[0017] FIG. 10 illustrates an exemplary video coding process with resolution change, consistent with embodiments of the disclosure.
[0018] FIG. 11 is a table showing an exemplary down-sampling filter, consistent with embodiments of the disclosure.
[0019] FIG. 12 illustrates an exemplary video coding process with peak-signal to noise ratio (PSNR) computation, consistent with embodiments of the disclosure.
[0020] FIG. 13 illustrates frequency response of an exemplary low-pass filter, consistent with embodiments of the disclosure.
[0021] FIG. 14 is a table showing 6-tap filters, consistent with embodiments of the disclosure.
[0022] FIG. 15 is a table showing 8-tap filters, consistent with embodiments of the disclosure.
[0023] FIG. 16 is a table showing 4-tap filters, consistent with embodiments of the disclosure.
[0024] FIG. 17 is a table showing Interpolation Filter coefficients for 2:1 down-sampling, consistent with embodiments of the disclosure.
[0025] FIG. 18 is a table showing Interpolation Filter coefficients for 1.5:1 down-sampling, consistent with embodiments of the disclosure.
[0026] FIG. 19 shows an exemplary luma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
[0027] FIG. 20 shows an exemplary chroma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
[0028] FIG. 21 shows an exemplary luma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
[0029] FIG. 22 shows an exemplary chroma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
[0030] FIG. 23 shows an exemplary luma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
[0031] FIG. 24 shows an exemplary chroma sample interpolation filtering process for reference down-sampling, consistent with embodiments of the disclosure.
[0032] FIG. 25 shows an exemplary chroma fractional sample position calculation for reference down-sampling, consistent with embodiments of the disclosure.
[0001] FIG. 26 is a table showing 8-tap filter for MC interpolation with reference downsampling at ratio 2:1, consistent with embodiments of the disclosure.
[0002] FIG. 27 is a table showing 8-tap filter for MC interpolation with reference downsampling at ratio 1.5:1, consistent with embodiments of the disclosure.
[0003] FIG. 28 is a table showing 8-tap filter for MC interpolation with reference downsampling at ratio 2:1, consistent with embodiments of the disclosure.
[0004] FIG. 29 is a table showing 8-tap filter for MC interpolation with reference downsampling at ratio 1.5:1, consistent with embodiments of the disclosure.
[0005] FIG. 30 is a table showing 6-tap filter coefficients for luma 4x4 block MC interpolation with reference downsampling with ratio 2:1, consistent with embodiments of the disclosure.
[0006] FIG. 31 is a table showing 6-tap filter coefficients for luma 4x4 block MC interpolation with reference downsampling with ratio 1.5:1, consistent with embodiments of the disclosure.

DESCRIPTION
[0007] Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the
accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. Particular aspects of the present disclosure are described in greater detail below. The terms and definitions provided herein control, if in conflict with terms and/or definitions incorporated by reference.
[0008] A video is a set of static pictures (or “frames”) arranged in a temporal sequence to store visual information. A video capture device (e.g., a camera) can be used to capture and store those pictures in a temporal sequence, and a video playback device (e.g., a television, a computer, a smartphone, a tablet computer, a video player, or any end-user terminal with a function of display) can be used to display such pictures in the temporal sequence. Also, in some applications, a video capturing device can transmit the captured video to the video playback device (e.g., a computer with a monitor) in real-time, such as for surveillance, conferencing, or live broadcasting.
[0009] To reduce the storage space and the transmission bandwidth needed by such applications, the video can be compressed. For example, the video can be compressed before storage and transmission and decompressed before the display. The compression and decompression can be implemented by software executed by a processor (e.g., a processor of a generic computer) or specialized hardware. The module for compression is generally referred to as an “encoder,” and the module for decompression is generally referred to as a “decoder.” The encoder and the decoder can be collectively referred to as a “codec.” The encoder and the decoder can be implemented as any of a variety of suitable hardware, software, or a combination thereof. For example, the hardware implementation of the encoder
and the decoder can include circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, or any combinations thereof. The software implementation of the encoder and the decoder can include program codes, computer-executable instructions, firmware, or any suitable computer-implemented algorithm or process fixed in a computer-readable medium. Video compression and decompression can be implemented by various algorithms or standards, such as MPEG-1, MPEG-2, MPEG-4, H.26x series, or the like. In some applications, the codec can decompress the video from a first coding standard and re-compress the decompressed video using a second coding standard, in which case the codec can be referred to as a “transcoder.”
[0010] The video encoding process can identify and keep useful information that can be used to reconstruct a picture. If information that was disregarded in the video encoding process cannot be fully reconstructed, the encoding process can be referred to as “lossy.” Otherwise, it can be referred to as “lossless.” Most encoding processes are lossy, which is a tradeoff to reduce the needed storage space and the transmission bandwidth.
[0011] In many cases, the useful information of a picture being encoded (referred to as a “current picture”) can include changes with respect to a reference picture (e.g., a picture previously encoded or reconstructed). Such changes can include position changes, luminosity changes, or color changes of the pixels, among which the position changes are mostly concerned. Position changes of a group of pixels that represent an object can reflect the motion of the object between the reference picture and the current picture.
[0012] In order to achieve the same subjective quality as HEVC/H.265 using half the bandwidth, the JVET has been developing technologies beyond HEVC using the joint exploration model (“JEM”) reference software. As coding technologies were incorporated into the JEM, the JEM achieved substantially higher coding performance than HEVC. The
VCEG and MPEG have also formally started the development of a next generation video compression standard beyond HEVC: the Versatile Video Coding (VVC/H.266) standard.
[0013] The VVC standard is continuing to include more coding technologies that provide better compression performance. VVC can be implemented in the same video coding system that has been used in modern video compression standards such as HEVC,
H.264/AVC, MPEG2, H.263, etc. FIG. 1 is a schematic diagram illustrating an exemplary video encoder 100, consistent with the disclosed embodiments. For example, video encoder 100 may perform intra- or inter-coding of blocks within video frames, including video blocks, or partitions or sub-partitions of video blocks. Intra-coding may rely on spatial prediction to reduce or remove spatial redundancy in video within a given video frame. Inter coding may rely on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence. Intra modes may refer to a number of spatial based compression modes and inter modes (such as uni-prediction or bi-prediction) may refer to a number of temporal-based compression modes.
[0014] Referring to FIG. 1, input video signal 102 may be processed block by block. For example, the video block unit may be a 16x16 pixel block (e.g., a macroblock (MB)). In HEVC, extended block sizes (e.g., a coding unit (CU)) may be used to compress video signals of resolution, e.g., 1080p and beyond. In HEVC, a CU may include up to 64x64 luma samples and corresponding chroma samples. In VVC, the size of a CU may be further increased to include 128x128 luma samples and corresponding chroma samples. A CU may be partitioned into prediction units (PUs), for which separate prediction methods may be applied. Each input video block (e.g., MB, CU, PU, etc.) may be processed by using spatial prediction unit 160 or temporal prediction unit 162.
[0015] Spatial prediction unit 160 performs spatial prediction (e.g., intra prediction) to the current CU using information on the same picture/slice containing the current CU.
Spatial prediction may use pixels from the already coded neighboring blocks in the same video picture/slice to predict the current video block. Spatial prediction may reduce spatial redundancy inherent in the video signal. Temporal prediction (e.g., inter prediction or motion compensated prediction) may use samples from the already coded video pictures to predict the current video block. Temporal prediction may reduce temporal redundancy inherent in the video signal.
[0016] Temporal prediction unit 162 performs temporal prediction (e.g., inter prediction) to the current CU using information from picture(s)/slice(s) different from the picture/slice containing the current CU. Temporal prediction for a video block may be signaled by one or more motion vectors. The motion vectors may indicate the amount and the direction of motion between the current block and one or more of its prediction block(s) in the reference frames. If multiple reference pictures are supported, one or more reference picture indices may be sent for a video block. The one or more reference indices may be used to identify from which reference picture(s) in the decoded picture buffer (DPB) 164 (also called reference picture store 164), the temporal prediction signal may come. After spatial or temporal prediction, the mode decision and encoder control unit 180 in the encoder may choose the prediction mode, for example based on a rate-distortion optimization method. The prediction block may be subtracted from the current video block at adder 116. The prediction residual may be transformed by transformation unit 104 and quantized by quantization unit 106. The quantized residual coefficients may be inverse quantized at inverse quantization unit 110 and inverse transformed at inverse transform unit 112 to form the reconstructed residual. The reconstructed block may be added to the prediction block at adder 126 to form the reconstructed video block. The in-loop filtering, such as deblocking filter and adaptive loop filters 166, may be applied on the reconstructed video block before it is put in the reference picture store 164 and used to code future video blocks. To form the output video
bitstream 120, coding mode (e.g., inter or intra), prediction mode information, motion information, and quantized residual coefficients may be sent to the entropy coding unit 108 to be compressed and packed to form the bitstream 120.
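The residual coding loop described above (quantize, inverse quantize, and add the reconstructed residual back to the prediction) can be sketched in simplified form. The snippet below is purely illustrative — a uniform scalar quantizer applied to a one-dimensional residual, with the transform omitted for brevity — and is not the actual VVC transform/quantization design:

```python
def quantize(residual, step):
    # Map each residual sample to a quantization level (uniform quantizer).
    return [round(r / step) for r in residual]

def dequantize(levels, step):
    # Inverse quantization: scale the levels back to the residual domain.
    return [l * step for l in levels]

def reconstruct(prediction, levels, step):
    # Add the dequantized (reconstructed) residual back to the prediction
    # block, as done at the adder in the reconstruction loop.
    return [p + r for p, r in zip(prediction, dequantize(levels, step))]
```

Note that both the encoder and the decoder run the same reconstruction path, which is what keeps their reference pictures identical.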
[0017] Consistent with the disclosed embodiments, the above-described units of video encoder 100 can be implemented as software modules (e.g., computer programs that fulfill different functions), hardware components (e.g., different circuitry blocks for performing the respective functions), or a hybrid of software and hardware.
[0018] FIG. 2 is a schematic diagram illustrating an exemplary video decoder 200, consistent with the disclosed embodiments. Referring to FIG. 2, a video bitstream 202 may be unpacked or entropy decoded at entropy decoding unit 208. The coding mode or prediction information may be sent to the spatial prediction unit 260 (e.g., if intra coded) or the temporal prediction unit 262 (e.g., if inter coded) to form the prediction block. If inter coded, the prediction information may comprise prediction block sizes, one or more motion vectors (e.g., which may indicate the direction and amount of motion), or one or more reference indices (e.g., which may indicate from which reference picture the prediction signal is to be obtained).
[0019] Motion compensated prediction may be applied by the temporal prediction unit 262 to form the temporal prediction block. The residual transform coefficients may be sent to inverse quantization unit 210 and inverse transform unit 212 to reconstruct the residual block. The prediction block and the residual block may be added together at adder 226.
The reconstructed block may go through in-loop filtering (via loop filter 266) before it is stored in the decoded picture buffer (DPB) 264 (also called reference picture store 264). The reconstructed video in the DPB 264 may be used to drive a display device or used to predict future video blocks. Decoded video 220 may be displayed on a display.
[0020] Consistent with the disclosed embodiments, the above-described units of video decoder 200 can be implemented as software modules (e.g., computer programs that fulfill different functions), hardware components (e.g., different circuitry blocks for performing the respective functions), or a hybrid of software and hardware.
[0021] One of the objectives of the VVC standard is to offer video conferencing applications the ability to tolerate diversity of networks and devices. In particular, the VVC standard needs to provide the ability to rapidly adapt to varying network environments, including rapidly reducing the encoded bitrate when network conditions deteriorate, and rapidly increasing the video quality when network conditions improve. Additionally, for adaptive streaming services that offer multiple representations of the same content, each of the multiple representations may have different properties (e.g., spatial resolution or sample bit depth) and the video quality may vary from low to high. The VVC standard thus needs to support fast representation switching for the adaptive streaming services. During switching from one representation to another representation (such as switching from one resolution to another resolution), the VVC standard needs to enable the use of an efficient prediction structure without compromising the fast and seamless switching capability.
[0022] In some embodiments consistent with the present disclosure, to change the resolution, the encoder (e.g., encoder 100 in FIG. 1) sends an instantaneous-decoder-refresh (IDR) coded picture to clear the contents of the reference picture buffer (e.g., the DPB 164 in FIG. 1 and the DPB 264 in FIG. 2). On receiving an IDR coded picture, the decoder (e.g., decoder 200 in FIG. 2) marks all pictures in the reference buffer as "unused for reference." All subsequently transmitted pictures can be decoded without reference to any frame decoded prior to the IDR picture. The first picture in a coded video sequence is always an IDR picture.
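The IDR behavior described above may be sketched as follows. The dictionary-based buffer representation is a hypothetical simplification for illustration, not an actual decoder data structure:

```python
def handle_idr(dpb):
    # On receiving an IDR picture, every picture currently in the reference
    # buffer is marked "unused for reference", so that subsequent pictures
    # cannot be predicted from anything decoded before the IDR.
    for pic in dpb:
        pic["status"] = "unused for reference"
    return dpb
```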
[0023] In some embodiments consistent with the present disclosure, adaptive resolution change (ARC) technology can be used to allow a video stream to change spatial
resolution between coded pictures within the same video sequence, without requiring a new IDR picture and without requiring multiple layers as in a scalable video codec. According to the ARC technology, at a resolution-switch point, a currently coded picture is either predicted from reference pictures with the same resolution (if available), or predicted from reference pictures of a different resolution by resampling the reference pictures. An illustration of adaptive resolution change is shown in FIG. 3, where the resolution of reference picture "Ref 0" is the same as the resolution of the currently coded picture. However, the resolutions of reference pictures "Ref 1" and "Ref 2" are different from the resolution of the current picture. To generate the motion compensated prediction signal of the current picture, both "Ref 1" and "Ref 2" are resampled to the resolution of the current picture.
[0024] Consistent with the present disclosure, when the resolution of a reference frame is different from that of the current frame, one way to generate the motion compensated prediction signal is picture-based resampling, where the reference picture is first resampled to the same resolution as the current picture, and the existing motion compensation process with motion vectors can then be applied. The motion vectors may be scaled (if they are sent in units before resampling is applied) or not scaled (if they are sent in units after resampling is applied). With picture-based resampling, in particular for reference picture down-sampling (that is, when the resolution of the reference picture is larger than that of the current picture), information may be lost in the reference resampling step before the motion compensated interpolation, because down-sampling is usually achieved with low-pass filtering followed by decimation.
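Under picture-based resampling, scaling a motion vector that was sent in pre-resampling units can be illustrated as below. The function and its tuple-based resolution arguments are hypothetical simplifications; only exact integer-ratio cases are shown:

```python
from fractions import Fraction

def scale_mv(mv_x, mv_y, ref_res, cur_res):
    # Scale a motion vector expressed on the reference picture's original
    # sample grid to the current picture's grid after picture-level
    # resampling. Resolutions are (width, height) tuples.
    sx = Fraction(cur_res[0], ref_res[0])  # horizontal scaling ratio
    sy = Fraction(cur_res[1], ref_res[1])  # vertical scaling ratio
    return int(mv_x * sx), int(mv_y * sy)
```

If the motion vectors are instead sent in units of the already-resampled grid, no scaling is applied.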
[0025] Another way is block-based resampling, where resampling is performed at the block level. This is done by examining the reference picture(s) used by the current block; if one or both of them have a resolution different from that of the current picture, resampling is performed in combination with the sub-pel motion compensated interpolation process.
[0026] The present disclosure provides both picture-based resampling methods and block-based resampling methods for use with ARC. The following description first addresses the disclosed picture-based resampling methods, and then addresses the disclosed block-based resampling methods.
[0027] The disclosed picture-based resampling methods can solve some problems caused by traditional block-based resampling methods. First, in the traditional block-level resampling method, on-the-fly block-level resampling can be performed when it is determined that the resolution of a reference picture is different from the resolution of the current picture. The block-level resampling process, however, could complicate the encoder design because the encoder may have to resample the block at each search point during motion search. Motion search is generally a time-consuming process for an encoder, and thus the on-the-fly requirement during motion search may complicate the motion search process. As an alternative to the on-the-fly block-level resampling, the encoder may resample the reference picture in advance such that the resampled reference picture has the same resolution as that of the current picture. However, this may degrade coding efficiency because it may cause the prediction signals during motion estimation and motion compensation to be different.
[0028] Second, in the traditional block-based resampling method, the design of block-level resampling is not compatible with some other useful coding tools, such as subblock-based temporal motion vector prediction (SbTMVP), affine motion compensated prediction, decoder-side motion vector refinement (DMVR), etc. These coding tools may be disabled when ARC is enabled, but such disabling decreases the coding performance significantly.
[0029] Third, although any sub-pel motion compensated interpolation filter known in the art may be used for block resampling, it may not be equally applicable to block up-sampling and block down-sampling. For example, the interpolation filter used for motion compensation may be used for up-sampling in cases where the reference picture has a lower
resolution than the current picture. However, the interpolation filter is not suitable for down-sampling in cases where the reference picture has a higher resolution than the current picture, because the interpolation filter cannot filter the integer positions and thus may produce aliasing. For example, Table 1 (FIG. 4) shows exemplary interpolation filter coefficients used for the luma component with various values of the fractional sample positions. Table 2 (FIG. 5) shows exemplary interpolation filter coefficients used for the chroma components with various values of the fractional sample positions. As shown in Table 1 (FIG. 4) and Table 2 (FIG. 5), at fractional sample position 0 (i.e., the integer position), no interpolation filter is applied.
[0030] To avoid the above-described problems associated with the traditional block-based resampling method, the present disclosure provides picture-level resampling methods that can be used for ARC. The picture resampling process can involve either up-sampling or down-sampling. Up-sampling increases the spatial resolution while keeping the two-dimensional (2D) representation of an image. In the up-sampling process, the resolution of the reference picture is increased by interpolating the unavailable samples from the neighboring available samples. In the down-sampling process, the resolution of the reference image is reduced.
[0031] According to the disclosed embodiments, resampling can be performed at the picture level. In picture-level resampling, if the resolution of a reference picture is different from the resolution of the current picture, the reference picture is resampled to the resolution of the current picture. The motion estimation and/or compensation of the current picture can then be performed based on the resampled reference picture. This way, ARC can be implemented in a "transparent" manner at the encoder and the decoder, because the block-level operations are agnostic to the resolution change.
[0032] The picture-level resampling can be performed on the fly, i.e., while the current picture is predicted. In some exemplary embodiments, only original (un-resampled) reference pictures are stored in the decoded picture buffer (DPB), e.g., DPB 164 in the encoder 100 (FIG. 1) and DPB 264 in the decoder 200 (FIG. 2). The DPB can be managed in the same way as in the current version of the VVC. In the disclosed picture-level resampling method, before encoding or decoding of a picture is performed, the encoder or decoder resamples a reference picture if the resolution of that reference picture in the DPB is different from the resolution of the current picture. In some embodiments, a resampled picture buffer can be used to store all the resampled reference pictures for the current picture. The resampled version of a reference picture is stored in the resampled picture buffer, and the motion search/compensation of the current picture is performed using pictures from the resampled picture buffer. After the encoding or decoding is completed, the resampled picture buffer is removed. FIG. 6 shows an example of on-the-fly picture-level resampling where the low-resolution reference picture is resampled and stored in the resampled picture buffer. As shown in FIG. 6, the DPB of the encoder or decoder contains 3 reference pictures. The resolutions of reference pictures "Ref 0" and "Ref 2" are the same as the resolution of the current picture. Therefore, "Ref 0" and "Ref 2" do not need to be resampled. However, the resolution of reference picture "Ref 1" is different from the resolution of the current picture, and "Ref 1" therefore needs to be resampled. Accordingly, only the resampled "Ref 1" is stored in the resampled picture buffer, while "Ref 0" and "Ref 2" are not stored in the resampled picture buffer.
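The on-the-fly decision of which references to place in the temporary resampled picture buffer may be sketched as follows. Here `resample` stands in for any picture-level resampling routine, and representing pictures by their resolutions alone is a hypothetical simplification:

```python
def build_resampled_buffer(current_res, dpb, resample):
    """Resample only those DPB references whose resolution differs from the
    current picture. The returned buffer is temporary: it is used for motion
    search/compensation of the current picture and discarded afterwards."""
    resampled_buffer = {}
    for pic_id, res in dpb.items():
        if res != current_res:
            resampled_buffer[pic_id] = resample(pic_id, current_res)
    return resampled_buffer
```

With the FIG. 6 configuration, only "Ref 1" ends up in the resampled picture buffer.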
[0033] Consistent with some exemplary embodiments, in on-the-fly picture-level resampling, the number of resampled reference pictures that a picture can use is constrained. For instance, the maximum number of resampled reference pictures of a given current picture may be preset. For example, the maximum number may be set to one. In this case, a
bitstream constraint may be imposed such that the encoder or decoder may allow at most one of the reference pictures to have a different resolution from that of the current picture, and all other reference pictures must have the same resolution. Because this maximum number indicates the size of the resampled picture buffer and the maximum number of resamplings that can be performed for any picture in the current video sequence, the maximum number directly relates to the worst-case decoder complexity. Therefore, this maximum number may be signaled as part of the sequence parameter set (SPS) or the picture parameter set (PPS), and it may be specified as part of the profile/level definition.
[0034] In some exemplary embodiments, two versions of a reference picture are stored in the DPB. One version has the original resolution and the other version has the maximum resolution. If the resolution of the current picture is different from both the original resolution and the maximum resolution, the encoder or decoder can perform on-the-fly down-sampling from the stored maximum-resolution picture. For the picture output, the DPB always outputs the original (un-resampled) reference picture.
[0035] In some exemplary embodiments, the resampling ratios can be arbitrarily chosen, and the vertical and horizontal scaling ratios are allowed to be different. Because picture-level resampling is applied, the block-level operations of the encoder/decoder are made agnostic to the resolutions of the reference pictures, allowing arbitrary resampling ratios to be enabled without further complicating the block-level design logic of the encoder/decoder.
[0036] In some exemplary embodiments, the signaling of the coded resolution and the maximum resolution can be performed as follows. The maximum resolution of any picture in the video sequence is signaled in the SPS. The coded resolution of a picture can be signaled either in the PPS or in the slice header. In either case, a flag is signaled to indicate whether the coded resolution is the same as the maximum resolution. If the coded resolution is not the same as the maximum resolution, the coded width and height of the current picture are additionally
signaled. If PPS signaling is used, the signaled coded resolution is applied to all pictures that refer to this PPS. If slice header signaling is used, the signaled coded resolution is only applied to the current picture itself. In some embodiments, the difference between the current coded resolution and the maximum resolution signaled in the SPS may be signaled instead.
[0037] In some exemplary embodiments, instead of using an arbitrary resampling ratio, the resolution is restricted to a pre-defined set of N supported resolutions, where N is the number of supported resolutions within a sequence. The value of N and the supported resolutions can be signaled in the SPS. The coded resolution of a picture is signaled either through the PPS or the slice header. In either case, instead of signaling the actual coded resolution, the corresponding resolution index is signaled. Table 3 (FIG. 7) shows an example of the supported resolution set, where the coding sequence allows 3 different resolutions. If the coded resolution of the current picture is 1440 × 816, the corresponding index value (= 1) is signaled either through the PPS or the slice header.
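Index-based resolution signaling can be illustrated with the sketch below. The particular supported set is hypothetical (chosen only so that 1440 × 816 has index 1, as in the example above); in the real scheme the set is carried in the SPS and the index in the PPS or slice header:

```python
# Hypothetical supported resolution set, as would be signaled in the SPS.
SUPPORTED = [(2880, 1632), (1440, 816), (960, 544)]

def signal_resolution(coded, supported=SUPPORTED):
    # Instead of the actual width/height, only the index is signaled.
    return supported.index(coded)

def parse_resolution(index, supported=SUPPORTED):
    # The decoder recovers the coded resolution from the index.
    return supported[index]
```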
[0038] In some exemplary embodiments, both resampled and un-resampled reference pictures are stored in the DPB. For each reference picture, one original (i.e., un-resampled) picture and N−1 resampled copies are stored. If the resolution of a reference picture is different from the resolution of the current picture, the resampled pictures in the DPB are used to code the current picture. For the picture output, the DPB outputs the original (un-resampled) reference picture. As an example, FIG. 8 shows the occupancy of a DPB for the supported resolution set given in Table 3 (FIG. 7). As shown in FIG. 8, the DPB contains N (e.g., N = 3) copies of each picture that is used as a reference picture.
[0039] The selection of the value of N depends on the application. Larger values of N make resolution selection more flexible, but increase the complexity and memory requirements. Smaller values of N are suitable for limited-complexity devices, but limit the resolutions that can be selected. Therefore, in some embodiments, flexibility can be given to
the encoder to decide the value of N based on the applications and device capabilities, with the value signaled through the SPS.
[0040] Consistent with the present disclosure, the down-sampling of reference pictures can be performed progressively (i.e., gradually). In conventional (i.e., direct) down-sampling, the down-sampling of an input image is performed in one step. If the down-sampling ratio is high, single-step down-sampling requires a longer-tap down-sampling filter in order to avoid severe aliasing. However, longer-tap filters are computationally expensive. In some exemplary embodiments, to maintain the quality of the down-sampled image, if the down-sampling ratio is higher than a threshold (e.g., higher than 2:1 down-sampling), progressive down-sampling is used, where the down-sampling is performed gradually. For example, a down-sampling filter sufficient for 2:1 down-sampling can be applied repeatedly to implement down-sampling ratios greater than 2:1. FIG. 9 shows an example of progressive down-sampling where down-sampling by 4, in both the horizontal and vertical dimensions, is implemented over two stages. In the first stage the picture is down-sampled by 2 (in both directions), and in the second stage the result is down-sampled by 2 again (in both directions).
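A minimal sketch of progressive down-sampling is given below. The [1, 2, 1]/4 separable low-pass kernel is illustrative only — it is not the SHM filter — but it shows how a single 2:1 stage can be applied repeatedly to reach a 4:1 ratio (the overall ratio is assumed to be a power of two):

```python
import numpy as np

def down2(img):
    # One 2:1 stage: separable [1, 2, 1]/4 low-pass filtering with edge
    # replication, followed by decimation by 2 in both dimensions.
    k = np.array([1, 2, 1], dtype=np.float64) / 4.0
    pad = np.pad(img, 1, mode="edge")
    lp = sum(k[i] * pad[1:-1, i:i + img.shape[1]] for i in range(3))
    pad = np.pad(lp, 1, mode="edge")
    lp = sum(k[i] * pad[i:i + img.shape[0], 1:-1] for i in range(3))
    return lp[::2, ::2]

def progressive_downsample(img, ratio):
    # Apply the 2:1 stage repeatedly, e.g. ratio=4 means two stages.
    while ratio > 1:
        img = down2(img)
        ratio //= 2
    return img
```

Each stage only needs the short 2:1 filter, avoiding the long-tap filter a direct 4:1 step would require.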
[0041] Consistent with the present disclosure, the disclosed picture-level resampling methods can be used together with other coding tools. For example, in some exemplary embodiments, scaled motion vectors based on the resampled reference pictures can be used during temporal motion vector predictor (TMVP) or advanced temporal motion vector predictor (ATMVP) derivation.
[0042] Next, the test methodology to evaluate the coding tools for ARC is described. Since adaptive resolution change is mainly used for adapting to the network bandwidth, the following test conditions may be considered to compare coding efficiency of different ARC schemes during network bandwidth change.
[0043] In some embodiments, the resolution is changed to half (in both the vertical and horizontal directions) at a specific time instance and later reverted back to the original resolution after a certain time period. FIG. 10 shows an example of such a resolution change: at time t1 the resolution is changed to half, and it is later reverted back to the full resolution at time t2.
[0044] In some embodiments, when the resolution is reduced, if the down-sampling ratio is too large (e.g., the down-sampling ratio in a given dimension is larger than 2:1), progressive downscaling is used over a certain time period. However, when up-sampling is used, progressive up-sampling is not used.
[0045] In some embodiments, the scalable HEVC test model (SHM) down-sampling filter can be used for source resampling. The detailed filter coefficients for different fractional sample positions are shown in Table 4 (FIG. 11).
[0046] In some exemplary embodiments, two peak signal-to-noise ratios (PSNRs) are computed to measure the video quality. An illustration of the PSNR computation is shown in FIG. 12. The first PSNR is computed between the resampled source and the decoded pictures. The second PSNR is computed between the original source and the up-sampled decoded pictures.
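The two-PSNR measurement can be sketched with a single helper; which pair of pictures is passed in (resampled source vs. decoded, or original source vs. up-sampled decoded) selects between the first and the second PSNR. An 8-bit sample range is assumed:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # PSNR in dB between two same-sized pictures; infinite if identical.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```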
[0047] In some embodiments, the number of pictures to be encoded is the same as current test conditions of VVC.
[0048] Next, the disclosed block-based resampling methods are described. In block-based resampling, combining the resampling and the motion compensated interpolation into one filtering operation may reduce the information loss mentioned above. Take the following case as an example: the motion vector of the current block has half-pel precision in one dimension, e.g., the horizontal dimension, and the reference picture's width is 2x that of the current picture. In this case, compared to picture-level resampling, which would reduce the width of the reference picture by half to match the width of the current picture and then perform half-pel motion interpolation, the block-based resampling method will directly fetch
the odd positions in the reference picture as the reference block at half-pel precision. In the 15th JVET meeting, a block-based resampling method for ARC was adopted in VVC, where the motion compensation (MC) interpolation and the reference resampling are combined and performed as a one-step filter. In VVC draft 6, the existing filters for MC interpolation with no reference resampling are reused for MC interpolation with reference resampling. The same filter is used for both reference up-sampling and reference down-sampling. The details of the filter selection are described below.
[0049] For the luma component: if the half-pel AMVR mode is selected and the interpolated position is half-pel, a 6-tap filter [3, 9, 20, 20, 9, 3] is used; if the motion compensated block size is 4x4, the 6-tap filters shown in Table 5 (FIG. 13) are used; and otherwise, the 8-tap filters shown in Table 6 (FIG. 14) are used. For the chroma components, the 4-tap filters shown in Table 7 (FIG. 15) are used.
[0050] In VVC, the same filters are used for MC interpolation without reference resampling and MC interpolation with reference resampling. While the VVC MCIF is designed based on DCT up-sampling, it may not be appropriate to use it as a one-step filter combining reference down-sampling and MC interpolation. For example, for phase-0 filtering, i.e., when the scaled MV is an integer, the VVC 8-tap MCIF coefficients are [0, 0, 0, 64, 0, 0, 0, 0], which means the prediction sample is directly copied from the reference sample. While this may not be a problem for MC interpolation without reference down-sampling or with reference up-sampling, it may cause aliasing artifacts in the reference down-sampling case, due to the lack of a low-pass filter before decimation.
[0051] The present disclosure provides methods of using cosine windowed-sinc filters for MC interpolation with reference down-sampling.
[0052] Windowed-sinc filters are band-pass filters that separate one band of frequencies from others. The windowed-sinc filter used here is a low-pass filter with a frequency response that allows all frequencies below a cutoff frequency to pass through with amplitude 1 and stops all frequencies above the cutoff frequency with zero amplitude, as shown in FIG. 16.
[0053] The filter kernel, also known as the impulse response of the filter, is obtained by taking the inverse Fourier transform of the frequency response of the ideal low-pass filter. The impulse response of the low-pass filter is in the general form of a sinc function:

h(n) = (fc / r) · Sinc((fc / r) · n)     (1)

where fc is the cut-off frequency with a value in [0, 1], and r is the down-sampling ratio, i.e., 1.5 for 1.5:1 down-sampling and 2 for 2:1 down-sampling. The sinc function is defined below:

Sinc(x) = sin(πx) / (πx)     (2)

[0054] The sinc function is infinite. To make the filter kernel finite in length, a window function is applied to truncate the filter kernel to L points. To obtain a smooth tapered curve, a cosine window function is used, which is given by:

w(n) = cos(πn / (L + 1)), for |n| ≤ (L − 1) / 2     (3)

[0055] The kernel f(n) of the cosine windowed-sinc filter is the product of the ideal response function h(n) and the cosine-window function w(n):

f(n) = h(n) · w(n)     (4)
[0056] There are two parameters to select for a windowed-sinc kernel: the cutoff frequency fc and the kernel length L. For the down-sampling filters used in the scalable HEVC test model (SHM), fc = 0.9 and L = 13.
[0057] The filter coefficients obtained in (4) are real numbers. Applying a filter is equivalent to calculating the weighted average of reference samples, with the weights being the filter coefficients. For efficient calculation in digital computers or hardware, the coefficients are normalized, multiplied by a scaling factor, and rounded to integers, such that the sum of the coefficients is equal to 2^N, where N is an integer. The filtered sample is then divided by 2^N (equivalent to a right shift by N bits). For example, in VVC draft 6, the sum of the interpolation filter coefficients is 64.
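As a rough illustration, the kernel derivation in (1)–(4) and the integer normalization described above can be sketched as below. The cosine window of the form cos(πn / (L + 1)) is an assumption of this sketch, and the residue-on-center-tap rounding is one simple way to make the integer sum exact, not necessarily the SHM procedure:

```python
import math

def cosine_windowed_sinc(fc, r, L, total=64):
    # Build an integer cosine windowed-sinc kernel of odd length L:
    # ideal low-pass response h(n), truncated by a cosine window w(n),
    # then normalized so the integer coefficients sum to `total` (2^N).
    half = (L - 1) // 2

    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

    taps = []
    for n in range(-half, half + 1):
        h = (fc / r) * sinc((fc / r) * n)       # Equation (1)
        w = math.cos(math.pi * n / (L + 1))     # Equation (3), assumed form
        taps.append(h * w)                      # Equation (4)

    s = sum(taps)
    ints = [round(t * total / s) for t in taps]
    # Put any rounding residue on the center tap so the sum is exactly `total`.
    ints[half] += total - sum(ints)
    return ints
```

With fc = 0.9, r = 2, L = 13 and a coefficient sum of 128, this produces a symmetric 13-tap kernel analogous in structure to the 2:1 SHM filter.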
[0058] In some disclosed embodiments, the down-sampling filters used in SHM are used for the VVC motion compensation interpolation with reference down-sampling, for both the luma and chroma components, and the existing MCIF is used for motion compensation interpolation with reference up-sampling. While the kernel length is L = 13, the first coefficient is small and is rounded to zero, so the filter length can be reduced to 12 without impacting the filter performance.
[0059] As an example, the filter coefficients for 2:1 down-sampling and 1.5:1 down-sampling are shown in Table 8 (FIG. 17) and Table 9 (FIG. 18), respectively.
[0060] Besides the values of the coefficients, there are several other differences between the design of the SHM filter and that of the existing MCIF.
[0061] As a first difference, the SHM filter requires filtering at integer sample positions as well as at fractional sample positions, while the MCIF only requires filtering at fractional sample positions. An example of modification to the VVC draft 6 luma sample interpolation filtering process for the reference down-sampling case is described in Table 10 (FIG. 19).
[0062] An example of modification to the VVC draft 6 chroma sample interpolation filtering process for the reference down-sampling case is described in Table 11 (FIG. 20).
[0063] As a second difference, the sum of the filter coefficients is 128 for the SHM filters, while the sum of the filter coefficients is 64 for the existing MCIF. In VVC draft 6, in order to reduce the loss caused by rounding error, the intermediate prediction signals are kept at a higher precision (represented by a higher bit-depth) than the output signal. The precision of the intermediate signal is called the internal precision. In one embodiment, in order to keep the internal precision the same as in VVC draft 6, the output of the SHM filters needs to be right shifted by 1 additional bit compared to using the existing MCIF. An example of modification to the VVC draft 6 luma sample interpolation filtering process for the reference down-sampling case is shown in Table 12 (FIG. 21).
[0064] An example of modification to the VVC draft 6 chroma sample interpolation filtering process for the reference down-sampling case is shown in Table 13 (FIG. 22).
[0065] According to some embodiments, the internal precision can be increased by 1 bit, and an additional 1-bit right shift can be used to convert the internal precision to the output precision. An example of modification to the VVC draft 6 luma sample interpolation filtering process for the reference down-sampling case is shown in Table 14 (FIG. 23).
[0066] An example of modification to the VVC draft 6 chroma sample interpolation filtering process for the reference down-sampling case is shown in Table 15 (FIG. 24).
[0067] As a third difference, the SHM filter has 12 taps. Therefore, to generate an interpolated sample, 11 neighboring samples (5 to the left and 6 to the right, or 5 above and 6 below) are needed. Compared to the MCIF, additional neighboring samples are fetched. In VVC draft 6, the chroma MV precision is 1/32. However, the SHM filter has only 16 phases.
Therefore, the chroma MV can be rounded to 1/16 precision for reference down-sampling. This can be done by right shifting the last 5 bits of the chroma MV by 1 bit. An example of modification to the VVC draft 6 chroma fractional sample position calculation for the reference down-sampling case is shown in Table 16 (FIG. 25).
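The chroma phase reduction described above can be sketched as follows; the function name is hypothetical, and the shift-without-rounding-offset behavior is an assumption based on the description above:

```python
def chroma_phase_16(mv):
    # The last 5 bits of the chroma MV give the fractional position in
    # 1/32-sample units; shifting them right by 1 bit halves the precision
    # to 1/16, selecting one of the SHM filter's 16 phases.
    frac32 = mv & 31
    return frac32 >> 1
```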
[0068] According to some embodiments, in order to align with the existing MCIF design in the VVC draft, we propose to use 8-tap cosine windowed-sinc filters. The filter coefficients can be derived by setting L = 9 in the cosine windowed-sinc function in Equation (4). The sum of the filter coefficients may be set to 64 for further alignment with the existing MCIF filters.
Example filter coefficients for the 2:1 and 1.5:1 ratios are shown in the following Table 17 (FIG. 26) and Table 18 (FIG. 27), respectively.
[0069] According to some embodiments, to adapt to the 1/32-sample precision of the chroma component, a 32-phase cosine windowed-sinc filter set may be used in chroma motion compensation interpolation with reference down-sampling. Examples of filter coefficients for the 2:1 and 1.5:1 ratios are shown in Table 19 (FIG. 28) and Table 20 (FIG. 29), respectively.
[0070] According to some embodiments, for 4x4 luma blocks, a 6-tap cosine windowed-sinc filter may be used for MC interpolation with reference down-sampling. Examples of filter coefficients for the 2:1 and 1.5:1 ratios are shown in the following Table 21 (FIG. 30) and Table 22 (FIG. 31), respectively.
[0071] In some embodiments, a non-transitory computer-readable storage medium including instructions is also provided, and the instructions may be executed by a device (such as the disclosed encoder and decoder) for performing the above-described methods. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, an NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The device may include one or more processors (CPUs), an input/output interface, a network interface, and/or a memory.
[0072] It should be noted that the relational terms herein such as "first" and "second" are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Moreover, the words "comprising," "having," "containing," and "including," and other similar forms, are intended to be equivalent in meaning and be open ended, in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
[0073] As used herein, unless specifically stated otherwise, the term "or" encompasses all possible combinations, except where infeasible. For example, if it is stated that a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B. As a second example, if it is stated that a database may include A, B, or C, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
[0074] It is appreciated that the above described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it may be stored in the above-described computer-readable media. The software, when executed by the processor, can perform the disclosed methods. The computing units and other functional units described in this disclosure can be implemented by hardware, or software, or a combination of hardware and software. One of ordinary skill in the art will also understand that multiple ones of the above described modules/units may be combined as one module/unit, and each of the above described modules/units may be further divided into a plurality of sub-modules/sub-units.
[0075] In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in the figures is only for illustrative purposes and is not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method.
[0076] The embodiments may further be described using the following clauses:
1. A computer-implemented video processing method, comprising:
comparing resolutions of a target picture and a first reference picture;
in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
2. The method according to clause 1, further comprising:
storing the second reference picture in a first buffer, the first buffer being different from a second buffer storing decoded pictures that are used for prediction of future pictures; and
removing the second reference picture from the first buffer after the encoding or decoding of the target picture is completed.
3. The method according to any one of clauses 1 and 2, wherein encoding or decoding the target picture comprises:
encoding or decoding the target picture by using no more than a predetermined number of resampled reference pictures.
4. The method according to clause 3, wherein encoding or decoding the target picture comprises:
signaling the predetermined number as part of a sequence parameter set or a picture parameter set.
5. The method according to clause 1, wherein resampling the first reference picture to generate the second reference picture comprises:
storing a first version and a second version of the first reference picture, the first version having an original resolution, and the second version having a maximum resolution allowed for resampling the reference picture; and
resampling the maximum version to generate the second reference picture.
6. The method according to clause 5, further comprising:
storing the first version and the second version of the first reference picture in a decoded picture buffer; and
outputting the first version for coding future pictures.
7. The method according to clause 1, wherein resampling the first reference picture to generate the second reference picture comprises:
generating the second reference picture in a supported resolution.
8. The method according to clause 7, further comprising:
signaling, as part of a sequence parameter set or a picture parameter set, information indicating:
a number of supported resolutions, and
pixel dimensions corresponding to the supported resolutions.
9. The method according to clause 8, wherein the information comprises:
an index corresponding to at least one of the supported resolutions.
10. The method according to any one of clauses 8 and 9, further comprising:
setting the number of supported resolutions based on a configuration of a video application or a video device.
11. The method according to any one of clauses 1-10, wherein resampling the first reference picture to generate the second reference picture comprises:
performing progressive down-sampling of the first reference picture to generate the second reference picture.
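By way of illustration only, and not as part of the claimed subject matter, the resampling flow of clauses 1, 2, and 11 may be sketched as follows. The 2-D-list picture representation, the nearest-neighbor kernel, and the buffer names are assumptions made for the sketch; an actual codec would use the interpolation filters described elsewhere in this disclosure.

```python
def resample(picture, dst_w, dst_h):
    """Nearest-neighbor resampling (illustrative stand-in for a codec filter)."""
    src_h, src_w = len(picture), len(picture[0])
    return [[picture[y * src_h // dst_h][x * src_w // dst_w]
             for x in range(dst_w)] for y in range(dst_h)]

def progressive_downsample(picture, dst_w, dst_h):
    """Clause 11: halve the picture repeatedly, then finish with a final resample."""
    w, h = len(picture[0]), len(picture)
    while w // 2 >= dst_w and h // 2 >= dst_h:
        w, h = w // 2, h // 2
        picture = resample(picture, w, h)
    return resample(picture, dst_w, dst_h)

def code_target_picture(target_res, ref_picture, dpb, temp_buffer):
    """Clauses 1-2: resample on a resolution mismatch, keeping the result
    in a temporary buffer separate from the decoded picture buffer (dpb)."""
    ref_res = (len(ref_picture[0]), len(ref_picture))
    if ref_res != target_res:
        resampled = progressive_downsample(ref_picture, *target_res)
        temp_buffer.append(resampled)  # first buffer, distinct from the DPB
        prediction_ref = resampled
    else:
        prediction_ref = ref_picture
    # ... encode/decode the target picture using prediction_ref ...
    temp_buffer.clear()  # clause 2: removed once coding of the target completes
    return prediction_ref
```

Here the temporary buffer plays the role of the first buffer of clause 2: it holds the resampled reference only while the target picture is being coded, while the decoded picture buffer (the second buffer) is left untouched.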
12. A device, comprising:
one or more memories storing computer instructions; and
one or more processors configured to execute the computer instructions to cause the device to:
compare a resolution of a target picture to a resolution of a first reference picture;
in response to the target picture and the first reference picture having different resolutions, resample the first reference picture to generate a second reference picture; and
encode or decode the target picture using the second reference picture.
13. A non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for processing video content, the method comprising:
comparing resolutions of a target picture and a first reference picture;
in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
14. A computer-implemented video processing method, comprising:
in response to a target picture and a reference picture having different resolutions, applying a band-pass filter to the reference picture, to perform a motion compensated interpolation and to generate a reference block; and
encoding or decoding a block of the target picture using the reference block.
15. The method according to clause 14, wherein the band-pass filter is a cosine windowed-sine filter.
16. The method according to clause 15, wherein the cosine windowed-sine filter has a kernel function f(n) = (fc/r) Sinc((fc/r) n) cos(πn/L), wherein −L/2 ≤ n ≤ L/2, fc being a cutoff frequency of the cosine windowed-sine filter, L being a kernel length, and r being a down-sampling ratio.
17. The method according to clause 16, wherein the fc is equal to 0.9 and the L is equal to 13.
18. The method according to any one of clauses 15-17, wherein the cosine windowed-sine filter is an 8-tap filter.
19. The method according to any one of clauses 15-17, wherein the cosine windowed-sine filter is a 4-tap filter.
20. The method according to any one of clauses 15-17, wherein the cosine windowed-sine filter is a 32-phase filter.
21. The method according to clause 20, wherein the 32-phase filter is used in chroma motion compensation interpolation.
22. The method according to any one of clauses 14-21, wherein applying the band-pass filter to the reference picture comprises:
obtaining a luma sample or a chroma sample at a fractional sample position.
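Again purely as an illustration, and assuming the kernel of clause 16 reads f(n) = (fc/r)·Sinc((fc/r)·n)·cos(πn/L) with the normalized Sinc(x) = sin(πx)/(πx), the taps of the filter in clauses 15-17 can be evaluated in plain Python. The unit-DC normalization at the end is an added assumption; fixed-point codec filters are typically scaled differently.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def cosine_windowed_sinc(fc=0.9, L=13, r=2.0):
    """Taps f(n) = (fc/r) * Sinc((fc/r) * n) * cos(pi * n / L), |n| <= L//2."""
    taps = [(fc / r) * sinc(fc / r * n) * math.cos(math.pi * n / L)
            for n in range(-(L // 2), L // 2 + 1)]
    total = sum(taps)  # normalize so the DC gain is 1 (illustrative choice)
    return [t / total for t in taps]
```

With the clause 17 values fc = 0.9 and L = 13 and an assumed down-sampling ratio r = 2, this yields a symmetric 13-tap low-pass kernel peaking at n = 0; the 8-tap, 4-tap, and 32-phase variants of clauses 18-20 would be derived from the same window-and-sinc design.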
23. A device, comprising:
one or more memories storing computer instructions; and
one or more processors configured to execute the computer instructions to cause the device to:
in response to a target picture and a reference picture having different resolutions, apply a band-pass filter to the reference picture, to perform a motion compensated interpolation and generate a reference block; and
encode or decode a block of the target picture using the reference block.
24. A non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for processing video content, the method comprising:
in response to a target picture and a reference picture having different resolutions, applying a band-pass filter to the reference picture, to perform a motion compensated interpolation and to generate a reference block; and
encoding or decoding a block of the target picture using the reference block.
25. A non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for processing video content, the method comprising:
comparing resolutions of a target picture and a first reference picture;
in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
[0077] In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims
1. A computer-implemented video processing method, comprising:
comparing resolutions of a target picture and a first reference picture;
in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
2. The method according to claim 1, further comprising:
storing the second reference picture in a first buffer, the first buffer being different from a second buffer storing decoded pictures that are used for prediction of future pictures; and
removing the second reference picture from the first buffer after the encoding or decoding of the target picture is completed.
3. The method according to claim 1, wherein encoding or decoding the target picture comprises:
encoding or decoding the target picture by using no more than a predetermined number of resampled reference pictures.
4. The method according to claim 3, wherein encoding or decoding the target picture comprises:
signaling the predetermined number as part of a sequence parameter set or a picture parameter set.
5. The method according to claim 1, wherein resampling the first reference picture to generate the second reference picture comprises:
storing a first version and a second version of the first reference picture, the first version having an original resolution, and the second version having a maximum resolution allowed for resampling the reference picture; and
resampling the maximum version to generate the second reference picture.
6. The method according to claim 5, further comprising:
storing the first version and the second version of the first reference picture in a decoded picture buffer; and
outputting the first version for coding future pictures.
7. The method according to claim 1, wherein resampling the first reference picture to generate the second reference picture comprises:
generating the second reference picture in a supported resolution.
8. The method according to claim 7, further comprising:
signaling, as part of a sequence parameter set or a picture parameter set, information indicating:
a number of supported resolutions, and
pixel dimensions corresponding to the supported resolutions.
9. The method according to claim 8, wherein the information comprises:
an index corresponding to at least one of the supported resolutions.
10. The method according to claim 8, further comprising:
setting the number of supported resolutions based on a configuration of a video application or a video device.
11. The method according to claim 1, wherein resampling the first reference picture to generate the second reference picture comprises:
performing progressive down-sampling of the first reference picture to generate the second reference picture.
12. A device, comprising:
one or more memories storing computer instructions; and
one or more processors configured to execute the computer instructions to cause the device to:
compare a resolution of a target picture to a resolution of a first reference picture;
in response to the target picture and the first reference picture having different resolutions, resample the first reference picture to generate a second reference picture; and
encode or decode the target picture using the second reference picture.
13. The device according to claim 12, wherein the one or more processors is further configured to execute the computer instructions to cause the device to:
store the second reference picture in a first buffer, the first buffer being different from a second buffer storing decoded pictures that are used for prediction of future pictures, and remove the second reference picture from the first buffer after the encoding or decoding of the target picture is completed.
14. The device according to claim 12, wherein the one or more processors is further configured to execute the computer instructions to cause the device to:
encode or decode the target picture by using no more than a predetermined number of resampled reference pictures.
15. The device according to claim 14, wherein the one or more processors is further configured to execute the computer instructions to cause the device to:
signal the predetermined number as part of a sequence parameter set or a picture parameter set.
16. The device according to claim 12, wherein the one or more processors is further configured to execute the computer instructions to cause the device to:
store, in the one or more memories, a first version and a second version of the first reference picture, the first version having an original resolution, and the second version having a maximum resolution allowed for resampling the reference picture; and
resample the maximum version to generate the second reference picture.
17. The device according to claim 16, wherein the one or more processors is further configured to execute the computer instructions to cause the device to:
store the first version and the second version of the first reference picture in a decoded picture buffer; and
output the first version for coding future pictures.
18. The device according to claim 12, wherein the one or more processors is further configured to execute the computer instructions to cause the device to:
generate the second reference picture in a supported resolution.
19. The device according to claim 18, wherein the one or more processors is further configured to execute the computer instructions to cause the device to:
signal, as part of a sequence parameter set or a picture parameter set, information indicating:
a number of supported resolutions, and
pixel dimensions corresponding to the supported resolutions.
20. A non-transitory computer readable medium storing a set of instructions that is executable by at least one processor of a computer system to cause the computer system to perform a method for processing video content, the method comprising:
comparing resolutions of a target picture and a first reference picture;
in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20833197.5A EP3981155A4 (en) | 2019-06-24 | 2020-05-29 | Adaptive resolution change in video processing |
CN202080046836.9A CN114128262A (en) | 2019-06-24 | 2020-05-29 | Adaptive resolution change in video processing |
JP2021570430A JP2022539657A (en) | 2019-06-24 | 2020-05-29 | Adaptive resolution change in video processing |
KR1020227001695A KR20220024659A (en) | 2019-06-24 | 2020-05-29 | Adaptive resolution change in video processing |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962865927P | 2019-06-24 | 2019-06-24 | |
US62/865,927 | 2019-06-24 | ||
US201962900439P | 2019-09-13 | 2019-09-13 | |
US62/900,439 | 2019-09-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020263499A1 true WO2020263499A1 (en) | 2020-12-30 |
Family
ID=74038999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2020/035079 WO2020263499A1 (en) | 2019-06-24 | 2020-05-29 | Adaptive resolution change in video processing |
Country Status (6)
Country | Link |
---|---|
US (3) | US11190781B2 (en) |
EP (1) | EP3981155A4 (en) |
JP (1) | JP2022539657A (en) |
KR (1) | KR20220024659A (en) |
CN (1) | CN114128262A (en) |
WO (1) | WO2020263499A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BR112022002480A2 (en) * | 2019-08-20 | 2022-04-26 | Beijing Bytedance Network Tech Co Ltd | Method for processing video, apparatus in a video system, and computer program product stored on non-transient computer-readable media |
US11632540B2 (en) * | 2019-12-20 | 2023-04-18 | Qualcomm Incorporated | Reference picture scaling ratios for reference picture resampling in video coding |
US12015785B2 (en) * | 2020-12-04 | 2024-06-18 | Ofinno, Llc | No reference image quality assessment based decoder side inter prediction |
CA3142044A1 (en) * | 2020-12-14 | 2022-06-14 | Comcast Cable Communications, Llc | Methods and systems for improved content encoding |
JP2024535510A (en) * | 2021-10-04 | 2024-09-30 | エルジー エレクトロニクス インコーポレイティド | Image encoding/decoding method and device for adaptively changing resolution, and method for transmitting bitstreams |
CN114302061B (en) * | 2021-12-31 | 2023-08-29 | 北京中联合超高清协同技术中心有限公司 | 8K and 4K synchronous mixed manufactured ultra-high definition video rebroadcasting vehicle and rebroadcasting method |
WO2024080778A1 (en) * | 2022-10-12 | 2024-04-18 | 엘지전자 주식회사 | Image encoding/decoding method and device for adaptively changing resolution, and method for transmitting bitstream |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100200417A1 (en) * | 2009-02-04 | 2010-08-12 | Impulse Devices, Inc. | Method and Apparatus for Electrodeposition in Metal Acoustic Resonators |
US20130107953A1 (en) * | 2011-10-31 | 2013-05-02 | Qualcomm Incorporated | Random access with advanced decoded picture buffer (dpb) management in video coding |
US20140146137A1 (en) * | 2011-06-24 | 2014-05-29 | Sehoon Yea | Encoding/decoding method and apparatus using a skip mode |
US20140269912A1 (en) * | 2006-01-06 | 2014-09-18 | Microsoft Corporation | Resampling and picture resizing operations for multi-resolution video coding and decoding |
US20150010062A1 (en) * | 2013-01-30 | 2015-01-08 | Neelesh N. Gokhale | Content adaptive parametric transforms for coding for next generation video |
US20150350726A1 (en) * | 2014-05-30 | 2015-12-03 | Alibaba Group Holding Limited | Method and apparatus of content-based self-adaptive video transcoding |
WO2018002425A2 (en) * | 2016-06-30 | 2018-01-04 | Nokia Technologies Oy | An apparatus, a method and a computer program for video coding and decoding |
US20180262753A1 (en) * | 2010-12-28 | 2018-09-13 | Sun Patent Trust | Image decoding apparatus for decoding a current picture with prediction using one or both of a first reference picture list and a second reference picture list |
WO2018178507A1 (en) * | 2017-03-27 | 2018-10-04 | Nokia Technologies Oy | An apparatus, a method and a computer program for video coding and decoding |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7333141B2 (en) * | 2002-10-22 | 2008-02-19 | Texas Instruments Incorporated | Resampling methods for digital images |
US8107571B2 (en) * | 2007-03-20 | 2012-01-31 | Microsoft Corporation | Parameterized filters and signaling techniques |
TWI387317B (en) * | 2008-12-11 | 2013-02-21 | Novatek Microelectronics Corp | Apparatus for reference picture resampling generation and method thereof and video decoding system |
KR101678968B1 (en) * | 2009-08-21 | 2016-11-25 | 에스케이텔레콤 주식회사 | Reference Picture Interpolation Method and Apparatus and Video Coding Method and Apparatus Using Same |
CN105453564B (en) * | 2013-07-30 | 2019-05-10 | 株式会社Kt | Support multiple layers of image coding and decoding method and the device using this method |
KR20150075041A (en) * | 2013-12-24 | 2015-07-02 | 주식회사 케이티 | A method and an apparatus for encoding/decoding a multi-layer video signal |
EP3090549A1 (en) * | 2014-01-02 | 2016-11-09 | VID SCALE, Inc. | Methods and systems for scalable video coding with mixed interlace and progressive content |
US20150264404A1 (en) * | 2014-03-17 | 2015-09-17 | Nokia Technologies Oy | Method and apparatus for video coding and decoding |
US10375399B2 (en) * | 2016-04-20 | 2019-08-06 | Qualcomm Incorporated | Methods and systems of generating a background picture for video coding |
KR102393736B1 (en) | 2017-04-04 | 2022-05-04 | 한국전자통신연구원 | Method and apparatus for coding video |
-
2020
- 2020-05-29 EP EP20833197.5A patent/EP3981155A4/en active Pending
- 2020-05-29 JP JP2021570430A patent/JP2022539657A/en active Pending
- 2020-05-29 CN CN202080046836.9A patent/CN114128262A/en active Pending
- 2020-05-29 KR KR1020227001695A patent/KR20220024659A/en unknown
- 2020-05-29 WO PCT/US2020/035079 patent/WO2020263499A1/en unknown
- 2020-05-29 US US16/887,103 patent/US11190781B2/en active Active
-
2021
- 2021-11-29 US US17/537,471 patent/US11843790B2/en active Active
-
2023
- 2023-11-20 US US18/514,161 patent/US20240089473A1/en active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP3981155A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3981155A1 (en) | 2022-04-13 |
US11843790B2 (en) | 2023-12-12 |
JP2022539657A (en) | 2022-09-13 |
EP3981155A4 (en) | 2022-08-17 |
US20200404297A1 (en) | 2020-12-24 |
KR20220024659A (en) | 2022-03-03 |
CN114128262A (en) | 2022-03-01 |
US20230045775A9 (en) | 2023-02-09 |
US11190781B2 (en) | 2021-11-30 |
US20240089473A1 (en) | 2024-03-14 |
US20220086469A1 (en) | 2022-03-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11843790B2 (en) | Adaptive resolution change in video processing | |
KR102421921B1 (en) | Apparatus and method for image coding and decoding | |
US7379496B2 (en) | Multi-resolution video coding and decoding | |
WO2017142448A1 (en) | Methods and devices for encoding and decoding video pictures | |
KR101482896B1 (en) | Optimized deblocking filters | |
US20210092390A1 (en) | Methods and apparatuses for prediction refinement with optical flow in reference picture resampling | |
JP5581688B2 (en) | Image processing apparatus and method, and program | |
US20120014450A1 (en) | System for low resolution power reduction with deblocking flag | |
US20210044799A1 (en) | Adaptive resolution change in video processing | |
US20120263225A1 (en) | Apparatus and method for encoding moving picture | |
JPWO2011039931A1 (en) | Image encoding device, image decoding device, image encoding method, and image decoding method | |
US20240137575A1 (en) | Filters for motion compensation interpolation with reference down-sampling | |
US11638019B2 (en) | Methods and systems for prediction from multiple cross-components | |
US20230353777A1 (en) | Motion compensation methods for video coding | |
WO2015138311A1 (en) | Phase control multi-tap downscale filter | |
WO2022037583A1 (en) | Systems and methods for intra prediction smoothing filter | |
Bier | Introduction to Video Compression (CV-902) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021570430 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2020833197 Country of ref document: EP Effective date: 20220106 |
|
ENP | Entry into the national phase |
Ref document number: 20227001695 Country of ref document: KR Kind code of ref document: A |