WO2015102329A1 - Method and apparatus for processing video signal for reducing visibility of blocking artifacts - Google Patents


Info

Publication number
WO2015102329A1
WO2015102329A1 (PCT/KR2014/012952)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
shift information
extended
random shift
video signal
Application number
PCT/KR2014/012952
Other languages
French (fr)
Inventor
Amir Said
Original Assignee
Lg Electronics Inc.
Application filed by Lg Electronics Inc. filed Critical Lg Electronics Inc.
Priority to US15/107,856 (published as US20160330486A1)
Priority to EP14876473.1A (published as EP3090560A4)
Priority to JP2016544118A (published as JP2017509188A)
Priority to CN201480071577.XA (published as CN105874802A)
Priority to KR1020167021029A (published as KR20160102075A)
Publication of WO2015102329A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: filters, e.g. for pre-processing or post-processing
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167: position within a video image, e.g. region of interest [ROI]
    • H04N19/169: characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/174: the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/20: using video object coding
    • H04N19/23: with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/85: using pre-processing or post-processing specially adapted for video compression
    • H04N19/86: involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present invention relates to a method and apparatus for processing a video signal and, more particularly, to a technology for reducing the visibility of blocking artifacts.
  • Video compression is computationally demanding, yet it must be supported on inexpensive consumer devices. Accordingly, to keep computational complexity at a manageable level, some video coding steps operate independently on sets of pixels grouped into relatively small square blocks. This approach has been adopted in existing codecs and continues to be used.
  • Such coding has the disadvantage that discontinuities, so-called blocking artifacts, are generated at the boundaries between neighboring reconstructed blocks. Such artifacts are easily perceived by the human eye and thus significantly reduce the subjective picture quality of a reconstructed video.
  • An embodiment of the present invention provides a method of reducing the visibility of blocking artifacts.
  • an embodiment of the present invention provides a method of extending a frame using random shift information.
  • an embodiment of the present invention proposes a method of obtaining a target frame from an extended frame using random shift information.
  • an embodiment of the present invention provides a method of coding and sending random shift information.
  • an embodiment of the present invention provides a method of improving the subjective picture quality of a video signal.
  • the visibility of blocking artifacts can be reduced by extending a frame using random shift information and obtaining a target frame from the extended frame. Furthermore, the subjective picture quality of a video signal can be improved by reducing the visibility of blocking artifacts.
  • blocking artifacts can be reduced by making blocking artifacts appear at different positions of a target frame obtained from an extended frame.
  • the picture quality of a video signal can be improved by a simple method at very low cost, and better picture quality can be obtained at a low bit rate.
  • FIGS. 1 and 2 illustrate schematic block diagrams of video signal processing apparatuses respectively including an encoder and decoder in accordance with embodiments to which the present invention is applied;
  • FIG. 3 illustrates the block structure of extended frames having fixed block boundaries in accordance with an embodiment to which the present invention is applied;
  • FIG. 4 illustrates the block structure of extended frames having block boundaries changed by random shift information in accordance with an embodiment to which the present invention is applied;
  • FIG. 5 illustrates block structures for performing a comparison between extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied;
  • FIG. 6 illustrates block structures for performing a comparison between the vertical block boundaries of extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied;
  • FIG. 7 illustrates block structures for performing a comparison between the horizontal block boundaries of extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied;
  • FIG. 8 is a flowchart illustrating a process of processing a video signal using random shift information in accordance with an embodiment to which the present invention is applied;
  • FIG. 9 illustrates test images for comparing an image (a) coded using an existing method with an image (b) to which an embodiment of the present invention has been applied.
  • FIGS. 10 to 13 are schematic block diagrams of an encoder and a decoder for processing a video signal based on an extended frame using random shift information in accordance with embodiments to which the present invention is applied.
  • a method of processing a video signal, the method including: receiving a video signal including an original frame; generating random shift information used to derive a relative position of the original frame; copying the original frame into an extended frame using the generated random shift information; and encoding the extended frame and the random shift information.
  • the boundary of frames included in the video signal varies for each frame based on the random shift information.
  • the random shift information is generated horizontally and/or vertically for each frame.
  • the extended frame is extended by one block size or more in each dimension of the original frame.
  • the random shift information is inserted in a slice header.
  • a method of processing a video signal, the method including: receiving the video signal including an extended frame and random shift information; decoding the extended frame, which includes a target frame, and the random shift information; and outputting the extended frame and the random shift information.
  • the target frame indicates a frame of the original frame size that is cropped from the extended frame based on the random shift information.
  • the random shift information is used to derive the position of the target frame horizontally and/or vertically.
  • the extended frame has been extended by one block size or more in each dimension of the target frame.
  • the random shift information is extracted from the slice header of the video signal.
  • an apparatus for processing a video signal, the apparatus including: a frame extension unit configured to receive a video signal including an original frame, generate random shift information used to derive a relative position of the original frame, and copy the original frame into an extended frame using the generated random shift information; and an encoder configured to encode the extended frame and the random shift information.
  • the boundary of frames included in the video signal varies for each frame based on the random shift information.
  • the random shift information is generated horizontally and/or vertically for each frame.
  • the extended frame is extended by one block size or more in each dimension of the original frame.
  • the random shift information is inserted in a slice header.
  • a decoder for decoding a video signal, wherein the decoder is configured to receive the video signal including an extended frame and random shift information, decode the extended frame including a target frame and the random shift information, and output the extended frame and the random shift information.
  • the target frame indicates a frame of the original frame size that is cropped from the extended frame based on the random shift information.
  • the random shift information is used to derive the position of the target frame horizontally and/or vertically.
  • the extended frame has been extended by one block size or more in each dimension of the target frame.
  • the random shift information is extracted from the slice header of the video signal.
  • a basic problem in removing blocking artifacts is that an artifact remaining after deblocking is easily recognized because it stays still relative to an image containing moving objects. Accordingly, an embodiment of the present invention proposes a technology capable of removing such a stationary artifact with a negligible increase in complexity, in particular, a technology that makes the artifact invisible at higher frame rates.
  • new video content requires higher video frame rates to accompany its higher resolution.
  • Some new movies are produced at 48 frames/second, and some TV content is recorded at 60 frames/second.
  • Content at such frame rates is approaching the temporal response limit of human vision.
  • the present invention proposes various embodiments.
  • FIGS. 1 and 2 illustrate schematic block diagrams of video signal processing apparatuses respectively including an encoder and decoder in accordance with embodiments to which the present invention is applied.
  • the video signal processing apparatus to which an embodiment of the present invention is applied may include a frame extension unit 101 and an encoder 100.
  • the frame extension unit 101 may receive a video signal including the original frame.
  • the frame extension unit 101 may generate an extended frame by extending the original frame. In this case, shift information for extending the original frame may be used.
  • the shift information is used to obtain the relative position of a target frame and may include horizontal shift information and vertical shift information. Furthermore, the shift information may be randomly generated for each frame; such information is hereinafter called random shift information.
  • the target frame may mean a frame to be finally output by the video signal processing apparatus.
  • the target frame may mean a frame cropped from the extended frame.
  • the encoder 100 may receive an extended frame and shift information from the frame extension unit 101. Furthermore, the encoder 100 may encode the received extended frame and shift information and output the encoded extended frame and shift information.
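  • The frame extension performed by the frame extension unit 101 can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the 16-sample block size, the edge-replication padding of the extension region, and the name extend_frame are all choices made here for illustration, not fixed by the present invention.

```python
import numpy as np

BLOCK = 16  # assumed block size; the invention only requires a pad of at least one block

def extend_frame(original, rng):
    """Copy the original frame into a larger buffer at a random offset.

    Returns the extended frame and the (S_v, S_h) shift information that
    the decoder side would use to crop the target frame back out.
    """
    h, w = original.shape
    s_v = int(rng.integers(0, BLOCK))  # random vertical shift, 0 <= S_v < BLOCK
    s_h = int(rng.integers(0, BLOCK))  # random horizontal shift, 0 <= S_h < BLOCK
    extended = np.zeros((h + BLOCK, w + BLOCK), dtype=original.dtype)
    extended[s_v:s_v + h, s_h:s_h + w] = original
    # Fill the extension region by edge replication (a padding choice made
    # here; the invention does not fix how the border is filled).
    extended[:s_v, :] = extended[s_v, :]
    extended[s_v + h:, :] = extended[s_v + h - 1, :]
    extended[:, :s_h] = extended[:, s_h:s_h + 1]
    extended[:, s_h + w:] = extended[:, s_h + w - 1:s_h + w]
    return extended, (s_v, s_h)
```

The returned shift pair is what would then be encoded alongside the extended frame.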
  • the video signal processing apparatus to which an embodiment of the present invention is applied may include a decoder 200 and a frame processing unit 201.
  • the decoder 200 may receive a bit stream including an extended frame and shift information.
  • the decoder 200 may decode the extended frame and the shift information and send the decoded extended frame and shift information to the frame processing unit 201.
  • the frame processing unit 201 may obtain a target frame from the extended frame using the shift information.
  • the target frame may be obtained by cropping the extended frame according to the shift information.
  • Each target frame may be obtained based on each piece of shift information. Accordingly, each target frame may have a different block boundary.
  • the visibility of blocking artifacts can be reduced by continuously outputting frames having different block boundaries as described above.
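  • The cropping step performed by the frame processing unit 201 can be sketched as follows; crop_target is a hypothetical name, and the sketch assumes a single-channel frame.

```python
import numpy as np

def crop_target(extended, shift, target_shape):
    """Recover the target frame by cropping the decoded extended frame.

    `shift` is the (vertical, horizontal) random shift information carried
    with the bit stream; `target_shape` is the original frame size.
    """
    s_v, s_h = shift
    h, w = target_shape
    return extended[s_v:s_v + h, s_h:s_h + w]
```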
  • FIG. 3 illustrates the block structure of extended frames having fixed block boundaries in accordance with an embodiment to which the present invention is applied.
  • a white region denotes the original frame
  • an oblique region denotes a frame extension region.
  • an extended frame may have a fixed block boundary like the block boundary of the original frame. That is, the block boundary of the extended frame is the same as the block boundary of the original frame.
  • FIG. 4 illustrates the block structure of extended frames having block boundaries changed by random shift information in accordance with an embodiment to which the present invention is applied.
  • a white region denotes the original frame
  • an oblique region denotes a frame extension region.
  • An extended frame to which an embodiment of the present invention is applied may be extended by at least one block size in each dimension.
  • the position of the original frame within the extended frame may be determined by shift information that is randomly chosen horizontally and/or vertically. That is, when the original frame is copied into the frame buffer, separate random shift information may be used for each frame.
  • the extended frame is extended vertically by S_v and horizontally by S_h relative to the original frame. Accordingly, the block boundary of the original frame is shifted by the shift information.
  • if each frame is extended using separate random shift information based on this principle, each frame has a different block boundary. As a result, when the frames are reconstructed, the visibility of blocking artifacts can be reduced because the block boundaries fall at different positions from frame to frame.
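  • The effect can be checked numerically. In the sketch below (illustrative names, a 16-sample block size assumed), the block boundaries of the extended frame fall on multiples of the block size, so after cropping by a shift they land at different positions inside each output frame:

```python
BLOCK = 16  # assumed block size

def boundary_positions(shift, length):
    """Indices inside the cropped target frame where the extended frame's
    block boundaries fall, for one (horizontal or vertical) shift component."""
    first = (BLOCK - shift % BLOCK) % BLOCK
    return list(range(first, length, BLOCK))

# Two frames with different random shifts place their boundaries at
# different positions, so no boundary artifact stays still between frames.
```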
  • FIG. 5 illustrates block structures for performing a comparison between extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied.
  • a white region denotes the original frame
  • an oblique region denotes a frame extension region.
  • an Nth extended frame has been vertically extended by S_v(n) and horizontally extended by S_h(n) from an Nth original frame.
  • S_v(n) is indicative of the vertical shift information of the Nth frame
  • S_h(n) is indicative of the horizontal shift information of the Nth frame.
  • the vertical shift information of the Nth frame and the horizontal shift information of the Nth frame may be randomly determined and may have the same value or different values.
  • an (N+1)th extended frame has been vertically extended by S_v(n+1) and horizontally extended by S_h(n+1) from an (N+1)th original frame.
  • S_v(n+1) is indicative of the vertical shift information of the (N+1)th frame
  • S_h(n+1) is indicative of the horizontal shift information of the (N+1)th frame.
  • the vertical shift information of the (N+1)th frame and the horizontal shift information of the (N+1)th frame may be randomly determined and may have the same value or different values.
  • the vertical and horizontal shift information of the Nth frame and the vertical and horizontal shift information of the (N+1)th frame may have the same value or different values according to circumstances because they are randomly determined.
  • a region cropped from the Nth extended frame may be defined as an Nth target frame
  • a region cropped from the (N+1)th extended frame may be defined as an (N+1)th target frame.
  • FIGS. 6 and 7 illustrate block structures for performing a comparison between the vertical and horizontal block boundaries of extended frames based on different pieces of random shift information in accordance with embodiments to which the present invention is applied.
  • FIG. 6 illustrates the block structures and vertical block boundaries of an Nth target frame cropped from an Nth extended frame and an (N+1)th target frame cropped from an (N+1)th extended frame.
  • a first dotted line from the left is indicative of the vertical block boundary of the Nth target frame, and a second dotted line from the left is indicative of the vertical block boundary of the (N+1)th target frame.
  • because different pieces of random shift information are applied, the output target frames have different vertical block boundaries.
  • FIG. 7 illustrates the block structures and horizontal block boundaries of an Nth target frame cropped from an Nth extended frame and an (N+1)th target frame cropped from an (N+1)th extended frame.
  • a first dotted line from the top is indicative of the horizontal block boundary of the Nth target frame, and a second dotted line from the top is indicative of the horizontal block boundary of the (N+1)th target frame.
  • because different pieces of random shift information are applied, the output target frames have different horizontal block boundaries.
  • FIG. 8 is a flowchart illustrating a process of processing a video signal using random shift information in accordance with an embodiment to which the present invention is applied.
  • the video signal processing apparatus may receive a video signal including the original frame at step S810.
  • the video signal processing apparatus may extend the original frame in order to improve coding efficiency.
  • the video signal processing apparatus may generate random shift information used to derive the relative position of the original frame from an extended frame at step S820.
  • the random shift information may include at least one of vertical shift information and horizontal shift information.
  • the random shift information may be included in at least one of a sequence parameter, a picture parameter, a slice header, and Supplemental Enhancement Information (SEI).
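  • The present invention does not fix a syntax for this signaling. As one possible sketch, the two shift components could be packed into a short fixed-length field of a slice header (the field width and function names are assumptions; 4 bits per component suffices when shifts are smaller than a 16-sample block):

```python
def write_shift_info(s_v, s_h, bits=4):
    """Pack vertical and horizontal shift components into one small field."""
    assert 0 <= s_v < (1 << bits) and 0 <= s_h < (1 << bits)
    return (s_v << bits) | s_h

def read_shift_info(field, bits=4):
    """Recover (s_v, s_h) from the packed slice-header field."""
    return (field >> bits) & ((1 << bits) - 1), field & ((1 << bits) - 1)
```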
  • the video signal processing apparatus may copy the original frame within the extended frame using the random shift information at step S830.
  • the video signal processing apparatus may generate a bit stream by encoding the extended frame and the random shift information at step S840.
  • the generated bit stream may be transmitted to another apparatus.
  • the random shift information may be directly transmitted from the frame extension unit 101 of FIG. 1 to the decoder 200 or the frame processing unit 201 of FIG. 2 without being encoded into the bit stream.
  • FIG. 9 illustrates test images for comparing an image (a) coded using an existing method with an image (b) to which an embodiment of the present invention has been applied.
  • FIG. 9(a) illustrates an image coded using an existing method
  • FIG. 9(b) illustrates an image to which an embodiment of the present invention has been applied
  • FIG. 9(b) illustrates that four JPEG-coded images have been shifted using random shift information and averaged.
  • From a comparison between FIG. 9(a) and FIG. 9(b), it may be seen that the picture quality of FIG. 9(b) is better than that of FIG. 9(a).
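  • The averaging used to produce FIG. 9(b) can be sketched as follows. The JPEG coding step itself is omitted; shift_average and its argument names are illustrative, and each input is assumed to be a decoded extended frame of the same content at a different shift:

```python
import numpy as np

def shift_average(decoded_frames, shifts, target_shape):
    """Realign several decoded extended frames by their shifts and average
    the realigned copies, so each copy's blocking artifacts fall at
    different positions and are attenuated by the average."""
    h, w = target_shape
    acc = np.zeros((h, w), dtype=np.float64)
    for frame, (s_v, s_h) in zip(decoded_frames, shifts):
        acc += frame[s_v:s_v + h, s_h:s_h + w]
    return np.round(acc / len(decoded_frames)).astype(np.uint8)
```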
  • FIGS. 10 to 13 are schematic block diagrams of an encoder and a decoder for processing a video signal based on an extended frame using random shift information in accordance with embodiments to which the present invention is applied.
  • the present invention may be applied to a unit of the encoder and the decoder that requires shift information in a process of encoding or decoding a video signal.
  • FIG. 10 is a schematic block diagram of an encoder in which encoding is performed on a video signal in accordance with an embodiment to which the present invention is applied.
  • the encoder 100 includes a transform unit 120, a quantization unit 125, a dequantization unit 130, an inverse transform unit 135, a filtering unit 140, a Decoded Picture Buffer (DPB) unit 150, an inter-prediction unit 160, an intra-prediction unit 165, and an entropy encoding unit 170.
  • the encoder 100 receives a video signal and generates a residual signal by subtracting a prediction signal, output by the inter-prediction unit 160 or the intra-prediction unit 165, from the input video signal.
  • the video signal includes an extended frame, and the extended frame has been extended by shift information from an original video signal.
  • the generated residual signal is sent to the transform unit 120, and the transform unit 120 generates a transform coefficient by applying a transform scheme to the residual signal.
  • the quantization unit 125 quantizes the generated transform coefficient and sends the quantized coefficient to the entropy encoding unit 170.
  • the entropy encoding unit 170 performs entropy coding on the quantized signal and outputs the resulting signal.
  • an artifact in which block boundaries become visible may occur because neighboring blocks are quantized with different quantization parameters.
  • Such a phenomenon is called a blocking artifact, and it is one of the key factors by which people evaluate picture quality.
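  • A small numerical sketch of why this happens (uniform scalar quantization stands in for transform-domain quantization, and the step sizes are illustrative): a smooth ramp crossing a block boundary is quantized with different step sizes on each side, producing a jump larger than the signal's true gradient.

```python
import numpy as np

def quantize(block, step):
    """Uniform scalar quantization: the coarser the step, the larger the
    reconstruction error at the block boundary."""
    return np.round(block / step) * step

ramp = np.linspace(100.0, 115.0, 16)   # smooth signal, true step = 1.0
left = quantize(ramp[:8], step=4)      # block coded with a fine quantizer
right = quantize(ramp[8:], step=10)    # neighboring block, coarser quantizer
jump = abs(right[0] - left[-1])        # discontinuity at the block boundary
```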
  • the filtering unit 140 applies filtering to the reconstructed signal and outputs the filtered signal to a playback device or sends the filtered signal to the DPB unit 150.
  • the DPB unit 150 may store the filtered frame in order to use the filtered frame as a reference frame in the inter-prediction unit 160.
  • the inter-prediction unit 160 performs temporal prediction and/or spatial prediction with reference to a reconstructed picture in order to remove temporal redundancy and/or spatial redundancy.
  • a reference picture used for prediction may contain blocking or ringing artifacts because it is a signal that was quantized and dequantized in block units when it was previously encoded or decoded.
  • the intra-prediction unit 165 predicts the current block with reference to samples neighboring the block currently being coded.
  • FIG. 11 illustrates a schematic block diagram of a decoder configured to decode a video signal in an embodiment to which the present invention is applied.
  • the decoder 200 of FIG. 11 includes an entropy decoding unit 210, a dequantization unit 220, an inverse transform unit 225, a filtering unit 230, a DPB unit 240, an inter-prediction unit 250, and an intra-prediction unit 255.
  • the decoder 200 receives a signal output by the encoder 100 of FIG. 10.
  • the output signal may include an extended frame, and may additionally include shift information.
  • the received signal is subjected to entropy decoding through the entropy decoding unit 210.
  • the dequantization unit 220 obtains a transform coefficient from the entropy-decoded signal using information about a quantization step size.
  • the inverse transform unit 225 obtains a difference signal by inversely transforming the transform coefficient.
  • a reconstructed signal is generated by adding the obtained difference signal to a prediction signal output by the inter-prediction unit 250 or the intra-prediction unit 255.
  • the filtering unit 230 applies filtering to the reconstructed signal and outputs the filtered signal to a playback device or the DPB unit 240.
  • the filtered signal transmitted by the DPB unit 240 may be used as a reference frame in the inter-prediction unit 250.
  • FIGS. 12 and 13 illustrate schematic block diagrams of an encoder and a decoder to which an embodiment of the present invention has been applied.
  • the encoder 100 of FIG. 12 includes a transform unit 110, a quantization unit 120, a dequantization unit 130, an inverse transform unit 140, a buffer 150, a prediction unit 160, and an entropy encoding unit 170.
  • the decoder 200 of FIG. 13 includes an entropy decoding unit 210, a dequantization unit 220, an inverse transform unit 230, a buffer 240, and a prediction unit 250.
  • the encoder 100 receives a video signal and generates a prediction error by subtracting a predicted signal, output by the prediction unit 160, from the video signal.
  • the video signal includes an extended frame, and the extended frame has been extended by shift information from an original video signal.
  • the generated prediction error is transmitted to the transform unit 110.
  • the transform unit 110 generates a transform coefficient by applying a transform scheme to the prediction error.
  • the quantization unit 120 quantizes the generated transform coefficient and sends the quantized coefficient to the entropy encoding unit 170.
  • the entropy encoding unit 170 performs entropy coding on the quantized signal and outputs an entropy-coded signal.
  • the quantized signal output by the quantization unit 120 may be used to generate a prediction signal.
  • the dequantization unit 130 and the inverse transform unit 140 within the loop of the encoder 100 may perform dequantization and inverse transform on the quantized signal so that the quantized signal is reconstructed into a prediction error.
  • a reconstructed signal may be generated by adding the reconstructed prediction error to a prediction signal output by the prediction unit 160.
  • the buffer 150 stores the reconstructed signal for future reference by the prediction unit 160.
  • the prediction unit 160 generates a prediction signal using a previously reconstructed signal stored in the buffer 150.
  • the decoder 200 of FIG. 13 receives a signal output by the encoder 100 of FIG. 12.
  • the output signal may include an extended frame, and may additionally include shift information.
  • the entropy decoding unit 210 performs entropy decoding on the received signal.
  • the dequantization unit 220 obtains a transform coefficient from the entropy-decoded signal based on information about a quantization step size.
  • the inverse transform unit 230 obtains a prediction error by performing inverse transform on the transform coefficient.
  • a reconstructed signal is generated by adding the obtained prediction error to a prediction signal output by the prediction unit 250.
  • the buffer 240 stores the reconstructed signal for future reference by the prediction unit 250.
  • the prediction unit 250 generates a prediction signal using a previously reconstructed signal stored in the buffer 240.
  • the visibility of blocking artifacts can be reduced by encoding and decoding the extended frame and shift information. Furthermore, the subjective picture quality of a video signal can be improved by reducing the visibility of blocking artifacts.
  • a processing apparatus including the decoder and the encoder to which the present invention is applied may be included in a multimedia broadcasting transmission/reception apparatus, a mobile communication terminal, a home cinema video apparatus, a digital cinema video apparatus, a surveillance camera, a video chatting apparatus, a real-time communication apparatus, such as video communication, a mobile streaming apparatus, a storage medium, a camcorder, a VoD service providing apparatus, an Internet streaming service providing apparatus, a three-dimensional (3D) video apparatus, a teleconference video apparatus, and a medical video apparatus and may be used to process video signals and data signals.
  • a multimedia broadcasting transmission/reception apparatus a mobile communication terminal, a home cinema video apparatus, a digital cinema video apparatus, a surveillance camera, a video chatting apparatus, a real-time communication apparatus, such as video communication, a mobile streaming apparatus, a storage medium, a camcorder, a VoD service providing apparatus, an Internet streaming service providing apparatus, a three-dimensional (3D) video apparatus, a teleconference
  • the processing method to which the present invention is applied may be produced in the form of a program that is to be executed by a computer and may be stored in a computer-readable recording medium.
  • Multimedia data having a data structure according to the present invention may also be stored in computer-readable recording media.
  • the computer-readable recording media include all types of storage devices in which data readable by a computer system is stored.
  • the computer-readable recording media may include a BD, a USB, ROM, RAM, CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, for example.
  • the computer-readable recording media includes media implemented in the form of carrier waves (e.g., transmission through the Internet).
  • a bit stream generated by the encoding method may be stored in a computer-readable recording medium or may be transmitted over wired/wireless communication networks.


Abstract

Disclosed herein is a method of processing a video signal, comprising: receiving a video signal comprising an original frame; generating random shift information being used to derive a relative position of the original frame; copying the original frame within an extended frame using the generated random shift information; and encoding the extended frame and the random shift information, wherein a boundary of frames included in the video signal varies for each frame based on the random shift information.

Description

METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL FOR REDUCING VISIBILITY OF BLOCKING ARTIFACTS
The present invention relates to a method and apparatus for processing a video signal and, more particularly, to a technology for reducing the visibility of blocking artifacts.
Video compression is computationally demanding, yet it must be supported on inexpensive consumer devices. Accordingly, in order to keep computational complexity at a manageable level, some steps of video coding operate independently on sets of pixels grouped into relatively small square blocks. Such an approach has been adopted in existing codecs and continues to be used.
Such coding, however, is disadvantageous in that discontinuities, so-called blocking artifacts, appear at the boundaries between neighboring blocks when reconstruction is performed. Such artifacts tend to be noticed by the eye, thereby significantly reducing the subjective picture quality of a reconstructed video.
The visibility of artifacts may be reduced through a deblocking filter, but blocking artifacts cannot be fully removed, and filtering tends to introduce new artifacts. For example, excessive filtering reduces resolution and removes details. Furthermore, such new artifacts remain visible, thereby reducing reconstruction quality.
There is a problem in that the subjective picture quality of a reconstructed video is significantly reduced because blocking artifacts are generated when reconstruction is performed in neighboring blocks.
There is a problem in that a new artifact is generated when deblocking filtering is performed.
There are problems in that excessive filtering reduces resolution and removes the details of an image.
An embodiment of the present invention provides a method of reducing the visibility of blocking artifacts.
Furthermore, an embodiment of the present invention provides a method of extending a frame using random shift information.
Furthermore, an embodiment of the present invention proposes a method of obtaining a target frame from an extended frame using random shift information.
Furthermore, an embodiment of the present invention provides a method of coding and sending random shift information.
Furthermore, an embodiment of the present invention provides a method of improving the subjective picture quality of a video signal.
In accordance with the present invention, the visibility of blocking artifacts can be reduced by extending a frame using random shift information and obtaining a target frame from the extended frame. Furthermore, the subjective picture quality of a video signal can be improved by reducing the visibility of blocking artifacts.
Furthermore, the visibility of blocking artifacts can be reduced by making blocking artifacts appear at different positions of a target frame obtained from an extended frame.
Furthermore, in accordance with the present invention, the picture quality of a video signal can be improved using a simple method and very low costs, and more improved picture quality can be obtained at a low bit rate.
FIGS. 1 and 2 illustrate schematic block diagrams of video signal processing apparatuses respectively including an encoder and decoder in accordance with embodiments to which the present invention is applied;
FIG. 3 illustrates the block structure of extended frames having fixed block boundaries in accordance with an embodiment to which the present invention is applied;
FIG. 4 illustrates the block structure of extended frames having block boundaries changed by random shift information in accordance with an embodiment to which the present invention is applied;
FIG. 5 illustrates block structures for performing a comparison between extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied;
FIG. 6 illustrates block structures for performing a comparison between the vertical block boundaries of extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied;
FIG. 7 illustrates block structures for performing a comparison between the horizontal block boundaries of extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied;
FIG. 8 is a flowchart illustrating a process of processing a video signal using random shift information in accordance with an embodiment to which the present invention is applied;
FIG. 9 illustrates test images for comparing an image (a) coded using an existing method with an image (b) to which an embodiment of the present invention has been applied; and
FIGS. 10 to 13 are schematic block diagrams of an encoder and a decoder for processing a video signal based on an extended frame using random shift information in accordance with embodiments to which the present invention is applied.
In accordance with an aspect of the present invention, there is provided a method of processing a video signal, including receiving a video signal including the original frame, generating random shift information being used to derive a relative position of the original frame, copying the original frame within an extended frame using the generated random shift information, and encoding the extended frame and the random shift information. The boundary of frames included in the video signal varies for each frame based on the random shift information.
The random shift information is generated horizontally and/or vertically for each frame.
The extended frame is extended by one block size or more in each dimension of the original frame.
The random shift information is inserted in a slice header.
In accordance with another aspect of the present invention, there is provided a method of processing a video signal, including receiving the video signal including an extended frame and random shift information, decoding the extended frame including a target frame and the random shift information, and outputting the extended frame and the random shift information. The target frame indicates a frame with an original frame size which is cropped from the extended frame based on the random shift information.
The random shift information is used to derive the position of the target frame horizontally and/or vertically.
The extended frame has been extended by one block size or more in each dimension of the target frame.
The random shift information is extracted from the slice header of the video signal.
In accordance with yet another aspect of the present invention, there is provided an apparatus for processing a video signal, including a frame extension unit configured to receive a video signal including the original frame, generate random shift information being used to derive a relative position of the original frame, and copy the original frame within an extended frame using the generated random shift information and an encoder configured to encode the extended frame and the random shift information. The boundary of frames included in the video signal varies for each frame based on the random shift information.
The random shift information is generated horizontally and/or vertically for each frame.
The extended frame is extended by one block size or more in each dimension of the original frame.
The random shift information is inserted in a slice header.
In still yet another aspect of the present invention, there is provided a decoder for decoding a video signal, wherein the decoder is configured to receive the video signal including an extended frame and random shift information, decode the extended frame including a target frame and the random shift information, and output the extended frame and the random shift information. The target frame indicates a frame with an original frame size which is cropped from the extended frame based on the random shift information.
The random shift information is used to derive the position of the target frame horizontally and/or vertically.
The extended frame has been extended by one block size or more in each dimension of the target frame.
The random shift information is extracted from the slice header of the video signal.
Hereinafter, exemplary elements and operations in accordance with embodiments of the present invention are described with reference to the accompanying drawings. It is, however, to be noted that the elements and operations of the present invention described with reference to the drawings are provided only as embodiments, and the technical spirit and the core configuration and operation of the present invention are not limited thereto.
Furthermore, terms used in this specification are common terms that are now widely used, but in special cases, terms randomly selected by the applicant are used. In such a case, the meaning of a corresponding term is clearly described in the detailed description of a corresponding part. Accordingly, it is to be noted that the present invention should not be construed as being based on only the name of a term used in a corresponding description of this specification and that the present invention should be construed by checking even the meaning of a corresponding term.
Furthermore, terms used in this specification are common terms selected to describe the invention, but may be replaced with other terms for more appropriate analysis if such terms having similar meanings are present. For example, a signal, data, a sample, a picture, a frame, and a block may be properly replaced and interpreted in each coding process.
A basic problem in removing blocking artifacts is that the artifacts remaining after deblocking are easily recognized because they stay stationary with respect to an image containing moving objects. Accordingly, an embodiment of the present invention proposes a technology capable of removing such stationary artifacts with a negligible increase in complexity, and in particular a technology that makes artifacts invisible at higher frame rates.
Furthermore, new video content requires higher frame rates along with higher resolution. Some new movies are produced at 48 frames/second, and some TV content is recorded at 60 frames/second. Content at such frame rates is approaching the response limit of human vision. The present invention proposes various embodiments for viewing such content with higher picture quality.
FIGS. 1 and 2 illustrate schematic block diagrams of video signal processing apparatuses respectively including an encoder and decoder in accordance with embodiments to which the present invention is applied.
Referring to FIG. 1, the video signal processing apparatus to which an embodiment of the present invention is applied may include a frame extension unit 101 and an encoder 100.
The frame extension unit 101 may receive a video signal including the original frame. The frame extension unit 101 may generate an extended frame by extending the original frame. In this case, shift information for extending the original frame may be used.
The shift information is used to obtain the relative position of a target frame and may include horizontal shift information and vertical shift information. Furthermore, the shift information may be randomly generated for each frame; such information is hereinafter called random shift information.
The target frame may mean a frame to be finally output by the video signal processing apparatus. Alternatively, the target frame may mean a frame cropped from the extended frame.
The encoder 100 may receive an extended frame and shift information from the frame extension unit 101. Furthermore, the encoder 100 may encode the received extended frame and shift information and output the encoded extended frame and shift information.
Referring to FIG. 2, the video signal processing apparatus to which an embodiment of the present invention is applied may include a decoder 200 and a frame processing unit 201.
The decoder 200 may receive a bit stream including an extended frame and shift information. The decoder 200 may decode the extended frame and the shift information and send the decoded extended frame and shift information to the frame processing unit 201.
The frame processing unit 201 may obtain a target frame from the extended frame using the shift information. The target frame may be obtained by cropping the extended frame by the shift information.
Each target frame may be obtained based on each piece of shift information. Accordingly, each target frame may have a different block boundary.
The visibility of blocking artifacts can be reduced by continuously outputting frames having different block boundaries as described above.
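By way of illustration, the cropping performed by the frame processing unit 201 can be sketched as follows (a minimal numpy example; the function name, the 2-D grayscale frame representation, and the (Sv, Sh) tuple form of the shift information are assumptions for illustration, not part of the original disclosure):

```python
import numpy as np

def crop_target_frame(extended, shift, target_shape):
    """Recover the target frame from a decoded extended frame using the
    decoded shift information (Sv, Sh), as done by the frame processing
    unit. The target frame has the original frame size."""
    sv, sh = shift
    h, w = target_shape
    return extended[sv:sv + h, sh:sh + w]
```

Because each frame carries its own shift information, consecutive calls with different (Sv, Sh) values crop target frames whose block boundaries fall at different positions.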
FIG. 3 illustrates the block structure of extended frames having fixed block boundaries in accordance with an embodiment to which the present invention is applied.
Referring to FIG. 3, a white region denotes the original frame, and an oblique region denotes a frame extension region. In FIG. 3, an extended frame may have a fixed block boundary like the block boundary of the original frame. That is, the block boundary of the extended frame is the same as the block boundary of the original frame.
Accordingly, although coding is performed based on an extended frame, the visibility of blocking artifacts may not be reduced because the extended frame has a fixed block boundary. To address this problem, an extended frame having a different block boundary in each frame needs to be used.
FIG. 4 illustrates the block structure of extended frames having block boundaries changed by random shift information in accordance with an embodiment to which the present invention is applied.
Referring to FIG. 4, a white region denotes the original frame, and an oblique region denotes a frame extension region. An extended frame to which an embodiment of the present invention is applied may be extended by at least one block size in each dimension. For example, the extended frame may be extended by shift information that has been randomly determined horizontally and/or vertically. That is, when the original frame is copied to a frame buffer, separate random shift information may be used for each frame.
Referring to FIG. 4, the extended frame has been vertically extended by Sv and horizontally extended by Sh from the original frame. Accordingly, the original frame has a block boundary shifted by the shift information.
If each frame is extended by separate random shift information based on such a principle, each frame has a different block boundary. As a result, when the frames are reconstructed, the visibility of blocking artifacts can be reduced because the block boundaries appear at different positions from frame to frame.
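The extension step described above can be sketched as follows (an illustrative numpy example assuming an 8x8 block size, 2-D grayscale frames, and edge replication for the extension region; the disclosure does not specify how the extension region is filled):

```python
import numpy as np

BLOCK = 8  # assumed block size

def extend_frame(original, rng):
    """Copy `original` into a frame buffer enlarged by one block in each
    dimension, at an offset given by per-frame random shift information
    (Sv, Sh), each drawn from 0..BLOCK-1."""
    h, w = original.shape
    sv = int(rng.integers(0, BLOCK))  # vertical shift information Sv
    sh = int(rng.integers(0, BLOCK))  # horizontal shift information Sh
    extended = np.zeros((h + BLOCK, w + BLOCK), dtype=original.dtype)
    extended[sv:sv + h, sh:sh + w] = original
    # Fill the extension region by replicating edge pixels (an assumed
    # padding scheme; any filling method could be used).
    extended[:sv, sh:sh + w] = original[0]
    extended[sv + h:, sh:sh + w] = original[-1]
    extended[:, :sh] = extended[:, sh:sh + 1]
    extended[:, sh + w:] = extended[:, sh + w - 1:sh + w]
    return extended, (sv, sh)
```

Calling this once per frame with a fresh draw reproduces the behavior of FIG. 4: the original frame lands at a different offset inside each extended frame, so the coding-block grid cuts it at different positions.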
Embodiments in which target frames having vertically and horizontally different block boundaries using different pieces of random shift information in respective frames are compared with each other are described below with reference to FIGS. 5 to 7.
FIG. 5 illustrates block structures for performing a comparison between extended frames based on different pieces of random shift information in accordance with an embodiment to which the present invention is applied.
Referring to FIG. 5, a white region denotes the original frame, and an oblique region denotes a frame extension region. A case in which different pieces of shift information have been applied to an Nth frame and an (N+1)th frame is described with reference to FIG. 5.
First, an Nth extended frame has been vertically extended by Sv(n) and horizontally extended by Sh(n) from an Nth original frame. In this case, the Sv(n) is indicative of the vertical shift information of the Nth frame, and Sh(n) is indicative of the horizontal shift information of the Nth frame.
In this case, the vertical shift information of the Nth frame and the horizontal shift information of the Nth frame may be randomly determined and may have the same value or different values.
Furthermore, an (N+1)th extended frame has been vertically extended by Sv(n+1) and horizontally extended by Sh(n+1) from an (N+1)th original frame. In this case, the Sv(n+1) is indicative of the vertical shift information of the (N+1)th frame, and the Sh(n+1) is indicative of the horizontal shift information of the (N+1)th frame.
Likewise, the vertical shift information of the (N+1)th frame and the horizontal shift information of the (N+1)th frame may be randomly determined and may have the same value or different values.
Furthermore, the vertical and horizontal shift information of the Nth frame and the vertical and horizontal shift information of the (N+1)th frame may have the same value or different values according to circumstances because they are randomly determined.
Furthermore, a region cropped from the Nth extended frame may be defined as an Nth target frame, and a region cropped from the (N+1)th extended frame may be defined as an (N+1)th target frame.
FIGS. 6 and 7 illustrate block structures for performing a comparison between the vertical and horizontal block boundaries of extended frames based on different pieces of random shift information in accordance with embodiments to which the present invention is applied.
FIG. 6 illustrates the block structures and vertical block boundaries of an Nth target frame cropped from an Nth extended frame and an (N+1)th target frame cropped from an (N+1)th extended frame.
A first dotted line from the left is indicative of the vertical block boundary of the Nth target frame, and a second dotted line from the left is indicative of the vertical block boundary of the (N+1)th target frame.
That is, it may be seen that output target frames have different vertical block boundaries by applying different pieces of random shift information.
FIG. 7 illustrates the block structures and horizontal block boundaries of an Nth target frame cropped from an Nth extended frame and an (N+1)th target frame cropped from an (N+1)th extended frame.
A first dotted line from the top is indicative of the horizontal block boundary of the Nth target frame, and a second dotted line from the top is indicative of the horizontal block boundary of the (N+1)th target frame.
That is, it may be seen that output target frames have different horizontal block boundaries by applying different pieces of random shift information.
Accordingly, the visibility of blocking artifacts can be reduced by consecutively outputting target frames having different block boundaries as described above.
FIG. 8 is a flowchart illustrating a process of processing a video signal using random shift information in accordance with an embodiment to which the present invention is applied.
The video signal processing apparatus may receive a video signal including the original frame at step S810. The video signal processing apparatus may extend the original frame in order to improve coding efficiency. In this case, the video signal processing apparatus may generate random shift information used to derive the relative position of the original frame from an extended frame at step S820. In this case, the random shift information may include at least one of vertical shift information and horizontal shift information. Furthermore, the random shift information may be included in at least one of a sequence parameter, a picture parameter, a slice header, and Supplemental Enhancement Information (SEI).
The video signal processing apparatus may copy the original frame within the extended frame using the random shift information at step S830.
The video signal processing apparatus may generate a bit stream by encoding the extended frame and the random shift information at step S840. The generated bit stream may be transmitted to another apparatus.
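As an illustration of how the side information of step S840 might be carried, the random shift information could be serialized as two header bytes (the layout below is purely hypothetical; the disclosure only states that the information may be carried in a sequence parameter, picture parameter, slice header, or SEI message, without fixing a syntax):

```python
import struct

def pack_shift_info(sv, sh):
    """Serialize per-frame random shift information (each 0..255) into a
    two-byte header field. Illustrative layout only."""
    return struct.pack("BB", sv, sh)

def unpack_shift_info(payload):
    """Parse the two-byte field back into (Sv, Sh)."""
    return struct.unpack("BB", payload)
```

In a real codec the values would instead be entropy coded as part of the chosen header syntax; the round-trip above only shows that two small integers per frame suffice.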
In another embodiment, the random shift information may be transmitted directly from the frame extension unit 101 of FIG. 1 to the decoder 200 or the frame processing unit 201 of FIG. 2 without being encoded.
FIG. 9 illustrates test images for comparing an image (a) coded using an existing method with an image (b) to which an embodiment of the present invention has been applied.
FIG. 9(a) illustrates an image coded using an existing method, and FIG. 9(b) illustrates an image to which an embodiment of the present invention has been applied. FIG. 9(b) illustrates that four JPEG-coded images have been shifted using random shift information and averaged.
From a comparison between FIG. 9(a) and FIG. 9(b), it may be seen that picture quality of FIG. 9(b) is better than that of FIG. 9(a).
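The experiment of FIG. 9(b) can be approximated in a few lines (a hedged sketch: a keep-only-the-block-mean stand-in codec is used instead of JPEG so the example stays self-contained, and the shifts are fixed rather than random):

```python
import numpy as np

BLOCK = 8

def block_dc_code(img):
    """Stand-in codec: replace each 8x8 block by its mean (DC) value.
    This is not JPEG, but it produces strong blocking artifacts."""
    out = np.empty_like(img, dtype=float)
    for y in range(0, img.shape[0], BLOCK):
        for x in range(0, img.shape[1], BLOCK):
            out[y:y + BLOCK, x:x + BLOCK] = img[y:y + BLOCK, x:x + BLOCK].mean()
    return out

def shift_average(img, shifts):
    """Code the image at several offsets, shift back, and average, as in
    the four-image experiment of FIG. 9(b)."""
    h, w = img.shape
    acc = np.zeros((h, w))
    for sv, sh in shifts:
        padded = np.pad(img, ((sv, BLOCK - sv), (sh, BLOCK - sh)), mode="edge")
        acc += block_dc_code(padded)[sv:sv + h, sh:sh + w]
    return acc / len(shifts)
```

Because each coded copy places its block boundaries at different positions, the boundary discontinuities do not reinforce each other in the average, which is the effect the figure demonstrates.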
FIGS. 10 to 13 are schematic block diagrams of an encoder and a decoder for processing a video signal based on an extended frame using random shift information in accordance with embodiments to which the present invention is applied.
The present invention may be applied to a unit of the encoder and the decoder that requires shift information in a process of encoding or decoding a video signal.
FIG. 10 is a schematic block diagram of an encoder in which encoding is performed on a video signal in accordance with an embodiment to which the present invention is applied.
Referring to FIG. 10, the encoder 100 includes a transform unit 120, a quantization unit 125, a dequantization unit 130, an inverse transform unit 135, a filtering unit 140, a Decoded Picture Buffer (DPB) unit 150, an inter-prediction unit 160, an intra-prediction unit 165, and an entropy encoding unit 170.
The encoder 100 receives a video signal and generates a residual signal by subtracting a prediction signal, output by the inter-prediction unit 160 or the intra-prediction unit 165, from the input video signal. In this case, the video signal includes an extended frame, and the extended frame has been extended by shift information from an original video signal.
The generated residual signal is sent to the transform unit 120, and the transform unit 120 generates a transform coefficient by applying a transform scheme to the residual signal.
The quantization unit 125 quantizes the generated transform coefficient and sends the quantized coefficient to the entropy encoding unit 170. The entropy encoding unit 170 performs entropy coding on the quantized signal and outputs the resulting signal.
In such a compression process, an artifact in which a block boundary becomes visible may occur because neighboring blocks are quantized by different quantization parameters. Such a phenomenon is called a blocking artifact, and it is one of the factors by which people evaluate picture quality.
The filtering unit 140 applies filtering to the reconstructed signal and outputs the filtered signal to a playback device or sends the filtered signal to the DPB unit 150.
The DPB unit 150 may store the filtered frame in order to use the filtered frame as a reference frame in the inter-prediction unit 160.
The inter-prediction unit 160 performs temporal prediction and/or spatial prediction with reference to a reconstructed picture in order to remove temporal redundancy and/or spatial redundancy. In this case, a reference picture used to perform prediction may include a blocking artifact or a ringing artifact because it is a signal that has been quantized or dequantized in a block unit when the reference picture is previously coded or decoded.
The intra-prediction unit 165 predicts a current block with reference to samples that neighbor a block to be now coded.
FIG. 11 illustrates a schematic block diagram of a decoder configured to decode a video signal in an embodiment to which the present invention is applied.
The decoder 200 of FIG. 11 includes an entropy decoding unit 210, a dequantization unit 220, an inverse transform unit 225, a filtering unit 230, a DPB unit 240, an inter-prediction unit 250, and an intra-prediction unit 255.
The decoder 200 receives a signal output by the encoder 100 of FIG. 10. In this case, the output signal may include an extended frame, and may additionally include shift information.
The received signal is subjected to entropy decoding through the entropy decoding unit 210. The dequantization unit 220 obtains a transform coefficient from the entropy-decoded signal using information about a quantization step size. The inverse transform unit 225 obtains a difference signal by inversely transforming the transform coefficient. A reconstructed signal is generated by adding the obtained difference signal to a prediction signal output by the inter-prediction unit 250 or the intra-prediction unit 255.
The filtering unit 230 applies filtering to the reconstructed signal and outputs the filtered signal to a playback device or the DPB unit 240. The filtered signal transmitted by the DPB unit 240 may be used as a reference frame in the inter-prediction unit 250.
FIGS. 12 and 13 illustrate schematic block diagrams of an encoder and a decoder to which an embodiment of the present invention has been applied.
The encoder 100 of FIG. 12 includes a transform unit 110, a quantization unit 120, a dequantization unit 130, an inverse transform unit 140, a buffer 150, a prediction unit 160, and an entropy encoding unit 170. The decoder 200 of FIG. 13 includes an entropy decoding unit 210, a dequantization unit 220, an inverse transform unit 230, a buffer 240, and a prediction unit 250.
The encoder 100 receives a video signal and generates a prediction error by subtracting a predicted signal, output by the prediction unit 160, from the video signal. In this case, the video signal includes an extended frame, and the extended frame has been extended by shift information from an original video signal.
The generated prediction error is transmitted to the transform unit 110. The transform unit 110 generates a transform coefficient by applying a transform scheme to the prediction error.
The quantization unit 120 quantizes the generated transform coefficient and sends the quantized coefficient to the entropy encoding unit 170.
The entropy encoding unit 170 performs entropy coding on the quantized signal and outputs an entropy-coded signal.
Meanwhile, the quantized signal output by the quantization unit 120 may be used to generate a prediction signal. For example, the dequantization unit 130 and the inverse transform unit 140 within the loop of the encoder 100 may perform dequantization and inverse transform on the quantized signal so that the quantized signal is reconstructed into a prediction error. A reconstructed signal may be generated by adding the reconstructed prediction error to a prediction signal output by the prediction unit 160.
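The in-loop reconstruction described above can be sketched as follows (an illustrative example; the transform stage is omitted for brevity, and a single scalar quantization step is assumed):

```python
import numpy as np

def code_block(block, prediction, step):
    """In-loop reconstruction as in FIG. 12: the prediction error is
    quantized, then dequantized inside the encoder loop, so that the
    encoder predicts from the same reconstructed signal the decoder
    will have."""
    error = block - prediction              # prediction error
    q = np.round(error / step)              # quantized levels (entropy coded)
    reconstructed = prediction + q * step   # dequantization + prediction
    return q, reconstructed
```

The reconstructed block, not the original, is what the buffer 150 stores for future prediction; keeping encoder and decoder references identical is the reason the dequantization unit 130 and inverse transform unit 140 sit inside the encoder loop.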
The buffer 150 stores the reconstructed signal for the future reference of the prediction unit 160. The prediction unit 160 generates a prediction signal using a previously reconstructed signal stored in the buffer 150.
The decoder 200 of FIG. 13 receives a signal output by the encoder 100 of FIG. 12. In this case, the output signal may include an extended frame, and may additionally include shift information.
The entropy decoding unit 210 performs entropy decoding on the received signal. The dequantization unit 220 obtains a transform coefficient from the entropy-decoded signal based on information about a quantization step size. The inverse transform unit 230 obtains a prediction error by performing inverse transform on the transform coefficient. A reconstructed signal is generated by adding the obtained prediction error to a prediction signal output by the prediction unit 250.
The buffer 240 stores the reconstructed signal for the future reference of the prediction unit 250. The prediction unit 250 generates a prediction signal using a previously reconstructed signal stored in the buffer 240.
In accordance with the present invention, the visibility of blocking artifacts can be reduced by encoding and decoding the extended frame and shift information. Furthermore, the subjective picture quality of a video signal can be improved by reducing the visibility of blocking artifacts.
As described above, a processing apparatus including the decoder and the encoder to which the present invention is applied may be included in a multimedia broadcasting transmission/reception apparatus, a mobile communication terminal, a home cinema video apparatus, a digital cinema video apparatus, a surveillance camera, a video chatting apparatus, a real-time communication apparatus, such as video communication, a mobile streaming apparatus, a storage medium, a camcorder, a VoD service providing apparatus, an Internet streaming service providing apparatus, a three-dimensional (3D) video apparatus, a teleconference video apparatus, and a medical video apparatus and may be used to process video signals and data signals.
Furthermore, the processing method to which the present invention is applied may be produced in the form of a program to be executed by a computer and may be stored in a computer-readable recording medium. Multimedia data having a data structure according to the present invention may also be stored in computer-readable recording media. The computer-readable recording media include all types of storage devices in which data readable by a computer system is stored. The computer-readable recording media may include a BD, a USB drive, ROM, RAM, CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, for example. Furthermore, the computer-readable recording media include media implemented in the form of carrier waves (e.g., transmission over the Internet). Furthermore, a bit stream generated by the encoding method may be stored in a computer-readable recording medium or may be transmitted over wired/wireless communication networks.
The exemplary embodiments of the present invention have been disclosed for illustrative purposes, and those skilled in the art may improve, change, replace, or augment them with various other embodiments within the technical spirit and scope of the present invention disclosed in the appended claims.

Claims (16)

  1. A method of processing a video signal, comprising:
    receiving a video signal comprising an original frame;
    generating random shift information used to derive a relative position of the original frame;
    copying the original frame within an extended frame using the generated random shift information; and
    encoding the extended frame and the random shift information,
    wherein a boundary of frames included in the video signal varies for each frame based on the random shift information.
  2. The method of claim 1, wherein the random shift information is generated horizontally and/or vertically for each frame.
  3. The method of claim 1, wherein the extended frame is extended by one block size or more in each dimension of the original frame.
  4. The method of claim 1, wherein the random shift information is inserted in a slice header.
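The encoding-side steps of claims 1–4 — extend the frame by at least one block size in each dimension, pick a random horizontal/vertical shift per frame, and copy the original frame into the extended frame at that shift — can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the 16-pixel block size, the uniform offset model, and the edge-replication padding of the margins are all assumptions made for the example.

```python
import numpy as np

def extend_frame(original, block_size=16, rng=None):
    """Copy the original frame into a larger 'extended' frame at a random
    offset, so that block boundaries land at different pixel positions in
    every frame. Returns the extended frame and the (dy, dx) shift that
    would be signaled to the decoder (e.g. in a slice header, per claim 4).
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = original.shape[:2]
    # Extend by one block size in each dimension (claim 3: "one block
    # size or more").
    ext = np.zeros((h + block_size, w + block_size) + original.shape[2:],
                   dtype=original.dtype)
    # Independent random vertical and horizontal shifts (claim 2).
    dy = int(rng.integers(0, block_size + 1))
    dx = int(rng.integers(0, block_size + 1))
    # Copy the original frame within the extended frame (claim 1).
    ext[dy:dy + h, dx:dx + w] = original
    # Fill the margins by replicating edge rows/columns so the extension
    # compresses cheaply -- one plausible choice, not specified above.
    ext[:dy] = ext[dy]
    ext[dy + h:] = ext[dy + h - 1]
    ext[:, :dx] = ext[:, dx:dx + 1]
    ext[:, dx + w:] = ext[:, dx + w - 1:dx + w]
    return ext, (dy, dx)
```

Because the shift is redrawn for each frame, the block grid of the codec falls on different image positions from frame to frame, which is what makes the residual blocking artifacts less visible when the frames are cropped back and displayed.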
  5. A method of processing a video signal, comprising:
    receiving a video signal comprising an extended frame and random shift information;
    decoding the extended frame comprising a target frame and the random shift information; and
    outputting the extended frame and the random shift information,
    wherein the target frame indicates a frame of the original frame size that is cropped from the extended frame based on the random shift information.
  6. The method of claim 5, wherein the random shift information is used to derive a position of the target frame horizontally and/or vertically.
  7. The method of claim 5, wherein the extended frame has been extended by one block size or more in each dimension of the target frame.
  8. The method of claim 5, wherein the random shift information is extracted from a slice header of the video signal.
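Claims 5–8 describe the mirror-image decoder side: after the extended frame and the random shift information are decoded, the target frame is cropped back out at the signaled position. A minimal sketch, under the assumption that the shift is a (vertical, horizontal) pixel offset matching the one the encoder used when copying the original frame in:

```python
import numpy as np

def crop_target_frame(extended, shift, original_size):
    """Recover the target frame (claim 5's frame of the original frame
    size) from the decoded extended frame using the signaled shift.
    """
    dy, dx = shift
    h, w = original_size
    return extended[dy:dy + h, dx:dx + w]

# Usage: a 48x64 target frame embedded at offset (5, 7) inside a
# 64x80 extended frame is recovered exactly by the crop.
frame = np.full((48, 64), 128, dtype=np.uint8)
ext = np.zeros((64, 80), dtype=np.uint8)
ext[5:5 + 48, 7:7 + 64] = frame
target = crop_target_frame(ext, (5, 7), (48, 64))
```

The crop is lossless with respect to the decoded extended frame; only the padded margins, which carried the shifted block grid, are discarded before display.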
  9. An apparatus for processing a video signal, comprising:
    a frame extension unit configured to receive a video signal comprising an original frame, generate random shift information used to derive a relative position of the original frame, and copy the original frame within an extended frame using the generated random shift information; and
    an encoder configured to encode the extended frame and the random shift information,
    wherein a boundary of frames included in the video signal varies for each frame based on the random shift information.
  10. The apparatus of claim 9, wherein the random shift information is generated horizontally and/or vertically for each frame.
  11. The apparatus of claim 9, wherein the extended frame is extended by one block size or more in each dimension of the original frame.
  12. The apparatus of claim 9, wherein the random shift information is inserted in a slice header.
  13. A decoder for decoding a video signal, the decoder configured to:
    receive a video signal comprising an extended frame and random shift information,
    decode the extended frame comprising a target frame and the random shift information, and
    output the extended frame and the random shift information,
    wherein the target frame indicates a frame of the original frame size that is cropped from the extended frame based on the random shift information.
  14. The decoder of claim 13, wherein the random shift information is used to derive a position of the target frame horizontally and/or vertically.
  15. The decoder of claim 13, wherein the extended frame has been extended by one block size or more in each dimension of the target frame.
  16. The decoder of claim 13, wherein the random shift information is extracted from a slice header of the video signal.

Priority Applications (5)

Application Number Priority Date Filing Date Title
US15/107,856 US20160330486A1 (en) 2014-01-01 2014-12-29 Method and apparatus for processing video signal for reducing visibility of blocking artifacts
EP14876473.1A EP3090560A4 (en) 2014-01-01 2014-12-29 Method and apparatus for processing video signal for reducing visibility of blocking artifacts
JP2016544118A JP2017509188A (en) 2014-01-01 2014-12-29 Video signal processing method and apparatus for reducing the visibility of blocking artifacts
CN201480071577.XA CN105874802A (en) 2014-01-01 2014-12-29 Method and apparatus for processing video signal for reducing visibility of blocking artifacts
KR1020167021029A KR20160102075A (en) 2014-01-01 2014-12-29 Method and apparatus for processing video signal for reducing visibility of blocking artifacts

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461922858P 2014-01-01 2014-01-01
US61/922,858 2014-01-01

Publications (1)

Publication Number Publication Date
WO2015102329A1 true WO2015102329A1 (en) 2015-07-09

Family

ID=53493618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/012952 WO2015102329A1 (en) 2014-01-01 2014-12-29 Method and apparatus for processing video signal for reducing visibility of blocking artifacts

Country Status (6)

Country Link
US (1) US20160330486A1 (en)
EP (1) EP3090560A4 (en)
JP (1) JP2017509188A (en)
KR (1) KR20160102075A (en)
CN (1) CN105874802A (en)
WO (1) WO2015102329A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11016212B2 (en) * 2017-04-11 2021-05-25 Saudi Arabian Oil Company Compressing seismic wavefields in three-dimensional reverse time migration
US11656378B2 (en) 2020-06-08 2023-05-23 Saudi Arabian Oil Company Seismic imaging by visco-acoustic reverse time migration

Citations (5)

Publication number Priority date Publication date Assignee Title
US20060034374A1 (en) * 2004-08-13 2006-02-16 Gwang-Hoon Park Method and device for motion estimation and compensation for panorama image
US20060109913A1 (en) * 2001-11-27 2006-05-25 Limin Wang Macroblock level adaptive frame/field coding for digital video content
US20060188012A1 (en) * 2003-03-24 2006-08-24 Tetsujiro Kondo Data encoding apparatus, data encoding method, data output apparatus, data output method, signal processing system, signal processing apparatus, signal processing method, data decoding apparatus, and data decoding method
US20110122950A1 (en) * 2009-11-26 2011-05-26 Ji Tianying Video decoder and method for motion compensation for out-of-boundary pixels
EP2533537A1 (en) * 2011-06-10 2012-12-12 Panasonic Corporation Transmission of picture size for image or video coding

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
JPH05130585A (en) * 1991-11-06 1993-05-25 Matsushita Electric Ind Co Ltd Encoding device
JP2921720B2 (en) * 1992-02-17 1999-07-19 松下電器産業株式会社 Video signal encoding device by encoding
JPH05347755A (en) * 1992-06-16 1993-12-27 Matsushita Electric Ind Co Ltd Animation image coding device
JPH0646398A (en) * 1992-07-24 1994-02-18 Kubota Corp Picture data compression method
JPH07123271A (en) * 1993-10-25 1995-05-12 Sony Corp Method for compressing and expansting picture data
JPH08130735A (en) * 1994-10-31 1996-05-21 Matsushita Electric Ind Co Ltd Picture data encoding and decoding device
JPH08149469A (en) * 1994-11-21 1996-06-07 Tec Corp Moving image coder
JP2006013766A (en) * 2004-06-24 2006-01-12 Matsushita Electric Ind Co Ltd Method and device for recording, and method and device for reproducing moving image
US7965900B2 (en) * 2007-09-26 2011-06-21 Hewlett-Packard Development Company, L.P. Processing an input image to reduce compression-related artifacts
US8285068B2 (en) * 2008-06-25 2012-10-09 Cisco Technology, Inc. Combined deblocking and denoising filter
CN102823243B (en) * 2010-02-02 2017-05-31 汤姆森特许公司 The method and apparatus for reducing vector quantization error are moved by dough sheet
BR112012025267A2 (en) * 2010-04-06 2019-09-24 Koninklijke Philps Electronics N V '' method of processing a three-dimensional video signal [3d], 3d video device for processing a three-dimensional video signal [3d], 3d video signal, registration vehicle and computer program product for processing a video signal three-dimensional [3d] ''
WO2012175195A1 (en) * 2011-06-20 2012-12-27 Panasonic Corporation Simplified pipeline for filtering
US10075737B2 (en) * 2011-08-26 2018-09-11 Qualcomm Incorporated Method and apparatus for shift DCT-based sharpening of a video image
US9058656B2 (en) * 2012-01-23 2015-06-16 Eiffel Medtech Inc. Image restoration system and method
EP2712542B1 (en) * 2012-09-28 2017-04-12 Ruprecht-Karls-Universität Heidelberg Structured illumination ophthalmoscope


Non-Patent Citations (3)

Title
MOHAMMED EBRAHIM AL-MUALLA; C. NISHAN CANAGARAJAH; DAVID R. BULL: "Video Coding for Mobile Communications", 2002, ACADEMIC PRESS
See also references of EP3090560A4 *
THOMAS WIEGAND; GARY J. SULLIVAN: "Overview of the H.264/AVC video coding standard", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 13, no. 7, July 2003 (2003-07-01), XP008129745, DOI: 10.1109/TCSVT.2003.815165

Also Published As

Publication number Publication date
JP2017509188A (en) 2017-03-30
KR20160102075A (en) 2016-08-26
US20160330486A1 (en) 2016-11-10
EP3090560A4 (en) 2017-08-23
CN105874802A (en) 2016-08-17
EP3090560A1 (en) 2016-11-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14876473; Country of ref document: EP; Kind code of ref document: A1)
REEP Request for entry into the european phase (Ref document number: 2014876473; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2014876473; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 15107856; Country of ref document: US)
ENP Entry into the national phase (Ref document number: 2016544118; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20167021029; Country of ref document: KR; Kind code of ref document: A)