US20180278952A1 - Method and device for encoding and decoding image - Google Patents

Method and device for encoding and decoding image Download PDF

Info

Publication number
US20180278952A1
US20180278952A1 (application US15/537,718)
Authority
US
United States
Prior art keywords
pixel
unit
resolutions
motion vector
integer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/537,718
Other languages
English (en)
Inventor
Jong Ki HAN
Jae Yung LEE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dolby Laboratories Licensing Corp
Original Assignee
Intellectual Discovery Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intellectual Discovery Co Ltd filed Critical Intellectual Discovery Co Ltd
Assigned to INTELLECTUAL DISCOVERY CO., LTD. reassignment INTELLECTUAL DISCOVERY CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HAN, JONG KI, LEE, JAE YUNG
Publication of US20180278952A1 publication Critical patent/US20180278952A1/en
Assigned to DOLBY LABORATORIES LICENSING CORPORATION reassignment DOLBY LABORATORIES LICENSING CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTELLECTUAL DISCOVERY CO., LTD.
Abandoned legal-status Critical Current

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/172: the region being a picture, frame or field
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/184: the unit being bits, e.g. of the compressed video stream
    • H04N19/20: using video object coding
    • H04N19/27: involving both synthetic and natural picture components, e.g. synthetic natural hybrid coding [SNHC]
    • H04N19/50: using predictive coding
    • H04N19/503: involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: Processing of motion vectors by predictive encoding
    • H04N19/523: Motion estimation or motion compensation with sub-pixel accuracy
    • H04N19/53: Multi-resolution motion estimation; Hierarchical motion estimation
    • H04N19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/60: using transform coding
    • H04N19/61: using transform coding in combination with predictive coding
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • The present invention generally relates to a video encoding/decoding method and apparatus and, more particularly, to a video encoding/decoding method and apparatus that can more efficiently process artificially created images, such as screen content.
  • Unlike typical natural images, screen content has a limited range of color-difference signals, relatively low noise, and high color saturation, and thus has characteristics differing from those of natural images. Meanwhile, “screen content” as defined in the present invention covers not only video composed entirely of screen content but also video in which screen content is combined with typical natural images.
  • Examples of such screen content may include e-Learning content, game broadcasting content or home-shopping content.
  • In the case of e-Learning content, for example, a user interface in which educational material is presented only as text, or a user interface into which a natural image is inserted in the form of a frame, may be considered; both forms are examples of screen content.
  • Such screen content is characterized in that, unlike natural images, the boundary of an object can be clearly identified at each pixel. That is, unlike in typical natural images, there is only a slim possibility that object motion will occur in sub-pixel units.
  • Korean Patent Application Publication No. 10-2013-0078569 (entitled “Region of Interest based Screen Contents Quality Improving Video Encoding/Decoding Method and Apparatus Thereof”) discloses a method for determining a major Region of Interest (RoI) in consideration of the characteristics of input screen content video, reflecting the major ROI into a video encoding process, and allocating more information to the major ROI, thus improving subjective video quality.
  • RoI: Region of Interest
  • JCT-VC: Joint Collaborative Team on Video Coding
  • MPEG: Moving Picture Experts Group
  • VCEG: Video Coding Experts Group
  • SCC: Screen Content Coding
  • The present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a video encoding/decoding method and apparatus that adaptively set the resolutions of motion vectors depending on whether screen content video is included.
  • A video encoding method includes generating header information that includes information about the resolutions of motion vectors of respective blocks, determined based on motion prediction for a unit image.
  • Here, the header information includes flag information indicating whether the resolutions of all motion vectors included in the unit image are integer-pixel resolutions.
  • A video decoding method includes extracting information about the resolutions of motion vectors of each unit image from header information included in a target bitstream to be decoded, and decoding the unit image based on the resolution information.
  • Here, the header information includes flag information indicating whether the resolutions of all motion vectors included in the unit image are integer-pixel resolutions.
  • A video encoding apparatus includes a bitstream generation unit for including information about the resolutions of motion vectors of respective blocks, determined based on motion prediction for a unit image, in header information of a bitstream.
  • Here, the header information includes flag information indicating whether the resolutions of all motion vectors included in the unit image are integer-pixel resolutions.
  • A video decoding apparatus includes a parsing unit for extracting information about the resolutions of motion vectors of each unit image from header information included in a target bitstream to be decoded, and a decoding unit for decoding the unit image based on the resolution information, wherein the header information includes flag information indicating whether the resolutions of all motion vectors included in the unit image are integer-pixel resolutions.
  • According to the present invention, the resolutions of motion vectors may be set to integer-pixel resolutions for an image including screen content, thus improving the efficiency of the video encoding and decoding processes.
  • FIG. 1 is a diagram showing the characteristics of screen content proposed in the present invention.
  • FIG. 2 is a block diagram showing the configuration of a video encoding apparatus according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing motion vector encoding applied to the embodiment of the present invention.
  • FIG. 4 is a diagram showing a syntax structure used in the video encoding apparatus according to an embodiment of the present invention.
  • FIG. 5 is a diagram showing a method for processing screen content in the video encoding apparatus according to an embodiment of the present invention.
  • FIG. 6 is a diagram showing a syntax structure used in the video encoding apparatus according to an embodiment of the present invention.
  • FIG. 7 is a diagram showing a motion vector prediction procedure using a motion vector of the current block and motion vectors of neighboring blocks in a motion vector encoding process according to an embodiment of the present invention.
  • FIG. 8 is a diagram showing the detailed configuration of the video encoding apparatus according to the embodiment of the present invention.
  • FIG. 9 is a block diagram showing the configuration of a video decoding apparatus according to an embodiment of the present invention.
  • FIG. 10 is a diagram showing the detailed configuration of the video decoding apparatus according to the embodiment of the present invention.
  • A representation indicating that a first component is “connected” to a second component includes the case where the first component is electrically connected to the second component with some other component interposed therebetween, as well as the case where the first component is “directly connected” to the second component.
  • The element units described in the embodiments of the present invention are shown independently in order to indicate different characteristic functions, but this does not mean that each element unit is formed of a separate piece of hardware or software. That is, the element units are arranged and included for convenience of description; at least two of them may be combined into one element unit, or one element unit may be divided into a plurality of element units that perform their own functions. Embodiments in which the element units are integrated and embodiments in which they are separated are included in the scope of the present invention, as long as they do not depart from the essence of the present invention.
  • FIG. 1 is a diagram showing the characteristics of screen content proposed in the present invention.
  • As shown in FIG. 1, screen content is characterized in that, unlike a natural image, the boundary of an object can be clearly identified at each pixel. That is, the boundary of an object in the screen content image shown on the left side of FIG. 1 is clearly identified at each integer pixel, whereas in the typical natural image shown on the right side there is a strong possibility that object motion will occur in sub-pixel units.
  • the present invention is intended to utilize such screen content characteristics for encoding/decoding processes.
  • FIG. 2 is a block diagram showing the configuration of a video encoding apparatus according to an embodiment of the present invention.
  • Referring to FIG. 2, a video encoding apparatus 100 includes a motion prediction unit 110 for performing motion prediction, a motion vector encoding unit 120 for encoding motion vectors, and a bitstream generation unit 130 for generating a bitstream that includes information about the resolutions of motion vectors for respective blocks.
  • The motion prediction unit 110 searches reference pictures for the predicted block most similar to the current encoding target block.
  • Depending on the characteristics of the image, motion prediction may be performed on a per-integer-pixel basis.
  • In this case, the resolution of the motion vector indicating the predicted block selected based on motion prediction is also determined on a per-integer-pixel basis.
  • In general, motion prediction is performed on a per-sub-pixel basis, such as a half-pixel or 1/4-pixel unit, as well as on an integer-pixel basis.
  • For screen content, however, motion prediction may be performed on a per-integer-pixel basis, and thus encoding efficiency may be improved.
  • Such motion prediction is performed on a unit image, where a unit image may be a slice-based, picture-based, or sequence-based unit image.
  • The motion vector encoding unit 120 encodes the motion vector of each block determined by the motion prediction unit 110.
  • That is, a predicted motion vector (PMV) for the target block to be encoded is generated using information about the motion vectors of neighboring blocks, and the difference value between the predicted motion vector and the motion vector of the target block to be currently encoded is encoded.
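  • As a simple illustration of the differential coding just described, the following Python sketch assumes the predicted motion vector is formed as a component-wise median of three neighboring motion vectors; the actual predictor construction used by the encoder may differ, so the helper names and the median rule are illustrative assumptions only.

```python
def median(a, b, c):
    # Middle value of three numbers
    return sorted((a, b, c))[1]

def predicted_mv(neighbors):
    # Component-wise median of three neighboring motion vectors
    # (an assumption; other predictor constructions are possible).
    xs, ys = zip(*neighbors)
    return (median(*xs), median(*ys))

def encode_mvd(current_mv, neighbors):
    # Only the difference between the current MV and the predictor is coded.
    pmv = predicted_mv(neighbors)
    return (current_mv[0] - pmv[0], current_mv[1] - pmv[1])

def decode_mv(mvd, neighbors):
    # The decoder rebuilds the same predictor and adds the difference back.
    pmv = predicted_mv(neighbors)
    return (mvd[0] + pmv[0], mvd[1] + pmv[1])

neighbors = [(4, 0), (3, 1), (5, 2)]
mv = (7, 3)
mvd = encode_mvd(mv, neighbors)      # (3, 2): only this difference is signalled
assert decode_mv(mvd, neighbors) == mv
```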
  • When a low resolution, such as an integer-pixel unit, is applied, the number of bits used to encode motion vectors may be reduced. In contrast, when a sub-pixel resolution, such as a half-pixel, 1/4-pixel, or 1/8-pixel unit, is applied, the number of bits required to encode motion vectors increases. This will be described in greater detail with reference to the attached drawings.
  • FIG. 3 is a diagram showing motion vector encoding to be applied to the embodiment of the present invention.
  • the motion vector encoding unit 120 may use an encoding scheme such as a first-order exponential-Golomb code in order to encode a differential motion vector.
  • FIG. 3(a) illustrates an example of a codebook required to encode a motion vector to which a 1/4-pixel resolution is applied, and FIG. 3(b) illustrates an example of a codebook required to encode a motion vector to which an integer-pixel resolution is applied.
  • For example, when a 1/4-pixel resolution is applied and the differential motion vector is (3, 2), a bitstream of ‘000011000’ having a code number of ‘23’ is used to encode ‘3’, and a bitstream of ‘000010000’ having a code number of ‘15’ is used to encode ‘2’.
  • In contrast, when an integer-pixel resolution is applied and the differential motion vector is (3, 2), a bitstream having a code number of ‘5’ and a bitstream having a code number of ‘3’ are used, thus greatly improving encoding efficiency.
  • The reason that long codewords are used to encode a motion vector of small magnitude is that codewords for motion vectors having 1/2-pixel and 1/4-pixel resolutions must be used together with codewords for motion vectors having an integer-pixel resolution.
  • In the present invention, by contrast, the motion vector for screen content is determined on a per-integer-pixel basis, and thus encoding efficiency may be improved.
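  • As an illustration of the codeword lengths discussed above, the sketch below assumes a zero-order exponential-Golomb code with the usual signed mapping (a positive value v maps to code number 2v-1); under that assumption it reproduces the code numbers 23/15 and 5/3 cited for the differential motion vector (3, 2). The patent itself refers to a first-order exponential-Golomb scheme, so this is illustrative only.

```python
def signed_code_num(v):
    # Signed-to-unsigned mapping commonly used for MVD coding:
    # 0 -> 0, +v -> 2v - 1, -v -> 2|v|
    return 2 * v - 1 if v > 0 else -2 * v

def exp_golomb0(code_num):
    # Zero-order exp-Golomb codeword for a non-negative code number
    value = code_num + 1
    prefix = '0' * (value.bit_length() - 1)
    return prefix + format(value, 'b')

def mvd_codeword(component_in_pixels, integer_pel):
    # Integer-pel resolution codes the pixel value directly;
    # quarter-pel resolution first scales the pixel-unit value by 4.
    v = component_in_pixels if integer_pel else component_in_pixels * 4
    return exp_golomb0(signed_code_num(v))

mvd = (3, 2)  # differential motion vector in pixel units
for integer_pel in (False, True):
    words = [mvd_codeword(c, integer_pel) for c in mvd]
    label = 'integer-pel' if integer_pel else 'quarter-pel'
    print(label, words, sum(len(w) for w in words), 'bits')
# quarter-pel ['000011000', '000010000'] 18 bits  (code numbers 23 and 15)
# integer-pel ['00110', '00100'] 10 bits          (code numbers 5 and 3)
```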
  • The bitstream generation unit 130 may generate a bitstream from data, which is output through inter-prediction, intra-prediction, frequency transform, quantization, and entropy coding procedures, based on a syntax structure set according to the video compression standard.
  • In particular, the bitstream generation unit 130 generates the bitstream such that information about the resolution of the motion vector of each block, determined based on motion prediction, is included in the header information of the bitstream.
  • Here, the header information includes flag information indicating whether the resolutions of all motion vectors included in a unit image are integer-pixel units. For example, when the unit image is screen content, all of the motion vectors of the image are in integer-pixel units, and thus a flag indicating this state is generated.
  • A video decoding unit receiving such a bitstream performs decoding on a per-integer-pixel basis in response to the corresponding flag, thus improving decoding efficiency.
  • Further, the bitstream generation unit 130 generates the bitstream such that information about the motion vectors encoded by the motion vector encoding unit 120 is included in the bitstream.
  • FIG. 4 is a diagram showing a syntax structure used by the video encoding apparatus according to an embodiment of the present invention.
  • FIG. 4 illustrates an embodiment in which motion prediction is performed on a slice-based unit image and the resolution information of the motion vectors is recorded in a slice header; this is merely an example given for convenience of description.
  • The slice header includes the flag Integer_MV_Resolution_flag, which indicates whether the resolution of the motion vectors is an integer-pixel unit. For example, when the value of the flag is set to ‘1’, the resolution of the motion vectors of the unit image is an integer-pixel unit.
  • The slice header may also include the flag SCC_AMVR_Enable_flag, which indicates whether the resolution of each motion vector of the unit image is changeable.
  • More generally, information about the resolutions of the motion vectors of the blocks in a unit image may be included in a slice header, a sequence header, or a picture header.
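  • A minimal sketch of how the two flags described above might be carried in a slice header is shown below; the flag names Integer_MV_Resolution_flag and SCC_AMVR_Enable_flag come from the patent, but the bit order, the surrounding syntax, and the SliceHeader container are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class SliceHeader:
    integer_mv_resolution_flag: int  # 1: all MVs in the unit image are integer-pel
    scc_amvr_enable_flag: int        # 1: per-block MV resolution may change

def write_slice_header(hdr, bits):
    # Append the two single-bit flags to an output bit list.
    bits.append(hdr.integer_mv_resolution_flag & 1)
    bits.append(hdr.scc_amvr_enable_flag & 1)
    return bits

def parse_slice_header(bits, pos=0):
    # Read the two flags back in the same (assumed) order.
    hdr = SliceHeader(integer_mv_resolution_flag=bits[pos],
                      scc_amvr_enable_flag=bits[pos + 1])
    return hdr, pos + 2

# Example: a slice consisting entirely of screen content
encoded = write_slice_header(SliceHeader(1, 0), [])
decoded, _ = parse_slice_header(encoded)
assert decoded.integer_mv_resolution_flag == 1
```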
  • the present invention proposes not only technology for, when all blocks to be encoded in a unit image correspond to a screen content image, encoding and transmitting information related to the resolutions of motion vectors of all blocks, but also a method and apparatus for, when a screen content region and a non-screen content region are included together in a unit image, transmitting header information, with information about those regions being included in the header information.
  • FIG. 5 is a diagram showing a method for processing screen content in the video encoding apparatus according to an embodiment of the present invention.
  • As shown in FIG. 5, the case where a unit image is composed mostly of screen content and one or more non-screen content regions 50 and 52 are included in the unit image may be considered.
  • In this case, the bitstream generation unit 130 includes information about the number of non-screen content regions and the positions of the non-screen content regions in the header information.
  • In the example of FIG. 5, the number of non-screen content regions is 2.
  • The position information of the non-screen content regions may be specified by the index of a Coding Tree Unit (CTU) block.
  • The use of a CTU block is only an example given for convenience of description, and the corresponding regions may be specified by various types of block indexes usable in a video compression procedure.
  • Alternatively, in accordance with embodiments, the corresponding region may be specified by the coordinates of a start pixel of a start block and the coordinates of an end pixel of an end block.
  • FIG. 6 is a diagram showing a syntax structure used in the video encoding apparatus according to the embodiment of the present invention.
  • As shown in FIG. 6, information about the number of non-screen content regions (NumNonScreenContentsRegion) may be included in the header information.
  • For each non-screen content region, a start block index (start_nsc_idx[i]) and an end block index (end_nsc_idx[i]) may also be included.
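  • The region-signalling syntax of FIG. 6 could be sketched as follows; the element names NumNonScreenContentsRegion, start_nsc_idx[i], and end_nsc_idx[i] are taken from the patent, while the container class and the use of plain integers for the indexes are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NonScreenContentRegions:
    # Number of non-screen content regions in the unit image
    num_non_screen_contents_region: int = 0
    # CTU index of the first (start) and last (end) block of each region
    start_nsc_idx: List[int] = field(default_factory=list)
    end_nsc_idx: List[int] = field(default_factory=list)

# Example for FIG. 5, where two non-screen content regions are present;
# the CTU indexes used here are arbitrary placeholders.
regions = NonScreenContentRegions(
    num_non_screen_contents_region=2,
    start_nsc_idx=[14, 62],
    end_nsc_idx=[27, 75],
)
assert len(regions.start_nsc_idx) == regions.num_non_screen_contents_region
```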
  • The coordinate values of the upper-left vertex of each non-screen content region may be calculated from the start block index, and the coordinate values of the lower-right vertex may be calculated from the end block index.
  • the horizontal coordinate value (start_nsc_point_x[i]) and the vertical coordinate value (start_nsc_point_y[i]) of the upper-left vertex may be individually calculated using the following Equation 1:
  • start_nsc_point_y[i] = (start_nsc_idx[i] / PicWidthInCtbsY) << Log2CtbSize
  • start_nsc_point_x[i] = (start_nsc_idx[i] % PicWidthInCtbsY) << Log2CtbSize   [Equation 1]
  • Here, PicWidthInCtbsY denotes the value obtained by dividing the horizontal length of the picture by the length of one side of a CTU and rounding the result up, and the left shift by Log2CtbSize corresponds to multiplication by the CTU size. That is, the remainder of dividing the start block index by PicWidthInCtbsY, left-shifted by Log2CtbSize, gives the horizontal coordinate value (start_nsc_point_x[i]), and the quotient of the same division, left-shifted by Log2CtbSize, gives the vertical coordinate value (start_nsc_point_y[i]).
  • The horizontal coordinate value (end_nsc_point_x[i]) and the vertical coordinate value (end_nsc_point_y[i]) of the lower-right vertex may be individually calculated using the following Equation 2:
  • end_nsc_point_y[i] = CtbSize + ((end_nsc_idx[i] / PicWidthInCtbsY) << Log2CtbSize)
  • end_nsc_point_x[i] = CtbSize + ((end_nsc_idx[i] % PicWidthInCtbsY) << Log2CtbSize)   [Equation 2]
  • In Equation 2, PicWidthInCtbsY likewise denotes the value obtained by dividing the horizontal length of the picture by the length of one side of a CTU and rounding the result up. That is, the remainder of dividing the end block index by PicWidthInCtbsY, left-shifted by Log2CtbSize and added to the length of one side of the CTU, gives the horizontal coordinate value (end_nsc_point_x[i]).
  • Likewise, the quotient of dividing the end block index by PicWidthInCtbsY, left-shifted by Log2CtbSize and added to the length of one side of the CTU, gives the vertical coordinate value (end_nsc_point_y[i]).
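  • A small sketch of Equations 1 and 2, converting a CTU index into pixel coordinates, might look as follows; it assumes PicWidthInCtbsY has already been derived by dividing the picture width by the CTU size and rounding up, and that division is integer division.

```python
def ctu_index_to_top_left(idx, pic_width_in_ctbs_y, log2_ctb_size):
    # Equation 1: upper-left vertex of the CTU with the given raster index
    x = (idx % pic_width_in_ctbs_y) << log2_ctb_size
    y = (idx // pic_width_in_ctbs_y) << log2_ctb_size
    return x, y

def ctu_index_to_bottom_right(idx, pic_width_in_ctbs_y, log2_ctb_size):
    # Equation 2: lower-right vertex, offset by the CTU size from the
    # upper-left vertex of the end block
    ctb_size = 1 << log2_ctb_size
    x, y = ctu_index_to_top_left(idx, pic_width_in_ctbs_y, log2_ctb_size)
    return x + ctb_size, y + ctb_size

# 1920-pixel-wide picture with 64x64 CTUs: PicWidthInCtbsY = ceil(1920/64) = 30
print(ctu_index_to_top_left(62, 30, 6))      # (128, 128)
print(ctu_index_to_bottom_right(75, 30, 6))  # (1024, 192)
```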
  • In FIGS. 5 and 6, the description has chiefly been made based on the case where a non-screen content region is included in screen content, but the present invention may also be applied to the opposite case, that is, the case where one or more screen content regions are included in a non-screen content region.
  • In that case, the bitstream generation unit 130 includes information about the number of screen content regions and the positions of the screen content regions in the header information.
  • Meanwhile, a procedure for scaling the motion vector extracted from a neighboring block as a prediction vector, depending on the resolution of each motion vector, may be performed.
  • FIG. 7 is a diagram showing a motion vector scaling procedure performed during a motion vector encoding process according to an embodiment of the present invention.
  • FIG. 7(a) illustrates the case where the target block to be encoded is not screen content and the blocks neighboring the target block are screen content.
  • In this case, the present invention scales the motion vectors of the neighboring blocks with respect to the resolution of the motion vector of the target block to be encoded.
  • That is, the motion vectors of the neighboring blocks are scaled to sub-pixel units, and a differential motion vector is calculated based on the scaled motion vectors.
  • For example, when the motion vector of the target block is in a 1/4-pixel unit, the motion vectors of the neighboring blocks are converted into the form of 4n/4-pixel units. That is, when the motion vector of a neighboring block is 1, it is scaled to 4/4, and when it is 2, it is scaled to 8/4, and the scaled values are then used.
  • FIG. 7(b) illustrates the case where the target block to be encoded is screen content and the blocks neighboring the target block are not screen content.
  • In this case, the present invention scales the motion vectors of the neighboring blocks based on the resolution of the motion vector of the target block to be encoded.
  • That is, the motion vectors of the neighboring blocks, which are represented in sub-pixel units, are mapped to values in integer-pixel units depending on the values of the motion vectors.
  • For example, each motion vector may be mapped to the integer quotient of its sub-pixel-unit value.
  • That is, when the motion vectors of the neighboring blocks are in 1/4-pixel units, if the quotient of a sub-pixel-unit value is less than 1 (e.g., 0, 1/4, 2/4, 3/4), the corresponding motion vector is mapped to 0, whereas if the quotient is 1 (e.g., 4/4, 5/4, 6/4, 7/4), the corresponding motion vector is mapped to 1.
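  • A sketch of the scaling described for FIG. 7 is given below, assuming quarter-pel sub-pixel resolution; motion vector components are held as integers counted in their own unit (whole pixels or quarter-pels), and the mapping to integer units simply takes the quotient, as in the example above. Negative components are not handled in this simplified sketch.

```python
def scale_neighbor_mv(mv, neighbor_is_integer_pel, target_is_integer_pel, denom=4):
    # Scale a neighboring block's motion vector component to the
    # resolution of the target block before computing the differential MV.
    if neighbor_is_integer_pel == target_is_integer_pel:
        return mv          # same resolution: no scaling needed
    if neighbor_is_integer_pel:
        return mv * denom  # FIG. 7(a): integer-pel -> quarter-pel (1 -> 4/4, 2 -> 8/4)
    return mv // denom     # FIG. 7(b): quarter-pel -> integer-pel (0..3/4 -> 0, 4/4..7/4 -> 1)

# FIG. 7(a): target uses quarter-pel, neighbor uses integer-pel
assert scale_neighbor_mv(2, True, False) == 8   # 2 pixels -> 8/4
# FIG. 7(b): target uses integer-pel, neighbor uses quarter-pel
assert scale_neighbor_mv(6, False, True) == 1   # 6/4 -> quotient 1
```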
  • In this way, motion vector encoding may be performed even when a screen content region and a non-screen content region are included together.
  • FIG. 8 is a diagram showing the detailed configuration of the video encoding apparatus according to the embodiment of the present invention.
  • the video encoding apparatus 100 may further include an intra-prediction unit for performing intra prediction on the current frame, a motion prediction unit for searching for a predicted block most similar to the target block to be currently encoded from reference frames in an inter-prediction procedure, and a motion compensation unit for performing motion compensation based on the motion vector of the optimal predicted block found via motion prediction. Further, data, output from the intra-prediction unit, the motion prediction unit, and the motion compensation unit, is output in the form of a bitstream after passing through a transform unit, a quantization unit, and an entropy encoding unit.
  • Quantized transform coefficients, obtained via frequency transform and quantization steps, are reconstructed into spatial-domain data while passing through an inverse quantization unit and an inverse transform unit, and the reconstructed spatial-domain data is output as a reference frame after passing through a deblocking unit and an offset adjustment unit.
  • Such a video encoding algorithm corresponds to conventional technology, and thus a detailed description thereof will be omitted.
  • FIG. 9 is a block diagram showing the configuration of a video decoding apparatus according to an embodiment of the present invention.
  • a video decoding apparatus 200 includes a parsing unit 210 for receiving and parsing a bitstream and a decoding unit 220 for reconstructing an image based on parsed data.
  • the parsing unit 210 extracts information about the resolutions of motion vectors of each unit image from header information included in a target bitstream to be decoded.
  • the bitstream output from the video encoding apparatus 100 includes flag information indicating whether the resolutions of all motion vectors included in the unit image are integer-pixel resolutions.
  • When non-screen content regions are included in the unit image, the bitstream may include information about the number and positions of those regions; likewise, when screen content regions are included in a non-screen content image, the bitstream may include information about the number and positions of the corresponding regions.
  • the parsing unit 210 extracts such information by parsing the bitstream.
  • The decoding unit 220 may reconstruct a target block based on the motion vectors and on the difference signal, included in the bitstream, between the predicted block and the original block. That is, the predicted block is determined via a motion compensation procedure based on the motion vectors, and the original block may be reconstructed by adding the predicted block to the difference signal. Further, the decoding unit 220 decodes the difference vector between the motion vector of the current block included in the bitstream and the predicted motion vector.
  • the decoding unit 220 decodes a unit image based on resolution information extracted by the parsing unit 210 . For example, based on the flag information indicating that the resolutions of all motion vectors included in each unit image are based on integer-pixel resolutions, decoding is performed on the corresponding unit image on a per-integer-pixel basis.
  • When a screen content region and a non-screen content region are included together in the unit image, the header information may include information about the number and positions of those regions.
  • Based on such header information, the decoding unit 220 performs decoding on screen content regions on a per-integer-pixel basis, and performs decoding on the remaining regions in consideration of sub-pixel units as well.
  • Such header information may include the index information of a start block and the index information of an end block among the blocks whose resolutions are not integer-pixel resolutions.
  • the decoding unit 220 may calculate the coordinate values of the upper-left vertex of the start block and the coordinate values of the lower-right vertex of the end block, based on both the index information of the start block and the index information of the end block. That is, as described above, respective coordinate values may be calculated using the above-described Equations 1 and 2.
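  • The following sketch illustrates how a decoder might use the signalled regions to choose the motion-vector resolution of a block, combining the rectangle reconstruction of Equations 1 and 2 with the flag-based behaviour described above; the helper names and the simple rectangle test are assumptions for illustration.

```python
def ctu_top_left(idx, w_ctbs, log2_ctb):
    # Upper-left vertex of a CTU given its raster index (Equation 1)
    return ((idx % w_ctbs) << log2_ctb, (idx // w_ctbs) << log2_ctb)

def region_rect(start_idx, end_idx, w_ctbs, log2_ctb):
    # Rectangle of a signalled region: Equation 1 for the upper-left vertex
    # of the start block, Equation 2 for the lower-right vertex of the end
    # block (top-left of the end block plus the CTU size).
    ctb = 1 << log2_ctb
    x0, y0 = ctu_top_left(start_idx, w_ctbs, log2_ctb)
    ex, ey = ctu_top_left(end_idx, w_ctbs, log2_ctb)
    return x0, y0, ex + ctb, ey + ctb

def block_uses_integer_mv(x, y, non_sc_rects):
    # Blocks inside a non-screen content region may use sub-pel motion
    # vectors; all remaining (screen content) blocks use integer-pel MVs.
    return not any(x0 <= x < x1 and y0 <= y < y1 for x0, y0, x1, y1 in non_sc_rects)

rects = [region_rect(62, 75, 30, 6)]           # example region, 30-CTU-wide picture
print(block_uses_integer_mv(200, 150, rects))  # False: inside the non-SC region
print(block_uses_integer_mv(0, 0, rects))      # True: screen content region
```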
  • In the opposite case, the header information may include information about the number and positions of the regions. Further, such header information may include the index information of a start block and the index information of an end block among the blocks in regions having integer-pixel resolutions. Furthermore, according to embodiments, in addition to block indexes, information about the coordinates of a start pixel of a start block and the coordinates of an end pixel of an end block may be included in the header information.
  • the decoding unit 220 may perform a procedure for scaling prediction vectors for motion vectors extracted from neighboring blocks depending on the resolutions of respective motion vectors. That is, as described above with reference to FIG. 7 , when the resolutions of motion vectors for the target block to be decoded and neighboring blocks thereof are different from each other, there is a need to perform scaling, which matches the units of resolutions when obtaining a differential motion vector.
  • When the target block to be decoded uses a sub-pixel resolution and its neighboring blocks use an integer-pixel resolution, the motion vectors of the neighboring blocks are scaled with respect to the resolution of the target block to be decoded. That is, the motion vectors of the neighboring blocks are scaled to sub-pixel units, and a differential motion vector is calculated based on the scaled motion vectors. For example, when the motion vector of the target block to be decoded is in a 1/4-pixel unit, the motion vectors of the neighboring blocks are converted into the form of 4n/4-pixel units.
  • That is, when the motion vector of a neighboring block is 1, it is scaled to 4/4, and when it is 2, it is scaled to 8/4, and the scaled values are then used.
  • Conversely, when the target block to be decoded uses an integer-pixel resolution and its neighboring blocks use sub-pixel resolutions, the motion vectors of the neighboring blocks are likewise scaled with respect to the resolution of the target block to be decoded. That is, the motion vectors of the neighboring blocks, which are represented in sub-pixel units, are mapped to values in integer-pixel units depending on the values of the motion vectors.
  • For example, each motion vector may be mapped to the integer quotient of its sub-pixel-unit value.
  • That is, when the motion vectors of the neighboring blocks are in 1/4-pixel units, if the quotient of a sub-pixel-unit value is less than 1 (e.g., 0, 1/4, 2/4, 3/4), the corresponding motion vector is mapped to 0, whereas if the quotient is 1 (e.g., 4/4, 5/4, 6/4, 7/4), the corresponding motion vector is mapped to 1.
  • In this way, motion vector decoding may be performed even when a screen content region and a non-screen content region are included together.
  • FIG. 10 is a diagram showing the detailed configuration of the video decoding apparatus according to the embodiment of the present invention.
  • the video decoding apparatus 200 includes a parsing unit for receiving and parsing a bitstream and outputting encoded image data and various types of information required for decoding. Further, the encoded image data is output as inversely quantized data while passing through an entropy decoding unit and an inverse quantization unit, and is then reconstructed into spatial domain image data while passing through an inverse transform unit.
  • An intra-prediction unit performs intra prediction on spatial domain image data for each encoding unit in an intra mode
  • a motion compensation unit performs motion compensation for each encoding unit in an inter mode using a reference frame.
  • Spatial domain data obtained after passing through the intra-prediction unit and the motion compensation unit, is post-processed while passing through a deblocking unit and an offset adjustment unit, and then a reconstructed frame is output. Further, the data, which is post-processed through the deblocking unit and the offset adjustment unit, may be output as a reference frame.
  • Such a video coding algorithm corresponds to the conventional technology, and thus a detailed description thereof will be omitted.
  • the components shown in FIG. 2 or 9 may denote software components, or hardware components such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), and may perform predetermined functions.
  • the components included in embodiments of the present invention are not limited to software or hardware, and may be configured to be stored in addressable storage media and to execute on one or more processors.
  • the components may include components such as software components, object-oriented software components, class components, and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • the components and functionality provided in the corresponding components may be combined into fewer components, or may be further separated into additional components.
  • the video encoding apparatus and the video decoding apparatus may be any of a Personal Computer (PC), a notebook computer, a Personal Digital Assistant (PDA), a portable Multimedia Player (PMP), a PlayStation Portable (PSP), a mobile communication terminal, a smart phone, a tablet PC, etc., and may denote various types of devices, each including a communication device such as a communication modem for performing communication with various types of devices or wired/wireless communication networks, memory for storing various types of programs and data required to encode or decode images, a microprocessor for executing programs and performing operations and control, etc.
  • images encoded into a bitstream by the video encoding apparatus may be transmitted to the video decoding apparatus in real time or in non-real time over wired/wireless communication networks such as the Internet, a short-range wireless communication network, a wireless Local Area Network (LAN), a Wibro network, or a mobile communication network, or through a communication interface such as a cable or a Universal Serial Bus (USB), and may then be reconstructed and reproduced as images.
  • the embodiments of the present invention may also be implemented in the form of storage media including instructions that are executed by a computer, such as program modules executed by the computer.
  • the computer-readable media may be arbitrary available media that can be accessed by the computer, and may include all of volatile and nonvolatile media and removable and non-removable media. Further, the computer-readable media may include all of computer storage media and communication media.
  • The computer storage media include volatile and nonvolatile media and removable and non-removable media implemented using any method or technology for storing information, such as computer-readable instructions, data structures, program modules, or other data.
  • The communication media typically carry computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanisms, and include arbitrary information delivery media.
  • the present invention has industrial applicability in technical fields for improving the efficiency of procedures for encoding and decoding screen content video.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
US15/537,718 2015-01-13 2016-01-11 Method and device for encoding and decoding image Abandoned US20180278952A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR1020150006074A KR102349788B1 (ko) 2015-01-13 2015-01-13 영상의 부호화/복호화 방법 및 장치
KR10-2015-0006074 2015-01-13
PCT/KR2016/000253 WO2016114539A1 (ko) 2015-01-13 2016-01-11 영상의 부호화 및 복호화 방법 및 장치

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/000253 A-371-Of-International WO2016114539A1 (ko) 2015-01-13 2016-01-11 영상의 부호화 및 복호화 방법 및 장치

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/504,337 Continuation US20240073444A1 (en) 2015-01-13 2023-11-08 Method and device for encoding and decoding image using motion vector resolution scaling

Publications (1)

Publication Number Publication Date
US20180278952A1 true US20180278952A1 (en) 2018-09-27

Family

ID=56406039

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/537,718 Abandoned US20180278952A1 (en) 2015-01-13 2016-01-11 Method and device for encoding and decoding image
US18/504,337 Pending US20240073444A1 (en) 2015-01-13 2023-11-08 Method and device for encoding and decoding image using motion vector resolution scaling

Family Applications After (1)

Application Number Title Priority Date Filing Date
US18/504,337 Pending US20240073444A1 (en) 2015-01-13 2023-11-08 Method and device for encoding and decoding image using motion vector resolution scaling

Country Status (4)

Country Link
US (2) US20180278952A1 (zh)
KR (4) KR102349788B1 (zh)
CN (5) CN113573052B (zh)
WO (1) WO2016114539A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11006144B2 (en) 2018-02-28 2021-05-11 Samsung Electronics Co., Ltd. Video decoding method and apparatus and video encoding method and apparatus
US20220014737A1 (en) * 2019-09-27 2022-01-13 Tencent Technology (Shenzhen) Company Limited Video processing method and apparatus, storage medium, and electronic device
US20220279196A1 (en) * 2021-02-22 2022-09-01 Tencent America LLC Method and apparatus for video coding
US20230209095A1 (en) * 2012-09-26 2023-06-29 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102450863B1 (ko) * 2017-03-22 2022-10-05 에스케이텔레콤 주식회사 움직임벡터를 부호화 또는 복호화하기 위한 장치 및 방법
CN108848376B (zh) * 2018-06-20 2022-03-01 腾讯科技(深圳)有限公司 视频编码、解码方法、装置和计算机设备
WO2020140216A1 (zh) * 2019-01-02 2020-07-09 北京大学 视频处理方法和装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100111183A1 (en) * 2007-04-25 2010-05-06 Yong Joon Jeon Method and an apparatus for decording/encording a video signal
US20110206125A1 (en) * 2010-02-19 2011-08-25 Quallcomm Incorporated Adaptive motion resolution for video coding
US20120207220A1 (en) * 2009-08-21 2012-08-16 Sk Telecom Co., Ltd. Method and apparatus for encoding/decoding images using adaptive motion vector resolution
US20120263235A1 (en) * 2011-04-12 2012-10-18 Toshiyasu Sugio Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US20130058397A1 (en) * 2010-05-17 2013-03-07 Sk Telecom Co., Ltd. Apparatus and method for constructing and indexing a reference image
US20150195562A1 (en) * 2014-01-09 2015-07-09 Qualcomm Incorporated Adaptive motion vector resolution signaling for video coding
US20150195527A1 (en) * 2014-01-08 2015-07-09 Microsoft Corporation Representing Motion Vectors in an Encoded Bitstream
US20200304801A1 (en) * 2017-12-08 2020-09-24 Huawei Technologies Co., Ltd. Inter prediction method and apparatus, and terminal device

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2285119B1 (en) * 1997-06-09 2015-08-05 Hitachi, Ltd. Image decoding method
JP2004064518A (ja) * 2002-07-30 2004-02-26 Nec Corp 動画像符号化方法、動画像符号化装置、およびコンピュータプログラム
FR2880743A1 (fr) * 2005-01-12 2006-07-14 France Telecom Dispositif et procedes de codage et de decodage echelonnables de flux de donnees d'images, signal, programme d'ordinateur et module d'adaptation de qualite d'image correspondants
WO2006131891A2 (en) * 2005-06-10 2006-12-14 Nxp B.V. Alternating up- and down-ward motion vector
KR101369746B1 (ko) * 2007-01-22 2014-03-07 삼성전자주식회사 적응적 보간 필터를 이용한 영상 부호화, 복호화 방법 및장치
CN101340578A (zh) * 2007-07-03 2009-01-07 株式会社日立制作所 运动矢量估计装置、编码器及摄像机
JP2009153102A (ja) * 2007-11-28 2009-07-09 Oki Electric Ind Co Ltd 動きベクトル検出装置、フレーム間符号化装置、動画像符号化装置、動きベクトル検出プログラム、フレーム間符号化プログラム及び動画像符号化プログラム
CN101904173B (zh) * 2007-12-21 2013-04-03 艾利森电话股份有限公司 用于视频编码的改进像素预测的方法及设备
JP2009224854A (ja) * 2008-03-13 2009-10-01 Toshiba Corp 画像符号化装置及び方法
KR101359500B1 (ko) * 2008-07-28 2014-02-11 에스케이 텔레콤주식회사 양자화/역 양자화 장치 및 방법과 그를 이용한 영상부호화/복호화 장치
KR101527148B1 (ko) * 2008-08-08 2015-06-10 에스케이 텔레콤주식회사 인터 예측 장치 및 그를 이용한 영상 부호화/복호화 장치와방법
CN102210150A (zh) * 2008-11-07 2011-10-05 三菱电机株式会社 运动图像编码装置以及运动图像解码装置
WO2010111261A1 (en) * 2009-03-23 2010-09-30 Azuki Systems, Inc. Method and system for efficient streaming video dynamic rate adaptation
KR101452859B1 (ko) * 2009-08-13 2014-10-23 삼성전자주식회사 움직임 벡터를 부호화 및 복호화하는 방법 및 장치
KR101449696B1 (ko) * 2009-08-21 2014-10-20 에스케이텔레콤 주식회사 차분 움직임 벡터의 정밀도를 고려한 움직임 벡터 부호화/복호화 방법 및 장치, 및 그를 위한 영상처리 장치 및 방법
CN102026002B (zh) * 2009-09-14 2014-02-19 富士通株式会社 帧率下采样转码方法和装置以及矢量重建方法和装置
JP2011091498A (ja) * 2009-10-20 2011-05-06 Victor Co Of Japan Ltd 動画像符号化装置、動画像復号化装置、動画像符号化方法、及び動画像復号化方法
KR101449683B1 (ko) * 2009-10-30 2014-10-15 에스케이텔레콤 주식회사 움직임 벡터 해상도 제한을 이용한 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치
CN101783957B (zh) * 2010-03-12 2012-04-18 清华大学 一种视频预测编码方法和装置
KR101479130B1 (ko) * 2010-10-18 2015-01-07 에스케이 텔레콤주식회사 차분 움직임벡터 부호화/복호화 장치 및 방법, 및 그것을 이용한 영상 부호화/복호화 장치 및 방법
CN102611887B (zh) * 2011-01-21 2015-08-05 华为技术有限公司 非整像素位置运动矢量的坐标值取整方法和装置
KR101444675B1 (ko) * 2011-07-01 2014-10-01 에스케이 텔레콤주식회사 영상 부호화 및 복호화 방법과 장치
WO2013109092A1 (ko) * 2012-01-18 2013-07-25 한국전자통신연구원 영상 부호화 및 복호화 방법 및 장치
JP5422681B2 (ja) * 2012-02-09 2014-02-19 株式会社日立製作所 画像復号化方法
KR20130098122A (ko) * 2012-02-27 2013-09-04 세종대학교산학협력단 영상 부호화/복호화 장치 및 영상을 부호화/복호화하는 방법
US20140119446A1 (en) * 2012-11-01 2014-05-01 Microsoft Corporation Preserving rounding errors in video coding
JP6031366B2 (ja) * 2013-01-28 2016-11-24 日本放送協会 撮像制振装置
KR101369174B1 (ko) * 2013-03-20 2014-03-10 에스케이텔레콤 주식회사 고해상도 동영상의 부호화/복호화 방법 및 장치
CN104065972B (zh) * 2013-03-21 2018-09-28 乐金电子(中国)研究开发中心有限公司 一种深度图像编码方法、装置及编码器
JP5690898B2 (ja) * 2013-09-24 2015-03-25 日立マクセル株式会社 画像復号化方法
KR101575634B1 (ko) * 2014-01-02 2015-12-09 에스케이텔레콤 주식회사 고해상도 동영상의 부호화/복호화 방법 및 장치

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100111183A1 (en) * 2007-04-25 2010-05-06 Yong Joon Jeon Method and an apparatus for decording/encording a video signal
US20120207220A1 (en) * 2009-08-21 2012-08-16 Sk Telecom Co., Ltd. Method and apparatus for encoding/decoding images using adaptive motion vector resolution
US20110206125A1 (en) * 2010-02-19 2011-08-25 Quallcomm Incorporated Adaptive motion resolution for video coding
US20130058397A1 (en) * 2010-05-17 2013-03-07 Sk Telecom Co., Ltd. Apparatus and method for constructing and indexing a reference image
US20120263235A1 (en) * 2011-04-12 2012-10-18 Toshiyasu Sugio Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US20150195527A1 (en) * 2014-01-08 2015-07-09 Microsoft Corporation Representing Motion Vectors in an Encoded Bitstream
US20150195562A1 (en) * 2014-01-09 2015-07-09 Qualcomm Incorporated Adaptive motion vector resolution signaling for video coding
US20200304801A1 (en) * 2017-12-08 2020-09-24 Huawei Technologies Co., Ltd. Inter prediction method and apparatus, and terminal device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230209095A1 (en) * 2012-09-26 2023-06-29 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
US11943484B2 (en) * 2012-09-26 2024-03-26 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
US11006144B2 (en) 2018-02-28 2021-05-11 Samsung Electronics Co., Ltd. Video decoding method and apparatus and video encoding method and apparatus
US11388435B2 (en) 2018-02-28 2022-07-12 Samsung Electronics Co., Ltd. Video decoding method and apparatus and video encoding method and apparatus
US11394998B2 (en) 2018-02-28 2022-07-19 Samsung Electronics Co., Ltd. Video decoding method and apparatus and video encoding method and apparatus
US11412251B2 (en) 2018-02-28 2022-08-09 Samsung Electronics Co., Ltd. Video decoding method and apparatus and video encoding method and apparatus
US11451822B2 (en) 2018-02-28 2022-09-20 Samsung Electronics Co., Ltd. Video decoding method and apparatus and video encoding method and apparatus
US20220014737A1 (en) * 2019-09-27 2022-01-13 Tencent Technology (Shenzhen) Company Limited Video processing method and apparatus, storage medium, and electronic device
US11838503B2 (en) * 2019-09-27 2023-12-05 Tencent Technology (Shenzhen) Company Limited Video processing method and apparatus, storage medium, and electronic device
US20220279196A1 (en) * 2021-02-22 2022-09-01 Tencent America LLC Method and apparatus for video coding
US11778217B2 (en) * 2021-02-22 2023-10-03 Tencent America LLC High level syntax control for screen content coding

Also Published As

Publication number Publication date
CN113573052A (zh) 2021-10-29
CN113573053B (zh) 2023-11-28
KR102349788B1 (ko) 2022-01-11
CN113573054A (zh) 2021-10-29
KR20160087208A (ko) 2016-07-21
CN113573051B (zh) 2024-03-01
US20240073444A1 (en) 2024-02-29
CN113573052B (zh) 2023-12-05
KR20230106577A (ko) 2023-07-13
WO2016114539A1 (ko) 2016-07-21
CN107211123B (zh) 2021-07-23
CN113573053A (zh) 2021-10-29
KR20220009473A (ko) 2022-01-24
CN113573054B (zh) 2024-03-01
CN107211123A (zh) 2017-09-26
KR20220155972A (ko) 2022-11-24
KR102468243B1 (ko) 2022-11-16
KR102554364B1 (ko) 2023-07-10
CN113573051A (zh) 2021-10-29

Similar Documents

Publication Publication Date Title
US20240073444A1 (en) Method and device for encoding and decoding image using motion vector resolution scaling
US11425415B2 (en) Affine motion prediction
KR102578820B1 (ko) Image coding method using history-based motion information and apparatus therefor
TWI688262B (zh) Overlapped motion compensation for video coding
US10142654B2 (en) Method for encoding/decoding video by oblong intra prediction
KR102148466B1 (ko) Method for decoding an image using skip mode and apparatus using such method
US11632563B2 (en) Motion vector derivation in video coding
US11962796B2 (en) Gradient-based prediction refinement for video coding
JP2017508346A (ja) Adaptive motion vector resolution signaling for video coding
WO2017129023A1 (zh) Decoding method, encoding method, decoding device, and encoding device
JP7423647B2 (ja) Video coding in triangular prediction unit mode using different chroma formats
US11089325B2 (en) Constrained affine motion inheritance for video coding
US20220191548A1 (en) Picture prediction method, encoder, decoder and storage medium
CN111837389A (zh) Block detection method and apparatus suitable for multi-sign-bit hiding
US20210195238A1 (en) Method and apparatus for encoding/decoding image
TW202141977A (zh) Coefficient coding for transform-skip blocks in video decoding
WO2019147403A1 (en) Encoding and decoding with refinement of the reconstructed picture
KR20170058870A (ko) Method and apparatus for image encoding/decoding using nonlinear mapping
KR102646642B1 (ko) Method and apparatus for video decoding using an extended Intra Block Copy method
US20160050441A1 (en) Video encoding apparatus, video decoding apparatus, video encoding method, video decoding method, and computer program

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTELLECTUAL DISCOVERY CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HAN, JONG KI;LEE, JAE YUNG;REEL/FRAME:042749/0385

Effective date: 20170613

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: DOLBY LABORATORIES LICENSING CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTELLECTUAL DISCOVERY CO., LTD.;REEL/FRAME:058356/0603

Effective date: 20211102

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION