WO2020138997A1 - Method and device for processing a video signal using inter prediction - Google Patents

Method and device for processing a video signal using inter prediction

Info

Publication number
WO2020138997A1
Authority
WO
WIPO (PCT)
Prior art keywords: mmvd, motion vector, prediction, unit, current block
Application number: PCT/KR2019/018560
Other languages: English (en), Korean (ko)
Inventors: 박내리, 남정학, 장형문
Original assignee: 엘지전자 주식회사 (LG Electronics Inc.)
Application filed by 엘지전자 주식회사
Publication of WO2020138997A1

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N 19/10: using adaptive coding
              • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                • H04N 19/136: Incoming video signal characteristics or properties
                  • H04N 19/137: Motion inside a coding unit, e.g. average field, frame or block difference
                • H04N 19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
                  • H04N 19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
            • H04N 19/50: using predictive coding
              • H04N 19/503: involving temporal prediction
                • H04N 19/51: Motion estimation or motion compensation
                  • H04N 19/513: Processing of motion vectors
                  • H04N 19/53: Multi-resolution motion estimation; Hierarchical motion estimation
            • H04N 19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • An embodiment of the present disclosure relates to a method and apparatus for processing a video signal using inter prediction, and more specifically, to a method and apparatus for performing inter-picture prediction using a merge mode to which merge with motion vector difference (MMVD) is applied.
  • Compression coding refers to a series of signal processing techniques for transmitting digitized information through a communication line or storing it in a form suitable for a storage medium.
  • Media such as video, image, and audio may be the subject of compression encoding, and a technique for performing compression encoding on an image is referred to as video image compression.
  • Next-generation video content will be characterized by high spatial resolution, high frame rates, and high dimensionality of scene representation. Processing such content will bring a huge increase in memory storage, memory access rate, and processing power requirements.
  • Accordingly, the video codec standard following the high efficiency video coding (HEVC) standard requires a prediction technique capable of accurately generating prediction samples while using resources more efficiently.
  • Embodiments of the present specification provide a video signal processing method and apparatus capable of improving the accuracy of a motion vector when applying merge mode.
  • an embodiment of the present specification provides a video signal processing method and apparatus capable of reducing signaling overhead in the process of applying a merge with motion vector difference (MMVD) technique.
  • Embodiments of the present specification provide a method and apparatus for processing a video signal using inter prediction.
  • A video signal processing method according to an embodiment of the present specification includes obtaining at least one motion vector for inter prediction of a current block from at least one neighboring block adjacent to the current block based on a merge index, determining a merge with motion vector difference (MMVD) offset applied to the at least one motion vector based on an MMVD distance index, and generating a prediction sample of the current block based on the at least one motion vector to which the MMVD offset is applied and a reference picture associated with the merge index. The determining of the MMVD offset may include determining an MMVD candidate set based on the at least one motion vector and determining the MMVD offset associated with the MMVD distance index from the MMVD candidate set.
  • In one embodiment, the determining of the MMVD candidate set includes determining one of a plurality of pre-defined MMVD candidate sets based on the at least one motion vector, and each of the plurality of MMVD candidate sets may include a different range of MMVD offset candidates.
  • determining the MMVD candidate set may include determining the MMVD candidate set based on the magnitude of the at least one motion vector.
  • In one embodiment, the determining of the MMVD candidate set includes determining the MMVD candidate set based on a result of comparing the magnitude of the at least one motion vector with a reference value, and, for the current block, the MMVD candidate set may be determined based on a result of comparing the reference value with at least one of a first motion vector for a first prediction direction or a second motion vector for a second prediction direction.
  • determining the MMVD candidate set may include determining the MMVD candidate set based on the size of the at least one motion vector and the size of the current block.
  • determining the MMVD candidate set may include determining the MMVD candidate set based on a pixel resolution of coordinates indicated by the at least one motion vector.
  • In one embodiment, the determining of the MMVD candidate set may include determining the MMVD candidate set based on the magnitude of the at least one motion vector and the pixel resolution of coordinates indicated by the at least one motion vector.
  • An apparatus for processing a video signal according to an embodiment of the present specification includes a memory for storing the video signal and a processor coupled with the memory. The processor is configured to obtain at least one motion vector for inter prediction of a current block from at least one neighboring block adjacent to the current block based on a merge index, determine an MMVD offset applied to the at least one motion vector based on an MMVD distance index, and generate a prediction sample of the current block based on the at least one motion vector to which the MMVD offset is applied and a reference picture associated with the merge index. To determine the MMVD offset, the processor is configured to determine an MMVD candidate set based on the at least one motion vector and to determine the MMVD offset associated with the MMVD distance index from the MMVD candidate set.
  • An embodiment of the present specification provides a non-transitory computer-readable medium storing computer-executable components configured to execute on one or more processors of a computing device.
  • The computer-executable component is configured to obtain at least one motion vector for inter prediction of a current block from at least one neighboring block adjacent to the current block based on a merge index, determine a merge with motion vector difference (MMVD) offset applied to the at least one motion vector based on an MMVD distance index, and generate a prediction sample of the current block based on the at least one motion vector to which the MMVD offset is applied and a reference picture associated with the merge index. To determine the MMVD offset, the computer-executable component is configured to determine an MMVD candidate set based on the at least one motion vector and to determine the MMVD offset associated with the MMVD distance index from the MMVD candidate set.
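  • As an illustration of the MMVD offset determination described above, the following Python sketch chooses one of two pre-defined MMVD candidate sets from the magnitude of the base motion vector and then applies the offset selected by the distance and direction indices. The concrete candidate sets, the threshold, and all names are illustrative assumptions rather than values defined by this specification. Because only offsets from the selected set can be signaled, the distance index needs fewer code words than a single large set would, which relates to the signaling-overhead reduction noted in the following paragraphs.

```python
# Illustrative sketch of the MMVD offset derivation outlined above.
# The candidate sets, the threshold, and the helper names are assumptions.

# Two pre-defined MMVD candidate sets with different offset ranges
# (in luma-sample units); the set is chosen from the base motion vector.
MMVD_SET_FINE   = [0.25, 0.5, 1, 2, 4, 8, 16, 32]
MMVD_SET_COARSE = [0.5, 1, 2, 4, 8, 16, 32, 64]

# Direction index -> (sign_x, sign_y), one of four axis-aligned directions.
MMVD_DIRECTIONS = {0: (+1, 0), 1: (-1, 0), 2: (0, +1), 3: (0, -1)}

MV_MAGNITUDE_THRESHOLD = 128  # assumed reference value for the comparison


def select_mmvd_candidate_set(mv):
    """Choose one of the pre-defined candidate sets from the base MV magnitude."""
    magnitude = abs(mv[0]) + abs(mv[1])
    return MMVD_SET_FINE if magnitude < MV_MAGNITUDE_THRESHOLD else MMVD_SET_COARSE


def derive_mmvd_mv(base_mv, distance_idx, direction_idx):
    """Apply the MMVD offset selected by the distance and direction indices."""
    offsets = select_mmvd_candidate_set(base_mv)
    step = offsets[distance_idx]
    sign_x, sign_y = MMVD_DIRECTIONS[direction_idx]
    return (base_mv[0] + sign_x * step, base_mv[1] + sign_y * step)


# Example: the base MV is taken from the merge candidate indicated by the merge index.
print(derive_mmvd_mv(base_mv=(35, -12), distance_idx=2, direction_idx=0))  # (36, -12)
```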
  • The accuracy of a motion vector may be improved by performing prediction using a merge mode to which the merge with motion vector difference (MMVD) technique is applied.
  • An embodiment of the present specification provides a video signal processing method and apparatus in which signaling overhead can be reduced by limiting the number of MMVD offset candidates that can be applied based on a motion vector and thereby reducing the bit length of the MMVD distance index.
  • FIG. 1 shows an example of a video coding system according to an embodiment of the present specification.
  • FIG. 2 shows a schematic block diagram of an encoding apparatus for encoding a video/image signal according to an embodiment of the present specification.
  • FIG. 3 is an embodiment of the present specification, and shows a schematic block diagram of a decoding apparatus for decoding a video signal.
  • FIG. 4 shows an example of a structural diagram of a content streaming system according to an embodiment of the present specification.
  • FIG. 5 shows an example of a block diagram of an apparatus for processing a video signal according to an embodiment of the present specification.
  • FIG. 6 shows an example in which a picture is divided into coding tree units (CTUs) according to an embodiment of the present specification.
  • FIG. 7 shows an example of multi-type tree splitting modes according to an embodiment of the present specification.
  • FIG. 8 shows an example of a signaling mechanism of partitioning information of a quadtree with nested multi-type tree structure according to an embodiment of the present specification.
  • FIG. 9 shows an example in which a CTU is divided into multiple coding units (CUs) based on a quadtree and nested multi-type tree structure according to an embodiment of the present specification.
  • FIG. 10 shows an example of a case where TT (ternary tree) partitioning is limited for a 128x128 coding block according to an embodiment of the present specification.
  • FIG. 11 illustrates examples of redundant partition patterns that may occur in binary tree partition and ternary tree partition according to an embodiment of the present disclosure.
  • FIGS. 12 and 13 illustrate a video/video encoding procedure based on inter prediction according to an embodiment of the present specification and an inter prediction unit in an encoding device.
  • FIG. 14 and 15 illustrate a video/image decoding procedure based on inter prediction according to an embodiment of the present specification and an inter prediction unit in a decoding apparatus.
  • FIG. 16 shows an example of a spatial merge candidate configuration for a current block according to an embodiment of the present specification.
  • FIG. 17 shows an example of a flowchart for configuring a merge candidate list according to an embodiment of the present specification.
  • FIG. 18 shows an example of a flowchart for constructing a motion vector predictor (MVP) candidate list according to an embodiment of the present specification.
  • FIG. 19 shows an example of an MMVD search process according to an embodiment of the present specification.
  • FIG. 20 shows an example of triangular prediction units according to an embodiment of the present specification.
  • FIG. 21 illustrates an example of a flowchart for determining a set of MMVD candidates based on the size of a motion vector according to an embodiment of the present specification.
  • FIGS. 22 and 23 show examples of flowcharts for determining an MMVD candidate set based on the magnitude of a motion vector and the block size according to an embodiment of the present specification.
  • FIG. 24 illustrates an example of a flow chart for determining an MMVD candidate set based on the resolution of a motion vector according to an embodiment of the present specification.
  • FIGS. 25 and 26 illustrate examples of flowcharts for determining an MMVD candidate set based on the magnitude and resolution of a motion vector according to an embodiment of the present specification.
  • FIG. 27 shows an example of a flowchart for processing a video signal according to an embodiment of the present specification.
  • FIG. 28 is a diagram schematically showing an example of a service system including a digital device.
  • FIG. 29 is a block diagram illustrating a digital device according to an embodiment.
  • FIG. 30 is a configuration block diagram illustrating another embodiment of a digital device.
  • FIG. 31 is a configuration block diagram illustrating another embodiment of a digital device.
  • FIG. 32 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 29 to 31 according to an embodiment.
  • FIG. 33 is a diagram illustrating an example in which a screen of a digital device displays a main image and a sub image simultaneously, according to an embodiment.
  • Signals, data, samples, pictures, slices, tiles, frames, and blocks may be appropriately substituted for one another and interpreted accordingly in each coding process.
  • The term 'processing unit' means a unit in which encoding/decoding processes such as prediction, transformation, and/or quantization are performed.
  • the processing unit may be interpreted as meaning including a unit for a luminance component and a unit for a chroma component.
  • the processing unit may correspond to a block, a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • the processing unit may be interpreted as a unit for a luminance component or a unit for a color difference component.
  • the processing unit may correspond to a coding tree block (CTB), a coding block (CB), a PU or a transform block (TB) for the luminance component.
  • the processing unit may correspond to CTB, CB, PU or TB for the color difference component.
  • the present invention is not limited thereto, and the processing unit may be interpreted to include a unit for a luminance component and a unit for a color difference component.
  • processing unit is not necessarily limited to a square block, and may be configured in a polygonal shape having three or more vertices.
  • Hereinafter, pixels, pels, or coefficients (transform coefficients or transform coefficients that have undergone a first-order transformation) are collectively referred to as samples.
  • Using a sample may mean using a pixel value, a pel value, or a coefficient (a transform coefficient or a transform coefficient that has undergone a first-order transformation).
  • FIG. 1 shows an example of a video coding system according to an embodiment of the present specification.
  • the video coding system can include a source device 10 and a receiving device 20.
  • the source device 10 may transmit the encoded video/video information or data to the receiving device 20 through a digital storage medium or a network in a file or streaming form.
  • the source device 10 may include a video source 11, an encoding device 12, and a transmitter 13.
  • the receiving device 20 may include a receiver 21, a decoding device 22 and a renderer 23.
  • the encoding device 12 may be called a video/image encoding device, and the decoding device 22 may be called a video/image decoding device.
  • the transmitter 13 may be included in the encoding device 12.
  • the receiver 21 may be included in the decoding device 22.
  • the renderer 23 may include a display unit, and the display unit may be configured as a separate device or an external component.
  • the video source may acquire a video/image through a capture, synthesis, or generation process of the video/image.
  • the video source may include a video/image capture device and/or a video/image generation device.
  • the video/image capture device may include, for example, one or more cameras, a video/image archive including previously captured video/images, and the like.
  • the video/image generating device may include, for example, a computer, a tablet and a smartphone, and may (electronically) generate a video/image.
  • a virtual video/image may be generated through a computer or the like, and in this case, a video/image capture process may be replaced by a process of generating related data.
  • the encoding device 12 may encode an input video/image.
  • the encoding apparatus 12 may perform a series of procedures such as prediction, transformation, and quantization for compression and coding efficiency.
  • the encoded data (encoded video/image information) may be output in the form of a bitstream.
  • the transmitting unit 13 may transmit the encoded video/video information or data output in the form of a bitstream to a receiving unit of a receiving device through a digital storage medium or a network in a file or streaming format.
  • Digital storage media may include various storage media such as universal serial bus (USB), secure digital (SD), compact disk (CD), digital video disk (DVD), Blu-ray, hard disk drive (HDD), and solid state drive (SSD).
  • the transmission unit 13 may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcast/communication network.
  • the receiver 21 may extract the bitstream and transmit it to the decoding device 22.
  • the decoding apparatus 22 may decode a video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoding apparatus 12.
  • the renderer 23 may render the decoded video/image.
  • the rendered video/image may be displayed through the display unit.
  • FIG. 2 shows a schematic block diagram of an encoding apparatus for encoding a video/image signal according to an embodiment of the present specification.
  • the encoding apparatus 100 may include an image segmentation unit 110, a subtraction unit 115, a transform unit 120, a quantization unit 130, an inverse quantization unit 140, an inverse transform unit 150, an adder 155, a filtering unit 160, a memory 170, an inter prediction unit 180, an intra prediction unit 185, and an entropy encoding unit 190.
  • the inter prediction unit 180 and the intra prediction unit 185 may be collectively referred to as a prediction unit. That is, the prediction unit may include an inter prediction unit 180 and an intra prediction unit 185.
  • the transform unit 120, the quantization unit 130, the inverse quantization unit 140, and the inverse transform unit 150 may be included in a residual processing unit.
  • the residual processing unit may further include a subtraction unit 115.
  • the above-described image segmentation unit 110, subtraction unit 115, transform unit 120, quantization unit 130, inverse quantization unit 140, inverse transform unit 150, adder 155, filtering unit 160, inter prediction unit 180, intra prediction unit 185, and entropy encoding unit 190 may be configured by one hardware component (for example, an encoder or a processor) according to an embodiment.
  • the memory 170 may be configured by one hardware component (for example, a memory or a digital storage medium) according to an embodiment, and the memory 170 may include a decoded picture buffer (DPB) 175.
  • the image division unit 110 may divide the input image (or picture, frame) input to the encoding apparatus 100 into one or more processing units.
  • the processing unit may be referred to as a coding unit (CU).
  • the coding unit may be recursively divided according to a quad-tree binary-tree (QTBT) structure from a coding tree unit (CTU) or a largest coding unit (LCU).
  • one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure and/or a binary tree structure.
  • a quad tree structure may be applied first, and a binary tree structure may be applied later.
  • Alternatively, the binary tree structure may be applied first.
  • the coding procedure according to the present specification may be performed based on the final coding unit that is no longer split.
  • the maximum coding unit may be directly used as the final coding unit based on coding efficiency according to image characteristics, or, if necessary, the coding unit may be recursively divided into coding units of a deeper depth so that a coding unit of an optimal size can be used as the final coding unit.
  • the coding procedure may include procedures such as prediction, transformation, and reconstruction, which will be described later.
  • the processing unit may further include a prediction unit (PU) or a transformation unit (TU).
  • the prediction unit and the transform unit may each be split or partitioned from the above-described final coding unit.
  • the prediction unit may be a unit of sample prediction
  • the transformation unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
  • the unit may be used interchangeably with terms such as a block or area depending on the case.
  • the MxN block may represent samples of M columns and N rows or a set of transform coefficients.
  • the sample may generally represent a pixel or a pixel value, and may indicate only a pixel/pixel value of a luma component or only a pixel/pixel value of a chroma component.
  • the sample may be used as a term for one picture (or image) corresponding to a pixel or pel.
  • the encoding apparatus 100 may generate a residual signal (residual block, residual sample array) by subtracting the prediction signal (predicted block, prediction sample array) output from the inter prediction unit 180 or the intra prediction unit 185 from the input image signal (original block, original sample array), and the generated residual signal is transmitted to the transform unit 120.
  • a unit that subtracts a prediction signal (a prediction block, a prediction sample array) from an input video signal (original block, original sample array) in the encoding apparatus 100 may be referred to as a subtraction unit 115.
  • the prediction unit may perform prediction on a block to be processed (hereinafter, referred to as a current block) and generate a predicted block including prediction samples for the current block.
  • the prediction unit may determine whether intra prediction is applied or inter prediction is applied in units of blocks or CUs.
  • the prediction unit may generate various pieces of information about prediction, such as prediction mode information, as described later in the description of each prediction mode, and transmit them to the entropy encoding unit 190.
  • the prediction information may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
  • the intra prediction unit 185 may predict the current block by referring to samples in the current picture.
  • the referenced samples may be located in the neighborhood of the current block or may be located apart depending on a prediction mode.
  • prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
  • the non-directional mode may include, for example, a DC mode and a planar mode.
  • the directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes according to the degree of detail of the prediction direction. However, this is an example, and more or less directional prediction modes may be used depending on the setting.
  • the intra prediction unit 185 may determine a prediction mode applied to the current block by using a prediction mode applied to neighboring blocks.
  • the inter prediction unit 180 may derive the predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
  • the temporal neighboring block may be referred to by a name such as a collocated reference block or a colCU, and a reference picture including a temporal neighboring block may also be called a collocated picture (colPic).
  • the inter prediction unit 180 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, the inter prediction unit 180 may use motion information of neighboring blocks as motion information of the current block.
  • In the skip mode, unlike the merge mode, the residual signal may not be transmitted.
  • In the case of the motion vector prediction (MVP) mode, the motion vector of the current block may be derived by using the motion vector of a neighboring block as a motion vector predictor and signaling a motion vector difference.
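  • To make the merge/skip behaviour just described concrete, the following Python sketch builds a merge candidate list from spatial and temporal neighboring blocks and reuses the motion information of the candidate indicated by the merge index. The neighbor order, list size, and function names are assumptions for illustration, not the normative construction process (see also FIGS. 16 and 17).

```python
# Simplified sketch of merge candidate list construction and merge-index reuse.
# Neighbor order, list size, and names are assumptions for illustration only.

MAX_MERGE_CANDIDATES = 6

def build_merge_candidate_list(spatial_neighbors, temporal_neighbor=None):
    """spatial_neighbors: motion-info dicts for e.g. A1, B1, B0, A0, B2 (None if unavailable)."""
    candidates = []
    for cand in spatial_neighbors:
        if cand is not None and cand not in candidates:   # availability and redundancy check
            candidates.append(cand)
        if len(candidates) == MAX_MERGE_CANDIDATES:
            return candidates
    if temporal_neighbor is not None and temporal_neighbor not in candidates:
        candidates.append(temporal_neighbor)              # temporal (collocated) candidate
    while len(candidates) < MAX_MERGE_CANDIDATES:         # pad with zero-motion candidates
        candidates.append({"mv": (0, 0), "ref_idx": 0})
    return candidates

def motion_info_from_merge_index(merge_candidates, merge_index):
    """In merge/skip mode the current block reuses the selected candidate's motion information."""
    return merge_candidates[merge_index]

neighbors = [{"mv": (4, -2), "ref_idx": 0}, None, {"mv": (3, -2), "ref_idx": 1}]
merge_list = build_merge_candidate_list(neighbors)
print(motion_info_from_merge_index(merge_list, merge_index=1))   # -> {'mv': (3, -2), 'ref_idx': 1}
```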
  • the prediction signal generated by the inter prediction unit 180 or the intra prediction unit 185 may be used to generate a reconstructed signal or may be used to generate a residual signal.
  • the transform unit 120 may generate transform coefficients by applying a transform technique to the residual signal.
  • the transformation technique may include at least one of a discrete cosine transform (DCT), a discrete sine transform (DST), a Karhunen-Loeve transform (KLT), a graph-based transform (GBT), or a conditionally non-linear transform (CNT).
  • DCT discrete cosine transform
  • DST discrete sine transform
  • KLT Karhunen-Loeve transform
  • GBT graph-based transform
  • CNT conditionally non-linear transform
  • GBT refers to a transform obtained from a graph that represents relationship information between pixels.
  • CNT refers to a transform that is obtained based on a prediction signal generated using all previously reconstructed pixels.
  • the transform process may be applied to square pixel blocks having the same size, or may be applied to blocks of variable size other than square.
  • the quantization unit 130 quantizes the transform coefficients and transmits them to the entropy encoding unit 190, and the entropy encoding unit 190 may encode the quantized signal (information about the quantized transform coefficients) and output it as a bitstream. Information about the quantized transform coefficients may be called residual information.
  • the quantization unit 130 may rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
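  • The coefficient reordering step described above can be pictured with a small Python sketch that scans a two-dimensional block of quantized transform coefficients into a one-dimensional vector along anti-diagonals. The actual scan order used by the codec may differ; this is only an illustration of the rearrangement.

```python
# Illustrative sketch of rearranging a 2-D block of quantized transform
# coefficients into a 1-D vector with an up-right diagonal scan.

def diagonal_scan(block):
    """block: list of rows (NxN); returns coefficients in up-right diagonal order."""
    n = len(block)
    scanned = []
    for d in range(2 * n - 1):                       # anti-diagonals d = x + y
        for y in range(min(d, n - 1), max(-1, d - n), -1):
            x = d - y
            scanned.append(block[y][x])
    return scanned

coeffs = [[9, 3, 0, 0],
          [2, 1, 0, 0],
          [1, 0, 0, 0],
          [0, 0, 0, 0]]
print(diagonal_scan(coeffs))   # -> [9, 2, 3, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```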
  • the entropy encoding unit 190 may perform various encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
  • the entropy encoding unit 190 may encode information necessary for video/image reconstruction (e.g., values of syntax elements) together with or separately from the quantized transform coefficients.
  • the encoded information (eg, video/video information) may be transmitted or stored in the unit of a network abstraction layer (NAL) unit in the form of a bitstream.
  • the bitstream can be transmitted over a network or stored on a digital storage medium.
  • the network may include a broadcasting network and/or a communication network
  • the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD.
  • a transmitting unit (not shown) that transmits the signal output from the entropy encoding unit 190 and/or a storage unit (not shown) that stores the signal may be configured as internal/external elements of the encoding apparatus 100, or the transmitting unit may be a component of the entropy encoding unit 190.
  • the quantized transform coefficients output from the quantization unit 130 may be used to generate a prediction signal.
  • the residual signal may be reconstructed by applying inverse quantization and inverse transform through the inverse quantization unit 140 and the inverse transform unit 150 in the loop to the quantized transform coefficients.
  • the adder 155 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 180 or the intra prediction unit 185. If there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as a reconstructed block.
  • the adding unit 155 may be referred to as a restoration unit or a restoration block generation unit.
  • the reconstructed signal may be used for intra prediction of a next processing target block in a current picture, or may be used for inter prediction of a next picture through filtering as described below.
  • the filtering unit 160 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filtering unit 160 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and may transmit the modified reconstructed picture to the decoded picture buffer 170.
  • Various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
  • the filtering unit 160 may generate various pieces of information regarding filtering as described later in the description of each filtering method and transmit them to the entropy encoding unit 190.
  • the filtering information may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
  • the modified reconstructed picture transmitted to the decoded picture buffer 170 may be used as a reference picture in the inter prediction unit 180.
  • When inter prediction is applied through the encoding apparatus 100, prediction mismatches between the encoding apparatus 100 and the decoding apparatus 200 may be avoided, and encoding efficiency may be improved.
  • the decoded picture buffer 170 may store the modified reconstructed picture for use as a reference picture in the inter prediction unit 180.
  • FIG. 3 is an embodiment of the present specification, and shows a schematic block diagram of a decoding apparatus for decoding a video signal.
  • the decoding apparatus 200 may be configured to include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an adder 235, a filtering unit 240, a memory 250, an inter prediction unit 260, and an intra prediction unit 265.
  • the inter prediction unit 260 and the intra prediction unit 265 may be collectively referred to as a prediction unit. That is, the prediction unit may include the inter prediction unit 260 and the intra prediction unit 265.
  • the inverse quantization unit 220 and the inverse transform unit 230 may be collectively referred to as a residual processing unit. That is, the residual processing unit may include the inverse quantization unit 220 and the inverse transform unit 230.
  • the decoded picture buffer 250 may be implemented by one hardware component (e.g., a memory or a digital storage medium) according to an embodiment.
  • the memory 250 may include a decoded picture buffer (DPB) and may be configured by a digital storage medium.
  • the decoding apparatus 200 may restore an image in response to a process in which the video/image information is processed by the encoding apparatus 100 of FIG. 2.
  • the decoding apparatus 200 may perform decoding using a processing unit applied by the encoding apparatus 100.
  • the processing unit may be, for example, a coding unit, and the coding unit may be divided according to a quad tree structure and/or a binary tree structure from a coding tree unit or a largest coding unit. Then, the video signal decoded and output through the decoding apparatus 200 may be reproduced through a reproduction apparatus.
  • the decoding apparatus 200 may receive the signal output from the encoding apparatus 100 of FIG. 2 in the form of a bitstream, and the received signal may be decoded through the entropy decoding unit 210.
  • the entropy decoding unit 210 may parse the bitstream to derive information (eg, video/image information) necessary for image reconstruction (or picture reconstruction).
  • the entropy decoding unit 210 decodes information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements necessary for image reconstruction and quantized values of transform coefficients related to residuals.
  • More specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using information on the decoding-target syntax element and decoding information of neighboring and previously decoded blocks, or information on the symbol/bin decoded in the previous step, predicts the probability of occurrence of the bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • After determining the context model, the CABAC entropy decoding method may update the context model with information on the decoded symbol/bin for the context model of the next symbol/bin.
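  • The context-adaptation idea behind the CABAC decoding described above can be sketched as follows: a context model keeps a probability estimate for a bin and is updated after every decoded bin. Real CABAC uses table-driven probability states and binary arithmetic coding, so the update rule below is only an illustrative stand-in, not the standardized one.

```python
# Toy sketch of context adaptation: the probability estimate of a context model
# drifts toward the statistics of the bins decoded with that context.

class ContextModel:
    def __init__(self, p_one=0.5):
        self.p_one = p_one                       # estimated probability that the next bin is 1

    def update(self, bin_value, rate=1 / 16):
        # Move the estimate toward the observed bin value (exponential adaptation).
        self.p_one += rate * ((1.0 if bin_value else 0.0) - self.p_one)


ctx = ContextModel()
for bin_value in [1, 1, 0, 1, 1, 1, 0, 1]:       # bins as produced by the arithmetic decoder
    ctx.update(bin_value)
print(round(ctx.p_one, 3))                        # estimate has drifted toward the observed statistics
```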
  • Among the information decoded by the entropy decoding unit 210, information about prediction may be provided to the prediction unit (the inter prediction unit 260 and the intra prediction unit 265), and the residual values on which entropy decoding was performed by the entropy decoding unit 210, that is, the quantized transform coefficients and related parameter information, may be input to the inverse quantization unit 220.
  • information related to filtering among information decoded by the entropy decoding unit 210 may be provided to the filtering unit 240.
  • a receiving unit (not shown) that receives the signal output from the encoding apparatus 100 may be further configured as an internal/external element of the decoding apparatus 200, or the receiving unit may be a component of the entropy decoding unit 210.
  • the inverse quantization unit 220 may output transform coefficients by inverse quantizing the quantized transform coefficients.
  • the inverse quantization unit 220 may rearrange the quantized transform coefficients in a two-dimensional block form. In this case, reordering may be performed based on the coefficient scan order performed by the encoding apparatus 100.
  • the inverse quantization unit 220 may perform inverse quantization on the quantized transform coefficients using a quantization parameter (for example, quantization step size information), and obtain a transform coefficient.
  • the inverse transform unit 230 may output a residual signal (residual block, residual sample array) by applying an inverse transform to the transform coefficient.
  • the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
  • the prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on information about prediction output from the entropy decoding unit 210, and may determine a specific intra/inter prediction mode.
  • the intra prediction unit 265 may predict the current block by referring to samples in the current picture.
  • the referenced samples may be located in the neighborhood of the current block or spaced apart according to the prediction mode.
  • prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
  • the intra prediction unit 265 may determine a prediction mode applied to the current block using a prediction mode applied to neighboring blocks.
  • the inter prediction unit 260 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • motion information may be predicted on a block, subblock, or sample basis based on the correlation of motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include information on the inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.).
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • the inter prediction unit 260 may construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or reference picture index of the current block based on the received candidate selection information.
  • Inter prediction may be performed based on various prediction modes, and information on prediction may include information indicating a mode of inter prediction for a current block.
  • the adder 235 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the inter prediction unit 260 or the intra prediction unit 265. If there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as a reconstructed block.
  • the adding unit 235 may be called a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of a next processing target block in a current picture, or may be used for inter prediction of a next picture through filtering as described below.
  • the filtering unit 240 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filtering unit 240 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and may transmit the modified reconstructed picture to the decoded picture buffer 250.
  • Various filtering methods may include, for example, deblocking filtering, sample adaptive offset (SAO), adaptive loop filter (ALF), bilateral filter, and the like.
  • the corrected reconstructed picture transmitted to the decoded picture buffer 250 may be used as a reference picture by the inter prediction unit 260.
  • the embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the encoding apparatus 100 may be applied in the same or corresponding manner to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the decoding apparatus 200, respectively.
  • FIG. 4 shows an example of a structural diagram of a content streaming system according to an embodiment of the present specification.
  • the content streaming system to which the present specification is applied may largely include an encoding server 410, a streaming server 420, a web server 430, a media storage 440, a user device 450, and a multimedia input device 460.
  • the encoding server 410 may compress the content input from multimedia input devices such as a smartphone, camera, camcorder, etc. into digital data to generate a bitstream and transmit it to the streaming server 420.
  • when multimedia input devices 460 such as a smartphone, camera, and camcorder directly generate a bitstream, the encoding server 410 may be omitted.
  • the bitstream may be generated by an encoding method or a bitstream generation method to which the present specification is applied, and the streaming server 420 may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
  • the streaming server 420 transmits multimedia data to the user device 450 based on a user request through the web server 430, and the web server 430 serves as an intermediary informing the user of available services.
  • when a user requests a desired service from the web server 430, the web server 430 delivers the request to the streaming server 420, and the streaming server 420 transmits multimedia data to the user.
  • the content streaming system may include a separate control server, in which case the control server serves to control commands/responses between devices in the content streaming system.
  • Streaming server 420 may receive content from media storage 440 and/or encoding server 410.
  • the streaming server 420 may receive content in real time from the encoding server 410.
  • the streaming server 420 may store the bitstream for a predetermined time.
  • the user device 450 may include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (for example, a smartwatch, smart glasses, or a head mounted display (HMD)), a digital TV, a desktop computer, and digital signage.
  • Each server in the content streaming system can be operated as a distributed server, and in this case, data received from each server can be distributed.
  • FIG. 5 shows an example of a block diagram of an apparatus for processing a video signal according to an embodiment of the present specification.
  • the video signal processing apparatus of FIG. 5 may correspond to the encoding apparatus 100 of FIG. 2 or the decoding apparatus 200 of FIG. 3.
  • the video signal processing apparatus 500 may include a memory 520 for storing a video signal, and a processor 510 for processing a video signal while being combined with the memory.
  • the processor 510 may be configured with at least one processing circuit for processing a video signal, and may process a video signal by executing instructions for encoding or decoding the video signal. That is, the processor 510 may encode the original video signal or decode the encoded video signal by executing the encoding or decoding methods described below.
  • the video/image coding method according to this document may be performed based on various detailed technologies, and the detailed description of each detailed technology is as follows.
  • It will be apparent to those skilled in the art that the techniques described below may be involved in related procedures such as prediction, residual processing (transform, quantization, etc.), syntax element coding, filtering, and partitioning/splitting in the video/image encoding/decoding procedures described above and/or below.
  • Pictures may be divided into a sequence of coding tree units (CTUs).
  • the CTU may correspond to a coding tree block (CTB).
  • CTU may include a coding tree block of luma samples and two coding tree blocks of corresponding chroma samples.
  • the CTU may include two corresponding blocks of chroma samples and an NxN block of luma samples.
  • one picture may be divided into a plurality of CTUs having a constant size.
  • the maximum allowable size of the CTU for coding and prediction may be different from the maximum allowable size of the CTU for transformation.
  • the maximum allowable size of the luma block in the CTU may be 128x128 (even though the maximum size of luma transform blocks is 64x64).
  • FIG 7 shows an example of multi-type tree splitting modes according to an embodiment of the present specification.
  • the CTU may be divided into CUs based on a quad-tree (QT) structure.
  • the quadtree structure may be referred to as a quaternary tree structure. This is to reflect various local characteristics.
  • the CTU can be divided based on the division of a multi-type tree structure including a binary tree (BT) and a ternary tree (TT) as well as a quad tree.
  • the QTBT structure may include a quadtree and binary tree based split structure
  • the QTBTTT may include a quadtree, binary tree and ternary tree based split structure.
  • Alternatively, the QTBT structure may also be understood to include a quadtree, binary tree, and ternary tree based splitting structure.
  • the CU can have a square or rectangular shape.
  • the CTU can first be divided into a quadtree structure. Thereafter, leaf nodes of the quadtree structure may be further divided by a multi-type tree structure. For example, as illustrated in FIG. 7, the multi-type tree structure may schematically include four split types.
  • the four split types shown in FIG. 7 are vertical binary splitting (SPLIT_BT_VER), horizontal binary splitting (SPLIT_BT_HOR), vertical ternary splitting (SPLIT_TT_VER), and horizontal ternary splitting (SPLIT_TT_HOR).
  • Leaf nodes of a multitype tree structure may be referred to as CUs. These CUs can be used as a unit for prediction and transformation procedures.
  • CU, PU, and TU may have the same block size. However, when the maximum supported transform length is smaller than the width or height of the color component of the CU, the CU and the TU may have different block sizes.
  • FIG. 8 illustrates an example of a signaling mechanism of partitioning information of a quadtree with nested multi-type tree structure according to an embodiment of the present specification.
  • the CTU is treated as a root of a quadtree, and is first divided into a quadtree structure.
  • Each quadtree leaf node can then be further divided into a multitype tree structure.
  • To signal the partitioning information of the multi-type tree structure, a first flag (e.g., mtt_split_cu_flag) may be signaled to indicate whether the node is further partitioned; when the node is further partitioned, a second flag (e.g., mtt_split_cu_vertical_flag) may be signaled to indicate the splitting direction, and a third flag (e.g., mtt_split_cu_binary_flag) may be signaled to indicate whether the split is a binary split or a ternary split. Based on the values of these flags, the multi-type tree splitting mode (MttSplitMode) of a CU may be derived as shown in Table 1 below.
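  • The mapping from the two flags to the multi-type tree splitting mode can be sketched as below. The table is written from the four split types of FIG. 7 in the usual VVC-style arrangement; since Table 1 itself is not reproduced in this text, treat the exact mapping as an assumption.

```python
# Sketch of deriving MttSplitMode from the two signaled flags.
MTT_SPLIT_MODE = {
    # (mtt_split_cu_vertical_flag, mtt_split_cu_binary_flag): mode
    (1, 1): "SPLIT_BT_VER",   # vertical binary splitting
    (0, 1): "SPLIT_BT_HOR",   # horizontal binary splitting
    (1, 0): "SPLIT_TT_VER",   # vertical ternary splitting
    (0, 0): "SPLIT_TT_HOR",   # horizontal ternary splitting
}

def mtt_split_mode(mtt_split_cu_vertical_flag, mtt_split_cu_binary_flag):
    return MTT_SPLIT_MODE[(mtt_split_cu_vertical_flag, mtt_split_cu_binary_flag)]

print(mtt_split_mode(1, 0))   # -> SPLIT_TT_VER
```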
  • FIG. 9 shows an example in which a CTU is divided into multiple coding units (CUs) based on a quadtree and nested multi-type tree structure according to an embodiment of the present specification.
  • the CU may correspond to a coding block (CB).
  • the CU may include a coding block of luma samples and two coding blocks of corresponding chroma samples.
  • the size of the CU may be as large as the CTU, or may be configured in 4x4 units in luma sample units. For example, in the case of a 4:2:0 color format (or chroma format), the maximum chroma CB size may be 64x64 and the minimum chroma CB size may be 2x2.
  • the maximum allowed luma TB size may be 64x64, and the maximum allowed chroma TB size may be 32x32. If the width or height of a CB divided according to the tree structure is greater than the maximum transform width or height, the CB may be automatically (or implicitly) divided until the horizontal and vertical TB size limitations are satisfied.
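  • The implicit transform-block splitting mentioned above can be sketched as a recursive halving of any coding block that exceeds the maximum transform size, as illustrated below in Python. The recursion is a simplification, not the normative splitting process.

```python
# Illustrative sketch: a coding block larger than the maximum transform size is
# halved in the offending direction(s) until every piece fits the TB size limit.

def implicit_tb_split(width, height, max_tb_size=64):
    if width <= max_tb_size and height <= max_tb_size:
        return [(width, height)]
    new_w = width // 2 if width > max_tb_size else width
    new_h = height // 2 if height > max_tb_size else height
    blocks = []
    for _ in range((width // new_w) * (height // new_h)):
        blocks.extend(implicit_tb_split(new_w, new_h, max_tb_size))
    return blocks

print(implicit_tb_split(128, 128))   # -> four 64x64 transform blocks
```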
  • For the quadtree with nested multi-type tree coding structure, the following parameters may be defined and signaled in a sequence parameter set (SPS): CTU size (the root node size of a quaternary tree), MinQTSize, MaxBtSize, MaxTtSize, MaxMttDepth, MinBtSize, and MinTtSize. As an example, the CTU size may be set to 128x128 luma samples and two corresponding 64x64 blocks of chroma samples (in 4:2:0 chroma format), MinQTSize may be set to 16x16, MaxBtSize to 128x128, MaxTtSize to 64x64, MinBtSize and MinTtSize (for both width and height) to 4x4, and MaxMttDepth to 4.
  • Quadtree splitting may be applied to CTU to generate quadtree leaf nodes.
  • the quadtree leaf node may be referred to as a leaf QT node.
  • Quadtree leaf nodes may have a size from 16x16 (i.e., MinQTSize) to 128x128 (i.e., the CTU size). If the leaf QT node is 128x128, it may not be additionally divided by the binary tree/ternary tree. This is because, even in this case, the size exceeds MaxBtSize and MaxTtSize (i.e., 64x64). In other cases, the leaf QT node may be further divided by the multi-type tree. Therefore, a leaf QT node is the root node for the multi-type tree, and a leaf QT node may have a multi-type tree depth (mttDepth) value of 0.
  • When a multi-type tree node has a depth equal to MaxMttDepth (e.g., 4), further splitting may not be considered. If the width of the multi-type tree node is equal to MinBtSize and less than or equal to 2xMinTtSize, additional horizontal splitting may not be considered. Likewise, if the height of the multi-type tree node is equal to MinBtSize and less than or equal to 2xMinTtSize, additional vertical splitting may not be considered.
  • TT splitting may be prohibited in certain cases. For example, if the width or height of a luma coding block is greater than 64, TT splitting may be prohibited, as shown in FIG. 10. Also, for example, if the width or height of a chroma coding block is greater than 32, TT splitting may be prohibited.
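  • The splitting restrictions listed above (MaxMttDepth, MinBtSize/MinTtSize, and the TT prohibition for large luma blocks) can be gathered into a single check, sketched below. The parameter values follow the example in the text, and the conditions are simplified; they are not the normative constraint set.

```python
# Simplified collection of the multi-type tree splitting restrictions above.
# Parameter values follow the example in the text; conditions are illustrative.

MAX_MTT_DEPTH = 4
MIN_BT_SIZE = 4
MIN_TT_SIZE = 4
MAX_TT_LUMA_SIZE = 64      # TT prohibited when luma width or height exceeds 64

def allowed_splits(width, height, mtt_depth):
    """Return the multi-type tree split modes allowed for a luma block."""
    if mtt_depth >= MAX_MTT_DEPTH:
        return []
    splits = []
    if width > MIN_BT_SIZE:
        splits.append("SPLIT_BT_VER")
    if height > MIN_BT_SIZE:
        splits.append("SPLIT_BT_HOR")
    tt_allowed = width <= MAX_TT_LUMA_SIZE and height <= MAX_TT_LUMA_SIZE
    if tt_allowed and width > 2 * MIN_TT_SIZE:
        splits.append("SPLIT_TT_VER")
    if tt_allowed and height > 2 * MIN_TT_SIZE:
        splits.append("SPLIT_TT_HOR")
    return splits

print(allowed_splits(128, 64, mtt_depth=0))   # TT disallowed: width exceeds 64
print(allowed_splits(32, 32, mtt_depth=0))    # all four split modes allowed
```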
  • FIG. 10 shows an example of a case where TT (ternary tree) partitioning is limited for a 128x128 coding block according to an embodiment of the present specification.
  • the coding tree scheme may support luma and chroma blocks having a separate block tree structure.
  • For P and B slices, the luma and chroma CTBs in one CTU can be restricted to have the same coding tree structure.
  • However, for I slices, luma and chroma blocks may have separate block tree structures from each other. If the individual block tree mode is applied, the luma CTB may be divided into CUs based on a specific coding tree structure, and the chroma CTB may be divided into chroma CUs based on another coding tree structure. This may mean that a CU in an I slice is composed of a coding block of the luma component or coding blocks of two chroma components, and a CU of a P or B slice is composed of blocks of three color components.
  • the quadtree coding tree structure with a multi-type tree has been described, but the structure in which the CU is divided is not limited to this.
  • the BT structure and the TT structure may be interpreted as a concept included in a multiple partitioning tree (MPT) structure, and a CU may be divided through a QT structure and an MPT structure.
  • MPT multiple partitioning tree
  • In one example in which a CU is divided through a QT structure and an MPT structure, the splitting structure may be determined by signaling a syntax element (e.g., MPT_split_type) including information on how many blocks the leaf node of the QT structure is divided into and a syntax element (e.g., MPT_split_mode) including information on which direction the leaf node of the QT structure is divided in.
  • In another example, the CU may be divided in a different way from the QT structure, BT structure, or TT structure. That is, unlike the QT structure in which a CU of a lower depth is divided into 1/4 the size of a CU of an upper depth, the BT structure in which a CU of a lower depth is divided into 1/2 the size of a CU of an upper depth, or the TT structure in which a CU of a lower depth is divided into 1/4 or 1/2 the size of a CU of an upper depth, a CU of a lower depth may in some cases be divided into 1/5, 1/3, 3/8, 3/5, 2/3, or 5/8 the size of a CU of an upper depth, and the method in which the CU is divided is not limited thereto.
  • The tree node block may be restricted so that all samples of all coded CUs are located within the picture boundaries. In this case, for example, the division rule as shown in Table 2 below may be applied.
  • FIG. 11 illustrates examples of redundant partition patterns that may occur in binary tree partition and ternary tree partition according to an embodiment of the present disclosure.
  • the quadtree coding block structure with a multi-type tree can provide a very flexible block partitioning structure. Due to the division types supported in the multitype tree, different division patterns can potentially result in the same coding block structure in some cases. By limiting the occurrence of such redundant partition patterns, the data amount of partitioning information can be reduced.
  • For example, two levels of consecutive binary splits in one direction have the same coding block structure as a binary split of the center partition after a ternary split.
  • In this case, binary tree splitting of the center partition of a ternary tree split is prohibited.
  • This prohibition can be applied to CUs of all pictures.
  • Signaling of the corresponding syntax elements can be modified to reflect this prohibited case, thereby reducing the number of bits signaled for partitioning. For example, as in the example shown in FIG. 11, when binary tree splitting of the center partition of a CU is prohibited, the mtt_split_cu_binary_flag syntax element indicating whether the split is a binary split or a ternary split is not signaled, and its value is inferred by the decoder to be zero.
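  • A sketch of the syntax-element inference just described: when a binary split is prohibited (for example, for the center partition of a ternary split), mtt_split_cu_binary_flag is not read from the bitstream and is inferred to be 0. The bit-reader class and predicate name are assumptions for illustration.

```python
# Sketch of inferring mtt_split_cu_binary_flag when a binary split is prohibited.

def parse_mtt_split_cu_binary_flag(reader, binary_split_allowed):
    """Read the flag only when a binary split is actually allowed; otherwise infer 0."""
    if binary_split_allowed:
        return reader.read_bit()
    return 0          # inferred: only a ternary split remains possible

class BitReader:      # assumed, minimal stand-in for a bitstream reader
    def __init__(self, bits):
        self.bits = iter(bits)
    def read_bit(self):
        return next(self.bits)

print(parse_mtt_split_cu_binary_flag(BitReader([1]), binary_split_allowed=True))   # 1 (read)
print(parse_mtt_split_cu_binary_flag(BitReader([]), binary_split_allowed=False))   # 0 (inferred)
```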
  • inter prediction described below may be performed by the inter prediction unit 180 of the encoding apparatus 100 of FIG. 2 or the inter prediction unit 260 of the decoding apparatus 200 of FIG. 3.
  • the prediction unit of the encoding apparatus 100 / decoding apparatus 200 may perform inter prediction in block units to derive prediction samples.
  • Inter prediction may represent prediction derived in a manner dependent on data elements (e.g., sample values or motion information) of picture(s) other than the current picture.
  • motion information of the current block may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction type (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
  • the temporal neighboring block may be referred to by a name such as a collocated reference block or a colCU, and a reference picture including a temporal neighboring block may also be called a collocated picture (colPic).
  • a motion information candidate list may be constructed based on neighboring blocks of the current block, and a flag indicating which candidate is selected (used) to derive a motion vector and/or reference picture index of the current block, or Index information may be signaled.
  • Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, motion information of a current block may be the same as motion information of a selected neighboring block. In the skip mode, unlike the merge mode, the residual signal may not be transmitted.
  • a motion vector of a selected neighboring block is used as a motion vector predictor, and a motion vector difference can be signaled.
  • the motion vector of the current block may be derived using the sum of the motion vector predictor and the motion vector difference.
  • FIGS. 12 and 13 illustrate a video/video encoding procedure based on inter prediction according to an embodiment of the present specification and an inter prediction unit in an encoding device.
  • the encoding apparatus 100 performs inter prediction on the current block (S1210).
  • the encoding apparatus 100 may derive the inter prediction mode and motion information of the current block, and generate prediction samples of the current block.
  • the procedure for determining the inter prediction mode, deriving motion information, and generating prediction samples may be performed simultaneously, or one procedure may be performed before the other procedure.
  • the inter prediction unit 180 of the encoding apparatus 100 may include a prediction mode determination unit 181, a motion information derivation unit 182, and a prediction sample derivation unit 183. The prediction mode for the current block may be determined by the prediction mode determination unit 181, the motion information of the current block may be derived by the motion information derivation unit 182, and the prediction samples of the current block may be derived by the prediction sample derivation unit 183.
  • the inter prediction unit 180 of the encoding apparatus 100 may search for a block similar to the current block in a certain area (search area) of reference pictures through motion estimation, and may derive a reference block whose difference from the current block is a minimum or is less than or equal to a certain criterion.
  • a reference picture index indicating a reference picture in which the reference block is located may be derived, and a motion vector may be derived based on a position difference between the reference block and the current block.
  • the encoding apparatus 100 may determine a mode applied to a current block among various prediction modes.
  • the encoding apparatus 100 may compare the RD cost for various prediction modes and determine the optimal prediction mode for the current block.
  • the encoding apparatus 100 configures a merge candidate list, which will be described later, and may derive, among the reference blocks indicated by the merge candidates included in the merge candidate list, a reference block whose difference from the current block is a minimum or is less than or equal to a certain criterion.
  • a merge candidate associated with the derived reference block is selected, and merge index information indicating the selected merge candidate may be generated and signaled to the decoding apparatus 200.
  • Motion information of the current block may be derived using the motion information of the selected merge candidate.
  • as another example, the encoding apparatus 100 configures an (A)MVP candidate list, which will be described later, and the motion vector of an MVP candidate selected from among the motion vector predictor (MVP) candidates included in the (A)MVP candidate list can be used as the MVP of the current block.
  • in this case, a motion vector indicating the reference block derived by the above-described motion estimation may be used as the motion vector of the current block, and among the MVP candidates, the MVP candidate having the motion vector with the smallest difference from the motion vector of the current block may be the selected MVP candidate.
  • a motion vector difference (MVD), which is a difference obtained by subtracting the MVP from the motion vector of the current block, may be derived.
  • information about the MVD may be signaled to the decoding device 200.
  • the value of the reference picture index may be configured as reference picture index information and separately signaled to the decoding apparatus 200.
  • the encoding apparatus 100 may derive residual samples based on the predicted samples (S1220). The encoding apparatus 100 may derive residual samples through comparison of original samples and prediction samples of the current block.
  • the encoding apparatus 100 encodes video information including prediction information and residual information (S1230).
  • the encoding apparatus 100 may output encoded image information in the form of a bitstream.
  • the prediction information is information related to a prediction procedure and may include prediction mode information (eg, skip flag, merge flag, or mode index) and motion information.
  • the motion information may include candidate selection information (eg, merge index, mvp flag, or mvp index) that is information for deriving a motion vector.
  • the motion information may include information on the MVD and/or reference picture index information.
  • the motion information may include information indicating whether L0 prediction, L1 prediction, or bi prediction is applied.
  • the residual information is information about residual samples.
  • the residual information may include information about quantized transform coefficients for residual samples.
  • the output bitstream may be stored in a (digital) storage medium and transmitted to a decoding device, or may be delivered to a decoding device through a network.
  • the encoding apparatus may generate a reconstructed picture (including reconstructed samples and reconstructed blocks) based on the prediction samples and the residual samples. This is because the encoding apparatus 100 derives the same prediction results as those performed by the decoding apparatus 200, and thus coding efficiency can be increased. Accordingly, the encoding apparatus 100 may store the reconstructed picture (or reconstructed samples, reconstructed blocks) in a memory and use it as a reference picture for inter prediction. As described above, an in-loop filtering procedure may be further applied to the reconstructed picture.
  • FIG. 14 and 15 illustrate a video/image decoding procedure based on inter prediction according to an embodiment of the present specification and an inter prediction unit in a decoding apparatus.
  • the decoding apparatus 200 may perform an operation corresponding to an operation performed by the encoding apparatus 100.
  • the decoding apparatus 200 may perform prediction on the current block and derive prediction samples based on the received prediction information.
  • the decoding apparatus 200 may determine a prediction mode for the current block based on the received prediction information (S1410). The decoding apparatus 200 may determine which inter prediction mode is applied to the current block based on the prediction mode information in the prediction information.
  • for example, the decoding apparatus 200 may determine whether the merge mode or the (A)MVP mode is applied to the current block based on the merge flag. Alternatively, the decoding apparatus 200 may select one of various inter prediction mode candidates based on the mode index.
  • the inter prediction mode candidates may include skip mode, merge mode and/or (A) MVP mode, or various inter prediction modes described below.
  • the decoding apparatus 200 derives motion information of the current block based on the determined inter prediction mode (S1420). For example, when the skip mode or the merge mode is applied to the current block, the decoding apparatus 200 may configure a merge candidate list, which will be described later, and select one of the merge candidates included in the merge candidate list. The selection of the merge candidate may be performed based on the merge index. Motion information of the current block may be derived from the motion information of the selected merge candidate; that is, the motion information of the selected merge candidate may be used as the motion information of the current block.
  • as another example, the decoding apparatus 200 configures an (A)MVP candidate list, which will be described later, and the motion vector of an MVP candidate selected from among the MVP candidates included in the (A)MVP candidate list can be used as the MVP of the current block.
  • the selection of the MVP may be performed based on the selection information (MVP flag or MVP index) described above.
  • the decoding apparatus 200 may derive the MVD of the current block based on information about the MVD, and may derive a motion vector of the current block based on the MVP and MVD of the current block.
  • the decoding apparatus 200 may derive the reference picture index of the current block based on the reference picture index information.
  • the picture indicated by the reference picture index in the reference picture list for the current block may be derived as a reference picture referenced for inter prediction of the current block.
  • motion information of the current block may be derived without configuring a candidate list, and in this case, motion information of the current block may be derived according to a procedure disclosed in the prediction mode described below.
  • the candidate list configuration as described above may be omitted.
  • the decoding apparatus 200 may generate prediction samples for the current block based on the motion information of the current block (S1430). In this case, the decoding apparatus 200 may derive a reference picture based on the reference picture index of the current block, and derive predictive samples of the current block using samples of the reference block indicated by the motion vector of the current block on the reference picture. . In this case, as described later, a prediction sample filtering procedure for all or part of the prediction samples of the current block may be further performed depending on the case.
  • the inter prediction unit 260 of the decoding apparatus 200 may include a prediction mode determination unit 261, a motion information derivation unit 262, and a prediction sample derivation unit 263. The prediction mode for the current block is determined based on the prediction mode information received by the prediction mode determination unit 261, the motion information (motion vector and/or reference picture index) of the current block is derived based on the information about the motion information received by the motion information derivation unit 262, and the prediction samples of the current block may be derived by the prediction sample derivation unit 263.
  • the decoding apparatus 200 generates residual samples for the current block based on the received residual information (S1440).
  • the decoding apparatus 200 may generate reconstructed samples for the current block based on the predicted samples and residual samples, and generate a reconstructed picture based on the reconstructed samples. (S1450). As described above, an in-loop filtering procedure may be further applied to the reconstructed picture.
  • the inter prediction procedure may include a step of determining an inter prediction mode, a step of deriving motion information according to the determined prediction mode, and performing a prediction (generating a predictive sample) based on the derived motion information.
  • inter prediction modes may be used for prediction of a current block in a picture.
  • various modes such as merge mode, skip mode, MVP mode, and affine mode may be used.
  • Decoder side motion vector refinement (DMVR) mode, adaptive motion vector resolution (AMVR) mode, and the like may be further used as ancillary modes.
  • the affine mode may also be called an affine motion prediction mode.
  • the MVP mode may also be called AMVP (advanced motion vector prediction) mode.
  • Prediction mode information indicating the inter prediction mode of the current block may be signaled from the encoding device to the decoding device 200.
  • the prediction mode information may be included in the bitstream and received by the decoding apparatus 200.
  • the prediction mode information may include index information indicating one of a plurality of candidate modes.
  • the inter prediction mode may be indicated through hierarchical signaling of flag information.
  • the prediction mode information may include one or more flags.
  • for example, the encoding apparatus 100 may indicate whether the skip mode is applied by signaling a skip flag, indicate whether the merge mode is applied by signaling a merge flag when the skip mode is not applied, and indicate that the MVP mode is applied or further signal a flag for additional classification when the merge mode is not applied.
  • the affine mode may be signaled as an independent mode, or may be signaled as a mode dependent on a merge mode or an MVP mode.
  • the affine mode may be configured as one candidate of the merge candidate list or the MVP candidate list, as described later.
  • the encoding apparatus 100 or the decoding apparatus 200 may perform inter prediction using motion information of a current block.
  • the encoding apparatus 100 may derive optimal motion information for the current block through a motion estimation procedure.
  • the encoding apparatus 100 may search for a similar reference block with a high correlation, in fractional pixel units, within a predetermined search range in the reference picture using the original block in the original picture for the current block, and may derive motion information through this.
  • the similarity of the block can be derived based on the difference of phase-based sample values.
  • the similarity of a block may be calculated based on a sum of absolute difference (SAD) between a current block (or a template of a current block) and a reference block (or a template of a reference block).
  • motion information may be derived based on a reference block having the smallest SAD in the search area.
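  • As an illustration of the SAD-based similarity measure described above, the following sketch (a hypothetical helper, not part of any standardized decoder) computes the SAD between the current block and each candidate reference block and keeps the candidate with the smallest cost.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D sample arrays."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_reference(current_block, candidate_blocks):
    """Return the index of the candidate reference block with the smallest SAD."""
    costs = [sad(current_block, cand) for cand in candidate_blocks]
    return min(range(len(costs)), key=costs.__getitem__)
```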
  • the derived motion information may be signaled to the decoding apparatus according to various methods based on the inter prediction mode.
  • for example, when the merge mode is applied, the encoding apparatus 100 may indicate the motion information of the current prediction block by transmitting flag information indicating that the merge mode is used and a merge index indicating which neighboring prediction block is used.
  • the encoding apparatus 100 must search a merge candidate block used to derive motion information of a current prediction block in order to perform a merge mode. For example, up to five merge candidate blocks may be used, but the present specification is not limited thereto. In addition, the maximum number of merge candidate blocks may be transmitted in the slice header, and the present specification is not limited thereto. After finding the merge candidate blocks, the encoding apparatus 100 may generate a merge candidate list, and may select a merge candidate block having the smallest cost as a final merge candidate block.
  • This specification provides various embodiments of a merge candidate block constituting a merge candidate list.
  • the merge candidate list may use 5 merge candidate blocks, for example. For example, four spatial merge candidates and one temporal merge candidate can be used.
  • FIG. 16 shows an example of a spatial merge candidate configuration for a current block according to an embodiment of the present specification.
  • as spatial merge candidates, at least one of a left neighboring block (A1), a bottom-left neighboring block (A0), a top-right neighboring block (B0), a top neighboring block (B1), and a top-left neighboring block (B2) may be used.
  • the merge candidate list for the current block may be configured based on the procedure shown in FIG. 17.
  • FIG 17 shows an example of a flowchart for configuring a merge candidate list according to an embodiment of the present specification.
  • the coding apparatus searches for spatial neighboring blocks of the current block and inserts the derived spatial merge candidates into the merge candidate list (S1710).
  • the spatial neighboring blocks may include a block around the bottom-left corner of the current block, a left neighboring block, a block around the top-right corner, a top neighboring block, and a block around the top-left corner of the current block.
  • additional neighboring blocks such as a right neighboring block, a bottom neighboring block, and a bottom-right neighboring block may be further used as the spatial neighboring blocks.
  • the coding apparatus may detect available blocks by searching for spatial neighboring blocks based on priority, and derive motion information of the detected blocks as spatial merge candidates.
  • for example, the encoding apparatus 100 or the decoding apparatus 200 may search the five blocks shown in FIG. 16 in the order of A1, B1, B0, A0, B2 and configure a merge candidate list by sequentially indexing the available candidates.
  • the coding apparatus searches for temporal neighboring blocks of the current block and inserts the derived temporal merge candidate into the merge candidate list (S1720).
  • the temporal neighboring block may be located on a reference picture that is a different picture from the current picture in which the current block is located.
  • the reference picture in which the temporal neighboring block is located may be referred to as a collocated picture or a col picture.
  • the temporal neighboring block may be searched in the order of the block around the bottom-right corner of the co-located block for the current block on the col picture and the bottom-right center block of the co-located block. Meanwhile, when motion data compression is applied, specific motion information may be stored as representative motion information for each predetermined storage unit in the col picture.
  • the predetermined storage unit may be predetermined as, for example, 16x16 sample units or 8x8 sample units, or size information for the predetermined storage unit may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
  • when motion data compression is applied, the motion information of the temporal neighboring block may be replaced with the representative motion information of the predetermined storage unit in which the temporal neighboring block is located.
  • in this case, a temporal merge candidate may be derived based on the motion information of the prediction block located at a corrected position. For example, if the predetermined storage unit is 2^n x 2^n sample units and the coordinates of the temporal neighboring block are (xTnb, yTnb), the motion information of the prediction block located at the corrected position ((xTnb >> n) << n, (yTnb >> n) << n) can be used for the temporal merge candidate.
  • specifically, if the predetermined storage unit is 16x16 sample units and the coordinates of the temporal neighboring block are (xTnb, yTnb), the motion information of the prediction block located at the corrected position ((xTnb >> 4) << 4, (yTnb >> 4) << 4) can be used for the temporal merge candidate.
  • or, if the predetermined storage unit is 8x8 sample units and the coordinates of the temporal neighboring block are (xTnb, yTnb), the motion information of the prediction block located at the corrected position ((xTnb >> 3) << 3, (yTnb >> 3) << 3) may be used for the temporal merge candidate.
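  • A minimal sketch of the position rounding used for motion data compression described above (the function name is illustrative; n would be 4 for a 16x16 storage unit and 3 for an 8x8 storage unit):

```python
def representative_position(x_tnb: int, y_tnb: int, n: int):
    """Round a temporal neighboring block position down to its 2^n x 2^n storage-unit grid.

    The corrected position is ((xTnb >> n) << n, (yTnb >> n) << n), i.e. the
    top-left sample of the storage unit covering (xTnb, yTnb).
    """
    return (x_tnb >> n) << n, (y_tnb >> n) << n

# e.g. 16x16 storage unit (n = 4): (35, 77) -> (32, 64)
print(representative_position(35, 77, 4))
```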
  • the coding apparatus may check whether the number of current merge candidates is smaller than the maximum number of merge candidates (S1730).
  • the maximum number of merge candidates may be predefined or signaled from the encoding device 100 to the decoding device 200.
  • the encoding apparatus 100 may generate information on the number of maximum merge candidates, encode, and transmit the encoded information to the decoding apparatus 200 in the form of a bitstream.
  • when the number of current merge candidates is not smaller than the maximum number of merge candidates, the subsequent candidate addition process may not proceed.
  • when the number of current merge candidates is smaller than the maximum number of merge candidates, the coding apparatus inserts an additional merge candidate into the merge candidate list (S1740).
  • the additional merge candidates may include, for example, an ATMVP (adaptive temporal motion vector prediction) candidate, combined bi-predictive merge candidates (if the slice type of the current slice is type B), and/or zero-vector merge candidates.
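  • The merge candidate list construction of FIG. 17 can be sketched as follows. This is a simplified illustration only: the candidate inputs are assumed to be already derived, and the availability checks and pruning of an actual codec are omitted.

```python
def build_merge_candidate_list(spatial_cands, temporal_cand, max_num_merge_cand):
    """Simplified merge list construction: spatial -> temporal -> additional candidates.

    spatial_cands     : motion info of A1, B1, B0, A0, B2 in search order (None if unavailable)
    temporal_cand     : motion info of the temporal (collocated) candidate, or None
    max_num_merge_cand: maximum number of merge candidates (e.g. 5)
    """
    merge_list = []

    # S1710: insert available spatial candidates, skipping duplicates
    for cand in spatial_cands:
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)

    # S1720: insert the temporal candidate if the list is not yet full
    if temporal_cand is not None and len(merge_list) < max_num_merge_cand:
        merge_list.append(temporal_cand)

    # S1730/S1740: fill the remaining slots with additional candidates
    # (only zero-vector candidates are used here for simplicity)
    while len(merge_list) < max_num_merge_cand:
        merge_list.append({"mv": (0, 0), "ref_idx": 0})

    return merge_list
```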
  • FIG. 18 shows an example of a flowchart for constructing a motion vector predictor (MVP) candidate list.
  • when the MVP mode is applied, a motion vector predictor (MVP) candidate list may be generated using a motion vector of a reconstructed spatial neighboring block (e.g., the neighboring blocks of FIG. 16) and/or a motion vector corresponding to a temporal neighboring block (or Col block). That is, the motion vector of the reconstructed spatial neighboring block and/or the motion vector corresponding to the temporal neighboring block may be used as motion vector predictor candidates.
  • the prediction information may include selection information (eg, an MVP flag or an MVP index) indicating an optimal motion vector predictor candidate selected from among motion vector predictor candidates included in the list.
  • the prediction unit may select the motion vector predictor of the current block from among the motion vector predictor candidates included in the motion vector predictor candidate list using the selection information.
  • the prediction unit of the encoding apparatus 100 may obtain a motion vector difference (MVD) between a motion vector of a current block and a motion vector predictor, encode it, and output it in the form of a bitstream. That is, the MVD can be obtained by subtracting the motion vector predictor from the motion vector of the current block.
  • the prediction unit of the decoding apparatus may obtain a motion vector difference included in the information about the prediction, and derive the motion vector of the current block through addition of the motion vector difference and the motion vector predictor.
  • the prediction unit of the decoding apparatus may obtain or derive a reference picture index indicating the reference picture from the information on the prediction.
  • the motion vector predictor candidate list may be configured as shown in FIG. 18.
  • the coding apparatus searches for a spatial candidate block for motion vector prediction and inserts it into the prediction candidate list (S1810).
  • the coding apparatus may search for neighboring blocks according to a predetermined search order, and add information of neighboring blocks satisfying the condition for the spatial candidate block to the prediction candidate list (MVP candidate list).
  • after constructing the spatial candidate block list, the coding apparatus compares the number of spatial candidates included in the prediction candidate list with a preset reference number (e.g., 2) (S1820). If the number of spatial candidates included in the prediction candidate list is greater than or equal to the reference number (e.g., 2), the coding apparatus may end the construction of the prediction candidate list.
  • however, if the number of spatial candidates included in the prediction candidate list is less than the reference number, the coding apparatus searches for a temporal candidate block and inserts it into the prediction candidate list (S1830), and if the temporal candidate block is not available, adds a zero motion vector to the prediction candidate list (S1840).
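  • The MVP candidate list construction of FIG. 18 can be sketched in the same spirit (a simplified illustration; the spatial and temporal candidates are assumed to be already derived, and two candidates are assumed as the reference number):

```python
def build_mvp_candidate_list(spatial_mvp_cands, temporal_mvp_cand, num_mvp_cand=2):
    """Simplified MVP list construction following the flow of FIG. 18."""
    mvp_list = list(spatial_mvp_cands[:num_mvp_cand])       # S1810: spatial candidates

    if len(mvp_list) < num_mvp_cand:                         # S1820: list not yet full
        if temporal_mvp_cand is not None:                    # S1830: temporal candidate
            mvp_list.append(temporal_mvp_cand)
        while len(mvp_list) < num_mvp_cand:                  # S1840: zero motion vector
            mvp_list.append((0, 0))

    return mvp_list[:num_mvp_cand]
```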
  • the predicted block for the current block may be derived based on the motion information derived according to the prediction mode.
  • the predicted block may include predictive samples (predictive sample array) of the current block.
  • when the motion vector of the current block indicates a fractional sample unit, an interpolation procedure may be performed, through which the prediction samples of the current block may be derived based on reference samples in fractional sample units in the reference picture.
  • prediction samples may be generated based on a motion vector per sample/subblock.
  • when bi-prediction is applied, the final prediction samples can be derived through a (phase-wise) weighted sum of the prediction samples derived based on the first-direction prediction (e.g., L0 prediction) and the prediction samples derived based on the second-direction prediction (e.g., L1 prediction).
  • reconstruction samples and reconstruction pictures may be generated based on the derived prediction samples, and then procedures such as in-loop filtering may be performed.
  • MVP may be applied.
  • the merge with MVD (MMVD) technique can increase accuracy by adjusting the size and direction of the motion vector of a candidate selected from among the candidates configured by the merge candidate list construction method.
  • the coding apparatus may determine a specific candidate from the merge candidate list as a base candidate. When the number of available base candidates is two, the first candidate and the second candidate in the list can be used as base candidates.
  • the encoding apparatus 100 may transmit information on the selected base candidate by signaling a base candidate index.
  • the number of base candidates may be variously set, and if the number of base candidates is 1, the base candidate index may not be used. Table 3 below shows an example of the base candidate index.
  • if the base candidate index is 0, the motion vector corresponding to the first candidate in the currently configured merge candidate list (or MVP candidate list) is determined as the base candidate, and if the base candidate index is 1, the motion vector corresponding to the second candidate in the currently configured merge candidate list (or MVP candidate list) may be determined as the base candidate.
  • the encoding apparatus 100 may refine the motion vector corresponding to the base candidate by signaling the size (MMVD length) and direction (MMVD code) of the MVD applied to it, where the length index (MMVD length index) indicates the size of the MVD applied to the motion vector and the direction index (MMVD code index) indicates the direction of the MVD applied to the motion vector.
  • Table 4 below shows the size of the MVD (MMVD offset) according to the length index (MMVD length index), and Table 5 shows the direction of the MVD according to the direction index (MMVD code index).
  • FIG. 19 shows an example of an MMVD search process according to an embodiment of the present specification.
  • a motion search method in MMVD may be expressed as shown in FIG. 19.
  • a motion vector in the L1 direction may be derived using a mirroring scheme for the L0 motion vector.
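  • A minimal sketch of how the base candidate, the MMVD length index, and the MMVD direction index could be combined is shown below. The offset and sign tables here are only example values standing in for Table 4 and Table 5, motion vectors are assumed to be stored in 1/4-pel units, and the L1 mirroring is reduced to a simple sign inversion.

```python
# Example tables standing in for the MMVD length table (Table 4) and direction table (Table 5)
MMVD_OFFSET = [1, 2, 4, 8, 16, 32]                 # in 1/4-pel units (1/4-pel .. 8-pel)
MMVD_SIGN = [(+1, 0), (-1, 0), (0, +1), (0, -1)]   # (x sign, y sign)

def apply_mmvd(base_mv_l0, length_idx, direction_idx, mirror_to_l1=True):
    """Refine an L0 base motion vector with an MMVD offset and optionally mirror it to L1."""
    sign_x, sign_y = MMVD_SIGN[direction_idx]
    offset = MMVD_OFFSET[length_idx]
    mv_l0 = (base_mv_l0[0] + sign_x * offset, base_mv_l0[1] + sign_y * offset)

    # Simplified mirroring: the L1 motion vector is derived by inverting the sign of the
    # refined L0 motion vector (an actual codec would also account for POC distances).
    mv_l1 = (-mv_l0[0], -mv_l0[1]) if mirror_to_l1 else None
    return mv_l0, mv_l1
```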
  • the embodiments herein provide a method and apparatus for efficiently applying the MMVD technique.
  • the embodiment of the present specification allows the distance table to be adaptively selected in the MMVD using the imv_idx value.
  • Table 6 shows a table of MMVD offset values for the MMVD length index according to this embodiment.
  • different MMVD offsets are allocated for each MMVD length index according to imv_idx. That is, in Case 1, where imv_idx is 0, the base motion vector may be adjusted by a distance of 1/4-pel to 2-pel. In Case 2, where imv_idx is 1, the base motion vector may be adjusted by a distance of 1/2-pel to 4-pel. Finally, in Case 3, where imv_idx is 2, the base motion vector may be adjusted by a distance of 1-pel to 8-pel.
  • the number of candidates and candidate values in the distance table described in this specification are only examples, and other numbers or different values may be used.
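  • In terms of the three cases described for Table 6, the adaptive selection of the distance table by the AMVR index can be sketched as below (four offset candidates per case are assumed; values are in pel):

```python
# MMVD offset candidates (in pel) selected by the AMVR index imv_idx, per the Table 6 cases
MMVD_TABLE_BY_IMV = {
    0: [0.25, 0.5, 1, 2],   # Case 1: 1/4-pel .. 2-pel
    1: [0.5, 1, 2, 4],      # Case 2: 1/2-pel .. 4-pel
    2: [1, 2, 4, 8],        # Case 3: 1-pel .. 8-pel
}

def mmvd_offset_from_amvr(imv_idx, mmvd_length_idx):
    """Select the MMVD offset (in pel) from the distance table chosen by the AMVR index."""
    return MMVD_TABLE_BY_IMV[imv_idx][mmvd_length_idx]

# e.g. imv_idx == 1 and MMVD length index 2 -> 2-pel offset
print(mmvd_offset_from_amvr(1, 2))
```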
  • the AMVR index may be determined based on the resolution of a motion vector difference (MVD) applied to a motion vector of a neighboring block associated with the merge index.
  • the AMVR index may be determined based on AMVR information (AMVR index) of spatial or temporal neighboring blocks used in the process of generating a merge or MVP candidate list of the current block.
  • the AMVR mode is applied in AMVP and is not used in the merge/skip mode. Therefore, in order to apply the above-mentioned AMVR-based length table determination method, the AMVR index must be separately set in the merge/skip mode. That is, when the current block is in the merge/skip mode, the AMVR index (imv_idx) of the neighboring block is set and stored so that the AMVR can be applied when constructing candidates for the MMVD.
  • since the coding apparatus performs context modeling using the imv_idx of the left or upper neighboring block in the parsing process, in order to prevent the stored imv_idx of a neighboring block from affecting the parsing process in the decoding process of a block to which the merge/skip mode is applied, the AMVR index of a block to which MMVD is applied is distinguished from the AMVR index of a block to which AMVP is applied by naming it as a separate syntax element (e.g., imv_idc).
  • the following embodiment provides a method of storing an AMVR index (imv_idc) in the process of constructing an MVP candidate list for a block to which merge/skip mode is applied.
  • in the MMVD, the base motion vector corresponding to a candidate selected from among the MVP candidates configured in the decoding process of a block to which the merge/skip mode is applied is adjusted, and the AMVR index can be set as follows according to the characteristics of the adjacent blocks (the resolution of the MVD, or the AMVR index) in the candidate list configuration process.
  • the AMVR index of the adjacent blocks is used.
  • a default value (eg 0) is used.
  • HMVP is a method of using prediction information (motion vector, reference picture index) of another block that has already been decoded (reconstructed) in the current picture as information for prediction of the current block, and an HMVP candidate may be added to the merge candidate list after the spatial candidates and the temporal candidate.
  • when the candidates have the same AMVR index, the corresponding AMVR index is used, and in other cases, the larger AMVR index value is used.
  • FIG 20 shows an example of triangular prediction units according to an embodiment of the present specification.
  • the AMVR index value of adjacent blocks can be stored so that it is propagated to blocks coded in the merge/skip mode afterwards.
  • in triangular prediction, one CU (coding unit) is divided into two triangular-shaped prediction units (PUs), and each PU can be predicted by uni-prediction.
  • a candidate list for unidirectional prediction may be constructed in a manner similar to a general merge/skip mode.
  • in this case, one CU is composed of two PUs (Cand0, Cand1), so the AMVR index of the corresponding block (CU) can be determined by considering both of the AMVR indexes of the two blocks (PUs).
  • the default value (eg 0) is used.
  • the AMVR index of adjacent blocks can be stored so that the AMVR index value is propagated to the merge/skip mode later.
  • the M/H mode is a technique in which intra prediction and inter prediction are combined in the merge/skip mode, and both an index for the intra mode and an index for the merge mode are signaled.
  • the candidate list is constructed in a similar manner to the normal merge/skip mode, and when the M/H mode is applied, the AMVR index can be set for the corresponding block as follows.
  • the AMVR index of the adjacent block is used.
  • the default value (eg 0) is used.
  • HMVP candidates i) use the AMVR index matching the candidates stored in the HMVP buffer, or ii) always use the default value (eg 0).
  • i) when the L0 candidate (L0 block) and the L1 candidate (L1 block) have the same AMVR index, the common AMVR index is used; otherwise, the default value (e.g., 0) is used. Or, ii) when the L0 candidate (L0 block) and the L1 candidate (L1 block) have the same AMVR index, the common AMVR index is used, and in other cases, the larger AMVR index is used.
  • the AMVR index may be updated based on the MMVD length (offset). Also, the updated AMVR index may be reused as the AMVR index of at least one block processed after the current block.
  • the AMVR index of the base motion vector for MMVD can be derived, and the distance of the motion vector for refinement can be determined by selecting the MMVD length table based on the derived AMVR index.
  • since a distance value such as 1-pel or 4-pel can be selected among the candidates in the table, the AMVR index can be updated based on the length selected in the MMVD process. This is because the MMVD length also plays the role of the AMVR. Accordingly, the AMVR index can be updated according to the selected MMVD length as shown in Table 7 below.
  • when the MMVD length is determined to be 4-pel or 8-pel, since it is highly likely to correspond to the MMVD offset candidate values of Case 3 in Table 6 (when the AMVR index (imv_idc) is 2), the AMVR index can be updated to 2 even if the AMVR index of the base motion vector is 0.
  • when the MMVD distance is determined to be a value greater than 1-pel, since it is highly likely to correspond to the MMVD offset candidate values of Case 2 in Table 6 (when the AMVR index (imv_idc) is 1), the AMVR index can be updated to 1.
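  • The Table 7 style update of the stored AMVR index from the selected MMVD length can be sketched as below (a simplified reading of the two rules above):

```python
def update_amvr_index(imv_idc, mmvd_length_pel):
    """Update the stored AMVR index (imv_idc) according to the selected MMVD length in pel."""
    if mmvd_length_pel >= 4:     # 4-pel or 8-pel: behaves like Case 3 of Table 6 (imv_idc == 2)
        return 2
    if mmvd_length_pel > 1:      # greater than 1-pel: behaves like Case 2 of Table 6 (imv_idc == 1)
        return 1
    return imv_idc               # otherwise keep the AMVR index of the base motion vector
```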
  • when the AMVR index is a first value, the MMVD offset is determined to indicate a value in the first MMVD set, and when the AMVR index is a second value, the MMVD offset is determined to indicate a value in the second MMVD set, where the first MMVD set has a value greater than the MMVD offset of the second MMVD set for the same MMVD length index.
  • the magnitude of the x and y values of the base motion vector of the MMVD may have an effect similar to that of the AMVR of the block in deriving the distance for refinement.
  • the present embodiment provides a method of determining the MMVD length table according to the value of the base motion vector.
  • Table 8 below is another example of a table showing the relationship between the MMVD length index and the MMVD offset values, and shows a method of determining the MMVD length table according to the value of the base motion vector.
  • Case 1 has the following conditions.
  • the distance may be adjusted to a distance of 1/4-pel to 2-pel corresponding to Case 3.
  • the number of candidates and candidate values in the distance table described in this specification are only examples, and it is natural that other numbers or different values may be applied.
  • Case 1 to Case 3 described above may be modified and applied as follows. That is, even if one of the candidates to which bi-directional prediction is applied has a motion vector in units of 4-pel or 1-pel as shown in the following conditions, it may be determined that the condition is satisfied.
  • Case 1 may be as follows.
  • This embodiment may be applied when AMVR is not used, or may be used regardless of whether AMVR is applied.
  • while the MMVD length table is determined according to the AMVR index in the previous embodiment, the MMVD length table can be determined based on the pixel unit of the motion vector as in this embodiment.
  • the range of the MMVD offset applied to the motion vector may be determined based on the resolution of the position coordinates indicated by the AMVR index and the base motion vector. For example, when the AMVR index is 0, the range of the MMVD offset applied to the motion vector for prediction of the current block may be determined based on the position coordinates indicated by the motion vector.
  • the position coordinates may include a horizontal position (x coordinate) and/or a vertical position (y coordinate) from an arbitrary position in the picture or block (eg, the position of the upper left pixel).
  • the size of the current block may be considered in the process of applying the MMVD by reflecting the characteristics of the current block.
  • the table for the MMVD offset value may be determined in consideration of at least one of the AMVR mode, the size of the x, y values of the base motion vector, or the size of the current block.
  • the range of the MMVD offset applied to the motion vector may be determined based on the AMVR index and the size of the current block. For example, the following method may be considered.
  • when the AMVR index (imv_idc) is 0 and w x h > 256 (hereinafter, w corresponds to the width of the current block and h corresponds to the height of the current block), the MMVD length table {2, 4, 8, 16} is used, and otherwise Table 6 or Table 8 is used.
  • the threshold value compared with the block size may be changed, and the width and height may be considered simultaneously, or the product width x height may be considered. It is also natural that the AMVR index and/or the base motion vector can be considered together.
  • One embodiment of the present specification may determine the MMVD candidate set based on the size of the base motion vector of the MMVD (horizontal (x) size and/or vertical (y) size).
  • the magnitude of the x and y values of the base motion vector of the MMVD can be used to derive an effect similar to the AMVR in deriving the distance for MMVD refinement.
  • Table 9 below shows an example in which different distance tables are allocated according to the x and y values of the base motion vector according to an embodiment of the present specification.
  • FIG. 21 illustrates an example of a flowchart for determining a set of MMVD candidates based on the size of a motion vector according to an embodiment of the present specification.
  • when the motion vector MV is greater than the first reference value T1, the first MMVD candidate set corresponding to Case 1 (1-pel, 2-pel, 4-pel, 8-pel) can be used. If the motion vector MV is less than or equal to the first reference value T1 and greater than the second reference value T2, the second MMVD candidate set corresponding to Case 2 (1/2-pel, 1-pel, 2-pel, 4-pel) can be used. If the motion vector MV is less than or equal to the second reference value T2, the third MMVD candidate set corresponding to Case 3 (1/4-pel, 1/2-pel, 1-pel, 2-pel) can be used.
  • the first reference value T1 and the second reference value T2 may be set as follows, and various values may be used depending on the implementation.
  • T1 = 128 (128 (8-pel) when 1/16 precision is applied, 64 (8-pel) when 1/4 precision is applied)
  • T2 = 16 (16 (1-pel) when 1/16 precision is applied, 4 (1-pel) when 1/4 precision is applied)
  • here, L0_x is the horizontal magnitude (x value) of the motion vector for the first prediction direction (L0), L0_y is the vertical magnitude (y value) of the motion vector for the first prediction direction (L0), L1_x is the horizontal magnitude (x value) of the motion vector for the second prediction direction (L1), and L1_y is the vertical magnitude (y value) of the motion vector for the second prediction direction (L1).
  • L0 or L1 motion vectors may be considered and may be expressed as follows.
  • an x or y value of each of the L0 or L1 direction motion vectors may be considered.
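  • The selection of FIG. 21 can be sketched as below. As an assumption, the maximum absolute x/y component over the available L0/L1 base motion vectors is used as the representative magnitude, and vectors are expressed in 1/16-pel units (so T1 = 128 and T2 = 16 correspond to 8-pel and 1-pel).

```python
CASE1 = [1, 2, 4, 8]            # MMVD offset candidates in pel
CASE2 = [0.5, 1, 2, 4]
CASE3 = [0.25, 0.5, 1, 2]

def mmvd_set_from_mv_size(l0_mv=None, l1_mv=None, t1=128, t2=16):
    """Choose the MMVD candidate set from the base motion vector magnitude (FIG. 21 style)."""
    components = [abs(c) for mv in (l0_mv, l1_mv) if mv is not None for c in mv]
    if not components:
        return CASE3
    magnitude = max(components)

    if magnitude > t1:
        return CASE1
    if magnitude > t2:
        return CASE2
    return CASE3
```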
  • FIGS. 22 and 23 show examples of flowcharts for determining the MMVD candidate set based on the size of the motion vector and the block size according to an embodiment of the present specification.
  • An embodiment of the present specification may determine the MMVD candidate set based on the size of the base motion vector of the MMVD (horizontal (x) size and/or vertical (y) size) and block size.
  • Table 10 shows an example in which different distance tables are allocated according to the size and block size of the base motion vector according to the present embodiment. Cases according to the base motion vector and block size may be expressed as shown in FIGS. 22 and 23.
  • for example, when the base motion vector MV is greater than the first reference value T1, if the block size BS is greater than the first block size reference value BS_T1, the first MMVD candidate set corresponding to Case 1 (1-pel, 2-pel, 4-pel, 8-pel) is used, and if the block size BS is less than or equal to the first block size reference value BS_T1, the third MMVD candidate set corresponding to Case 3 (1/4-pel, 1/2-pel, 1-pel, 2-pel) can be used.
  • when the base motion vector MV is less than or equal to the first reference value T1 and greater than the second reference value T2, if the block size BS is greater than the second block size reference value BS_T2, the second MMVD candidate set corresponding to Case 2 (1/2-pel, 1-pel, 2-pel, 4-pel) is used, and if the block size BS is less than or equal to the second block size reference value BS_T2, the third MMVD candidate set corresponding to Case 3 (1/4-pel, 1/2-pel, 1-pel, 2-pel) may be used. If the base motion vector MV is less than or equal to the second reference value T2, the third MMVD candidate set corresponding to Case 3 (1/4-pel, 1/2-pel, 1-pel, 2-pel) can be used.
  • as another example, when the base motion vector MV is greater than the first reference value T1, if the block size BS is greater than the first block size reference value BS_T1, the first MMVD candidate set corresponding to Case 1 (1-pel, 2-pel, 4-pel, 8-pel) is used, and if the block size BS is less than or equal to the first block size reference value BS_T1, the second MMVD candidate set corresponding to Case 2 (1/2-pel, 1-pel, 2-pel, 4-pel) can be used.
  • when the base motion vector MV is less than or equal to the first reference value T1 and greater than the second reference value T2, if the block size BS is greater than the second block size reference value BS_T2, the second MMVD candidate set corresponding to Case 2 (1/2-pel, 1-pel, 2-pel, 4-pel) is used, and if the block size BS is less than or equal to the second block size reference value BS_T2, the third MMVD candidate set corresponding to Case 3 (1/4-pel, 1/2-pel, 1-pel, 2-pel) may be used. If the base motion vector MV is less than or equal to the second reference value T2, the third MMVD candidate set corresponding to Case 3 (1/4-pel, 1/2-pel, 1-pel, 2-pel) can be used.
  • the first reference value T1 and the second reference value T2 may be set as follows, and various values may be used depending on the implementation.
  • T1 = 128 (128 (8-pel) when 1/16 precision is applied, 64 (8-pel) when 1/4 precision is applied)
  • T2 = 16 (16 (1-pel) when 1/16 precision is applied, 4 (1-pel) when 1/4 precision is applied)
  • the first and second block size reference values BS_T1 and BS_T2 may be set as follows, but this is only an example and may be changed.
  • here, L0_x is the horizontal magnitude (x value) of the motion vector for the first prediction direction (L0), L0_y is the vertical magnitude (y value) of the motion vector for the first prediction direction (L0), L1_x is the horizontal magnitude (x value) of the motion vector for the second prediction direction (L1), and L1_y is the vertical magnitude (y value) of the motion vector for the second prediction direction (L1).
  • L0 or L1 motion vectors may be considered and may be expressed as follows.
  • an x or y value of each of the L0 or L1 direction motion vectors may be considered.
  • the bit length of the MMVD length index can be reduced by determining the MMVD candidate set based on the size and block size of the motion vector as in this embodiment.
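  • Reusing the CASE1 to CASE3 tables from the sketch above, the first branching described above can be illustrated as follows. The block size thresholds bs_t1 and bs_t2 are placeholder values, since the specification leaves their exact values open.

```python
def mmvd_set_from_mv_and_block_size(magnitude, block_size,
                                    t1=128, t2=16, bs_t1=256, bs_t2=64):
    """Choose the MMVD candidate set from the MV magnitude and the block size (width x height)."""
    if magnitude > t1:
        return CASE1 if block_size > bs_t1 else CASE3
    if magnitude > t2:
        return CASE2 if block_size > bs_t2 else CASE3
    return CASE3
```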
  • FIG. 24 illustrates an example of a flow chart for determining an MMVD candidate set based on the resolution of a motion vector according to an embodiment of the present specification.
  • the MMVD candidate set may be determined based on the pixel resolution of coordinates indicated by the motion vector.
  • Table 11 shows an example of a distance table based on the resolution of coordinates indicated by a motion vector.
  • the second MMVD candidate set corresponding to Case 2 (1/2-pel, 1-pel, 2-pel, 4-pel) may be used.
  • a third MMVD candidate set corresponding to Case 3 (1/4-pel, 1/2-pel, 1-pel, 2-pel) may be used.
  • when the motion vector is not 0, the resolution may be determined as follows.
  • the following is an example of the case where the base motion vector is bi-directional prediction, and a similar method can be applied to uni-directional prediction.
  • a method for determining whether the motion vector has a resolution of 4-pel units may be as follows. The following is an example of the case where the base motion vector is bi-directional prediction, and a similar method can be used when uni-directional prediction is applied.
  • 64 and 16 may be applied when the motion vector has a precision of 1/16, but this is only an example and may vary according to a video processing system.
  • the bit length of the MMVD length index can be reduced by determining the MMVD candidate set based on the resolution of the pixel indicated by the motion vector as in this embodiment.
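  • A resolution test in the spirit of FIG. 24 is sketched below, again reusing the CASE1 to CASE3 tables. With 1/16-pel motion vector storage, a component that is a multiple of 64 is aligned to 4-pel and a multiple of 16 is aligned to 1-pel; mapping these resolutions to Case 1, Case 2, and Case 3 is one plausible reading of the flow.

```python
def mmvd_set_from_mv_resolution(mvs, four_pel_unit=64, one_pel_unit=16):
    """Choose the MMVD candidate set from the pel alignment of the base motion vector(s)."""
    comps = [c for mv in mvs if mv is not None for c in mv]
    if not comps:
        return CASE3

    if all(c % four_pel_unit == 0 for c in comps):
        return CASE1            # 4-pel aligned (coarse) motion
    if all(c % one_pel_unit == 0 for c in comps):
        return CASE2            # integer-pel aligned motion
    return CASE3                # fractional-pel motion
```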
  • FIGS. 25 and 26 illustrate examples of flowcharts for determining the MMVD candidate set based on the size and resolution of the motion vector according to an embodiment of the present specification.
  • the MMVD candidate set may be determined based on the size of the motion vector and the pixel resolution of the coordinates indicated by the motion vector.
  • the MMVD distance table can be determined as follows, considering the size of the x and y values of the base motion vector of the MMVD and the motion vector resolution.
  • Table 12 shows an example in which different distance tables are allocated according to the size of the base motion vector and the resolution of the base motion vector. Case according to the base motion vector value and resolution may be set as shown in FIG. 25 or FIG. 26.
  • the first reference value T1 and the second reference value T2 may be set as follows, and various values may be used depending on the implementation.
  • T1 = 128 (128 (8-pel) when 1/16 precision is applied, 64 (8-pel) when 1/4 precision is applied)
  • T2 = 16 (16 (1-pel) when 1/16 precision is applied, 4 (1-pel) when 1/4 precision is applied)
  • the reference values 32 and 16 for the motion vector resolution can be applied when the motion vector has an accuracy of 1/16, which means that it has an accuracy of 2-pel and 1-pel, respectively, but this is only one example. Its value can vary.
  • the bit length of the MMVD length index can be reduced by determining the MMVD candidate set based on the size and resolution of the motion vector as in this embodiment.
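  • Combining the magnitude test and the resolution test, a FIG. 25/26 style selection could look like the following; the exact combination used in Table 12 depends on the case definitions of those figures, so this ordering is only illustrative.

```python
def mmvd_set_from_size_and_resolution(mvs, t1=128, t2=16,
                                      two_pel_unit=32, one_pel_unit=16):
    """Choose the MMVD candidate set from both the magnitude and the pel alignment of the MVs."""
    comps = [abs(c) for mv in mvs if mv is not None for c in mv]
    if not comps:
        return CASE3

    magnitude = max(comps)
    coarse = all(c % two_pel_unit == 0 for c in comps)   # 2-pel aligned at 1/16 precision

    if magnitude > t1 and coarse:
        return CASE1
    if magnitude > t2 or coarse:
        return CASE2
    return CASE3
```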
  • FIG. 27 shows an example of a flowchart for processing a video signal according to an embodiment of the present specification.
  • the operations of FIG. 27 may be performed by the inter prediction unit 180 of the encoding apparatus 100, the inter prediction unit 260 of the decoding apparatus 200, or the processor 510 of the video signal processing apparatus 500.
  • the following operations are collectively referred to as being performed by a coding apparatus.
  • in step S2710, the coding apparatus acquires at least one motion vector for inter prediction of the current block from at least one neighboring block adjacent to the current block based on the merge index. For example, the coding apparatus constructs a merge candidate list from the spatial neighboring blocks A0, A1, B0, B1, and B2 of FIG. 16 and temporal neighboring blocks, and determines the merge candidate indicated by the merge index in the merge candidate list.
  • the coding apparatus may acquire information (reference picture index) for a motion vector and a reference picture corresponding to a merge candidate.
  • in step S2720, the coding apparatus determines an MMVD offset applied to the at least one motion vector based on the MMVD length index.
  • the MMVD offset represents a value for refinement of a motion vector obtained based on a merge index.
  • the MMVD length index indicates an index indicating one of a plurality of MMVD offset candidate values.
  • in order to determine the MMVD offset applied to the motion vector, the coding apparatus determines an MMVD candidate set based on the at least one motion vector, and may determine the MMVD offset associated with the MMVD length index in the determined MMVD candidate set.
  • the MMVD candidate set may have different MMVD offset candidate values, as set differently for each case as shown in Tables 6 and 8 to 12. That is, the coding apparatus may determine one of a plurality of preset MMVD candidate sets, where each of the plurality of MMVD candidate sets may have a different range of MMVD offset candidates.
  • the coding apparatus may determine the MMVD candidate set based on the magnitude of the at least one motion vector. In addition, the coding apparatus may determine the MMVD candidate set based on a comparison result of the size of the at least one motion vector and a reference value, wherein, when bi-prediction is applied to the current block, the MMVD candidate set may be determined based on a comparison result between the reference value and at least one of a first motion vector for the first prediction direction (L0 direction) or a second motion vector for the second prediction direction (L1 direction). For example, the coding apparatus may determine the MMVD candidate set according to the size of the motion vector as shown in FIG. 21.
  • the coding apparatus may determine the MMVD candidate set based on the size of at least one motion vector and the size of the current block. For example, the coding apparatus may determine the MMVD candidate set according to the motion vector size and block size as shown in FIG. 22 or FIG. 23.
  • the coding device may determine the MMVD candidate set based on the pixel resolution of the coordinates indicated by the at least one motion vector. For example, the coding apparatus may determine the MMVD candidate set according to the resolution of the motion vector.
  • the coding apparatus may determine the MMVD candidate set based on the size of at least one motion vector and the pixel resolution of the coordinates indicated by the at least one motion vector. For example, the coding apparatus may determine the MMVD candidate set according to the size of the motion vector and the pixel resolution of the motion vector, as shown in FIG. 25 or FIG. 26.
  • in step S2730, the coding apparatus may generate a prediction sample of the current block based on the at least one motion vector to which the MMVD offset is applied and the reference picture associated with the merge index.
  • furthermore, the coding apparatus may determine the sign applied to the x value or y value of the MMVD offset using the MMVD direction index, apply the sign determined by the MMVD direction index to the MMVD offset, and apply the signed MMVD offset value to the motion vector.
  • a prediction sample may be generated using a motion vector to which an MMVD offset is applied.
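  • Putting steps S2710 to S2730 together on the motion side, a decode-time sketch could look like this. The candidate structure, the sign table, and the 1/16-pel storage assumption are illustrative, and the actual motion compensation that produces the prediction samples is not shown.

```python
MMVD_SIGNS = [(+1, 0), (-1, 0), (0, +1), (0, -1)]   # (x sign, y sign), Table 5 style

def derive_mmvd_motion(merge_list, merge_idx, length_idx, direction_idx,
                       select_candidate_set):
    """Sketch of S2710..S2730: base MV from the merge index, MMVD offset, signed refinement.

    merge_list           : list of candidates, each {"mv": (x, y), "ref_idx": int}
    select_candidate_set : callable implementing one of the set-selection rules above,
                           returning a list of offset candidates in pel
    """
    # S2710: base motion vector and reference picture index from the merge index
    base = merge_list[merge_idx]
    mv_x, mv_y = base["mv"]

    # S2720: MMVD offset from the selected candidate set and the MMVD length index
    offset_pel = select_candidate_set(base["mv"])[length_idx]
    sign_x, sign_y = MMVD_SIGNS[direction_idx]
    offset_units = int(offset_pel * 16)                # assuming 1/16-pel MV storage

    refined_mv = (mv_x + sign_x * offset_units, mv_y + sign_y * offset_units)

    # S2730: refined_mv and base["ref_idx"] are then used for motion compensation
    # to generate the prediction samples of the current block (not shown).
    return refined_mv, base["ref_idx"]
```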
  • the embodiments described herein may be implemented and performed on a processor, microprocessor, controller, or chip.
  • for example, the functional units shown in each figure may be implemented and performed on a computer, processor, microprocessor, controller, or chip.
  • the processing method to which the present specification is applied may be produced in the form of a computer-implemented program, and may be stored in a computer-readable recording medium.
  • Multimedia data having a data structure according to the present specification may also be stored in a computer-readable recording medium.
  • the computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored.
  • the computer-readable recording medium includes, for example, Blu-ray Disc (BD), Universal Serial Bus (USB), ROM, PROM, EPROM, EEPROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage devices.
  • the computer-readable recording medium includes media implemented in the form of a carrier wave (for example, transmission via the Internet).
  • the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
  • embodiments of the present specification may be implemented as computer program products using program codes, and the program codes may be executed on a computer by the embodiments of the present specification.
  • the program code can be stored on a computer readable carrier.
  • the decoding device and encoding device to which the present specification is applied may be included in a digital device.
  • digital device includes, for example, all digital devices capable of performing at least one of transmission, reception, processing, and output of data, content, and services.
  • the processing of the data, content, service, etc. by the digital device includes an operation of encoding and/or decoding data, content, service, and the like.
  • these digital devices are paired or connected (hereinafter referred to as 'pairing') with other digital devices, external servers, etc. through a wired/wireless network to transmit and receive data, and convert it as necessary.
  • digital devices include, for example, fixed devices (standing devices) such as a network TV, an HBBTV (Hybrid Broadcast Broadband TV), a smart TV, an IPTV (internet protocol television), and a PC (Personal Computer), and mobile devices (or handheld devices) such as a PDA (Personal Digital Assistant), a smart phone, a tablet PC, and a notebook.
  • wired/wireless network refers to a communication network that supports various communication standards or protocols for interconnection and/or data transmission and reception between digital devices or digital devices and external servers.
  • such a wired/wireless network may include both communication networks currently supported and communication networks to be supported in the future, together with the communication protocols therefor, and may be formed by, for example, wired connection standards such as USB (Universal Serial Bus), CVBS (Composite Video Banking Sync), component, S-Video, DVI (Digital Visual Interface), HDMI (High Definition Multimedia Interface), RGB, and D-SUB, and wireless connection standards such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, DLNA (Digital Living Network Alliance), WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), LTE (Long Term Evolution), and Wi-Fi Direct.
  • a digital device in the case of merely referring to a digital device in the present specification, it may mean a fixed device or a mobile device or both.
  • the digital device is an intelligent device that supports, for example, a broadcast reception function, a computer function, and at least one external input, and can support e-mail, web browsing, banking, games, and applications through the wired/wireless network described above.
  • the digital device may include an interface for supporting at least one input or control means (hereinafter referred to as an input means) such as a handwritten input device, a touch screen, and a space remote control.
  • the digital device can use a standardized general-purpose operating system (OS). For example, the digital device can add, delete, modify, and update various applications on a general-purpose OS kernel, through which a more user-friendly environment can be configured and provided.
  • the external input described in this specification includes an external input device, that is, all input means or digital devices connected to the above-mentioned digital device by wire/wireless and capable of transmitting/receiving related data through it.
  • the external input includes, for example, a device connected via HDMI (high-definition multimedia interface), a game device such as a PlayStation or an Xbox, a smart phone, a tablet PC, a printer, and a digital device such as a smart TV.
  • the server described in this specification includes all digital devices or systems that supply data to a client, that is, to the above-described digital devices, and is also referred to as a processor.
  • examples of such a server include a web server or portal server providing web pages or web content, an advertising server providing advertising data, a content server providing content, an SNS server providing an SNS (Social Network Service), a service server provided by a manufacturer, or a manufacturing server.
  • channel (channel) refers to a path (path), means (means), etc. for transmitting and receiving data, for example, a broadcasting channel (broadcasting channel).
  • the broadcast channel is expressed in terms of a physical channel, a virtual channel, and a logical channel according to the activation of digital broadcasting.
  • a broadcast channel can be called a broadcast network.
• a broadcast channel refers to a channel for providing or accessing broadcast content provided by a broadcasting station, and since the broadcast content is mainly based on real-time broadcasting, it is also called a live channel.
• recently, the medium for broadcasting has become more diversified and non-real-time broadcasting has become active in addition to real-time broadcasting, so the live channel may be understood as a term meaning not only real-time broadcasting but, in some cases, the entire broadcast channel including non-real-time broadcasting.
  • arbitrary channel is further defined in relation to a channel other than the above-described broadcast channel.
• the arbitrary channel may be provided together with a service guide such as an EPG (Electronic Program Guide) along with a broadcast channel, or a service guide, a GUI (Graphic User Interface), or an OSD (On-Screen Display) screen may be configured and provided with only the arbitrary channel.
  • EPG Electronic Program Guide
  • GUI Graphic User Interface
  • OSD screen On-Screen Display
• an arbitrary (random) channel is a channel arbitrarily allocated by the receiver, and a channel number that basically does not overlap with the channel numbers used to express broadcast channels is allocated to it.
  • the receiver receives a broadcast signal that transmits broadcast content and signaling information therefor through the tuned channel.
  • the receiver parses channel information from the signaling information, and configures a channel browser, an EPG, and the like based on the parsed channel information and provides it to the user.
• when the user makes a channel switching request, the receiver responds accordingly.
• since the broadcast channel is content previously agreed between the transmitting and receiving ends, when an arbitrary channel is allocated so as to overlap with a broadcast channel, it may cause confusion for the user, so it is preferable not to allocate them redundantly as described above.
• even if the arbitrary channel number does not overlap with the broadcast channel numbers as described above, there may still be confusion in the user's channel surfing process, and it is required to allocate the arbitrary channel number in consideration of this.
  • any channel according to the present specification can also be implemented to be accessed as a broadcast channel in the same manner in response to a user's request to switch channels through an input means in the same way as a conventional broadcast channel.
• accordingly, for convenience of the user's access to arbitrary channels and of discrimination or identification from broadcast channel numbers, the arbitrary channel number may be defined and displayed in a form in which characters are written together, such as arbitrary channel-1, arbitrary channel-2, and the like.
• in this case, the display of the arbitrary channel number may be realized in a form in which characters are written together, as in arbitrary channel-1, or internally in the receiver in the form of a number, like the broadcast channel numbers.
• an arbitrary channel number may also be provided in the form of a number like a broadcast channel, and the channel number may be defined and displayed in various ways distinguishable from broadcast channels, such as video channel-1, title-1, and video-1.
  • the digital device executes a web browser for a web service, and provides various types of web pages to the user.
• the web page may include a web page containing video content
  • the video is processed separately or independently from the web page.
  • the separated video can be implemented by allocating an arbitrary channel number as described above, providing it through a service guide, etc., and outputting it according to a channel switching request in a process of viewing a service guide or a broadcast channel.
• in addition, predetermined content, images, audio, items, and the like may be processed separately from the broadcast content, the game, or the application itself, and an arbitrary channel number may be allocated for their playback, processing, and the like, and implemented as described above.
  • FIG. 28 is a diagram schematically showing an example of a service system including a digital device.
• A service system including digital devices includes a content provider (CP) 2810, a service provider (SP) 2820, a network provider (NP) 2830, and a home network end user (HNED) (customer) 2840.
  • the HNED 2840 is, for example, a client 2800, that is, a digital device.
• the content provider 2810 produces and provides various content. Examples of such a content provider 2810 include, as shown in FIG. 28, a terrestrial broadcaster, a cable SO (System Operator) or MSO (Multiple SO), a satellite broadcaster, various Internet broadcasters, and private content providers (Private CPs). Meanwhile, the content provider 2810 provides various applications in addition to broadcast content.
  • the service provider 2820 provides the content provided by the content provider 2810 as a service package to the HNED 2840.
• the service provider 2820 of FIG. 28 packages the first terrestrial broadcast, the second terrestrial broadcast, the cable MSO, the satellite broadcast, various Internet broadcasts, applications, etc., and provides them to the HNED 2840.
• the service provider 2820 provides services to the client 2800 in a uni-cast or multi-cast manner. Meanwhile, the service provider 2820 may transmit data to a plurality of pre-registered clients 2800 at a time, and for this, the Internet Group Management Protocol (IGMP) may be used (an illustrative multicast-join sketch follows below).
  • IGMP Internet Group Management Protocol
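• As an illustrative aside (not part of the original disclosure), the following minimal Python sketch shows how a client could join an IP multicast group so that the operating system issues IGMP membership reports on its behalf; the group address and port are hypothetical examples.

    import socket
    import struct

    MULTICAST_GROUP = "239.1.1.1"   # hypothetical multicast group used by a service provider
    PORT = 5004                     # hypothetical UDP port carrying the service stream

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group causes the kernel to send an IGMP membership report upstream.
    mreq = struct.pack("4s4s", socket.inet_aton(MULTICAST_GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, addr = sock.recvfrom(2048)   # one datagram of the multicast service stream
    print(len(data), "bytes received from", addr)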
  • the above-described content provider 2810 and service provider 2820 may be identical or single entities.
  • the content provided by the content provider 2810 may be service packaged and provided to the HNED 2840 to perform the function of the service provider 2820 together or vice versa.
  • the network provider 2830 provides a network for data exchange between the content provider 2810 or/and the service provider 2820 and the client 2800.
  • the client 2800 may establish a home network to transmit and receive data.
  • the content provider 2810 or/and the service provider 2820 in the service system may use conditional access or content protection means to protect transmitted content.
• the client 2800 may use processing means such as a CableCARD (POD: Point of Deployment), DCAS (Downloadable CAS), etc. in response to the conditional access or content protection.
  • CableCARD Point of Deployment
  • DCAS Downloadable CAS
  • the client 2800 may also use a bidirectional service through a network (or communication network). In this case, rather, the client 2800 may perform the function of a content provider, and the existing service provider 2820 may receive it and transmit it back to another client.
• FIG. 29 is a block diagram illustrating a digital device according to an embodiment. The digital device of FIG. 29 may correspond, for example, to the client 2800 of FIG. 28, and refers to the digital device described above.
• the digital device 2900 includes a network interface 2901, a TCP/IP manager 2902, a service delivery manager 2903, an SI decoder 2904, a demultiplexer (demux) 2905, an audio decoder 2906, a video decoder 2907, a display A/V and OSD module 2908, a service control manager 2909, a service discovery manager 2910, an SI & metadata database (SI&Metadata DB) 2911, a metadata manager 2912, a service manager 2913, a UI manager 2914, and the like.
  • SI& metadata database SI&Metadata DB
• the network interface unit 2901 receives or transmits Internet protocol (IP) packets through a network. That is, the network interface unit 2901 receives services, content, and the like from the service provider 2820 through the network.
  • IP Internet protocol
• the TCP/IP manager 2902 is involved in packet delivery between IP packets received by the digital device 2900 and IP packets transmitted by the digital device 2900, that is, in packet delivery between a source and a destination. The TCP/IP manager 2902 classifies the received packet(s) to correspond to an appropriate protocol and outputs the classified packet(s) to the service delivery manager 2903, the service discovery manager 2910, the service control manager 2909, the metadata manager 2912, and the like.
• the service delivery manager 2903 is responsible for controlling received service data. For example, the service delivery manager 2903 may use RTP/RTCP when controlling real-time streaming data.
• when real-time streaming data is transmitted using RTP, the service delivery manager 2903 parses the received data packet according to RTP and transmits it to the demultiplexer 2905, or stores it in the SI & metadata database 2911 under the control of the service manager 2913.
• the service delivery manager 2903 uses RTCP to feed back network reception information to the server providing the service.
  • the demultiplexing unit 2905 demultiplexes the received packets into audio, video, and system information (SI) data, and transmits them to the audio/video decoder 2906/2907 and the SI decoder 2904, respectively.
  • SI system information
  • the SI decoder 2904 decodes service information such as program specific information (PSI), program and system information protocol (PSIP), and digital video broadcasting-service information (DVB-SI).
  • PSI program specific information
  • PSIP program and system information protocol
  • DVB-SI digital video broadcasting-service information
  • the SI decoder 2904 stores the decoded service information in the SI & metadata database 2911, for example.
  • the service information stored in this way may be read and used by a corresponding configuration, for example, by a user's request.
• the audio/video decoders 2906/2907 decode the audio data and video data demultiplexed by the demultiplexing unit 2905.
  • the audio data and video data thus decoded are provided to the user through the display unit 2908.
  • the application manager may include, for example, a UI manager 2914 and a service manager 2913.
  • the application manager may manage the overall state of the digital device 2900, provide a user interface, and manage other managers.
  • the UI manager 2914 provides a graphical user interface (GUI) for a user using an on-screen display (OSD) or the like, and receives a key input from a user to perform device operation according to the input. For example, when the UI manager 2914 receives a key input for channel selection from a user, the UI manager 2914 transmits the key input signal to the service manager 2913.
  • GUI graphical user interface
  • OSD on-screen display
• the service manager 2913 controls the managers associated with services, such as the service delivery manager 2903, the service discovery manager 2910, the service control manager 2909, and the metadata manager 2912.
  • the service manager 2913 creates a channel map and selects a channel using the channel map according to a key input received from the user interface manager 2914. Then, the service manager 2913 receives the channel service information from the SI decoder 2904 and sets the audio/video PID (packet identifier) of the selected channel to the demultiplexer 2905. The PID thus set is used in the demultiplexing process described above. Accordingly, the demultiplexer 2905 filters the audio data, video data, and SI data using the PID.
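• The following is a minimal sketch (an illustration, not the disclosed implementation) of how a demultiplexer could filter 188-byte MPEG-2 TS packets using the audio/video PIDs set by the service manager; the PID values a caller would pass in are hypothetical.

    TS_PACKET_SIZE = 188

    def extract_pid(packet: bytes) -> int:
        # the 13-bit PID spans the low 5 bits of byte 1 and all of byte 2
        return ((packet[1] & 0x1F) << 8) | packet[2]

    def demux(ts_stream: bytes, audio_pid: int, video_pid: int):
        audio, video, other = [], [], []
        for i in range(0, len(ts_stream) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
            packet = ts_stream[i:i + TS_PACKET_SIZE]
            if packet[0] != 0x47:          # sync byte check
                continue
            pid = extract_pid(packet)
            if pid == audio_pid:
                audio.append(packet)
            elif pid == video_pid:
                video.append(packet)
            else:
                other.append(packet)       # e.g. SI data such as PSI/PSIP tables
        return audio, video, other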
  • the service discovery manager 2910 provides information necessary to select a service provider that provides a service. When a signal regarding channel selection is received from the service manager 2913, the service discovery manager 2910 finds a service using the information.
  • the service control manager 2909 is responsible for selecting and controlling services.
• for example, the service control manager 2909 uses IGMP or RTSP when the user selects a live broadcasting service in the manner of conventional broadcasting, and uses RTSP to select and control the service when a service such as video on demand (VOD) is selected.
  • the RTSP protocol may provide a trick mode for real-time streaming.
  • the service control manager 2909 may initialize and manage a session through the IMS gateway 2950 using an IP multimedia subsystem (IMS) and a session initiation protocol (SIP).
  • IMS IP multimedia subsystem
  • SIP session initiation protocol
  • the protocols are one embodiment, and other protocols may be used according to implementation examples.
  • the metadata manager 2912 manages metadata associated with a service and stores the metadata in the SI & metadata database 2911.
  • the SI & metadata database 2911 stores service information decoded by the SI decoder 2904, metadata managed by the metadata manager 2912, and information necessary to select a service provider provided by the service discovery manager 2910. To save.
  • the SI & metadata database 2911 may store set-up data and the like for the system.
  • the SI & metadata database 2911 may be implemented using non-volatile RAM (NVRAM), flash memory, or the like.
  • NVRAM non-volatile RAM
  • the IMS gateway 2950 is a gateway that collects functions necessary for accessing an IMS-based IPTV service.
  • FIG. 30 is a configuration block diagram illustrating another embodiment of a digital device.
  • FIG. 30 illustrates a block diagram of a mobile device as another embodiment of a digital device.
• the mobile device 3000 may include a wireless communication unit 3010, an audio/video (A/V) input unit 3020, a user input unit 3030, a sensing unit 3040, an output unit 3050, a memory 3060, an interface unit 3070, a control unit 3080, and a power supply unit 3090.
• the components shown in FIG. 30 are not essential, so a mobile device with more or fewer components may be implemented.
  • the wireless communication unit 3010 may include one or more modules that enable wireless communication between the mobile device 3000 and the wireless communication system or between the mobile device and the network where the mobile device is located.
  • the wireless communication unit 3010 may include a broadcast reception module 3011, a mobile communication module 3012, a wireless Internet module 3013, a short-range communication module 3014, and a location information module 3015. .
  • the broadcast receiving module 3011 receives a broadcast signal and/or broadcast-related information from an external broadcast management server through a broadcast channel.
  • the broadcast channel may include a satellite channel and a terrestrial channel.
  • the broadcast management server may mean a server that generates and transmits broadcast signals and/or broadcast-related information or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to a terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, and a data broadcast signal, and may also include a TV broadcast signal or a radio broadcast signal combined with a data broadcast signal.
  • the broadcast related information may mean information related to a broadcast channel, a broadcast program, or a broadcast service provider. Broadcast-related information may also be provided through a mobile communication network. In this case, it may be received by the mobile communication module 3012.
  • Broadcast-related information may exist in various forms, for example, an electronic program guide (EPG) or an electronic service guide (ESG).
  • EPG electronic program guide
  • ESG electronic service guide
• the broadcast receiving module 3011 may receive digital broadcast signals using digital broadcast systems such as, for example, ATSC, DVB-T (digital video broadcasting-terrestrial), DVB-S (satellite), MediaFLO (media forward link only), DVB-H (handheld), and ISDB-T (integrated services digital broadcast-terrestrial).
  • the broadcast receiving module 3011 may be configured to be suitable for other broadcasting systems as well as the digital broadcasting system described above.
  • the broadcast signal and/or broadcast-related information received through the broadcast receiving module 3011 may be stored in the memory 3060.
  • the mobile communication module 3012 transmits and receives wireless signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network.
  • the wireless signal may include various types of data according to transmission and reception of a voice signal, a video call signal, or a text/multimedia message.
  • the wireless Internet module 3013 includes a module for wireless Internet access, and may be built in or external to the mobile device 3000.
• as wireless Internet technologies, wireless LAN (WLAN) (Wi-Fi), wireless broadband (Wibro), world interoperability for microwave access (Wimax), high speed downlink packet access (HSDPA), and the like may be used.
  • WLAN wireless LAN
  • Wibro wireless broadband
• Wimax world interoperability for microwave access
  • HSDPA high speed downlink packet access
  • the short-range communication module 3014 refers to a module for short-range communication.
  • Bluetooth radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee, RS-232, RS-485, etc. can be used as short range communication technology.
  • RFID radio frequency identification
  • IrDA infrared data association
  • UWB ultra wideband
  • ZigBee ZigBee
• RS-232 RS-232
• RS-485 RS-485
  • the location information module 3015 is a module for obtaining location information of the mobile device 3000, and may use a global positioning system (GPS) module as an example.
  • GPS global positioning system
  • the A/V input unit 3020 is for audio or/and video signal input, and may include a camera 3021, a microphone 3022, and the like.
  • the camera 3021 processes image frames such as still images or moving pictures obtained by an image sensor in a video call mode or a shooting mode.
  • the processed image frame may be displayed on the display portion 3051.
  • the image frame processed by the camera 3021 may be stored in the memory 3060 or transmitted to the outside through the wireless communication unit 3010. Two or more cameras 3021 may be provided depending on the use environment.
• the microphone 3022 receives an external sound signal in a call mode, a recording mode, or a voice recognition mode and processes it into electrical voice data.
  • the processed voice data may be converted and output in a form that can be transmitted to the mobile communication base station through the mobile communication module 3012 in the call mode.
  • the microphone 3022 may be implemented with various noise reduction algorithms for removing noise generated in the process of receiving an external sound signal.
  • the user input unit 3030 generates input data for the user to control the operation of the terminal.
• the user input unit 3030 may be configured of a key pad, a dome switch, a touch pad (static pressure/capacitive), a jog wheel, a jog switch, or the like.
• the sensing unit 3040 senses the current state of the mobile device 3000, such as the open/closed state of the mobile device 3000, the location of the mobile device 3000, the presence or absence of user contact, the orientation of the mobile device, and acceleration/deceleration of the mobile device, and generates a sensing signal for controlling the operation of the mobile device 3000. For example, when the mobile device 3000 is moved or tilted, the position or tilt of the mobile device may be sensed. In addition, whether power is supplied by the power supply unit 3090 or whether an external device is coupled to the interface unit 3070 may be sensed. Meanwhile, the sensing unit 3040 may include a proximity sensor 3041 including near field communication (NFC).
  • NFC near field communication
• the output unit 3050 is for generating output related to vision, hearing, or tactile sense, and may include a display unit 3051, an audio output module 3052, an alarm unit 3053, and a haptic module 3054.
  • the display unit 3051 displays (outputs) information processed by the mobile device 3000. For example, when the mobile device is in a call mode, a user interface (UI) or a graphic user interface (GUI) related to the call is displayed. When the mobile device 3000 is in a video call mode or a shooting mode, the photographed and/or received video or UI, GUI is displayed.
  • UI user interface
  • GUI graphic user interface
  • the display portion 3051 includes a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED), and a flexible display ( flexible display), and a 3D display.
  • LCD liquid crystal display
  • TFT LCD thin film transistor-liquid crystal display
  • OLED organic light-emitting diode
  • flexible display flexible display
  • 3D display 3D display
  • Some of these displays may be of a transparent type or a light transmissive type so that the outside can be seen through them. This may be called a transparent display, and a typical example of the transparent display is TOLED (transparent OLED).
  • the rear structure of the display portion 3051 may also be configured as a light transmissive structure. With this structure, the user can see an object located behind the terminal body through an area occupied by the display unit 3051 of the terminal body.
  • Two or more display units 3051 may be present depending on the implementation form of the mobile device 3000.
  • a plurality of display units may be spaced apart from one surface or integrally disposed on the mobile device 3000, or may be respectively disposed on different surfaces.
• when the display unit 3051 and a sensor that senses a touch operation (hereinafter referred to as a 'touch sensor') form a mutual layer structure (hereinafter referred to as a 'touch screen'), the display unit 3051 may also be used as an input device in addition to an output device.
  • the touch sensor may have, for example, a form of a touch film, a touch sheet, and a touch pad.
  • the touch sensor may be configured to convert a change in pressure applied to a specific portion of the display portion 3051 or capacitance generated in a specific portion of the display portion 3051 into an electrical input signal.
  • the touch sensor may be configured to detect not only the touched position and area, but also the pressure at the time of touch.
  • the corresponding signal(s) is sent to the touch controller.
  • the touch controller processes the signal(s) and then transmits corresponding data to the controller 3080. Accordingly, the control unit 3080 can know which area of the display unit 3051 is touched, and the like.
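• As a hypothetical illustration of the processing a touch controller might perform, the sketch below reduces a grid of capacitance changes to a touch position and rough pressure/area estimates before reporting them to the control unit; the grid layout and threshold are assumptions, not values from this document.

    def locate_touch(delta_grid, threshold=5.0):
        """delta_grid: 2-D list of capacitance changes, one value per sensor cell."""
        total = sx = sy = 0.0
        touched_cells = 0
        for y, row in enumerate(delta_grid):
            for x, d in enumerate(row):
                if d > threshold:
                    total += d
                    sx += x * d
                    sy += y * d
                    touched_cells += 1
        if touched_cells == 0:
            return None                      # no touch detected
        # weighted centroid gives the touched position; the summed delta and the
        # cell count serve as crude proxies for pressure and contact area
        return {"x": sx / total, "y": sy / total,
                "pressure": total, "area": touched_cells}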
  • a proximity sensor 3041 may be disposed in an inner area of the mobile device surrounded by the touch screen or near the touch screen.
• the proximity sensor refers to a sensor that detects the presence or absence of an object approaching a predetermined detection surface, or an object in the vicinity, using the force of an electromagnetic field or infrared rays, without mechanical contact.
  • Proximity sensors have a longer lifespan and higher utilization than contact sensors.
  • the proximity sensor examples include a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror reflection type photoelectric sensor, a high frequency oscillation type proximity sensor, a capacitive type proximity sensor, a magnetic type proximity sensor, and an infrared proximity sensor.
  • the touch screen When the touch screen is capacitive, it is configured to detect the proximity of the pointer due to a change in the electric field according to the proximity of the pointer. In this case, the touch screen (touch sensor) may be classified as a proximity sensor.
  • proximity touch the act of allowing the pointer to be recognized as being positioned on the touch screen without being touched by the pointer on the touch screen
  • contact touch the act of actually touching the pointer
  • the location on the touch screen that is the proximity touch with the pointer means a location where the pointer is perpendicular to the touch screen when the pointer is touched close.
  • the proximity sensor detects a proximity touch and a proximity touch pattern (eg, proximity touch distance, proximity touch direction, proximity touch speed, proximity touch time, proximity touch position, proximity touch movement state, etc.). Information corresponding to the sensed proximity touch operation and the proximity touch pattern may be output on the touch screen.
  • a proximity touch pattern eg, proximity touch distance, proximity touch direction, proximity touch speed, proximity touch time, proximity touch position, proximity touch movement state, etc.
  • the audio output module 3052 may output audio data received from the wireless communication unit 3010 or stored in the memory 3060 in a call signal reception, call mode or recording mode, voice recognition mode, broadcast reception mode, or the like.
  • the sound output module 3052 may also output sound signals related to functions (for example, call signal reception sound, message reception sound, etc.) performed by the mobile device 3000.
  • the sound output module 3052 may include a receiver, a speaker, and a buzzer.
  • the alarm unit 3053 outputs a signal for notifying the occurrence of the event of the mobile device 3000. Examples of events generated in the mobile device include call signal reception, message reception, key signal input, and touch input.
  • the alarm unit 3053 may also output a signal for notifying the occurrence of an event in a form other than a video signal or an audio signal, for example, vibration.
  • the video signal or the audio signal may also be output through the display unit 3051 or the audio output module 3052, so that the display unit and the audio output modules 3051 and 3052 may be classified as part of the alarm unit 3053.
  • the haptic module 3054 generates various tactile effects that the user can feel.
  • a typical example of the tactile effect generated by the haptic module 3054 is vibration.
  • the intensity and pattern of vibration generated by the haptic module 3054 can be controlled. For example, different vibrations may be synthesized and output or sequentially output.
• in addition to vibration, the haptic module 3054 can generate various tactile effects, such as the effect of a pin array moving vertically with respect to the contact surface of the skin, stimulation by the ejection or suction force of air through an ejection or suction port, grazing on the skin surface, contact of an electrode, electrostatic force, and the effect of reproducing a cold or warm feeling using an element capable of absorbing or generating heat.
  • the haptic module 3054 can not only deliver the tactile effect through direct contact, but also implement it so that the user can feel the tactile effect through muscle sensations such as fingers or arms. Two or more haptic modules 3054 may be provided according to a configuration aspect of the mobile device 3000.
  • the memory 3060 may store a program for the operation of the control unit 3080, and may temporarily store input/output data (eg, a phone book, a message, a still image, a video, etc.).
  • the memory 3060 may store data related to various patterns of vibration and sound output when a touch is input on the touch screen.
  • the memory 3060 is a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory, etc.), Random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, It may include a storage medium of at least one of a magnetic disk and an optical disk.
  • the mobile device 3000 may operate in relation to a web storage that performs a storage function of the memory 3060 on the Internet.
  • the interface unit 3070 serves as a passage with all external devices connected to the mobile device 3000.
  • the interface unit 3070 receives data from an external device, receives power, transfers it to each component inside the mobile device 3000, or allows data inside the mobile device 3000 to be transmitted to the external device.
  • wired/wireless headset port, external charger port, wired/wireless data port, memory card port, port for connecting devices equipped with an identification module, audio input/output (I/O) port, A video I/O port, an earphone port, and the like may be included in the interface unit 3070.
  • the identification module is a chip that stores various information for authenticating the usage rights of the mobile device 3000, a user identification module (UIM), a subscriber identification module (SIM), and a universal user authentication module ( universal subscriber identity module, USIM).
  • the device provided with the identification module (hereinafter referred to as an'identification device') may be manufactured in a smart card format. Therefore, the identification device may be connected to the terminal 3000 through the port.
• when the mobile terminal 3000 is connected to an external cradle, the interface unit 3070 becomes a passage through which power from the cradle is supplied to the mobile terminal 3000, or a passage through which various command signals input from the cradle by the user are transferred to the mobile terminal.
  • Various command signals or power input from the cradle may be operated as signals for recognizing that the mobile terminal is correctly mounted on the cradle.
  • the control unit 3080 typically controls the overall operation of the mobile device. For example, it performs related control and processing for voice calls, data communication, video calls, and the like.
  • the control unit 3080 may include a multimedia module 3081 for multimedia playback.
  • the multimedia module 3081 may be implemented in the control unit 3080, or may be implemented separately from the control unit 3080.
  • the control unit 3080, particularly the multimedia module 3081 may include the above-described encoding device 100 and/or decoding device 200.
  • the controller 3080 may perform a pattern recognition process capable of recognizing handwriting input or picture drawing input performed on a touch screen as characters and images, respectively.
  • the power supply unit 3090 receives external power and internal power under the control of the control unit 3080 and supplies power required for the operation of each component.
  • the embodiments described herein include application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), It may be implemented using at least one of a processor, a controller, micro-controllers, microprocessors, and electrical units for performing other functions.
  • ASICs application specific integrated circuits
  • DSPs digital signal processors
  • DSPDs digital signal processing devices
  • PLDs programmable logic devices
  • FPGAs field programmable gate arrays
• in some cases, the embodiments described herein may be implemented by the control unit 3080 itself.
  • embodiments such as procedures and functions described herein may be implemented as separate software modules.
  • Each of the software modules can perform one or more functions and operations described herein.
  • Software code can be implemented in a software application written in an appropriate programming language.
  • the software code is stored in the memory 3060 and can be executed by the control unit 3080.
  • 31 is a configuration block diagram illustrating another embodiment of a digital device.
  • the digital device 3100 include a broadcast receiving unit 3105, an external device interface unit 3135, a storage unit 3140, a user input interface unit 3150, a control unit 3170, a display unit 3180, and audio. It may include an output unit 3185, a power supply unit 3190 and a photographing unit (not shown).
  • the broadcast reception unit 3105 may include at least one tuner 3110, a demodulation unit 3120, and a network interface unit 3130. However, in some cases, the broadcast receiving unit 3105 may include a tuner 3110 and a demodulator 3120, but may not include the network interface unit 3130, and vice versa.
• a multiplexer may be provided to multiplex the signal demodulated by the demodulator 3120 via the tuner 3110 and the signal received through the network interface unit 3130.
• a demultiplexer may be provided to demultiplex the multiplexed signal, or to demultiplex the demodulated signal or the signal that has passed through the network interface unit 3130.
  • the tuner 3110 receives an RF broadcast signal by tuning a channel selected by a user or all pre-stored channels among radio frequency (RF) broadcast signals received through an antenna. In addition, the tuner 3110 converts the received RF broadcast signal into an intermediate frequency (IF) signal or a baseband signal.
  • IF intermediate frequency
• the tuner 3110 can process both digital broadcast signals and analog broadcast signals.
  • the analog baseband video or audio signal (CVBS/SIF) output from the tuner 3110 may be directly input to the controller 3170.
  • the tuner 3110 may receive a single carrier RF broadcast signal according to an advanced television system committee (ATSC) scheme or a multiple carrier RF broadcast signal according to a digital video broadcasting (DVB) scheme.
  • ATSC advanced television system committee
• DVB digital video broadcasting
  • the tuner 3110 may sequentially tune and receive RF broadcast signals of all broadcast channels stored through a channel storage function among RF broadcast signals received through an antenna and convert them into an intermediate frequency signal or a baseband signal. .
• the demodulator 3120 receives and demodulates the digital IF signal (DIF) converted by the tuner 3110. For example, when the digital IF signal output from the tuner 3110 is of the ATSC type, the demodulator 3120 performs 8-vestigial side band (8-VSB) demodulation. Also, the demodulator 3120 may perform channel decoding. To this end, the demodulator 3120 includes a trellis decoder, a de-interleaver, a Reed-Solomon decoder, and the like, and can perform trellis decoding, deinterleaving, and Reed-Solomon decoding.
• 8-VSB 8-vestigial side band
  • the demodulator 3120 when the digital IF signal output from the tuner 3110 is a DVB method, the demodulator 3120 performs, for example, coded orthogonal frequency division modulation (COFDMA) demodulation.
  • the demodulator 3120 may perform channel decoding.
• the demodulator 3120 may include a convolution decoder, a deinterleaver, and a Reed-Solomon decoder, and perform convolutional decoding, deinterleaving, and Reed-Solomon decoding.
  • the demodulator 3120 may output a stream signal TS after demodulation and channel decoding.
  • the stream signal may be a video signal, an audio signal or a data signal multiplexed.
  • the stream signal may be an MPEG-2 transport stream (TS) in which an MPEG-2 standard video signal and a Dolby AC-3 standard audio signal are multiplexed.
  • the MPEG-2 TS may include a header of 4 bytes and a payload of 184 bytes.
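• To make the header/payload split concrete, the sketch below parses the 4-byte header that precedes the 184-byte payload of a 188-byte TS packet; the field layout follows the MPEG-2 systems standard and is shown only for illustration.

    def parse_ts_header(packet: bytes) -> dict:
        assert len(packet) == 188 and packet[0] == 0x47, "not a valid TS packet"
        b1, b2, b3 = packet[1], packet[2], packet[3]
        return {
            "transport_error":    bool(b1 & 0x80),
            "payload_unit_start": bool(b1 & 0x40),
            "transport_priority": bool(b1 & 0x20),
            "pid":                ((b1 & 0x1F) << 8) | b2,
            "scrambling_control": (b3 >> 6) & 0x03,
            "adaptation_field":   (b3 >> 4) & 0x03,
            "continuity_counter": b3 & 0x0F,
            "payload":            packet[4:],   # 184-byte payload
        }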
  • the above-described demodulation unit 3120 may be provided separately according to the ATSC method and the DVB method. That is, the digital device may separately include an ATSC demodulator and a DVB demodulator.
  • the stream signal output from the demodulator 3120 may be input to the controller 3170.
  • the control unit 3170 may control demultiplexing, video/audio signal processing, and the like, and control an image output through the display unit 3180 and an audio output unit through the audio output unit 3185.
  • the external device interface unit 3135 provides an environment in which various external devices are interfaced to the digital device 3100.
  • the external device interface unit 3135 may include an A/V input/output unit (not shown) or a wireless communication unit (not shown).
  • the external device interface unit 3135 includes digital versatile disk (DVD), blu-ray, game devices, cameras, camcorders, computers (laptops, tablets), smartphones, Bluetooth devices, and cloud It can be connected to external devices such as (cloud) and wired/wirelessly.
  • the external device interface unit 3135 transmits a video, audio, or data (including image) signal input from the outside through the connected external device to the controller 3170 of the digital device.
  • the control unit 3170 may control the processed image, audio, or data signal to be output to a connected external device.
  • the external device interface unit 3135 may further include an A/V input/output unit (not shown) or a wireless communication unit (not shown).
• the A/V input/output unit may include a USB terminal, a CVBS (composite video banking sync) terminal, a component terminal, an S-video terminal (analog), a DVI (digital visual interface) terminal, an HDMI (high definition multimedia interface) terminal, an RGB terminal, a D-SUB terminal, and the like.
  • CVBS composite video banking sync
• component terminal component terminal
• S-video terminal S-video terminal (analog)
• DVI digital visual interface
  • HDMI high definition multimedia interface
  • RGB terminal an RGB terminal
  • D-SUB terminal D-SUB terminal
  • the wireless communication unit may perform short-range wireless communication with other electronic devices.
• the digital device 3100 may be networked with other electronic devices according to communication protocols such as, for example, Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee, and digital living network alliance (DLNA).
• RFID radio frequency identification
• IrDA infrared data association
• UWB ultra wideband
• ZigBee ZigBee
• DLNA digital living network alliance
  • the external device interface unit 3135 may be connected to at least one of various set-top boxes and various terminals described above, and perform input/output operations with the set-top box.
  • the external device interface unit 3135 may receive an application or a list of applications in an adjacent external device and transmit it to the control unit 3170 or the storage unit 3140.
  • the network interface unit 3130 provides an interface for connecting the digital device 3100 with a wired/wireless network including an Internet network.
• the network interface unit 3130 may include, for example, an Ethernet terminal for connection with a wired network, and may use communication standards such as wireless LAN (WLAN) (Wi-Fi), wireless broadband (Wibro), world interoperability for microwave access (Wimax), and high speed downlink packet access (HSDPA) for connection with a wireless network.
• WLAN wireless LAN
• Wibro wireless broadband
• Wimax world interoperability for microwave access
• HSDPA high speed downlink packet access
  • the network interface unit 3130 may transmit or receive data with other users or other digital devices through a connected network or another network linked to the connected network.
  • some content data stored in the digital device 3100 may be transmitted to another user registered in advance in the digital device 3100 or to a selected user or selected digital device among other digital devices.
  • the network interface unit 3130 may access a predetermined web page through a connected network or another network linked to the connected network. That is, it is possible to connect to a predetermined web page through a network and transmit or receive data with the corresponding server.
  • content or data provided by a content provider or a network operator may be received. That is, it is possible to receive content such as a movie, advertisement, game, VOD, broadcast signal and related information provided by a content provider or a network provider through a network.
  • the network interface unit 3130 may select and receive a desired application from among applications that are open to the public through a network.
  • the storage unit 3140 may store programs for processing and controlling each signal in the control unit 3170, or may store signal-processed video, audio, or data signals.
  • the storage unit 3140 may perform a function for temporarily storing an image, audio, or data signal input from the external device interface unit 3135 or the network interface unit 3130.
  • the storage unit 3140 may store information regarding a predetermined broadcast channel through a channel memory function.
  • the storage unit 3140 may store an application or a list of applications input from the external device interface unit 3135 or the network interface unit 3130.
  • the storage unit 3140 may store various platforms, which will be described later.
• the storage unit 3140 may include at least one storage medium of, for example, a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), RAM, or ROM (EEPROM, etc.).
  • the digital device 3100 may play and provide content files (video files, still image files, music files, document files, application files, etc.) stored in the storage unit 3140 to the user.
  • FIG 31 illustrates an embodiment in which the storage unit 3140 is provided separately from the control unit 3170, but the scope of the present specification is not limited thereto. That is, the storage unit 3140 may be included in the control unit 3170.
  • the user input interface unit 3150 transmits a signal input by the user to the control unit 3170 or a signal from the control unit 3170 to the user.
• the user input interface unit 3150 may receive and process a control signal, such as power on/off, channel selection, or screen setting, from the remote control device 3200 according to various communication methods such as an RF communication method or an infrared (IR) communication method, or may process a control signal from the control unit 3170 to be transmitted to the remote control device 3200.
  • the user input interface unit 3150 may transmit a control signal input from a local key (not shown) such as a power key, a channel key, a volume key, and a set value to the controller 3170.
  • a local key such as a power key, a channel key, a volume key, and a set value
• the user input interface unit 3150 may transmit a control signal input from a sensing unit (not shown) that senses a user's gesture to the control unit 3170, or may transmit a signal from the control unit 3170 to the sensing unit (not shown).
  • the sensing unit may include a touch sensor, a voice sensor, a position sensor, and a motion sensor.
• the control unit 3170 demultiplexes the stream input through the tuner 3110, the demodulator 3120, or the external device interface unit 3135, or processes the demultiplexed signals, to generate and output signals for video or audio output.
  • the control unit 3170 may include the aforementioned encoding device and/or decoding device.
  • the image signal processed by the control unit 3170 may be input to the display unit 3180 and displayed as an image corresponding to the corresponding image signal. Also, the image signal processed by the control unit 3170 may be input to an external output device through the external device interface unit 3135.
  • the audio signal processed by the control unit 3170 may be audio output to the audio output unit 3185.
  • the audio signal processed by the control unit 3170 may be input to the external output device through the external device interface unit 3135.
  • control unit 3170 may include a demultiplexing unit, an image processing unit, and the like.
  • the control unit 3170 may control the overall operation of the digital device 3100. For example, the control unit 3170 may control the tuner 3110 to tune the RF broadcast corresponding to a channel selected by a user or a pre-stored channel.
  • the control unit 3170 may control the digital device 3100 by a user command input through the user input interface unit 3150 or an internal program. In particular, it is possible to access a network and download a desired application or application list into the digital device 3100.
  • control unit 3170 controls the tuner 3110 so that a signal of a selected channel is input according to a predetermined channel selection command received through the user input interface unit 3150. And it processes the video, audio or data signal of the selected channel.
• the control unit 3170 allows the channel information or the like selected by the user to be output through the display unit 3180 or the audio output unit 3185 together with the processed image or audio signal.
• according to an external device image playback command received through the user input interface unit 3150, the control unit 3170 allows a video signal or an audio signal input from an external device, for example a camera or a camcorder, through the external device interface unit 3135 to be output through the display unit 3180 or the audio output unit 3185.
• the control unit 3170 may control the display unit 3180 to display an image; for example, a broadcast image input through the tuner 3110, an external input image input through the external device interface unit 3135, an image input through the network interface unit, or an image stored in the storage unit 3140 may be controlled to be displayed on the display unit 3180.
  • the image displayed on the display unit 3180 may be a still image or a video, and may be a 2D video or a 3D video.
  • control unit 3170 may control to play the content.
  • the content at this time may be content stored in the digital device 3100, received broadcast content, or external input content input from the outside.
  • the content may be at least one of a broadcast image, an external input image, an audio file, a still image, a connected web screen, and a document file.
  • the controller 3170 may control to display a list of applications or applications that can be downloaded from the digital device 3100 or from an external network.
  • the control unit 3170 may control to install and operate an application downloaded from an external network along with various user interfaces. Also, an image related to an application to be executed can be controlled to be displayed on the display unit 3180 by a user's selection.
  • a channel browsing processing unit for generating a thumbnail image corresponding to a channel signal or an external input signal is further provided.
  • the channel browsing processing unit receives a stream signal (TS) output from the demodulator 3120 or a stream signal output from the external device interface unit 3135, extracts an image from the input stream signal, and generates a thumbnail image.
  • TS stream signal
  • the generated thumbnail image can be input to the control unit 3170 as it is or encoded.
  • the generated thumbnail image may be encoded in a stream form and input to the controller 3170.
  • the controller 3170 may display a list of thumbnails having a plurality of thumbnail images on the display unit 3180 using the input thumbnail images. Meanwhile, thumbnail images in the thumbnail list may be updated sequentially or simultaneously. Accordingly, the user can easily grasp the contents of a plurality of broadcast channels.
• the display unit 3180 converts an image signal, a data signal, or an OSD signal processed by the control unit 3170, or an image signal or data signal received from the external device interface unit 3135, into R, G, and B signals, respectively, to generate a drive signal.
  • the display unit 3180 may be a PDP, LCD, OLED, flexible display, 3D display, or the like.
  • the display unit 3180 may be configured as a touch screen and used as an input device in addition to an output device.
  • the audio output unit 3185 receives a signal processed by the controller 3170, for example, a stereo signal, a 3.1 channel signal, or a 5.1 channel signal, and outputs the audio.
  • the audio output unit 3185 may be implemented as various types of speakers.
  • a sensing unit having at least one of a touch sensor, a voice sensor, a position sensor, and a motion sensor may be further provided in the digital device 3100. .
  • the signal detected by the sensing unit may be transmitted to the control unit 3170 through the user input interface unit 3150.
  • a photographing unit (not shown) for photographing a user may be further provided. Image information photographed by the photographing unit (not shown) may be input to the control unit 3170.
  • the control unit 3170 may detect a user's gesture by individually or in combination with an image captured by a photographing unit (not shown) or a detected signal from a sensing unit (not shown).
  • the power supply unit 3190 supplies corresponding power throughout the digital device 3100.
• in particular, power may be supplied to the control unit 3170, which can be implemented in the form of a system on chip (SOC), the display unit 3180 for image display, and the audio output unit 3185 for audio output.
  • SOC system on chip
  • display unit 3180 for image display
  • audio output unit 3185 for audio output.
  • the power supply unit 3190 may include a converter (not shown) that converts AC power into DC power.
• a PWM-operable inverter (not shown) may be further provided for luminance variation or dimming driving.
  • the remote control device 3200 transmits a user input to the user input interface unit 3150.
• the remote control device 3200 may use Bluetooth, RF (radio frequency) communication, infrared (IR) communication, UWB (Ultra Wideband), ZigBee, or the like.
  • the remote control device 3200 may receive an image, audio, or data signal output from the user input interface unit 3150, display it on the remote control device 3200, or output voice or vibration.
  • the digital device 3100 described above may be a digital broadcast receiver capable of processing a fixed or mobile ATSC or DVB digital broadcast signal.
  • the digital device may omit some components or further include components not illustrated, as required.
  • the digital device does not have a tuner and a demodulator, and can also receive and play content through a network interface unit or an external device interface unit.
• FIG. 32 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 29 to 31 according to an embodiment.
• an example of the control unit includes a demultiplexing unit 3210, an image processing unit 3220, an OSD generating unit 3240, a mixer 3250, a frame rate converter (FRC) 3255, and a formatter 3260, and the control unit may further include a voice processing unit and a data processing unit (not shown).
  • the demultiplexing unit 3210 demultiplexes the input stream.
  • the demultiplexer 3210 can demultiplex the input MPEG-2 TS video, audio, and data signals.
  • the stream signal input to the demultiplexer 3210 may be a stream signal output from a tuner or demodulator or an external device interface.
  • the image processing unit 3220 performs image processing of the demultiplexed image signal.
  • the image processing unit 3220 may include an image decoder 3225 and a scaler 3235.
  • the video decoder 3225 decodes the demultiplexed video signal, and the scaler 3235 scales the resolution of the decoded video signal to be output from the display unit.
  • the video decoder 3225 may support various standards.
• the video decoder 3225 performs the function of an MPEG-2 decoder when the video signal is encoded in the MPEG-2 standard, and performs the function of an H.264 decoder when the video signal is encoded in the digital multimedia broadcasting (DMB) method or the H.264 standard.
• DMB digital multimedia broadcasting
  • the video signal decoded by the video processing unit 3220 is input to the mixer 3250.
  • the OSD generation unit 3240 generates OSD data according to a user input or by itself. For example, the OSD generating unit 3240 generates data for displaying various data on the screen of the display unit in a graphic or text form based on the control signal of the user input interface unit.
  • the generated OSD data includes various data such as a user interface screen of a digital device, various menu screens, widgets, icons, and viewing rate information.
  • the OSD generator 3240 may generate data for displaying subtitles of broadcast images or broadcast information based on EPG.
  • the mixer 3250 mixes the OSD data generated by the OSD generating unit 3240 and the image signal processed by the image processing unit and provides it to the formatter 3260. Because the decoded video signal and the OSD data are mixed, the OSD is displayed overlaid on a broadcast video or an external input video.
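• At the pixel level, overlaying OSD data on a video image amounts to alpha blending; the following minimal sketch (an assumption, not the disclosed mixer) blends an RGBA OSD plane over a decoded RGB frame using NumPy.

    import numpy as np

    def mix_osd(video_frame: np.ndarray, osd_rgba: np.ndarray) -> np.ndarray:
        """video_frame: HxWx3 uint8 RGB; osd_rgba: HxWx4 uint8 RGBA (alpha in 0..255)."""
        alpha = osd_rgba[..., 3:4].astype(np.float32) / 255.0
        osd_rgb = osd_rgba[..., :3].astype(np.float32)
        mixed = alpha * osd_rgb + (1.0 - alpha) * video_frame.astype(np.float32)
        return mixed.astype(np.uint8)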
  • a frame rate converter (FRC) 3255 converts a frame rate of an input video.
  • the frame rate converting unit 3255 may convert the input 60 Hz image frame rate to have a frame rate of, for example, 120 Hz or 240 Hz according to the output frequency of the display unit.
  • various methods may exist in the method for converting the frame rate. For example, when converting the frame rate from 60 Hz to 120 Hz, the frame rate converter 3255 inserts the same first frame between the first frame and the second frame, or predicts the first frame from the first frame and the second frame. It can be converted by inserting 3 frames.
  • the frame rate converter 3255 converts the frame rate from 60 Hz to 240 Hz, three or more identical frames or predicted frames may be inserted and converted between existing frames. Meanwhile, if a separate frame conversion is not performed, the frame rate conversion unit 3255 may be bypassed.
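• The frame insertion pattern described above can be illustrated with the simplified sketch below, which doubles a 60 Hz sequence to 120 Hz either by repeating each frame or by inserting a frame predicted as the average of its neighbours; a real frame rate converter would typically use motion-compensated interpolation instead.

    import numpy as np

    def convert_60_to_120(frames, interpolate=False):
        """frames: list of HxWx3 uint8 arrays at 60 Hz; returns a 120 Hz list."""
        out = []
        for i, frame in enumerate(frames):
            out.append(frame)
            if interpolate and i + 1 < len(frames):
                # insert a frame predicted from the current and the next frame
                mid = (frame.astype(np.uint16) + frames[i + 1].astype(np.uint16)) // 2
                out.append(mid.astype(np.uint8))
            else:
                out.append(frame.copy())     # simply repeat the same frame
        return out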
  • the formatter 3260 changes the output of the input frame rate conversion unit 3255 to match the output format of the display unit.
  • the formatter 3260 may output R, G, and B data signals, and the R, G, and B data signals may be output as low voltage differential signaling (LVDS) or mini-LVDS Can be.
  • LVDS low voltage differential signaling
  • the formatter 3260 may support 3D service through the display unit by configuring and outputting a 3D format according to the output format of the display unit.
  • a voice processing unit (not shown) in the control unit may perform voice processing of a demultiplexed voice signal.
  • the voice processing unit (not shown) may support various audio formats. For example, even when an audio signal is encoded in formats such as MPEG-2, MPEG-4, AAC, HE-AAC, AC-3, BSAC, a decoder corresponding thereto may be provided and processed.
  • the voice processing unit (not shown) in the control unit may process a base, treble, volume control, and the like.
  • the data processing unit (not shown) in the control unit may perform data processing of the demultiplexed data signal.
  • the data processing unit can decode the demultiplexed data signal even when it is encoded.
  • the encoded data signal may be EPG information including broadcast information such as a start time and an end time of a broadcast program broadcast on each channel.
  • each component may be integrated, added, or omitted according to the specification of the actual digital device. That is, if necessary, two or more components may be combined into one component, or one component may be subdivided into two or more components.
  • the function performed in each block is for describing an embodiment of the present specification, and the specific operation or device does not limit the scope of rights of the present specification.
  • the digital device may be an image signal processing device that performs signal processing of an image stored in the device or an input image.
• as another example, a set-top box (STB) excluding the display unit 3180 and the audio output unit 3185 shown in FIG. 31, the above-described DVD player, a Blu-ray player, a game device, a computer, and the like can be further exemplified.
  • FIG 33 is a diagram illustrating an example in which a screen of a digital device displays a main image and a sub image simultaneously, according to an embodiment.
  • the digital device may simultaneously display the main image 3310 and the auxiliary image 3320 on the screen 3300.
  • the main image 3310 may be referred to as a first image, and the auxiliary image 3320 may be referred to as a second image.
  • the main image 3310 and the auxiliary image 3320 may include a video, a still image, an electronic program guide (EPG), a graphical user interface (GUI), an on-screen display (OSD), and the like.
  • EPG electronic program guide
  • GUI graphical user interface
  • OSD on-screen display
  • the main image 3310 may mean an image that is displayed simultaneously with the auxiliary image 3320 on the screen 3300 of the electronic device while being relatively smaller in size than the screen 3300, and may also be referred to as a picture in picture (PIP).
  • the main image 3310 is shown as being displayed on the upper left of the screen 3300 of the digital device, but the location where the main image 3310 is displayed is not limited thereto; the main image 3310 may be displayed at any location within the screen 3300 of the digital device (see the placement sketch below).
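As a minimal illustration only (the Rect type and the function name place_pip are assumptions, not part of this specification), the main image rectangle can be placed anywhere as long as it remains within the screen area:

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: int
        y: int
        w: int
        h: int

    def place_pip(screen: Rect, pip_w: int, pip_h: int, x: int, y: int) -> Rect:
        """Return the display rectangle of the main image, clamped to the screen."""
        x = max(screen.x, min(x, screen.x + screen.w - pip_w))
        y = max(screen.y, min(y, screen.y + screen.h - pip_h))
        return Rect(x, y, pip_w, pip_h)

    # e.g. the upper-left placement shown in FIG. 33, or any other position:
    screen = Rect(0, 0, 1920, 1080)
    upper_left = place_pip(screen, 480, 270, 0, 0)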
  • the main image 3310 and the auxiliary image 3320 may be directly or indirectly related to each other.
  • the main image 3310 may be a streaming video
  • the auxiliary image 3320 may be a GUI that sequentially displays thumbnails of videos containing information similar to the streaming video.
  • the main image 3310 may be a broadcast image
  • the auxiliary image 3320 may be an EPG.
  • the main image 3310 may be a broadcast image
  • the auxiliary image 3320 may be a GUI. Examples of the main image 3310 and the auxiliary image 3320 are not limited thereto.
  • the main image 3310 is a broadcast image received through a broadcasting channel
  • the auxiliary image 3320 may be information related to a broadcast image received through a broadcast channel.
  • the information related to the broadcast video received through the broadcast channel may include, for example, EPG information including a comprehensive channel schedule, broadcast program detailed information, and broadcast program review information, but is not limited thereto.
  • the main image 3310 is a broadcast image received through a broadcast channel
  • the auxiliary image 3320 may be an image generated based on information pre-stored in a digital device.
  • the image generated based on the information pre-stored in the digital device may include, for example, a basic user interface (UI) of the EPG, basic channel information, an image resolution manipulation UI, and a bedtime reservation UI, but is not limited thereto.
  • the main image 3310 is a broadcast image received through a broadcast channel
  • the auxiliary image 3320 may be information related to a broadcast image received through a network.
  • the information related to the broadcast image received through the network may be, for example, information obtained through a network-based search engine. More specifically, for example, information related to a character currently being displayed on the main image 3310 may be obtained through a network-based search engine.
  • information related to a broadcast image received through a network may be obtained by using, for example, an artificial intelligence (AI) system.
  • an estimated location on a map of a place currently being displayed on the main image 3310 may be obtained using network-based deep learning, and the digital device may receive information about the estimated location on the map of that place through the network.
  • the digital device may receive at least one of image information of the main image 3310 and image information of the auxiliary image 3320 from the outside.
  • the image information of the main image 3310 may include, for example, a broadcast signal received through a broadcast channel, source code information of the main image 3310, and IP packet (internet protocol packet) information of the main image 3310 received through a network, but is not limited thereto.
  • the image information of the auxiliary image 3320 may include, for example, a broadcast signal received through a broadcast channel, source code information of the auxiliary image 3320, and IP packet information of the auxiliary image 3320 received through a network, but is not limited thereto.
  • the digital device may decode and use video information of the main video 3310 or video information of the secondary video 3320 received from the outside. However, in some cases, the digital device may store image information of the main image 3310 or image information of the auxiliary image 3320 internally.
  • the digital device may display the main image 3310 and the auxiliary image 3320 on the screen 3300 of the digital device based on the image information of the main image 3310 and information related to the auxiliary image 3320.
  • the decoding apparatus 200 of the digital device may include a main image decoding apparatus and an auxiliary image decoding apparatus, and the main image decoding apparatus and the auxiliary image decoding apparatus may decode the image information of the main image 3310 and the image information of the auxiliary image 3320, respectively.
  • the renderer of the digital device may include a main image renderer (first renderer) and an auxiliary image renderer (second renderer); the main image renderer may display the main image 3310 on a first area of the screen 3300 of the digital device based on the information decoded by the main image decoding apparatus, and the auxiliary image renderer may display the auxiliary image 3320 on a second area of the screen 3300 of the digital device based on the information decoded by the auxiliary image decoding apparatus.
  • the decoding apparatus 200 of the digital device may decode the image information of the main image 3310 and the image information of the auxiliary image 3320. Based on the information decoded by the decoding apparatus 200, the renderer may process the main image 3310 and the auxiliary image 3320 together so that they are simultaneously displayed on the screen 3300 of the digital device (see the rendering sketch below).
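A minimal sketch of the rendering flow described above, assuming hypothetical decode_main/decode_aux decoders and a screen object with a blit-style drawing call; only the overall flow follows the text, not any particular API.

    def display_frame(screen, main_bitstream, aux_info,
                      decode_main, decode_aux, first_area, second_area):
        """Decode the main and auxiliary images and draw them on two screen areas."""
        main_image = decode_main(main_bitstream)   # main image decoding apparatus
        aux_image = decode_aux(aux_info)           # auxiliary image decoding apparatus
        screen.blit(main_image, first_area)        # first renderer -> first area
        screen.blit(aux_image, second_area)        # second renderer -> second area
        return screen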
  • the decoding of the first image may follow the decoding procedure in the decoding apparatus 200 according to FIG. 3 described above.
  • decoding the first image may include deriving prediction samples for the current block based on inter or intra prediction, deriving residual samples for the current block based on the received residual information, and deriving reconstructed samples based on the prediction samples and/or the residual samples.
  • decoding the first image may also include performing an in-loop filtering procedure on a reconstructed picture including the reconstructed samples (a minimal sketch of the reconstruction step follows below).
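The following sketch illustrates the sample reconstruction described above; it is an illustration only, and the 10-bit clipping range and the function name reconstruct_block are assumptions.

    import numpy as np

    def reconstruct_block(pred: np.ndarray, resid: np.ndarray, bit_depth: int = 10) -> np.ndarray:
        """Reconstructed samples = clip(prediction samples + residual samples)."""
        recon = pred.astype(np.int32) + resid.astype(np.int32)
        return np.clip(recon, 0, (1 << bit_depth) - 1).astype(np.uint16)

    # An in-loop filtering procedure (e.g. deblocking) would then be applied to
    # the reconstructed picture; it is omitted from this sketch.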
  • the auxiliary image may be an electronic program guide (EPG), an on screen display (OSD), or a graphical user interface (GUI).
  • the video information may be received through a broadcast network, and information regarding the auxiliary video may be received through the broadcast network.
  • the image information may be received through a communication network, and information regarding the auxiliary image may be received through the communication network.
  • the video information may be received through a broadcast network, and information regarding the auxiliary video may be received through a communication network.
  • the image information may be received through a broadcasting network or a communication network, and information regarding the auxiliary image may be stored in a storage medium in the digital device.
  • an embodiment of the present specification may be implemented in the form of a module, procedure, function, etc. that performs the functions or operations described above.
  • the software code can be stored in memory and driven by a processor.
  • the memory is located inside or outside the processor, and can exchange data with the processor by various known means.

Abstract

According to an embodiment, the present invention relates to a method and device for processing a video signal using inter-prediction. A video signal processing method according to an embodiment of the present invention comprises: acquiring at least one motion vector for inter-prediction of the current block from at least one neighboring block adjacent to the current block on the basis of a merge index; determining a merge with motion vector difference (MMVD) offset to be applied to the at least one motion vector on the basis of an MMVD distance index; and generating a prediction sample of the current block on the basis of the motion vector to which the MMVD offset has been applied and a reference picture associated with the merge index, wherein determining the MMVD offset comprises determining an MMVD candidate set on the basis of the at least one motion vector, and determining the MMVD offset associated with the MMVD distance index from the MMVD candidate set. The bit length required for signaling an MMVD index can be reduced by restricting the MMVD candidate set on the basis of a base motion vector.
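As a rough illustration of the MMVD idea summarized in this abstract (a sketch only; the candidate distances, directions, restriction rule, and function names below are assumptions, not values taken from the specification):

    # Illustrative sketch: apply a merge-with-MVD (MMVD) offset to a base motion
    # vector taken from a merge candidate. Restricting the candidate set based on
    # the base motion vector is what allows a shorter MMVD distance index.
    MMVD_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # +x, -x, +y, -y

    def mmvd_candidate_distances(base_mv):
        """Derive a restricted set of candidate distances from the base motion vector."""
        full_set = [1, 2, 4, 8, 16, 32, 64, 128]            # in quarter-pel units
        # assumption: keep only half of the set depending on the base MV magnitude
        if max(abs(base_mv[0]), abs(base_mv[1])) < 16:
            return full_set[:4]
        return full_set[4:]

    def apply_mmvd_offset(base_mv, distance_idx, direction_idx):
        """Return the motion vector with the MMVD offset applied."""
        dist = mmvd_candidate_distances(base_mv)[distance_idx]
        dx, dy = MMVD_DIRECTIONS[direction_idx]
        return (base_mv[0] + dx * dist, base_mv[1] + dy * dist)

    # e.g. base MV (20, -4) in quarter-pel units, distance index 1, +x direction:
    mv = apply_mmvd_offset((20, -4), distance_idx=1, direction_idx=0)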
PCT/KR2019/018560 2018-12-28 2019-12-27 Procédé et dispositif de traitement de signal vidéo à l'aide d'une inter-prédiction WO2020138997A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862785716P 2018-12-28 2018-12-28
US62/785,716 2018-12-28

Publications (1)

Publication Number Publication Date
WO2020138997A1 true WO2020138997A1 (fr) 2020-07-02

Family

ID=71129223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/018560 WO2020138997A1 (fr) 2018-12-28 2019-12-27 Procédé et dispositif de traitement de signal vidéo à l'aide d'une inter-prédiction

Country Status (1)

Country Link
WO (1) WO2020138997A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180135092A (ko) * 2011-01-07 2018-12-19 엘지전자 주식회사 영상 정보 부호화 방법 및 복호화 방법과 이를 이용한 장치
KR20170078672A (ko) * 2014-10-31 2017-07-07 삼성전자주식회사 고정밀 스킵 부호화를 이용한 비디오 부호화 장치 및 비디오 복호화 장치 및 그 방법
US20180352247A1 (en) * 2015-09-24 2018-12-06 Lg Electronics Inc. Inter prediction method and apparatus in image coding system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SEUNGSOO JEONG , MIN WOO PARK , CHANYUL KIM: "CE4 Ultimate motion vector expression in JVET-J0024 (Test 4.2.9)", THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16, no. JVET-K0115_v4, 18 July 2018 (2018-07-18), Ljubljana, SI, pages 1 - 7, XP030249220 *
SEUNGSOO JEONG , MIN WOO PARK , YINJI PIAO , KIHO CHOI: "CE4 Ultimate motion vector expression (Test 4.5.4)", THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16, no. JVET-L0054, 12 October 2018 (2018-10-12), Macao, CN, pages 1 - 6, XP030195377 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086678A (zh) * 2022-08-22 2022-09-20 北京达佳互联信息技术有限公司 视频编码方法和装置、视频解码方法和装置

Similar Documents

Publication Publication Date Title
WO2020122640A1 (fr) Procédé et dispositif de traitement de signal vidéo sur la base d'une transformée de vecteurs de mouvements basés sur l'historique
WO2020141914A1 (fr) Procédé et appareil de traitement de signal vidéo sur la base d'une prédiction de vecteur de mouvement basé sur l'historique
WO2020197083A1 (fr) Procédé et dispositif d'interprédiction basée sur le dmvr et le bdof
WO2020117018A1 (fr) Procédé et dispositif de traitement de signaux vidéo sur la base d'une interprédiction
WO2020060376A1 (fr) Procédé et appareil de traitement de signaux vidéo par inter-prédiction
WO2020197084A1 (fr) Procédé et appareil d'inter-prédiction sur la base d'un dmvr
WO2015137783A1 (fr) Procédé et dispositif de configuration d'une liste de candidats de fusion pour le codage et le décodage de vidéo intercouche
WO2020141911A1 (fr) Dispositif et procédé de traitement de signal vidéo par usage d'inter-prédiction
WO2020117016A1 (fr) Procédé et dispositif de traitement de signal vidéo sur la base d'une inter-prédiction
WO2020071871A1 (fr) Procédé et appareil de traitement de service d'image
WO2019194514A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction inter et dispositif associé
WO2020262931A1 (fr) Procédé et dispositif de signalisation permettant de fusionner une syntaxe de données dans un système de codage vidéo/image
WO2020009449A1 (fr) Procédé et dispositif permettant de traiter un signal vidéo à l'aide d'une prédiction affine
WO2020141913A1 (fr) Procédé et appareil permettant de traiter un signal vidéo sur la base d'une inter-prédiction
WO2019216714A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction inter et appareil correspondant
WO2020060312A1 (fr) Procédé et dispositif de traitement du signal d'image
WO2020141915A1 (fr) Procédé et dispositif de traitement de signal vidéo sur la base d'une prédiction de vecteur de mouvement basé sur l'historique
WO2020117013A1 (fr) Procédé et appareil de traitement de signal vidéo sur la base d'une inter-prédiction
WO2014163453A1 (fr) Procédé et appareil d'encodage vidéo intercouche et procédé et appareil de décodage vidéo intercouche permettant de compenser une différence de luminance
WO2020141853A1 (fr) Procédé et appareil de traitement de signal vidéo sur la base d'une inter-prédiction
WO2021040482A1 (fr) Dispositif et procédé de codage d'image à base de filtrage de boucle adaptatif
WO2020262929A1 (fr) Procédé et dispositif de signalisation de syntaxe dans un système de codage d'image/de vidéo
WO2020256487A1 (fr) Procédé et dispositif de codage d'image sur la base d'une prédiction de mouvement
WO2020141935A1 (fr) Dispositif et procédé de traitement d'un signal vidéo sur la base d'une interprédiction
WO2020138997A1 (fr) Procédé et dispositif de traitement de signal vidéo à l'aide d'une inter-prédiction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19905666

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19905666

Country of ref document: EP

Kind code of ref document: A1