WO2018221838A1 - Processing apparatuses and control methods thereof - Google Patents

Processing apparatuses and control methods thereof

Info

Publication number
WO2018221838A1
WO2018221838A1 · PCT/KR2018/002048
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
coding unit
frame
additional information
processor
Prior art date
Application number
PCT/KR2018/002048
Other languages
English (en)
Korean (ko)
Inventor
나상권
유기원
Original Assignee
삼성전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자 주식회사 filed Critical 삼성전자 주식회사
Priority to US16/495,469 priority Critical patent/US20200099950A1/en
Priority to CN201880030707.3A priority patent/CN110612725B/zh
Publication of WO2018221838A1 publication Critical patent/WO2018221838A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/463Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the present invention relates to processing apparatuses and control methods thereof, and more particularly, to processing apparatuses and control methods for performing inter coding and intra coding.
  • packet loss and transmission errors may occur when data must be transmitted within a predetermined time.
  • in such real-time scenarios, restoration through packet retransmission may be limited.
  • the Frame/Sub-Frame Duplication method and the context-based error concealment/restoration method are commonly used to recover from such transmission errors.
  • the Frame/Sub-Frame Duplication method determines whether an error has occurred through a cyclic redundancy check (CRC); if an error has occurred, it either repeatedly outputs the last normally transmitted image (frame), or copies the region of the previous normally transmitted image corresponding to the erroneous part of the image (sub-frame) into that region and outputs the result.
  • the frame/sub-frame duplication method suffers noticeable deterioration in viewing quality, owing to low reconstruction accuracy and to freezing artifacts caused by repeated playback of the previous image, and it is always accompanied by a transmission delay while the bit stream is CRC-checked in frame or sub-frame units. In particular, when errors occur in successive frames, the freezing artifacts caused by repeated playback of the same frame may be compounded.
  • the context-based error concealment / recovery method is a method of predicting and restoring pixels of a lost area by using mode and pixel information of an adjacent block.
  • for example, the pixels of a lost region may be predicted and restored using the motion vector (MV) of an adjacent block together with pixel information of a previously restored normal frame, or through a motion-prediction process on the decoder side that uses the mode and pixels of neighboring blocks and pixel information of a previously received normal frame.
  • however, the context-based error concealment/restoration method derives a reference MV only from the neighboring MVs, so its accuracy is limited, and an error in an incorrectly reconstructed image propagates through to the last frame.
  • in addition, decoder-side MV correction using neighboring pixels requires high computational complexity, and when no usable neighboring pixels or MV information is available, quality degradation accumulates as the error-concealed data is reused in subsequent frames.
  • the present invention is directed to the above-described needs, and an object of the present invention is to provide processing apparatuses and control methods thereof for improving the efficiency of reconstructing a pixel area in which an error has occurred in a frame constituting video content.
  • according to an embodiment, the processing apparatus includes a memory storing video content, and a processor configured to divide a frame constituting the video content into a plurality of coding units and to perform encoding for each of the plurality of coding units to generate an encoded frame, wherein the processor may add, to the encoded frame, additional information including a motion vector obtained in the encoding process for each of the plurality of coding units.
  • the additional information may include a motion vector for all of the plurality of coding units.
  • the additional information may be included in a reserved area of the header corresponding to the encoded frame.
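  • by way of illustration only (this sketch is not part of the original publication; the byte layout, field widths, and function names are assumptions), the additional information could be serialized for a reserved header area roughly as follows:

```python
import struct

def pack_additional_info(motion_vectors):
    """Pack per-coding-unit motion vectors (MV1..MVn) into a byte blob
    that could occupy a reserved header area (e.g., an SEI-style payload).
    The <H count + <hh per-MV layout is an illustrative assumption."""
    payload = struct.pack("<H", len(motion_vectors))       # number of coding units
    for mv_x, mv_y in motion_vectors:
        payload += struct.pack("<hh", mv_x, mv_y)          # signed 16-bit components
    return payload

def unpack_additional_info(payload):
    """Inverse of pack_additional_info: recover the list of (mv_x, mv_y)."""
    (count,) = struct.unpack_from("<H", payload, 0)
    return [struct.unpack_from("<hh", payload, 2 + 4 * i) for i in range(count)]
```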
  • the processor may search for a motion vector corresponding to the current coding unit in the current frame including the current coding unit and in a predetermined number of frames adjacent to the current frame, and may add, to the additional information, identification information of at least one frame, among the current frame and the adjacent frames, that includes the pixel area corresponding to the searched motion vector.
  • the processor may search for a motion vector corresponding to the current coding unit and, if the pixel values at corresponding positions in the current coding unit and in the pixel region indicated by the searched motion vector satisfy a preset condition, may add, to the additional information, information for using the motion vector of a neighboring coding unit of the current coding unit instead.
  • when the motion vectors of at least two coding units are the same, the processor may add, to the additional information, the position information for the at least two coding units and the motion vector of one of the at least two coding units.
  • when regularity is detected between motion vectors of the plurality of coding units, the processor may add information corresponding to the detected regularity to the additional information.
  • according to another embodiment, the processing apparatus for performing decoding includes a memory in which encoded video content is stored, and a processor configured to generate a decoded frame by decoding, in units of coding units, an encoded frame constituting the encoded video content.
  • the encoded video content includes, for each encoded frame, additional information including motion vectors obtained in the encoding process for each of the plurality of coding units constituting the encoded frame; if decoding of the current coding unit is impossible, the processor may obtain a motion vector for the current coding unit from the additional information and perform decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector.
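  • a minimal sketch of the replacement step just described, assuming 8-bit grayscale frames held as NumPy arrays and an in-bounds motion vector (all names are hypothetical, not from the publication):

```python
import numpy as np

def conceal_coding_unit(frame, reference_frame, cu_x, cu_y, cu_size, mv):
    """Replace an undecodable coding unit with the pixel region that the
    motion vector from the additional information points to in a reference
    frame (simple copy-based concealment). Assumes the displaced region
    stays inside the reference frame."""
    mv_x, mv_y = mv
    src_x, src_y = cu_x + mv_x, cu_y + mv_y
    patch = reference_frame[src_y:src_y + cu_size, src_x:src_x + cu_size]
    frame[cu_y:cu_y + cu_size, cu_x:cu_x + cu_size] = patch
```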
  • the additional information may include a motion vector for all of the plurality of coding units.
  • the additional information may be included in a reserved area of a header corresponding to the encoded frame.
  • the additional information may include identification information of at least one frame including a pixel area corresponding to the motion vector; when decoding of the current coding unit is impossible, the processor may obtain, from the additional information, the motion vector and the identification information for the current coding unit, and perform decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector in the frame corresponding to the obtained identification information.
  • the additional information may include information for using the motion vector of a neighboring coding unit of the current coding unit; when decoding of the current coding unit is impossible, the processor may obtain that information from the additional information and perform decoding by replacing the current coding unit with the pixel region corresponding to the motion vector of the neighboring coding unit.
  • the additional information may include position information about at least two coding units having the same motion vector and the motion vector of one of the at least two coding units; when decoding of the current coding unit is impossible, the processor may, based on the position information, perform decoding by replacing the current coding unit with the pixel region corresponding to the motion vector of one of the at least two coding units.
  • the additional information may include information corresponding to regularity between the motion vectors of the plurality of coding units; when decoding of the current coding unit is impossible, the processor may obtain a motion vector corresponding to the current coding unit based on the information corresponding to the regularity, and perform decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector.
  • a control method of the processing apparatus may include dividing a frame constituting video content into a plurality of coding units, and encoding each of the plurality of coding units to generate an encoded frame.
  • the generating of the encoded frame may include adding, to the encoded frame, additional information including the motion vector obtained in the encoding process for each of the plurality of coding units.
  • the additional information may include a motion vector for all of the plurality of coding units.
  • the additional information may be included in a reserved area of a header corresponding to the encoded frame.
  • the generating of the encoded frame may include searching for a motion vector corresponding to the current coding unit in the current frame including the current coding unit and in a predetermined number of frames adjacent to the current frame, and adding, to the additional information, identification information of at least one frame, among the current frame and the adjacent frames, that includes the pixel area corresponding to the searched motion vector.
  • the generating of the encoded frame may include searching for a motion vector corresponding to the current coding unit and, if the pixel values at corresponding positions in the current coding unit and in the pixel region indicated by the searched motion vector satisfy a preset condition, adding, to the additional information, information for using the motion vector of a neighboring coding unit of the current coding unit.
  • according to another embodiment, a control method of the processing apparatus includes performing decoding in units of coding units on an encoded frame constituting encoded video content, and generating a decoded frame by arranging the plurality of decoded coding units in a predetermined direction, wherein the encoded video content includes, for each encoded frame, additional information including a motion vector obtained in the encoding process for each of the plurality of coding units constituting the encoded frame.
  • the decoding may include, when decoding of the current coding unit is impossible, obtaining a motion vector for the current coding unit from the additional information and performing decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector.
  • the processing apparatuses may add a motion vector for each of the plurality of coding units constituting the frame to the encoded frame and use the same to improve reconstruction efficiency when an error occurs.
  • FIG. 1 is a block diagram showing a configuration of a processing apparatus that performs encoding for better understanding of the present invention.
  • FIG. 2 is a block diagram showing a configuration of a processing apparatus that performs decoding for better understanding of the present invention.
  • FIG. 3 is a simplified block diagram illustrating a processing apparatus for performing encoding according to an embodiment of the present invention.
  • FIG. 4 is a diagram for describing a method of generating additional information according to an exemplary embodiment.
  • FIG. 5 is a diagram for describing a method of generating additional information according to another exemplary embodiment.
  • 6A and 6B are diagrams for describing a case in which occlusion occurs according to an embodiment of the present invention.
  • FIG. 7 is a diagram for describing a method for reducing a data amount of additional information according to an exemplary embodiment.
  • FIG. 8 is a simplified block diagram illustrating a processing apparatus for performing decoding according to an embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating a control method of a processing apparatus for performing encoding according to an embodiment of the present invention.
  • FIG. 10 is a flowchart illustrating a control method of a processing apparatus for performing decoding according to an embodiment of the present invention.
  • referring to FIG. 1, the processing apparatus 100 includes a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transformer 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference frame buffer 190.
  • each functional unit may be implemented in at least one hardware form (eg, at least one processor), but may also be implemented in at least one software or program form.
  • the processing device 100 is a device that encodes video content and changes it into another signal type.
  • the video content may include a plurality of frames, and each frame may include a plurality of pixels.
  • the processing device 100 may be a device for compressing raw data.
  • the processing device 100 may be a device for changing the pre-encoded data into another signal form.
  • the processing apparatus 100 may perform encoding by dividing each frame into a plurality of blocks.
  • the processing device 100 may perform encoding on a block basis through temporal or spatial prediction, transformation, quantization, filtering, entropy encoding, and the like.
  • Prediction means generating a prediction block similar to a target block to be encoded.
  • a unit of a target block to be encoded may be defined as a prediction unit (PU), and prediction is divided into temporal prediction and spatial prediction.
  • Temporal prediction means inter prediction.
  • the processing apparatus 100 may store some reference frames having a high correlation with the frame to be currently encoded, and perform inter prediction using the reference frames. That is, the processing apparatus 100 may generate a prediction block from a reference frame reconstructed after encoding at a previous time. In this case, the processing apparatus 100 is said to perform inter encoding.
  • the motion predictor 111 may search for a block having the highest temporal correlation with the target block in the reference frame stored in the reference frame buffer 190.
  • the motion predictor 111 may interpolate the reference frame and search for a block having the highest temporal correlation with the target block in the interpolated frame.
  • the reference frame buffer 190 is a space for storing the reference frame.
  • the reference frame buffer 190 is used only when performing inter prediction, and may store some reference frames having a high correlation with the frame to be currently encoded.
  • the reference frame may be a frame generated by sequentially transforming, quantizing, inverse quantization, inverse transform, and filtering a difference block to be described later. That is, the reference frame may be a frame reconstructed after encoding.
  • the motion compensator 112 may generate a prediction block based on the motion information of the block having the highest temporal correlation with the target block found by the motion predictor 111.
  • the motion information may include a motion vector, a reference frame index, and the like.
  • the intra predictor 120 may generate a prediction value for the target block by performing spatial prediction from adjacent pixels reconstructed after encoding in the current frame. In this case, the processing apparatus 100 is said to perform intra encoding.
  • Inter encoding or intra encoding may be determined in units of coding units (CUs).
  • the coding unit may include at least one prediction unit.
  • the position of the switch 115 may be changed to correspond to the encoding prediction method.
  • meanwhile, the reference frame reconstructed after encoding that is used in temporal prediction may be a frame to which filtering has been applied, whereas the adjacent pixels reconstructed after encoding that are used in spatial prediction may be pixels to which no filtering has been applied.
  • the subtractor 125 may generate a residual block by obtaining a difference between the target block and the prediction block obtained from the temporal prediction or the spatial prediction.
  • the difference block may be a block from which a lot of redundancy has been removed by the prediction process, but may be a block including information to be encoded because the prediction is not completely performed.
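  • as a minimal illustration of this subtraction (not part of the publication; NumPy arrays and dtypes are assumptions):

```python
import numpy as np

def difference_block(target_block, prediction_block):
    """Residual = target - prediction, widened to a signed type so the
    difference of two 8-bit blocks cannot overflow."""
    return target_block.astype(np.int16) - prediction_block.astype(np.int16)
```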
  • the transformer 130 may transform the difference block obtained after intra prediction or inter prediction and output transform coefficients in the frequency domain, in order to remove spatial redundancy.
  • a unit of a transform is a transform unit (TU), and may be determined irrespective of a prediction unit.
  • a frame including a plurality of difference blocks may be divided into a plurality of transform units regardless of prediction units, and the transformer 130 may perform the transform for each transform unit.
  • the division of the transform unit may be determined according to the bit rate optimization.
  • the present invention is not limited thereto, and the transform unit may be determined in association with at least one of the coding unit and the prediction unit.
  • the transformer 130 may perform a transform that concentrates the energy of each transform unit in a specific frequency region.
  • for example, the transformer 130 may concentrate data in the low-frequency region by performing a discrete cosine transform (DCT)-based transform on each transform unit.
  • alternatively, the transformer 130 may perform a Discrete Fourier Transform (DFT)-based transform or a Discrete Sine Transform (DST)-based transform.
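  • as a concrete illustration of the DCT-based case, a sketch of a separable 2-D DCT-II using SciPy (an illustrative choice, not the publication's implementation):

```python
from scipy.fftpack import dct, idct

def dct2(block):
    """Separable 2-D DCT-II (rows then columns); with norm="ortho" the
    transform is orthonormal, so idct2 below inverts it exactly."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coefficients):
    """Inverse 2-D DCT, recovering the spatial-domain block."""
    return idct(idct(coefficients, axis=0, norm="ortho"), axis=1, norm="ortho")
```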
  • the quantization unit 140 performs quantization on the transform coefficients, approximating them to a predetermined number of representative values. That is, the quantization unit 140 may map input values in a specific range to one representative value. In this process, high-frequency signals that humans do not perceive well can be eliminated, and information loss can occur.
  • the quantization unit 140 may use either a uniform or a non-uniform quantization method according to the probability distribution of the input data or the purpose of quantization. For example, when the probability distribution of the input data is uniform, the quantization unit 140 may use a uniform quantization method; when it is not uniform, the quantization unit 140 may use a non-uniform quantization method.
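  • a sketch of the uniform case described above (the step size and dtypes are illustrative assumptions):

```python
import numpy as np

def quantize(coefficients, step):
    """Map each transform coefficient to the index of the nearest multiple
    of `step`; this many-to-one mapping is where information loss occurs."""
    return np.round(coefficients / step).astype(np.int32)

def dequantize(levels, step):
    """Recover the representative values; the remaining quantization error
    per coefficient is at most step / 2."""
    return levels.astype(np.float64) * step
```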
  • the entropy encoding unit 150 may reduce the amount of data by variably allocating the length of the symbol according to the occurrence probability of the symbol with respect to the data input from the quantization unit 140. That is, the entropy encoding unit 150 may generate a bit stream by expressing the input data as a bit string having a variable length consisting of 0 and 1 based on the probability model.
  • the entropy encoding unit 150 may express input data by allocating a small number of bits to a symbol having a high occurrence probability and a large number of bits to a symbol having a low occurrence probability. Accordingly, the size of the bit string of the input data can be reduced, and the compression performance of the video encoding can be improved.
  • the entropy encoding unit 150 may perform entropy encoding by a variable length coding or arithmetic coding method such as Huffman coding and Exponential-Golomb coding.
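  • for instance, order-0 Exponential-Golomb coding realizes this variable-length idea by giving shorter codes to smaller (more probable) symbol values; a minimal sketch:

```python
def exp_golomb_encode(n):
    """Order-0 Exp-Golomb code of a non-negative integer: the binary form
    of n + 1 prefixed by (bit_length - 1) zeros, e.g. 0 -> "1", 1 -> "010"."""
    value = n + 1
    return "0" * (value.bit_length() - 1) + format(value, "b")

def exp_golomb_decode(bits):
    """Inverse: count the leading zeros, then read that many + 1 bits."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2) - 1
```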
  • the inverse quantization unit 160 and the inverse transform unit 170 may receive the input quantized transform coefficients and perform inverse transformation after inverse quantization, respectively, to generate a reconstructed differential block.
  • the adder 175 may generate the reconstructed block by adding the reconstructed difference block and the predictive block obtained from the temporal prediction or the spatial prediction.
  • the filter unit 180 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed image.
  • the filtered reconstructed frame may be stored in the reference frame buffer 190 and used as a reference frame.
  • referring to FIG. 2, the processing apparatus 200 includes an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an adder 235, an intra predictor 240, a motion compensator 250, a switch 255, a filter unit 260, and a reference frame buffer 270.
  • the processing apparatus 200 that performs decoding may receive a bit stream generated by the processing apparatus 100 that performs encoding, and perform decoding to reconstruct the video.
  • the processing apparatus 200 may perform decoding through entropy decoding, inverse quantization, inverse transformation, filtering, and the like on a block basis.
  • the entropy decoding unit 210 may entropy decode the input bit stream to generate quantized transform coefficients.
  • the entropy decoding method may be the reverse of the method used by the entropy encoding unit 150 of FIG. 1.
  • the inverse quantization unit 220 may perform inverse quantization on the received quantized transform coefficients. That is, through the operations of the quantization unit 140 and the inverse quantization unit 220, an input value within a specific range is changed to one reference input value for that range, and in this process an error corresponding to the difference between the input value and the reference input value can occur.
  • the inverse transform unit 230 inversely transforms the data output from the inverse quantization unit 220, applying the reverse of the method used by the transformer 130.
  • the inverse transform unit 230 may generate a reconstructed difference block by performing inverse transform.
  • the adder 235 may generate the reconstructed block by adding the reconstructed difference block and the predictive block.
  • the prediction block may be a block generated through inter encoding or intra encoding.
  • the motion compensator 250 may receive motion information about a target block to be decoded from the processing apparatus 100 that performs encoding, or derive it from a neighboring block, and may generate a prediction block based on the received or derived motion information.
  • the motion compensator 250 may generate a prediction block from a reference frame stored in the reference frame buffer 270.
  • the motion information may include a motion vector, a reference frame index, etc. for the block having the highest temporal correlation with the target block.
  • the reference frame buffer 270 may store some reference frames having a high correlation with the frame to be currently decoded.
  • the reference frame may be a frame generated by filtering the above-described reconstruction block. That is, the reference frame may be a frame in which a bit stream generated by the processing apparatus 100 that performs encoding is reconstructed.
  • the reference frame used in the processing device 200 to perform decoding may be the same as the reference frame used in the processing device 100 to perform encoding.
  • the intra prediction unit 240 may generate a prediction value for the target block by performing spatial prediction from reconstructed neighboring pixels in the current frame.
  • the switch 255 may be changed in position according to the prediction method of decoding the target block.
  • the filter unit 260 may apply at least one of a deblocking filter, SAO, and ALF to the reconstructed frame.
  • the filtered reconstructed frame may be stored in the reference frame buffer 270 to be used as a reference frame.
  • the processing apparatus 200 may further include a parser (not shown) that parses information related to an encoded frame included in the bit stream.
  • the parsing unit may include the entropy decoding unit 210 or may be included in the entropy decoding unit 210.
  • the processing apparatus 100 that performs encoding may compress data of the video through an encoding process, and transmit the compressed data to the processing apparatus 200 that performs decoding.
  • the processing device 200 that performs decoding may reconstruct the video content by decoding the compressed data.
  • FIG. 3 is a simplified block diagram illustrating a processing apparatus 100 for performing encoding according to an embodiment of the present invention.
  • the processing device 100 includes a memory 310 and a processor 320.
  • the memory 310 is provided separately from the processor 320 and may be implemented as a hard disk, a nonvolatile memory, a volatile memory, or the like. However, in some cases, the memory 310 may be implemented as a memory inside the processor 320.
  • the memory 310 may store video content, a reference frame, and the like.
  • the reference frame may be a reconstruction frame of the frame encoded by the processor 320.
  • the memory 310 may store the entire video content, but may also store a part of the video content streamed from an external server in real time. In this case, the memory 310 may store only a part of the video content that is received in real time, and may delete data of the encoded video content.
  • the processor 320 generally controls the operation of the processing device 100.
  • the processor 320 may divide a frame constituting the video content into a plurality of coding units, and generate an encoded frame by performing encoding on each of the plurality of coding units.
  • the plurality of coding units may be a Largest Coding Unit (LCU).
  • the present invention is not limited thereto, and the processor 320 may divide the frame into a plurality of coding units having different sizes.
  • the sizes of the plurality of coding units may all be different.
  • the processor 320 may add additional information including the motion vector obtained in the encoding process for each of the plurality of coding units to the encoded frame.
  • the processor 320 may perform temporal prediction and spatial prediction on the current coding unit. In addition, the processor 320 may determine whether to intra-encode or inter-encode the current coding unit based on the error due to temporal prediction and the error due to spatial prediction.
  • the processor 320 may separately generate additional information including a motion vector of the current coding unit regardless of the encoding method of the current coding unit.
  • the processor 320 may include additional information in the reserved area of the header corresponding to the encoded frame.
  • the header may be a Supplemental Enhancement Information (SEI) header.
  • the processor 320 may include additional information in a separate storage area instead of the header.
  • the processor 320 may store additional information by generating an additional area other than the header and data areas.
  • the processing apparatus 200 that performs decoding which will be described later, may also store information about a location where the additional information is stored.
  • the processor 320 may separately generate additional information including the motion vector of the current coding unit even if the current coding unit is intra-encoded. Conventionally, when a coding unit is intra-encoded, its motion vector is discarded rather than stored.
  • the processor 320 may store the motion vector of the inter-encoded current coding unit in the data region and also separately generate additional information including that motion vector. That is, when the current coding unit is inter-encoded, the processor 320 may store its motion vector twice.
  • the processor 320 may separately generate additional information including motion vectors for all of the plurality of coding units.
  • the processor 320 may generate additional information in frame units, but is not limited thereto.
  • the processor 320 may generate additional information in units of slices.
  • the processor 320 may separately generate additional information including motion vectors for all the coding units included in the slice, and include the generated additional information in the header of the slice.
  • the processor 320 may generate additional information in units of a plurality of frames.
  • that is, the number of motion vectors for the plurality of coding units that the processor 320 includes in one piece of additional information is not limited.
  • the processor 320 may generate additional information based on a communication state.
  • the processor 320 may search for a motion vector corresponding to the current coding unit in the current frame including the current coding unit and in a predetermined number of frames adjacent to the current frame, and may add, to the additional information, identification information of at least one frame, among the current frame and the adjacent frames, that includes the pixel area corresponding to the searched motion vector.
  • the processor 320 may search for motion vectors in adjacent frames as well as the current frame including the current coding unit in temporal prediction for the current coding unit.
  • the motion vector alone may not indicate the pixel area searched as most similar to the current coding unit.
  • the processor 320 may generate identification information of the frame including the pixel area corresponding to the searched motion vector and additional information including the motion vector.
  • the processor 320 may search for a motion vector corresponding to the current coding unit and, if the pixel values at corresponding positions in the current coding unit and in the pixel region indicated by the searched motion vector satisfy a preset condition, may add, to the additional information, information for using the motion vector of a neighboring coding unit of the current coding unit.
  • for example, occlusion may occur in a particular area within a frame, for instance due to human motion, such that the pixel data of a particular coding unit changes entirely for a moment.
  • in this case, even the most similar area found by the search may differ greatly from the current coding unit in pixel data values.
  • the decoding apparatus 200, which will be described later, replaces the current coding unit with the pixel region corresponding to the motion vector included in the additional information when an error occurs; if the difference in pixel data values between the current coding unit and that pixel region is large, the viewer may perceive the replacement as unnatural.
  • accordingly, the processor 320 may sum the differences between the pixel values at corresponding positions in the current coding unit and in the pixel region corresponding to the searched motion vector, and if the sum is larger than a predetermined value, may add, to the additional information, information for using the motion vector of a neighboring coding unit of the current coding unit instead of the motion vector of the current coding unit.
  • meanwhile, when the motion vectors of at least two coding units are the same, the processor 320 may add, to the additional information, the position information of the at least two coding units and the motion vector of one of them. That is, the processor 320 may compress the size of the additional information through this operation.
  • for example, if the motion vectors of at least two consecutive coding units are the same, the processor 320 may add, to the additional information, position information for the first and last of those coding units together with a single motion vector. In this case, compression efficiency improves because only the information identifying the first and last coding units sharing the motion vector is added to the additional information.
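  • a sketch of this run-collapsing idea (the triple layout is an assumption, not the publication's syntax):

```python
def compress_mv_runs(motion_vectors):
    """Collapse consecutive coding units sharing one motion vector into
    (first_index, last_index, mv) triples for the additional information."""
    runs, start = [], 0
    for i in range(1, len(motion_vectors) + 1):
        if i == len(motion_vectors) or motion_vectors[i] != motion_vectors[start]:
            runs.append((start, i - 1, motion_vectors[start]))
            start = i
    return runs

# e.g. [(1, 1), (1, 1), (1, 1), (0, 2)] -> [(0, 2, (1, 1)), (3, 3, (0, 2))]
```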
  • in this case as well, the processor 320 may add, to the additional information, the position information for the at least two coding units, one motion vector of the at least two coding units, and identification information of the frame including the pixel region corresponding to that motion vector.
  • alternatively, when regularity is detected between motion vectors, the processor 320 may add information corresponding to the detected regularity to the additional information. For example, if regularity between the motion vectors of all of the plurality of coding units is detected, the processor 320 may add information about a table or a formula corresponding to the detected regularity to the additional information. That is, the processor 320 may compress the size of the additional information through this operation.
  • the present invention is not limited thereto, and the processor 320 may detect regularity between motion vectors for a portion of the plurality of coding units instead of all of the plurality of coding units.
  • the processing device 100 may further include an interface (not shown), and may communicate with the processing device 200 that performs decoding, which will be described later, through the interface.
  • the processor 320 may transmit an encoded bit stream, a motion vector, additional information, and the like, to the processing device 200 that performs decoding through an interface.
  • the interface may communicate with the processing device 200 that performs decoding using wired/wireless LAN, WAN, Ethernet, Bluetooth, Zigbee, IEEE 1394, WiFi, or Power Line Communication (PLC).
  • FIG. 4 is a diagram for describing a method of generating additional information according to an exemplary embodiment.
  • the processor 320 may divide a frame constituting video content into a plurality of coding units. For example, the processor 320 may divide the frame into 12 coding units. However, this is only an example, and the processor 320 may distinguish the number of the plurality of coding units differently for each frame.
  • the processor 320 may add additional information including the motion vector obtained in the encoding process for each of the plurality of coding units to the encoded frame. For example, as shown at the bottom of FIG. 4, the processor 320 may include additional information in a reserved area of a header corresponding to the encoded frame.
  • the additional information included in the reserved area of the header may include motion vectors MV1 to MV12 for all of the plurality of coding units.
  • the processor 320 may include only motion vectors of coding units having a predetermined size or more among the plurality of coding units in the additional information.
  • the processor 320 may include, in the additional information, only MV1 and MV12 which are motion vectors of the first coding unit and the last coding unit larger than a predetermined size among the plurality of coding units.
  • that is, the processor 320 may skip storing the motion vectors of small coding units whose loss is difficult for the viewer's eye to notice, and store in the additional information only the motion vectors of coding units large enough to be noticeable.
  • the processing apparatus 200 that performs decoding may restore the coding unit that is not decodeable according to the related art when the decoding of the small coding unit is impossible. In this case, the viewer may not feel heterogeneous because the size of the coding unit that cannot be decoded is very small.
  • the processing apparatus 200 that performs decoding may reconstruct a coding unit that is not decodable using a motion vector stored in additional information when decoding of a coding unit of a predetermined size or more is impossible. In this case, the restoration performance is improved compared to the prior art, thereby minimizing the heterogeneity felt by the viewer.
  • the processor 320 may sequentially include a bit string for a motion vector of a coding unit of a predetermined size or more among a plurality of coding units in a header.
  • for example, the processor 320 may include the bit strings of MV1 and MV12 in the header without separate identifiers for the corresponding coding units.
  • the processing apparatus 200 that performs decoding may determine that the first motion vector in the bit string is MV1 and the second motion vector is MV12 based on the size information of the plurality of coding units stored in the header.
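  • a sketch of how a decoder could pair the stored bit string of motion vectors with only the large coding units, using the per-unit size information from the header (all names and the threshold are assumptions):

```python
def map_mvs_to_large_units(cu_sizes, stored_mvs, min_size):
    """The additional information stores MVs in coding-unit order but only
    for units at or above `min_size`, so the size list alone identifies
    which stored MV belongs to which unit (e.g. MV1 first, MV12 second)."""
    mapping, mv_iter = {}, iter(stored_mvs)
    for index, size in enumerate(cu_sizes):
        if size >= min_size:
            mapping[index] = next(mv_iter)
    return mapping
```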
  • FIG. 5 is a diagram for describing a method of generating additional information according to another exemplary embodiment.
  • the processor 320 may search for a motion vector corresponding to the current coding unit in the current frame including the current coding unit and in a preset number of frames adjacent to the current frame, and may add, to the additional information, identification information of at least one frame, among the current frame and the adjacent frames, that includes the pixel area corresponding to the searched motion vector.
  • that is, the processor 320 may include as search targets not only the frame immediately preceding the frame containing the current coding unit, but also earlier frames or frames after the frame containing the current coding unit. In this case, storing only the motion vector for the searched pixel area could cause an error, so the processor 320 may additionally include, in the additional information, identification information about the frame containing the searched pixel area.
  • for example, the processor 320 may sequentially include, in the additional information, the first motion vector for the first coding unit and the frame containing the pixel area corresponding to the first motion vector, the second motion vector and the frame containing the pixel area corresponding to the second motion vector, ..., and the n-th motion vector and the frame containing the pixel region corresponding to the n-th motion vector.
  • the present invention is not limited thereto, and the processor 320 may generate additional information in any other order.
  • 6A and 6B are diagrams for describing a case in which occlusion occurs according to an embodiment of the present invention. As described above, when occlusion occurs, it may be more efficient to use the peripheral pixel region of the current coding unit rather than the searched pixel region.
  • the processor 320 may compare the current coding unit T and the searched pixel area A corresponding to the current coding unit T.
  • the processor 320 may calculate a difference between pixel values of corresponding positions between the current coding unit T and the searched pixel region A.
  • the processor 320 may calculate the difference A1-T1 of the pixel value of the upper left pixel and calculate the difference of the pixel value of the remaining pixels.
  • the processor 320 may sum the pixel-value differences over all four pixel positions, and if the sum is greater than a predetermined value, may determine that the difference between the current coding unit T and the searched pixel area A is large. In this case, as illustrated in FIG. 6B, the processor 320 may add, to the additional information, information for using the motion vector of one of the neighboring coding units 620 to 690 of the current coding unit T.
  • in this case, the processor 320 may compare the current coding unit T with each of the adjacent coding units 620 to 690 of the current coding unit T.
  • the comparison method may be the same as the method of comparing the current coding unit T and the searched pixel region A.
  • for example, the processor 320 may calculate the differences between the pixel values at corresponding positions in the current coding unit T and the neighboring coding unit 620, and compute their sum. The processor 320 may repeat the same calculation for the remaining adjacent coding units 630 to 690. Finally, the processor 320 may obtain eight sums and determine the adjacent coding unit corresponding to the smallest of them. The processor 320 may then add, to the additional information, information for using the motion vector of the determined neighboring coding unit among the plurality of neighboring coding units 620 to 690 instead of the motion vector of the current coding unit T.
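  • the comparison just described amounts to choosing the neighbor with the smallest sum of absolute differences (SAD); a sketch under that reading, with hypothetical data structures:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def select_neighbor_mv(current_cu, neighbors):
    """Pick, among the (up to eight) adjacent coding units, the one most
    similar to the current unit and return its motion vector for the
    additional information. Each neighbor is assumed to be a dict holding
    its pixel block under "pixels" and its motion vector under "mv"."""
    best = min(neighbors, key=lambda n: sad(current_cu, n["pixels"]))
    return best["mv"]
```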
  • FIG. 7 is a diagram for describing a method for reducing a data amount of additional information according to an exemplary embodiment.
  • as described above, when the motion vectors of at least two coding units are the same, the processor 320 may add, to the additional information, the position information for the at least two coding units and the motion vector of one of them.
  • for example, the processor 320 may add, to the additional information, MV4 together with position information about the three coding units, among the 10 coding units, that have the same motion vector MV4.
  • alternatively, the processor 320 may add, to the additional information, the position information for at least two consecutive coding units and the motion vector of one of the at least two consecutive coding units.
  • in this way, the processor 320 may obtain the position information of the at least two coding units and the motion vector of one of the at least two coding units, and add them to the additional information.
  • alternatively, when regularity is detected between motion vectors, the processor 320 may add information corresponding to the detected regularity to the additional information.
  • for example, if the motion vectors of some coding units exhibit linearity, the processor 320 may add an equation describing that linearity to the additional information, together with information about the coding units to which the equation applies.
  • the present invention is not limited thereto, and the processor 320 may generate a table and add it to additional information.
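  • as one reading of the equation-based variant, a least-squares sketch (the linear model per component and the exactness test are assumptions):

```python
import numpy as np

def fit_mv_linearity(motion_vectors):
    """Fit MV_i ~ a*i + b separately for the x and y components; if the
    fit is exact, the two (a, b) pairs plus the covered index range can
    stand in for the whole motion-vector list in the additional information."""
    indices = np.arange(len(motion_vectors))
    mvs = np.asarray(motion_vectors, dtype=np.float64)            # shape (n, 2)
    coefficients = [np.polyfit(indices, mvs[:, c], 1) for c in (0, 1)]
    exact = all(np.allclose(np.polyval(cf, indices), mvs[:, c])
                for c, cf in enumerate(coefficients))
    return coefficients, exact
```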
  • the processor 320 may generate additional information by repeating the above process for a plurality of coding units constituting all frames.
  • FIG. 8 is a simplified block diagram illustrating a processing apparatus 200 for performing decoding according to an embodiment of the present invention.
  • the processing device 200 includes a memory 810 and a processor 820.
  • the memory 810 is provided separately from the processor 820 and may be implemented as a hard disk, a nonvolatile memory, a volatile memory, or the like.
  • the memory 810 may store encoded video content, reference frames, and the like.
  • the reference frame may be a reconstruction frame of the frame encoded by the processor 820.
  • the memory 810 may store the entire encoded video content, but may also store a portion of the encoded video content streamed from the processing apparatus 100 that performs encoding in real time. In this case, the memory 810 may store only a part of the encoded video content that is received in real time, and delete data of the displayed video content.
  • the processor 820 generally controls the operation of the processing device 200.
  • the processor 820 may generate a decoded frame by decoding the encoded frame constituting the encoded video content in units of coding units.
  • the encoded video content may be content in which additional information including a motion vector obtained in the encoding process for each of the plurality of coding units constituting the encoded frame is added for each encoded frame.
  • when decoding of the current coding unit is impossible, the processor 820 may obtain a motion vector for the current coding unit from the additional information, and perform decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector.
  • for example, when decoding is impossible during the decoding of the current coding unit, such as when communication is temporarily disconnected so that specific data is not received, or when specific data is corrupted, the processor 820 may obtain the motion vector for the current coding unit from the additional information and perform decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector.
  • the additional information may include motion vectors for all of the plurality of coding units.
  • the present invention is not limited thereto, and the additional information may include motion vectors for some of the plurality of coding units.
  • the additional information may include information about motion vectors for all of the plurality of coding units in a modified state.
  • the additional information may be included in the reserved area of the header corresponding to the encoded frame.
  • the present invention is not limited thereto, and the additional information may be stored anywhere as long as it is kept separate from the data area. That is, the motion vector included in the additional information may be stored separately from the motion vector stored in the inter encoding process, and a motion vector may also be stored separately for an intra-encoded coding unit.
  • the additional information may include identification information of at least one frame including a pixel region corresponding to the motion vector; when decoding of the current coding unit is impossible, the processor 820 may obtain, from the additional information, the motion vector and the identification information for the current coding unit, and perform decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector in the frame corresponding to the obtained identification information.
  • the additional information may include information for using the motion vector of a neighboring coding unit of the current coding unit; when decoding of the current coding unit is impossible, the processor 820 may obtain that information from the additional information and perform decoding by replacing the current coding unit with the pixel region corresponding to the motion vector of the neighboring coding unit, based on the obtained information.
  • the additional information may include position information for at least two coding units having the same motion vector and the motion vector of one of the at least two coding units; when decoding of the current coding unit is impossible, the processor 820 may, based on the position information, perform decoding by replacing the current coding unit with the pixel region corresponding to the motion vector of one of the at least two coding units.
  • the additional information may include information corresponding to regularity between the motion vectors of the plurality of coding units; when decoding of the current coding unit is impossible, the processor 820 may obtain the motion vector corresponding to the current coding unit based on the information corresponding to the regularity, and perform decoding by replacing the current coding unit with the pixel region corresponding to the obtained motion vector.
  • the processing device 200 may further include an interface (not shown), and may communicate with the processing device 100 that performs encoding through the interface.
  • the processor 820 may receive an encoded bit stream, a motion vector, additional information, and the like from the processing device 100 that performs encoding through an interface.
  • the interface may communicate with the processing device 100 that performs encoding using wired/wireless LAN, WAN, Ethernet, Bluetooth, Zigbee, IEEE 1394, WiFi, or Power Line Communication (PLC).
  • the processing device may include a memory and a processor.
  • a frame constituting video content is divided into a plurality of coding units (S910).
  • an encoded frame is generated by encoding the plurality of coding units (S920).
  • in generating the encoded frame, additional information including the motion vector obtained during encoding for each of the plurality of coding units may be added to the encoded frame.
  • the additional information may include motion vectors for all of the plurality of coding units.
  • the additional information may be included in a reserved area of the header corresponding to the encoded frame.
  • in generating the encoded frame, a motion vector corresponding to the current coding unit may be searched for in the current frame including the current coding unit and in a predetermined number of frames adjacent to the current frame, and identification information of at least one adjacent frame containing the pixel area corresponding to the searched motion vector may be added to the additional information; a block-matching sketch follows below.
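The disclosure does not specify the search algorithm. An exhaustive sum-of-absolute-differences search is a common baseline and is shown here only as a hedged sketch; grayscale NumPy frames and the search_range parameter are assumptions.

```python
import numpy as np

def search_motion_vector(cur_frame, ref_frame, rect, search_range=8):
    """Full-search block matching: return the displacement within
    +/- search_range whose region in ref_frame best matches the current
    coding unit, by sum of absolute differences (SAD)."""
    x, y, w, h = rect
    block = cur_frame[y : y + h, x : x + w].astype(np.int32)
    H, W = ref_frame.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            if y + dy < 0 or x + dx < 0 or y + dy + h > H or x + dx + w > W:
                continue  # candidate region falls outside the frame
            cand = ref_frame[y + dy : y + dy + h, x + dx : x + dx + w].astype(np.int32)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```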
  • the generating of the encoded frame in operation S920 may include searching for a motion vector corresponding to the current coding unit and, when the pixel values of the pixel region corresponding to the searched motion vector and the pixel values at the position corresponding to the current coding unit satisfy preset information, adding to the additional information an indication to use the motion vector of a coding unit adjacent to the current coding unit.
  • in generating the encoded frame, when the motion vectors for at least two coding units are the same, position information for the at least two coding units and the single shared motion vector may be added to the additional information, as in the sketch below.
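A hedged encoder-side sketch of this deduplication; build_additional_info is a hypothetical helper that stores each shared motion vector once alongside the positions of the coding units that use it.

```python
from collections import defaultdict

def build_additional_info(motion_vectors):
    """Group coding units by motion vector so each vector shared by two or
    more units is stored once with their positions; others stay individual."""
    groups = defaultdict(list)
    for cu_index, mv in enumerate(motion_vectors):
        groups[mv].append(cu_index)
    shared = [
        {"mv": mv, "positions": idxs}
        for mv, idxs in groups.items() if len(idxs) >= 2
    ]
    single = {idxs[0]: mv for mv, idxs in groups.items() if len(idxs) == 1}
    return {"shared": shared, "single": single}
```

When large regions move uniformly, storing one vector plus a position list keeps the additional information small.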
  • the processing device may include a memory and a processor.
  • decoding is performed in units of coding units with respect to encoded frames constituting encoded video content (S1010).
  • a plurality of coding units on which decoding has been performed are arranged in a predetermined direction to generate a decoded frame (S1020).
  • the encoded video content may include, for each encoded frame, additional information including the motion vector obtained during encoding for each of the plurality of coding units constituting that encoded frame.
  • in the decoding operation, when decoding of the current coding unit is impossible, decoding may be performed by obtaining the motion vector for the current coding unit from the additional information and replacing the current coding unit with the pixel region corresponding to the obtained motion vector.
  • the additional information may include motion vectors for all of the plurality of coding units.
  • the additional information may be included in a reserved area of the header corresponding to the encoded frame.
  • the additional information may include identification information of at least one frame containing the pixel region corresponding to the motion vector, and the performing of decoding (S1010) may include, when decoding of the current coding unit is impossible, obtaining from the additional information the motion vector for the current coding unit and the identification information, and performing decoding by replacing the current coding unit with the pixel region that the motion vector indicates in the frame identified by the acquired identification information.
  • the additional information may include information indicating that the motion vector of a coding unit adjacent to the current coding unit is to be used, and the performing of decoding (S1010) may include, when decoding of the current coding unit is impossible, obtaining this information from the additional information and performing decoding by replacing the current coding unit with the pixel region corresponding to the motion vector of the adjacent coding unit.
  • the additional information may include position information for at least two coding units that share the same motion vector, together with that motion vector, and the performing of decoding (S1010) may include, when decoding of the current coding unit is impossible, performing decoding based on the position information by replacing the current coding unit with the pixel region corresponding to the shared motion vector.
  • the additional information may include information describing a regularity among the motion vectors of all of the plurality of coding units, and the performing of decoding may include, when decoding of the current coding unit is impossible, deriving the motion vector corresponding to the current coding unit from the regularity information and performing decoding by replacing the current coding unit with the pixel region corresponding to the derived motion vector. A sketch of the overall decode loop with this fallback follows below.
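Tying the steps together, here is a minimal, assumption-heavy sketch of the decoding flow: the entropy decoder is passed in as a callable, frames are grayscale NumPy arrays, CorruptedUnitError is a hypothetical signal for an undecodable unit, and the concealment region is assumed to lie inside the previous frame.

```python
import numpy as np

class CorruptedUnitError(Exception):
    """Raised by the (hypothetical) unit decoder when a unit is unusable."""

def decode_frame(coding_units, additional_info, prev_frame, decode_unit):
    """Decode each coding unit (S1010); on failure, conceal it using the
    motion vector from the additional information; the per-unit writes
    assemble the decoded frame in place (S1020)."""
    out = prev_frame.copy()
    for rect, data in coding_units:
        x, y, w, h = rect
        try:
            out[y : y + h, x : x + w] = decode_unit(data)
        except CorruptedUnitError:
            dx, dy = additional_info[rect]  # motion vector stored per unit
            out[y : y + h, x : x + w] = prev_frame[
                y + dy : y + dy + h, x + dx : x + dx + w
            ]
    return out
```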
  • as described above, the processing devices may add the motion vector of each of the plurality of coding units constituting a frame to the encoded frame and use it to improve reconstruction efficiency when an error occurs.
  • the methods according to various embodiments of the present disclosure may be programmed and stored in various storage media. Accordingly, the methods according to the various embodiments of the present disclosure may be implemented in various types of electronic devices that execute the programs stored in such a storage medium.
  • a non-transitory computer readable medium may be provided in which a program for sequentially performing the above-described control method is stored.
  • the non-transitory readable medium refers to a medium that stores data semi-permanently and is readable by a device, rather than a medium that stores data for a short time, such as a register, a cache, or a memory.
  • specifically, the above-described program may be stored and provided in a non-transitory readable medium such as a CD, a DVD, a hard disk, a Blu-ray disc, a USB memory, a memory card, or a ROM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A processing device is disclosed. The processing device comprises: a memory in which video content is stored; and a processor which divides a frame forming the video content into a plurality of coding units and encodes each of the plurality of coding units so as to generate an encoded frame, wherein the processor may add, to the encoded frame, additional information comprising a motion vector obtained during encoding for each of the plurality of coding units.
PCT/KR2018/002048 2017-05-31 2018-02-20 Processing devices and control methods therefor WO2018221838A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/495,469 US20200099950A1 (en) 2017-05-31 2018-02-20 Processing devices and control methods therefor
CN201880030707.3A CN110612725B (zh) 2017-05-31 2018-02-20 Processing device and control method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0067744 2017-05-31
KR1020170067744A KR102379196B1 (ko) 2017-05-31 Processing devices and control methods therefor

Publications (1)

Publication Number Publication Date
WO2018221838A1 true WO2018221838A1 (fr) 2018-12-06

Family

ID=64456445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/002048 WO2018221838A1 (fr) Processing devices and control methods therefor

Country Status (4)

Country Link
US (1) US20200099950A1 (fr)
KR (1) KR102379196B1 (fr)
CN (1) CN110612725B (fr)
WO (1) WO2018221838A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100860689B1 * 2007-02-05 2008-09-26 Samsung Electronics Co., Ltd. Method and apparatus for concealing frame loss in video decoding
KR101082581B1 * 2010-01-29 2011-11-10 Chungbuk National University Industry-Academic Cooperation Foundation Apparatus and method for error concealment in an H.264/AVC decoder
KR101144283B1 * 2010-07-27 2012-05-11 Chung-Ang University Industry-Academic Cooperation Foundation Apparatus and method for concealing errors contained in decoded video
JP2012105179A * 2010-11-12 2012-05-31 Mitsubishi Electric Corp Image decoding device
KR101217627B1 * 2006-02-02 2013-01-02 Samsung Electronics Co., Ltd. Block-based motion estimation method and apparatus

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5270813A (en) * 1992-07-02 1993-12-14 At&T Bell Laboratories Spatially scalable video coding facilitating the derivation of variable-resolution images
JP2001298728A (ja) * 2000-04-12 2001-10-26 Meidensha Corp Remote monitoring system and image coding processing method
JP2006513633A (ja) * 2003-01-10 2006-04-20 Thomson Licensing Decoder apparatus and method for smoothing artifacts created during error concealment
US7646815B2 (en) * 2003-07-15 2010-01-12 Lsi Corporation Intra estimation chroma mode 0 sub-block dependent prediction
US7616692B2 (en) * 2003-09-07 2009-11-10 Microsoft Corporation Hybrid motion vector prediction for interlaced forward-predicted fields
US8938009B2 (en) * 2007-10-12 2015-01-20 Qualcomm Incorporated Layered encoded bitstream structure
BRPI0818444A2 (pt) * 2007-10-12 2016-10-11 Qualcomm Inc Adaptive coding of video block header information
US20130301734A1 (en) * 2011-01-12 2013-11-14 Canon Kabushiki Kaisha Video encoding and decoding with low complexity
MX2013008691A (es) * 2011-02-10 2013-08-21 Panasonic Corp Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
CN102883163B (zh) * 2012-10-08 2014-05-28 Huawei Technologies Co., Ltd. Method and device for establishing a motion vector list for motion vector prediction

Also Published As

Publication number Publication date
CN110612725A (zh) 2019-12-24
US20200099950A1 (en) 2020-03-26
KR102379196B1 (ko) 2022-03-28
KR20180131123A (ko) 2018-12-10
CN110612725B (zh) 2023-01-17

Similar Documents

Publication Publication Date Title
WO2020036417A1 Inter prediction method based on a history-based motion vector, and device therefor
WO2017069419A1 Intra prediction method and apparatus in a video coding system
WO2016204360A1 Method and device for illumination-compensation-based block prediction in an image coding system
WO2017057953A1 Method and device for coding a residual signal in a video coding system
WO2020017840A1 Method and device for performing inter prediction on the basis of DMVR
WO2017043766A1 Video encoding and decoding method and device
WO2020071873A1 Intra-prediction-based video coding method using an MPM list, and device therefor
WO2013109093A1 Image encoding/decoding method and apparatus
WO2017086765A2 Method and apparatus for entropy encoding and decoding of a video signal
WO2011019234A2 Method and apparatus for encoding and decoding an image by using a large transform unit
WO2017069590A1 Modeling-based image decoding method and device in an image coding system
WO2011126277A2 Method and apparatus for low-complexity entropy encoding/decoding
WO2014163249A1 Method and apparatus for processing video
WO2013109039A1 Image encoding/decoding method and apparatus using weighted prediction
WO2020091213A1 Intra prediction method and apparatus in an image coding system
WO2012044124A2 Method for encoding and decoding images and encoding and decoding apparatus using same
EP2556671A2 Method and apparatus for low-complexity entropy encoding/decoding
WO2017048008A1 Inter prediction method and apparatus in a video coding system
WO2017043769A1 Encoding device, decoding device, and corresponding encoding and decoding methods
WO2016104854A1 Encoding method and apparatus, and decoding method and apparatus
WO2019059736A1 Image encoding device, image decoding device, image encoding method, and image decoding method
WO2020009390A1 Image processing method and device based on inter prediction in an image coding system
WO2017052272A1 Method and apparatus for intra prediction in a video coding system
WO2020005002A1 Method and device for deriving a template area according to inter prediction in an image coding system
WO2020009427A1 Method and apparatus for reordering a template-based candidate list in inter prediction of an image coding system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18808810

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18808810

Country of ref document: EP

Kind code of ref document: A1