WO2016040116A1 - Perceptual optimization for model-based video coding - Google Patents

Perceptual optimization for model-based video coding

Info

Publication number
WO2016040116A1
WO2016040116A1 (PCT/US2015/048353)
Authority
WO
WIPO (PCT)
Prior art keywords
block
blocks
frame
tcsf
motion vector
Prior art date
Application number
PCT/US2015/048353
Other languages
English (en)
Inventor
Nigel Lee
Sangseok Park
Myo Tun
Dane P. Kottke
Jeyun Lee
Christopher Weed
Original Assignee
Euclid Discoveries, Llc
Priority date
Filing date
Publication date
Priority claimed from US14/532,947 external-priority patent/US9621917B2/en
Application filed by Euclid Discoveries, Llc filed Critical Euclid Discoveries, Llc
Priority to CN201580049004.1A priority Critical patent/CN106688232A/zh
Priority to JP2017513750A priority patent/JP6698077B2/ja
Priority to CA2960617A priority patent/CA2960617A1/fr
Priority to EP15770689.6A priority patent/EP3175618A1/fr
Publication of WO2016040116A1 publication Critical patent/WO2016040116A1/fr

Classifications

    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/124: Quantisation
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/527: Global motion vector estimation

Definitions

  • Video compression can be considered the process of representing digital video data in a form that uses fewer bits when stored or transmitted.
  • Video encoding can achieve compression by exploiting redundancies in the video data, whether spatial, temporal, or color-space.
  • Video compression processes typically segment the video data into portions, such as groups of frames and groups of pels, to identify areas of redundancy within the video that can be represented with fewer bits than required by the original video data. When these redundancies in the data are exploited, greater compression can be achieved.
  • An encoder can be used to transform the video data into an encoded format, while a decoder can be used to transform encoded video back into a form comparable to the original video data.
  • the implementation of the encoder/decoder is referred to as a codec.
  • Standard encoders divide a given video frame into non-overlapping coding units or macroblocks (rectangular regions of contiguous pels) for encoding.
  • the macroblocks (herein referred to more generally as “input blocks” or “data blocks”) are typically processed in a traversal order of left to right and top to bottom in a video frame. Compression can be achieved when input blocks are predicted and encoded using previously-coded data.
  • the process of encoding input blocks using spatially neighboring samples of previously-coded blocks within the same frame is referred to as intra-prediction. Intra-prediction attempts to exploit spatial redundancies in the data.
  • The encoding of input blocks using similar regions from previously-coded frames, found using a motion estimation process, is referred to as inter-prediction.
  • Inter-prediction attempts to exploit temporal redundancies in the data.
  • the motion estimation process can generate a motion vector that specifies, for example, the location of a matching region in a reference frame relative to an input block that is being encoded.
  • Most motion estimation processes consist of two main steps: initial motion estimation, which provides a first, rough estimate of the motion vector (and corresponding temporal prediction) for a given input block, and fine motion estimation, which performs a local search in the neighborhood of the initial estimate to determine a more precise estimate of the motion vector (and corresponding prediction) for that input block.
  • the encoder may measure the difference between the data to be encoded and the prediction to generate a residual.
  • the residual can provide the difference between a predicted block and the original input block.
  • the predictions, motion vectors (for inter-prediction), residuals, and related data can be combined with other processes such as a spatial transform, a quantizer, an entropy encoder, and a loop filter to create an efficient encoding of the video data.
  • the residual that has been quantized and transformed can be processed and added back to the prediction, assembled into a decoded frame, and stored in a framestore. Details of such encoding techniques for video will be familiar to a person skilled in the art.
  • MPEG-2 and H.264 are two codec standards for video compression that achieve high quality video representation at relatively low bitrates.
  • the basic coding units for MPEG-2 and H.264 are 16x16 macroblocks.
  • H.264 is the most recent widely-accepted standard in video compression and is generally thought to be twice as efficient as MPEG-2 at compressing video data.
  • the basic MPEG standard defines three types of frames (or pictures), based on how the input blocks in the frame are encoded.
  • An I-frame (intra-coded picture) is encoded using only data from within the frame itself.
  • A P-frame (predicted picture) is encoded via forward prediction, using data from previously-decoded I-frames or P-frames, also known as reference frames.
  • P-frames can contain either intra blocks or (forward-)predicted blocks.
  • a B-frame (bi-predicted picture) is encoded via bi-directional prediction, using data from both previous and subsequent frames.
  • B-frames can contain intra, (forward-)predicted, or bi-predicted blocks.
  • a particular set of reference frames is termed a Group of Pictures (GOP).
  • the GOP contains only the decoded pels within each reference frame and does not include information as to how the input blocks or frames themselves were originally encoded (I- frame, B-frame, or P-frame).
  • Older video compression standards such as MPEG-2 use one reference frame (in the past) to predict P-frames and two reference frames (one past, one future) to predict B-frames.
  • more recent compression standards such as H.264 and HEVC (High Efficiency Video Coding) allow the use of multiple reference frames for P- frame and B-frame prediction. While reference frames are typically temporally adjacent to the current frame, the standards also allow reference frames that are not temporally adjacent.
  • BBMEC: block-based motion estimation and compensation
  • the simplest form of the BBMEC process initializes the motion estimation using a (0, 0) motion vector, meaning that the initial estimate of a target block is the co-located block in the reference frame. Fine motion estimation is then performed by searching in a local neighborhood for the region that best matches (i.e., has lowest error in relation to) the target block.
  • the local search may be performed by exhaustive query of the local neighborhood (termed here full block search) or by any one of several "fast search" methods, such as a diamond or hexagonal search.
  • the EPZS method considers a set of motion vector candidates for the initial estimate of a target block, based on the motion vectors of neighboring blocks that have already been encoded, as well as the motion vectors of the co-located block (and neighbors) in the previous reference frame.
  • the EPZS method hypothesizes that the video's motion vector field has some spatial and temporal redundancy, so it is logical to initialize motion estimation for a target block with motion vectors of neighboring blocks, or with motion vectors from nearby blocks in already-encoded frames.
  • the EPZS method narrows the set via approximate rate-distortion analysis, after which fine motion estimation is performed.
  • the encoder may generate multiple inter-predictions to choose from.
  • the predictions may result from multiple prediction processes (e.g., BBMEC, EPZS, or model-based schemes).
  • the predictions may also differ based on the prediction processes (e.g., BBMEC, EPZS, or model-based schemes).
  • The selection of the best prediction for a given target block is usually accomplished through rate-distortion optimization, where the best prediction is the one that minimizes the rate-distortion metric D + λR, in which the distortion D measures the error between the target block and the prediction, the rate R quantifies the cost (in bits) to encode the prediction, and λ is a scalar weighting factor.
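  • As an illustration of this selection rule, the following Python sketch (the candidate structure, values, and λ are hypothetical, not taken from the patent) picks the prediction with the lowest D + λR score:

```python
from dataclasses import dataclass

@dataclass
class PredictionCandidate:
    distortion: float  # D: error between the target block and the prediction (e.g., SSE)
    rate_bits: float   # R: bits needed to encode the prediction (motion vector + residual)

def best_prediction(candidates, lam):
    """Pick the candidate that minimizes the rate-distortion metric D + lambda * R."""
    return min(candidates, key=lambda c: c.distortion + lam * c.rate_bits)

# Three hypothetical candidates for one target block; lam is the scalar weighting factor.
candidates = [PredictionCandidate(1200.0, 34), PredictionCandidate(950.0, 60), PredictionCandidate(1400.0, 20)]
print(best_prediction(candidates, lam=5.0))  # -> PredictionCandidate(distortion=950.0, rate_bits=60)
```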
  • model-based compression schemes have also been proposed to avoid the limitations of BBMEC prediction.
  • These model-based compression schemes (the most well-known of which is perhaps the MPEG-4 Part 2 standard) rely on the detection and tracking of objects or features (defined generally as "components of interest") in the video and a method for encoding those features/objects separately from the rest of the video frame.
  • Feature/object detection/tracking occurs independently of the spatial search in standard motion estimation processes, so feature/object tracks can give rise to a different set of predictions than achievable through standard motion estimation.
  • Such feature/object-based model-based compression schemes suffer from the challenges associated with segmenting video frames into object vs. non-object (or feature vs. non-feature) regions.
  • A second challenge is that, because objects can be of arbitrary size, their shapes need to be encoded in addition to their texture (color content).
  • a third challenge is that not all video content is composed of objects or features, so there needs to be a fallback encoding scheme when objects/features are not present.
  • Co-pending U.S. Patent Application No. 61/950,784, filed November 4, 2014 presents a model-based compression scheme that avoids the segmentation challenge noted above.
  • The continuous block tracker (CBT) of the '784 application does not detect objects and features, eliminating the need to segment objects and features from the non-object/non-feature background. Instead the CBT tracks all input blocks ("macroblocks") in the video frame as if they are regions of interest by combining frame-to-frame motion estimates into continuous tracks. In so doing, the CBT models motion in the video, achieving the benefits of higher-level modeling of the data to improve inter-prediction while avoiding the challenges of segmentation.
  • HVS: human visual system
  • Importance maps take on values for each input or data block in a video frame, and the importance map values for any given block may change from frame to frame throughout the video.
  • importance maps are defined such that higher values indicate more important data blocks.
  • TCSF: temporal contrast sensitivity function
  • SSF: spatial contrast sensitivity function
  • CSF: contrast sensitivity function
  • temporal frequency is computed by using structural similarity (SSIM) in the colorspace domain to approximate wavelength and the encoder's motion vectors to approximate velocity.
  • Temporal frequency then serves as an input to the temporal contrast sensitivity function (TCSF), which can be computed for every data block to generate a temporal importance map that indicates which regions of the video frame are most noticeable to human observers.
  • SSIM: structural similarity
  • information about the relative quality of the motion vectors generated by the encoder can be computed at different points in the encoding process and then used to generate a true motion vector map that outputs, for each target block, how reliable its motion vector is.
  • the true motion vector map which takes on values of 0 or 1, can then be used as a mask to refine the TCSF, such that the TCSF is not used for target blocks whose motion vectors are not accurate (i.e., the true motion vector map is 0).
  • spatial complexity maps can be calculated from metrics such as block variance, block luminance, and edge detection to determine the spatial contrast of a given target block relative to its neighbors.
  • information from the SCMs can be combined with the TCSF to obtain a composite, unified importance map. The combination of spatial and temporal contrast information in the unified importance map effectively balances both aspects of human visual response.
  • The unified importance map (including information from both the TCSF and SCM) is used to weight the distortion part of the standard rate-distortion metric, D + λR. This results in a modified rate-distortion optimization that is weighted toward solutions that fit the relative perceptual importance of each target block, either low-distortion solutions when the importance map is closer to its maximum or low-rate solutions when the importance map is closer to its minimum.
  • either the TCSF or SCM may be used individually for the above purpose.
  • the TCSF (with true motion vector refinement) and SCM can be used to modify the block-level quantization of the encoder.
  • In target blocks where the importance maps take on high values, the quantization parameter is reduced relative to the frame quantization parameter, resulting in higher quality for those blocks.
  • In target blocks where the importance maps take on low values, the quantization parameter is increased relative to the frame quantization parameter, resulting in lower quality for those blocks.
  • either the TCSF or SCM may be used individually for the above purpose.
  • the TCSF can be computed for any encoder that incorporates inter- prediction and generates motion vectors (used by the TCSF to approximate the velocity of the content in the video)
  • application of the TCSF to video compression is most effective within a model-based compression framework such as the continuous block tracker (CBT) of the '784 Application that provides accurate determination of which motion vectors are true motion vectors.
  • CBT: continuous block tracker
  • most standard video encoders compute motion vectors that optimize compression efficiency rather than reflecting true motion.
  • the CBT provides both motion vectors suitable for high compression efficiency and modeling information that maximizes the effectiveness of the TCSF.
  • Some example inventive embodiments are structured so that the resulting bitstream is compliant with any video compression standard - including, but not limited to, MPEG-2, H.264, and HEVC - that employs block-based motion estimation followed by transform, quantization, and entropy encoding of residual signals.
  • the present invention can also be applied to non-standard video encoders that are not block-based, as long as the encoder incorporates inter-prediction and generates motion vectors.
  • Some example embodiments may include methods and systems of encoding video data, as well as any codecs (encoders/decoders) for implementing the same.
  • a plurality of video frames having non-overlapping target blocks may be processed by an encoder.
  • the plurality of video frames may be encoded by the encoder using importance maps, such that the importance maps modify the quantization, as well as the encoding quality of each target block to be encoded in each video frame.
  • the importance maps may be formed using at least one of: temporal information or spatial information. If both temporal and spatial information are used, the importance map is considered a unified importance map.
  • The importance maps may be configured so that they indicate which parts of a video frame in the plurality of video frames are the most noticeable to human perception. Specifically, in target blocks where the importance maps take on high values, the block quantization parameter (QP) is reduced relative to the frame quantization parameter QPframe, resulting in higher quality for those blocks; and in target blocks where the importance maps take on low values, the block quantization parameter is increased relative to QPframe, resulting in lower quality for those blocks.
  • QP: block quantization parameter
  • The spatial information may be provided by a rule-based spatial complexity map (SCM), in which the initial step is to determine which target blocks in the frame have higher variance than the average block variance in the frame, varframe.
  • SCM: spatial complexity map
  • For such blocks, a QP value may be assigned that is higher than the frame quantization parameter QPframe, with the block QP assignment QPblock scaled linearly between QPframe and the maximum quantization parameter QPmax, based on how much higher the block variance varblock is than varframe.
  • the temporal information may preferably be provided by a temporal contrast sensitivity function (TCSF) that indicates which target blocks are most temporally noticeable to a human observer and a true motion vector map (TMVM) that indicates which target blocks correspond to foreground data.
  • TMVM: true motion vector map
  • A high-variance block may have its block QP assignment QPblock further refined by the TCSF and TMVM, such that if the TMVM identifies a target block as foreground data and the TCSF has a log contrast sensitivity value less than 0.5 for that block, QPblock is raised by 2.
  • The SCM may include luminance masking, in which target blocks that are either very bright (luminance above 170) or very dark (luminance below 60) have their block quantization parameters QPblock adjusted back to QPmax.
  • The SCM may include dynamic determination of QPmax based on the quality level of the encoded video, where quality is measured using an average structural similarity (SSIM) calculation of target blocks in Intra (I) frames, together with the average block variance varframe of such frames, such that when the measured quality is low, the value of QPmax is lowered closer to QPframe.
  • Very-low-variance blocks may be assigned fixed, low QP values QPblock to ensure high-quality encoding in those regions, such that the lower the block variance, the lower the value of QPblock (and the higher the quality).
  • The assignment of low QP values QPblock for very-low-variance blocks may be fixed first for I frames and then determined for P and B frames using the ipratio and pbratio parameters. Blocks that are low-variance but do not qualify as very-low-variance are examined to determine whether quality enhancement is needed for those blocks. For such blocks, an initial estimate of the block QP, QPblock, is calculated by averaging the QP values of neighboring, already-encoded blocks to the left, top-left, right, and top-right of the current block. An estimate of the SSIM of the current block, SSIMest, may be calculated from the SSIM values of the same neighboring, already-encoded blocks. The value of QPblock may be lowered by 2 if SSIMest is lower than 0.9.
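  • A minimal sketch of this neighbor-based quality enhancement, assuming the neighbor QP and SSIM values are already available (function and parameter names are illustrative):

```python
def refine_low_variance_block_qp(neighbor_qps, neighbor_ssims, ssim_threshold=0.9, qp_drop=2):
    """Initial QPblock is the average QP of already-encoded neighboring blocks; if the
    SSIM estimated from the same neighbors is below the threshold, lower QPblock by 2."""
    qp_block = sum(neighbor_qps) / len(neighbor_qps)      # initial estimate of the block QP
    ssim_est = sum(neighbor_ssims) / len(neighbor_ssims)  # estimated SSIM of the current block
    if ssim_est < ssim_threshold:                         # quality looks low: encode this block finer
        qp_block -= qp_drop
    return round(qp_block)

# Neighbor values are illustrative.
print(refine_low_variance_block_qp([30, 31, 29, 30], [0.87, 0.90, 0.88, 0.86]))  # -> 28
```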
  • the quality enhancement is only applied to those blocks that are identified as foreground data by the TMVM and for which the TCSF has log contrast sensitivity value greater than 0.8.
  • the TMVM may be set to 1 only for foreground data.
  • the temporal frequency of the TCSF is computed by using SSIM in the colorspace domain between the target block and its reference block to approximate wavelength and by using motion vector magnitudes and the framerate to approximate velocity.
  • the TCSF may be calculated over multiple frames, such that the TCSF for the current frame is a weighted average of the TCSF maps over recent frames, with more recent frames receiving higher weighting.
  • the foreground data may be identified by computing the difference between the encoder motion vector for a given target block and the global motion vector for that block, such that blocks with sufficiently large differences are determined to be foreground data.
  • the encoder motion vector may be subtracted from the global motion vector to obtain a differential motion vector, and it is the magnitude of the differential motion vector that is used in calculating the temporal frequency of the TCSF.
  • FIG. 1 is a block diagram depicting a standard encoder configuration.
  • FIG. 2 is a block diagram depicting the steps involved in inter-prediction for general encoders.
  • FIG. 3 is a block diagram depicting the steps involved in initial motion estimation via continuous block tracking.
  • FIG. 4 is a block diagram depicting unified motion estimation via a combination of continuous block tracking and enhanced predictive zonal search.
  • FIG. 5 is a plot depicting a recent measurement of the temporal contrast sensitivity function by Wooten et al [2010].
  • FIG. 6 is a block diagram depicting the calculation of structural similarity (SSIM) in CIE 1976 Lab colorspace, according to an embodiment of the invention.
  • FIG. 7 is a block diagram depicting the general application of perceptual statistics to improve the perceptual quality of video encodings, according to an embodiment of the invention.
  • FIG. 8 A is a block diagram depicting the use of perceptual statistics to modify inter-prediction via continuous block tracking to improve the perceptual quality of video encodings, according to an embodiment of the invention.
  • FIG. 8B is a block diagram depicting an example process of encoding using importance maps to modify block quantization.
  • FIG. 9A is a schematic diagram of a computer network environment in which embodiments are deployed.
  • FIG. 9B is a block diagram of the computer nodes in the network of FIG. 9A.

DETAILED DESCRIPTION
  • the invention can be applied to various standard encodings.
  • the terms “conventional” and “standard” (sometimes used together with “compression,” “codecs,” “encodings,” or “encoders”) can refer to MPEG-2, MPEG-4, H.264, or HEVC.
  • “Input blocks” are referred to without loss of generality as the basic coding unit of the encoder and may also sometimes be referred to interchangeably as “data blocks” or “macroblocks.”
  • the current input block being encoded is referred to as a "target block.”
  • the encoding process may convert video data into a compressed, or encoded, format.
  • the decompression or decoding process may convert compressed video back into an uncompressed, or raw, format.
  • the video compression and decompression processes may be implemented as an encoder/decoder pair commonly referred to as a codec.
  • FIG. 1 is a block diagram of a standard transform-based, motion-compensated encoder.
  • the encoder in FIG. 1 may be implemented in a software or hardware environment, or combination thereof.
  • the encoder may include any combination of components, including, but not limited to, a motion estimation module 15 that feeds into an inter- prediction module 20, an intra-prediction module 30, a transform and quantization module 60, an inverse transform and quantization module 70, an in- loop filter 80, a frame store 85, and an entropy encoding module 90.
  • the purpose of the prediction modules is to generate the best predicted signal 40 for the input block.
  • the predicted signal 40 is subtracted from the input block 10 to create a prediction residual 50 that undergoes transform and quantization 60.
  • the quantized coefficients 65 of the residual then get passed to the entropy encoding module 90 for encoding into the compressed bitstream.
  • the quantized coefficients 65 also pass through the inverse transform and quantization module 70, and the resulting signal (an approximation of the prediction residual) gets added back to the predicted signal 40 to create a reconstructed signal 75 for the input block 10.
  • the reconstructed signal 75 may be passed through an in-loop filter 80 such as a deblocking filter, and the (possibly filtered) reconstructed signal becomes part of the frame store 85 that aids prediction of future input blocks.
  • the function of each of the components of the encoder shown in FIG. 1 is well known to one of ordinary skill in the art.
  • FIG. 2 depicts the steps in standard inter-prediction (20 in FIG. 1), where the goal is to encode new data using previously-decoded data from earlier frames, taking advantage of temporal redundancy in the data.
  • In inter-prediction, an input block 10 from the frame currently being encoded (also called the target frame) is "predicted" from a region of the same size within a previously-decoded reference frame, stored in the frame store 85 from FIG. 1.
  • the two-component vector indicating the (x, y) displacement between the location of the input block in the frame being encoded and the location of its matching region in the reference frame is termed a motion vector.
  • the process of motion estimation thus involves determining the motion vector that best links an input block to be encoded with its matching region in a reference frame.
  • Most inter-prediction processes begin with initial motion estimation (110 in FIG. 2), which generates one or more rough estimates of "good" motion vectors 115 for a given input block. This is followed by an optional motion vector candidate filtering step 120, where multiple motion vector candidates can be reduced to a single candidate using an approximate rate-distortion metric.
  • The best motion vector candidate (prediction) is chosen as the one that minimizes the rate-distortion metric D + λR, where the distortion D measures the error between the input block and its matching region, the rate R quantifies the cost (in bits) to encode the prediction, and λ is a scalar weighting factor.
  • the actual rate cost contains two components: texture bits, the number of bits needed to encode the quantized transform coefficients of the residual signal (the input block minus the prediction), and motion vector bits, the number of bits needed to encode the motion vector.
  • motion vectors are usually encoded differentially, relative to already-encoded motion vectors.
  • texture bits are not available, so the rate portion of the rate-distortion metric is approximated by the motion vector bits, which in turn are approximated as a motion vector penalty factor dependent on the magnitude of the differential motion vector.
  • the approximate rate-distortion metric is used to select either a single "best" initial motion vector or a smaller set of "best" initial motion vectors 125.
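  • A sketch of this approximate metric (the penalty function and the value of λ are assumptions; real encoders use codec-specific bit-cost tables):

```python
def mv_penalty_bits(dmv):
    """Hypothetical motion-vector penalty: larger differential motion vectors
    (candidate MV minus its predictor) cost more bits to encode."""
    return abs(dmv[0]) + abs(dmv[1])

def approx_rd_cost(sad, candidate_mv, predicted_mv, lam):
    """Approximate rate-distortion cost: distortion from SAD, rate approximated
    by the motion-vector penalty (texture bits are not yet available)."""
    dmv = (candidate_mv[0] - predicted_mv[0], candidate_mv[1] - predicted_mv[1])
    return sad + lam * mv_penalty_bits(dmv)

def select_initial_mv(candidates, predicted_mv, lam=4.0):
    """candidates: list of (sad, motion_vector) pairs; return the best initial MV."""
    best = min(candidates, key=lambda c: approx_rd_cost(c[0], c[1], predicted_mv, lam))
    return best[1]
```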
  • the initial motion vectors 125 are then refined with fine motion estimation 130, which performs a local search in the neighborhood of each initial estimate to determine a more precise estimate of the motion vector (and corresponding prediction) for the input block.
  • the local search is usually followed by subpixel refinement, in which integer-valued motion vectors are refined to half- pixel or quarter-pixel precision via interpolation.
  • the fine motion estimation block 130 produces a set of refined motion vectors 135.
  • a mode generation module 140 generates a set of candidate predictions 145 based on the possible encoding modes of the encoder. These modes vary depending on the codec. Different encoding modes may account for (but are not limited to) interlaced vs. progressive (field vs. frame) motion estimation, direction of the reference frame (forward-predicted, backward-predicted, bi-predicted), index of the reference frame (for codecs such as H.264 and HEVC that allow multiple reference frames), inter-prediction vs. intra-prediction (certain scenarios allowing reversion to intra-prediction when no good inter-predictions exist), different quantization parameters, and various subpartitions of the input block.
  • the full set of prediction candidates 145 undergoes "final" rate-distortion analysis 150 to determine the best single candidate.
  • In the "final" rate-distortion analysis, a precise rate-distortion metric D + λR is used, computing the prediction error D for the distortion portion (usually calculated as sum of squared errors [SSE]) and the actual encoding bits R (from the entropy encoding 90 in FIG. 1) for the rate portion.
  • The final prediction 160 (or 40 in FIG. 1) is the one that has the lowest rate-distortion score D + λR among all the candidates, and this final prediction is passed to the subsequent steps of the encoder, along with its motion vector and other encoding parameters.
  • FIG. 3 depicts how initial motion estimation can be performed during inter- prediction via continuous block tracking (CBT).
  • CBT is useful when there is a gap of greater than one frame between the target frame and the reference frame from which temporal predictions are derived.
  • In a typical GOP structure of IBBPBBP, consisting of intra-predicted I-frames, bi-predicted B-frames, and forward-predicted P-frames, reference frames may be multiple frames away from the frame being encoded.
  • In codecs such as H.264 and HEVC, which allow multiple reference frames for each frame to be encoded, the same GOP structure allows reference frames to be located six or more frames away from the current frame.
  • In some configurations, reference frames can be located even further from the target frame.
  • continuous tracking enables the encoder to capture motion in the data in a way that standard temporal prediction methods cannot, allowing CBT to produce superior temporal predictions.
  • the first step in CBT is to perform frame-to-frame tracking (210 in FIG. 3). For each input block 10 in a frame, motion vectors are calculated in both the backward direction to the previous frame in the frame buffer 205 and the forward direction to the next frame in the frame buffer.
  • frame-to-frame tracking operates on frames from the original source video, not reconstructed reference frames. This is advantageous because source video frames are not corrupted by quantization and other coding artifacts, so tracking based on source video frames more accurately represents the true motion field in the video.
  • Frame-to-frame tracking may be carried out using either conventional block-based motion estimation (BBME) or hierarchical motion estimation (HME).
  • The result of frame-to-frame tracking is a set of frame-to-frame motion vectors 215 that signify, for each input block in a frame, the best matching region in the most recent frame in the frame buffer 205, and, for each block of the most recent frame in the frame buffer 205, the best matching region in the current frame.
  • Continuous tracking 220 then aggregates available frame-to-frame tracking information to create continuous tracks across multiple reference frames for each input block. Details of how to perform continuous tracking are found in the '784 Application, which is incorporated by reference herein in its entirety.
  • The outputs of continuous tracking 220 are the continuous block tracking (CBT) motion vectors 225, which track all input blocks in the current frame being encoded to their matching regions in past reference frames.
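  • A simplified sketch of how frame-to-frame motion vectors might be chained into a continuous track; the '784 Application describes the actual aggregation method, and the grid-lookup scheme below is an assumption made for illustration:

```python
def continuous_track_vector(block_xy, frame_to_frame_mvs, block_size=16):
    """Chain per-frame backward motion vectors to follow a block across several
    frames, returning the aggregate (continuous-track) motion vector.

    frame_to_frame_mvs: one dict per frame gap (most recent first), mapping a
    block's top-left grid position to its backward frame-to-frame motion vector.
    """
    x, y = block_xy
    total_dx, total_dy = 0.0, 0.0
    for mv_field in frame_to_frame_mvs:
        # Use the vector of the grid block that currently contains the tracked point.
        key = (int(x) // block_size * block_size, int(y) // block_size * block_size)
        dx, dy = mv_field.get(key, (0.0, 0.0))
        total_dx, total_dy = total_dx + dx, total_dy + dy
        x, y = x + dx, y + dy
    return total_dx, total_dy
```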
  • the CBT motion vectors are the initial motion vectors (125 in FIG. 2) for the CBT, and they can be refined with fine motion estimation (130 in FIG. 2) as noted above.
  • FIG. 4 depicts how the CBT can be combined with the EPZS method to create a unified motion estimation process, according to an embodiment of the invention.
  • CBT generates its motion vectors through frame-to-frame tracking 210 and continuous tracking 220 for initial motion estimation 110, followed by local search and subpixel refinement 250 for fine motion estimation 130.
  • EPZS generates its initial motion vectors through a candidate generation module 230, followed by a candidate filtering module 240, with the filtering carried out via approximate rate-distortion analysis as detailed above. This is followed by fine motion estimation 130 via local search and subpixel refinement 260.
  • the resulting CBT motion vector 255 and EPZS motion vector 265 are both passed forward to the remaining inter-prediction steps (mode generation 140 and final rate-distortion analysis 150 in FIG. 2) to determine the overall "best" inter-prediction.
  • the CBT and EPZS motion vector candidates 255 and 265 in FIG. 4 may be supplemented by additional candidates, including (but not limited to) random motion vectors, the (0, 0) motion vector, and the so-called "median predictor.”
  • the random motion vector may have fine motion estimation 130 applied to it to find the best candidate in its local neighborhood.
  • The (0, 0) motion vector is one of the initial candidates in EPZS, but it is not always selected after EPZS candidate filtering (240 in FIG. 4), and even if it is selected after candidate filtering, fine motion estimation 130 may result in a motion vector other than (0, 0).
  • the "median predictor” is also one of the initial candidates in EPZS, but it is also not always selected after EPZS candidate filtering (240 in FIG. 4).
  • the median predictor is defined as the median of the motion vectors previously calculated in the data blocks to the left, top, and top right of the data block currently being encoded. Explicitly including the median predictor (with no accompanying fine motion estimation) as a candidate for final rate-distortion analysis can be especially beneficial for encoding spatially homogeneous ("flat") regions of the video frame.
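  • For reference, a small sketch of the median predictor as defined above (component-wise median of the three neighboring motion vectors):

```python
def median_predictor(mv_left, mv_top, mv_top_right):
    """Component-wise median of the motion vectors of the blocks to the left,
    top, and top-right of the block currently being encoded."""
    xs = sorted(v[0] for v in (mv_left, mv_top, mv_top_right))
    ys = sorted(v[1] for v in (mv_left, mv_top, mv_top_right))
    return (xs[1], ys[1])  # the middle of three values is the median

print(median_predictor((2, 0), (5, -1), (3, 4)))  # -> (3, 0)
```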
  • five or more motion vector candidates may be passed forward to the remaining inter-prediction steps (mode generation 140 and final rate-distortion analysis 150 in FIG. 2), including (but not limited to) a CBT-derived motion vector, an EPZS-derived motion vector, a motion vector derived from a random motion vector, the (0, 0) motion vector, and the median predictor.
  • Perceptual statistics may be used to compute importance maps that indicate which regions of a video frame are important to the human visual system (HVS).
  • The velocity v of the content in a data block may be approximated as v = |MV| × framerate / N, where |MV| is the magnitude of the motion vector associated with the data block, framerate is the number of frames per second at which the video has been generated, and N is the number of frames between the reference frame pointed to by the motion vector and the current frame.
  • A suitable approximation for the wavelength λ can be derived from a computation of structural similarity (SSIM) [Wang, Z. et al., 2004, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. on Image Processing, 13(4):600-612], computed in CIE 1976 Lab colorspace [https://en.wikipedia.org/wiki/Lab_color_space].
  • In FIG. 6, SSIM is computed between a target block 300 (the current data block to be encoded) and the reference block 310 to which its motion vector points.
  • Because the video data processed by the encoder is usually in YUV colorspace, the next step is to convert both the target block (320) and the reference block (330) into CIE 1976 Lab space, using any of the methods commonly found in the literature.
  • The error ΔE (340) between the target block and the reference block in Lab space is computed as the CIE 1976 color difference, ΔE = sqrt((ΔL)² + (Δa)² + (Δb)²).
  • The SSIM (360) between the error ΔE and the zero matrix of the same dimension is then computed to serve as a measure of the colorspace variation of the data.
  • SSIM as originally defined takes on values between -1 and 1, with values of 1 indicating perfect similarity (no spatial distinction).
  • DSSIM = (1 − SSIM)/2, which takes on values between 0 and 1, where 0 corresponds to small wavelengths.
  • the TCSF value for that block can be determined from the curve fit (solid line) in FIG. 5.
  • The TCSF takes on values between 0 and 1.08 in log10 scale or between 1 and 11.97 on an absolute scale.
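  • A sketch of the per-block TCSF computation, assuming the standard relation frequency = velocity / wavelength; the curve fit of FIG. 5 is not reproduced in the text, so tcsf_curve below is only a placeholder:

```python
import math

def dssim(ssim):
    """Structural dissimilarity DSSIM = (1 - SSIM) / 2, used to approximate wavelength."""
    return (1.0 - ssim) / 2.0

def temporal_frequency(mv, framerate, n_frames, ssim_lab):
    """Velocity from the motion vector (v = |MV| * framerate / N); wavelength from
    the colorspace DSSIM; temporal frequency taken as velocity / wavelength."""
    velocity = math.hypot(mv[0], mv[1]) * framerate / n_frames
    wavelength = max(dssim(ssim_lab), 1e-6)   # guard against division by zero
    return velocity / wavelength

def tcsf_curve(freq):
    """Placeholder for the FIG. 5 curve fit (log contrast sensitivity, 0 to 1.08)."""
    return max(0.0, min(1.08, 1.08 - 0.02 * abs(freq - 8.0)))

block_tcsf = tcsf_curve(temporal_frequency((4, 3), framerate=30, n_frames=2, ssim_lab=0.98))
```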
  • the aggregate set of TCSF values over all the blocks in a frame forms an importance map, with high values indicating blocks that are perceptually important from a temporal contrast perspective and low values indicating blocks that are perceptually unimportant.
  • the values of the TCSF from recent frames may be averaged for each data block to prevent the TCSF-based importance map from fluctuating too much from frame to frame.
  • information about the relative quality of the motion vectors generated by the encoder can be computed at different points in the encoding process and then used to generate a true motion vector map (TMVM) that outputs, for each data block, how reliable its motion vector is.
  • the true motion vector map which takes on values of 0 or 1, can then be used as a mask to refine the TCSF, such that the TCSF is not used for data blocks whose motion vectors are not accurate (i.e., whose TMVM values are 0).
  • motion vector accuracy can be determined by estimating a global motion model for a given video frame, applying the motion model to each of the data blocks in the frame to determine a global motion vector for each data block, and then comparing the global motion vector with the encoder's motion vector for that data block.
  • the magnitude of the difference between the global motion vector and encoder motion vector for a given data block is used to identify that the data block is foreground data, meaning that the content in the data block is moving differently than the rest of the frame (the background).
  • the TMVM is set to 1 - and the TCSF is applied - only for foreground data.
  • The encoder motion vector is subtracted from the global motion vector to obtain a differential motion vector, and it is the magnitude of the differential motion vector (not the encoder motion vector) that is used to calculate the temporal frequency for the TCSF (see the expression above, substituting |DMV| for |MV|, where DMV is the differential motion vector).
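  • A minimal sketch of the foreground test and the differential motion vector (the threshold value and function names are assumptions):

```python
import math

def differential_mv(encoder_mv, global_mv):
    """DMV: global motion vector minus the encoder motion vector for the block."""
    return (global_mv[0] - encoder_mv[0], global_mv[1] - encoder_mv[1])

def is_foreground(encoder_mv, global_mv, threshold=2.0):
    """TMVM = 1 (foreground) when the block moves sufficiently differently from
    the frame's global (background) motion; the threshold is illustrative."""
    dmv = differential_mv(encoder_mv, global_mv)
    return math.hypot(dmv[0], dmv[1]) > threshold

# For foreground blocks, |DMV| (not |MV|) feeds the temporal-frequency expression above.
```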
  • motion vector symmetry may be used to refine the TMVM.
  • Motion vector symmetry [Bartels, C. and de Haan, G., 2009, "Temporal symmetry constraints in block matching," Proc. IEEE 13th Int'l. Symposium on Consumer Electronics, pp. 749-752], defined as the relative similarity of pairs of counterpart motion vectors when the temporal direction of the motion estimation is switched, is a measure of the quality of calculated motion vectors (the higher the symmetry, the better the motion vector quality).
  • the "symmetry error vector” is defined as the difference between the motion vector obtained through forward-direction motion estimation and the motion vector obtained through backward-direction motion estimation.
  • Low motion vector symmetry (a large symmetry error vector) is often an indicator of the presence of complex phenomena such as occlusions (one object moving in front of another, thus either covering or revealing the background object), motion of objects on or off the video frame, and illumination changes, all of which make it difficult to derive accurate motion vectors.
  • low symmetry is declared when the symmetry error vector is larger in magnitude than half the extent of the data block being encoded (e.g., larger in magnitude than an (8, 8) vector for a 16x16 macroblock).
  • low symmetry is declared when the symmetry error vector is larger in magnitude than a threshold based on motion vector statistics derived during the tracking process, such as the mean motion vector magnitude plus a multiple of the standard deviation of the motion vector magnitude in the current frame or some combination of recent frames.
  • data blocks whose motion vectors have low symmetry as defined above are automatically assigned a TMVM value of 0, while other data blocks retain their previous TMVM value from comparison of the global motion vector with the encoder motion vector.
  • Flat blocks may be detected, for example, using an edge detection process (where a flat block would be declared if no edges are detected in a data block) or by comparing the variance of a data block to a threshold (low variance less than the threshold would indicate a flat block).
  • block flatness may be used to modify the TMVM calculated as above. For example, a block may be reassigned a TMVM value of 0 if it is detected as a flat block.
  • the TMVM may be used as a mask to refine the TCSF, which depends on having reliable motion vectors. Since the TMVM has values of 0 or 1, block-by- block multiplication of the TMVM value for a block with the TCSF value for that block has the effect of masking the TCSF. For blocks where the TMVM value is 0, the TCSF is "turned off," since the motion vector the TCSF relies on for its calculation is unreliable. For blocks where the TMVM value is 1, the TCSF calculation is considered reliable and used with confidence in any of the ways described above.
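  • A sketch of the TMVM refinement and masking described above (the block-extent threshold follows the text; the flat-block test is assumed to be supplied by the caller):

```python
import math

def symmetry_error_vector(forward_mv, backward_mv):
    """Difference between the forward-direction and backward-direction motion estimates."""
    return (forward_mv[0] - backward_mv[0], forward_mv[1] - backward_mv[1])

def has_low_symmetry(forward_mv, backward_mv, block_size=16):
    """Low symmetry: the symmetry error vector is larger in magnitude than half the
    block extent (e.g., larger than an (8, 8) vector for a 16x16 macroblock)."""
    ex, ey = symmetry_error_vector(forward_mv, backward_mv)
    return math.hypot(ex, ey) > math.hypot(block_size / 2, block_size / 2)

def refine_tmvm(tmvm_value, forward_mv, backward_mv, is_flat_block):
    """Low-symmetry blocks and flat blocks are reassigned TMVM = 0; other blocks keep
    the value obtained from the global-motion comparison."""
    if has_low_symmetry(forward_mv, backward_mv) or is_flat_block:
        return 0
    return tmvm_value

def masked_tcsf(tcsf_value, tmvm_value):
    """Block-by-block multiplication: the TCSF is 'turned off' where TMVM is 0."""
    return tcsf_value * tmvm_value
```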
  • spatial contrast maps can be generated instead of, or in addition to, the temporal contrast map (the TCSF as described above).
  • In the present invention, simple metrics are used to measure spatial contrast, the opposite of which is termed here "spatial complexity."
  • Block variance, measured for both the luma and chroma components of the data, is used to measure the spatial complexity of a given input block. If an input block has high variance, it is considered spatially complex and less noticeable to the HVS, and thus it has low spatial contrast.
  • Block luminance, measured for the luma component of the data, is used to refine the variance measurement of spatial complexity. If an input block has low variance (low spatial complexity, high spatial contrast) but is either very bright or very dark, the block is automatically considered to have low spatial contrast, overriding its previously-measured high spatial contrast. The reason for this is that very dark and very bright regions are not noticeable to the HVS.
  • the luma thresholds for classifying a block as very bright or very dark are application specific, but typical values for 8-bit video are "above 170" for very bright and "below 60" for very dark.
  • Block variance modified by block luminance as described above, may be calculated for all the input blocks of a video frame to form a spatial contrast map (SCM) that indicates regions of high and low noticeability to the HVS in terms of spatial contrast.
  • the SCM can be combined with the TCSF (refined by the TMVM) to form a unified importance map.
  • the unified map may be formed, for example, by block-by-block multiplication of the SCM value for a block with the TCSF value for that block, with both the SCM and TCSF appropriately normalized.
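  • A sketch of one way to form the SCM and the unified map by block-by-block multiplication; the luminance thresholds 170/60 follow the text, while the max-based normalization is an assumption, since the text only states that both maps are "appropriately normalized":

```python
def spatial_contrast_map(block_variances, block_lumas, bright=170, dark=60):
    """Per-block spatial contrast: high variance -> low contrast (less noticeable);
    very bright or very dark blocks are forced to low contrast (luminance masking)."""
    max_var = max(max(block_variances), 1e-6)
    scm = []
    for var, luma in zip(block_variances, block_lumas):
        contrast = 1.0 - var / max_var            # low variance -> high spatial contrast
        if luma > bright or luma < dark:          # luminance masking override
            contrast = 0.0
        scm.append(contrast)
    return scm

def unified_importance_map(scm, tcsf):
    """Block-by-block product of the normalized SCM and TCSF maps."""
    scm_max = max(max(scm), 1e-6)
    tcsf_max = max(max(tcsf), 1e-6)
    return [(s / scm_max) * (t / tcsf_max) for s, t in zip(scm, tcsf)]
```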
  • the SCM may be used in place of the TCSF.
  • the SCM may be used to refine the TCSF. For example, in a block of high complexity, the SCM value may override the TCSF value for that block, whereas in a block of low complexity, the TCSF value for that block may be used directly.
  • Importance maps as described above may be applied to the video encoding process to enhance the quality of encoded bitstreams, either for general encoders (FIG. 2) or for the CBT encoder (FIG. 3).
  • FIG. 7 depicts the general application of importance maps to video encoding.
  • the input video frame 5 and frame store 85 are used to generate perceptual statistics 390 that are then applied to form importance maps 400 as described above, the TCSF (refined by the TMVM) and/or the SCM.
  • the perceptual statistics 390 may include (but are not limited to) motion vector magnitudes, block variance, block luminance, edge detection, and global motion model parameters.
  • the input video frame 5 and frame store 85 are also inputted as usual to the encoding of the video frame in 450, which includes the usual encoding steps (in FIG. 2, motion estimation 15, inter-prediction 20, intra-prediction 30, transform and quantization 60, and entropy encoding 90).
  • the encoding 450 is enhanced by the importance maps 400, as described below.
  • FIG. 8 A depicts the specific application of importance maps to enhance video encoding using the CBT.
  • FIG. 8A shows initial motion estimation (110 in FIG. 2) via the frame -to-frame tracking 210 and continuous tracking 220 steps from CBT.
  • Fine motion estimation 130 is then applied to the global CBT motion vectors 225, with the same fine motion estimation steps of local search and subpixel refinement (250 in FIG. 4).
  • A mode generation module 140 then generates a set of candidate predictions 145 based on the possible encoding modes of the encoder, as in FIG. 2.
  • EPZS and other non-model-based candidates such as the (0, 0) motion vector and the median predictor may also be generated in parallel as part of a unified motion estimation framework (these other candidates are not shown in FIG. 8A to simplify the diagram).
  • the full set of prediction candidates 145 including all encoding modes for CBT candidates and possibly all encoding modes for other, non-model-based candidates, again undergoes "final" rate-distortion analysis 155 to determine the best single candidate.
  • "final" rate-distortion analysis a precise rate-distortion metric D+ ⁇ is used, computing the prediction error D for the distortion portion and the actual encoding bits R (from the entropy encoding 90 in FIG. 1) for the rate portion.
  • the final prediction 160 (or 40 in FIG. 1) is passed to the subsequent steps of the encoder, along with its motion vector and other encoding parameters.
  • perceptual statistics 390 can be calculated from the motion vectors derived from frame-to-frame motion tracking 210 and then applied to form importance maps 400 as described above, which are then inputted into the final rate-distortion analysis 155.
  • the perceptual statistics 390 may include (but are not limited to) motion vector magnitudes, block variance, block luminance, edge detection, and global motion model parameters.
  • importance maps are used to modify the rate-distortion optimization criterion accordingly.
  • In a standard encoder (see FIG. 2), the full set of prediction candidates 145 for a given input block 10 undergoes "final" rate-distortion analysis 150 to determine the best single candidate.
  • "final" rate-distortion analysis a precise rate- distortion metric D+ ⁇ is used, computing the prediction error D for the distortion portion and the actual encoding bits R (from the entropy encoding 90 in FIG. 1) for the rate portion.
  • the candidate with the lowest score for the rate-distortion metric D+ ⁇ R becomes the final prediction 160 for the given input block 10.
  • the importance map IM is calculated in 400 and the final rate-distortion analysis 155 uses a modified rate-distortion metric D ⁇ IM+ ⁇ R.
  • the modified rate-distortion metric the IM value for a given input block multiplies the distortion term, assigning more importance to low-distortion solutions the higher the IM value is, since a high IM value indicates that the corresponding input block is perceptually important.
  • the importance map may include the TCSF (possibly refined by the TMVM), the SCM, or a composite of both.
  • The distortion D in the rate-distortion metric may be computed as a weighted sum of SSE (sum of squared errors, the "standard" method of calculating distortion) and SSIM, calculated in YUV space. The modified rate-distortion metric would then be D·IM + λR, with the distortion D given by this weighted sum.
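  • A sketch of the modified metric; the SSIM term is converted to an error via (1 − SSIM), and the weights are illustrative since the text does not specify them:

```python
def modified_rd_cost(sse, ssim, rate_bits, importance, lam, w_sse=1.0, w_ssim=1000.0):
    """Modified rate-distortion metric D*IM + lambda*R, where the distortion D is a
    weighted sum of SSE and an SSIM-based error term computed in YUV space."""
    distortion = w_sse * sse + w_ssim * (1.0 - ssim)
    return distortion * importance + lam * rate_bits
```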
  • importance maps may be used to modify the block quantization of the encoder in addition to (or instead of) modifying the rate-distortion optimization.
  • Quantization controls the relative quality at which a given data block is encoded; highly-quantized data results in poorer quality encoded output, while less-quantized data results in higher quality encoded output.
  • the amount of quantization is controlled by a quantization parameter, QP.
  • Standard encoders assign different QP values QPframe to different frame types, with I-frames being encoded with the smallest QP (highest quality), B-frames being encoded with the highest QP (lowest quality), and P-frames being encoded with an intermediate QP (intermediate quality).
  • The above technique represents a method of encoding a plurality of video frames having non-overlapping target blocks, by using importance maps to modify the quantization (and thus the encoding quality) of each target block in each video frame.
  • the importance maps may be configured using temporal information (the TCSF with TMVM refinement), spatial information, or a combination of the two (i.e., a unified importance map).
  • The importance map values should modify the QP for each target block as follows: (i) for blocks where the importance maps take on high values, the block QP is reduced relative to QPframe, resulting in higher quality for those blocks; (ii) for blocks where the importance maps take on low values, the block QP is increased relative to the frame quantization parameter QPframe, resulting in lower quality for those blocks.
  • FIG. 8B shows an example process for using importance maps 400 to modify quantization during encoding.
  • importance maps may be configured/created using temporal information and/or spatial information derived from perceptual statistics 390.
  • Temporal information may be provided by a temporal contrast sensitivity function (TCSF) that indicates which target blocks are most temporally noticeable to a human observer and a true motion vector map (TMVM) that indicates which target blocks correspond to foreground data, with the TCSF only considered valid for those target blocks identified as foreground data.
  • Spatial information may be provided by a rule- based spatial complexity map (SCM).
  • the importance maps 400 are then used to modify the quantization step 430 within the encoding 450, as described above.
  • In target blocks where the importance maps take on high values, the block quantization parameter (QP) is reduced relative to the frame quantization parameter QPframe, resulting in higher encoding quality for those blocks.
  • In target blocks where the importance maps take on low values, the block quantization parameter is increased relative to QPframe, resulting in lower encoding quality for those blocks.
  • the TCSF map for a given frame can be used to adjust the frame QP on a block-by-block basis.
  • One method of calculating the block QP, QP block is to relate the adjustment to the full TCSF map in the frame, following the method of [Li, Z. et al, 2011, "Visual attention guided bit allocation in video compression, J. of Image and Vision Computing, 29(1): 1-14].
  • in this method, QP_block = QP_frame × [TCSF_frame / (TCSF_block × M)], where TCSF_frame is the sum of the TCSF values for all blocks in the frame, TCSF_block is the TCSF value for the given block, and QP_frame is the frame QP.
  • the factor [TCSF_frame / (TCSF_block × M)] may be scaled to prevent the final values of QP_block from becoming too high or too low relative to QP_frame.
  • the block-by-block adjustment of the QP via the TCSF map can be accomplished without reference to the full TCSF map for the frame.
  • in this case the calculation of QP_block is simpler: QP_block = QP_frame / TCSF_block.
  • the resulting value of QP_block is clipped so that it does not exceed a predetermined maximum or minimum QP value for the frame (i.e., QP_min ≤ QP_block ≤ QP_max).
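The two TCSF-based adjustments above might look as follows in outline; interpreting M as the number of blocks in the frame and the choice of clipping bounds are assumptions:

```python
def qp_from_full_tcsf(qp_frame, tcsf_block, frame_tcsf_values, qp_min, qp_max):
    """Frame-relative method (after Li et al., 2011): scale the frame QP by
    TCSF_frame / (TCSF_block * M). M is taken here as the number of blocks
    in the frame, which is an assumption about the notation."""
    m = len(frame_tcsf_values)
    tcsf_frame = sum(frame_tcsf_values)
    qp_block = qp_frame * tcsf_frame / (tcsf_block * m)
    return max(qp_min, min(qp_max, round(qp_block)))

def qp_from_block_tcsf(qp_frame, tcsf_block, qp_min, qp_max):
    """Simpler method: QP_block = QP_frame / TCSF_block, clipped to the
    frame's predetermined minimum and maximum QP values."""
    qp_block = qp_frame / tcsf_block
    return max(qp_min, min(qp_max, round(qp_block)))
```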
  • the outputs of the SCM may be used to modify the quantization parameter on a block-by-block basis using a rule-based approach.
  • This embodiment begins by assigning blocks with high variance a high QP value (low quality), because highly-complex regions are less noticeable to the HVS. Blocks with low variance are assigned a low QP value (high quality), because less-complex regions are more noticeable to the HVS.
  • the QP assignment for a given block is bounded by the frame's maximum and minimum QP values, QP_max and QP_min, and is scaled linearly based on the block variance relative to the variance of other blocks in the frame.
  • the QP assignment for high-variance blocks may be further refined by the TCSF. For example, if the block is identified as foreground data in the TMVM and the TCSF has a log contrast sensitivity value (vertical axis in FIG. 5) less than 0.5, meaning that the block is temporally unimportant, QP_block is raised by 2.
  • an edge detection process can be applied and blocks containing edges can have their QPs adjusted to QP_min, overwriting the previously-assigned QPs from spatial complexity, because edges are particularly noticeable to the HVS.
  • blocks that are either very bright or very dark can have their QPs adjusted to QP_max, again by overwriting the previously-assigned QPs from variance and (if applicable) from edge detection, because very dark or very bright regions are not noticeable to the HVS. This process is known as luminance masking.
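A rule-based sketch of the SCM-driven QP assignment just described; the luminance thresholds, the "high-variance" cutoff used for TCSF refinement, and the linear variance scaling are illustrative assumptions:

```python
def scm_block_qp(block_variance, frame_variances, has_edge, mean_luma,
                 qp_min, qp_max, is_foreground=False, tcsf_log_cs=None):
    """Rule-based SCM QP assignment in the order described above: linear
    variance scaling, TCSF refinement of high-variance blocks, edge override,
    then luminance masking (which overwrites earlier assignments)."""
    v_lo, v_hi = min(frame_variances), max(frame_variances)
    # High variance -> high QP (low quality); low variance -> low QP.
    t = 0.0 if v_hi == v_lo else (block_variance - v_lo) / (v_hi - v_lo)
    qp = round(qp_min + t * (qp_max - qp_min))

    # Refine high-variance foreground blocks that the TCSF marks as
    # temporally unimportant (the cutoff t > 0.5 is an assumed definition
    # of "high variance").
    if t > 0.5 and is_foreground and tcsf_log_cs is not None and tcsf_log_cs < 0.5:
        qp += 2

    # Edges are particularly noticeable to the HVS: force best quality.
    if has_edge:
        qp = qp_min

    # Luminance masking: very dark or very bright blocks are least
    # noticeable; the thresholds 30/225 are assumptions.
    if mean_luma < 30 or mean_luma > 225:
        qp = qp_max

    return qp
```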
  • the value of QP_max for high-variance blocks may be determined dynamically based on the quality level of the encoded video.
  • the idea is that low-quality encodings cannot afford any quality drop in high-variance blocks, so QP_max should be closer to QP_frame, whereas high-quality encodings can afford an increased QP_max for high-variance blocks, to save bits.
  • the quality of the encoding may be updated at each I (Intra) frame by calculating the average SSIM of blocks having variance within 5% of the average frame variance, with higher SSIM values corresponding to greater values of QP_max.
  • the average SSIM is adjusted by the average variance of the frame, so that the quality indicator is calculated as the product of the average SSIM and the average frame variance.
  • very-low-variance blocks may be assigned fixed, low QP values to ensure high-quality encoding in those regions.
  • QP assignments for blocks in P and B frames may then be derived from the above QPs using the ipratio and pbratio parameters.
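A sketch of the dynamic QP_max update at each I-frame; the 5% variance window follows the description above, but the mapping from the quality indicator to extra QP headroom is an assumption:

```python
def update_qp_max(block_ssims, block_variances, qp_frame, qp_ceiling=51):
    """Recompute QP_max for high-variance blocks at an I-frame. The quality
    indicator is the average SSIM of blocks whose variance lies within 5% of
    the average frame variance, multiplied by the average frame variance.
    The mapping from indicator to extra QP headroom is an assumption."""
    avg_var = sum(block_variances) / len(block_variances)
    near_avg = [s for s, v in zip(block_ssims, block_variances)
                if abs(v - avg_var) <= 0.05 * avg_var]
    avg_ssim = sum(near_avg) / len(near_avg) if near_avg else 0.0
    indicator = avg_ssim * avg_var

    # Higher-quality encodings (larger indicator) can afford more headroom.
    headroom = min(8, int(indicator / 100.0))  # assumed scaling
    return min(qp_ceiling, qp_frame + headroom)
```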
  • low-variance blocks (for example, those having variance between 60 and the average frame variance) are assigned the frame QP, QP_frame, and are then examined to determine whether further quality enhancement is needed.
  • one can detect blockiness artifacts by comparing the spatial complexity and luminance of both the reconstructed pixels and the original pixels from the current (target) block being encoded with the spatial complexity and luminance of previously-encoded surrounding blocks (e.g., blocks to the left, top-left, top, and top-right when available).
  • when this comparison indicates a discrepancy, the target block is considered "blocky," and the block's QP value is decreased (e.g., decreased by 2) to improve the encoding quality of the block.
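A heuristic sketch of such a blockiness check; the particular statistics compared and the tolerances are assumptions, not the patent's exact test:

```python
import numpy as np

def looks_blocky(orig_block, recon_block, neighbor_blocks,
                 var_tol=0.5, luma_tol=8.0):
    """Compare spatial complexity (variance) and luminance of the
    reconstructed target block against its original and against previously
    encoded neighbors. Tolerances are illustrative assumptions."""
    o = orig_block.astype(np.float64)
    r = recon_block.astype(np.float64)
    # Detail washed out relative to the original suggests over-quantization.
    var_drop = (o.var() - r.var()) > var_tol * max(o.var(), 1.0)
    # A luminance jump against the neighbors suggests a visible block seam.
    nb_luma = np.mean([b.astype(np.float64).mean() for b in neighbor_blocks])
    luma_jump = abs(r.mean() - nb_luma) > luma_tol
    return var_drop or luma_jump
```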
  • the estimated quality of the target block is calculated by averaging the SSIM and QP values of previously-encoded surrounding blocks (e.g., blocks to the left, top-left, top, and top-right when available).
  • the average QP value, QP_avg, serves as the estimated QP, QP_block, for the target block.
  • in one embodiment, QP_block = QP_avg is lowered by 2, increasing the block's quality.
  • in another embodiment, QP_block = QP_avg is lowered by 2 only if the TCSF has a log contrast sensitivity value (vertical axis in FIG. 5) greater than 0.8, meaning that the block is temporally important.
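A sketch of this quality-enhancement step for low-variance blocks initially assigned QP_frame; the blockiness flag corresponds to a comparison like the one sketched above, and the handling when no TCSF value is available is an assumption:

```python
def refine_low_variance_qp(qp_frame, neighbors, blocky, tcsf_log_cs=None):
    """Quality-enhancement step for a low-variance block initially assigned
    QP_frame. 'neighbors' holds previously encoded surrounding blocks, each a
    dict with a 'qp' entry; 'blocky' is the result of the blockiness check."""
    if blocky:
        # Blocky target: decrease QP to improve its encoding quality.
        return qp_frame - 2

    # Estimate the target's QP from previously encoded neighbors.
    qp_avg = round(sum(n['qp'] for n in neighbors) / len(neighbors))

    # Lower the estimate by 2 when the TCSF marks the block as temporally
    # important (log contrast sensitivity > 0.8); if no TCSF value is
    # available, apply the reduction unconditionally (assumption).
    if tcsf_log_cs is None or tcsf_log_cs > 0.8:
        return qp_avg - 2
    return qp_avg
```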
  • the methods outlined above may use temporal importance maps (the TCSF, with or without TMVM refinement), spatial importance maps (the SCM), or both. If both temporal and spatial importance maps are used, the result is termed a unified importance map.
  • Importance maps, generated from perceptual statistics as described above, can be applied to any video compression framework that uses motion compensation to produce motion vectors, such that both rate-distortion analysis and quantization are enhanced to produce visually superior encodings for the same encoding sizes.
  • the use of importance maps for video compression does not require specific application to the continuous block tracker (CBT) as detailed above.
  • the CBT provides the additional capability of accurately determining which motion vectors are true motion vectors, so importance maps are more effective in a CBT-based encoding framework.
  • the particular reason for this is that the CBT's frame-to-frame motion vectors (from frame-to-frame tracking 210 in FIG. 8A) are generated from the original frames of the video and not the reconstructed frames.
  • the frame store 85 in FIG. 2 and FIG. 7 for general encoders contains reconstructed frames generated from the encoding process, but the frame store 205 in FIG. 3, FIG. 4, and FIG. 8A contains the original video frames. Because of this, the CBT's frame-to-frame tracking (210 in FIGS. 3, 4, and 8A) is better able to track the true motion of the video, and its frame-to-frame motion vectors generate more accurate true motion vector maps. By contrast, a general encoder's motion vectors are selected to optimize rate-distortion (compression) performance and may not reflect the true motion of the video.
  • importance maps may be applied to intra-predicted frames as well, either by modifying the rate-distortion optimization among intra-prediction modes or by modifying the block-level quantization, following the techniques described above.
  • computation of the TCSF requires a separate encoding module (such as frame-to-frame tracking 210 in FIG. 8A) to generate motion vectors for each data block in the video frame.
  • Example implementations of the present invention may be implemented in a software, firmware, or hardware environment.
  • FIG. 9A illustrates one such environment.
  • Client computer(s)/devices 950 (e.g., mobile phones or computing devices) connect to a cloud 960 (or a server computer or cluster thereof).
  • Client computer(s)/devices 950 can also be linked through communications network 970 to other computing devices, including other client devices/processes 950 and server computer(s) 960.
  • Communications network 970 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, Local area or Wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth, etc.) to communicate with one another.
  • Other electronic devices/computer network architectures are suitable.
  • Embodiments of the invention may include means for encoding, tracking, modeling, filtering, tuning, decoding, or displaying video or data signal information.
  • FIG. 9B is a diagram of the internal structure of a computer/computing node (e.g., client processor/device/mobile phone device/tablet 950 or server computers 960) in the processing environment of FIG. 9A, which may be used to facilitate encoding such videos or data signal information.
  • Each computer 950, 960 contains a system bus 979, where a bus is a set of actual or virtual hardware lines used for data transfer among the components of a computer or processing system.
  • Bus 979 is essentially a shared conduit that connects different elements of a computer system (e.g., processor, encoder chip, decoder chip, disk storage, memory, input/output ports, etc.) that enables the transfer of data between the elements. Attached to the system bus 979 is an I/O device interface 982 for connecting various input and output devices (e.g., keyboard, mouse, displays, printers, speakers, etc.) to the computer 950, 960. Network interface 986 allows the computer to connect to various other devices attached to a network (for example, the network illustrated at 970 of FIG. 9A).
  • Memory 990 provides volatile storage for computer software instructions 992 and data 994 used to implement a software implementation of the present invention (e.g., codec: encoder/decoder).
  • Disk storage 995 provides non-volatile storage for computer software instructions 998 (equivalently, "OS program") and data 994 used to implement an embodiment of the present invention; it can also be used to store the video in compressed format for long-term storage.
  • Central processor unit 984 is also attached to system bus 979 and provides for the execution of computer instructions. Note that throughout the present text, "computer software instructions” and “OS program” are equivalent.
  • an encoder may be configured with computer readable instructions 992 to encode video data using importance maps formed from temporal information or spatial information.
  • the importance maps may be configured to provide a feedback loop to an encoder (or elements thereof) to optimize the encoding/decoding of video data.
  • the processor routines 992 and data 994 are a computer program product, with an encoder (generally referenced 992), including a computer readable medium capable of being stored on a storage device 994 which provides at least a portion of the software instructions for the encoder.
  • the computer program product 992 can be installed by any suitable software installation procedure, as is well known in the art.
  • at least a portion of the encoder software instructions may also be downloaded over a cable, communication, and/or wireless connection.
  • the encoder system software is a computer program propagated signal product 907 (in Fig. 9A) embodied on a nontransitory computer readable medium, which when executed can be implemented as a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)).
  • a propagation medium e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)
  • Such carrier media or signals provide at least a portion of the software instructions for the present invention routines/program 992.
  • the propagated signal is an analog carrier wave or digital signal carried on the propagated medium.
  • the propagated signal may be a digitized signal propagated over a global network (e.g., the Internet), a telecommunications network, or other network.
  • the propagated signal is transmitted over the propagation medium over a period of time, such as the instructions for a software application sent in packets over a network over a period of milliseconds, seconds, minutes, or longer.
  • the computer readable medium of computer program product 992 is a propagation medium that the computer system 950 may receive and read, such as by receiving the propagation medium and identifying a propagated signal embodied in the propagation medium, as described above for the computer program propagated signal product.

Abstract

According to the present invention, perceptual statistics can be used to compute importance maps indicating the regions of a video frame that are important to the human visual system. The importance maps can be applied to the video encoding process to improve the quality of encoded bitstreams. The temporal contrast sensitivity function (TCSF) can be computed from the encoder's motion vectors. Measures of motion vector quality can be used to construct a true motion vector map (TMVM) that can serve to refine the TCSF. Spatial complexity maps (SCMs) can be computed from measures such as block variance, block luminance, SSIM, and edge strength, and the SCMs can be combined with the TCSF to obtain a unified importance map. The importance maps can be used to improve encoding either by modifying the criterion for selecting optimal encoding solutions or by modifying the quantization for each target block to be encoded.
PCT/US2015/048353 2014-09-11 2015-09-03 Optimisation de la perception pour codage vidéo à base de modèles WO2016040116A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201580049004.1A CN106688232A (zh) 2014-09-11 2015-09-03 基于模型的视频编码的感知优化
JP2017513750A JP6698077B2 (ja) 2014-09-11 2015-09-03 モデルベースの映像符号化用の知覚的最適化
CA2960617A CA2960617A1 (fr) 2014-09-11 2015-09-03 Optimisation de la perception pour codage video a base de modeles
EP15770689.6A EP3175618A1 (fr) 2014-09-11 2015-09-03 Optimisation de la perception pour codage vidéo à base de modèles

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201462049342P 2014-09-11 2014-09-11
US62/049,342 2014-09-11
US14/532,947 US9621917B2 (en) 2014-03-10 2014-11-04 Continuous block tracking for temporal prediction in video encoding
US14/532,947 2014-11-04
US201462078181P 2014-11-11 2014-11-11
US62/078,181 2014-11-11
US201562158523P 2015-05-07 2015-05-07
US62/158,523 2015-05-07

Publications (1)

Publication Number Publication Date
WO2016040116A1 true WO2016040116A1 (fr) 2016-03-17

Family

ID=55459438

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/048353 WO2016040116A1 (fr) 2014-09-11 2015-09-03 Optimisation de la perception pour codage vidéo à base de modèles

Country Status (5)

Country Link
EP (1) EP3175618A1 (fr)
JP (1) JP6698077B2 (fr)
CN (1) CN106688232A (fr)
CA (1) CA2960617A1 (fr)
WO (1) WO2016040116A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9532069B2 (en) 2004-07-30 2016-12-27 Euclid Discoveries, Llc Video compression repository and model reuse
US9578345B2 (en) 2005-03-31 2017-02-21 Euclid Discoveries, Llc Model-based video encoding and decoding
US9621917B2 (en) 2014-03-10 2017-04-11 Euclid Discoveries, Llc Continuous block tracking for temporal prediction in video encoding
US9743078B2 (en) 2004-07-30 2017-08-22 Euclid Discoveries, Llc Standards-compliant model-based video encoding and decoding
US10091507B2 (en) 2014-03-10 2018-10-02 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
US10097851B2 (en) 2014-03-10 2018-10-09 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
CN109819252A (zh) * 2019-03-20 2019-05-28 福州大学 一种不依赖gop结构的量化参数级联方法

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109547802A (zh) * 2017-09-22 2019-03-29 江苏智谋科技有限公司 基于三维视觉技术的无人机避障系统
CN107843227B (zh) * 2017-12-09 2020-04-10 连云港杰瑞电子有限公司 一种基于校准技术提高编码器精度的方法
US10652550B2 (en) 2017-12-22 2020-05-12 Shenzhen China Star Optoelectronics Semiconductor Display Technology Co., Ltd. Compensation table compressing method
CN108172168B (zh) * 2017-12-22 2019-11-15 深圳市华星光电半导体显示技术有限公司 一种补偿表压缩方法
WO2020065520A2 (fr) 2018-09-24 2020-04-02 Beijing Bytedance Network Technology Co., Ltd. Prédiction de fusion étendue
WO2019194572A1 (fr) * 2018-04-03 2019-10-10 Samsung Electronics Co., Ltd. Procédés et appareil pour déterminer un paramètre de réglage pendant le codage d'un contenu multimédia sphérique
EP3785427A4 (fr) * 2018-04-28 2021-05-12 SZ DJI Technology Co., Ltd. Estimation de mouvement
WO2019234598A1 (fr) 2018-06-05 2019-12-12 Beijing Bytedance Network Technology Co., Ltd. Interaction entre ibc et stmvp
GB2589223B (en) * 2018-06-21 2023-01-25 Beijing Bytedance Network Tech Co Ltd Component-dependent sub-block dividing
CN110636298B (zh) 2018-06-21 2022-09-13 北京字节跳动网络技术有限公司 对于Merge仿射模式和非Merge仿射模式的统一约束
CN110859057A (zh) * 2018-06-29 2020-03-03 深圳市大疆创新科技有限公司 运动矢量确定方法、设备及机器可读存储介质
US10992938B2 (en) * 2018-09-28 2021-04-27 Ati Technologies Ulc Spatial block-level pixel activity extraction optimization leveraging motion vectors
KR20210089155A (ko) 2018-11-10 2021-07-15 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 페어와이즈 평균 후보 계산에서 라운딩
CN109982082B (zh) * 2019-05-05 2022-11-15 山东大学 一种基于局部纹理特性的hevc多失真准则率失真优化方法
CN111882564A (zh) * 2020-07-27 2020-11-03 山东大学 一种超高清医学病理图像的压缩处理方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1250012A2 (fr) * 2001-03-23 2002-10-16 Sharp Kabushiki Kaisha Quantification adaptative sur critère de prédiction de débit et d'énergie d'erreur de prédiction
US20100290524A1 (en) * 2009-05-16 2010-11-18 Thomson Licensing Method and apparatus for joint quantization parameter adjustment
US8135062B1 (en) * 2006-01-16 2012-03-13 Maxim Integrated Products, Inc. Method and apparatus for QP modulation based on perceptual models for picture encoding
US8737464B1 (en) * 2011-07-21 2014-05-27 Cisco Technology, Inc. Adaptive quantization for perceptual video coding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101184221A (zh) * 2007-12-06 2008-05-21 上海大学 基于视觉关注度的视频编码方法
CN101325711A (zh) * 2008-07-16 2008-12-17 上海大学 基于时空掩盖效应的自适应码率控制方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1250012A2 (fr) * 2001-03-23 2002-10-16 Sharp Kabushiki Kaisha Quantification adaptative sur critère de prédiction de débit et d'énergie d'erreur de prédiction
US8135062B1 (en) * 2006-01-16 2012-03-13 Maxim Integrated Products, Inc. Method and apparatus for QP modulation based on perceptual models for picture encoding
US20100290524A1 (en) * 2009-05-16 2010-11-18 Thomson Licensing Method and apparatus for joint quantization parameter adjustment
US8737464B1 (en) * 2011-07-21 2014-05-27 Cisco Technology, Inc. Adaptive quantization for perceptual video coding

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
"Lumi masking", WIKIPEDIA, 8 November 2006 (2006-11-08), XP055232466, Retrieved from the Internet <URL:https://web.archive.org/web/20061124153834/http://en.wikipedia.org/wiki/Lumi_masking> [retrieved on 20151201] *
ANONYMOUS: "The H.264 Advanced Video Compression Standard, 2nd Edition, chapter 7, H.264 transform and coding, Iain E. Richardson", NOT KNOWN,, 20 April 2010 (2010-04-20), XP030001638 *
BARTELS, C.; DE HAAN, G.: "Temporal symmetry constraints in block matching", PROC. IEEE 13TH INT'L. SYMPOSIUM ON CONSUMER ELECTRONICS, 2009, pages 749 - 752
BARTEN, P.: "Contrast Sensitivity of the Human Eye and Its Effects on Image Quality", 1999, SPIE PRESS
CHEN ZHENZHONG ET AL: "Perception-oriented video coding based on foveated JND model Â", PICTURE CODING SYMPOSIUM 2009; 6-5-2009 - 8-5-2009; CHICAGO,, 6 May 2009 (2009-05-06), XP030081866 *
CHIH-WEI TANG: "Spatiotemporal Visual Considerations for Video Coding", IEEE TRANSACTIONS ON MULTIMEDIA, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 9, no. 2, 1 February 2007 (2007-02-01), pages 231 - 238, XP011346384, ISSN: 1520-9210, DOI: 10.1109/TMM.2006.886328 *
CHRISTOPHER BULLA ET AL: "High Quality Video Conferencing: Region of Interest Encoding and Joint Video/Audio Analysis", INTERNATIONAL JOURNAL ON ADVANCES IN TELECOMMUNICATIONS, vol. 6, no. 3-4, 1 December 2013 (2013-12-01), pages 153 - 163, XP055232071, ISSN: 1942-2601 *
DE LANGE, H.: "Relationship between critical flicker frequency and a set of low frequency characteristics of the eye", J. OPT. SOC. AM., vol. 44, 1954, pages 380 - 389
NACCARI M ET AL: "Improving HEVC compression efficiency by intensity dependant spatial quantisation", 101. MPEG MEETING; 16-7-2012 - 20-7-2012; STOCKHOLM; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. m25398, 11 July 2012 (2012-07-11), XP030053732 *
See also references of EP3175618A1 *
TOURAPIS, A.: "Enhanced predictive zonal search for single and multiple frame motion estimation", PROC. SPIE 4671, VISUAL COMMUNICATIONS AND IMAGE PROCESSING, 2002, pages 1069 - 1078
WANG, Z. ET AL.: "Image quality assessment: From error visibility to structural similarity", IEEE TRANS. ON IMAGE PROCESSING, vol. 13, no. 4, 2004, pages 600 - 612
WOOTEN, B. ET AL.: "A practical method of measuring the temporal contrast sensitivity function", BIOMEDICAL OPTICAL EXPRESS, vol. 1, no. 1, 2010, pages 47 - 58
ZHICHENG LI ET AL: "Visual attention guided bit allocation in video compression", IMAGE AND VISION COMPUTING, vol. 29, no. 1, 1 January 2011 (2011-01-01), pages 1 - 14, XP055126506, ISSN: 0262-8856, DOI: 10.1016/j.imavis.2010.07.001 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9532069B2 (en) 2004-07-30 2016-12-27 Euclid Discoveries, Llc Video compression repository and model reuse
US9743078B2 (en) 2004-07-30 2017-08-22 Euclid Discoveries, Llc Standards-compliant model-based video encoding and decoding
US9578345B2 (en) 2005-03-31 2017-02-21 Euclid Discoveries, Llc Model-based video encoding and decoding
US9621917B2 (en) 2014-03-10 2017-04-11 Euclid Discoveries, Llc Continuous block tracking for temporal prediction in video encoding
US10091507B2 (en) 2014-03-10 2018-10-02 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
US10097851B2 (en) 2014-03-10 2018-10-09 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
CN109819252A (zh) * 2019-03-20 2019-05-28 福州大学 一种不依赖gop结构的量化参数级联方法
CN109819252B (zh) * 2019-03-20 2021-05-18 福州大学 一种不依赖gop结构的量化参数级联方法

Also Published As

Publication number Publication date
CN106688232A (zh) 2017-05-17
CA2960617A1 (fr) 2016-03-17
EP3175618A1 (fr) 2017-06-07
JP2017532858A (ja) 2017-11-02
JP6698077B2 (ja) 2020-05-27

Similar Documents

Publication Publication Date Title
US10097851B2 (en) Perceptual optimization for model-based video encoding
US10091507B2 (en) Perceptual optimization for model-based video encoding
JP6698077B2 (ja) モデルベースの映像符号化用の知覚的最適化
US11228766B2 (en) Dynamic scaling for consistent video quality in multi-frame size encoding
US9621917B2 (en) Continuous block tracking for temporal prediction in video encoding
US10212456B2 (en) Deblocking filter for high dynamic range (HDR) video
EP1797722B1 (fr) Compression par repetition de zone de chevauchement adaptative pour la compensation de mouvement precis
EP3075154B1 (fr) Sélection de la précision d&#39;un vecteur de mouvement
US11240496B2 (en) Low complexity mixed domain collaborative in-loop filter for lossy video coding
US9241160B2 (en) Reference processing using advanced motion models for video coding
US9313526B2 (en) Data compression for video
US9078009B2 (en) Data compression for video utilizing non-translational motion information
US20110206132A1 (en) Data Compression for Video
US20120300850A1 (en) Image encoding/decoding apparatus and method
US9294764B2 (en) Video encoder with intra-prediction candidate screening and methods for use therewith
US20150256853A1 (en) Video encoder with transform size preprocessing and methods for use therewith
US9438925B2 (en) Video encoder with block merging and methods for use therewith
KR20150034699A (ko) 인트라 모드를 이용한 쿼터 픽셀 해상도를 갖는 영상 보간 방법 및 장치
EP2899975A1 (fr) Codeur vidéo avec prétraitement d&#39;intra-prédiction et procédés d&#39;utilisation associés
US20240031580A1 (en) Method and apparatus for video coding using deep learning based in-loop filter for inter prediction
EP4268460A1 (fr) Filtre temporel

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15770689

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015770689

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015770689

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2960617

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2017513750

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE