WO2016115968A1 - A layered video coding method integrating visual perception features - Google Patents

A layered video coding method integrating visual perception features

Info

Publication number
WO2016115968A1
WO2016115968A1 (PCT/CN2015/100056)
Authority
WO
WIPO (PCT)
Prior art keywords
macroblock
mode
motion
coding
visual
Application number
PCT/CN2015/100056
Other languages
English (en)
French (fr)
Inventor
刘鹏宇 (Pengyu LIU)
贾克斌 (Kebin JIA)
Original Assignee
北京工业大学 (Beijing University of Technology)
Application filed by 北京工业大学 (Beijing University of Technology)
Priority to US15/124,672 (published as US10313692B2)
Publication of WO2016115968A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/124: Quantisation
    • H04N19/127: Prioritisation of hardware or computational resources
    • H04N19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/146: Data rate or code amount at the encoder output
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/30: using hierarchical techniques, e.g. scalability
    • H04N19/36: Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
    • H04N19/37: with arrangements for assigning different transmission priorities to video input data or to video coded data
    • H04N19/50: using predictive coding
    • H04N19/503: involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/53: Multi-resolution motion estimation; Hierarchical motion estimation

Definitions

  • The present invention relates to a video coding method, and in particular to a layered video coding method that integrates visual perception features.
  • Video coding has shown broad development prospects in information processing and related fields.
  • While network bandwidth and storage space remain limited, demands on video quality keep rising, and the performance indicators of digital video, such as resolution, quality, and frame rate, keep improving, placing new requirements on the existing video coding standards.
  • The present invention addresses the above problem and proposes a layered video coding method that integrates visual perception features, comprising the setting of visual region-of-interest priorities and the setting of a video coding resource allocation scheme.
  • The region-of-interest priorities are set as follows: given the richness of video content and the selective attention mechanism of the human eye, video content usually carries both temporal and spatial visual features, and the visually salient regions are labeled accordingly. The labeling formula (given as an image in the original) combines:
  • ROI(x, y), the visual-interest priority of the currently coded macroblock;
  • T(x, y, MV), the temporal visual-feature saliency of the currently coded macroblock;
  • S(x, y, Mode), the spatial visual-feature saliency of the currently coded macroblock;
  • (x, y), the position coordinate of the currently coded macroblock.
  • The video coding resource allocation scheme is set as follows: to improve real-time coding performance while preserving video quality and compression efficiency, coding optimization is granted first to macroblocks of the region of interest;
  • the macroblock gray histogram describes the degree of macroblock flatness, and the candidate intra prediction mode set is selected adaptively according to that flatness;
  • specific inter modes are pre-judged so that unnecessary inter-prediction mode searches and rate-distortion cost calculations terminate early, reducing coding time;
  • the motion-estimation search depth is determined by the degree of motion of the coding block, achieving an efficient search.
  • The temporal visual saliency labeling is divided into two steps, step 1 motion vector noise detection and step 2 translational motion vector detection, which respectively weaken the influence of motion-vector noise and of the translational motion vectors produced by camera motion on the accuracy of temporal saliency detection; the separation of foreground and background is completed, yielding a temporal saliency labeling that better conforms to human visual characteristics.
  • Next, the spatial saliency labeling is performed.
  • Finally, the visually salient regions are labeled from the temporal and spatial saliency labeling results.
  • MV_i denotes the motion vector of a macroblock contained in the motion reference region Crr, and Num denotes the number of accumulations.
  • The motion reference region Crr is defined so that its shape, position, and area adapt to changes of the current motion vector:
  • the four macroblocks at the upper-left, upper-right, lower-left, and lower-right of Crr are denoted MB1, MB2, MB3, MB4, and their position coordinates are defined by coordinate formulas given as images in the original.
  • The calculation of step 2, translational motion vector detection, is expressed by a formula (given as an image in the original) in which (x, y) is the position coordinate of the current coding block, SAD_avg is a dynamic threshold, and SAD(x, y) is the sum of absolute differences (SAD) between the current coding block and the co-located block of the previous frame, characterizing how much the co-located blocks of two adjacent frames change:
  • SAD(x, y) = Σ_{i=1..M} Σ_{j=1..N} | s(i, j) - c(i, j) |
  • s(i, j) is the pixel value of the current coding block; c(i, j) is the pixel value of the co-located block of the previous frame; M and N are the height and width of the current coding block;
  • the dynamic threshold SAD_avg is the mean SAD of all coding blocks determined to lie in the background region of the previous frame: Sc denotes that background region, the sum accumulates the SAD values of the coding blocks contained in Sc, and Num denotes the number of accumulations;
  • for the spatial labeling, (x, y) is the position coordinate of the current coding block; Mode is the prediction mode of the coding block; mode_P is the prediction mode of the current coding block in P-frame coding; mode_I is its prediction mode in I-frame coding;
  • if mode_P selects the sub-block inter prediction mode set Inter8 (8×8, 8×4, 4×8, 4×4), or mode_I selects the Intra4×4 prediction mode, the block is rich in spatial detail and also has high spatial saliency;
  • the visually salient regions are then labeled from the temporal and spatial results.
  • Step 1: compute the gray histogram of the luminance component Y of the current macroblock and record the maximum pixel count Max Value;
  • Step 2: set an upper threshold Th_high and a lower threshold Th_low, both integers in [1, 256];
  • Step 3: if Max Value ≥ Th_high, the macroblock is considered flat: discard the Intra4×4 prediction mode set, select the Intra16×16 prediction mode set, take the mode with the lowest rate-distortion cost as the optimal intra prediction mode, and update the upper threshold (the update formula is given as an image in the original); otherwise, proceed to step 4;
  • Step 4: if Max Value ≤ Th_low, the macroblock is considered rich in detail: discard the Intra16×16 prediction mode set, select the Intra4×4 prediction mode set, take the mode with the lowest rate-distortion cost as the optimal intra prediction mode, and update the lower threshold (the update formula is given as an image in the original); otherwise, proceed to step 5;
  • Step 5: if Th_low < Max Value < Th_high, the flatness of the macroblock is not significant, and the standard intra prediction algorithm is used;
  • in the present invention the upper threshold Th_high and the lower threshold Th_low are set to 150 and 50, respectively.
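A minimal sketch of this flatness test, assuming 8-bit luma samples in a NumPy array (the threshold-update formulas are images in the original and are omitted here):

```python
import numpy as np

# Flatness test of the fast intra algorithm: one dominant gray level
# means a flat macroblock.
def select_intra_mode_set(y_block: np.ndarray,
                          th_high: int = 150, th_low: int = 50) -> str:
    """y_block: 16x16 array of 8-bit luma samples (one macroblock)."""
    hist, _ = np.histogram(y_block, bins=256, range=(0, 256))
    max_value = int(hist.max())      # pixel count of the peak gray level
    if max_value >= th_high:
        return "Intra16x16"          # flat: coarse prediction suffices
    if max_value <= th_low:
        return "Intra4x4"            # detailed: fine prediction needed
    return "standard"                # ambiguous: full standard search
```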
  • Step 1: pre-judgment of the Skip mode.
  • Step 1.1: calculate the rate-distortion cost J_skip of the Skip mode (mode0); if it is less than the threshold T, stop searching the other modes, select Skip as the best prediction mode, and jump to step 4; otherwise perform step 1.2;
  • where T = (0.7 - Min_cost/18000) × Min_cost, and Min_cost is the optimal rate-distortion cost of the previous coded macroblock;
  • Step 1.2: calculate the rate-distortion cost J_16×16 of the Inter16×16 mode (mode1); if J_16×16 > J_skip, still select Skip as the best coding mode and jump to step 4; otherwise perform step 2.
  • Step 2: pre-judgment of the macroblock/sub-block inter prediction modes.
  • Step 2.1: calculate the rate-distortion costs J_16×16 and J_8×8 of the Inter16×16 and Inter8×8 modes; if J_8×8 - J_16×16 > T_0, select the Inter16×16 mode as the best inter coding mode and jump to step 4; otherwise go to step 2.2;
  • where T_0 = 0.2 × Min_cost is an adaptive empirical threshold derived from experimental data, which keeps the decision fast while minimizing the misjudgment rate; Min_cost is the optimal rate-distortion cost of the previous coded macroblock;
  • Step 2.2: calculate the rate-distortion cost J_4×4 of the Inter4×4 mode; if J_4×4 < min(J_16×16, J_8×8), sub-partition the macroblock and take the sub-block inter prediction modes Inter8×8, Inter8×4, Inter4×8, and Inter4×4 (mode4~mode7) as the inter candidate mode set; otherwise take the macroblock inter prediction modes Inter16×16, Inter16×8, Inter8×16 (mode1~mode3) as the inter candidate mode set and discard sub-partition prediction.
  • Step 3: pre-judgment of the intra modes.
  • Step 3.1: calculate the average boundary error ABE (Average Boundary Error) and the summation boundary error SBE (Summation Boundary Error) of the current macroblock; ABE reflects the temporal correlation of the macroblock: ABE = SBE/64, where the SBE formula is given as an image in the original;
  • Y_orig is the pixel value of the currently coded macroblock, Y_rec is the pixel value of the reconstructed macroblock, and (x, y) is the position coordinate of the currently coded macroblock.
  • Step 3.2: calculate the average bit rate AR (Average Rate) of the currently coded macroblock, AR = λ·Rate/384; AR reflects the spatial correlation of the macroblock, where λ is the Lagrangian multiplier and Rate is the number of bits required to code the macroblock.
  • Step 3.3: compare ABE with AR; if ABE < C·AR (C = 0.95), the spatial redundancy of the macroblock is smaller than its temporal redundancy, so the traversal of the intra prediction modes is discarded; in either case proceed to step 4.
  • Step 4: compute and select the optimal inter prediction mode according to the rate-distortion criterion, completing inter prediction coding.
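A minimal sketch of the pre-judgment cascade of steps 1 and 2, where rd_cost is a caller-supplied stand-in for the encoder's rate-distortion evaluation of one mode (the intra pre-judgment of step 3 is omitted):

```python
# Returns the surviving inter candidate mode set for one macroblock.
def choose_inter_candidates(rd_cost, min_cost_prev: float) -> list[str]:
    T = (0.7 - min_cost_prev / 18000.0) * min_cost_prev
    j_skip = rd_cost("Skip")                       # mode0
    if j_skip < T:
        return ["Skip"]                            # step 1.1 early exit
    j_16x16 = rd_cost("Inter16x16")                # mode1
    if j_16x16 > j_skip:
        return ["Skip"]                            # step 1.2 early exit
    T0 = 0.2 * min_cost_prev                       # empirical threshold
    j_8x8 = rd_cost("Inter8x8")
    if j_8x8 - j_16x16 > T0:
        return ["Inter16x16"]                      # step 2.1 early exit
    j_4x4 = rd_cost("Inter4x4")
    if j_4x4 < min(j_16x16, j_8x8):                # step 2.2: sub-partition
        return ["Inter8x8", "Inter8x4", "Inter4x8", "Inter4x4"]
    return ["Inter16x16", "Inter16x8", "Inter8x16"]
```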
  • Step 1: describe the macroblock motion characteristics.
  • Step 1.1: based on the rate-distortion criterion, calculate the rate-distortion cost RDcost_motion of motion estimation for the current macroblock:
  • J_motion(mv, ref | λ_motion) = SAD[s, r(ref, mv)] + λ_motion[R(mv - pred) + R(ref)]
  • where s is the current macroblock pixel value; mv is the macroblock motion vector and pred the prediction vector; ref is the selected reference frame; r(ref, mv) is the pixel value of the reference macroblock; R is the number of bits consumed by differential coding of the motion vector, comprising the coded bits of the difference between the motion vector and its prediction and the coded bits of the reference frame; λ_motion is a Lagrangian multiplier; SAD is the sum of absolute differences between the current block and the reference block, defined (the formula is an image in the original; its standard form) as:
  • SAD(s, c(m)) = Σ_{x=1..M} Σ_{y=1..N} | s(x, y) - c(x - m_x, y - m_y) |
  • where M and N are the width and height of the currently coded macroblock; (x, y) is the macroblock position; s is the true value and c the predicted value; m = (m_x, m_y)^T is the macroblock motion vector, with m_x and m_y its horizontal and vertical components.
  • Step 1.2: based on the rate-distortion criterion, calculate the rate-distortion cost RDcost_mode in mode mode:
  • J_mode(s, c, mode | λ_mode) = SSD(s, c, mode | QP) + λ_mode × R(s, c, mode | QP)
  • where mode is the inter coding mode of the current macroblock; s is the original video signal; c is the video signal reconstructed after coding in mode mode; λ_mode is the Lagrangian multiplier; R(s, c, mode | QP) is the total number of bits associated with the mode and quantization parameter, including macroblock header, motion vectors, and all DCT block information; QP is the coding quantization step; SSD(s, c, mode) is the sum of squared differences between the original and the reconstructed signal, namely the squared luminance differences over the block plus the corresponding chrominance terms (the formula is an image in the original);
  • B_1 and B_2 denote the horizontal and vertical pixel counts of the coding block, taking values 16, 8, or 4; s_Y[x, y] and c_Y[x, y, mode | QP] are the values of the original and reconstructed luminance signals; c_U, c_V and s_U, s_V are the values of the corresponding chrominance signals.
  • Step 1.3: select the minimum of RDcost_motion and RDcost_mode and record it as RD_mincost.
  • Step 2: determine the intensity of macroblock motion. The classification formula and the adjustment factors γ and δ for judging the degree of motion are given as images in the original; they are defined in terms of:
  • Bsize[blocktype], the current coded macroblock size, with 7 values: 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, 4×4; and pred_mincost, which depends on the motion-vector prediction method used for the start search point of the UMHexagonS algorithm:
  • if the start search point uses the temporally predicted motion vector, pred_mincost takes the reference-frame MV predictor;
  • otherwise, if the current block uses a large partition (16×16, 16×8, 8×16), pred_mincost takes the median MV predictor;
  • and if it uses a small partition (8×8, 8×4, 4×8, 4×4), pred_mincost takes the up-layer MV predictor;
  • based on extensive experimental data, the arrays α_1[blocktype] and α_2[blocktype] are defined as α_1[blocktype] = [-0.23, -0.23, -0.23, -0.25, -0.27, -0.27, -0.28] and α_2[blocktype] = [-2.39, -2.40, -2.40, -2.41, -2.45, -2.45, -2.48].
  • Step 3: determine the motion-estimation search depth (a sketch follows the last step below).
  • Step 3.1: when the degree of motion of the macroblock is low, only the inner layers 1 and 2 of the "non-uniform 4-level hexagonal grid search" step of the UMHexagonS algorithm are searched;
  • Step 3.2: when the degree of motion is medium, layers 1 through 3 of the non-uniform hexagonal grid search are searched;
  • Step 3.3: when the degree of motion is high, the full 4-layer non-uniform hexagonal grid search is performed.
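A sketch of the adaptive search-depth rule. The exact mapping from RD_mincost, pred_mincost, γ, and δ to a motion grade is given only as images in the original, so the three-way ratio threshold below is an assumed reading of the prose:

```python
# Assumed classification: compare RD_mincost against pred_mincost scaled
# by the per-blocktype adjustment factors gamma and delta.
def search_layers(rd_mincost: float, pred_mincost: float,
                  gamma: float, delta: float) -> list[int]:
    ratio = rd_mincost / pred_mincost
    if ratio < 1.0 + gamma:
        return [1, 2]            # low motion: inner layers only
    if ratio < 1.0 + delta:
        return [1, 2, 3]         # medium motion
    return [1, 2, 3, 4]          # high motion: full UMHexagonS search
```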
  • The invention adopts a two-layer structure, a video coding layer and a visual perception analysis layer, to achieve fast coding.
  • On the one hand, the perception analysis layer uses the bitstream information of the coding layer to analyze visual-feature saliency and label region-of-interest priorities, greatly shortening the computation time of perception analysis; on the other hand, the coding layer reuses the saliency analysis results output by the perception analysis layer to optimize the allocation of coding resources, realizing layered video coding and raising the coding speed.
  • The invention preserves video quality and coding efficiency while raising the overall coding speed, striking a balance among coding speed, subjective video quality, and compression bit rate.
  • A layered video coding method integrating visual perception features comprises the setting of visual region-of-interest priorities and the setting of a video coding resource allocation scheme.
  • The region-of-interest priorities are set as follows: given the richness of video content and the selective attention mechanism of the human eye, video content usually carries both temporal and spatial visual features; to reduce the computational complexity of analyzing them, the temporal and spatial visually salient regions are labeled using information already present in the coded video stream.
  • Temporal saliency labeling is divided into two steps, (1) motion vector noise detection and (2) translational motion vector detection, which respectively weaken the influence of motion-vector noise and of camera-induced translational motion vectors on the accuracy of temporal saliency detection; the separation of foreground and background is completed, yielding a temporal saliency labeling that better conforms to human visual characteristics.
  • MV_i denotes the motion vector of a macroblock contained in the reference region Crr, and Num denotes the number of accumulations.
  • The four macroblocks at the upper-left, upper-right, lower-left, and lower-right of Crr are denoted MB1, MB2, MB3, MB4, and their position coordinates are defined by coordinate formulas given as images in the original.
  • The foreground translational region produced by translational motion vectors is labeled T2(x, y, MV).
  • The video coding resource allocation scheme is set so as to improve real-time video coding performance while preserving coding quality and compression efficiency, granting coding optimization first to region-of-interest macroblocks.
  • The hierarchical coding scheme is shown in Table 1 (reproduced as an image in the original).
  • Per Table 1, a fast intra prediction algorithm describes macroblock flatness with the macroblock gray histogram and adaptively selects the candidate intra prediction mode set according to that flatness.
  • The macroblock gray histogram describes the gray-level information contained in the macroblock: numerically, it counts the number (or probability) of occurrences of each gray level in the macroblock; graphically, it is a two-dimensional plot whose abscissa is the gray level contained in the macroblock, ranging over [0, 255] from all black to all white, and whose ordinate is the number of pixels of the macroblock at each gray level.
  • The shape of the histogram reflects the richness of the macroblock texture. There is necessarily a gray level with the largest ordinate (the peak); the number of pixels belonging to it is defined as the macroblock's maximum pixel count, Max Value. If Max Value is clearly higher than the counts of the other gray levels, that level is the dominant gray component of the macroblock, the spatial correlation of pixels within the macroblock is high, and the macroblock is flat. Conversely, if Max Value is comparable to the counts of the other gray levels, the macroblock covers many gray levels, its pixel intensities vary sharply, and its texture is rich, suiting the Intra4×4 prediction mode set.
  • Per Table 1, a fast inter prediction algorithm pre-judges specific modes based on the statistical probability of occurrence of the various inter prediction modes, terminating unnecessary inter mode searches and rate-distortion cost calculations early and reducing coding time.
  • The H.264/AVC video coding standard uses seven variable-block prediction modes in inter coding: each macroblock can be partitioned as Inter16×16, Inter16×8, Inter8×16, or Inter8×8, and the Inter8×8 mode can be further sub-partitioned into Inter8×8, Inter8×4, Inter4×8, and Inter4×4.
  • H.264/AVC inter prediction also supports the Skip mode and the Intra16×16 and Intra4×4 intra prediction modes.
  • For each macroblock, H.264/AVC traverses all selectable prediction modes to optimize rate-distortion performance and obtain the best prediction.
  • Video images can be broadly divided into three categories: flat-texture background, detailed-texture background, and motion regions. Flat background regions usually occupy a large share of video content; for such regions and for smoothly moving regions, the Skip mode (mode0) or the macroblock-level inter modes Inter16×16, Inter16×8, Inter8×16 (mode1~mode3) are mostly used for prediction. Only under complex motion are more coding modes needed, bringing in the sub-partition modes Inter8×8, Inter8×4, Inter4×8, Inter4×4 (mode4~mode7); the intra prediction modes Intra16×16 and Intra4×4 (I16MB, I4MB) are used only at the edges of the video image, with very low probability of occurrence.
  • Therefore, pre-judgment and screening according to the statistical characteristics of the inter prediction modes can eliminate coding modes with small probability of occurrence and raise the coding speed.
  • Per Table 1, a fast motion-estimation search algorithm exploits the correlation of coding-block motion vectors and determines the search depth from the degree of motion of the coding block, achieving an efficient search.
  • For blocks with a low degree of motion, an oversized search radius and search points on the outer layers contribute little to motion-estimation accuracy yet consume much estimation time; conversely, for blocks with a high degree of motion, traversing the inner-layer search points also costs coding time. The intensity of motion of the current macroblock is thus closely linked to the search layer where the best match lies. If the number of search layers can be selected adaptively according to the degree of macroblock motion, the number of search points, and with it the computational complexity of motion estimation, drops sharply. Choosing the features and criteria by which to judge the degree of macroblock motion therefore becomes the key to optimizing the motion-estimation algorithm.
  • The present invention improves the 4-layer non-uniform hexagonal grid search of the original UMHexagonS algorithm into one whose number of search layers adapts to the degree of macroblock motion: first the macroblock motion characteristics are described; then the degree of motion is divided into three grades, low, medium, and high; finally the corresponding search layers are selected according to that degree.
  • For regions with the highest human-eye attention, fast intra prediction is traversed together with the sub-block inter prediction mode set Inter8; the motion-estimation search covers layers 2 through 4, and up to five reference frames are allowed.
  • For the next priority, the fast inter macroblock prediction mode set Inter16 is traversed; the motion-estimation search covers layers 1 through 3, with three reference frames.
  • For the next priority, the motion-estimation search covers layers 1 through 2, with one reference frame.
  • For the lowest priority, the motion-estimation search covers layer 1 only, with one reference frame.
  • A block whose Intra4×4 prediction mode indicates rich spatial detail also has high spatial saliency and belongs to the attention region; the Intra16×16 prediction is skipped for it.
  • The present invention first performs efficient visual-perception feature analysis and detection from low-level coding information, then labels the results by region-of-interest priority, guides the selection of the coding scheme, simplifies the candidate mode set of predictive coding and the motion-estimation search range, reduces the number of reference frames, and lowers the computational complexity of the video coding layer.
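Restated as a lookup table, a sketch of this allocation (Table 1 of the patent is an image; the priority labels and the None entries are illustrative assumptions for details the prose does not state):

```python
# priority: (mode sets traversed, ME search layers, reference frames)
CODING_RESOURCES = {
    "highest": (("fast intra", "Inter8 sub-block set"), (2, 3, 4), 5),
    "high":    (("Inter16 macroblock set",),            (1, 2, 3), 3),
    "medium":  (None,                                   (1, 2),    1),
    "lowest":  (None,                                   (1,),      1),
}
```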
  • The invention also discloses simulation tests and statistical results.
  • Table 2 compares the coding performance of the proposed method against the H.264/AVC (JM17.0) standard algorithm on 10 typical standard test sequences with different motion characteristics.
  • Compared with the H.264/AVC standard algorithm, the method saves about 80% of coding time on average; the increase in output bit rate is kept within 2%; PSNR-Y decreases by 0.188 dB on average, and by only 0.153 dB inside the visual region of interest, which preferentially preserves the coding quality of the visually salient regions and matches the human eye's tolerance for degradation in non-interest regions.
  • The method keeps the average PSNR-Y loss below 0.2 dB, well below the minimum change in image quality the human eye can perceive (0.5 dB), and thus maintains good reconstructed video image quality.
  • The statistics of Fig. 2 show that the method has lower computational complexity than both the H.264/AVC standard algorithm and existing algorithms, with average coding-time savings above 85%.
  • The proposed perception-integrated video coding method maintains good subjective image quality while greatly raising the coding speed; the test results demonstrate the feasibility of fully reusing coding information for low-complexity visual perception analysis, and the consistency of the saliency analysis results with the HVS verifies the rationality of the perception-based hierarchical coding scheme.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed is a layered video coding method integrating visual perception features, comprising the setting of visual region-of-interest priorities and the setting of a video coding resource allocation scheme. The former: given the richness of video content and the selective attention mechanism of human vision, the video content is analyzed for temporal and spatial visual-feature saliency, completing the priority labeling of visual regions of interest. The latter: to improve real-time coding performance while preserving coding quality and compression efficiency, coding resources are optimized first for region-of-interest macroblocks according to their priority, realizing layered coding and effectively easing the contradiction between coding complexity and coding efficiency. Compared with the H.264/AVC international video coding standard, the method raises coding speed by about 80% on average while maintaining high video image quality and a low compression bit rate, significantly improving video coding performance.

Description

A layered video coding method integrating visual perception features
Technical Field
The present invention relates to video coding methods, and in particular to a layered video coding method that integrates visual perception features.
Background Art
With the rapid development of multimedia information processing and communication technology, diversified video services such as IPTV, PDA, stereoscopic film, and free-viewpoint video have been launched one after another, and video coding has shown broad development prospects in information processing and related fields. However, while network bandwidth and storage space are limited, demands on video quality keep rising, and the performance indicators of digital video, such as resolution, quality, and frame rate, keep improving, placing new requirements on the existing video coding standards.
To obtain video coding methods of low complexity, high quality, and high compression rate, after the ITU-T and ISO/IEC jointly released the H.264/AVC video compression standard in 2003, ISO/IEC and ITU-T jointly established the JCT-VC (Joint Collaborative Team on Video Coding) group in January 2010 and issued the proposal for the next-generation video coding technology, HEVC (High Efficiency Video Coding). The proposal states that HEVC still follows the hybrid coding framework of H.264/AVC and focuses on new coding techniques, aiming to resolve the contradiction between compression rate and coding complexity in the existing standards, so as to suit transmission over many network types and carry more information-processing services. Video coding standards and applications offering real-time performance, high compression, and high definition have become one of the research hotspots in the field of signal and information processing.
To date, many scholars have carried out extensive research on fast video coding or on visual perception analysis, but the two are rarely combined within one coding framework to jointly optimize video coding performance.
In visual perception analysis, some researchers compute regions of interest from four visual features, color, luminance, orientation, and skin color, but ignore motion; some fuse visual features such as motion, luminance intensity, faces, and text to build visual attention models for interest extraction; others use motion and texture information to obtain regions of interest; still others propose obtaining regions of interest in the compressed domain or via wavelet transforms. Because existing global motion estimation algorithms are computationally heavy, such region-of-interest extraction algorithms are overly complex. The above video coding techniques based on the human visual system (HVS) concentrate on optimally allocating bit resources, guaranteeing the image quality of regions of interest when bits are scarce, but they neglect the allocation of computing resources; the extra computational complexity introduced by perception analysis has also not received enough attention, and its computational efficiency remains to be improved.
In fast video coding, some researchers achieve fast coding by limiting the number of motion-estimation points at the cost of rate-distortion performance; others achieve it by controlling coding parameters. These methods, however, do not distinguish the visual importance of different regions of a video image; they apply the same fast coding scheme to all content and ignore the differences in how the HVS perceives video scenes.
Summary of the Invention
In view of the above problems, the present invention proposes a layered video coding method integrating visual perception features, comprising two parts: the setting of visual region-of-interest priorities and the setting of a video coding resource allocation scheme.
The setting of region-of-interest priorities is mainly as follows: given the richness of video content and the selective attention mechanism of human vision, video content usually carries both temporal and spatial visual features. The labeling of visually salient regions is expressed by a formula (given as an image in the original), where ROI(x, y) denotes the visual-interest priority of the currently coded macroblock, T(x, y, MV) denotes its temporal visual-feature saliency, S(x, y, Mode) denotes its spatial visual-feature saliency, and (x, y) is the position coordinate of the currently coded macroblock.
The video coding resource allocation scheme is set as follows: to improve real-time coding performance while preserving coding quality and compression efficiency, coding optimization is granted first to region-of-interest macroblocks:
a fast intra prediction algorithm uses the macroblock gray histogram to describe macroblock flatness and adaptively selects the candidate intra prediction mode set according to that flatness;
a fast inter prediction algorithm analyzes the statistical probability of occurrence of the various inter prediction modes and pre-judges specific modes, terminating unnecessary inter mode searches and rate-distortion cost calculations early and reducing coding time;
a fast motion-estimation search algorithm, based on the correlation of coding-block motion vectors, determines the search depth from the degree of motion of the coding block, achieving an efficient search.
In the setting of region-of-interest priorities, temporal saliency labeling is performed first, in two steps: step 1, motion vector noise detection, and step 2, translational motion vector detection, which respectively weaken the influence of motion-vector noise and of camera-induced translational motion vectors on the accuracy of temporal saliency detection; the separation of foreground and background is completed, yielding a temporal saliency labeling that better conforms to human visual characteristics. Then spatial saliency labeling is performed. Finally, the visually salient regions are labeled from the temporal and spatial labeling results.
When labeling temporal saliency, the calculation of step 1, motion vector noise detection, is expressed as formula (1) (given as an image in the original), where (x, y) is the position coordinate of the current coding block, MV(x, y) denotes the motion vector of the current coding block, and MV_avg denotes the average motion vector within the motion reference region Crr, defined as:
MV_avg = (1/Num) Σ MV_i
where MV_i denotes the motion vector of a macroblock contained in the motion reference region Crr and Num denotes the number of accumulations.
The motion reference region Crr is defined as follows, so that its shape, position, and area adapt to changes of the current motion vector MV(x, y). The four macroblocks at the upper-left, upper-right, lower-left, and lower-right of Crr are denoted MB1, MB2, MB3, MB4, and their position coordinates are defined by coordinate formulas (given as images in the original), in which the horizontal and vertical motion amplitudes of the current motion vector, the width ws and height hs of the current coding block, and the rounding operation [·] appear.
If no motion vector exists within the motion reference region Crr, MV(x, y) is considered to be caused by motion noise and should be filtered out: it is set to 0 and the block is labeled T1(x, y, MV) = 3.
If the current coding block has significantly stronger motion than its neighboring macroblocks, it belongs to the foreground dynamic region and is labeled T1(x, y, MV) = 2.
Otherwise, the current coding block has motion similar to its neighbors and its temporal saliency is not evident; translational motion vector detection is then required to decide whether the block belongs to the background region or to the foreground translational region, labeled T2(x, y, MV).
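A sketch of this noise test, assuming NumPy motion vectors; formula (1) is an image in the original, so the comparison against the mean vector of Crr is an assumed reading of the prose:

```python
import numpy as np

def label_temporal_t1(mv: np.ndarray, crr_mvs: list[np.ndarray]):
    """Return (filtered_mv, T1 label or None) for one coding block.

    mv: motion vector (2,) of the current block; crr_mvs: motion vectors
    of the macroblocks inside the adaptive reference region Crr.
    """
    if not any(v.any() for v in crr_mvs):      # no motion at all in Crr
        return np.zeros(2), 3                  # isolated vector is noise
    mv_avg = np.mean(crr_mvs, axis=0)          # average motion inside Crr
    if np.linalg.norm(mv) > np.linalg.norm(mv_avg):   # assumed comparison
        return mv, 2                           # foreground dynamic region
    return mv, None                            # defer to translational test
```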
The calculation of step 2, translational motion vector detection, is expressed as formula (2) (given as an image in the original), where (x, y) is the position coordinate of the current coding block, SAD_avg is a dynamic threshold, and SAD(x, y) is the sum of absolute differences (SAD) between the current coding block and the co-located block of the previous frame, characterizing how much the co-located blocks of two adjacent frames change, defined as:
SAD(x, y) = Σ_{i=1..M} Σ_{j=1..N} | s(i, j) - c(i, j) |
where s(i, j) is the pixel value of the current coding block, c(i, j) is the pixel value of the co-located block of the previous frame, and M and N are the height and width of the current coding block.
The dynamic threshold SAD_avg is the mean SAD of all coding blocks determined to lie in the background region of the previous frame:
SAD_avg = (1/Num) Σ_{(x, y) ∈ Sc} SAD(x, y)
where Sc denotes the background region of the previous frame, the sum accumulates the SAD values of the coding blocks contained in Sc, and Num is the number of accumulated blocks.
Combining the two processing steps (1) and (2), the temporal saliency labeling is described by formula (3) (given as an image in the original), whose parameters are defined as in formulas (1) and (2).
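A sketch of the translational test, assuming formula (2) compares the block's SAD against the dynamic background threshold (the formula itself is an image in the original):

```python
import numpy as np

def label_temporal_t2(cur_block: np.ndarray, prev_block: np.ndarray,
                      sad_bg_mean: float) -> int:
    """T2 label: 1 = foreground translational region, 0 = background."""
    sad = np.abs(cur_block.astype(int) - prev_block.astype(int)).sum()
    return 1 if sad > sad_bg_mean else 0       # assumed threshold test
```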
Next, spatial saliency labeling is performed; its calculation is described by formula (4) (given as an image in the original), where (x, y) is the position coordinate of the current coding block, Mode is the prediction mode of the coding block, mode_P is the prediction mode of the current coding block in P-frame coding, and mode_I is its prediction mode in I-frame coding.
If mode_P selects an intra prediction mode, the spatial saliency is highest and the block belongs to the sensitive region, labeled S(x, y, Mode) = 2.
If mode_P selects the sub-block inter prediction mode set Inter8 (8×8, 8×4, 4×8, 4×4), or mode_I selects the Intra4×4 prediction mode, the block is rich in spatial detail and also has high spatial saliency; it belongs to the attention region, labeled S(x, y, Mode) = 1.
If mode_P selects the macroblock inter prediction mode set Inter16 (Skip, 16×16, 16×8, 8×16), or mode_I selects the Intra16×16 prediction mode, spatial variation is gentle and spatial saliency is low; the block belongs to the non-salient region, labeled S(x, y, Mode) = 0.
Finally, the visually salient regions are labeled from the temporal and spatial labeling results.
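Since this rule reuses the mode decision the encoder has already made, it can be transcribed almost directly; the final fallback value is an assumption for modes the text does not list:

```python
INTER8 = {"8x8", "8x4", "4x8", "4x4"}            # sub-block inter modes
INTER16 = {"Skip", "16x16", "16x8", "8x16"}      # macroblock inter modes

def label_spatial(mode_p: str | None, mode_i: str | None) -> int:
    if mode_p is not None and mode_p.startswith("Intra"):
        return 2                # sensitive region: intra inside a P frame
    if mode_p in INTER8 or mode_i == "Intra4x4":
        return 1                # attention region: rich spatial detail
    if mode_p in INTER16 or mode_i == "Intra16x16":
        return 0                # non-salient region: smooth content
    return 0                    # assumed default for unlisted modes
```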
The specific procedure of the fast intra prediction algorithm:
Step 1: compute the gray histogram of the luminance component Y of the current macroblock and record its maximum pixel count Max Value;
Step 2: set an upper threshold Th_high and a lower threshold Th_low, both integers in [1, 256];
Step 3: if Max Value ≥ Th_high, the macroblock is considered flat: discard the Intra4×4 prediction mode set, select the Intra16×16 prediction mode set, and take the mode with the smallest rate-distortion cost as the optimal intra prediction mode; at the same time update the upper threshold (the update formula is given as an image in the original); otherwise, go to step 4;
Step 4: if Max Value ≤ Th_low, the macroblock is considered rich in detail: discard the Intra16×16 prediction mode set, select the Intra4×4 prediction mode set, and take the mode with the smallest rate-distortion cost as the best intra prediction mode; at the same time update the lower threshold (the update formula is given as an image in the original); otherwise, go to step 5;
Step 5: if Th_low < Max Value < Th_high, the flatness of the macroblock is considered insignificant, and the standard intra prediction algorithm is used.
In the present invention, the upper threshold Th_high and the lower threshold Th_low are set to 150 and 50, respectively.
The specific procedure of the fast inter prediction algorithm (a sketch of the step 3 boundary test follows the procedure):
Step 1: pre-judgment of the Skip mode.
Step 1.1: compute the rate-distortion cost J_skip of the Skip mode (mode0); if it is smaller than the threshold T, stop searching the other modes, select Skip as the best prediction mode, and jump to step 4; otherwise perform step 1.2;
where T = (0.7 - Min_cost/18000) × Min_cost, and Min_cost is the optimal rate-distortion cost of the previous coded macroblock.
Step 1.2: compute the rate-distortion cost J_16×16 of the Inter16×16 mode (mode1); if J_16×16 > J_skip, still select Skip as the best coding mode and jump to step 4; otherwise perform step 2.
Step 2: pre-judgment of the macroblock/sub-block inter prediction modes.
Step 2.1: compute the rate-distortion costs J_16×16 and J_8×8 of the Inter16×16 and Inter8×8 modes; if J_8×8 - J_16×16 > T_0, select the Inter16×16 mode as the best inter coding mode and jump to step 4; otherwise perform step 2.2;
where T_0 = 0.2 × Min_cost is an adaptive empirical threshold derived from experimental data, which keeps the decision fast while minimizing the misjudgment rate; Min_cost is the optimal rate-distortion cost of the previous coded macroblock.
Step 2.2: compute the rate-distortion cost J_4×4 of the Inter4×4 mode; if J_4×4 < min(J_16×16, J_8×8), sub-partition the macroblock and take the sub-block inter prediction modes Inter8×8, Inter8×4, Inter4×8, and Inter4×4 (mode4~mode7) as the inter candidate mode set; otherwise take the macroblock inter prediction modes Inter16×16, Inter16×8, Inter8×16 (mode1~mode3) as the inter candidate mode set and discard sub-partition prediction.
Step 3: pre-judgment of the intra modes.
Step 3.1: compute the average boundary error ABE (Average Boundary Error) and the summation boundary error SBE (Summation Boundary Error) of the current macroblock; ABE reflects the temporal correlation of the macroblock:
ABE = SBE/64
where the SBE formula is given as an image in the original; Y_orig is the pixel value of the current macroblock, Y_rec is the pixel value of the reconstructed macroblock, and (x, y) is the position coordinate of the current macroblock.
Step 3.2: compute the average bit rate AR (Average Rate) of the current macroblock; AR reflects the spatial correlation of the macroblock:
AR = λ·Rate/384
where λ is the Lagrangian multiplier and Rate is the number of bits required to code the macroblock.
Step 3.3: compare the macroblock's average boundary error with its average bit rate: if ABE < C·AR (C = 0.95), the spatial redundancy of the macroblock is smaller than its temporal redundancy, so the traversal of the intra prediction modes is discarded and the flow proceeds to step 4; otherwise the intra traversal is kept and the flow proceeds to step 4.
Step 4: compute and select the optimal inter prediction mode according to the rate-distortion criterion, completing inter prediction coding.
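A sketch of the step 3 pre-judgment; the exact SBE formula is an image in the original, so the boundary sum below (top and left borders of the macroblock) is an assumption:

```python
import numpy as np

def keep_intra_traversal(y_orig: np.ndarray, y_rec: np.ndarray,
                         lam: float, rate_bits: int,
                         C: float = 0.95) -> bool:
    """False means the intra mode traversal can be discarded (step 3.3)."""
    # assumed SBE: absolute errors along the top and left macroblock borders
    sbe = (np.abs(y_orig[0, :].astype(int) - y_rec[0, :].astype(int)).sum()
           + np.abs(y_orig[:, 0].astype(int) - y_rec[:, 0].astype(int)).sum())
    abe = sbe / 64.0                  # average boundary error (temporal cue)
    ar = lam * rate_bits / 384.0      # average rate (spatial cue)
    return not (abe < C * ar)
```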
The specific procedure of the fast motion-estimation search algorithm:
Step 1: describe the macroblock motion characteristics.
Step 1.1: based on the rate-distortion criterion, compute the rate-distortion cost RDcost_motion of motion estimation for the current macroblock:
J_motion(mv, ref | λ_motion) = SAD[s, r(ref, mv)] + λ_motion[R(mv - pred) + R(ref)]
where s is the current macroblock pixel value; mv is the macroblock motion vector and pred the prediction vector; ref is the selected reference frame; r(ref, mv) is the pixel value of the reference macroblock; R is the number of bits consumed by differential coding of the motion vector, comprising the coded bits of the difference between the motion vector and its prediction and the coded bits of the reference frame; λ_motion is the Lagrangian multiplier; SAD is the sum of absolute differences between the current block and the reference block, defined (the formula is an image in the original; its standard form) as:
SAD(s, c(m)) = Σ_{x=1..M} Σ_{y=1..N} | s(x, y) - c(x - m_x, y - m_y) |
where M and N are the width and height of the current macroblock; (x, y) is the macroblock position; s is the true value and c the predicted value; m = (m_x, m_y)^T is the macroblock motion vector, with m_x and m_y its horizontal and vertical components.
Step 1.2: based on the rate-distortion criterion, compute the rate-distortion cost RDcost_mode in mode mode:
J_mode(s, c, mode | λ_mode) = SSD(s, c, mode | QP) + λ_mode × R(s, c, mode | QP)
where mode is the inter coding mode of the current macroblock; s is the original video signal; c is the video signal reconstructed after coding in mode mode; λ_mode is the Lagrangian multiplier; R(s, c, mode | QP) is the total number of bits associated with the mode and quantization parameter, including macroblock header, motion vectors, and all DCT block information; QP is the coding quantization step; SSD(s, c, mode) is the sum of squared differences between the original and reconstructed signals, namely the squared luminance differences over the block plus the corresponding chrominance terms (the formula is an image in the original);
B1 and B2 denote the horizontal and vertical pixel counts of the coding block, taking values 16, 8, or 4; s_Y[x, y] and c_Y[x, y, mode | QP] are the values of the original and reconstructed luminance signals; c_U, c_V and s_U, s_V are the values of the corresponding chrominance signals.
Step 1.3: select the minimum of RDcost_motion and RDcost_mode and record it as RD_mincost (a sketch of the two cost functions follows).
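A sketch of the two Lagrangian cost functions of step 1, with the bit counts supplied by the caller as stand-ins for the encoder's actual rate models:

```python
import numpy as np

def j_motion(cur: np.ndarray, pred_block: np.ndarray,
             mv_bits: int, ref_bits: int, lam_motion: float) -> float:
    """SAD-based motion cost: SAD + lambda * (MV diff bits + ref bits)."""
    sad = np.abs(cur.astype(int) - pred_block.astype(int)).sum()
    return sad + lam_motion * (mv_bits + ref_bits)

def j_mode(orig: np.ndarray, recon: np.ndarray,
           mode_bits: int, lam_mode: float) -> float:
    """SSD-based mode cost: SSD + lambda * total mode bits."""
    ssd = ((orig.astype(int) - recon.astype(int)) ** 2).sum()
    return ssd + lam_mode * mode_bits

# RD_mincost = min(j_motion(...), j_mode(...))   # step 1.3
```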
Step 2: determine the intensity of macroblock motion.
The formula classifying the degree of macroblock motion is given as an image in the original; γ and δ are adjustment factors for judging the degree of motion, defined (also as images in the original) in terms of the following quantities:
Bsize[blocktype] is the current coded macroblock size, with 7 values: 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, 4×4; pred_mincost is determined by the motion-vector prediction method chosen for the start search point of the UMHexagonS algorithm:
(1) if the start search point uses the temporally predicted motion vector, pred_mincost takes the reference-frame MV predictor;
(2) if the start search point does not use the temporally predicted motion vector, two further cases arise:
(2.1) if the current motion-estimation block uses a large partition (the 16×16, 16×8, or 8×16 inter prediction mode), pred_mincost takes the median MV predictor;
(2.2) if it uses a small partition (the 8×8, 8×4, 4×8, or 4×4 inter prediction mode), pred_mincost takes the up-layer MV predictor.
Based on extensive experimental data, the arrays α1[blocktype] and α2[blocktype] are defined as:
α1[blocktype] = [-0.23, -0.23, -0.23, -0.25, -0.27, -0.27, -0.28];
α2[blocktype] = [-2.39, -2.40, -2.40, -2.41, -2.45, -2.45, -2.48];
Step 3: determine the motion-estimation search depth.
Step 3.1: when the degree of macroblock motion is low, only the inner layers 1 and 2 of the "non-uniform 4-level hexagonal grid search" step of the UMHexagonS algorithm are searched;
Step 3.2: when the degree of macroblock motion is medium, layers 1 through 3 of the non-uniform hexagonal grid search are searched in that step;
Step 3.3: only when the degree of macroblock motion is high is the full 4-layer non-uniform hexagonal grid search performed.
The present invention adopts a two-layer structure, a video coding layer and a visual perception analysis layer, to achieve fast coding. On the one hand, the perception analysis layer uses the bitstream information of the coding layer to analyze visual-feature saliency and label region-of-interest priorities, greatly shortening the computation time of perception analysis; on the other hand, the coding layer reuses the saliency analysis results output by the perception analysis layer to optimize the allocation of coding resources, realizing layered video coding and raising the coding speed. The invention preserves video quality and coding efficiency while raising the overall coding speed, striking a balance among coding speed, subjective video quality, and compression bit rate.
Brief Description of the Drawings
Fig. 1 shows the rate-distortion performance comparison results of the present invention;
Fig. 2 shows the computational complexity comparison results of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the figures, tables, and specific embodiments.
A layered video coding method integrating visual perception features comprises two parts: the setting of visual region-of-interest priorities and the setting of a video coding resource allocation scheme.
The setting of region-of-interest priorities is mainly as follows: given the richness of video content and the selective attention mechanism of human vision, video content usually carries both temporal and spatial visual features; to reduce the computational complexity of analyzing them, the temporal and spatial visually salient regions are labeled using information already present in the coded video stream.
First, temporal saliency labeling is performed, in two steps: (1) motion vector noise detection and (2) translational motion vector detection, which respectively weaken the influence of motion-vector noise and of camera-induced translational motion vectors on the accuracy of temporal saliency detection; the separation of foreground and background is completed, yielding a temporal saliency labeling that better conforms to human visual characteristics.
(1) Motion vector noise detection:
The calculation of motion vector noise detection is expressed as formula (1) (given as an image in the original), where (x, y) is the position coordinate of the current coding block, MV(x, y) denotes the motion vector of the current coding block, and MV_avg denotes the average motion vector within the motion reference region Crr, defined as:
MV_avg = (1/Num) Σ MV_i
where MV_i denotes the motion vector of a macroblock contained in the reference region Crr and Num denotes the number of accumulations.
So that the shape, position, and area of the motion reference region Crr adapt to changes of the current motion vector MV(x, y), Crr is defined as follows: the four macroblocks at the upper-left, upper-right, lower-left, and lower-right of Crr are denoted MB1, MB2, MB3, MB4, and their position coordinates are defined by coordinate formulas (given as images in the original), in which the horizontal and vertical motion amplitudes of the current motion vector, the width ws and height hs of the current coding block, and the rounding operation [·] appear.
If no motion vector exists within the motion reference region Crr, MV(x, y) is considered to be caused by motion noise and should be filtered out: it is set to 0 and the block is labeled T1(x, y, MV) = 3.
If the current coding block has significantly stronger motion than its neighboring macroblocks, it belongs to the foreground dynamic region and is labeled T1(x, y, MV) = 2.
Otherwise, the current coding block has motion similar to its neighbors and its temporal saliency is not evident; translational motion vector detection is further required to decide whether the block belongs to the background region or to the foreground translational region produced by camera movement, labeled T2(x, y, MV).
(2) Translational motion vector detection:
The calculation of translational motion vector detection is expressed as formula (2) (given as an image in the original), where (x, y) is the position coordinate of the current coding block, SAD_avg is a dynamic threshold, and SAD(x, y) is the sum of absolute differences (SAD) between the current coding block and the co-located block of the previous frame, characterizing how much the co-located blocks of two adjacent frames change, defined as:
SAD(x, y) = Σ_{i=1..M} Σ_{j=1..N} | s(i, j) - c(i, j) |
where s(i, j) is the pixel value of the current coding block, c(i, j) is the pixel value of the co-located block of the previous frame, and M and N are the height and width of the current coding block.
The dynamic threshold SAD_avg is the mean SAD of all coding blocks determined to lie in the background region of the previous frame:
SAD_avg = (1/Num) Σ_{(x, y) ∈ Sc} SAD(x, y)
where Sc denotes the background region of the previous frame, the sum accumulates the SAD values of the coding blocks contained in Sc, and Num is the number of accumulated blocks.
Combining the two processing steps (1) and (2), the temporal saliency labeling is described by formula (3) (given as an image in the original), whose parameters are defined as in formulas (1) and (2).
Next, spatial visual-saliency labeling is performed by formula (4):

S(x,y,Mode) = 2, if modeP is an intra prediction mode
S(x,y,Mode) = 1, if modeP ∈ Inter8 (8×8, 8×4, 4×8, 4×4) or modeI = Intra4×4
S(x,y,Mode) = 0, if modeP ∈ Inter16 (Skip, 16×16, 16×8, 8×16) or modeI = Intra16×16    (4)

In formula (4), (x,y) are the position coordinates of the current coded block; Mode is the prediction mode of the coded block; modeP is the prediction mode of the current block in P-frame coding; modeI is the prediction mode of the current block in I-frame coding.
If modeP selects an intra prediction mode, the spatial visual saliency is highest and the block belongs to the sensitive region, labeled S(x,y,Mode)=2;
If modeP selects the sub-block inter prediction mode set Inter8 (8×8, 8×4, 4×8, 4×4) or modeI selects the Intra4×4 prediction mode, the spatial detail is rich and the spatial visual saliency is also high; the block belongs to the region of interest, labeled S(x,y,Mode)=1;
If modeP selects the macroblock inter prediction mode set Inter16 (Skip, 16×16, 16×8, 8×16) or modeI selects the Intra16×16 prediction mode, the spatial variation is smooth and the spatial visual saliency is low; the block belongs to the non-salient region, labeled S(x,y,Mode)=0.
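The mode-to-saliency mapping of formula (4) is simple enough to restate as a lookup; the sketch below uses illustrative mode-name strings rather than identifiers from any particular encoder:

    INTER8 = {"8x8", "8x4", "4x8", "4x4"}        # sub-block inter mode set
    INTER16 = {"Skip", "16x16", "16x8", "8x16"}  # macroblock inter mode set

    def label_spatial(mode, frame_type):
        """Spatial saliency S(x, y, Mode) read off the prediction mode;
        frame_type is 'P' or 'I'."""
        if frame_type == "P" and mode.startswith("Intra"):
            return 2                  # sensitive region
        if (frame_type == "P" and mode in INTER8) or \
           (frame_type == "I" and mode == "Intra4x4"):
            return 1                  # region of interest (rich detail)
        return 0                      # non-salient region (smooth)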
Finally, based on the temporal and spatial saliency labeling results, the visual-saliency regions are labeled by formula (5):

ROI(x,y) = 3, if S(x,y,Mode)=2, or T(x,y,MV)∈{1,2} and S(x,y,Mode)=1
ROI(x,y) = 2, if T(x,y,MV)∈{1,2} and S(x,y,Mode)=0
ROI(x,y) = 1, if T(x,y,MV)=0 and S(x,y,Mode)=1
ROI(x,y) = 0, if T(x,y,MV)=0 and S(x,y,Mode)=0    (5)

In formula (5), ROI(x,y) is the visual-interest priority of the current coded macroblock; T(x,y,MV) is its temporal visual saliency; S(x,y,Mode) is its spatial visual saliency; (x,y) are its position coordinates;
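A minimal sketch of the fusion rule of formula (5), written out from the P-frame and I-frame allocation rules given later in this description:

    def roi_priority(t, s):
        """Fuse temporal saliency T and spatial saliency S into the
        visual region-of-interest priority ROI(x, y)."""
        if s == 2 or (t in (1, 2) and s == 1):
            return 3   # sensitive, or temporally salient and texture-rich
        if t in (1, 2) and s == 0:
            return 2   # temporally salient but texture-flat
        if t == 0 and s == 1:
            return 1   # static background with rich spatial detail
        return 0       # flat static background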
The setting of the video-coding resource allocation scheme is embodied as follows: to improve the real-time performance of video coding while preserving coding quality and compression efficiency, the coding of the macroblocks in the regions of interest is optimized first; the hierarchical coding scheme drawn up is given in Table 1.

Table 1

ROI(x,y) | candidate prediction mode set                     | ME search levels | reference frames
3        | fast intra prediction + sub-block inter set Inter8 | levels 2-4       | 5
2        | macroblock inter set Inter16                       | levels 1-3       | 3
1        | sub-block inter set Inter8                         | levels 1-2       | 1
0        | macroblock inter set Inter16                       | level 1          | 1

Table 1 adopts a fast intra prediction algorithm, which uses the macroblock gray histogram to describe the flatness of the macroblock and adaptively selects the likely intra prediction mode set according to that flatness.
The basic principle is as follows:
The gray histogram of a macroblock describes the gray-level information it contains. Mathematically, the gray histogram counts the number (or probability) of occurrences of each gray level in the macroblock; graphically, it is a two-dimensional plot whose horizontal axis is the gray level contained in the macroblock, ranging over [0,255] from pure black to pure white, and whose vertical axis is the number of pixels of the macroblock occurring at each gray level.
The shape of the gray histogram reflects how rich the macroblock texture is. On the vertical axis there is necessarily a gray level with the largest count (the peak); the total number of pixels belonging to that level is defined as the maximum pixel count of the macroblock, denoted Max Value. If Max Value is clearly higher than the counts of the other gray levels of the histogram, that level is the dominant gray component of the macroblock and the spatial correlation of its pixels is high, i.e. the macroblock is flat and suits the Intra16×16 prediction mode set; conversely, if Max Value is comparable to the counts of the other gray levels, the macroblock covers many gray levels and its pixel values vary sharply, i.e. the texture is rich and suits the Intra4×4 prediction mode set.
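As a sketch of the statistic involved (NumPy-based, with names of our choosing):

    import numpy as np

    def max_value(mb_luma):
        """Peak of the 256-bin gray histogram of a 16x16 luma macroblock:
        the 'Max Value' used to measure macroblock flatness."""
        hist, _ = np.histogram(np.asarray(mb_luma).ravel(),
                               bins=256, range=(0, 256))
        return int(hist.max())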
The specific procedure is:
Step 1: compute the gray histogram of the luma component Y of the current coded macroblock and record its maximum pixel count Max Value;
Step 2: set an upper threshold Thhigh and a lower threshold Thlow, both integers within [1,256];
Step 3: if Max Value ≥ Thhigh, the macroblock is deemed flat; the Intra4×4 prediction mode set is discarded, the Intra16×16 prediction mode set is selected, the mode with minimum rate-distortion cost is taken as the optimal intra prediction mode, and the upper threshold Thhigh is updated at the same time; otherwise go to step 4;
Step 4: if Max Value ≤ Thlow, the macroblock is deemed detail-rich; the Intra16×16 prediction mode set is discarded, the Intra4×4 prediction mode set is selected, the mode with minimum rate-distortion cost is taken as the best intra prediction mode, and the lower threshold Thlow is updated at the same time; otherwise go to step 5;
Step 5: if Thlow < Max Value < Thhigh, the flatness of the macroblock is not pronounced and the standard intra prediction algorithm is used.
In the invention the upper threshold Thhigh and the lower threshold Thlow are set to 150 and 50, respectively.
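The five steps can be summarized in one function, reusing the max_value helper sketched above. The averaging form of the threshold update is an assumption of the sketch; the original gives the update rule only as formula images:

    def fast_intra_mode_set(mb_luma, th_high=150, th_low=50):
        """Adaptive intra candidate-set selection (steps 1-5 above).
        Returns the candidate set and the updated thresholds."""
        mv = max_value(mb_luma)                # step 1
        if mv >= th_high:                      # step 3: flat macroblock
            return "Intra16x16", (th_high + mv) // 2, th_low  # assumed update
        if mv <= th_low:                       # step 4: detail-rich
            return "Intra4x4", th_high, (th_low + mv) // 2    # assumed update
        return "both", th_high, th_low         # step 5: standard intra search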
Table 1 adopts a fast inter prediction algorithm: by analyzing the statistical probability of occurrence of each inter prediction mode, particular modes are pre-judged so that unnecessary inter mode searches and rate-distortion cost computations are terminated early, reducing coding time.
The basic principle is as follows:
To improve coding precision, the H.264/AVC video coding standard uses 7 variable-block inter prediction modes: each coded macroblock may be partitioned as Inter16×16, Inter16×8, Inter8×16 or Inter8×8, and the Inter8×8 mode may be further sub-partitioned into Inter8×8, Inter8×4, Inter4×8 and Inter4×4. In addition, H.264/AVC inter prediction supports the Skip mode and the two intra prediction modes Intra16×16 and Intra4×4. H.264/AVC traverses every admissible prediction mode of each coded macroblock to optimize the rate-distortion performance and obtain the best prediction. This flexible set of selectable inter prediction modes is an important reason why H.264/AVC achieves higher coding efficiency than other video coding standards, but the larger number of block-partition combinations also makes the inter mode decision process extremely complex and sharply increases the computational load of coding.
Studies show that video images consist essentially of three region types: flat-texture background regions, detailed-texture background regions and moving regions. Flat-texture background usually occupies a large share of the video content; such flat and smoothly moving regions are mostly predicted with the Skip mode (mode0) or the macroblock-level inter prediction modes Inter16×16, Inter16×8, Inter8×16 (mode1-mode3). Only complex motion requires more coding modes, namely the inter sub-partition prediction modes Inter8×8, Inter8×4, Inter4×8 and Inter4×4 (mode4-mode7); and only the edge parts of the video image use the intra prediction modes Intra16×16 and Intra4×4 (I16MB, I4MB), whose probability of occurrence is very low. Modes can therefore be pre-judged and screened in groups according to the statistical properties of the inter prediction modes, excluding the low-probability coding modes and raising coding speed.
The specific procedure is:
Step 1: pre-judgment of the Skip mode
Step 1.1: compute the rate-distortion cost Jskip of the Skip mode (mode0); if it is below the threshold T, stop searching the other modes, select Skip as the best prediction mode and jump to step 4; otherwise execute step 1.2;
where T=(0.7−Min_cost/18000)×Min_cost, and Min_cost is the optimal rate-distortion cost of the previous coded macroblock.
Step 1.2: compute the rate-distortion cost J16×16 of the Inter16×16 mode (mode1); if J16×16>Jskip, still select Skip as the best coding mode and jump to step 4; otherwise execute step 2.
Step 2: pre-judgment of the macroblock/sub-block inter prediction modes
Step 2.1: compute the rate-distortion costs J16×16 and J8×8 of the Inter16×16 and Inter8×8 modes; if J8×8−J16×16>T0, select Inter16×16 as the best inter coding mode and jump to step 4; otherwise execute step 2.2;
where T0=0.2×Min_cost is an adaptive empirical threshold derived from experimental data, which keeps the mis-judgment rate as low as possible while allowing fast mode decisions; Min_cost is the optimal rate-distortion cost of the previous coded macroblock.
Step 2.2: compute the rate-distortion cost J4×4 of the Inter4×4 mode; if J4×4<min(J16×16,J8×8), sub-partition the macroblock and take the sub-block inter prediction modes Inter8×8, Inter8×4, Inter4×8 and Inter4×4 (mode4-mode7) as the inter candidate mode set; otherwise take the macroblock inter prediction modes Inter16×16, Inter16×8, Inter8×16 (mode1-mode3) as the inter candidate mode set and discard sub-partition prediction.
Step 3: pre-judgment of the intra modes
Step 3.1: compute the average boundary error ABE (Average Boundary Error) and the summation boundary error SBE (Summation Boundary Error) of the current coded macroblock; the average boundary error ABE reflects the temporal correlation of the macroblock:
ABE=SBE/64
where SBE sums the absolute differences between the boundary pixels of the current coded macroblock and the adjacent reconstructed boundary pixels; Yorig denotes the pixel values of the current coded macroblock, Yrec the pixel values of the reconstructed macroblock, and (x,y) the position coordinates of the current coded macroblock.
Step 3.2: compute the average bit rate AR (Average Rate) of the current coded macroblock; the average bit rate AR reflects the spatial correlation of the macroblock:
AR=λ·Rate/384
where λ is the Lagrange multiplier and Rate is the number of bits needed to code the macroblock.
Step 3.3: compare the average boundary error of the macroblock with its average bit rate: if ABE<C·AR (C=0.95), the spatial redundancy of the macroblock is smaller than its temporal redundancy; the traversal of the intra prediction modes is discarded and step 4 is entered; otherwise the intra prediction traversal is kept and step 4 is entered.
Step 4: compute and select the optimal inter prediction mode by the rate-distortion criterion, completing inter prediction coding.
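The cascade of steps 1-4 can be sketched as follows; here `j` is assumed to map mode names to already computed rate-distortion costs, whereas a real encoder would evaluate them lazily, which is where the savings arise:

    def fast_inter_decision(j, min_cost_prev, abe, ar, c=0.95):
        """Inter-mode pre-decision of steps 1-3; returns the surviving
        candidate modes handed to the final RDO of step 4."""
        T = (0.7 - min_cost_prev / 18000.0) * min_cost_prev
        if j["Skip"] < T:
            return ["Skip"]                       # step 1.1: early Skip exit
        if j["16x16"] > j["Skip"]:
            return ["Skip"]                       # step 1.2
        if j["8x8"] - j["16x16"] > 0.2 * min_cost_prev:
            return ["16x16"]                      # step 2.1
        if j["4x4"] < min(j["16x16"], j["8x8"]):  # step 2.2: sub-partition
            candidates = ["8x8", "8x4", "4x8", "4x4"]
        else:
            candidates = ["16x16", "16x8", "8x16"]
        if abe >= c * ar:                         # step 3.3: keep intra modes
            candidates += ["Intra16x16", "Intra4x4"]
        return candidates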
Table 1 adopts a fast motion-estimation search algorithm: based on the motion-vector correlation of coded blocks, the search levels are decided from the degree of block motion, yielding an efficient search.
The basic principle is as follows:
The UMHexagonS algorithm adopted in the H.264/AVC standard is among the best-performing motion estimation algorithms available. Extensive experimental statistics show, however, that the best match points are non-uniformly distributed over the search steps of the UMHexagonS algorithm, yet its "non-uniform 4-level hexagon grid search" step does not analyze the relation between block motion characteristics and search range: whatever the degree of motion of the current coded macroblock, all 4 levels of the non-uniform hexagon search (4 levels × 16 search points per level = 64 search points) must be completed before the next search step, a considerable amount of computation. For the smoothly moving macroblocks that occupy a large share of video sequences, the overly large search radius and the search points on the outer levels contribute little to motion-estimation accuracy but consume much motion-estimation time; conversely, for the few coded blocks with intense motion, the traversal of the search points on the inner levels also wastes coding time. The degree of motion of the current coded macroblock is thus necessarily tied to the motion-estimation search level where its best match point lies. Selecting the number of search levels adaptively according to the macroblock motion degree will clearly save a large number of search points and reduce the computational complexity of motion estimation; choosing the features and criteria for grading the macroblock motion degree becomes the key to optimizing the motion estimation algorithm.
The invention therefore improves the 4-level non-uniform hexagon grid search of the original UMHexagonS algorithm into a non-uniform hexagon grid search whose number of levels adapts to the macroblock motion degree. First the macroblock motion features are described; then the motion degree is divided into three grades, low, medium and high; finally the corresponding search levels are selected according to the motion degree.
The specific procedure is:
Step 1: describe the macroblock motion features
Step 1.1: by the rate-distortion criterion, compute the motion-estimation rate-distortion cost RD costmotion of the current coded macroblock:
Jmotion(mv,ref|λmotion)=SAD[s,r(ref,mv)]+λmotion[R(mv−pred)+R(ref)]
where s is the current macroblock pixel value; mv is the macroblock motion vector and pred the predicted vector; ref is the selected reference frame; r(ref,mv) is the pixel value of the reference macroblock; R is the number of bits consumed by differential coding of the motion vector, comprising the bits for the difference between the motion vector and its prediction and the bits for the reference frame; λmotion is the Lagrange multiplier; SAD is the sum of absolute differences between the current block and the reference block, defined as:

SAD(s,c(m)) = Σ(x=1..M) Σ(y=1..N) |s[x,y] − c[x−mx, y−my]|

where M and N are the width and height of the current coded macroblock; x,y is the macroblock position; s is the actual value and c the predicted value; m=(mx,my)^T is the macroblock motion vector, mx and my being its horizontal and vertical components.
Step 1.2: by the rate-distortion criterion, compute the rate-distortion cost RD costmode in mode:
Jmode(s,c,mode|λmode)=SSD(s,c,mode|QP)+λmode×R(s,c,mode|QP)
where mode is the inter coding mode of the current macroblock; s is the original video signal; c is the reconstructed video signal after coding in mode; λmode is the Lagrange multiplier; R(s,c,mode|QP) is the total number of bits associated with the mode and the quantization parameter, including the macroblock header, the motion vectors and all DCT block information; QP is the coding quantization step; SSD(s,c,mode) is the sum of squared differences between the original and the reconstructed signal, namely:

SSD(s,c,mode|QP) = Σ(x=1..B1, y=1..B2) (sY[x,y] − cY[x,y,mode|QP])² + Σ(x=1..B1/2, y=1..B2/2) (sU[x,y] − cU[x,y,mode|QP])² + Σ(x=1..B1/2, y=1..B2/2) (sV[x,y] − cV[x,y,mode|QP])²

where B1 and B2 are the horizontal and vertical pixel counts of the coded block, taking the values 16, 8 or 4; sY[x,y] and cY[x,y,mode|QP] are the luma values of the original and reconstructed video; sU, sV and cU, cV are the corresponding chroma values.
Step 1.3: take the minimum of RD costmotion and RD costmode as the minimum rate-distortion cost, denoted RD_mincost.
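Step 1 amounts to two cost evaluations; a compact NumPy sketch follows (the bit counts are taken as given, since entropy coding is outside the scope of this illustration):

    import numpy as np

    def j_motion(cur, ref_block, bits_mvd, bits_ref, lambda_motion):
        """RD cost of step 1.1: SAD plus weighted motion-vector bits."""
        sad = np.abs(np.asarray(cur, dtype=float)
                     - np.asarray(ref_block, dtype=float)).sum()
        return sad + lambda_motion * (bits_mvd + bits_ref)

    def j_mode(orig_planes, recon_planes, bits_mode, lambda_mode):
        """RD cost of step 1.2: SSD over the Y, U and V planes plus
        weighted mode bits."""
        ssd = sum(((np.asarray(o, dtype=float)
                    - np.asarray(r, dtype=float)) ** 2).sum()
                  for o, r in zip(orig_planes, recon_planes))
        return ssd + lambda_mode * bits_mode

    # step 1.3:  RD_mincost = min(j_motion(...), j_mode(...))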
Step 2: grade the degree of macroblock motion
The degree of macroblock motion is graded by:

low motion:    RD_mincost < (1+γ) × pred_mincost
medium motion: (1+γ) × pred_mincost ≤ RD_mincost < (1+δ) × pred_mincost
high motion:   RD_mincost ≥ (1+δ) × pred_mincost

where γ and δ are adjustment factors that grade the degree of macroblock motion, defined respectively as:

γ = Bsize[blocktype]/pred_mincost² − α1[blocktype]
δ = Bsize[blocktype]/pred_mincost² − α2[blocktype]

where Bsize[blocktype] is the size of the current coded macroblock, with 7 possible values: 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, 4×4; pred_mincost is determined by the motion-vector prediction used for the start search point of the UMHexagonS algorithm:
(1) if the start search point uses a temporally predicted motion vector, pred_mincost takes the reference frame MV predictor;
(2) if the start search point does not use a temporally predicted motion vector, two further cases are distinguished:
(2.1) if the current motion-estimation macroblock uses one of the large inter prediction sizes 16×16, 16×8, 8×16, pred_mincost takes the median MV predictor;
(2.2) if the current motion-estimation macroblock uses one of the small inter prediction sizes 8×8, 8×4, 4×8, 4×4, pred_mincost takes the uplayer MV predictor.
Based on extensive experimental test data, the arrays α1[blocktype] and α2[blocktype] are defined respectively as:
α1[blocktype]=[-0.23,-0.23,-0.23,-0.25,-0.27,-0.27,-0.28];
α2[blocktype]=[-2.39,-2.40,-2.40,-2.41,-2.45,-2.45,-2.48].
Step 3: determine the motion-estimation search levels of the macroblock
Step 3.1: when the degree of macroblock motion is low, only the inner levels 1 and 2 of the "non-uniform 4-level hexagon grid search" step of the UMHexagonS algorithm are searched;
Step 3.2: when the degree of macroblock motion is medium, levels 1 to 3 of the "non-uniform 4-level hexagon grid search" step of the UMHexagonS algorithm are searched;
Step 3.3: only when the degree of macroblock motion is high are all 4 levels of the "non-uniform 4-level hexagon grid search" step of the UMHexagonS algorithm performed.
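Steps 2 and 3 combine into a single layer-selection function. The (1+γ)/(1+δ) comparison form below is our reconstruction of the motion-degree formula, modeled on the UMHexagonS early-termination scheme the method builds on, since the original gives the formula only as images; the Bsize values (partition areas in pixels) are likewise assumptions:

    BSIZE = [256, 128, 128, 64, 32, 32, 16]   # assumed areas, blocktype 1..7
    ALPHA1 = [-0.23, -0.23, -0.23, -0.25, -0.27, -0.27, -0.28]
    ALPHA2 = [-2.39, -2.40, -2.40, -2.41, -2.45, -2.45, -2.48]

    def search_layers(rd_mincost, pred_mincost, blocktype):
        """Choose how many of the 4 hexagon-grid levels to search."""
        i = blocktype - 1
        pred_mincost = max(pred_mincost, 1e-6)   # guard the division
        gamma = BSIZE[i] / pred_mincost ** 2 - ALPHA1[i]
        delta = BSIZE[i] / pred_mincost ** 2 - ALPHA2[i]
        if rd_mincost < (1 + gamma) * pred_mincost:
            return (1, 2)           # low motion: inner levels only
        if rd_mincost < (1 + delta) * pred_mincost:
            return (1, 2, 3)        # medium motion
        return (1, 2, 3, 4)         # high motion: full 4-level search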
In P-frame coding, by formula (5):
When ROI(x,y)=3: in case ①, the coded macroblock belongs to the dynamic foreground region (T(x,y,MV)=2) or to the foreground translational region (T(x,y,MV)=1), i.e. it has temporal visual features, and S(x,y,Mode)=1, meaning that it selected the sub-block inter prediction mode set Inter8 and also has spatial visual features; it lies in a temporally salient, texture-rich region. In case ②, S(x,y,Mode)=2, meaning that the P-frame macroblock used an intra prediction mode and lies in the spatially sensitive region. Both cases receive the highest human visual attention: the fast intra prediction and the fast sub-block inter prediction mode set Inter8 are traversed, motion estimation searches levels 2-4, and up to 5 reference frames are allowed.
When ROI(x,y)=2: the coded macroblock has temporal visual features (T(x,y,MV)=2 or T(x,y,MV)=1) and S(x,y,Mode)=0, meaning that it selected the macroblock inter prediction mode set Inter16 and its spatial saliency is weak; it lies in a temporally salient but texture-flat region with the second level of attention. Intra prediction is skipped, only the fast macroblock inter prediction mode set Inter16 is traversed, motion estimation searches levels 1-3, and 3 reference frames are used.
When ROI(x,y)=1: the coded macroblock has no temporal visual features (T(x,y,MV)=0) and belongs to the non-dynamic background, while S(x,y,Mode)=1, meaning that it selected the sub-block inter prediction mode set Inter8 and has spatial visual features; it lies in the spatial region of interest with the next level of attention. Intra prediction is skipped, only the fast sub-block inter prediction mode set Inter8 is traversed, motion estimation searches levels 1-2, and 1 reference frame is used.
When ROI(x,y)=0: the current coded macroblock has neither temporal nor spatial visual features and belongs to the flat static background region with the lowest attention; only the fast macroblock inter prediction mode set Inter16 is traversed, motion estimation searches level 1 only, and 1 reference frame is used.
In I-frame coding, by formula (5):
When ROI(x,y)=1: the coded macroblock has no temporal visual features (T(x,y,MV)=0) and S(x,y,Mode)=1, meaning that it selected the Intra4×4 prediction mode; its spatial detail is rich and its spatial visual saliency high, so it belongs to the region of interest and the Intra16×16 prediction is skipped.
When ROI(x,y)=0: the coded macroblock has neither temporal nor spatial visual features and belongs to the flat static background region with the lowest attention; only the Intra16×16 prediction is executed.
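Read together, the P-frame and I-frame rules are just a lookup table; a sketch with names of our choosing:

    # ROI priority -> (candidate mode sets, ME search levels, reference frames)
    P_FRAME_PLAN = {
        3: (("FastIntra", "Inter8"), (2, 3, 4), 5),   # highest attention
        2: (("Inter16",),            (1, 2, 3), 3),
        1: (("Inter8",),             (1, 2),    1),
        0: (("Inter16",),            (1,),      1),   # flat static background
    }
    I_FRAME_PLAN = {
        1: ("Intra4x4",),      # region of interest: skip Intra16x16
        0: ("Intra16x16",),    # flat static background
    }

    def coding_plan(roi, frame_type="P"):
        """Coding resources granted to one macroblock."""
        return P_FRAME_PLAN[roi] if frame_type == "P" else I_FRAME_PLAN[roi]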
In summary, the invention first performs efficient visual-perception feature analysis and detection from low-level coding information, and then uses the resulting region-of-interest priority labels to guide the choice of coding scheme, simplifying the candidate mode sets of predictive coding and the motion-estimation search range and reducing the number of reference frames, thereby lowering the computational complexity of the video coding layer.
The invention also discloses simulation tests and statistical results.
Table 2 compares the performance of the method of the invention with the H.264/AVC (JM17.0) reference algorithm.
Table 2
Table 2 collects, for 10 typical standard test sequences with different motion characteristics, the coding performance of the proposed method relative to the H.264/AVC (JM17.0) reference algorithm.
At quantization steps QP of 28, 32 and 36, the proposed method saves about 80% of the coding time on average compared with the H.264/AVC reference algorithm; the average increase in output bit rate is kept within 2%; the average PSNR-Y loss is 0.188 dB, and only 0.153 dB within the visual regions of interest, so coding quality is preserved first in the visually salient regions, in line with the insensitivity of the human eye to degradation in regions of no interest.
Regarding output rate control, the two rate-distortion (R-D) curves in Figure 1 are very close, showing that the proposed method inherits the low-bit-rate, high-quality coding advantage of the H.264/AVC reference algorithm.
Regarding reconstructed video quality, the proposed method keeps the average PSNR-Y loss within 0.2 dB, far below the minimum sensitivity of the human eye to perceived quality change (0.5 dB), so good reconstructed image quality is maintained.
Regarding coding speed, the statistics of Figure 2 show that the proposed method has lower computational complexity than both the H.264/AVC reference algorithm and existing algorithms; for slow-motion, flat-texture sequences such as Akiyo and News, the average coding-time saving over H.264/AVC (JM17.0) exceeds 85%.
The visual-perception-combining video coding method proposed by the invention thus maintains good subjective video quality while greatly increasing coding speed. The experimental results demonstrate the feasibility of low-complexity visual perception analysis that fully reuses coding information and the consistency of the saliency analysis results with the HVS, validating the soundness of building a hierarchical coding scheme on visual perceptual features.

Claims (6)

  1. A hierarchical video coding method combining visual perception features, characterized in that it comprises two parts, the setting of visual region-of-interest priorities and the setting of a video-coding resource allocation scheme;
    the setting of visual region-of-interest priorities is essentially as follows: given the richness of video image content and the selective attention mechanism of human vision, video content usually carries temporal and spatial visual features at the same time, and the visual-saliency regions are labeled by the formula:

    ROI(x,y) = 3, if S(x,y,Mode)=2, or T(x,y,MV)∈{1,2} and S(x,y,Mode)=1
    ROI(x,y) = 2, if T(x,y,MV)∈{1,2} and S(x,y,Mode)=0
    ROI(x,y) = 1, if T(x,y,MV)=0 and S(x,y,Mode)=1
    ROI(x,y) = 0, if T(x,y,MV)=0 and S(x,y,Mode)=0

    where ROI(x,y) is the visual-interest priority of the current coded macroblock; T(x,y,MV) is its temporal visual saliency; S(x,y,Mode) is its spatial visual saliency; (x,y) are its position coordinates;
    the setting of the video-coding resource allocation scheme is embodied as follows: to improve the real-time performance of video coding while preserving coding quality and compression efficiency, the coding of the macroblocks in the regions of interest is optimized first,
    a fast intra prediction algorithm is adopted, in which the macroblock gray histogram describes the flatness of the macroblock and the likely intra prediction mode set is selected adaptively according to that flatness;
    a fast inter prediction algorithm is adopted, in which the statistical probability of occurrence of each inter prediction mode is analyzed and particular modes are pre-judged so that unnecessary inter mode searches and rate-distortion cost computations are terminated early, reducing coding time;
    a fast motion-estimation search algorithm is adopted, in which, based on the motion-vector correlation of coded blocks, the search levels are decided from the degree of block motion, yielding an efficient search.
  2. The hierarchical video coding method combining visual perception features according to claim 1, characterized in that, in the setting of the visual region-of-interest priorities, temporal visual-saliency labeling is performed first in two steps, step 1 motion-vector noise detection and step 2 translational motion-vector detection, which respectively weaken the influence of motion-vector noise and of the translational motion vectors produced by camera motion on the accuracy of temporal saliency detection, separate the foreground from the background, and yield an accurate temporal saliency labeling consistent with the characteristics of human vision; spatial visual-saliency labeling is then performed; finally, the visual-saliency regions are labeled from the temporal and spatial saliency labeling results.
  3. The hierarchical video coding method combining visual perception features according to claim 2, characterized in that, in the temporal visual-saliency labeling, the motion-vector noise detection of step 1 is expressed as formula (1):

    T1(x,y,MV) = 3, if MV_Crr = 0 while MV(x,y) ≠ 0 (motion noise)
    T1(x,y,MV) = 2, if MV(x,y) is significantly larger than MV_Crr (dynamic foreground)
    T1(x,y,MV) = T2(x,y,MV), otherwise (translational detection required)    (1)

    in formula (1), (x,y) are the position coordinates of the current coded block; MV(x,y) denotes the motion vector MV of the current coded block; MV_Crr denotes the mean motion vector within the motion reference region Crr, defined as MV_Crr = (1/Num) × Σ(i∈Crr) MVi, where MVi are the motion vectors of the macroblocks contained in the motion reference region Crr and Num is the number of accumulated vectors;
    the motion reference region Crr is defined as follows, so that its shape, position and area adapt to changes of the current motion vector MV(x,y): the four macroblocks at the top-left, top-right, bottom-left and bottom-right of Crr are denoted MB1, MB2, MB3 and MB4, and their position coordinates are placed around the current block (x,y) at horizontal offsets of ±[|MVx|/ws] and vertical offsets of ±[|MVy|/hs], where MVx and MVy are the motion amplitudes of the current motion vector in the horizontal and vertical directions, ws and hs are the width and height of the current coded block, and [·] denotes rounding;
    if MV_Crr = 0 while MV(x,y) ≠ 0, no motion vector exists inside the motion reference region Crr; MV(x,y) is considered to be caused by motion noise and must be filtered out, MV(x,y) being set to 0 and the block labeled T1(x,y,MV)=3;
    if MV(x,y) is significantly larger than MV_Crr, the current coded block has markedly stronger motion than its neighboring macroblocks, belongs to the dynamic foreground region, and is labeled T1(x,y,MV)=2;
    otherwise the current coded block has motion characteristics similar to those of its neighboring macroblocks and its temporal saliency is not evident; translational motion-vector detection is further needed to decide whether the block belongs to the background region or to a foreground translational region, labeled T2(x,y,MV);
    the translational motion-vector detection of step 2 can be expressed as formula (2):

    T2(x,y,MV) = 1, if SAD(x,y) ≥ TH_SAD (foreground translational region)
    T2(x,y,MV) = 0, otherwise (background region)    (2)

    in formula (2), (x,y) are the position coordinates of the current coded block; TH_SAD is a dynamic threshold; SAD(x,y) is the sum of absolute differences (SAD) between the current coded block and the co-located block of the previous frame, characterizing how much the co-located coded blocks of two adjacent frames change, defined as:

    SAD(x,y) = Σ(i=1..M) Σ(j=1..N) |s(i,j) − c(i,j)|

    where s(i,j) is the pixel value of the current coded block; c(i,j) is the pixel value of the co-located block of the previous frame; M, N are the height and width of the current coded block;
    the dynamic threshold TH_SAD is the mean SAD of all coded blocks determined to lie in the background region of the previous frame, defined as:

    TH_SAD = (1/Num) × Σ((i,j)∈Sc) SAD(i,j)

    where Sc denotes the background region of the previous frame; the sum accumulates the SAD values of the coded blocks contained in Sc; Num is the number of accumulated blocks;
    combining the two processing steps 1 and 2 above, the temporal visual-saliency labeling can be described by formula (3):

    T(x,y,MV) = 2, if T1(x,y,MV)=2; 1, if T2(x,y,MV)=1; 0, otherwise    (3)

    in formula (3), all parameters are defined as in formulas (1) and (2);
    then, spatial visual-saliency labeling is performed, described by formula (4):

    S(x,y,Mode) = 2, if modeP is an intra prediction mode
    S(x,y,Mode) = 1, if modeP ∈ Inter8 (8×8, 8×4, 4×8, 4×4) or modeI = Intra4×4
    S(x,y,Mode) = 0, if modeP ∈ Inter16 (Skip, 16×16, 16×8, 8×16) or modeI = Intra16×16    (4)

    in formula (4), (x,y) are the position coordinates of the current coded block; Mode is the prediction mode of the coded block; modeP is the prediction mode of the current block in P-frame coding; modeI is the prediction mode of the current block in I-frame coding;
    if modeP selects an intra prediction mode, the spatial visual saliency is highest and the block belongs to the sensitive region, labeled S(x,y,Mode)=2;
    if modeP selects the sub-block inter prediction mode set Inter8 (8×8, 8×4, 4×8, 4×4) or modeI selects the Intra4×4 prediction mode, the spatial detail is rich, the spatial visual saliency also high, and the block belongs to the region of interest, labeled S(x,y,Mode)=1;
    if modeP selects the macroblock inter prediction mode set Inter16 (Skip, 16×16, 16×8, 8×16) or modeI selects the Intra16×16 prediction mode, the spatial variation is smooth, the spatial visual saliency low, and the block belongs to the non-salient region, labeled S(x,y,Mode)=0;
    finally, the visual-saliency regions are labeled from the temporal and spatial saliency labeling results.
  4. The hierarchical video coding method combining visual perception features according to claim 1, characterized in that the fast intra prediction algorithm proceeds as follows:
    Step 1: compute the gray histogram of the luma component Y of the current coded macroblock and record its maximum pixel count Max Value;
    Step 2: set an upper threshold Thhigh and a lower threshold Thlow, both integers within [1,256];
    Step 3: if Max Value ≥ Thhigh, the macroblock is deemed flat; the Intra4×4 prediction mode set is discarded, the Intra16×16 prediction mode set is selected, the mode with minimum rate-distortion cost is taken as the optimal intra prediction mode, and the upper threshold Thhigh is updated at the same time; otherwise go to step 4;
    Step 4: if Max Value ≤ Thlow, the macroblock is deemed detail-rich; the Intra16×16 prediction mode set is discarded, the Intra4×4 prediction mode set is selected, the mode with minimum rate-distortion cost is taken as the best intra prediction mode, and the lower threshold Thlow is updated at the same time; otherwise go to step 5;
    Step 5: if Thlow < Max Value < Thhigh, the flatness of the macroblock is not pronounced and the standard intra prediction algorithm is used;
    in the invention the upper threshold Thhigh and the lower threshold Thlow are set to 150 and 50, respectively.
  5. The hierarchical video coding method combining visual perception features according to claim 1, characterized in that the fast inter prediction algorithm proceeds as follows:
    Step 1: pre-judgment of the Skip mode
    Step 1.1: compute the rate-distortion cost Jskip of the Skip mode (mode0); if it is below the threshold T, stop searching the other modes, select Skip as the best prediction mode and jump to step 4; otherwise execute step 1.2;
    where T=(0.7−Min_cost/18000)×Min_cost, and Min_cost is the optimal rate-distortion cost of the previous coded macroblock;
    Step 1.2: compute the rate-distortion cost J16×16 of the Inter16×16 mode (mode1); if J16×16>Jskip, still select Skip as the best coding mode and jump to step 4; otherwise execute step 2;
    Step 2: pre-judgment of the macroblock/sub-block inter prediction modes
    Step 2.1: compute the rate-distortion costs J16×16 and J8×8 of the Inter16×16 and Inter8×8 modes; if J8×8−J16×16>T0, select Inter16×16 as the best inter coding mode and jump to step 4; otherwise execute step 2.2;
    where T0=0.2×Min_cost is an adaptive empirical threshold derived from experimental data, which keeps the mis-judgment rate as low as possible while allowing fast mode decisions, Min_cost being the optimal rate-distortion cost of the previous coded macroblock;
    Step 2.2: compute the rate-distortion cost J4×4 of the Inter4×4 mode; if J4×4<min(J16×16,J8×8), sub-partition the macroblock and take the inter sub-partition prediction modes Inter8×8, Inter8×4, Inter4×8 and Inter4×4 (mode4-mode7) as the inter candidate mode set; otherwise take the macroblock-level inter prediction modes Inter16×16, Inter16×8, Inter8×16 (mode1-mode3) as the inter candidate mode set and discard sub-partition prediction;
    Step 3: pre-judgment of the intra modes
    Step 3.1: compute the average boundary error ABE (Average Boundary Error) and the summation boundary error SBE (Summation Boundary Error) of the current coded macroblock; the average boundary error ABE reflects the temporal correlation of the macroblock:
    ABE=SBE/64
    where SBE sums the absolute differences between the boundary pixels of the current coded macroblock and the adjacent reconstructed boundary pixels; Yorig denotes the pixel values of the current coded macroblock, Yrec the pixel values of the reconstructed macroblock, and (x,y) the position coordinates of the current coded macroblock;
    Step 3.2: compute the average bit rate AR (Average Rate) of the current coded macroblock; the average bit rate AR reflects the spatial correlation of the macroblock:
    AR=λ·Rate/384
    where λ is the Lagrange multiplier and Rate is the number of bits needed to code the macroblock;
    Step 3.3: compare the average boundary error of the macroblock with its average bit rate: if ABE<C·AR (C=0.95), the spatial redundancy of the macroblock is smaller than its temporal redundancy; the traversal of the intra prediction modes is discarded and step 4 is entered; otherwise the intra prediction traversal is kept and step 4 is entered;
    Step 4: compute and select the optimal inter prediction mode by the rate-distortion criterion, completing inter prediction coding.
  6. The hierarchical video coding method combining visual perception features according to claim 1, characterized in that the fast motion-estimation search algorithm proceeds as follows:
    Step 1: describe the macroblock motion features
    Step 1.1: by the rate-distortion criterion, compute the motion-estimation rate-distortion cost RD costmotion of the current coded macroblock:
    Jmotion(mv,ref|λmotion)=SAD[s,r(ref,mv)]+λmotion[R(mv−pred)+R(ref)]
    where s is the current macroblock pixel value; mv is the macroblock motion vector and pred the predicted vector; ref is the selected reference frame; r(ref,mv) is the pixel value of the reference macroblock; R is the number of bits consumed by differential coding of the motion vector, comprising the bits for the difference between the motion vector and its prediction and the bits for the reference frame; λmotion is the Lagrange multiplier; SAD is the sum of absolute differences between the current block and the reference block, defined as:

    SAD(s,c(m)) = Σ(x=1..M) Σ(y=1..N) |s[x,y] − c[x−mx, y−my]|

    where M and N are the width and height of the current coded macroblock; x,y is the macroblock position; s is the actual value and c the predicted value; m=(mx,my)^T is the macroblock motion vector, mx and my being its horizontal and vertical components;
    Step 1.2: by the rate-distortion criterion, compute the rate-distortion cost RD costmode in mode:
    Jmode(s,c,mode|λmode)=SSD(s,c,mode|QP)+λmode×R(s,c,mode|QP)
    where mode is the inter coding mode of the current macroblock; s is the original video signal; c is the reconstructed video signal after coding in mode; λmode is the Lagrange multiplier; R(s,c,mode|QP) is the total number of bits associated with the mode and the quantization parameter, including the macroblock header, the motion vectors and all DCT block information; QP is the coding quantization step; SSD(s,c,mode) is the sum of squared differences between the original and the reconstructed signal, namely:

    SSD(s,c,mode|QP) = Σ(x=1..B1, y=1..B2) (sY[x,y] − cY[x,y,mode|QP])² + Σ(x=1..B1/2, y=1..B2/2) (sU[x,y] − cU[x,y,mode|QP])² + Σ(x=1..B1/2, y=1..B2/2) (sV[x,y] − cV[x,y,mode|QP])²

    where B1 and B2 are the horizontal and vertical pixel counts of the coded block, taking the values 16, 8 or 4; sY[x,y] and cY[x,y,mode|QP] are the luma values of the original and reconstructed video; sU, sV and cU, cV are the corresponding chroma values;
    Step 1.3: take the minimum of RD costmotion and RD costmode as the minimum rate-distortion cost, denoted RD_mincost;
    Step 2: grade the degree of macroblock motion
    The degree of macroblock motion is graded by:

    low motion:    RD_mincost < (1+γ) × pred_mincost
    medium motion: (1+γ) × pred_mincost ≤ RD_mincost < (1+δ) × pred_mincost
    high motion:   RD_mincost ≥ (1+δ) × pred_mincost

    where γ and δ are adjustment factors that grade the degree of macroblock motion, defined respectively as:

    γ = Bsize[blocktype]/pred_mincost² − α1[blocktype]
    δ = Bsize[blocktype]/pred_mincost² − α2[blocktype]

    where Bsize[blocktype] is the size of the current coded macroblock, with 7 possible values: 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, 4×4; pred_mincost is determined by the motion-vector prediction used for the start search point of the UMHexagonS algorithm:
    (1) if the start search point uses a temporally predicted motion vector, pred_mincost takes the reference frame MV predictor;
    (2) if the start search point does not use a temporally predicted motion vector, two further cases are distinguished:
    (2.1) if the current motion-estimation macroblock uses one of the large inter prediction sizes 16×16, 16×8, 8×16, pred_mincost takes the median MV predictor;
    (2.2) if the current motion-estimation macroblock uses one of the small inter prediction sizes 8×8, 8×4, 4×8, 4×4, pred_mincost takes the uplayer MV predictor;
    based on extensive experimental test data, the arrays α1[blocktype] and α2[blocktype] are defined respectively as:
    α1[blocktype]=[-0.23,-0.23,-0.23,-0.25,-0.27,-0.27,-0.28];
    α2[blocktype]=[-2.39,-2.40,-2.40,-2.41,-2.45,-2.45,-2.48];
    Step 3: determine the motion-estimation search levels of the macroblock
    Step 3.1: when the degree of macroblock motion is low, only the inner levels 1 and 2 of the "non-uniform 4-level hexagon grid search" step of the UMHexagonS algorithm are searched;
    Step 3.2: when the degree of macroblock motion is medium, levels 1 to 3 of the "non-uniform 4-level hexagon grid search" step of the UMHexagonS algorithm are searched;
    Step 3.3: only when the degree of macroblock motion is high are all 4 levels of the "non-uniform 4-level hexagon grid search" step of the UMHexagonS algorithm performed.
PCT/CN2015/100056 2015-01-20 2015-12-31 Visual perception characteristics-combining hierarchical video coding method WO2016115968A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/124,672 US10313692B2 (en) 2015-01-20 2015-12-31 Visual perception characteristics-combining hierarchical video coding method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510025201.1A 2015-01-20 2015-01-20 Visual perception characteristics-combining hierarchical video coding method
CN2015100252011 2015-01-20

Publications (1)

Publication Number Publication Date
WO2016115968A1 true WO2016115968A1 (zh) 2016-07-28

Family

ID=52855410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/100056 WO2016115968A1 (zh) 2015-01-20 2015-12-31 Visual perception characteristics-combining hierarchical video coding method

Country Status (3)

Country Link
US (1) US10313692B2 (zh)
CN (1) CN104539962B (zh)
WO (1) WO2016115968A1 (zh)


Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539962B (zh) 2015-01-20 2017-12-01 Beijing University of Technology Visual perception characteristics-combining hierarchical video coding method
CN105898306A (zh) 2015-12-11 2016-08-24 LeCloud Computing Co., Ltd. Bit rate control method and apparatus for motion video
US20170359596A1 (en) 2016-06-09 2017-12-14 Apple Inc. Video coding techniques employing multiple resolution
CN106331711B (zh) 2016-08-26 2019-07-05 Beijing University of Technology Dynamic bit rate control method based on network features and video features
US11132758B2 (en) 2016-09-14 2021-09-28 Inscape Data, Inc. Embedding data in video without visible impairments
US11082710B2 (en) 2016-09-14 2021-08-03 Inscape Data, Inc. Embedding video watermarks without visible impairments
US10812791B2 (en) 2016-09-16 2020-10-20 Qualcomm Incorporated Offset vector identification of temporal motion vector predictor
CN108134939B (zh) 2016-12-01 2020-08-07 Beijing Kingsoft Cloud Network Technology Co., Ltd. Motion estimation method and apparatus
US10999602B2 (en) 2016-12-23 2021-05-04 Apple Inc. Sphere projected motion estimation/compensation and mode decision
US11259046B2 (en) 2017-02-15 2022-02-22 Apple Inc. Processing of equirectangular object data to compensate for distortion by spherical projections
US10924747B2 (en) 2017-02-27 2021-02-16 Apple Inc. Video coding techniques for multi-view video
CN106937118B (zh) 2017-03-13 2019-09-13 Xidian University Bit rate control method combining subjective regions of interest with the spatio-temporal domain
US10587880B2 (en) 2017-03-30 2020-03-10 Qualcomm Incorporated Zero block detection using adaptive rate model
US11093752B2 (en) 2017-06-02 2021-08-17 Apple Inc. Object tracking in multi-view video
CN107371029B (zh) 2017-06-28 2020-10-30 Shanghai University Content-based video packet priority assignment method
US10754242B2 (en) 2017-06-30 2020-08-25 Apple Inc. Adaptive resolution and projection format in multi-direction video
CN107454395A (zh) 2017-08-23 2017-12-08 Shanghai Anviz Technology Co., Ltd. High-definition network camera and intelligent bit-stream control method
CN107580217A (zh) 2017-08-31 2018-01-12 Zhengzhou Yunhai Information Technology Co., Ltd. Coding method and apparatus
EP3766247A4 (en) 2018-04-02 2022-01-19 MediaTek Inc. Video processing methods and apparatuses for sub-block motion compensation in video coding systems
CN108833928B (zh) 2018-07-03 2020-06-26 University of Science and Technology of China Traffic surveillance video coding method
CN109151479A (zh) 2018-08-29 2019-01-04 Nanjing University of Posts and Telecommunications Saliency extraction method based on H.264 compressed-domain modes and spatio-temporal features
CN111193931B (zh) 2018-11-14 2023-04-07 Shenzhen ZTE Microelectronics Technology Co., Ltd. Video data coding processing method and computer storage medium
CN111200734B (zh) 2018-11-19 2022-03-11 Zhejiang Uniview Technologies Co., Ltd. Video coding method and apparatus
CN109862356B (zh) 2019-01-17 2020-11-10 Institute of Computing Technology, Chinese Academy of Sciences Video coding method and system based on regions of interest
CN109769120B (zh) 2019-02-19 2022-03-22 Beijing Vhall Time Technology Co., Ltd. Method, apparatus, device and medium for deciding the skip coding mode from video content
CN110087087B (zh) 2019-04-09 2023-05-12 Tongji University Early decision of VVC inter coding unit prediction modes and early termination of block partitioning
CN110035285B (zh) 2019-04-18 2023-01-06 Central South University Depth prediction method based on motion-vector sensitivity
CN110225355A (zh) 2019-06-22 2019-09-10 Futeng Technology Branch of Quzhou Guangming Electric Power Investment Group Co., Ltd. Intra prediction optimization method for high-performance video coding based on regions of interest
CN112218088A (zh) 2019-07-09 2021-01-12 Shenzhen Institutes of Advanced Technology Image and video compression method
CN110728173A (zh) 2019-08-26 2020-01-24 Huabei Petroleum Communication Co., Ltd. Video transmission method and apparatus based on saliency detection of objects of interest
CN110648334A (zh) 2019-09-18 2020-01-03 Rocket Force University of Engineering of the Chinese People's Liberation Army Multi-feature recurrent-convolution salient object detection method based on an attention mechanism
CN110996099B (зh) 2019-11-15 2021-05-25 Wangsu Science & Technology Co., Ltd. Video coding method, system and device
CN110933446B (zh) 2019-11-15 2021-05-25 Wangsu Science & Technology Co., Ltd. Region-of-interest identification method, system and device
US10939126B1 (en) 2019-12-09 2021-03-02 Guangzhou Zhijing Technology Co., Ltd Method of adding encoded range-of-interest location, type and adjustable quantization parameters per macroblock to video stream
CN113096132B (zh) 2020-01-08 2022-02-08 Donghua Yiwei Technology Co., Ltd. Image processing method and apparatus, storage medium and electronic device
US11263261B2 (en) 2020-02-14 2022-03-01 Alibaba Group Holding Limited Method and system for characteristic-based video processing
CN111327909B (zh) 2020-03-06 2022-10-18 Zhengzhou University of Light Industry Fast depth coding method for 3D-HEVC
CN116389822A (zh) 2020-03-30 2023-07-04 Huawei Technologies Co., Ltd. Data transmission method, chip system and related apparatus
CN111901606A (zh) 2020-07-31 2020-11-06 Hangzhou Arcvideo Technology Co., Ltd. Video coding method that improves subtitle coding quality
CN111918066B (zh) 2020-09-08 2022-03-15 Beijing ByteDance Network Technology Co., Ltd. Video coding method, apparatus, device and storage medium
CN114650421A (zh) 2020-12-18 2022-06-21 ZTE Corporation Video processing method and apparatus, electronic device and storage medium
CN113365081B (zh) 2021-05-27 2023-02-07 Shenzhen Jieli Microelectronics Technology Co., Ltd. Motion estimation optimization method and apparatus in video coding
CN113361599B (zh) 2021-06-04 2024-04-05 Hangzhou Dianzi University Video temporal-saliency measurement method based on perceptual-feature parameter metrics
CN113422882B (zh) 2021-06-22 2022-09-02 University of Science and Technology of China Hierarchical encryption method, system, device and storage medium for image compression coding
CN113810720A (zh) 2021-08-09 2021-12-17 Beijing Boya Huishi Intelligent Technology Research Institute Co., Ltd. Image processing method, apparatus, device and medium
US11949877B2 (en) 2021-10-01 2024-04-02 Microsoft Technology Licensing, Llc Adaptive encoding of screen content based on motion type
US11962811B2 (en) 2021-10-19 2024-04-16 Google Llc Saliency based denoising
US11704891B1 (en) 2021-12-29 2023-07-18 Insight Direct Usa, Inc. Dynamically configured extraction, preprocessing, and publishing of a region of interest that is a subset of streaming video data
US11509836B1 (en) 2021-12-29 2022-11-22 Insight Direct Usa, Inc. Dynamically configured processing of a region of interest dependent upon published video data selected by a runtime configuration file
CN114401400A (zh) 2022-01-19 2022-04-26 Fuzhou University Video quality assessment method and system based on perception of visually salient coding effects
CN114374843B (zh) 2022-03-23 2022-05-20 Guangzhou Fanggui Information Technology Co., Ltd. Live-streaming video coding method based on prediction-mode selection, and computer device
CN114745549B (zh) 2022-04-02 2023-03-17 Beijing Radio and Television Station Video coding method and system based on regions of interest
CN114513661B (zh) 2022-04-20 2022-09-06 Ningbo Kangda Kaineng Medical Technology Co., Ltd. Intra-frame image mode decision method and system based on direction detection
US11778167B1 (en) 2022-07-26 2023-10-03 Insight Direct Usa, Inc. Method and system for preprocessing optimization of streaming video data
WO2024050723A1 (zh) 2022-09-07 2024-03-14 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image prediction method and apparatus, and computer-readable storage medium
CN116074513A (zh) 2023-03-06 2023-05-05 Beijing Chaoge Digital Technology Co., Ltd. Video coding method for network cameras, computer-readable medium and electronic device
CN116996680B (zh) 2023-09-26 2023-12-12 Shanghai Shilong Software Co., Ltd. Method and apparatus for training video-data classification models


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8559501B2 (en) * 2006-06-09 2013-10-15 Thomson Licensing Method and apparatus for adaptively determining a bit budget for encoding video pictures
US7995800B2 (en) * 2007-02-28 2011-08-09 Imec System and method for motion detection and the use thereof in video coding
KR101680951B1 (ko) * 2007-04-12 2016-11-29 톰슨 라이센싱 비디오 인코더에서 고속으로 기하학적 모드를 결정하기 위한 방법들 및 장치
US20160173906A1 (en) * 2014-12-11 2016-06-16 Intel Corporation Partition mode and transform size determination based on flatness of video

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8774272B1 (en) * 2005-07-15 2014-07-08 Geo Semiconductor Inc. Video quality by controlling inter frame encoding according to frame position in GOP
US20070206674A1 (en) * 2006-03-01 2007-09-06 Streaming Networks (Pvt.) Ltd. Method and system for providing low cost robust operational control of video encoders
CN101621709A * 2009-08-10 2010-01-06 Zhejiang University Full-reference objective image quality assessment method
CN102186070A * 2011-04-20 2011-09-14 Beijing University of Technology Fast video coding method based on hierarchical-structure pre-decision
CN103618900A * 2013-11-21 2014-03-05 Beijing University of Technology Video region-of-interest extraction method based on coding information
CN104539962A * 2015-01-20 2015-04-22 Beijing University of Technology Visual perception characteristics-combining hierarchical video coding method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU, PENGYU; ET AL.: "Video ROI Extraction Algorithm Based on Reconstructed Encoding Information", COMPUTER ENGINEERING, vol. 37, no. 24, 31 December 2011 (2011-12-31), ISSN: 1000-3428 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640424A * 2019-03-01 2020-09-08 Beijing Sogou Technology Development Co., Ltd. Speech recognition method and apparatus, and electronic device
CN111640424B * 2019-03-01 2024-02-13 Beijing Sogou Technology Development Co., Ltd. Speech recognition method and apparatus, and electronic device
CN111901591A * 2020-07-28 2020-11-06 Youbandao (Beijing) Information Technology Co., Ltd. Method, apparatus, server and storage medium for determining a coding mode
CN111901591B * 2020-07-28 2023-07-18 Youbandao (Beijing) Information Technology Co., Ltd. Method, apparatus, server and storage medium for determining a coding mode
CN112004088A * 2020-08-06 2020-11-27 Hangzhou Arcvideo Technology Co., Ltd. CU-level QP allocation algorithm suitable for the AVS2 encoder
CN112004088B * 2020-08-06 2024-04-16 Hangzhou Arcvideo Technology Co., Ltd. CU-level QP allocation algorithm suitable for the AVS2 encoder
CN113542745A * 2021-05-27 2021-10-22 Shaoxing Beida Information Technology Innovation Center Rate-distortion coding optimization method

Also Published As

Publication number Publication date
US10313692B2 (en) 2019-06-04
CN104539962B (zh) 2017-12-01
US20170085892A1 (en) 2017-03-23
CN104539962A (zh) 2015-04-22

Similar Documents

Publication Publication Date Title
WO2016115968A1 (zh) Visual perception characteristics-combining hierarchical video coding method
US20220312021A1 (en) Analytics-modulated coding of surveillance video
CN106961606B (zh) HEVC intra coding mode selection method based on texture partition features
CN103873861B (zh) Coding mode selection method for HEVC
US20060165163A1 (en) Video encoding
CN109302610B (zh) Fast inter coding method for screen content coding based on rate-distortion cost
KR101433170B1 (ko) Encoding and decoding method for estimating the intra prediction mode from the spatial prediction directionality of adjacent blocks, and apparatus therefor
CN105141948A (zh) Improved HEVC sample adaptive offset method
CN109040764B (zh) Decision-tree-based fast intra coding algorithm for HEVC screen content
CN106878754B (zh) Intra prediction mode selection method for 3D video depth images
CN113079373A (zh) Video coding method based on HEVC-SCC
Duvar et al. Fast inter mode decision exploiting intra-block similarity in HEVC
CN110446042B (zh) Coding method that improves P-frame quality in H.264
Chen et al. CNN-based fast HEVC quantization parameter mode decision
CN111246218B (zh) JND-model-based CU partition prediction and mode-decision texture coding method
Luo et al. Fast AVS to HEVC transcoding based on ROI detection using visual characteristics
CN114173131A (zh) Video compression method and system based on inter-frame correlation
CN113099224A (zh) Video coding method with unit partitioning and prediction-model selection based on dominant image texture strength
Chen et al. Adaptive frequency weighting for high-performance video coding
KR101630167B1 (ko) Fast intra coding mode decision method for HEVC
CN110611819B (zh) Coding method that improves B-frame quality in H.264
Li et al. Perceptual video coding based on adaptive region-level intra-period
CN108012152B (zh) Fast HEVC coding method
Yang et al. Improved method of deblocking filter based on convolutional neural network in VVC
Wang et al. Rate Control in Versatile Video Coding with Cosh Rate–Distortion Model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15878636; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 15124672; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15878636; Country of ref document: EP; Kind code of ref document: A1)