WO2016141609A1 - Image prediction method and related device - Google Patents

Image prediction method and related device

Info

Publication number
WO2016141609A1
Authority
WO
WIPO (PCT)
Prior art keywords
image block
current image
pixel
motion information
motion
Prior art date
Application number
PCT/CN2015/075094
Other languages
English (en)
French (fr)
Inventor
Chen Huanbang
Lin Sixin
Liang Fan
Yang Haitao
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to SG11201707392RA priority Critical patent/SG11201707392RA/en
Priority to CN201580077673.XA priority patent/CN107534770B/zh
Priority to CN201910900293.1A priority patent/CN110557631B/zh
Priority to JP2017548056A priority patent/JP6404487B2/ja
Priority to MYPI2017001326A priority patent/MY190198A/en
Priority to BR112017019264-0A priority patent/BR112017019264B1/pt
Priority to EP15884292.2A priority patent/EP3264762A4/en
Priority to MX2017011558A priority patent/MX2017011558A/es
Application filed by Huawei Technologies Co., Ltd.
Priority to RU2017134755A priority patent/RU2671307C1/ru
Priority to CA2979082A priority patent/CA2979082C/en
Priority to AU2015385634A priority patent/AU2015385634B2/en
Priority to KR1020177027987A priority patent/KR102081213B1/ko
Publication of WO2016141609A1 publication Critical patent/WO2016141609A1/zh
Priority to US15/699,515 priority patent/US10404993B2/en
Priority to HK18103344.7A priority patent/HK1243852A1/zh
Priority to US16/413,329 priority patent/US10659803B2/en
Priority to US16/847,444 priority patent/US11178419B2/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/14: Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00: Image coding
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/18: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: Processing of motion vectors by encoding by predictive encoding
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/537: Motion estimation other than block-based
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the present invention relates to the field of video coding and decoding, and in particular to an image prediction method and related equipment.
  • the basic principle of video coding compression is to use the correlation between the spatial domain, the time domain, and codewords to remove redundancy as much as possible.
  • the current popular practice is to use a block-based hybrid video coding framework to implement video coding compression through prediction (including intra prediction and inter prediction), transform, quantization, and entropy coding.
  • This coding framework has proved enduring, and HEVC still uses this block-based hybrid video coding framework.
  • motion estimation/motion compensation is a key technique that affects encoding/decoding performance.
  • the existing motion estimation/motion compensation algorithms are basically block motion compensation algorithms based on the translational motion model.
  • irregular movements such as scaling, rotation, and parabolic motion are ubiquitous.
  • video coding experts realized the universality of irregular motion and hoped to improve video coding efficiency by introducing irregular motion models (such as affine motion models); however, the computational complexity of existing image prediction based on the affine motion model is usually very high.
  • Embodiments of the present invention provide an image prediction method and related equipment, in order to reduce the computational complexity of image prediction based on an affine motion model.
  • a first aspect of the present invention provides an image prediction method, which may include:
  • determining 2 pixel samples in the current image block, and determining a candidate motion information unit set corresponding to each of the 2 pixel samples, where the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit;
  • determining a merged motion information unit set i including two motion information units, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and
  • the motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward;
  • performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i.
  • the determining a merged motion information unit set i including two motion information units includes:
  • determining, from among N candidate merged motion information unit sets, the merged motion information unit set i including the two motion information units, where each motion information unit included in each of the N candidate merged motion information unit sets is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and N is a positive integer.
  • the N candidate merged motion information unit sets are different from each other, and each of the N candidate merged motion information unit sets includes two motion information units.
  • the N candidate merged motion information unit sets meet at least one of a first condition, a second condition, a third condition, a fourth condition, and a fifth condition, where:
  • the first condition includes that the motion mode of the current image block indicated by the motion information units in any one of the N candidate merged motion information unit sets is non-translational motion;
  • the second condition includes that the prediction directions of the two motion information units in any one of the N candidate merged motion information unit sets are the same;
  • the third condition includes that the reference frame indexes corresponding to the two motion information units in any one of the N candidate merged motion information unit sets are the same;
  • the fourth condition includes that the absolute value of the difference between the motion vector horizontal components of the two motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a horizontal component threshold, or that the absolute value of the difference between the motion vector horizontal component of one motion information unit in any one of the N candidate merged motion information unit sets and the motion vector horizontal component of a pixel sample Z is less than or equal to the horizontal component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples;
  • the fifth condition includes that the absolute value of the difference between the motion vector vertical components of the two motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a vertical component threshold, or that the absolute value of the difference between the motion vector vertical component of one motion information unit in any one of the N candidate merged motion information unit sets and the motion vector vertical component of the pixel sample Z is less than or equal to the vertical component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples.
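The screening described by conditions (2) through (5) can be sketched in Python. This is an illustrative sketch only: the class, function names, thresholds, and the pairwise enumeration strategy are assumptions for demonstration, not taken from the patent.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class MotionInfoUnit:
    mv: tuple          # (horizontal, vertical) motion vector components
    ref_idx: int       # reference frame index
    direction: str     # prediction direction: 'forward' or 'backward'

def valid_pair(a, b, horiz_thresh, vert_thresh):
    # Second condition: identical prediction directions.
    if a.direction != b.direction:
        return False
    # Third condition: identical reference frame indexes.
    if a.ref_idx != b.ref_idx:
        return False
    # Fourth condition: horizontal MV components within the threshold.
    if abs(a.mv[0] - b.mv[0]) > horiz_thresh:
        return False
    # Fifth condition: vertical MV components within the threshold.
    if abs(a.mv[1] - b.mv[1]) > vert_thresh:
        return False
    return True

def candidate_sets(cands0, cands1, horiz_thresh=16, vert_thresh=16):
    # Enumerate the N candidate merged sets: one motion information unit
    # drawn from each pixel sample's candidate set, keeping valid pairs.
    return [(a, b) for a, b in product(cands0, cands1)
            if valid_pair(a, b, horiz_thresh, vert_thresh)]
```

A pair whose units point at different reference frames, or whose components diverge beyond the thresholds, is excluded before the merged set i is chosen.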
  • the 2 pixel samples include two of an upper left pixel sample, an upper right pixel sample, a lower left pixel sample, and a central pixel sample a1 of the current image block;
  • the upper left pixel sample of the current image block is the upper left vertex of the current image block, or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is the lower left vertex of the current image block, or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is the upper right vertex of the current image block, or a pixel block in the current image block that includes the upper right vertex of the current image block; the central pixel sample a1 of the current image block is the central pixel point of the current image block, or a pixel block in the current image block that includes the central pixel point of the current image block.
  • the candidate motion information unit set corresponding to the upper left pixel sample of the current image block includes motion information units of x1 pixel samples, where the x1 pixel samples include at least one pixel sample spatially adjacent to the upper left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper left pixel sample of the current image block, and x1 is a positive integer;
  • the x1 pixel samples include at least one of: a pixel sample that has the same position as the upper left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left of the current image block, a spatially adjacent pixel sample on the upper left of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the upper right pixel sample of the current image block includes motion information units of x2 pixel samples, where the x2 pixel samples include at least one pixel sample spatially adjacent to the upper right pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper right pixel sample of the current image block, and x2 is a positive integer;
  • the x2 pixel samples include at least one of: a pixel sample that has the same position as the upper right pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the right of the current image block, a spatially adjacent pixel sample on the upper right of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the lower left pixel sample of the current image block includes motion information units of x3 pixel samples, where the x3 pixel samples include at least one pixel sample spatially adjacent to the lower left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower left pixel sample of the current image block, and x3 is a positive integer;
  • the x3 pixel samples include at least one of: a pixel sample that has the same position as the lower left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left of the current image block, a spatially adjacent pixel sample on the lower left of the current image block, and a spatially adjacent pixel sample below the current image block.
  • the candidate motion information unit set corresponding to the central pixel sample a1 of the current image block includes motion information units of x5 pixel samples, where one of the x5 pixel samples is a pixel sample a2;
  • the position of the central pixel sample a1 in the video frame to which the current image block belongs is the same as the position of the pixel sample a2 in an adjacent video frame of the video frame to which the current image block belongs, and x5 is a positive integer.
  • performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i includes: when the reference frame index corresponding to a motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the merged motion information unit set i is scaled to the reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i on which the scaling processing has been performed, where the first prediction direction is forward or backward;
  • performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i includes: when the reference frame index corresponding to a forward motion vector in the merged motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to a backward motion vector in the merged motion information unit set i is different from the backward reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the forward motion vector in the merged motion information unit set i is scaled to the forward reference frame of the current image block and the backward motion vector in the merged motion information unit set i is scaled to the backward reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i on which the scaling processing has been performed.
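The text does not give an explicit formula for the scaling processing. A common way to scale a motion vector to a different reference frame, shown here as an assumed sketch, is to weight it by the ratio of temporal (picture-order-count) distances:

```python
def scale_mv(mv, poc_cur, poc_ref_target, poc_ref_src):
    """Scale mv, which points from the current frame (poc_cur) to its
    original reference frame (poc_ref_src), so that it points to the
    target reference frame (poc_ref_target) instead, proportionally to
    the temporal distances between the frames."""
    tb = poc_cur - poc_ref_target  # distance to the target reference
    td = poc_cur - poc_ref_src     # distance to the original reference
    if td == 0:
        return mv                  # same frame: nothing to scale
    return (mv[0] * tb / td, mv[1] * tb / td)
```

For example, a vector that reached a reference 4 pictures away is halved when retargeted to a reference 2 pictures away. Real codecs perform this in fixed-point arithmetic with clipping; that detail is omitted here.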
  • performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i includes: obtaining a motion vector of any pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of the 2 pixel samples to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the 2 pixel samples to the length or width of the current image block, where the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the two motion information units in the merged motion information unit set i.
  • the horizontal coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is equal to the vertical coordinate coefficient of the motion vector vertical component, and the vertical coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is opposite to the horizontal coordinate coefficient of the motion vector vertical component.
  • the affine motion model is an affine motion model of the following form:

        vx = ((vx1 - vx0) / w) * x - ((vy1 - vy0) / w) * y + vx0
        vy = ((vy1 - vy0) / w) * x + ((vx1 - vx0) / w) * y + vy0

  • where the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1), respectively, vx is the motion vector horizontal component of a pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of that pixel sample, and w is the length or width of the current image block.
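The 4-parameter affine model described above derives a per-pixel motion vector from the two control-point motion vectors. A minimal sketch (function and variable names are ours, chosen to follow the formula; this is an illustration, not the patent's implementation):

```python
def affine_mv(x, y, mv0, mv1, w):
    """Motion vector of the pixel sample at (x, y), given the motion
    vectors mv0 = (vx0, vy0) and mv1 = (vx1, vy1) of the 2 pixel samples
    and the block length or width w."""
    vx0, vy0 = mv0
    vx1, vy1 = mv1
    # Horizontal component: the x-coefficient (vx1-vx0)/w equals the
    # y-coefficient of the vertical component; the y-coefficient
    # -(vy1-vy0)/w is the opposite of its x-coefficient, as stated above.
    vx = (vx1 - vx0) / w * x - (vy1 - vy0) / w * y + vx0
    vy = (vy1 - vy0) / w * x + (vx1 - vx0) / w * y + vy0
    return vx, vy
```

By construction the field interpolates the control points: at (0, 0) it returns mv0, and at (w, 0) it returns mv1, which is why only two pixel samples are needed to describe rotation, scaling, and translation.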
  • the image prediction method is applied to a video encoding process or the image prediction method is applied to a video decoding process.
  • determining the merged motion information unit set i including the two motion information units from among the N candidate merged motion information unit sets includes: determining, based on an identifier of the merged motion information unit set i obtained from the video bitstream, the merged motion information unit set i including the two motion information units from among the N candidate merged motion information unit sets.
  • in the case that the image prediction method is applied to a video decoding process, the method further includes: decoding motion vector residuals of the 2 pixel samples from the video bitstream, obtaining motion vector predictors of the 2 pixel samples by using the motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, and obtaining the motion vectors of the 2 pixel samples from the motion vector predictors of the 2 pixel samples and the motion vector residuals of the 2 pixel samples, respectively.
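The decoder-side reconstruction described above reduces to adding each decoded residual to its predictor, per pixel sample. A minimal sketch (names are illustrative):

```python
def reconstruct_mvs(predictors, residuals):
    """Recover the motion vector of each pixel sample as its predictor
    (derived from neighbouring samples) plus the residual decoded from
    the bitstream."""
    return [(px + rx, py + ry)
            for (px, py), (rx, ry) in zip(predictors, residuals)]
```

The encoder performs the mirror operation, subtracting the predictor from the actual motion vector before writing the residual to the bitstream.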
  • the method further includes: obtaining motion vector predictors of the 2 pixel samples by using the motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, obtaining motion vector residuals of the 2 pixel samples according to the motion vector predictors of the 2 pixel samples, and writing the motion vector residuals of the 2 pixel samples into the video bitstream.
  • in the case that the image prediction method is applied to a video encoding process, the method further includes: writing an identifier of the merged motion information unit set i into the video bitstream.
  • a second aspect of the embodiments of the present invention provides an image prediction apparatus, including:
  • a first determining unit, configured to determine 2 pixel samples in the current image block and determine a candidate motion information unit set corresponding to each of the 2 pixel samples, where the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit;
  • a second determining unit, configured to determine a merged motion information unit set i including two motion information units, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and
  • the motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward;
  • a prediction unit configured to perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i.
  • the second determining unit is specifically configured to determine, from among N candidate merged motion information unit sets, the merged motion information unit set i including the two motion information units, where each motion information unit included in each of the N candidate merged motion information unit sets is selected from at least part of the constraint-compliant motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, N is a positive integer, the N candidate merged motion information unit sets are different from each other, and each of the N candidate merged motion information unit sets includes two motion information units.
  • the N candidate merged motion information unit sets meet at least one of a first condition, a second condition, a third condition, a fourth condition, and a fifth condition, where:
  • the first condition includes that the motion mode of the current image block indicated by the motion information units in any one of the N candidate merged motion information unit sets is non-translational motion;
  • the second condition includes that the prediction directions of the two motion information units in any one of the N candidate merged motion information unit sets are the same;
  • the third condition includes that the reference frame indexes corresponding to the two motion information units in any one of the N candidate merged motion information unit sets are the same;
  • the fourth condition includes that the absolute value of the difference between the motion vector horizontal components of the two motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a horizontal component threshold, or that the absolute value of the difference between the motion vector horizontal component of one motion information unit in any one of the N candidate merged motion information unit sets and the motion vector horizontal component of a pixel sample Z is less than or equal to the horizontal component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples;
  • the fifth condition includes that the absolute value of the difference between the motion vector vertical components of the two motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a vertical component threshold, or that the absolute value of the difference between the motion vector vertical component of one motion information unit in any one of the N candidate merged motion information unit sets and the motion vector vertical component of the pixel sample Z is less than or equal to the vertical component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples.
  • the 2 pixel samples include two of an upper left pixel sample, an upper right pixel sample, a lower left pixel sample, and a central pixel sample a1 of the current image block;
  • the upper left pixel sample of the current image block is the upper left vertex of the current image block, or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is the lower left vertex of the current image block, or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is the upper right vertex of the current image block, or a pixel block in the current image block that includes the upper right vertex of the current image block; the central pixel sample a1 of the current image block is the central pixel point of the current image block, or a pixel block in the current image block that includes the central pixel point of the current image block.
  • the candidate motion information unit set corresponding to the upper left pixel sample of the current image block includes motion information units of x1 pixel samples, where the x1 pixel samples include at least one pixel sample spatially adjacent to the upper left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper left pixel sample of the current image block, and x1 is a positive integer;
  • the x1 pixel samples include at least one of: a pixel sample that has the same position as the upper left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left of the current image block, a spatially adjacent pixel sample on the upper left of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the upper right pixel sample of the current image block includes motion information units of x2 pixel samples, where the x2 pixel samples include at least one pixel sample spatially adjacent to the upper right pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper right pixel sample of the current image block, and x2 is a positive integer;
  • the x2 pixel samples include at least one of: a pixel sample that has the same position as the upper right pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the right of the current image block, a spatially adjacent pixel sample on the upper right of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the lower left pixel sample of the current image block includes motion information units of x3 pixel samples, where the x3 pixel samples include at least one pixel sample spatially adjacent to the lower left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower left pixel sample of the current image block, and x3 is a positive integer;
  • the x3 pixel samples include at least one of: a pixel sample that has the same position as the lower left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left of the current image block, a spatially adjacent pixel sample on the lower left of the current image block, and a spatially adjacent pixel sample below the current image block.
  • the candidate motion information unit set corresponding to the central pixel sample a1 of the current image block includes motion information units of x5 pixel samples, wherein one of the x5 pixel samples is a pixel sample a2,
  • the position of the central pixel sample a1 in the video frame to which the current image block belongs is the same as the position of the pixel sample a2 in the adjacent video frame of the video frame to which the current image block belongs, and the x5 is A positive integer.
  • the prediction unit is specifically configured to: when the reference frame index corresponding to the motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, perform scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the merged motion information unit set i is scaled to the reference frame of the current image block, and perform pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i on which the scaling processing has been performed, wherein the first prediction direction is forward or backward;
  • the prediction unit is specifically configured to: when the reference frame index corresponding to the motion vector whose prediction direction is forward in the merged motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to the motion vector whose prediction direction is backward in the merged motion information unit set i is different from the backward reference frame index of the current image block, perform scaling processing on the merged motion information unit set i so that the forward motion vector in the merged motion information unit set i is scaled to the forward reference frame of the current image block and the backward motion vector in the merged motion information unit set i is scaled to the backward reference frame of the current image block, and perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i on which the scaling processing has been performed.
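The scaling step described above can be sketched in a few lines. The patent text does not give the scaling formula itself; the sketch below assumes the usual codec convention of stretching a motion vector by the ratio of temporal (picture-order-count) distances, and the function name `scale_motion_vector` is illustrative, not taken from the source.

```python
# Hypothetical sketch: rescale a motion vector that points at one
# reference frame so it points at another, using the ratio of the
# temporal (POC) distances. The POC-ratio rule is an assumption
# borrowed from common video-codec practice, not stated in the patent.

def scale_motion_vector(mv, cur_poc, mv_ref_poc, target_ref_poc):
    """Scale (vx, vy) from the distance (cur - mv_ref) to (cur - target_ref)."""
    src_dist = cur_poc - mv_ref_poc
    dst_dist = cur_poc - target_ref_poc
    if src_dist == 0:            # degenerate case: nothing to scale against
        return mv
    factor = dst_dist / src_dist
    vx, vy = mv
    return (vx * factor, vy * factor)

# A vector of (8, -4) pointing 2 frames back, rescaled to point
# 4 frames back, doubles to (16.0, -8.0).
print(scale_motion_vector((8, -4), cur_poc=10, mv_ref_poc=8, target_ref_poc=6))
```

After both motion vectors of a merged set have been scaled onto the current block's own reference frames, the affine prediction can proceed as if the set had been built for those frames in the first place.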
  • the prediction unit is specifically configured to calculate a motion vector of each pixel in the current image block by using the affine motion model and the merged motion information unit set i, and to determine a predicted pixel value of each pixel in the current image block by using the calculated motion vector of each pixel in the current image block;
  • the prediction unit is specifically configured to calculate a motion vector of each pixel block in the current image block by using the affine motion model and the merged motion information unit set i, and to determine a predicted pixel value of each pixel of each pixel block in the current image block by using the calculated motion vector of each pixel block in the current image block.
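The per-pixel-block variant above trades accuracy for computation: one motion vector is evaluated per small pixel block and shared by every pixel inside it. A minimal sketch, assuming an illustrative 4x4 sub-block size and an arbitrary per-position motion field passed in as a callable:

```python
# Sketch of per-pixel-block motion vectors: instead of one vector per
# pixel, evaluate the motion field once at each sub-block centre (4x4
# assumed here for illustration) and reuse it for all pixels inside.

def per_block_mvs(width, height, mv_of, block=4):
    """mv_of(x, y) -> (vx, vy); evaluated once at each sub-block centre."""
    mvs = {}
    for by in range(0, height, block):
        for bx in range(0, width, block):
            cx, cy = bx + block / 2, by + block / 2
            mvs[(bx, by)] = mv_of(cx, cy)
    return mvs

# With a toy linear motion field, an 8x8 block yields four sub-block vectors.
field = lambda x, y: (x * 0.5, y * 0.5)
print(len(per_block_mvs(8, 8, field)))  # prints 4
```

The choice of sub-block size is a codec design parameter; a smaller block tracks the affine field more closely at higher cost.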
  • the prediction unit is specifically configured to obtain a motion vector of an arbitrary pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of the 2 pixel samples to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the 2 pixel samples to the length or width of the current image block, wherein the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the two motion information units in the merged motion information unit set i.
  • the horizontal coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is equal to the vertical coordinate coefficient of the motion vector vertical component, and the vertical coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is opposite to the horizontal coordinate coefficient of the motion vector vertical component.
  • the affine motion model is an affine motion model of the following form:
  • vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0
  • vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
  • the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1) respectively, vx is the motion vector horizontal component of the pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of the pixel sample with coordinates (x, y) in the current image block, and w is the length or width of the current image block.
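The two-control-point (4-parameter) model above can be checked with a short runnable sketch. It assumes, as the surrounding definitions state, that (vx0, vy0) is the motion vector at the block's top-left corner and (vx1, vy1) at the top-right corner a distance w away; the helper name `affine_mv` is illustrative.

```python
# Sketch of the 4-parameter affine model: the motion vectors at two
# control points of a block of width w determine the motion vector at
# any pixel (x, y). The coefficient pairing follows the text: the
# x-coefficient of vx equals the y-coefficient of vy, and the
# y-coefficient of vx is the negative of the x-coefficient of vy.

def affine_mv(x, y, v0, v1, w):
    vx0, vy0 = v0
    vx1, vy1 = v1
    a = (vx1 - vx0) / w           # horizontal coordinate coefficient
    b = -(vy1 - vy0) / w          # vertical coordinate coefficient
    vx = a * x + b * y + vx0
    vy = -b * x + a * y + vy0
    return vx, vy

# The model reproduces the second control point exactly:
print(affine_mv(8, 0, (2, 3), (6, 1), w=8))  # prints (6.0, 1.0)
```

Because only two motion vectors are signalled, this model captures translation, rotation and uniform scaling, but not shearing, which would require a third control point.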
  • the image prediction device is applied to a video encoding device or the image prediction device is applied to a video decoding device.
  • the second determining unit is specifically configured to determine, based on the identifier of the merged motion information unit set i obtained from the video code stream, the merged motion information unit set i including the two motion information units from among the N candidate merged motion information unit sets.
  • the apparatus further includes a decoding unit configured to decode motion vector residuals of the 2 pixel samples from the video code stream, obtain motion vector predictors of the 2 pixel samples by using motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, and obtain the motion vectors of the 2 pixel samples based on the motion vector predictors of the 2 pixel samples and the motion vector residuals of the 2 pixel samples.
  • in the case where the image prediction apparatus is applied to a video encoding apparatus, the prediction unit is further configured to: obtain motion vector predictors of the 2 pixel samples by using motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, obtain motion vector residuals of the 2 pixel samples according to the motion vector predictors of the 2 pixel samples, and write the motion vector residuals of the 2 pixel samples into the video code stream.
  • in the case where the image prediction apparatus is applied to a video encoding apparatus, the device further comprises an encoding unit configured to write the identifier of the merged motion information unit set i into the video code stream.
  • a third aspect of the embodiments of the present invention provides an image prediction apparatus, including:
  • the processor, by calling code or instructions stored in the memory, is configured to: determine 2 pixel samples in a current image block and determine a candidate motion information unit set corresponding to each of the 2 pixel samples, wherein the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit; determine a merged motion information unit set i including two motion information units, wherein each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and the motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward; and perform pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i.
  • in the aspect of determining the merged motion information unit set i including the two motion information units, the processor is configured to determine, from among N candidate merged motion information unit sets, the merged motion information unit set i including the two motion information units, wherein each motion information unit included in each of the N candidate merged motion information unit sets is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, N is a positive integer, the N candidate merged motion information unit sets are different from each other, and each of the N candidate merged motion information unit sets includes two motion information units.
  • the N candidate merged motion information unit sets meet at least one of a first condition, a second condition, a third condition, a fourth condition and a fifth condition, wherein:
  • the first condition includes that the motion mode of the current image block indicated by the motion information units in any one of the N candidate merged motion information unit sets is non-translational motion;
  • the second condition includes that the prediction directions corresponding to the two motion information units in any one of the N candidate merged motion information unit sets are the same;
  • the third condition includes that the reference frame indexes corresponding to the two motion information units in any one of the N candidate merged motion information unit sets are the same;
  • the fourth condition includes that the absolute value of the difference between the motion vector horizontal components of the two motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a horizontal component threshold, or that the absolute value of the difference between the motion vector horizontal component of one motion information unit in any one of the N candidate merged motion information unit sets and the motion vector horizontal component of a pixel sample Z is less than or equal to a horizontal component threshold, wherein the pixel sample Z of the current image block is different from either of the 2 pixel samples;
  • the fifth condition includes that the absolute value of the difference between the motion vector vertical components of the two motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a vertical component threshold, or that the absolute value of the difference between the motion vector vertical component of one motion information unit in any one of the N candidate merged motion information unit sets and the motion vector vertical component of the pixel sample Z is less than or equal to a vertical component threshold, wherein the pixel sample Z of the current image block is different from either of the 2 pixel samples.
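The screening conditions above can be sketched as a simple predicate over a pair of candidate motion information units. The field names (`dir`, `ref_idx`, `mv`) and the threshold values are illustrative assumptions, not taken from the patent; conditions 2-5 are checked directly, while condition 1 (non-translational motion) depends on the wider candidate-construction context and is omitted here.

```python
# Hedged sketch of conditions 2-5: a candidate merged set of two motion
# information units is kept only if the units share a prediction
# direction (cond. 2) and reference frame index (cond. 3), and their
# motion-vector components differ by at most a threshold (conds. 4-5).

def passes_conditions(u0, u1, h_thresh, v_thresh):
    if u0["dir"] != u1["dir"]:                      # condition 2
        return False
    if u0["ref_idx"] != u1["ref_idx"]:              # condition 3
        return False
    if abs(u0["mv"][0] - u1["mv"][0]) > h_thresh:   # condition 4
        return False
    if abs(u0["mv"][1] - u1["mv"][1]) > v_thresh:   # condition 5
        return False
    return True

a = {"dir": "fwd", "ref_idx": 0, "mv": (4, 1)}
b = {"dir": "fwd", "ref_idx": 0, "mv": (6, 2)}
c = {"dir": "bwd", "ref_idx": 0, "mv": (4, 1)}
print(passes_conditions(a, b, h_thresh=8, v_thresh=8))  # True
print(passes_conditions(a, c, h_thresh=8, v_thresh=8))  # False (direction differs)
```

The thresholds in conditions 4 and 5 keep the two control-point vectors from diverging so far that the implied affine deformation becomes implausible for a single block.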
  • the 2 pixel samples include two of the upper left pixel sample, the upper right pixel sample, the lower left pixel sample and the central pixel sample a1 of the current image block;
  • the upper left pixel sample of the current image block is the upper left vertex of the current image block or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is the lower left vertex of the current image block or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is the upper right vertex of the current image block or a pixel block in the current image block that includes the upper right vertex of the current image block; the central pixel sample a1 of the current image block is the central pixel point of the current image block or a pixel block in the current image block that includes the central pixel point of the current image block.
  • the candidate motion information unit set corresponding to the upper left pixel sample of the current image block includes motion information units of x1 pixel samples, wherein the x1 pixel samples include at least one pixel sample spatially adjacent to the upper left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper left pixel sample of the current image block, and x1 is a positive integer;
  • the x1 pixel samples include at least one of: a pixel sample at the same position as the upper left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left of the current image block, a spatially adjacent pixel sample on the upper left of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the upper right pixel sample of the current image block includes motion information units of x2 pixel samples, wherein the x2 pixel samples include at least one pixel sample spatially adjacent to the upper right pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper right pixel sample of the current image block, and x2 is a positive integer;
  • the x2 pixel samples include at least one of: a pixel sample at the same position as the upper right pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the right of the current image block, a spatially adjacent pixel sample on the upper right of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the lower left pixel sample of the current image block includes motion information units of x3 pixel samples, wherein the x3 pixel samples include at least one pixel sample spatially adjacent to the lower left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower left pixel sample of the current image block, and x3 is a positive integer;
  • the x3 pixel samples include at least one of: a pixel sample at the same position as the lower left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left of the current image block, a spatially adjacent pixel sample on the lower left of the current image block, and a spatially adjacent pixel sample below the current image block.
  • the candidate motion information unit set corresponding to the central pixel sample a1 of the current image block includes motion information units of x5 pixel samples, wherein one of the x5 pixel samples is a pixel sample a2;
  • the position of the central pixel sample a1 in the video frame to which the current image block belongs is the same as the position of the pixel sample a2 in a video frame adjacent to the video frame to which the current image block belongs, and x5 is a positive integer.
  • the processor is configured to: when the reference frame index corresponding to the motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, perform scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the merged motion information unit set i is scaled to the reference frame of the current image block, and perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i on which the scaling processing has been performed, wherein the first prediction direction is forward or backward;
  • the processor is configured to: when the reference frame index corresponding to the motion vector whose prediction direction is forward in the merged motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to the motion vector whose prediction direction is backward in the merged motion information unit set i is different from the backward reference frame index of the current image block, perform scaling processing on the merged motion information unit set i so that the forward motion vector in the merged motion information unit set i is scaled to the forward reference frame of the current image block and the backward motion vector in the merged motion information unit set i is scaled to the backward reference frame of the current image block, and perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i on which the scaling processing has been performed.
  • in the aspect of performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i, the processor is configured to calculate a motion vector of each pixel in the current image block by using the affine motion model and the merged motion information unit set i, and to determine a predicted pixel value of each pixel in the current image block by using the calculated motion vector of each pixel in the current image block;
  • the processor is configured to calculate a motion vector of each pixel block in the current image block by using the affine motion model and the merged motion information unit set i, and to determine a predicted pixel value of each pixel of each pixel block in the current image block by using the calculated motion vector of each pixel block in the current image block.
  • the processor is configured to obtain a motion vector of an arbitrary pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of the 2 pixel samples to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the 2 pixel samples to the length or width of the current image block, wherein the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the two motion information units in the merged motion information unit set i.
  • the horizontal coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is equal to the vertical coordinate coefficient of the motion vector vertical component, and the vertical coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is opposite to the horizontal coordinate coefficient of the motion vector vertical component.
  • the affine motion model is an affine motion model of the following form:
  • vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0
  • vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
  • the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1) respectively, vx is the motion vector horizontal component of the pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of the pixel sample with coordinates (x, y) in the current image block, and w is the length or width of the current image block.
  • the image prediction device is applied to a video encoding device or the image prediction device is applied to a video decoding device.
  • in the aspect of determining the merged motion information unit set i including the two motion information units, the processor is configured to determine, based on the identifier of the merged motion information unit set i obtained from the video code stream, the merged motion information unit set i including the two motion information units from among the N candidate merged motion information unit sets.
  • the processor is further configured to: obtain motion vector residuals of the 2 pixel samples from the video code stream, obtain motion vector predictors of the 2 pixel samples by using motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, and obtain the motion vectors of the 2 pixel samples based respectively on the motion vector predictors of the 2 pixel samples and the motion vector residuals of the 2 pixel samples.
  • in conjunction with the thirteenth possible implementation of the third aspect, in a sixteenth possible implementation of the third aspect, in the case where the image prediction apparatus is applied to a video encoding apparatus, the processor is further configured to: obtain motion vector predictors of the 2 pixel samples by using motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, obtain motion vector residuals of the 2 pixel samples according to the motion vector predictors of the 2 pixel samples, and write the motion vector residuals of the 2 pixel samples into the video code stream.
  • the processor is further configured to write the identifier of the combined motion information unit set i into the video code stream.
  • a fourth aspect of the embodiments of the present invention provides an image processing method, including:
  • the affine motion model is in the following form: vx = a·x + b·y, vy = -b·x + a·y, where:
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • in vy = -b·x + a·y, a is a vertical coordinate coefficient of the vertical component of the affine motion model, and -b is a horizontal coordinate coefficient of the vertical component of the affine motion model.
  • the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form: vx = a·x + b·y + c, vy = -b·x + a·y + d.
  • the calculating, by using the affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block includes:
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
  • obtaining values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the motion vectors of the two pixel samples to the distance between the two pixel samples, and the ratio of the difference between the vertical components of the motion vectors of the two pixel samples to the distance between the two pixel samples;
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
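The coefficient derivation described above can be sketched for the horizontally separated pair of samples. The sketch assumes the sign convention stated elsewhere in the text (vx = a·x + b·y + c, vy = -b·x + a·y + d) and that the two samples are a distance w apart along the horizontal axis; the helper name `affine_coeffs_from_pair` is illustrative.

```python
# Sketch: derive the affine coefficients (a, b) from the ratio of the
# motion-vector component differences to the distance w between the two
# samples; the displacement terms (c, d) come from the first sample.

def affine_coeffs_from_pair(v0, v1, w):
    vx0, vy0 = v0
    vx1, vy1 = v1
    a = (vx1 - vx0) / w          # horizontal-difference ratio
    b = -(vy1 - vy0) / w         # negated vertical-difference ratio
    c, d = vx0, vy0              # displacement coefficients
    return a, b, c, d

a, b, c, d = affine_coeffs_from_pair((2, 3), (6, 1), w=8)
print(a, b, c, d)  # prints 0.5 0.25 2 3
```

With the coefficients in hand, the motion vector of any pixel sample follows directly from vx = a·x + b·y + c and vy = -b·x + a·y + d.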
  • the calculating, by using the affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block includes:
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
  • the affine motion model is specifically:
  • vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0
  • vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
  • (vx0, vy0) is the motion vector of the upper left pixel sample, (vx1, vy1) is the motion vector of the right region pixel sample, and w is the distance between the two pixel samples.
  • the affine motion model is specifically:
  • vx = ((vy2 - vy0)/h)·x + ((vx2 - vx0)/h)·y + vx0
  • vy = -((vx2 - vx0)/h)·x + ((vy2 - vy0)/h)·y + vy0
  • (vx0, vy0) is the motion vector of the upper left pixel sample, (vx2, vy2) is the motion vector of the lower region pixel sample, and h is the distance between the two pixel samples.
  • the affine motion model is specifically:
  • vx = a·x + b·y + vx0, vy = -b·x + a·y + vy0, where
  • a = ((vx3 - vx0)·w1 + (vy3 - vy0)·h1)/(w1^2 + h1^2) and b = ((vx3 - vx0)·h1 - (vy3 - vy0)·w1)/(w1^2 + h1^2)
  • (vx0, vy0) is the motion vector of the upper left pixel sample, (vx3, vy3) is the motion vector of the lower right region pixel sample, h1 is the vertical direction distance between the two pixel samples, w1 is the horizontal direction distance between the two pixel samples, and w1^2 + h1^2 is the square of the distance between the two pixel samples.
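For the diagonally separated pair of samples, the coefficients are obtained from weighted sums of the motion-vector differences divided by the squared distance w1^2 + h1^2. The sketch below assumes the sign convention vx = a·x + b·y, vy = -b·x + a·y used elsewhere in the text, and verifies the algebra by reproducing the known vector at the second control point; the helper names are illustrative.

```python
# Sketch: solve the affine coefficients (a, b) from an upper-left sample
# v0 at (0, 0) and a lower-right sample v3 at (w1, h1), dividing the
# weighted sums of MV differences by the squared distance w1^2 + h1^2.

def coeffs_diag(v0, v3, w1, h1):
    vx0, vy0 = v0
    vx3, vy3 = v3
    d2 = w1 * w1 + h1 * h1       # squared distance between the samples
    a = ((vx3 - vx0) * w1 + (vy3 - vy0) * h1) / d2
    b = ((vx3 - vx0) * h1 - (vy3 - vy0) * w1) / d2
    return a, b

def mv_at(x, y, v0, a, b):
    vx0, vy0 = v0
    return (a * x + b * y + vx0, -b * x + a * y + vy0)

a, b = coeffs_diag((0, 0), (8, 4), w1=4, h1=4)
print(mv_at(4, 4, (0, 0), a, b))  # recovers the second control point (8.0, 4.0)
```

Evaluating the model back at (w1, h1) returning exactly v3 is a quick consistency check on the two weighted-sum expressions.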
  • after the calculating, by using the affine motion model and the motion vector 2-tuple, the motion vector of the arbitrary pixel sample in the current image block, the method further includes:
  • the method further includes:
  • a fifth aspect of the embodiments of the present invention provides an image processing apparatus, including:
  • an obtaining unit configured to obtain a motion vector 2-tuple of the current image block, where the motion vector 2-tuple includes a motion vector of each of the 2 pixel samples in the video frame to which the current image block belongs;
  • the affine motion model is in the following form: vx = a·x + b·y, vy = -b·x + a·y, where:
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • in vy = -b·x + a·y, a is a vertical coordinate coefficient of the vertical component of the affine motion model, and -b is a horizontal coordinate coefficient of the vertical component of the affine motion model.
  • the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form: vx = a·x + b·y + c, vy = -b·x + a·y + d.
  • the calculating unit is specifically configured to:
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
  • the calculating unit is specifically configured to:
  • the calculating unit is specifically configured to:
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
  • the affine motion model is specifically:
  • vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0
  • vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
  • (vx0, vy0) is the motion vector of the upper left pixel sample, (vx1, vy1) is the motion vector of the right region pixel sample, and w is the distance between the two pixel samples.
  • the affine motion model is specifically:
  • vx = ((vy2 - vy0)/h)·x + ((vx2 - vx0)/h)·y + vx0
  • vy = -((vx2 - vx0)/h)·x + ((vy2 - vy0)/h)·y + vy0
  • (vx0, vy0) is the motion vector of the upper left pixel sample, (vx2, vy2) is the motion vector of the lower region pixel sample, and h is the distance between the two pixel samples.
  • the 2 pixel samples include an upper left pixel sample of the current image block and a lower right region pixel sample located at the lower right of the upper left pixel sample;
  • the affine motion model is specifically:
  • vx = a·x + b·y + vx0, vy = -b·x + a·y + vy0, where
  • a = ((vx3 - vx0)·w1 + (vy3 - vy0)·h1)/(w1^2 + h1^2) and b = ((vx3 - vx0)·h1 - (vy3 - vy0)·w1)/(w1^2 + h1^2)
  • (vx0, vy0) is the motion vector of the upper left pixel sample, (vx3, vy3) is the motion vector of the lower right region pixel sample, h1 is the vertical direction distance between the two pixel samples, w1 is the horizontal direction distance between the two pixel samples, and w1^2 + h1^2 is the square of the distance between the two pixel samples.
  • in the case where the image processing apparatus is applied to a video encoding apparatus, the device further includes an encoding unit configured to perform motion compensated predictive coding on the arbitrary pixel sample in the current image block by using the motion vector of the arbitrary pixel sample in the current image block calculated by the calculating unit.
  • in the case where the image processing apparatus is applied to a video decoding apparatus, the device further includes a decoding unit configured to perform motion compensation decoding on the arbitrary pixel sample by using the motion vector of the arbitrary pixel sample in the current image block calculated by the calculating unit, to obtain a pixel reconstruction value of the arbitrary pixel sample.
  • a sixth aspect of the embodiments of the present invention provides an image processing apparatus, including:
  • the processor, by calling code or instructions stored in the memory, is configured to obtain a motion vector 2-tuple of a current image block, where the motion vector 2-tuple includes respective motion vectors of 2 pixel samples in the video frame to which the current image block belongs;
  • the affine motion model is in the following form: vx = a·x + b·y, vy = -b·x + a·y, where:
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • in vy = -b·x + a·y, a is a vertical coordinate coefficient of the vertical component of the affine motion model, and -b is a horizontal coordinate coefficient of the vertical component of the affine motion model.
  • the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form: vx = a·x + b·y + c, vy = -b·x + a·y + d.
  • the processor is configured to obtain values of the coefficients of the affine motion model by using the motion vectors of the two pixel samples and the positions of the two pixel samples;
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
  • in the aspect of calculating, by using the affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block, the processor is configured to obtain values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples, and the ratio of the difference between the vertical components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples;
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
  • in the aspect of calculating, by using the affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block, the processor is configured to obtain values of the coefficients of the affine motion model by using the ratio of a weighted sum of the components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples or to the square of the distance between the two pixel samples;
  • a motion vector of an arbitrary pixel sample in the current image block is obtained using the affine motion model and values of coefficients of the affine motion model.
  • the affine motion model is specifically:
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 1 , vy 1 ) is a motion vector of the right region pixel sample
  • w is the distance between the two pixel samples.
  • the affine motion model is specifically:
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 2 , vy 2 ) is a motion vector of the lower region pixel sample
  • h is the distance between the two pixel samples.
  • the affine motion model is specifically:
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 3 , vy 3 ) is a motion vector of the lower right region pixel sample
  • h 1 is the distance in the vertical direction between the two pixel samples
  • w 1 is the distance in the horizontal direction between the two pixel samples
  • w 1 2 + h 1 2 is the square of the distance between the two pixel samples.
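The model forms above are all instances of the same four-parameter affine model. As an illustrative sketch only (not the patent's normative procedure), the following assumes the second pixel sample lies at horizontal distance w directly to the right of the upper left sample, and derives the motion vector of an arbitrary pixel (x, y) from the two control-point motion vectors:

```python
def affine_mv_two_points(v0, v1, w, x, y):
    """Motion vector of pixel (x, y) under the model vx = a*x + b*y + c,
    vy = -b*x + a*y + d, given the upper left sample's motion vector
    v0 = (vx0, vy0) at position (0, 0) and a second sample's motion vector
    v1 = (vx1, vy1) at position (w, 0)."""
    vx0, vy0 = v0
    vx1, vy1 = v1
    a = (vx1 - vx0) / w    # ratio of horizontal MV difference to the distance w
    b = -(vy1 - vy0) / w   # ratio of vertical MV difference to the distance w
    c, d = vx0, vy0        # displacement coefficients of the model
    return a * x + b * y + c, -b * x + a * y + d
```

At (0, 0) the function returns v0 and at (w, 0) it returns v1, so the model interpolates the two control points exactly; equal control-point vectors reduce it to pure translation.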
  • when the image processing apparatus is applied to a video encoding apparatus, the processor is further configured to: after calculating the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the motion vector 2-tuple, perform motion compensation predictive coding on the arbitrary pixel sample in the current image block by using the calculated motion vector of the arbitrary pixel sample in the current image block.
  • when the image processing apparatus is applied to a video decoding apparatus, the processor is further configured to: after determining the predicted pixel value of the pixel of the arbitrary pixel sample in the current image block, perform motion compensation decoding on the arbitrary pixel sample by using the calculated motion vector of the arbitrary pixel sample in the current image block, to obtain a pixel reconstruction value of the arbitrary pixel sample.
  • An image processing method includes:
  • the affine motion model is in the following form:
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • in vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model, -b is the horizontal coordinate coefficient of the vertical component of the affine motion model, and the coefficients of the affine motion model include a and b;
  • the coefficients of the affine motion model further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, such that the affine motion model is of the form:
  • An eighth aspect of the embodiments of the present invention provides an image processing apparatus, including:
  • a calculating unit, configured to calculate a motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model obtained by the obtaining unit;
  • a prediction unit, configured to determine a predicted pixel value of the pixel of the arbitrary pixel sample by using the motion vector of the arbitrary pixel sample calculated by the calculating unit;
  • the affine motion model is in the following form:
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • in vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model, -b is the horizontal coordinate coefficient of the vertical component of the affine motion model, and the coefficients of the affine motion model include a and b;
  • the coefficients of the affine motion model further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, such that the affine motion model is of the form:
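Given two pixel samples at distinct positions with known motion vectors, the model vx = ax + by + c, vy = -bx + ay + d gives four linear equations in the four coefficients, which can be solved in closed form. The sketch below is a hypothetical illustration (function and variable names are assumptions); for samples that are not axis-aligned it reduces to weighted sums of the motion vector differences divided by the squared distance between the samples:

```python
def solve_affine_coeffs(p0, v0, p1, v1):
    """Solve a, b, c, d of the model vx = a*x + b*y + c, vy = -b*x + a*y + d
    from two pixel samples: positions p0, p1 and motion vectors v0, v1.
    The two samples must be at distinct positions."""
    x0, y0 = p0
    x1, y1 = p1
    vx0, vy0 = v0
    vx1, vy1 = v1
    dx, dy = x1 - x0, y1 - y0
    dvx, dvy = vx1 - vx0, vy1 - vy0
    dist2 = dx * dx + dy * dy          # squared distance between the samples
    a = (dvx * dx + dvy * dy) / dist2  # weighted sum of MV differences
    b = (dvx * dy - dvy * dx) / dist2  # divided by the squared distance
    c = vx0 - a * x0 - b * y0          # horizontal displacement coefficient
    d = vy0 + b * x0 - a * y0          # vertical displacement coefficient
    return a, b, c, d

# Example: samples at (0, 0) and (8, 0) with motion vectors (1, 2) and (3, 4).
coeffs = solve_affine_coeffs((0, 0), (1, 2), (8, 0), (3, 4))
```

Once a, b, c, and d are known, the motion vector of any pixel sample in the current image block follows by evaluating the two model equations at its coordinates.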
  • Pixel value prediction is performed on the current image block by using the affine motion model and the merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i becomes relatively small, the mechanism used in the conventional technology of screening out one motion information unit for each of multiple pixel samples from all possible candidate motion information unit sets of the multiple pixel samples through a large number of calculations is abandoned. This helps improve coding efficiency, also helps reduce the computational complexity of image prediction based on the affine motion model, and thereby makes it feasible to introduce the affine motion model into video coding standards. In addition, because the affine motion model is introduced, the motion of an object can be described more accurately, which helps improve prediction accuracy.
  • Because the number of referenced pixel samples may be 2, this helps further reduce the computational complexity of image prediction based on the affine motion model after the affine motion model is introduced, and also helps reduce the quantity of affine parameter information or motion vector residuals transferred by the encoder.
  • FIG. 1-a and FIG. 1-b are schematic diagrams of several image block partitioning manners according to an embodiment of the present invention;
  • FIG. 1-c is a schematic flowchart of an image prediction method according to an embodiment of the present invention;
  • FIG. 1-d is a schematic diagram of an image block according to an embodiment of the present invention;
  • FIG. 2-a is a schematic flowchart of another image prediction method according to an embodiment of the present invention;
  • FIG. 2-b to FIG. 2-d are schematic diagrams of several methods of determining candidate motion information unit sets of pixel samples according to an embodiment of the present invention;
  • FIG. 2-e is a schematic diagram of vertex coordinates of an image block x according to an embodiment of the present invention;
  • FIG. 2-f is a schematic diagram of affine motion of a pixel according to an embodiment of the present invention;
  • FIG. 2-g is a schematic diagram of rotational motion of a pixel according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of another image prediction method according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of an image prediction apparatus according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of another image prediction apparatus according to an embodiment of the present invention;
  • FIG. 6 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
  • FIG. 7 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
  • FIG. 8 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of another image processing apparatus according to an embodiment of the present invention;
  • FIG. 11 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
  • FIG. 12 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
  • FIG. 13 is a schematic diagram of another image processing apparatus according to an embodiment of the present invention.
  • Embodiments of the present invention provide an image prediction method and related equipment, in order to reduce the computational complexity of image prediction based on an affine motion model.
  • the video sequence consists of a series of pictures (English: picture), the picture is further divided into slices (English: slice), and the slice is further divided into blocks (English: block).
  • Video coding is performed in units of blocks, and encoding may proceed from the upper left corner of the picture, row by row from left to right and from top to bottom.
  • In the H.264 standard, the concept of a block is further extended with the macroblock (English: macroblock, abbreviation: MB).
  • the MB can be further divided into a plurality of prediction blocks (English: partition) that can be used for predictive coding.
  • In the HEVC standard, basic concepts such as a coding unit (English: coding unit, abbreviation: CU), a prediction unit (English: prediction unit, abbreviation: PU), and a transform unit (English: transform unit, abbreviation: TU) are defined, and multiple units such as the CU, the PU, and the TU are obtained through functional division.
  • the PU can correspond to a prediction block and is the basic unit of predictive coding.
  • the CU is further divided into a plurality of PUs according to a division mode.
  • the TU can correspond to a transform block and is a basic unit for transforming the prediction residual.
  • In the high efficiency video coding (English: high efficiency video coding, abbreviation: HEVC) standard, the size of a coding unit may include four levels: 64×64, 32×32, 16×16, and 8×8, and coding units at each level may be divided into prediction units of different sizes according to intra prediction and inter prediction. FIG. 1-a and FIG. 1-b show the corresponding prediction unit division manners.
  • The skip mode and the direct mode are effective tools for improving coding efficiency; at low bit rates, blocks coded in these two modes can account for more than half of an entire coding sequence.
  • When the skip mode is used, only a skip mode flag needs to be transferred in the code stream; the motion vector of the current image block can be derived by using surrounding motion vectors, and the value of a reference block is directly copied as the reconstructed value of the current image block according to the motion vector.
  • When the direct mode is used, the encoder can derive the motion vector of the current image block by using surrounding motion vectors, directly copy the value of a reference block as the predicted value of the current image block according to the motion vector, and perform predictive coding on the current image block at the encoding end by using the predicted value.
  • In the latest high efficiency video coding (English: high efficiency video coding, abbreviation: HEVC) standard, two new inter prediction coding modes are introduced: the merge mode and the adaptive motion vector prediction (English: adaptive motion vector prediction, abbreviation: AMVP) mode.
  • When merge coding is used, a candidate motion information set is constructed by using the motion information (which may include a motion vector (English: motion vector, abbreviation: MV), a prediction direction, a reference frame index, and the like) of coded blocks around the current coding block; through comparison, the candidate motion information with the highest coding efficiency may be selected as the motion information of the current coding block, the predicted value of the current coding block is found in the reference frame, and predictive coding is performed on the current coding block; at the same time, an index value indicating from which surrounding coded block the motion information is selected may be written into the code stream.
  • When the adaptive motion vector prediction mode is used, the motion vectors of surrounding coded blocks are used as predictors of the motion vector of the current coding block; the motion vector with the highest coding efficiency may be selected to predict the motion vector of the current coding block, and an index value indicating which surrounding motion vector is selected may be written into the video code stream.
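The selection step described above can be sketched as picking the candidate with the lowest cost and recording its index. This is an illustrative sketch only (the function names and the cost function are assumptions, not the standard's normative derivation):

```python
def choose_mv_predictor(candidates, cost):
    """Return (index, predictor) for the candidate motion vector with the
    lowest cost; the index is what would be written into the code stream."""
    best_idx = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    return best_idx, candidates[best_idx]

# Example: cost each candidate by how far it is from the actual motion vector,
# so the cheapest candidate leaves the smallest motion vector residual.
actual = (3, 1)
cands = [(0, 0), (4, 4), (2, 1)]
idx, pred = choose_mv_predictor(
    cands, lambda mv: abs(mv[0] - actual[0]) + abs(mv[1] - actual[1]))
```

In a real encoder the cost would be a rate-distortion estimate rather than a plain vector distance, but the structure of the decision is the same.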
  • the image prediction method provided by the embodiment of the present invention is described below.
  • The image prediction method provided by the embodiments of the present invention is performed by a video coding apparatus or a video decoding apparatus, where the video coding apparatus or the video decoding apparatus may be any apparatus that needs to output or store video, such as a laptop computer, a tablet computer, a personal computer, a mobile phone, or a video server.
  • An image prediction method includes: determining 2 pixel samples in a current image block, and determining a candidate motion information unit set corresponding to each of the 2 pixel samples, where the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit; determining a merged motion information unit set i including 2 motion information units, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and the motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward; and performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i.
  • FIG. 1-c is a schematic flowchart of an image prediction method according to an embodiment of the present invention.
  • an image prediction method provided by an embodiment of the present invention may include:
  • S101: Determine 2 pixel samples in the current image block, and determine a candidate motion information unit set corresponding to each of the 2 pixel samples.
  • The candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit.
  • the pixel samples mentioned in the embodiments of the present invention may be pixel points or pixel blocks including at least two pixel points.
  • The motion information unit mentioned in the embodiments of the present invention may include a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward; that is, one motion information unit may include one motion vector, or may include two motion vectors with different prediction directions.
  • If the prediction direction corresponding to a motion information unit is forward, the motion information unit includes a motion vector whose prediction direction is forward but does not include a motion vector whose prediction direction is backward. If the prediction direction corresponding to a motion information unit is backward, the motion information unit includes a motion vector whose prediction direction is backward but does not include a motion vector whose prediction direction is forward. If the prediction direction corresponding to a motion information unit is unidirectional, the motion information unit includes a motion vector whose prediction direction is forward but does not include a motion vector whose prediction direction is backward, or includes a motion vector whose prediction direction is backward but does not include a motion vector whose prediction direction is forward. If the prediction direction corresponding to a motion information unit is bidirectional, the motion information unit includes a motion vector whose prediction direction is forward and a motion vector whose prediction direction is backward.
  • Optionally, the 2 pixel samples include 2 pixel samples among the upper left pixel sample, the upper right pixel sample, the lower left pixel sample, and the central pixel sample a1 of the current image block.
  • The upper left pixel sample of the current image block may be the upper left vertex of the current image block or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is the lower left vertex of the current image block or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is the upper right vertex of the current image block or a pixel block in the current image block that includes the upper right vertex of the current image block; the central pixel sample a1 of the current image block is the central pixel point of the current image block or a pixel block in the current image block that includes the central pixel point of the current image block.
  • the size of the pixel block is, for example, 2*2, 1*2, 4*2, 4*4, or other size.
  • An image block may include a plurality of pixel blocks.
  • For an image block whose width and height are both w, when w is an odd number (for example, w is equal to 3, 5, 7, or 11), the central pixel point of the image block is unique; when w is an even number (for example, w is equal to 4, 6, 8, or 16), the image block has multiple central pixel points. The central pixel sample of the image block may be any central pixel point or a specified central pixel point of the image block; alternatively, the central pixel sample of the image block may be a pixel block in the image block that includes any central pixel point, or a pixel block in the image block that includes a specified central pixel point.
  • For example, the image block of size 4*4 shown in FIG. 1-d has four central pixel points A1, A2, A3, and A4, and the specified central pixel point may be pixel point A1 (the upper left central pixel point), pixel point A2 (the lower left central pixel point), pixel point A3 (the upper right central pixel point), or pixel point A4 (the lower right central pixel point), and so on.
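The central pixel points described above can be enumerated directly. The sketch below is illustrative (the 0-indexed (x, y) coordinate convention and the ordering of the candidates are assumptions, not part of the patent text); odd w yields the unique centre, even w the four candidate centres:

```python
def central_pixel_points(w):
    """Central pixel point(s) of a w*w image block, as 0-indexed (x, y).
    Odd w: one unique central pixel. Even w: four candidates, returned in
    the order upper left, upper right, lower left, lower right."""
    if w % 2 == 1:
        return [(w // 2, w // 2)]
    lo, hi = w // 2 - 1, w // 2   # the two middle coordinates of an even side
    return [(lo, lo), (hi, lo), (lo, hi), (hi, hi)]
```

For a 4*4 block this returns the four interior pixels, any one of which (or a specified one) may serve as the central pixel sample.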
  • Each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples.
  • The motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward.
  • the candidate motion information unit set corresponding to the pixel sample 001 is the candidate motion information unit set 011.
  • the candidate motion information unit set corresponding to the pixel sample 002 is the candidate motion information unit set 022.
  • the merged motion information unit set i includes a motion information unit C01 and a motion information unit C02, wherein the motion information unit C01 may be selected from the candidate motion information unit set 011, wherein the motion information unit C02 may be selected from the candidate motion information unit set 022. And so on.
  • For example, the merged motion information unit set i includes the motion information unit C01 and the motion information unit C02, where either of the motion information unit C01 and the motion information unit C02 may include a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward.
  • Therefore, the merged motion information unit set i may include 2 motion vectors (the prediction directions corresponding to the 2 motion vectors may both be forward or both be backward, or the 2 motion vectors may include one motion vector whose prediction direction is forward and one motion vector whose prediction direction is backward), may include 4 motion vectors (the 4 motion vectors may include 2 motion vectors whose prediction direction is forward and 2 motion vectors whose prediction direction is backward), or may include 3 motion vectors (the 3 motion vectors may include 1 motion vector whose prediction direction is forward and 2 motion vectors whose prediction direction is backward, or may include 2 motion vectors whose prediction direction is forward and 1 motion vector whose prediction direction is backward).
  • the current image block may be a current coding block or a current decoding block.
  • In this embodiment, pixel value prediction is performed on the current image block by using the affine motion model and the merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i becomes relatively small, the mechanism used in the conventional technology of screening out one motion information unit for each of multiple pixel samples from all possible candidate motion information unit sets of the multiple pixel samples through a large number of calculations is abandoned, which helps improve coding efficiency and also helps reduce the computational complexity of image prediction based on the affine motion model.
  • the image prediction method provided by this embodiment may be applied to a video encoding process or may be applied to a video decoding process.
  • The manner of determining the merged motion information unit set i including the 2 motion information units may be varied.
  • Optionally, determining a merged motion information unit set i including 2 motion information units may include: determining, from among N candidate merged motion information unit sets, the merged motion information unit set i including the 2 motion information units, where each motion information unit included in each of the N candidate merged motion information unit sets is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, N is a positive integer, the N candidate merged motion information unit sets are different from each other, and each of the N candidate merged motion information unit sets includes 2 motion information units.
  • Two candidate merged motion information unit sets being different may mean that the motion information units included in the two candidate merged motion information unit sets are not completely the same.
  • Two motion information units being different may mean that the motion vectors included in the 2 motion information units are different, the prediction directions corresponding to the motion vectors included in the 2 motion information units are different, or the reference frame indexes corresponding to the motion vectors included in the 2 motion information units are different.
  • Two motion information units being the same may mean that the motion vectors included in the 2 motion information units are the same, the prediction directions corresponding to the motion vectors included in the 2 motion information units are the same, and the reference frame indexes corresponding to the motion vectors included in the 2 motion information units are the same.
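The sameness criterion above compares motion vectors, prediction directions, and reference frame indexes together. A minimal sketch follows; the MotionInfoUnit container and its field names are hypothetical illustrations, not structures from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

MV = Tuple[int, int]  # motion vector as (horizontal, vertical) components

@dataclass(frozen=True)
class MotionInfoUnit:
    """Hypothetical motion information unit: a forward and/or backward
    motion vector, each paired with its reference frame index. A field left
    as None means the unit has no motion vector for that prediction
    direction."""
    fwd_mv: Optional[MV] = None
    fwd_ref_idx: Optional[int] = None
    bwd_mv: Optional[MV] = None
    bwd_ref_idx: Optional[int] = None

# Dataclass equality compares every field, so two units are "the same" only
# when vectors, prediction directions, and reference frame indexes all match.
u1 = MotionInfoUnit(fwd_mv=(4, -2), fwd_ref_idx=0)
u2 = MotionInfoUnit(fwd_mv=(4, -2), fwd_ref_idx=1)  # same MV, other ref frame
```

Here u1 and u2 differ even though their motion vectors match, because the reference frame indexes differ; this is exactly the distinction the text draws.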
  • Optionally, determining, from among the N candidate merged motion information unit sets, the merged motion information unit set i including the 2 motion information units may include: determining, from among the N candidate merged motion information unit sets, based on an identifier of the merged motion information unit set i obtained from the video code stream, the merged motion information unit set i including the 2 motion information units.
  • Optionally, when the image prediction method is applied to a video encoding process, the method may further include: writing the identifier of the merged motion information unit set i into the video code stream.
  • the identifier of the merged motion information unit set i may be any information that can identify the merged motion information unit set i.
  • For example, the identifier of the merged motion information unit set i may be an index of the merged motion information unit set i in a list of merged motion information unit sets.
  • Optionally, when the image prediction method is applied to a video encoding process, the method may further include: obtaining motion vector predictors of the 2 pixel samples by using the motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, obtaining motion vector residuals of the 2 pixel samples according to the motion vector predictors of the 2 pixel samples, and writing the motion vector residuals of the 2 pixel samples into the video code stream.
  • Optionally, when the image prediction method is applied to a video decoding process, the method may further include: decoding the motion vector residuals of the 2 pixel samples from the video code stream, obtaining motion vector predictors of the 2 pixel samples by using the motion vectors of spatially adjacent or temporally adjacent pixel samples of the 2 pixel samples, and obtaining the motion vectors of the 2 pixel samples separately based on the motion vector predictors of the 2 pixel samples and the motion vector residuals of the 2 pixel samples.
  • Optionally, determining the merged motion information unit set i including the 2 motion information units from among the N candidate merged motion information unit sets may include: determining, from among the N candidate merged motion information unit sets, based on distortion or a rate distortion cost, the merged motion information unit set i including the 2 motion information units.
  • Specifically, the rate distortion cost corresponding to the merged motion information unit set i is less than or equal to the rate distortion cost corresponding to any merged motion information unit set, other than the merged motion information unit set i, among the N candidate merged motion information unit sets. Similarly, the distortion corresponding to the merged motion information unit set i is less than or equal to the distortion corresponding to any merged motion information unit set, other than the merged motion information unit set i, among the N candidate merged motion information unit sets.
  • The rate distortion cost corresponding to a candidate merged motion information unit set (for example, the merged motion information unit set i) in the N candidate merged motion information unit sets may, for example, be obtained by performing pixel value prediction on an image block (for example, the current image block) by using the candidate merged motion information unit set.
  • The distortion corresponding to a candidate merged motion information unit set (for example, the merged motion information unit set i) in the foregoing N candidate merged motion information unit sets may, for example, be the distortion between the original pixel values of an image block (for example, the current image block) and the predicted pixel values of the image block that are obtained by performing pixel value prediction on the image block by using the candidate merged motion information unit set (that is, the distortion between the original pixel values and the predicted pixel values of the image block). Specifically, the distortion may be, for example, the sum of squared differences or the sum of absolute differences (English: sum of absolute differences, abbreviation: SAD) between the original pixel values of the image block (for example, the current image block) and the predicted pixel values of the image block obtained by performing pixel value prediction on the image block by using the candidate merged motion information unit set, the error sum, or another distortion parameter that can measure distortion.
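For concreteness, the SAD and sum-of-squared-differences measures named above can be computed over a block as follows (a plain sketch; representing a block as a list of pixel rows is an assumption of this example):

```python
def sad(original, predicted):
    """Sum of absolute differences between original and predicted pixels."""
    return sum(abs(o - p) for ro, rp in zip(original, predicted)
               for o, p in zip(ro, rp))

def ssd(original, predicted):
    """Sum of squared differences between original and predicted pixels."""
    return sum((o - p) ** 2 for ro, rp in zip(original, predicted)
               for o, p in zip(ro, rp))

# Example 2*2 block: original pixel values vs. predicted pixel values.
orig = [[100, 102], [98, 101]]
pred = [[101, 100], [98, 103]]
# Per-pixel differences are -1, 2, 0, -2, so SAD = 5 and SSD = 9.
```

The candidate set whose prediction minimizes such a distortion (or the rate distortion cost built from it) is the one selected as the merged motion information unit set i.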
  • N is a positive integer.
  • N described above may be, for example, equal to 1, 2, 3, 4, 5, 6, 8, or other values.
  • Optionally, the motion information units in any one of the N candidate merged motion information unit sets may be different from each other.
  • Optionally, the N candidate merged motion information unit sets meet at least one of a first condition, a second condition, a third condition, a fourth condition, or a fifth condition.
  • The first condition includes that the motion mode of the current image block indicated by the motion information units in any one of the N candidate merged motion information unit sets is non-translational motion.
  • For example, if all the motion vectors corresponding to a first prediction direction in a candidate merged motion information unit set are equal, the motion mode of the current image block indicated by the motion information units in the candidate merged motion information unit set may be considered to be translational motion; otherwise, the motion mode of the current image block indicated by the motion information units in the candidate merged motion information unit set may be considered to be non-translational motion, where the first prediction direction is forward or backward.
  • The second condition includes that the prediction directions corresponding to the 2 motion information units in any one of the N candidate merged motion information unit sets are the same.
  • For example, when the two motion information units both include a motion vector whose prediction direction is forward and a motion vector whose prediction direction is backward, it indicates that the prediction directions corresponding to the two motion information units are the same. When one of the two motion information units includes a motion vector whose prediction direction is forward and a motion vector whose prediction direction is backward, and the other motion information unit includes a motion vector whose prediction direction is forward but does not include a motion vector whose prediction direction is backward, or the other motion information unit includes a motion vector whose prediction direction is backward but does not include a motion vector whose prediction direction is forward, it may indicate that the prediction directions corresponding to the two motion information units are different.
  • When one of the two motion information units includes a motion vector whose prediction direction is forward but does not include a motion vector whose prediction direction is backward, and the other motion information unit includes a motion vector whose prediction direction is backward but does not include a motion vector whose prediction direction is forward, it may indicate that the prediction directions corresponding to the two motion information units are different.
  • When the two motion information units both include a motion vector whose prediction direction is forward but neither of the two motion information units includes a motion vector whose prediction direction is backward, it indicates that the prediction directions corresponding to the two motion information units are the same. When the two motion information units both include a motion vector whose prediction direction is backward but neither of the two motion information units includes a motion vector whose prediction direction is forward, it indicates that the prediction directions corresponding to the two motion information units are the same.
  • The third condition includes that the reference frame indexes corresponding to the 2 motion information units in any one of the N candidate merged motion information unit sets are the same.
  • For example, when the two motion information units both include a motion vector whose prediction direction is forward and a motion vector whose prediction direction is backward, the reference frame indexes corresponding to the forward motion vectors in the two motion information units are the same, and the reference frame indexes corresponding to the backward motion vectors in the two motion information units are the same, it may indicate that the reference frame indexes corresponding to the two motion information units are the same.
  • When one of the two motion information units includes a motion vector whose prediction direction is forward and a motion vector whose prediction direction is backward, and the other motion information unit includes a motion vector whose prediction direction is forward but does not include a motion vector whose prediction direction is backward, or the other motion information unit includes a motion vector whose prediction direction is backward but does not include a motion vector whose prediction direction is forward, the prediction directions corresponding to the two motion information units are different, and the reference frame indexes corresponding to the two motion information units may be different.
  • when one of the two motion information units includes a motion vector whose prediction direction is forward but no motion vector whose prediction direction is backward, while the other motion information unit includes a motion vector whose prediction direction is backward but no motion vector whose prediction direction is forward, this may indicate that the reference frame indexes corresponding to the two motion information units are different.
  • when both motion information units include a motion vector whose prediction direction is forward but no motion vector whose prediction direction is backward, and the reference frame indexes corresponding to the forward motion vectors of the two motion information units are different, this may indicate that the reference frame indexes corresponding to the two motion information units are different.
  • when both motion information units include a motion vector whose prediction direction is backward but no motion vector whose prediction direction is forward, and the reference frame indexes corresponding to the backward motion vectors of the two motion information units are different, this may indicate that the reference frame indexes corresponding to the two motion information units are different.
  • the fourth condition includes that the absolute value of the difference between the motion vector horizontal components of two motion information units in any one of the N candidate combined motion information unit sets is less than or equal to a horizontal component threshold, or that the absolute value of the difference between the motion vector horizontal component of one motion information unit in any one of the N candidate combined motion information unit sets and that of the motion information unit of the pixel sample Z is less than or equal to a horizontal component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples.
  • the horizontal component threshold may, for example, be equal to 1/3 of the width of the current image block, 1/2 of the width of the current image block, 2/3 of the width of the current image block, 3/4 of the width of the current image block, or another size.
  • the fifth condition includes that the absolute value of the difference between the motion vector vertical components of two motion information units in any one of the N candidate combined motion information unit sets is less than or equal to a vertical component threshold, or that the absolute value of the difference between the motion vector vertical component of one motion information unit in any one of the N candidate combined motion information unit sets and that of the motion information unit of the pixel sample Z is less than or equal to a vertical component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples.
  • the vertical component threshold may, for example, be equal to 1/3 of the height of the current image block, 1/2 of the height of the current image block, 2/3 of the height of the current image block, 3/4 of the height of the current image block, or another size.
  • the pixel sample Z may be the lower left pixel sample, the central pixel sample, or another pixel sample of the current image block. Other cases can be deduced by analogy.
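As an illustration, the fourth and fifth conditions above can be sketched as a simple predicate (a minimal sketch, not taken from the patent: a motion vector is modelled as an (x, y) tuple, and the function name and the default threshold fractions of 1/2 are illustrative assumptions):

```python
def satisfies_component_thresholds(mv_a, mv_b, block_width, block_height,
                                   horiz_frac=0.5, vert_frac=0.5):
    # Fourth condition: |difference of horizontal components| <= threshold.
    # Fifth condition: |difference of vertical components| <= threshold.
    # The thresholds are tied to the block size, e.g. 1/2 of width/height.
    horiz_threshold = horiz_frac * block_width
    vert_threshold = vert_frac * block_height
    return (abs(mv_a[0] - mv_b[0]) <= horiz_threshold
            and abs(mv_a[1] - mv_b[1]) <= vert_threshold)

# For a 16x16 block with a 1/2 threshold, a horizontal difference of 17
# exceeds the threshold of 8, so the candidate pair is rejected:
print(satisfies_component_thresholds((3, 1), (20, 2), 16, 16))  # False
print(satisfies_component_thresholds((3, 1), (5, 2), 16, 16))   # True
```

Bounding the component differences in this way keeps the affine deformation of the block moderate, which is the practical purpose of these two conditions.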
  • the candidate motion information unit set corresponding to the upper left pixel sample of the current image block includes motion information units of x1 pixel samples, where the x1 pixel samples include at least one pixel sample spatially adjacent to the upper left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper left pixel sample of the current image block, and x1 is a positive integer.
  • for example, the x1 pixel samples include only pixel samples spatially adjacent to the upper left pixel sample of the current image block and/or pixel samples temporally adjacent to the upper left pixel sample of the current image block.
  • for example, x1 may be equal to 1, 2, 3, 4, 5, 6, or another value.
  • for example, the x1 pixel samples include: a pixel sample that has the same position as the upper left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs; and at least one of a spatially adjacent pixel sample on the left side of the current image block, a spatially adjacent pixel sample on the upper left of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the upper right pixel sample of the current image block includes motion information units of x2 pixel samples, where the x2 pixel samples include at least one pixel sample spatially adjacent to the upper right pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper right pixel sample of the current image block, and x2 is a positive integer.
  • for example, x2 may be equal to 1, 2, 3, 4, 5, 6, or another value.
  • for example, the x2 pixel samples include: a pixel sample that has the same position as the upper right pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs; and at least one of a spatially adjacent pixel sample on the right side of the current image block, a spatially adjacent pixel sample on the upper right of the current image block, and a spatially adjacent pixel sample above the current image block.
  • the candidate motion information unit set corresponding to the lower left pixel sample of the current image block includes motion information units of x3 pixel samples, where the x3 pixel samples include at least one pixel sample spatially adjacent to the lower left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower left pixel sample of the current image block, and x3 is a positive integer.
  • for example, the x3 pixel samples include only pixel samples spatially adjacent to the lower left pixel sample of the current image block and/or pixel samples temporally adjacent to the lower left pixel sample of the current image block.
  • for example, x3 may be equal to 1, 2, 3, 4, 5, 6, or another value.
  • for example, the x3 pixel samples include: a pixel sample that has the same position as the lower left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs; and at least one of a spatially adjacent pixel sample on the left side of the current image block, a spatially adjacent pixel sample on the lower left of the current image block, and a spatially adjacent pixel sample below the current image block.
  • the candidate motion information unit set corresponding to the central pixel sample a1 of the current image block includes motion information units of x5 pixel samples, where one of the x5 pixel samples is the pixel sample a2. For example, the x5 pixel samples include only the pixel sample a2. The position of the central pixel sample a1 in the video frame to which the current image block belongs is the same as the position of the pixel sample a2 in a video frame adjacent to the video frame to which the current image block belongs, and x5 is a positive integer.
  • performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i may include: when a reference frame index corresponding to a motion vector whose prediction direction is a first prediction direction in the combined motion information unit set i is different from the reference frame index of the current image block, performing scaling processing on the combined motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the combined motion information unit set i is scaled to the reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i after the scaling processing, where the first prediction direction is forward or backward;
  • alternatively, performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i may include: when the reference frame index corresponding to the forward motion vector in the combined motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to the backward motion vector in the combined motion information unit set i is different from the backward reference frame index of the current image block, performing scaling processing on the combined motion information unit set i so that the forward motion vector in the combined motion information unit set i is scaled to the forward reference frame of the current image block and the backward motion vector in the combined motion information unit set i is scaled to the backward reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i after the scaling processing.
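The scaling processing above maps a motion vector onto the reference frame of the current image block. The text does not fix an exact formula; a common convention in video codecs is to scale by the ratio of temporal distances between frames. A hedged sketch under that assumption, using picture order counts (POC) as the measure of temporal distance:

```python
def scale_motion_vector(mv, cur_poc, mv_ref_poc, target_ref_poc):
    # Scale a motion vector that points to the frame mv_ref_poc so that
    # it points to target_ref_poc instead. The ratio of temporal
    # distances is a common codec convention, used here as an
    # illustrative assumption rather than the patent's exact method.
    td = cur_poc - mv_ref_poc      # distance to the vector's own reference
    tb = cur_poc - target_ref_poc  # distance to the target reference
    if td == 0:
        return mv
    scale = tb / td
    return (mv[0] * scale, mv[1] * scale)

# A vector pointing 2 frames back, rescaled to the reference 1 frame back:
print(scale_motion_vector((8, -4), cur_poc=10, mv_ref_poc=8,
                          target_ref_poc=9))  # (4.0, -2.0)
```

After this step, all motion vectors in the combined motion information unit set i refer to the same reference frame as the current image block, which is what the affine interpolation that follows requires.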
  • performing pixel value prediction on the current image block by using a non-translational motion model and the combined motion information unit set i after the scaling processing may include: performing motion estimation processing on the motion vectors in the combined motion information unit set i after the scaling processing to obtain a combined motion information unit set i after the motion estimation processing, and performing pixel value prediction on the current image block by using the non-translational motion model and the combined motion information unit set i after the motion estimation processing.
  • performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i may include: calculating a motion vector of each pixel point in the current image block by using the affine motion model and the combined motion information unit set i, and determining a predicted pixel value of each pixel point in the current image block by using the calculated motion vector of each pixel point; or calculating a motion vector of each pixel block in the current image block by using the affine motion model and the combined motion information unit set i, and determining a predicted pixel value of each pixel point of each pixel block in the current image block by using the calculated motion vector of each pixel block.
  • tests show that if the motion vector of each pixel block in the current image block is first calculated by using the affine motion model and the combined motion information unit set i, and the calculated motion vector of each pixel block is then used to determine the predicted pixel value of each pixel point of that pixel block, the motion vectors are calculated at the granularity of pixel blocks, which helps greatly reduce computational complexity.
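The complexity saving from pixel-block granularity can be illustrated by counting motion vector evaluations (a back-of-the-envelope sketch; the 16x16 block and 4x4 granularity are example sizes, not values fixed by the text):

```python
from math import ceil

def mv_computations(block_w, block_h, granularity=1):
    # Count how many affine motion vectors must be evaluated when motion
    # vectors are computed per pixel (granularity=1) or once per
    # granularity x granularity pixel block.
    return ceil(block_w / granularity) * ceil(block_h / granularity)

print(mv_computations(16, 16, 1))  # 256 evaluations, per-pixel
print(mv_computations(16, 16, 4))  # 16 evaluations, per 4x4 pixel block
```

The per-block variant trades a small loss in motion-field precision for a 16x reduction in affine evaluations in this example, which is the trade-off the passage above refers to.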
  • performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i may include: performing motion estimation processing on the motion vectors in the combined motion information unit set i to obtain a combined motion information unit set i after the motion estimation processing, and performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i after the motion estimation processing.
  • performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i includes: obtaining a motion vector of an arbitrary pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of two motion information units in the combined motion information unit set i to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the two motion information units in the combined motion information unit set i to the length or width of the current image block.
  • alternatively, performing pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i may include: obtaining a motion vector of an arbitrary pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of the 2 pixel samples to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the 2 pixel samples to the length or width of the current image block, where the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the two motion information units in the combined motion information unit set i (for example, the motion vectors of the 2 pixel samples are the motion vectors of the two motion information units in the combined motion information unit set i, or are obtained based on the motion vectors of the two motion information units in the combined motion information unit set i and prediction residuals).
  • the horizontal coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is equal to the vertical coordinate coefficient of the motion vector vertical component, and the vertical coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is opposite to the horizontal coordinate coefficient of the motion vector vertical component.
  • the affine motion model may be, for example, an affine motion model of the following form:

    vx = ((vx1 − vx0) / w) · x − ((vy1 − vy0) / w) · y + vx0
    vy = ((vy1 − vy0) / w) · x + ((vx1 − vx0) / w) · y + vy0

    where the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1) respectively, vx is the motion vector horizontal component of a pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of the pixel sample with coordinates (x, y) in the current image block, and w is the length or width of the current image block.
  • (vx2, vy2) is a motion vector of another pixel sample, different from the above 2 pixel samples, in the current image block. For example, (vx2, vy2) may be the motion vector of the lower left pixel sample or the central pixel sample of the current image block; as another example, (vx2, vy2) may be the motion vector of the upper right pixel sample or the central pixel sample of the current image block.
  • when a pixel sample is a pixel block, the coordinates of the pixel sample may be the coordinates of any pixel point in the pixel sample, or the coordinates of a specified pixel point in the pixel sample (for example, the coordinates of the upper left pixel point, the lower left pixel point, the upper right pixel point, or the central pixel point of the pixel sample).
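The coefficient relationship described above corresponds to the common 4-parameter affine model, which can be sketched directly in code (a minimal sketch; the function and variable names are illustrative, with (vx0, vy0) and (vx1, vy1) the motion vectors of the 2 pixel samples and w the block width):

```python
def affine_motion_vector(x, y, v0, v1, w):
    # 4-parameter affine model: the horizontal coordinate coefficient of
    # vx equals the vertical coordinate coefficient of vy (a), and the
    # vertical coordinate coefficient of vx is the opposite of the
    # horizontal coordinate coefficient of vy (-b vs. b).
    vx0, vy0 = v0
    vx1, vy1 = v1
    a = (vx1 - vx0) / w
    b = (vy1 - vy0) / w
    vx = a * x - b * y + vx0
    vy = b * x + a * y + vy0
    return vx, vy

# At (0, 0) the model reproduces the first sample's motion vector, and
# at (w, 0) the second sample's motion vector:
print(affine_motion_vector(0, 0, (2.0, 1.0), (4.0, 3.0), w=16))   # (2.0, 1.0)
print(affine_motion_vector(16, 0, (2.0, 1.0), (4.0, 3.0), w=16))  # (4.0, 3.0)
```

Evaluating this function at every pixel point (or once per pixel block) yields the motion field used for pixel value prediction.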
  • for each image block in the current video frame, pixel value prediction may be performed in a manner similar to the pixel value prediction manner corresponding to the current image block. Certainly, some image blocks in the current video frame may also undergo pixel value prediction in a manner different from the pixel value prediction manner corresponding to the current image block.
  • FIG. 2-a is a schematic flowchart of another image prediction method according to another embodiment of the present invention.
  • an image prediction method implemented in a video encoding apparatus is mainly described as an example.
  • another image prediction method provided by another embodiment of the present invention may include:
  • the video encoding device determines two pixel samples in the current image block.
  • the description takes as an example that the 2 pixel samples include two of the upper left pixel sample, the upper right pixel sample, the lower left pixel sample, and the central pixel sample a1 of the current image block.
  • the 2 pixel samples include an upper left pixel sample and an upper right pixel sample of the current image block.
  • the scenario in which the two pixel samples are other pixel samples of the current image block may be analogized.
  • the upper left pixel sample of the current image block may be an upper left vertex of the current image block or a pixel block in the current image block that includes an upper left vertex of the current image block; a lower left pixel of the current image block The sample is a lower left vertex of the current image block or a pixel block in the current image block that includes a lower left vertex of the current image block; an upper right pixel sample of the current image block is an upper right vertex of the current image block or a pixel block in the current image block that includes an upper right vertex of the current image block; a central pixel sample a1 of the current image block is a central pixel point of the current image block or an inclusion in the current image block A block of pixels of a central pixel of the current image block.
  • the size of the pixel block is, for example, 2*2, 1*2, 4*2, 4*4, or other sizes.
  • the video encoding apparatus determines a candidate motion information unit set corresponding to each of the two pixel samples.
  • the candidate motion information unit set corresponding to each pixel sample includes at least one motion information unit of the candidate.
  • the pixel samples mentioned in the embodiments of the present invention may be pixel points or pixel blocks including at least two pixel points.
  • the candidate motion information unit set S1 corresponding to the upper left pixel sample of the current image block may include motion information units of x1 pixel samples.
  • the x1 pixel samples include: a pixel sample Col-LT that has the same position as the upper left pixel sample LT of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs; and at least one of a spatially adjacent image block C on the left side of the current image block, a spatially adjacent image block A on the upper left of the current image block, and a spatially adjacent image block B above the current image block.
  • for example, the motion information unit of the spatially adjacent image block C on the left side of the current image block, the motion information unit of the spatially adjacent image block A on the upper left of the current image block, and the motion information unit of the spatially adjacent image block B above the current image block may be acquired first and added to the candidate motion information unit set S1 corresponding to the upper left pixel sample of the current image block. If some or all of the motion information units of the image blocks C, A, and B are the same, deduplication processing is further performed on the candidate motion information unit set S1. If the motion information unit of the pixel sample Col-LT, which has the same position as the upper left pixel sample LT of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, is different from every motion information unit in the candidate motion information unit set S1 after the deduplication processing, the motion information unit of the pixel sample Col-LT is added to the candidate motion information unit set S1 after the deduplication processing. If the number of motion information units in the candidate motion information unit set S1 is still less than 3, zero motion information units may be added to the candidate motion information unit set S1 until the number of motion information units in the candidate motion information unit set S1 is equal to 3.
  • if the video frame to which the current image block belongs is a forward prediction frame, the zero motion information unit added to the candidate motion information unit set S1 includes a zero motion vector whose prediction direction is forward but may not include a zero motion vector whose prediction direction is backward. If the video frame to which the current image block belongs is a backward prediction frame, the zero motion information unit added to the candidate motion information unit set S1 includes a zero motion vector whose prediction direction is backward but may not include a zero motion vector whose prediction direction is forward. In addition, if the video frame to which the current image block belongs is a bidirectional prediction frame, the zero motion information unit added to the candidate motion information unit set S1 includes a zero motion vector whose prediction direction is forward and a zero motion vector whose prediction direction is backward, where the reference frame indexes corresponding to the motion vectors in different zero motion information units added to the candidate motion information unit set S1 may be different, and the corresponding reference frame index may be, for example, 0, 1, 2, 3, or another value.
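The construction of the candidate set S1 described above (collect the spatial units, deduplicate, conditionally add the temporal unit, then pad with zero motion information units up to 3) can be sketched as follows. This is a simplified sketch under stated assumptions: a motion information unit is modelled as a hashable tuple, the neighbour and temporal candidates are passed in directly, and availability checks and reference frame index assignment for the zero units are omitted:

```python
def build_candidate_set_s1(spatial_units, temporal_unit, target_size=3,
                           zero_unit=((0, 0), 'forward', 0)):
    # Add the motion information units of the spatial neighbours
    # (blocks C, A, B), removing duplicates (deduplication processing).
    s1 = []
    for unit in spatial_units:
        if unit not in s1:
            s1.append(unit)
    # Add the temporal unit (Col-LT) only if it differs from every
    # unit already in the set.
    if temporal_unit not in s1:
        s1.append(temporal_unit)
    # Pad with zero motion information units until the set holds
    # target_size units.
    while len(s1) < target_size:
        s1.append(zero_unit)
    return s1

mv_c = ((1, 0), 'forward', 0)  # units of blocks C and A happen to be equal
s1 = build_candidate_set_s1([mv_c, mv_c], ((2, 1), 'forward', 0))
print(len(s1))  # 3: one spatial unit, Col-LT, one zero motion unit
```

The sets S2 and S3 below follow the same pattern with a target size of 2 instead of 3.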
  • the candidate motion information unit set S2 corresponding to the upper right pixel sample of the current image block may include motion information units of x2 image blocks.
  • the x2 image blocks may include: a pixel sample Col-RT that has the same position as the upper right pixel sample RT of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs; and at least one of a spatially adjacent image block E on the upper right of the current image block and a spatially adjacent image block D above the current image block.
  • for example, the motion information unit of the spatially adjacent image block E on the upper right of the current image block and the motion information unit of the spatially adjacent image block D above the current image block may be acquired first and added to the candidate motion information unit set S2 corresponding to the upper right pixel sample of the current image block. If the two motion information units are the same, deduplication processing may be performed on the candidate motion information unit set S2 (the number of motion information units in the candidate motion information unit set S2 after the deduplication processing is 1). If the motion information unit of the pixel sample Col-RT, which has the same position as the upper right pixel sample RT of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, is the same as a motion information unit in the candidate motion information unit set S2 after the deduplication processing, zero motion information units may be further added to the candidate motion information unit set S2 until the number of motion information units in the candidate motion information unit set S2 is equal to 2. In addition, if the motion information unit of the pixel sample Col-RT is different from every motion information unit in the candidate motion information unit set S2 after the deduplication processing, the motion information unit of the pixel sample Col-RT is added to the candidate motion information unit set S2 after the deduplication processing; if the number of motion information units in the candidate motion information unit set S2 is still less than 2 at this time, zero motion information units are further added to the candidate motion information unit set S2 until the number of motion information units in the candidate motion information unit set S2 is equal to 2.
  • if the video frame to which the current image block belongs is a forward prediction frame, the zero motion information unit added to the candidate motion information unit set S2 includes a zero motion vector whose prediction direction is forward but may not include a zero motion vector whose prediction direction is backward. If the video frame to which the current image block belongs is a backward prediction frame, the zero motion information unit added to the candidate motion information unit set S2 includes a zero motion vector whose prediction direction is backward but may not include a zero motion vector whose prediction direction is forward. In addition, if the video frame to which the current image block belongs is a bidirectional prediction frame, the zero motion information unit added to the candidate motion information unit set S2 includes a zero motion vector whose prediction direction is forward and a zero motion vector whose prediction direction is backward, where the reference frame indexes corresponding to the motion vectors in different zero motion information units added to the candidate motion information unit set S2 may be different, and the corresponding reference frame index may be, for example, 0, 1, 2, 3, or another value.
  • the candidate motion information unit set S3 corresponding to the lower left pixel sample of the current image block may include motion information units of x3 image blocks.
  • the x3 image blocks may include: a pixel sample Col-LB that has the same position as the lower left pixel sample LB of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs; and at least one of a spatially adjacent image block G on the lower left of the current image block and a spatially adjacent image block F on the left side of the current image block.
  • for example, the motion information unit of the spatially adjacent image block G on the lower left of the current image block and the motion information unit of the spatially adjacent image block F on the left side of the current image block may be acquired first and added to the candidate motion information unit set S3 corresponding to the lower left pixel sample of the current image block. If the two motion information units are the same, deduplication processing may be performed on the candidate motion information unit set S3 (the number of motion information units in the candidate motion information unit set S3 after the deduplication processing is 1). If the motion information unit of the pixel sample Col-LB, which has the same position as the lower left pixel sample LB of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, is the same as a motion information unit in the candidate motion information unit set S3 after the deduplication processing, zero motion information units may be further added to the candidate motion information unit set S3 until the number of motion information units in the candidate motion information unit set S3 is equal to 2. In addition, if the motion information unit of the pixel sample Col-LB is different from every motion information unit in the candidate motion information unit set S3 after the deduplication processing, the motion information unit of the pixel sample Col-LB may be added to the candidate motion information unit set S3; if the number of motion information units in the candidate motion information unit set S3 is still less than 2 at this time, zero motion information units are further added to the candidate motion information unit set S3 until the number of motion information units in the candidate motion information unit set S3 is equal to 2.
  • if the video frame to which the current image block belongs is a forward prediction frame, the zero motion information unit added to the candidate motion information unit set S3 includes a zero motion vector whose prediction direction is forward but may not include a zero motion vector whose prediction direction is backward. If the video frame to which the current image block belongs is a backward prediction frame, the zero motion information unit added to the candidate motion information unit set S3 includes a zero motion vector whose prediction direction is backward but may not include a zero motion vector whose prediction direction is forward. In addition, if the video frame to which the current image block belongs is a bidirectional prediction frame, the zero motion information unit added to the candidate motion information unit set S3 includes a zero motion vector whose prediction direction is forward and a zero motion vector whose prediction direction is backward, where the reference frame indexes corresponding to the motion vectors in different zero motion information units added to the candidate motion information unit set S3 may be different, and the corresponding reference frame index may be, for example, 0, 1, 2, 3, or another value.
  • that two motion information units are different means that the motion vectors included in the two motion information units are different, or the prediction directions corresponding to the motion vectors included in the two motion information units are different, or the reference frame indexes corresponding to the motion vectors included in the two motion information units are different.
  • that two motion information units are the same means that the motion vectors included in the two motion information units are the same, the prediction directions corresponding to the motion vectors included in the two motion information units are the same, and the reference frame indexes corresponding to the motion vectors included in the two motion information units are the same.
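The sameness test used by the deduplication processing above can be sketched as a predicate over motion information unit records (a minimal sketch; the field names and the record layout are illustrative assumptions, with None marking an absent prediction direction):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class MotionInfoUnit:
    # Per-direction motion vector and reference frame index; a field is
    # None when the unit's prediction direction does not include that
    # direction.
    fwd_mv: Optional[Tuple[int, int]] = None
    fwd_ref_idx: Optional[int] = None
    bwd_mv: Optional[Tuple[int, int]] = None
    bwd_ref_idx: Optional[int] = None

def units_are_same(a: MotionInfoUnit, b: MotionInfoUnit) -> bool:
    # Two motion information units are the same only when their motion
    # vectors, prediction directions, and reference frame indexes all
    # coincide; field-wise equality captures all three at once.
    return a == b

u1 = MotionInfoUnit(fwd_mv=(1, 2), fwd_ref_idx=0)
u2 = MotionInfoUnit(fwd_mv=(1, 2), fwd_ref_idx=1)  # differing ref index
print(units_are_same(u1, u1))  # True
print(units_are_same(u1, u2))  # False
```

Because the record is frozen (hashable), such units could also be deduplicated with a set while preserving the same equality semantics.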
  • for other pixel samples of the current image block, a candidate motion information unit set of the corresponding pixel sample can be obtained in a similar manner.
  • the 2 pixel samples may include two of the upper left pixel sample, the upper right pixel sample, the lower left pixel sample, and the central pixel sample a1 of the current image block.
  • the upper left pixel sample of the current image block is an upper left vertex of the current image block or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is a lower left vertex of the current image block or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is an upper right vertex of the current image block or a pixel block in the current image block that includes the upper right vertex of the current image block; the central pixel sample a1 of the current image block is a central pixel point of the current image block or a pixel block in the current image block that includes the central pixel point of the current image block.
  • the video encoding apparatus determines, based on the candidate motion information unit set corresponding to each of the two pixel samples, N candidate combined motion information unit sets.
  • each motion information unit included in each of the N candidate combined motion information unit sets is selected from at least part of the constraint-compliant motion information units in the candidate motion information unit set corresponding to each of the two pixel samples.
  • the N candidate combined motion information unit sets are different from each other, and each candidate combined motion information unit set in the N candidate combined motion information unit sets includes two motion information units.
  • the above conditions may be used, for example, to filter out the N candidate combined motion information unit sets from the six initial candidate combined motion information unit sets.
  • the set of N candidate combined motion information units may, for example, also satisfy other unlisted conditions.
  • specifically, the initial candidate combined motion information unit sets may first be filtered by using at least one of the first condition, the second condition, and the third condition, so that N01 candidate combined motion information unit sets are selected from the initial candidate combined motion information unit sets; scaling processing is then performed on the N01 candidate combined motion information unit sets; and the N candidate combined motion information unit sets are then filtered out from the scaled N01 candidate combined motion information unit sets by using at least one of the fourth condition and the fifth condition.
  • alternatively, the fourth condition and the fifth condition may not be referenced; instead, the N candidate combined motion information unit sets are directly filtered out from the initial candidate combined motion information unit sets by using at least one of the first condition, the second condition, and the third condition.
  • the motion vector in video coding/decoding reflects the distance by which an object is offset in one direction (the prediction direction) relative to a reference frame at the same time instant (the same time instant corresponds to the same reference frame). Therefore, when the motion information units of different pixel samples correspond to different prediction directions and/or different reference frame indexes, the motion offset of each pixel/pixel block of the current image block relative to a reference frame cannot be directly obtained. Only when the pixel samples correspond to the same prediction direction and the same reference frame index can the motion vector of each pixel/pixel block in the image block be obtained by using the combination of these merged motion vectors.
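As a hedged illustration of this consistency requirement, the sketch below checks whether two motion information units correspond to the same prediction direction(s) and, per used direction, the same reference frame index. The `MotionInfoUnit` structure and all names are assumptions for illustration, not from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class MotionInfoUnit:
    # (vx, vy) motion vector; None means this prediction direction is unused
    forward_mv: Optional[Tuple[int, int]] = None
    forward_ref_idx: Optional[int] = None
    backward_mv: Optional[Tuple[int, int]] = None
    backward_ref_idx: Optional[int] = None

def directions_and_refs_match(a: MotionInfoUnit, b: MotionInfoUnit) -> bool:
    """True if both units use the same prediction direction(s) and, for each
    used direction, the same reference frame index (so their motion vectors
    can be combined without scaling)."""
    if (a.forward_mv is None) != (b.forward_mv is None):
        return False
    if (a.backward_mv is None) != (b.backward_mv is None):
        return False
    if a.forward_mv is not None and a.forward_ref_idx != b.forward_ref_idx:
        return False
    if a.backward_mv is not None and a.backward_ref_idx != b.backward_ref_idx:
        return False
    return True
```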
  • the candidate combined motion information unit set may be subjected to scaling processing.
  • performing scaling processing on the candidate combined motion information unit set may involve modifying, adding, and/or deleting motion vectors in one or more motion information units in the candidate combined motion information unit set.
  • the performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i may include: when the reference frame index corresponding to the motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the merged motion information unit set i is scaled to the reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the scaled merged motion information unit set i, where the first prediction direction is forward or backward;
  • alternatively, the performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i may include: when the reference frame index corresponding to the motion vector whose prediction direction is forward in the merged motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to the motion vector whose prediction direction is backward in the merged motion information unit set i is different from the backward reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is forward in the merged motion information unit set i is scaled to the forward reference frame of the current image block and the motion vector whose prediction direction is backward in the merged motion information unit set i is scaled to the backward reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the scaled merged motion information unit set i.
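The scaling processing described above is commonly realized by weighting a motion vector with the ratio of temporal distances (for example, picture-order-count distances), so that a vector pointing to the candidate's reference frame is re-pointed to the current block's reference frame. The sketch below is an assumption-laden illustration of that idea, not the patent's normative procedure; all names (`poc_cur` and so on) are hypothetical:

```python
def scale_mv_to_ref(mv, poc_cur, poc_cand_ref, poc_target_ref):
    """Scale motion vector (vx, vy), which points from the current frame to
    the candidate's reference frame, so that it points instead to the target
    reference frame of the current image block.

    The scale factor is the ratio of the two temporal (POC) distances.
    """
    td_cand = poc_cur - poc_cand_ref      # distance the original MV spans
    td_target = poc_cur - poc_target_ref  # distance the scaled MV must span
    if td_cand == 0:
        return mv  # degenerate case: nothing to scale against
    factor = td_target / td_cand
    vx, vy = mv
    return (round(vx * factor), round(vy * factor))
```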
  • the video encoding device determines, from among the N candidate combined motion information unit sets, a combined motion information unit set i including two motion information units.
  • the video encoding apparatus may further write the identifier of the combined motion information unit set i into the video code stream.
  • the video decoding device determines the combined motion information unit set i including the two motion information units from among the N candidate combined motion information unit sets based on the identification of the combined motion information unit set i obtained from the video code stream.
  • the determining, by the video encoding apparatus, from among the N candidate combined motion information unit sets, the merged motion information unit set i including the two motion information units may include: determining, based on distortion or rate-distortion cost, the merged motion information unit set i including the two motion information units from among the N candidate combined motion information unit sets.
  • the rate-distortion cost corresponding to the merged motion information unit set i is less than or equal to the rate-distortion cost corresponding to any candidate combined motion information unit set, among the N candidate combined motion information unit sets, other than the merged motion information unit set i.
  • the distortion corresponding to the merged motion information unit set i is less than or equal to the distortion corresponding to any candidate combined motion information unit set, among the N candidate combined motion information unit sets, other than the merged motion information unit set i.
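The selection rule above can be sketched as picking, among the N candidate combined motion information unit sets, the one with the minimal rate-distortion cost (so its cost is less than or equal to every other candidate's). The cost form `J = D + lambda * R` and the callback names are illustrative assumptions, not the patent's notation:

```python
def select_merged_set(candidates, distortion_of, bits_of, lam):
    """Return the candidate combined motion information unit set whose
    rate-distortion cost J = D + lambda * R is minimal over all candidates;
    by construction its cost is <= the cost of every other candidate."""
    return min(candidates,
               key=lambda s: distortion_of(s) + lam * bits_of(s))
```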
  • the rate-distortion cost corresponding to a certain candidate combined motion information unit set may be, for example, utilized. The distortion corresponding to one candidate combined motion information unit set among the N candidate combined motion information unit sets (e.g., the merged motion information unit set i among the N candidate combined motion information unit sets described above) may be, for example, the distortion between the original pixel values of an image block (e.g., the current image block) and the predicted pixel values of the image block obtained by performing pixel value prediction on the image block using that certain candidate combined motion information unit set (e.g., the merged motion information unit set i), i.e., the distortion between the original pixel values and the predicted pixel values of the image block.
  • the distortion between the original pixel values of the image block (e.g., the current image block) and the predicted pixel values of the image block obtained by performing pixel value prediction on the image block using that certain candidate combined motion information unit set (e.g., the merged motion information unit set i) may specifically be, for example, the sum of squared differences (SSD), the sum of absolute differences (SAD), the sum of errors, or another distortion parameter capable of measuring the distortion between the original pixel values and the predicted pixel values of the image block.
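The two distortion measures named above (SSD and SAD) can be sketched directly; representing image blocks as nested lists of pixel values is a simplifying assumption:

```python
def ssd(orig, pred):
    """Sum of squared differences between original and predicted pixel values."""
    return sum((o - p) ** 2
               for ro, rp in zip(orig, pred)
               for o, p in zip(ro, rp))

def sad(orig, pred):
    """Sum of absolute differences between original and predicted pixel values."""
    return sum(abs(o - p)
               for ro, rp in zip(orig, pred)
               for o, p in zip(ro, rp))
```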
  • further, n1 candidate combined motion information unit sets may first be selected from the N candidate combined motion information unit sets, and the merged motion information unit set i including two motion information units is then determined, based on distortion or rate-distortion cost, from the n1 candidate combined motion information unit sets.
  • the D(V) corresponding to any one candidate combined motion information unit set among the n1 candidate combined motion information unit sets is less than or equal to the D(V) corresponding to any candidate combined motion information unit set, among the N candidate combined motion information unit sets, other than the n1 candidate combined motion information unit sets, where n1 is, for example, equal to 3, 4, 5, 6, or another value.
  • further, the n1 candidate combined motion information unit sets or the identifiers of the n1 candidate combined motion information unit sets may be added to a candidate combined motion information unit set queue, where if N is less than or equal to n1, the N candidate combined motion information unit sets or the identifiers of the N candidate combined motion information unit sets may be added to the candidate combined motion information unit set queue.
  • the candidate combined motion information unit sets in the candidate combined motion information unit set queue may, for example, be sorted in ascending or descending order according to the D(V) value.
  • the Euclidean distance parameter D(V) of any one candidate combined motion information unit set (e.g., the merged motion information unit set i) among the N candidate combined motion information unit sets may, for example, be calculated as follows:

    D(V) = abs(v1,x − v0,x) + abs(v1,y − v0,y) + abs(v2,x − v0,x) + abs(v2,y − v0,y)

    where vp,x represents the horizontal component of a motion vector vp, vp,y represents the vertical component of a motion vector vp, v0 and v1 are the motion vectors of the two pixel samples included in a candidate combined motion information unit set of the N candidate combined motion information unit sets, and v2 represents a motion vector of another pixel sample of the current image block, the other pixel sample being different from the two pixel samples described above. For example, as shown in Figure 2-e, v0 and v1 represent the motion vectors of the upper left pixel sample and the upper right pixel sample of the current image block, and v2 represents the motion vector of the lower left pixel sample of the current image block; of course, v2 may also represent the motion vector of the central pixel sample or another pixel sample of the current image block.
  • the obtained D(V) values are sorted in ascending or descending order to obtain the candidate combined motion information unit set queue.
  • the merged motion information unit sets in the candidate combined motion information unit set queue are different from each other, and an index number may be used to indicate a particular merged motion information unit set in the candidate combined motion information unit set queue.
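The D(V)-based ordering of the candidate queue can be sketched as follows, assuming each candidate set is summarized by three motion vectors v0, v1, v2 (v2 being the motion vector of the additional pixel sample). The D(V) form used here mirrors the sum-of-absolute-component-differences formula above; the data representation is an illustrative assumption:

```python
def d_v(v0, v1, v2):
    """Euclidean-distance-style parameter D(V) of a candidate set:
    sum of absolute component differences of v1 and v2 against v0."""
    return (abs(v1[0] - v0[0]) + abs(v1[1] - v0[1]) +
            abs(v2[0] - v0[0]) + abs(v2[1] - v0[1]))

def build_candidate_queue(candidate_sets):
    """Sort candidate sets in ascending order of D(V); the index into the
    resulting queue can then identify a particular merged set."""
    return sorted(candidate_sets, key=lambda c: d_v(*c))
```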
  • the video encoding apparatus performs motion vector prediction on the current image block by using an affine motion model and the combined motion information unit set i.
  • the size of the current image block is w ⁇ h, and the w is equal to or not equal to h.
  • Figure 2-e shows the coordinates of the four vertices of the current image block.
  • a schematic diagram of affine motion is shown in Figures 2-f and 2-g.
  • it is assumed that the coordinates of the two pixel samples are (0, 0) and (w, 0), and the motion vectors of the two pixel samples are (vx 0 , vy 0 ) and (vx 1 , vy 1 ), respectively. By substituting the coordinates and motion vectors of the two pixel samples into the affine motion model exemplified below, the motion vector of any pixel point in the current image block x can be calculated:

    vx = ((vx 1 − vx 0 ) / w) × x − ((vy 1 − vy 0 ) / w) × y + vx 0
    vy = ((vy 1 − vy 0 ) / w) × x + ((vx 1 − vx 0 ) / w) × y + vy 0        (Equation 1)

    where vx is the motion vector horizontal component of the pixel sample with coordinates (x, y) in the current image block, and vy is the motion vector vertical component of the pixel sample with coordinates (x, y) in the current image block.
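The affine motion model above can be sketched directly as code: the function below computes the motion vector at (x, y) from the motion vectors of the (0, 0) and (w, 0) pixel samples. Floating-point arithmetic here is a simplification; practical codecs use fixed-point precision:

```python
def affine_mv(x, y, w, v0, v1):
    """4-parameter affine motion model (Equation 1): derive the motion
    vector at (x, y) from v0 = (vx0, vy0) at (0, 0) and
    v1 = (vx1, vy1) at (w, 0), where w is the block width."""
    vx0, vy0 = v0
    vx1, vy1 = v1
    vx = (vx1 - vx0) / w * x - (vy1 - vy0) / w * y + vx0
    vy = (vy1 - vy0) / w * x + (vx1 - vx0) / w * y + vy0
    return vx, vy
```

Note that substituting the two control points back in reproduces their own motion vectors, and a pure translation (v0 equal to v1) yields the same vector at every pixel, as expected of the model.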
  • the video encoding apparatus may perform pixel value prediction on the current image block based on the calculated motion vector of each pixel point or each pixel block of the current image block.
  • the video encoding apparatus may obtain the prediction residual of the current image block by using the original pixel values of the current image block and the predicted pixel values of the current image block obtained by performing pixel value prediction on the current image block.
  • the video encoding device can write the prediction residual of the current image block to the video code stream.
  • it can be seen that the video encoding apparatus performs pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i becomes relatively small, the mechanism in the traditional technology of filtering out the motion information units of a plurality of pixel samples by a large number of calculations over all possible candidate motion information unit sets of the pixel samples is abandoned. This is advantageous for improving coding efficiency and for reducing the computational complexity of image prediction based on the affine motion model, which in turn makes it feasible to introduce the affine motion model into video coding standards.
  • in addition, because the affine motion model is introduced, the motion of an object can be described more accurately, which is beneficial to improving prediction accuracy. Moreover, because the number of referenced pixel samples can be two, this further reduces the computational complexity of image prediction based on the affine motion model after the affine motion model is introduced, and also reduces the number of affine parameters or motion vector residuals transmitted by the encoding end.
  • a derivation process of the affine motion model shown in Equation 1 is exemplified below.
  • a rotational motion model can be utilized to derive an affine motion model.
  • the rotational motion is exemplified by, for example, FIG. 2-h or FIG. 2-i.
  • the rotational motion model is shown in formula (2).
  • (x', y') are the coordinates, in the reference frame, corresponding to the pixel point with coordinates (x, y), where θ is the rotation angle and (a 0 , a 1 ) are the translation components. If the transform coefficients are known, the motion vector (vx, vy) of the pixel point (x, y) can be obtained.
  • the rotation matrix adopted is:
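A hedged sketch of the rotational motion model: map (x, y) to its reference-frame position (x', y') by a rotation of angle θ plus the translation (a0, a1), then derive the motion vector. The rotation-matrix sign convention and the motion-vector direction convention used here are assumptions, since formula (2) itself is not reproduced in this text:

```python
import math

def rotational_model_mv(x, y, theta, a0, a1):
    """Rotational motion model sketch: compute (x', y'), the position of
    pixel (x, y) in the reference frame after rotation by theta and
    translation by (a0, a1), then read off the motion vector
    (vx, vy) = (x - x', y - y').

    The rotation-matrix sign convention is an assumption; the patent's
    FIG. 2-h / FIG. 2-i conventions may differ."""
    xp = math.cos(theta) * x + math.sin(theta) * y + a0
    yp = -math.sin(theta) * x + math.cos(theta) * y + a1
    return x - xp, y - yp
```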
  • the simplified affine motion model may be described as Equation 3.
  • compared with the general affine motion model, the simplified affine motion model is represented by only 4 parameters.
  • for an image block of size w × h (such as CUR), the right and bottom boundaries are each extended by one row of pixels, and the motion vectors (vx 0 , vy 0 ) and (vx 1 , vy 1 ) of the vertices at the coordinate points (0, 0) and (w, 0) are found. Taking these two vertices as pixel samples (of course, pixel samples that take other points as references, such as central pixel samples, may also be used), and substituting their coordinates and motion vectors into equation (3), Equation 1 can be derived, where the motion vectors of the two pixel samples are (vx 0 , vy 0 ) and (vx 1 , vy 1 ), respectively, vx is the motion vector horizontal component of the pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of the pixel sample with coordinates (x, y) in the current image block, and w is the length or width of the current image block.
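The derivation step above can be written out explicitly. The 4-parameter form assumed below for Equation 3 is an illustrative reconstruction (the text does not reproduce Equation 3 itself); substituting the two vertex samples recovers Equation 1:

```latex
% Assumed 4-parameter (simplified affine) form of Equation 3:
\begin{aligned}
vx &= a\,x - b\,y + c, \\
vy &= b\,x + a\,y + d.
\end{aligned}
% Substituting (x,y) = (0,0) with motion vector (vx_0, vy_0):
%   c = vx_0, \quad d = vy_0.
% Substituting (x,y) = (w,0) with motion vector (vx_1, vy_1):
%   vx_1 = a\,w + vx_0 \;\Rightarrow\; a = (vx_1 - vx_0)/w,
%   vy_1 = b\,w + vy_0 \;\Rightarrow\; b = (vy_1 - vy_0)/w.
% Hence Equation 1:
\begin{aligned}
vx &= \frac{vx_1 - vx_0}{w}\,x - \frac{vy_1 - vy_0}{w}\,y + vx_0, \\
vy &= \frac{vy_1 - vy_0}{w}\,x + \frac{vx_1 - vx_0}{w}\,y + vy_0.
\end{aligned}
```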
  • Formula 1 has strong usability; practice shows that, because the number of referenced pixel samples can be two, after the affine motion model is introduced, the computational complexity of image prediction based on the affine motion model is further reduced, and the number of transmitted affine parameters or motion vector difference values is also reduced.
  • FIG. 3 is a schematic flowchart diagram of another image prediction method according to another embodiment of the present invention.
  • an image prediction method implemented in a video decoding apparatus is mainly described as an example.
  • another image prediction method provided by another embodiment of the present invention may include:
  • the video decoding device determines two pixel samples in the current image block.
  • the following description takes as an example the case in which the two pixel samples include two of the upper left pixel sample, the upper right pixel sample, the lower left pixel sample, and the central pixel sample a1 of the current image block.
  • the 2 pixel samples include an upper left pixel sample and an upper right pixel sample of the current image block.
  • the scenario in which the two pixel samples are other pixel samples of the current image block can be deduced by analogy.
  • the upper left pixel sample of the current image block may be an upper left vertex of the current image block or a pixel block in the current image block that includes an upper left vertex of the current image block; a lower left pixel of the current image block The sample is a lower left vertex of the current image block or a pixel block in the current image block that includes a lower left vertex of the current image block; an upper right pixel sample of the current image block is an upper right vertex of the current image block or a pixel block in the current image block that includes an upper right vertex of the current image block; a central pixel sample a1 of the current image block is a central pixel point of the current image block or an inclusion in the current image block A block of pixels of a central pixel of the current image block.
  • the size of the pixel block is, for example, 2*2, 1*2, 4*2, 4*4, or other sizes.
  • the video decoding apparatus determines a candidate motion information unit set corresponding to each of the two pixel samples.
  • the candidate motion information unit set corresponding to each pixel sample includes at least one motion information unit of the candidate.
  • the pixel samples mentioned in the embodiments of the present invention may be pixel points or pixel blocks including at least two pixel points.
  • the candidate motion information unit set S1 corresponding to the upper left pixel sample of the current image block may include motion information units of x1 pixel samples.
  • the x1 pixel samples include: a pixel sample Col-LT having the same position as an upper left pixel sample LT of the current image block, among video frames adjacent to a time domain of a video frame to which the current image block belongs. At least one of a spatial adjacent image block C on the left side of the current image block, a spatially adjacent image block A on the upper left of the current image block, and a spatially adjacent image block B on the upper side of the current image block.
  • specifically, the motion information unit of the spatially adjacent image block C on the left side of the current image block, the motion information unit of the spatially adjacent image block A on the upper left of the current image block, and the motion information unit of the spatially adjacent image block B on the upper side of the current image block may be acquired first and added to the candidate motion information unit set S1 corresponding to the upper left pixel sample of the current image block. If some or all of the motion information units of the spatially adjacent image block C, the spatially adjacent image block A, and the spatially adjacent image block B are the same, deduplication processing is further performed on the candidate motion information unit set S1 (the number of motion information units in the candidate motion information unit set S1 after deduplication is 1 or 2). If the motion information unit of the pixel sample Col-LT, which has the same position as the upper left pixel sample LT of the current image block among the video frames temporally adjacent to the video frame to which the current image block belongs, is different from every motion information unit in the deduplicated candidate motion information unit set S1, the motion information unit of the pixel sample Col-LT is added to the deduplicated candidate motion information unit set S1. If the number of motion information units in the candidate motion information unit set S1 is still less than three at this point, zero motion information units may be added to the candidate motion information unit set S1 until the number of motion information units in the candidate motion information unit set S1 is equal to three.
  • if the video frame to which the current image block belongs is a forward predicted frame, the zero motion information unit added to the candidate motion information unit set S1 includes a zero motion vector whose prediction direction is forward, but may not include a zero motion vector whose prediction direction is backward. If the video frame to which the current image block belongs is a backward predicted frame, the zero motion information unit added to the candidate motion information unit set S1 includes a zero motion vector whose prediction direction is backward, but may not include a zero motion vector whose prediction direction is forward. In addition, if the video frame to which the current image block belongs is a bidirectional predicted frame, the zero motion information unit added to the candidate motion information unit set S1 includes a zero motion vector whose prediction direction is forward and a zero motion vector whose prediction direction is backward.
  • the reference frame indexes corresponding to the motion vectors of different zero motion information units added to the candidate motion information unit set S1 may be different; the corresponding reference frame index may be, for example, 0, 1, 2, 3, or another value.
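The construction of the candidate motion information unit set S1 described above (spatial neighbours C, A, B, then the temporal co-located unit Col-LT if it is new, then zero-motion padding up to three units) can be sketched as follows. Modelling motion information units as hashable tuples and the exact padding behaviour are illustrative assumptions:

```python
def build_candidate_set_s1(spatial_units, col_lt_unit, frame_type, target_size=3):
    """Sketch of assembling candidate motion information unit set S1 for the
    upper-left pixel sample: add spatial neighbours (C, A, B) with
    deduplication, add the co-located temporal unit Col-LT if it differs
    from every unit already present, then pad with zero motion information
    units up to target_size.

    'frame_type' ("forward", "backward", or "bidirectional") selects which
    zero motion vectors a padding unit carries; different padding units use
    different reference frame indexes."""
    s1 = []
    for unit in spatial_units:          # blocks C, A, B in order
        if unit is not None and unit not in s1:
            s1.append(unit)             # dedup while preserving order
    if col_lt_unit is not None and col_lt_unit not in s1:
        s1.append(col_lt_unit)
    ref_idx = 0
    while len(s1) < target_size:
        if frame_type == "forward":
            zero = (("fwd", (0, 0), ref_idx),)
        elif frame_type == "backward":
            zero = (("bwd", (0, 0), ref_idx),)
        else:  # bidirectional predicted frame
            zero = (("fwd", (0, 0), ref_idx), ("bwd", (0, 0), ref_idx))
        s1.append(zero)
        ref_idx += 1                    # different zero units, different ref indexes
    return s1[:target_size]
```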
  • the candidate motion information unit set S2 corresponding to the upper right pixel sample of the current image block may include motion information units of x2 image blocks.
  • the x2 image blocks may include: a pixel sample Col-RT that is the same as an upper right pixel sample RT position of the current image block, among video frames adjacent to a video frame time domain to which the current image block belongs. And at least one of a spatially adjacent image block E of the upper right of the current image block and a spatially adjacent image block D of the upper side of the current image block.
  • specifically, the motion information unit of the spatially adjacent image block E on the upper right of the current image block and the motion information unit of the spatially adjacent image block D on the upper side of the current image block may be acquired first and added to the candidate motion information unit set S2 corresponding to the upper right pixel sample of the current image block. If the motion information unit of the spatially adjacent image block E and the motion information unit of the spatially adjacent image block D are the same, deduplication processing may be performed on the candidate motion information unit set S2 (the number of motion information units in the candidate motion information unit set S2 after deduplication is 1). If the motion information unit of the pixel sample Col-RT, which has the same position as the upper right pixel sample RT of the current image block among the video frames temporally adjacent to the video frame to which the current image block belongs, is the same as a motion information unit in the deduplicated candidate motion information unit set S2, a zero motion information unit may be further added to the candidate motion information unit set S2 until the number of motion information units in the candidate motion information unit set S2 is equal to two. In addition, if the motion information unit of the pixel sample Col-RT is different from any motion information unit in the deduplicated candidate motion information unit set S2, the motion information unit of the pixel sample Col-RT is added to the deduplicated candidate motion information unit set S2; if the number of motion information units in the candidate motion information unit set S2 is still less than two at this point, zero motion information units are further added to the candidate motion information unit set S2 until the number of motion information units in the candidate motion information unit set S2 is equal to two.
  • if the video frame to which the current image block belongs is a forward predicted frame, the zero motion information unit added to the candidate motion information unit set S2 includes a zero motion vector whose prediction direction is forward, but may not include a zero motion vector whose prediction direction is backward. If the video frame to which the current image block belongs is a backward predicted frame, the zero motion information unit added to the candidate motion information unit set S2 includes a zero motion vector whose prediction direction is backward, but may not include a zero motion vector whose prediction direction is forward. In addition, if the video frame to which the current image block belongs is a bidirectional predicted frame, the zero motion information unit added to the candidate motion information unit set S2 includes a zero motion vector whose prediction direction is forward and a zero motion vector whose prediction direction is backward.
  • the reference frame indexes corresponding to the motion vectors of different zero motion information units added to the candidate motion information unit set S2 may be different; the corresponding reference frame index may be, for example, 0, 1, 2, 3, or another value.
  • the candidate motion information unit set S3 corresponding to the lower left pixel sample of the current image block may include motion information units of x3 image blocks.
  • the x3 image blocks may include: at least one of a pixel sample Col-LB having the same position as the lower left pixel sample LB of the current image block among the video frames temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent image block G on the lower left of the current image block, and a spatially adjacent image block F on the left side of the current image block.
  • specifically, the motion information unit of the spatially adjacent image block G on the lower left of the current image block and the motion information unit of the spatially adjacent image block F on the left side of the current image block may be acquired first and added to the candidate motion information unit set S3 corresponding to the lower left pixel sample of the current image block. If the motion information unit of the spatially adjacent image block G and the motion information unit of the spatially adjacent image block F are the same, deduplication processing is performed on the candidate motion information unit set S3 (the number of motion information units in the candidate motion information unit set S3 after deduplication is 1). If the motion information unit of the pixel sample Col-LB, which has the same position as the lower left pixel sample LB of the current image block among the video frames temporally adjacent to the video frame to which the current image block belongs, is the same as a motion information unit in the deduplicated candidate motion information unit set S3, a zero motion information unit may be further added to the candidate motion information unit set S3 until the number of motion information units in the candidate motion information unit set S3 is equal to two. In addition, if the motion information unit of the pixel sample Col-LB is different from any motion information unit in the deduplicated candidate motion information unit set S3, the motion information unit of the pixel sample Col-LB is added to the deduplicated candidate motion information unit set S3; if the number of motion information units in the candidate motion information unit set S3 is still less than two at this point, zero motion information units are further added to the candidate motion information unit set S3 until the number of motion information units in the candidate motion information unit set S3 is equal to two.
  • if the video frame to which the current image block belongs is a forward predicted frame, the zero motion information unit added to the candidate motion information unit set S3 includes a zero motion vector whose prediction direction is forward, but may not include a zero motion vector whose prediction direction is backward. If the video frame to which the current image block belongs is a backward predicted frame, the zero motion information unit added to the candidate motion information unit set S3 includes a zero motion vector whose prediction direction is backward, but may not include a zero motion vector whose prediction direction is forward. In addition, if the video frame to which the current image block belongs is a bidirectional predicted frame, the zero motion information unit added to the candidate motion information unit set S3 includes a zero motion vector whose prediction direction is forward and a zero motion vector whose prediction direction is backward.
  • The reference frame indexes corresponding to the motion vectors in different zero motion information units added to the candidate motion information unit set S3 may be different; the corresponding reference frame index may be, for example, 0, 1, 2, 3, or another value.
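  • The de-duplication and zero-filling steps described above can be sketched as follows. This is a hypothetical, non-normative Python illustration; the function name, the dictionary fields, and the target set size of 2 are assumptions made for the example, not the patent's notation:

```python
# A minimal sketch of building the candidate motion information unit set S3
# for the lower left pixel sample: duplicate spatial candidates are removed,
# the temporal candidate Col-LB is added only if it differs from every
# remaining unit, and zero motion information units are appended until S3
# holds 2 units. All field names are illustrative.

def build_candidate_set_s3(unit_g, unit_f, unit_col_lb,
                           is_backward_frame, is_bipred_frame):
    s3 = [unit_g]
    if unit_f != unit_g:                 # de-duplication of spatial candidates
        s3.append(unit_f)
    if unit_col_lb is not None and all(unit_col_lb != u for u in s3):
        s3.append(unit_col_lb)           # temporal candidate, only if distinct
    ref_idx = 0
    while len(s3) < 2:                   # pad with zero motion information units
        if is_bipred_frame:              # bidirectional frame: forward and backward zero MVs
            zero = {"fwd": (0, 0), "bwd": (0, 0), "ref_idx": ref_idx}
        elif is_backward_frame:          # backward prediction frame: backward zero MV only
            zero = {"fwd": None, "bwd": (0, 0), "ref_idx": ref_idx}
        else:                            # forward prediction frame: forward zero MV only
            zero = {"fwd": (0, 0), "bwd": None, "ref_idx": ref_idx}
        s3.append(zero)
        ref_idx += 1                     # different zero units may use different ref indexes
    return s3
```

  • For example, when blocks G and F carry the same motion information unit and no temporal candidate is available, S3 ends up holding that single spatial unit plus one zero motion information unit.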
  • That two motion information units are different may mean that the motion vectors included in the two motion information units are different, or that the motion vectors included in the two motion information units have different prediction directions, or that the motion vectors included in the two motion information units correspond to different reference frame indexes.
  • That two motion information units are the same may mean that the motion vectors included in the two motion information units are the same, that the motion vectors included in the two motion information units have the same prediction direction, and that the motion vectors included in the two motion information units correspond to the same reference frame index.
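  • The notion of equality between two motion information units described above can be illustrated with a small data type. This is an assumed representation for the sake of the example; the field names are not the patent's notation:

```python
# Illustrative definition of when two motion information units are "the
# same": equal motion vectors, equal prediction directions (encoded here
# by which of the forward/backward fields are populated), and equal
# reference frame indexes. The dataclass-generated __eq__ compares all
# fields, which matches that definition.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class MotionInfoUnit:
    mv_fwd: Optional[Tuple[int, int]]   # forward motion vector, or None
    mv_bwd: Optional[Tuple[int, int]]   # backward motion vector, or None
    ref_idx_fwd: Optional[int]          # forward reference frame index
    ref_idx_bwd: Optional[int]          # backward reference frame index

a = MotionInfoUnit((3, -1), None, 0, None)
b = MotionInfoUnit((3, -1), None, 0, None)
c = MotionInfoUnit((3, -1), None, 1, None)  # same vector, different ref index
assert a == b
assert a != c
```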
  • For other pixel samples of the current image block, a candidate motion information unit set of the corresponding pixel sample can be obtained in a similar manner.
  • The 2 pixel samples may include 2 of the upper left pixel sample, the upper right pixel sample, the lower left pixel sample, and the central pixel sample a1 of the current image block.
  • The upper left pixel sample of the current image block is the upper left vertex of the current image block or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is the lower left vertex of the current image block or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is the upper right vertex of the current image block or a pixel block in the current image block that includes the upper right vertex of the current image block; the central pixel sample a1 of the current image block is the central pixel point of the current image block or a pixel block in the current image block that includes the central pixel point of the current image block.
  • the video decoding apparatus determines, according to the candidate motion information unit set corresponding to each of the two pixel samples, the N candidate combined motion information unit sets.
  • Each motion information unit included in each of the N candidate combined motion information unit sets is selected from at least part of the constraint-compliant motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples.
  • The N candidate combined motion information unit sets are different from each other, and each candidate combined motion information unit set among the N candidate combined motion information unit sets includes 2 motion information units.
  • For example, at least one of the foregoing conditions may be used to filter out N candidate combined motion information unit sets from the 6 initial candidate combined motion information unit sets. Certainly, if the number of motion information units included in the candidate motion information unit set S1 and the candidate motion information unit set S2 is not limited to the foregoing example, the number of initial candidate combined motion information unit sets is not necessarily 6.
  • the set of N candidate combined motion information units may, for example, also satisfy other unlisted conditions.
  • In a specific implementation, the initial candidate combined motion information unit sets may first be filtered by using at least one of the first condition, the second condition, and the third condition, so that N01 candidate combined motion information unit sets are selected from the initial candidate combined motion information unit sets; scaling processing is then performed on the N01 candidate combined motion information unit sets, and at least one of the fourth condition and the fifth condition is then used to filter out N candidate combined motion information unit sets from the N01 candidate combined motion information unit sets subjected to the scaling processing.
  • Certainly, the fourth condition and the fifth condition may alternatively not be referenced; instead, the initial candidate combined motion information unit sets are directly filtered by using at least one of the first condition, the second condition, and the third condition, so that N candidate combined motion information unit sets are filtered out from the initial candidate combined motion information unit sets.
  • The motion vector in video coding and decoding reflects the distance by which an object is offset in one direction (the prediction direction) relative to a same moment (the same moment corresponds to a same reference frame). Therefore, when the motion information units of different pixel samples correspond to different prediction directions and/or correspond to different reference frame indexes, the motion offset of each pixel/pixel block of the current image block relative to a reference frame cannot be directly obtained. Only when the pixel samples correspond to a same prediction direction and a same reference frame index can the motion vector of each pixel/pixel block in the image block be obtained by using the combination of these motion vectors.
  • the candidate combined motion information unit set may be subjected to scaling processing.
  • performing scaling processing on the candidate combined motion information unit set may involve modifying, adding, and/or deleting motion vectors in one or more motion information units in the candidate combined motion information unit set.
  • Optionally, the performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i may include: when a reference frame index corresponding to a motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the merged motion information unit set i is scaled to the reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i subjected to the scaling processing, where the first prediction direction is forward or backward.
  • Alternatively, the performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i may include: when the reference frame index corresponding to the forward motion vector in the merged motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to the backward motion vector in the merged motion information unit set i is different from the backward reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the forward motion vector in the merged motion information unit set i is scaled to the forward reference frame of the current image block and the backward motion vector in the merged motion information unit set i is scaled to the backward reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i subjected to the scaling processing.
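  • The patent text does not spell out the scaling formula itself here, so the following sketch assumes the temporal-distance rule commonly used in video codecs: a motion vector is scaled by the ratio of picture-order-count (POC) distances between the current frame and the two reference frames. The function name and POC-based form are assumptions for illustration:

```python
# A minimal sketch of scaling a motion vector found against one reference
# frame so that it points to a different reference frame of the current
# image block, using the ratio of temporal (POC) distances.

def scale_mv(mv, cur_poc, cand_ref_poc, target_ref_poc):
    """Scale mv (pointing from cur_poc to cand_ref_poc) so that it
    points to target_ref_poc instead."""
    td_cand = cur_poc - cand_ref_poc      # temporal distance of the candidate reference
    td_target = cur_poc - target_ref_poc  # temporal distance of the target reference
    scale = td_target / td_cand
    return (mv[0] * scale, mv[1] * scale)

# e.g. a vector measured against a reference 2 pictures away, rescaled to
# a reference 1 picture away, is halved:
assert scale_mv((8, -4), cur_poc=4, cand_ref_poc=2, target_ref_poc=3) == (4.0, -2.0)
```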
  • The video decoding apparatus decodes the video bitstream to obtain the identifier of the merged motion information unit set i and the prediction residual of the current image block, and determines, based on the identifier of the merged motion information unit set i, a merged motion information unit set i including 2 motion information units from among the N candidate combined motion information unit sets.
  • the video encoding device can write the identifier of the combined motion information unit set i to the video code stream.
  • the video decoding apparatus performs motion vector prediction on the current image block by using an affine motion model and the combined motion information unit set i.
  • Optionally, the video decoding apparatus may perform motion estimation processing on the motion vectors in the merged motion information unit set i to obtain a merged motion information unit set i subjected to motion estimation processing, and the video decoding apparatus performs motion vector prediction on the current image block by using the affine motion model and the merged motion information unit set i subjected to motion estimation processing.
  • It is assumed that the size of the current image block is w × h, where w may or may not be equal to h.
  • Figure 2-e shows the coordinates of the four vertices of the current image block.
  • It is assumed that the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1), respectively. By substituting the coordinates and motion vectors of the 2 pixel samples into the affine motion model exemplified below, the motion vector of any pixel point in the current image block can be calculated:

  vx = ((vx1 − vx0)/w)·x − ((vy1 − vy0)/w)·y + vx0
  vy = ((vy1 − vy0)/w)·x + ((vx1 − vx0)/w)·y + vy0    (Equation 1)

  • where vx and vy are respectively the motion vector horizontal component (vx) and the motion vector vertical component (vy) of the pixel sample with coordinates (x, y) in the current image block, and the w in Equation 1 may be the length or the width of the current image block.
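  • The per-pixel motion vector computation of Equation 1 can be sketched as follows. This is a non-normative Python illustration; the function name is an assumption:

```python
# Two-control-point affine model (Equation 1): the motion vector of the
# pixel at (x, y) is interpolated from the motion vectors (vx0, vy0) and
# (vx1, vy1) of the 2 pixel samples, with w the width (or length) of the
# current image block. Note the coefficient structure: the x-coefficient
# of vx equals the y-coefficient of vy, and the y-coefficient of vx is
# the negative of the x-coefficient of vy.

def affine_mv(x, y, v0, v1, w):
    vx0, vy0 = v0
    vx1, vy1 = v1
    vx = (vx1 - vx0) / w * x - (vy1 - vy0) / w * y + vx0
    vy = (vy1 - vy0) / w * x + (vx1 - vx0) / w * y + vy0
    return vx, vy

# At the two control points the model reproduces the input vectors:
assert affine_mv(0, 0, (1, 2), (5, 2), w=8) == (1, 2)
assert affine_mv(8, 0, (1, 2), (5, 2), w=8) == (5, 2)
```

  • In practice the same computation would be done once per pixel block rather than per pixel to reduce complexity, as described below.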
  • The video decoding apparatus obtains the predicted pixel value of the current image block according to the calculated motion vector of each pixel point or each pixel block of the current image block.
  • the video decoding apparatus reconstructs the current image block by using the predicted pixel value of the current image block and the prediction residual of the current image block.
  • The video decoding apparatus performs pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i becomes relatively small, the mechanism used in the traditional technology, in which a motion information unit of multiple pixel samples is filtered out by a large number of calculations over all possible candidate motion information unit sets of the multiple pixel samples, is abandoned. This helps improve coding efficiency, helps reduce the computational complexity of image prediction based on the affine motion model, and in turn makes it feasible to introduce the affine motion model into video coding standards.
  • In addition, because the affine motion model is introduced, the motion of an object can be described more accurately, which helps improve prediction accuracy. Moreover, because the number of referenced pixel samples can be 2, the computational complexity of image prediction based on the affine motion model after its introduction is further reduced, and the number of affine parameter information items or motion vector residuals transmitted by the encoding end is also reduced.
  • an embodiment of the present invention further provides an image prediction apparatus 400, which may include:
  • a first determining unit 410, configured to determine 2 pixel samples in the current image block and determine a candidate motion information unit set corresponding to each of the 2 pixel samples, where the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit;
  • a second determining unit 420, configured to determine a merged motion information unit set i including 2 motion information units, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and the motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward; and
  • the prediction unit 430 is configured to perform pixel value prediction on the current image block by using the affine motion model and the combined motion information unit set i.
  • In some possible implementations of the present invention, the second determining unit 420 may be specifically configured to determine, from among N candidate combined motion information unit sets, a merged motion information unit set i including 2 motion information units, where each motion information unit included in each of the N candidate combined motion information unit sets is selected from at least part of the constraint-compliant motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, N is a positive integer, the N candidate combined motion information unit sets are different from each other, and each candidate combined motion information unit set among the N candidate combined motion information unit sets includes 2 motion information units.
  • the set of N candidate combined motion information units satisfies at least one of a first condition, a second condition, a third condition, a fourth condition, and a fifth condition ,
  • the first condition includes that a motion mode of the current image block indicated by a motion information unit in any one of the N candidate combined motion information unit sets is non-translational motion;
  • the second condition includes that the 2 motion information units in any one of the N candidate combined motion information unit sets have the same prediction direction;
  • the third condition includes that the reference frame indexes corresponding to the two motion information units in any one of the N candidate motion information unit sets are the same;
  • the fourth condition includes that an absolute value of a difference between motion vector horizontal components of 2 motion information units in any one of the N candidate combined motion information unit sets is less than or equal to a horizontal component threshold, or that an absolute value of a difference between the motion vector horizontal component of one motion information unit in any one of the N candidate combined motion information unit sets and that of a pixel sample Z is less than or equal to the horizontal component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples;
  • the fifth condition includes that an absolute value of a difference between motion vector vertical components of 2 motion information units in any one of the N candidate combined motion information unit sets is less than or equal to a vertical component threshold, or that an absolute value of a difference between the motion vector vertical component of one motion information unit in any one of the N candidate combined motion information unit sets and that of a pixel sample Z is less than or equal to the vertical component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples.
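  • The second through fifth conditions above can be sketched as a predicate over a pair of motion information units. This is an illustrative assumption about how such a filter might look; the field names, threshold values, and the decision to check only the pairwise form of the fourth and fifth conditions are all choices made for the example:

```python
# Illustrative filter for a candidate combined motion information unit set
# of 2 units: same prediction direction (second condition), same reference
# frame index (third condition), and component differences within the
# horizontal/vertical thresholds (fourth and fifth conditions).

def satisfies_conditions(u0, u1, horiz_thresh, vert_thresh):
    if u0["dir"] != u1["dir"]:            # second condition
        return False
    if u0["ref_idx"] != u1["ref_idx"]:    # third condition
        return False
    dvx = abs(u0["mv"][0] - u1["mv"][0])  # fourth condition
    dvy = abs(u0["mv"][1] - u1["mv"][1])  # fifth condition
    return dvx <= horiz_thresh and dvy <= vert_thresh

u0 = {"dir": "fwd", "ref_idx": 0, "mv": (3, 1)}
u1 = {"dir": "fwd", "ref_idx": 0, "mv": (5, 2)}
assert satisfies_conditions(u0, u1, horiz_thresh=4, vert_thresh=4)
```

  • Filtering the initial candidate combined motion information unit sets through such a predicate yields the N candidate combined motion information unit sets referred to above.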
  • In some possible implementations of the present invention, the 2 pixel samples include 2 of the upper left pixel sample, the upper right pixel sample, the lower left pixel sample, and the central pixel sample a1 of the current image block;
  • the upper left pixel sample of the current image block is the upper left vertex of the current image block or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is the lower left vertex of the current image block or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is the upper right vertex of the current image block or a pixel block in the current image block that includes the upper right vertex of the current image block; the central pixel sample a1 of the current image block is the central pixel point of the current image block or a pixel block in the current image block that includes the central pixel point of the current image block.
  • In some possible implementations of the present invention, the candidate motion information unit set corresponding to the upper left pixel sample of the current image block includes motion information units of x1 pixel samples, where the x1 pixel samples include at least one pixel sample spatially adjacent to the upper left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper left pixel sample of the current image block, and x1 is a positive integer;
  • the x1 pixel samples include a pixel sample having the same position as an upper left pixel sample of the current image block, and the current image block, among video frames adjacent to a time domain of a video frame to which the current image block belongs. At least one of a spatial adjacent pixel sample on the left, a spatially adjacent pixel sample on the upper left of the current image block, and a spatially adjacent pixel sample on the upper side of the current image block.
  • the candidate motion information unit set corresponding to the upper right pixel sample of the current image block includes motion information units of x2 pixel samples, where the x2 pixel samples include At least one pixel sample adjacent to an upper right pixel sample spatial domain of the current image block and/or at least one pixel sample adjacent to a time domain of an upper right pixel sample of the current image block, the x2 being a positive integer;
  • the x2 pixel samples include a pixel sample having the same position as an upper right pixel sample of the current image block, and the current image block, among video frames adjacent to a time domain of a video frame to which the current image block belongs. At least one of a spatially adjacent pixel sample on the right side, a spatially adjacent pixel sample on the upper right of the current image block, and a spatially adjacent pixel sample on the upper side of the current image block.
  • the candidate motion information unit set corresponding to the lower left pixel sample of the current image block includes motion information units of x3 pixel samples, where the x3 pixel samples include At least one pixel sample adjacent to a lower left pixel sample spatial domain of the current image block and/or at least one pixel sample adjacent to a lower left pixel sample time domain of the current image block, the x3 being a positive integer;
  • the x3 pixel samples include a pixel sample having the same position as a lower left pixel sample of the current image block, and the current image block, among video frames adjacent to a time domain of a video frame to which the current image block belongs. At least one of a spatially adjacent pixel sample on the left side, a spatially adjacent pixel sample on the lower left of the current image block, and a spatially adjacent pixel sample on the lower side of the current image block.
  • In some possible implementations of the present invention, the candidate motion information unit set corresponding to the central pixel sample a1 of the current image block includes motion information units of x5 pixel samples, where one of the x5 pixel samples is a pixel sample a2, the pixel sample a2 has the same position as the central pixel sample a1 in a video frame temporally adjacent to the video frame to which the current image block belongs, and x5 is a positive integer.
  • In some possible implementations of the present invention, the prediction unit 430 is specifically configured to: when a reference frame index corresponding to a motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, perform scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the merged motion information unit set i is scaled to the reference frame of the current image block, and perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i subjected to the scaling processing, where the first prediction direction is forward or backward;
  • Alternatively, the prediction unit 430 is specifically configured to: when the reference frame index corresponding to the forward motion vector in the merged motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to the backward motion vector in the merged motion information unit set i is different from the backward reference frame index of the current image block, perform scaling processing on the merged motion information unit set i so that the forward motion vector in the merged motion information unit set i is scaled to the forward reference frame of the current image block and the backward motion vector in the merged motion information unit set i is scaled to the backward reference frame of the current image block, and perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i subjected to the scaling processing.
  • In some possible implementations of the present invention, the prediction unit 430 is specifically configured to calculate, by using the affine motion model and the merged motion information unit set i, the motion vector of each pixel point in the current image block, and determine, by using the calculated motion vector of each pixel point in the current image block, the predicted pixel value of each pixel point in the current image block;
  • or, the prediction unit 430 is specifically configured to calculate, by using the affine motion model and the merged motion information unit set i, the motion vector of each pixel block in the current image block, and determine, by using the calculated motion vector of each pixel block in the current image block, the predicted pixel value of each pixel point of each pixel block in the current image block.
  • In some possible implementations of the present invention, the prediction unit 430 may be specifically configured to obtain the motion vector of any pixel sample in the current image block by using a ratio of a difference between the motion vector horizontal components of the 2 pixel samples to the length or width of the current image block and a ratio of a difference between the motion vector vertical components of the 2 pixel samples to the length or width of the current image block, where the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the 2 motion information units in the merged motion information unit set i.
  • the horizontal coordinate coefficient of the motion vector horizontal component of the 2 pixel samples and the vertical coordinate coefficient of the motion vector vertical component are equal, and the 2 pixel samples are The vertical coordinate coefficient of the horizontal component of the motion vector is opposite to the horizontal coordinate coefficient of the vertical component of the motion vector.
  • In some possible implementations of the present invention, the affine motion model may be an affine motion model in the following form:

  vx = ((vx1 − vx0)/w)·x − ((vy1 − vy0)/w)·y + vx0
  vy = ((vy1 − vy0)/w)·x + ((vx1 − vx0)/w)·y + vy0

  • where the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1), respectively, vx is the motion vector horizontal component of the pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of the pixel sample with coordinates (x, y) in the current image block, and w is the length or width of the current image block.
  • the image prediction apparatus is applied to a video encoding apparatus or the image prediction apparatus is applied to a video decoding apparatus.
  • In some possible implementations of the present invention, when the image prediction apparatus is applied to a video decoding apparatus, the second determining unit 420 may be specifically configured to determine, based on an identifier of the merged motion information unit set i obtained from the video bitstream, the merged motion information unit set i including 2 motion information units from among the N candidate combined motion information unit sets.
  • In some possible implementations of the present invention, when the image prediction apparatus is applied to a video decoding apparatus, the apparatus further includes a decoding unit, configured to decode the motion vector residuals of the 2 pixel samples from the video bitstream, obtain motion vector predictors of the 2 pixel samples by using motion vectors of pixel samples spatially or temporally adjacent to the 2 pixel samples, and obtain the motion vectors of the 2 pixel samples based on the motion vector predictors of the 2 pixel samples and the motion vector residuals of the 2 pixel samples, respectively.
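  • The decoder-side step described above, in which the motion vector of each of the 2 pixel samples is recovered by adding the decoded residual to a predictor taken from an adjacent pixel sample, can be sketched as follows (a hypothetical illustration; the function name and the list-of-tuples representation are assumptions):

```python
# Recover the motion vectors of the 2 pixel samples: each motion vector
# is its predictor (from a spatially or temporally adjacent pixel sample)
# plus the motion vector residual decoded from the bitstream.

def reconstruct_mvs(predictors, residuals):
    return [(p[0] + r[0], p[1] + r[1]) for p, r in zip(predictors, residuals)]

mvs = reconstruct_mvs([(4, -2), (6, 0)], [(1, 1), (-1, 2)])
assert mvs == [(5, -1), (5, 2)]
```

  • The encoder performs the mirror-image step: it subtracts the predictor from the motion vector and writes only the residual into the bitstream.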
  • In some possible implementations of the present invention, when the image prediction apparatus is applied to a video encoding apparatus, the prediction unit 430 is further configured to: obtain motion vector predictors of the 2 pixel samples by using motion vectors of pixel samples spatially or temporally adjacent to the 2 pixel samples, obtain motion vector residuals of the 2 pixel samples according to the motion vector predictors of the 2 pixel samples, and write the motion vector residuals of the 2 pixel samples into the video bitstream.
  • In some possible implementations of the present invention, when the image prediction apparatus is applied to a video encoding apparatus, the apparatus further includes an encoding unit, configured to write the identifier of the merged motion information unit set i into the video bitstream.
  • the image prediction device 400 can be any device that needs to output and play video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, and the like.
  • The image prediction apparatus 500 performs pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i becomes relatively small, the mechanism used in the traditional technology, in which a motion information unit of multiple pixel samples is filtered out by a large number of calculations over all possible candidate motion information unit sets of the multiple pixel samples, is abandoned. This helps improve coding efficiency, helps reduce the computational complexity of image prediction based on the affine motion model, and in turn makes it feasible to introduce the affine motion model into video coding standards. And because the affine motion model is introduced, the motion of an object can be described more accurately, which helps improve prediction accuracy. Moreover, because the number of referenced pixel samples can be 2, the computational complexity of image prediction based on the affine motion model after its introduction is further reduced, and the number of affine parameter information items or motion vector residuals transmitted by the encoding end is also reduced.
  • FIG. 5 is a schematic diagram of an image prediction apparatus 500 according to an embodiment of the present invention.
  • The image prediction apparatus 500 may include at least one bus 501, at least one processor 502 connected to the bus 501, and at least one memory 503 connected to the bus 501.
  • The processor 502 invokes, through the bus 501, code or instructions stored in the memory 503, so as to: determine 2 pixel samples in the current image block and determine a candidate motion information unit set corresponding to each of the 2 pixel samples, where the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit; determine a merged motion information unit set i including 2 motion information units, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and the motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward; and perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i.
  • In some possible implementations of the present invention, in the aspect of determining a merged motion information unit set i including 2 motion information units, the processor 502 is configured to determine, from among N candidate combined motion information unit sets, a merged motion information unit set i including 2 motion information units, where each motion information unit included in each of the N candidate combined motion information unit sets is selected from at least part of the constraint-compliant motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, N is a positive integer, the N candidate combined motion information unit sets are different from each other, and each candidate combined motion information unit set among the N candidate combined motion information unit sets includes 2 motion information units.
  • the set of N candidate combined motion information units satisfies at least one of a first condition, a second condition, a third condition, a fourth condition, and a fifth condition ,
  • the first condition includes that a motion mode of the current image block indicated by a motion information unit in any one of the N candidate combined motion information unit sets is non-translational motion;
  • the second condition includes that the two motion information units in the set of candidate motion information units in the N candidate motion information unit sets have the same prediction direction;
  • the third condition includes that the reference frame indexes corresponding to the two motion information units in any one of the N candidate motion information unit sets are the same;
  • the fourth condition includes that an absolute value of a difference between motion vector horizontal components of 2 motion information units in any one of the N candidate combined motion information unit sets is less than or equal to a horizontal component threshold, or that an absolute value of a difference between the motion vector horizontal component of one motion information unit in any one of the N candidate combined motion information unit sets and that of a pixel sample Z is less than or equal to the horizontal component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples;
  • the fifth condition includes that an absolute value of a difference between motion vector vertical components of 2 motion information units in any one of the N candidate combined motion information unit sets is less than or equal to a vertical component threshold, or that an absolute value of a difference between the motion vector vertical component of one motion information unit in any one of the N candidate combined motion information unit sets and that of a pixel sample Z is less than or equal to the vertical component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples.
• The 2 pixel samples include 2 of the following: an upper left pixel sample, an upper right pixel sample, a lower left pixel sample, and a central pixel sample a1 of the current image block.
• The upper left pixel sample of the current image block is the upper left vertex of the current image block or a pixel block in the current image block that includes the upper left vertex of the current image block; the lower left pixel sample of the current image block is the lower left vertex of the current image block or a pixel block in the current image block that includes the lower left vertex of the current image block; the upper right pixel sample of the current image block is the upper right vertex of the current image block or a pixel block in the current image block that includes the upper right vertex of the current image block; and the central pixel sample a1 of the current image block is the central pixel point of the current image block or a pixel block in the current image block that includes the central pixel point of the current image block.
• The candidate motion information unit set corresponding to the upper left pixel sample of the current image block includes motion information units of x1 pixel samples, where the x1 pixel samples include at least one pixel sample spatially adjacent to the upper left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper left pixel sample of the current image block, and x1 is a positive integer.
• The x1 pixel samples include at least one of: a pixel sample at the same position as the upper left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left side of the current image block, a spatially adjacent pixel sample on the upper left of the current image block, and a spatially adjacent pixel sample on the upper side of the current image block.
• The candidate motion information unit set corresponding to the upper right pixel sample of the current image block includes motion information units of x2 pixel samples, where the x2 pixel samples include at least one pixel sample spatially adjacent to the upper right pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the upper right pixel sample of the current image block, and x2 is a positive integer.
• The x2 pixel samples include at least one of: a pixel sample at the same position as the upper right pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the right side of the current image block, a spatially adjacent pixel sample on the upper right of the current image block, and a spatially adjacent pixel sample on the upper side of the current image block.
• The candidate motion information unit set corresponding to the lower left pixel sample of the current image block includes motion information units of x3 pixel samples, where the x3 pixel samples include at least one pixel sample spatially adjacent to the lower left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower left pixel sample of the current image block, and x3 is a positive integer.
• The x3 pixel samples include at least one of: a pixel sample at the same position as the lower left pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left side of the current image block, a spatially adjacent pixel sample on the lower left of the current image block, and a spatially adjacent pixel sample on the lower side of the current image block.
• The candidate motion information unit set corresponding to the central pixel sample a1 of the current image block includes motion information units of x5 pixel samples, where one of the x5 pixel samples is a pixel sample a2.
• The position of the central pixel sample a1 in the video frame to which the current image block belongs is the same as the position of the pixel sample a2 in a video frame adjacent to the video frame to which the current image block belongs.
• x5 is a positive integer.
  • the processor 502 is configured to perform pixel value prediction on the current image block by using an affine motion model and the combined motion information unit set i.
• When the reference frame index corresponding to a motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, the merged motion information unit set i is subjected to scaling processing so that the motion vector of the first prediction direction in the merged motion information unit set i is scaled to the reference frame of the current image block, and pixel value prediction is performed on the current image block by using the affine motion model and the merged motion information unit set i after the scaling processing, where the first prediction direction is forward or backward.
• The processor 502 is configured to: when the reference frame index corresponding to a forward motion vector in the merged motion information unit set i is different from the forward reference frame index of the current image block, and the reference frame index corresponding to a backward motion vector in the merged motion information unit set i is different from the backward reference frame index of the current image block, perform scaling processing on the merged motion information unit set i so that the forward motion vector in the merged motion information unit set i is scaled to the forward reference frame of the current image block and the backward motion vector in the merged motion information unit set i is scaled to the backward reference frame of the current image block, and perform pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i after the scaling processing.
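The scaling processing above remaps a motion vector that points to one reference frame so that it points to the reference frame of the current image block. A minimal sketch, assuming scaling proportional to picture-order-count (POC) distances, as is common in video codecs; the function name and the simple rounding are illustrative assumptions:

```python
# Sketch of motion-vector scaling to a different reference frame, based on
# the ratio of temporal (POC) distances. Rounding policy is illustrative.

def scale_mv(mv, poc_cur, poc_ref_src, poc_ref_dst):
    """Scale motion vector `mv` (pointing from the current frame to the
    frame `poc_ref_src`) so that it points to `poc_ref_dst` instead."""
    td = poc_cur - poc_ref_src   # temporal distance of the original MV
    tb = poc_cur - poc_ref_dst   # temporal distance to the target reference
    factor = tb / td
    return (round(mv[0] * factor), round(mv[1] * factor))
```

For example, a vector that spans two frames is doubled when rescaled to a reference four frames away, under the assumption of uniform motion.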
• The processor 502 is configured to perform pixel value prediction on the current image block by: calculating a motion vector of each pixel point in the current image block by using the affine motion model and the merged motion information unit set i, and determining a predicted pixel value of each pixel point in the current image block by using the calculated motion vectors; or
• calculating a motion vector of each pixel block in the current image block by using the affine motion model and the merged motion information unit set i, and determining a predicted pixel value of each pixel point of each pixel block in the current image block by using the calculated motion vectors of the pixel blocks.
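The second alternative above evaluates the motion model once per pixel block rather than once per pixel. A sketch, assuming a hypothetical `mv_model` callable and evaluation at each block centre; the block size and centre choice are illustrative assumptions:

```python
# Sketch: one motion vector per pixel block (e.g. 4x4) instead of per pixel,
# evaluating the motion model once at each block centre.

def block_motion_vectors(width, height, block, mv_model):
    """Return a dict mapping each block's top-left corner to the motion
    vector evaluated at the block centre."""
    mvs = {}
    for y in range(0, height, block):
        for x in range(0, width, block):
            cx, cy = x + block / 2.0, y + block / 2.0  # block centre
            mvs[(x, y)] = mv_model(cx, cy)
    return mvs
```

All pixels of a block then share that block's motion vector during motion compensation, which is the source of the complexity reduction discussed later in this document.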
  • the processor 502 is configured to perform pixel value prediction on the current image block by using an affine motion model and the combined motion information unit set i.
• The processor 502 obtains the motion vector of an arbitrary pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of the 2 pixel samples to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the 2 pixel samples to the length or width of the current image block, where the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the two motion information units in the merged motion information unit set i.
• The horizontal coordinate coefficient of the motion vector horizontal component and the vertical coordinate coefficient of the motion vector vertical component of the 2 pixel samples are equal, and the vertical coordinate coefficient of the motion vector horizontal component and the horizontal coordinate coefficient of the motion vector vertical component of the 2 pixel samples are opposite.
• The affine motion model may be an affine motion model in the following form:

  vx = (vx1 - vx0)/w · x - (vy1 - vy0)/w · y + vx0
  vy = (vy1 - vy0)/w · x + (vx1 - vx0)/w · y + vy0

• The motion vectors of the two pixel samples are (vx0, vy0) and (vx1, vy1) respectively, vx is the motion vector horizontal component of the pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of the pixel sample with coordinates (x, y) in the current image block, and w is the length or width of the current image block.
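Under the coefficient symmetry described above (equal diagonal coefficients, opposite off-diagonal coefficients), the motion vector of an arbitrary pixel sample follows directly from the two control-point motion vectors. A minimal sketch, assuming the first sample sits at coordinates (0, 0) and the second at (w, 0); the function name is illustrative:

```python
# Sketch of the two-control-point affine model: the motion vector at (x, y)
# is interpolated from the MV v0 of the upper left sample and the MV v1 of
# the sample a distance w to its right.

def affine_mv(x, y, v0, v1, w):
    """Motion vector of the pixel sample at (x, y)."""
    vx = (v1[0] - v0[0]) / w * x - (v1[1] - v0[1]) / w * y + v0[0]
    vy = (v1[1] - v0[1]) / w * x + (v1[0] - v0[0]) / w * y + v0[1]
    return (vx, vy)
```

At (0, 0) the model reproduces v0, at (w, 0) it reproduces v1, and when the two control-point vectors are equal every pixel gets the same vector, i.e. pure translation.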
  • the image prediction apparatus is applied to a video encoding apparatus or the image prediction apparatus is applied to a video decoding apparatus.
• In determining a merged motion information unit set i including two motion information units, the processor 502 is configured to determine, from the N candidate combined motion information unit sets and according to an identifier of the merged motion information unit set i obtained from the video code stream, the merged motion information unit set i including the two motion information units.
• When the image prediction apparatus is applied to a video decoding apparatus, the processor 502 is further configured to: decode motion vector residuals of the 2 pixel samples from the video code stream, obtain motion vector predictors of the 2 pixel samples by using motion vectors of spatially or temporally adjacent pixel samples of the 2 pixel samples, and obtain the motion vectors of the 2 pixel samples based on the motion vector predictors of the 2 pixel samples and the motion vector residuals of the 2 pixel samples respectively.
• When the image prediction apparatus is applied to a video encoding apparatus, the processor 502 is further configured to: obtain motion vector predictors of the 2 pixel samples by using motion vectors of spatially or temporally adjacent pixel samples of the 2 pixel samples, obtain motion vector residuals of the 2 pixel samples according to the motion vector predictors of the 2 pixel samples, and write the motion vector residuals of the 2 pixel samples into the video code stream.
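The predictor/residual relationship described above is symmetric between encoder and decoder: the encoder writes only the difference between each control-point motion vector and its predictor, and the decoder adds the decoded difference back. A minimal sketch with illustrative function names:

```python
# Sketch of the motion vector predictor / residual step. The predictor is
# the motion vector of a spatially or temporally adjacent pixel sample.

def encode_mvd(mv, predictor):
    """Motion vector residual (difference) written to the bitstream."""
    return (mv[0] - predictor[0], mv[1] - predictor[1])

def decode_mv(mvd, predictor):
    """Reconstruct the motion vector from predictor plus residual."""
    return (mvd[0] + predictor[0], mvd[1] + predictor[1])
```

Since adjacent samples usually move similarly, the residual tends to be small and cheap to entropy-code.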
• When the image prediction apparatus is applied to a video encoding apparatus, the processor 502 is further configured to write the identifier of the merged motion information unit set i into the video code stream.
  • the image prediction device 500 can be any device that needs to output and play video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, and the like.
• The image prediction apparatus 500 performs pixel value prediction on the current image block by using the affine motion model and the merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i is relatively small, the traditional mechanism of filtering out one motion information unit for each of a plurality of pixel samples through a large number of calculations over all possible candidate motion information unit sets is abandoned. This is advantageous for improving coding efficiency and for reducing the computational complexity of image prediction based on the affine motion model, which makes it feasible to introduce the affine motion model into video coding standards. Because the affine motion model is introduced, the motion of an object can be described more accurately, which helps improve prediction accuracy. Moreover, since the number of referenced pixel samples can be two, this further reduces the computational complexity of image prediction based on the affine motion model after its introduction, and also reduces the amount of affine parameter information or motion vector residuals transmitted by the encoder.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium can store a program, and the program includes some or all of the steps of any one of the image prediction methods described in the foregoing method embodiments.
  • FIG. 6 is a schematic flowchart diagram of an image processing method according to an embodiment of the present invention.
  • an image processing method provided by an embodiment of the present invention may include:
  • the motion vector 2-tuple may include motion vectors of two pixel samples in a video frame to which the current image block belongs.
  • the pixel samples mentioned in the embodiments of the present invention may be pixel points or pixel blocks including at least two pixel points.
  • the motion vectors mentioned in the embodiments of the present invention may be forward motion vectors or backward motion vectors, wherein respective motion vector directions of the motion vector 2-tuples may be the same.
  • the current image block may be a current coding block or a current decoding block.
• The motion vector 2-tuple may include the motion vectors of the 2 pixel samples in the foregoing embodiments, or the motion vector of each motion information unit in the merged motion information unit set i in the foregoing embodiments.
• The motion vector 2-tuple may also include the motion vector of each motion information unit in the merged motion information unit set i after the scaling processing described in the foregoing embodiments, or the motion vectors obtained by iteratively updating, in a motion estimation process, the motion vector of each motion information unit in the merged motion information unit set i in the foregoing embodiments.
• Unlike a motion information unit, which may include both a motion vector and the reference frame index corresponding to the motion vector, the motion vector 2-tuple in this embodiment of the present invention includes only motion vectors.
• The 2 pixel samples may include 2 of the following: the upper left pixel sample, the right region pixel sample, the lower region pixel sample, and the lower right region pixel sample of the current image block.
  • the upper left pixel sample of the current image block may be an upper left vertex of the current image block or a pixel block in the current image block that includes an upper left vertex of the current image block.
  • the coordinate value of the upper left pixel sample may default to (0, 0).
• The lower region pixel sample of the current image block may be a pixel point or a pixel block of the current image block located below the upper left pixel sample, where the vertical coordinate of the lower region pixel sample is greater than the vertical coordinate of the upper left pixel sample.
  • the lower area pixel sample may include the lower left pixel sample in the above embodiment.
• The horizontal coordinate of the lower region pixel sample may be the same as the horizontal coordinate of the upper left pixel sample, or the horizontal coordinate of the lower region pixel sample may differ from the horizontal coordinate of the upper left pixel sample by n pixel heights, where n is a positive integer less than 3.
  • the vertical coordinate may be referred to as an ordinate
  • the horizontal coordinate may also be referred to as an abscissa.
  • the right region pixel sample of the current image block may be a pixel point or a pixel block of the current image block located on the right side of the upper left pixel sample, wherein a horizontal coordinate of the right region pixel sample is greater than the upper left pixel sample The horizontal coordinates.
  • the right region pixel sample may include the upper right pixel sample in the above embodiment.
• The vertical coordinate of the right region pixel sample may be the same as the vertical coordinate of the upper left pixel sample, or the vertical coordinate of the right region pixel sample may differ from the vertical coordinate of the upper left pixel sample by n pixel widths, where n is a positive integer less than 3.
  • the bottom right area pixel sample of the current image block may be a pixel point or a pixel block of the current image block located at a lower right of the upper left pixel sample, wherein a vertical coordinate of the lower right area pixel sample is greater than the The vertical coordinate of the upper left pixel sample, the horizontal coordinate of the lower right area pixel sample is greater than the horizontal coordinate of the upper left pixel sample.
  • the bottom right area pixel sample may include the center pixel sample a1 in the above embodiment, and may further include a lower right pixel sample, and the lower right pixel sample of the current image block may be the lower right vertex of the current image block. Or a pixel block in the current image block that includes a lower right vertex of the current image block.
  • the size of the pixel block is, for example, 2*2, 1*2, 4*2, 4*4, or other size.
• For the specific content of the upper left pixel sample, the upper right pixel sample, the lower left pixel sample, and the central pixel sample a1 of the current image block, refer to the specific description in the foregoing embodiments; details are not described herein again.
• The 2 pixel samples may also be the 2 pixel samples in the foregoing embodiments. For the specific content of the 2 pixel samples, refer to the specific description in the foregoing embodiments; details are not described herein again.
• The motion vector of an arbitrary pixel sample in the current image block may be the motion vector of each pixel point in the current image block in the foregoing embodiments, or the motion vector of each pixel block in the current image block. For the specific content of the motion vector of each pixel point in the current image block, the motion vector of each pixel block in the current image block, and the motion vector of an arbitrary pixel sample in the current image block, refer to the specific description in the foregoing embodiments; details are not described herein again.
• The affine motion model can be in the following form:

  vx = ax + by
  vy = -bx + ay

• (x, y) is the coordinate of the arbitrary pixel sample, vx is the horizontal component of the motion vector of the arbitrary pixel sample, and vy is the vertical component of the motion vector of the arbitrary pixel sample.
• In vx = ax + by, a is the horizontal coordinate coefficient of the horizontal component of the affine motion model, and b is the vertical coordinate coefficient of the horizontal component of the affine motion model.
• In vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model, and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model.
• The affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model, and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the form:

  vx = ax + by + c
  vy = -bx + ay + d
• The sum of squares of the horizontal coordinate coefficient of the horizontal component of the affine motion model and the vertical coordinate coefficient of the horizontal component of the affine motion model is not equal to 1.
• The sum of squares of the vertical coordinate coefficient of the vertical component of the affine motion model and the horizontal coordinate coefficient of the vertical component of the affine motion model is not equal to 1.
• The calculating, by using the affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block may include: obtaining values of the coefficients of the affine motion model by using the respective motion vectors of the 2 pixel samples and the positions of the 2 pixel samples; and obtaining the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
• Alternatively, the calculating may include: obtaining the values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the motion vectors of the 2 pixel samples to the distance between the 2 pixel samples, and the ratio of the difference between the vertical components of the motion vectors of the 2 pixel samples to the distance between the 2 pixel samples; and obtaining the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
• Alternatively, the calculating may include: obtaining the values of the coefficients of the affine motion model by using the ratio of a weighted sum of the components of the motion vectors of the 2 pixel samples to the distance between the 2 pixel samples or to the square of the distance between the 2 pixel samples; and obtaining the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
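For the simple case where the 2 pixel samples lie on the same horizontal line a distance w apart, the "ratio of difference to distance" computation above yields the four coefficients directly. A sketch, assuming the coordinate origin at the first sample; the function name is illustrative:

```python
# Sketch: obtaining the affine model coefficients a, b, c, d of
#   vx = a*x + b*y + c,  vy = -b*x + a*y + d
# from the MV v0 of a sample at (0, 0) and the MV v1 of a sample at (w, 0).

def affine_coefficients(v0, v1, w):
    """Coefficients derived from the two control-point motion vectors."""
    a = (v1[0] - v0[0]) / w    # horizontal-coordinate coefficient of vx
    b = -(v1[1] - v0[1]) / w   # vertical-coordinate coefficient of vx
    c, d = v0[0], v0[1]        # displacement coefficients
    return a, b, c, d
```

Substituting the coefficients back into the model reproduces both control-point motion vectors, which is a quick sanity check on the derivation.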
• When the 2 pixel samples include the upper left pixel sample of the current image block and a right region pixel sample located to the right of the upper left pixel sample, the affine motion model is specifically:

  vx = (vx1 - vx0)/w · x - (vy1 - vy0)/w · y + vx0
  vy = (vy1 - vy0)/w · x + (vx1 - vx0)/w · y + vy0

• (vx0, vy0) is the motion vector of the upper left pixel sample, (vx1, vy1) is the motion vector of the right region pixel sample, and w is the distance between the 2 pixel samples. w may also be the difference between the horizontal coordinate of the right region pixel sample and the horizontal coordinate of the upper left pixel sample.
• When the 2 pixel samples include the upper left pixel sample of the current image block and a lower region pixel sample located below the upper left pixel sample, the affine motion model is specifically:

  vx = (vy2 - vy0)/h · x + (vx2 - vx0)/h · y + vx0
  vy = -(vx2 - vx0)/h · x + (vy2 - vy0)/h · y + vy0

• (vx0, vy0) is the motion vector of the upper left pixel sample, (vx2, vy2) is the motion vector of the lower region pixel sample, and h is the distance between the 2 pixel samples. h may also be the difference between the vertical coordinate of the lower region pixel sample and the vertical coordinate of the upper left pixel sample.
• When the 2 pixel samples include the upper left pixel sample of the current image block and a lower right region pixel sample located at the lower right of the upper left pixel sample, the affine motion model is specifically:

  vx = a·x + b·y + vx0
  vy = -b·x + a·y + vy0

  where
  a = ((vx3 - vx0)·w1 + (vy3 - vy0)·h1) / (w1^2 + h1^2)
  b = ((vx3 - vx0)·h1 - (vy3 - vy0)·w1) / (w1^2 + h1^2)

• (vx0, vy0) is the motion vector of the upper left pixel sample, and (vx3, vy3) is the motion vector of the lower right region pixel sample.
• h1 is the vertical-direction distance between the 2 pixel samples, w1 is the horizontal-direction distance between the 2 pixel samples, and w1^2 + h1^2 is the square of the distance between the 2 pixel samples.
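The weighted-sum-over-squared-distance computation above can be sketched as follows, assuming the upper left sample at (0, 0) and the lower-right-region sample at (w1, h1); the function name is an illustrative assumption:

```python
# Sketch: coefficients a, b of vx = a*x + b*y + vx0, vy = -b*x + a*y + vy0,
# from the upper-left MV v0 at (0, 0) and the lower-right-region MV v3 at
# (w1, h1). The denominator is the squared distance between the samples.

def affine_coeffs_diagonal(v0, v3, w1, h1):
    """Coefficients from a diagonal pair of control-point motion vectors."""
    d2 = w1 * w1 + h1 * h1              # squared distance between samples
    dx, dy = v3[0] - v0[0], v3[1] - v0[1]
    a = (dx * w1 + dy * h1) / d2        # weighted sum of MV differences
    b = (dx * h1 - dy * w1) / d2
    return a, b
```

Evaluating the model with these coefficients at (w1, h1) recovers v3 exactly, so the pair of control points is interpolated consistently.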
• When the 2 pixel samples are at arbitrary positions, the affine motion model is specifically:

  vx = a·(x - x4) + b·(y - y4) + vx4
  vy = -b·(x - x4) + a·(y - y4) + vy4

  where
  a = ((vx5 - vx4)·(x5 - x4) + (vy5 - vy4)·(y5 - y4)) / ((x5 - x4)^2 + (y5 - y4)^2)
  b = ((vx5 - vx4)·(y5 - y4) - (vy5 - vy4)·(x5 - x4)) / ((x5 - x4)^2 + (y5 - y4)^2)

• (x4, y4) is the coordinate of one of the 2 pixel samples, and (vx4, vy4) is the motion vector of the pixel sample with coordinates (x4, y4).
• (x5, y5) is the coordinate of the other pixel sample of the 2 pixel samples, and (vx5, vy5) is the motion vector of the pixel sample with coordinates (x5, y5).
• After the motion vector of the arbitrary pixel sample in the current image block is calculated, the corresponding position of the arbitrary pixel sample in the frame corresponding to its motion vector may be determined by using the position of the arbitrary pixel sample in the current image block and the motion vector of the arbitrary pixel sample.
• A corresponding image block of the current image block in the corresponding frame is obtained according to the corresponding position, the corresponding image block is compared with the current image block, and the sum of squared differences or the sum of absolute differences between the two is calculated as the matching error between the two, which may be used to evaluate the accuracy of image tracking of the current image block.
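The two matching-error measures mentioned above can be sketched as follows for blocks given as nested lists of pixel values:

```python
# Sketch of the two block-matching error measures: sum of squared
# differences (SSD) and sum of absolute differences (SAD) between the
# current image block and its motion-compensated corresponding block.

def ssd(block_a, block_b):
    """Sum of squared differences between two equally sized blocks."""
    return sum((p - q) ** 2
               for row_a, row_b in zip(block_a, block_b)
               for p, q in zip(row_a, row_b))

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(p - q)
               for row_a, row_b in zip(block_a, block_b)
               for p, q in zip(row_a, row_b))
```

SAD is cheaper to evaluate (no multiplications), while SSD penalizes large pixel differences more heavily; both are zero for a perfect match.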
• After the motion vector of the arbitrary pixel sample in the current image block is calculated, the calculated motion vector may be used to determine the predicted pixel value of the pixel points of the arbitrary pixel sample in the current image block.
• The motion vector of the arbitrary pixel sample may be the motion vector of an arbitrary pixel point in the current image block, in which case the process is: determining the predicted pixel value of each pixel point in the current image block by using the calculated motion vector of each pixel point in the current image block. The motion vector of the arbitrary pixel sample may instead be the motion vector of an arbitrary pixel block in the current image block, in which case the process is: determining the predicted pixel value of each pixel point of each pixel block in the current image block by using the calculated motion vector of each pixel block in the current image block.
• Tests have found that if the motion vector of each pixel block in the current image block is first calculated by using the affine motion model and the merged motion information unit set i, and the calculated motion vectors of the pixel blocks are then used to determine the predicted pixel value of each pixel point of each pixel block in the current image block, the pixel-block granularity of the motion vector calculation greatly reduces the computational complexity.
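The complexity saving from pixel-block granularity can be illustrated by counting motion-model evaluations; the function below is a hypothetical illustration, not part of the embodiments:

```python
# Illustration: number of affine-model evaluations for per-pixel versus
# per-block motion vector computation over a width x height image block.

def mv_evaluations(width, height, block=1):
    """Evaluations at the given granularity (block = 1 means per pixel)."""
    return (width // block) * (height // block)
```

For a 64x64 block, per-pixel computation evaluates the model 4096 times, while 4x4 pixel-block granularity evaluates it only 256 times, a 16x reduction.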
• The method may further include: performing motion compensation predictive coding on the arbitrary pixel sample in the current image block by using the calculated motion vector of the arbitrary pixel sample.
• The process may be: determining the predicted pixel value of the pixel points of the arbitrary pixel sample in the current image block by using the calculated motion vector of the arbitrary pixel sample; and performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value of the pixel points of the arbitrary pixel sample, thereby obtaining the reconstructed value of the pixel points of the arbitrary pixel sample.
• The process may also be: determining the predicted pixel value of the pixel points of the arbitrary pixel sample in the current image block by using the calculated motion vector of the arbitrary pixel sample; performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value; obtaining the prediction residual of the arbitrary pixel sample by using the pixel values of the pixel points of the arbitrary pixel sample obtained through motion compensation prediction and the actual pixel values of the pixel points of the arbitrary pixel sample; and encoding the prediction residual of the arbitrary pixel sample into the code stream.
• A similar method is used to obtain the prediction residuals of the other pixel samples of the current image block, thereby obtaining the prediction residual of the current image block, which is then encoded into the code stream. The actual pixel value may also be referred to as the original pixel value.
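The encoder-side residual step above reduces to a per-pixel subtraction of the motion-compensated prediction from the original samples. A minimal sketch for blocks given as nested lists:

```python
# Sketch of forming the prediction residual of a pixel sample: actual
# (original) pixel values minus motion-compensated predicted pixel values.

def prediction_residual(actual, predicted):
    """Per-pixel residual that would be encoded into the code stream."""
    return [[a - p for a, p in zip(row_a, row_p)]
            for row_a, row_p in zip(actual, predicted)]
```

The closer the affine prediction is to the original block, the closer the residual is to zero and the fewer bits it costs to encode.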
• The method may further include: performing motion compensation decoding on the arbitrary pixel sample by using the calculated motion vector of the arbitrary pixel sample in the current image block, to obtain the pixel reconstruction value of the arbitrary pixel sample.
• The process may be: determining the predicted pixel value of the pixel points of the arbitrary pixel sample in the current image block by using the calculated motion vector of the arbitrary pixel sample; and performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value of the pixel points of the arbitrary pixel sample, thereby obtaining the reconstructed value of the pixel points of the arbitrary pixel sample.
• The process may also be: determining the predicted pixel value of the pixel points of the arbitrary pixel sample in the current image block by using the calculated motion vector of the arbitrary pixel sample; performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value; decoding the prediction residual of the arbitrary pixel sample from the code stream, or decoding the prediction residual of the current image block from the code stream to obtain the prediction residual of the arbitrary pixel sample; and combining the prediction residual with the pixel values of the pixel points of the arbitrary pixel sample obtained through motion compensation prediction, to obtain the reconstructed value of the pixel points of the arbitrary pixel sample.
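The decoder-side reconstruction above is the inverse per-pixel operation: the motion-compensated prediction plus the decoded residual. A minimal sketch for blocks given as nested lists:

```python
# Sketch of decoder-side reconstruction: reconstructed pixel value equals
# the motion-compensated predicted value plus the decoded residual.

def reconstruct(predicted, residual):
    """Reconstructed pixel values of a pixel sample."""
    return [[p + r for p, r in zip(row_p, row_r)]
            for row_p, row_r in zip(predicted, residual)]
```

Together with the encoder-side subtraction, this round-trips exactly: prediction plus residual recovers the original pixel values (up to any quantization of the residual, which this sketch omits).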
• The other image blocks in the current video frame may be processed in a manner similar to the image processing manner corresponding to the current image block, or some image blocks in the current video frame may be processed in a manner different from the image processing manner corresponding to the current image block.
• The technical solution provided by this embodiment of the present invention constructs an affine motion model describing rotation and scaling with only two parameters, which both reduces computational complexity and improves the accuracy of motion vector estimation. After two displacement coefficients are introduced, the technical solution can estimate motion vectors based on mixed motions of rotation, scaling, and translation, making the motion vector estimation more accurate.
  • FIG. 7 is a schematic flowchart diagram of another image processing method according to another embodiment of the present invention.
• The following mainly describes, as an example, an image processing method implemented in a video encoding apparatus.
  • the image processing method provided by another embodiment of the present invention may include:
  • the video encoding device determines two pixel samples in the current image block.
• The 2 pixel samples may include 2 of the following: the upper left pixel sample, the right region pixel sample, the lower region pixel sample, and the lower right region pixel sample of the current image block.
• For the specific content of the upper left pixel sample, the right region pixel sample, the lower region pixel sample, and the lower right region pixel sample of the current image block, refer to the specific description in the foregoing embodiments; details are not described herein again.
  • the video encoding apparatus determines a candidate motion information unit set corresponding to each of the two pixel samples.
  • the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit.
  • the pixel samples mentioned in the embodiments of the present invention may be pixel points or pixel blocks including at least two pixel points.
  • the specific content of the candidate motion information unit set corresponding to the upper left pixel sample of the current image block and the corresponding candidate motion information unit set generation method may refer to the foregoing embodiment. The specific description will not be repeated here.
  • the candidate motion information unit set corresponding to the right region pixel sample of the current image block includes motion information units of x6 pixel samples, where the x6 pixel samples include at least one pixel sample spatially adjacent to the right region pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the right region pixel sample of the current image block, and x6 is a positive integer.
  • x6 above may be, for example, equal to 1, 2, 3, 4, 5, 6, or other values.
  • for example, the x6 pixel samples include at least one of: a pixel sample at the same position as the right region pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the right side of the current image block, a spatially adjacent pixel sample at the upper right of the current image block, and a spatially adjacent pixel sample on the upper side of the current image block.
  • the candidate motion information unit set corresponding to the lower region pixel sample of the current image block includes motion information units of x7 pixel samples, where the x7 pixel samples include at least one pixel sample spatially adjacent to the lower region pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower region pixel sample of the current image block, and x7 is a positive integer.
  • x7 above may be, for example, equal to 1, 2, 3, 4, 5, 6, or other values.
  • for example, the x7 pixel samples include at least one of: a pixel sample at the same position as the lower region pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the left side of the current image block, a spatially adjacent pixel sample at the lower left of the current image block, and a spatially adjacent pixel sample on the lower side of the current image block.
  • the candidate motion information unit set corresponding to the lower right region pixel sample of the current image block includes motion information units of x8 pixel samples, where the x8 pixel samples include at least one pixel sample spatially adjacent to the lower right region pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower right region pixel sample of the current image block, and x8 is a positive integer.
  • x8 above may be, for example, equal to 1, 2, 3, 4, 5, 6, or other values.
  • for example, the x8 pixel samples include at least one of: a pixel sample at the same position as the lower right region pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs, a spatially adjacent pixel sample on the right side of the current image block, a spatially adjacent pixel sample at the lower right of the current image block, and a spatially adjacent pixel sample on the lower side of the current image block.
  • the candidate motion information unit set corresponding to the lower right pixel sample included in the lower right region pixel sample includes at least one pixel sample spatially adjacent to the lower right pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the lower right pixel sample of the current image block; the temporally adjacent pixel sample may, for example, be a pixel sample at the same position as the lower right pixel sample of the current image block in a video frame temporally adjacent to the video frame to which the current image block belongs.
  • for the method for generating the candidate motion information unit sets corresponding to the lower left pixel sample, the upper right pixel sample, and the central pixel sample a1, reference may be made to the specific description in the foregoing embodiment; details are not described herein again.
  • the method for generating the candidate motion information unit sets corresponding to the right region pixel sample, the lower region pixel sample, and the lower right region pixel sample is similar to the method for generating the candidate motion information unit set corresponding to the lower left pixel sample, the upper right pixel sample, or the central pixel sample a1, and details are not described herein again.
  • the video encoding apparatus determines N candidate merged motion information unit sets based on the candidate motion information unit set corresponding to each of the two pixel samples.
  • the video encoding apparatus determines, from among the N candidate combined motion information unit sets, a combined motion information unit set i including two motion information units.
  • the video encoding apparatus may further write the identifier of the combined motion information unit set i into the video code stream.
  • the video decoding device determines the combined motion information unit set i including the two motion information units from among the N candidate combined motion information unit sets based on the identification of the combined motion information unit set i obtained from the video code stream.
  • the identifier of the merged motion information unit set i may be any information that can identify the merged motion information unit set i; for example, the identifier of the merged motion information unit set i may be an index number of the merged motion information unit set i in a list of merged motion information unit sets, and so on.
  • the video encoding apparatus obtains a motion vector 2-tuple by using the merged motion information unit set i.
  • the video encoding apparatus may use the two motion vectors in the merged motion information unit set i of the current image block as motion vector predictors, that is, as the starting values for searching for the two motion vectors in the motion vector 2-tuple, so as to simplify the affine motion search.
  • the search process is briefly as follows: the motion vector predictors are used as starting values and are iteratively updated; when the number of iterative updates reaches a specified number, or when the prediction error of the predicted value of the current image block obtained from the two iteratively updated motion vectors meets a specified condition, the two iteratively updated motion vectors are included in the motion vector 2-tuple.
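The iterative update loop described above can be sketched in Python (an illustrative language choice; the function name, the matching-error callback, and the refinement callback are all hypothetical, since the text does not fix a concrete matching criterion or update rule):

```python
def affine_search(mv_predictors, match_error, refine, max_iters=16, threshold=1.0):
    """Hypothetical sketch of the iterative affine motion search.

    mv_predictors: [(vx0, vy0), (vx1, vy1)] -- starting 2-tuple taken from
                   the motion vectors of merged motion information unit set i.
    match_error:   callable mapping a 2-tuple to a prediction error.
    refine:        callable producing an updated 2-tuple from the current one.
    The loop stops after a fixed iteration budget or once the matching error
    falls to or below the threshold."""
    mv_tuple = list(mv_predictors)
    for _ in range(max_iters):
        if match_error(mv_tuple) <= threshold:
            break
        mv_tuple = refine(mv_tuple)
    return mv_tuple
```

The error and refinement callbacks stand in for whatever block-matching cost and update step an encoder actually uses; only the control flow mirrors the text.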
  • the video encoding apparatus may further use the two motion vectors in the merged motion information unit set i of the current image block and the two motion vectors in the motion vector 2-tuple to obtain the prediction differences of the respective motion vectors of the 2 pixel samples; that is, the difference between each motion vector in the motion vector 2-tuple and the corresponding motion vector in the merged motion information unit set i of the current image block is computed, and the prediction differences of the respective motion vectors of the 2 pixel samples are encoded.
  • the video encoding apparatus calculates, by using the affine motion model and the motion vector 2-tuple, the motion vector of any pixel sample in the current image block.
  • the motion vector of any pixel sample in the current image block may be the motion vector of each pixel point in the current image block, the motion vector of each pixel block in the current image block, or the motion vector of any pixel sample in the current image block as described in the foregoing embodiment; for the specific content of these motion vectors, reference may be made to the specific description in the foregoing embodiment, and details are not described herein again.
  • the affine motion model may be in the following form:
    vx = ax + by
    vy = -bx + ay
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • in vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model, and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model.
  • the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form:
    vx = ax + by + c
    vy = -bx + ay + d
  • a sum of squares of a horizontal coordinate coefficient of a horizontal component of the affine motion model and a vertical coordinate coefficient of a horizontal component of the affine motion model is not equal to 1 .
  • a sum of squares of a vertical coordinate coefficient of a vertical component of the affine motion model and a horizontal coordinate coefficient of a vertical component of the affine motion model is not equal to 1 .
  • the calculating, by using the affine motion model and the motion vector 2-tuple, the motion vector of any pixel sample in the current image block may include: obtaining values of the coefficients of the affine motion model by using the motion vector of each of the two pixel samples and the positions of the two pixel samples; and obtaining the motion vector of any pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
  • alternatively, the calculating, by using the affine motion model and the motion vector 2-tuple, the motion vector of any pixel sample in the current image block may include: obtaining values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples, and the ratio of the difference between the vertical components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples; and obtaining the motion vector of any pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
  • alternatively, the calculating, by using the affine motion model and the motion vector 2-tuple, the motion vector of any pixel sample in the current image block may include: obtaining values of the coefficients of the affine motion model by using the ratio of a weighted sum of the components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples or to the square of the distance between the two pixel samples; and obtaining the motion vector of any pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
  • the 2 pixel samples include the upper left pixel sample of the current image block and a right region pixel sample to the right of the upper left pixel sample
  • the affine motion model is specifically as follows:
    vx = ((vx1 - vx0) / w) · x - ((vy1 - vy0) / w) · y + vx0
    vy = ((vy1 - vy0) / w) · x + ((vx1 - vx0) / w) · y + vy0
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 1 , vy 1 ) is a motion vector of the right region pixel sample
  • w is the distance between the two pixel samples; alternatively, w may be the difference between the horizontal coordinate of the right region pixel sample and the horizontal coordinate of the upper left pixel sample.
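As a minimal sketch of this case (Python is an illustrative language choice and the function name is hypothetical): placing the upper left sample at the origin with motion vector (vx0, vy0) and the right region sample at horizontal distance w with motion vector (vx1, vy1), the model vx = a·x + b·y + c, vy = -b·x + a·y + d gives a = (vx1 - vx0)/w and b = -(vy1 - vy0)/w, with c = vx0 and d = vy0:

```python
def affine_mv_right(v0, v1, w, x, y):
    """Motion vector of the pixel at (x, y), from the upper-left sample v0
    and a sample v1 at horizontal distance w to its right (sketch)."""
    vx0, vy0 = v0
    vx1, vy1 = v1
    a = (vx1 - vx0) / w       # horizontal coordinate coefficient of vx
    b = -(vy1 - vy0) / w      # vertical coordinate coefficient of vx
    vx = a * x + b * y + vx0
    vy = -b * x + a * y + vy0
    return vx, vy
```

At (w, 0) the function reproduces v1 exactly, and at (0, 0) it reproduces v0, which is a quick way to sanity-check the coefficient signs.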
  • the 2 pixel samples include an upper left pixel sample of the current image block and a lower region pixel sample located below the upper left pixel sample
  • the affine motion model is specifically:
    vx = ((vy2 - vy0) / h) · x + ((vx2 - vx0) / h) · y + vx0
    vy = -((vx2 - vx0) / h) · x + ((vy2 - vy0) / h) · y + vy0
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 2 , vy 2 ) is a motion vector of the lower region pixel sample
  • h is the distance between the two pixel samples; alternatively, h may be the difference between the vertical coordinate of the lower region pixel sample and the vertical coordinate of the upper left pixel sample.
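The lower-sample case can be sketched the same way (illustrative Python, hypothetical name): with the upper left sample at the origin and the lower sample at vertical distance h, evaluating vx = a·x + b·y + c, vy = -b·x + a·y + d at (0, h) gives b = (vx2 - vx0)/h and a = (vy2 - vy0)/h:

```python
def affine_mv_lower(v0, v2, h, x, y):
    """Motion vector of the pixel at (x, y), from the upper-left sample v0
    and a sample v2 at vertical distance h below it (sketch)."""
    vx0, vy0 = v0
    vx2, vy2 = v2
    a = (vy2 - vy0) / h       # horizontal coordinate coefficient of vx
    b = (vx2 - vx0) / h       # vertical coordinate coefficient of vx
    vx = a * x + b * y + vx0
    vy = -b * x + a * y + vy0
    return vx, vy
```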
  • the 2 pixel samples include the upper left pixel sample of the current image block and a lower right region pixel sample at the lower right of the upper left pixel sample
  • the affine motion model is specifically:
    vx = (((vx3 - vx0)·w1 + (vy3 - vy0)·h1) / (w1² + h1²)) · x + (((vx3 - vx0)·h1 - (vy3 - vy0)·w1) / (w1² + h1²)) · y + vx0
    vy = (((vy3 - vy0)·w1 - (vx3 - vx0)·h1) / (w1² + h1²)) · x + (((vx3 - vx0)·w1 + (vy3 - vy0)·h1) / (w1² + h1²)) · y + vy0
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 3 , vy 3 ) is a motion vector of the lower right region pixel sample
  • h1 is the vertical distance between the two pixel samples, w1 is the horizontal distance between the two pixel samples, and w1² + h1² is the square of the distance between the two pixel samples.
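The lower-right-sample case uses weighted sums of motion-vector component differences divided by the squared distance w1² + h1², as described above. A hypothetical Python sketch (the coefficients follow from solving the two model equations at offsets (w1, h1) from the upper left sample):

```python
def affine_mv_lower_right(v0, v3, w1, h1, x, y):
    """Motion vector of the pixel at (x, y), from the upper-left sample v0
    and a sample v3 offset (w1, h1) to its lower right (sketch)."""
    vx0, vy0 = v0
    vx3, vy3 = v3
    d2 = w1 * w1 + h1 * h1    # squared distance between the two samples
    a = ((vx3 - vx0) * w1 + (vy3 - vy0) * h1) / d2
    b = ((vx3 - vx0) * h1 - (vy3 - vy0) * w1) / d2
    vx = a * x + b * y + vx0
    vy = -b * x + a * y + vy0
    return vx, vy
```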
  • the affine motion model is specifically:
    vx = ax + by + c
    vy = -bx + ay + d
    where
    a = ((vx5 - vx4)·(x5 - x4) + (vy5 - vy4)·(y5 - y4)) / ((x5 - x4)² + (y5 - y4)²)
    b = ((vx5 - vx4)·(y5 - y4) - (vy5 - vy4)·(x5 - x4)) / ((x5 - x4)² + (y5 - y4)²)
    c = vx4 - a·x4 - b·y4
    d = vy4 + b·x4 - a·y4
  • (x4, y4) is the coordinate of one of the 2 pixel samples
  • (vx4, vy4) is the motion vector of the pixel sample with coordinates (x4, y4)
  • (x5, y5) is the coordinate of the other pixel sample of the 2 pixel samples
  • (vx5, vy5) is the motion vector of the pixel sample with coordinates (x5, y5)
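For two samples at arbitrary coordinates (x4, y4) and (x5, y5), the four equations of vx = a·x + b·y + c, vy = -b·x + a·y + d can be solved directly for a, b, c, d. A hypothetical Python sketch (names and argument layout are illustrative):

```python
def affine_mv_general(p4, v4, p5, v5, x, y):
    """Motion vector of the pixel at (x, y), from two samples at arbitrary
    positions p4, p5 with motion vectors v4, v5 (sketch). Differencing the
    model equations at the two positions yields a and b; c and d then follow
    by substituting one sample back in."""
    x4, y4 = p4
    x5, y5 = p5
    vx4, vy4 = v4
    vx5, vy5 = v5
    dx, dy = x5 - x4, y5 - y4
    d2 = dx * dx + dy * dy            # squared distance between the samples
    a = ((vx5 - vx4) * dx + (vy5 - vy4) * dy) / d2
    b = ((vx5 - vx4) * dy - (vy5 - vy4) * dx) / d2
    c = vx4 - a * x4 - b * y4
    d = vy4 + b * x4 - a * y4
    return a * x + b * y + c, -b * x + a * y + d
```

With p4 at the origin and p5 on the horizontal axis, this reduces to the right-sample special case above, which is a convenient consistency check.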
  • other image blocks in the current video frame may be processed in a manner similar to the image processing manner corresponding to the current image block; alternatively, some image blocks in the current video frame may be processed in a manner different from the image processing manner corresponding to the current image block.
  • the video encoding apparatus determines, by using the calculated motion vector of any pixel sample in the current image block, a predicted pixel value of a pixel point of an arbitrary pixel sample in the current image block.
  • the reference frame indexes corresponding to the motion vectors of different pixel samples in the current image block may be the same, and may be the reference frame index corresponding to the motion vectors in the merged motion information unit set i.
  • the motion vector of any pixel sample in the current image block may be the motion vector of any pixel point in the current image block, and the process may be: determining, by using the calculated motion vector of each pixel point in the current image block, the predicted pixel value of each pixel point in the current image block.
  • the motion vector of any pixel sample in the current image block may also be the motion vector of any pixel block in the current image block, and the process may be: determining, by using the calculated motion vector of each pixel block in the current image block, the predicted pixel value of each pixel point of each pixel block in the current image block.
  • it is found through testing that if the affine motion model and the merged motion information unit set i are first used to calculate the motion vector of each pixel block in the current image block, and the calculated motion vector of each pixel block is then used to determine the predicted pixel value of each pixel point of each pixel block, the pixel block in the current image block is used as the granularity when the motion vectors are calculated, which helps to greatly reduce computational complexity.
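The granularity point above — one motion vector per pixel block rather than per pixel — can be illustrated with a hypothetical sketch (the 4×4 block size and the block-centre evaluation point are assumptions for illustration, not mandated by the text):

```python
def block_motion_field(width, height, coeffs, block=4):
    """Evaluate the affine model vx = a*x + b*y + c, vy = -b*x + a*y + d
    once per pixel block instead of once per pixel.

    coeffs: (a, b, c, d) of the affine motion model.
    block:  pixel-block edge length (4 is an illustrative choice)."""
    a, b, c, d = coeffs
    field = {}
    for by in range(0, height, block):
        for bx in range(0, width, block):
            # One model evaluation at the block centre is reused for every
            # pixel of the block, cutting evaluations by a factor of block**2.
            cx, cy = bx + block / 2, by + block / 2
            field[(bx, by)] = (a * cx + b * cy + c, -b * cx + a * cy + d)
    return field
```

For an 8×8 region with 4×4 blocks this performs 4 model evaluations instead of 64, which is the complexity reduction the text refers to.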
  • the method may further include: performing motion compensation predictive coding on the arbitrary pixel sample in the current image block by using the calculated motion vector of any pixel sample in the current image block.
  • the process may be: determining, by using the calculated motion vector of any pixel sample in the current image block, the predicted pixel value of the pixel points of the arbitrary pixel sample, and performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value, thereby obtaining the reconstructed values of the pixel points of the arbitrary pixel sample; or determining, by using the calculated motion vector of any pixel sample in the current image block, the predicted pixel value of the pixel points of the arbitrary pixel sample, performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value, obtaining the prediction residual of the arbitrary pixel sample by using the pixel values, obtained through motion compensation prediction, of the pixel points of the arbitrary pixel sample and the actual pixel values of the pixel points of the arbitrary pixel sample, and encoding the prediction residual into the code stream.
  • other image blocks in the current video frame may be processed in a manner similar to the image processing manner corresponding to the current image block; alternatively, some image blocks in the current video frame may be processed in a manner different from the image processing manner corresponding to the current image block.
  • the technical solution provided by this embodiment of the present invention constructs, by using only two parameters, an affine motion model based on rotation and scaling motion, which not only reduces computational complexity but also improves the accuracy of motion vector estimation.
  • after the technical solution introduces two displacement coefficients, it can estimate motion vectors based on mixed motion of rotation, scaling, and translation, so that motion vector estimation is more accurate.
  • FIG. 8 is a schematic flowchart diagram of another image processing method according to another embodiment of the present invention.
  • this embodiment is mainly described by using an example in which the image processing method is implemented in a video decoding apparatus.
  • the image processing method provided by another embodiment of the present invention may include:
  • the video decoding device determines two pixel samples in the current image block.
  • the two pixel samples include two pixel samples in the upper left pixel sample, the right region pixel sample, the lower region pixel sample, and the lower right region pixel sample of the current image block.
  • for the substantive content of the upper left pixel sample, the right region pixel sample, the lower region pixel sample, and the lower right region pixel sample of the current image block, reference may be made to the specific description in the foregoing embodiments; details are not described herein again.
  • the video decoding apparatus determines a candidate motion information unit set corresponding to each of the two pixel samples.
  • for details, reference may be made to the specific process in which the video encoding apparatus determines, in S702 above, the candidate motion information unit set corresponding to each of the two pixel samples; details are not described herein again.
  • the video decoding apparatus determines N candidate merged motion information unit sets based on the candidate motion information unit set corresponding to each of the two pixel samples.
  • for details, the video decoding apparatus may refer to the specific process in which the video encoding apparatus in S703 above determines the N candidate merged motion information unit sets based on the candidate motion information unit set corresponding to each of the 2 pixel samples; details are not described herein again.
  • the video decoding apparatus decodes the video code stream to obtain the identifier of the merged motion information unit set i and the prediction residual of the current image block, and determines, from among the N candidate merged motion information unit sets based on the identifier of the merged motion information unit set i, the merged motion information unit set i including two motion information units.
  • the video encoding device can write the identifier of the combined motion information unit set i to the video code stream.
  • the video decoding device obtains a motion vector 2-tuple by using the combined motion information unit set i.
  • specifically, the video decoding apparatus may use the motion vector of each motion information unit in the merged motion information unit set i of the current image block as a motion vector predictor, decode, from the code stream, the prediction difference of the motion vector of each of the two pixel samples of the current image block, and add each motion vector predictor and the prediction difference of the motion vector corresponding to that motion vector predictor, thereby obtaining the motion vector 2-tuple including the motion vectors of the 2 pixel samples of the current image block.
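The decoder-side step above — each reconstructed motion vector is a predictor from merged motion information unit set i plus a prediction difference parsed from the code stream — can be sketched as follows (illustrative Python; names are hypothetical):

```python
def reconstruct_mv_tuple(predictors, decoded_diffs):
    """Rebuild the motion vector 2-tuple at the decoder.

    predictors:    motion vectors taken from merged motion information
                   unit set i, used as motion vector predictors.
    decoded_diffs: prediction differences of the motion vectors of the
                   2 pixel samples, as parsed from the code stream."""
    return [(pvx + dvx, pvy + dvy)
            for (pvx, pvy), (dvx, dvy) in zip(predictors, decoded_diffs)]
```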
  • the video decoding apparatus calculates, by using an affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block.
  • for details about the process in which the video decoding apparatus calculates, by using the affine motion model and the motion vector 2-tuple, the motion vector of any pixel sample in the current image block, reference may be made to the specific process in which the video encoding apparatus calculates, in S706 above, the motion vector of any pixel sample in the current image block by using the affine motion model and the motion vector 2-tuple; details are not described herein again.
  • the video decoding device determines, by using the calculated motion vector of any pixel sample in the current image block, a predicted pixel value of a pixel point of an arbitrary pixel sample in the current image block.
  • the reference frame indexes corresponding to the motion vectors of different pixel samples in the current image block may be the same, and may be the reference frame index corresponding to the motion vectors in the merged motion information unit set i.
  • for details about the process in which the video decoding apparatus determines, by using the calculated motion vector of any pixel sample in the current image block, the predicted pixel value of the pixel points of the arbitrary pixel sample, reference may be made to the specific description in the foregoing embodiment; details are not described herein again.
  • the video decoding apparatus reconstructs the arbitrary pixel sample by using the predicted pixel value of the arbitrary pixel sample in the current image block and the prediction residual, obtained from the code stream, of the arbitrary pixel sample in the current image block.
  • the process may be: performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value of the pixel points of the arbitrary pixel sample, thereby obtaining the reconstructed values of the pixel points of the arbitrary pixel sample; or performing motion compensation prediction on the arbitrary pixel sample by using the predicted pixel value of the pixel points of the arbitrary pixel sample, decoding the prediction residual of the arbitrary pixel sample from the code stream, and combining the prediction residual with the pixel values, obtained through motion compensation prediction, of the pixel points of the arbitrary pixel sample, to obtain the reconstructed values of the pixel points of the arbitrary pixel sample.
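Combining the motion-compensated predicted pixel values with the decoded prediction residual, as described above, can be sketched as follows (illustrative Python; the clipping to the valid sample range is standard codec practice assumed here, not stated explicitly in the text):

```python
def reconstruct_pixels(predicted, residual, bit_depth=8):
    """Reconstructed pixel values = predicted values + decoded residual.

    predicted: motion-compensated predicted pixel values.
    residual:  decoded prediction residual for the same pixels.
    The result is clipped to the valid range [0, 2**bit_depth - 1]."""
    max_val = (1 << bit_depth) - 1
    return [min(max(p + r, 0), max_val) for p, r in zip(predicted, residual)]
```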
  • other image blocks in the current video frame may be processed in a manner similar to the image processing manner corresponding to the current image block; alternatively, some image blocks in the current video frame may be processed in a manner different from the image processing manner corresponding to the current image block.
  • the technical solution provided by this embodiment of the present invention constructs, by using only two parameters, an affine motion model based on rotation and scaling motion, which not only reduces computational complexity but also improves the accuracy of motion vector estimation.
  • after the technical solution introduces two displacement coefficients, it can estimate motion vectors based on mixed motion of rotation, scaling, and translation, so that motion vector estimation is more accurate.
  • an embodiment of the present invention further provides an image processing apparatus 900, which may include:
  • the obtaining unit 910 is configured to obtain a motion vector 2-tuple of the current image block, where the motion vector 2-tuple includes a motion vector of each of the 2 pixel samples in the video frame to which the current image block belongs.
  • the calculating unit 920 is configured to calculate, by using the affine motion model and the motion vector 2-tuple obtained by the obtaining unit 910, a motion vector of an arbitrary pixel sample in the current image block.
  • the affine motion model may be in the following form:
    vx = ax + by
    vy = -bx + ay
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • in vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model, and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model.
  • the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form:
    vx = ax + by + c
    vy = -bx + ay + d
  • the calculating unit 920 may be specifically configured to: obtain values of the coefficients of the affine motion model by using the motion vector of each of the 2 pixel samples and the positions of the 2 pixel samples; and obtain the motion vector of any pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
  • the calculating unit 920 may be specifically configured to: obtain values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples, and the ratio of the difference between the vertical components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples; and obtain the motion vector of any pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
  • the calculating unit 920 may be specifically configured to: obtain values of the coefficients of the affine motion model by using the ratio of a weighted sum of the components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples or to the square of the distance between the 2 pixel samples; and obtain the motion vector of any pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
  • when the 2 pixel samples include the upper left pixel sample of the current image block and a right region pixel sample to the right of the upper left pixel sample, the affine motion model is specifically:
    vx = ((vx1 - vx0) / w) · x - ((vy1 - vy0) / w) · y + vx0
    vy = ((vy1 - vy0) / w) · x + ((vx1 - vx0) / w) · y + vy0
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 1 , vy 1 ) is a motion vector of the right region pixel sample
  • w is the distance between the two pixel samples.
  • when the 2 pixel samples include the upper left pixel sample of the current image block and a lower region pixel sample below the upper left pixel sample, the affine motion model is specifically:
    vx = ((vy2 - vy0) / h) · x + ((vx2 - vx0) / h) · y + vx0
    vy = -((vx2 - vx0) / h) · x + ((vy2 - vy0) / h) · y + vy0
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 2 , vy 2 ) is a motion vector of the lower region pixel sample
  • h is the distance between the two pixel samples.
  • the 2 pixel samples include the upper left pixel sample of the current image block and a lower right region pixel sample at the lower right of the upper left pixel sample
  • the affine motion model is specifically:
    vx = (((vx3 - vx0)·w1 + (vy3 - vy0)·h1) / (w1² + h1²)) · x + (((vx3 - vx0)·h1 - (vy3 - vy0)·w1) / (w1² + h1²)) · y + vx0
    vy = (((vy3 - vy0)·w1 - (vx3 - vx0)·h1) / (w1² + h1²)) · x + (((vx3 - vx0)·w1 + (vy3 - vy0)·h1) / (w1² + h1²)) · y + vy0
  • (vx 0 , vy 0 ) is a motion vector of the upper left pixel sample
  • (vx 3 , vy 3 ) is a motion vector of the lower right region pixel sample
  • h1 is the vertical distance between the two pixel samples, w1 is the horizontal distance between the two pixel samples, and w1² + h1² is the square of the distance between the two pixel samples.
  • the image processing apparatus 900 is applied to a video encoding apparatus, or the image processing apparatus 900 is applied to a video decoding apparatus.
  • when the image processing apparatus 900 is applied to a video encoding apparatus, the apparatus further includes an encoding unit, configured to perform motion compensation predictive coding on the arbitrary pixel sample in the current image block by using the motion vector, calculated by the calculating unit 920, of the arbitrary pixel sample in the current image block.
  • when the image processing apparatus 900 is applied to a video decoding apparatus, the apparatus further includes a decoding unit, configured to perform motion compensation decoding on the arbitrary pixel sample by using the motion vector, calculated by the calculating unit 920, of the arbitrary pixel sample in the current image block, to obtain the pixel reconstruction values of the arbitrary pixel sample.
  • the image processing apparatus 900 of this embodiment may further include the functional units in the image prediction apparatus 400, and the obtaining unit 910 and the calculating unit 920 in the image processing apparatus 900 of this embodiment may be applied in the prediction unit 430, so that the specific functions of the prediction unit 430 can be implemented; for the functional units in the image prediction apparatus 400, reference may be made to the specific description in the foregoing embodiments, and details are not described herein again.
  • the image processing apparatus 900 of the present embodiment may be specifically implemented according to the method in the foregoing method embodiments, and the specific implementation process may refer to the related description of the foregoing method embodiments, and details are not described herein again.
  • the image processing apparatus 900 may be any apparatus that needs to output and play video, such as a notebook computer, a tablet computer, a personal computer, or a mobile phone.
  • the image processing apparatus 900 constructs, by using only two parameters, an affine motion model based on rotation and scaling motion, which not only reduces computational complexity but also improves the accuracy of motion vector estimation. After the image processing apparatus 900 introduces two displacement coefficients, it can estimate motion vectors based on mixed motion of rotation, scaling, and translation, so that motion vector estimation is more accurate.
  • FIG. 10 is a schematic diagram of an image processing apparatus 1000 according to an embodiment of the present invention.
  • the image processing apparatus 1000 may include at least one bus 1001, at least one processor 1002 connected to the bus 1001, and at least one memory 1003 connected to the bus 1001.
  • by invoking, through the bus 1001, the code or instructions stored in the memory 1003, the processor 1002 is configured to: obtain a motion vector 2-tuple of the current image block, where the motion vector 2-tuple includes the motion vector of each of 2 pixel samples in the video frame to which the current image block belongs; and calculate, by using an affine motion model and the motion vector 2-tuple, the motion vector of any pixel sample in the current image block.
  • The affine motion model may be in the following form:
  • vx = ax + by,  vy = -bx + ay
  • where (x, y) are the coordinates of the arbitrary pixel sample, vx is the horizontal component of the motion vector of the arbitrary pixel sample, and vy is the vertical component of the motion vector of the arbitrary pixel sample;
  • in vx = ax + by, a is the horizontal coordinate coefficient of the horizontal component of the affine motion model, and b is the vertical coordinate coefficient of the horizontal component of the affine motion model;
  • in vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model, and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model.
  • The affine motion model may further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form: vx = ax + by + c,  vy = -bx + ay + d.
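The two-parameter model and its four-parameter extension described above can be sketched in Python as follows; the function name and the coefficient values used in the example are illustrative assumptions, not values taken from the patent.

```python
def affine_motion_vector(x, y, a, b, c=0.0, d=0.0):
    """Evaluate the affine motion model at pixel (x, y).

    vx = a*x + b*y + c   (horizontal component)
    vy = -b*x + a*y + d  (vertical component)

    a and b model rotation and scaling; c and d are the horizontal and
    vertical displacement (translation) coefficients.
    """
    vx = a * x + b * y + c
    vy = -b * x + a * y + d
    return vx, vy

# Pure translation: a = b = 0, so every pixel gets the same motion vector.
print(affine_motion_vector(8, 4, a=0.0, b=0.0, c=1.5, d=-0.5))  # (1.5, -0.5)
```

With a = b = 0 the model degenerates to ordinary translational block motion, which is why only two extra parameters are needed on top of translation to capture rotation and scaling.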
  • The processor 1002 is configured to obtain values of the coefficients of the affine motion model by using the respective motion vectors of the two pixel samples and the positions of the two pixel samples, and to obtain the motion vector of any pixel sample in the current image block by using the affine motion model and the values of the coefficients of the affine motion model.
  • In the calculating, by using the affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block, the processor 1002 may obtain values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples, and the ratio of the difference between the vertical components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples, and then obtain the motion vector of any pixel sample in the current image block by using the affine motion model and the values of its coefficients.
  • Alternatively, in the calculating, by using the affine motion model and the motion vector 2-tuple, a motion vector of an arbitrary pixel sample in the current image block, the processor 1002 may obtain values of the coefficients of the affine motion model by using the ratio of a weighted sum of the components of the respective motion vectors of the two pixel samples to the distance between the two pixel samples or to the square of the distance between the two pixel samples, and then obtain the motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the values of its coefficients.
  • When the two pixel samples include the upper left pixel sample of the current image block and a right-region pixel sample located to the right of the upper left pixel sample, the affine motion model may be specifically:
  • vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0,  vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
  • where (vx0, vy0) is the motion vector of the upper left pixel sample, (vx1, vy1) is the motion vector of the right-region pixel sample, and w is the distance between the two pixel samples.
  • When the two pixel samples include the upper left pixel sample of the current image block and a lower-region pixel sample located below the upper left pixel sample, the affine motion model may be specifically:
  • vx = ((vy2 - vy0)/h)·x + ((vx2 - vx0)/h)·y + vx0,  vy = -((vx2 - vx0)/h)·x + ((vy2 - vy0)/h)·y + vy0
  • where (vx0, vy0) is the motion vector of the upper left pixel sample, (vx2, vy2) is the motion vector of the lower-region pixel sample, and h is the distance between the two pixel samples.
  • When the 2 pixel samples include the upper left pixel sample of the current image block and a lower-right-region pixel sample located to the lower right of the upper left pixel sample, the affine motion model may be specifically:
  • vx = (((vx3 - vx0)·w1 + (vy3 - vy0)·h1)/(w1^2 + h1^2))·x + (((vx3 - vx0)·h1 - (vy3 - vy0)·w1)/(w1^2 + h1^2))·y + vx0,
    vy = -(((vx3 - vx0)·h1 - (vy3 - vy0)·w1)/(w1^2 + h1^2))·x + (((vx3 - vx0)·w1 + (vy3 - vy0)·h1)/(w1^2 + h1^2))·y + vy0
  • where (vx0, vy0) is the motion vector of the upper left pixel sample, (vx3, vy3) is the motion vector of the lower-right-region pixel sample, h1 is the vertical distance between the two pixel samples, w1 is the horizontal distance between the two pixel samples, and w1^2 + h1^2 is the square of the distance between the two pixel samples.
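As an illustrative sketch of the coefficient derivations above (the function names and numeric values are my own assumptions, not taken from the patent), the coefficients a and b can be recovered from the motion vectors of the upper left pixel sample and a second pixel sample:

```python
def coeffs_from_right_sample(mv0, mv1, w):
    # Upper left sample (vx0, vy0) and a sample at horizontal distance w:
    # differences of the motion-vector components divided by the distance
    # give the model coefficients a and b.
    (vx0, vy0), (vx1, vy1) = mv0, mv1
    a = (vx1 - vx0) / w
    b = -(vy1 - vy0) / w
    return a, b

def coeffs_from_lower_right_sample(mv0, mv3, w1, h1):
    # Sample at offset (w1, h1): weighted sums of the motion-vector
    # differences divided by the squared distance w1^2 + h1^2 give a and b.
    (vx0, vy0), (vx3, vy3) = mv0, mv3
    d2 = w1 * w1 + h1 * h1
    a = ((vx3 - vx0) * w1 + (vy3 - vy0) * h1) / d2
    b = ((vx3 - vx0) * h1 - (vy3 - vy0) * w1) / d2
    return a, b
```

With a and b recovered, and the upper left sample taken as the coordinate origin so that c = vx0 and d = vy0, the motion vector of any pixel sample follows from vx = ax + by + c and vy = -bx + ay + d.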
  • The image processing apparatus 1000 may be applied to a video encoding apparatus, or the image processing apparatus 1000 may be applied to a video decoding apparatus.
  • The processor 1002 is further configured to: after calculating, by using the affine motion model and the motion vector 2-tuple, the motion vector of an arbitrary pixel sample in the current image block, perform motion-compensated predictive coding on the arbitrary pixel sample in the current image block by using the calculated motion vector.
  • Alternatively, the processor 1002 is further configured to: after determining the predicted pixel value of the pixel points of the arbitrary pixel sample in the current image block, perform motion-compensated decoding on the arbitrary pixel sample by using the calculated motion vector of the arbitrary pixel sample in the current image block, to obtain the pixel reconstruction value of the arbitrary pixel sample.
  • the image processing apparatus 1000 can be any device that needs to output and play video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, and the like.
  • The image processing apparatus 1000 constructs an affine motion model based on rotation and scaling motion using only two parameters, which not only reduces computational complexity but also improves the accuracy of motion vector estimation.
  • After two displacement coefficients are introduced, the image processing device 1000 can estimate motion vectors based on mixed motion of rotation, scaling, and translation, so that motion vector estimation is more accurate.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium can store a program, and the program includes some or all of the steps of any one of the image prediction methods described in the foregoing method embodiments.
  • FIG. 11 is a schematic flowchart diagram of another image processing method according to an embodiment of the present invention.
  • the image processing method provided by another embodiment of the present invention may include:
  • S1101: Obtain coefficients of an affine motion model, and calculate the motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the coefficients of the affine motion model.
  • The affine motion model may be in the following form: vx = ax + by,  vy = -bx + ay
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • the coefficients of the affine motion model may include a and b;
  • The coefficients of the affine motion model may further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form: vx = ax + by + c,  vy = -bx + ay + d.
  • S1102: Determine, by using the calculated motion vector of the arbitrary pixel sample, a predicted pixel value of the pixel points of the arbitrary pixel sample.
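A minimal sketch of step S1102, assuming nearest-integer sampling and border clamping (a real codec would instead interpolate fractional positions with sub-pixel filters); the function name and frame data are illustrative, not from the patent:

```python
def predict_pixel(ref, x, y, vx, vy):
    """Predict pixel (x, y) by sampling the reference frame at the
    motion-displaced position, clamped to the frame borders.
    `ref` is a 2-D list indexed as ref[row][col]."""
    h, w = len(ref), len(ref[0])
    # Nearest-integer sampling of the displaced position (a simplifying
    # assumption; codecs use sub-pixel interpolation filters here).
    rx = min(max(int(round(x + vx)), 0), w - 1)
    ry = min(max(int(round(y + vy)), 0), h - 1)
    return ref[ry][rx]

ref = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
# Motion vector (1, 1): pixel (0, 0) is predicted from ref[1][1].
print(predict_pixel(ref, 0, 0, 1.0, 1.0))  # 50
```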
  • The technical solution provided by this embodiment of the invention constructs an affine motion model based on rotation and scaling motion using only two parameters, which not only reduces computational complexity but also improves the accuracy of motion vector estimation. After the technical solution introduces two displacement coefficients, motion vectors can be estimated based on mixed motion of rotation, scaling, and translation, so that motion vector estimation is more accurate.
  • an embodiment of the present invention further provides an image processing apparatus 1200, which may include:
  • the obtaining unit 1210 is configured to obtain coefficients of the affine motion model.
  • The calculating unit 1220 is configured to calculate the motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the coefficients of the affine motion model obtained by the obtaining unit 1210.
  • the prediction unit 1230 is configured to determine a predicted pixel value of a pixel point of the arbitrary pixel sample by using a motion vector of the arbitrary pixel sample calculated by the calculating unit 1220.
  • The affine motion model may be in the following form: vx = ax + by,  vy = -bx + ay
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is the horizontal coordinate coefficient of the horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • a is a vertical coordinate coefficient of a vertical component of the affine motion model
  • -b is the horizontal coordinate coefficient of the vertical component of the affine motion model
  • the coefficients of the affine motion model may include a and b;
  • The coefficients of the affine motion model may further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form: vx = ax + by + c,  vy = -bx + ay + d.
  • the image processing device 1200 can be any device that needs to output and play video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, and the like.
  • the image processing apparatus 1200 constructs an affine motion model based on the rotation and scaling motion only by two parameters, which not only reduces the computational complexity, but also improves the accuracy of estimating the motion vector. . After the image processing device 1200 introduces two displacement coefficients, the image processing device 1200 can estimate the motion vector based on the mixed motion of rotation, scaling, and translation, so that the estimation of the motion vector is more accurate.
  • FIG. 13 is a schematic diagram of an image processing apparatus 1300 according to an embodiment of the present invention.
  • The image processing apparatus 1300 may include at least one bus 1301, at least one processor 1302 connected to the bus 1301, and at least one memory 1303 connected to the bus 1301.
  • The processor 1302 calls the code or instructions stored in the memory 1303 via the bus 1301 to obtain coefficients of the affine motion model, calculate the motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the coefficients of the affine motion model, and determine, by using the calculated motion vector of the arbitrary pixel sample, a predicted pixel value of the pixel points of the arbitrary pixel sample.
  • The affine motion model may be in the following form: vx = ax + by,  vy = -bx + ay
  • (x, y) is a coordinate of the arbitrary pixel sample
  • the vx is a horizontal component of a motion vector of the arbitrary pixel sample
  • the vy is a vertical component of a motion vector of the arbitrary pixel sample
  • a is a horizontal coordinate coefficient of a horizontal component of the affine motion model
  • b is a vertical coordinate coefficient of a horizontal component of the affine motion model
  • In vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model, and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model; the coefficients of the affine motion model may include a and b.
  • The coefficients of the affine motion model may further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form: vx = ax + by + c,  vy = -bx + ay + d.
  • the image processing device 1300 can be any device that needs to output and play video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, and the like.
  • The image processing apparatus 1300 constructs an affine motion model based on rotation and scaling motion using only two parameters, which not only reduces computational complexity but also improves the accuracy of motion vector estimation.
  • After two displacement coefficients are introduced, the image processing device 1300 can estimate motion vectors based on mixed motion of rotation, scaling, and translation, so that motion vector estimation is more accurate.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium can store a program, and the program includes some or all of the steps of any one of the image prediction methods described in the foregoing method embodiments.
  • the disclosed apparatus may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division into the above units is only a logical functional division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the above-described integrated unit if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
  • The technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium.
  • The software product includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device, and in particular a processor in a computer device) to perform all or some of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium may include any medium that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Systems (AREA)

Abstract

An image prediction method and related device. An image prediction method includes: determining 2 pixel samples in a current image block, and determining a candidate motion information unit set corresponding to each of the 2 pixel samples; determining a merged motion information unit set i including 2 motion information units; and performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i. The technical solutions provided by the embodiments of the present invention help reduce the computational complexity of image prediction based on an affine motion model.

Description

Image prediction method and related device

Technical Field

The present invention relates to the field of video coding and decoding, and in particular to an image prediction method and a related device.

Background

With the development of photoelectric acquisition technology and the ever-growing demand for high-definition digital video, the amount of video data keeps increasing. Limited heterogeneous transmission bandwidth and diversified video applications constantly impose higher requirements on video coding efficiency, which prompted the launch of the High Efficiency Video Coding (HEVC) standard.

The basic principle of video coding compression is to exploit the correlations among the spatial domain, the temporal domain, and codewords to remove redundancy as much as possible. A currently popular approach is a block-based hybrid video coding framework, in which video coding compression is implemented through steps such as prediction (including intra-frame prediction and inter-frame prediction), transform, quantization, and entropy coding. This coding framework has shown strong vitality, and HEVC still uses this block-based hybrid video coding framework.

In various video coding/decoding schemes, motion estimation/motion compensation is a key technique affecting coding/decoding performance. Existing video coding/decoding schemes assume that the motion of an object always satisfies translational motion and that all parts of the object move in the same way, so existing motion estimation/motion compensation algorithms are basically block motion compensation algorithms based on a translational motion model. However, motion in the real world is diverse, and irregular motions such as scaling, rotation, and parabolic motion are ubiquitous. Since the 1990s, video coding experts have recognized the universality of irregular motion and hoped to improve video coding efficiency by introducing irregular motion models (such as affine motion models), but the computational complexity of existing affine-model-based image prediction is usually very high.

Summary

Embodiments of the present invention provide an image prediction method and a related device, so as to reduce the computational complexity of image prediction based on an affine motion model.
A first aspect of the present invention provides an image prediction method, which may include:

determining 2 pixel samples in a current image block, and determining a candidate motion information unit set corresponding to each of the 2 pixel samples, where the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit;

determining a merged motion information unit set i including 2 motion information units,

where each motion information unit in the merged motion information unit set i is respectively selected from at least some of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and the motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward; and

performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i.
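The enumeration of candidate merged motion information unit sets described above can be sketched as follows; the data structures and the constraint used here are toy assumptions standing in for the conditions defined later in the text:

```python
from itertools import product

def candidate_merged_sets(cand_set_s0, cand_set_s1, is_valid):
    """Enumerate candidate merged motion information unit sets: one motion
    information unit from each pixel sample's candidate set, keeping only
    the combinations that satisfy the constraint `is_valid`."""
    return [(u0, u1) for u0, u1 in product(cand_set_s0, cand_set_s1)
            if is_valid(u0, u1)]

# Toy motion information units: (prediction_direction, motion_vector).
s0 = [("forward", (1, 0)), ("backward", (2, 1))]
s1 = [("forward", (1, 1)), ("backward", (0, 0))]

# Example constraint (the "second condition" below): both units in a
# candidate merged set share the same prediction direction.
same_dir = lambda u0, u1: u0[0] == u1[0]
pairs = candidate_merged_sets(s0, s1, same_dir)
print(len(pairs))  # 2
```

The merged motion information unit set i would then be chosen from these N candidate sets, for example by a rate-distortion criterion on the encoder side or by a signaled identifier on the decoder side.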
结合第一方面,在第一方面的第一种可能的实施方式中,所述确定包括2个运动信息单元的合并运动信息单元集i,包括:
从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
结合第一方面的第一种可能的实施方式,在第一方面的第二种可能的实施方式中,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值,或者,所述N个候选合并运动信息单元集中的其中一个候选合并运动信息单元集中的任意1个运动信息单元和像素样本Z的运动矢量竖直分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
结合第一方面或第一方面的第一种至第二种可能的实施方式中的任意一种可能的实施方式,在第一方面的第三种可能的实施方式中,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
其中,所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块;所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块;所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块;所述当前图像块的中心素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
结合第一方面的第三种可能的实施方式,在第一方面的第四种可能的实施方式中,
所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当前图像块的左上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块 的左上像素样本时域相邻的像素样本,所述x1为正整数;
其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
结合第一方面的第三种至第四种可能的实施方式中的任意一种可能的实施方式,在第一方面的第五种可能的实施方式中,所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数;
其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
结合第一方面的第三种至第五种可能的实施方式中的任意一种可能的实施方式,在第一方面的第六种可能的实施方式中,
所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
结合第一方面的第三种至第六种可能的实施方式中的任意一种可能的实施方式,在第一方面的第七种可能的实施方式中,
所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5 个像素样本的运动信息单元,其中,所述x5个像素样本中的其中一个像素样本为像素样本a2,
其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述x5为正整数。
结合第一方面或第一方面的第一种至第七种可能的实施方式中的任意一种可能的实施方式,在第一方面的第八种可能的实施方式中,
所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测包括:当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,其中,所述第一预测方向为前向或后向;
或者,
所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测包括:当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
结合第一方面或第一方面的第一种至第八种可能的实施方式中的任意一种可能的实施方式,在第一方面的第九种可能的实施方式中,
所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进 行像素值预测,包括:
利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值;
或者,
利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
结合第一方面或第一方面的第一种至第九种可能的实施方式中的任意一种可能的实施方式,在第一方面的第十种可能的实施方式中,
所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测,包括:利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
结合第一方面的第十种可能的实施方式,在第一方面的第十一种可能的实施方式中,
所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
结合第一方面或第一方面的第一种至第十一种可能的实施方式中的任意一种可能的实施方式,在第一方面的第十二种可能的实施方式中,
所述仿射运动模型为如下形式的仿射运动模型:
vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0,  vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所 述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述w为所述当前图像块的长或宽。
结合第一方面或第一方面的第一种至第十二种可能的实施方式中的任意一种可能的实施方式,在第一方面的第十三种可能的实施方式中,
所述图像预测方法应用于视频编码过程中或所述图像预测方法应用于视频解码过程中。
结合第一方面的第十三种可能的实施方式,在第一方面的第十四种可能的实施方式中,在所述图像预测方法应用于视频解码过程中的情况下,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i,包括:基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。
结合第一方面的第十三种可能的实施方式或第一方面的第十四种可能的实施方式,在第一方面的第十五种可能的实施方式中,在所述图像预测方法应用于视频解码过程中的情况下,所述方法还包括:从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
结合第一方面的第十三种可能的实施方式,在第一方面的第十六种可能的实施方式中,在所述图像预测方法应用于视频编码过程中的情况下,所述方法还包括:利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
结合第一方面的第十三种可能的实施方式或第一方面的第十六种可能的实施方式,在第一方面的第十七种可能的实施方式中,在所述图像预测方法应用于视频编码过程中的情况下,所述方法还包括:将所述合并运动信息单元集 i的标识写入视频码流。
本发明实施例第二方面提供一种图像预测装置,包括:
第一确定单元,用于确定当前图像块中的2个像素样本,确定所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集;其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元;
第二确定单元,用于确定包括2个运动信息单元的合并运动信息单元集i;
其中,所述合并运动信息单元集i中的每个运动信息单元分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,所述运动信息单元包括预测方向为前向的运动矢量和/或预测方向为后向的运动矢量;
预测单元,用于利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测。
结合第二方面,在第二方面的第一种可能的实施方式中,所述第二确定单元具体用于,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
结合第二方面的第一种可能的实施方式,在第二方面的第二种可能的实施方式中,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
结合第二方面或第二方面的第一种至第二种可能的实施方式中的任意一种可能的实施方式,在第二方面的第三种可能的实施方式中,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
其中,所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块;所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块;所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块;所述当前图像块的中心素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
结合第二方面的第三种可能的实施方式,在第二方面的第四种可能的实施方式中,所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当 前图像块的左上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左上像素样本时域相邻的像素样本,所述x1为正整数;
其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
结合第二方面的第三种至第四种可能的实施方式中的任意一种可能的实施方式,在第二方面的第五种可能的实施方式中,所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数;
其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
结合第二方面的第三种至第五种可能的实施方式中的任意一种可能的实施方式,在第二方面的第六种可能的实施方式中,
所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
结合第二方面的第三种至第六种可能的实施方式中的任意一种可能的实施方式,在第二方面的第七种可能的实施方式中,
所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5个像素样本的运动信息单元,其中,所述x5个像素样本中的其中一个像素样本为像素样本a2,
其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述x5为正整数。
结合第二方面或第二方面的第一种至第七种可能的实施方式中的任意一种可能的实施方式,在第二方面的第八种可能的实施方式中,
所述预测单元具体用于,当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,其中,所述第一预测方向为前向或后向;
或者,所述预测单元具体用于,当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
结合第二方面或第二方面的第一种至第八种可能的实施方式中的任意一种可能的实施方式,在第二方面的第九种可能的实施方式中,
所述预测单元具体用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像 素值;
或者,
所述预测单元具体用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
结合第二方面或第二方面的第一种至第九种可能的实施方式中的任意一种可能的实施方式,在第二方面的第十种可能的实施方式中,
所述预测单元具体用于,利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
结合第二方面的第十种可能的实施方式,在第二方面的第十一种可能的实施方式中,所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
结合第二方面或第二方面的第一种至第十一种可能的实施方式中的任意一种可能的实施方式,在第二方面的第十二种可能的实施方式中,
所述仿射运动模型为如下形式的仿射运动模型:
vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0,  vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述w为所述当前图像块的长或宽。
结合第二方面或第二方面的第一种至第十二种可能的实施方式中的任意 一种可能的实施方式,在第二方面的第十三种可能的实施方式中,
所述图像预测装置应用于视频编码装置中或所述图像预测装置应用于视频解码装置中。
结合第二方面的第十三种可能的实施方式,在第二方面的第十四种可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,所述第二确定单元具体用于,基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。
结合第二方面的第十三种可能的实施方式或第二方面的第十四种可能的实施方式,在第二方面的第十五种可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,
所述装置还包括解码单元,用于从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
结合第二方面的第十三种可能的实施方式,在第二方面的第十六种可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述预测单元还用于:利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
结合第二方面的第十三种可能的实施方式或第二方面的第十六种可能的实施方式,在第二方面的第十七种可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述装置还包括编码单元,用于将所述合并运动信息单元集i的标识写入视频码流。
本发明实施例第三方面提供一种图像预测装置,包括:
处理器和存储器;
其中,所述处理器通过调用所述存储器中存储的代码或指令以用于,确定当前图像块中的2个像素样本,确定所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集;其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元;确定包括2个运动信息单元的合并运动信息单元集i;其中,所述合并运动信息单元集i中的每个运动信息单元分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,所述运动信息单元包括预测方向为前向的运动矢量和/或预测方向为后向的运动矢量;利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测。
结合第三方面,在第三方面的第一种可能的实施方式中,在确定包括2个运动信息单元的合并运动信息单元集i的方面,所述处理器用于,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
结合第三方面的第一种可能的实施方式,在第三方面的第二种可能的实施方式中,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选 合并运动信息单元集中的2个运动信息单元的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量的差值的绝对值小于或等于竖直分量阈值,或者,所述N个候选合并运动信息单元集中的其中一个候选合并运动信息单元集中的任意1个运动信息单元和像素样本Z的运动矢量竖直分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
结合第三方面或第三方面的第一种至第二种可能的实施方式中的任意一种可能的实施方式,在第三方面的第三种可能的实施方式中,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
其中,所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块;所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块;所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块;所述当前图像块的中心素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
结合第三方面的第三种可能的实施方式,在第三方面的第四种可能的实施方式中,所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当前图像块的左上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左上像素样本时域相邻的像素样本,所述x1为正整数;
其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的 视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
结合第三方面的第三种至第四种可能的实施方式中的任意一种可能的实施方式,在第三方面的第五种可能的实施方式中,所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数;
其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
结合第三方面的第三种至第五种可能的实施方式中的任意一种可能的实施方式,在第三方面的第六种可能的实施方式中,
所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
结合第三方面的第三种至第六种可能的实施方式中的任意一种可能的实施方式,在第三方面的第七种可能的实施方式中,
所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5个像素样本的运动信息单元,其中,所述x5个像素样本中的其中一个像素样本为像素样本a2,
其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述x5为正整数。
结合第三方面或第三方面的第一种至第七种可能的实施方式中的任意一种可能的实施方式,在第三方面的第八种可能的实施方式中,
在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,其中,所述第一预测方向为前向或后向;
或者,在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
结合第三方面或第三方面的第一种至第八种可能的实施方式中的任意一种可能的实施方式,在第三方面的第九种可能的实施方式中,在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的 各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值;
或者,
在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
结合第三方面或第三方面的第一种至第九种可能的实施方式中的任意一种可能的实施方式,在第三方面的第十种可能的实施方式中,
在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
结合第三方面的第十种可能的实施方式,在第三方面的第十一种可能的实施方式中,
所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
结合第三方面或第三方面的第一种至第十一种可能的实施方式中的任意一种可能的实施方式,在第三方面的第十二种可能的实施方式中,
所述仿射运动模型为如下形式的仿射运动模型:
vx = ((vx1 - vx0)/w)·x - ((vy1 - vy0)/w)·y + vx0,  vy = ((vy1 - vy0)/w)·x + ((vx1 - vx0)/w)·y + vy0
其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy 为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述w为所述当前图像块的长或宽。
结合第三方面或第三方面的第一种至第十二种可能的实施方式中的任意一种可能的实施方式,在第三方面的第十三种可能的实施方式中,
所述图像预测装置应用于视频编码装置中或所述图像预测装置应用于视频解码装置中。
结合第三方面的第十三种可能的实施方式,在第三方面的第十四种可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,在确定包括2个运动信息单元的合并运动信息单元集i的方面,所述处理器用于,基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。
结合第三方面的第十三种可能的实施方式或第三方面的第十四种可能的实施方式,在第三方面的第十五种可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,所述处理器还用于,从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
结合第三方面的第十三种可能的实施方式,在第三方面的第十六种可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述处理器还用于,利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
结合第三方面的第十三种可能的实施方式或第三方面的第十六种可能的实施方式,在第三方面的第十七种可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述处理器还用于,将所述合并运动信息单元集i的标识写入视频码流。
A fourth aspect of the embodiments of the present invention provides an image processing method, including:

obtaining a motion vector 2-tuple of a current image block, where the motion vector 2-tuple includes the respective motion vectors of 2 pixel samples in the video frame to which the current image block belongs; and

computing a motion vector of an arbitrary pixel sample in the current image block by using an affine motion model and the motion vector 2-tuple;

where the affine motion model is of the following form:

  vx = a*x + b*y
  vy = -b*x + a*y

where (x, y) are the coordinates of the arbitrary pixel sample, vx is the horizontal component of the motion vector of the arbitrary pixel sample, and vy is the vertical component of the motion vector of the arbitrary pixel sample; and

where, in the equation vx = ax + by, a is the horizontal coordinate coefficient of the horizontal component of the affine motion model and b is the vertical coordinate coefficient of the horizontal component of the affine motion model; in the equation vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model.
With reference to the fourth aspect, in a first possible implementation manner of the fourth aspect, the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form:

  vx = a*x + b*y + c
  vy = -b*x + a*y + d
结合第四方面或第四方面第一种可能的实现方式,在第四方面第二种可能的实现方式中,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量包括:
利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;
利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
结合第四方面或第四方面第一种或第二种可能的实现方式,在第四方面第三可能的实现方式中,利用所述2个像素样本各自的运动矢量的水平分量之间 的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;
利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
结合第四方面或第四方面第一种或第二种可能的实现方式,在第四方面第四可能的实现方式中,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量包括:
利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;
利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
With reference to the fourth aspect or any one of the first to the third possible implementation manners of the fourth aspect, in a fifth possible implementation manner of the fourth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a right-region pixel sample located to the right of the top-left pixel sample, the affine motion model is specifically:

  vx = (vx1 - vx0)/w * x - (vy1 - vy0)/w * y + vx0
  vy = (vy1 - vy0)/w * x + (vx1 - vx0)/w * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx1, vy1) is the motion vector of the right-region pixel sample, and w is the distance between the 2 pixel samples.
With reference to the fourth aspect or any one of the first to the third possible implementation manners of the fourth aspect, in a sixth possible implementation manner of the fourth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a lower-region pixel sample located below the top-left pixel sample, the affine motion model is specifically:

  vx = (vy2 - vy0)/h * x + (vx2 - vx0)/h * y + vx0
  vy = -(vx2 - vx0)/h * x + (vy2 - vy0)/h * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx2, vy2) is the motion vector of the lower-region pixel sample, and h is the distance between the 2 pixel samples.
With reference to the fourth aspect or any one of the first, the second, and the fourth possible implementation manners of the fourth aspect, in a seventh possible implementation manner of the fourth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a lower-right-region pixel sample located to the lower right of the top-left pixel sample, the affine motion model is specifically:

  vx = ((vx3 - vx0)*w1 + (vy3 - vy0)*h1)/(w1^2 + h1^2) * x + ((vx3 - vx0)*h1 - (vy3 - vy0)*w1)/(w1^2 + h1^2) * y + vx0
  vy = -((vx3 - vx0)*h1 - (vy3 - vy0)*w1)/(w1^2 + h1^2) * x + ((vx3 - vx0)*w1 + (vy3 - vy0)*h1)/(w1^2 + h1^2) * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx3, vy3) is the motion vector of the lower-right-region pixel sample, h1 is the vertical distance between the 2 pixel samples, w1 is the horizontal distance between the 2 pixel samples, and w1^2 + h1^2 is the square of the distance between the 2 pixel samples.
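The three specific model forms above (right-region, lower-region, and lower-right-region sample) are all instances of solving the 4-parameter model vx = a*x + b*y + c, vy = -b*x + a*y + d from two motion vectors. The sketch below is illustrative only (hypothetical helper names, Python chosen for illustration; it is not part of the claimed method): it solves the two linear constraints exactly, and reduces to the simple ratio forms when the second sample lies at (w, 0) or (0, h).

```python
def solve_affine_from_pair(mv0, mv1, x1, y1):
    """Solve a, b, c, d of  vx = a*x + b*y + c,  vy = -b*x + a*y + d,
    given motion vector mv0 at the top-left sample (0, 0) and mv1 at a
    second sample located at (x1, y1)."""
    vx0, vy0 = mv0
    vx1, vy1 = mv1
    c, d = vx0, vy0                  # displacement coefficients come from the (0, 0) sample
    q = x1 * x1 + y1 * y1            # squared distance between the two samples
    # Exact solve of  vx1 - vx0 = a*x1 + b*y1  and  vy1 - vy0 = -b*x1 + a*y1
    a = ((vx1 - vx0) * x1 + (vy1 - vy0) * y1) / q
    b = ((vx1 - vx0) * y1 - (vy1 - vy0) * x1) / q
    return a, b, c, d

def affine_mv(coeffs, x, y):
    """Evaluate the 4-parameter affine model at pixel (x, y)."""
    a, b, c, d = coeffs
    return (a * x + b * y + c, -b * x + a * y + d)
```

With (x1, y1) = (w, 0), (0, h), or (w1, h1), this reproduces the fifth, sixth, and seventh implementation manners respectively; by construction the model returns mv0 at (0, 0) and mv1 at (x1, y1).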
With reference to the fourth aspect or any one of the first to the seventh possible implementation manners of the fourth aspect, in an eighth possible implementation manner of the fourth aspect, after the computing a motion vector of an arbitrary pixel sample in the current image block by using an affine motion model and the motion vector 2-tuple, the method further includes:

performing motion-compensated predictive encoding on the arbitrary pixel sample in the current image block by using the computed motion vector of the arbitrary pixel sample.

With reference to the fourth aspect or any one of the first to the seventh possible implementation manners of the fourth aspect, in a ninth possible implementation manner of the fourth aspect, after determining a predicted pixel value of the pixel of the arbitrary pixel sample in the current image block, the method further includes:

performing motion-compensated decoding on the arbitrary pixel sample by using the computed motion vector of the arbitrary pixel sample in the current image block, to obtain a reconstructed pixel value of the arbitrary pixel sample.
A fifth aspect of the embodiments of the present invention provides an image processing apparatus, including:

an obtaining unit, configured to obtain a motion vector 2-tuple of a current image block, where the motion vector 2-tuple includes the respective motion vectors of 2 pixel samples in the video frame to which the current image block belongs; and

a computing unit, configured to compute a motion vector of an arbitrary pixel sample in the current image block by using an affine motion model and the motion vector 2-tuple obtained by the obtaining unit;

where the affine motion model is of the following form:

  vx = a*x + b*y
  vy = -b*x + a*y

where (x, y) are the coordinates of the arbitrary pixel sample, vx is the horizontal component of the motion vector of the arbitrary pixel sample, and vy is the vertical component of the motion vector of the arbitrary pixel sample; and

where, in the equation vx = ax + by, a is the horizontal coordinate coefficient of the horizontal component of the affine motion model and b is the vertical coordinate coefficient of the horizontal component of the affine motion model; in the equation vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model.

With reference to the fifth aspect, in a first possible implementation manner of the fifth aspect, the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form:

  vx = a*x + b*y + c
  vy = -b*x + a*y + d
With reference to the fifth aspect or the first possible implementation manner of the fifth aspect, in a second possible implementation manner of the fifth aspect, the computing unit is specifically configured to:

obtain values of the coefficients of the affine motion model by using the respective motion vectors of the 2 pixel samples and the positions of the 2 pixel samples; and

obtain the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of its coefficients.

With reference to the fifth aspect or the first or the second possible implementation manner of the fifth aspect, in a third possible implementation manner of the fifth aspect, the computing unit is specifically configured to:

obtain values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples, and the ratio of the difference between the vertical components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples; and

obtain the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of its coefficients.

With reference to the fifth aspect or the first or the second possible implementation manner of the fifth aspect, in a fourth possible implementation manner of the fifth aspect, the computing unit is specifically configured to:

obtain values of the coefficients of the affine motion model by using the ratio of a weighted sum of the components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples or to the square of the distance between the 2 pixel samples; and

obtain the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of its coefficients.
With reference to the fifth aspect or any one of the first to the third possible implementation manners of the fifth aspect, in a fifth possible implementation manner of the fifth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a right-region pixel sample located to the right of the top-left pixel sample, the affine motion model is specifically:

  vx = (vx1 - vx0)/w * x - (vy1 - vy0)/w * y + vx0
  vy = (vy1 - vy0)/w * x + (vx1 - vx0)/w * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx1, vy1) is the motion vector of the right-region pixel sample, and w is the distance between the 2 pixel samples.
With reference to the fifth aspect or any one of the first to the third possible implementation manners of the fifth aspect, in a sixth possible implementation manner of the fifth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a lower-region pixel sample located below the top-left pixel sample, the affine motion model is specifically:

  vx = (vy2 - vy0)/h * x + (vx2 - vx0)/h * y + vx0
  vy = -(vx2 - vx0)/h * x + (vy2 - vy0)/h * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx2, vy2) is the motion vector of the lower-region pixel sample, and h is the distance between the 2 pixel samples.
With reference to the fifth aspect or any one of the first, the second, and the fourth possible implementation manners of the fifth aspect, in a seventh possible implementation manner of the fifth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a lower-right-region pixel sample located to the lower right of the top-left pixel sample, the affine motion model is specifically:

  vx = ((vx3 - vx0)*w1 + (vy3 - vy0)*h1)/(w1^2 + h1^2) * x + ((vx3 - vx0)*h1 - (vy3 - vy0)*w1)/(w1^2 + h1^2) * y + vx0
  vy = -((vx3 - vx0)*h1 - (vy3 - vy0)*w1)/(w1^2 + h1^2) * x + ((vx3 - vx0)*w1 + (vy3 - vy0)*h1)/(w1^2 + h1^2) * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx3, vy3) is the motion vector of the lower-right-region pixel sample, h1 is the vertical distance between the 2 pixel samples, w1 is the horizontal distance between the 2 pixel samples, and w1^2 + h1^2 is the square of the distance between the 2 pixel samples.
With reference to the fifth aspect or any one of the first to the seventh possible implementation manners of the fifth aspect, in an eighth possible implementation manner of the fifth aspect, when the image processing apparatus is applied in a video encoding apparatus, the apparatus further includes an encoding unit, configured to perform motion-compensated predictive encoding on the arbitrary pixel sample in the current image block by using the motion vector of the arbitrary pixel sample computed by the computing unit.

With reference to the fifth aspect or any one of the first to the seventh possible implementation manners of the fifth aspect, in a ninth possible implementation manner of the fifth aspect, when the image processing apparatus is applied in a video decoding apparatus, the apparatus further includes a decoding unit, configured to perform motion-compensated decoding on the arbitrary pixel sample by using the motion vector of the arbitrary pixel sample computed by the computing unit, to obtain a reconstructed pixel value of the arbitrary pixel sample.
A sixth aspect of the embodiments of the present invention provides an image processing apparatus, including:

a processor and a memory;

where, by invoking code or instructions stored in the memory, the processor is configured to obtain a motion vector 2-tuple of a current image block, where the motion vector 2-tuple includes the respective motion vectors of 2 pixel samples in the video frame to which the current image block belongs, and

compute a motion vector of an arbitrary pixel sample in the current image block by using an affine motion model and the motion vector 2-tuple;

where the affine motion model is of the following form:

  vx = a*x + b*y
  vy = -b*x + a*y

where (x, y) are the coordinates of the arbitrary pixel sample, vx is the horizontal component of the motion vector of the arbitrary pixel sample, and vy is the vertical component of the motion vector of the arbitrary pixel sample; and

where, in the equation vx = ax + by, a is the horizontal coordinate coefficient of the horizontal component of the affine motion model and b is the vertical coordinate coefficient of the horizontal component of the affine motion model; in the equation vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model.

With reference to the sixth aspect, in a first possible implementation manner of the sixth aspect, the affine motion model further includes a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form:

  vx = a*x + b*y + c
  vy = -b*x + a*y + d
With reference to the sixth aspect or the first possible implementation manner of the sixth aspect, in a second possible implementation manner of the sixth aspect, as regards computing a motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the motion vector 2-tuple, the processor is configured to obtain values of the coefficients of the affine motion model by using the respective motion vectors of the 2 pixel samples and the positions of the 2 pixel samples, and

obtain the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of its coefficients.

With reference to the sixth aspect or the first or the second possible implementation manner of the sixth aspect, in a third possible implementation manner of the sixth aspect, as regards computing a motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the motion vector 2-tuple, the processor is configured to obtain values of the coefficients of the affine motion model by using the ratio of the difference between the horizontal components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples, and the ratio of the difference between the vertical components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples, and

obtain the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of its coefficients.

With reference to the sixth aspect or the first or the second possible implementation manner of the sixth aspect, in a fourth possible implementation manner of the sixth aspect, as regards computing a motion vector of an arbitrary pixel sample in the current image block by using the affine motion model and the motion vector 2-tuple, the processor is configured to obtain values of the coefficients of the affine motion model by using the ratio of a weighted sum of the components of the respective motion vectors of the 2 pixel samples to the distance between the 2 pixel samples or to the square of the distance between the 2 pixel samples, and

obtain the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the values of its coefficients.
With reference to the sixth aspect or any one of the first to the third possible implementation manners of the sixth aspect, in a fifth possible implementation manner of the sixth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a right-region pixel sample located to the right of the top-left pixel sample, the affine motion model is specifically:

  vx = (vx1 - vx0)/w * x - (vy1 - vy0)/w * y + vx0
  vy = (vy1 - vy0)/w * x + (vx1 - vx0)/w * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx1, vy1) is the motion vector of the right-region pixel sample, and w is the distance between the 2 pixel samples.
With reference to the sixth aspect or any one of the first to the third possible implementation manners of the sixth aspect, in a sixth possible implementation manner of the sixth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a lower-region pixel sample located below the top-left pixel sample, the affine motion model is specifically:

  vx = (vy2 - vy0)/h * x + (vx2 - vx0)/h * y + vx0
  vy = -(vx2 - vx0)/h * x + (vy2 - vy0)/h * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx2, vy2) is the motion vector of the lower-region pixel sample, and h is the distance between the 2 pixel samples.
With reference to the sixth aspect or any one of the first, the second, and the fourth possible implementation manners of the sixth aspect, in a seventh possible implementation manner of the sixth aspect, when the 2 pixel samples include a top-left pixel sample of the current image block and a lower-right-region pixel sample located to the lower right of the top-left pixel sample, the affine motion model is specifically:

  vx = ((vx3 - vx0)*w1 + (vy3 - vy0)*h1)/(w1^2 + h1^2) * x + ((vx3 - vx0)*h1 - (vy3 - vy0)*w1)/(w1^2 + h1^2) * y + vx0
  vy = -((vx3 - vx0)*h1 - (vy3 - vy0)*w1)/(w1^2 + h1^2) * x + ((vx3 - vx0)*w1 + (vy3 - vy0)*h1)/(w1^2 + h1^2) * y + vy0

where (vx0, vy0) is the motion vector of the top-left pixel sample, (vx3, vy3) is the motion vector of the lower-right-region pixel sample, h1 is the vertical distance between the 2 pixel samples, w1 is the horizontal distance between the 2 pixel samples, and w1^2 + h1^2 is the square of the distance between the 2 pixel samples.
With reference to the sixth aspect or any one of the first to the seventh possible implementation manners of the sixth aspect, in an eighth possible implementation manner of the sixth aspect, when the image processing apparatus is applied in a video encoding apparatus, the processor is further configured to: after computing the motion vector of the arbitrary pixel sample in the current image block by using the affine motion model and the motion vector 2-tuple, perform motion-compensated predictive encoding on the arbitrary pixel sample in the current image block by using the computed motion vector.

With reference to the sixth aspect or any one of the first to the seventh possible implementation manners of the sixth aspect, in a ninth possible implementation manner of the sixth aspect, the processor is further configured to: after determining a predicted pixel value of the pixel of the arbitrary pixel sample in the current image block, perform motion-compensated decoding on the arbitrary pixel sample by using the computed motion vector, to obtain a reconstructed pixel value of the arbitrary pixel sample.
A seventh aspect of the embodiments of the present invention provides an image processing method, including:

obtaining coefficients of an affine motion model, and computing a motion vector of an arbitrary pixel sample in the current image block by using the coefficients of the affine motion model and the affine motion model; and

determining a predicted pixel value of the pixel of the arbitrary pixel sample by using the computed motion vector of the arbitrary pixel sample;

where the affine motion model is of the following form:

  vx = a*x + b*y
  vy = -b*x + a*y

where (x, y) are the coordinates of the arbitrary pixel sample, vx is the horizontal component of the motion vector of the arbitrary pixel sample, and vy is the vertical component of the motion vector of the arbitrary pixel sample;

where, in the equation vx = ax + by, a is the horizontal coordinate coefficient of the horizontal component of the affine motion model and b is the vertical coordinate coefficient of the horizontal component of the affine motion model; in the equation vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model; the coefficients of the affine motion model include a and b; and

the coefficients of the affine motion model further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form:

  vx = a*x + b*y + c
  vy = -b*x + a*y + d
An eighth aspect of the embodiments of the present invention provides an image processing apparatus, including:

an obtaining unit, configured to obtain coefficients of an affine motion model;

a computing unit, configured to compute a motion vector of an arbitrary pixel sample in the current image block by using the coefficients of the affine motion model obtained by the obtaining unit and the affine motion model; and

a prediction unit, configured to determine a predicted pixel value of the pixel of the arbitrary pixel sample by using the motion vector of the arbitrary pixel sample computed by the computing unit;

where the affine motion model is of the following form:

  vx = a*x + b*y
  vy = -b*x + a*y

where (x, y) are the coordinates of the arbitrary pixel sample, vx is the horizontal component of the motion vector of the arbitrary pixel sample, and vy is the vertical component of the motion vector of the arbitrary pixel sample;

where, in the equation vx = ax + by, a is the horizontal coordinate coefficient of the horizontal component of the affine motion model and b is the vertical coordinate coefficient of the horizontal component of the affine motion model; in the equation vy = -bx + ay, a is the vertical coordinate coefficient of the vertical component of the affine motion model and -b is the horizontal coordinate coefficient of the vertical component of the affine motion model; the coefficients of the affine motion model include a and b; and

the coefficients of the affine motion model further include a horizontal displacement coefficient c of the horizontal component of the affine motion model and a vertical displacement coefficient d of the vertical component of the affine motion model, so that the affine motion model is of the following form:

  vx = a*x + b*y + c
  vy = -b*x + a*y + d
It can be seen that, in the technical solutions provided by some embodiments of the present invention, pixel value prediction is performed on the current image block by using an affine motion model and a merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of 2 pixel samples. Because the selection range of the merged motion information unit set i is relatively small, the mechanism used in conventional technology, in which one kind of motion information of multiple pixel samples is screened out from all possible candidate motion information unit sets of the multiple pixel samples only through a large amount of computation, is abandoned. This helps improve coding efficiency and reduce the computational complexity of image prediction based on an affine motion model, which in turn makes it possible to introduce the affine motion model into video coding standards. Because an affine motion model is introduced, the motion of objects can be described more accurately, which helps improve prediction accuracy. In addition, because the number of referenced pixel samples may be 2, this further reduces the computational complexity of affine-model-based image prediction after the affine motion model is introduced, and also helps reduce the amount of affine parameter information or the number of motion vector residuals transmitted by the encoder.
BRIEF DESCRIPTION OF DRAWINGS

To describe the technical solutions of the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments and the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.

FIG. 1-a and FIG. 1-b are schematic diagrams of several image block partitioning manners according to an embodiment of the present invention;

FIG. 1-c is a schematic flowchart of an image prediction method according to an embodiment of the present invention;

FIG. 1-d is a schematic diagram of an image block according to an embodiment of the present invention;

FIG. 2-a is a schematic flowchart of another image prediction method according to an embodiment of the present invention;

FIG. 2-b to FIG. 2-d are schematic diagrams of several manners of determining a candidate motion information unit set of a pixel sample according to an embodiment of the present invention;

FIG. 2-e is a schematic diagram of vertex coordinates of an image block x according to an embodiment of the present invention;

FIG. 2-f and FIG. 2-g are schematic diagrams of affine motion of pixels according to an embodiment of the present invention;

FIG. 2-h and FIG. 2-i are schematic diagrams of rotational motion of pixels according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of another image prediction method according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of an image prediction apparatus according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of another image prediction apparatus according to an embodiment of the present invention;

FIG. 6 is a schematic flowchart of an image processing method according to an embodiment of the present invention;

FIG. 7 is a schematic flowchart of another image processing method according to an embodiment of the present invention;

FIG. 8 is a schematic flowchart of another image processing method according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of another image processing apparatus according to an embodiment of the present invention;

FIG. 11 is a schematic flowchart of another image processing method according to an embodiment of the present invention;

FIG. 12 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;

FIG. 13 is a schematic diagram of another image processing apparatus according to an embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS

Embodiments of the present invention provide an image prediction method and related devices, with a view to reducing the computational complexity of image prediction based on an affine motion model.

To make the objectives, features, and advantages of the present invention clearer and easier to understand, the following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Apparently, the embodiments described below are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

The terms "first", "second", "third", "fourth", and so on in the specification, the claims, and the accompanying drawings of the present invention are used to distinguish different objects rather than to describe a specific order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Some concepts that may be involved in the embodiments of the present invention are introduced first below.

In most coding frameworks, a video sequence consists of a series of pictures; a picture is further divided into slices, and a slice is further divided into blocks. Video coding is performed block by block, and encoding may proceed row by row from the top-left position of a picture, from left to right and from top to bottom. In some new video coding standards, the concept of a block is further extended. The H.264 standard defines macroblocks (MBs), and an MB may be further divided into multiple prediction blocks (partitions) that can be used for predictive coding. The HEVC standard adopts basic concepts such as coding unit (CU), prediction unit (PU), and transform unit (TU); multiple kinds of units are distinguished by function and described with a new tree-based structure. For example, a CU may be split into smaller CUs according to a quadtree, and the smaller CUs may be further split, thereby forming a quadtree structure; PUs and TUs have similar tree structures. CUs, PUs, and TUs all essentially belong to the concept of a block. A CU, similar to a macroblock or a coding block, is the basic unit for partitioning and encoding a coded picture. A PU may correspond to a prediction block and is the basic unit of predictive coding; a CU is further divided into multiple PUs according to a partitioning mode. A TU may correspond to a transform block and is the basic unit for transforming a prediction residual. In the high efficiency video coding (HEVC) standard, they may be collectively referred to as coding tree blocks (CTBs), and so on.

In the HEVC standard, the size of a coding unit may include four levels: 64×64, 32×32, 16×16, and 8×8, and coding units at each level may be divided into prediction units of different sizes according to intra prediction and inter prediction. For example, as shown in FIG. 1-a and FIG. 1-b, FIG. 1-a illustrates a prediction unit partitioning manner corresponding to intra prediction, and FIG. 1-b illustrates several prediction unit partitioning manners corresponding to inter prediction.

In the development and evolution of video coding technology, video coding experts have devised various methods to exploit the spatiotemporal correlation between adjacent coded/decoded blocks to improve coding efficiency. In the H.264/advanced video coding (AVC) standard, skip mode and direct mode became effective tools for improving coding efficiency; at low bit rates, blocks coded in these two modes can account for more than half of an entire coded sequence. In skip mode, only a skip-mode flag needs to be transmitted in the bitstream; the motion vector of the current image block can be derived from neighboring motion vectors, and the value of the reference block is directly copied, according to that motion vector, as the reconstructed value of the current image block. In direct mode, the encoder can derive the motion vector of the current image block from neighboring motion vectors, directly copy the value of the reference block according to that motion vector as the predicted value of the current image block, and use that predicted value for predictive coding of the current image block at the encoder. In the latest high efficiency video coding (HEVC) standard, video coding performance is further improved by introducing some new coding tools, among which merge mode and advanced motion vector prediction (AMVP) mode are two important inter prediction tools. Merge mode constructs a candidate motion information set from the motion information (which may include motion vectors (MVs), prediction directions, reference frame indexes, and so on) of coded blocks neighboring the current coding block; through comparison, the candidate motion information with the highest coding efficiency can be selected as the motion information of the current coding block, the predicted value of the current coding block is found in the reference frame and used for predictive coding, and an index value indicating which neighboring coded block the selected motion information comes from may be written into the bitstream. In AMVP mode, the motion vector of a neighboring coded block is used as the predictor of the motion vector of the current coding block; the motion vector with the highest coding efficiency may be selected to predict the motion vector of the current coding block, and an index value indicating which neighboring motion vector is selected may be written into the video bitstream.
The technical solutions of the embodiments of the present invention are further discussed below.

The image prediction method provided by the embodiments of the present invention is described first. The image prediction method provided by the embodiments of the present invention is performed by a video encoding apparatus or a video decoding apparatus, where the video encoding apparatus or video decoding apparatus may be any apparatus that needs to output or store video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, or a video server.

In an embodiment of the image prediction method of the present invention, an image prediction method includes: determining 2 pixel samples in a current image block, and determining a candidate motion information unit set corresponding to each of the 2 pixel samples, where the candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit; determining a merged motion information unit set i including 2 motion information units, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and a motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward; and performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i.
Referring to FIG. 1-c, FIG. 1-c is a schematic flowchart of an image prediction method according to an embodiment of the present invention. As illustrated in FIG. 1-c, an image prediction method provided by an embodiment of the present invention may include the following steps.

S101. Determine 2 pixel samples in a current image block, and determine a candidate motion information unit set corresponding to each of the 2 pixel samples.

The candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit.

A pixel sample mentioned in the embodiments of the present invention may be a pixel or a pixel block including at least two pixels.

A motion information unit mentioned in the embodiments of the present invention may include a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward. That is, one motion information unit may include one motion vector, or may include two motion vectors with different prediction directions.

If the prediction direction corresponding to a motion information unit is forward, the motion information unit includes a forward motion vector but no backward motion vector. If the prediction direction corresponding to a motion information unit is backward, the motion information unit includes a backward motion vector but no forward motion vector. If the prediction direction corresponding to a motion information unit is unidirectional, the motion information unit includes a forward motion vector but no backward motion vector, or includes a backward motion vector but no forward motion vector. If the prediction direction corresponding to a motion information unit is bidirectional, the motion information unit includes both a forward motion vector and a backward motion vector.

Optionally, in some possible implementation manners of the present invention, the 2 pixel samples include 2 of the top-left pixel sample, the top-right pixel sample, the bottom-left pixel sample, and the center pixel sample a1 of the current image block. The top-left pixel sample of the current image block may be the top-left vertex of the current image block or a pixel block in the current image block that contains the top-left vertex of the current image block; the bottom-left pixel sample of the current image block is the bottom-left vertex of the current image block or a pixel block in the current image block that contains the bottom-left vertex; the top-right pixel sample of the current image block is the top-right vertex of the current image block or a pixel block in the current image block that contains the top-right vertex; and the center pixel sample a1 of the current image block is the center pixel of the current image block or a pixel block in the current image block that contains the center pixel of the current image block.

If a pixel sample is a pixel block, the size of the pixel block is, for example, 2*2, 1*2, 4*2, 4*4, or another size. An image block may include multiple pixel blocks.

It should be noted that, for an image block of size w*w, when w is odd (for example, w equals 3, 5, 7, or 11), the center pixel of the image block is unique; when w is even (for example, w equals 4, 6, 8, or 16), there may be multiple center pixels in the image block. The center pixel sample of the image block may be any one center pixel or a specified center pixel of the image block, or may be a pixel block in the image block that contains any one center pixel, or may be a pixel block in the image block that contains a specified center pixel. For example, for the 4*4 image block illustrated in FIG. 1-d, the center pixels of the image block are the 4 pixels A1, A2, A3, and A4, and the specified center pixel may be pixel A1 (the top-left center pixel), pixel A2 (the bottom-left center pixel), pixel A3 (the top-right center pixel), or pixel A4 (the bottom-right center pixel); other cases follow by analogy.
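The center-pixel rule above can be illustrated with a small sketch (a hypothetical helper, Python used for illustration only): for a w*w block whose top-left pixel has coordinates (0, 0), an odd w yields a unique center pixel, while an even w yields the four candidates A1 to A4.

```python
def center_pixels(w):
    """Return the center pixel coordinate(s) of a w*w block whose
    top-left pixel is (0, 0): one pixel for odd w, four for even w."""
    if w % 2 == 1:
        m = w // 2
        return [(m, m)]
    lo, hi = w // 2 - 1, w // 2
    # (x, y) = (column, row): A1 top-left center, A2 bottom-left center,
    # A3 top-right center, A4 bottom-right center
    return [(lo, lo), (lo, hi), (hi, lo), (hi, hi)]
```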
S102. Determine a merged motion information unit set i including 2 motion information units.

Each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, and a motion information unit includes a motion vector whose prediction direction is forward and/or a motion vector whose prediction direction is backward.

For example, assume that the 2 pixel samples include pixel sample 001 and pixel sample 002, that the candidate motion information unit set corresponding to pixel sample 001 is candidate motion information unit set 011, and that the candidate motion information unit set corresponding to pixel sample 002 is candidate motion information unit set 022. The merged motion information unit set i includes motion information unit C01 and motion information unit C02, where motion information unit C01 may be selected from candidate motion information unit set 011 and motion information unit C02 may be selected from candidate motion information unit set 022, and so on.

It can be understood that, assuming the merged motion information unit set i includes motion information unit C01 and motion information unit C02, either of which may include a forward motion vector and/or a backward motion vector, the merged motion information unit set i may include 2 motion vectors (whose prediction directions may both be forward or both backward, or which may include 1 forward motion vector and 1 backward motion vector), may include 4 motion vectors (including 2 forward motion vectors and 2 backward motion vectors), or may include 3 motion vectors (including 1 forward motion vector and 2 backward motion vectors, or 2 forward motion vectors and 1 backward motion vector).
S103. Perform pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i.

The current image block may be a current coding block or a current decoding block.

It can be seen that, in the technical solution of this embodiment, pixel value prediction is performed on the current image block by using an affine motion model and a merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i is relatively small, the conventional mechanism, in which one kind of motion information of multiple pixel samples is screened out from all possible candidate motion information unit sets of the multiple pixel samples only through a large amount of computation, is abandoned. This helps improve coding efficiency, reduces the computational complexity of affine-model-based image prediction, and thus makes it possible to introduce the affine motion model into video coding standards. Because an affine motion model is introduced, object motion can be described more accurately, which helps improve prediction accuracy. Because the number of referenced pixel samples may be 2, this further reduces the computational complexity of affine-model-based image prediction after the affine motion model is introduced, and also helps reduce the amount of affine parameter information or the number of motion vector residuals transmitted by the encoder.

The image prediction method provided by this embodiment may be applied in a video encoding process or in a video decoding process.
In practical applications, the merged motion information unit set i including 2 motion information units may be determined in a variety of manners.

Optionally, in some possible implementation manners of the present invention, the determining a merged motion information unit set i including 2 motion information units includes: determining, from among N candidate merged motion information unit sets, the merged motion information unit set i including 2 motion information units, where each motion information unit included in each of the N candidate merged motion information unit sets is selected from at least part of the constraint-compliant motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples, N is a positive integer, the N candidate merged motion information unit sets are different from one another, and each of the N candidate merged motion information unit sets includes 2 motion information units.

That two candidate merged motion information unit sets are different may mean that the motion information units they include are not completely identical.

That two motion information units are different may mean that the motion vectors they include are different, or that the prediction directions corresponding to the motion vectors they include are different, or that the reference frame indexes corresponding to the motion vectors they include are different. That two motion information units are identical may mean that the motion vectors they include are identical, the prediction directions corresponding to the motion vectors they include are identical, and the reference frame indexes corresponding to the motion vectors they include are identical.

Optionally, in some possible implementation manners of the present invention, when the image prediction method is applied in a video decoding process, the determining, from among N candidate merged motion information unit sets, the merged motion information unit set i including 2 motion information units may include: determining, based on an identifier of the merged motion information unit set i obtained from a video bitstream, the merged motion information unit set i including 2 motion information units from among the N candidate merged motion information unit sets.

Optionally, in some possible implementation manners of the present invention, when the image prediction method is applied in a video encoding process, the method may further include: writing the identifier of the merged motion information unit set i into a video bitstream. The identifier of the merged motion information unit set i may be any information capable of identifying the merged motion information unit set i; for example, the identifier of the merged motion information unit set i may be its index in a merged motion information unit set list.

Optionally, in some possible implementation manners of the present invention, when the image prediction method is applied in a video encoding process, the method further includes: obtaining motion vector predictors of the 2 pixel samples by using motion vectors of pixel samples spatially or temporally adjacent to the 2 pixel samples, obtaining motion vector residuals of the 2 pixel samples according to the motion vector predictors, and writing the motion vector residuals of the 2 pixel samples into a video bitstream.

Optionally, in some possible implementation manners of the present invention, when the image prediction method is applied in a video decoding process, the method further includes: decoding motion vector residuals of the 2 pixel samples from the video bitstream, obtaining motion vector predictors of the 2 pixel samples by using motion vectors of pixel samples spatially or temporally adjacent to the 2 pixel samples, and obtaining the motion vectors of the 2 pixel samples based on the motion vector predictors and the motion vector residuals of the 2 pixel samples.
Optionally, in some possible implementation manners of the present invention, the determining, from among N candidate merged motion information unit sets, the merged motion information unit set i including 2 motion information units may include: determining, from among the N candidate merged motion information unit sets and based on distortion or rate-distortion cost, the merged motion information unit set i including 2 motion vectors.

Optionally, the rate-distortion cost corresponding to the merged motion information unit set i is less than or equal to the rate-distortion cost corresponding to any one of the N candidate merged motion information unit sets other than the merged motion information unit set i.

Optionally, the distortion corresponding to the merged motion information unit set i is less than or equal to the distortion corresponding to any one of the N candidate merged motion information unit sets other than the merged motion information unit set i.

The rate-distortion cost corresponding to a particular candidate merged motion information unit set among the N candidate merged motion information unit sets (for example, the merged motion information unit set i) may be, for example, the rate-distortion cost corresponding to the predicted pixel values of an image block (for example, the current image block) obtained by performing pixel value prediction on the image block by using that candidate merged motion information unit set.

The distortion corresponding to a particular candidate merged motion information unit set among the N candidate merged motion information unit sets (for example, the merged motion information unit set i) may be, for example, the distortion between the original pixel values of an image block (for example, the current image block) and the predicted pixel values of the image block obtained by performing pixel value prediction on the image block by using that candidate merged motion information unit set (that is, the distortion between the original and predicted pixel values of the image block).

In some possible implementation manners of the present invention, the distortion between the original pixel values of an image block (for example, the current image block) and the predicted pixel values of the image block obtained by performing pixel value prediction on the image block by using the particular candidate merged motion information unit set (for example, the merged motion information unit set i) may specifically be, for example, the sum of squared differences (SSD), the sum of absolute differences (SAD), the sum of differences, or another distortion measure between the original pixel values and the predicted pixel values of the image block.
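As a minimal illustration of the two distortion measures named above (an illustrative sketch, not the codec's normative computation; pixel values are given as flat lists):

```python
def ssd(orig, pred):
    """Sum of squared differences (SSD) between original and predicted pixels."""
    return sum((o - p) ** 2 for o, p in zip(orig, pred))

def sad(orig, pred):
    """Sum of absolute differences (SAD) between original and predicted pixels."""
    return sum(abs(o - p) for o, p in zip(orig, pred))
```

In rate-distortion selection, such a distortion D is typically combined with the bit rate R into a cost of the form J = D + lambda*R, and the candidate set with the smallest cost is chosen.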
N is a positive integer; for example, N may equal 1, 2, 3, 4, 5, 6, 8, or another value.

Optionally, in some possible implementation manners of the present invention, the motion information units in any one of the N candidate merged motion information unit sets may be different from one another.
Optionally, in some possible implementation manners of the present invention, the N candidate merged motion information unit sets satisfy at least one of a first condition, a second condition, a third condition, a fourth condition, and a fifth condition.

The first condition includes: the motion mode of the current image block indicated by the motion information units in any one of the N candidate merged motion information unit sets is non-translational motion. For example, if all motion vectors corresponding to a first prediction direction in a candidate merged motion information unit set are equal, the motion mode of the current image block indicated by the motion information units in that candidate merged motion information unit set may be considered translational motion; otherwise, it may be considered non-translational motion, where the first prediction direction is forward or backward. As another example, if all forward motion vectors in a candidate merged motion information unit set are equal and all backward motion vectors in the set are equal, the motion mode of the current image block indicated by the motion information units in that set may be considered translational motion; otherwise, it may be considered non-translational motion.

The second condition includes: the prediction directions corresponding to the 2 motion information units in any one of the N candidate merged motion information unit sets are identical.

For example, when both motion information units include a forward motion vector and a backward motion vector, their prediction directions are identical. When one of the two motion information units includes a forward motion vector and a backward motion vector, while the other includes a forward motion vector but no backward motion vector, or includes a backward motion vector but no forward motion vector, their prediction directions may be considered different. When one of the two motion information units includes a forward motion vector but no backward motion vector, while the other includes a backward motion vector but no forward motion vector, their prediction directions may be considered different. When both motion information units include a forward motion vector and neither includes a backward motion vector, their prediction directions are identical. When both motion information units include a backward motion vector and neither includes a forward motion vector, their prediction directions are identical.

The third condition includes: the reference frame indexes corresponding to the 2 motion information units in any one of the N candidate merged motion information unit sets are identical.

For example, when both motion information units include a forward motion vector and a backward motion vector, the reference frame indexes corresponding to their forward motion vectors are identical, and the reference frame indexes corresponding to their backward motion vectors are identical, the reference frame indexes corresponding to the two motion information units may be considered identical. When one of the two motion information units includes a forward motion vector and a backward motion vector, while the other includes a forward motion vector but no backward motion vector or includes a backward motion vector but no forward motion vector, their prediction directions are different and the reference frame indexes corresponding to the two motion information units may be considered different. When one includes a forward motion vector but no backward motion vector, while the other includes a backward motion vector but no forward motion vector, the reference frame indexes corresponding to the two motion information units may be considered different. When both include a forward motion vector but no backward motion vector, and the reference frame indexes corresponding to their forward motion vectors are different, the reference frame indexes corresponding to the two motion information units may be considered different. When both include a backward motion vector but no forward motion vector, and the reference frame indexes corresponding to their backward motion vectors are different, the reference frame indexes corresponding to the two motion information units may be considered different.

The fourth condition includes: the absolute value of the difference between the motion vector horizontal components of the 2 motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a horizontal component threshold; or the absolute value of the difference between the motion vector horizontal components of 1 motion information unit in any one of the N candidate merged motion information unit sets and of a pixel sample Z is less than or equal to the horizontal component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples. The horizontal component threshold may, for example, equal 1/3, 1/2, 2/3, or 3/4 of the width of the current image block, or another value.

The fifth condition includes: the absolute value of the difference between the motion vector vertical components of the 2 motion information units in any one of the N candidate merged motion information unit sets is less than or equal to a vertical component threshold; or the absolute value of the difference between the motion vector vertical components of any 1 motion information unit in one of the N candidate merged motion information unit sets and of the pixel sample Z is less than or equal to the vertical component threshold, where the pixel sample Z of the current image block is different from either of the 2 pixel samples. The vertical component threshold may, for example, equal 1/3, 1/2, 2/3, or 3/4 of the height of the current image block, or another value.

Assuming one of the two pixel samples is the top-left pixel sample of the current image block, the pixel sample Z may be the bottom-left pixel sample, the center pixel sample, or another pixel sample of the current image block; other cases follow by analogy.
Optionally, in some possible implementation manners of the present invention, the candidate motion information unit set corresponding to the top-left pixel sample of the current image block includes motion information units of x1 pixel samples, where the x1 pixel samples include at least one pixel sample spatially adjacent to the top-left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the top-left pixel sample of the current image block, and x1 is a positive integer. For example, the x1 pixel samples include only at least one pixel sample spatially adjacent to and/or at least one pixel sample temporally adjacent to the top-left pixel sample of the current image block.

For example, x1 may equal 1, 2, 3, 4, 5, 6, or another value.

For example, the x1 pixel samples include at least one of: a pixel sample, in a video frame temporally adjacent to the video frame to which the current image block belongs, at the same position as the top-left pixel sample of the current image block; a spatially adjacent pixel sample to the left of the current image block; a spatially adjacent pixel sample to the top left of the current image block; and a spatially adjacent pixel sample above the current image block.

Optionally, in some possible implementation manners of the present invention, the candidate motion information unit set corresponding to the top-right pixel sample of the current image block includes motion information units of x2 pixel samples, where the x2 pixel samples include at least one pixel sample spatially adjacent to the top-right pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the top-right pixel sample of the current image block, and x2 is a positive integer.

For example, x2 may equal 1, 2, 3, 4, 5, 6, or another value.

For example, the x2 pixel samples include at least one of: a pixel sample, in a video frame temporally adjacent to the video frame to which the current image block belongs, at the same position as the top-right pixel sample of the current image block; a spatially adjacent pixel sample to the right of the current image block; a spatially adjacent pixel sample to the top right of the current image block; and a spatially adjacent pixel sample above the current image block.

Optionally, in some possible implementation manners of the present invention, the candidate motion information unit set corresponding to the bottom-left pixel sample of the current image block includes motion information units of x3 pixel samples, where the x3 pixel samples include at least one pixel sample spatially adjacent to the bottom-left pixel sample of the current image block and/or at least one pixel sample temporally adjacent to the bottom-left pixel sample of the current image block, and x3 is a positive integer. For example, the x3 pixel samples include only at least one pixel sample spatially adjacent to and/or at least one pixel sample temporally adjacent to the bottom-left pixel sample of the current image block.

For example, x3 may equal 1, 2, 3, 4, 5, 6, or another value.

For example, the x3 pixel samples include at least one of: a pixel sample, in a video frame temporally adjacent to the video frame to which the current image block belongs, at the same position as the bottom-left pixel sample of the current image block; a spatially adjacent pixel sample to the left of the current image block; a spatially adjacent pixel sample to the bottom left of the current image block; and a spatially adjacent pixel sample below the current image block.

Optionally, in some possible implementation manners of the present invention, the candidate motion information unit set corresponding to the center pixel sample a1 of the current image block includes motion information units of x5 pixel samples, where one of the x5 pixel samples is pixel sample a2. For example, the x5 pixel samples include only pixel sample a2. The position of the center pixel sample a1 in the video frame to which the current image block belongs is the same as the position of pixel sample a2 in a video frame adjacent to the video frame to which the current image block belongs, and x5 is a positive integer.
Optionally, in some possible implementation manners of the present invention, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i may include: when a reference frame index corresponding to a motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the set is scaled to the reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the scaled merged motion information unit set i, where the first prediction direction is forward or backward;

or, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i may include: when a reference frame index corresponding to a forward motion vector in the merged motion information unit set i is different from the forward reference frame index of the current image block and a reference frame index corresponding to a backward motion vector in the merged motion information unit set i is different from the backward reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the forward motion vector in the set is scaled to the forward reference frame of the current image block and the backward motion vector in the set is scaled to the backward reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the scaled merged motion information unit set i.

Optionally, in some possible implementation manners of the present invention, performing pixel value prediction on the current image block by using a non-translational motion model and the scaled merged motion information unit set i may include, for example: performing motion estimation processing on the motion vectors in the scaled merged motion information unit set i to obtain a motion-estimated merged motion information unit set i, and performing pixel value prediction on the current image block by using the non-translational motion model and the motion-estimated merged motion information unit set i.

Optionally, in some possible implementation manners of the present invention, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i includes: computing a motion vector of each pixel in the current image block by using the affine motion model and the merged motion information unit set i, and determining a predicted pixel value of each pixel in the current image block by using the computed motion vectors; or computing a motion vector of each pixel block in the current image block by using the affine motion model and the merged motion information unit set i, and determining a predicted pixel value of each pixel of each pixel block in the current image block by using the computed motion vectors of the pixel blocks.

Tests have found that if the motion vector of each pixel block in the current image block is computed first by using the affine motion model and the merged motion information unit set i, and the predicted pixel value of each pixel of each pixel block is then determined by using the computed motion vectors of the pixel blocks, computational complexity is reduced considerably, because the granularity of motion vector computation is a pixel block of the current image block rather than a single pixel.
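The granularity trade-off described above can be sketched as follows (illustrative Python only; the two-sample model with samples at (0, 0) and (w, 0) is assumed). With block = 4, a w × h block needs (w/4)·(h/4) model evaluations instead of w·h:

```python
def mv_field(w, h, mv0, mv1, block=4):
    """Compute one motion vector per block-by-block pixel block using the
    two-sample affine model, evaluated at each block's top-left pixel.
    block=1 degenerates to one motion vector per pixel."""
    vx0, vy0 = mv0
    vx1, vy1 = mv1
    a = (vx1 - vx0) / w      # horizontal-coordinate coefficient
    b = -(vy1 - vy0) / w     # vertical-coordinate coefficient (opposite sign)
    field = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            field[(x, y)] = (a * x + b * y + vx0, -b * x + a * y + vy0)
    return field
```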
Optionally, in some possible implementation manners of the present invention, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i may include: performing motion estimation processing on the motion vectors in the merged motion information unit set i to obtain a motion-estimated merged motion information unit set i, and performing pixel value prediction on the current image block by using the affine motion model and the motion-estimated merged motion information unit set i.

Optionally, in some possible implementation manners of the present invention, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i includes: obtaining a motion vector of an arbitrary pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of the two motion information units in the merged motion information unit set i to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the two motion information units in the merged motion information unit set i to the length or width of the current image block.

Or, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i may include: obtaining a motion vector of an arbitrary pixel sample in the current image block by using the ratio of the difference between the motion vector horizontal components of the 2 pixel samples to the length or width of the current image block, and the ratio of the difference between the motion vector vertical components of the 2 pixel samples to the length or width of the current image block, where the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the two motion information units in the merged motion information unit set i (for example, the motion vectors of the 2 pixel samples are the motion vectors of the two motion information units in the merged motion information unit set i, or the motion vectors of the 2 pixel samples are obtained based on the motion vectors of the two motion information units in the merged motion information unit set i and prediction residuals).

Optionally, in some possible implementation manners of the present invention, the horizontal coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is equal to the vertical coordinate coefficient of the motion vector vertical component, and the vertical coordinate coefficient of the motion vector horizontal component of the 2 pixel samples is opposite in sign to the horizontal coordinate coefficient of the motion vector vertical component.
Optionally, in some possible implementation manners of the present invention, the affine motion model may, for example, be an affine motion model of the following form:

  vx = (vx1 - vx0)/w * x - (vy1 - vy0)/w * y + vx0
  vy = (vy1 - vy0)/w * x + (vx1 - vx0)/w * y + vy0

where the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1) respectively, vx is the motion vector horizontal component of a pixel sample with coordinates (x, y) in the current image block, vy is the motion vector vertical component of that pixel sample, and w is the length or width of the current image block.

Accordingly, for another pixel sample of the current image block with coordinates (x2, y2),

  vx2 = (vx1 - vx0)/w * x2 - (vy1 - vy0)/w * y2 + vx0
  vy2 = (vy1 - vy0)/w * x2 + (vx1 - vx0)/w * y2 + vy0

where (vx2, vy2) is the motion vector of that other pixel sample, which is different from the above 2 pixel samples. For example, if the above 2 pixel samples are the top-left and top-right pixel samples of the current image block, (vx2, vy2) may be the motion vector of the bottom-left pixel sample or the center pixel sample of the current image block. As another example, if the above 2 pixel samples are the top-left and bottom-left pixel samples of the current image block, (vx2, vy2) may be the motion vector of the top-right pixel sample or the center pixel sample of the current image block.
When a pixel sample is a pixel block including multiple pixels, the coordinates of the pixel sample may be the coordinates of any one pixel in the pixel sample, or the coordinates of a specified pixel in the pixel sample (for example, the coordinates of the top-left pixel, the bottom-left pixel, the top-right pixel, or the center pixel of the pixel sample).

It can be understood that, for each image block in the current video frame, pixel value prediction may be performed in a manner similar to the pixel value prediction manner corresponding to the current image block; certainly, some image blocks in the current video frame may also undergo pixel value prediction in a manner different from that corresponding to the current image block.

To help better understand and implement the above solutions of the embodiments of the present invention, further description is provided below with reference to more specific application scenarios.
Referring to FIG. 2-a, FIG. 2-a is a schematic flowchart of another image prediction method according to another embodiment of the present invention. This embodiment mainly takes implementation of the image prediction method in a video encoding apparatus as an example. As illustrated in FIG. 2-a, another image prediction method provided by another embodiment of the present invention may include the following steps.

S201. The video encoding apparatus determines 2 pixel samples in a current image block.

In this embodiment, the 2 pixel samples mainly include 2 of the top-left pixel sample, the top-right pixel sample, the bottom-left pixel sample, and the center pixel sample a1 of the current image block. For example, the 2 pixel samples include the top-left pixel sample and the top-right pixel sample of the current image block; scenarios in which the 2 pixel samples are other pixel samples of the current image block follow by analogy.

The top-left pixel sample of the current image block may be the top-left vertex of the current image block or a pixel block in the current image block that contains the top-left vertex; the bottom-left pixel sample is the bottom-left vertex or a pixel block in the current image block that contains the bottom-left vertex; the top-right pixel sample is the top-right vertex or a pixel block in the current image block that contains the top-right vertex; and the center pixel sample a1 is the center pixel of the current image block or a pixel block in the current image block that contains the center pixel.

If a pixel sample is a pixel block, the size of the pixel block is, for example, 2*2, 1*2, 4*2, 4*4, or another size.
S202. The video encoding apparatus determines a candidate motion information unit set corresponding to each of the 2 pixel samples.

The candidate motion information unit set corresponding to each pixel sample includes at least one candidate motion information unit.

A pixel sample mentioned in the embodiments of the present invention may be a pixel or a pixel block including at least two pixels.

For example, as shown in FIG. 2-b and FIG. 2-c, the candidate motion information unit set S1 corresponding to the top-left pixel sample of the current image block may include motion information units of x1 pixel samples, where the x1 pixel samples include at least one of: a pixel sample Col-LT that is at the same position, in a video frame temporally adjacent to the video frame to which the current image block belongs, as the top-left pixel sample LT of the current image block; a spatially adjacent image block C to the left of the current image block; a spatially adjacent image block A to the top left of the current image block; and a spatially adjacent image block B above the current image block. For example, the motion information units of the spatially adjacent image blocks C, A, and B may be obtained first and added to the candidate motion information unit set S1 corresponding to the top-left pixel sample of the current image block. If some or all of the motion information units of C, A, and B are identical, de-duplication processing is further performed on the set S1 (in this case the number of motion information units in the de-duplicated set S1 may be 1 or 2). If the motion information unit of the pixel sample Col-LT is identical to one of the motion information units in the de-duplicated set S1, a zero motion information unit may be added to the set S1 until the number of motion information units in S1 equals 3. If the motion information unit of the pixel sample Col-LT is different from every motion information unit in the de-duplicated set S1, the motion information unit of Col-LT is added to the de-duplicated set S1; if the number of motion information units in S1 is still less than 3 at this point, a zero motion information unit may be added until the number of motion information units in S1 equals 3.

If the video frame to which the current image block belongs is a forward-prediction frame, a zero motion information unit added to the set S1 includes a zero motion vector whose prediction direction is forward but may not include a zero motion vector whose prediction direction is backward. If the video frame to which the current image block belongs is a backward-prediction frame, a zero motion information unit added to the set S1 includes a zero motion vector whose prediction direction is backward but may not include a zero motion vector whose prediction direction is forward. If the video frame to which the current image block belongs is a bidirectional-prediction frame, a zero motion information unit added to the set S1 includes a forward zero motion vector and a backward zero motion vector; the reference frame indexes corresponding to the motion vectors in different zero motion information units added to S1 may be different, for example, 0, 1, 2, 3, or another value.

Similarly, as shown in FIG. 2-b and FIG. 2-c, the candidate motion information unit set S2 corresponding to the top-right pixel sample of the current image block may include motion information units of x2 image blocks, where the x2 image blocks may include at least one of: a pixel sample Col-RT that is at the same position, in a video frame temporally adjacent to the video frame to which the current image block belongs, as the top-right pixel sample RT of the current image block; a spatially adjacent image block E to the top right of the current image block; and a spatially adjacent image block D above the current image block. For example, the motion information units of E and D may be obtained first and added to the set S2. If the motion information units of E and D are identical, de-duplication processing may be performed on S2 (in this case the number of motion information units in the de-duplicated S2 is 1). If the motion information unit of Col-RT is identical to one of the motion information units in the de-duplicated S2, a zero motion information unit may further be added to S2 until the number of motion information units in S2 equals 2. If the motion information unit of Col-RT is different from every motion information unit in the de-duplicated S2, it is added to the de-duplicated S2; if the number of motion information units in S2 is still less than 2 at this point, a zero motion information unit is further added until the number equals 2.

The zero motion information units added to the set S2 follow the same rule as for S1: for a forward-prediction frame they include a forward zero motion vector but may not include a backward zero motion vector; for a backward-prediction frame they include a backward zero motion vector but may not include a forward zero motion vector; for a bidirectional-prediction frame they include both a forward and a backward zero motion vector; and the reference frame indexes corresponding to the motion vectors in different zero motion information units may be different, for example, 0, 1, 2, 3, or another value.

Similarly, as shown in FIG. 2-b and FIG. 2-c, the candidate motion information unit set S3 corresponding to the bottom-left pixel sample of the current image block may include motion information units of x3 image blocks, where the x3 image blocks may include at least one of: a pixel sample Col-LB that is at the same position, in a video frame temporally adjacent to the video frame to which the current image block belongs, as the bottom-left pixel sample LB of the current image block; a spatially adjacent image block G to the bottom left of the current image block; and a spatially adjacent image block F to the left of the current image block. For example, the motion information units of G and F may be obtained first and added to the set S3. If the motion information units of G and F are identical, de-duplication processing is performed on S3 (in this case the number of motion information units in the de-duplicated S3 is 1). If the motion information unit of Col-LB is identical to one of the motion information units in the de-duplicated S3, a zero motion information unit may further be added to S3 until the number of motion information units in S3 equals 2. If the motion information unit of Col-LB is different from every motion information unit in the de-duplicated S3, it is added to the de-duplicated S3; if the number of motion information units in S3 is still less than 2 at this point, a zero motion information unit is further added until the number equals 2.

The zero motion information units added to the set S3 follow the same rule as for S1 and S2, and the reference frame indexes corresponding to the motion vectors in different zero motion information units may likewise be different, for example, 0, 1, 2, 3, or another value.
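The S1/S2/S3 construction described above (spatial candidates, de-duplication, the co-located temporal candidate, then zero-unit padding) can be sketched as follows. This is an illustrative simplification, not the normative procedure: a motion information unit is modelled as a hashable tuple, and the padding zero units are forward-only with increasing reference frame indexes.

```python
def build_candidate_set(spatial_units, temporal_unit, target_size):
    """Build a candidate motion information unit set: deduplicate the
    spatial candidates, append the temporal (co-located) candidate if it
    is not a duplicate, then pad with zero motion information units up to
    target_size. A unit is a tuple
    (motion_vector, prediction_direction, reference_frame_index)."""
    result = []
    for unit in spatial_units:
        if unit not in result:           # de-duplication pass
            result.append(unit)
    if temporal_unit is not None and temporal_unit not in result:
        result.append(temporal_unit)
    ref_idx = 0
    while len(result) < target_size:     # pad with zero units, distinct ref indexes
        zero = ((0, 0), 'forward', ref_idx)
        if zero not in result:
            result.append(zero)
        ref_idx += 1
    return result[:target_size]
```

With target_size = 3 this mirrors the S1 construction, and with target_size = 2 the S2/S3 constructions.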
That two motion information units are different may mean that the motion vectors they include are different, or that the prediction directions corresponding to their motion vectors are different, or that the reference frame indexes corresponding to their motion vectors are different. That two motion information units are identical may mean that the motion vectors they include are identical, the prediction directions corresponding to their motion vectors are identical, and the reference frame indexes corresponding to their motion vectors are identical.

It can be understood that, for scenarios with more pixel samples, the candidate motion information unit set of each corresponding pixel sample may be obtained in a similar manner.

For example, as shown in FIG. 2-d, the 2 pixel samples may include 2 of the top-left pixel sample, the top-right pixel sample, the bottom-left pixel sample, and the center pixel sample a1 of the current image block, where the top-left pixel sample of the current image block is the top-left vertex of the current image block or a pixel block in the current image block that contains the top-left vertex; the bottom-left pixel sample is the bottom-left vertex or a pixel block in the current image block that contains the bottom-left vertex; the top-right pixel sample is the top-right vertex or a pixel block in the current image block that contains the top-right vertex; and the center pixel sample a1 is the center pixel of the current image block or a pixel block in the current image block that contains the center pixel.
S203. The video encoding apparatus determines N candidate merged motion information unit sets based on the candidate motion information unit set corresponding to each of the 2 pixel samples. Each motion information unit included in each of the N candidate merged motion information unit sets is selected from at least part of the constraint-compliant motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. The N candidate merged motion information unit sets are different from one another, and each of the N candidate merged motion information unit sets includes 2 motion information units.

It can be understood that, assuming a candidate merged motion information unit set is determined based on the candidate motion information unit set S1 (assumed to include 3 motion information units) and the candidate motion information unit set S2 (assumed to include 2 motion information units), in theory 3*2 = 6 initial candidate merged motion information unit sets can be determined. However, to improve usability, at least one of the first condition, the second condition, the third condition, the fourth condition, and the fifth condition may be used, for example, to screen the N candidate merged motion information unit sets out of the 6 initial candidate merged motion information unit sets. If the numbers of motion information units included in S1 and S2 are not limited to the above example, the number of initial candidate merged motion information unit sets is not necessarily 6.

For the specific restrictions of the first, second, third, fourth, and fifth conditions, refer to the examples in the above embodiment; details are not repeated here. Certainly, the N candidate merged motion information unit sets may also satisfy other conditions not listed.

In a specific implementation process, for example, at least one of the first condition, the second condition, and the third condition may first be used to screen the initial candidate merged motion information unit sets, so that N01 candidate merged motion information unit sets are screened out; scaling processing is then performed on the N01 candidate merged motion information unit sets; and at least one of the fourth condition and the fifth condition is then used to screen the N candidate merged motion information unit sets out of the N01 scaled candidate merged motion information unit sets. Certainly, the fourth and fifth conditions may not be referenced; instead, at least one of the first, second, and third conditions may be used directly to screen the N candidate merged motion information unit sets out of the initial candidate merged motion information unit sets.

It can be understood that a motion vector in video coding/decoding reflects the distance by which an object is displaced in one direction (the prediction direction) relative to the same time instant (the same time instant corresponds to the same reference frame). Therefore, when the motion information units of different pixel samples correspond to different prediction directions and/or different reference frame indexes, the motion displacement of each pixel/pixel block of the current image block relative to a reference frame may not be directly obtainable. When those pixel samples correspond to the same prediction direction and the same reference frame index, the motion vector of each pixel/pixel block in the image block can be obtained by combining the merged motion vectors.

Therefore, when the motion information units of different pixel samples in a candidate merged motion information unit set correspond to different prediction directions and/or different reference frame indexes, scaling processing may be performed on the candidate merged motion information unit set. Scaling processing on a candidate merged motion information unit set may involve modifying, adding, and/or deleting motion vectors in one or more motion information units of the set.

For example, in some possible implementation manners of the present invention, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i may include: when a reference frame index corresponding to a motion vector whose prediction direction is a first prediction direction in the merged motion information unit set i is different from the reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the motion vector whose prediction direction is the first prediction direction in the set is scaled to the reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the scaled merged motion information unit set i, where the first prediction direction is forward or backward;

or, the performing pixel value prediction on the current image block by using an affine motion model and the merged motion information unit set i may include: when a reference frame index corresponding to a forward motion vector in the merged motion information unit set i is different from the forward reference frame index of the current image block and a reference frame index corresponding to a backward motion vector in the merged motion information unit set i is different from the backward reference frame index of the current image block, performing scaling processing on the merged motion information unit set i so that the forward motion vector in the set is scaled to the forward reference frame of the current image block and the backward motion vector in the set is scaled to the backward reference frame of the current image block, and performing pixel value prediction on the current image block by using the affine motion model and the scaled merged motion information unit set i.
S204. The video encoding apparatus determines, from among the N candidate merged motion information unit sets, a merged motion information unit set i including 2 motion information units.

Optionally, in some possible implementation manners of the present invention, the video encoding apparatus may further write an identifier of the merged motion information unit set i into the video bitstream. Correspondingly, the video decoding apparatus determines, based on the identifier of the merged motion information unit set i obtained from the video bitstream, the merged motion information unit set i including 2 motion information units from among the N candidate merged motion information unit sets.

Optionally, in some possible implementation manners of the present invention, the video encoding apparatus's determining, from among the N candidate merged motion information unit sets, the merged motion information unit set i including 2 motion information units may include: determining, from among the N candidate merged motion information unit sets and based on distortion or rate-distortion cost, the merged motion information unit set i including 2 motion vectors.

Optionally, the rate-distortion cost corresponding to the merged motion information unit set i is less than or equal to the rate-distortion cost corresponding to any one of the N candidate merged motion information unit sets other than the merged motion information unit set i.

Optionally, the distortion corresponding to the merged motion information unit set i is less than or equal to the distortion corresponding to any one of the N candidate merged motion information unit sets other than the merged motion information unit set i.

The rate-distortion cost corresponding to a particular candidate merged motion information unit set among the N candidate merged motion information unit sets (for example, the merged motion information unit set i) may be, for example, the rate-distortion cost corresponding to the predicted pixel values of an image block (for example, the current image block) obtained by performing pixel value prediction on the image block by using that candidate merged motion information unit set.

The distortion corresponding to a particular candidate merged motion information unit set among the N candidate merged motion information unit sets (for example, the merged motion information unit set i) may be, for example, the distortion between the original pixel values of an image block (for example, the current image block) and the predicted pixel values of the image block obtained by performing pixel value prediction on the image block by using that candidate merged motion information unit set (that is, the distortion between the original and predicted pixel values of the image block).

In some possible implementation manners of the present invention, this distortion may specifically be, for example, the sum of squared differences (SSD), the sum of absolute differences (SAD), the sum of differences, or another distortion measure between the original pixel values and the predicted pixel values of the image block.

Further, to further reduce computational complexity, when N is greater than n1, n1 candidate merged motion information unit sets may be screened out of the N candidate merged motion information unit sets, and the merged motion information unit set i including 2 motion information units is then determined from the n1 candidate merged motion information unit sets based on distortion or rate-distortion cost. D(V) corresponding to any one of the n1 candidate merged motion information unit sets is less than or equal to D(V) corresponding to any one of the N candidate merged motion information unit sets other than the n1 candidate merged motion information unit sets, where n1 equals, for example, 3, 4, 5, 6, or another value.

Further, the n1 candidate merged motion information unit sets or their identifiers may be added to a candidate merged motion information unit set queue; if N is less than or equal to n1, the N candidate merged motion information unit sets or their identifiers may be added to the candidate merged motion information unit set queue. The candidate merged motion information unit sets in the queue may be arranged, for example, in ascending or descending order of D(V).
The Euclidean distance parameter D(V) of any one of the N candidate merged motion information unit sets (for example, the merged motion information unit set i) may be computed, for example, as follows:

  D(V) = |(v1,x - v0,x) * h - (v2,y - v0,y) * w| + |(v1,y - v0,y) * h + (v2,x - v0,x) * w|

where vp,x denotes the horizontal component of a motion vector vp, and vp,y denotes the vertical component of the motion vector vp (p = 0, 1, 2); v0 and v1 are the 2 motion vectors of the two pixel samples included in a candidate merged motion information unit set among the N candidate merged motion information unit sets; and the motion vector v2 denotes the motion vector of another pixel sample of the current image block, different from the above two pixel samples. For example, as shown in FIG. 2-e, v0 and v1 denote the motion vectors of the top-left and top-right pixel samples of the current image block, and the motion vector v2 denotes the motion vector of the bottom-left pixel sample of the current image block; certainly, the motion vector v2 may also denote the motion vector of the center pixel sample or another pixel sample of the current image block.

Optionally, |v1,x - v0,x| ≤ w/2, or |v1,y - v0,y| ≤ h/2, or |v2,x - v0,x| ≤ w/2, or |v2,y - v0,y| ≤ h/2.

Further, a candidate merged motion information unit set queue can be obtained by sorting the N candidate merged motion information unit sets in ascending or descending order of their D(V) values. The merged motion information unit sets in the candidate merged motion information unit set queue are different from one another, and an index may be used to indicate a particular merged motion information unit set in the queue.
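One consistent reading of the D(V) screening is a deviation-from-affine measure: for motion vectors v0, v1, v2 of the top-left, top-right, and bottom-left samples of a w × h block that fit the simplified affine (rotation plus scaling) model exactly, (v1,x − v0,x)·h = (v2,y − v0,y)·w and (v1,y − v0,y)·h = −(v2,x − v0,x)·w hold, so the sum of the absolute deviations from these two identities is 0 for perfectly affine candidates. A sketch under that assumption (illustrative, not the normative definition):

```python
def dv(v0, v1, v2, w, h):
    """Deviation of the three sample motion vectors from a consistent
    simplified affine (rotation + scaling) model; 0 means perfectly affine."""
    return (abs((v1[0] - v0[0]) * h - (v2[1] - v0[1]) * w)
            + abs((v1[1] - v0[1]) * h + (v2[0] - v0[0]) * w))

def top_n1(candidates, v2, w, h, n1):
    """Keep the n1 candidate (v0, v1) pairs with the smallest D(V),
    in ascending order, mirroring the queue construction."""
    return sorted(candidates, key=lambda c: dv(c[0], c[1], v2, w, h))[:n1]
```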
S205. The video encoding apparatus performs motion vector prediction on the current image block by using an affine motion model and the merged motion information unit set i.

Assume that the size of the current image block is w × h, where w is equal or not equal to h.

Assume that the coordinates of the above two pixel samples are (0, 0) and (w, 0); here, the coordinates of the top-left pixel of each pixel sample participate in the computation, as an example. Referring to FIG. 2-e, FIG. 2-e shows the coordinates of the four vertices of the current image block. FIG. 2-f and FIG. 2-g show schematic diagrams of affine motion.

The motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1) respectively. By substituting the coordinates and motion vectors of the 2 pixel samples into the affine motion model exemplified below, the motion vector of an arbitrary pixel in the current image block x can be computed:

  vx = (vx1 - vx0)/w * x - (vy1 - vy0)/w * y + vx0
  vy = (vy1 - vy0)/w * x + (vx1 - vx0)/w * y + vy0    (Formula 1)

where the motion vectors of the 2 pixel samples are (vx0, vy0) and (vx1, vy1), vx and vy are respectively the motion vector horizontal component (vx) and motion vector vertical component (vy) of a pixel sample with coordinates (x, y) in the current image block, and w is the length or width of the current image block.

Further, the video encoding apparatus may perform pixel value prediction on the current image block based on the computed motion vectors of the pixels or pixel blocks of the current image block. The video encoding apparatus may obtain a prediction residual of the current image block by using the original pixel values of the current image block and the predicted pixel values of the current image block obtained through pixel value prediction, and may write the prediction residual of the current image block into the video bitstream.
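The prediction step can be sketched as follows (illustrative Python only; the reference frame is modelled as a dict of integer pixel positions, and motion vectors are rounded to integer precision for brevity, whereas a real codec would interpolate sub-pixel positions):

```python
def predict_block(ref, w, h, mv0, mv1):
    """Per-pixel motion-compensated prediction with Formula 1.
    ref: dict mapping integer (x, y) to a pixel value in the reference
    frame; mv0 and mv1 are the motion vectors of the samples at (0, 0)
    and (w, 0)."""
    vx0, vy0 = mv0
    vx1, vy1 = mv1
    pred = {}
    for y in range(h):
        for x in range(w):
            vx = (vx1 - vx0) * x / w - (vy1 - vy0) * y / w + vx0
            vy = (vy1 - vy0) * x / w + (vx1 - vx0) * y / w + vy0
            pred[(x, y)] = ref[(x + round(vx), y + round(vy))]
    return pred
```

The encoder would then subtract this prediction from the original block to obtain the residual that is written into the bitstream.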
It can be seen that, in the technical solution of this embodiment, the video encoding apparatus performs pixel value prediction on the current image block by using an affine motion model and a merged motion information unit set i, where each motion information unit in the merged motion information unit set i is selected from at least part of the motion information units in the candidate motion information unit set corresponding to each of the 2 pixel samples. Because the selection range of the merged motion information unit set i is relatively small, the conventional mechanism, in which one kind of motion information of multiple pixel samples is screened out from all possible candidate motion information unit sets only through a large amount of computation, is abandoned. This helps improve coding efficiency, reduces the computational complexity of affine-model-based image prediction, and makes it possible to introduce the affine motion model into video coding standards. Because the affine motion model is introduced, object motion can be described more accurately, which helps improve prediction accuracy. Because the number of referenced pixel samples may be 2, the computational complexity of affine-model-based image prediction after the affine motion model is introduced is further reduced, and the amount of affine parameter information or the number of motion vector residuals transmitted by the encoder is also reduced.
下面举例公式1所示的仿射运动模型的一种推导过程。其中,例如可利用旋转运动模型来推导仿射运动模型。
其中,旋转运动例如图2-h或图2-i举例所示。
其中,旋转运动模型如公式(2)所示。其中(x′,y′)为坐标为(x,y)的像素点在参考帧中对应的坐标,其中,θ为旋转角度,(a0,a1)为平移分量。若已知变换系数,即可求得像素点(x,y)的运动矢量(vx,vy)。
x' = cosθ × x + sinθ × y + a0
y' = -sinθ × x + cosθ × y + a1   (公式2)
其中,采用的旋转矩阵为:
[ cosθ   sinθ ]
[ -sinθ  cosθ ]
若在旋转的基础上再进行一次系数为ρ的缩放变换,同时,为了避免旋转运动中的三角运算,得到如下简化的仿射运动矩阵。
[ ρcosθ   ρsinθ ]
[ -ρsinθ  ρcosθ ]
这样,有利于降低计算的复杂度,可以简化每个像素点的运动矢量的计算过程,而且该模型可以像一般的仿射运动模型一样应用于旋转和缩放等复杂运动场景。其中,简化的仿射运动模型描述可如公式3。其中,和一般仿射运动模型相比简化的仿射运动模型可只需要4个参数表示。
x' = a × x + b × y + a0
y' = -b × x + a × y + a1   (公式3)
对于尺寸为w×h的图像块(如CUR),将其右边及下边界各扩展一行并求得坐标点(0,0),(w,0)的顶点的运动矢量(vx0,vy0),(vx1,vy1)。以这两个顶点为像素样本(当然,也可以以其它点作为参考的像素样本,如中心像素样本等等),将它们的坐标及运动矢量代入公式(3),可以推导出公式1,
vx = (vx1 - vx0)/w × x - (vy1 - vy0)/w × y + vx0
vy = (vy1 - vy0)/w × x + (vx1 - vx0)/w × y + vy0   (公式1)

其中，取vx=x'-x、vy=y'-y时，代入上述两个顶点的坐标及运动矢量可得：a0=vx0，a1=vy0，a=1+(vx1-vx0)/w，b=-(vy1-vy0)/w。

其中，所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1)，所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量，所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量，所述w为所述当前图像块的长或宽。
可以理解,从上面的推导过程可以看出公式1具有较强的可用性,实践过程发现,由于所参考的像素样本的数量可为2个,这样有利于进一步降低引入仿射运动模型之后,基于仿射运动模型进行图像预测的计算复杂度和减少编码传递仿射参数信息或运动矢量差值的个数。
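下面给出一段示意性的Python代码，对上述推导做数值验证：将(0,0)、(w,0)两点的运动矢量代入简化仿射运动模型（公式3），再由参考帧坐标与当前坐标之差得到任意点的运动矢量，其结果与公式1一致（mv_from_model为示意性命名，vx=x'-x、vy=y'-y的符号约定属本示例的假设）：

```python
def mv_from_model(x, y, w, v0, v1):
    """由简化仿射模型(公式3)计算坐标(x, y)处的运动矢量：
    先用(0,0)、(w,0)两点的运动矢量求出系数a、b与平移分量a0、a1，
    再求参考帧中的对应坐标(x', y')，最后取 vx=x'-x、vy=y'-y。"""
    vx0, vy0 = v0
    vx1, vy1 = v1
    a = 1 + (vx1 - vx0) / w      # a相当于ρcosθ
    b = -(vy1 - vy0) / w         # b相当于ρsinθ
    a0, a1 = vx0, vy0            # 平移分量
    xp = a * x + b * y + a0      # 参考帧中对应的横坐标x'
    yp = -b * x + a * y + a1     # 参考帧中对应的纵坐标y'
    return xp - x, yp - y
```

按该约定计算出的运动矢量与直接使用公式1的结果逐点相同，这从数值上印证了公式1的推导。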
请参见图3,图3为本发明的另一个实施例提供的另一种图像预测方法的流程示意图。本实施例中主要以在视频解码装置中实施图像预测方法为例进行描述。其中,图3举例所示,本发明的另一个实施例提供的另一种图像预测方法可包括:
S301、视频解码装置确定当前图像块中的2个像素样本。
其中,本实施例中主要以所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本为例。例如,所述2个像素样本包括所述当前图像块的左上像素样本和右上像素样本。其中,所述2个像素样本为所述当前图像块的其他像素样本的场景可以此类推。
其中，所述当前图像块的左上像素样本可为所述当前图像块的左上顶点或者所述当前图像块中的包含所述当前图像块的左上顶点的像素块；所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块；所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块；所述当前图像块的中心像素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
若像素样本为像素块，则该像素块的大小例如为2*2、1*2、4*2、4*4或者其他大小。
S302、视频解码装置确定出所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集。
其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元。
其中,本发明各实施例中提及的像素样本可以是像素点或包括至少两个像素点的像素块。
其中,例如图2-b和图2-c所示,所述当前图像块的左上像素样本对应的候选运动信息单元集S1可包括x1个像素样本的运动信息单元。其中,所述x1个像素样本包括:与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本LT位置相同的像素样本Col-LT、所述当前图像块的左边的空域相邻图像块C、所述当前图像块的左上的空域相邻图像块A、所述当前图像块的上边的空域相邻图像块B中的至少一个。例如可先获取所述当前图像块的左边的空域相邻图像块C的运动信息单元、所述当前图像块的左上的空域相邻图像块A的运动信息单元和所述当前图像块的上边的空域相邻图像块B的运动信息单元,将获取到的所述当前图像块的左边的空域相邻图像块C的运动信息单元、所述当前图像块的左上的空域相邻图像块A的运动信息单元和所述当前图像块的上边的空域相邻图像块B的运动信息单元添加到所述当前图像块的左上像素样本对应的候选运动信息单元集中,若所述当前图像块的左边的空域相邻图像块C的运动信息单元、所述当前图像块的左上的空域相邻图像块A的运动信息单元和所述当前图像块的上边的空域相邻图像块B的运动信息单元中的部分或全部运动信息单元相同,则进一步对所述候选运动信息单元集S1进行去重处理(此时去重处理后的所述候选运动信息单元集S1中的运动信息单元的数量可能是1或2),若与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本LT位置相同的像素样本Col-LT的运动信息单元,与去重处理后的所述候选运动信息单元集S1中的其中一个运动信息单元相同,则可向所述候选运动信息单元集S1中加入零运动信息单元,直到候选运动信息单元集S1中的运动信息单元数量等于3。此外,若与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本LT位置相同的像素样本Col-LT的运动信息单元,不同于去重处理后的所述候选运动信息单元集S1中的任意一个运动信息单元,则将与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本LT位置相同的像素样本Col-LT的运动信息单元添加到去重处理后的所述 候选运动信息单元集S1中,若此时所述候选运动信息单元集S1中的运动信息单元数量仍然少于3个,则可以向所述候选运动信息单元集S1中加入零运动信息单元,直到所述候选运动信息单元集S1中的运动信息单元数量等于3。
其中，若当前图像块所属视频帧是前向预测帧，则添加到候选运动信息单元集S1中的零运动信息单元包括预测方向为前向的零运动矢量但可不包括预测方向为后向的零运动矢量。若当前图像块所属视频帧是后向预测帧，则添加到候选运动信息单元集S1中的零运动信息单元包括预测方向为后向的零运动矢量但可不包括预测方向为前向的零运动矢量。此外，若当前图像块所属视频帧是双向预测帧，则添加到候选运动信息单元集S1中的零运动信息单元包括预测方向为前向的零运动矢量和预测方向为后向的零运动矢量，其中，添加到候选运动信息单元集S1中的不同零运动信息单元中的运动矢量所对应的参考帧索引可不相同，对应的参考帧索引例如可为0、1、2、3或其他值。
类似的,例如图2-b和图2-c所示,所述当前图像块的右上像素样本对应的候选运动信息单元集S2可以包括x2个图像块的运动信息单元。其中,所述x2个图像块可以包括:与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本RT位置相同的像素样本Col-RT、所述当前图像块的右上的空域相邻图像块E、所述当前图像块的上边的空域相邻图像块D之中的至少一个。例如,可以先获取所述当前图像块的右上的空域相邻图像块E的运动信息单元和所述当前图像块的上边的空域相邻图像块D的运动信息单元,将获取的所述当前图像块的右上的空域相邻图像块E的运动信息单元和所述当前图像块的上边的空域相邻图像块D的运动信息单元添加到所述当前图像块的右上像素样本对应的候选运动信息单元集S2中,若所述当前图像块的右上的空域相邻图像块E的运动信息单元和所述当前图像块的上边的空域相邻图像块D的运动信息单元相同,则可对所述候选运动信息单元集S2进行去重处理(此时去重处理后的所述候选运动信息单元集S2中的运动信息单元的数量是1),若与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本RT位置相同的像素样本Col-RT的运动信息单元,与去重处理后的所述候选运动信息单元集S2中的其中一个运动信息单元相同,可进一 步向所述候选运动信息单元集S2中加入零运动信息单元,直到所述候选运动信息单元集S2中运动信息单元数量等于2。此外,若与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本RT位置相同的像素样本Col-RT的运动信息单元,不同于去重处理之后的所述候选运动信息单元集S2中的任意一个运动信息单元,则可以将与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本RT位置相同的像素样本Col-RT的运动信息单元添加到去重处理后的所述候选运动信息单元集S2中,若此时所述候选运动信息单元集S2之中的运动信息单元数量仍然少于2个,则进一步向所述候选运动信息单元集S2中加入零运动信息单元,直到所述候选运动信息单元集S2中运动信息单元的数量等于2。
其中，若当前图像块所属视频帧是前向预测帧，则添加到候选运动信息单元集S2中的零运动信息单元包括预测方向为前向的零运动矢量但可不包括预测方向为后向的零运动矢量。若当前图像块所属视频帧是后向预测帧，则添加到候选运动信息单元集S2中的零运动信息单元包括预测方向为后向的零运动矢量但可不包括预测方向为前向的零运动矢量。此外，若当前图像块所属视频帧是双向预测帧，则添加到候选运动信息单元集S2中的零运动信息单元包括预测方向为前向的零运动矢量和预测方向为后向的零运动矢量，其中，添加到候选运动信息单元集S2中的不同零运动信息单元中的运动矢量所对应的参考帧索引可不相同，对应的参考帧索引例如可为0、1、2、3或其他值。
类似的,例如图2-b和图2-c所示,所述当前图像块的左下像素样本对应的候选运动信息单元集S3可以包括x3个图像块的运动信息单元。其中,所述x3个图像块可包括:与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本LB位置相同的像素样本Col-LB、所述当前图像块的左下的空域相邻图像块G、所述当前图像块的左边的空域相邻图像块F中的至少一个。例如先获取所述当前图像块的左下的空域相邻图像块G的运动信息单元和所述当前图像块的左边的空域相邻图像块F的运动信息单元,可将获取的所述当前图像块的左下的空域相邻图像块G的运动信息单元和所述当前图像块的左边的空域相邻图像块F的运动信息单元添加到所述当前图像块的 左下像素样本对应的候选运动信息单元集S3中,若所述当前图像块的左下的空域相邻图像块G的运动信息单元和所述当前图像块的左边的空域相邻图像块F的运动信息单元相同,则对所述候选运动信息单元集S3进行去重处理(此时去重处理后的所述候选运动信息单元集S3中的运动信息单元的数量是1),若与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本LB位置相同的像素样本Col-LB的运动信息单元,与去重处理后的所述候选运动信息单元集S3中的其中一个运动信息单元相同,则可进一步向所述候选运动信息单元集S3中加入零运动信息单元,直到所述候选运动信息单元集S3中运动信息单元数量等于2。此外,若与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本LB位置相同的像素样本Col-LB的运动信息单元,不同于去重处理后的所述候选运动信息单元集S3中的任意一个运动信息单元,则可将与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本LB位置相同的像素样本Col-LB的运动信息单元添加到去重处理后的候选运动信息单元集S3中,若此时所述候选运动信息单元集S3之中的运动信息单元数量仍然少于2个,则进一步向所述候选运动信息单元集S3中加入零运动信息单元,直到所述候选运动信息单元集S3中运动信息单元数量等于2。
其中，若当前图像块所属视频帧是前向预测帧，则添加到候选运动信息单元集S3中的零运动信息单元包括预测方向为前向的零运动矢量但可不包括预测方向为后向的零运动矢量。若当前图像块所属视频帧是后向预测帧，则添加到候选运动信息单元集S3中的零运动信息单元包括预测方向为后向的零运动矢量但可不包括预测方向为前向的零运动矢量。此外，若当前图像块所属视频帧是双向预测帧，则添加到候选运动信息单元集S3中的零运动信息单元包括预测方向为前向的零运动矢量和预测方向为后向的零运动矢量，其中，添加到候选运动信息单元集S3中的不同零运动信息单元中的运动矢量所对应的参考帧索引可不相同，对应的参考帧索引例如可为0、1、2、3或其他值。
其中,两个运动信息单元不相同,可指该两个运动信息单元包括的运动矢量不同,或该两个运动信息单元所包括的运动矢量对应的预测方向不同,或者 该两个运动信息单元所包括的运动矢量对应的参考帧索引不同。其中,两个运动信息单元相同,可指该两个运动信息单元所包括的运动矢量相同,且该两个运动信息单元所包括的运动矢量对应的预测方向相同,且该两个运动信息单元所包括的运动矢量对应的参考帧索引相同。
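以上按照空域相邻块加入、去重、再考察时域单元并用零运动信息单元补齐的构造过程，可用如下示意性Python代码概括（build_candidate_set为示意性命名，运动信息单元简化表示为(运动矢量, 预测方向, 参考帧索引)三元组，仅为帮助理解，并非本方案的限定实现）：

```python
def build_candidate_set(spatial_units, temporal_unit, target_size):
    """示意性构造某个像素样本的候选运动信息单元集：
    先加入各空域相邻块的运动信息单元并去重，再在时域单元
    不与已有单元重复时加入，最后用零运动信息单元补齐。"""
    s = []
    for u in spatial_units:          # 加入空域单元，两个单元完全相同时去重
        if u not in s:
            s.append(u)
    if temporal_unit not in s:       # 时域单元不同于已有任意单元时才加入
        s.append(temporal_unit)
    ref_idx = 0
    while len(s) < target_size:      # 用零运动信息单元补齐，参考帧索引可不同
        zero = ((0, 0), 'forward', ref_idx)
        if zero not in s:
            s.append(zero)
        ref_idx += 1
    return s[:target_size]

# 示例：空域块A与C的运动信息单元相同、时域单元与其重复，补零后数量为3
s1 = build_candidate_set(
    [((1, 1), 'forward', 0), ((1, 1), 'forward', 0), ((2, 0), 'forward', 0)],
    ((1, 1), 'forward', 0), 3)
```

这里两个运动信息单元"相同"被简化为三元组逐项相等，与正文中运动矢量、预测方向、参考帧索引均相同的定义对应。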
可以理解,对于存在更多像素样本的场景,可以按照类似方式得到相应像素样本的候选运动信息单元集。
例如图2-d所示,其中,在图2-d所示举例中,所述2个像素样本可包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中两个像素样本。其中,所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块;所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块;所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块;所述当前图像块的中心素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
S303、视频解码装置基于所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集确定N个候选合并运动信息单元集。其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元。所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
可以理解的是,假设基于候选运动信息单元集S1(假设包括3个运动信息单元)和所述候选运动信息单元集S2(假设包括2个运动信息单元)来确定候选合并运动信息单元集,则理论上可确定出3*2=6个初始的候选合并运动信息单元集,然而为了提高可用性,例如可以利用第一条件、第二条件、第三条件、第四条件和第五条件中的至少一个条件来从这6个初始的候选合并运动信息单元集中筛选出N个候选合并运动信息单元集。其中,如果候选运动信 息单元集S1和所述候选运动信息单元集S2所包括的运动信息单元的数量不限于上述举例,那么,初始的候选合并运动信息单元集的数量不一定是6。
其中，第一条件、第二条件、第三条件、第四条件和第五条件的具体限制性内容可参考上述实施例中的举例说明，此处不再赘述。当然，所述N个候选合并运动信息单元集例如还可满足其他未列出条件。
在具体实现过程中，例如可先利用第一条件、第二条件和第三条件中的至少一个条件对初始的候选合并运动信息单元集进行筛选，从初始的候选合并运动信息单元集中筛选出N01个候选合并运动信息单元集，而后对N01个候选合并运动信息单元集进行缩放处理，再利用第四条件和第五条件中的至少一个条件从进行缩放处理后的N01个候选合并运动信息单元集中筛选出N个候选合并运动信息单元集。当然，也可不参考第四条件和第五条件，而是直接利用第一条件、第二条件和第三条件中的至少一个条件对初始的候选合并运动信息单元集进行筛选，从初始的候选合并运动信息单元集中筛选出N个候选合并运动信息单元集。
可以理解的是,视频编解码中运动矢量反映的是一个物体在一个方向(预测方向)上相对于同一时刻(同一时刻对应同一参考帧)偏移的距离。因此在不同像素样本的运动信息单元对应不同预测方向和/或对应不同参考帧索引的情况下,可能无法直接得到当前图像块的每个像素点/像素块相对于一参考帧的运动偏移。而当这些像素样本对应相同预测方向和对应相同参考帧索引的情况下,可利用这些合并运动矢量组合得到该图像块中每个像素点/像素块的运动矢量。
因此,在候选合并运动信息单元集中的不同像素样本的运动信息单元对应不同预测方向和/或对应不同参考帧索引的情况下,可以对候选合并运动信息单元集进行缩放处理。其中,对候选合并运动信息单元集进行缩放处理可能涉及到对该候选合并运动信息单元集中的一个或多个运动信息单元中的运动矢量进行修改、添加和/或删除等。
例如,在本发明一些可能实施方式之中,所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测,可包括:当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引 不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,所述第一预测方向为前向或后向;
或者,所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测,可以包括:当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
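对于缩放处理，一种常见做法是按时域距离（例如图像顺序号POC的差值）对运动矢量做线性缩放，使其指向当前图像块的参考帧。下面给出一段示意性的Python代码（scale_mv为示意性命名，按POC距离线性缩放的方式是本示例的假设，并非本方案限定的缩放方法）：

```python
def scale_mv(mv, cur_poc, cur_ref_poc, cand_ref_poc):
    """将运动矢量mv按时域距离线性缩放到当前图像块的参考帧：
    td为mv原本指向的参考帧与当前帧的POC距离，
    tb为当前图像块参考帧与当前帧的POC距离（示意实现）。"""
    td = cand_ref_poc - cur_poc
    tb = cur_ref_poc - cur_poc
    if td == 0:                      # 时域距离为0时直接返回原矢量
        return mv
    return (mv[0] * tb / td, mv[1] * tb / td)
```

例如某运动矢量指向时域距离2倍于当前参考帧的帧时，缩放后其两个分量均乘以相应的距离比值。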
S304、视频解码装置对视频码流进行解码处理以得到合并运动信息单元集i的标识和当前图像块的预测残差,基于合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i。
相应的,视频编码装置可将所述合并运动信息单元集i的标识写入到视频码流。
S305、视频解码装置利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行运动矢量预测。
例如视频解码装置可先对所述合并运动信息单元集i中的运动矢量进行运动估计处理,以得到运动估计处理后的合并运动信息单元集i,视频解码装置利用仿射运动模型和运动估计处理后的合并运动信息单元集i对所述当前图像块进行运动矢量预测。
其中,假设当前图像块的大小为w×h,所述w等于或不等于h。
假设上述两个像素样本的坐标为(0,0)和(w,0)，此处以像素样本左上角像素的坐标参与计算为例。参见图2-e，图2-e示出了当前图像块的四个顶点的坐标。
所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1)，将2个像素样本的坐标及运动矢量代入如下举例的仿射运动模型，便可计算出当前图像块内的任意像素点的运动矢量。

vx = (vx1 - vx0)/w × x - (vy1 - vy0)/w × y + vx0
vy = (vy1 - vy0)/w × x + (vx1 - vx0)/w × y + vy0   (公式1)

其中，所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1)，其中，所述vx和vy分别是当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量(vx)和运动矢量竖直分量(vy)，其中，公式1中的所述w可为所述当前图像块的长或者宽。
S306、视频解码装置基于计算出的所述当前图像块的各像素点或各像素块的运动矢量对所述当前图像块进行像素值预测以得到的当前图像块的预测像素值。
S307、视频解码装置利用当前图像块的预测像素值和当前图像块的预测残差对当前图像块进行重建。
可以看出,本实施例的技术方案中,视频解码装置利用仿射运动模型和合并运动信息单元集i对当前图像块进行像素值预测,合并运动信息单元集i中的每个运动信息单元分别选自2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,由于合并运动信息单元集i选择范围变得相对较小,摒弃了传统技术采用的在多个像素样本的全部可能候选运动信息单元集合中通过大量计算才筛选出多个像素样本的一种运动信息单元的机制,有利于提高编码效率,并且也有利于降低基于仿射运动模型进行图像预测的计算复杂度,进而使得仿射运动模型引入视频编码标准变得可能。并且由于引入了仿射运动模型,有利于更准确描述物体运动,故而有利于提高预测准确度。由于所参考的像素样本的数量可为2个,这样有利于进一步降低引入仿射运动模型之后,基于仿射运动模型进行图像预测的计算复杂度,并且也有利于减少编码端传递仿射参数信息或者运动矢量残差的个数等。
下面还提供用于实施上述方案的相关装置。
参见图4,本发明实施例还提供一种图像预测装置400,可包括:
第一确定单元410,用于确定当前图像块中的2个像素样本,确定所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集;其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元;
其中,第二确定单元420,用于确定包括2个运动信息单元的合并运动信息单元集i。
其中,所述合并运动信息单元集i中的每个运动信息单元分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,所述运动信息单元包括预测方向为前向的运动矢量和/或预测方向为后向的运动矢量;
预测单元430,用于利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测。
可选的,在本发明一些可能的实施方式中,所述第二确定单元420可具体用于,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
可选的,在本发明一些可能的实施方式中,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选 合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值，或者，所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值，所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
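上述各条件的筛选逻辑可用如下示意性Python代码概括（check_candidate为示意性命名，运动信息单元简化为(运动矢量, 预测方向, 参考帧索引)三元组；第一条件中的"非平动"判断此处以2个运动矢量不完全相同来近似，仅为帮助理解）：

```python
def check_candidate(unit_a, unit_b, hthr, vthr):
    """对一个包含2个运动信息单元的候选合并运动信息单元集
    按第一至第五条件做示意性筛选，全部满足时返回True。"""
    (ax, ay), a_dir, a_ref = unit_a
    (bx, by), b_dir, b_ref = unit_b
    if (ax, ay) == (bx, by):         # 第一条件(近似)：运动方式为非平动
        return False
    if a_dir != b_dir:               # 第二条件：2个运动信息单元预测方向相同
        return False
    if a_ref != b_ref:               # 第三条件：2个运动信息单元参考帧索引相同
        return False
    if abs(ax - bx) > hthr:          # 第四条件：水平分量差值不超过水平分量阈值
        return False
    if abs(ay - by) > vthr:          # 第五条件：竖直分量差值不超过竖直分量阈值
        return False
    return True
```

正文中水平分量阈值例如可与当前图像块的宽度相关（如w/2），竖直分量阈值可与高度相关。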
可选的,在本发明一些可能的实施方式中,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
其中，所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块；所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块；所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块；所述当前图像块的中心像素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
可选的,在本发明一些可能的实施方式中,所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当前图像块的左上像素样本空域相邻的 像素样本和/或至少一个与所述当前图像块的左上像素样本时域相邻的像素样本,所述x1为正整数;
其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数;
其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5个像素样本的运动信息单元,其中,所述x5个像素样本中的其中一个像素样本为像素样本a2,
其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所 述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述x5为正整数。
可选的,在本发明一些可能的实施方式中,预测单元430具体用于当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,所述第一预测方向为前向或后向;
或者,所述预测单元430具体用于,当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
可选的,在本发明一些可能的实施方式中,所述预测单元430具体用于利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值;
或者,
所述预测单元430具体用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
可选的,在本发明的一些可能的实施方式中,所述预测单元430可以具体 用于利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
可选的,在本发明一些可能的实施方式中,所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型可为如下形式的仿射运动模型:
vx = (vx1 - vx0)/w × x - (vy1 - vy0)/w × y + vx0
vy = (vy1 - vy0)/w × x + (vx1 - vx0)/w × y + vy0
其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述w为所述当前图像块的长或宽。
可选的,在本发明一些可能的实施方式中,所述图像预测装置应用于视频编码装置中或所述图像预测装置应用于视频解码装置中。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,所述第二确定单元420可具体用于,基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,
所述装置还包括解码单元,用于从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的 运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述预测单元430还用于:利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述装置还包括编码单元,用于将所述合并运动信息单元集i的标识写入视频码流。
可以理解的是，本实施例的图像预测装置400的各功能模块的功能可根据上述方法实施例中的方法具体实现，其具体实现过程可以参照上述方法实施例的相关描述，此处不再赘述。图像预测装置400可为任何需要输出、播放视频的装置，如笔记本电脑、平板电脑、个人电脑、手机等设备。
可以看出，本实施例提供的技术方案中，图像预测装置400利用仿射运动模型和合并运动信息单元集i对当前图像块进行像素值预测，合并运动信息单元集i中的每个运动信息单元分别选自2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元，其中，由于合并运动信息单元集i选择范围变得相对较小，摒弃了传统技术采用的在多个像素样本的全部可能候选运动信息单元集合中通过大量计算才筛选出多个像素样本的一种运动信息单元的机制，有利于提高编码效率，并且也有利于降低基于仿射运动模型进行图像预测的计算复杂度，进而使得仿射运动模型引入视频编码标准变得可能。并且由于引入了仿射运动模型，有利于更准确描述物体运动，故而有利于提高预测准确度。并且，由于所参考的像素样本的数量可为2个，这样有利于进一步降低引入仿射运动模型之后，基于仿射运动模型进行图像预测的计算复杂度，并且，也有利于减少编码端传递仿射参数信息或者运动矢量残差的个数等。
参见图5,图5为本发明实施例提供的图像预测装置500的示意图,图像预测装置500可包括至少一个总线501、与总线501相连的至少一个处理器502以及与总线501相连的至少一个存储器503。
其中,处理器502通过总线501调用存储器503中存储的代码或者指令以用于,确定当前图像块中的2个像素样本,确定所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集;其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元;确定包括2个运动信息单元的合并运动信息单元集i;其中,所述合并运动信息单元集i中的每个运动信息单元分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,所述运动信息单元包括预测方向为前向的运动矢量和/或预测方向为后向的运动矢量;利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测。
可选的,在本发明一些可能的实施方式中,在确定包括2个运动信息单元的合并运动信息单元集i的方面,所述处理器用于,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
可选的,在本发明一些可能的实施方式中,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值，或者，所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值，所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
可选的,在本发明一些可能的实施方式中,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
其中，所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块；所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块；所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块；所述当前图像块的中心像素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
可选的,在本发明一些可能的实施方式中,所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当前图像块的左上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左上像素样本时域相邻的像素样 本,所述x1为正整数;
其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数。
其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5个像素样本的运动信息单元,所述x5个像素样本中的其中一个像素样本为像素样本a2,
其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述 x5为正整数。
可选的,在本发明一些可能的实施方式中,在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器502用于,当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,其中,所述第一预测方向为前向或后向;
或者,在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器502用于,当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
可选的,在本发明一些可能的实施方式中,在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器502用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值;
或者,
在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器502用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得 到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
可选的,在本发明一些可能的实施方式中,在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器502用于,利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
可选的,在本发明一些可能的实施方式中,所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型可为如下形式的仿射运动模型:
vx = (vx1 - vx0)/w × x - (vy1 - vy0)/w × y + vx0
vy = (vy1 - vy0)/w × x + (vx1 - vx0)/w × y + vy0
其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述w为所述当前图像块的长或宽。
可选的,在本发明一些可能的实施方式中,所述图像预测装置应用于视频编码装置中或所述图像预测装置应用于视频解码装置中。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,在确定包括2个运动信息单元的合并运动信息单元集i的方面,所述处理器502用于,基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的 合并运动信息单元集i。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频解码装置中的情况下,所述处理器502还用于,从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述处理器502还用于,利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
可选的,在本发明一些可能的实施方式中,在当所述图像预测装置应用于视频编码装置中的情况下,所述处理器502还用于,将所述合并运动信息单元集i的标识写入视频码流。
可以理解的是，本实施例的图像预测装置500的各功能模块的功能可根据上述方法实施例中的方法具体实现，其具体实现过程可以参照上述方法实施例的相关描述，此处不再赘述。图像预测装置500可为任何需要输出、播放视频的装置，如笔记本电脑、平板电脑、个人电脑、手机等设备。
可以看出,本实施例提供的技术方案中,图像预测装置500利用仿射运动模型和合并运动信息单元集i对当前图像块进行像素值预测,合并运动信息单元集i中的每个运动信息单元分别选自2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,由于合并运动信息单元集i选择范围变得相对较小,摒弃了传统技术采用的在多个像素样本的全部可能候选运动信息单元集合中通过大量计算才筛选出多个像素样本的一种运动信息单元的机制,有利于提高编码效率,并且也有利于降低基于仿射运动模型进行图像预测的计算复杂度,进而使得仿射运动模型引入视频编码标准变得可能。并且由于引入了仿射运动模型,有利于更准确描述物体运动,故而有利于提高预测准确度。并且,由于所参考的像素样本的数量可为2个,这样有 利于进一步降低引入仿射运动模型之后,基于仿射运动模型进行图像预测的计算复杂度,并且,也有利于减少编码端传递仿射参数信息或者运动矢量残差的个数等。
本发明实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任意一种图像预测方法的部分或全部步骤。
请参见图6,图6为本发明的一个实施例提供的一种图像处理方法的流程示意图。其中,图6举例所示,本发明的一个实施例提供的一种图像处理方法可包括:
S601、获得当前图像块的运动矢量2元组。
其中,所述运动矢量2元组可以包括所述当前图像块所属的视频帧中的2个像素样本各自的运动矢量。其中,本发明各实施例中提及的像素样本可以是像素点或包括至少两个像素点的像素块。
其中,本发明各实施例中提及运动矢量可以是前向运动矢量也可以是后向运动矢量,其中所述运动矢量2元组的各自的运动矢量方向可以相同。
其中,当前图像块可为当前编码块或当前解码块。
其中，运动矢量2元组可以包括上述实施例中所述2个像素样本的运动矢量，也可以包括上述实施例中所述合并运动信息单元集i中每个运动信息单元的一个运动矢量，也可以包括上述实施例中所述进行缩放处理后的合并运动信息单元集i中每个运动信息单元的一个运动矢量，也可以包括上述实施例中所述运动估计处理后的合并运动信息单元集i中每个运动信息单元的一个运动矢量，也可以是以上述实施例中所述合并运动信息单元集i中每个运动信息单元的运动矢量为预测值进行迭代更新得到的运动矢量2元组。有关2个像素样本的运动矢量、合并运动信息单元集i、进行缩放处理后的合并运动信息单元集i、运动估计处理后的合并运动信息单元集i的具体内容可参考上述实施例中的具体说明，在此不再赘述。其中，上述实施例中的合并运动信息单元集i中除运动矢量外还可以包括运动矢量的预测方向或运动矢量对应的参考帧索引，而本发明实施例中的运动矢量2元组仅包括运动矢量。
其中,所述2个像素样本可包括所述当前图像块的左上像素样本、右区域像素样本、下区域像素样本和右下区域像素样本中的2个像素样本。
其中,所述当前图像块的左上像素样本可为所述当前图像块的左上顶点或者所述当前图像块中的包含所述当前图像块的左上顶点的像素块。在视频编码以及解码中,左上像素样本的坐标值可以默认为(0,0)。
其中，所述当前图像块的下区域像素样本可为所述当前图像块的位于所述左上像素样本下方的像素点或像素块，其中，下区域像素样本的竖直坐标大于所述左上像素样本的竖直坐标。其中，所述下区域像素样本可包括上述实施例中的左下像素样本。其中，下区域像素样本的水平坐标可以和左上像素样本的水平坐标相同，下区域像素样本的水平坐标也可以和左上像素样本的水平坐标相差n个像素宽度，其中n为小于3的正整数。其中，在所有本发明实施例中，竖直坐标可以称为纵坐标，水平坐标也可称为横坐标。
其中，所述当前图像块的右区域像素样本可为所述当前图像块的位于所述左上像素样本右侧的像素点或像素块，其中，右区域像素样本的水平坐标大于所述左上像素样本的水平坐标。其中，所述右区域像素样本可包括上述实施例中的右上像素样本。其中，右区域像素样本的竖直坐标可以和左上像素样本的竖直坐标相同，右区域像素样本的竖直坐标也可以和左上像素样本的竖直坐标相差n个像素高度，其中n为小于3的正整数。
其中,所述当前图像块的右下区域像素样本可为所述当前图像块的位于所述左上像素样本右下方的像素点或像素块,其中,右下区域像素样本的竖直坐标大于所述左上像素样本的竖直坐标,右下区域像素样本的水平坐标大于所述左上像素样本的水平坐标。其中,所述右下区域像素样本可包括上述实施例中的中心像素样本a1,还可包括右下像素样本,所述当前图像块的右下像素样本可为所述当前图像块的右下顶点或者所述当前图像块中的包含所述当前图像块的右下顶点的像素块。
若像素样本为像素块,则该像素块的大小例如为2*2,1*2、4*2、4*4或其他大小。
有关所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1的具体内容可参考上述实施例中的具体说明，在此不再赘述。
其中,2个像素样本也可以是上述实施例中的所述2个像素样本,有关2个像素样本的具体内容可参考上述实施例中具体说明,在此不再赘述。
S602、利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量。
其中,所述计算得到的当前图像块中任意像素样本的运动矢量可以是上述实施例中所述当前图像块中的各像素点的运动矢量,所述当前图像块中的各像素块的运动矢量,以及所述当前图像块中的任意像素样本的运动矢量任一项,上述实施例中与所述当前图像块中的各像素点的运动矢量,所述当前图像块中的各像素块的运动矢量,以及所述当前图像块中的任意像素样本的运动矢量有关的具体内容可参考上述实施例中的具体说明,在此不再赘述。
其中,所述仿射运动模型可为如下形式:
vx = a × x + b × y
vy = -b × x + a × y
其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数。
可选的,在本发明的一些可能的实施方式之中,所述仿射运动模型还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
vx = a × x + b × y + c
vy = -b × x + a × y + d
可选的,在本发明的一些可能的实施方式之中,所述仿射运动模型的水平分量的水平坐标系数和所述仿射运动模型的水平分量的竖直坐标系数的平方和不等于1。或者,在本发明的一些可能的实施方式之中,所述仿射运动模型 的竖直分量的竖直坐标系数和所述仿射运动模型的竖直分量的水平坐标系数的平方和不等于1。
可选的,在本发明的一些可能的实施方式之中,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量,可包括:利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明的一些可能的实施方式之中,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量,可包括:利用所述2个像素样本各自的运动矢量的水平分量之间的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
或者,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量,可包括:利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右侧的右区域像素样本时,所述仿射运动模型具体为:
vx = (vx1 - vx0)/w × x - (vy1 - vy0)/w × y + vx0
vy = (vy1 - vy0)/w × x + (vx1 - vx0)/w × y + vy0

其中，(vx0,vy0)为所述左上像素样本的运动矢量，(vx1,vy1)为所述右区域像素样本的运动矢量，w为所述2个像素样本之间的距离。w也可以为所述右区域像素样本的水平坐标与所述左上像素样本的水平坐标之间的差值。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本下方的下区域像素样本时,所述仿射运动模型具体为:
vx = (vy2 - vy0)/h × x + (vx2 - vx0)/h × y + vx0
vy = -(vx2 - vx0)/h × x + (vy2 - vy0)/h × y + vy0

其中，(vx0,vy0)为所述左上像素样本的运动矢量，(vx2,vy2)为所述下区域像素样本的运动矢量，h为所述2个像素样本之间的距离。h也可以为所述下区域像素样本的竖直坐标与所述左上像素样本的竖直坐标之间的差值。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右下方的右下区域像素样本时,所述仿射运动模型具体为:
vx = ((vx3 - vx0)×w1 + (vy3 - vy0)×h1)/(w1² + h1²) × x + ((vx3 - vx0)×h1 - (vy3 - vy0)×w1)/(w1² + h1²) × y + vx0
vy = -((vx3 - vx0)×h1 - (vy3 - vy0)×w1)/(w1² + h1²) × x + ((vx3 - vx0)×w1 + (vy3 - vy0)×h1)/(w1² + h1²) × y + vy0

其中，(vx0,vy0)为所述左上像素样本的运动矢量，(vx3,vy3)为所述右下区域像素样本的运动矢量，h1为所述2个像素样本之间的竖直方向距离，w1为所述2个像素样本之间的水平方向距离，w1²+h1²为所述2个像素样本之间的距离的平方。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本为所述当前图像块所属的视频帧中的任意的2个像素样本时,所述仿射运动模型具体为:
vx = (((vx5 - vx4)×(x5 - x4) + (vy5 - vy4)×(y5 - y4))×(x - x4) + ((vx5 - vx4)×(y5 - y4) - (vy5 - vy4)×(x5 - x4))×(y - y4))/((x5 - x4)² + (y5 - y4)²) + vx4

vy = (-((vx5 - vx4)×(y5 - y4) - (vy5 - vy4)×(x5 - x4))×(x - x4) + ((vx5 - vx4)×(x5 - x4) + (vy5 - vy4)×(y5 - y4))×(y - y4))/((x5 - x4)² + (y5 - y4)²) + vy4
其中,(x4,y4)为所述2个像素样本中其中一个像素样本的坐标,(vx4,vy4)为坐标为(x4,y4)的所述其中一个像素样本的运动矢量,(x5,y5)为所述2个像素样本中另一个像素样本的坐标,(vx5,vy5)为坐标为(x5,y5)的所述另一个像素样本的运动矢量。
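作为示意，下面的Python代码由任意2个像素样本的坐标及运动矢量解出四参数仿射模型vx=ax+by+c、vy=-bx+ay+d的系数，再计算任意位置的运动矢量（affine_coeffs、mv_at均为示意性命名，此处假设2个像素样本的坐标互不相同）：

```python
def affine_coeffs(p4, v4, p5, v5):
    """由两个像素样本(坐标p4、p5，运动矢量v4、v5)解出
    vx = a*x + b*y + c, vy = -b*x + a*y + d 的系数(a, b, c, d)。"""
    x4, y4 = p4
    x5, y5 = p5
    dx, dy = x5 - x4, y5 - y4
    dvx, dvy = v5[0] - v4[0], v5[1] - v4[1]
    d2 = dx * dx + dy * dy           # 两个像素样本之间距离的平方(假设不为0)
    a = (dvx * dx + dvy * dy) / d2   # 分量加权和与距离平方的比值
    b = (dvx * dy - dvy * dx) / d2
    c = v4[0] - a * x4 - b * y4      # 平移分量由其中一个样本反解
    d = v4[1] + b * x4 - a * y4
    return a, b, c, d

def mv_at(coeffs, x, y):
    """用解出的系数计算坐标(x, y)处的运动矢量。"""
    a, b, c, d = coeffs
    return a * x + b * y + c, -b * x + a * y + d

# 示例：以左上(0,0)与右侧(16,0)两个样本求系数
coeffs = affine_coeffs((0, 0), (2.0, 3.0), (16, 0), (4.0, 1.0))
```

当两个样本取(0,0)与(w,0)时，该通用解法退化为公式1的形式，可据此互相验证。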
可选的,在本发明的一些可能的实施方式之中,在所述图像处理方法应用于图像跟踪中的情况下,还可以在计算得到所述当前图像块中任意像素样本的运动矢量之后,利用该任意像素样本在当前图像块中位置和该任意像素样本的运动矢量,确定该任意像素样本对应在该任意像素样本的运动矢量对应的帧中的对应位置。
进一步地,根据上述对应位置获得当前图像块在对应的帧中的对应图像块,将该对应图像块与当前图像块进行比较,计算两者之间的平方差和或绝对误差和,衡量两者之间的匹配误差,用于评估当前图像块的图像跟踪的准确度。
可选的，在本发明的一些可能的实施方式之中，在所述图像处理方法应用于图像预测中的情况下，还可以在计算得到所述当前图像块中任意像素样本的运动矢量之后，利用计算得到的所述当前图像块中任意像素样本的运动矢量，确定所述当前图像块中任意像素样本的像素点的预测像素值。其中，所述当前图像块中任意像素样本的运动矢量可以是所述当前图像块中的任意像素点的运动矢量，该过程可以为：利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值；所述当前图像块中任意像素样本的运动矢量也可以是所述当前图像块中的任意像素块的运动矢量，该过程可以为：利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
测试发现,若先利用仿射运动模型和所述合并运动信息单元集i计算得到 所述当前图像块中的各像素块的运动矢量,而后再利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值,由于计算运动矢量时以当前图像块中的像素块为粒度,这样有利于较大的降低计算复杂度。
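以像素块为粒度计算运动矢量、从而降低计算复杂度的做法可示意如下（block_mvs为示意性命名，此处假设以4×4像素块的中心坐标代入公式1，块内所有像素共用同一运动矢量）：

```python
def block_mvs(w, h, v0, v1, block=4):
    """以block×block像素块为粒度计算运动矢量：每个像素块用其
    中心坐标代入公式1求一个运动矢量，块内像素共用该矢量。
    v0、v1为左上(0,0)与右上(w,0)两个像素样本的运动矢量。"""
    mvs = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cx, cy = bx + block / 2, by + block / 2   # 像素块中心坐标
            vx = (v1[0] - v0[0]) / w * cx - (v1[1] - v0[1]) / w * cy + v0[0]
            vy = (v1[1] - v0[1]) / w * cx + (v1[0] - v0[0]) / w * cy + v0[1]
            mvs[(bx, by)] = (vx, vy)
    return mvs

# 16×16的图像块按4×4粒度只需计算16个运动矢量，而逐像素需计算256个
mvs16 = block_mvs(16, 16, (2.0, 3.0), (4.0, 1.0))
```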
可选的,在本发明的一些可能的实施方式中,在所述图像预测方法应用于视频编码过程中的情况下,所述方法还可包括:利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述当前图像块中的所述任意像素样本进行运动补偿预测编码。
其中,具体来说,该过程可以是:利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中的所述任意像素样本的像素点的预测像素值;利用所述任意像素样本的像素点的预测像素值,对所述任意像素样本进行运动补偿预测,从而得到所述任意像素样本的像素点的重建值;
该过程也可以是:利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中的所述任意像素样本的像素点的预测像素值;利用所述任意像素样本的像素点的预测像素值,对所述任意像素样本进行运动补偿预测,利用经过运动补偿预测得到的所述任意像素样本的像素点的像素值和所述任意像素样本的像素点的实际像素值,获得所述任意像素样本的预测残差,把所述任意像素样本的预测残差编码进码流。
或者，在获得所述任意像素样本的预测残差后，采用类似的方法获得求取当前图像块的预测残差所需要的其它像素样本的预测残差，从而获得当前图像块的预测残差，然后把当前图像块的预测残差编码进码流。其中，实际像素值也可称为原始像素值。
可选的,在本发明的一些可能的实施方式中,在所述图像预测方法应用于视频解码过程中的情况下,所述方法还包括:利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述任意像素样本进行运动补偿解码,得到所述任意像素样本的像素重建值。
其中,具体来说,该过程可以是:利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中的所述任意像素样本的像素点的 预测像素值;利用所述任意像素样本的像素点的预测像素值,对所述任意像素样本进行运动补偿预测,从而得到所述任意像素样本的像素点的重建值。
该过程也可以是：利用计算得到的所述当前图像块中任意像素样本的运动矢量，确定所述当前图像块中的所述任意像素样本的像素点的预测像素值；利用所述任意像素样本的像素点的预测像素值，对所述任意像素样本进行运动补偿预测，从码流中解码得到所述任意像素样本的预测残差，或是从码流中解码得到所述当前图像块的预测残差，从而获得所述任意像素样本的预测残差，并结合经过运动补偿预测得到的所述任意像素样本的像素点的像素值，得到所述任意像素样本的像素点的重建值。
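解码端由预测像素值与解码得到的预测残差相加得到重建值的过程可示意如下（reconstruct为示意性命名，此处假设逐像素相加并裁剪到位深对应的像素取值范围）：

```python
def reconstruct(pred, resid, bit_depth=8):
    """解码端重建：预测像素值逐像素加上预测残差，
    并裁剪到[0, 2^bit_depth - 1]的像素取值范围（示意实现）。
    pred、resid均为二维列表(按行组织的像素值)。"""
    lo, hi = 0, (1 << bit_depth) - 1
    return [[min(hi, max(lo, p + r)) for p, r in zip(pred_row, resid_row)]
            for pred_row, resid_row in zip(pred, resid)]
```

例如8比特位深下，预测值250加残差20会被裁剪到255。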
可以理解的是,对于当前视频帧中的每个图像块,均可以按照与当前图像块对应的图像处理方式相类似的方式进行图像处理,当然,当前视频帧中的某些图像块也可能按照与当前图像块对应的图像处理方式不同的方式进行图像处理。
本发明实施例提供的技术方案,仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在该技术方案引入了两个位移系数后,该技术方案可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
为便于更好的理解和实施本发明实施例的上述方案,下面结合更具体的应用场景进行进一步说明。
请参见图7,图7为本发明的另一个实施例提供的另一种图像处理方法的流程示意图。本实施例中主要以在视频编码装置中实施图像处理方法方法为例进行描述。其中,图7举例所示,本发明的另一个实施例提供的另一种图像处理方法可包括:
S701、视频编码装置确定当前图像块中的2个像素样本。
其中，所述2个像素样本可包括所述当前图像块的左上像素样本、右区域像素样本、下区域像素样本和右下区域像素样本中的2个像素样本。有关所述当前图像块的左上像素样本、右区域像素样本、下区域像素样本和右下区域像素样本的实质性内容可参考上述实施例中的具体说明，在此不再赘述。
S702、视频编码装置确定出所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集。
其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元。
其中,本发明各实施例中提及的像素样本可以是像素点或包括至少两个像素点的像素块。
可选的,在本发明一些可能的实施方式中,所述当前图像块的左上像素样本所对应的候选运动信息单元集以及所对应的候选运动信息单元集生成方法的具体内容可参考上述实施例中具体说明,在此不再赘述。
可选的,在本发明一些可能的实施方式中,所述当前图像块的右区域像素样本所对应的候选运动信息单元集包括x6个像素样本的运动信息单元,其中,所述x6个像素样本包括至少一个与所述当前图像块的右区域像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右区域像素样本时域相邻的像素样本,所述x6为正整数。
例如上述x6例如可等于1、2、3、4、5、6或其他值。
例如,所述x6个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右区域像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的下区域像素样本所对应的候选运动信息单元集包括x7个像素样本的运动信息单元,其中,所述x7个像素样本包括至少一个与所述当前图像块的下区域像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的下区域像素样本时域相邻的像素样本,所述x7为正整数。
例如上述x7例如可等于1、2、3、4、5、6或其他值。
例如,所述x7个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的下区域像素样本位置相同的像素样本、所述 当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
可选的,在本发明一些可能的实施方式中,所述当前图像块的右下区域像素样本所对应的候选运动信息单元集包括x8个像素样本的运动信息单元,其中,所述x8个像素样本包括至少一个与所述当前图像块的右下区域像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右下区域像素样本时域相邻的像素样本,所述x8为正整数。
例如上述x8例如可等于1、2、3、4、5、6或其他值。
例如,所述x8个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右下区域像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
其中所述右下区域像素样本包括的右下像素样本对应的候选运动信息单元包括至少一个与所述当前图像块的右下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右下像素样本时域相邻的像素样本,例如可以包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右下像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
其中,上述像素样本所包括的左下像素样本、右上像素样本、中心像素样本a1的所对应的候选运动信息单元集以及所对应的候选运动信息单元集的生成方法可参考上述实施例中的具体说明,在此不再赘述。
类似的,上述右区域像素样本、下区域像素样本、右下区域像素样本、右下区域像素样本包括的右下像素样本所对应的候选运动信息单元集的生成方法可参考左下像素样本、右上像素样本或中心像素样本a1的所对应的候选运动信息单元集的生成方法,在此不再赘述。
S703、视频编码装置基于所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集确定N个候选合并运动信息单元集。
其中,S703的具体内容可参考上述实施例中S203中的具体说明,在此不再赘述。
S704、视频编码装置从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i。
可选的,在本发明一些可能的实施方式中,视频编码装置还可将所述合并运动信息单元集i的标识写入视频码流。相应的,视频解码装置基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。所述合并运动信息单元集i的标识可以是任何能够标识出所述合并运动信息单元集i的信息,例如所述合并运动信息单元集i的标识可为合并运动信息单元集i在合并运动信息单元集列表中的索引号等。
另外,S704的具体内容可参考上述实施例中S204中的具体说明,在此不再赘述。
S705、视频编码装置利用所述合并运动信息单元集i获得运动矢量2元组。
可选的，在本发明一些可能的实施方式中，视频编码装置可以以当前图像块的合并运动信息单元集i的2个运动矢量为运动矢量预测值，作为搜索运动矢量2元组中2个运动矢量的起始值，进行简化仿射运动搜索。该搜索过程简单描述如下：以运动矢量预测值为起始值进行迭代更新，当迭代更新次数达到规定的次数，或者根据迭代更新得到的2个运动矢量得到的当前图像块的预测值和当前块的原始值之间的匹配误差小于规定的阈值时，即可得到包括所述迭代更新得到的2个运动矢量的运动矢量2元组。
可选的，在本发明一些可能的实施方式中，视频编码装置还可以利用当前图像块的合并运动信息单元集i的2个运动矢量以及运动矢量2元组中的2个运动矢量，得到2个像素样本各自运动矢量的预测差值，即运动矢量2元组中与合并运动信息单元集i的每个运动矢量相对应的运动矢量与该运动矢量之间的差值，并编码2个像素样本各自运动矢量的预测差值。
S706、视频编码装置利用仿射运动模型和所述运动矢量2元组,计算得到 所述当前图像块中任意像素样本的运动矢量。
其中,所述计算得到的当前图像块中任意像素样本的运动矢量可以是上述实施例中所述当前图像块中的各像素点的运动矢量,所述当前图像块中的各像素块的运动矢量,以及所述当前图像块中的任意像素样本的运动矢量任一项,上述实施例中与所述当前图像块中的各像素点的运动矢量,所述当前图像块中的各像素块的运动矢量,以及所述当前图像块中的任意像素样本的运动矢量有关的具体内容可参考上述实施例中的具体说明,在此不再赘述。
其中,所述仿射运动模型可为如下形式:
vx = a × x + b × y
vy = -b × x + a × y
其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数。
可选的,在本发明的一些可能的实施方式之中,所述仿射运动模型还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
vx = a × x + b × y + c
vy = -b × x + a × y + d
可选的,在本发明的一些可能的实施方式之中,所述仿射运动模型的水平分量的水平坐标系数和所述仿射运动模型的水平分量的竖直坐标系数的平方和不等于1。或者,在本发明的一些可能的实施方式之中,所述仿射运动模型的竖直分量的竖直坐标系数和所述仿射运动模型的竖直分量的水平坐标系数的平方和不等于1。
可选的,在本发明的一些可能的实施方式之中,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量, 可包括:利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明的一些可能的实施方式之中,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量,可包括:利用所述2个像素样本各自的运动矢量的水平分量之间的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
或者,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量,可包括:利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右侧的右区域像素样本时,所述仿射运动模型具体为:
vx = (vx1 - vx0)/w × x - (vy1 - vy0)/w × y + vx0
vy = (vy1 - vy0)/w × x + (vx1 - vx0)/w × y + vy0

其中，(vx0,vy0)为所述左上像素样本的运动矢量，(vx1,vy1)为所述右区域像素样本的运动矢量，w为所述2个像素样本之间的距离。w也可以为所述右区域像素样本的水平坐标与所述左上像素样本的水平坐标之间的差值。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本下方的下区域像素样本时,所述仿射运动模型具体为:
vx = (vy2 - vy0)/h × x + (vx2 - vx0)/h × y + vx0
vy = -(vx2 - vx0)/h × x + (vy2 - vy0)/h × y + vy0

其中，(vx0,vy0)为所述左上像素样本的运动矢量，(vx2,vy2)为所述下区域像素样本的运动矢量，h为所述2个像素样本之间的距离。h也可以为所述下区域像素样本的竖直坐标与所述左上像素样本的竖直坐标之间的差值。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右下方的右下区域像素样本时,所述仿射运动模型具体为:
vx = ((vx3 - vx0)×w1 + (vy3 - vy0)×h1)/(w1² + h1²) × x + ((vx3 - vx0)×h1 - (vy3 - vy0)×w1)/(w1² + h1²) × y + vx0
vy = -((vx3 - vx0)×h1 - (vy3 - vy0)×w1)/(w1² + h1²) × x + ((vx3 - vx0)×w1 + (vy3 - vy0)×h1)/(w1² + h1²) × y + vy0

其中，(vx0,vy0)为所述左上像素样本的运动矢量，(vx3,vy3)为所述右下区域像素样本的运动矢量，h1为所述2个像素样本之间的竖直方向距离，w1为所述2个像素样本之间的水平方向距离，w1²+h1²为所述2个像素样本之间的距离的平方。
可选的,在本发明的一些可能的实施方式之中,在所述2个像素样本为所述当前图像块所属的视频帧中的任意的2个像素样本时,所述仿射运动模型具体为:
vx = (((vx5 - vx4)×(x5 - x4) + (vy5 - vy4)×(y5 - y4))×(x - x4) + ((vx5 - vx4)×(y5 - y4) - (vy5 - vy4)×(x5 - x4))×(y - y4))/((x5 - x4)² + (y5 - y4)²) + vx4

vy = (-((vx5 - vx4)×(y5 - y4) - (vy5 - vy4)×(x5 - x4))×(x - x4) + ((vx5 - vx4)×(x5 - x4) + (vy5 - vy4)×(y5 - y4))×(y - y4))/((x5 - x4)² + (y5 - y4)²) + vy4
其中,(x4,y4)为所述2个像素样本中其中一个像素样本的坐标,(vx4,vy4)为坐标为(x4,y4)的所述其中一个像素样本的运动矢量,(x5,y5)为所述2个像素样本中另一个像素样本的坐标,(vx5,vy5)为坐标为(x5,y5)的所述另一个像素样本的运动矢量。
可以理解的是,对于当前视频帧中的每个图像块,均可以按照与当前图像块对应的图像处理方式相类似的方式进行图像处理,当然,当前视频帧中的某些图像块也可能按照与当前图像块对应的图像处理方式不同的方式进行图像处理。
S707、视频编码装置利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中任意像素样本的像素点的预测像素值。
其中，在预测过程中，所述当前图像块中不同的任意像素样本的运动矢量所对应的参考帧索引可以相同，且可以为合并运动信息单元集i的运动矢量所对应的参考帧索引。
可选的,在本发明的一些可能的实施方式之中,所述当前图像块中任意像素样本的运动矢量可以是所述当前图像块中的任意像素点的运动矢量,该过程可以为:利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值。
可选的，在本发明的一些可能的实施方式之中，所述当前图像块中任意像素样本的运动矢量也可以是所述当前图像块中的任意像素块的运动矢量，该过程可以为：利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
测试发现,若先利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,而后再利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值,由于计算运动矢量时以当前图像块中的像素块为粒度,这样有利于较大的降低计算复杂度。
可选的,在本发明的一些可能的实施方式中,在所述图像处理方法应用于视频编码过程中的情况下,所述方法还可包括:利用计算得到的所述当前图像 块中任意像素样本的运动矢量,对所述当前图像块中的所述任意像素样本进行运动补偿预测编码。
其中,具体来说,该过程可以是:利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中的所述任意像素样本的像素点的预测像素值;利用所述任意像素样本的像素点的预测像素值,对所述任意像素样本进行运动补偿预测,从而得到所述任意像素样本的像素点的重建值;或者,利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中的所述任意像素样本的像素点的预测像素值;利用所述任意像素样本的像素点的预测像素值,对所述任意像素样本进行运动补偿预测,利用经过运动补偿预测得到的所述任意像素样本的像素点的像素值和所述任意像素样本的像素点的实际像素值,获得所述任意像素样本的预测残差,把所述预测残差编码进码流,其中,实际像素值也可称为原始像素值。
可以理解的是,对于当前视频帧中的每个图像块,均可以按照与当前图像块对应的图像处理方式相类似的方式进行图像处理,当然,当前视频帧中的某些图像块也可能按照与当前图像块对应的图像处理方式不同的方式进行图像处理。
本发明实施例提供的技术方案,仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在该技术方案引入了两个位移系数后,该技术方案可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
请参见图8,图8为本发明的另一个实施例提供的另一种图像处理方法的流程示意图。本实施例中主要以在视频解码装置中实施图像处理方法为例进行描述。其中,图8举例所示,本发明的另一个实施例提供的另一种图像处理方法可包括:
S801、视频解码装置确定当前图像块中的2个像素样本。
其中,所述2个像素样本包括所述当前图像块的左上像素样本、右区域像素样本、下区域像素样本和右下区域像素样本中的2个像素样本。有关所述当 前图像块的左上像素样本、右区域像素样本、下区域像素样本和右下区域像素样本中的实质性内容可参考上述实施例中具体说明,在此不再赘述。
S802、视频解码装置确定出所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集。
其中,在S802中,视频解码装置确定出所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集的具体过程可参见上述S702中视频编码装置确定出所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集的具体过程,在此不再赘述。
S803、视频解码装置基于所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集确定N个候选合并运动信息单元集。
其中,在S803中,视频解码装置基于所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集确定N个候选合并运动信息单元集的具体过程可参见上述S703中视频编码装置基于所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集确定N个候选合并运动信息单元集的具体过程,在此不再赘述。
S804、视频解码装置对视频码流进行解码处理以得到合并运动信息单元集i的标识和当前图像块的预测残差,基于合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i。
相应的,视频编码装置可将所述合并运动信息单元集i的标识写入到视频码流。
S805、视频解码装置利用所述合并运动信息单元集i获得运动矢量2元组。
可选的,在本发明一些可能的实施方式中,视频解码装置可以所述当前图像块的合并运动信息单元集i中的每个运动信息单元的运动矢量为运动矢量预测值,并从码流中解码得到当前图像块2个像素样本各自运动矢量的预测差值,把运动矢量预测值中的每个运动矢量与其对应的运动矢量预测差值相加,从而获得包括当前图像块2个像素样本各自的运动矢量的运动矢量2元组。
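解码端由运动矢量预测值与解码得到的预测差值重建运动矢量2元组的过程,可用如下 Python 片段示意(片段为示例自拟,运动矢量以 (水平分量, 竖直分量) 二元组表示):

```python
# 示意性片段:运动矢量 = 运动矢量预测值 + 从码流解码得到的运动矢量预测差值,
# 对 2 个像素样本分别执行,得到运动矢量 2 元组。

def rebuild_mv_tuple(predictors, mvds):
    return tuple((px + dx, py + dy)
                 for (px, py), (dx, dy) in zip(predictors, mvds))
```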
S806、视频解码装置利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量。
其中,在S806中,视频解码装置利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量的具体过程可参见上述S706中视频编码装置利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量的具体过程,在此不再赘述。
S807、视频解码装置利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中任意像素样本的像素点的预测像素值。
其中,在预测过程中,所述当前图像块中不同像素样本的运动矢量所对应的参考帧索引可以相同,且可以为合并运动信息单元集i中的运动矢量所对应的参考帧索引。
其中,在S807中,视频解码装置利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中任意像素样本的像素点的预测像素值的具体过程可参见上述S707中视频编码装置利用计算得到的所述当前图像块中任意像素样本的运动矢量,确定所述当前图像块中任意像素样本的像素点的预测像素值的具体过程,在此不再赘述。
S808、视频解码装置利用当前图像块中任意像素样本的预测像素值和从码流中解码得到的所述任意像素样本的预测残差,对所述任意像素样本进行重建。
其中,具体来说,该过程可以是:利用所述任意像素样本的像素点的预测像素值,对所述任意像素样本进行运动补偿预测,从而得到所述任意像素样本的像素点的重建值;或者,利用所述任意像素样本的像素点的预测像素值,对所述任意像素样本进行运动补偿预测,从码流中解码得到所述任意像素样本的预测残差,并结合经过运动补偿预测得到的所述任意像素样本的像素点的像素值,得到所述任意像素样本的像素点的重建值。
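重建这一步可用如下 Python 片段作一个极简示意(片段为示例自拟,8 bit 位深、像素按列表组织均为示例假设):

```python
# 示意性片段:重建值 = 预测像素值 + 预测残差,并裁剪到合法像素范围。
# 位深以 8 bit(最大值 255)为示例取值。

def reconstruct(predicted, residual, max_val=255):
    return [min(max(p + r, 0), max_val) for p, r in zip(predicted, residual)]
```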
可以理解的是,对于当前视频帧中的每个图像块,均可以按照与当前图像块对应的图像处理方式相类似的方式进行图像处理,当然,当前视频帧中的某些图像块也可能按照与当前图像块对应的图像处理方式不同的方式进行图像处理。
本发明实施例提供的技术方案,仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在该技术方案引入了两个位移系数后,该技术方案可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
下面还提供用于实施上述方案的相关装置。
参见图9,本发明实施例还提供一种图像处理装置900,可包括:
获得单元910,用于获得当前图像块的运动矢量2元组,所述运动矢量2元组包括所述当前图像块所属的视频帧中的2个像素样本各自的运动矢量。
计算单元920,用于利用仿射运动模型和所述获得单元910获得的运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量。
其中,所述仿射运动模型可为如下形式:
vx = ax + by
vy = -bx + ay
其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
vx = ax + by + c
vy = -bx + ay + d
可选的,在本发明一些可能的实施方式中,所述计算单元920可具体用于:利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,所述计算单元920可具体用于:利用所述2个像素样本各自的运动矢量的水平分量之间的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,所述计算单元920可具体用于:利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右侧的右区域像素样本时,所述仿射运动模型具体为:
vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx1,vy1)为所述右区域像素样本的运动矢量,w为所述2个像素样本之间的距离。
可选的,在本发明一些可能的实施方式中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本下方的下区域像素样本时,所述仿射运动模型具体为:
vx = (vy2 - vy0)x/h + (vx2 - vx0)y/h + vx0
vy = -(vx2 - vx0)x/h + (vy2 - vy0)y/h + vy0
其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx2,vy2)为所述下区域像素样本的运动矢量,h为所述2个像素样本之间的距离。
可选的,在本发明一些可能的实施方式中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右下方的右下区域像素样本时,所述仿射运动模型具体为:
vx = ((vx3 - vx0)w1 + (vy3 - vy0)h1)x/(w1² + h1²) + ((vx3 - vx0)h1 - (vy3 - vy0)w1)y/(w1² + h1²) + vx0
vy = -((vx3 - vx0)h1 - (vy3 - vy0)w1)x/(w1² + h1²) + ((vx3 - vx0)w1 + (vy3 - vy0)h1)y/(w1² + h1²) + vy0
其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx3,vy3)为所述右下区域像素样本的运动矢量,h1为所述2个像素样本之间的竖直方向距离,w1为所述2个像素样本之间的水平方向距离,w1²+h1²为所述2个像素样本之间的距离的平方。
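对于左上像素样本与右下区域像素样本的情形,可用如下 Python 片段示意运动矢量的计算(片段为示例自拟,假设左上像素样本位于坐标原点 (0, 0)):

```python
# 示意性片段:左上像素样本位于 (0, 0),右下区域像素样本位于 (w1, h1)。
# 系数由两个样本运动矢量分量的加权和除以 w1*w1 + h1*h1(距离的平方)得到。

def mv_bottom_right(x, y, v0, v3, w1, h1):
    vx0, vy0 = v0                        # 左上像素样本的运动矢量
    vx3, vy3 = v3                        # 右下区域像素样本的运动矢量
    d2 = w1 * w1 + h1 * h1
    a = ((vx3 - vx0) * w1 + (vy3 - vy0) * h1) / d2
    b = ((vx3 - vx0) * h1 - (vy3 - vy0) * w1) / d2
    return a * x + b * y + vx0, -b * x + a * y + vy0
```

在 (0, 0) 与 (w1, h1) 处求值,应分别还原出两个样本各自的运动矢量,可作为自检手段。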
可选的,在本发明一些可能的实施方式中,所述图像处理装置900应用于视频编码装置中或所述图像处理装置900应用于视频解码装置中。
可选的,在本发明一些可能的实施方式中,在当所述图像处理装置900应用于视频编码装置中的情况下,所述装置还包括编码单元,用于利用所述计算单元920计算得到的所述当前图像块中任意像素样本的运动矢量,对所述当前图像块中的所述任意像素样本进行运动补偿预测编码。
可选的,在本发明一些可能的实施方式中,在当所述图像处理装置900应用于视频解码装置中的情况下,所述装置还包括解码单元,用于利用所述计算单元920计算得到的所述当前图像块中任意像素样本的运动矢量,对所述任意像素样本进行运动补偿解码,得到所述任意像素样本的像素重建值。
需要说明的是,本实施例的图像处理装置900还可包括图像预测装置400中的各功能单元,本实施例的图像处理装置900中的获得单元910以及计算单元920可应用于预测单元430,从而实现预测单元430所需要的功能,关于图像预测装置400中的各功能单元的具体说明可参考上述实施例中的具体说明,在此不再赘述。
可以理解的是,本实施例的图像处理装置900的各功能单元的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。图像处理装置900可为任何需要输出、播放视频的装置,如笔记本电脑、平板电脑、个人电脑、手机等设备。
本发明实施例提供的技术方案,图像处理装置900仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在图像处理装置900引入了两个位移系数后,图像处理装置900可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
参见图10,图10为本发明实施例提供的图像处理装置1000的示意图,图像处理装置1000可包括至少一个总线1001、与总线1001相连的至少一个处理器1002以及与总线1001相连的至少一个存储器1003。
其中,处理器1002通过总线1001调用存储器1003中存储的代码或者指令以用于,获得当前图像块的运动矢量2元组,所述运动矢量2元组包括所述当前图像块所属的视频帧中的2个像素样本各自的运动矢量;利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型可为如下形式:
vx = ax + by
vy = -bx + ay
其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
vx = ax + by + c
vy = -bx + ay + d
可选的,在本发明一些可能的实施方式中,在所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量方面,所述处理器1002可用于,利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,在利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量方面,所述处理器1002可用于,利用所述2个像素样本各自的运动矢量的水平分量之间的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,在利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量方面,所述处理器1002可用于,利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右侧的右区域像素样本时,所述仿射运动模型可具体为:
vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx1,vy1)为所述右区域像素样本的运动矢量,w为所述2个像素样本之间的距离。
可选的,在本发明一些可能的实施方式中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本下方的下区域像素样本时,所述仿射运动模型可具体为:
vx = (vy2 - vy0)x/h + (vx2 - vx0)y/h + vx0
vy = -(vx2 - vx0)x/h + (vy2 - vy0)y/h + vy0
其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx2,vy2)为所述下区域像素样本的运动矢量,h为所述2个像素样本之间的距离。
可选的,在本发明一些可能的实施方式中,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右下方的右下区域像素样本时,所述仿射运动模型可具体为:
vx = ((vx3 - vx0)w1 + (vy3 - vy0)h1)x/(w1² + h1²) + ((vx3 - vx0)h1 - (vy3 - vy0)w1)y/(w1² + h1²) + vx0
vy = -((vx3 - vx0)h1 - (vy3 - vy0)w1)x/(w1² + h1²) + ((vx3 - vx0)w1 + (vy3 - vy0)h1)y/(w1² + h1²) + vy0
其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx3,vy3)为所述右下区域像素样本的运动矢量,h1为所述2个像素样本之间的竖直方向距离,w1为所述2个像素样本之间的水平方向距离,w1²+h1²为所述2个像素样本之间的距离的平方。
可选的,在本发明一些可能的实施方式中,所述图像处理装置1000应用于视频编码装置中或所述图像处理装置1000应用于视频解码装置中。
可选的,在本发明一些可能的实施方式中,在当所述图像处理装置应用于视频编码装置中的情况下,所述处理器1002还用于,在所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量之后,利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述当前图像块中的所述任意像素样本进行运动补偿预测编码。
可选的,在本发明一些可能的实施方式中,在当所述图像处理装置应用于视频解码装置中的情况下,所述处理器1002还用于,在所述确定所述当前图像块中的所述任意像素样本的像素点的预测像素值之后,利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述任意像素样本进行运动补偿解码,得到所述任意像素样本的像素重建值。
可以理解的是,本实施例的图像处理装置1000的各功能模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。图像处理装置1000可为任何需要输出、播放视频的装置,如笔记本电脑,平板电脑、个人电脑、手机等设备。
本发明实施例提供的技术方案,图像处理装置1000仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在图像处理装置1000引入了两个位移系数后,图像处理装置1000可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
本发明实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任意一种图像预测方法的部分或全部步骤。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
请参见图11,图11为本发明的一个实施例提供的另一种图像处理方法的流程示意图。其中,图11举例所示,本发明的另一个实施例提供的另一种图像处理方法可包括:
S1101、获得仿射运动模型的系数,利用所述仿射运动模型的系数以及所述仿射运动模型,计算得到所述当前图像块中任意像素样本的运动矢量。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型可为如下形式:
vx = ax + by
vy = -bx + ay
其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数,所述仿射运动模型的系数可包括a和b。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型的系数还可包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
vx = ax + by + c
vy = -bx + ay + d
S1102、利用计算得到的所述任意像素样本的运动矢量,确定所述任意像素样本的像素点的预测像素值。
本实施例中的详细描述,可以参见上述实施例的相关描述。
本发明实施例提供的技术方案,仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在该技术方案引入了两个位移系数后,该技术方案可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
参见图12,本发明实施例还提供一种图像处理装置1200,可包括:
获得单元1210,用于获得仿射运动模型的系数。
计算单元1220,用于利用所述获得单元1210获得的仿射运动模型的系数以及所述仿射运动模型,计算得到所述当前图像块中任意像素样本的运动矢量。
预测单元1230,用于利用所述计算单元1220计算得到的所述任意像素样本的运动矢量,确定所述任意像素样本的像素点的预测像素值。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型可为如下形式:
vx = ax + by
vy = -bx + ay
其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数,所述仿射运动模型的系数可包括a和b。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型的系数还可包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
vx = ax + by + c
vy = -bx + ay + d
本实施例中的详细描述,可以参见上述实施例的相关描述。
可以理解的是,本实施例的图像处理装置1200的各功能模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。图像处理装置1200可为任何需要输出、播放视频的装置,如笔记本电脑,平板电脑、个人电脑、手机等设备。
本发明实施例提供的技术方案,图像处理装置1200仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在图像处理装置1200引入了两个位移系数后,图像处理装置1200可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
参见图13,图13为本发明实施例提供的图像处理装置1300的示意图,图像处理装置1300可包括至少一个总线1301、与总线1301相连的至少一个处理器1302以及与总线1301相连的至少一个存储器1303。
其中,处理器1302通过总线1301调用存储器1303中存储的代码或者指令以用于,获得仿射运动模型的系数,利用所述仿射运动模型的系数以及所述仿射运动模型,计算得到所述当前图像块中任意像素样本的运动矢量。利用计算得到的所述任意像素样本的运动矢量,确定所述任意像素样本的像素点的预测像素值。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型可为如下形式:
vx = ax + by
vy = -bx + ay
其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数,所述仿射运动模型的系数可包括a和b。
可选的,在本发明一些可能的实施方式中,所述仿射运动模型的系数还可包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
vx = ax + by + c
vy = -bx + ay + d
本实施例中的详细描述,可以参见上述实施例的相关描述。
可以理解的是,本实施例的图像处理装置1300的各功能模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。图像处理装置1300可为任何需要输出、播放视频的装置,如笔记本电脑,平板电脑、个人电脑、手机等设备。
本发明实施例提供的技术方案,图像处理装置1300仅仅通过两个参数构建了基于旋转和缩放运动的仿射运动模型,不仅降低了计算的复杂度,而且提高了对运动矢量进行估计的精确度。在图像处理装置1300引入了两个位移系数后,图像处理装置1300可以基于旋转、缩放以及平动的混合运动对运动矢量进行估计,使得对运动矢量的估计更加精确。
本发明实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任意一种图像预测方法的部分或全部步骤。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可能可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以为个人计算机、服务器或者网络设备等,具体可以是计算机设备中的处理器)执行本发明各个实施例上述方法的全部或部分步骤。其中,前述的存储介质可包括:U盘、移动硬盘、磁碟、光盘、只读存储器(ROM,Read-Only Memory)或者随机存取存储器(RAM,Random Access Memory)等各种可以存储程序代码的介质。
以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (86)

  1. 一种图像预测方法,其特征在于,包括:
    确定当前图像块中的2个像素样本,确定所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集;其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元;
    确定包括2个运动信息单元的合并运动信息单元集i;
    其中,所述合并运动信息单元集i中的每个运动信息单元分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,所述运动信息单元包括预测方向为前向的运动矢量和/或预测方向为后向的运动矢量;
    利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测。
  2. 根据权利要求1所述的方法,其特征在于,所述确定包括2个运动信息单元的合并运动信息单元集i,包括:
    从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
  3. 根据权利要求2所述的方法,其特征在于,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
    其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
    所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选 合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
    所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
    所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量水平分量的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
    所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量的差值的绝对值小于或等于竖直分量阈值,或者,所述N个候选合并运动信息单元集中的其中一个候选合并运动信息单元集中的任意1个运动信息单元和像素样本Z的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
    其中,所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块;所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块;所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块;所述当前图像块的中心像素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
  5. 根据权利要求4所述的方法,其特征在于,
    所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当前 图像块的左上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左上像素样本时域相邻的像素样本,所述x1为正整数;
    其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
  6. 根据权利要求4至5任一项所述的方法,其特征在于,
    所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数;
    其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
  7. 根据权利要求4至6任一项所述的方法,其特征在于,
    所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
    其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
  8. 根据权利要求4至7任一项所述的方法,其特征在于,
    所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5个像素样本的运动信息单元,其中,所述x5个像素样本中的其中一个像素样本为像素样本a2,
    其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述x5为正整数。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,
    所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测包括:当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,其中,所述第一预测方向为前向或后向;
    或者,
    所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测包括:当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
  10. 根据权利要求1至9任一项所述的方法,其特征在于,
    所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测,包括:
    利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值;
    或者,
    利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
  11. 根据权利要求1至10任一项所述的方法,其特征在于,
    所述利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测,包括:利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
  12. 根据权利要求11所述的方法,其特征在于,所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
  13. 根据权利要求1至12任一项所述的方法,其特征在于,
    所述仿射运动模型为如下形式的仿射运动模型:
    vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
    vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
    其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述w为所述当前图像块的长或宽。
  14. 根据权利要求1至13任一项所述的方法,其特征在于,
    所述图像预测方法应用于视频编码过程中或所述图像预测方法应用于视频解码过程中。
  15. 根据权利要求14所述的方法,其特征在于,在所述图像预测方法应用 于视频解码过程中的情况下,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i,包括:基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。
  16. 根据权利要求14或15所述的方法,其特征在于,在所述图像预测方法应用于视频解码过程中的情况下,所述方法还包括:从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
  17. 根据权利要求14所述的方法,其特征在于,在所述图像预测方法应用于视频编码过程中的情况下,所述方法还包括:利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
  18. 根据权利要求14或17所述的方法,其特征在于,在所述图像预测方法应用于视频编码过程中的情况下,所述方法还包括:将所述合并运动信息单元集i的标识写入视频码流。
  19. 一种图像预测装置,其特征在于,包括:
    第一确定单元,用于确定当前图像块中的2个像素样本,确定所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集;其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元;
    第二确定单元,用于确定包括2个运动信息单元的合并运动信息单元集i;
    其中,所述合并运动信息单元集i中的每个运动信息单元分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,所述运动信息单元包括预测方向为前向的运动矢量和/或预测方向为后向的运动矢量;
    预测单元,用于利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测。
  20. 根据权利要求19所述的装置,其特征在于,
    所述第二确定单元具体用于,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
  21. 根据权利要求20所述的装置,其特征在于,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
    其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
    所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
    所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
    所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
    所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值,或者,所述N个候选合并运动信息单元集中的其中一个候选合并运动信息单元集中的任意1个运动信息单元和像素样本Z的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
  22. 根据权利要求19至21任一项所述的装置,其特征在于,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
    其中,所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块;所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块;所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块;所述当前图像块的中心像素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
  23. 根据权利要求22所述的装置,其特征在于,
    所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当前图像块的左上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左上像素样本时域相邻的像素样本,所述x1为正整数;
    其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
  24. 根据权利要求22至23任一项所述的装置,其特征在于,
    所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数;
    其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的 视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
  25. 根据权利要求22至24任一项所述的装置,其特征在于,
    所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
    其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
  26. 根据权利要求22至25任一项所述的装置,其特征在于,
    所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5个像素样本的运动信息单元,其中,所述x5个像素样本中的其中一个像素样本为像素样本a2,
    其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述x5为正整数。
  27. 根据权利要求19至26任一项所述的装置,其特征在于,所述预测单元具体用于,当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,其中,所述第一预测方向为前向或后向;
    或者,所述预测单元具体用于,当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧 索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
  28. 根据权利要求19至27任一项所述的装置,其特征在于,
    所述预测单元具体用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值;
    或者,
    所述预测单元具体用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
  29. 根据权利要求19至28任一项所述的装置,其特征在于,
    所述预测单元具体用于,利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
  30. 根据权利要求29所述的装置,其特征在于,所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
  31. 根据权利要求19至30任一项所述的装置,其特征在于,
    所述仿射运动模型为如下形式的仿射运动模型:
    vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
    vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
    其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述w为所述当前图像块的长或宽。
  32. 根据权利要求19至31任一项所述的装置,其特征在于,
    所述图像预测装置应用于视频编码装置中或所述图像预测装置应用于视频解码装置中。
  33. 根据权利要求32所述的装置,其特征在于,在当所述图像预测装置应用于视频解码装置中的情况下,所述第二确定单元具体用于,基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。
  34. 根据权利要求32或33所述的装置,其特征在于,在当所述图像预测装置应用于视频解码装置中的情况下,
    所述装置还包括解码单元,用于从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
  35. 根据权利要求32所述的装置,其特征在于,在当所述图像预测装置应用于视频编码装置中的情况下,所述预测单元还用于:利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
  36. 根据权利要求32或35所述的装置,其特征在于,在当所述图像预测装置应用于视频编码装置中的情况下,所述装置还包括编码单元,用于将所述合并运动信息单元集i的标识写入视频码流。
  37. 一种图像预测装置,其特征在于,包括:
    处理器和存储器;
    其中,所述处理器通过调用所述存储器中存储的代码或指令以用于,确定当前图像块中的2个像素样本,确定所述2个像素样本之中的每个像素样本所对应的候选运动信息单元集;其中,所述每个像素样本所对应的候选运动信息单元集包括候选的至少一个运动信息单元;确定包括2个运动信息单元的合并运动信息单元集i;其中,所述合并运动信息单元集i中的每个运动信息单元分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的至少部分运动信息单元,其中,所述运动信息单元包括预测方向为前向的运动矢量和/或预测方向为后向的运动矢量;利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测。
  38. 根据权利要求37所述的装置,其特征在于,
    在确定包括2个运动信息单元的合并运动信息单元集i的方面,所述处理器用于,从N个候选合并运动信息单元集之中确定出包含2个运动信息单元的合并运动信息单元集i;其中,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集所包含的每个运动信息单元,分别选自所述2个像素样本中的每个像素样本所对应的候选运动信息单元集中的符合约束条件的至少部分运动信息单元,其中,所述N为正整数,所述N个候选合并运动信息单元集互不相同,所述N个候选合并运动信息单元集中的每个候选合并运动信息单元集包括2个运动信息单元。
  39. 根据权利要求38所述的装置,其特征在于,所述N个候选合并运动信息单元集满足第一条件、第二条件、第三条件、第四条件和第五条件之中的至少一个条件,
    其中,所述第一条件包括所述N个候选合并运动信息单元集中的任意一个 候选合并运动信息单元集中的运动信息单元所指示出的所述当前图像块的运动方式为非平动运动;
    所述第二条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的预测方向相同;
    所述第三条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元对应的参考帧索引相同;
    所述第四条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,或者,所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的其中1个运动信息单元和像素样本Z的运动矢量水平分量之间的差值的绝对值小于或等于水平分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本;
    所述第五条件包括所述N个候选合并运动信息单元集中的任意一个候选合并运动信息单元集中的2个运动信息单元的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值,或者,所述N个候选合并运动信息单元集中的其中一个候选合并运动信息单元集中的任意1个运动信息单元和像素样本Z的运动矢量竖直分量之间的差值的绝对值小于或等于竖直分量阈值,所述当前图像块的所述像素样本Z不同于所述2个像素样本中的任意一个像素样本。
  40. 根据权利要求37至39任一项所述的装置,其特征在于,所述2个像素样本包括所述当前图像块的左上像素样本、右上像素样本、左下像素样本和中心像素样本a1中的其中2个像素样本;
    其中,所述当前图像块的左上像素样本为所述当前图像块的左上顶点或所述当前图像块中的包含所述当前图像块的左上顶点的像素块;所述当前图像块的左下像素样本为所述当前图像块的左下顶点或所述当前图像块中的包含所述当前图像块的左下顶点的像素块;所述当前图像块的右上像素样本为所述当前图像块的右上顶点或所述当前图像块中的包含所述当前图像块的右上顶点的像素块;所述当前图像块的中心像素样本a1为所述当前图像块的中心像素点或所述当前图像块中的包含所述当前图像块的中心像素点的像素块。
  41. 根据权利要求40所述的装置,其特征在于,
    所述当前图像块的左上像素样本所对应的候选运动信息单元集包括x1个像素样本的运动信息单元,其中,所述x1个像素样本包括至少一个与所述当前图像块的左上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左上像素样本时域相邻的像素样本,所述x1为正整数;
    其中,所述x1个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左上像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
  42. 根据权利要求40至41任一项所述的装置,其特征在于,
    所述当前图像块的右上像素样本所对应的候选运动信息单元集包括x2个像素样本的运动信息单元,其中,所述x2个像素样本包括至少一个与所述当前图像块的右上像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的右上像素样本时域相邻的像素样本,所述x2为正整数;
    其中,所述x2个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的右上像素样本位置相同的像素样本、所述当前图像块的右边的空域相邻像素样本、所述当前图像块的右上的空域相邻像素样本和所述当前图像块的上边的空域相邻像素样本中的至少一个。
  43. 根据权利要求40至42任一项所述的装置,其特征在于,
    所述当前图像块的左下像素样本所对应的候选运动信息单元集包括x3个像素样本的运动信息单元,其中,所述x3个像素样本包括至少一个与所述当前图像块的左下像素样本空域相邻的像素样本和/或至少一个与所述当前图像块的左下像素样本时域相邻的像素样本,所述x3为正整数;
    其中,所述x3个像素样本包括与所述当前图像块所属的视频帧时域相邻的视频帧之中的与所述当前图像块的左下像素样本位置相同的像素样本、所述当前图像块的左边的空域相邻像素样本、所述当前图像块的左下的空域相邻像素样本和所述当前图像块的下边的空域相邻像素样本中的至少一个。
  44. 根据权利要求40至43任一项所述的装置,其特征在于,
    所述当前图像块的中心像素样本a1所对应的候选运动信息单元集包括x5个像素样本的运动信息单元,其中,所述x5个像素样本中的其中一个像素样本为像素样本a2,
    其中,所述中心像素样本a1在所述当前图像块所属视频帧中的位置,与所述像素样本a2在所述当前图像块所属视频帧的相邻视频帧中的位置相同,所述x5为正整数。
  45. 根据权利要求40至44任一项所述的装置,其特征在于,
    在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,当所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量对应的参考帧索引不同于所述当前图像块的参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为第一预测方向的运动矢量被缩放到所述当前图像块的参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测,其中,所述第一预测方向为前向或后向;
    或者,在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,当所述合并运动信息单元集i中的预测方向为前向的运动矢量对应的参考帧索引不同于所述当前图像块的前向参考帧索引,并且所述合并运动信息单元集i中的预测方向为后向的运动矢量对应的参考帧索引不同于所述当前图像块的后向参考帧索引的情况下,对所述合并运动信息单元集i进行缩放处理,以使得所述合并运动信息单元集i中的预测方向为前向的运动矢量被缩放到所述当前图像块的前向参考帧且使得所述合并运动信息单元集i中的预测方向为后向的运动矢量被缩放到所述当前图像块的后向参考帧,利用仿射运动模型和进行缩放处理后的合并运动信息单元集i对所述当前图像块进行像素值预测。
  46. 根据权利要求37至45任一项所述的装置,其特征在于,
    在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,利用仿射运动模型和所述合并运动信息 单元集i计算得到所述当前图像块中的各像素点的运动矢量,利用计算得到的所述当前图像块中的各像素点的运动矢量确定所述当前图像块中的各像素点的预测像素值;
    或者,
    在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,利用仿射运动模型和所述合并运动信息单元集i计算得到所述当前图像块中的各像素块的运动矢量,利用计算得到的所述当前图像块中的各像素块的运动矢量确定所述当前图像块中的各像素块的各像素点的预测像素值。
  47. 根据权利要求37至46任一项所述的装置,其特征在于,
    在利用仿射运动模型和所述合并运动信息单元集i对所述当前图像块进行像素值预测的方面,所述处理器用于,利用所述2个像素样本的运动矢量水平分量之间的差值与所述当前图像块的长或宽的比值,以及所述2个像素样本的运动矢量竖直分量之间的差值与所述当前图像块的长或宽的比值,得到所述当前图像块中的任意像素样本的运动矢量,其中,所述2个像素样本的运动矢量基于所述合并运动信息单元集i中的两个运动信息单元的运动矢量得到。
  48. 根据权利要求47所述的装置,其特征在于,
    所述2个像素样本的运动矢量水平分量的水平坐标系数和运动矢量竖直分量的竖直坐标系数相等,且所述2个像素样本的运动矢量水平分量的竖直坐标系数和运动矢量竖直分量的水平坐标系数相反。
  49. 根据权利要求37至48任一项所述的装置,其特征在于,
    所述仿射运动模型为如下形式的仿射运动模型:
    vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
    vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
    其中,所述2个像素样本的运动矢量分别为(vx0,vy0)和(vx1,vy1),所述vx为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量水平分量,所述vy为所述当前图像块中的坐标为(x,y)的像素样本的运动矢量竖直分量,所述 w为所述当前图像块的长或宽。
  50. 根据权利要求37至49任一项所述的装置,其特征在于,
    所述图像预测装置应用于视频编码装置中或所述图像预测装置应用于视频解码装置中。
  51. 根据权利要求50所述的装置,其特征在于,在当所述图像预测装置应用于视频解码装置中的情况下,在确定包括2个运动信息单元的合并运动信息单元集i的方面,所述处理器用于,基于从视频码流中获得的合并运动信息单元集i的标识,从N个候选合并运动信息单元集之中确定包含2个运动信息单元的合并运动信息单元集i。
  52. 根据权利要求50或51所述的装置,其特征在于,在当所述图像预测装置应用于视频解码装置中的情况下,所述处理器还用于,从视频码流中解码得到所述2个像素样本的运动矢量残差,利用所述2个像素样本的空域相邻或时域相邻的像素样本的运动矢量得到所述2个像素样本的运动矢量预测值,基于所述2个像素样本的运动矢量预测值和所述2个像素样本的运动矢量残差分别得到所述2个像素样本的运动矢量。
  53. 根据权利要求50所述的装置,其特征在于,在当所述图像预测装置应用于视频编码装置中的情况下,所述处理器还用于,利用所述2个像素样本的空域相邻或者时域相邻的像素样本的运动矢量,得到所述2个像素样本的运动矢量预测值,根据所述2个像素样本的运动矢量预测值得到所述2个像素样本的运动矢量残差,将所述2个像素样本的运动矢量残差写入视频码流。
  54. 根据权利要求50或53所述的装置,其特征在于,在当所述图像预测装置应用于视频编码装置中的情况下,所述处理器还用于,将所述合并运动信息单元集i的标识写入视频码流。
  55. 一种图像处理方法,其特征在于,包括:
    获得当前图像块的运动矢量2元组,所述运动矢量2元组包括所述当前图像块所属的视频帧中的2个像素样本各自的运动矢量;
    利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量;
    其中,所述仿射运动模型为如下形式:
    vx = ax + by
    vy = -bx + ay
    其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
    其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数。
  56. 根据权利要求55所述的方法,其特征在于,所述仿射运动模型还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
    vx = ax + by + c
    vy = -bx + ay + d
  57. 根据权利要求55或56所述的方法,其特征在于,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量包括:
    利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  58. 根据权利要求55至57任一项所述的方法,其特征在于,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量包括:
    利用所述2个像素样本各自的运动矢量的水平分量之间的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  59. 根据权利要求55至57任一项所述的方法,其特征在于,所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量包括:
    利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  60. 根据权利要求55至58任一项所述的方法,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右侧的右区域像素样本时,所述仿射运动模型具体为:
    vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
    vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx1,vy1)为所述右区域像素样本的运动矢量,w为所述2个像素样本之间的距离。
  61. 根据权利要求55至58任一项所述的方法,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本下方的下区域像素样本时,所述仿射运动模型具体为:
    vx = (vy2 - vy0)x/h + (vx2 - vx0)y/h + vx0
    vy = -(vx2 - vx0)x/h + (vy2 - vy0)y/h + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx2,vy2)为所述下区域像素样本的运动矢量,h为所述2个像素样本之间的距离。
  62. 根据权利要求55,56,57和59任一项所述的方法,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右下方的右下区域像素样本时,所述仿射运动模型具体为:
    vx = ((vx3 - vx0)w1 + (vy3 - vy0)h1)x/(w1² + h1²) + ((vx3 - vx0)h1 - (vy3 - vy0)w1)y/(w1² + h1²) + vx0
    vy = -((vx3 - vx0)h1 - (vy3 - vy0)w1)x/(w1² + h1²) + ((vx3 - vx0)w1 + (vy3 - vy0)h1)y/(w1² + h1²) + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx3,vy3)为所述右下区域像素样本的运动矢量,h1为所述2个像素样本之间的竖直方向距离,w1为所述2个像素样本之间的水平方向距离,w1²+h1²为所述2个像素样本之间的距离的平方。
  63. 根据权利要求55至62任一项所述的方法,其特征在于,在所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量之后,还包括:
    利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述当前图像块中的所述任意像素样本进行运动补偿预测编码。
  64. 根据权利要求55至62任一项所述的方法,其特征在于,在所述确定所述当前图像块中的所述任意像素样本的像素点的预测像素值之后,还包括:
    利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述任意像素样本进行运动补偿解码,得到所述任意像素样本的像素重建值。
  65. 一种图像处理装置,其特征在于,所述装置包括:
    获得单元,用于获得当前图像块的运动矢量2元组,所述运动矢量2元组包括所述当前图像块所属的视频帧中的2个像素样本各自的运动矢量;
    计算单元,用于利用仿射运动模型和所述获得单元获得的运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量;
    其中,所述仿射运动模型为如下形式:
    vx = ax + by
    vy = -bx + ay
    其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
    其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式 vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数。
  66. 根据权利要求65所述的装置,其特征在于,所述仿射运动模型还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
    vx = ax + by + c
    vy = -bx + ay + d
  67. 根据权利要求65或66所述的装置,其特征在于,所述计算单元具体用于:
    利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  68. 根据权利要求65至67任一项所述的装置,其特征在于,所述计算单元具体用于:
    利用所述2个像素样本各自的运动矢量的水平分量之间的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  69. 根据权利要求65至67任一项所述的装置,其特征在于,所述计算单元具体用于:
    利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  70. 根据权利要求65至68任一项所述的装置,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右侧的右区域像素样本时,所述仿射运动模型具体为:
    vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
    vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx1,vy1)为所述右区域像素样本的运动矢量,w为所述2个像素样本之间的距离。
  71. 根据权利要求65至68任一项所述的装置,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本下方的下区域像素样本时,所述仿射运动模型具体为:
    vx = (vy2 - vy0)x/h + (vx2 - vx0)y/h + vx0
    vy = -(vx2 - vx0)x/h + (vy2 - vy0)y/h + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx2,vy2)为所述下区域像素样本的运动矢量,h为所述2个像素样本之间的距离。
  72. 根据权利要求65,66,67和69任一项所述的装置,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右下方的右下区域像素样本时,所述仿射运动模型具体为:
    vx = ((vx3 - vx0)w1 + (vy3 - vy0)h1)x/(w1² + h1²) + ((vx3 - vx0)h1 - (vy3 - vy0)w1)y/(w1² + h1²) + vx0
    vy = -((vx3 - vx0)h1 - (vy3 - vy0)w1)x/(w1² + h1²) + ((vx3 - vx0)w1 + (vy3 - vy0)h1)y/(w1² + h1²) + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx3,vy3)为所述右下区域像素样本的运动矢量,h1为所述2个像素样本之间的竖直方向距离,w1为所述2个像素样本之间的水平方向距离,w1²+h1²为所述2个像素样本之间的距离的平方。
  73. 根据权利要求65至72任一项所述的装置,其特征在于,在当所述图像处理装置应用于视频编码装置中的情况下,所述装置还包括编码单元,用于利用所述计算单元计算得到的所述当前图像块中任意像素样本的运动矢量,对所 述当前图像块中的所述任意像素样本进行运动补偿预测编码。
  74. 根据权利要求65至72任一项所述的装置,其特征在于,在当所述图像处理装置应用于视频解码装置中的情况下,所述装置还包括解码单元,用于利用所述计算单元计算得到的所述当前图像块中任意像素样本的运动矢量,对所述任意像素样本进行运动补偿解码,得到所述任意像素样本的像素重建值。
  75. 一种图像处理装置,其特征在于,所述装置包括:
    处理器和存储器;
    其中,所述处理器通过调用所述存储器中存储的代码或指令以用于,获得当前图像块的运动矢量2元组,所述运动矢量2元组包括所述当前图像块所属的视频帧中的2个像素样本各自的运动矢量;
    利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量;
    其中,所述仿射运动模型为如下形式:
    vx = ax + by
    vy = -bx + ay
    其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
    其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数。
  76. 根据权利要求75所述的装置,其特征在于,所述仿射运动模型还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
    vx = ax + by + c
    vy = -bx + ay + d
  77. 根据权利要求75或76所述的装置,其特征在于,在所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动 矢量方面,所述处理器用于,利用所述2个像素样本各自的运动矢量与所述2个像素样本的位置,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  78. 根据权利要求75至77任一项所述的装置,其特征在于,在利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量方面,所述处理器用于,利用所述2个像素样本各自的运动矢量的水平分量之间的差值与所述2个像素样本之间距离的比值,以及所述2个像素样本各自的运动矢量的竖直分量之间的差值与所述2个像素样本之间距离的比值,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  79. 根据权利要求75至77任一项所述的装置,其特征在于,在利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量方面,所述处理器用于,利用所述2个像素样本各自的运动矢量的分量之间的加权和与所述2个像素样本之间距离或所述2个像素样本之间距离的平方的比值,获得所述仿射运动模型的系数的值;
    利用所述仿射运动模型以及所述仿射运动模型的系数的值,获得所述当前图像块中的任意像素样本的运动矢量。
  80. 根据权利要求75至78任一项所述的装置,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右侧的右区域像素样本时,所述仿射运动模型具体为:
    vx = (vx1 - vx0)x/w - (vy1 - vy0)y/w + vx0
    vy = (vy1 - vy0)x/w + (vx1 - vx0)y/w + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx1,vy1)为所述右区域像素样本的运动矢量,w为所述2个像素样本之间的距离。
  81. 根据权利要求75至78任一项所述的装置,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本下方的下区域像素样本时,所述仿射运动模型具体为:
    vx = (vy2 - vy0)x/h + (vx2 - vx0)y/h + vx0
    vy = -(vx2 - vx0)x/h + (vy2 - vy0)y/h + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx2,vy2)为所述下区域像素样本的运动矢量,h为所述2个像素样本之间的距离。
  82. 根据权利要求75,76,77和79任一项所述的装置,其特征在于,在所述2个像素样本包括所述当前图像块的左上像素样本、位于所述左上像素样本右下方的右下区域像素样本时,所述仿射运动模型具体为:
    vx = ((vx3 - vx0)w1 + (vy3 - vy0)h1)x/(w1² + h1²) + ((vx3 - vx0)h1 - (vy3 - vy0)w1)y/(w1² + h1²) + vx0
    vy = -((vx3 - vx0)h1 - (vy3 - vy0)w1)x/(w1² + h1²) + ((vx3 - vx0)w1 + (vy3 - vy0)h1)y/(w1² + h1²) + vy0
    其中,(vx0,vy0)为所述左上像素样本的运动矢量,(vx3,vy3)为所述右下区域像素样本的运动矢量,h1为所述2个像素样本之间的竖直方向距离,w1为所述2个像素样本之间的水平方向距离,w1²+h1²为所述2个像素样本之间的距离的平方。
  83. 根据权利要求75至82任一项所述的装置,其特征在于,在当所述图像处理装置应用于视频编码装置中的情况下,所述处理器还用于,在所述利用仿射运动模型和所述运动矢量2元组,计算得到所述当前图像块中任意像素样本的运动矢量之后,利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述当前图像块中的所述任意像素样本进行运动补偿预测编码。
  84. 根据权利要求75至82任一项所述的装置,其特征在于,在当所述图像处理装置应用于视频解码装置中的情况下,所述处理器还用于,在所述确定所述当前图像块中的所述任意像素样本的像素点的预测像素值之后,利用计算得到的所述当前图像块中任意像素样本的运动矢量,对所述任意像素样本进行运动补偿解码,得到所述任意像素样本的像素重建值。
  85. 一种图像处理方法,其特征在于,包括:
    获得仿射运动模型的系数,利用所述仿射运动模型的系数以及所述仿射运动模型,计算得到所述当前图像块中任意像素样本的运动矢量;
    利用计算得到的所述任意像素样本的运动矢量,确定所述任意像素样本的像素点的预测像素值;
    其中,所述仿射运动模型为如下形式:
    vx = ax + by
    vy = -bx + ay
    其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
    其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数,所述仿射运动模型的系数包括a和b;
    所述仿射运动模型的系数还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
    vx = ax + by + c
    vy = -bx + ay + d
  86. 一种图像处理装置,其特征在于,包括:
    获得单元,用于获得仿射运动模型的系数;
    计算单元,用于利用所述获得单元获得的仿射运动模型的系数以及所述仿射运动模型,计算得到所述当前图像块中任意像素样本的运动矢量;
    预测单元,用于利用所述计算单元计算得到的所述任意像素样本的运动矢量,确定所述任意像素样本的像素点的预测像素值;
    其中,所述仿射运动模型为如下形式:
    Figure PCTCN2015075094-appb-100021
    其中,(x,y)为所述任意像素样本的坐标,所述vx为所述任意像素样本 的运动矢量的水平分量,所述vy为所述任意像素样本的运动矢量的竖直分量;
    其中,在等式vx=ax+by中,a为所述仿射运动模型的水平分量的水平坐标系数,b为所述仿射运动模型的水平分量的竖直坐标系数;在等式vy=-bx+ay中,a为所述仿射运动模型的竖直分量的竖直坐标系数,-b为所述仿射运动模型的竖直分量的水平坐标系数,所述仿射运动模型的系数包括a和b;
    所述仿射运动模型的系数还包括所述仿射运动模型的水平分量的水平位移系数c,以及所述仿射运动模型的竖直分量的竖直位移系数d,从而所述仿射运动模型为如下形式:
    Figure PCTCN2015075094-appb-100022
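For illustration only (this is not part of the claims), the rotational-scaling affine model recited in claims 85 and 86 maps pixel coordinates directly to motion-vector components. A minimal Python sketch, assuming floating-point arithmetic and an invented function name; a real codec would use fixed-point motion-vector precision:

```python
def affine_mv(a, b, c, d, x, y):
    """Motion vector (vx, vy) at pixel (x, y) under the model of
    claims 85/86: vx = a*x + b*y + c, vy = -b*x + a*y + d."""
    vx = a * x + b * y + c
    vy = -b * x + a * y + d
    return vx, vy

# With a = b = 0 the model degenerates to pure translation:
# every pixel moves by (c, d), regardless of its position.
print(affine_mv(0.0, 0.0, 5.0, 7.0, 12, 34))  # (5.0, 7.0)
```

The sign convention (a on both diagonal terms, b and -b off-diagonal) is what restricts this 4-parameter model to rotation, scaling, and translation, as opposed to a general 6-parameter affine transform.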
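Also for illustration, the two-sample constructions in claims 80 and 82 can be sketched the same way. The function names are invented here; the coefficient formulas follow the ratios described in claims 78 and 79 (component differences over the inter-sample distance, or weighted sums over the squared distance):

```python
def mv_from_right_sample(v0, v1, w, x, y):
    """Claim-80 case: control samples at the top-left corner (0, 0)
    and at (w, 0), i.e. w pixels to its right."""
    (vx0, vy0), (vx1, vy1) = v0, v1
    a = (vx1 - vx0) / w  # coefficient from the horizontal-component difference
    b = (vy1 - vy0) / w  # coefficient from the vertical-component difference
    return (a * x - b * y + vx0, b * x + a * y + vy0)

def mv_from_diagonal_sample(v0, v3, w1, h1, x, y):
    """Claim-82 case: control samples at the top-left corner (0, 0)
    and at the lower-right offset (w1, h1)."""
    (vx0, vy0), (vx3, vy3) = v0, v3
    d2 = w1 * w1 + h1 * h1  # squared distance between the two samples
    a = ((vx3 - vx0) * w1 + (vy3 - vy0) * h1) / d2
    b = ((vx3 - vx0) * h1 - (vy3 - vy0) * w1) / d2
    return (a * x + b * y + vx0, -b * x + a * y + vy0)

# Both sketches interpolate the control samples exactly: evaluating at
# the second sample's coordinates returns that sample's own motion vector.
print(mv_from_right_sample((1.0, 2.0), (3.0, 6.0), 8, 8, 0))        # (3.0, 6.0)
print(mv_from_diagonal_sample((1.0, 2.0), (3.0, 6.0), 2, 2, 2, 2))  # (3.0, 6.0)
```

Evaluating either function at (0, 0) returns the top-left sample's motion vector unchanged, which is a quick sanity check for the displacement terms.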
PCT/CN2015/075094 2015-03-10 2015-03-26 Picture prediction method and related device WO2016141609A1 (zh)

Priority Applications (16)

Application Number Priority Date Filing Date Title
RU2017134755A RU2671307C1 (ru) 2015-03-10 2015-03-26 Picture prediction method and related device
CN201580077673.XA CN107534770B (zh) 2015-03-10 2015-03-26 Picture prediction method and related device
CA2979082A CA2979082C (en) 2015-03-10 2015-03-26 Picture processing using an affine motion model and a motion vector 2-tuple
MYPI2017001326A MY190198A (en) 2015-03-10 2015-03-26 Picture prediction method and related apparatus
BR112017019264-0A BR112017019264B1 (pt) 2015-03-10 2015-03-26 Image prediction method and related device
EP15884292.2A EP3264762A4 (en) 2015-03-10 2015-03-26 Image prediction method and related device
MX2017011558A MX2017011558A (es) 2015-03-10 2015-03-26 Image prediction method and related apparatus
SG11201707392RA SG11201707392RA (en) 2015-03-10 2015-03-26 Picture prediction method and related device
CN201910900293.1A CN110557631B (zh) 2015-03-10 2015-03-26 Picture prediction method and related device
JP2017548056A JP6404487B2 (ja) 2015-03-10 2015-03-26 Picture prediction method and related apparatus
AU2015385634A AU2015385634B2 (en) 2015-03-10 2015-03-26 Picture prediction method and related apparatus
KR1020177027987A KR102081213B1 (ko) 2015-03-10 2015-03-26 Picture prediction method and related apparatus
US15/699,515 US10404993B2 (en) 2015-03-10 2017-09-08 Picture prediction method and related apparatus
HK18103344.7A HK1243852A1 (zh) 2015-03-10 2018-03-09 Picture prediction method and related device
US16/413,329 US10659803B2 (en) 2015-03-10 2019-05-15 Picture prediction method and related apparatus
US16/847,444 US11178419B2 (en) 2015-03-10 2020-04-13 Picture prediction method and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2015073969 2015-03-10
CNPCT/CN2015/073969 2015-03-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/699,515 Continuation US10404993B2 (en) 2015-03-10 2017-09-08 Picture prediction method and related apparatus

Publications (1)

Publication Number Publication Date
WO2016141609A1 true WO2016141609A1 (zh) 2016-09-15

Family

ID=56878537

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/075094 WO2016141609A1 (zh) 2015-03-10 2015-03-26 图像预测方法和相关设备

Country Status (14)

Country Link
US (3) US10404993B2 (zh)
EP (1) EP3264762A4 (zh)
JP (2) JP6404487B2 (zh)
KR (1) KR102081213B1 (zh)
CN (2) CN107534770B (zh)
AU (1) AU2015385634B2 (zh)
BR (1) BR112017019264B1 (zh)
CA (2) CA3122322A1 (zh)
HK (1) HK1243852A1 (zh)
MX (2) MX2017011558A (zh)
MY (1) MY190198A (zh)
RU (1) RU2671307C1 (zh)
SG (3) SG10201900632SA (zh)
WO (1) WO2016141609A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020526066A (ja) * 2017-06-26 2020-08-27 InterDigital VC Holdings, Inc. Multiple predictor candidates for motion compensation
JP2020537394A (ja) * 2017-10-05 2020-12-17 InterDigital VC Holdings, Inc. Improved predictor candidates for motion compensation
US11330285B2 (en) 2017-01-04 2022-05-10 Huawei Technologies Co., Ltd. Picture prediction method and related device

Families Citing this family (36)

Publication number Priority date Publication date Assignee Title
JP6404487B2 (ja) 2015-03-10 2018-10-10 Huawei Technologies Co., Ltd. Picture prediction method and related apparatus
ES2737841B1 (es) * 2016-02-25 2021-07-27 Kt Corp Method and apparatus for processing video signals
SG11201806865YA (en) * 2016-03-15 2018-09-27 Mediatek Inc Method and apparatus of video coding with affine motion compensation
US10560712B2 (en) 2016-05-16 2020-02-11 Qualcomm Incorporated Affine motion prediction for video coding
US10448010B2 (en) * 2016-10-05 2019-10-15 Qualcomm Incorporated Motion vector prediction for affine motion models in video coding
US11877001B2 (en) 2017-10-10 2024-01-16 Qualcomm Incorporated Affine prediction in video coding
WO2019117659A1 (ko) * 2017-12-14 2019-06-20 LG Electronics Inc. Image coding method based on motion vector derivation and device therefor
US20190208211A1 (en) * 2018-01-04 2019-07-04 Qualcomm Incorporated Generated affine motion vectors
CN111602393B (zh) * 2018-01-15 2022-10-21 Samsung Electronics Co., Ltd. Encoding method and device therefor, and decoding method and device therefor
CN118042153A (zh) * 2018-01-25 2024-05-14 Samsung Electronics Co., Ltd. Method and device for video signal processing using subblock-based motion compensation
CN116684639A (zh) * 2018-04-01 2023-09-01 LG Electronics Inc. Image encoding/decoding device and image data transmitting device
WO2019192491A1 (en) * 2018-04-02 2019-10-10 Mediatek Inc. Video processing methods and apparatuses for sub-block motion compensation in video coding systems
WO2019199127A1 (ko) * 2018-04-12 2019-10-17 Samsung Electronics Co., Ltd. Encoding method and device therefor, and decoding method and device therefor
CN110536135B (zh) * 2018-05-25 2021-11-05 Tencent America LLC Method and device for video encoding and decoding
US10887574B2 (en) 2018-07-31 2021-01-05 Intel Corporation Selective packing of patches for immersive video
US10893299B2 (en) * 2018-07-31 2021-01-12 Intel Corporation Surface normal vector processing mechanism
US11178373B2 (en) 2018-07-31 2021-11-16 Intel Corporation Adaptive resolution of point cloud and viewpoint prediction for video streaming in computing environments
US10762394B2 (en) 2018-07-31 2020-09-01 Intel Corporation System and method for 3D blob classification and transmission
US11212506B2 (en) 2018-07-31 2021-12-28 Intel Corporation Reduced rendering of six-degree of freedom video
KR102547353B1 (ko) 2018-08-06 2023-06-26 LG Electronics Inc. Image decoding method and apparatus based on affine motion prediction using constructed affine MVP candidates in an image coding system
US11039157B2 (en) * 2018-09-21 2021-06-15 Tencent America LLC Techniques for simplified affine motion model coding with prediction offsets
KR102354489B1 (ko) 2018-10-08 2022-01-21 LG Electronics Inc. Device for performing image coding on the basis of an ATMVP candidate
US11057631B2 (en) 2018-10-10 2021-07-06 Intel Corporation Point cloud coding standard conformance definition in computing environments
GB2595054B (en) * 2018-10-18 2022-07-06 Canon Kk Video coding and decoding
GB2595053B (en) * 2018-10-18 2022-07-06 Canon Kk Video coding and decoding
WO2020155791A1 (zh) * 2019-02-01 2020-08-06 Huawei Technologies Co., Ltd. Inter prediction method and apparatus
KR20210129721A (ko) * 2019-03-11 2021-10-28 Alibaba Group Holding Limited Method, device, and system for determining prediction weights for merge mode
US11153598B2 (en) * 2019-06-04 2021-10-19 Tencent America LLC Method and apparatus for video coding using a subblock-based affine motion model
JP7275326B2 (ja) 2019-06-14 2023-05-17 Hyundai Motor Company Video encoding method, video decoding method, and video decoding apparatus using inter prediction
WO2020262901A1 (ko) * 2019-06-24 2020-12-30 LG Electronics Inc. Image decoding method and device therefor
WO2021006614A1 (ko) 2019-07-08 2021-01-14 Hyundai Motor Company Method and device for encoding and decoding video using inter prediction
KR20210006306A (ko) 2019-07-08 2021-01-18 Hyundai Motor Company Method and device for encoding and decoding video using inter prediction
CN111050168B (zh) * 2019-12-27 2021-07-13 Zhejiang Dahua Technology Co., Ltd. Affine prediction method and related apparatus
US20210245047A1 (en) 2020-02-10 2021-08-12 Intel Corporation Continuum architecture for cloud gaming
CN112601081B (zh) * 2020-12-04 2022-06-24 Zhejiang Dahua Technology Co., Ltd. Adaptive partition multi-prediction method and device
WO2024081734A1 (en) * 2022-10-13 2024-04-18 Bytedance Inc. Method, apparatus, and medium for video processing

Citations (5)

Publication number Priority date Publication date Assignee Title
US6084912A (en) * 1996-06-28 2000-07-04 Sarnoff Corporation Very low bit rate video coding/decoding method and apparatus
CN1347063A (zh) * 1996-08-05 2002-05-01 Mitsubishi Electric Corporation Image coded data conversion apparatus
CN101350928A (zh) * 2008-07-29 2009-01-21 Vimicro Corporation Motion estimation method and device
CN102883160A (zh) * 2009-06-26 2013-01-16 Huawei Technologies Co., Ltd. Method, apparatus and device for obtaining video image motion information, and template construction method
CN104363451A (zh) * 2014-10-27 2015-02-18 Huawei Technologies Co., Ltd. Picture prediction method and related apparatus

Family Cites Families (26)

Publication number Priority date Publication date Assignee Title
JP2586686B2 (ja) * 1990-04-19 1997-03-05 NEC Corporation Motion information detecting device for moving pictures and motion-compensated inter-frame predictive coding device for moving pictures
JP2000165648A (ja) 1998-11-27 2000-06-16 Fuji Photo Film Co Ltd Image processing method and apparatus, and recording medium
US6735249B1 (en) 1999-08-11 2004-05-11 Nokia Corporation Apparatus, and associated method, for forming a compressed motion vector field utilizing predictive motion coding
CN1193620C (zh) * 2000-01-21 2005-03-16 Nokia Corporation Motion estimation method and system for a video coder
KR100359115B1 (ko) 2000-05-24 2002-11-04 Samsung Electronics Co., Ltd. Video coding method
JP3681342B2 (ja) * 2000-05-24 2005-08-10 Samsung Electronics Co., Ltd. Video coding method
US6537928B1 (en) * 2002-02-19 2003-03-25 Asm Japan K.K. Apparatus and method for forming low dielectric constant film
JP2003274410A (ja) 2002-03-13 2003-09-26 Hitachi Ltd Encoding device and decoding device for surveillance video, and encoding method
US20070076796A1 (en) * 2005-09-27 2007-04-05 Fang Shi Frame interpolation using more accurate motion information
US8116576B2 (en) * 2006-03-03 2012-02-14 Panasonic Corporation Image processing method and image processing device for reconstructing a high-resolution picture from a captured low-resolution picture
JP4793366B2 (ja) * 2006-10-13 2011-10-12 Victor Company of Japan, Ltd. Multi-view image encoding device, method, and program, and multi-view image decoding device, method, and program
JP2007312425A (ja) * 2007-07-30 2007-11-29 Nippon Telegr &amp; Teleph Corp &lt;Ntt&gt; Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, image decoding program, and recording medium storing the programs
JP4544334B2 (ja) * 2008-04-15 2010-09-15 Sony Corporation Image processing device and image processing method
FR2933565A1 (fr) * 2008-07-01 2010-01-08 France Telecom Method and device for encoding an image sequence using temporal prediction, and corresponding signal, data medium, decoding method and device, and computer program product
CN103039075B (zh) * 2010-05-21 2015-11-25 JVC Kenwood Corporation Image encoding device, image encoding method, image decoding device, and image decoding method
EP2664070B1 (en) * 2011-01-14 2016-11-02 GE Video Compression, LLC Entropy encoding and decoding scheme
CN102158709B (zh) 2011-05-27 2012-07-11 Shandong University Motion-compensated prediction method derivable at the decoder side
US8964845B2 (en) * 2011-12-28 2015-02-24 Microsoft Corporation Merge mode for motion information prediction
US9438928B2 (en) * 2012-11-05 2016-09-06 Lifesize, Inc. Mechanism for video encoding based on estimates of statistically-popular motion vectors in frame
CN103024378B (zh) * 2012-12-06 2016-04-13 Zhejiang University Method and device for deriving motion information in video encoding and decoding
CN104113756A (zh) * 2013-04-22 2014-10-22 苏州派瑞雷尔智能科技有限公司 Integer-pixel motion estimation method suitable for H.264 video encoding and decoding
CN107734335B (zh) 2014-09-30 2020-11-06 Huawei Technologies Co., Ltd. Picture prediction method and related apparatus
JP6404487B2 (ja) 2015-03-10 2018-10-10 Huawei Technologies Co., Ltd. Picture prediction method and related apparatus
CN106254878B (zh) * 2015-06-14 2020-06-12 Tongji University Image encoding and decoding method and image processing device
CN104935938B (zh) * 2015-07-15 2018-03-30 Harbin Institute of Technology Inter-frame prediction method in a hybrid video coding standard
CN109076234A (zh) * 2016-05-24 2018-12-21 Huawei Technologies Co., Ltd. Picture prediction method and related device


Cited By (8)

Publication number Priority date Publication date Assignee Title
US11330285B2 (en) 2017-01-04 2022-05-10 Huawei Technologies Co., Ltd. Picture prediction method and related device
JP2020526066A (ja) * 2017-06-26 2020-08-27 InterDigital VC Holdings, Inc. Multiple predictor candidates for motion compensation
JP7261750B2 (ja) 2017-06-26 2023-04-20 InterDigital VC Holdings, Inc. Multiple predictor candidates for motion compensation
US11785250B2 (en) 2017-06-26 2023-10-10 Interdigital Vc Holdings, Inc. Multiple predictor candidates for motion compensation
JP2020537394A (ja) * 2017-10-05 2020-12-17 InterDigital VC Holdings, Inc. Improved predictor candidates for motion compensation
JP7277447B2 (ja) 2017-10-05 2023-05-19 InterDigital VC Holdings, Inc. Improved predictor candidates for motion compensation
US11805272B2 (en) 2017-10-05 2023-10-31 Interdigital Patent Holdings, Inc. Predictor candidates for motion compensation
JP7474365B2 (ja) 2017-10-05 2024-04-24 InterDigital VC Holdings, Inc. Improved predictor candidates for motion compensation

Also Published As

Publication number Publication date
CA2979082C (en) 2021-07-27
SG10202111537RA (en) 2021-11-29
EP3264762A1 (en) 2018-01-03
JP6404487B2 (ja) 2018-10-10
JP2018511997A (ja) 2018-04-26
CN110557631A (zh) 2019-12-10
US10404993B2 (en) 2019-09-03
MX2020010515A (es) 2020-10-22
US11178419B2 (en) 2021-11-16
AU2015385634A1 (en) 2017-10-19
HK1243852A1 (zh) 2018-07-20
EP3264762A4 (en) 2018-05-02
BR112017019264B1 (pt) 2023-12-12
US20190268616A1 (en) 2019-08-29
JP2019013031A (ja) 2019-01-24
MY190198A (en) 2022-04-04
AU2015385634B2 (en) 2019-07-18
CA2979082A1 (en) 2016-09-15
CN107534770B (zh) 2019-11-05
JP6689499B2 (ja) 2020-04-28
US20200244986A1 (en) 2020-07-30
US10659803B2 (en) 2020-05-19
SG11201707392RA (en) 2017-10-30
KR102081213B1 (ko) 2020-02-25
CA3122322A1 (en) 2016-09-15
CN110557631B (zh) 2023-10-20
MX2017011558A (es) 2018-03-21
SG10201900632SA (en) 2019-02-27
RU2671307C1 (ru) 2018-10-30
CN107534770A (zh) 2018-01-02
KR20170125086A (ko) 2017-11-13
US20170374379A1 (en) 2017-12-28
BR112017019264A2 (zh) 2018-05-02

Similar Documents

Publication Publication Date Title
WO2016141609A1 (zh) Picture prediction method and related device
JP7313816B2 (ja) Picture prediction method and related apparatus
JP7335315B2 (ja) Picture prediction method and related apparatus
WO2017201678A1 (zh) Picture prediction method and related device
CN112087629B (zh) Picture prediction method and apparatus, and computer-readable storage medium
TW202005384A (zh) Video processing method, apparatus and recording medium based on interweaved prediction
WO2016065872A1 (zh) Picture prediction method and related apparatus
TW202005388A (zh) Application of interweaved prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15884292; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2979082; Country of ref document: CA)
WWE Wipo information: entry into national phase (Ref document number: MX/A/2017/011558; Country of ref document: MX)
ENP Entry into the national phase (Ref document number: 2017548056; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 11201707392R; Country of ref document: SG)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112017019264; Country of ref document: BR)
REEP Request for entry into the european phase (Ref document number: 2015884292; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 20177027987; Country of ref document: KR; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 2017134755; Country of ref document: RU)
ENP Entry into the national phase (Ref document number: 2015385634; Country of ref document: AU; Date of ref document: 20150326; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 112017019264; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20170908)