WO2017031671A1 - Motion vector field encoding method and decoding method, and encoding and decoding apparatus - Google Patents

Motion vector field encoding method and decoding method, and encoding and decoding apparatus

Info

Publication number
WO2017031671A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
vector field
sampling point
block
current
Prior art date
Application number
PCT/CN2015/087947
Other languages
English (en)
French (fr)
Inventor
张红
杨海涛
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to BR112018003247-6A (BR112018003247B1)
Priority to KR1020187006478A (KR102059066B1)
Priority to JP2018508146A (JP6636615B2)
Priority to EP15901942.1A (EP3343923B1)
Priority to PCT/CN2015/087947 (WO2017031671A1)
Priority to CN201580081772.5A (CN107852500B)
Priority to AU2015406855A (AU2015406855A1)
Publication of WO2017031671A1
Priority to US15/901,410 (US11102501B2)
Priority to AU2019275631A (AU2019275631B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/174 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N 19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N 19/31 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability, in the temporal domain
    • H04N 19/40 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/513 Processing of motion vectors
    • H04N 19/517 Processing of motion vectors by encoding
    • H04N 19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N 19/53 Multi-resolution motion estimation; Hierarchical motion estimation
    • H04N 19/537 Motion estimation other than block-based
    • H04N 19/567 Motion estimation based on rate distortion criteria
    • H04N 19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N 19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a motion vector field encoding method and a decoding method, an encoding and decoding apparatus.
  • a video corresponding to a motion scene includes a series of video frames, each of which contains a still image, and the illusion of motion is created by displaying these images in rapid succession, for example at a rate of 15 to 30 frames per second. Because the frame rate is relatively high, the images of the video frames within the series are very similar.
  • a video frame in the series of video frames is taken as a reference image, and a motion vector field of another video frame in the series of video frames refers to displacement information of the video frame relative to the reference image.
  • the video frame may be an image adjacent to the reference image or may not be an image adjacent to the reference image.
  • a video frame includes a plurality of pixels, and the image in the video frame can be divided into a plurality of image units, where each image unit includes at least one pixel and the motion vectors of all pixels in an image unit are the same; that is, each image unit has one motion vector.
  • the motion vector field of the video frame is made up of the motion vectors of all image units.
  • Embodiments of the present invention provide a motion vector field coding method, a decoding method, an encoding device, and a decoding device, by which the compression efficiency of the motion vector field can be improved.
  • a first aspect of the embodiments of the present invention provides a video encoding method, including:
  • the prediction information and the prediction residual signal are written to a code stream.
  • the acquiring the prediction signal of the current motion vector field block and the prediction information of the current motion vector field block includes:
  • acquiring a first reference motion vector field of the current motion vector field block, where the first reference motion vector field is an encoded and reconstructed motion vector field, and the first reference motion vector field is the motion vector field of a video frame at time t1;
  • the video frame at the time t1 is a video frame adjacent to the video frame at the time t;
  • the prediction signal includes a motion vector field block of the second reference motion vector field, where the coordinate range of that motion vector field block in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field;
  • the prediction information includes the information used to indicate the first reference motion vector field.
  • obtaining the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2, including:
  • the position of the second sampling point is the common position reached when, starting from the position of each of at least two first sampling points of the first reference motion vector field, each of the at least two first sampling points is moved by its respective displacement, where the displacement of each of the at least two first sampling points is the product of the motion vector of that first sampling point and (t-t1)/(t1-t2);
  • a product of a weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) is used as a motion vector of the second sampling point.
  • obtaining the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2, including:
  • the motion vector of the one second sampling point is used as the motion vector of the target second sampling point;
  • a weighted average of motion vectors of the at least two second sampling points is used as a motion vector of the target second sampling point.
  • the acquiring the prediction signal of the current motion vector field block and the prediction information of the current motion vector field block includes:
  • the prediction information is acquired according to the direction coefficient, and the prediction information includes direction coefficient information indicating the direction coefficient.
  • the acquiring a direction coefficient of the current motion vector field block includes:
  • the coefficient of the preset function obtained by fitting is taken as the direction coefficient.
  • the acquiring a direction coefficient of the current motion vector field block includes:
  • the acquiring the direction coefficient of the current motion vector field block includes:
  • the direction coefficients of the at least two encoded motion vector field blocks are used as the direction coefficient of the current motion vector field block.
  • the acquiring the direction coefficient of the current motion vector field block includes:
  • a ratio of the first component of the motion vector of the one sampling point to the second component of the motion vector of the one sampling point is used as a candidate direction coefficient of the candidate direction coefficient set;
  • a second aspect of the embodiments of the present invention provides a motion vector field decoding method, including:
  • the current motion vector field block is obtained by dividing the current motion vector field, where the current motion vector field is the motion vector field corresponding to the video frame at time t;
  • the prediction information includes information used to indicate a first reference motion vector field of the current motion vector field block;
  • acquiring, according to the prediction information, the first reference motion vector field, where the first reference motion vector field is the motion vector field of a video frame at time t1;
  • acquiring a motion vector field block of the second reference motion vector field, where the coordinate range of the motion vector field block of the second reference motion vector field in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field;
  • the prediction signal includes a motion vector field block of the second reference motion vector field.
  • obtaining the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2, including:
  • calculating, according to a calculation formula, the motion vector of a second sampling point of the second reference motion vector field from the motion vector of a first sampling point of the first reference motion vector field; wherein, with the position of the first sampling point as the starting point and the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • obtaining the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2, including:
  • the position of the second sampling point is the common position reached when, starting from the position of each of at least two first sampling points of the first reference motion vector field, each of the at least two first sampling points is moved by its respective displacement, where the displacement of each of the at least two first sampling points is the product of the motion vector of that first sampling point and (t-t1)/(t1-t2);
  • a product of a weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) is used as a motion vector of the second sampling point.
  • obtaining the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2, including:
  • the motion vector of the one second sampling point is used as the motion vector of the target second sampling point;
  • a weighted average of motion vectors of the at least two second sampling points is used as a motion vector of the target second sampling point.
  • the prediction information includes direction coefficient information indicating a direction coefficient of the current motion vector field block, where the direction coefficient is used to indicate a relationship between the value of a first component of the motion vector of a sampling point of the current motion vector field block and the value of the second component of the motion vector of that sampling point;
  • the direction coefficient information includes information of a reconstructed motion vector field block in the current motion vector field, and the direction coefficient includes the direction coefficient of the reconstructed motion vector field block;
  • the direction coefficient information includes a value of the direction coefficient.
  • a third aspect of the embodiments of the present invention provides an encoding apparatus, including:
  • a first acquiring module configured to acquire an original signal of a current motion vector field block, where the current motion vector field block is obtained by dividing a current motion vector field, and the current motion vector field is the motion vector field corresponding to the video frame at time t;
  • a second acquiring module configured to acquire a prediction signal of the current motion vector field block and prediction information of the current motion vector field block, where the prediction information is used to indicate information required to acquire the prediction signal;
  • a calculation module configured to calculate, according to the prediction signal acquired by the second acquiring module and the original signal acquired by the first acquiring module, a prediction residual signal of the current motion vector field block, where the prediction residual signal is used to indicate the residual between the original signal and the prediction signal;
  • an encoding module configured to write the prediction information acquired by the second acquiring module and the prediction residual signal calculated by the computing module into a code stream.
  • the second obtaining module is configured to:
  • acquire a first reference motion vector field of the current motion vector field block, where the first reference motion vector field is an encoded and reconstructed motion vector field, and the first reference motion vector field is the motion vector field of a video frame at time t1;
  • the video frame at the time t1 is a video frame adjacent to the video frame at the time t;
  • the prediction signal includes a motion vector field block of the second reference motion vector field, where the coordinate range of that motion vector field block in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field;
  • the prediction information includes the information used to indicate the first reference motion vector field.
  • the second acquiring module is configured to calculate, according to a calculation formula, the motion vector of a second sampling point of the second reference motion vector field from the motion vector of a first sampling point of the first reference motion vector field; wherein, with the position of the first sampling point as the starting point and the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • the second acquiring module is configured to determine a second sampling point of the second reference motion vector field;
  • the position of the second sampling point is the common position reached when, starting from the position of each of at least two first sampling points of the first reference motion vector field, each of the at least two first sampling points is moved by its respective displacement, where the displacement of each of the at least two first sampling points is the product of the motion vector of that first sampling point and (t-t1)/(t1-t2);
  • a product of a weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) is used as a motion vector of the second sampling point.
  • the second acquiring module is configured to acquire at least one second sampling point adjacent to a target second sampling point of the second reference motion vector field, where, with the position of any first sampling point of the first reference motion vector field as a starting point and the displacement of that first sampling point as the displacement, the position moved to is different from the position of the target second sampling point;
  • the motion vector of the one second sampling point is used as the motion vector of the target second sampling point;
  • a weighted average of motion vectors of the at least two second sampling points is used as a motion vector of the target second sampling point.
  • the second acquiring module is configured to:
  • the prediction information is acquired according to the direction coefficient, and the prediction information includes direction coefficient information indicating the direction coefficient.
  • the second acquiring module is configured to:
  • the coefficient of the preset function obtained by fitting is taken as the direction coefficient.
  • in a seventh possible implementation manner of the third aspect, the second obtaining module is configured to:
  • the second acquiring module is configured to:
  • the direction coefficients of the at least two encoded motion vector field blocks adjacent to the current motion vector field block in the current motion vector field are used as The direction coefficient of the current motion vector field block.
  • the second acquiring module is configured to:
  • a ratio of the first component of the motion vector of the one sampling point to the second component of the motion vector of the one sampling point is used as a candidate direction coefficient of the candidate direction coefficient set;
  • a fourth aspect of the embodiments of the present invention provides a decoding apparatus, including:
  • a first acquiring module configured to acquire prediction information and a prediction residual signal of a current motion vector field block, where the current motion vector field block is obtained by dividing a current motion vector field, and the current motion vector field is the motion vector field corresponding to the video frame at time t;
  • a second acquiring module configured to acquire, according to the prediction information, a prediction signal of the current motion vector field block
  • a calculation module configured to calculate a reconstruction signal of the current motion vector field block according to the prediction signal acquired by the second acquisition module and the prediction residual signal acquired by the first acquisition module.
  • the prediction information includes information used to indicate a first reference motion vector field of the current motion vector field block;
  • the second obtaining module is configured to:
  • acquire, according to the prediction information, the first reference motion vector field, where the first reference motion vector field is the motion vector field of a video frame at time t1;
  • acquire a motion vector field block of the second reference motion vector field, where the coordinate range of the motion vector field block of the second reference motion vector field in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field;
  • the prediction signal includes a motion vector field block of the second reference motion vector field.
  • the second acquiring module is configured to:
  • calculate, according to a calculation formula, the motion vector of a second sampling point of the second reference motion vector field from the motion vector of a first sampling point of the first reference motion vector field; wherein, with the position of the first sampling point as the starting point and the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • the second acquiring module is configured to:
  • the position of the second sampling point is the common position reached when, starting from the position of each of at least two first sampling points of the first reference motion vector field, each of the at least two first sampling points is moved by its respective displacement, where the displacement of each of the at least two first sampling points is the product of the motion vector of that first sampling point and (t-t1)/(t1-t2);
  • a product of a weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) is used as a motion vector of the second sampling point.
  • the second acquiring module is configured to:
  • the motion vector of the one second sampling point is used as the motion vector of the target second sampling point;
  • a weighted average of motion vectors of the at least two second sampling points is used as a motion vector of the target second sampling point.
  • the prediction information includes direction coefficient information used to indicate a direction coefficient of the current motion vector field block, where the direction coefficient is used to indicate a relationship between the value of a first component of the motion vector of a sampling point of the current motion vector field block and the value of the second component of the motion vector of that sampling point;
  • the second acquiring module is configured to acquire a reconstructed value of the first component, and calculate a predicted value of the second component according to the direction coefficient and the reconstructed value of the first component, where the prediction signal includes the predicted value of the second component.
  • the direction coefficient information includes information of a reconstructed motion vector field block in the current motion vector field, and the direction coefficient includes the direction coefficient of the reconstructed motion vector field block;
  • the direction coefficient information includes a value of the direction coefficient.
  • the compression efficiency of the motion vector field is improved.
  • FIG. 1 is a flow chart of an embodiment of a motion vector field encoding method of the present invention
  • FIG. 3 is a flow chart of an embodiment of acquiring a prediction signal of the current block in the embodiment shown in FIG. 1;
  • FIG. 4 is a flow chart of one embodiment of determining weights of motion vectors for respective first sampling points in the first set in the embodiment of FIG. 3;
  • FIG. 5 is a flow chart of an embodiment of acquiring a prediction signal of the current block in the embodiment shown in FIG. 1;
  • FIG. 6 is a flow chart showing an embodiment of acquiring a direction coefficient of a current block in the embodiment shown in FIG. 5;
  • FIG. 7 is a flow chart showing another embodiment of acquiring a direction coefficient of a current block in the embodiment shown in FIG. 5;
  • Figure 8 is a flow chart showing another embodiment of acquiring the direction coefficient of the current block in the embodiment shown in Figure 5;
  • FIG. 9 is a flow chart of an embodiment of a motion vector field decoding method of the present invention.
  • FIG. 10 is a schematic structural diagram of an embodiment of an encoding apparatus according to the present invention.
  • FIG. 11 is a schematic structural diagram of an embodiment of a decoding apparatus according to the present invention.
  • Figure 12 is a block diagram showing the structure of another embodiment of the encoding apparatus of the present invention.
  • Figure 13 is a block diagram showing the structure of another embodiment of the decoding apparatus of the present invention.
  • the motion vector field coding method provided by the embodiment of the present invention is described below.
  • the execution body of the motion vector field encoding method provided by the embodiments of the present invention is an encoding device, where the encoding device may be any device that needs to output or store video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, or a video server.
  • a motion vector field coding method includes: acquiring an original signal of a current motion vector field block, where the current motion vector field block is obtained by dividing a current motion vector field into blocks, and the current motion vector field is the motion vector field corresponding to a video frame at time t; acquiring a prediction signal of the current motion vector field block and prediction information of the current motion vector field block, where the prediction information is used to indicate the information required to acquire the prediction signal; calculating, based on the prediction signal and the original signal, a prediction residual signal of the current motion vector field block, the prediction residual signal being used to indicate the residual between the original signal and the prediction signal; and writing the prediction information and the prediction residual signal to the code stream.
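  • For illustration only, the following Python sketch shows how these four steps fit together at the encoder; the block size, the array layout of the motion vector field, and the helper callables get_prediction and write_to_stream are assumptions introduced here rather than elements defined by the embodiments.

```python
import numpy as np

def encode_motion_vector_field(current_field, block_size, get_prediction, write_to_stream):
    """current_field  : (H, W, 2) array holding one 2-D motion vector per sampling point.
    get_prediction : callable(original_block, y, x) -> (prediction_signal, prediction_info)  [assumed helper]
    write_to_stream: callable(prediction_info, residual)                                     [assumed helper]
    """
    height, width, _ = current_field.shape
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            # Original signal of the current motion vector field block.
            original = current_field[y:y + block_size, x:x + block_size]
            # Prediction signal plus the information the decoder needs to reproduce it.
            prediction, prediction_info = get_prediction(original, y, x)
            # Prediction residual signal: the residual between original and prediction.
            residual = original - prediction
            write_to_stream(prediction_info, residual)
```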
  • FIG. 1 is a schematic flowchart of a motion vector field coding method according to an embodiment of the present invention.
  • a motion vector field coding method according to an embodiment of the present invention may include the following steps.
  • a motion vector field is motion information of one image relative to another.
  • The motion vector field is used to describe motion information of a target video frame relative to a reference video frame of the target video frame, where the target video frame includes a plurality of image blocks, each image block having a corresponding matching block in the reference video frame.
  • Each sampling point in the motion vector field corresponds one-to-one with an image block in the target video frame, and the value of each sampling point is the motion vector of the image block corresponding to the sampling point, where the motion vector is the displacement information of the image block relative to its matching block in the reference video frame.
  • the motion vector field is divided into different motion vector field blocks, and the motion vector field is compression-coded by encoding each motion vector field block, where one motion vector field block includes at least one sampling point.
  • the current motion vector field block to be compressed is referred to as a current block
  • the motion vector field in which the current block is located is referred to as a current field.
  • the division of the motion vector field into blocks does not necessarily correspond to the division of the corresponding video frame into blocks. For the method of dividing the motion vector field into blocks, reference may be made to the method of dividing a video frame into blocks, which is not limited herein.
  • The HEVC standard currently provides 35 intra prediction modes for video frames, including 33 directional prediction modes plus the Intra_DC and Intra_Planar modes.
  • these intra prediction modes can be applied to the intra prediction of the current field.
  • at least one encoded and reconstructed motion vector field block adjacent to the current block in the current field is used as a reference motion vector field block of the current block, and the prediction signal of the current block is acquired according to the intra prediction mode and the reference motion vector field block.
  • the motion vector field block adjacent to the current block may be a motion vector field block directly adjoining the current block (that is, connected to it), or a motion vector field block separated from the current block by a preset number of motion vector field blocks, which is not limited herein. In practical applications, the coding order of the motion vector field blocks in the current field is from left to right and from top to bottom; therefore, adjacent, encoded, and reconstructed motion vector field blocks located at the left, bottom, top, or top right of the current block are generally selected as reference motion vector field blocks.
  • for example, if the acquired intra prediction mode is the horizontal prediction mode among the 33 directional prediction modes, the reference motion vector field block of the current block is the first motion vector field block to its left in the same row, and the reconstructed signal of the reference motion vector field block is used as the prediction signal of the current block.
  • for another example, if the acquired intra prediction mode is the Intra_DC mode, then after the reference motion vector field block of the current block is acquired, the average of the reconstructed values of the reference motion vector field block is used as the prediction signal of the current block.
  • the prediction information is an index of the intra prediction mode and an index of the reference motion vector field block.
  • the prediction information may not include an index of the reference motion vector field block.
  • the encoding device and the decoding device pre-determine the position of the reference motion vector field block used by the current block corresponding to each intra prediction mode with respect to the current block.
  • the encoding apparatus and the decoding apparatus further predetermine the method of calculating the prediction signal corresponding to each intra prediction mode. In this way, after receiving the prediction information, the decoding device calculates the prediction signal by using the predetermined calculation method, according to the prediction mode in the prediction information and the index of the reference motion vector field block.
  • when acquiring the prediction signal of the current block, an intra prediction mode may be determined directly, used to calculate the prediction signal of the current block, and its index included in the prediction information. Alternatively, each intra prediction mode may be traversed: the prediction signal of the current block is calculated with each intra prediction mode, the difference between the prediction signal and the original signal of the current block (that is, the prediction residual signal mentioned below) is compared across modes, the index of the intra prediction mode whose prediction residual signal has the smallest energy is included in the prediction information, and that prediction signal is used to calculate the subsequent prediction residual signal.
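  • As a rough sketch of the mode-traversal alternative just described, the code below evaluates each candidate intra prediction mode and keeps the one whose prediction residual signal has the smallest energy; predict_with_mode is a hypothetical per-mode predictor standing in for whichever prediction rule the encoder and decoder have agreed on.

```python
import numpy as np

def select_intra_mode(original_block, candidate_modes, predict_with_mode):
    """Traverse every candidate intra prediction mode and keep the one whose
    prediction residual signal has the smallest energy (sum of squares)."""
    best = None
    for mode in candidate_modes:
        prediction = predict_with_mode(original_block, mode)   # hypothetical per-mode predictor
        residual = original_block - prediction
        energy = float(np.sum(residual ** 2))                  # energy of the prediction residual signal
        if best is None or energy < best[0]:
            best = (energy, mode, prediction)
    _, mode, prediction = best
    # The mode index goes into the prediction information; the prediction signal
    # is used to compute the prediction residual signal written to the stream.
    return mode, prediction
```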
  • the prediction residual signal is used to indicate a difference between the original signal and the prediction signal, wherein the prediction signal is a prediction signal acquired according to prediction information. After obtaining the prediction signal of the current block, calculating the difference between the original signal and the prediction signal of the current block, the prediction residual signal of the current block is obtained.
  • the decoding device can predict the current block according to the prediction information to obtain the prediction signal of the current block and, combined with the prediction residual signal of the current block, can calculate the original signal of the current block. Therefore, in the encoding apparatus, when the current block is compression-coded, only the prediction information and the prediction residual signal need to be transmitted to the decoding device for the decoding device to acquire the original signal of the current block.
  • the encoding of the prediction residual signal may refer to the encoding of the prediction residual signal of the video frame in the video standard.
  • the prediction residual signal is first compressed, and then the compressed data is written into the code stream.
  • Lossless compression means that after compression, the reconstructed motion vector field signal is exactly the same as the original signal, and there is no loss of information.
  • Lossy compression means that after compression, the reconstructed motion vector field signal is not exactly the same as the original signal, and there is a certain information loss.
  • the process of lossless compression may include transform and entropy coding.
  • the process of lossy compression can include transform, quantization, and entropy coding. This is a prior art and will not be described here.
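  • Purely as an illustration of the order of those steps, the sketch below pushes one residual block through a transform and uniform quantization; the DCT-II transform and the quantization step size are stand-ins rather than the specific tools required by the embodiments, and entropy coding is omitted.

```python
import numpy as np

def lossy_compress_residual(residual, qstep=4.0):
    """residual: square 2-D array (one component of a motion vector residual block).
    Illustrates the order transform -> quantization; entropy coding is omitted."""
    n = residual.shape[0]
    k = np.arange(n)
    # Stand-in transform: orthonormal DCT-II basis matrix.
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)
    coeffs = basis @ residual @ basis.T                  # 2-D separable transform
    levels = np.round(coeffs / qstep)                    # uniform quantization: the lossy step
    # 'levels' would then be entropy-coded into the code stream.
    reconstructed = basis.T @ (levels * qstep) @ basis   # inverse quantization + inverse transform
    return levels, reconstructed
```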
  • since only the prediction information and the prediction residual signal are encoded, the amount of information to be encoded is greatly reduced, and the compression efficiency of the motion vector field is improved. Moreover, the original signal of the current block can be restored to a high degree from the prediction information and the prediction residual signal.
  • the method of intra-field prediction may be adopted, that is, the current block is predicted according to the current field. Since there is a certain correlation between the current block and the spatially adjacent motion vector field, the current block can be predicted according to the intra-field prediction method.
  • intra-field prediction methods include angle prediction and intra-area partition prediction.
  • the method of angle prediction may acquire the prediction signal of the current block according to the intra prediction mode and the reference motion vector field block of the current block described in the explanation of step 102.
  • the intra-area partition prediction method is specifically: dividing the current block into at least two regions, and acquiring a motion vector in each region as the prediction signal of the region. Therefore, in the intra-area partition prediction method, the prediction information includes information of the region division method and information of the method for determining the prediction signal of each region.
  • the information of the area dividing method is used to indicate the area dividing method, for example, an index of the area dividing method.
  • the information of the method of determining the prediction signal for each region is used to indicate a method of determining a prediction signal for each region, such as an index of a method for determining a prediction signal for each region.
  • the prediction information may not include the information of the method of determining the prediction signal of each region, but the encoding device and the decoding device store the same predetermined method for determining the prediction signal of each region. There are no restrictions here.
  • for example, the average of the motion vectors of all sampling points in a region may be used as the prediction signal of the region, or the motion vector of one sampling point in the region may be used as the prediction signal of the region, or some other method may be used, which is not limited herein.
  • when the motion vector of one sampling point in the region is taken as the prediction signal of the region, all sampling points in the region may be traversed once, and the motion vector of the sampling point that minimizes the energy of the prediction residual signal is chosen as the prediction signal.
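  • The two options above can be made concrete as follows; this is a minimal sketch in which a region is represented simply as the list of motion vectors of its sampling points, and the option names are illustrative.

```python
import numpy as np

def region_prediction(region_vectors, mode="mean"):
    """region_vectors: (N, 2) motion vectors of the sampling points in one region.
    Returns the single motion vector used as the prediction signal of the region."""
    region_vectors = np.asarray(region_vectors, dtype=float)
    if mode == "mean":
        # Average of the motion vectors of all sampling points in the region.
        return region_vectors.mean(axis=0)
    # Otherwise traverse every sampling point and keep the motion vector that
    # minimizes the energy of the prediction residual over the whole region.
    best_vec, best_energy = None, None
    for candidate in region_vectors:
        energy = float(np.sum((region_vectors - candidate) ** 2))
        if best_energy is None or energy < best_energy:
            best_vec, best_energy = candidate, energy
    return best_vec
```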
  • the current block is first divided into two regions; then at least one of the two regions is divided into two regions again, and the process continues until a preset condition is satisfied, at which point the segmentation of regions stops.
  • the preset condition has various settings.
  • for example, the distortion value of the current block after each segmentation can be calculated, where the distortion value is the maximum, over the regions of the current block after segmentation, of the difference between the prediction signal and the original signal of each region; when the distortion value is less than a preset value, the preset condition is satisfied.
  • the above is only an example description of the preset conditions, and is not limited.
  • alternatively, a preset value may be set in advance; after each division, the number of regions into which the current block has been divided is counted, and when that number reaches the preset value, the preset condition is satisfied.
  • one of the regions or two of the regions may have a trapezoidal shape or a triangular shape.
  • the regions may also have other shapes, which are not limited herein.
  • each partitioning method of a region is traversed, the prediction signals of the two sub-regions produced by each segmentation method are calculated, and an optimal segmentation method is determined according to the prediction signals of the two sub-regions.
  • the rate distortion optimization principle can be used to determine an optimal segmentation method.
  • when the current block is divided into different regions, it may be segmented continuously or discretely. Continuous and discrete segmentation are explained below in conjunction with FIG. 2.
  • the continuous segmentation refers to directly dividing the current block S into two regions P1 and P2 by using a straight line L.
  • discrete segmentation means that the current block S is composed of a plurality of pixel blocks and, when segmented, the current block is cut along the edges of the pixel blocks, so that the current block S is divided into two regions P1 and P2.
  • when the current block is divided into different regions, it may also be divided along contours, that is, divided according to the contour of the object represented by the image corresponding to the current block, which is not limited herein.
  • the method of inter-field prediction may be adopted, that is, the current block is predicted by using a reference motion vector field according to the correlation between the current block and the temporally adjacent motion vector field, wherein the reference motion vector field refers to Other motion vector fields that are adjacent to the current field time.
  • the temporally adjacent motion vector field may be a motion vector field immediately adjacent to the current field (that is, the motion vector field of the previous or next video frame of the video frame corresponding to the current field), or a motion vector field separated from the current field by at least one motion vector field (that is, the motion vector field of a video frame separated by at least one video frame from the video frame corresponding to the current field).
  • Embodiment 1: Since the motion of an object has a certain consistency in time, the motion vector field has a certain temporal correlation; that is, at least part of the current block appears in the reference motion vector field, but its position in the current field and its position in the reference motion vector field are not necessarily the same.
  • the position of a motion vector field block in a motion vector field refers to the coordinate range of the motion vector field block in the motion vector field. Thus, saying that the position of motion vector field block p in motion vector field P is the same as the position of motion vector field block q in motion vector field Q means that the coordinate range of p in P is the same as the coordinate range of q in Q.
  • acquiring the prediction signal of the current block specifically includes: determining a reference motion vector field of the current block, searching the reference motion vector field for a matching block of the current block, and using the reconstructed signal of the matching block as the prediction signal of the current block.
  • There are various methods for finding the matching block, for example, traversing each motion vector field block in the reference motion vector field, calculating the difference between that motion vector field block and the current block, and using the motion vector field block with the smallest difference from the current block as the matching block of the current block.
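  • One straightforward realization of this matching-block search, assuming a sum-of-absolute-differences measure and an exhaustive scan of the reference motion vector field, is sketched below.

```python
import numpy as np

def find_matching_block(current_block, reference_field):
    """current_block  : (bh, bw, 2) motion vector field block to be predicted
    reference_field: (H, W, 2) reference motion vector field
    Returns the (y, x) position of the candidate block with the smallest difference."""
    bh, bw, _ = current_block.shape
    H, W, _ = reference_field.shape
    best_pos, best_cost = None, None
    for y in range(H - bh + 1):
        for x in range(W - bw + 1):
            candidate = reference_field[y:y + bh, x:x + bw]
            cost = float(np.sum(np.abs(candidate - current_block)))  # sum of absolute differences
            if best_cost is None or cost < best_cost:
                best_pos, best_cost = (y, x), cost
    return best_pos
```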
  • Obtaining the prediction information of the current block specifically includes: using the information of the reference motion vector field and the information of the matching block as the prediction information, where the information of the reference motion vector field is used to indicate the reference motion vector field and the information of the matching block is used to indicate the matching block.
  • the information of the reference motion vector field may be an index of the reference motion vector field. The information of the matching block may be displacement information of the position of the matching block relative to the position of a first motion vector field block in the reference motion vector field, where the position of that first motion vector field block in the reference motion vector field is the same as the position of the current block in the current field; alternatively, the information of the matching block may be an index of the matching block, which is not limited herein.
  • Embodiment 2: In this embodiment, assuming that the motion state of the same object captured in the video sequence remains unchanged over a short time, that is, the motion direction and magnitude are unchanged, the prediction signal of the current block can be derived from the reference motion vector field.
  • acquiring the prediction signal of the current block specifically includes:
  • the first reference motion vector field is a coded and reconstructed motion vector field adjacent to the current field.
  • the video frame corresponding to the current field is referred to as a video frame at time t
  • the video frame corresponding to the first reference motion vector field is referred to as a video frame at time t1.
  • the time t1 may be before the time t or after the time t, and is not limited herein.
  • the relationship between the motion vector field and the video frames is explained first. Suppose the position of a target object in the video frame at time t1 is A, and the reference video frame used for inter prediction of that video frame is the video frame at time t2, so that the motion vector MV of the target object at time t1 describes its displacement from its position in the reference frame at time t2 to position A. If the motion state of the target object remains unchanged, its displacement during the interval from t1 to t should be MV × (t-t1)/(t1-t2); that is, if the position of the target object in the video frame at time t is C, then the displacement from position A to position C should be MV × (t-t1)/(t1-t2).
  • each sample point in the first reference motion vector field is regarded as a target object, and the position to which each sample point is moved at time t can be derived.
  • a new motion vector field formed by moving each sample point in the first reference motion vector field according to the above rule and changing the motion vector is referred to as a second reference motion vector field.
  • when each sampling point in the first reference motion vector field is moved to obtain the second reference motion vector field, at least two sampling points of the first reference motion vector field may move to the same position in the motion vector field at time t. In other words, when every sampling point of the first reference motion vector field keeps its current speed and direction unchanged, such collisions may appear in the motion vector field formed at time t (that is, the second reference motion vector field).
  • the value of the sampling point at such a position in the second reference motion vector field can be determined in various ways; for example, the product of the motion vector of one of the colliding sampling points and (t-t1)/(t1-t2) can be taken as the motion vector of the sampling point at that position.
  • the motion vector of the position is determined as follows:
  • the position of the second sampling point is the common position reached when, starting from the position of each of at least two first sampling points of the first reference motion vector field, each of the at least two first sampling points is moved by its respective displacement, where the displacement of each of the at least two first sampling points is the product of the motion vector of that first sampling point and (t-t1)/(t1-t2).
  • a product of a weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) is used as a motion vector of the second sampling point.
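  • The derivation of the second reference motion vector field described above can be sketched as follows, under simplifying assumptions: motion vectors are stored as (horizontal, vertical) offsets in sampling-point units, and positions reached by several first sampling points take the plain average of the deposited vectors rather than the similarity-based weights discussed next.

```python
import numpy as np

def derive_second_reference_field(first_field, t, t1, t2):
    """first_field: (H, W, 2) motion vector field of the video frame at time t1,
    whose motion vectors were estimated against the reference frame at time t2.
    Returns the second reference motion vector field (a prediction of the motion
    vector field at time t) and a mask of the positions no first sampling point
    moved to (the "special positions")."""
    H, W, _ = first_field.shape
    scale = (t - t1) / (t1 - t2)
    accum = np.zeros((H, W, 2), dtype=float)
    count = np.zeros((H, W), dtype=int)
    for y in range(H):
        for x in range(W):
            mv = first_field[y, x].astype(float)
            disp = mv * scale                      # displacement of this first sampling point from t1 to t
            ty = int(np.rint(y + disp[1]))         # assumption: mv = (horizontal, vertical) in sampling-point units
            tx = int(np.rint(x + disp[0]))
            if 0 <= ty < H and 0 <= tx < W:
                accum[ty, tx] += mv * scale        # scaled motion vector deposited at the landing position
                count[ty, tx] += 1
    second_field = np.zeros_like(accum)
    hit = count > 0
    second_field[hit] = accum[hit] / count[hit][:, None]
    return second_field, ~hit                      # ~hit marks the special positions still to be filled
```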
  • for convenience of description, the second sampling point is referred to as the target second sampling point, and the at least two first sampling points of the first reference motion vector field are referred to as a first set.
  • the weights of the motion vectors of the first sampling points in the first set may be preset to be equal; that is, the average of the motion vectors of the first sampling points in the first set is taken as the motion vector of the target second sampling point.
  • determining the weights of the motion vectors of the first sampling points in the first set specifically includes:
  • At least one second sampling point located around the second sampling point of the target is referred to as a second set, and each second sampling point is an element of the second set.
  • the second set may include at least one of four second sampling points located around the target second sampling point and adjacent to the target second sampling point.
  • for each first sampling point in the first set, the degree of similarity between the motion vector of that first sampling point and the motion vectors of the sampling points in the second set is calculated. For example, the difference between the motion vector of the first sampling point and the motion vector of each element of the second set may be calculated, and the sum or average of these differences may be used as the degree of similarity between the motion vector of the first sampling point and the motion vectors of the second set.
  • after the similarity degrees of the motion vectors of the elements in the first set are determined, the weights of the motion vectors of the elements in the first set are determined according to the magnitude of the similarity, where the higher the similarity of an element, the larger the weight of its motion vector. Specifically, weights corresponding to different rankings are set in advance, and after the ranking of the similarity degree of each element in the first set is determined, the weight corresponding to the ranking of the element is taken as the weight of the motion vector of that element.
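  • A minimal sketch of this similarity-and-ranking rule is given below; the concrete rank-to-weight table is an assumption, since the embodiments only require that weights corresponding to different rankings be preset.

```python
import numpy as np

def collision_vector_by_similarity(first_set_mvs, second_set_mvs, rank_weights=None):
    """first_set_mvs : (K, 2) motion vectors of the first sampling points that map to the target position
    second_set_mvs: (M, 2) motion vectors of the second sampling points around the target second sampling point
    Returns the weighted average of the first-set motion vectors, weighted by similarity rank."""
    first_set_mvs = np.asarray(first_set_mvs, dtype=float)
    second_set_mvs = np.asarray(second_set_mvs, dtype=float)
    k = len(first_set_mvs)
    if rank_weights is None:
        rank_weights = np.arange(k, 0, -1, dtype=float)   # assumed preset table: best rank gets the largest weight
    # Similarity measure: sum of absolute differences to the second set (smaller sum = more similar).
    difference = np.array([np.sum(np.abs(second_set_mvs - mv)) for mv in first_set_mvs])
    order = np.argsort(difference)                        # order[0] is the most similar first sampling point
    weights = np.empty(k)
    weights[order] = rank_weights[:k]
    weights /= weights.sum()
    return (weights[:, None] * first_set_mvs).sum(axis=0)
```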
  • a special position may also appear on the second reference motion vector field, wherein no sample point in the first reference motion vector field moves to the special position at time t.
  • the motion vector of the one second sampling point is used as the motion vector of the target second sampling point;
  • a weighted average of motion vectors of the at least two second sampling points is used as a motion vector of the target second sampling point.
  • the at least one second sample point is hereinafter referred to as a third set.
  • the weights of the motion vectors of the second sampling points in the third set may be equal; that is, the average of the motion vectors of the second sampling points in the third set is taken as the motion vector of the target second sampling point.
  • determining the weights of the motion vectors of the second sampling points in the third set specifically includes:
  • the weighted average of the motion vectors of the second sampling points in the third set is taken as the motion vector of the target second sampling point, with the weights determined by distance. For example, suppose the second sampling point closest to the left of the target second sampling point has motion vector MV_L and its distance from the target second sampling point is m, and the second sampling point closest to the right of the target second sampling point has motion vector MV_R and its distance from the target second sampling point is n; then the motion vector of the target second sampling point is (n·MV_L + m·MV_R)/(m+n).
  • the above is merely an example and is not limiting.
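  • The filling of such special positions can be sketched as follows, assuming the general rule behind the left/right example above is an inverse-distance weighting of the neighbouring second sampling points.

```python
import numpy as np

def fill_special_position(neighbor_mvs, neighbor_dists):
    """neighbor_mvs : (M, 2) motion vectors of neighbouring second sampling points (the third set)
    neighbor_dists: (M,) distances of those neighbours from the target second sampling point
    Returns the motion vector assigned to the target second sampling point."""
    neighbor_mvs = np.asarray(neighbor_mvs, dtype=float)
    neighbor_dists = np.asarray(neighbor_dists, dtype=float)
    if len(neighbor_mvs) == 1:
        return neighbor_mvs[0]          # only one neighbouring second sampling point: use its motion vector
    weights = 1.0 / neighbor_dists      # assumed rule: closer neighbours get larger weights
    weights /= weights.sum()
    return (weights[:, None] * neighbor_mvs).sum(axis=0)

# With two neighbours MV_L and MV_R at distances m and n this reduces to (n*MV_L + m*MV_R) / (m + n).
```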
  • the coordinate range of the motion vector field block of the second reference motion vector field in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field.
  • the second reference motion vector field is the predicted motion vector field at time t; that is, the second reference motion vector field is a prediction signal of the current field. Since different blocks in the current field do not all adopt the method described in this embodiment, when the current block uses the method described in this embodiment, the signal of the motion vector field block of the second reference motion vector field located at the region of the current block serves as the prediction signal of the current block.
  • therefore, by simply using the information of the first reference motion vector field as the prediction information, the decoding device can calculate the prediction signal of the current block.
  • the information of the first reference motion vector field is used to indicate the first reference motion vector field.
  • the information of the first reference motion vector field is an index of the first reference motion vector field.
  • since the prediction information of the current block includes only the information of the first reference motion vector field, the number of bits required to code the current block is greatly reduced.
  • motion vectors can also be replaced with one of the components of the motion vector.
  • the method of motion vector field component prediction may be adopted, that is, the other component is predicted according to the direction coefficient of the current block and one component of the motion vector of each sample point in the current block.
  • the motion vector has a direction and a magnitude and can be decomposed into a horizontal component and a vertical component.
  • suppose the angle between the motion vector and the horizontal direction is θ and its magnitude is |MV|; then the magnitudes of the horizontal and vertical components of the motion vector are |MV|·cos θ and |MV|·sin θ respectively. From this it follows that the vertical component equals the horizontal component multiplied by tan θ, and the horizontal component equals the vertical component divided by tan θ.
  • the encoding device generally stores the motion vector of the sample point by storing the horizontal component and the vertical component of the motion vector of each sample point.
  • the prediction signal of the other component of the motion vector is calculated by the magnitude of one component of the motion vector and the relationship between the component and the other component.
  • obtaining the prediction signal of the current block specifically includes the following steps:
  • the motion vector of the sample point can be decomposed into a vertical component and a horizontal component.
  • one of the vertical component and the horizontal component of the motion vector is referred to as the first component, and the other component is referred to as the second component of the motion vector.
  • the current block may be decomposed into a first component block and a second component block, wherein the first component block includes a first component of each sampling point in the current block, and the second component block includes The second component of each sample point in the current block.
  • The direction coefficient of the current block is used to indicate the relationship between the value of the first component and the value of the second component of each sampling point; that is, when the current block is predicted, the functional relationship between the first component and the second component is assumed to be the same for all sampling points in the current block.
  • When the current block is compressed, both the first component block and the second component block of the current block are compressed.
  • The first component block may be encoded by the method in the embodiment shown in FIG. 1, or by the intra-field prediction or inter-field prediction methods described above, which is not limited herein.
  • The predicted value of each sampling point in the second component block may be calculated from the first component of the sampling point and the direction coefficient.
  • The decoding device calculates the predicted value of the second component of each sampling point based on the direction coefficient and the reconstructed value of the first component of that sampling point. Therefore, before calculating the prediction signal of the current block, the encoding device first acquires the direction coefficient and the reconstructed value of the first component of each sampling point in the current block. Note that because the amount of information in the direction coefficient is small, it is generally coded losslessly, so the encoding device does not need to acquire a reconstructed value of the direction coefficient and can directly use its original value.
  • Since the decoding end can obtain the reconstructed value of the first component, when the prediction signal of the current block includes the predicted value of the second component, the decoding end can obtain the prediction signal of the current block from the predicted value of the second component and the reconstructed value of the first component.
  • the direction coefficient of the current block is used as the direction coefficient of the sampling point, that is, the second component of the sampling point is calculated according to the first component of the sampling point and the direction coefficient of the current block.
  • the prediction signal for each sample point in the current block includes the predicted value of the second component.
  • Because the prediction information of the current block includes only one direction coefficient, the number of bits required to encode the current block is small.
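  • A rough sketch of this component prediction is given below, assuming the simplest linear relationship v_y ≈ k·v_x with a single direction coefficient k; the array and function names are illustrative only and not taken from the embodiment:

```python
import numpy as np

def predict_second_component(first_component_recon, k):
    """Predict the second component of every sampling point in the block from
    the reconstructed first component and the block's direction coefficient k,
    under the assumed relationship second = k * first."""
    return k * first_component_recon

# 4x4 block of reconstructed horizontal components and a direction coefficient.
vx_recon = np.array([[2.0, 2.1, 1.9, 2.0]] * 4)
pred_vy = predict_second_component(vx_recon, k=0.5)
residual_vy = np.zeros_like(pred_vy)     # decoded prediction residual of the second component
vy_recon = pred_vy + residual_vy         # reconstructed second component
```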
  • Example 1 as shown in Figure 6, obtaining the direction coefficients of the current block includes:
  • the first component of the motion vector of the at least two sampling points is used as an independent variable of the preset function
  • the second component of the motion vector of the at least two sampling points is used as a function value corresponding to the independent variable.
  • the independent variable and the function value are fitted, and the coefficient of the preset function obtained by the fitting is used as the direction coefficient.
  • The motion vectors of at least two sampling points in the current block are fitted, where the first component of the motion vector of each sampling point is used as the independent variable and the second component is used as the function value of the preset function; in this way a functional relationship between the first component and the second component of the sampling points in the current block is obtained.
  • For example, the sampling points may be fitted to a straight line of the form v_y = k·v_x, in which case k is the direction coefficient of the current block; this means that all points in the current block move in the same direction.
  • The sampling points may also be fitted to a straight line of the form v_y = a·v_x + b, in which case a and b are the direction coefficients of the current block.
  • More generally, when the sampling points are fitted to a curve, the coefficients in the function equation corresponding to the curve are the direction coefficients of the current block.
  • In this case the direction coefficient information of the current motion vector field block, carried as part of the prediction information, includes the value of the direction coefficient.
  • The encoding device and the decoding device need to agree in advance on the function equations corresponding to different numbers of coefficients, so that the encoding device only needs to write the direction coefficients into the code stream, and the decoding device can determine the corresponding function equation from the number of values included in the direction coefficient information.
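  • A minimal least-squares sketch of Example 1, assuming the preset function is the straight line v_y = k·v_x + b (numpy's polyfit is used here purely for illustration; it is not mandated by the embodiment):

```python
import numpy as np

def fit_direction_coefficients(vx, vy, degree=1):
    """Fit the second components (vy) as a function of the first components (vx).

    Returns the coefficients of the fitted polynomial, highest order first;
    for degree=1 these are (k, b) of the line vy = k*vx + b.
    """
    return np.polyfit(np.asarray(vx, dtype=float), np.asarray(vy, dtype=float), degree)

# First and second components of the motion vectors of sampling points in the current block.
vx = [1.0, 2.0, 3.0, 4.0]
vy = [0.6, 1.1, 1.4, 2.0]
k, b = fit_direction_coefficients(vx, vy)
print(k, b)   # direction coefficients carried in the direction coefficient information
```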
  • Example 2 As shown in FIG. 7, obtaining the direction coefficients of the current block includes:
  • The first component of the motion vector of the at least two sampling points is used as the independent variable, the second component of the motion vector of the at least two sampling points is used as the function value corresponding to the independent variable, and the independent variable and the function value are fitted.
  • The candidate prediction residual signals of the current motion vector field block corresponding to the candidate direction coefficients of the candidate direction coefficient set are obtained, and the candidate direction coefficient whose candidate prediction residual signal has the minimum signal energy or the minimum rate-distortion cost is used as the direction coefficient of the current motion vector field block.
  • In Example 2 the coefficient of the fitting function is not directly used as the direction coefficient of the current block.
  • Because the image region corresponding to the current block may belong to the same object as the image region corresponding to one of the already coded blocks, the direction coefficient of the current block may be the same as the direction coefficient of one of those coded blocks.
  • Therefore, the direction coefficients of the coded blocks are also obtained, and both the coefficient of the fitting function and the direction coefficients of the coded blocks are used as candidates in the candidate direction coefficient set of the current block.
  • For each candidate direction coefficient, a candidate prediction signal and a candidate prediction residual signal of the current block are calculated, and the candidate direction coefficient corresponding to the candidate prediction residual signal with the smallest energy is used as the direction coefficient of the current block.
  • As for the direction coefficient information: when the direction coefficient of the current block is a coefficient of the fitting function, the direction coefficient information includes the value of the direction coefficient; when the direction coefficient of the current block is the direction coefficient of a coded block, the direction coefficient information is used to indicate that coded block or includes the value of the direction coefficient of that coded block, which is not limited here.
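  • Example 2 can be sketched as the small selection loop below; it assumes a helper predict_block(first_component, k) that applies a candidate coefficient (both names are ours, not from the embodiment) and picks the candidate with the smallest residual energy (a rate-distortion cost could be substituted for the energy term):

```python
import numpy as np

def choose_direction_coefficient(candidates, vx_block, vy_block, predict_block):
    """Return the candidate coefficient whose prediction residual has the
    least energy over the current block."""
    best_k, best_energy = None, float("inf")
    for k in candidates:
        residual = vy_block - predict_block(vx_block, k)
        energy = float(np.sum(residual ** 2))
        if energy < best_energy:
            best_k, best_energy = k, energy
    return best_k

vx_block = np.array([1.0, 2.0, 3.0, 4.0])
vy_block = np.array([0.6, 1.1, 1.4, 2.0])
# Candidates: the fitted coefficient plus the coefficients of neighbouring coded blocks.
candidates = [0.48, 0.5, 0.55]
k = choose_direction_coefficient(candidates, vx_block, vy_block,
                                 predict_block=lambda vx, k: k * vx)
```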
  • Example 3 Obtaining the direction coefficient of the current block includes:
  • When the direction coefficients of at least two encoded motion vector field blocks adjacent to the current motion vector field block in the current motion vector field are the same, those direction coefficients are used as the direction coefficient of the current motion vector field block.
  • When the direction coefficients of at least two coded blocks adjacent to the current block are the same, it may be inferred that the images of those coded blocks belong to the same object; it is then assumed that the image of the current block belongs to the same object as well, so the direction coefficient of the current block is directly determined to be the same as the direction coefficients of the at least two coded blocks.
  • In this case the direction coefficient information of the current motion vector field block is used to indicate one of those coded blocks or includes the value of the direction coefficient of the coded block; for example, the direction coefficient information may include an index of the coded block, which is not limited here.
  • Example 4 As shown in FIG. 8, obtaining the direction coefficients of the current block includes:
  • The direction coefficients of at least two encoded motion vector field blocks adjacent to the current motion vector field block in the current motion vector field are used as candidate direction coefficients of the candidate direction coefficient set of the current block.
  • The motion vector of at least one sampling point of the original signal is also obtained. When the at least one sampling point is one sampling point, the ratio of the first component of the motion vector of the one sampling point to the second component of the motion vector of the one sampling point is used as a further candidate direction coefficient of the candidate direction coefficient set; when the at least one sampling point is at least two sampling points, the average of the ratios of the first components of their motion vectors to the second components is used as a further candidate direction coefficient.
  • Because the image regions corresponding to the sampling points in the current block may belong to the same object, the direction coefficient of the current block may be the same as the component ratio of one of those sampling points.
  • The candidate prediction residual signals of the current block corresponding to the candidate direction coefficients are then obtained, and the candidate direction coefficient whose candidate prediction residual signal has the minimum signal energy or the minimum rate-distortion cost is used as the direction coefficient of the current block.
  • As for the direction coefficient information: when the direction coefficient of the current block is not the direction coefficient of a coded block, the direction coefficient information includes the value of the direction coefficient; when the direction coefficient of the current block is the direction coefficient of a coded block, the direction coefficient information is used to indicate that coded block or includes the value of its direction coefficient, which is not limited herein.
  • As described above, different motion vector field blocks may adopt different methods of acquiring the prediction signal and the prediction information, and there are various methods for determining the prediction signal and the prediction information used by the current block.
  • For example, the encoding device and the decoding device may agree in advance on an index corresponding to each acquisition method.
  • When the encoding device acquires the prediction signal and the prediction information of the current block, it traverses the acquisition methods, calculates the prediction residual signal under each method, determines the acquisition method whose prediction residual signal has the smallest energy as the acquisition method of the current block, and includes the index of that method in the prediction information of the current block.
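  • The mode decision described above can be sketched as follows; each acquisition method is represented here as a function returning a candidate prediction signal for the block (the function and index names are illustrative, not part of the embodiment):

```python
import numpy as np

def select_acquisition_method(original_block, methods):
    """Try every agreed acquisition method, keep the one whose prediction
    residual has the smallest energy, and return its index together with
    the residual that will be written into the code stream."""
    best = None
    for index, method in enumerate(methods):
        prediction = method(original_block)
        residual = original_block - prediction
        energy = float(np.sum(residual ** 2))
        if best is None or energy < best[2]:
            best = (index, residual, energy)
    return best[0], best[1]

block = np.random.randn(8, 8, 2)               # 8x8 block of (vx, vy) samples
methods = [lambda b: np.zeros_like(b),         # stand-in for one prediction method
           lambda b: np.broadcast_to(b.mean(axis=(0, 1)), b.shape).copy()]  # mean predictor
mode_index, residual = select_acquisition_method(block, methods)
```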
  • the motion vector field encoding method of the embodiment of the present invention has been described above. The following is a description of the motion vector field decoding method provided by the embodiment of the present invention.
  • The execution body of the motion vector field decoding method provided by the embodiment of the present invention is a decoding device, which may be any device that needs to output and play video, such as a mobile phone, a laptop computer, a tablet computer, or a personal computer.
  • A motion vector field decoding method includes: acquiring prediction information and a prediction residual signal of a current motion vector field block, where the current motion vector field block is obtained by dividing a current motion vector field and the current motion vector field is the motion vector field corresponding to the video frame at time t; acquiring a prediction signal of the current motion vector field block according to the prediction information; and calculating a reconstruction signal of the current motion vector field block according to the prediction signal and the prediction residual signal.
  • FIG. 9 is a schematic flowchart of a motion vector field decoding method according to another embodiment of the present invention. The method can include the following steps:
  • After receiving the video code stream, the decoding device decodes it to restore each video image in the original video sequence.
  • A video frame is decoded using its reference frame and its motion vector field.
  • Therefore, the decoding device needs to first decode the reference frame of the video frame and the motion vector field.
  • the motion vector field to be currently decoded is referred to as a current motion vector field.
  • the current motion vector field block is obtained by dividing a current motion vector field, where the current motion vector field is a motion vector field corresponding to a video frame at time t.
  • the reconstruction signal of the current motion vector field is obtained by sequentially reconstructing each motion vector field block in the current motion vector field.
  • the prediction information of the current block and the prediction residual signal are first acquired from the video code stream.
  • the content of the prediction information is different, and the method of acquiring the prediction signal of the current block according to the prediction information is also different.
  • Acquiring the prediction signal of the current motion vector field block according to the prediction information specifically includes: determining an intra prediction mode according to an index of the intra prediction mode, determining a reference motion vector field block according to a reference motion vector field block index, and then acquiring the prediction signal of the current block according to the intra prediction mode and the reference motion vector field block.
  • The encoding device and the decoding device agree in advance on a method of calculating the prediction signal corresponding to each intra prediction mode. In this way, after receiving the prediction information, the decoding device calculates the prediction signal with the pre-agreed calculation method, according to the prediction mode in the prediction information and the index of the reference motion vector field block.
  • For example, if the acquired intra prediction mode is the horizontal prediction mode among the 35 intra prediction modes for video frames provided in the HEVC standard, the reconstructed signal of the reference motion vector field block of the current block is used as the prediction signal of the current block.
  • If the acquired intra prediction mode is the Intra_DC mode among the 35 intra prediction modes, the average value of the reconstructed samples of the reference motion vector field block is used as the prediction signal of the current block.
  • Alternatively, the prediction information acquired by the decoding device may not include the index of the reference motion vector field block; instead, the decoding device and the encoding device pre-specify, for each intra prediction mode, the position of the reference motion vector field block used by the current block relative to the current block. In this way, after acquiring the prediction information, the decoding device determines the reference motion vector field block of the current block according to the intra prediction mode in the prediction information.
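  • A simplified sketch of the two modes mentioned above, operating directly on motion vector field samples; the mode names and the (height, width, 2) array layout are our own assumptions for illustration:

```python
import numpy as np

def intra_predict_mv_block(mode, reference_block):
    """Build the prediction signal of the current MV field block from the
    reconstructed reference MV field block for two example modes."""
    if mode == "horizontal":
        # The reconstructed reference block itself serves as the prediction.
        return reference_block.copy()
    if mode == "dc":
        # Every sampling point is predicted with the mean reconstructed motion vector.
        mean_mv = reference_block.reshape(-1, 2).mean(axis=0)
        return np.broadcast_to(mean_mv, reference_block.shape).copy()
    raise ValueError("unsupported mode")

ref = np.random.randn(4, 4, 2)            # reconstructed reference MV field block
pred = intra_predict_mv_block("dc", ref)
```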
  • the prediction residual signal is used to indicate a difference between an original signal and a prediction signal of the current block. After the decoding device acquires the prediction signal of the current block, the prediction signal is corrected by the prediction residual signal, and the reconstructed signal of the current block is obtained.
  • Because the prediction information is the information required to acquire the prediction signal of the current block, the prediction signal acquired by the decoding device according to the prediction information is the same as the prediction signal acquired by the encoding device.
  • Therefore, based on the prediction information and the prediction residual signal, the decoding device can closely restore the original signal of the current block.
  • the prediction information includes information of a region division method and information of a method of determining a prediction signal for each region.
  • the information of the area dividing method is used to indicate the area dividing method, for example, an index of the area dividing method.
  • the information of the method of determining the prediction signal for each region is used to indicate a method of determining a prediction signal for each region, such as an index of a method for determining a prediction signal for each region.
  • the area dividing method is determined according to the information of the area dividing method, and the current block is divided into different areas by using the area dividing method.
  • a method of determining a prediction signal for each region is obtained based on the information of the method of determining the prediction signal of each region, and the prediction signal of each region is acquired by the method.
  • the information of the method of determining the prediction signal for each region indicates that the prediction signal of the region is the average of the motion vectors of all the sampling points in the region. Then, when the prediction signal of the region is acquired, the average value of the motion vectors of all the sampling points in the region is calculated, and the average value is used as the prediction signal of the region.
  • the information of the method of determining the prediction signal of each region indicates that the prediction signal of the region is a motion vector of one of the sampling points. Then, when acquiring the prediction signal of the region, the motion vector of the sampling point is obtained according to the index of the sampling point, and the motion vector of the sampling point is used as the prediction signal of the region.
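  • A small sketch of this region-wise prediction, assuming a label map that assigns each sampling point of the block to a region (both array names are illustrative and not taken from the embodiment):

```python
import numpy as np

def region_mean_prediction(block_mvs, region_labels):
    """For every region, use the mean motion vector of its sampling points
    as the prediction signal of all sampling points in that region."""
    prediction = np.empty_like(block_mvs)
    for label in np.unique(region_labels):
        mask = region_labels == label
        prediction[mask] = block_mvs[mask].mean(axis=0)
    return prediction

mvs = np.random.randn(8, 8, 2)          # (vx, vy) per sampling point
labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1                       # two regions: left and right halves
pred = region_mean_prediction(mvs, labels)
```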
  • the prediction information may not include the information of the method of determining the prediction signal of each region, but the encoding device and the decoding device store the same predetermined method for determining the prediction signal of each region. There are no restrictions here.
  • In another case, the prediction information includes information of a reference motion vector field and information of the matching block, where the information of the reference motion vector field is used to indicate the reference motion vector field and the information of the matching block is used to indicate the matching block.
  • The information of the reference motion vector field may be an index of the reference motion vector field.
  • The information of the matching block may be displacement information indicating the position of the matching block relative to the position of a first motion vector field block in the reference motion vector field, where the position of that first motion vector field block in the reference motion vector field is the same as the position of the current block in the current field; alternatively, the information of the matching block may be an index of the matching block, which is not limited herein.
  • The reference motion vector field is determined from the information of the reference motion vector field, the matching block is found in the reference motion vector field according to the information of the matching block, and the reconstructed signal of the matching block is used as the prediction signal of the current block.
  • In yet another case, the prediction information includes the information used to indicate a first reference motion vector field of the current motion vector field block.
  • The first reference motion vector field is acquired according to the prediction information, where the first reference motion vector field is the motion vector field of the video frame at time t1.
  • A second reference motion vector field is acquired according to the first reference motion vector field and the times t, t1, and t2, and a motion vector field block of the second reference motion vector field is acquired, where the coordinate range of that motion vector field block in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field.
  • The prediction signal includes the motion vector field block of the second reference motion vector field.
  • The relationship between the motion vector fields and the video frames is explained below with an example.
  • Suppose the position of a target object in the video frame at time t1 is A.
  • The video frame at time t1 is coded with inter prediction, and its reference video frame is the video frame at time t2, where the position of the target object in the video frame at time t2 is B.
  • The motion vector of the first sampling point corresponding to the target object in the first reference motion vector field therefore indicates the displacement from position B to position A.
  • Assuming the target object keeps moving at the same velocity, its displacement during the interval from t1 to t should be the displacement from B to A scaled by (t-t1)/(t1-t2); that is, if the position of the target object in the video frame at time t is C, the displacement from position A to position C should be the motion vector of the first sampling point multiplied by (t-t1)/(t1-t2).
  • Each sampling point in the first reference motion vector field is regarded as such a target object, and the position to which each sampling point moves at time t can be derived in the same way.
  • The new motion vector field formed by moving each sampling point in the first reference motion vector field according to the above rule and scaling its motion vector accordingly is referred to as the second reference motion vector field.
  • The acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 specifically includes:
  • calculating the motion vector of a second sampling point of the second reference motion vector field as the product of the motion vector of a first sampling point of the first reference motion vector field and (t-t1)/(t1-t2); here, starting from the position of the first sampling point and taking the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • When each sampling point in the first reference motion vector field is moved in this way to obtain the second reference motion vector field, at least two sampling points of the first reference motion vector field may be moved to the same position in the motion vector field at time t.
  • That is, even if every sampling point in the first reference motion vector field keeps its current speed and direction unchanged, such collisions may appear in the motion vector field formed at time t (that is, in the second reference motion vector field).
  • In that case the motion vector of the sampling point at that position in the second reference motion vector field may be determined in various ways; for example, the product of the motion vector of one of those first sampling points and (t-t1)/(t1-t2) may be taken as the motion vector of the sampling point at that position.
  • Alternatively, the acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 specifically includes:
  • determining a second sampling point of the second reference motion vector field, where the position of the second sampling point is the common position reached when, starting from the respective positions of at least two first sampling points of the first reference motion vector field, each of those first sampling points is moved by its respective movement vector, the movement vector of each first sampling point being the product of its motion vector and (t-t1)/(t1-t2); and
  • taking the product of the weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) as the motion vector of the second sampling point.
  • For convenience of description, this second sampling point is referred to as the target second sampling point, and the at least two first sampling points of the first reference motion vector field are referred to as the first set.
  • The weights of the motion vectors of the first sampling points in the first set may be preset to be equal; that is, the average of the motion vectors of the first sampling points in the first set, scaled by (t-t1)/(t1-t2), is taken as the motion vector of the target second sampling point.
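  • A compact sketch of this forward projection is given below, assuming a dense first reference field stored as a (height, width, 2) array of (vx, vy) samples; colliding projections are merged with the plain equal-weight average just described (all names are illustrative):

```python
import numpy as np

def project_reference_field(first_ref, t, t1, t2):
    """Scale every motion vector by (t - t1)/(t1 - t2), move each first
    sampling point by its scaled vector, and accumulate the scaled vectors
    at the positions they land on; collisions are averaged."""
    h, w, _ = first_ref.shape
    scale = (t - t1) / (t1 - t2)
    acc = np.zeros((h, w, 2))
    count = np.zeros((h, w, 1))
    for y in range(h):
        for x in range(w):
            mv = first_ref[y, x] * scale
            ty, tx = int(round(y + mv[1])), int(round(x + mv[0]))
            if 0 <= ty < h and 0 <= tx < w:
                acc[ty, tx] += mv
                count[ty, tx] += 1
    second_ref = np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
    return second_ref, count[..., 0] == 0   # field plus a mask of the unfilled positions

second_ref, holes = project_reference_field(np.random.randn(16, 16, 2) * 2, t=3, t1=2, t2=1)
```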
  • determining the weights of the motion vectors of the first sampling points in the first set specifically includes:
  • Alternatively, at least one second sampling point located around the target second sampling point is referred to as the second set, with each such second sampling point being an element of the second set.
  • For example, the second set may include at least one of the four second sampling points located around and adjacent to the target second sampling point.
  • For each first sampling point in the first set, the degree of similarity between its motion vector and the motion vectors of the elements of the second set is calculated.
  • As for the degree of similarity, for example, the difference between the motion vector of the first sampling point and the motion vector of each element in the second set may be calculated, and the sum or average of those differences may be used as the degree of similarity between the motion vector of the first sampling point and the second set.
  • The weight of the motion vector of each first sampling point in the first set is then determined according to the degree of similarity, where the weight of the motion vector of a first sampling point that is more similar to the motion vectors of the second set is larger.
  • Specifically, weights corresponding to different rankings may be set in advance; once the ranking of the similarity degree of each element in the first set is determined, the weight corresponding to the ranking of that element is taken as the weight of its motion vector.
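  • The similarity-driven weighting can be sketched as follows; it uses the sum of absolute motion-vector differences as the dissimilarity measure and turns it into weights inversely proportional to it, which is only one concrete choice satisfying the rule that more similar points get larger weights (all names are illustrative):

```python
import numpy as np

def similarity_weights(first_set_mvs, second_set_mvs, eps=1e-6):
    """Weight each first sampling point by how similar its motion vector is
    to the motion vectors of the surrounding (second set) sampling points:
    a smaller summed absolute difference gives a larger weight."""
    first_set_mvs = np.asarray(first_set_mvs, dtype=float)
    second_set_mvs = np.asarray(second_set_mvs, dtype=float)
    dissimilarity = np.array([np.abs(mv - second_set_mvs).sum() for mv in first_set_mvs])
    weights = 1.0 / (dissimilarity + eps)
    return weights / weights.sum()

first_set = [(2.0, 1.0), (5.0, -3.0)]    # motion vectors of the colliding first sampling points
second_set = [(2.1, 0.9), (1.8, 1.2)]    # motion vectors around the target second sampling point
w = similarity_weights(first_set, second_set)
target_mv = (w[:, None] * np.asarray(first_set)).sum(axis=0)   # weighted average
```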
  • In addition, a special position may appear in the second reference motion vector field to which no sampling point of the first reference motion vector field moves at time t.
  • In that case the acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 specifically includes: acquiring at least one second sampling point adjacent to the target second sampling point at the special position;
  • when the at least one second sampling point is one second sampling point, using the weighted value of that second sampling point as the motion vector of the target second sampling point; and
  • when the at least one second sampling point is at least two second sampling points, using the weighted average of the motion vectors of the at least two second sampling points as the motion vector of the target second sampling point.
  • In a further case, the prediction information includes direction coefficient information indicating a direction coefficient of the current motion vector field block, where the direction coefficient is used to indicate the relationship between the value of the first component of the motion vector of a sampling point of the current motion vector field block and the value of the second component of the motion vector of that sampling point.
  • The direction coefficient information either includes information indicating a reconstructed motion vector field block in the current motion vector field, the direction coefficient being the direction coefficient of that reconstructed motion vector field block, or the direction coefficient information includes the value of the direction coefficient.
  • an encoding apparatus includes:
  • The first obtaining module 1001 is configured to acquire an original signal of a current motion vector field block, where the current motion vector field block is obtained by dividing a current motion vector field and the current motion vector field is the motion vector field corresponding to the video frame at time t.
  • a second obtaining module 1002 configured to acquire a prediction signal of the current motion vector field block and prediction information of the current motion vector field block, where the prediction information is used to indicate information required to acquire the prediction signal;
  • The calculating module 1003 is configured to calculate a prediction residual signal of the current motion vector field block according to the prediction signal acquired by the second obtaining module and the original signal acquired by the first obtaining module, where the prediction residual signal is used to indicate the residual between the original signal and the prediction signal.
  • the encoding module 1004 is configured to write the prediction information acquired by the second obtaining module and the prediction residual signal calculated by the computing module into a code stream.
  • When the encoding apparatus encodes the current block, the original signal of the current motion vector field block does not need to be encoded; instead, the prediction information and the prediction residual signal are encoded, which improves the compression efficiency of the motion vector field.
  • the second obtaining module 1002 is configured to:
  • use at least one encoded and reconstructed motion vector field block adjacent to the current motion vector field block in the current motion vector field as a reference motion vector field block of the current block, and acquire the prediction signal of the current block according to an intra prediction mode and the reference motion vector field block.
  • For example, when the acquired intra prediction mode is the horizontal prediction mode among the 33 directional prediction modes, the reference motion vector field block of the current block is the first motion vector field block to the left of the current block in the same row, and the second obtaining module 1002 is configured to use the reconstructed signal of the reference motion vector field block as the prediction signal of the current block.
  • When the acquired intra prediction mode is the Intra_DC mode, the second obtaining module 1002 is configured to use the average value of the reconstructed samples of the reference motion vector field block as the prediction signal of the current block.
  • the second obtaining module 1002 is configured to:
  • acquire a first reference motion vector field of the current motion vector field block, where the first reference motion vector field is an encoded and reconstructed motion vector field corresponding to the video frame at time t1, and the video frame at time t1 is a video frame adjacent to the video frame at time t;
  • acquire a second reference motion vector field according to the first reference motion vector field and the times t, t1, and t2, and acquire the prediction signal according to the second reference motion vector field, where the prediction signal includes a motion vector field block of the second reference motion vector field whose coordinate range in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field;
  • the prediction information includes the information used to indicate the first reference motion vector field.
  • The second obtaining module 1002 is configured to calculate the motion vector of a second sampling point of the second reference motion vector field as the product of the motion vector of a first sampling point of the first reference motion vector field and (t-t1)/(t1-t2); here, starting from the position of the first sampling point and taking the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • The second obtaining module 1002 is configured to determine a second sampling point of the second reference motion vector field, where the position of the second sampling point is the common position reached when, starting from the respective positions of at least two first sampling points of the first reference motion vector field, each of those first sampling points is moved by its respective movement vector, the movement vector of each first sampling point being the product of its motion vector and (t-t1)/(t1-t2); and
  • to take the product of the weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) as the motion vector of the second sampling point.
  • The second obtaining module 1002 is configured to acquire at least one second sampling point adjacent to a target second sampling point of the second reference motion vector field, where, starting from the position of any first sampling point of the first reference motion vector field and taking the product of the motion vector of that first sampling point and (t-t1)/(t1-t2) as the displacement, the position moved to is different from the position of the target second sampling point;
  • when the at least one second sampling point is one second sampling point, to use the weighted value of that second sampling point as the motion vector of the target second sampling point; and
  • when the at least one second sampling point is at least two second sampling points, to use the weighted average of the motion vectors of the at least two second sampling points as the motion vector of the target second sampling point.
  • The second obtaining module 1002 is configured to: acquire a direction coefficient of the current motion vector field block, where the direction coefficient is used to indicate the relationship between the value of the first component of the motion vector of a sampling point of the current motion vector field block and the value of the second component of the motion vector of that sampling point; acquire the reconstructed value of the first component; calculate the prediction signal according to the reconstructed value of the first component and the direction coefficient, where the prediction signal includes the predicted value of the second component; and acquire the prediction information according to the direction coefficient, where the prediction information includes direction coefficient information indicating the direction coefficient.
  • The second obtaining module 1002 is configured to: acquire motion vectors of at least two sampling points of the original signal; use the first components of the motion vectors of the at least two sampling points as the independent variable of a preset function and the second components as the function values corresponding to the independent variable, and fit the independent variable to the function values; and take the coefficient of the preset function obtained by the fitting as the direction coefficient.
  • The second obtaining module 1002 is configured to: acquire motion vectors of at least two sampling points of the original signal; use the first components of those motion vectors as the independent variable and the second components as the corresponding function values, and fit the independent variable to the function values; acquire direction coefficients of at least one encoded motion vector field block adjacent to the current motion vector field block in the current motion vector field; use the coefficient of the fitted function and the direction coefficients of the at least one encoded motion vector field block as candidate direction coefficients of the candidate direction coefficient set of the current motion vector field block; and obtain the candidate prediction residual signals of the current motion vector field block corresponding to the candidate direction coefficients, and use the candidate direction coefficient whose candidate prediction residual signal has the minimum signal energy or the minimum rate-distortion cost as the direction coefficient of the current motion vector field block.
  • The second obtaining module 1002 is configured to: when the direction coefficients of at least two encoded motion vector field blocks adjacent to the current motion vector field block in the current motion vector field are the same, use the direction coefficients of the at least two encoded motion vector field blocks as the direction coefficient of the current motion vector field block.
  • the second acquisition module 1002 is configured to:
  • a decoding apparatus provided by an embodiment of the present invention includes:
  • The first obtaining module 1101 is configured to acquire prediction information and a prediction residual signal of the current motion vector field block, where the current motion vector field block is obtained by dividing the current motion vector field and the current motion vector field is the motion vector field corresponding to the video frame at time t.
  • a second acquiring module 1102 configured to acquire, according to the prediction information, a prediction signal of the current motion vector field block
  • The calculation module 1103 is configured to calculate a reconstruction signal of the current motion vector field block according to the prediction signal acquired by the second obtaining module and the prediction residual signal acquired by the first obtaining module.
  • The prediction information includes the information used to indicate a first reference motion vector field of the current motion vector field block; the second obtaining module 1102 is configured to:
  • acquire the first reference motion vector field according to the prediction information, where the first reference motion vector field is the motion vector field of the video frame at time t1;
  • acquire a second reference motion vector field according to the first reference motion vector field and the times t, t1, and t2; and
  • acquire a motion vector field block of the second reference motion vector field, where the coordinate range of that motion vector field block in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field.
  • the prediction signal includes a motion vector field block of the second reference motion vector field.
  • The second obtaining module 1102 is configured to calculate the motion vector of a second sampling point of the second reference motion vector field as the product of the motion vector of a first sampling point of the first reference motion vector field and (t-t1)/(t1-t2); here, starting from the position of the first sampling point and taking the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • The second obtaining module 1102 is configured to determine a second sampling point of the second reference motion vector field, where the position of the second sampling point is the common position reached when, starting from the respective positions of at least two first sampling points of the first reference motion vector field, each of those first sampling points is moved by its respective movement vector, the movement vector of each first sampling point being the product of its motion vector and (t-t1)/(t1-t2); and
  • to take the product of the weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) as the motion vector of the second sampling point.
  • The second obtaining module 1102 is configured to acquire at least one second sampling point adjacent to a target second sampling point of the second reference motion vector field;
  • when the at least one second sampling point is one second sampling point, to use the weighted value of that second sampling point as the motion vector of the target second sampling point; and
  • when the at least one second sampling point is at least two second sampling points, to use the weighted average of the motion vectors of the at least two second sampling points as the motion vector of the target second sampling point.
  • The prediction information includes direction coefficient information indicating a direction coefficient of the current motion vector field block, where the direction coefficient is used to indicate the relationship between the value of the first component of the motion vector of a sampling point of the current motion vector field block and the value of the second component of the motion vector of that sampling point.
  • The second obtaining module 1102 is configured to acquire the reconstructed value of the first component and to calculate the predicted value of the second component according to the direction coefficient and the reconstructed value of the first component, where the prediction signal includes the predicted value of the second component.
  • The direction coefficient information includes information indicating a reconstructed motion vector field block in the current motion vector field, the direction coefficient being the direction coefficient of that reconstructed motion vector field block; or the direction coefficient information includes the value of the direction coefficient.
  • FIG. 12 is a block diagram showing the structure of an encoding apparatus 1200 according to another embodiment of the present invention.
  • the encoding device 1200 can include at least one processor 1201, a memory 1205, and at least one communication bus 1202.
  • the encoding device 1200 may further include: at least one network interface 1204 and/or a user interface 1203.
  • The user interface 1203 includes, for example, a display (such as a touch screen, an LCD, a holographic display, a CRT, or a projector), a pointing device (such as a mouse, a trackball, a touch panel, or a touch screen), a camera, and/or a sound pickup device.
  • the memory 1205 may include a read only memory and a random access memory, and provides instructions and data to the processor 1201. A portion of the memory 1205 may also include a non-volatile random access memory.
  • the memory 1205 stores the following elements, executable modules or data structures, or a subset thereof, or their extended set:
  • The operating system 12051 includes various system programs for implementing basic services and processing hardware-based tasks.
  • the application module 12052 includes various applications for implementing various application services.
  • By invoking programs or instructions stored in the memory 1205, the processor 1201 is configured to: acquire an original signal of a current motion vector field block, where the current motion vector field block is obtained by dividing a current motion vector field and the current motion vector field is the motion vector field corresponding to the video frame at time t; acquire a prediction signal of the current motion vector field block and prediction information of the current motion vector field block; calculate a prediction residual signal of the current motion vector field block according to the prediction signal and the original signal; and write the prediction information and the prediction residual signal into a code stream.
  • the compression efficiency of the motion vector field is improved.
  • The acquiring of the prediction signal and the prediction information of the current motion vector field block includes:
  • acquiring a first reference motion vector field of the current motion vector field block, where the first reference motion vector field is an encoded and reconstructed motion vector field corresponding to the video frame at time t1, and the video frame at time t1 is a video frame adjacent to the video frame at time t;
  • acquiring a second reference motion vector field according to the first reference motion vector field and the times t, t1, and t2, and acquiring the prediction signal according to the second reference motion vector field, where the prediction signal includes a motion vector field block of the second reference motion vector field whose coordinate range in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field; and
  • acquiring the prediction information according to the first reference motion vector field, where the prediction information includes the information used to indicate the first reference motion vector field.
  • The acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 includes: calculating the motion vector of a second sampling point of the second reference motion vector field as the product of the motion vector of a first sampling point of the first reference motion vector field and (t-t1)/(t1-t2), where, starting from the position of the first sampling point and taking the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • Alternatively, the acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 includes:
  • determining a second sampling point of the second reference motion vector field, where the position of the second sampling point is the common position reached when, starting from the respective positions of at least two first sampling points of the first reference motion vector field, each of those first sampling points is moved by its respective movement vector, the movement vector of each first sampling point being the product of its motion vector and (t-t1)/(t1-t2); and
  • taking the product of the weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) as the motion vector of the second sampling point.
  • Alternatively, the acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 includes: acquiring at least one second sampling point adjacent to a target second sampling point of the second reference motion vector field;
  • when the at least one second sampling point is one second sampling point, using the weighted value of that second sampling point as the motion vector of the target second sampling point; and
  • when the at least one second sampling point is at least two second sampling points, using the weighted average of the motion vectors of the at least two second sampling points as the motion vector of the target second sampling point.
  • The acquiring of the prediction signal and the prediction information of the current motion vector field block may also include:
  • obtaining a direction coefficient of the current motion vector field block, where the direction coefficient is used to indicate the relationship between the value of the first component of the motion vector of a sampling point of the current motion vector field block and the value of the second component of the motion vector of that sampling point;
  • acquiring the reconstructed value of the first component, and calculating the prediction signal according to the reconstructed value of the first component and the direction coefficient, where the prediction signal includes the predicted value of the second component; and
  • acquiring the prediction information according to the direction coefficient, where the prediction information includes direction coefficient information indicating the direction coefficient.
  • the acquiring a direction coefficient of the current motion vector field block includes:
  • acquiring motion vectors of at least two sampling points of the original signal; using the first components of the motion vectors of the at least two sampling points as the independent variable of a preset function and the second components as the function values corresponding to the independent variable; fitting the independent variable to the function values; and taking the coefficient of the preset function obtained by the fitting as the direction coefficient.
  • the acquiring a direction coefficient of the current motion vector field block includes:
  • the acquiring a direction coefficient of the current motion vector field block includes:
  • when the direction coefficients of at least two encoded motion vector field blocks adjacent to the current motion vector field block in the current motion vector field are the same, using the direction coefficients of the at least two encoded motion vector field blocks as the direction coefficient of the current motion vector field block.
  • the acquiring a direction coefficient of the current motion vector field block includes:
  • FIG. 13 is a structural block diagram of a decoding apparatus 1300 according to another embodiment of the present invention.
  • the decoding device 1300 may include: at least one processor 1301, a memory 1305, and at least one communication bus 1302.
  • the video decoding device 1300 may further include: at least one network interface 1304 and/or a user interface 1303.
  • The user interface 1303 includes, for example, a display (such as a touch screen, an LCD, a holographic display, a CRT, or a projector), a pointing device (such as a mouse, a trackball, a touch panel, or a touch screen), a camera, and/or a sound pickup device.
  • The memory 1305 can include a read-only memory and a random access memory, and provides instructions and data to the processor 1301. A portion of the memory 1305 can also include a non-volatile random access memory.
  • the memory 1305 stores elements, executable modules or data structures, or a subset thereof, or their extension set:
  • the operating system 13051 includes various system programs for implementing various basic services and processing hardware-based tasks.
  • the application module 13052 includes various applications for implementing various application services.
  • By calling programs or instructions stored in the memory 1305, the processor 1301 is configured to:
  • acquire prediction information and a prediction residual signal of a current motion vector field block, where the current motion vector field block is obtained by dividing the current motion vector field and the current motion vector field is the motion vector field corresponding to the video frame at time t;
  • the prediction information includes the information used to indicate a first reference motion vector field of the motion vector field block
  • The first reference motion vector field is acquired according to the prediction information, where the first reference motion vector field is the motion vector field of the video frame at time t1; a second reference motion vector field is acquired according to the first reference motion vector field and the times t, t1, and t2; and a motion vector field block of the second reference motion vector field is acquired, where the coordinate range of that motion vector field block in the second reference motion vector field is the same as the coordinate range of the current motion vector field block in the current motion vector field.
  • the prediction signal includes a motion vector field block of the second reference motion vector field.
  • the acquiring the second reference motion vector field according to the first reference motion vector field, the t time, the t1 time, and the t2 time includes:
  • calculating the motion vector of a second sampling point of the second reference motion vector field as the product of the motion vector of a first sampling point of the first reference motion vector field and (t-t1)/(t1-t2), where, starting from the position of the first sampling point and taking the motion vector of the second sampling point as the displacement, the position moved to is the same as the position of the second sampling point.
  • Alternatively, the acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 includes:
  • determining a second sampling point of the second reference motion vector field, where the position of the second sampling point is the common position reached when, starting from the respective positions of at least two first sampling points of the first reference motion vector field, each of those first sampling points is moved by its respective movement vector, the movement vector of each first sampling point being the product of its motion vector and (t-t1)/(t1-t2); and
  • taking the product of the weighted average of the motion vectors of the at least two first sampling points and (t-t1)/(t1-t2) as the motion vector of the second sampling point.
  • Alternatively, the acquiring of the second reference motion vector field according to the first reference motion vector field, the time t, the time t1, and the time t2 includes: acquiring at least one second sampling point adjacent to a target second sampling point of the second reference motion vector field;
  • when the at least one second sampling point is one second sampling point, using the weighted value of that second sampling point as the motion vector of the target second sampling point; and
  • when the at least one second sampling point is at least two second sampling points, using the weighted average of the motion vectors of the at least two second sampling points as the motion vector of the target second sampling point.
  • The prediction information includes direction coefficient information indicating a direction coefficient of the current motion vector field block, where the direction coefficient is used to indicate the relationship between the value of the first component of the motion vector of a sampling point of the current motion vector field block and the value of the second component of the motion vector of that sampling point;
  • the direction coefficient information includes information for indicating a reconstructed motion vector field block in the current motion vector field, where the direction coefficient includes a direction coefficient of the reconstructed motion vector field block; or
  • the direction coefficient information includes the value of the direction coefficient.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention which is essential or contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, and the like. .

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of the present invention disclose a motion vector field encoding method and decoding method, and an encoding apparatus and a decoding apparatus. The method in the embodiments of the present invention includes: acquiring an original signal of a current motion vector field block, where the current motion vector field block is obtained by dividing a current motion vector field and the current motion vector field is the motion vector field corresponding to the video frame at time t; acquiring a prediction signal of the current motion vector field block and prediction information of the current motion vector field block, where the prediction information is used to indicate the information required to acquire the prediction signal; calculating a prediction residual signal of the current motion vector field block according to the prediction signal and the original signal, where the prediction residual signal is used to indicate the residual between the original signal and the prediction signal; and writing the prediction information and the prediction residual signal into a code stream. The embodiments of the present invention can improve the compression efficiency of the motion vector field.

Description

运动矢量场编码方法和解码方法、编码和解码装置 技术领域
本发明涉及图像处理技术领域,尤其涉及一种运动矢量场编码方法和解码方法、编码和解码装置。
背景技术
自从国际电信联盟(英文:international telegraph union,缩写:ITU)在1984年推出第一个视频编码国际标准H.120以来,视频编码技术已经获得了迅猛蓬勃的发展,已成为了现代信息技术中不可或缺的重要组成部分。随着因特网(英文:internet)、无线通讯网和数字广播网的快速发展,人们对获取多媒体信息的需求日益旺盛,而视频编码技术是有效传输和存储视频信息的关键技术之一。
运动矢量场的压缩是大多数视频编码方案的一个重要部分。在视频编码中,一个运动场景所对应的视频包括一系列视频帧,每一个视频帧包括静止图像,该一系列视频帧形成运动的错觉是通过相对快速地显示连续图像,例如以每秒15至30帧的速率进行显示。由于相对快的帧速率,该系列视频帧内每一个视频帧上的图像很相似。取该系列视频帧中的一个视频帧作为参考图像,该系列视频帧中另一个视频帧的运动矢量场指的是该视频帧相对于参考图像的位移信息。需注意的是,该视频帧可以是与该参考图像相邻的图像,也可以不是与该参考图像相邻的图像。
具体来说,一个视频帧包括多个像素,可将视频帧中的图像分为多个图像单元,其中每一个图像单元内包括至少一个像素,而且每个图像单元内的所有像素的运动矢量相同,也即一个图像单元具有一个运动矢量。该视频帧的运动矢量场由所有图像单元的运动矢量构成。
然而,目前还没有能够有效地压缩运动矢量场的方法。
发明内容
本发明实施例提供了一种运动矢量场编码方法和解码方法、编码和解码装 置,能够提高了运动矢量场的压缩效率。
本发明实施例第一方面提供了一种视频编码方法,包括:
获取当前运动矢量场块的原始信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,所述预测信息用于指示获取所述预测信号所需的信息;
根据所述预测信号和所述原始信号,计算所述当前运动矢量场块的预测残差信号,所述预测残差信号用于指示所述原始信号和所述预测信号之间的残差;
将所述预测信息和所述预测残差信号写入码流。
结合第一方面,在在第一方面的第一种可能的实施方式中,所述获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,包括:
获取所述当前运动矢量场块的第一参考运动矢量场,所述第一参考运动矢量场为已编码并重建的运动矢量场,其中,所述第一参考运动矢量场为t1时刻的视频帧对应的运动矢量场,所述t1时刻的视频帧为与所述t时刻的视频帧邻近的视频帧;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为在对所述第一参考运动矢量场对应的视频帧进行帧间预测时所采用的参考视频帧对应的时刻;
根据所述第二参考运动矢量场获取所述预测信号,所述预测信号包括所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同;
根据所述第一参考运动矢量场获取所述预测信息,所述预测信息包括所述用于指示所述第一参考运动矢量场的信息。
结合第一方面的第一种可能的实施方式,在第一方面的第二种可能的实施方式中,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
根据计算式 MV2 = MV1 × (t-t1)/(t1-t2) 计算获得所述第二参考运动矢量场的第二采样点的运动矢量 MV2，其中，MV1 为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
结合第一方面的第一种可能的实施方式,在第一方面的第三种可能的实施方式中,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
结合第一方面的第一种可能的实施方式,在第一方面的第四种可能的实施方式中,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以 MV1 × (t-t1)/(t1-t2) 为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1 为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
结合第一方面,在第一方面的第五种可能的实施方式中,所述获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,包括:
获取所述当前运动矢量场块的方向系数,其中,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量的第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
获取所述第一分量的重建值;
根据所述第一分量的重建值和所述方向系数计算所述预测信号,所述预测信号包括所述第二分量的预测值;
根据所述方向系数获取所述预测信息,所述预测信息包括用于指示所述方向系数的方向系数信息。
结合第一方面的第五种可能的实施方式,在第一方面的第六种可能的实施方式中,所述获取所述当前运动矢量场块的方向系数,包括:
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为预设函数的自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
将拟合得到的所述预设函数的系数作为所述方向系数。
结合第一方面的第五种可能的实施方式,在第一方面的第七种可能的实施方式中,所述获取所述当前运动矢量场块的方向系数,包括:
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
获取所述当前运动矢量场中与所述当前运动矢量场块邻近的至少一个已编码的运动矢量场块的方向系数;
将拟合出的函数的系数和所述至少一个已编码的运动矢量场块的方向系数作为所述当前运动矢量场块的候选方向系数集的候选方向系数;
取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
结合第一方面的第五种可能的实施方式,在第一方面的第八种可能的实施方式中,所述获取所述当前运动矢量场块的方向系数,包括:
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同时,将所述至少两个已编码运动矢量场块的方 向系数作为所述当前运动矢量场块的方向系数。
结合第一方面的第五种可能的实施方式,在第一方面的第九种可能的实施方式中,所述获取所述当前运动矢量场块的方向系数,包括:
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同,并且所述至少两个已编码运动矢量场块的方向系数指示所述至少两个已编码运动矢量场块的采样点的运动矢量的第一分量与第二分量的比值时,执行以下步骤:
将所述至少两个已编码运动矢量场块的方向系数作为候选方向系数集的候选方向系数;
获取所述原始信号中至少一个采样点的运动矢量;
在所述至少一个采样点为一个采样点时,将所述一个采样点的运动矢量的第一分量与所述一个采样点的运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数;或者,
在所述至少一个采样点为至少两个采样点时,将所述至少两个采样点的运动矢量的第一分量与所述至少两个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数;
获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
本发明实施例第二方面提供了一种运动运动矢量场解码方法,包括:
获取当前运动矢量场块的预测信息和预测残差信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
根据所述预测信息获取所述当前运动矢量场块的预测信号;
根据所述预测信号和所述预测残差信号,计算所述当前运动矢量场块的重建信号。
结合第二方面,在第二方面的在第一种可能的实施方式中,所述预测信息包括所述用于指示所述运动矢量场块的第一参考运动矢量场的信息;
所述根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
根据所述预测信息获取所述第一参考运动矢量场,所述第一参考运动矢量场为t1时刻的视频帧的运动矢量场;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为所述第一参考运动矢量场对应的视频帧所采用的参考视频帧对应的时刻;
获取所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
结合第二方面的第一种可能的实施方式,在第二方面的第二种可能的实施方式中,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
根据计算式 MV2 = MV1 × (t-t1)/(t1-t2) 计算获得所述第二参考运动矢量场的第二采样点的运动矢量 MV2，其中，MV1 为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
结合第二方面的第一种可能的实施方式,在第二方面的第三种可能的实施方式中,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
结合第二方面的第一种可能的实施方式,在第二方面的第四种可能的实施方式中,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
结合第二方面,在第二方面的第五种可能的实施方式中,所述预测信息包括用于指示所述当前运动矢量场块的方向系数的方向系数信息,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
所述根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
获取所述第一分量的重建值,根据所述方向系数和所述第一分量的重建值计算所述第二分量的预测值,所述预测信号包括所述第二分量的预测值。
结合第二方面的第五种可能的实施方式,在第二方面的第六种可能的实施方式中,所述方向系数信息包括用于指示所述当前运动矢量场中的已重建运动矢量场块的信息,所述方向系数包括所述已重建运动矢量场块的方向系数;
或者,
所述方向系数信息包括所述方向系数的值。
本发明实施例第三方面提供了一种编码装置,包括:
第一获取模块,用于获取当前运动矢量场块的原始信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
第二获取模块,用于获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,所述预测信息用于指示获取所述预测信号所需的信息;
计算模块,用于根据所述第二获取模块获取的所述预测信号和所述第一获取模块获取的所述原始信号,计算所述当前运动矢量场块的预测残差信号,所述预测残差信号用于指示所述原始信号和所述预测信号之间的残差;
编码模块,用于将所述第二获取模块获取的所述预测信息和所述计算模块计算得到的所述预测残差信号写入码流。
结合第三方面，在第三方面的第一种可能的实施方式中，所述第二获取模块用于：
获取所述当前运动矢量场块的第一参考运动矢量场,所述第一参考运动矢量场为已编码并重建的运动矢量场,其中,所述第一参考运动矢量场为t1时刻的视频帧对应的运动矢量场,所述t1时刻的视频帧为与所述t时刻的视频帧邻近的视频帧;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为在对所述第一参考运动矢量场对应的视频帧进行帧间预测时所采用的参考视频帧对应的时刻;
根据所述第二参考运动矢量场获取所述预测信号,所述预测信号包括所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同;
根据所述第一参考运动矢量场获取所述预测信息,所述预测信息包括所述用于指示所述第一参考运动矢量场的信息。
结合第三方面的第一种可能的实施方式，在第三方面的第二种可能的实施方式中，所述第二获取模块用于根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
结合第三方面的第一种可能的实施方式，在第三方面的第三种可能的实施方式中，所述第二获取模块用于确定所述第二参考运动矢量场的第二采样点，其中，所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点，以所述至少两个第一采样点各自的移动矢量为位移，所移动到的位置为同一位置，所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积；
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
结合第三方面的第一种可能的实施方式，在第三方面的第四种可能的实施方式中，所述第二获取模块用于获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
结合第三方面,在第三方面的第五种可能的实施方式中,所述第二获取模块用于:
获取所述当前运动矢量场块的方向系数,其中,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量的第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
获取所述第一分量的重建值;
根据所述第一分量的重建值和所述方向系数计算所述预测信号,所述预测信号包括所述第二分量的预测值;
根据所述方向系数获取所述预测信息,所述预测信息包括用于指示所述方向系数的方向系数信息。
结合第三方面的第五种可能的实施方式,在第三方面的第六种可能的实施方式中,所述第二获取模块用于:
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为预设函数的自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
将拟合得到的所述预设函数的系数作为所述方向系数。
结合第三方面的第五种可能的实施方式,在第三方面的第七种可能的实施 方式中,所述第二获取模块用于:
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
获取所述当前运动矢量场中与所述当前运动矢量场块邻近的至少一个已编码的运动矢量场块的方向系数;
将拟合出的函数的系数和所述至少一个已编码的运动矢量场块的方向系数作为所述当前运动矢量场块的候选方向系数集的候选方向系数;
取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
结合第三方面的第五种可能的实施方式,在第三方面的第八种可能的实施方式中,所述第二获取模块用于:
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同时,将所述至少两个已编码运动矢量场块的方向系数作为所述当前运动矢量场块的方向系数。
结合第三方面的第五种可能的实施方式,在第三方面的第九种可能的实施方式中,所述第二获取模块用于:
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同,并且所述至少两个已编码运动矢量场块的方向系数指示所述至少两个已编码运动矢量场块的采样点的运动矢量的第一分量与第二分量的比值时,执行以下步骤:
将所述至少两个已编码运动矢量场块的方向系数作为候选方向系数集的候选方向系数;
获取所述原始信号中至少一个采样点的运动矢量;
在所述至少一个采样点为一个采样点时,将所述一个采样点的运动矢量的第一分量与所述一个采样点的运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数;或者,
在所述至少一个采样点为至少两个采样点时,将所述至少两个采样点的运动矢量的第一分量与所述至少两个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数;
获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
本发明实施例第四方面提供了一种解码装置,包括:
第一获取模块,用于获取当前运动矢量场块的预测信息和预测残差信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
第二获取模块,用于根据所述预测信息获取所述当前运动矢量场块的预测信号;
计算模块,用于根据所述第二获取模块获取的所述预测信号和所述第一获取模块获取的所述预测残差信号,计算所述当前运动矢量场块的重建信号。
结合第四方面，在第四方面的第一种可能的实施方式中，所述预测信息包括所述用于指示所述运动矢量场块的第一参考运动矢量场的信息；
所述第二获取模块用于:
根据所述预测信息获取所述第一参考运动矢量场,所述第一参考运动矢量场为t1时刻的视频帧的运动矢量场;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为所述第一参考运动矢量场对应的视频帧所采用的参考视频帧对应的时刻;
获取所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
结合第四方面的第一种可能的实施方式,在第四方面的第二种可能的实施方式中,所述第二获取模块用于:
根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
结合第四方面的第一种可能的实施方式,在第四方面的第三种可能的实施方式中,所述第二获取模块用于:
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
结合第四方面的第一种可能的实施方式,在第四方面的第四种可能的实施方式中,所述第二获取模块用于:
获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
结合第四方面,在第四方面的第五种可能的实施方式中,所述预测信息包括用于指示所述当前运动矢量场块的方向系数的方向系数信息,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
所述第二获取模块,用于获取所述第一分量的重建值,根据所述方向系数和所述第一分量的重建值计算所述第二分量的预测值,所述预测信号包括所述第二分量的预测值。
结合第四方面的第五种可能的实施方式,在第四方面的第六种可能的实施方式中,所述方向系数信息包括用于指示所述当前运动矢量场中的已重建运动矢量场块的信息,所述方向系数包括所述已重建运动矢量场块的方向系数;
或者,
所述方向系数信息包括所述方向系数的值。
从以上技术方案可以看出,本发明实施例具有以下优点:
本发明实施例中,对当前块进行编码时,由于无需对当前运动矢量场块的原始信号进行编码,而是通过对预测信息和预测残差信号进行编码,提高了运动矢量场的压缩效率。
附图说明
图1为本发明的运动矢量场编码方法的一个实施例的流程图;
图2为连续切分和离散切分的示意图;
图3为图1所示实施例中获取所述当前块的预测信号的一个实施例的流程图;
图4为图3所示实施例中确定第一集合中各第一采样点的运动矢量的权数的一个实施例的流程图;
图5为图1所示实施例中获取所述当前块的预测信号的一个实施例的流程图;
图6为图5所示实施例中获取当前块的方向系数的一个实施例的流程图;
图7为图5所示实施例中获取当前块的方向系数的另一个实施例的流程图;
图8为图5所示实施例中获取当前块的方向系数的另一个实施例的流程图;
图9为本发明的运动矢量场解码方法的一个实施例的流程图;
图10为本发明的编码装置的一个实施例的结构示意图;
图11为本发明的解码装置的一个实施例的结构示意图;
图12是本发明的编码装置的另一个实施例的结构框图;
图13是本发明的解码装置的另一个实施例的结构框图。
具体实施方式
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。
以下分别进行详细说明。
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等是用于区别不同的对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
下面先介绍本发明实施例提供的运动矢量场编码方法。本发明实施例提供的运动矢量场编码方法的执行主体是编码装置,其中,该编码装置可以是任何需要输出或存储视频的装置,如笔记本电脑、平板电脑、个人电脑、手机或视频服务器等设备。
本发明运动矢量场编码方法的一实施例,一种运动矢量场编码方法包括:获取当前运动矢量场块的原始信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,所述预测信息用于指示获取所述预测信号所需的信息;根据所述预测信号和所述原始信号,计算所述当前运动矢量场块的预测残差信号,所述预测残差信号用于指示所述原始信号和所述预测信号之间的残差;将所述预测信息和所述预测残差信号写入码流。
首先参见图1,图1为本发明的一实施例提供的一种运动矢量场编码方法的流程示意图,如图1所示,本发明的一实施例提供的一种运动矢量场编码方法可以包括以下内容:
101、获取当前运动矢量场块的原始信号。
运动矢量场为一幅图像相对于另一幅图像的运动信息。在视频压缩中,运 动矢量场用于描述目标视频帧相对于该目标视频帧的参考视频帧的运动信息,其中,目标视频帧中包括多个图像块,每个图像块在参考视频帧中有相应的匹配块。运动矢量场中的各采样点与目标视频帧中的各图像块一一对应,每个采样点的值为该采样点对应的图像块的运动矢量,其中,该运动矢量为该图像块相对于该图像块在参考视频帧中的匹配块的位移信息。
本实施例中，在对t时刻的视频帧对应的运动矢量场进行压缩时，将该运动矢量场划分为不同的运动矢量场块，通过对每个运动矢量场块压缩编码来对该运动矢量场压缩编码，其中一个运动矢量场块包括至少一个采样点。下文中，为描述方便，将当前待压缩的运动矢量场块称为当前块，将该当前块所在的运动矢量场称为当前场。
需要注意的是,在对运动矢量场分为不同的运动矢量场块时,各运动矢量场块和该运动矢量场所对应的视频帧所分成的各图像块并不一定相对应。对运动矢量场分块的方法可以参考对视频帧分块的方法,在此不作限制。
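为便于理解上述分块过程，下面给出一个示意性的草图（仅作说明，其中块尺寸block_h、block_w以及用二维数组表示运动矢量场的方式均为示例性假设，并非对分块方法的限定）：

```python
import numpy as np

def partition_mv_field(mv_field, block_h, block_w):
    """把运动矢量场划分为若干运动矢量场块（示意）。

    mv_field: 形状为 (H, W, 2) 的数组，每个采样点存放其运动矢量（水平分量, 垂直分量）。
    返回: 字典，键为块左上角在场中的坐标 (y, x)，值为对应的运动矢量场块。
    """
    h, w, _ = mv_field.shape
    blocks = {}
    for y in range(0, h, block_h):
        for x in range(0, w, block_w):
            blocks[(y, x)] = mv_field[y:y + block_h, x:x + block_w]
    return blocks

# 用法示例：8x8 采样点的运动矢量场按 4x4 划分，得到 4 个运动矢量场块
field = np.zeros((8, 8, 2), dtype=np.float32)
print(len(partition_mv_field(field, 4, 4)))  # 输出 4
```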
102、获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息。
获取预测信号的方法有多种，例如，目前HEVC标准中提供了视频帧的35种帧内预测模式，其中该35种帧内预测模式包括33个方向性预测模式，以及Intra_DC和Intra_Planar模式。本实施例中，在获取运动矢量场块的预测信号时，将该帧内预测模式运用到当前场的帧内预测中。
具体的,确定一种帧内预测模式,将当前场上与当前块邻近的、已编码并重建的至少一个运动矢量场块作为该当前块的参考运动矢量场块;根据该帧内预测模式和该参考运动矢量场块获取当前块的预测信号。
其中,与当前块邻近的运动矢量场块可以是与该运动矢量场块相邻(也即与该运动矢量场块相接)的运动矢量场块,也可以是与该当前块相隔有预置数值个运动矢量场块的运动矢量场块,在此不作限制。实际应用中,由于对当前场中各运动矢量场块的编码顺序为从左到右,从上到下。因此,一般选择位于当前块的左方、左下方、上方或者右上方的、相邻的、已编码并重建的运动矢量场块作为参考运动矢量场块。
根据预测模式和参考运动矢量场块计算预测信号的方法有多种。例如,获 取的帧内预测模式为所述33个方向性预测模式中的水平预测模式,那么当前块的参考运动矢量场块为与所述当前块位于同一行上的左边第一个运动矢量场块。将参考运动矢量场块的重建信号作为当前块的预测信号。
又例如,获取的帧内预测模式为Intra_DC模式,获取到当前块的参考运动矢量场块后,将参考运动矢量场块的重建像素平均值作为当前块的预测信号。
那么,相对应的,预测信息为帧内预测模式的索引以及参考运动矢量场块的索引。
或者,预测信息可以不包括参考运动矢量场块的索引。编码装置和解码装置预先制定好每一种帧内预测模式所对应的当前块采用的参考运动矢量场块相对该当前块的位置。本实施例中,编码装置和解码装置还预先制定好每一种帧内预测模式所对应的计算预测信号的方法。这样,解码装置在接收到预测信息后,根据预测信息内的预测模式以及参考运动矢量场块的索引,采用预制的计算方法计算预测信号。
实际应用中,在获取当前块的预测信号时,可直接确定一种帧内预测模式,采用该帧内预测模式来计算当前块的预测信号,并将该帧内预测模式的索引包括到预测信息内。或者,也可以是遍历每一种帧内预测模式,采用每一种帧内预测模式计算当前块的预测信号,并将与当前块的原始信号差异(也即下文提到的预测残差信号)的能量最小的预测信号所对应的帧内预测模式的索引包括到预测信息内,且采用该预测信号来计算后续的预测残差信号。
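下面是一个遍历各帧内预测模式、选取预测残差信号能量最小的模式的示意性草图（其中predict_with_mode为按给定模式和参考块计算预测信号的函数，属于为说明而引入的假设接口，并非某一标准中既有的接口）：

```python
import numpy as np

def select_intra_mode(original, reference, modes, predict_with_mode):
    """遍历候选帧内预测模式，返回残差能量最小的模式及其预测信号、残差（示意）。"""
    best = None
    for mode in modes:
        prediction = predict_with_mode(mode, reference)            # 按该模式得到当前块的预测信号
        residual = np.asarray(original) - np.asarray(prediction)   # 预测残差信号
        energy = float(np.sum(residual ** 2))                      # 残差能量
        if best is None or energy < best[0]:
            best = (energy, mode, prediction, residual)
    return best[1], best[2], best[3]
```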
103、根据所述预测信号和所述原始信号,计算所述当前运动矢量场块的预测残差信号。
所述预测残差信号用于指示所述原始信号和所述预测信号之间的差异,其中,该预测信号为根据预测信息所获取到的预测信号。获取到当前块的预测信号后,计算该当前块的原始信号和预测信号之间的差异,即可得到当前块的预测残差信号。
计算当前块的原始信号和预测信号之间的差异的方法为现有技术,在此不再赘述。
104、将所述预测信息和所述预测残差信号写入码流。
由于解码装置可以根据该预测信息对当前块进行预测,以获取当前块的预 测信号,再结合当前块的预测残差信号,便可以计算出当前场的原始信号。因此,在编码装置,在对当前块进行压缩编码,只需将预测信息和预测残差信号写入码流发送到解码装置,即可让解码装置获取到当前块的原始信号。
本实施例中,对预测残差信号的编码可参考视频标准中对视频帧的预测残差信号的编码。实际中,在对预测残差信号编码时,先将预测残差信号进行压缩,再将压缩数据写入码流。
在对预测残差信号进行压缩时,一般分为有损压缩和无损压缩。无损压缩是指经过压缩后,重建的运动矢量场信号与原始信号完全一样,没有信息的损失。有损压缩是指经过压缩后,重建的运动矢量场信号与原始信号不完全一样,有一定的信息损失。无损压缩的过程可以包括变换和熵编码。有损压缩的过程可以包括变换、量化和熵编码。此为现有技术,在此不再赘述。
本实施例中,对当前块进行编码时,由于无需对当前块的原始信号进行编码,而是通过对预测信息和预测残差信号进行编码,大大减少了所要编码的信息量,提高了运动矢量场的编码效率;而且,由于预测信息为获取当前块的预测信号所需的信息,解码装置可以根据该预测信息获取到的预测信号与编码装置获取到的预测信号相同,这样,解码装置根据预测信息和预测残差信号可以高度还原出当前块的原始信号。
本实施例中,预测信息有多种,相对应的,根据预测信息来获取当前块的预测信号的方法有多种。下面对其中的几种举例说明。
举例一,可采用场内预测的方法,也即根据当前场来对该当前块进行预测。由于当前块与空间上相邻的运动矢量场之间存在一定的相关性,可以根据场内预测方法对当前块进行预测。
实际应用中,场内预测方法有多种。例如,场内预测方法包括角度预测和帧内区域划分预测。其中,角度预测的方法可采用步骤102的解释说明中所描述的根据帧内预测模式和当前块的参考运动矢量场块来获取当前块的预测信号。
帧内区域划分预测方法具体为:将当前块切分为至少两个区域,在每一个区域中获取一个运动矢量作为该区域的预测信号。因此,在帧内区域划分预测方法中,预测信息包括区域划分方法的信息以及确定每个区域的预测信号的方 法的信息。其中,所述区域划分方法的信息用于指示所述区域划分方法,例如为所述区域划分方法的索引。所述确定每个区域的预测信号的方法的信息用于指示确定每个区域的预测信号的方法,例如为确定每个区域的预测信号的方法的索引。
或者,预测信息也可以不包括确定每个区域的预测信号的方法的信息,而是编码装置和解码装置内存储有预先制定的相同的确定每个区域的预测信号的方法。在此不作限制。
本实施例中,确定每个区域的预测信号的方法有多种,例如,可以将一个区域内所有采样点的运动矢量的平均值作为该区域的预测信号,也可以将该区域内的其中一个采样点的运动矢量作为该区域的预测信号,或者也可以是其他,在此不作限制。其中,取区域内的其中一个采样点的运动矢量作为该区域的预测信号时,可对该区域内所有采样点遍历一遍,并确定使得预测残差信号的能量最小的一个采样点的运动矢量作为预测信号。
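下面的草图示意上述两种确定区域预测信号的方式（区域以采样点运动矢量数组表示，仅为示例性假设）：

```python
import numpy as np

def region_prediction_mean(region_mvs):
    """取区域内所有采样点运动矢量的平均值作为该区域的预测信号（示意）。
    region_mvs: 形状为 (N, 2) 的运动矢量数组。"""
    return np.asarray(region_mvs, dtype=np.float32).mean(axis=0)

def region_prediction_best_sample(region_mvs):
    """遍历区域内每个采样点，选取使预测残差能量最小的运动矢量作为预测信号（示意）。"""
    region_mvs = np.asarray(region_mvs, dtype=np.float32)
    best_mv, best_energy = None, None
    for mv in region_mvs:
        energy = float(np.sum((region_mvs - mv) ** 2))  # 以该运动矢量作预测时的残差能量
        if best_energy is None or energy < best_energy:
            best_mv, best_energy = mv, energy
    return best_mv
```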
本实施例中,将当前块划分的方法有多种。例如,首先将当前块切分为两个区域。再将该两个区域中的至少一个区域切分为两个区域,以此下去,直至满足预置条件时,停止对区域进行切分。
该预置条件有多种设置,例如,可以计算每一次切分后当前块的失真值,该失真值为该次切分后当前块中每个区域的预测信号与原始信号之间的差值中的最大值,当该失真值小于预置数值时,满足预置条件。当然,上述仅为对预置条件的举例描述,并不作限制。
又例如,可以预先设置好预置数值,每一次划分后,计算当前块所被划分成的区域的数量,当该数量达到预置数值时,满足预置条件。
其中,在将一个区域切分为两个区域时,其中一个区域或者其中两个区域的形状可以为梯形或者三角形。当然,该区域的形状也可以是其他形状,在此不作限制。
可选的,本实施例中,在对每一个区域切分时,遍历对该区域的每一种划分方法,并计算每一种切分方法下该区域内两个子区域的预测信号,并根据该两个子区域的预测信号来从中确定出最优的一种切分方法。具体的,可以采用率失真优化原则来确定出最优的一种切分方法。
本实施例中,在将当前块切分为不同区域时,可对当前块进行连续切分,也可以进行离散切分。下面结合图2对连续切分和离散切分进行解释。
如图2左图所示,连续切分指的是采用直线L直接将当前块S切分成两个区域P1和P2,离散切分指的是当前块S由多个像素块组成,切分时沿着当前场中的像素块的边缘对该当前块进行切分,将当前块S切分成两个区域P1和P2。
或者,在将当前块划分为不同区域时,还可以对当前块进行轮廓划分,也即根据当前块上的图像所表示的物体的轮廓来对当前块进行划分,在此不作限制。
举例二,可采用场间预测的方法,也即根据当前块与时间上邻近的运动矢量场之间的相关性,采用参考运动矢量场对当前块进行预测,其中该参考运动矢量场指的是与当前场时间上邻近的其他运动矢量场。
需注意的是,该邻近的运动矢量场可以是与当前场相邻的运动矢量场(也即在时间上与当前场对应的视频帧的下一个或者上一个视频帧的运动矢量场),也可以是与当前场相隔有至少一个运动矢量场(也即在时间上与当前场对应的视频帧相隔有至少一个视频帧的视频帧对应的运动矢量场)。其中,下面对场间预测的方法的其中的两种实施例进行举例描述。
实施例一:由于物体的运动在时间上存在一定的连贯性,因此,运动矢量场在时间上存在一定的相关性,也即当前块的至少部分会出现在参考运动矢量场中,但当前块在当前场中的位置和在参考运动矢量场中的位置不一定相同。
为描述清楚,在本文中,运动矢量场块在运动矢量场中的位置指的是该运动矢量场块在该运动矢量场中的坐标范围。那么,运动矢量场块p在运动矢量场P中的位置与运动矢量场块q在运动矢量场Q中的位置相同,指的是运动矢量场块p在运动矢量场P中的坐标范围与运动矢量场块q在运动矢量场Q中的坐标范围相同。
因此,本实施例中,获取所述当前块的预测信号具体包括:确定当前块的参考运动矢量场,在所述参考运动矢量场上查找所述当前块的匹配块,将所述匹配块的重建信号作为所述当前块的预测信号。查找匹配块的方法有多种,例如,可遍历参考运动矢量场上的每一个运动矢量场块,并计算该运动矢量场块 与当前块的差异,将与当前块差异最小的运动矢量场块作为当前块的匹配块。
获取所述当前块的预测信息具体包括:将所述参考运动矢量场的信息和所述匹配块的信息作为所述预测信息,其中,所述参考运动矢量场的信息用于指示所述参考运动矢量场,所述匹配块的信息用于指示所述匹配块。具体的,所述参考运动矢量场的信息可以是该参考运动矢量场的索引,所述匹配块的信息可以是该匹配块的位置相对于参考运动矢量场中第一运动矢量场块的位置的位移信息,其中第一运动矢量场块在参考运动矢量场中的位置与当前块在当前场上的位置相同;或者,所述匹配块的信息也可以是所述匹配块的索引,在此不作限制。
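下面给出实施例一中匹配块查找的一个示意性草图（差异采用平方误差和度量，这只是其中一种可行的取法，属于示例性假设）：

```python
import numpy as np

def find_matching_block(current_block, ref_field):
    """在参考运动矢量场上遍历各候选位置，返回与当前块差异最小的匹配块及其左上角坐标（示意）。"""
    bh, bw, _ = current_block.shape
    h, w, _ = ref_field.shape
    best = None
    for y in range(h - bh + 1):
        for x in range(w - bw + 1):
            cand = ref_field[y:y + bh, x:x + bw]
            diff = float(np.sum((cand - current_block) ** 2))  # 候选块与当前块的差异
            if best is None or diff < best[0]:
                best = (diff, (y, x), cand)
    return best[1], best[2]

# 预测信息可包括参考运动矢量场的索引，以及匹配块相对于同位置块的位移（可由返回的坐标得到）
```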
实施例二:本实施例中,假设视频序列中所拍摄的同一物体在较短时间内的运动状态保持不变,也即运动方向和大小不变,那么可以根据参考运动矢量场推导出当前块的预测信号。
具体的,本实施例中,如图3所示,获取所述当前块的预测信号具体包括:
S11、获取所述当前运动矢量场块的第一参考运动矢量场。
所述第一参考运动矢量场为与所述当前场邻近的、已编码并重建的运动矢量场。为方便描述,将所述当前场对应的视频帧称为t时刻的视频帧,将所述第一参考运动矢量场对应的视频帧称为t1时刻的视频帧。其中,t1时刻可以在t时刻之前,也可以在t时刻之后,在此不作限制。
S12、根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻获取第二参考运动矢量场。
为理解方便，下面先对将描述到的运动矢量场和视频帧进行解释：目标物体在t1时刻的视频帧中的位置为A，该视频帧进行帧间预测时所采用的参考视频帧为t2时刻的视频帧，其中目标物体在该t2时刻的视频帧中的位置为B。那么，目标物体在第一参考运动矢量场中对应的第一采样点的运动矢量MV1用于指示位置B到位置A的位移。
假设目标物体的运动状态（包括速度和方向）维持不变，也即目标物体在t1-t2时间内对应的位移为MV1，那么，可以推断出，目标物体在t-t1的时间内的位移应该为(t-t1)/(t1-t2)×MV1，也即假设目标物体在t时刻的视频帧中的位置为C，那么位置A到位置C的位移应该为(t-t1)/(t1-t2)×MV1。
根据上述方法，将第一参考运动矢量场内每一个采样点看成一个目标物体，可以推导出每一个采样点在t时刻所移动到的位置。将所述第一参考运动矢量场中各采样点各自移动，其中，每一个采样点移动后的位置相比移动前的位置的位移为(t-t1)/(t1-t2)×MV1，且该采样点在移动前的运动矢量为MV1，移动后的运动矢量改为(t-t1)/(t1-t2)×MV1。为描述方便，将第一参考运动矢量场内各采样点按上述规则移动后且改变运动矢量后所形成的新运动矢量场称为第二参考运动矢量场。
因此，获取第二参考运动矢量场时，根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
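下面是按上述计算式构造第二参考运动矢量场的示意性草图（移动后位置的取整方式、越界处理以及冲突时直接覆盖等细节均为示例性假设，冲突与空洞的处理见下文描述）：

```python
import numpy as np

def project_mv_field(first_ref, t, t1, t2):
    """把第一参考运动矢量场的各采样点按 (t-t1)/(t1-t2) 缩放后的运动矢量移动，
    得到第二参考运动矢量场（示意）。first_ref: (H, W, 2)。"""
    h, w, _ = first_ref.shape
    scale = (t - t1) / (t1 - t2)
    second_ref = np.full((h, w, 2), np.nan, dtype=np.float32)  # 未被填充的位置记为 NaN
    for y in range(h):
        for x in range(w):
            mv2 = scale * first_ref[y, x]            # 第二采样点的运动矢量，同时作为位移
            ny, nx = int(round(y + mv2[1])), int(round(x + mv2[0]))
            if 0 <= ny < h and 0 <= nx < w:
                second_ref[ny, nx] = mv2             # 移动后的位置上记录缩放后的运动矢量
    return second_ref
```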
本实施例中,将所述第一参考运动矢量场中各采样点移动以获取第二参考运动矢量场时,会出现第一参考运动矢量场中至少两个采样点在t时刻移动到运动矢量场内相同的位置上。
也就是说，第一参考运动矢量场中各采样点保持当前的速度和方向不变，在形成t时刻的运动矢量场（也即第二参考运动矢量场）时，可能出现第一参考运动矢量场中至少两个采样点移动到同一个位置上，而第二参考运动矢量场上该位置的采样点的取值有多种取法，例如，可以取其中一个采样点的运动矢量与(t-t1)/(t1-t2)的乘积作为该位置上的采样点的运动矢量。
或者,优选的,在这种情况下,采用如下方法确定该位置的运动矢量:
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积。
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。为下文描述方便,将该第二采样点称为目标第二采样点,该第一参考运动矢量场的至少两个第一采样点称为第 一集合。
其中,在确定第一集合中各第一采样点的运动矢量的权数时,可以预设该第一集合中各第一采样点的运动矢量的权数相等,也即将第一集合中各第一采样点的运动矢量的平均值作为目标第二采样点的运动矢量。
或者,如图4所示,确定第一集合中各第一采样点的运动矢量的权数具体包括:
S21、获取第二参考运动矢量场中位于目标第二采样点周围的至少一个第二采样点的运动矢量。
为描述方便,将位于目标第二采样点周围的至少一个第二采样点称为第二集合,每一个第二采样点为第二集合的一个元素。
可选的,该第二集合可以包括位于目标第二采样点周围且与目标第二采样点相邻的四个第二采样点中的至少一个。
S22、计算所述第一集合中第一采样点的运动矢量分别与所述第二集合的运动矢量的相似程度。
对第一集合中的每一个第一采样点,计算该第一采样点的运动矢量与第二集合中各采样点的运动矢量的相似程度。计算相似程度的方法有多种。例如,可以计算该第一采样点的运动矢量分别与第二集合中每一个元素的运动矢量的差值,再将各差值的和或者平均值作为该第一采样点的运动矢量与第二集合的运动矢量的相似程度,那么,差值的和或者平均值越小,相似程度越高。
当然,上述仅为举例,并不作限制。
S23、根据所述相似程度确定第一集合中各第一采样点的运动矢量的权数,其中,与第二集合的运动矢量的相似程度越高的第一采样点的运动矢量的权数越大。
确定第一集合中各元素的运动矢量分别对应的相似程度后,根据该相似程度的大小确定第一集合中各元素的运动矢量的权数,其中,相似程度越高的元素的运动矢量的权数越大。具体的,可以预先设置不同排名对应的权数,确定第一集合中各元素的相似程度的排名后,取该元素的排名对应的权数作为该元素的运动矢量的权数。
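下面的草图示意当至少两个第一采样点移动到同一位置时，按其运动矢量与第二集合（目标第二采样点周围已得到的第二采样点）运动矢量的相似程度加权平均的做法（以差值之和的倒数作为权数仅为其中一种取法，属于示例性假设）：

```python
import numpy as np

def resolve_collision_mv(first_mvs, neighbor_mvs, scale):
    """first_mvs: (K, 2) 移动到同一位置的第一采样点运动矢量（第一集合）；
    neighbor_mvs: (M, 2) 目标第二采样点周围第二采样点的运动矢量（第二集合）；
    scale: (t-t1)/(t1-t2)。返回目标第二采样点的运动矢量（示意）。"""
    first_mvs = np.asarray(first_mvs, dtype=np.float32)
    neighbor_mvs = np.asarray(neighbor_mvs, dtype=np.float32)
    diffs = np.array([float(np.abs(neighbor_mvs - mv).sum()) for mv in first_mvs])
    weights = 1.0 / (diffs + 1e-6)        # 差值越小（相似程度越高），权数越大
    weights /= weights.sum()
    return scale * (weights[:, None] * first_mvs).sum(axis=0)
```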
上面描述了第一参考运动矢量场中出现至少两个第一采样点在t时刻移动 到参考运动矢量场内相同的位置上的情况。
实际应用中,第二参考运动矢量场上还可能会出现特殊位置,其中第一参考运动矢量场中没有采样点在t时刻移动到该特殊位置上。
在这种情况下，获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量。
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
为描述方便,下文将该至少一个第二采样点称为第三集合。
其中,在确定第三集合中各第二采样点的运动矢量的权数时,可以预设第三集合中各第二采样点的运动矢量的权数相等,也即将第三集合中各第二采样点的运动矢量的平均值作为目标第二采样点的运动矢量。
或者,所述确定第三集合中各第二采样点的运动矢量的权数具体包括:
获取第三集合中各第二采样点分别与目标第二采样点的距离，根据该距离分别确定第三集合中各第二采样点的权数，其中，距离越近的第二采样点的运动矢量的权数越大。
当然,当第三集合中只有一个第二采样点时,该第三集合中第二采样点的运动矢量的加权平均值为该第二采样点的运动矢量。
具体举例来说，距离目标第二采样点左边最近的一个第二采样点（运动矢量为MVa）与目标第二采样点的距离为m，距离目标第二采样点右边最近的一个第二采样点（运动矢量为MVb）与目标第二采样点的距离为n，那么，目标第二采样点的运动矢量为(n×MVa+m×MVb)/(m+n)。
当然,上述仅为举例,并不作限制。
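下面的草图示意按距离加权填充上述特殊位置的做法，对应前文的例子：左右两个第二采样点的距离分别为m和n时，结果为(n×MVa+m×MVb)/(m+n)（距离加权的具体取法为示例性假设）：

```python
import numpy as np

def fill_missing_mv(neighbor_mvs, distances):
    """用目标第二采样点邻近的第二采样点的运动矢量按距离加权平均填充（示意）。
    neighbor_mvs: (M, 2)；distances: (M,)，距离越近权数越大。"""
    neighbor_mvs = np.asarray(neighbor_mvs, dtype=np.float32)
    distances = np.asarray(distances, dtype=np.float32)
    weights = 1.0 / distances
    weights /= weights.sum()
    return (weights[:, None] * neighbor_mvs).sum(axis=0)

# 两个邻近采样点、距离分别为 m 和 n 时，上式等价于 (n*MVa + m*MVb) / (m + n)
```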
S13、根据所述第二参考运动矢量场获取所述预测信号,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量 场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同。
第二参考运动矢量场是t时刻的预测运动矢量场，也即该第二参考运动矢量场是当前场的预测信号。由于当前场中的不同块采用的预测方法不一定全是本实施例中所描述的方法，因此，在当前块采用本实施例描述的方法时，将第二参考运动矢量场中位于当前块所在区域处的运动矢量场块的信号作为当前块的预测信号。
S14、根据所述第一参考运动矢量场获取所述预测信息,所述预测信息包括所述用于指示所述第一参考运动矢量场的信息。
由于在获取当前块的预测信号时只根据第一参考运动矢量场的内容即可确定当前块的预测信号,因此,只需将第一参考运动矢量场的信息作为预测信息,便可让解码装置计算出当前块的预测信号。其中,所述第一参考运动矢量场块的信息用于指示所述第一参考运动矢量场。例如,所述第一参考运动矢量场的信息为该第一参考运动矢量场的索引。
本实施例中，由于当前块的预测信息仅包括第一参考运动矢量场的信息，大大降低了对当前块编码所需的比特数。
需注意的是,上文所描述的所有方法虽然是以运动矢量为例,但实际应用中也可以将运动矢量替换为该运动矢量的其中一个分量。
举例三,可采用运动矢量场分量间预测的方法,也即根据当前块的方向系数和当前块中各采样点的运动矢量的一个分量对另一个分量进行预测。
运动矢量包括方向和大小，可将运动矢量分解为水平和竖直两个分量。例如，对于一个运动矢量MV，该运动矢量与水平方向的夹角为θ，那么该运动矢量在水平方向和竖直方向上的分量的大小分别为MVx=|MV|·cosθ和MVy=|MV|·sinθ。由此可推出，MVy=MVx·tanθ以及MVx=MVy/tanθ。
实际应用中,编码装置一般通过对各采样点的运动矢量的水平分量和竖直分量存储来对该采样点的运动矢量存储。本实施例中,通过对运动矢量的一个分量 的大小以及该分量与另一个分量之间的关系,来计算运动矢量的另一个分量的预测信号。
具体的,请参阅图5,获取所述当前块的预测信号具体包括以下步骤:
S31、获取当前运动矢量场块的方向系数。
采样点的运动矢量可分解为垂直分量和水平分量。为描述方便,将运动矢量的垂直分量和水平分量中的其中一个分量称为第一分量,称另一个分量为运动矢量的第二分量。由于当前块中包括至少一个采样点,因此,当前块可分解为第一分量块和第二分量块,其中,第一分量块包括当前块中各采样点的第一分量,第二分量块包括当前块中各采样点的第二分量。
当前块的方向系数用于指示所述各采样点的第一分量的值与第二分量的值之间的关系,也即在对当前块进行预测时,假设当前块中所有采样点的第一分量和第二分量之间的函数关系相同。
S32、获取所述第一分量的重建值。
本实施例中,在对当前块进行压缩时,包括对当前块的第一分量块和第二分量块的压缩。在对第一分量块编码时,可采用如图1所示实施例中的方法,或者上文描述的场内预测或场间预测的方法来对第一分量块编码,在此不作限制。
在对第二分量块编码时,由于假设所有采样点的第一分量和第二分量之间的函数关系相同,因此,第二分量块中各采样点的预测值可由该采样点的第一分量与方向系数计算出来。
由于解码装置是根据方向系数和当前块中各采样点的第一分量的重建值来计算该采样点的第二分量的预测值，因此，在编码装置计算当前块的预测信号前，先获取方向系数和当前块中各采样点的第一分量的重建值。需注意的是，由于方向系数信息量较小，编码装置对方向系数一般为无损编码，因此编码装置无需获取方向系数的重建值，而是直接采用方向系数的原始值。
S33、根据所述第一分量的重建值和所述方向系数计算所述预测信号,所述预测信号包括所述第二分量的预测值。
由于解码端可以获取到第一分量的重建值,因此,当前块的预测信号包括第二分量的预测值时,解码端即可根据该第二分量的预测值和第一分量的重建 值获取当前块的预测信号。
对当前块中的每一个采样点,将当前块的方向系数作为该采样点的方向系数,也即根据该采样点的第一分量以及当前块的方向系数计算该采样点的第二分量。这样,当前块中每个采样点的预测信号包括第二分量的预测值。
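下面给出由第一分量的重建值和方向系数计算第二分量预测值的示意性草图（这里假设第一、第二分量满足线性关系y=k·x+b，当关系为y=k·x时取b=0，均为示例性假设）：

```python
import numpy as np

def predict_second_component(first_recon, k, b=0.0):
    """按方向系数由第一分量的重建值计算当前块各采样点第二分量的预测值（示意）。"""
    return k * np.asarray(first_recon, dtype=np.float32) + b
```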
S34、根据所述方向系数获取所述预测信息,所述预测信息包括用于指示所述方向系数的方向系数信息。
本实施例中,由于当前块的预测信息包括一个方向系数,当前块编码所需的比特数较少。
图5所示实施例中,获取当前块的方向系数的方法有多种。下面对其中的几种进行举例描述。
例一、如图6所示,获取当前块的方向系数包括:
S41、获取所述原始信号的至少两个采样点的运动矢量。
S42、将所述至少两个采样点的运动矢量的第一分量作为预设函数的自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合,将拟合得到的所述预设函数的系数作为所述方向系数。
本实施例中,通过对当前块中至少两个采样点的运动矢量进行拟合,其中,在拟合时以各采样点的运动矢量的第一分量作为预设函数的自变量,第二分量作为该预设函数的函数值,以获取当前块中各采样点的第一分量和第二分量间的函数关系。
具体的,可以将各采样点拟合为一条直线。将各采样点拟合为一条直线时,该直线的预设函数方程为y=kx,k不等于0,或者,y=ax+b,a不等于0。在预设函数方程为y=kx的情况中,k为当前块的方向系数。这表示该当前块中所有点均做方向相同的直线运动。在预设函数方程为y=ax+b的情况中,a和b为当前块的方向系数。
也可以将所有点拟合为一条曲线,那么该曲线对应的函数方程中的系数为当前块的方向系数。例如,该曲线对应的预设函数方程为y=ax2+bx+c,那么a、b、c为当前块的方向系数。
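下面是用最小二乘把各采样点的第一、第二分量拟合为直线或曲线、并取拟合函数的系数作为方向系数的示意性草图（以numpy.polyfit为例，仅为一种可行的拟合方式）：

```python
import numpy as np

def fit_direction_coeff(first_comps, second_comps, degree=1):
    """以第一分量为自变量、第二分量为函数值做最小二乘拟合，
    返回拟合函数的系数作为方向系数（示意）。
    degree=1 对应 y=ax+b，degree=2 对应 y=ax^2+bx+c。"""
    return np.polyfit(np.asarray(first_comps, dtype=np.float64),
                      np.asarray(second_comps, dtype=np.float64),
                      degree)

# 用法示例：块内采样点近似满足 y=2x 时，拟合得到的方向系数约为 [2, 0]
print(fit_direction_coeff([1.0, 2.0, 3.0], [2.1, 3.9, 6.0]))
```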
本实施例中,作为预测信息的一部分的当前运动矢量场的方向系数信息具 体包括方向系数的值。当然,编码装置和解码装置需预先制定好不同的系数个数分别对应的函数方程,这样,编码装置只需将方向系数写入码流,解码装置便可根据该方向系数中所包含的数值的个数确定对应的函数方程。
例二、如图7所示,获取当前块的方向系数包括:
S51、获取所述原始信号的至少两个采样点的运动矢量。
S52、将所述至少两个采样点的运动矢量的第一分量作为自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合。
详细解释参考例一中的步骤S42的部分解释说明,在此不再赘述。
S53、获取所述当前运动矢量场中与所述当前运动矢量场块邻近的至少一个已编码的运动矢量场块的方向系数。
S54、将拟合出的函数的系数和所述至少一个已编码的运动矢量场块的方向系数作为所述当前运动矢量场块的候选方向系数集的候选方向系数。
S55、取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
与例一不同的是,本实施例中,计算出当前块中的采样点的拟合函数的系数后,并不是直接将该拟合函数的系数作为当前块的方向系数。在位于当前块周围且与当前块相邻的所有已编码块中,当前块所对应的图像有可能与所述所有已编码块中的其中一个已编码块所对应的图像为同一个物体,那么,当前块的方向系数有可能与其中一个已编码块的方向系数相同。
因此，本实施例中，还获取所述所有已编码块的方向系数，并将所述拟合函数的系数和所述所有已编码块的方向系数作为所述当前块的候选方向系数集的候选方向系数，计算每一个候选方向系数对应的所述当前块的候选预测信号以及候选预测残差信号，将信号能量最小的候选预测残差信号所对应的候选方向系数作为所述当前块的方向系数。
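下面是从候选方向系数集中选取使预测残差能量最小的候选方向系数的示意性草图（以y=k·x的单系数情形为例，属于示例性假设）：

```python
import numpy as np

def select_direction_coeff(candidates, first_recon, second_orig):
    """遍历候选方向系数，返回使第二分量预测残差能量最小的方向系数（示意）。"""
    first_recon = np.asarray(first_recon, dtype=np.float32)
    second_orig = np.asarray(second_orig, dtype=np.float32)
    best_k, best_energy = None, None
    for k in candidates:
        residual = second_orig - k * first_recon          # 候选预测残差信号
        energy = float(np.sum(residual ** 2))
        if best_energy is None or energy < best_energy:
            best_k, best_energy = k, energy
    return best_k
```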
本实施例中,当所述当前块的方向系数为所述函数的系数时,所述方向系数信息包括所述方向系数的值,当所述当前块的方向系数为所述已编码块的方向系数时,所述方向系数信息用于指示所述已编码块或者包括所述已编码块的 方向系数的值,在此不作限制。
例三、获取当前块的方向系数包括:
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同时,将所述至少两个已编码运动矢量场块的方向系数作为所述当前运动矢量场块的方向系数。
当与当前块相邻的至少两个已编码块的方向系数相同时，可以推断该至少两个已编码块的图像属于同一个物体，进而假设该当前块的图像和该至少两个已编码块的图像属于同一个物体，因此，可以直接确定当前块的方向系数和该至少两个已编码块的方向系数相同。
本实施例中,当前运动矢量场的方向系数信息具体用于指示所述已编码块或者包括所述已编码块的方向系数的值,例如,所述方向系数信息包括所述已编码块的索引,在此不作限制。
例四、如图8所示,获取当前块的方向系数包括:
S61、当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同,并且所述至少两个已编码运动矢量场块的方向系数指示所述至少两个已编码运动矢量场块的采样点的运动矢量的第一分量与第二分量的比值时,执行以下步骤:
S62、将所述至少两个已编码运动矢量场块的方向系数作为候选方向系数集的候选方向系数。
S63、获取所述原始信号中至少一个采样点的运动矢量,
在所述至少一个采样点为一个采样点时,将所述一个采样点的运动矢量的第一分量与所述一个采样点的运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数;或者,
在所述至少一个采样点为至少两个采样点时,将所述至少两个采样点的运动矢量的第一分量与所述至少两个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数。
S64、获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
与例三不同的是,本实施例中,当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同时,并不是直接将该至少两个已编码运动矢量场块的方向系数作为当前块的方向系数。在与当前块相邻的已编码块中,当前块中的各采样点所对应的图像有可能为同一个物体,那么,当前块的方向系数有可能与其中一个采样点的方向系数相同。
因此,本实施例中,还获取所述原始信号中至少一个采样点的运动矢量,并将所述已编码块的方向系数以及所述至少一个采样点的运动矢量的第一分量与所述至少一个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数,或者,将所述已编码块的方向系数以及所述运动矢量的第一分量与所述运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数。
本实施例中,当所述当前块的方向系数为所述函数的系数时,所述方向系数信息包括所述方向系数的值,当所述当前块的方向系数为所述已编码块的方向系数时,所述方向系数信息用于指示所述已编码块或者包括所述已编码块的方向系数的值,在此不作限制。
上面对如何获取当前块的预测信号和预测信息的几种方法进行了解释说明。实际应用中,在对当前运动矢量场的各运动矢量场块进行压缩编码时,不同运动矢量场块获取预测信号和预测信息时可以采用相同的方法,也可以采用不同的方法。
在不同运动矢量场块采用的获取预测信号和预测信息的方法不同的情况中,确定当前块采用的获取预测信号和预测信息的方法有多种。例如,可在编码装置和解码装置预先制定好每一种获取方法对应的索引。在编码装置,获取当前块的预测信号和预测信息时,遍历每一种获取方法并计算每一种获取方法中的预测残差信号,选择能量最小的预测残差信号对应的获取方法确定为当前块的获取方法,并将该获取方法的索引包括到当前块的预测信息内。
上面对本发明实施例的运动矢量场编码方法进行了描述。下面介绍本发明实施例提供的运动矢量场解码方法,本发明实施例提供的运动矢量场解码方法的执行主体是解码装置,其中,该解码装置可以是任何需要输出、播放视频的 装置,如手机,笔记本电脑,平板电脑,个人电脑等设备。
本发明运动矢量场解码方法的一实施例,一种运动矢量场解码方法包括:获取当前运动矢量场块的预测信息和预测残差信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;根据所述预测信息获取所述当前运动矢量场块的预测信号;根据所述预测信号和所述预测残差信号,计算所述当前运动矢量场块的重建信号。
请参见图9,图9为本发明的另一实施例提供的一种运动矢量场解码方法的流程示意图,如图9所示,本发明的另一实施例提供的一种运动矢量场解码方法可以包括以下内容:
901、获取当前运动矢量场块的预测信息和预测残差信号。
解码装置接收到视频码流后,对该视频码流进行解码,以还原原始视频序列中的各视频图像。其中,在对每一个视频帧解码时,通过该视频帧的参考帧以及该视频帧的运动矢量场来对该视频帧解码。
因此,解码装置需先对视频帧的参考帧以及运动矢量场进行解码。本实施例中,将当前要解码的运动矢量场称为当前运动矢量场。所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场。
在对当前运动矢量场解码时,通过依次对当前运动矢量场中的每一个运动矢量场块进行重建来获得该当前运动矢量场的重建信号。在对当前块进行解码时,先从视频码流中获取到当前块的预测信息和预测残差信号。
902、根据所述预测信息获取所述当前运动矢量场块的预测信号。
预测信息所包含的内容不同,根据预测信息获取当前块的预测信号的方法也不同。
举例来说,在预测信息为帧内预测模式的索引以及参考运动矢量场块的索引的情况下,根据预测信息获取所述当前运动矢量场块的预测信号具体包括:根据帧内预测模式的索引确定帧内预测模式,根据参考运动矢量场块索引确定参考运动矢量场块,然后根据帧内预测模式和参考运动矢量场块的信号获取当 前块的预测信号。
编码装置和解码装置预先制定好每一种帧内预测模式所对应的计算预测信号的方法。这样,解码装置在接收到预测信息后,根据预测信息内的预测模式以及参考运动矢量场块的索引,采用预制的计算方法计算预测信号。
例如,获取的帧内预测模式为HEVC标准中提供的视频帧的35种帧内预测模式中的水平预测模式,那么将当前块的参考运动矢量场块的重建信号作为当前块的预测信号。
又例如,获取的帧内预测模式为所述35种帧内预测模式中的Intra_DC模式,那么将参考运动矢量场块的重建像素平均值作为当前块的预测信号。
实际应用中,解码装置获取到的预测信息中也可以不包括参考运动矢量场块的索引,解码装置和编码装置预先指定好每一种帧内预测模式所对应的当前块采用的参考运动矢量场块相对该当前块的位置。这样,解码装置获取到预测信息后,根据预测信息内的帧内预测模式来确定当前块的参考运动矢量场。
903、根据所述预测信号和所述预测残差信号,计算所述当前运动矢量场块的重建信号。
所述预测残差信号用于指示所述当前块的原始信号和预测信号之间的差异。解码装置获取到当前块的预测信号后,通过预测残差信号对该预测信号进行修正,即可得到当前块的重建信号。
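解码端的重建过程可用如下草图示意（仅为说明预测信号与预测残差信号相加得到重建信号这一关系）：

```python
import numpy as np

def reconstruct_block(prediction, residual):
    """根据预测信号和预测残差信号计算当前运动矢量场块的重建信号（示意）。"""
    return np.asarray(prediction, dtype=np.float32) + np.asarray(residual, dtype=np.float32)
```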
本实施例中,由于预测信息为获取当前块的预测信号所需的信息,解码装置可以根据该预测信息获取到的预测信号与编码装置获取到的预测信号相同,这样,解码装置根据预测信息和预测残差信号可以高度还原出当前块的原始信号。
本实施例中,预测信息有多种,相对应的,根据预测信息来获取当前块的预测信号的方法有多种。下面对其中的几种举例说明。
举例一:预测信息包括区域划分方法的信息以及确定每个区域的预测信号的方法的信息。
其中,所述区域划分方法的信息用于指示所述区域划分方法,例如为所述区域划分方法的索引。所述确定每个区域的预测信号的方法的信息用于指示确定每个区域的预测信号的方法,例如为确定每个区域的预测信号的方法的索引。
根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
根据区域划分方法的信息确定区域划分方法,并采用所述区域划分方法将所述当前块划分为不同区域。
根据所述确定每个区域的预测信号的方法的信息获取确定每个区域的预测信号的方法,并采用该方法获取每个区域的预测信号。
例如,所述确定每个区域的预测信号的方法的信息指示该区域的预测信号为该区域内所有采样点的运动矢量的平均值。那么,获取该区域的预测信号时,计算区域内所有采样点的运动矢量的平均值,将该平均值作为该区域的预测信号。
或者,
所述确定每个区域的预测信号的方法的信息指示该区域的预测信号为其中一个采样点的运动矢量。那么,获取该区域的预测信号时,根据该采样点的索引获取该采样点的运动矢量,将该采样点的运动矢量作为该区域的预测信号。
当然,上述描述仅为举例,在此不作限制。
或者,预测信息也可以不包括确定每个区域的预测信号的方法的信息,而是编码装置和解码装置内存储有预先制定的相同的确定每个区域的预测信号的方法。在此不作限制。
本实施例中,将当前块划分的方法有多种。具体可参考在运动矢量场编码方法中对获取预测信息和预测信号的举例一的解释描述中对当前块划分的方法的举例描述,在此不再赘述。
举例二:预测信息包括参考运动矢量场的信息和所述匹配块的信息,其中,所述参考运动矢量场的信息用于指示所述参考运动矢量场,所述匹配块的信息用于指示所述匹配块。
具体的,所述参考运动矢量场的信息可以是该参考运动矢量场的索引,所述匹配块的信息可以是该匹配块的位置相对于参考运动矢量场中第一运动矢量场块的位置的位移信息,其中第一运动矢量场块在参考运动矢量场中的位置与当前块在当前场上的位置相同;或者,所述匹配块的信息也可以是所述匹配块的索引,在此不作限制。
根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:根据参 考运动矢量场的信息确定参考运动矢量场,根据匹配块的信息在参考运动矢量场中查找所述匹配块,将所述匹配块的重建信号作为当前块的预测信号。
举例三:预测信息包括所述用于指示所述运动矢量场块的第一参考运动矢量场的信息。
根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
根据所述预测信息获取所述第一参考运动矢量场,所述第一参考运动矢量场为t1时刻的视频帧的运动矢量场;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为所述第一参考运动矢量场对应的视频帧所采用的参考视频帧对应的时刻;
获取所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
本实施例中，为理解方便，下面先对将描述到的运动矢量场和视频帧进行解释：目标物体在t1时刻的视频帧中的位置为A，该视频帧进行帧间预测时所采用的参考视频帧为t2时刻的视频帧，其中目标物体在该t2时刻的视频帧中的位置为B。那么，目标物体在第一参考运动矢量场中对应的第一采样点的运动矢量MV1用于指示位置B到位置A的位移。
假设目标物体的运动状态（包括速度和方向）维持不变，也即目标物体在t1-t2时间内对应的位移为MV1，那么，可以推断出，目标物体在t-t1的时间内的位移应该为(t-t1)/(t1-t2)×MV1，也即假设目标物体在t时刻的视频帧中的位置为C，那么位置A到位置C的位移应该为(t-t1)/(t1-t2)×MV1。
根据上述方法，将第一参考运动矢量场内每一个采样点看成一个目标物体，可以推导出每一个采样点在t时刻所移动到的位置。将所述第一参考运动矢量场中各采样点各自移动，其中，每一个采样点移动后的位置相比移动前的位置的位移为(t-t1)/(t1-t2)×MV1，且该采样点在移动前的运动矢量为MV1，移动后的运动矢量改为(t-t1)/(t1-t2)×MV1。为描述方便，将第一参考运动矢量场内各采样点按上述规则移动后且改变运动矢量后所形成的新运动矢量场称为第二参考运动矢量场。
因此,可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,具体包括:
根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
本实施例中,将所述第一参考运动矢量场中各采样点移动以获取第二参考运动矢量场时,会出现第一参考运动矢量场中至少两个采样点在t时刻移动到运动矢量场内相同的位置上。
也就是说,第一参考运动矢量场中各采样点保持当前的速度和方向不变,在t时刻形成的运动矢量场(也即第二参考运动矢量场)时,可能出现第一参考运动矢量场中至少两个采样点移动到同一个位置上,而第二参考运动矢量场上该位置的采样点的取值有多种取法,例如,可以取其中一个采样点的运动矢量与(t-t1)/(t1-t2)的乘积作为该位置上的采样点的运动运动矢量。
或者,可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,具体包括:
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
为下文描述方便,将该第二采样点称为目标第二采样点,该第一参考运动矢量场的至少两个第一采样点称为第一集合。
其中,在确定第一集合中各第一采样点的运动矢量的权数时,可以预设该第一集合中各第一采样点的运动矢量的权数相等,也即将第一集合中各第一采样点的运动矢量的平均值作为目标第二采样点的运动矢量。
或者,确定第一集合中各第一采样点的运动矢量的权数具体包括:
S71、获取第二参考运动矢量场中位于目标第二采样点周围的至少一个第二采样点的运动矢量。
为描述方便,将位于目标第二采样点周围的至少一个第二采样点称为第二集合,每一个第二采样点为第二集合的一个元素。
可选的,该第二集合可以包括位于目标第二采样点周围且与目标第二采样点相邻的四个第二采样点中的至少一个。
S72、计算所述第一集合中第一采样点的运动矢量分别与所述第二集合的运动矢量的相似程度。
对第一集合中的每一个第一采样点,计算该第一采样点的运动矢量与第二集合中各采样点的运动矢量的相似程度。计算相似程度的方法有多种。例如,可以计算该第一采样点的运动矢量分别与第二集合中每一个元素的运动矢量的差值,再将各差值的和或者平均值作为该第一采样点的运动矢量与第二集合的运动矢量的相似程度,那么,差值的和或者平均值越小,相似程度越高。
当然,上述仅为举例,并不作限制。
S73、根据所述相似程度确定第一集合中各第一采样点的运动矢量的权数,其中,与第二集合的运动矢量的相似程度越高的第一采样点的运动矢量的权数越大。
确定第一集合中各元素的运动矢量分别对应的相似程度后,根据该相似程度的大小确定第一集合中各元素的运动矢量的权数,其中,相似程度越高的元素的运动矢量的权数越大。具体的,可以预先设置不同排名对应的权数,确定第一集合中各元素的相似程度的排名后,取该元素的排名对应的权数作为该元素的运动矢量的权数。
上面描述了第一参考运动矢量场中出现至少两个第一采样点在t时刻移动到参考运动矢量场内相同的位置上的情况。
实际应用中,第二参考运动矢量场上还可能会出现特殊位置,其中第一参考运动矢量场中没有采样点在t时刻移动到该特殊位置上。
因此,可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,具体包括:
获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
举例三中的方案的具体解释可参考在运动矢量场编码方法中图3和图4所示实施例的解释描述,在此不再赘述。
举例四:所述预测信息包括用于指示所述当前运动矢量场块的方向系数的方向系数信息,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系。
根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
获取所述第一分量的重建值,根据所述方向系数和所述第一分量的重建值计算所述第二分量的预测值,所述预测信号包括所述第二分量的预测值。
其中,当前运动矢量场块的方向系数信息有多种。例如,所述方向系数信息包括用于指示所述当前运动矢量场中的已重建运动矢量场块的信息,所述方向系数包括所述已重建运动矢量场块的方向系数;或者,所述方向系数信息包括所述方向系数的值。
举例四中的方案的具体解释可参考在运动矢量场编码方法中图5所示实施例的解释描述,在此不再赘述。
上面对本发明的运动矢量场编码方法和运动矢量场解码方法进行了描述,下面对本发明的编码装置进行描述。
参见图10,本发明实施例提供的一种编码装置,包括:
第一获取模块1001,用于获取当前运动矢量场块的原始信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
第二获取模块1002,用于获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,所述预测信息用于指示获取所述预测信号所需的信息;
计算模块1003,用于根据所述第二获取模块获取的所述预测信号和所述第一获取模块获取的所述原始信号,计算所述当前运动矢量场块的预测残差信号,所述预测残差信号用于指示所述原始信号和所述预测信号之间的残差;
编码模块1004,用于将所述第二获取模块获取的所述预测信息和所述计算模块计算得到的所述预测残差信号写入码流。
本实施例中,编码装置对当前块进行编码时,由于无需对当前运动矢量场块的原始信号进行编码,而是通过对预测信息和预测残差信号进行编码,提高了运动矢量场的压缩效率。
在本发明的一些可能的实施方式中,所述第二获取模块1002用于:
确定一种帧内预测模式,将当前运动矢量场上与当前运动矢量场块邻近的、已编码并重建的至少一个运动矢量场块作为该当前块的参考运动矢量场块;根据该帧内预测模式和该参考运动矢量场块获取当前块的预测信号。
例如,获取的帧内预测模式为33个方向性预测模式中的水平预测模式,当前块的参考运动矢量场块为与所述当前块位于同一行上的左边第一个运动矢量场块。那么第二获取模块1002用于将参考运动矢量场块的重建信号作为当前块的预测信号。
又例如,获取的帧内预测模式为Intra_DC模式,获取到当前块的参考运动矢量场块后,那么第二获取模块1002用于将参考运动矢量场块的重建像素平均值作为当前块的预测信号。
在本发明的一些可能的实施方式中,所述第二获取模块1002用于:
获取所述当前运动矢量场块的第一参考运动矢量场,所述第一参考运动矢量场为已编码并重建的运动矢量场,其中,所述第一参考运动矢量场为t1时刻的视频帧对应的运动矢量场,所述t1时刻的视频帧为与所述t时刻的视频帧邻近的视频帧;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为在对所述第一参考运动矢量 场对应的视频帧进行帧间预测时所采用的参考视频帧对应的时刻;
根据所述第二参考运动矢量场获取所述预测信号,所述预测信号包括所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同;
根据所述第一参考运动矢量场获取所述预测信息,所述预测信息包括所述用于指示所述第一参考运动矢量场的信息。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于确定所述第二参考运动矢量场的第二采样点，其中，所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点，以所述至少两个第一采样点各自的移动矢量为位移，所移动到的位置为同一位置，所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积；
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于：
获取所述当前运动矢量场块的方向系数,其中,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量的第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
获取所述第一分量的重建值;
根据所述第一分量的重建值和所述方向系数计算所述预测信号,所述预测信号包括所述第二分量的预测值;
根据所述方向系数获取所述预测信息,所述预测信息包括用于指示所述方向系数的方向系数信息。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于：
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为预设函数的自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
将拟合得到的所述预设函数的系数作为所述方向系数。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于：
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
获取所述当前运动矢量场中与所述当前运动矢量场块邻近的至少一个已编码的运动矢量场块的方向系数;
将拟合出的函数的系数和所述至少一个已编码的运动矢量场块的方向系数作为所述当前运动矢量场块的候选方向系数集的候选方向系数;
取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于：
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编 码运动矢量场块的方向系数相同时,将所述至少两个已编码运动矢量场块的方向系数作为所述当前运动矢量场块的方向系数。
在本发明的一些可能的实施方式中，所述第二获取模块1002用于：
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同,并且所述至少两个已编码运动矢量场块的方向系数指示所述至少两个已编码运动矢量场块的采样点的运动矢量的第一分量与第二分量的比值时,执行以下步骤:
将所述至少两个已编码运动矢量场块的方向系数作为候选方向系数集的候选方向系数;
获取所述原始信号中至少一个采样点的运动矢量,在所述至少一个采样点为一个采样点时,将所述一个采样点的运动矢量的第一分量与所述一个采样点的运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数;或者,
在所述至少一个采样点为至少两个采样点时,将所述至少两个采样点的运动矢量的第一分量与所述至少两个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数;
获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
上面对本发明的运动矢量场编码方法、运动矢量场解码方法和编码装置进行了描述,下面对本发明的解码装置进行描述。
参见图11,本发明实施例提供的一种解码装置,包括:
第一获取模块1101,用于获取当前运动矢量场块的预测信息和预测残差信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
第二获取模块1102,用于根据所述预测信息获取所述当前运动矢量场块的预测信号;
计算模块1103,用于根据所述第二获取模块获取的所述预测信号和所述第一获取模块获取的所述预测残差信号,计算所述当前运动矢量场块的重建信 号。
在本发明的一些可能的实施方式中,所述预测信息包括所述用于指示所述运动矢量场块的第一参考运动矢量场的信息;所述第二获取模块1102用于:
根据所述预测信息获取所述第一参考运动矢量场,所述第一参考运动矢量场为t1时刻的视频帧的运动矢量场;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为所述第一参考运动矢量场对应的视频帧所采用的参考视频帧对应的时刻;
获取所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
在本发明的一些可能的实施方式中，所述第二获取模块1102用于：
根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
在本发明的一些可能的实施方式中，所述第二获取模块1102用于：
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
在本发明的一些可能的实施方式中，所述第二获取模块1102用于：
获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
在本发明的一些可能的实施方式中,所述预测信息包括用于指示所述当前运动矢量场块的方向系数的方向系数信息,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
所述第二获取模块1102用于获取所述第一分量的重建值,根据所述方向系数和所述第一分量的重建值计算所述第二分量的预测值,所述预测信号包括所述第二分量的预测值。
在本发明的一些可能的实施方式中,所述方向系数信息包括用于指示所述当前运动矢量场中的已重建运动矢量场块的信息,所述方向系数包括所述已重建运动矢量场块的方向系数;或者,所述方向系数信息包括所述方向系数的值。
参见图12，图12是本发明的另一个实施例提供的编码装置1200的结构框图。其中，编码装置1200可包括：至少1个处理器1201、存储器1205和至少1个通信总线1202。可选的，编码装置1200还可包括：至少1个网络接口1204和/或用户接口1203。其中，用户接口1203例如包括显示器（例如触摸屏、LCD、全息成像（Holographic）、CRT或者投影（Projector）等）、点击设备（例如鼠标或轨迹球（trackball）、触感板或触摸屏等）、摄像头和/或拾音装置等。
其中,存储器1205可以包括只读存储器和随机存取存储器,并向处理器1201提供指令和数据。存储器1205中的一部分还可以包括非易失性随机存取存储器。
在一些实施方式中,存储器1205存储了如下的元素,可执行模块或者数据结构,或者他们的子集,或者他们的扩展集:
操作系统12051,包含各种系统程序,用于实现各种基础业务以及处理基 于硬件的任务。
应用程序模块12052,包含各种应用程序,用于实现各种应用业务。
在本发明的实施例中,通过调用存储器1205存储的程序或指令,处理器1201用于:
获取当前运动矢量场块的原始信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,所述预测信息用于指示获取所述预测信号所需的信息;
根据所述预测信号和所述原始信号,计算所述当前运动矢量场块的预测残差信号,所述预测残差信号用于指示所述原始信号和所述预测信号之间的残差;
将所述预测信息和所述预测残差信号写入码流。
本发明实施例中,对当前块进行编码时,由于无需对当前运动矢量场块的原始信号进行编码,而是通过对预测信息和预测残差信号进行编码,提高了运动矢量场的压缩效率。
可选的,所述获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,包括:
获取所述当前运动矢量场块的第一参考运动矢量场,所述第一参考运动矢量场为已编码并重建的运动矢量场,其中,所述第一参考运动矢量场为t1时刻的视频帧对应的运动矢量场,所述t1时刻的视频帧为与所述t时刻的视频帧邻近的视频帧;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为在对所述第一参考运动矢量场对应的视频帧进行帧间预测时所采用的参考视频帧对应的时刻;
根据所述第二参考运动矢量场获取所述预测信号,所述预测信号包括所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同;
根据所述第一参考运动矢量场获取所述预测信息,所述预测信息包括所述 用于指示所述第一参考运动矢量场的信息。
可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
可选的,所述获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,包括:
获取所述当前运动矢量场块的方向系数,其中,所述方向系数用于指示所 述当前运动矢量场块的采样点的运动矢量的第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
获取所述第一分量的重建值;
根据所述第一分量的重建值和所述方向系数计算所述预测信号,所述预测信号包括所述第二分量的预测值;
根据所述方向系数获取所述预测信息,所述预测信息包括用于指示所述方向系数的方向系数信息。
可选的,所述获取所述当前运动矢量场块的方向系数,包括:
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为预设函数的自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
将拟合得到的所述预设函数的系数作为所述方向系数。
可选的,所述获取所述当前运动矢量场块的方向系数,包括:
获取所述原始信号的至少两个采样点的运动矢量;
将所述至少两个采样点的运动矢量的第一分量作为自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
获取所述当前运动矢量场中与所述当前运动矢量场块邻近的至少一个已编码的运动矢量场块的方向系数;
将拟合出的函数的系数和所述至少一个已编码的运动矢量场块的方向系数作为所述当前运动矢量场块的候选方向系数集的候选方向系数;
取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
可选的,所述获取所述当前运动矢量场块的方向系数,包括:
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同时,将所述至少两个已编码运动矢量场块的方向系数作为所述当前运动矢量场块的方向系数。
可选的,所述获取所述当前运动矢量场块的方向系数,包括:
当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同,并且所述至少两个已编码运动矢量场块的方向系数指示所述至少两个已编码运动矢量场块的采样点的运动矢量的第一分量与第二分量的比值时,执行以下步骤:
将所述至少两个已编码运动矢量场块的方向系数作为候选方向系数集的候选方向系数;
获取所述原始信号中至少一个采样点的运动矢量,在所述至少一个采样点为一个采样点时,将所述一个采样点的运动矢量的第一分量与所述一个采样点的运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数;或者,
在所述至少一个采样点为至少两个采样点时,将所述至少两个采样点的运动矢量的第一分量与所述至少两个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数;
获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
参见图13,图13是本发明的另一个实施例提供的解码装置1300的结构框图。其中,解码装置1300可包括:至少1个处理器1301、存储器1305和至少1个通信总线1302。可选的,视频解码装置1300还可包括:至少1个网络接口1304和/或用户接口1303。其中,用户接口1303例如包括显示器(例如触摸屏、LCD、全息成像(Holographic)、CRT或者投影(Projector)等)、点击设备(例如鼠标或轨迹球(trackball)触感板或触摸屏等)、摄像头和/或拾音装置等。
其中，存储器1305可以包括只读存储器和随机存取存储器，并向处理器1301提供指令和数据。存储器1305中的一部分还可以包括非易失性随机存取存储器。
在一些实施方式中,存储器1305存储了如下的元素,可执行模块或者数据结构,或者他们的子集,或者他们的扩展集:
操作系统13051,包含各种系统程序,用于实现各种基础业务以及处理基于硬件的任务。
应用程序模块13052,包含各种应用程序,用于实现各种应用业务。
在本发明的实施例中,通过调用存储器1305存储的程序或指令,处理器1301用于:
获取当前运动矢量场块的预测信息和预测残差信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
根据所述预测信息获取所述当前运动矢量场块的预测信号;
根据所述预测信号和所述预测残差信号,计算所述当前运动矢量场块的重建信号。
可选的,所述预测信息包括所述用于指示所述运动矢量场块的第一参考运动矢量场的信息;
所述根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
根据所述预测信息获取所述第一参考运动矢量场,所述第一参考运动矢量场为t1时刻的视频帧的运动矢量场;
根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为所述第一参考运动矢量场对应的视频帧所采用的参考视频帧对应的时刻;
获取所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
可选的,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
可选的,所述预测信息包括用于指示所述当前运动矢量场块的方向系数的方向系数信息,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
所述根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
获取所述第一分量的重建值,根据所述方向系数和所述第一分量的重建值计算所述第二分量的预测值,所述预测信号包括所述第二分量的预测值。
可选的,所述方向系数信息包括用于指示所述当前运动矢量场中的已重建运动矢量场块的信息,所述方向系数包括所述已重建运动矢量场块的方向系数;或者,所述方向系数信息包括所述方向系数的值。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述 的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分 技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (34)

  1. 一种运动矢量场编码方法,其特征在于,包括:
    获取当前运动矢量场块的原始信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
    获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,所述预测信息用于指示获取所述预测信号所需的信息;
    根据所述预测信号和所述原始信号,计算所述当前运动矢量场块的预测残差信号,所述预测残差信号用于指示所述原始信号和所述预测信号之间的残差;
    将所述预测信息和所述预测残差信号写入码流。
  2. 根据权利要求1所述的运动矢量场编码方法,其特征在于,所述获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,包括:
    获取所述当前运动矢量场块的第一参考运动矢量场,所述第一参考运动矢量场为已编码并重建的运动矢量场,其中,所述第一参考运动矢量场为t1时刻的视频帧对应的运动矢量场,所述t1时刻的视频帧为与所述t时刻的视频帧邻近的视频帧;
    根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为在对所述第一参考运动矢量场对应的视频帧进行帧间预测时所采用的参考视频帧对应的时刻;
    根据所述第二参考运动矢量场获取所述预测信号,所述预测信号包括所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同;
    根据所述第一参考运动矢量场获取所述预测信息,所述预测信息包括所述用于指示所述第一参考运动矢量场的信息。
  3. 根据权利要求2所述的运动矢量场编码方法,其特征在于,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
    根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
  4. 根据权利要求2所述的运动矢量场编码方法,其特征在于,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
    确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
    将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
  5. 根据权利要求2所述的运动矢量场编码方法,其特征在于,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
    获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
    在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
    在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
  6. 根据权利要求1所述的运动矢量场编码方法,其特征在于,所述获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,包括:
    获取所述当前运动矢量场块的方向系数,其中,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量的第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
    获取所述第一分量的重建值;
    根据所述第一分量的重建值和所述方向系数计算所述预测信号,所述预测信号包括所述第二分量的预测值;
    根据所述方向系数获取所述预测信息,所述预测信息包括用于指示所述方向系数的方向系数信息。
  7. 根据权利要求6所述的运动矢量场编码方法,其特征在于,
    所述获取所述当前运动矢量场块的方向系数,包括:
    获取所述原始信号的至少两个采样点的运动矢量;
    将所述至少两个采样点的运动矢量的第一分量作为预设函数的自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
    将拟合得到的所述预设函数的系数作为所述方向系数。
  8. 根据权利要求6所述的运动矢量场编码方法,其特征在于,所述获取所述当前运动矢量场块的方向系数,包括:
    获取所述原始信号的至少两个采样点的运动矢量;
    将所述至少两个采样点的运动矢量的第一分量作为自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
    获取所述当前运动矢量场中与所述当前运动矢量场块邻近的至少一个已编码的运动矢量场块的方向系数;
    将拟合出的函数的系数和所述至少一个已编码的运动矢量场块的方向系数作为所述当前运动矢量场块的候选方向系数集的候选方向系数;
    获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
  9. 根据权利要求6所述的运动矢量场编码方法,其特征在于,
    所述获取所述当前运动矢量场块的方向系数,包括:
    当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同时,将所述至少两个已编码运动矢量场块的方 向系数作为所述当前运动矢量场块的方向系数。
  10. 根据权利要求6所述的运动矢量场编码方法,其特征在于,所述获取所述当前运动矢量场块的方向系数,包括:
    当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同,并且所述至少两个已编码运动矢量场块的方向系数指示所述至少两个已编码运动矢量场块的采样点的运动矢量的第一分量与第二分量的比值时,执行以下步骤:
    将所述至少两个已编码运动矢量场块的方向系数作为候选方向系数集的候选方向系数;
    获取所述原始信号中至少一个采样点的运动矢量;
    在所述至少一个采样点为一个采样点时,将所述一个采样点的运动矢量的第一分量与所述一个采样点的运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数;或者,
    在所述至少一个采样点为至少两个采样点时,将所述至少两个采样点的运动矢量的第一分量与所述至少两个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数;
    获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
  11. 一种运动矢量场解码方法，其特征在于，包括：
    获取当前运动矢量场块的预测信息和预测残差信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
    根据所述预测信息获取所述当前运动矢量场块的预测信号;
    根据所述预测信号和所述预测残差信号,计算所述当前运动矢量场块的重建信号。
  12. 根据权利要求11所述的运动矢量场解码方法,其特征在于,所述预测信息包括所述用于指示所述运动矢量场块的第一参考运动矢量场的信息;
    所述根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
    根据所述预测信息获取所述第一参考运动矢量场,所述第一参考运动矢量场为t1时刻的视频帧的运动矢量场;
    根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为所述第一参考运动矢量场对应的视频帧所采用的参考视频帧对应的时刻;
    获取所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
  13. 根据权利要求12所述的运动矢量场解码方法,其特征在于,所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,包括:
    根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
  14. 根据权利要求12所述的运动矢量场解码方法，其特征在于，所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻，获取第二参考运动矢量场，包括：
    确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
    将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
  15. 根据权利要求12所述的运动矢量场解码方法，其特征在于，所述根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻，获取第二参考运动矢量场，包括：
    获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
    在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
    在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
  16. 根据权利要求11所述的运动矢量场解码方法,其特征在于,所述预测信息包括用于指示所述当前运动矢量场块的方向系数的方向系数信息,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
    所述根据所述预测信息获取所述当前运动矢量场块的预测信号,包括:
    获取所述第一分量的重建值,根据所述方向系数和所述第一分量的重建值计算所述第二分量的预测值,所述预测信号包括所述第二分量的预测值。
  17. 根据权利要求16所述的运动矢量场解码方法,其特征在于,所述方向系数信息包括用于指示所述当前运动矢量场中的已重建运动矢量场块的信息,所述方向系数包括所述已重建运动矢量场块的方向系数;
    或者,
    所述方向系数信息包括所述方向系数的值。
  18. 一种编码装置,其特征在于,包括:
    第一获取模块,用于获取当前运动矢量场块的原始信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
    第二获取模块,用于获取所述当前运动矢量场块的预测信号和所述当前运动矢量场块的预测信息,所述预测信息用于指示获取所述预测信号所需的信息;
    计算模块,用于根据所述第二获取模块获取的所述预测信号和所述第一获取模块获取的所述原始信号,计算所述当前运动矢量场块的预测残差信号,所述预测残差信号用于指示所述原始信号和所述预测信号之间的残差;
    编码模块,用于将所述第二获取模块获取的所述预测信息和所述计算模块计算得到的所述预测残差信号写入码流。
  19. 根据权利要求18所述的编码装置,其特征在于,所述第二获取模块用于:
    获取所述当前运动矢量场块的第一参考运动矢量场,所述第一参考运动矢量场为已编码并重建的运动矢量场,其中,所述第一参考运动矢量场为t1时刻的视频帧对应的运动矢量场,所述t1时刻的视频帧为与所述t时刻的视频帧邻近的视频帧;
    根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为在对所述第一参考运动矢量场对应的视频帧进行帧间预测时所采用的参考视频帧对应的时刻;
    根据所述第二参考运动矢量场获取所述预测信号,所述预测信号包括所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同;
    根据所述第一参考运动矢量场获取所述预测信息,所述预测信息包括所述用于指示所述第一参考运动矢量场的信息。
  20. 根据权利要求19所述的编码装置，其特征在于，所述第二获取模块用于根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
  21. 根据权利要求19所述的编码装置，其特征在于，所述第二获取模块用于确定所述第二参考运动矢量场的第二采样点，其中，所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点，以所述至少两个第一采样点各自的移动矢量为位移，所移动到的位置为同一位置，所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积；
    将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
  22. 根据权利要求19所述的编码装置，其特征在于，所述第二获取模块用于获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
    在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
    在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
  23. 根据权利要求18所述的编码装置,其特征在于,所述第二获取模块用于:
    获取所述当前运动矢量场块的方向系数,其中,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量的第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
    获取所述第一分量的重建值;
    根据所述第一分量的重建值和所述方向系数计算所述预测信号,所述预测信号包括所述第二分量的预测值;
    根据所述方向系数获取所述预测信息,所述预测信息包括用于指示所述方向系数的方向系数信息。
  24. 根据权利要求23所述的编码装置,其特征在于,所述第二获取模块用于:
    获取所述原始信号的至少两个采样点的运动矢量;
    将所述至少两个采样点的运动矢量的第一分量作为预设函数的自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
    将拟合得到的所述预设函数的系数作为所述方向系数。
  25. 根据权利要求23所述的编码装置,其特征在于,所述第二获取模块用于:
    获取所述原始信号的至少两个采样点的运动矢量;
    将所述至少两个采样点的运动矢量的第一分量作为自变量,将所述至少两个采样点的运动矢量的第二分量作为所述自变量对应的函数值,将所述自变量和所述函数值进行拟合;
    获取所述当前运动矢量场中与所述当前运动矢量场块邻近的至少一个已编码的运动矢量场块的方向系数;
    将拟合出的函数的系数和所述至少一个已编码的运动矢量场块的方向系数作为所述当前运动矢量场块的候选方向系数集的候选方向系数;
    取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
  26. 根据权利要求23所述的编码装置,其特征在于,所述第二获取模块用于:
    当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同时,将所述至少两个已编码运动矢量场块的方向系数作为所述当前运动矢量场块的方向系数。
  27. 根据权利要求23所述的编码装置,其特征在于,所述第二获取模块用于:
    当所述当前运动矢量场中与所述当前运动矢量场块相邻的至少两个已编码运动矢量场块的方向系数相同,并且所述至少两个已编码运动矢量场块的方向系数指示所述至少两个已编码运动矢量场块的采样点的运动矢量的第一分量与第二分量的比值时,执行以下步骤:
    将所述至少两个已编码运动矢量场块的方向系数作为候选方向系数集的候选方向系数;
    获取所述原始信号中至少一个采样点的运动矢量;
    在所述至少一个采样点为一个采样点时,将所述一个采样点的运动矢量的第一分量与所述一个采样点的运动矢量的第二分量的比值作为所述候选方向系数集的候选方向系数;或者,
    在所述至少一个采样点为至少两个采样点时,将所述至少两个采样点的运 动矢量的第一分量与所述至少两个采样点的运动矢量的第二分量的比值的平均值作为所述候选方向系数集的候选方向系数;
    获取所述候选方向系数集的候选方向系数对应的所述当前运动矢量场块的候选预测残差信号,将信号能量最小或者率失真最小的候选预测残差信号所对应的候选方向系数作为所述当前运动矢量场块的方向系数。
  28. 一种解码装置,其特征在于,包括:
    第一获取模块,用于获取当前运动矢量场块的预测信息和预测残差信号,所述当前运动矢量场块通过将当前运动矢量场分块后获取得到,所述当前运动矢量场为t时刻的视频帧对应的运动矢量场;
    第二获取模块,用于根据所述第一获取模块获取的所述预测信息获取所述当前运动矢量场块的预测信号;
    计算模块，用于根据所述第二获取模块获取的所述预测信号和所述第一获取模块获取的所述预测残差信号，计算所述当前运动矢量场块的重建信号。
  29. 根据权利要求28所述的解码装置,其特征在于,所述预测信息包括所述用于指示所述运动矢量场块的第一参考运动矢量场的信息;
    所述第二获取模块用于:
    根据所述预测信息获取所述第一参考运动矢量场,所述第一参考运动矢量场为t1时刻的视频帧的运动矢量场;
    根据所述第一参考运动矢量场、所述t时刻、所述t1时刻以及t2时刻,获取第二参考运动矢量场,其中,所述t2时刻为所述第一参考运动矢量场对应的视频帧所采用的参考视频帧对应的时刻;
    获取所述第二参考运动矢量场的运动矢量场块,其中,所述第二参考运动矢量场的运动矢量场块在所述第二参考运动矢量场中的坐标范围与所述当前运动矢量场块在所述当前运动矢量场中的坐标范围相同,所述预测信号包括所述第二参考运动矢量场的运动矢量场块。
  30. 根据权利要求29所述的解码装置,其特征在于,所述第二获取模块用于:
    根据计算式MV2=(t-t1)/(t1-t2)×MV1计算获得所述第二参考运动矢量场的第二采样点的运动矢量MV2，其中，MV1为所述第一参考运动矢量场的第一采样点的运动矢量；其中，以所述第一采样点的位置为起点，以所述第二采样点的运动矢量为位移所移动到的位置与所述第二采样点的位置为同一位置。
  31. 根据权利要求29所述的解码装置,其特征在于,所述第二获取模块用于:
    确定所述第二参考运动矢量场的第二采样点,其中,所述第二采样点的位置与分别以所述第一参考运动矢量场的至少两个第一采样点各自的位置为起点,以所述至少两个第一采样点各自的移动矢量为位移,所移动到的位置为同一位置,所述至少两个第一采样点各自的移动矢量为所述至少两个第一采样点各自的运动矢量与(t-t1)/(t1-t2)的乘积;
    将所述至少两个第一采样点的运动矢量的加权平均值与(t-t1)/(t1-t2)的乘积作为所述第二采样点的运动矢量。
  32. 根据权利要求29所述的解码装置,其特征在于,所述第二获取模块用于:
    获取与所述第二参考运动矢量场的目标第二采样点邻近的至少一个第二采样点，其中，以所述第一参考运动矢量场的任意一个第一采样点的位置为起点，以(t-t1)/(t1-t2)×MV1为位移，所移动到的位置与所述目标第二采样点的位置不同，MV1为所述第一采样点的运动矢量；
    在所述至少一个第二采样点为一个第二采样点时,将所述一个第二采样点的加权值作为所述目标第二采样点的运动矢量;
    在所述至少一个第二采样点为至少两个第二采样点时,将所述至少两个第二采样点的运动矢量的加权平均值作为所述目标第二采样点的运动矢量。
  33. 根据权利要求28所述的解码装置,其特征在于,所述预测信息包括用于指示所述当前运动矢量场块的方向系数的方向系数信息,所述方向系数用于指示所述当前运动矢量场块的采样点的运动矢量第一分量的值与所述采样点的运动矢量的第二分量的值之间的关系;
    所述第二获取模块,用于获取所述第一分量的重建值,根据所述方向系数和所述第一分量的重建值计算所述第二分量的预测值,所述预测信号包括所述第二分量的预测值。
  34. 根据权利要求33所述的解码装置,其特征在于,所述方向系数信息 包括用于指示所述当前运动矢量场中的已重建运动矢量场块的信息,所述方向系数包括所述已重建运动矢量场块的方向系数;
    或者,
    所述方向系数信息包括所述方向系数的值。
PCT/CN2015/087947 2015-08-24 2015-08-24 运动矢量场编码方法和解码方法、编码和解码装置 WO2017031671A1 (zh)

Priority Applications (9)

Application Number Priority Date Filing Date Title
BR112018003247-6A BR112018003247B1 (pt) 2015-08-24 Método de codificação e decodificação de campo de vetor de movimentação, aparelho de codificação e aparelho de decodificação
KR1020187006478A KR102059066B1 (ko) 2015-08-24 2015-08-24 모션 벡터 필드 코딩 방법 및 디코딩 방법, 및 코딩 및 디코딩 장치들
JP2018508146A JP6636615B2 (ja) 2015-08-24 2015-08-24 動きベクトル場の符号化方法、復号方法、符号化装置、および復号装置
EP15901942.1A EP3343923B1 (en) 2015-08-24 2015-08-24 Motion vector field coding method and decoding method, and coding and decoding apparatuses
PCT/CN2015/087947 WO2017031671A1 (zh) 2015-08-24 2015-08-24 运动矢量场编码方法和解码方法、编码和解码装置
CN201580081772.5A CN107852500B (zh) 2015-08-24 2015-08-24 运动矢量场编码方法和解码方法、编码和解码装置
AU2015406855A AU2015406855A1 (en) 2015-08-24 2015-08-24 Motion vector field coding and decoding method, coding apparatus, and decoding apparatus
US15/901,410 US11102501B2 (en) 2015-08-24 2018-02-21 Motion vector field coding and decoding method, coding apparatus, and decoding apparatus
AU2019275631A AU2019275631B2 (en) 2015-08-24 2019-12-05 Motion vector field coding and decoding method, coding apparatus, and decoding apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/087947 WO2017031671A1 (zh) 2015-08-24 2015-08-24 运动矢量场编码方法和解码方法、编码和解码装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/901,410 Continuation US11102501B2 (en) 2015-08-24 2018-02-21 Motion vector field coding and decoding method, coding apparatus, and decoding apparatus

Publications (1)

Publication Number Publication Date
WO2017031671A1 true WO2017031671A1 (zh) 2017-03-02

Family

ID=58099484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/087947 WO2017031671A1 (zh) 2015-08-24 2015-08-24 运动矢量场编码方法和解码方法、编码和解码装置

Country Status (7)

Country Link
US (1) US11102501B2 (zh)
EP (1) EP3343923B1 (zh)
JP (1) JP6636615B2 (zh)
KR (1) KR102059066B1 (zh)
CN (1) CN107852500B (zh)
AU (2) AU2015406855A1 (zh)
WO (1) WO2017031671A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11665365B2 (en) * 2018-09-14 2023-05-30 Google Llc Motion prediction coding with coframe motion vectors
US11343525B2 (en) * 2019-03-19 2022-05-24 Tencent America LLC Method and apparatus for video coding by constraining sub-block motion vectors and determining adjustment values based on constrained sub-block motion vectors
CN110136135B (zh) * 2019-05-17 2021-07-06 深圳大学 分割方法、装置、设备以及存储介质
CN110636301B (zh) * 2019-09-18 2021-08-03 浙江大华技术股份有限公司 仿射预测方法、计算机设备和计算机可读存储介质
CN113556567B (zh) * 2020-04-24 2022-11-25 华为技术有限公司 帧间预测的方法和装置
CN115462080A (zh) * 2020-05-29 2022-12-09 交互数字Vc控股法国有限公司 使用深度神经网络的运动修正
CN112804562B (zh) * 2020-12-30 2022-01-07 北京大学 基于片重组的视频编码方法、装置、终端及介质
CN112788344B (zh) * 2020-12-30 2023-03-21 北京大数据研究院 基于编码单元重组的视频解码方法、装置、系统、介质及终端
CN112788336B (zh) * 2020-12-30 2023-04-14 北京大数据研究院 数据元素的排序还原方法、系统、终端及标记方法
CN112822488B (zh) * 2020-12-30 2023-04-07 北京大学 基于块重组的视频编解码系统、方法、装置、终端及介质
WO2023219301A1 (ko) * 2022-05-13 2023-11-16 현대자동차주식회사 인트라 예측 블록에 대한 움직임벡터 저장을 위한 방법 및 장치

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101014129A (zh) * 2007-03-06 2007-08-08 孟智平 一种视频数据压缩方法
US20070217516A1 (en) * 2006-03-16 2007-09-20 Sony Corporation And Sony Electronics Inc. Uni-modal based fast half-pel and fast quarter-pel refinement for video encoding
US20080025399A1 (en) * 2006-07-26 2008-01-31 Canon Kabushiki Kaisha Method and device for image compression, telecommunications system comprising such a device and program implementing such a method
CN101185342A (zh) * 2005-04-29 2008-05-21 三星电子株式会社 支持快速精细可分级的视频编码方法和装置
CN101466040A (zh) * 2009-01-09 2009-06-24 北京大学 一种用于视频编码模式决策的码率估计方法

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0984017A (ja) * 1995-09-14 1997-03-28 Nippon Telegr & Teleph Corp <Ntt> 動画像の動き補償予測符号化方法
JP3781194B2 (ja) * 1995-10-20 2006-05-31 ノキア コーポレイション 動きベクトルフィールド符号化
ES2545066T3 (es) * 1997-06-09 2015-09-08 Hitachi, Ltd. Medio de grabación de información de imágenes
GB2348064A (en) 1999-03-16 2000-09-20 Mitsubishi Electric Inf Tech Motion vector field encoding
DE60003070T2 (de) * 1999-08-11 2004-04-01 Nokia Corp. Adaptive bewegungsvektorfeldkodierung
US6735249B1 (en) * 1999-08-11 2004-05-11 Nokia Corporation Apparatus, and associated method, for forming a compressed motion vector field utilizing predictive motion coding
US20010047517A1 (en) * 2000-02-10 2001-11-29 Charilaos Christopoulos Method and apparatus for intelligent transcoding of multimedia data
FR2833797B1 (fr) * 2001-12-19 2004-02-13 Thomson Licensing Sa Procede d'estimation du mouvement dominant dans une sequence d'images
KR100955414B1 (ko) * 2002-01-17 2010-05-04 엔엑스피 비 브이 현재 움직임 벡터 추정용 유닛 및 그 추정 방법
US7003035B2 (en) * 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
WO2006054257A1 (en) * 2004-11-22 2006-05-26 Koninklijke Philips Electronics N.V. Motion vector field projection dealing with covering and uncovering
WO2007063465A2 (en) 2005-11-30 2007-06-07 Nxp B.V. Motion vector field correction
US9445121B2 (en) * 2008-08-04 2016-09-13 Dolby Laboratories Licensing Corporation Overlapped block disparity estimation and compensation architecture
US8411750B2 (en) 2009-10-30 2013-04-02 Qualcomm Incorporated Global motion parameter estimation using block-based motion vectors
US8358698B2 (en) 2010-01-08 2013-01-22 Research In Motion Limited Method and device for motion vector estimation in video transcoding using full-resolution residuals
JP5310614B2 (ja) * 2010-03-17 2013-10-09 富士通株式会社 動画像符号化装置、動画像符号化方法及び動画像復号装置ならびに動画像復号方法
GB2487197B (en) 2011-01-11 2015-06-17 Canon Kk Video encoding and decoding with improved error resilience
KR101934277B1 (ko) * 2011-11-28 2019-01-04 에스케이텔레콤 주식회사 개선된 머지를 이용한 영상 부호화/복호화 방법 및 장치
US9544612B2 (en) * 2012-10-04 2017-01-10 Intel Corporation Prediction parameter inheritance for 3D video coding
US20140169444A1 (en) 2012-12-14 2014-06-19 Microsoft Corporation Image sequence encoding/decoding using motion fields
CN106303544B (zh) * 2015-05-26 2019-06-11 华为技术有限公司 一种视频编解码方法、编码器和解码器

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101185342A (zh) * 2005-04-29 2008-05-21 三星电子株式会社 支持快速精细可分级的视频编码方法和装置
US20070217516A1 (en) * 2006-03-16 2007-09-20 Sony Corporation And Sony Electronics Inc. Uni-modal based fast half-pel and fast quarter-pel refinement for video encoding
US20080025399A1 (en) * 2006-07-26 2008-01-31 Canon Kabushiki Kaisha Method and device for image compression, telecommunications system comprising such a device and program implementing such a method
CN101014129A (zh) * 2007-03-06 2007-08-08 孟智平 一种视频数据压缩方法
CN101466040A (zh) * 2009-01-09 2009-06-24 北京大学 一种用于视频编码模式决策的码率估计方法

Also Published As

Publication number Publication date
CN107852500A (zh) 2018-03-27
EP3343923B1 (en) 2021-10-06
BR112018003247A2 (zh) 2018-09-25
US11102501B2 (en) 2021-08-24
AU2019275631A1 (en) 2020-01-02
JP2018529270A (ja) 2018-10-04
AU2019275631B2 (en) 2021-09-23
JP6636615B2 (ja) 2020-01-29
KR20180037042A (ko) 2018-04-10
KR102059066B1 (ko) 2019-12-24
EP3343923A4 (en) 2019-03-13
CN107852500B (zh) 2020-02-21
AU2015406855A1 (en) 2018-03-15
EP3343923A1 (en) 2018-07-04
US20180184108A1 (en) 2018-06-28

Similar Documents

Publication Publication Date Title
WO2017031671A1 (zh) 运动矢量场编码方法和解码方法、编码和解码装置
US11178419B2 (en) Picture prediction method and related apparatus
US11968386B2 (en) Picture prediction method and related apparatus
US20210006818A1 (en) Picture prediction method and related apparatus
CN112203095B (zh) 视频运动估计方法、装置、设备及计算机可读存储介质
WO2017005146A1 (zh) 视频编码和解码方法、视频编码和解码装置
WO2016065872A1 (zh) 图像预测方法及相关装置
US10506249B2 (en) Segmentation-based parameterized motion models
US20130114704A1 (en) Utilizing A Search Scheme for Screen Content Video Coding
US11115678B2 (en) Diversified motion using multiple global motion models
CN109076234A (zh) 图像预测方法和相关设备
BR112018003247B1 (pt) Método de codificação e decodificação de campo de vetor de movimentação, aparelho de codificação e aparelho de decodificação

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15901942

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018508146

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2015901942

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20187006478

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2015406855

Country of ref document: AU

Date of ref document: 20150824

Kind code of ref document: A

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112018003247

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 112018003247

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20180220