WO2020007306A1 - Decoding and encoding method and device - Google Patents

Decoding and encoding method and device

Info

Publication number
WO2020007306A1
Authority
WO
WIPO (PCT)
Prior art keywords
image block
motion vector
template
candidate
motion information
Prior art date
Application number
PCT/CN2019/094433
Other languages
English (en)
French (fr)
Inventor
陈方栋
王莉
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2020007306A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/146: Data rate or code amount at the encoder output
    • H04N19/149: Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock

Definitions

  • the present application relates to the technical field of video encoding and decoding, and in particular, to a decoding and encoding method and device.
  • a complete video encoding method may include prediction, transformation, quantization, entropy encoding, and filtering.
  • predictive coding includes intra coding and inter coding.
  • Inter-frame coding uses the temporal correlation of video: the pixels of the current image are predicted from the pixels of previously encoded adjacent images, which effectively removes temporal redundancy in the video.
  • A motion vector (Motion Vector, MV) can be used to represent the relative displacement between the current image block of the current frame image and the reference image block of the reference frame image. For example, there is a strong temporal correlation between image A of the current frame and image B of the reference frame. For image block A1 (that is, the current image block), a motion search can be performed in image B to find the image block B1 (that is, the reference image block) that most closely matches image block A1, and to determine the relative displacement between image block A1 and image block B1, which is the motion vector of image block A1.
  • Then, the encoding end may send the motion vector to the decoding end instead of sending image block A1 itself, and the decoding end can obtain image block A1 according to the motion vector and image block B1. Since the number of bits occupied by the motion vector is smaller than the number of bits occupied by image block A1, this manner saves bits.
  • the spatial correlation between candidate image blocks can also be used to predict the motion vector of image block A1.
  • the motion vector of the image block A2 adjacent to the image block A1 may be determined as the motion vector of the image block A1.
  • the encoding end can send the index value of the image block A2 to the decoding end, and the decoding end can determine the motion vector of the image block A2 based on the index value, which is the motion vector of the image block A1. Since the number of bits occupied by the index value of the image block A2 is less than the number of bits occupied by the motion vector, the above manner can further save bits.
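  • As a rough, hypothetical illustration of the bit saving described above (not code from the application), the following Python sketch compares the cost of signalling a motion vector directly with the cost of signalling the index of a candidate block; the fixed-length coding assumption and the function names are illustrative only.

        import math

        # Hypothetical illustration: rough bit cost of signalling a motion vector
        # directly versus signalling the index of a neighbouring candidate block
        # whose motion vector is reused.

        def bits_for_motion_vector(component_range=1024):
            # Assume each MV component is coded with a fixed-length code covering
            # the range [-component_range, component_range).
            bits_per_component = math.ceil(math.log2(2 * component_range))
            return 2 * bits_per_component

        def bits_for_candidate_index(num_candidates):
            # Index into a short candidate list (e.g. neighbouring blocks A2, A3, ...).
            return max(1, math.ceil(math.log2(num_candidates)))

        print(bits_for_motion_vector())      # 22 bits to code the vector itself
        print(bits_for_candidate_index(4))   # 2 bits to signal "reuse neighbour A2"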
  • However, when the motion vector of image block A2 is directly determined as the motion vector of image block A1, problems such as low prediction quality and prediction errors arise.
  • the present application provides a decoding and encoding method and device that can improve the accuracy of motion vectors, and improve encoding and decoding performance.
  • This application provides a decoding method, which is applied to a decoding end.
  • The method includes: acquiring motion information of a candidate image block of a current image block; acquiring a template of the current image block according to the motion information of the candidate image block; acquiring target motion information based on the template according to the original motion information corresponding to the current image block and the acquired template; determining the final motion information of the current image block according to the target motion information; and decoding the current image block according to the final motion information.
  • the present application provides an encoding method, which is applied to an encoding end.
  • The method includes: acquiring motion information of a candidate image block of a current image block; acquiring a template of the current image block according to the motion information of the candidate image block; acquiring target motion information based on the template according to the original motion information corresponding to the current image block and the acquired template; determining the final motion information of the current image block according to the original motion information and the target motion information; and encoding the current image block according to the final motion information to obtain an encoded bit stream.
  • This application provides a decoding end device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor, and the processor is configured to execute the machine-executable instructions to implement the decoding method steps described above.
  • This application provides an encoding end device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor, and the processor is configured to execute the machine-executable instructions to implement the encoding method steps described above.
  • In this application, target motion information may be obtained based on the original motion information, and the final motion information of the current image block may be determined based on the target motion information, instead of determining the final motion information of the current image block directly from the original motion information.
  • A template of the current image block may be acquired according to the motion information of the candidate image block, and target motion information may be acquired according to the template of the current image block.
  • The foregoing manner can quickly obtain the template of the current image block and then obtain the target motion information based on the template, which can improve decoding efficiency and reduce decoding delay. For example, the template of the current image block can be obtained before the decoding reconstruction phase, and the target motion information can be obtained according to the template.
  • FIG. 1 is a flowchart of an encoding method according to an embodiment of the present application.
  • FIGS. 2A-2O are schematic diagrams of a template of a current image block in an embodiment of the present application.
  • FIG. 3 is a flowchart of an encoding method in another embodiment of the present application.
  • FIGS. 4A-4C are flowcharts of an encoding method in another embodiment of the present application.
  • FIG. 5 is a flowchart of a decoding method in another embodiment of the present application.
  • FIGS. 6A and 6B are flowcharts of a decoding method in another embodiment of the present application.
  • FIG. 7 is a structural diagram of a decoding device according to an embodiment of the present application.
  • FIG. 8 is a structural diagram of an encoding device in another embodiment of the present application.
  • FIG. 9 is a hardware structural diagram of a decoder device in an embodiment of the present application.
  • FIG. 10 is a hardware structural diagram of an encoding end device in an embodiment of the present application.
  • Although the terms first, second, third, etc. may be used to describe various information in the embodiments of the present application, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • Depending on the context, the word "if" can be interpreted as "when", "upon", or "in response to determining".
  • An embodiment of the present application proposes a coding and decoding method.
  • the method may involve the following concepts.
  • Motion Vector: In inter-frame coding, a motion vector is used to represent the relative displacement between the current image block of the current frame image and the reference image block of the reference frame image. Each image block has a corresponding motion vector that is sent to the decoding end. If the motion vector of each image block is independently encoded and transmitted, especially when the image is divided into a large number of small image blocks, a considerable number of bits is consumed. To reduce this cost, the spatial correlation between adjacent image blocks can be used to predict the motion vector of the current image block to be encoded from the motion vectors of adjacent encoded image blocks, and then only the prediction difference is encoded, which effectively reduces the number of bits representing a motion vector.
  • Motion information: In order to accurately point to an image block, in addition to the motion vector, index information of the reference frame image is required to indicate which reference frame image is used. For the current frame image, a reference frame image list can usually be established, and the reference frame index indicates which reference frame image in the list is used by the current image block. Motion-related information such as motion vectors and reference frame indexes can be collectively referred to as motion information.
  • Template: In video coding technology, the coding process is performed block by block. When the current image block is encoded, reconstruction information of the surrounding coded image blocks is already available.
  • The template refers to encoding/decoding information of a fixed shape around the current image block (in temporally or spatially adjacent areas). The template is exactly the same at the encoding end and the decoding end. Therefore, operations performed on the template at the encoding end can be reproduced with completely consistent results at the decoding end; that is, information derived from the template at the encoding end can be recovered losslessly at the decoding end without transmitting additional information, which further reduces the number of transmitted bits.
  • Rate-Distortion Optimization (RDO): There are two major indicators for evaluating coding efficiency: the bit rate (bits per second, BPS) and the PSNR (Peak Signal-to-Noise Ratio). For the same video, the smaller the encoded bit stream, the larger the compression ratio; and the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a joint evaluation of the two. For example, the cost of a mode can be written as J(mode) = D + λ·R, where D (distortion) is usually measured by the SSE (the sum of squared errors between the reconstructed image block and the source image block), λ is the Lagrangian multiplier, and R is the actual number of bits required to encode the image block in this mode, including the bits required for the coding mode information, the motion information, and the residual.
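  • A minimal sketch of the rate-distortion cost described above, assuming SSE as the distortion measure; the function name, the block sizes and the example lambda value are illustrative assumptions, not values taken from the application.

        import numpy as np

        def rd_cost(source_block, reconstructed_block, bits, lam):
            """J = D + lambda * R: D is the SSE between the reconstructed image block
            and the source image block, lam the Lagrangian multiplier, and bits the
            number of bits needed to code the block in this mode (mode information,
            motion information and residual)."""
            diff = source_block.astype(np.int64) - reconstructed_block.astype(np.int64)
            return np.sum(diff * diff) + lam * bits

        # The mode (or motion vector) with the smallest cost J is selected.
        src = np.random.randint(0, 256, (8, 8))
        rec = np.clip(src + np.random.randint(-2, 3, (8, 8)), 0, 255)
        print(rd_cost(src, rec, bits=96, lam=27.0))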
  • Intra prediction refers to using the reconstructed pixel values of spatially adjacent image blocks of the current image block (that is, in the same frame image as the current image block) for predictive coding of the current image block.
  • Inter prediction refers to using the reconstructed pixel values of temporally adjacent image blocks of the current image block (that is, in a different frame image from the current image block) for predictive coding of the current image block.
  • the method may include the following steps:
  • Step 101 The encoder obtains motion information of a candidate image block of the current image block.
  • the candidate image blocks of the current image block may include, but are not limited to: a spatial-domain candidate image block of the current image block; or a time-domain candidate image block of the current image block. There are no restrictions on this candidate image block.
  • The encoder can directly obtain the motion information of the candidate image block, such as the motion vector and the reference frame index of the candidate image block, which are not limited here.
  • The motion information of the candidate image block may include, but is not limited to, the original motion information of the candidate image block, such as the original motion vector, or the original motion vector and the original reference frame; or the final motion information of the candidate image block, such as the final motion vector, or the final motion vector and the final reference frame.
  • In one example, the final motion information of the current image block is used only for the encoding of the current image block (the encoding process of prediction value generation and reconstruction of the current image block) and is not used for the prediction of adjacent image blocks; that is, the motion information obtained from the candidate image block is the original motion information of the candidate image block, not the final motion information of the candidate image block.
  • In other words, the final motion information is not saved; the original motion information is saved, that is, the motion information of the current image block is restored to the original motion information.
  • In the following, the method of obtaining the final motion information of the current image block is introduced. The method of obtaining the final motion information of the candidate image block is similar: the candidate image block is treated as the current image block, and its final motion information is obtained in the same way.
  • the encoding end may store the motion information of the candidate image block, such as storing the original motion information of the candidate image block as the motion information of the candidate image block or storing the final motion information of the candidate image block as the motion information of the candidate image block.
  • the original motion information of the candidate image block can be directly queried locally from the encoding end.
  • The encoding end can obtain the original motion information of the candidate image block (such as the original motion vector and the original reference frame). For example, a motion vector is selected from the motion vector list of the candidate image block, and the selected motion vector is the original motion vector. For another example, the motion information of a neighboring image block of the candidate image block may be determined as the original motion information of the candidate image block.
  • the above manner is only an example of obtaining the original motion information of the candidate image block, which is not limited.
  • Step 102 The encoder obtains a template of the current image block according to the motion information of the candidate image block.
  • Step 103 The encoding end obtains target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template.
  • In one example, the original motion information includes an original motion vector and the target motion information includes a target motion vector.
  • In another example, the original motion information includes an original motion vector and an original reference frame, and the target motion information includes a target motion vector and a target reference frame.
  • In one implementation, the encoding end obtains the target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template as follows: the original motion vector is determined as the center motion vector; each edge motion vector corresponding to the center motion vector is determined, where the edge motion vectors differ from the center motion vector; the coding performance of the center motion vector and the coding performance of each edge motion vector are obtained according to the template; and the target motion vector is determined from the center motion vector and the edge motion vectors according to their coding performance.
  • In another implementation, the encoding end obtains the target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template as follows: based on the template of the current image block, candidate motion vectors corresponding to the original reference frame are obtained according to the original motion vector; initial motion vectors corresponding to each candidate reference frame are obtained according to the original motion vector; based on the template of the current image block, candidate motion vectors corresponding to each candidate reference frame are obtained according to the respective initial motion vectors; the candidate motion vector with the best coding performance is selected from the candidate motion vectors corresponding to the original reference frame and the candidate motion vectors corresponding to the candidate reference frames, and is used as the target motion vector; and the reference frame corresponding to the target motion vector is determined as the target reference frame.
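  • The first strategy above (take the original motion vector as the center motion vector, evaluate the surrounding edge motion vectors against the template, and keep the best) might be sketched as follows; the SAD cost, the cross-shaped search pattern and the helper names are assumptions made for illustration.

        import numpy as np

        def template_cost(template, ref_frame, template_pos, mv):
            """SAD between the current block's template and the template-shaped region
            of the reference frame shifted by the candidate motion vector."""
            h, w = template.shape
            row = template_pos[0] + mv[1]          # mv = (horizontal, vertical)
            col = template_pos[1] + mv[0]
            ref_region = ref_frame[row:row + h, col:col + w]
            return int(np.abs(template.astype(np.int64) - ref_region.astype(np.int64)).sum())

        def refine_motion_vector(template, ref_frame, template_pos, original_mv, step=1):
            """Use the original MV as the center MV, evaluate the edge MVs around it,
            and keep the MV with the best (lowest) template matching cost."""
            best_mv = original_mv
            best_cost = template_cost(template, ref_frame, template_pos, original_mv)
            for dx, dy in ((-step, 0), (step, 0), (0, -step), (0, step)):
                edge_mv = (original_mv[0] + dx, original_mv[1] + dy)
                cost = template_cost(template, ref_frame, template_pos, edge_mv)
                if cost < best_cost:
                    best_mv, best_cost = edge_mv, cost
            return best_mv

  • In practice the search can be iterated by re-centering on the best edge motion vector until no edge candidate improves on the center; an analogous loop over several candidate reference frames corresponds to the second strategy described above.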
  • Step 104 The encoding end determines the final motion information of the current image block according to the original motion information and the target motion information.
  • In one implementation, the encoding end determines the final motion information of the current image block according to the original motion information and the target motion information as follows: obtain the coding performance of the original motion vector and the coding performance of the target motion vector; when the coding performance of the target motion vector is better than the coding performance of the original motion vector, determine the final motion vector of the current image block to be the target motion vector; when the coding performance of the original motion vector is better than the coding performance of the target motion vector, determine the final motion vector of the current image block to be the original motion vector.
  • Obtaining the coding performance of the original motion vector includes: determining the coding performance of the original motion vector according to the parameter information of the template of the current image block and the parameter information of a first target reference block, where the first target reference block is an image block obtained by offsetting the reference image block corresponding to the template based on the original motion vector.
  • Obtaining the coding performance of the target motion vector includes: determining the coding performance of the target motion vector according to the parameter information of the template of the current image block and the parameter information of a second target reference block, where the second target reference block is an image block obtained by offsetting the reference image block corresponding to the template based on the target motion vector.
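  • Step 104 can then be viewed as a comparison of the two template-based costs; the following fragment reuses the illustrative template_cost() helper from the previous sketch, and the tie-breaking rule is an assumption.

        def choose_final_mv(template, ref_frame, template_pos, original_mv, target_mv):
            # Reuses the illustrative template_cost() helper from the previous sketch.
            cost_original = template_cost(template, ref_frame, template_pos, original_mv)
            cost_target = template_cost(template, ref_frame, template_pos, target_mv)
            # Keep whichever motion vector matches the template better; ties fall back
            # to the original motion vector (an assumption, not specified above).
            return target_mv if cost_target < cost_original else original_mv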
  • Step 105 The encoding end encodes the current image block according to the final motion information to obtain an encoded bit stream corresponding to the current image block. Then, the encoding end can also send the encoded bit stream to the decoding end.
  • Step 106 The encoding end stores the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block. In this way, during the processing of other image blocks, if the current image block is selected as a candidate image block of another image block, the motion information of the candidate image block used in step 101 is the motion information of the current image block stored in this step.
  • In one example, storing the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block may include: when the original motion information of the current image block is obtained according to the motion information of a spatially adjacent image block (or a spatially next-neighboring image block), the original motion information corresponding to the current image block is stored as the motion information of the current image block; when the original motion information of the current image block is not obtained according to the motion information of a spatially adjacent image block (or a spatially next-neighboring image block), the final motion information corresponding to the current image block is stored as the motion information of the current image block.
  • The spatially adjacent image block or spatially next-neighboring image block is in the same frame as the current image block.
  • In the latter case, the final motion information corresponding to the current image block is stored as the motion information of the current image block.
  • In another example, the final motion information corresponding to the current image block may also be stored as the motion information of the current image block.
  • the original motion information includes at least the original motion vector.
  • When the original motion vector corresponding to the current image block is obtained according to the motion vector of a spatially adjacent image block, the original motion vector corresponding to the current image block is stored as the motion vector of the current image block; when the original motion information of the current image block is not obtained based on the motion vectors of spatially adjacent image blocks, the final motion vector corresponding to the current image block is stored as the motion vector of the current image block.
  • the final motion vector corresponding to the current image block is stored as the motion vector of the current image block.
  • the final motion vector corresponding to the current image block may also be stored as the motion vector of the current image block.
  • When the motion information includes a motion vector, in one example, after encoding, the original motion vector corresponding to the current image block is stored as the motion vector of the current image block; in another example, after encoding, the final motion vector corresponding to the current image block is stored as the motion vector of the current image block.
  • As described above, the final motion information of the current image block is used only for the encoding of the current image block (the encoding process of prediction value generation and reconstruction of the current image block) and is not used for the prediction of adjacent image blocks; that is, the motion information obtained from the candidate image block is the original motion information of the candidate image block, not the final motion information of the candidate image block.
  • In other words, the final motion information is not saved; the original motion information is saved, that is, the motion information of the current image block is restored to the original motion information.
  • the motion information of the candidate image block in step 101 may be the original motion vector of the candidate image block.
  • the encoding end may store the original motion vector corresponding to the current image block as the motion vector of the current image block.
  • the motion information of the candidate image block in step 101 may be the final motion vector of the candidate image block.
  • the encoding end may store the original motion vector corresponding to the current image block as the motion vector of the current image block.
  • the motion information of the candidate image block in step 101 may be the original motion vector of the candidate image block.
  • the encoding end may store the final motion vector corresponding to the current image block as the motion vector of the current image block.
  • the motion information of the candidate image block in step 101 may be the final motion vector of the candidate image block.
  • the encoding end may store the final motion vector corresponding to the current image block as the motion vector of the current image block.
  • the motion information also includes reference image frames and motion directions.
  • the encoding end may also obtain an index value of the original motion vector in the motion vector list; moreover, the encoded bit stream may carry the index value, and the process is described in a subsequent embodiment.
  • If the final motion vector of the current image block is the target motion vector, the encoded bit stream corresponding to the current image block may further carry first indication information, where the first indication information is used to indicate that the final motion information of the current image block is determined based on the template. If the final motion vector of the current image block is the original motion vector, the encoded bit stream corresponding to the current image block may carry second indication information, where the second indication information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block.
  • the decoder is explicitly notified of the first indication information or the second indication information.
  • the notification may also be performed in an implicit manner, that is, the first indication information or the second indication information is not carried in the encoded bit stream.
  • the encoding end and the decoding end may also negotiate a decision strategy or define a decision strategy in a standard and store the decision strategy on the encoding end and the decoding end respectively.
  • For example, the decision strategy may agree on first strategy information, where the first strategy information is used to indicate that the final motion information of the current image block is determined based on the template; or it may agree on second strategy information, where the second strategy information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block; or it may agree on third strategy information, where the third strategy information indicates using the same strategy information as a neighboring image block of the current image block.
  • the adjacent image block used is a certain adjacent image block that is predetermined in advance by the encoding end and the decoding end.
  • the encoded bit stream may not carry the first indication information and the second indication information.
  • In this case, the encoding end may encode the current image block according to the target motion information, and the encoded bit stream may not carry the first indication information or the second indication information.
  • In the above embodiment, a template of the current image block can be obtained according to the motion information of the candidate image block of the current image block, and the current image block is encoded according to the template of the current image block to obtain the corresponding encoded bit stream.
  • the above method can quickly obtain the template of the current image block, which can improve encoding efficiency, reduce encoding delay, and improve encoding performance.
  • the encoder can simultaneously encode multiple image blocks in parallel, thereby further increasing the encoding speed, increasing the encoding efficiency, reducing the encoding delay, and improving the encoding performance.
  • Moreover, the final motion information of the current image block is only used for the encoding of the current image block (the encoding process of prediction value generation and reconstruction of the current image block) and is not used for the prediction of adjacent image blocks; that is, the candidate motion information obtained for adjacent image blocks is the original motion information, not the final motion information.
  • In other words, the final motion information is not saved; the original motion information is saved, that is, the motion information of the current image block is restored to the original motion information.
  • each image block is encoded one by one.
  • When encoding the current image block, the neighboring image blocks of the current image block have already been reconstructed; therefore, the information of the neighboring image blocks of the current image block can be used to obtain the template of the current image block.
  • the information may include, but is not limited to, reconstruction information of neighboring image blocks and / or prediction information of neighboring image blocks.
  • The reconstruction information may include, but is not limited to, a luminance value, a chrominance value, and the like; the prediction information may be an intermediate value from which reconstruction information can be obtained. For example, if a luminance value can be obtained using an intermediate value A, then the intermediate value A is prediction information. There are no restrictions on this prediction information.
  • If the information is reconstruction information, the generation of the template of the current image block needs to wait until the reconstruction stage, which reduces the efficiency of encoding and decoding and introduces delay.
  • If the information is prediction information, the generation of the template of the current image block also needs to wait until the reconstruction stage, which likewise reduces the efficiency of encoding and decoding and introduces delay. Therefore, the above-mentioned method greatly affects the parallelism of encoding and decoding.
  • In view of this, an embodiment of the present application proposes a template generation method that differs from the methods that use reconstruction information or prediction information to generate a template: the template of the current image block can be obtained based on the motion information (such as the motion vector and the reference frame index) of the candidate image blocks of the current image block.
  • the method of obtaining the template of the current image block according to the motion information of the candidate image block can be applied to both the encoding end and the decoding end.
  • Obtaining the template of the current image block according to the motion information of the candidate image block may include: when the motion information includes a motion vector and a reference frame index of the candidate image block, determining the reference frame image of the candidate image block according to the reference frame index; determining a reference image block from the reference frame image according to the motion vector and the candidate image block; and obtaining the template of the current image block according to the reference image block.
  • image block A1 is the current image block
  • image block A2 and image block A3 are candidate image blocks of image block A1.
  • Assuming that the reference frame index of image block A2 is the index of image B, the image block B2 corresponding to image block A2 is selected from image B (as shown by the dashed arrows in the figure); the position of image block B2 in image B is the same as the position of image block A2 in image A. Then, image block B2 can be moved according to the motion vector of image block A2; for example, using the motion vector (3, 3), image block B2 is moved (3 pixels to the right and 3 pixels upward) to obtain the image block B2' corresponding to image block B2, and image block B2' is the reference image block of image block A2.
  • Similarly, it can be determined that the reference image block of image block A3 is image block B3' (as shown by the dotted arrow in the figure).
  • the template of the image block A1 may be determined according to the image block B2 'and the image block B3', as shown in FIG. 2A.
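  • The derivation of the reference image block B2' from candidate block A2 described above (select the co-located block in the reference image and shift it by A2's motion vector) might look like the following sketch; the coordinate conventions, block sizes and helper name are assumptions for illustration.

        import numpy as np

        def reference_block_of_candidate(ref_frame, cand_top_left, cand_size, mv):
            """Take the block co-located with the candidate block in the reference
            frame image and shift it by the candidate's motion vector. With
            MV = (3, 3) the block moves 3 pixels to the right and 3 pixels upward
            (sign conventions are an assumption of this sketch)."""
            (row, col), (height, width) = cand_top_left, cand_size
            mvx, mvy = mv
            shifted_row = row - mvy          # "upward" decreases the row index
            shifted_col = col + mvx
            return ref_frame[shifted_row:shifted_row + height,
                             shifted_col:shifted_col + width]

        # e.g. reference block B2' of a 16x16 candidate block A2 at (32, 48) with MV (3, 3)
        image_b = np.zeros((128, 128), dtype=np.uint8)
        b2_prime = reference_block_of_candidate(image_b, (32, 48), (16, 16), (3, 3))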
  • In one example, the candidate image blocks may include M first candidate image blocks and N second candidate image blocks, where M is a natural number greater than or equal to 1 and N is a natural number greater than or equal to 0, or M is a natural number greater than or equal to 0 and N is a natural number greater than or equal to 1.
  • the first candidate image block is a candidate image block on the upper side of the current image block
  • the second candidate image block is a candidate image block on the left side of the current image block.
  • Acquiring the template of the current image block according to the motion information of the candidate image blocks may include, but is not limited to: determining a first template according to the motion vector prediction mode and motion information of the M first candidate image blocks; determining a second template according to the motion vector prediction mode and motion information of the N second candidate image blocks; and then determining the template of the current image block according to the first template and the second template.
  • Determining the template of the current image block according to the first template and the second template may include, but is not limited to: determining the first template as the template of the current image block; or determining the second template as the template of the current image block; or stitching the first template and the second template to obtain the template of the current image block.
  • In one example, when M is a natural number greater than or equal to 1 and N is 0, the first template may be determined according to the motion vector prediction mode and motion information of the M first candidate image blocks, and the first template is determined as the template of the current image block.
  • When N is a natural number greater than or equal to 1 and M is 0, the second template may be determined according to the motion vector prediction mode and motion information of the N second candidate image blocks, and the second template is determined as the template of the current image block.
  • When both M and N are natural numbers greater than or equal to 1, the first template may be determined according to the motion vector prediction mode and motion information of the M first candidate image blocks, the second template may be determined according to the motion vector prediction mode and motion information of the N second candidate image blocks, and the template of the current image block is determined according to the first template and the second template. Specifically, this may include determining the first template as the template of the current image block, or determining the second template as the template of the current image block, or stitching the first template and the second template to obtain the template of the current image block, as illustrated by the sketch below.
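  • A hedged sketch of the combination rule just described: the upper (first) template or the left (second) template can serve as the template on its own, or the two can be stitched together; representing the stitched L-shaped region as a concatenation of samples is an assumption of this sketch.

        import numpy as np

        def combine_templates(first_template=None, second_template=None):
            """Build the template of the current image block from the first (upper)
            template and/or the second (left) template. If only one is available it
            is used alone; otherwise the two regions are stitched by concatenating
            their samples, since the L-shaped region cannot be stored as one
            rectangle (this representation is an assumption of the sketch)."""
            parts = [t.ravel() for t in (first_template, second_template) if t is not None]
            if not parts:
                raise ValueError("at least one of the two templates is required")
            return np.concatenate(parts)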
  • the first candidate image block includes an adjacent image block and / or a second-neighboring image block on the upper side of the current image block.
  • the prediction mode of the neighboring image block is an inter mode or an intra mode; the prediction mode of the next neighboring image block is an inter mode.
  • the first candidate image block may include at least one adjacent image block whose prediction mode is inter mode, for example, all adjacent image blocks on the upper side of the current image block, or the first adjacent image block on the upper side of the current image block, Or any one or more adjacent image blocks on the upper side of the current image block.
  • the first candidate image block may further include at least one second neighboring image block whose prediction mode is the inter mode.
  • In some cases, the first candidate image block may further include intra-mode adjacent image blocks, for example, the first intra-mode adjacent image block on the upper side of the current image block, or all intra-mode adjacent image blocks on the upper side of the current image block, and so on. The above is only an example of the first candidate image block, which is not limited.
  • the second candidate image block includes a neighboring image block and / or a next-neighbor image block on the left side of the current image block.
  • the prediction mode of the neighboring image block is an inter mode or an intra mode; the prediction mode of the next neighboring image block is an inter mode.
  • the second candidate image block may include at least one adjacent image block whose prediction mode is inter mode, for example, all adjacent image blocks to the left of the current image block, or the first adjacent image block to the left of the current image block, Or any one or more adjacent image blocks to the left of the current image block.
  • In addition, the second candidate image block may further include at least one next-neighboring image block whose prediction mode is the inter mode, for example, all next-neighboring image blocks on the left side of the current image block, or the first next-neighboring image block on the left side of the current image block, or any one or more next-neighboring image blocks on the left side of the current image block.
  • In some cases, the second candidate image block may further include intra-mode adjacent image blocks, for example, the first intra-mode adjacent image block on the left side of the current image block, or all intra-mode adjacent image blocks on the left side of the current image block, and so on.
  • The above is only an example of the second candidate image block, which is not limited.
  • The adjacent image blocks of the current image block include, but are not limited to: spatially adjacent image blocks of the current image block (that is, adjacent image blocks in the same frame image); or temporally adjacent image blocks of the current image block (that is, adjacent image blocks in a different frame image).
  • The next-neighboring image blocks of the current image block include, but are not limited to: spatially next-neighboring image blocks of the current image block (that is, next-neighboring image blocks in the same frame image); or temporally next-neighboring image blocks of the current image block (that is, next-neighboring image blocks in a different frame image).
  • In one example, when M is greater than 1, the first template may include M sub-templates or P sub-templates and is stitched from the M sub-templates or the P sub-templates, where P is the number of inter-mode first candidate image blocks and P is less than or equal to M.
  • For example, when the M first candidate image blocks are all inter-mode candidate image blocks, the first template may include M sub-templates and is stitched from the M sub-templates.
  • When the M first candidate image blocks include intra-mode candidate image blocks, the first template may include M sub-templates (that is, each candidate image block corresponds to one sub-template) and is stitched from the M sub-templates; or the first template may include P sub-templates (that is, P sub-templates corresponding to the P inter-mode candidate image blocks) and is stitched from the P sub-templates.
  • the first template may include a first sub-template, and the first sub-template may be determined according to a motion vector prediction mode and motion information of any candidate image block on the upper side of the current image block.
  • When the first candidate image block includes at least one adjacent image block or next-neighboring image block whose prediction mode is the inter mode, the first template includes the first sub-template corresponding to that inter-mode adjacent image block or next-neighboring image block.
  • the motion information may include a motion vector and a reference frame index of the first candidate image block.
  • In one example, the first template is determined according to the motion vector prediction mode and motion information of the M first candidate image blocks, which may include, but is not limited to, the following cases.
  • Case 1: For the i-th candidate image block among the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the inter mode, the reference frame image of the i-th candidate image block is determined according to the reference frame index of the i-th candidate image block; the reference image block of the i-th candidate image block is determined from the reference frame image according to the motion vector of the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block; then, according to the determined reference image block, an image block with the first horizontal length and the first vertical length is obtained as the i-th sub-template included in the first template.
  • Case 2: For the i-th candidate image block among the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, the i-th candidate image block is filled with a default value (such as a default pixel value, which can be a luminance value pre-configured based on experience), and based on the image block filled with the default value, an image block with the first horizontal length and the first vertical length is obtained as the i-th sub-template included in the first template.
  • Case 3: For the i-th candidate image block among the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, the reference frame image corresponding to the i-th candidate image block is determined according to the reference frame index corresponding to the i-th candidate image block; the reference image block corresponding to the i-th candidate image block is determined from the reference frame image according to the motion vector corresponding to the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches (is equal or approximately equal to) the motion vector corresponding to the i-th candidate image block; then, according to the determined reference image block, an image block with the first horizontal length and the first vertical length is obtained as the i-th sub-template included in the first template. Here, the reference frame index and the motion vector corresponding to the i-th candidate image block are the reference frame index and the motion vector of an adjacent image block of the i-th candidate image block.
  • The first horizontal length and the horizontal length of the first candidate image block may satisfy a first proportional relationship (such as 1:1, 1:2, 2:1, etc., which is not limited), or the first horizontal length and the horizontal length of the current image block may satisfy a second proportional relationship (such as 1:1, 1:2, 2:1, etc.), or the first horizontal length may be equal to a first preset length (which can be configured according to experience).
  • The first vertical length and the vertical length of the first candidate image block may satisfy a third proportional relationship (such as 1:1, 1:2, 2:1, etc.), or the first vertical length and the vertical length of the current image block may satisfy a fourth proportional relationship (such as 1:1, 1:2, 2:1, etc.), or the first vertical length may be equal to a second preset length (that is, a length configured according to experience).
  • the first proportional relationship, the second proportional relationship, the third proportional relationship, and the fourth proportional relationship may be set to be the same or different.
  • the first preset length and the second preset length may be set to be the same or different.
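  • Cases 1 to 3 above for generating the i-th sub-template of the first (upper) template can be summarized in a short sketch; the candidate description as a dict, the default fill value of 128 and the coordinate handling are assumptions made for illustration.

        import numpy as np

        DEFAULT_FILL = 128  # illustrative default pixel (luma) value

        def upper_sub_template(candidate, ref_frames, sub_width, sub_height):
            """Generate the i-th sub-template of the first (upper) template.
            candidate: dict with 'mode', 'top_left', and, for cases 1 and 3,
            'mv' and 'ref_idx' (in case 3 these are borrowed from an adjacent block)."""
            if candidate["mode"] == "intra" and "mv" not in candidate:
                # Case 2: intra block with no usable motion information -> default fill.
                return np.full((sub_height, sub_width), DEFAULT_FILL, dtype=np.uint8)
            # Case 1 (inter) or case 3 (intra, borrowing a neighbour's MV / ref index):
            ref_frame = ref_frames[candidate["ref_idx"]]
            mvx, mvy = candidate["mv"]
            row, col = candidate["top_left"]
            return ref_frame[row + mvy:row + mvy + sub_height,
                             col + mvx:col + mvx + sub_width]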
  • Similarly, the second template may include N sub-templates or R sub-templates and is formed by stitching the N sub-templates or the R sub-templates, where R is the number of inter-mode second candidate image blocks and R is less than or equal to N.
  • For example, when the N second candidate image blocks are all inter-mode candidate image blocks, the second template may include N sub-templates and is formed by stitching the N sub-templates.
  • When the N second candidate image blocks include intra-mode candidate image blocks, the second template may include N sub-templates (that is, each candidate image block corresponds to one sub-template) and is formed by stitching the N sub-templates; or the second template may include R sub-templates (that is, R sub-templates corresponding to the R inter-mode candidate image blocks) and is formed by stitching the R sub-templates.
  • the second template may include a second sub-template, and the second sub-template may be determined according to a motion vector prediction mode and motion information of any candidate image block on the left side of the current image block.
  • When the second candidate image block includes at least one adjacent image block or next-neighboring image block whose prediction mode is the inter mode, the second template includes the second sub-template corresponding to that inter-mode adjacent image block or next-neighboring image block.
  • the motion information may include a motion vector and a reference frame index of the second candidate image block.
  • In one example, the second template is determined according to the motion vector prediction mode and motion information of the N second candidate image blocks, which may include, but is not limited to, the following cases.
  • Case 1: For the i-th candidate image block among the N second candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the inter mode, the reference frame image of the i-th candidate image block is determined according to the reference frame index of the i-th candidate image block; the reference image block of the i-th candidate image block is determined from the reference frame image according to the motion vector of the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block; then, according to the determined reference image block, an image block with the second horizontal length and the second vertical length is obtained as the i-th sub-template included in the second template.
  • Case 2: For the i-th candidate image block among the N second candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, the i-th candidate image block is filled with a default value (such as a default pixel value, which can be a luminance value pre-configured according to experience), and based on the image block filled with the default value, an image block with the second horizontal length and the second vertical length is obtained as the i-th sub-template included in the second template.
  • Case 3: For the i-th candidate image block among the N second candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, the reference frame image corresponding to the i-th candidate image block is determined according to the reference frame index corresponding to the i-th candidate image block; the reference image block corresponding to the i-th candidate image block is determined from the reference frame image according to the motion vector corresponding to the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches (is equal or approximately equal to) the motion vector corresponding to the i-th candidate image block; then, according to the determined reference image block, an image block with the second horizontal length and the second vertical length is obtained as the i-th sub-template included in the second template. Here, the reference frame index and the motion vector corresponding to the i-th candidate image block are the reference frame index and the motion vector of an adjacent image block of the i-th candidate image block.
  • The second horizontal length and the horizontal length of the second candidate image block may satisfy a fifth proportional relationship (such as 1:1, 1:2, 2:1, etc., which is not limited), or the second horizontal length and the horizontal length of the current image block may satisfy a sixth proportional relationship (such as 1:1, 1:2, 2:1, etc.), or the second horizontal length may be equal to a third preset length (which can be configured according to experience).
  • The second vertical length and the vertical length of the second candidate image block may satisfy a seventh proportional relationship (such as 1:1, 1:2, 2:1, etc.), or the second vertical length and the vertical length of the current image block may satisfy an eighth proportional relationship (such as 1:1, 1:2, 2:1, etc.), or the second vertical length may be equal to a fourth preset length (that is, a length configured according to experience).
  • the fifth proportional relationship, the sixth proportional relationship, the seventh proportional relationship, and the eighth proportional relationship may be set to be the same or different.
  • the third preset length and the fourth preset length may be set to be the same or different.
  • acquiring a template of the current image block according to the motion information of the candidate image block may further include but is not limited to: when the current image block corresponds to multiple motion information, acquiring a template corresponding to the motion information according to each motion information, For the obtaining method of each template, refer to the foregoing embodiment; then, obtain the weight corresponding to each motion information, and obtain the template of the current image block according to the weight corresponding to each motion information and the template corresponding to the motion information. For example, based on the weight corresponding to each motion information and the template corresponding to the motion information, a template of the current image block may be obtained in a weighted average manner.
  • the motion information corresponding to the current image block may include original motion information of the current image block.
  • For example, if the current image block corresponds to motion information A and motion information B, the template TA corresponding to motion information A and the template TB corresponding to motion information B are obtained using the foregoing embodiment, and the weight W1 of motion information A and the weight W2 of motion information B are obtained.
  • In this case, the template of the current image block can be (TA*W1 + TB*W2)/2.
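  • The weighted combination in the example above might be sketched as follows; dividing by the number of templates follows the example as written, and generalizing it to more than two pieces of motion information is an assumption.

        import numpy as np

        def weighted_template(templates, weights):
            """templates: one template per piece of motion information of the current
            block (all the same shape); weights: the weight of each piece of motion
            information. Following the two-template example, the result is
            (T1*W1 + T2*W2 + ...) / number_of_templates."""
            acc = np.zeros(templates[0].shape, dtype=np.float64)
            for t, w in zip(templates, weights):
                acc += w * t.astype(np.float64)
            return acc / len(templates)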
  • the candidate image block may include adjacent image blocks of all inter-modes on the upper side of the current image block, and adjacent image blocks of all inter-modes on the left side of the current image block.
  • For example, image block A3 and image block A4, which are in the inter mode, can be determined as second candidate image blocks of the current image block A1.
  • the image block A2 in the inter mode can be determined as the first candidate image block of the current image block A1.
  • If the current image block A1 has no candidate image blocks, the technical solution of this embodiment is no longer adopted and a conventional method is adopted instead. If the current image block A1 has candidate image blocks, such as a left candidate image block and/or an upper candidate image block, the technical solution of this embodiment is adopted.
  • Assuming that the candidate image blocks are image block A2, image block A3, and image block A4, the template of the current image block A1 can be obtained according to the motion information of image block A2, the motion information of image block A3, and the motion information of image block A4.
  • Specifically, the reference frame image of image block A2 may be determined according to the reference frame index of image block A2, the image block B2 corresponding to image block A2 is selected from the reference frame image, and image block B2 is moved according to the motion vector of image block A2 to obtain the reference image block B2' of image block A2.
  • the reference image block B3 'of the image block A3 and the reference image block B4' of the image block A4 can be obtained, as shown in FIG. 2C. Then, a template of the current image block A1 can be obtained from the reference image block B2 ', the reference image block B3', and the reference image block B4 '.
  • the horizontal length of the upper template of the current image block A1 is W and the vertical length is S.
  • the value of W can be configured based on experience, and the value of S can be configured based on experience.
  • the values of W and S are not restricted.
  • W may be the horizontal length of the current image block A1, or the horizontal length of the candidate image block A2, or twice the horizontal length of the current image block A1, etc.
  • S may be the vertical length of the candidate image block A2, or 1/3 of the vertical length of the candidate image block A2, and so on.
  • FIG. 2D is a schematic diagram of a template corresponding to the reference image block B2'. In FIG. 2D, the horizontal length W of the template is the horizontal length of the candidate image block A2, that is, W is the horizontal length of the reference image block B2'; the vertical length S of the template is 1/3 of the vertical length of the candidate image block A2, that is, S is 1/3 of the vertical length of the reference image block B2'.
  • the horizontal length of the template on the left side of the current image block A1 is R and the vertical length is H.
  • the value of R can be configured based on experience, and the value of H can be configured based on experience.
  • H may be the vertical length of the current image block A1 or the vertical length of the candidate image block A3;
  • R may be the horizontal length of the candidate image block A3, or 1/3 of the horizontal length of the candidate image block A3, and so on.
  • FIG. 2D also shows a schematic diagram of a template corresponding to the reference image block B3'. The vertical length H of this template is the vertical length of the candidate image block A3, and the horizontal length R of the template is 1/3 of the horizontal length of the candidate image block A3.
  • assuming that there are N candidate image blocks on the upper side of the current image block, and that the i-th candidate image block has a horizontal length of w_i and a vertical length of S, the prediction mode of the candidate image block needs to be determined.
  • if it is in the intra mode, the corresponding sub-template is no longer generated, or it is filled according to a default value (such as a default pixel value, which can be a brightness value pre-configured according to experience), as the i-th sub-template of the upper template.
  • if it is in the inter mode, the motion information (such as the motion vector and the reference frame index) of the i-th candidate image block is obtained, and a template with a horizontal length of w_i and a vertical length of S is generated based on the motion vector and the reference frame index, as the i-th sub-template on the upper side.
  • for example, assuming that the motion vector is MV and the reference frame index is idx, in the reference frame image indicated by idx, a rectangular block with a relative displacement of MV, a horizontal length of w_i, and a vertical length of S is used as the i-th sub-template on the upper side.
  • the horizontal length w_i and the vertical length S of the i-th sub-template may be agreed in advance by the encoding end and the decoding end, and may be stored at the encoding end and the decoding end in advance.
  • assuming that there are N candidate image blocks on the left side of the current image block, and that the i-th candidate image block has a vertical length of h_i and a horizontal length of R, the prediction mode of the candidate image block needs to be determined.
  • if it is in the intra mode, the corresponding sub-template is no longer generated, or it is filled according to a default value (such as a default pixel value, which can be a brightness value pre-configured according to experience), as the i-th sub-template of the left template.
  • if it is in the inter mode, the motion information (such as the motion vector and the reference frame index) of the i-th candidate image block is obtained, and a template with a horizontal length of R and a vertical length of h_i is generated based on the motion vector and the reference frame index, as the i-th sub-template on the left side.
  • for example, assuming that the motion vector is MV and the reference frame index is idx, in the reference frame image indicated by idx, a rectangular block with a relative displacement of MV, a horizontal length of R, and a vertical length of h_i is used as the i-th sub-template on the left side.
  • the horizontal length R and the vertical length h_i of the i-th sub-template may be agreed in advance by the encoding end and the decoding end, and may be stored at the encoding end and the decoding end in advance.
  • the first template can be stitched from all the sub-templates on the upper side
  • the second template can be stitched from all the sub-templates on the left.
  • the first template and the second template are stitched into the template of the current image block.
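  • As a rough sketch of the sub-template generation and stitching described above (the array representation, the helper names, and the default pixel value of 128 are illustrative assumptions; intra-mode candidates are filled with the default value):

```python
import numpy as np

DEFAULT_PIXEL = 128  # assumed default luma value for intra-mode candidate blocks

def build_upper_template(sub_blocks, S):
    """Stitch the first template from the upper sub-templates, left to right.

    sub_blocks: list of (w_i, samples), where samples is an S x w_i array taken
    from the i-th candidate's reference frame, or None when the candidate is in
    intra mode and the default fill is used instead.
    """
    parts = []
    for w_i, samples in sub_blocks:
        if samples is None:
            parts.append(np.full((S, w_i), DEFAULT_PIXEL, dtype=np.float64))
        else:
            parts.append(np.asarray(samples, dtype=np.float64))
    return np.hstack(parts) if parts else None

def build_left_template(sub_blocks, R):
    """Stitch the second template from the left sub-templates, top to bottom."""
    parts = []
    for h_i, samples in sub_blocks:
        if samples is None:
            parts.append(np.full((h_i, R), DEFAULT_PIXEL, dtype=np.float64))
        else:
            parts.append(np.asarray(samples, dtype=np.float64))
    return np.vstack(parts) if parts else None

# The template of the current image block is the pair (upper template, left template):
upper = build_upper_template([(8, np.ones((2, 8))), (8, None)], S=2)   # second candidate is intra
left = build_left_template([(4, np.zeros((4, 2)))], R=2)
print(upper.shape, left.shape)  # (2, 16) (4, 2)
```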
  • the candidate image block may include a neighboring image block of the first inter mode on the upper side of the current image block, and a neighboring image block of the first inter mode on the left of the current image block.
  • if the first image block A3 on the left side is in the inter mode, the image block A3 may be determined as a candidate image block of the current image block A1. If the first image block A2 on the upper side is in the inter mode, the image block A2 may be determined as a candidate image block of the current image block A1.
  • if the current image block A1 has no candidate image blocks, the technical solution of this embodiment is not adopted, and a conventional method is adopted instead. If the current image block A1 has candidate image blocks, such as the left candidate image block and/or the upper candidate image block, the technical solution of this embodiment is adopted.
  • the template of the current image block A1 can be obtained according to the motion information of the image block A2 and the motion information of the image block A3.
  • the reference frame image of image block A2 may be determined according to the reference frame index of image block A2, the image block B2 corresponding to image block A2 is selected from the reference frame image, and image block B2 is moved according to the motion vector of image block A2.
  • a reference image block B2 'of the image block A2 is obtained.
  • a reference image block B3 'of the image block A3 can be obtained, and a template of the current image block A1 can be obtained according to the reference image block B2' and the reference image block B3 '.
  • the horizontal length of the upper template of the current image block A1 is W and the vertical length is S.
  • the value of W can be configured based on experience, the value of S can be configured based on experience, and the values of W and S are not restricted.
  • W may be the horizontal length of the current image block A1 or the horizontal length of the candidate image block A2;
  • S may be the vertical length of the candidate image block A2 or 1/3 of the vertical length of the candidate image block A2.
  • FIG. 2F is a schematic diagram of a template corresponding to the reference image block B2'.
  • the horizontal length of the template on the left side of the current image block A1 is R and the vertical length is H.
  • the value of R can be configured based on experience, and the value of H can be configured based on experience.
  • H may be the vertical length of the current image block A1, or the vertical length of the candidate image block A3;
  • R may be the horizontal length of the candidate image block A3, or 1/3 of the horizontal length of the candidate image block A3, and so on.
  • FIG. 2F is also a schematic diagram of a template corresponding to the reference image block B3'.
  • the prediction mode of the candidate image block is then determined. If it is in the intra mode, a corresponding template is not generated, or it is filled according to a default value (such as a default pixel value, which may be a brightness value pre-configured according to experience) as the first template. If it is in the inter mode, the motion information (such as the motion vector and the reference frame index) of the candidate image block is obtained, and a template with a horizontal length of w and a vertical length of S is generated based on the motion vector and the reference frame index as the first template.
  • the candidate image block may include a neighboring image block of the first inter mode on the upper side of the current picture block, and a neighboring image block of the first inter mode on the left of the current picture block.
  • the horizontal length W of the upper template of the current image block A1 is set to the horizontal length of the current image block A1, and the vertical length H of the left template of the current image block A1 is set to the vertical length of the current image block A1.
  • the candidate image block may include the adjacent image block in the inter mode on the upper side of the current image block, the second-neighbor image block in the inter mode on the upper side of the current image block (that is, when the adjacent image block is in the intra mode, the image block adjacent to it, which is the second-neighbor image block of the current image block, is selected), the adjacent image block in the inter mode on the left side of the current image block, and the second-neighbor image block in the inter mode on the left side of the current image block.
  • the image block A3 and image block A4 in the inter mode can be determined as candidate image blocks of the current image block A1.
  • the image block A2 in the inter mode can be determined as a candidate image block of the current image block A1.
  • there is an adjacent image block in the intra mode on the left side, such as image block A7, and there is an image block A8 in the inter mode on the left side of image block A7; that is, image block A8 is the next-to-adjacent image block on the left side of the current image block A1. Therefore,
  • the next neighboring image block A8 in the inter mode may be determined as a candidate image block of the current image block A1.
  • there is an adjacent image block in the intra mode on the upper side, such as image block A5, and there is an image block A6 in the inter mode on the upper side of image block A5; that is, image block A6 is the next-neighbor image block on the upper side of the current image block A1. Therefore, the next-neighbor image block A6 in the inter mode can be determined as a candidate image block of the current image block A1.
  • when the candidate image blocks are the image block A2, the image block A3, the image block A4, the image block A6, and the image block A8, the template of the current image block A1 can be obtained according to the motion information of the image block A2, the motion information of the image block A3, the motion information of the image block A4, the motion information of the image block A6, and the motion information of the image block A8.
  • the sub-templates are obtained based on the motion information of the image block A6 and the motion information of the image block A8.
  • the template is finally shown in Figure 2I.
  • Case 5 If the first adjacent image block above the current image block is in intra mode and the image block above the first adjacent image block is in inter mode, the candidate image block may include the current image block The next neighboring image block in the inter mode. If the first adjacent image block on the left side of the current image block is in intra mode and the image block on the left side of the first adjacent image block is in inter mode, the candidate image block may include the left side of the current image block. Second-neighboring image block in inter mode.
  • that is, image block A4 is the second-neighboring image block on the upper side of the current image block A1; therefore, the second-neighboring image block A4 in the inter mode can be determined as a candidate image block of the current image block A1.
  • the image block A5 is the next-to-next image block on the left of the current image block A1.
  • the next neighboring image block A5 in the inter mode is determined as a candidate image block of the current image block A1.
  • the template of the current image block A1 can be obtained according to the motion information of the image block A4 and the motion information of the image block A5. For details, please refer to Cases 2 and 3. The details are not repeated here, and the template is finally shown in FIG. 2K or FIG. 2L.
  • the candidate image block may include an adjacent image block in the inter mode on the upper side of the current image block, an adjacent image block in the intra mode on the upper side of the current image block, an adjacent image block in the inter mode on the left side of the current image block, and an adjacent image block in the intra mode on the left side of the current image block.
  • the image block A3 and image block A4 in the inter mode can be determined as candidate image blocks of the current image block A1.
  • the image block A2 in the inter mode can be determined as a candidate image block of the current image block A1.
  • there is an adjacent image block in the intra mode on the left side, such as image block A7.
  • the image block A7 in the intra mode can be determined as a candidate image block of the current image block A1.
  • the image block A5 in the intra mode can be determined as a candidate image block of the current image block A1.
  • when the candidate image blocks are the image block A2, the image block A3, the image block A4, the image block A5, and the image block A7, the template of the current image block A1 can be obtained according to the motion information of the image block A2, the motion information of the image block A3, the motion information of the image block A4, the motion information of the image block A5, and the motion information of the image block A7.
  • the sub-template can be obtained according to the motion information of the image block A5 and the image block A7 in the intra mode.
  • alternatively, a default value (for example, a default pixel value, which can be a brightness value pre-configured according to experience) can be used for filling, as the upper or left sub-template.
  • the template is finally shown in Figure 2M.
  • in another example, when obtaining a sub-template based on the motion information of image block A5, since image block A5 is in the intra mode and does not have motion information, the motion information of an adjacent image block of image block A5 (such as image block A6) can be determined as the motion information of image block A5. After the motion information of image block A5 is obtained in this way, the corresponding sub-template can be obtained by using the motion information of image block A5. For specific acquisition methods, see Case 1.
  • similarly, the motion information of an adjacent image block of image block A7 (such as image block A8) can be determined as the motion information of image block A7, and the corresponding sub-template is obtained by using the motion information of image block A7.
  • if the first adjacent image block on the upper side of the current image block is in the intra mode and the image block on the upper side of that first adjacent image block is in the inter mode, the candidate image block may include the adjacent image block of the first intra mode on the upper side of the current image block. If the first adjacent image block on the left side of the current image block is in the intra mode and the image block on the left side of that first adjacent image block is in the inter mode, the candidate image block may include the adjacent image block of the first intra mode on the left side of the current image block.
  • for example, if the first image block A2 on the upper side is in the intra mode, and the image block A4 in the inter mode exists on the upper side of the image block A2, the image block A2 in the intra mode can be determined as a candidate image block of the current image block A1. If the first image block A3 on the left side is in the intra mode, and the image block A5 in the inter mode exists on the left side of the image block A3, the image block A3 in the intra mode can be determined as a candidate image block of the current image block A1.
  • the template of the current image block A1 can be obtained according to the motion information of the image block A2 and the motion information of the image block A3.
  • when the image block A2 and the image block A3 are in the intra mode, after determining that the prediction mode of the image block is the intra mode, a default value (such as a default pixel value, which can be a brightness value pre-configured according to experience) can be used for filling, as the template on the upper or left side.
  • the template is finally shown in FIG. 2N or FIG. 2O.
  • in another example, when obtaining a template based on the motion information of image block A2, since image block A2 is in the intra mode and does not have motion information, the motion information of an adjacent image block of image block A2 (that is, image block A4) can be determined as the motion information of image block A2, and the corresponding template is obtained using the motion information of image block A2.
  • similarly, the motion information of an adjacent image block of image block A3 (that is, image block A5) can be determined as the motion information of image block A3, and the motion information of image block A3 is used to obtain the corresponding template.
  • a motion vector may be used to represent a relative displacement between a current image block of a current frame image and a reference image block of a reference frame image.
  • image A is the current frame image
  • image block A1 is the current image block
  • image B is the reference frame image of image A
  • image block B1 is the reference image block of image block A1.
  • a motion search can be performed in image B to find the image block B1 that best matches image block A1, and the relative displacement between the image block A1 and the image block B1 is determined; this relative displacement is the motion vector of the image block A1.
  • the motion vector is (-6, 4), which indicates that the image block B1 moves 6 pixels to the left in the horizontal direction and 4 pixels upwards in the vertical direction compared to the image block A1.
  • the encoded bit stream carries the motion vector (-6, 4) of the image block A1, instead of the image block A1.
  • the decoder can obtain the motion vector (-6, 4) of image block A1.
  • the position of the reference image block B1 is determined in the reference frame image B according to the motion vector; that is, starting from the position of the current image block A1, 6 pixels are moved to the left and 4 pixels are moved upward, and the position of the reference image block B1 is obtained.
  • the reference image block B1 is read at that position, and the reference image block B1 is used to reconstruct the current image block A1. Because the similarity between the reference image block B1 and the current image block A1 is very high, using the reference image block B1 to reconstruct the current image block A1 can reconstruct a highly similar image. Since the number of bits occupied by the motion vector is less than the number of bits occupied by the image block A1, carrying the motion vector in the encoded bit stream corresponding to the current image block A1 instead of carrying the image block A1 can save a large number of bits.
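  • A minimal sketch of how a decoder could locate and read the reference image block from the motion vector (integer-pixel motion only, no residual decoding; the function name, array layout, and sign convention follow the example above as assumptions):

```python
import numpy as np

def read_reference_block(reference_frame, x, y, width, height, mv):
    """Locate the reference image block indicated by a motion vector.

    (x, y) is the top-left position of the current image block; mv = (dx, dy),
    where dx = -6 means 6 pixels to the left and dy = 4 means 4 pixels upward,
    matching the example above (no bounds checking in this sketch).
    """
    dx, dy = mv
    ref_x = x + dx          # horizontal: negative dx moves the position left
    ref_y = y - dy          # vertical: positive dy moves the position up
    return reference_frame[ref_y:ref_y + height, ref_x:ref_x + width].copy()

frame_b = np.arange(64 * 64, dtype=np.int32).reshape(64, 64)   # toy reference frame image B
block_b1 = read_reference_block(frame_b, x=32, y=32, width=8, height=8, mv=(-6, 4))
print(block_b1.shape)   # (8, 8): samples used to reconstruct the current image block A1
```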
  • an image block adjacent to the current image block A1 may include: an image block A2 and an image block A3, and the motion vector list of the current image block A1 may include the motion vector A21 of the image block A2 and Motion vector A31 of image block A3.
  • when the encoding end sends the encoded bit stream corresponding to the image block A1 to the decoding end, the encoded bit stream carries the index value of the original motion vector A21 (that is, its index value in the motion vector list), instead of the motion vector (-6, 4) of the image block A1 or the image block A1 itself.
  • the decoding end After receiving the encoded bit stream corresponding to the image block A1, the decoding end can obtain the index value of the original motion vector A21, and obtain the original motion vector A21 from the motion vector list according to the index value. Since the number of bits occupied by the index value is less than the number of bits occupied by the motion vector, further bits can be saved.
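  • A toy illustration of this index-based signalling (the motion vector values are invented; the index value is 1-based, matching the example in the text where the first motion vector in the list has index value 1):

```python
mv_list = [(-6, 4), (-5, 3)]       # motion vector list of A1: motion vector A21, then A31 (values assumed)
signalled_index = 1                # index value of the original motion vector A21 carried in the bit stream
original_mv = mv_list[signalled_index - 1]
print(original_mv)                 # (-6, 4): recovered without transmitting the motion vector itself
```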
  • in this embodiment, a target motion vector different from the original motion vector A21 is obtained according to the original motion vector A21; the target motion vector is closer to the true motion vector of the image block A1, and the motion vector closest to the true motion vector is used as the final motion vector of the image block A1.
  • the method of “using the target motion vector as the final motion vector of the image block A1” can improve prediction quality and reduce prediction errors.
  • the original motion information is the original motion vector corresponding to the current image block
  • the target motion information is the target motion vector corresponding to the current image block. See FIG. 3, which is a schematic flowchart of an encoding method.
  • Step 301 The encoder obtains motion information of a candidate image block of the current image block.
  • Step 302 The encoder obtains a template of the current image block according to the motion information of the candidate image block.
  • step 303 the encoding end obtains a template-based target motion vector according to the original motion vector corresponding to the current image block and the obtained template, and the target motion vector may be different from the original motion vector.
  • the original motion vector corresponding to the current image block can be obtained.
  • the motion vector list at the encoding end includes motion vector A21, motion vector A31, motion vector A41, and motion vector A51 in this order.
  • the default motion vector may be directly determined as the original motion vector.
  • the encoder selects a motion vector from the motion vector list, which may include: the encoder selects the first motion vector from the motion vector list; or selects the last motion vector from the motion vector list; or randomly selects from the motion vector list Select a motion vector; or, use a hash algorithm to select a motion vector from the motion vector list.
  • the above method is only a few examples, and there is no limitation on this, as long as the motion vector can be selected from the motion vector list.
  • the motion vector list is used to record a motion vector of an image block adjacent to the current image block. For example, after obtaining the motion vector A21 of the image block A2, the motion vector A21 may be recorded in the motion vector list, and after obtaining the motion vector A31 of the image block A3, the motion vector A31 may be recorded in the motion vector list, so as to By analogy, in the end, the motion vector list of image block A1 can be obtained.
  • Step 304 The encoding end determines the final motion vector of the current image block according to the original motion vector and the target motion vector, and encodes the current image block according to the final motion vector to obtain an encoded bit stream corresponding to the current image block.
  • the encoding end can obtain the encoding performance of the original motion vector and the encoding performance of the target motion vector.
  • the encoding end determines that the final motion vector of the current image block is the target motion vector, and the encoding end sends an encoded bit stream carrying the first indication information to the decoding end.
  • the encoding end determines that the final motion vector of the current image block is the original motion vector, and the encoding end sends an encoded bit stream carrying the second indication information to the decoding end.
  • the decoder is explicitly notified of the first indication information or the second indication information.
  • the notification may also be performed in an implicit manner, that is, the first indication information or the second indication information is not carried in the encoded bit stream.
  • the encoding end and the decoding end may also negotiate a motion vector decision strategy or define a decision strategy in a standard and store the decision strategy on the encoding end and the decoding end, for example, the motion vector decision strategy may agree on the first strategy information Or, the second policy information is agreed, or the third policy information is agreed.
  • the encoded bit stream may not carry the first indication information and the second indication information.
  • the encoded bit stream may not carry the first indication information and the second indication information.
  • the encoding performance of the target motion vector is better than the encoding performance of the original motion vector.
  • the encoded bit stream may not carry the first indication information and the second indication information.
  • the encoding end may also obtain the index value of the original motion vector in the motion vector list, and send the encoded bit stream carrying the index value to the decoding end. For example, if the original motion vector is motion vector A21 and motion vector A21 is the first motion vector in the motion vector list, the index value is 1.
  • Step 305 The encoding end stores the original motion vector or the final motion vector corresponding to the current image block as the motion information of the current image block.
  • the target motion vector can be obtained according to the original motion vector, and the final motion vector of the current image block is determined according to the target motion vector and the original motion vector, instead of directly using the original motion vector as the final motion vector of the current image block, Therefore, the accuracy of the motion vector is improved, and the coding performance is further improved.
  • a template of the current image block may be acquired according to the motion information of the candidate image block, and a target motion vector of the current image block may be acquired according to the template of the current image block.
  • the above method can quickly obtain the template of the current image block, and then obtain the target motion vector of the current image block according to the template, which can improve the encoding and decoding efficiency and reduce the encoding and decoding delay.
  • a template of the current image block can be obtained, and a target motion vector of the current image block can be obtained according to the template of the current image block.
  • the target motion vector based on the template is obtained according to the original motion vector corresponding to the current image block and the obtained template.
  • the implementation process can be shown in FIG. 4A and can include the following steps.
  • Step 401 The encoding end determines the original motion vector as a central motion vector.
  • Step 402 The encoding end determines each edge motion vector corresponding to the central motion vector.
  • the edge motion vector may be different from the center motion vector.
  • the encoding end determines each edge motion vector corresponding to the central motion vector, which may include: moving the central motion vector (x, y) in different directions by an offset St, thereby obtaining edge motion vectors (x-St, y), edge motion vector (x + St, y), edge motion vector (x, y + St), edge motion vector (x, y-St).
  • in the horizontal direction, the center motion vector (x, y) can be shifted to the left by the offset St to obtain the edge motion vector (x-St, y); in the horizontal direction, the center motion vector (x, y) can be shifted to the right by the offset St to obtain the edge motion vector (x+St, y); in the vertical direction, the center motion vector (x, y) can be moved up by the offset St to obtain the edge motion vector (x, y+St); in the vertical direction, the center motion vector (x, y) can be moved down by the offset St to obtain the edge motion vector (x, y-St).
  • the initial value of the offset St can be configured according to experience, for example, it can be 2, 4, 8, 16, and so on.
  • for example, for the center motion vector (3, 3) and an offset St of 4, the edge motion vectors are the edge motion vector (7, 3), the edge motion vector (3, 7), the edge motion vector (-1, 3), and the edge motion vector (3, -1).
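  • A minimal sketch of this edge-motion-vector construction (the function name is illustrative only):

```python
def edge_motion_vectors(center, st):
    """Return the four edge motion vectors around a center motion vector (x, y):
    (x - St, y), (x + St, y), (x, y + St), (x, y - St)."""
    x, y = center
    return [(x - st, y), (x + st, y), (x, y + st), (x, y - st)]

print(edge_motion_vectors((3, 3), 4))   # [(-1, 3), (7, 3), (3, 7), (3, -1)]
```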
  • Step 403 The encoding end obtains the encoding performance of the central motion vector according to the template of the current image block, and obtains the encoding performance of each edge motion vector according to the template of the current image block.
  • the encoding end obtains the encoding performance of the central motion vector according to the template of the current image block, which may include, but is not limited to, determining the encoding performance of the central motion vector according to the parameter information of the template of the current image block and the parameter information of the first target reference block.
  • the first target reference block may be an image block obtained after the reference image block corresponding to the template is offset based on the central motion vector.
  • the encoding end may determine the prediction performance of the central motion vector according to the parameter information of the template and the parameter information of the first target reference block, and determine the encoding performance of the central motion vector according to the prediction performance of the central motion vector.
  • the encoding performance of the central motion vector may be determined based on the prediction performance and the actual number of bits required for encoding.
  • the above parameter information may be a brightness value; or, it may be a brightness value and a chrominance value.
  • the brightness value of the template of the current image block and the brightness value of the first target reference block may be obtained first. After obtaining the template of the current image block, the brightness value of each pixel of the template can be obtained, and the reference image block corresponding to the template can be obtained.
  • for example, the reference image block can be moved by using the central motion vector (3, 3) to obtain the image block X corresponding to the reference image block (for example, the reference image block is moved to the right by 3 pixels and moved up by 3 pixels, and the processed image block is marked as image block X); the image block X is the first target reference block, and the brightness value of each pixel point of the image block X can be obtained.
  • the prediction performance of the central motion vector can be determined using the following formula: SAD = Σ_{i=1}^{M} | TM_i - TMP_i |.
  • SAD is the sum of absolute differences, which is used to represent the prediction performance of the central motion vector; TM_i represents the brightness value of the i-th pixel of the template; TMP_i represents the brightness value of the i-th pixel of the image block X; and M represents the total number of pixels.
  • when the parameter information is a luminance value and a chrominance value:
  • the following formula is used to determine the luminance value prediction performance SAD of the central motion vector: SAD = Σ_{i=1}^{M} | TM_i - TMP_i |, and the following formula is used to determine the chrominance value prediction performance CSAD: CSAD = Σ_{i=1}^{M_c} | CTM_i - CTMP_i |.
  • the average of the luma value prediction performance SAD and the chroma value prediction performance CSAD is the prediction performance of the center motion vector.
  • CSAD is the sum of absolute differences of the chrominance values, which is used to represent the chrominance prediction performance of the central motion vector; CTM_i is the chrominance value of the i-th pixel of the template; CTMP_i is the chrominance value of the i-th pixel of the image block X; and M_c represents the total number of chrominance pixels.
  • the encoding performance of the center motion vector may be determined according to the prediction performance and the actual number of bits required for encoding.
  • RDO (Rate Distortion Optimized, the rate-distortion principle) usually uses the following formula to determine the coding performance of the central motion vector: J = D + λ * R.
  • J is the encoding performance, D is the prediction performance, λ is a Lagrangian multiplier (a numerical value configured according to experience), and R is the actual number of bits required for image block encoding, that is, the sum of the bits of the information carried by the encoded bit stream.
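  • A small sketch of these cost calculations (the helper names, the NumPy representation of the sample arrays, and the example λ value of 0.85 are assumptions for illustration; the text does not fix a particular λ):

```python
import numpy as np

LAMBDA = 0.85   # assumed Lagrangian multiplier, configured according to experience

def sad(a, b):
    """Sum of absolute differences between two equally sized sample arrays."""
    return int(np.abs(np.asarray(a, np.int64) - np.asarray(b, np.int64)).sum())

def prediction_performance(tm, tmp, ctm=None, ctmp=None):
    """D: luma SAD, or the average of luma SAD and chroma CSAD when chroma is used."""
    d = sad(tm, tmp)
    if ctm is not None and ctmp is not None:
        d = (d + sad(ctm, ctmp)) / 2.0
    return d

def coding_performance(d, bits):
    """J = D + lambda * R: the rate-distortion cost used to compare motion vectors."""
    return d + LAMBDA * bits
```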
  • the encoding end obtains the encoding performance of each edge motion vector according to the template of the current image block, which may include, but is not limited to: for each edge motion vector, determining the encoding performance of the edge motion vector according to the parameter information of the template of the current image block and the parameter information of the second target reference block corresponding to the edge motion vector.
  • the second target reference block may be an image block obtained after the reference image block corresponding to the template is offset based on the edge motion vector.
  • the encoding end may determine the prediction performance of the edge motion vector according to the parameter information of the template and the parameter information of the second target reference block, so as to obtain the prediction performance of each edge motion vector.
  • the encoding performance of the edge motion vector may be determined according to the prediction performance and the actual number of bits required for encoding.
  • the above parameter information may be a brightness value; or, it may be a brightness value and a chrominance value.
  • Case 2 is similar to Case 1, except that in Case 2, the reference image block of the template is moved using each edge motion vector to obtain the corresponding second target reference block, and each edge motion vector is obtained using the second target reference block. Encoding performance, and in case one, the reference image block of the template is moved using the central motion vector to obtain a first target reference block, and the encoding performance of the central motion vector is obtained using the first target reference block.
  • Step 404 The encoding end determines the target motion vector from the center motion vector and each edge motion vector according to the encoding performance of the center motion vector and the encoding performance of each edge motion vector.
  • the encoder can select the motion vector with the best encoding performance from the center motion vector and each edge motion vector.
  • the motion vector with the best encoding performance can be determined as the target motion vector.
  • when the motion vector with the best encoding performance is the original motion vector, the encoding end can still select the motion vector with the best encoding performance from the center motion vector and each edge motion vector and determine it as the target motion vector.
  • for example, if the motion vector with the best coding performance is the edge motion vector (7, 3), the encoding end may determine the edge motion vector (7, 3) as the target motion vector. If the motion vector with the best coding performance is the center motion vector (3, 3), that is, the original motion vector, the encoding end can also determine the edge motion vector with the best coding performance (such as the edge motion vector (7, 3)) as the target motion vector.
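  • A toy sketch of this selection step, picking the motion vector with the best (here: lowest) coding performance J from the center and edge motion vectors (the costs below are invented for illustration):

```python
def select_target_motion_vector(scored_mvs):
    """scored_mvs: list of (motion_vector, J) pairs for the center and edge motion vectors."""
    return min(scored_mvs, key=lambda item: item[1])[0]

print(select_target_motion_vector([((3, 3), 52.0), ((7, 3), 41.5), ((3, 7), 60.0),
                                   ((-1, 3), 58.0), ((3, -1), 63.0)]))   # (7, 3)
```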
  • the template-based target motion vector is obtained according to the original motion vector corresponding to the current image block and the obtained template.
  • the implementation process can be shown in FIG. 4B, and can include the following steps.
  • Step 411 The encoding end determines the original motion vector as the central motion vector.
  • Step 412 The encoding end determines each edge motion vector corresponding to the central motion vector.
  • the edge motion vector may be different from the center motion vector.
  • Step 413 The encoding end obtains the encoding performance of the central motion vector according to the template of the current image block, and obtains the encoding performance of each edge motion vector according to the template of the current image block.
  • for steps 411 to 413, reference may be made to steps 401 to 403, and details are not described herein again.
  • Step 414 The encoder determines whether the iteration end condition of the target motion vector is satisfied. If yes, go to step 416; if not, go to step 415.
  • the iteration end condition may include, but is not limited to, the number of iterations reaching the number threshold, or the execution time reaching the time threshold, or the offset parameter St has been modified to a preset value, such as 1.
  • step 415 the encoder selects the motion vector with the best coding performance from the central motion vector and each edge motion vector as a new central motion vector, and returns to step 412.
  • the edge motion vector (7, 3) may be determined as the new central motion vector, and step 412 is performed again, and so on.
  • the value of the offset parameter St may be an initial value, such as 16.
  • the value of the offset parameter St is adjusted first, such as adjusted to the last offset parameter St minus 2, or adjusted to half of the last offset parameter St. This is not limited, as long as it is smaller than the last offset parameter St, and the subsequent description is made by adjusting to half of the last offset parameter St as an example. Therefore, when step 412 is executed a second time, the value of the offset parameter St is 8; when step 412 is executed a third time, the value of the offset parameter St is 4; and so on.
  • before step 412 is executed again, after adjusting the value of the offset parameter St, it is first determined whether the adjusted offset parameter St is less than or equal to a preset value, such as 1. If not, step 412 may be performed based on the adjusted offset parameter St. If so, the value of the offset parameter St can be set to 1, step 412 is performed based on the offset parameter St (that is, the value 1), and when the execution reaches step 414, the determination result is that the iteration end condition is satisfied.
  • Step 416 The encoding end determines the target motion vector from the center motion vector and each edge motion vector according to the encoding performance of the center motion vector and the encoding performance of each edge motion vector.
  • for the processing of step 416, refer to step 404, and details are not described herein again.
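  • The iterative refinement of steps 411 to 416 can be sketched roughly as follows (the function signature, the cost function, and the parameter values are assumptions for illustration, not the normative procedure):

```python
def iterative_search(original_mv, cost_fn, st_init=16, st_min=1, max_iters=16):
    """Refine the target motion vector around the original motion vector.

    cost_fn(mv) returns the coding performance J of a motion vector against the
    template (lower is better); the offset St is halved each round until it
    reaches the preset value, mirroring steps 411 to 416.
    """
    center, st = original_mv, st_init
    for _ in range(max_iters):
        x, y = center
        candidates = [center, (x - st, y), (x + st, y), (x, y + st), (x, y - st)]
        best = min(candidates, key=cost_fn)          # best coding performance this round
        if st <= st_min:                             # iteration end condition reached
            return best
        center = best                                # best vector becomes the new center
        st = max(st // 2, st_min)                    # e.g. adjust St to half of its last value
    return center

# Toy usage with a quadratic cost whose minimum is at (5, 2):
print(iterative_search((3, 3), lambda mv: (mv[0] - 5) ** 2 + (mv[1] - 2) ** 2))   # (5, 2)
```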
  • the encoding end can obtain the encoding performance of the original motion vector and the encoding performance of the target motion vector.
  • the encoding end obtaining the encoding performance of the original motion vector may include, but is not limited to, determining the encoding performance of the original motion vector according to the parameter information of the template of the current image block and the parameter information of the third target reference block.
  • the third target reference block is an image block obtained after the reference image block corresponding to the template is shifted based on the original motion vector.
  • for example, the prediction performance of the original motion vector may be determined according to the parameter information of the template and the parameter information of the third target reference block, and the encoding performance of the original motion vector may be determined according to the prediction performance; for example, the encoding performance of the original motion vector is determined according to the prediction performance and the actual number of bits required for encoding.
  • the above parameter information may be a brightness value; or, a brightness value and a chrominance value.
  • the encoding end acquiring the encoding performance of the target motion vector may include, but is not limited to, determining the encoding performance of the target motion vector according to the parameter information of the template of the current image block and the parameter information of the fourth target reference block.
  • the fourth target reference block is an image block obtained after the reference image block corresponding to the template is offset based on the target motion vector.
  • for example, the prediction performance of the target motion vector may be determined according to the parameter information of the template and the parameter information of the fourth target reference block, and the encoding performance of the target motion vector may be determined according to the prediction performance; for example, the encoding performance of the target motion vector is determined according to the prediction performance and the actual number of bits required for encoding.
  • the foregoing parameter information may be a brightness value; or a brightness value and a chrominance value.
  • the above process is similar to the fourth embodiment, except that when the third target reference block or the fourth target reference block is obtained, the reference image block corresponding to the template is moved based on the original motion vector or the target motion vector, rather than as in the third embodiment.
  • the reference image block corresponding to the template is moved based on the center motion vector.
  • the original motion information is the original motion vector and the original reference frame corresponding to the current image block
  • the target motion information is the target motion vector and the target reference frame of the current image block.
  • the encoding end can obtain the original motion vector and the original reference frame corresponding to the current image block, and obtain a template-based target motion vector and target reference frame according to the original motion vector, the original reference frame, and the obtained template.
  • the current image block may be encoded according to the target motion vector and the target reference frame to obtain an encoded bit stream corresponding to the current image block, and the encoded bit stream is sent to the decoding end.
  • the encoding end may first obtain the original motion vector corresponding to the current image block. Assuming that the current image block is image block A1 and the motion vector list at the encoding end includes motion vector A21, motion vector A31, motion vector A41, and motion vector A51 in this order, a motion vector is selected from the motion vector list as the original motion vector of the image block A1.
  • the default motion vector may be directly determined as the original motion vector.
  • the current image block can have one or more reference frames (video frames with strong time-domain correlation); one of the reference frames can be used as the original reference frame, and the remaining reference frames are all candidate reference frames.
  • a target reference frame needs to be selected from the original reference frame and all candidate reference frames, and the target reference frame is also the final reference frame of the current image block, and the target reference frame is used for subsequent processing.
  • the process of obtaining the target motion vector and target reference frame based on the template by the encoder based on the original motion vector, the original reference frame, and the obtained template can refer to the subsequent embodiments.
  • the motion vector A21 is determined as the original motion vector
  • the reference frame 1 is determined as the original reference frame
  • the reference frame 2 and the reference frame 3 are determined as candidate reference frames
  • the original motion vector A21, the reference frame 1, the reference frame 2, and the reference frame 3 are used to obtain a target motion vector and a target reference frame.
  • the target motion vector can be used as the final motion vector of the current image block.
  • the target reference frame can be any of reference frame 1, reference frame 2, and reference frame 3.
  • the target reference frame can be used as the final reference frame of the current image block.
  • after the encoder obtains the target motion vector and the target reference frame, it can use the target motion vector and the target reference frame to encode the current image block, and there is no limitation on this encoding method. After the encoding is completed, the encoding end can obtain the encoded bit stream corresponding to the current image block, and sends the encoded bit stream to the decoding end.
  • the encoding end may send an encoded bit stream to the decoding end according to the original motion vector and the target motion vector. Specifically, the encoding performance of the original motion vector and the encoding performance of the target motion vector can be obtained.
  • the encoded bit stream corresponding to the current image block sent to the decoding end carries the first indication information.
  • the encoded bit stream corresponding to the current image block sent to the decoding end carries the second indication information.
  • the above method is to notify the first instruction information or the second instruction information explicitly.
  • the first indication information or the second indication information may also be notified implicitly, that is, the first indication information or the second indication information is not carried in the encoded bit stream corresponding to the current image block.
  • the encoding end and the decoding end can also negotiate a decision strategy or define a decision strategy in a standard, and store the decision strategy on the encoding end and the decoding end respectively, such as The decision strategy may agree on the first strategy information; or, agree on the second strategy information; or, agree on the third strategy information.
  • or, the third strategy information is agreed, that is, the same strategy information as that of the neighboring image block of the current image block is used. Then, based on the decision strategy, it may be determined in which case the first indication information or the second indication information does not need to be carried in the encoded bit stream.
  • for details, refer to the third embodiment.
  • the encoding end sends the encoded bit stream to the decoding end according to the original motion vector and the target motion vector, and may further include: obtaining an index value of the original motion vector in the motion vector list, and carrying the index value in the encoded bit stream corresponding to the current image block sent to the decoding end. For example, if the original motion vector is motion vector A21 and motion vector A21 is the first motion vector in the motion vector list, the index value may be 1.
  • the target motion vector and the target reference frame can be obtained according to the original motion vector and the original reference frame, and the final motion vector of the current image block is determined according to the target motion vector, and the final reference frame of the current image block is determined according to the target reference frame.
  • the target motion vector and the target reference frame can be obtained according to the original motion vector and the original reference frame, and the final motion vector of the current image block is determined according to the target motion vector.
  • the template of the current image block can be obtained.
  • the encoder obtains the target motion vector and target reference frame based on the template according to the original motion vector, the original reference frame, and the template.
  • the implementation process can be shown in FIG. 4C.
  • Step 421 The encoding end obtains a candidate motion vector corresponding to the original reference frame according to the original motion vector based on the template of the current image block.
  • the candidate motion vector may be different from the original motion vector.
  • the encoding end obtains the candidate motion vector corresponding to the original reference frame based on the original motion vector and the template of the current image block, which may include, but is not limited to: the encoding end determines the original motion vector as the center motion vector, and determines each edge motion vector corresponding to the center motion vector, where the edge motion vector is different from the center motion vector; the encoding end obtains the encoding performance of the center motion vector according to the template of the current image block, and obtains the encoding performance of each edge motion vector according to the template of the current image block; then, the encoding end may determine the candidate motion vector corresponding to the original reference frame from the center motion vector and each edge motion vector according to the encoding performance of the center motion vector and the encoding performance of each edge motion vector.
  • Step 422 The encoder obtains an initial motion vector corresponding to each candidate reference frame according to the original motion vector.
  • the encoding end obtains the initial motion vector corresponding to each candidate reference frame according to the original motion vector, which may include: for each candidate reference frame, obtaining the initial motion vector of the candidate reference frame according to the distance between the frame where the current image block is located and the original reference frame (such as the number of frames between the current frame and the original reference frame), the distance between the frame where the current image block is located and the candidate reference frame, and the original motion vector.
  • the original motion vector is motion vector 1
  • the original reference frame is reference frame 1
  • the candidate reference frames are reference frame 2 and reference frame 3
  • the distance between the frame where the current image block is located (hereinafter referred to as the current frame) and the reference frame 1 is d1
  • the distance between the current frame and the reference frame 2 is d2
  • the distance between the current frame and the reference frame 3 is d3
  • the initial motion vector corresponding to the reference frame 2 is motion vector 1 * (d2 / d1)
  • the initial motion vector corresponding to the reference frame 3 is motion vector 1 * (d3 / d1).
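  • A tiny sketch of this distance-based scaling (the motion vector and distance values below are invented for illustration):

```python
def scale_initial_mv(original_mv, d_orig, d_cand):
    """Initial MV of a candidate reference frame = original MV * (d_cand / d_orig)."""
    x, y = original_mv
    return (x * d_cand / d_orig, y * d_cand / d_orig)

mv1, d1, d2, d3 = (6, -2), 2, 4, 6      # assumed motion vector 1 and frame distances d1, d2, d3
print(scale_initial_mv(mv1, d1, d2))    # motion vector 1 * (d2 / d1) = (12.0, -4.0)
print(scale_initial_mv(mv1, d1, d3))    # motion vector 1 * (d3 / d1) = (18.0, -6.0)
```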
  • Step 423 The encoder obtains the candidate motion vector corresponding to the candidate reference frame according to the initial motion vector of the candidate reference frame.
  • the encoding end obtains the candidate motion vector corresponding to the candidate reference frame according to the initial motion vector of the candidate reference frame, which may include, but is not limited to: for each candidate reference frame, the encoding end may determine the initial motion vector of the candidate reference frame as the center motion vector, and determine each edge motion vector corresponding to the center motion vector, where the edge motion vector is different from the center motion vector; the encoding end can obtain the encoding performance of the center motion vector according to the template of the current image block, and obtain the encoding performance of each edge motion vector according to the template of the current image block; then, the encoding end can determine the candidate motion vector corresponding to the candidate reference frame from the center motion vector and each edge motion vector according to the encoding performance of the center motion vector and the encoding performance of each edge motion vector.
  • for the process of obtaining the candidate motion vector of the candidate reference frame according to the initial motion vector of the candidate reference frame by the encoding end, refer to the fourth or fifth embodiment; only the target motion vector in the fourth or fifth embodiment needs to be replaced with the candidate motion vector of the candidate reference frame, and the original motion vector in the fourth or fifth embodiment needs to be replaced with the initial motion vector of the candidate reference frame.
  • Step 424 The encoder selects a candidate motion vector with the best coding performance as the target motion vector from the candidate motion vectors corresponding to the original reference frame and the candidate motion vectors corresponding to each candidate reference frame.
  • after the encoder obtains the candidate motion vector corresponding to the original reference frame and the candidate motion vectors corresponding to each candidate reference frame, it can obtain the encoding performance of each candidate motion vector. For specific acquisition methods, refer to the foregoing fourth embodiment; just replace the center motion vector with the candidate motion vector. After the encoding end obtains the coding performance of each candidate motion vector, it can select the candidate motion vector with the best coding performance, and the selection process is not repeated here.
  • Step 425 The encoding end determines a reference frame corresponding to the target motion vector as a target reference frame.
  • the encoding end may determine the original reference frame as the target reference frame, and when the target motion vector corresponds to the candidate reference frame, the encoding end may determine the candidate reference frame as the target reference frame.
  • a target reference frame is selected from the original reference frame and all candidate reference frames, and the target reference frame is the final reference frame of the current image block.
  • that is, the candidate motion vector with the best coding performance is selected as the target motion vector, the target motion vector is the final motion vector of the current image block, and the reference frame corresponding to the target motion vector is selected as the target reference frame.
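  • A minimal sketch of steps 424 and 425, picking the best candidate across the original and candidate reference frames (the data structures and cost function are assumptions for illustration):

```python
def select_target_mv_and_frame(candidates_by_frame, cost_fn):
    """candidates_by_frame: maps each reference frame (original and candidates) to its
    candidate motion vector; cost_fn(frame, mv) returns the coding performance J."""
    best_frame, best_mv = min(candidates_by_frame.items(),
                              key=lambda kv: cost_fn(kv[0], kv[1]))
    return best_mv, best_frame   # target motion vector and target reference frame
```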
  • FIG. 5 is a schematic flowchart of a method for determining a motion vector
  • the method may include the following steps.
  • Step 501 The decoder obtains motion information of a candidate image block of the current image block.
  • the candidate image block of the current image block may include, but is not limited to, a spatial-domain candidate image block of the current image block; or a time-domain candidate image block of the current image block; there is no limitation on this candidate image block.
  • the motion information of the candidate image block may include, but is not limited to, the original motion information of the candidate image block, such as the original motion vector, or the original motion vector and the original reference frame.
  • the final motion information of the candidate image block such as the final motion vector, or the final motion vector and the final reference frame.
  • the final motion information of the current image block is only used for decoding of the current image block (decoding processes such as prediction value generation and reconstruction of the current image block), and is not used for prediction of adjacent image blocks; that is, the motion information obtained from the candidate image block is the original motion information of the candidate image block, not the final motion information of the candidate image block.
  • the final motion information is not saved, but the original motion information is saved, that is, the motion information of the current image block is restored to the original motion information.
  • the decoding end may store the motion information of the candidate image block, such as storing the original motion information of the candidate image block as the motion information of the candidate image block or storing the final motion information of the candidate image block as the motion information of the candidate image block.
  • the original motion information of the candidate image block can be directly queried locally from the decoding end.
  • the decoder can obtain the original motion information of the candidate image block (such as the original motion vector and the original reference frame, etc.). For example, a motion vector is selected from the motion vector list of the candidate image block, and the selected motion vector is selected. Is the original motion vector. For another example, the motion information of the neighboring image blocks of the candidate image block may be determined as the original motion information of the candidate image block.
  • the above manner is only an example of obtaining the original motion information of the candidate image block, which is not limited.
  • Step 502 The decoder obtains a template of the current image block according to the motion information of the candidate image block.
  • the method for the decoder to obtain the template of the current image block according to the motion information of the candidate image block is the same as that of the encoder.
  • Step 503 The decoding end obtains target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template.
• In one example, the original motion information includes an original motion vector, and the target motion information includes a target motion vector. In another example, the original motion information includes an original motion vector and an original reference frame, and the target motion information includes a target motion vector and a target reference frame.
• In one example, the decoding end obtaining the target motion information based on the template according to the original motion information corresponding to the current image block and the acquired template includes: determining the original motion vector as the center motion vector; determining each edge motion vector corresponding to the center motion vector, where each edge motion vector is different from the center motion vector; obtaining the encoding performance of the center motion vector and the encoding performance of each edge motion vector according to the template; and determining the target motion vector from the center motion vector and each edge motion vector according to the encoding performance of the center motion vector and the encoding performance of each edge motion vector.
• In another example, the decoding end obtaining the target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template includes: obtaining a candidate motion vector corresponding to the original reference frame according to the original motion vector based on the template of the current image block; obtaining an initial motion vector corresponding to each candidate reference frame according to the original motion vector; obtaining a candidate motion vector corresponding to each candidate reference frame according to each initial motion vector; selecting the candidate motion vector with the best coding performance from the candidate motion vector corresponding to the original reference frame and the candidate motion vectors corresponding to the candidate reference frames as the target motion vector; and determining the reference frame corresponding to the target motion vector as the target reference frame.
  • the decoding end may also obtain the encoded bit stream corresponding to the current image block.
• The encoded bit stream corresponding to the current image block may be sent by the encoding end to the decoding end; it is obtained by encoding the current image block, and there is no limitation on this.
• In the following, the case where the encoding end sends the encoded bit stream is taken as an example for description.
  • the decoding end may receive an encoded bit stream corresponding to the current image block from the encoding end.
• When the encoded bit stream carries first indication information, the first indication information is used to indicate that the final motion information of the current image block is determined based on the template; in this case, the motion information of a candidate image block of the current image block is acquired according to the first indication information, and the template of the current image block is acquired according to the motion information of the candidate image block, that is, steps 501-503 described above are performed.
• In another example, the decoding end may receive an encoded bit stream corresponding to the current image block from the encoding end. When the encoded bit stream carries second indication information, the second indication information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block; in this case, the original motion information corresponding to the current image block is obtained according to the second indication information, and the final motion information of the current image block is determined according to the original motion information.
• In another example, the encoding end may also use implicit notification, that is, the encoded bit stream corresponding to the current image block carries neither the first indication information nor the second indication information. Based on this, the decoding end may obtain the motion information of the candidate image block of the current image block according to locally preset first policy information, and obtain the template of the current image block according to the motion information of the candidate image block, where the first policy information is used to indicate that the final motion information of the current image block is determined based on the template.
• Alternatively, the decoding end may obtain the original motion information corresponding to the current image block according to locally preset second policy information, and determine the final motion information of the current image block according to the original motion information, where the second policy information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block.
• Alternatively, the decoding end may determine the policy information used by the current image block according to locally preset third policy information, where the third policy information indicates that the final motion information of the current image block is determined using the same policy information as an adjacent image block of the current image block; then, the final motion information of the current image block is determined according to the policy information of the adjacent image block.
• The encoding end and the decoding end may also negotiate a motion vector decision strategy, or define a decision strategy in a standard and store the decision strategy on the encoding side and the decoding side. For example, the motion vector decision strategy may agree on the first policy information, the second policy information, or the third policy information.
• When the motion vector decision strategy agrees on the first policy information, the first policy information is preset locally, the motion information of the candidate image block of the current image block is obtained based on the first policy information, and the template of the current image block is obtained according to the motion information of the candidate image block. When the motion vector decision strategy agrees on the second policy information, the second policy information is preset locally, the original motion information corresponding to the current image block is obtained based on the second policy information, and the final motion information of the current image block is determined based on the original motion information.
• When the motion vector decision strategy agrees on the third policy information: if the policy information of the adjacent image block is the first policy information, the motion information of the candidate image block of the current image block is obtained, and the template of the current image block is obtained according to the motion information of the candidate image block; if the policy information of the adjacent image block is the second policy information, the original motion information corresponding to the current image block is obtained, and the final motion information of the current image block is determined according to the original motion information.
  • the decoding end may also receive an encoded bit stream corresponding to the current image block from the encoding end.
• where the encoded bit stream carries the index value of the original motion vector in the motion vector list. The decoding end may select the motion vector corresponding to the index value from the motion vector list and determine the selected motion vector as the original motion vector corresponding to the current image block. For example, if the index value is 1, the first motion vector in the motion vector list is obtained, and this motion vector is the original motion vector corresponding to the current image block.
  • the motion vector list is used to record the motion vectors of image blocks adjacent to the current image block.
  • the motion vector list maintained by the decoder is the same as the motion vector list maintained by the encoder.
  • the above manner is only an example.
  • the motion vector of the candidate image block of the current image block may be determined as the original motion vector corresponding to the current image block.
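• As a hedged illustration only (the function name and the 1-based indexing are assumptions drawn from the example above, not part of the original disclosure), the index-based recovery of the original motion vector from the motion vector list at the decoding end might look like the following Python sketch:

```python
def select_original_motion_vector(mv_list, index_value):
    """Pick the original motion vector from a motion vector list by index.

    Assumes the index carried in the encoded bit stream is 1-based, as in the
    example where index value 1 selects the first motion vector in the list.
    """
    if not 1 <= index_value <= len(mv_list):
        raise ValueError("index value outside the motion vector list")
    return mv_list[index_value - 1]

# The list records motion vectors of image blocks adjacent to the current
# image block; the decoder and encoder maintain the same list.
mv_list = [(3, -1), (0, 2), (-4, 5)]
original_mv = select_original_motion_vector(mv_list, 1)   # -> (3, -1)
```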
• For the processing procedure in which the decoding end obtains the target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template, refer to the subsequent embodiments for details.
  • Step 504 The decoding end determines the final motion information of the current image block according to the target motion information.
  • the target motion information may be determined as the final motion information of the current image block.
  • the target motion information includes the target motion vector
  • the final motion information includes the final motion vector.
• In another example, the coding performance of the original motion vector of the current image block and that of the target motion vector may be compared, and the motion vector with the better performance may be used as the final motion vector.
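• A minimal sketch of this comparison, assuming coding performance is expressed as a cost where a smaller value is better (the disclosure does not fix a specific metric, so the cost function here is a stand-in):

```python
def choose_final_motion_vector(original_mv, target_mv, cost_of):
    """Return the motion vector with the better coding performance.

    cost_of is any callable mapping a motion vector to a cost value;
    a smaller cost is treated here as better coding performance.
    """
    return original_mv if cost_of(original_mv) <= cost_of(target_mv) else target_mv
```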
  • Step 505 The decoding end decodes the current image block according to the final motion information, and stores the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block.
• When a subsequent image block is decoded, the motion information of the candidate image block that is used is the motion information of the current image block stored in this step.
• In one example, storing the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block may include: when the original motion information of the current image block is obtained according to the motion information of a spatially adjacent image block (or a spatially second-neighboring image block), storing the original motion information corresponding to the current image block as the motion information of the current image block; when the original motion information of the current image block is not obtained according to the motion information of a spatially adjacent image block (or a spatially second-neighboring image block), storing the final motion information corresponding to the current image block as the motion information of the current image block.
• Alternatively, the final motion information corresponding to the current image block may also be directly stored as the motion information of the current image block.
  • the original motion information includes at least the original motion vector.
• When the original motion vector of the current image block is obtained according to the motion vector of a spatially adjacent image block, the original motion vector corresponding to the current image block is stored as the motion vector of the current image block; when the original motion vector of the current image block is not obtained based on the motion vectors of spatially adjacent image blocks, the final motion vector corresponding to the current image block is stored as the motion vector of the current image block.
• Alternatively, the final motion vector corresponding to the current image block may also be directly stored as the motion vector of the current image block.
• In one example, the original motion vector corresponding to the current image block is stored as the motion vector of the current image block; in another example, after decoding, the final motion vector of the current image block is stored as the motion vector of the current image block.
  • the motion information of the candidate image block in step 501 may be the original motion vector of the candidate image block.
  • the decoding end may store the original motion vector corresponding to the current image block as the motion vector of the current image block.
  • the motion information of the candidate image block in step 501 may be the final motion vector of the candidate image block.
  • the decoding end may store the original motion vector corresponding to the current image block as the motion vector of the current image block.
  • the motion information of the candidate image block in step 501 may be the original motion vector of the candidate image block.
  • the decoding end may store the final motion vector corresponding to the current image block as the motion vector of the current image block.
  • the motion information of the candidate image block in step 501 may be the final motion vector of the candidate image block.
  • the decoding end may store the final motion vector corresponding to the current image block as the motion vector of the current image block.
• The target motion vector can be obtained according to the original motion vector, and the final motion vector of the current image block is determined according to the original motion vector and the target motion vector, instead of directly using the original motion vector as the final motion vector of the current image block. Therefore, the accuracy of the motion vector is improved, and the decoding performance is further improved.
  • a template of the current image block may be obtained according to the motion information of the candidate image block, and the target motion vector of the current image block may be obtained based on the template of the current image block based on the original motion vector.
  • the above method can quickly obtain the template of the current image block, and then obtain the target motion vector of the current image block according to the template, which can improve decoding efficiency and reduce decoding delay. For example, before the decoding reconstruction phase, a template of the current image block can be obtained, and the target motion vector of the current image block can be obtained according to the template.
  • the decoder can simultaneously decode multiple image blocks in parallel, thereby further increasing the decoding speed, improving the decoding efficiency, reducing the decoding delay, and improving the decoding performance.
• In one example, the final motion information of the current image block is only used for the decoding of the current image block (the decoding processes of prediction value generation and reconstruction of the current image block), and is not used for the prediction of adjacent image blocks; that is, the candidate motion information obtained from adjacent image blocks is the original motion information, not the final motion information.
  • the final motion information is not saved, but the original motion information is saved, that is, the motion information of the current image block is restored to the original motion information.
  • a template of the current image block may be obtained according to motion information (such as a motion vector and a reference frame index) of a candidate image block of the current image block.
• For example, the decoding end may determine the reference frame image of the candidate image block according to the reference frame index, obtain a reference image block that corresponds to the candidate image block from the reference frame image according to the motion vector, and obtain the template of the current image block according to the reference image block.
  • the candidate image block may include M first candidate image blocks and N second candidate image blocks, where M is a natural number greater than or equal to 1, N is a natural number greater than or equal to 0, or M is a natural number greater than or equal to 0, N is a natural number greater than or equal to 1.
  • the first candidate image block is a candidate image block on the upper side of the current image block
  • the second candidate image block is a candidate image block on the left side of the current image block.
  • the first candidate image block includes an adjacent image block and / or a second-neighboring image block on the upper side of the current image block.
  • the prediction mode of the neighboring image block is an inter mode or an intra mode; the prediction mode of the next neighboring image block is an inter mode.
  • the second candidate image block includes a neighboring image block and / or a next-neighbor image block on the left side of the current image block.
  • the prediction mode of the neighboring image block is an inter mode or an intra mode; the prediction mode of the next neighboring image block is an inter mode.
• Obtaining the template of the current image block according to the motion information of the candidate image block may include, but is not limited to: determining the first template according to the motion vector prediction mode and motion information of the M first candidate image blocks; determining the second template according to the motion vector prediction mode and motion information of the N second candidate image blocks; and then determining the template of the current image block according to the first template and the second template.
• Determining the template of the current image block based on the first template and the second template may include, but is not limited to: determining the first template as the template of the current image block; or determining the second template as the template of the current image block; or determining the template of the current image block by stitching the first template and the second template.
• When M is greater than 1, the first template may include M sub-templates or P sub-templates and is stitched from the M sub-templates or the P sub-templates, where P is the number of first candidate image blocks whose prediction mode is the inter mode, and P is less than or equal to M.
• When M is equal to 1, the first template may include a first sub-template, and the first sub-template is determined according to the motion vector prediction mode and motion information of the candidate image block on the upper side of the current image block.
• When N is greater than 1, the second template may include N sub-templates or R sub-templates and is formed by splicing the N sub-templates or the R sub-templates, where R is the number of second candidate image blocks whose prediction mode is the inter mode, and R is less than or equal to N.
• When N is equal to 1, the second template may include a second sub-template, and the second sub-template is determined according to the motion vector prediction mode and motion information of the candidate image block on the left side of the current image block.
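• A simplified Python sketch of building the first template from sub-templates follows. The sub-template is taken from the reference frame pointed to by the candidate block's motion vector, intra-coded candidates are skipped (the P-sub-template case), and the horizontal stitching direction and the use of numpy arrays are implementation assumptions, not stated in the disclosure:

```python
import numpy as np

def sub_template(ref_frame, cand_x, cand_y, mv, width, height):
    """Cut, from the reference frame, the image block pointed to by the
    candidate block's motion vector and keep a width x height slice of it."""
    x, y = cand_x + mv[0], cand_y + mv[1]
    return ref_frame[y:y + height, x:x + width]

def first_template(ref_frames, upper_candidates, sub_w, sub_h):
    """Stitch the sub-templates of the inter-coded upper-side candidate
    blocks side by side to form the first template."""
    subs = [sub_template(ref_frames[c["ref_idx"]], c["x"], c["y"],
                         c["mv"], sub_w, sub_h)
            for c in upper_candidates if c["mode"] == "inter"]
    return np.hstack(subs) if subs else None
```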
• Acquiring the template of the current image block according to the motion information of the candidate image block may further include, but is not limited to: when the current image block corresponds to multiple pieces of motion information, acquiring a template corresponding to each piece of motion information according to that motion information; obtaining a weight corresponding to each piece of motion information; and obtaining the template of the current image block according to the weight corresponding to each piece of motion information and the template corresponding to that motion information.
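• When the current image block corresponds to multiple pieces of motion information, the weighted combination might look like the following sketch; normalising the weights to sum to one is an assumption, since the disclosure only states that a weight is associated with each piece of motion information:

```python
import numpy as np

def weighted_template(templates, weights):
    """Combine the templates obtained for each piece of motion information
    into one template, using the weight associated with each piece of
    motion information (weights are normalised here, an assumption)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * np.asarray(t, dtype=float) for w, t in zip(weights, templates))
```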
  • the decoding end obtains a template-based target motion vector according to the original motion vector corresponding to the current image block and the obtained template.
• For the implementation process, referring to FIG. 6A, the following steps may be included.
  • Step 601 The decoding end determines the original motion vector as a central motion vector.
  • Step 602 The decoder determines each edge motion vector corresponding to the central motion vector.
  • the edge motion vector may be different from the center motion vector.
• Determining each edge motion vector corresponding to the center motion vector may include: shifting the center motion vector (x, y) by an offset St in different directions to obtain the edge motion vector (x-St, y), the edge motion vector (x+St, y), the edge motion vector (x, y+St), and the edge motion vector (x, y-St).
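• In Python, generating the edge motion vectors around the center motion vector with offset St as described above might look like this (the default value of St is an assumption for illustration):

```python
def edge_motion_vectors(center_mv, st=1):
    """Shift the center motion vector (x, y) by an offset St in four
    directions: (x-St, y), (x+St, y), (x, y+St), (x, y-St)."""
    x, y = center_mv
    return [(x - st, y), (x + st, y), (x, y + st), (x, y - st)]
```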
  • Step 603 The decoder obtains the encoding performance of the central motion vector according to the template of the current image block, and obtains the encoding performance of each edge motion vector according to the template of the current image block.
• The decoding end obtaining the encoding performance of the central motion vector according to the template of the current image block may include, but is not limited to: the decoding end may determine the encoding performance of the central motion vector according to the parameter information of the template of the current image block and the parameter information of the first target reference block, where the first target reference block is the image block obtained after the reference image block corresponding to the template is shifted based on the central motion vector.
• The decoding end obtaining the encoding performance of each edge motion vector according to the template of the current image block may include, but is not limited to: for each edge motion vector, determining the encoding performance of the edge motion vector according to the parameter information of the template of the current image block and the parameter information of the second target reference block corresponding to the edge motion vector.
  • the second target reference block is an image block obtained after the reference image block corresponding to the template is offset based on the edge motion vector.
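• A hedged sketch of this evaluation: the "parameter information" is taken here to be the pixel values of the template and of the shifted reference block, and the cost is measured as a sum of absolute differences; both choices are assumptions, since the disclosure does not fix a specific metric. A smaller cost is treated as better coding performance.

```python
import numpy as np

def coding_cost(template, ref_frame, template_x, template_y, mv):
    """Evaluate a motion vector: shift the reference image block
    corresponding to the template by the motion vector and compare it
    with the template (smaller cost = better coding performance)."""
    h, w = template.shape
    x, y = template_x + mv[0], template_y + mv[1]
    shifted_block = ref_frame[y:y + h, x:x + w]
    return np.abs(template.astype(int) - shifted_block.astype(int)).sum()
```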
  • Step 604 The decoding end determines a target motion vector from the center motion vector and each edge motion vector according to the coding performance of the center motion vector and the coding performance of each edge motion vector.
• For example, the decoder can select the motion vector with the best coding performance from the center motion vector and each edge motion vector as the target motion vector.
  • the decoding end obtains a template-based target motion vector according to the original motion vector corresponding to the current image block and the obtained template.
• For the implementation process, referring to FIG. 6B, the following steps may be included.
  • Step 611 The decoding end determines the original motion vector as a central motion vector.
  • Step 612 The decoder determines each edge motion vector corresponding to the central motion vector.
  • the edge motion vector may be different from the center motion vector.
  • Step 613 The decoder obtains the coding performance of the central motion vector according to the template of the current image block, and obtains the coding performance of each edge motion vector according to the template of the current image block.
  • Step 614 The decoder determines whether the iteration end condition of the target motion vector is satisfied. If so, step 616 may be performed; if not, step 615 may be performed.
  • Step 615 The decoder selects a motion vector with the best coding performance from the center motion vector and each edge motion vector as a new center motion vector, and returns to step 612.
  • Step 616 The decoding end determines the target motion vector from the center motion vector and each edge motion vector according to the coding performance of the center motion vector and the coding performance of each edge motion vector.
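• Steps 611-616 might be sketched as the following iterative search, reusing the hypothetical edge_motion_vectors and cost helpers from the earlier sketches; the iteration end condition used here (a fixed maximum number of rounds, or no edge vector beating the center) is only an assumed example, since the disclosure does not specify the condition:

```python
def iterative_target_mv(original_mv, cost_of, st=1, max_rounds=8):
    """Iteratively refine the center motion vector until the end condition
    is met, then return the best of the center and its edge motion vectors."""
    center = original_mv                              # step 611
    for _ in range(max_rounds):                       # assumed end condition
        candidates = [center] + edge_motion_vectors(center, st)   # step 612
        best = min(candidates, key=cost_of)           # step 613
        if best == center:                            # no edge vector is better
            return center                             # step 616
        center = best                                 # step 615: new center
    candidates = [center] + edge_motion_vectors(center, st)
    return min(candidates, key=cost_of)               # step 616
```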
• In one example, the decoding end may obtain the original motion vector and the original reference frame corresponding to the current image block and, based on the template of the current image block, obtain the template-based target motion vector (which may be different from the original motion vector) and the target reference frame according to the original motion vector and the original reference frame corresponding to the current image block.
• The process in which the decoding end, based on the template of the current image block, obtains the template-based target motion vector and target reference frame according to the original motion vector and the original reference frame corresponding to the current image block may include, but is not limited to:
• obtaining a candidate motion vector corresponding to the original reference frame according to the original motion vector; obtaining an initial motion vector corresponding to each candidate reference frame according to the original motion vector; obtaining a candidate motion vector corresponding to each candidate reference frame according to the initial motion vector of that candidate reference frame; and,
  • the candidate motion vector with the best coding performance may be selected from the candidate motion vectors corresponding to the original reference frame and the candidate motion vectors corresponding to each candidate reference frame as the target motion vector; and the reference frame corresponding to the target motion vector is determined as the target reference frame.
• Obtaining the initial motion vector corresponding to each candidate reference frame according to the original motion vector may include, but is not limited to: for each candidate reference frame, obtaining the initial motion vector of the candidate reference frame according to the distance between the frame where the current image block is located and the original reference frame, the distance between the frame where the current image block is located and the candidate reference frame, and the original motion vector.
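• A common way to realise this (an assumption for illustration; the disclosure only states that the two frame distances and the original motion vector are used) is to scale the original motion vector by the ratio of the two distances, with frame positions expressed here as picture order counts:

```python
def initial_mv_for_candidate_frame(original_mv, cur_poc, orig_ref_poc, cand_ref_poc):
    """Scale the original motion vector by the ratio of the distance
    (current frame, candidate reference frame) to the distance
    (current frame, original reference frame)."""
    d_orig = cur_poc - orig_ref_poc      # assumed non-zero
    d_cand = cur_poc - cand_ref_poc
    scale = d_cand / d_orig
    return (round(original_mv[0] * scale), round(original_mv[1] * scale))
```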
• In one example, obtaining the candidate motion vector corresponding to the original reference frame according to the original motion vector may include, but is not limited to: determining the original motion vector as the center motion vector, and determining each edge motion vector corresponding to the center motion vector, where each edge motion vector is different from the center motion vector; then obtaining the encoding performance of the center motion vector according to the template of the current image block, and obtaining the encoding performance of each edge motion vector according to the template of the current image block; and then, according to the encoding performance of the center motion vector and the encoding performance of each edge motion vector, determining the candidate motion vector corresponding to the original reference frame from the center motion vector and each edge motion vector.
• In one example, obtaining the candidate motion vector corresponding to the candidate reference frame according to the initial motion vector of the candidate reference frame may include, but is not limited to: for each candidate reference frame, determining the initial motion vector of the candidate reference frame as the central motion vector, and determining each edge motion vector corresponding to the central motion vector, where each edge motion vector is different from the central motion vector; obtaining the coding performance of the central motion vector and the coding performance of each edge motion vector according to the template of the current image block; and, according to the coding performance of the central motion vector and the coding performance of each edge motion vector, determining the candidate motion vector corresponding to the candidate reference frame from the central motion vector and each edge motion vector.
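• Putting the pieces together, the multi-reference-frame selection described above might be sketched as follows; the reference-frame dictionaries, the "poc" keys, and the local_search and cost_of callables (the center/edge search and the template-based cost from the earlier sketches) are all hypothetical helpers, not part of the original disclosure:

```python
def target_mv_and_ref(original_mv, orig_ref, candidate_refs, cur_poc,
                      local_search, cost_of):
    """Search around the original motion vector on the original reference
    frame, scale the original motion vector to each candidate reference
    frame and search there as well, then keep the (vector, frame) pair
    with the best (smallest) cost."""
    best_mv = local_search(original_mv, orig_ref)        # candidate MV on original ref
    best_ref, best_cost = orig_ref, cost_of(best_mv, orig_ref)
    for cand_ref in candidate_refs:
        init_mv = initial_mv_for_candidate_frame(original_mv, cur_poc,
                                                 orig_ref["poc"], cand_ref["poc"])
        mv = local_search(init_mv, cand_ref)             # candidate MV on this ref
        cost = cost_of(mv, cand_ref)
        if cost < best_cost:
            best_mv, best_ref, best_cost = mv, cand_ref, cost
    return best_mv, best_ref                             # target MV and target ref
```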
  • an embodiment of the present application further proposes a decoding device, which is applied to the decoding end.
• As shown in FIG. 7, which is a structural diagram of the decoding device, the device includes:
• an obtaining module 71, configured to: obtain motion information of a candidate image block of the current image block; acquire a template of the current image block according to the motion information of the candidate image block; obtain target motion information based on the template according to the original motion information corresponding to the current image block and the acquired template; and determine final motion information of the current image block according to the target motion information;
  • a determining module 72 is configured to decode the current image block according to the final motion information; and store original motion information or final motion information corresponding to the current image block as motion information of the current image block.
• When the determining module 72 stores the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block, the determining module 72 is specifically configured to: when the original motion information of the current image block is obtained according to the motion information of a spatially adjacent image block, store the original motion information corresponding to the current image block as the motion information of the current image block.
• When the determining module 72 stores the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block, the determining module 72 is specifically configured to: when the original motion information of the current image block is not obtained according to the motion information of a spatially adjacent image block, store the final motion information corresponding to the current image block as the motion information of the current image block.
• The obtaining module 71 is further configured to: receive an encoded bit stream corresponding to the current image block from the encoding end, where the encoded bit stream carries first indication information, and the first indication information is used to indicate that the final motion information of the current image block is determined based on the template; acquire the motion information of the candidate image block of the current image block according to the first indication information; and acquire the template of the current image block according to the motion information of the candidate image block.
• The obtaining module 71 is further configured to: receive an encoded bit stream corresponding to the current image block from the encoding end, where the encoded bit stream carries second indication information, and the second indication information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block; obtain the original motion information corresponding to the current image block according to the second indication information; and determine the final motion information of the current image block according to the original motion information.
  • the obtaining module 71 is further configured to obtain the motion information of the candidate image block of the current image block according to the locally preset first policy information, and obtain the template of the current image block according to the motion information of the candidate image block;
  • the first policy information is used to indicate that final motion information of the current image block is determined based on a template.
  • the obtaining module 71 is further configured to obtain original motion information corresponding to the current image block according to locally preset second policy information; and determine final motion information of the current image block according to the original motion information.
  • the second policy information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block.
• The obtaining module 71 is further configured to determine, according to locally preset third policy information, the policy information adopted by the adjacent image block of the current image block, and determine the final motion information of the current image block according to that policy information, where the third policy information indicates that the same policy information as the adjacent image block of the current image block is used to determine the final motion information of the current image block.
  • the original motion information includes an original motion vector
• The acquisition module 71 is further configured to: receive an encoded bit stream corresponding to the current image block from the encoding end, where the encoded bit stream carries the index value of the original motion vector in a motion vector list; select the motion vector corresponding to the index value from the motion vector list; and determine the selected motion vector as the original motion vector corresponding to the current image block; or determine the motion vector of a candidate image block of the current image block as the original motion vector corresponding to the current image block.
• In one example, the candidate image blocks include M first candidate image blocks and N second candidate image blocks, where M is a natural number greater than or equal to 1 and N is a natural number greater than or equal to 0, or M is a natural number greater than or equal to 0 and N is a natural number greater than or equal to 1; the first candidate image block is a candidate image block on the upper side of the current image block, and the second candidate image block is a candidate image block on the left side of the current image block.
• When the obtaining module 71 acquires the template of the current image block according to the motion information of the candidate image block, it is specifically configured to: determine a first template according to the motion vector prediction mode and motion information of the M first candidate image blocks; determine a second template according to the motion vector prediction mode and motion information of the N second candidate image blocks; and determine the first template as the template of the current image block, or determine the second template as the template of the current image block, or determine the template of the current image block by stitching the first template and the second template.
  • the first candidate image block includes an adjacent image block and / or a second-neighboring image block on the upper side of the current image block; the prediction mode of the adjacent image block is an inter mode or an intra mode.
  • the prediction mode of the second neighboring image block is an inter mode; the second candidate image block includes a neighboring image block and / or a second neighboring image block to the left of the current image block;
• the prediction mode of the neighboring image block is an inter mode or an intra mode; the prediction mode of the second-neighboring image block is an inter mode.
• When M is greater than 1, the first template includes M sub-templates or P sub-templates and is formed by splicing the M sub-templates or the P sub-templates, where P is the number of first candidate image blocks whose prediction mode is the inter mode, and P is less than or equal to M; when M is equal to 1, the first template includes a first sub-template, and the first sub-template is determined according to the motion vector prediction mode and motion information of the candidate image block on the upper side of the current image block.
  • the motion information includes a motion vector and a reference frame index of the first candidate image block
• When the acquisition module 71 determines the first template based on the motion vector prediction mode and motion information of the M first candidate image blocks, it is specifically configured to: for the i-th candidate image block of the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the inter mode, determine a reference frame image corresponding to the i-th candidate image block according to the reference frame index; determine a reference image block corresponding to the i-th candidate image block from the reference frame image according to the motion vector of the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block; and obtain, according to the reference image block, an image block whose size is a first horizontal length and a first vertical length as the i-th sub-template included in the first template.
• When the acquiring module 71 determines the first template according to the motion vector prediction mode and motion information of the M first candidate image blocks, it is further specifically configured to: for the i-th candidate image block of the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, fill the i-th candidate image block with a default value to obtain an image block whose size is the first horizontal length and the first vertical length as the i-th sub-template included in the first template; or determine a reference frame image corresponding to the i-th candidate image block according to the reference frame index corresponding to the i-th candidate image block; determine a reference image block corresponding to the i-th candidate image block from the reference frame image according to the motion vector corresponding to the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block; and obtain, according to the determined reference image block, an image block whose size is the first horizontal length and the first vertical length as the i-th sub-template included in the first template.
• The first horizontal length satisfies a first proportional relationship with the horizontal length of the first candidate image block, or satisfies a second proportional relationship with the horizontal length of the current image block, or is equal to a first preset length; the first vertical length satisfies a third proportional relationship with the vertical length of the first candidate image block, or satisfies a fourth proportional relationship with the vertical length of the current image block, or is equal to a second preset length.
• When N is greater than 1, the second template includes N sub-templates or R sub-templates and is formed by splicing the N sub-templates or the R sub-templates, where R is the number of second candidate image blocks whose prediction mode is the inter mode, and R is less than or equal to N; when N is equal to 1, the second template includes a second sub-template, and the second sub-template is determined according to the motion vector prediction mode and motion information of the candidate image block on the left side of the current image block.
  • the motion information includes a motion vector and a reference frame index of the second candidate image block
• When the obtaining module 71 determines the second template according to the motion vector prediction mode and motion information of the N second candidate image blocks, it is specifically configured to: for the i-th candidate image block of the N second candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, fill the i-th candidate image block with a default value to obtain an image block whose size is a second horizontal length and a second vertical length as the i-th sub-template included in the second template; or determine a reference frame image corresponding to the i-th candidate image block according to the reference frame index corresponding to the i-th candidate image block; determine a reference image block corresponding to the i-th candidate image block from the reference frame image according to the motion vector corresponding to the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block; and obtain, according to the determined reference image block, an image block whose size is the second horizontal length and the second vertical length as the i-th sub-template included in the second template.
• The second horizontal length satisfies a fifth proportional relationship with the horizontal length of the second candidate image block, or satisfies a sixth proportional relationship with the horizontal length of the current image block, or is equal to a third preset length; the second vertical length satisfies a seventh proportional relationship with the vertical length of the second candidate image block, or satisfies an eighth proportional relationship with the vertical length of the current image block, or is equal to a fourth preset length.
• When the acquiring module 71 acquires the template of the current image block according to the motion information of the candidate image block, it is specifically configured to: when the current image block corresponds to multiple pieces of motion information, acquire a template corresponding to each piece of motion information; acquire a weight corresponding to each piece of motion information; and obtain the template of the current image block according to the weight corresponding to each piece of motion information and the template corresponding to that motion information.
  • the original motion information includes an original motion vector
  • the target motion information includes a target motion vector
• When the obtaining module 71 obtains the target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template, it is specifically configured to: determine the original motion vector as the central motion vector; determine each edge motion vector corresponding to the central motion vector, where each edge motion vector is different from the central motion vector; obtain the encoding performance of the central motion vector and the encoding performance of each edge motion vector according to the template; and determine the target motion vector from the central motion vector and each edge motion vector according to the encoding performance of the central motion vector and the encoding performance of each edge motion vector.
  • the original motion information includes an original motion vector and an original reference frame
  • the target motion information includes a target motion vector and a target reference frame
• When the acquisition module 71 obtains the target motion information based on the template according to the original motion information corresponding to the current image block and the acquired template, it is specifically configured to: obtain a candidate motion vector corresponding to the original reference frame according to the original motion vector based on the template of the current image block; for each of multiple candidate reference frames, obtain an initial motion vector corresponding to the candidate reference frame according to the original motion vector, and obtain a candidate motion vector corresponding to the candidate reference frame according to the initial motion vector of the candidate reference frame; select the candidate motion vector with the best coding performance from the candidate motion vector corresponding to the original reference frame and the candidate motion vectors corresponding to the multiple candidate reference frames as the target motion vector; and determine the reference frame corresponding to the target motion vector as the target reference frame.
  • an embodiment of the present application further proposes an encoding device, which is applied to the encoding end. See FIG. 8 for a structural diagram of the device.
  • the device includes:
• an acquisition module 81, configured to: acquire motion information of a candidate image block of the current image block; acquire a template of the current image block according to the motion information of the candidate image block; and obtain target motion information based on the template according to the original motion information corresponding to the current image block and the acquired template;
• a processing module 82, configured to: determine the final motion information of the current image block according to the original motion information and the target motion information; encode the current image block according to the final motion information to obtain an encoded bit stream corresponding to the current image block; and store the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block.
• When the processing module 82 stores the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block, the processing module 82 is specifically configured to: when the original motion information of the current image block is obtained according to the motion information of a spatially adjacent image block, store the original motion information corresponding to the current image block as the motion information of the current image block.
• When the processing module 82 stores the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block, the processing module 82 is specifically configured to: when the original motion information of the current image block is not obtained according to the motion information of a spatially adjacent image block, store the final motion information corresponding to the current image block as the motion information of the current image block.
• In one example, the candidate image blocks include M first candidate image blocks and N second candidate image blocks, where M is a natural number greater than or equal to 1 and N is a natural number greater than or equal to 0, or M is a natural number greater than or equal to 0 and N is a natural number greater than or equal to 1; the first candidate image block is a candidate image block on the upper side of the current image block, and the second candidate image block is a candidate image block on the left side of the current image block.
• When the obtaining module 81 acquires the template of the current image block according to the motion information of the candidate image block, it is specifically configured to: determine a first template according to the motion vector prediction mode and motion information of the M first candidate image blocks; determine a second template according to the motion vector prediction mode and motion information of the N second candidate image blocks; and determine the first template as the template of the current image block, or determine the second template as the template of the current image block, or determine the template of the current image block by stitching the first template and the second template.
  • the first candidate image block includes an adjacent image block and / or a second-neighboring image block on the upper side of the current image block; the prediction mode of the adjacent image block is an inter mode or an intra mode.
  • the prediction mode of the second neighboring image block is an inter mode; the second candidate image block includes a neighboring image block and / or a second neighboring image block to the left of the current image block;
• the prediction mode of the neighboring image block is an inter mode or an intra mode; the prediction mode of the second-neighboring image block is an inter mode.
• When M is greater than 1, the first template includes M sub-templates or P sub-templates and is formed by splicing the M sub-templates or the P sub-templates, where P is the number of first candidate image blocks whose prediction mode is the inter mode, and P is less than or equal to M; when M is equal to 1, the first template includes a first sub-template, and the first sub-template is determined according to the motion vector prediction mode and motion information of the candidate image block on the upper side of the current image block.
  • the motion information includes a motion vector and a reference frame index of the first candidate image block
• When the acquisition module 81 determines the first template based on the motion vector prediction mode and motion information of the M first candidate image blocks, it is specifically configured to: for the i-th candidate image block of the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the inter mode, determine a reference frame image corresponding to the i-th candidate image block according to the reference frame index; determine a reference image block corresponding to the i-th candidate image block from the reference frame image according to the motion vector of the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block; and obtain, according to the reference image block, an image block whose size is a first horizontal length and a first vertical length as the i-th sub-template included in the first template.
• When the obtaining module 81 determines the first template according to the motion vector prediction mode and motion information of the M first candidate image blocks, it is further specifically configured to: for the i-th candidate image block of the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, fill the i-th candidate image block with a default value to obtain an image block whose size is the first horizontal length and the first vertical length as the i-th sub-template included in the first template; or determine a reference frame image corresponding to the i-th candidate image block according to the reference frame index corresponding to the i-th candidate image block; determine a reference image block corresponding to the i-th candidate image block from the reference frame image according to the motion vector corresponding to the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block; and obtain, according to the reference image block, an image block whose size is the first horizontal length and the first vertical length as the i-th sub-template included in the first template.
• The first horizontal length satisfies a first proportional relationship with the horizontal length of the first candidate image block, or satisfies a second proportional relationship with the horizontal length of the current image block, or is equal to a first preset length; the first vertical length satisfies a third proportional relationship with the vertical length of the first candidate image block, or satisfies a fourth proportional relationship with the vertical length of the current image block, or is equal to a second preset length.
• When N is greater than 1, the second template includes N sub-templates or R sub-templates and is formed by splicing the N sub-templates or the R sub-templates, where R is the number of second candidate image blocks whose prediction mode is the inter mode, and R is less than or equal to N; when N is equal to 1, the second template includes a second sub-template, and the second sub-template is determined according to the motion vector prediction mode and motion information of the candidate image block on the left side of the current image block.
  • the motion information includes a motion vector and a reference frame index of the second candidate image block
• When the obtaining module 81 determines the second template according to the motion vector prediction mode and motion information of the N second candidate image blocks, it is specifically configured to: for the i-th candidate image block of the N second candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is the intra mode, fill the i-th candidate image block with a default value to obtain an image block whose size is a second horizontal length and a second vertical length as the i-th sub-template included in the second template; or determine a reference frame image corresponding to the i-th candidate image block according to the reference frame index corresponding to the i-th candidate image block; determine a reference image block corresponding to the i-th candidate image block from the reference frame image according to the motion vector corresponding to the i-th candidate image block, where the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block; and obtain, according to the reference image block, an image block whose size is the second horizontal length and the second vertical length as the i-th sub-template included in the second template.
• The second horizontal length satisfies a fifth proportional relationship with the horizontal length of the second candidate image block, or satisfies a sixth proportional relationship with the horizontal length of the current image block, or is equal to a third preset length; the second vertical length satisfies a seventh proportional relationship with the vertical length of the second candidate image block, or satisfies an eighth proportional relationship with the vertical length of the current image block, or is equal to a fourth preset length.
• When the acquiring module 81 acquires the template of the current image block according to the motion information of the candidate image block, it is specifically configured to: when the current image block corresponds to multiple pieces of motion information, acquire a template corresponding to each piece of motion information; acquire a weight corresponding to each piece of motion information; and obtain the template of the current image block according to the weight corresponding to each piece of motion information and the template corresponding to that motion information.
  • the original motion information includes an original motion vector
  • the target motion information includes a target motion vector
• When the obtaining module 81 obtains the target motion information based on the template according to the original motion information corresponding to the current image block and the obtained template, it is specifically configured to: determine the original motion vector as the central motion vector; determine each edge motion vector corresponding to the central motion vector, where each edge motion vector is different from the central motion vector; obtain the encoding performance of the central motion vector and the encoding performance of each edge motion vector according to the template; and determine the target motion vector from the central motion vector and each edge motion vector according to the encoding performance of the central motion vector and the encoding performance of each edge motion vector.
  • the original motion information includes an original motion vector and an original reference frame
  • the target motion information includes a target motion vector and a target reference frame
• When the acquisition module 81 obtains the target motion information based on the template according to the original motion information corresponding to the current image block and the acquired template, it is specifically configured to: obtain a candidate motion vector corresponding to the original reference frame according to the original motion vector based on the template of the current image block; for each of multiple candidate reference frames, obtain an initial motion vector corresponding to the candidate reference frame according to the original motion vector, and obtain a candidate motion vector corresponding to the candidate reference frame according to the initial motion vector of the candidate reference frame; select the candidate motion vector with the best coding performance from the candidate motion vector corresponding to the original reference frame and the candidate motion vectors corresponding to the multiple candidate reference frames as the target motion vector; and determine the reference frame corresponding to the target motion vector as the target reference frame.
• In one example, the original motion information includes an original motion vector, and the target motion information includes a target motion vector. When the processing module 82 determines the final motion information of the current image block according to the original motion information and the target motion information, the processing module 82 is specifically configured to:
• obtain the encoding performance of the original motion vector and the encoding performance of the target motion vector; and when the encoding performance of the original motion vector is better than the encoding performance of the target motion vector, determine that the final motion vector of the current image block is the original motion vector, where the encoded bit stream corresponding to the current image block carries second indication information, and the second indication information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block.
  • the processing module 82 is further configured to obtain an index value of the original motion vector in the motion vector list; the encoded bit stream corresponding to the current image block carries the index value.
  • when acquiring the encoding performance of the original motion vector, the processing module 82 is specifically configured to: determine the encoding performance of the original motion vector according to parameter information of the template of the current image block and parameter information of a first target reference block, where the first target reference block is an image block obtained after the reference image block corresponding to the template is offset based on the original motion vector; when acquiring the encoding performance of the target motion vector, the processing module 82 is specifically configured to: determine the encoding performance of the target motion vector according to parameter information of the template of the current image block and parameter information of a second target reference block, where the second target reference block is an image block obtained after the reference image block corresponding to the template is offset based on the target motion vector.
  • the decoding-side device includes a processor 91 and a machine-readable storage medium 92
  • the machine-readable storage medium 92 stores machine-executable instructions that can be executed by the processor 91
  • the processor 91 is configured to execute the machine-executable instructions to implement the decoding method disclosed in the examples above.
  • an embodiment of the present application further provides a machine-readable storage medium
  • the machine-readable storage medium stores a number of computer instructions
  • when the computer instructions are executed by a processor, the decoding method disclosed in the above examples of the present application can be implemented.
  • the encoding-side device includes a processor 93 and a machine-readable storage medium 94
  • the machine-readable storage medium 94 stores machine-executable instructions that can be executed by the processor 93
  • the processor 93 is configured to execute the machine-executable instructions to implement the encoding method disclosed in the examples above.
  • an embodiment of the present application further provides a machine-readable storage medium
  • the machine-readable storage medium stores a number of computer instructions
  • when the computer instructions are executed by a processor, the encoding method disclosed in the above examples of the present application can be implemented.
  • the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information, such as executable instructions, data, and so on.
  • the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as an optical disc or DVD), or a similar storage medium, or a combination thereof.
  • the system, device, module, or unit described in the foregoing embodiments may be specifically implemented by a computer chip or entity, or a product with a certain function.
  • a typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email sending and receiving device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce a computer-implemented process, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
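
As a rough illustration of the template-based evaluation described by the modules above, the following Python sketch compares a center motion vector and its four edge motion vectors by measuring a SAD cost between the template of the current image block and the reference block shifted by each candidate vector, and keeps the best one. This is a minimal sketch under assumptions introduced here for illustration (integer-pel vectors, a NumPy array layout, and the helper names sad and shifted_block); it is not the implementation defined by this application.

    import numpy as np

    def sad(a, b):
        # Sum of absolute differences between two equally sized pixel arrays.
        return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

    def shifted_block(reference_frame, top, left, height, width, mv):
        # Crop the reference block after offsetting its position by the motion vector.
        # Assumes the offset position stays inside the frame; the sign convention of
        # the vertical component is also an assumption of this sketch.
        dx, dy = mv
        return reference_frame[top + dy: top + dy + height, left + dx: left + dx + width]

    def refine_motion_vector(template, reference_frame, top, left, original_mv, step=4):
        # Evaluate the center motion vector and its four edge motion vectors,
        # then keep the vector whose shifted reference block best matches the template.
        height, width = template.shape
        x, y = original_mv
        candidates = [(x, y), (x - step, y), (x + step, y), (x, y + step), (x, y - step)]
        costs = [sad(template, shifted_block(reference_frame, top, left, height, width, mv))
                 for mv in candidates]
        return candidates[int(np.argmin(costs))]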

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application provides a decoding method, an encoding method, and devices. The method includes: acquiring motion information of a candidate image block of a current image block; acquiring a template of the current image block according to the motion information of the candidate image block; obtaining, according to original motion information corresponding to the current image block and the acquired template, target motion information based on the template; determining final motion information of the current image block according to the target motion information; and decoding the current image block according to the final motion information.

Description

一种解码、编码方法和设备 技术领域
本申请涉及视频编解码技术领域,尤其涉及一种解码、编码方法和设备。
背景技术
为了达到节约空间的目的,视频中的图像都是经过编码后才传输的,完整的视频编码方法可以包括预测、变换、量化、熵编码、滤波等过程。其中,预测编码包括帧内编码和帧间编码。帧间编码是利用视频时域的相关性,使用邻近已编码图像的像素预测当前图像的像素,以有效去除视频时域冗余。
在帧间编码中,可以使用运动矢量(Motion Vector,MV)表示当前帧图像的当前图像块与参考帧图像的参考图像块之间的相对位移。例如,当前帧的图像A与参考帧的图像B存在很强的时域相关性,在需要传输图像A的图像块A1(即当前图像块)时,则可以在图像B中进行运动搜索,找到与图像块A1最匹配的图像块B1(即参考图像块),并确定图像块A1与图像块B1之间的相对位移,该相对位移也就是图像块A1的运动矢量。
编码端可以将运动矢量发送给解码端,不是将图像块A1发送给解码端,解码端可以根据运动矢量和图像块B1得到图像块A1。由于运动矢量占用的比特数小于图像块A1占用的比特数,因此,上述方式可以节约比特。
但是,若将图像A划分成大量图像块,在传输每个图像块的运动矢量时,也会占用比较多的比特数。为了进一步的节约比特,还可以利用候选图像块之间的空间相关性,预测图像块A1的运动矢量。例如,可以将与图像块A1相邻的图像块A2的运动矢量,确定为图像块A1的运动矢量。基于此,编码端可以将图像块A2的索引值发送给解码端,而解码端可以基于该索引值确定图像块A2的运动矢量,就是图像块A1的运动矢量。由于图像块A2的索引值占用的比特数小于运动矢量占用的比特数,因此,上述方式可以进一步节约比特。
由于图像块A1的运动与图像块A2的运动可能存在差异,即图像块A2的运动矢量与图像块A1的运动矢量可能并不一致,因此,将图像块A2的运动矢量确定为图像块A1的运动矢量,存在预测质量不高,预测错误等问题。
发明内容
本申请提供了一种解码、编码方法和设备可以提高运动矢量的精度,提高编码性能、解码性能。
本申请提供一种解码方法,应用于解码端,所述方法包括:获取当前图像块的候选图像块的运动信息;根据所述候选图像块的运动信息获取所述当前图像块的模板;根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息;根据所述目标运动信息确定所述当前图像块的最终运动信息;根据所述最终运动 信息对所述当前图像块进行解码。
本申请提供一种编码方法,应用于编码端,所述方法包括:获取当前图像块的候选图像块的运动信息;根据所述候选图像块的运动信息获取所述当前图像块的模板;根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息;根据所述原始运动信息和所述目标运动信息确定所述当前图像块的最终运动信息;根据所述最终运动信息对所述当前图像块进行编码,得到编码比特流。
本申请提供一种解码端设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;所述处理器用于执行机器可执行指令,以实现上述的解码方法步骤。
本申请提供一种编码端设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;所述处理器用于执行机器可执行指令,以实现上述的编码方法步骤。
由以上技术方案可见,本申请实施例中,可以根据原始运动信息获取目标运动信息,并根据目标运动信息确定当前图像块的最终运动信息,而不是根据原始运动信息确定当前图像块的最终运动信息,从而提高运动信息的精度,提高编码性能。而且,在根据原始运动信息获取目标运动信息时,可以根据候选图像块的运动信息获取当前图像块的模板,并根据所述当前图像块的模板获取目标运动信息,上述方式可以快速得到当前图像块的模板,继而根据该模板得到目标运动信息,可以提高解码效率,减少解码时延。例如,在解码的重建阶段之前,就可以获取当前图像块的模板,并根据模板得到目标运动信息。
附图说明
图1是本申请一种实施方式中的编码方法的流程图。
图2A-图2O是本申请一种实施方式中的当前图像块的模板示意图。
图3是本申请另一种实施方式中的编码方法的流程图。
图4A-图4C是本申请另一种实施方式中的编码方法的流程图。
图5是本申请另一种实施方式中的解码方法的流程图。
图6A和图6B是本申请另一种实施方式中的解码方法的流程图。
图7是本申请一种实施方式中的解码装置的结构图。
图8是本申请另一种实施方式中的编码装置的结构图。
图9是本申请一种实施方式中的解码端设备的硬件结构图。
图10是本申请一种实施方式中的编码端设备的硬件结构图。
具体实施方式
在本申请实施例使用的术语仅仅是出于描述特定实施例的目的,而非限制本申请。 本申请和权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其它含义。还应当理解,本文中使用的术语“和/或”是指包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本申请实施例可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本申请范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,此外,所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。
本申请实施例中提出一种编码、解码方法,该方法可以涉及如下概念。
运动矢量(Motion Vector,MV):在帧间编码中,使用运动矢量表示当前帧图像的当前图像块与参考帧图像的参考图像块之间的相对位移,每个图像块都有相应的运动矢量传送到解码端。如果对每个图像块的运动矢量进行独立编码和传输,特别是划分成小尺寸的大量图像块,则消耗相当多的比特。为此,还可以利用相邻图像块之间的空间相关性,根据相邻已编码图像块的运动矢量对当前待编码图像块的运动矢量进行预测,然后对预测差进行编码,这样可以有效降低表示运动矢量的比特数。
运动信息(Motion Information):为了准确指向图像块,除了获取运动矢量,还需要参考帧图像的索引信息来表示使用哪个参考帧图像。对于当前帧图像,通常可以建立一个参考帧图像列表,参考帧索引表示当前图像块采用了参考帧图像列表中的第几个参考帧图像。可以将运动矢量、参考帧索引等与运动相关的信息统称为运动信息。
模板(Template):在视频编码技术中,编码过程是按照逐个图像块进行的。在进行当前图像块的编码时,周围已编码图像块的重建信息是可被利用的。模板指当前图像块周围(时域或空域的相邻区域)固定形状的编码/解码信息。在编码端和解码端,模板是完全一样的。因此,在编码端利用模板进行的一些操作,在解码端可利用该模板获得完全一致的结果,也就是说,编码端基于模板推导的信息可以在解码端无损恢复,而不需要传递额外信息,从而进一步减少传输比特数。
率失真原则(Rate-Distortion Optimized):评价编码效率的有两大指标:码率(又称比特率,Bit Per Second,BPS)和PSNR(Peak Signal to Noise Ratio,峰值信噪比)。对于同一视频,编码比特流越小,则压缩率越大,PSNR越大,则重建图像质量越好。在模式选择时,判别公式实质上也就是对二者的综合评价。例如,模式对应的代价:
J(mode)=D+λ*R
其中,D表示Distortion(失真),通常可以使用SSE(Sum-Square Error)指标来衡量,SSE是指重建图像块与源图像的差值的均方和;λ是拉格朗日乘子,R就是该模式下图像块编码所需的实际比特数,包括编码模式信息、运动信息、残差等所需的比特总和。
帧内预测与帧间预测(intra prediction and inter prediction):帧内预测是指,可以利用当前图像块的空域相邻图像块(如与当前图像块处于同一帧图像)的重建像素值进行预测编码;而帧间预测是指,可以利用当前图像块的时域相邻图像块(如与当前图像块处于不同帧图像)的重建像素值进行预测编码。
以下结合几个具体的实施例,对本公开提供的解码方法、编码方法进行详细说明。
实施例一
参见图1所示,为编码方法的流程示意图,该方法可以包括以下步骤:
步骤101,编码端获取当前图像块的候选图像块的运动信息。
当前图像块的候选图像块可以包括但不限于:当前图像块的空域候选图像块;或者,当前图像块的时域候选图像块。对此候选图像块不做限制。
在对当前图像块进行编码时,由于当前图像块的候选图像块已经完成编码,即候选图像块的运动信息为已知,因此,编码端可以直接获取候选图像块的运动信息,如候选图像块的运动矢量和参考帧索引等,对此不做限制。
候选图像块的运动信息可以包括但不限于:候选图像块的原始运动信息,如原始运动矢量,或者,原始运动矢量和原始参考帧。候选图像块的最终运动信息,如最终运动矢量,或者,最终运动矢量和最终参考帧。
在一个例子中,当前图像块的最终运动信息仅用于当前图像块的编码(当前图像块的预测值生成、重建等编码过程),不用于相邻图像块的预测,即从候选图像块获得的运动信息为候选图像块的原始运动信息,而不是候选图像块的最终运动信息。当前图像块的编码结束后,不保存该最终运动信息,而是保存原始运动信息,即将当前图像块的运动信息恢复为原始运动信息。
后续实施例中,会介绍当前图像块的最终运动信息的获得方式,而候选图像块的最终运动信息的获得方式,与当前图像块的最终运动信息的获得方式类似,即候选图像块作为当前图像块时会获得最终运动信息。
在一个例子中,编码端可以存储候选图像块的运动信息,如将候选图像块的原始运动信息存储为候选图像块的运动信息或将候选图像块的最终运动信息存储为候选图像块的运动信息,这样,在步骤101中,可以直接从编码端本地查询到候选图像块的原始运动信息。
在另一个例子中,编码端可以获取候选图像块的原始运动信息(如原始运动矢量和原始参考帧等),例如,从候选图像块的运动矢量列表中选取一个运动矢量,而选取的运动矢量就是原始运动矢量。又例如,可以将候选图像块的相邻图像块的运动信息,确定为候选图像块的原始运动信息。
上述方式只是获取候选图像块的原始运动信息的示例,对此不做限制。
步骤102,编码端根据候选图像块的运动信息获取当前图像块的模板。
本步骤102的处理过程,可以参见后续实施例。
步骤103,编码端根据当前图像块对应的原始运动信息和获取的模板,得到基于该模板的目标运动信息。其中,所述原始运动信息包括原始运动矢量,所述目标运动信息包括目标运动矢量。或者,所述原始运动信息包括原始运动矢量和原始参考帧,所述目标运动信息包括目标运动矢量和目标参考帧。
当原始运动信息包括原始运动矢量,目标运动信息包括目标运动矢量时,编码端根据当前图像块对应的原始运动信息和获取的模板,得到基于该模板的目标运动信息,包括:将所述原始运动矢量确定为中心运动矢量;确定与所述中心运动矢量对应的各个边缘运动矢量,所述边缘运动矢量与所述中心运动矢量不同;根据所述模板获得所述中心运动矢量的编码性能和各个所述边缘运动矢量的编码性能;根据所述中心运动矢量的编码性能和各个所述边缘运动矢量的编码性能,从所述中心运动矢量和各个所述边缘运动矢量中确定所述目标运动矢量。
当原始运动信息包括原始运动矢量和原始参考帧,目标运动信息包括目标运动矢量和目标参考帧时,编码端根据当前图像块对应的原始运动信息和获取的模板,得到基于该模板的目标运动信息,包括:基于当前图像块的所述模板,根据所述原始运动矢量获取所述原始参考帧对应的候选运动矢量;根据所述原始运动矢量获取各个候选参考帧对应的各个初始运动矢量;根据所述各个初始运动矢量获取所述各个候选参考帧对应的各个候选运动矢量;从所述原始参考帧对应的候选运动矢量以及各个所述候选参考帧对应的各个候选运动矢量中选择编码性能最优的候选运动矢量作为所述目标运动矢量;将目标运动矢量对应的参考帧确定为所述目标参考帧。
本步骤103的详细处理过程,可以参见后续实施例。
步骤104,编码端根据原始运动信息和目标运动信息确定当前图像块的最终运动信息。
当原始运动信息包括原始运动矢量,目的运动信息包括目标运动矢量,最终运动信息包括最终运动矢量时,则编码端根据原始运动信息和目标运动信息确定当前图像块的最终运动信息,可以包括:获取原始运动矢量的编码性能和目标运动矢量的编码性能;当目标运动矢量的编码性能优于原始运动矢量的编码性能时,则确定当前图像块的最终运动矢量为目标运动矢量;当原始运动矢量的编码性能优于目标运动矢量的编码性能时,确定当前图像块的最终运动矢量为原始运动矢量。
获取原始运动矢量的编码性能包括:根据当前图像块的模板的参数信息和第一目标参考块的参数信息,确定原始运动矢量的编码性能,第一目标参考块为所述模板对应的参考图像块基于原始运动矢量进行偏移之后获得的图像块。获取所述目标运动矢量的编码性能包括:根据当前图像块的模板的参数信息和第二目标参考块的参数信息,确定目标运动矢量的编码性能,第二目标参考块为所述模板对应的参考图像块基于目标运动矢量进行偏移之后获得的图像块。
本步骤104的详细处理过程,可以参见后续实施例。
步骤105,编码端根据最终运动信息对当前图像块进行编码,得到该当前图像块对应的编码比特流。然后,编码端还可以将该编码比特流发送给解码端。
步骤106,编码端将当前图像块对应的原始运动信息或者最终运动信息存储为当前图像块的运动信息。这样,在其它图像块的处理过程中,若选择上述当前图像块作为其他图像块的候选图像块时,步骤101中,所使用的候选图像块的运动信息也就是本步骤中存储的当前图像块的运动信息。
在一个例子中,将当前图像块对应的原始运动信息或者最终运动信息存储为当前图像块的运动信息,可以包括:在当前图像块的原始运动信息是根据空域相邻图像块(或者空域次邻图像块)的运动信息获得时,将当前图像块对应的原始运动信息存储为当前图像块的运动信息。在当前图像块的原始运动信息不是根据空域相邻图像块(或者空域次邻图像块)的运动信息获得时,将当前图像块对应的最终运动信息存储为当前图像块的运动信息。其中,空域相邻图像块或空域次邻图像块与当前图像块在同一帧图像中。例如,在当前图像块对应的原始运动信息是根据时域相邻图像块(即与当前图像块处于不同帧图像)的运动信息获得时,将当前图像块对应的最终运动信息存储为当前图像块的运动信息。在默认情况下,也可以将当前图像块对应的最终运动信息存储为当前图像块的运动信息。
在一些实施例中,原始运动信息至少包括原始运动矢量,在当前图像块对应的原始运动矢量是根据空域相邻图像块的运动矢量获得时,将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量。在当前图像块的原始运动信息不是根据空域相邻图像块的运动矢量获得时,将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。例如,在当前图像块对应的原始运动矢量是根据时域相邻图像块的运动矢量获得时,将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。在默认情况下,也可以将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。
在一个例子中,运动信息包括运动矢量,在编码之后,将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量,或者在另一个例子中,在编码之后,将当前图像块的最终运动矢量存储为当前图像块的运动矢量。
在一个例子中,当前图像块的最终运动信息仅用于当前图像块的编码(当前图像块的预测值生成、重建等编码过程),不用于相邻图像块的预测,即从候选图像块获得的运动信息为候选图像块的原始运动信息,而不是候选图像块的最终运动信息。当前图像块的编码结束后,不保存该最终运动信息,而是保存原始运动信息,即将当前图像块的运动信息恢复为原始运动信息。
在一个例子中,步骤101中的候选图像块的运动信息可以为候选图像块的原始运动矢量,步骤106中,编码端可以将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量。
在一个例子中,步骤101中的候选图像块的运动信息可以为候选图像块的最终运动矢量,步骤106中,编码端可以将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量。
在一个例子中,步骤101中的候选图像块的运动信息可以为候选图像块的原始运动矢量,步骤106中,编码端可以将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。
在一个例子中,步骤101中的候选图像块的运动信息可以为候选图像块的最终运动矢量,步骤106中,编码端可以将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。
运动信息除了包括运动矢量之外,还包括参考图像帧以及运动方向等。
在一个例子中,编码端还可以获取原始运动矢量在运动矢量列表中的索引值;而且,所述编码比特流可以携带所述索引值,该过程在后续实施例介绍。
在一个例子中,若当前图像块的最终运动矢量为目标运动矢量,则该当前图像块对应的编码比特流还可以携带第一指示信息,所述第一指示信息用于指示基于模板确定当前图像块的最终运动信息。若当前图像块的最终运动矢量为原始运动矢量,则该当前图像块对应的编码比特流还可以携带第二指示信息,所述第二指示信息用于指示基于当前图像块对应的原始运动信息确定当前图像块的最终运动信息。
上述方式是采用显式方式通知解码端第一指示信息或者第二指示信息,在实际应用中,还可以采用隐式方式通知,即不在编码比特流中携带第一指示信息或者第二指示信息。具体的,编码端和解码端还可以协商决策策略或者通过在标准中定义决策策略,并将所述决策策略分别存储在编码端和解码端,如决策策略可以约定第一策略信息,第一策略信息用于指示基于模板确定当前图像块的最终运动信息;或者,约定第二策略信息,第二策略信息用于指示基于当前图像块对应的原始运动信息确定当前图像块的最终运动信息;或者,约定第三策略信息,第三策略信息为采用与当前图像块的相邻图像块相同的策略信息。其中,所采用相邻图像块为编码端和解码端预先约定好的某个相邻图像块。
在此基础上,当决策策略约定第一策略信息,且编码端根据目标运动信息对当前图像块进行编码时,则编码比特流中可以不携带第一指示信息和第二指示信息。当决策策略约定第二策略信息,且编码端根据原始运动信息对当前图像块进行编码时,则编码比特流中可以不携带第一指示信息和第二指示信息。当决策策略约定第三策略信息,且相邻图像块采用第一策略信息,则编码端根据目标运动信息对当前图像块进行编码时,编码比特流中可以不携带第一指示信息和第二指示信息;当相邻图像块采用第二策略信息,编码端根据原始运动信息对当前图像块进行编码时,编码比特流中可以不携带第一指示信息和第二指示信息。
由以上技术方案可见,本申请实施例中,可以根据当前图像块的候选图像块的运动信息获取当前图像块的模板,根据当前图像块的模板对当前图像块进行编码,得到当前图像块对应的编码比特流。上述方式可以快速得到当前图像块的模板,可以提高编码效率,降低编码时延,提高编码性能。
编码端可以同时对多个图像块进行并行编码,从而进一步提高编码速度,提高编码效率,降低编码时延,提高编码性能。
在一个例子中,当前图像块的最终运动信息仅用于当前图像块的编码(当前图像块的预测值生成、重建等编码过程),不用于相邻图像块的预测,即相邻图像块获得的候选运动信息为原始运动信息,而不是最终运动信息。当前图像块的编码结束后,不保存该最终运动信息,而是保存原始运动信息,即将当前图像块的运动信息恢复为原始运动信息。
实施例二
在视频编码过程中,对每个图像块逐一编码。在对当前图像块编码时,若当前图像块周围的相邻图像块已经重建完成,则可以利用相邻图像块的信息对当前图像块进行重建,因此,可以利用当前图像块的相邻图像块的信息获取当前图像块的模板。该信息可以包括但不限于:相邻图像块的重建信息和/或相邻图像块的预测信息。该重建信息可以包括但不限于亮度值、色度值等;该预测信息可以是能够获取重建信息的中间值,例如,若能够利用中间值A获取到亮度值,则中间值A是预测信息,对此预测信息不做限制。
但是,若该信息是重建信息,则当前图像块的模板的生成需要等到重建阶段,降低了编解码的效率,带来了时延。若该信息是预测信息,则当前图像块的模板的生成也需要等到重建阶段,降低了编解码的效率,带来了时延。因此,上述方式会导致编解码的并行度受到较大影响。
为此,在本实施例中,提出了一种模板生成方式,与采用重建信息和预测信息生成模板的方式不同,可以根据当前图像块的候选图像块的运动信息(如运动矢量和参考帧索引等)获取当前图像块的模板。根据候选图像块的运动信息获取当前图像块的模板的方法均可应用在编码端和解码端。
根据候选图像块的运动信息获取当前图像块的模板,可以包括:当运动信息包括候选图像块的运动矢量和参考帧索引时,根据该参考帧索引确定该候选图像块的参考帧图像;根据该运动矢量从该参考帧图像中获取与该候选图像块的参考图像块,并根据该参考图像块获取该当前图像块的模板。
例如,参见图2A所示,假设图像块A1是当前图像块,而图像块A2和图像块A3是图像块A1的候选图像块。若图像块A2的参考帧索引为图像B的索引,则可以根据该参考帧索引确定图像块A2的参考帧图像是图像B;然后,从图像B中选取与图像块A2对应的图像块B2(如图中虚线箭头所示),即图像块B2在图像B中的位置与图像块A2在图像A中的位置相同;然后,可以根据图像块A2的运动矢量对图像块B2进行移动,如利用运动矢量(3,3)移动图像块B2,得到与图像块B2对应的图像块B2’(如向右移动3个像素点,向上移动3个像素点),图像块B2’就是图像块A2的参考图像块。同理,可以确定图像块A3的参考图像块是图像块B3’(如图中虚线箭头所示)。可以根据图像块B2’和图像块B3’确定图像块A1的模板,参见图2A所示。
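A minimal sketch, under assumptions, of how a candidate block's reference image block could be located from its reference frame index and motion vector, as in the Figure 2A example above (image block B2 shifted by the candidate block's motion vector to obtain B2'). The integer-pel offsets, the list-of-frames layout, and the sign convention of the vertical component are assumptions of this sketch, not details fixed by this application.

    import numpy as np

    def reference_block_for_candidate(reference_frames, ref_idx, top, left, height, width, mv):
        # Pick the reference frame named by the candidate block's reference frame index,
        # then offset the co-located block position by the candidate block's motion vector.
        frame = reference_frames[ref_idx]
        dx, dy = mv  # integer-pel offsets are assumed; sub-pel interpolation is omitted
        return frame[top + dy: top + dy + height, left + dx: left + dx + width]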
在一个例子中,候选图像块可以包括M个第一候选图像块和N个第二候选图像块,M为大于或等于1的自然数,N为大于或等于0的自然数,或者,M为大于或等于0的自然数,N为大于或等于1的自然数。第一候选图像块为当前图像块上侧的候选图像块,第二候选图像块为当前图像块左侧的候选图像块。在这种情况下,根据候选图像块的运动信息获取当前图像块的模板,可以包括但不限于:根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板;根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板;然后,基于第一模板和第二模块确定当前图像块的模板。其中,基于第一模板和第二模板确定当前图像块的模板,可以包括但不限于:将第一模板确定为当前图像块的模板;或者,将第二模板确定为当前图像块的模板;或者,将第一模板和第二模板拼接之后确定为当前图像块的模板。
当M为大于或等于1的自然数,N为0时,可以根据M个第一候选图像块的运动 矢量预测模式和运动信息,确定第一模板,并将第一模板确定为当前图像块的模板。当N为大于或等于1的自然数,M为0时,可以根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板,并将第二模板确定为当前图像块的模板。当M为大于或等于1的自然数,N为大于或等于1的自然数,则可以根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板,根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板,并根据第一模板和第二模板确定当前图像块的模板。具体可以包括将第一模板确定为当前图像块的模板、或将第二模板确定为当前图像块的模板、或将第一模板和第二模板拼接之后确定为当前图像块的模板。
第一候选图像块包括当前图像块上侧的相邻图像块和/或次邻图像块。相邻图像块的预测模式为帧间模式或者帧内模式;次邻图像块的预测模式为帧间模式。第一候选图像块可以包括至少一个预测模式为帧间模式的相邻图像块,例如,当前图像块上侧的所有相邻图像块、或当前图像块上侧的第一个相邻图像块、或当前图像块上侧的任意一个或多个相邻图像块。当当前图像块上侧的相邻图像块均为帧内模式时,则第一候选图像块还可以包括至少一个预测模式为帧间模式的次邻图像块,例如,当前图像块上侧的所有次邻图像块、或当前图像块上侧的第一个次邻图像块、或当前图像块上侧的任意一个或多个次邻图像块。当当前图像块上侧存在帧内模式的相邻图像块时,则第一候选图像块还可以包括帧内模式的相邻图像块,例如,当前图像块上侧的第一个帧内模式的相邻图像块、当前图像块上侧的所有帧内模式的相邻图像块等。上述只是第一候选图像块的示例,对此不做限制。
第二候选图像块包括当前图像块左侧的相邻图像块和/或次邻图像块。相邻图像块的预测模式为帧间模式或者帧内模式;次邻图像块的预测模式为帧间模式。第二候选图像块可以包括至少一个预测模式为帧间模式的相邻图像块,例如,当前图像块左侧的所有相邻图像块、或当前图像块左侧的第一个相邻图像块、或当前图像块左侧的任意一个或多个相邻图像块。当当前图像块左侧的相邻图像块均为帧内模式时,则第二候选图像块还可以包括至少一个预测模式为帧间模式的次邻图像块,例如,当前图像块左侧的所有次邻图像块、或当前图像块左侧的第一个次邻图像块、或当前图像块左侧的任意一个或多个次邻图像块。当当前图像块左侧存在帧内模式的相邻图像块时,则第一候选图像块还可以包括帧内模式的相邻图像块,例如,当前图像块左侧的第一个帧内模式的相邻图像块、当前图像块左侧的所有帧内模式的相邻图像块等。上述只是第二候选图像块的示例,对此不做限制。
当前图像块的相邻图像块包括但不限于:当前图像块的空域相邻图像块(即同一帧图像中的相邻图像块);或者,当前图像块的时域相邻图像块(即不同帧图像中的相邻图像块)。当前图像块的次邻图像块包括但不限于:当前图像块的空域次邻图像块(即同一帧图像中的次邻图像块);或者,当前图像块的时域次邻图像块(即不同帧图像中的次邻图像块)。
在一个例子中,当M大于1时,则第一模板可以包括M个子模板或者P个子模板,且由M个子模板或者P个子模板拼接而成,P可以为帧间模式的第一候选图像块的数量,P小于或等于M。例如,当M个第一候选图像块均为帧间模式的候选图像块时,则第一模板可以包括M个子模板,且由M个子模板拼接而成。又例如,当M个第一候选图 像块包括P个帧间模式的候选图像块、并包括M-P个帧内模式的候选图像块时,则第一模板可以包括M个子模板(即每个候选图像块对应一个子模板),且由M个子模板拼接而成;或者,第一模板可以包括P个子模板(即P个帧间模式的候选图像块对应的P个子模板),且由P个子模板拼接而成。
当M等于1时,则第一模板可以包括第一子模板,该第一子模板可以是根据当前图像块上侧的任意一个候选图像块的运动矢量预测模式和运动信息确定的。其中,由于第一候选图像块包括至少一个预测模式为帧间模式的相邻图像块或次邻图像块,因此,M等于1时,第一模板包括的是帧间模式的相邻图像块或次邻图像块对应的第一子模板。
在一个例子中,运动信息可以包括第一候选图像块的运动矢量和参考帧索引,基于此,根据M个第一候选图像块的运动矢量预测模式和运动信息确定第一模板,可以包括但不限于以下情况。
情况一、针对M个第一候选图像块中的第i个候选图像块,当确定第i个候选图像块的运动矢量预测模式为帧间模式时,则根据第i个候选图像块的参考帧索引确定第i个候选图像块的参考帧图像;根据第i个候选图像块的运动矢量,从该参考帧图像中确定第i个候选图像块的参考图像块,该参考图像块与第i个候选图像块的相对位移,与第i个候选图像块的运动矢量相匹配;然后,可以根据确定的参考图像块,获取尺寸为第一横向长度和第一纵向长度的图像块作为第一模板包括的第i个子模板。
情况二、针对M个第一候选图像块中的第i个候选图像块,当确定第i个候选图像块的运动矢量预测模式为帧内模式时,则将第i个候选图像块按照默认值(如默认像素值,可以是根据经验预先配置的亮度值)填充,基于默认值填充后的图像块,获取尺寸为第一横向长度和第一纵向长度的图像块作为第一模板包括的第i个子模板。
情况三、针对M个第一候选图像块中的第i个候选图像块,当确定第i个候选图像块的运动矢量预测模式为帧内模式时,根据第i个候选图像块对应的参考帧索引确定第i个候选图像块对应的参考帧图像;根据第i个候选图像块对应的运动矢量,从参考帧图像中确定第i个候选图像块对应的参考图像块,参考图像块与第i个候选图像块的相对位移与第i个候选图像块对应的运动矢量相匹配(包含相等或近似相等);根据确定的参考图像块,获取尺寸为第一横向长度和第一纵向长度的图像块作为第一模板包括的第i个子模板;其中,第i个候选图像块对应的参考帧索引和运动矢量,是第i个候选图像块的相邻图像块的参考帧索引和运动矢量。
第一横向长度与第一候选图像块的横向长度满足第一比例关系(如1:1,1:2,2:1等,对此不做限制),或者与当前图像块的横向长度满足第二比例关系(如1:1,1:2,2:1等),或者等于第一预设长度(可根据经验配置)。
第一纵向长度与第一候选图像块的纵向长度满足第三比例关系(如1:1,1:2,2:1等),或者与当前图像块的纵向长度满足第四比例关系(如1:1,1:2,2:1等),或者等于第二预设长度(即根据经验配置的长度)。
第一比例关系、第二比例关系、第三比例关系和第四比例关系之间可以设置为相同,也可以不同。第一预设长度和第二预设长度可以设置为相同,也可以不同。
在一个例子中,当N大于1时,则第二模板可以包括N个子模板或者R个子模板,且由N个子模板或者R个子模板拼接而成,其中,R可以为帧间模式的第二候选图像块的数量,R小于或等于N。例如,当N个第二候选图像块均为帧间模式的候选图像块是,第二模板可以包括N个子模板,且由N个子模板拼接而成。又例如,当N个第二候选图像块包括R个帧间模式的候选图像块、并包括N-R个帧内模式的候选图像块,则第二模板可以包括N个子模板(即每个候选图像块对应一个子模板),且由N个子模板拼接而成;或者,第二模板可以包括R个子模板(即R个帧间模式的候选图像块对应的R个子模板),且由R个子模板拼接而成。
当N等于1时,则第二模板可以包括第二子模板,该第二子模板可以是根据当前图像块左侧的任意一个候选图像块的运动矢量预测模式和运动信息确定的。其中,由于第二候选图像块包括至少一个预测模式为帧间模式的相邻图像块或次邻图像块,因此,N等于1时,第二模板包括的是帧间模式的相邻图像块或次邻图像块对应的第二子模板。
在一个例子中,运动信息可以包括第二候选图像块的运动矢量和参考帧索引,基于此,根据N个第二候选图像块的运动矢量预测模式和运动信息确定第二模板,可以包括但不限于以下情况。
情况一、针对N个第二候选图像块中的第i个候选图像块,当确定第i个候选图像块的运动矢量预测模式为帧间模式时,则根据第i个候选图像块的参考帧索引确定第i个候选图像块的参考帧图像;根据第i个候选图像块的运动矢量,从该参考帧图像中确定第i个候选图像块的参考图像块,该参考图像块与第i个候选图像块的相对位移,与第i个候选图像块的运动矢量相匹配;然后,可以根据确定的参考图像块,获取尺寸为第二横向长度和第二纵向长度的图像块作为第二模板包括的第i个子模板。
情况二、针对N个第二候选图像块中的第i个候选图像块,当确定第i个候选图像块的运动矢量预测模式为帧内模式时,则将第i个候选图像块按照默认值(如默认像素值,可以是根据经验预先配置的亮度值)填充,基于默认值填充后的图像块,获取尺寸为第二横向长度和第二纵向长度的图像块作为第二模板包括的第i个子模板。
情况三、针对N个第二候选图像块中的第i个候选图像块,当确定第i个候选图像块的运动矢量预测模式为帧内模式时,根据第i个候选图像块对应的参考帧索引确定第i个候选图像块对应的参考帧图像;根据第i个候选图像块对应的运动矢量,从参考帧图像中确定第i个候选图像块对应的参考图像块,参考图像块与第i个候选图像块的相对位移与第i个候选图像块对应的运动矢量相匹配(包含相等或近似相等);根据确定的参考图像块,获取尺寸为第二横向长度和第二纵向长度的图像块作为第一模板包括的第i个子模板;其中,第i个候选图像块对应的参考帧索引和运动矢量,是第i个候选图像块的相邻图像块的参考帧索引和运动矢量。
第二横向长度与第二候选图像块的横向长度满足第五比例关系(如1:1,1:2,2:1等,对此不做限制),或者与当前图像块的横向长度满足第六比例关系(如1:1,1:2,2:1等),或者等于第三预设长度(可根据经验配置)。
第二纵向长度与第二候选图像块的纵向长度满足第七比例关系(如1:1,1:2,2:1等),或者与当前图像块的纵向长度满足第八比例关系(如1:1,1:2,2:1等), 或者等于第四预设长度(即根据经验配置的长度)。
第五比例关系、第六比例关系、第七比例关系和第八比例关系之间可以设置为相同,也可以不同。第三预设长度和第四预设长度可以设置为相同,也可以不同。
在一个例子中,根据候选图像块的运动信息获取当前图像块的模板,还可以包括但不限于:当当前图像块对应多个运动信息时,根据每个运动信息获取该运动信息对应的模板,每个模板的获取方式参见上述实施例;然后,获取每个运动信息对应的权重,并根据每个运动信息对应的权重以及该运动信息对应的模板获取当前图像块的模板。例如,基于每个运动信息对应的权重以及该运动信息对应的模板,可利用加权平均方式获取当前图像块的模板。其中,当前图像块对应的运动信息可包括当前图像块的原始运动信息。这些权重可以是预先存储在编解码端的。
例如,当前图像块对应运动信息A和运动信息B,采用上述实施例获取运动信息A对应的模板TA,并采用上述实施例获取运动信息B对应的模板TB。
然后,可以获取运动信息A的权重W1和运动信息B的权重W2,这样,当前图像块的模板可以为(TA*W1+TB*W2)/2。
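Continuing the TA/TB example above, the following minimal sketch combines the per-motion-information templates into the template of the current image block by weighted averaging, as in (TA*W1+TB*W2)/2; the array shapes and weight values are assumptions used only for illustration.

    import numpy as np

    def combine_templates(templates, weights):
        # Weighted combination of the templates derived from each piece of motion information,
        # divided by the number of templates, e.g. (TA*W1 + TB*W2) / 2 for two templates.
        acc = sum(w * t.astype(np.float64) for t, w in zip(templates, weights))
        return acc / len(templates)

    ta = np.full((2, 4), 100.0)
    tb = np.full((2, 4), 120.0)
    print(combine_templates([ta, tb], [1.0, 1.0])[0])  # -> [110. 110. 110. 110.]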
以下结合几个具体情况,对当前图像块的模板进行详细说明。
情况一:候选图像块可以包括当前图像块上侧的所有帧间模式的相邻图像块,以及,当前图像块左侧的所有帧间模式的相邻图像块。
参见图2B所示,对于当前图像块A1,若左侧存在帧间模式的相邻图像块,如图像块A3和图像块A4,则可以将帧间模式的图像块A3和图像块A4确定为当前图像块A1的第二候选图像块。类似的,若上侧存在帧间模式的相邻图像块,如图像块A2,则可以将帧间模式的图像块A2确定为当前图像块A1的第一候选图像块。
若左侧没有可用图像块,或者,左侧的可用图像块均是帧内模式,则可以说明当前图像块A1的左侧没有候选图像块。类似的,若上侧没有可用图像块,或者,上侧的可用图像块均是帧内模式,则可以说明当前图像块A1的上侧没有候选图像块。若左侧和上侧均没有候选图像块,则可以说明当前图像块A1没有候选图像块。
若当前图像块A1没有候选图像块,则不再采用本实施例的技术方案,而是采用传统方式。若当前图像块A1有候选图像块,如左侧的候选图像块和/或上侧的候选图像块,则采用本实施例的技术方案。
在一个例子中,在确定出候选图像块为图像块A2、图像块A3和图像块A4后,就可以根据图像块A2的运动信息、图像块A3的运动信息和图像块A4的运动信息,获取当前图像块A1的模板。例如,可以根据图像块A2的参考帧索引确定图像块A2的参考帧图像,从该参考帧图像中选取与图像块A2对应的图像块B2,根据图像块A2的运动矢量对图像块B2进行移动,得到图像块A2的参考图像块B2’。同理,可以得到图像块A3的参考图像块B3’,图像块A4的参考图像块B4’,参见图2C所示。然后,可以根据参考图像块B2’、参考图像块B3’和参考图像块B4’获得当前图像块A1的模板。
在一个例子中,假设当前图像块A1上侧模板的横向长度为W,纵向长度为S, W的取值可以根据经验配置,S的取值可以根据经验配置,对于W和S的取值均不做限制。例如,W可以为当前图像块A1的横向长度、或者为候选图像块A2的横向长度、或者为当前图像块A1的横向长度的2倍等;S可以为候选图像块A2的纵向长度、或者为候选图像块A2的纵向长度的1/3等。参见图2D所示,为参考图像块B2’对应的模板示意图。在图2D中,该模板的横向长度W为候选图像块A2的横向长度,即W为参考图像块B2’的横向长度;该模板的纵向长度S为候选图像块A2的纵向长度的1/3,即S为参考图像块B2’的纵向长度的1/3。
假设当前图像块A1左侧模板的横向长度为R,纵向长度为H,R的取值可以根据经验配置,H的取值可以根据经验配置,对于R和H的取值均不做限制。例如,H可以为当前图像块A1的纵向长度、或者为候选图像块A3的纵向长度;R可以为候选图像块A3的横向长度、或者为候选图像块A3的横向长度的1/3等。参见图2D所示,为参考图像块B3’对应的模板示意图,该模板的纵向长度H为候选图像块A3的纵向长度,该模板的横向长度R为候选图像块A3的横向长度的1/3。
同理,参考图像块B4’对应的模板页也可以参见图2D所示,在此不再赘述。
在一个例子中,假设当前图像块上侧有M个候选图像块,则针对第i个候选图像块,假设其横向长度为w i,纵向长度为S,则需判断该候选图像块的预测模式。
若为帧内模式,则不再生成相应的子模板,或者,按照默认值(如默认像素值,可以是根据经验预先配置的亮度值)填充,作为上侧模板的第i个子模板。
若为帧间模式,则获取第i个候选图像块的运动信息(如运动矢量和参考帧索引等),基于该运动矢量和该参考帧索引生成横向长度为w i,纵向长度为S的模板,作为上侧的第i个子模板。具体的,若运动矢量为MV,参考帧索引为idx,则在当前帧的第idx个参考图像中找到相对位移为MV的横向长度为w i,纵向长度为S的矩形块,作为上侧的第i个子模板。其中,横向长度w i,纵向长度S可以是编解码端事先约定好的,可以预先存储在编解码端。
假设当前图像块左侧有N个候选图像块,则针对第i个候选图像块,假设纵向长度为h i,横向长度为R,则需判断该候选图像块的预测模式。
若为帧内模式,则不再生成相应的子模板,或者,按照默认值(如默认像素值,可以是根据经验预先配置的亮度值)填充,作为左侧模板的第i个子模板。
若为帧间模式,则获取第i个候选图像块的运动信息(如运动矢量和参考帧索引等),基于该运动矢量和该参考帧索引生成横向长度为R,纵向长度为h i的模板,作为左侧的第i个子模板。具体的,若运动矢量为MV,参考帧索引为idx,则在当前帧的第idx个参考图像中找到相对位移为MV的横向长度为R,纵向长度为h i的矩形块,作为左侧的第i个子模板。其中,横向长度R,纵向长度h i可以是编解码端事先约定好的,可以预先存储在编解码端。
第一模板可以由上侧的所有子模板拼接而成,第二模板可以由左侧的所有子模板拼接而成,第一模板和第二模板就拼接成当前图像块的模板。
情况二:候选图像块可以包括当前图像块上侧的第一个帧间模式的相邻图像块, 以及,当前图像块左侧的第一个帧间模式的相邻图像块。
参见图2E所示,对于当前图像块A1,若左侧的第一个图像块A3是帧间模式,则可以将图像块A3确定为当前图像块A1的候选图像块。若上侧的第一个图像块A2是帧间模式,则可以将图像块A2确定为当前图像块A1的候选图像块。
若左侧没有可用图像块,或者,左侧的第一个图像块是帧内模式,则当前图像块A1的左侧没有候选图像块。若上侧没有可用图像块,或者,上侧的第一个图像块是帧内模式,则当前图像块A1的上侧没有候选图像块。若左侧和上侧均没有候选图像块,则当前图像块A1没有候选图像块。
若当前图像块A1没有候选图像块,则不再采用本实施例的技术方案,而是采用传统方式。若当前图像块A1有候选图像块,如左侧的候选图像块和/或上侧的候选图像块,则采用本实施例的技术方案。
在确定出候选图像块为图像块A2和图像块A3后,可以根据图像块A2的运动信息和图像块A3的运动信息,获取当前图像块A1的模板。例如,可以根据图像块A2的参考帧索引确定图像块A2的参考帧图像,从参考帧图像中选取与图像块A2对应的图像块B2,根据图像块A2的运动矢量对图像块B2进行移动,得到图像块A2的参考图像块B2’。同理,可以得到图像块A3的参考图像块B3’,可以根据参考图像块B2’和参考图像块B3’获得当前图像块A1的模板。
在一个例子中,假设当前图像块A1上侧模板的横向长度为W,纵向长度为S,W的取值可以根据经验配置,S的取值可以根据经验配置,对于W和S的取值均不做限制。例如,W可以为当前图像块A1的横向长度、或者为候选图像块A2的横向长度;S可以为候选图像块A2的纵向长度、或者为候选图像块A2的纵向长度的1/3等。参见图2F所示,为参考图像块B2’对应的模板示意图。
假设当前图像块A1左侧模板的横向长度为R,纵向长度为H,R的取值可以根据经验配置,H的取值可以根据经验配置,对于R和H的取值均不做限制。例如,H可以为当前图像块A1的纵向长度、或者为候选图像块A3的纵向长度等;R可以为候选图像块A3的横向长度、或者为候选图像块A3的横向长度的1/3等。在此基础上,参见图2F所示,为参考图像块B3’对应的模板示意图。
在一个例子中,假设当前图像块上侧有M个候选图像块,则针对上侧的第一个候选图像块,假设其横向长度为w、纵向长度为S,则需判断该候选图像块的预测模式。若为帧内模式,则不生成相应的模板,或者,按照默认值(如默认像素值,可以是根据经验预先配置的亮度值)填充,作为第一模板。若为帧间模式,则获取该候选图像块的运动信息(如运动矢量和参考帧索引等),基于运动矢量和参考帧索引生成横向长度为w,纵向长度为S的模板,作为第一模板。
假设当前图像块左侧有N个候选图像块,针对左侧的第一个候选图像块,假设纵向长度为h,横向长度为R,则需判断该候选图像块的预测模式。若为帧内模式,则不生成相应的模板,或按照默认值填充,作为第二模板。若为帧间模式,获取该候选图像块的运动信息(如运动矢量和参考帧索引),基于运动矢量和参考帧索引生成横向长 度为R,纵向长度为h的模板,作为第二模板。
情况三:候选图像块可以包括当前图像块上侧的第一个帧间模式的相邻图像块,以及,当前图像块左侧的第一个帧间模式的相邻图像块。
与情况二的图2F相比,在情况三中,参见图2G所示,当前图像块A1上侧模板的横向长度W,其取值为当前图像块A1的横向长度,当前图像块A1左侧模板的纵向长度H,其取值为当前图像块A1的纵向长度,其它过程与情况二类似,不再赘述。
情况四:候选图像块可以包括当前图像块上侧的帧间模式的相邻图像块、当前图像块上侧的帧间模式的次邻图像块(即相邻图像块为帧内模式时,选择该相邻图像块对应的与当前图像块是次邻的图像块)、当前图像块左侧的帧间模式的相邻图像块、当前图像块左侧的帧间模式的次邻图像块。
例如,参见图2H所示,对于当前图像块A1,左侧存在帧间模式的相邻图像块,如图像块A3和图像块A4,则可以将帧间模式的图像块A3和图像块A4确定为当前图像块A1的候选图像块。类似的,上侧存在帧间模式的相邻图像块,如图像块A2,则可以将帧间模式的图像块A2确定为当前图像块A1的候选图像块。
左侧存在帧内模式的相邻图像块,如图像块A7,且图像块A7左侧存在帧间模式的图像块A8,即图像块A8是当前图像块A1左侧的次邻图像块,因此,可以将帧间模式的次邻图像块A8确定为当前图像块A1的候选图像块。
类似的,上侧存在帧内模式的相邻图像块,如图像块A5,且图像块A5上侧存在帧间模式的图像块A6,即图像块A6是当前图像块A1上侧的次邻图像块,因此,可以将帧间模式的次邻图像块A6确定为当前图像块A1的候选图像块。
在确定出候选图像块为图像块A2、图像块A3、图像块A4、图像块A6和图像块A8后,根据图像块A2的运动信息、图像块A3的运动信息、图像块A4的运动信息、图像块A6的运动信息和图像块A8的运动信息,获取当前图像块A1的模板,具体获取方式参见情况一,只是多了根据图像块A6的运动信息和图像块A8的运动信息获取子模板,模板最终参见图2I所示。
情况五:若当前图像块上侧的第一个相邻图像块为帧内模式,且该第一个相邻图像块上侧的图像块为帧间模式,则候选图像块可以包括当前图像块上侧的帧间模式的次邻图像块。若当前图像块左侧的第一个相邻图像块为帧内模式,且该第一个相邻图像块左侧的图像块为帧间模式,则候选图像块可以包括当前图像块左侧的帧间模式的次邻图像块。
参见图2J所示,对于当前图像块A1,若上侧的第一个图像块A2是帧内模式,且图像块A2上侧存在帧间模式的图像块A4,即图像块A4是当前图像块A1上侧的次邻图像块,因此,可以将帧间模式的次邻图像块A4确定为当前图像块A1的候选图像块。
若左侧的第一个图像块A3是帧内模式,且图像块A3左侧存在帧间模式的图像块A5,即图像块A5是当前图像块A1左侧的次邻图像块,因此,可以将帧间模式的次邻图像块A5确定为当前图像块A1的候选图像块。
在确定出候选图像块为图像块A4和图像块A5后,可以根据图像块A4的运动信息和图像块A5的运动信息,获取当前图像块A1的模板,具体获取方式参见情况二和情况三,在此不再赘述,模板最终参见图2K或者图2L所示。
情况六:候选图像块可以包括当前图像块上侧的帧间模式的相邻图像块、当前图像块上侧的帧内模式的相邻图像块、当前图像块左侧的帧间模式的相邻图像块、当前图像块左侧的帧内模式的相邻图像块。
例如,参见图2H所示,对于当前图像块A1,左侧存在帧间模式的相邻图像块,如图像块A3和图像块A4,则可以将帧间模式的图像块A3和图像块A4确定为当前图像块A1的候选图像块。类似的,上侧存在帧间模式的相邻图像块,如图像块A2,则可以将帧间模式的图像块A2确定为当前图像块A1的候选图像块。
左侧存在帧内模式的相邻图像块,如图像块A7,可以将帧内模式的图像块A7确定为当前图像块A1的候选图像块。上侧存在帧内模式的相邻图像块,如图像块A5,可以将帧内模式的图像块A5确定为当前图像块A1的候选图像块。
在确定出候选图像块为图像块A2、图像块A3、图像块A4、图像块A5和图像块A7后,根据图像块A2的运动信息、图像块A3的运动信息、图像块A4的运动信息、图像块A5的运动信息和图像块A7的运动信息,获取当前图像块A1的模板,具体获取方式参见情况一。由于在这种情况下,可以根据帧内模式的图像块A5的运动信息和图像块A7的运动信息获取子模板,所以在判断图像块的预测模式为帧内模式后,还可以按照默认值(如默认像素值,可以是根据经验预先配置的亮度值)填充,作为上侧或左侧的子模板。模板最终参见图2M所示。
需要注意的是,在根据图像块A5的运动信息获取模板时,由于图像块A5是帧内模式,不具有运动信息,因此,可以将图像块A5的相邻图像块(如图像块A6)的运动信息确定为图像块A5的运动信息,在得到图像块A5的运动信息后,还可以利用图像块A5的运动信息获取相应的子模板,具体获取方式参见情况一。
同理,在根据图像块A7的运动信息获取模板时,由于图像块A7是帧内模式,不具有运动信息,因此可以将图像块A7的相邻图像块(如图像块A8)的运动信息确定为图像块A7的运动信息,并利用图像块A7的运动信息获取相应的子模板。
情况七:若当前图像块上侧的第一个相邻图像块为帧内模式,且该第一个相邻图像块上侧的图像块为帧间模式,则候选图像块可以包括当前图像块上侧的第一个帧内模式的相邻图像块。若当前图像块左侧的第一个相邻图像块为帧内模式,且该第一个相邻图像块左侧的图像块为帧间模式,则候选图像块可以包括当前图像块左侧的第一个帧内模式的相邻图像块。
参见图2J所示,对于当前图像块A1,若上侧第一个图像块A2是帧内模式,并且图像块A2上侧存在帧间模式的图像块A4,则可以将帧内模式的图像块A2确定为当前图像块A1的候选图像块。若左侧第一个图像块A3是帧内模式,并且图像块A3左侧存在帧间模式的图像块A5,则可以将帧内模式的图像块A3确定为当前图像块A1的候选图像块。
在确定出候选图像块为图像块A2和图像块A3后,可以根据图像块A2的运动信息和图像块A3的运动信息,获取当前图像块A1的模板,具体获取方式参见情况二和情况三。由于在这种情况下,可以根据帧内模式的图像块A2的运动信息和图像块A3的运动信息获取模板,所以在判断图像块的预测模式为帧内模式后,还可以按照默认值(如默认像素值,可以是根据经验预先配置的亮度值)填充,作为上侧或左侧的模板。模板最终参见图2N或者图2O所示。
需要注意的是,在根据图像块A2的运动信息获取模板时,由于图像块A2是帧内模式,不具有运动信息,因此,可以将图像块A2的相邻图像块(即图像块A4)的运动信息确定为图像块A2的运动信息,并利用图像块A2的运动信息获取相应的模板。在根据图像块A3的运动信息获取模板时,由于图像块A3是帧内模式,不具有运动信息,因此可以将图像块A3的相邻图像块(即图像块A5)的运动信息确定为图像块A3的运动信息,并利用图像块A3的运动信息获取相应的模板。
实施例三
在帧间编码技术中,可以使用运动矢量表示当前帧图像的当前图像块与参考帧图像的参考图像块之间的相对位移。例如,图像A是当前帧图像,图像块A1是当前图像块,图像B是图像A的参考帧图像,图像块B1是图像块A1的参考图像块。由于图像A与图像B存在很强的时域相关性,在需要传输图像A的图像块A1时,可以在图像B中进行运动搜索,找到与图像块A1最匹配的图像块B1,并确定图像块A1与图像块B1之间的相对位移,该相对位移也就是图像块A1的运动矢量。
例如,运动矢量为(-6,4),该运动矢量表示图像块B1与图像块A1相比,在水平方向上向左移动6个像素点,在垂直方向上向上移动4个像素点。
编码端向解码端发送图像块A1对应的编码比特流时,编码比特流携带的是图像块A1的运动矢量(-6,4),而不是图像块A1。解码端接收到图像块A1对应的编码比特流后,可以获得图像块A1的运动矢量(-6,4),基于当前图像块A1的位置和运动矢量(-6,4),在参考帧图像B中确定参考图像块B1的位置,即:在当前图像块A1的位置上,向左移动6个像素点,向上移动4个像素点,得到的是参考图像块B1的位置,从参考图像块B1的位置读取参考图像块B1,利用参考图像块B1对当前图像块A1进行重建。由于参考图像块B1和当前图像块A1的相似度很高,因此利用参考图像块B1对当前图像块A1进行重建,可重建出相似度很高的图像。由于运动矢量占用的比特数小于图像块A1占用的比特数,因此通过在当前图像块A1对应的编码比特流中携带运动矢量,而不是携带图像块A1,可以节约大量比特。
然而,若当前帧图像A包括大量图像块,则每个图像块的运动矢量也会占用比较多的比特,因此,为了进一步的节约比特数,还可以利用相邻图像块的运动矢量确定当前图像块A1的运动矢量。例如,在当前帧图像A中,与当前图像块A1相邻的图像块可以包括:图像块A2和图像块A3,在当前图像块A1的运动矢量列表中可以包括图像块A2的运动矢量A21和图像块A3的运动矢量A31。
在一个例子中,编码端在向解码端发送图像块A1对应的编码比特流时,编码比特流携带的是原始运动矢量A21的索引值(即运动矢量列表中的索引值),而不是图像 块A1的运动矢量(-6,4),更不是图像块A1。解码端接收到图像块A1对应的编码比特流后,可以获得原始运动矢量A21的索引值,并根据索引值从运动矢量列表中获取原始运动矢量A21。由于索引值占用的比特数小于运动矢量占用的比特数,因此可以进一步节约比特。
本实施例中,不是直接将原始运动矢量A21作为图像块A1的最终运动矢量,而是根据原始运动矢量A21获取与原始运动矢量A21不同的目标运动矢量,该目标运动矢量是与图像块A1的运动矢量最接近的运动矢量,将该目标运动矢量作为图像块A1的最终运动矢量。与“将原始运动矢量A21直接作为图像块A1的最终运动矢量”的方式相比,“将目标运动矢量作为图像块A1的最终运动矢量”的方式,可以提高预测质量,减少预测错误。
本实施例中,原始运动信息为当前图像块对应的原始运动矢量,目标运动信息为当前图像块对应的目标运动矢量,参见图3所示,为编码方法的流程示意图。
步骤301,编码端获取当前图像块的候选图像块的运动信息。
步骤302,编码端根据候选图像块的运动信息获取当前图像块的模板。
步骤303,编码端根据当前图像块对应的原始运动矢量和获取的模板,得到基于模板的目标运动矢量,该目标运动矢量与该原始运动矢量可以不同。
步骤303之前,可以先获取当前图像块对应的原始运动矢量,例如,假设当前图像块是图像块A1,在编码端的运动矢量列表依次包括运动矢量A21、运动矢量A31、运动矢量A41和运动矢量A51,则可以从运动矢量列表中选择一个运动矢量,作为图像块A1对应的原始运动矢量。上述方式只是示例,对此不做限制,例如,可以直接将默认运动矢量确定为原始运动矢量。
编码端从运动矢量列表中选择一个运动矢量,可以包括:编码端从运动矢量列表中选择第一个运动矢量;或者,从运动矢量列表中选择最后一个运动矢量;或者,从运动矢量列表中随机选择一个运动矢量;或者,采用hash(哈希)算法从运动矢量列表中选择一个运动矢量。上述方式只是几个示例,对此不做限制,只要能够从运动矢量列表中选择运动矢量即可。
运动矢量列表用于记录与当前图像块相邻的图像块的运动矢量。例如,在得到图像块A2的运动矢量A21后,可以将运动矢量A21记录到该运动矢量列表,在得到图像块A3的运动矢量A31后,可以将运动矢量A31记录到该运动矢量列表,以此类推,最终,可以得到图像块A1的运动矢量列表。
步骤304,编码端根据原始运动矢量和目标运动矢量确定当前图像块的最终运动矢量,并根据最终运动矢量对当前图像块进行编码,得到该当前图像块对应的编码比特流。
编码端可以获取原始运动矢量的编码性能和目标运动矢量的编码性能。当目标运动矢量的编码性能优于原始运动矢量的编码性能时,则编码端确定当前图像块的最终运动矢量为目标运动矢量,且编码端向解码端发送携带第一指示信息的编码比特流。当原始运动矢量的编码性能优于目标运动矢量的编码性能时,则编码端确定当前图像块的 最终运动矢量为原始运动矢量,且编码端向解码端发送携带第二指示信息的编码比特流。
上述方式是采用显式方式通知解码端第一指示信息或者第二指示信息,在实际应用中,还可以采用隐式方式通知,即不在编码比特流中携带第一指示信息或者第二指示信息。具体的,编码端和解码端还可以协商运动矢量决策策略或者通过在标准中定义决策策略,并将所述决策策略分别存储在编码端和解码端,如运动矢量决策策略可以约定第一策略信息,或者,约定第二策略信息,或者,约定第三策略信息。
当运动矢量决策策略约定第一策略信息,且目标运动矢量的编码性能优于原始运动矢量的编码性能,则编码比特流中可以不携带第一指示信息和第二指示信息。当运动矢量决策策略约定第二策略信息,且原始运动矢量的编码性能优于目标运动矢量的编码性能,则编码比特流中可以不携带第一指示信息和第二指示信息。又例如,当运动矢量决策策略约定第三策略信息,且相邻图像块采用第一策略信息,则目标运动矢量的编码性能优于原始运动矢量的编码性能时,编码比特流中可以不携带第一指示信息和第二指示信息;当相邻图像块采用第二策略信息,则原始运动矢量的编码性能优于目标运动矢量的编码性能时,编码比特流中可以不携带第一指示信息和第二指示信息。
在一个例子中,编码端还可以获取原始运动矢量在运动矢量列表中的索引值,向解码端发送携带该索引值的编码比特流。例如,若原始运动矢量为运动矢量A21,且运动矢量A21是运动矢量列表的第一个运动矢量,则索引值为1。
编码端获取该原始运动矢量的编码性能和该目标运动矢量的编码性能的过程,可以参见后续的实施例。
步骤305,编码端将当前图像块对应的原始运动矢量或者最终运动矢量存储为当前图像块的运动信息。
由以上技术方案可见,可以根据原始运动矢量获取目标运动矢量,并根据目标运动矢量和原始运动矢量确定当前图像块的最终运动矢量,而不是直接将原始运动矢量作为当前图像块的最终运动矢量,从而提高运动矢量的精度,进而提高编码性能。而且,在根据原始运动矢量获取目标运动矢量时,可以根据候选图像块的运动信息获取当前图像块的模板,并根据所述当前图像块的模板获取当前图像块的目标运动矢量。上述方式可以快速得到当前图像块的模板,继而根据该模板得到当前图像块的目标运动矢量,可以提高编解码效率,并减少编解码时延。例如,在重建阶段之前,就可以获取当前图像块的模板,并根据当前图像块的模板得到当前图像块的目标运动矢量。
实施例四
在实施例三的基础上,根据当前图像块对应的原始运动矢量和获取的模板,得到基于模板的目标运动矢量,其实现流程可以参见图4A所示,可以包括以下步骤。
步骤401,编码端将原始运动矢量确定为中心运动矢量。
步骤402,编码端确定与该中心运动矢量对应的各个边缘运动矢量。其中,边缘运动矢量可以与该中心运动矢量不同。
编码端确定与该中心运动矢量对应的各个边缘运动矢量,可以包括:将该中心 运动矢量(x,y)向不同方向移动偏移量St,从而得到不同方向的边缘运动矢量(x-St,y)、边缘运动矢量(x+St,y)、边缘运动矢量(x,y+St)、边缘运动矢量(x,y-St)。例如,在水平方向上,可以将中心运动矢量(x,y)向左移动偏移量St,得到边缘运动矢量(x-St,y);在水平方向上,可以将中心运动矢量(x,y)向右移动偏移量St,得到边缘运动矢量(x+St,y);在垂直方向上,可以将中心运动矢量(x,y)向上移动偏移量St,得到边缘运动矢量(x,y+St);在垂直方向上,可以将中心运动矢量(x,y)向下移动偏移量St,得到边缘运动矢量(x,y-St)。
偏移量St的初始值可以根据经验配置,如可以为2、4、8、16等。
假设中心运动矢量为(3,3),偏移量St为4,则边缘运动矢量为边缘运动矢量(7,3)、边缘运动矢量(3,7)、边缘运动矢量(-1,3)、边缘运动矢量(3,-1)。
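The edge motion vectors can be written down directly; the following small Python sketch reproduces the example above (center (3, 3), offset St = 4) and is only an illustration of the offsets described here, not an implementation mandated by this application.

    def edge_motion_vectors(center, step):
        # Shift the center motion vector by the offset St in each of the four directions.
        x, y = center
        return [(x - step, y), (x + step, y), (x, y + step), (x, y - step)]

    # Example from the text: center (3, 3) with St = 4 gives (-1, 3), (7, 3), (3, 7), (3, -1).
    print(edge_motion_vectors((3, 3), 4))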
步骤403,编码端根据当前图像块的模板获得中心运动矢量的编码性能,并根据当前图像块的模板获得各个边缘运动矢量的编码性能。
情况一,编码端根据当前图像块的模板获得中心运动矢量的编码性能,可以包括但不限于:根据当前图像块的模板的参数信息和第一目标参考块的参数信息,确定中心运动矢量的编码性能。其中,第一目标参考块可以为模板对应的参考图像块基于中心运动矢量进行偏移之后获得的图像块。具体的,编码端可以根据模板的参数信息和第一目标参考块的参数信息,确定中心运动矢量的预测性能,并根据中心运动矢量的预测性能确定中心运动矢量的编码性能。例如,可以根据该预测性能和编码所需的实际比特数,确定中心运动矢量的编码性能。
上述参数信息可以为亮度值;或者,可以为亮度值和色度值。
假设参数信息为亮度值,为了确定中心运动矢量的编码性能,可以先获取当前图像块的模板的亮度值、第一目标参考块的亮度值。在得到当前图像块的模板后,可以获取模板的每个像素点的亮度值,并获取该模板对应的参考图像块。例如,假设中心运动矢量为(3,3),则可以利用中心运动矢量(3,3)移动该参考图像块,得到与参考图像块对应的图像块X(如对参考图像块向右移动3个像素点,向上移动3个像素点,将处理后的图像块记为图像块X),而图像块X就是第一目标参考块,且可以获取图像块X的每个像素点的亮度值。
基于模板的每个像素点的亮度值、图像块X的每个像素点的亮度值,可以采用如下公式确定中心运动矢量的预测性能:
SAD = ∑_{i=1}^{M} |TM_i − TMP_i|
其中,SAD是可用绝对差总和,用于表示中心运动矢量的预测性能。TM i表示模板的第i个像素点的亮度值,TMP i表示图像块X的第i个像素点的亮度值,M表示像素点的总数量。
假设参数信息为亮度值和色度值,则利用如下公式确定中心运动矢量的亮度值预测性能SAD:
SAD = ∑_{i=1}^{M} |TM_i − TMP_i|
并采用如下公式确定中心运动矢量的色度值预测性能CSAD:
CSAD = ∑_{i=1}^{M_c} |CTM_i − CTMP_i|
亮度值预测性能SAD和色度值预测性能CSAD的平均值,就是中心运动矢量的预测性能。其中,CSAD是可用绝对差总和,用于表示中心运动矢量的色度值预测性能,CTM i表示模板第i个像素点的色度值,CTMP i表示图像块X第i个像素点的色度值,M c表示像素点总数量。
在得到中心运动矢量的预测性能后,可以根据该预测性能和编码所需的实际比特数,确定中心运动矢量的编码性能。例如,可以采用RDO(Rate Distortion Optimized,率失真原则)确定中心运动矢量的编码性能,而RDO通常采用如下公式确定中心运动矢量的编码性能:
J=D+λ*R
其中,J表示编码性能,D表示预测性能,λ是拉格朗日乘子,是根据经验配置的数值,R是图像块编码所需的实际比特数,即编码比特流携带信息的比特总和。
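A small numerical sketch of the two formulas above, with the SAD-style distortion computed between the template and the offset reference block and the cost J = D + λ*R; the pixel values, the λ value, and the bit count are placeholder numbers chosen here for illustration, not values from this application.

    import numpy as np

    def sad(a, b):
        # SAD: sum over pixels of |TM_i - TMP_i|
        return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

    def rd_cost(distortion, bits, lam=10.0):
        # J = D + lambda * R, with an arbitrary illustrative lambda.
        return distortion + lam * bits

    template = np.array([[100, 102], [98, 101]], dtype=np.uint8)
    shifted_ref = np.array([[101, 100], [97, 103]], dtype=np.uint8)
    d = sad(template, shifted_ref)    # |100-101| + |102-100| + |98-97| + |101-103| = 6
    print(d, rd_cost(d, bits=3))      # 6 36.0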
情况二,编码端根据当前图像块的模板获得各个边缘运动矢量的编码性能,可以包括但不限于:对于每个边缘运动矢量,根据当前图像块的模板的参数信息和该边缘运动矢量对应的第二目标参考块的参数信息,确定该边缘运动矢量的编码性能。其中,第二目标参考块可以为模板对应的参考图像块基于该边缘运动矢量进行偏移之后获得的图像块。具体的,编码端可以根据模板的参数信息和第二目标参考块的参数信息,确定该边缘运动矢量的预测性能,从而得到各个边缘运动矢量的预测性能。例如,可以根据该预测性能和编码所需的实际比特数,确定该边缘运动矢量的编码性能。
上述参数信息可以为亮度值;或者,可以为亮度值和色度值。
情况二与情况一类似,区别在于:在情况二中,利用每个边缘运动矢量移动模板的参考图像块,得到对应的第二目标参考块,并利用第二目标参考块获得每个边缘运动矢量的编码性能,而情况一中,利用中心运动矢量移动模板的参考图像块,得到第一目标参考块,并利用第一目标参考块获得中心运动矢量的编码性能。
步骤404,编码端根据中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从中心运动矢量和各个边缘运动矢量中确定目标运动矢量。
编码端可以从中心运动矢量和各个边缘运动矢量中选择编码性能最优的运动矢量;当该编码性能最优的运动矢量不是原始运动矢量时,可以将该编码性能最优的运动矢量确定为目标运动矢量;当该编码性能最优的运动矢量是原始运动矢量时,编码端可以从中心运动矢量和各个边缘运动矢量中选择编码性能次优的运动矢量,并将编码性能次优的运动矢量确定为目标运动矢量。
例如,若编码性能最优的运动矢量是边缘运动矢量(7,3),则编码端可以将边缘运动矢量(7,3)确定为目标运动矢量。若编码性能最优的运动矢量是中心运动矢量为(3,3),也就是原始运动矢量,则编码端还可以将编码性能次优的运动矢量(如边缘运动矢量(7,3))确定为目标运动矢量。
实施例五
在实施例三的基础上,根据当前图像块对应的原始运动矢量和获取的模板,得到基于模板的目标运动矢量,其实现流程可以参见图4B所示,可以包括以下步骤。
步骤411,编码端将原始运动矢量确定为中心运动矢量。
步骤412,编码端确定与该中心运动矢量对应的各个边缘运动矢量。其中,边缘运动矢量可以与该中心运动矢量不同。
步骤413,编码端根据当前图像块的模板获得中心运动矢量的编码性能,并根据当前图像块的模板获得各个边缘运动矢量的编码性能。
步骤411-步骤413可以参见步骤401-步骤403,在此不再赘述。
步骤414,编码端判断是否满足目标运动矢量的迭代结束条件。如果是,则可以执行步骤416;如果否,则可以执行步骤415。
迭代结束条件可以包括但不限于:迭代次数达到次数阈值,或者,执行时间达到时间阈值,或者,偏移量参数St已经被修改为预设数值,如1。
上述只是迭代结束条件的几个示例,对此迭代结束条件不做限制。
步骤415,编码端从中心运动矢量和各个边缘运动矢量中选择编码性能最优的运动矢量作定为新的中心运动矢量,并返回步骤412。
例如,若编码性能最优的运动矢量是边缘运动矢量(7,3),则可以将边缘运动矢量(7,3)确定为新的中心运动矢量,并重新执行步骤412,以此类推。
当第一次执行步骤412时,偏移量参数St的取值可以为初始值,如可以为16。当再次执行步骤412时,先对偏移量参数St的取值进行调整,如调整为上次偏移量参数St减去2,或者,调整为上次偏移量参数St的一半等,对此不做限制,只要小于上次偏移量参数St即可,后续以调整为上次偏移量参数St的一半为例进行说明。因此,当第二次执行步骤412时,则偏移量参数St的取值为8;当第三次执行步骤412时,则偏移量参数St的取值为4;以此类推。
在对偏移量参数St的取值进行调整后,先判断调整后的偏移量参数St是否小于等于预设数值,如1。如果否,则可以基于调整后的偏移量参数St执行步骤412。如果是,则可以将偏移量参数St的取值设置为1,并基于该偏移量参数St(即取值1)执行步骤412,而在执行到步骤414时,判断结果为满足迭代结束条件。
步骤416,编码端根据中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从中心运动矢量和各个边缘运动矢量中确定目标运动矢量。
步骤416的处理可以参见步骤404,在此不再赘述。
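A minimal sketch of the iterative search in steps 411–416: the offset starts at an initial value (16 in the text), the best-performing vector of each round becomes the new center, and the offset is reduced (halved here) until it reaches 1. The cost function is passed in as a callable; the halving rule matches one of the adjustment options mentioned above, and the step-404 rule that the final target vector must differ from the original vector is omitted for brevity.

    def iterative_refine(original_mv, cost, initial_step=16):
        # cost(mv) returns the template-based encoding cost of a motion vector.
        center = original_mv
        step = initial_step
        while True:
            x, y = center
            candidates = [center, (x - step, y), (x + step, y), (x, y + step), (x, y - step)]
            best = min(candidates, key=cost)
            if step <= 1:
                # Iteration end condition (step 414): the offset has been reduced to 1.
                return best
            center = best             # step 415: the best vector becomes the new center
            step = max(step // 2, 1)  # the offset is halved for the next round

    print(iterative_refine((3, 3), cost=lambda mv: abs(mv[0] - 5) + abs(mv[1] - 2)))  # -> (5, 2)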
实施例六
在实施例三的基础上,编码端可以获取原始运动矢量的编码性能和目标运动矢量的编码性能。编码端获取原始运动矢量的编码性能,可以包括但不限于:根据当前图像块的模板的参数信息和第三目标参考块的参数信息,确定原始运动矢量的编码性能。其中第三目标参考块为模板对应的参考图像块基于原始运动矢量进行偏移之后获得的图像块。具体的,可以根据模板的参数信息和第三目标参考块的参数信息,确定原始运动矢量的预测性能,并根据该预测性能确定原始运动矢量的编码性能;例如,根据该预测性能和编码所需的实际比特数,确定原始运动矢量的编码性能。上述参数信息可以为亮度值;或者,亮度值和色度值。
编码端获取目标运动矢量的编码性能,可以包括但不限于:根据当前图像块的模板的参数信息和第四目标参考块的参数信息,确定目标运动矢量的编码性能。其中,第四目标参考块为模板对应的参考图像块基于目标运动矢量进行偏移之后获得的图像块。具体的,可以根据模板的参数信息和第四目标参考块的参数信息,确定目标运动矢量的预测性能,并根据该预测性能确定目标运动矢量的编码性能;例如,根据该预测性能和编码所需的实际比特数,确定目标运动矢量的编码性能。其中,上述参数信息可以为亮度值;或者,亮度值和色度值。
上述过程与实施例四类似,区别在于:在得到第三目标参考块或者第四目标参考块时,是基于原始运动矢量或者目标运动矢量移动模板对应的参考图像块,而不是如实施例三中所述基于中心运动矢量移动模板对应的参考图像块。
实施例七
本实施例中,原始运动信息为当前图像块对应的原始运动矢量和原始参考帧,目标运动信息为当前图像块的目标运动矢量和目标参考帧。基于此,编码端可以获取当前图像块对应的原始运动矢量和原始参考帧,并根据该原始运动矢量、该原始参考帧和获取的模板,得到基于模板的目标运动矢量和目标参考帧。然后,可以根据该目标运动矢量和目标参考帧对当前图像块进行编码,得到当前图像块对应的编码比特流,并将编码比特流发送给解码端。
在一个例子中,编码端可以先获取当前图像块对应的原始运动矢量,假设当前图像块是图像块A1,在编码端的运动矢量列表依次包括运动矢量A21、运动矢量A31、运动矢量A41和运动矢量A51,则从运动矢量列表中选择一个运动矢量作为图像块A1的原始运动矢量。上述方式只是示例,对此不做限制,例如,可以直接将默认运动矢量确定为原始运动矢量。
在一个例子中,对于当前图像块来说,可以有一个或者多个参考帧(具有很强时域相关性的视频帧),可以将其中一个参考帧作为原始参考帧,而剩余的参考帧均为候选参考帧。在本实施例中,需要从原始参考帧和所有候选参考帧中选取出目标参考帧,而目标参考帧也就是当前图像块最终参考帧,使用该目标参考帧进行后续处理。
在一个例子中,编码端根据原始运动矢量、原始参考帧和获取的模板,得到基于模板的目标运动矢量和目标参考帧的过程,可以参见后续实施例。例如,将运动矢量 A21确定为原始运动矢量,将参考帧1确定为原始参考帧,将参考帧2和参考帧3确定为候选参考帧,则利用原始运动矢量A21、参考帧1、参考帧2和参考帧3获取目标运动矢量和目标参考帧。其中,目标运动矢量可作为当前图像块的最终运动矢量,该目标参考帧可以为参考帧1、参考帧2和参考帧3中的任一个,且目标参考帧可作为当前图像块的最终参考帧。
编码端在得到目标运动矢量和目标参考帧后,可以利用目标运动矢量和目标参考帧对当前图像块进行编码,对此编码方式不做限制。在编码完成后,编码端可以得到当前图像块对应的编码比特流,并将所述编码比特流发送给解码端。
编码端可以根据原始运动矢量和目标运动矢量向解码端发送编码比特流。具体的,可以获取原始运动矢量的编码性能和目标运动矢量的编码性能。当目标运动矢量的编码性能优于原始运动矢量的编码性能时,向解码端发送的当前图像块对应的编码比特流中携带第一指示信息。当原始运动矢量的编码性能优于目标运动矢量的编码性能时,向解码端发送的当前图像块对应的编码比特流中携带第二指示信息。
编码端获取原始运动矢量的编码性能和目标运动矢量的编码性能的过程,可以参见上述的实施例六。
上述方式是采用显式方式通知第一指示信息或者第二指示信息,在实际应用中,还可以采用隐式方式通知第一指示信息或者第二指示信息,即不在当前图像块对应的编码比特流中携带第一指示信息或者第二指示信息,具体的,编码端和解码端还可以协商决策策略或者通过在标准中定义决策策略,并将所述决策策略分别存储在编码端和解码端,如决策策略可以约定第一策略信息;或者,约定第二策略信息;或者,约定第三策略信息,第三策略信息为采用与当前图像块的相邻图像块相同的策略信息。然后,可以基于决策策略确定在哪种情况下可以不在编码比特流中携带第一指示信息或者第二指示信息,详细处理参见实施例三。
在一个例子中,编码端根据原始运动矢量和目标运动矢量向解码端发送编码比特流,还可以包括:获取原始运动矢量在运动矢量列表中的索引值;在向解码端发送的当前图像块对应的编码比特流中携带该索引值。例如,若原始运动矢量为运动矢量A21,且运动矢量A21是运动矢量列表的第一个运动矢量,则该索引值可以为1。
由以上技术方案可见,可以根据原始运动矢量和原始参考帧获取目标运动矢量和目标参考帧,并根据目标运动矢量确定当前图像块的最终运动矢量,根据目标参考帧确定当前图像块的最终参考帧,而不是直接将原始运动矢量确定为当前图像块的最终运动矢量,并直接将原始参考帧确定为当前图像块的最终参考帧,从而有效地提高了运动矢量的精度,并提高了编码性能。而且,根据候选图像块的运动信息获取当前图像块的模板,可以快速得到当前图像块的模板,从而可以提高编码效率,减少编码时延,如在重建阶段之前,就可以获取当前图像块的模板。
实施例八
在实施例七的基础上,编码端根据原始运动矢量、原始参考帧和模板,得到基于模板的目标运动矢量和目标参考帧,实现流程可以参见图4C所示。
步骤421,编码端基于当前图像块的模板,根据原始运动矢量获取原始参考帧对应的候选运动矢量,该候选运动矢量可以与该原始运动矢量不同。
在一个例子中,编码端基于当前图像块的模板,根据原始运动矢量获取原始参考帧对应的候选运动矢量,可以包括但不限于:编码端将原始运动矢量确定为中心运动矢量,并确定与中心运动矢量对应的各个边缘运动矢量,边缘运动矢量与中心运动矢量不同;编码端根据当前图像块的模板获得中心运动矢量的编码性能,并根据当前图像块的模板获得各个边缘运动矢量的编码性能;然后,编码端可以根据中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从中心运动矢量和各个边缘运动矢量中确定原始参考帧对应的候选运动矢量。
编码端根据原始运动矢量获取原始参考帧的候选运动矢量的过程,可以参见实施例四或者实施例五,只是将实施例四或者实施例五中的目标运动矢量,替换为原始参考帧的候选运动矢量即可。
步骤422,编码端根据原始运动矢量获取各个候选参考帧对应的初始运动矢量。
在一个例子中,编码端根据原始运动矢量获取各个候选参考帧对应的初始运动矢量,可以包括:对于每个候选参考帧,可以根据当前图像块所在帧与原始参考帧的距离(如当前图像块所在帧与原始参考帧之间的帧数)、当前图像块所在帧与所述该候选参考帧的距离和原始运动矢量,获取该候选参考帧的初始运动矢量。
例如,假设原始运动矢量为运动矢量1,原始参考帧为参考帧1,候选参考帧为参考帧2和参考帧3,当前图像块所在帧(后续称为当前帧)与参考帧1的距离为d1,当前帧与参考帧2的距离为d2,当前帧与参考帧3的距离为d3,则:参考帧2对应的初始运动矢量为运动矢量1*(d2/d1),参考帧3对应的初始运动矢量为运动矢量1*(d3/d1)。
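The distance-based scaling in this example can be expressed directly; in the following sketch the initial motion vector of a candidate reference frame is the original motion vector scaled by the ratio of temporal distances (d2/d1, d3/d1). The rounding to integer components is an assumption of this sketch.

    def scaled_initial_mv(original_mv, d_orig, d_candidate):
        # initial MV for a candidate reference frame = original MV * (d_candidate / d_orig)
        scale = d_candidate / d_orig
        return (round(original_mv[0] * scale), round(original_mv[1] * scale))

    # Example: original MV (8, -4), distance d1 = 2 to the original reference frame,
    # distance d2 = 4 to a candidate reference frame -> (16, -8).
    print(scaled_initial_mv((8, -4), 2, 4))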
步骤423,编码端根据候选参考帧的初始运动矢量获取候选参考帧对应的候选运动矢量。
在一个例子中,编码端根据候选参考帧的初始运动矢量获取候选参考帧对应的候选运动矢量,可以包括但不限于:对于每个候选参考帧,编码端可以将该候选参考帧的初始运动矢量确定为中心运动矢量,并确定与中心运动矢量对应的各个边缘运动矢量,边缘运动矢量与该中心运动矢量不同;编码端可以根据当前图像块的模板获得中心运动矢量的编码性能,并根据当前图像块的模板获得各个边缘运动矢量的编码性能;然后,编码端可以根据中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从中心运动矢量和各个边缘运动矢量中确定该候选参考帧对应的候选运动矢量。
编码端根据候选参考帧的初始运动矢量获取该候选参考帧的候选运动矢量的过程,可以参见实施例四或实施例五,只是将实施例四或实施例五中的目标运动矢量,替换为候选参考帧的候选运动矢量,将实施例四或实施例五中的原始运动矢量,替换为候选参考帧的初始运动矢量即可。
步骤424,编码端从原始参考帧对应的候选运动矢量以及各个候选参考帧对应的候选运动矢量中,选择编码性能最优的候选运动矢量作为目标运动矢量。
例如,编码端得到原始参考帧对应的候选运动矢量以及各个候选参考帧对应的候选运动矢量后,可以获取每个候选运动矢量的编码性能,具体获取方式参见上述实施例四,只是将实施例四的中心运动矢量替换为候选运动矢量即可。编码端在得到每个候选运动矢量的编码性能后,就可以选择编码性能最优的候选运动矢量,对此选择过程不再赘述。
步骤425,编码端将目标运动矢量对应的参考帧确定为目标参考帧。
例如,当目标运动矢量对应原始参考帧时,编码端可以将该原始参考帧确定为目标参考帧,当目标运动矢量对应候选参考帧时,编码端可以将该候选参考帧确定为目标参考帧。
本实施例中,从原始参考帧和所有候选参考帧中选取目标参考帧,而目标参考帧也就是当前图像块最终参考帧。从原始参考帧对应的候选运动矢量和所有候选参考帧对应的候选运动矢量中,选取目标运动矢量(即编码性能最优的候选运动矢量),目标运动矢量就是当前图像块最终的运动矢量,并选择目标运动矢量对应的参考帧作为目标参考帧。
实施例九
参见图5所示,为运动矢量确定方法的流程示意图,该方法可以包括以下步骤。
步骤501,解码端获取当前图像块的候选图像块的运动信息。
当前图像块的候选图像块可以包括但不限于:当前图像块的空域候选图像块;或者,当前图像块的时域候选图像块;对此候选图像块不做限制。
候选图像块的运动信息可以包括但不限于:候选图像块的原始运动信息,如原始运动矢量,或者,原始运动矢量和原始参考帧。候选图像块的最终运动信息,如最终运动矢量,或者,最终运动矢量和最终参考帧。
在一个例子中,当前图像块的最终运动信息仅用于当前图像块的解码(当前图像块的预测值生成、重建等解码过程),不用于相邻图像块的预测,即从候选图像块获得的运动信息为候选图像块的原始运动信息,而不是候选图像块的最终运动信息。当前图像块的解码结束后,不保存该最终运动信息,而是保存原始运动信息,即将当前图像块的运动信息恢复为原始运动信息。
在一个例子中,解码端可以存储候选图像块的运动信息,如将候选图像块的原始运动信息存储为候选图像块的运动信息或将候选图像块的最终运动信息存储为候选图像块的运动信息,这样,在步骤501中,可以直接从解码端本地查询到候选图像块的原始运动信息。
在另一个例子中,解码端可以获取候选图像块的原始运动信息(如原始运动矢量和原始参考帧等),例如,从候选图像块的运动矢量列表中选取一个运动矢量,而选取的运动矢量就是原始运动矢量。又例如,可以将候选图像块的相邻图像块的运动信息,确定为候选图像块的原始运动信息。
上述方式只是获取候选图像块的原始运动信息的示例,对此不做限制。
步骤502,解码端根据候选图像块的运动信息获取当前图像块的模板。
解码端根据候选图像块的运动信息获取当前图像块的模板的方法与编码端相同,具体可参见实施例二的相关内容,在此不再赘述。
步骤503,解码端根据当前图像块对应的原始运动信息和获取的模板,得到基于模板的目标运动信息。其中,所述原始运动信息包括原始运动矢量,所述目标运动信息包括目标运动矢量。或者,所述原始运动信息包括原始运动矢量和原始参考帧,所述目标运动信息包括目标运动矢量和目标参考帧。
当原始运动信息包括原始运动矢量,目标运动信息包括目标运动矢量时,解码端根据当前图像块对应的原始运动信息和获取的模板,得到基于该模板的目标运动信息,包括:将所述原始运动矢量确定为中心运动矢量;确定与所述中心运动矢量对应的各个边缘运动矢量,所述边缘运动矢量与所述中心运动矢量不同;根据所述模板获得所述中心运动矢量的编码性能和各个所述边缘运动矢量的编码性能;根据所述中心运动矢量的编码性能和各个所述边缘运动矢量的编码性能,从所述中心运动矢量和各个所述边缘运动矢量中确定所述目标运动矢量。
当原始运动信息包括原始运动矢量和原始参考帧,目标运动信息包括目标运动矢量和目标参考帧时,解码端根据当前图像块对应的原始运动信息和获取的模板,得到基于该模板的目标运动信息,包括:基于当前图像块的所述模板,根据所述原始运动矢量获取所述原始参考帧对应的候选运动矢量;根据所述原始运动矢量获取各个候选参考帧对应的各个初始运动矢量;根据所述各个初始运动矢量获取所述各个候选参考帧对应的各个候选运动矢量;从所述原始参考帧对应的候选运动矢量以及各个所述候选参考帧对应的各个候选运动矢量中选择编码性能最优的候选运动矢量作为所述目标运动矢量;将目标运动矢量对应的参考帧确定为所述目标参考帧。
在一个例子中,解码端获取当前图像块的候选图像块的运动信息前,解码端还可以获取当前图像块对应的编码比特流,该编码比特流可以是编码端发送的,也可以是解码端对当前图像块对应的比特流进行编码后得到的,对此不做限制,后续以编码端发送编码比特流为例进行说明。
例如,解码端可以接收来自编码端的当前图像块对应的编码比特流,当该编码比特流携带第一指示信息时,该第一指示信息用于指示基于模板确定当前图像块的最终运动信息,可以根据第一指示信息,获取当前图像块的候选图像块的运动信息,并根据候选图像块的运动信息获取当前图像块的模板,即执行上述步骤501-503。
又例如,解码端可以接收来自编码端的当前图像块对应的编码比特流,当该编码比特流携带第二指示信息,该第二指示信息用于指示基于当前图像块对应的原始运动信息确定当前图像块的最终运动信息时,根据第二指示信息,获取当前图像块对应的原始运动信息,并根据该原始运动信息,确定当前图像块的最终运动信息。
上述方式是采用显式方式通知第一指示信息或者第二指示信息,在实际应用中,编码端还可以采用隐式方式通知,即当前图像块对应的编码比特流中未携带第一指示信息或者第二指示信息。基于此,解码端可以根据本地预设的第一策略信息,获取当前图 像块的候选图像块的运动信息,并根据候选图像块的运动信息获取当前图像块的模板,其中,第一策略信息用于指示基于模板确定当前图像块的最终运动信息。或者,解码端可以根据本地预设的第二策略信息,获取当前图像块对应的原始运动信息;根据原始运动信息,确定当前图像块的最终运动信息,其中,第二策略信息用于指示基于当前图像块对应的原始运动信息确定当前图像块的最终运动信息。或者,解码端可以根据本地预设的第三策略信息确定当前图像块采用的策略信息,其中,该第三策略信息为采用与当前图像块的相邻图像块相同的策略信息确定当前图像块的最终运动信息;然后,根据相邻图像块的策略信息,确定当前图像块的最终运动信息。
编码端和解码端还可以协商运动矢量决策策略或者通过在标准中定义决策策略,并将所述决策策略分别存储在编码端和解码端,如运动矢量决策策略可以约定第一策略信息;或约定第二策略信息;或约定第三策略信息。
当运动矢量决策策略约定第一策略信息时,则本地预设为第一策略信息,基于第一策略信息获取当前图像块的候选图像块的运动信息,并根据候选图像块的运动信息获取当前图像块的模板。当运动矢量决策策略约定第二策略信息时,则本地预设为第二策略信息,基于第二策略信息获取当前图像块对应的原始运动信息;根据原始运动信息,确定当前图像块的最终运动信息。当运动矢量决策策略约定第三策略信息时,若相邻图像块的策略信息为第一策略信息,则获取当前图像块的候选图像块的运动信息,并根据候选图像块的运动信息获取当前图像块的模板;若相邻图像块的策略信息为第二策略信息,则获取当前图像块对应的原始运动信息,根据原始运动信息,确定当前图像块的最终运动信息。
在一个例子中,在根据当前图像块对应的原始运动信息和获取的模板,得到基于模板的目标运动信息之前,解码端还可以接收来自编码端的当前图像块对应的编码比特流,该编码比特流携带原始运动矢量在运动矢量列表中的索引值;解码端可以从运动矢量列表中选取与该索引值对应的运动矢量;将选取的运动矢量确定为当前图像块对应的原始运动矢量。如索引值为1时,则获取运动矢量列表的第一个运动矢量,该运动矢量是当前图像块对应的原始运动矢量。其中,运动矢量列表用于记录与当前图像块相邻的图像块的运动矢量,解码端维护的运动矢量列表与编码端维护的运动矢量列表相同。上述方式只是示例,如可以将当前图像块的候选图像块的运动矢量确定为当前图像块对应的原始运动矢量。
解码端根据当前图像块对应的原始运动信息和获取的模板,得到基于模板的目标运动信息的处理过程,可以参见后续实施例。
步骤504,解码端根据目标运动信息确定当前图像块的最终运动信息,例如,可以将目标运动信息确定为当前图像块的最终运动信息。
比如目标运动信息包括目标运动矢量,最终运动信息包括最终运动矢量,在一个例子中,可以将当前图像块的原始运动矢量和目标运动矢量进行编码性能的比较,将性能较好的一个运动矢量作为最终运动矢量。
步骤505,解码端根据最终运动信息对当前图像块进行解码,并将当前图像块对应的原始运动信息或者最终运动信息存储为当前图像块的运动信息。
在其它图像块的处理过程中,若选择上述当前图像块作为其他图像块的候选图像块时,步骤501中,所使用的候选图像块的运动信息也就是本步骤中存储的当前图像块的运动信息。
在一个例子中,将当前图像块对应的原始运动信息或者最终运动信息存储为当前图像块的运动信息,可以包括:在当前图像块的原始运动信息是根据空域相邻图像块(或者空域次邻图像块)的运动信息获得时,将当前图像块对应的原始运动信息存储为当前图像块的运动信息。在当前图像块的原始运动信息不是根据空域相邻图像块(或者空域次邻图像块)的运动信息获得时,将当前图像块对应的最终运动信息存储为当前图像块的运动信息。例如,在当前图像块对应的原始运动信息是根据时域相邻图像块的运动信息获得时,将当前图像块对应的最终运动信息存储为当前图像块的运动信息。在默认情况下,也可以将当前图像块对应的最终运动信息存储为当前图像块的运动信息。
在一些实施例中,原始运动信息至少包括原始运动矢量,在当前图像块的原始运动矢量是根据空域相邻图像块的运动矢量获得时,将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量。在当前图像块的原始运动信息不是根据空域相邻图像块的运动矢量获得时,将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。例如,在当前图像块对应的原始运动矢量是根据时域相邻图像块的运动矢量获得时,将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。在默认情况下,也可以将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。
在一个例子中,在解码之后,将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量,或者在另一个例子中,在解码之后,将当前图像块的最终运动矢量存储为当前图像块的运动矢量。
在一个例子中,步骤501中的候选图像块的运动信息可以为候选图像块的原始运动矢量,步骤505中,解码端可以将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量。
在一个例子中,步骤501中的候选图像块的运动信息可以为候选图像块的最终运动矢量,步骤505中,解码端可以将当前图像块对应的原始运动矢量存储为当前图像块的运动矢量。
在一个例子中,步骤501中的候选图像块的运动信息可以为候选图像块的原始运动矢量,步骤505中,解码端可以将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。
在一个例子中,步骤501中的候选图像块的运动信息可以为候选图像块的最终运动矢量,步骤505中,解码端可以将当前图像块对应的最终运动矢量存储为当前图像块的运动矢量。
由以上技术方案可见,可以根据原始运动矢量获取目标运动矢量,并根据原始运动矢量和目标运动矢量确定当前图像块的最终运动矢量,而不是直接将原始运动矢量作为当前图像块的最终运动矢量,从而提高运动矢量的精度,进而提高解码性能。而且,在根据原始运动矢量获取目标运动矢量时,可以根据候选图像块的运动信息获取当前图 像块的模板,基于当前图像块的模板,根据原始运动矢量获取当前图像块的目标运动矢量。上述方式可以快速得到当前图像块的模板,继而根据该模板得到当前图像块的目标运动矢量,可以提高解码效率,并减少解码时延。例如,在解码的重建阶段之前,就可以获取当前图像块的模板,并根据模板得到当前图像块的目标运动矢量。
解码端可以同时对多个图像块进行并行解码,从而进一步提高解码速度,提高解码效率,降低解码时延,提高解码性能。
在一个例子中,当前图像块的最终运动信息仅用于当前图像块的解码(当前图像块的预测值生成、重建等解码过程),不用于相邻图像块的预测,即相邻图像块获得的候选运动信息为原始运动信息,而不是最终运动信息。当前图像块的解码结束后,不保存该最终运动信息,而是保存原始运动信息,即将当前图像块的运动信息恢复为原始运动信息。
实施例十
本实施例中,与采用重建信息和预测信息生成模板的方式不同,可以根据当前图像块的候选图像块的运动信息(如运动矢量和参考帧索引等)获取当前图像块的模板。具体的,解码端可以根据该参考帧索引确定该候选图像块的参考帧图像;根据该运动矢量从该参考帧图像中获取与该候选图像块的参考图像块,并根据该参考图像块获取该当前图像块的模板。
候选图像块可以包括M个第一候选图像块和N个第二候选图像块,M为大于或等于1的自然数,N为大于或等于0的自然数,或者,M为大于或等于0的自然数,N为大于或等于1的自然数。第一候选图像块为当前图像块上侧的候选图像块,第二候选图像块为当前图像块左侧的候选图像块。第一候选图像块包括当前图像块上侧的相邻图像块和/或次邻图像块。相邻图像块的预测模式为帧间模式或者帧内模式;次邻图像块的预测模式为帧间模式。第二候选图像块包括当前图像块左侧的相邻图像块和/或次邻图像块。相邻图像块的预测模式为帧间模式或者帧内模式;次邻图像块的预测模式为帧间模式。
根据候选图像块的运动信息获取当前图像块的模板,可以包括但不限于:根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板;根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板;然后,基于第一模板和第二模块确定当前图像块的模板。其中,基于第一模板和第二模板确定当前图像块的模板,可以包括但不限于:将第一模板确定为当前图像块的模板;或者,将第二模板确定为当前图像块的模板;或者,将第一模板和第二模板拼接之后确定为当前图像块的模板。
在一个例子中,当M大于1时,则上述第一模板可以包括M个子模板或者P个子模板,且由M个子模板或P个子模板拼接而成,P为帧间模式的第一候选图像块的数量,P小于或等于M。当M等于1时,则第一模板可以包括第一子模板,第一子模板是根据当前图像块上侧的一个候选图像块的运动矢量预测模式和运动信息确定的。当N大于1时,则第二模板可以包括N个子模板或者R个子模板,且由N个子模板或R个子模板拼接而成,R为帧间模式的第二候选图像块的数量,R小于或等于N。当N等于1时,则第二模板可以包括第二子模板,第二子模板是根据当前图像块左侧的一个候 选图像块的运动矢量预测模式和运动信息确定的。
在一个例子中,根据候选图像块的运动信息获取当前图像块的模板,还可以包括但不限于:当当前图像块对应多个运动信息时,根据每个运动信息获取该运动信息对应的模板。获取每个运动信息对应的权重,并根据每个运动信息对应的权重以及该运动信息对应的模板获取当前图像块的模板。
实施例十一
当所述原始运动信息包括原始运动矢量,所述目标运动信息包括目标运动矢量时,解码端根据当前图像块对应的原始运动矢量和获取的模板,得到基于模板的目标运动矢量,其实现流程可以参见图6A所示,可以包括以下步骤。
步骤601,解码端将原始运动矢量确定为中心运动矢量。
步骤602,解码端确定与该中心运动矢量对应的各个边缘运动矢量。其中,边缘运动矢量可以与该中心运动矢量不同。
确定与中心运动矢量对应的各个边缘运动矢量,可以包括:将中心运动矢量(x,y)向不同方向移动偏移量St,得到不同方向的边缘运动矢量(x-St,y)、边缘运动矢量(x+St,y)、边缘运动矢量(x,y+St)、边缘运动矢量(x,y-St)。
步骤603,解码端根据当前图像块的模板获得中心运动矢量的编码性能,并根据当前图像块的模板获得各个边缘运动矢量的编码性能。
解码端根据当前图像块的模板获得中心运动矢量的编码性能,可以包括但不限于:解码端可以根据当前图像块的模板的参数信息和第一目标参考块的参数信息,确定中心运动矢量的编码性能,其中,第一目标参考块为模板对应的参考图像块基于中心运动矢量进行偏移之后获得的图像块。
编码端根据当前图像块的模板获得各个边缘运动矢量的编码性能,可以包括但不限于:对于每个边缘运动矢量,根据当前图像块的模板的参数信息和该边缘运动矢量对应的第二目标参考块的参数信息,确定该边缘运动矢量的编码性能。其中,第二目标参考块为模板对应的参考图像块基于该边缘运动矢量进行偏移之后获得的图像块。
步骤604,解码端根据中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从中心运动矢量和各个边缘运动矢量中确定目标运动矢量。
具体的,解码端可以从中心运动矢量和各个边缘运动矢量中选择编码性能最优的运动矢量;当该编码性能最优的运动矢量不是原始运动矢量时,可以将该编码性能最优的运动矢量确定为目标运动矢量;当该编码性能最优的运动矢量是原始运动矢量时,解码端可以从中心运动矢量和各个边缘运动矢量中选择编码性能次优的运动矢量,并将编码性能次优的运动矢量确定为目标运动矢量。
实施例十二
当所述原始运动信息包括原始运动矢量,所述目标运动信息包括目标运动矢量时,解码端根据当前图像块对应的原始运动矢量和获取的模板,得到基于模板的目标运动矢量,其实现流程可以参见图6B所示,可以包括以下步骤。
步骤611,解码端将原始运动矢量确定为中心运动矢量。
步骤612,解码端确定与该中心运动矢量对应的各个边缘运动矢量。其中,边缘运动矢量可以与该中心运动矢量不同。
步骤613,解码端根据当前图像块的模板获得中心运动矢量的编码性能,并根据当前图像块的模板获得各个边缘运动矢量的编码性能。
步骤614,解码端判断是否满足目标运动矢量的迭代结束条件。如果是,则可以执行步骤616;如果否,则可以执行步骤615。
步骤615,解码端从中心运动矢量和各个边缘运动矢量中选择编码性能最优的运动矢量作定为新的中心运动矢量,并返回步骤612。
步骤616,解码端根据中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从中心运动矢量和各个边缘运动矢量中确定目标运动矢量。
步骤611-步骤616的详细处理流程,可以参见实施例四和实施例五,只是执行主体从编码端变更为解码端,其它处理流程相同。
实施例十三
当所述原始运动信息包括原始运动矢量和原始参考帧,所述目标运动信息包括目标运动矢量和目标参考帧时,解码端可以获取当前图像块对应的原始运动矢量和原始参考帧,基于当前图像块的模板,解码端可以根据当前图像块对应的原始运动矢量和该原始参考帧,得到基于模板的目标运动矢量(目标运动矢量可以与原始运动矢量不同)和目标参考帧。
解码端基于当前图像块的模板,根据当前图像块对应的原始运动矢量和该原始参考帧,得到基于模板的目标运动矢量和目标参考帧,可以包括但不限于:基于当前图像块的模板,根据该原始运动矢量获取该原始参考帧对应的候选运动矢量;根据该原始运动矢量获取各个候选参考帧对应的初始运动矢量;根据候选参考帧的初始运动矢量获取候选参考帧对应的候选运动矢量;然后,可以从原始参考帧对应的候选运动矢量以及各个候选参考帧对应的候选运动矢量中选择编码性能最优的候选运动矢量作为目标运动矢量;将目标运动矢量对应的参考帧确定为目标参考帧。
根据原始运动矢量获取各个候选参考帧对应的初始运动矢量,可以包括但不限于:对于每个候选参考帧,根据当前图像块所在帧与原始参考帧的距离、当前图像块所在帧与该候选参考帧的距离和原始运动矢量,获取该候选参考帧的初始运动矢量。
在一个例子中,基于当前图像块的模板,根据该原始运动矢量获取该原始参考帧对应的候选运动矢量,可以包括但不限于:将该原始运动矢量确定为中心运动矢量,并确定与该中心运动矢量对应的各个边缘运动矢量,边缘运动矢量与该中心运动矢量不同;然后,根据当前图像块的模板获得该中心运动矢量的编码性能,并根据当前图像块的模板获得各个边缘运动矢量的编码性能;然后,可以根据该中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从该中心运动矢量和各个边缘运动矢量中确定原始参考帧对应的候选运动矢量。
在一个例子中,根据候选参考帧的初始运动矢量获取候选参考帧对应的候选运动矢量,可以包括但不限于:对于每个候选参考帧,将该候选参考帧的初始运动矢量确定为中心运动矢量,并确定与该中心运动矢量对应的各个边缘运动矢量,边缘运动矢量与中心运动矢量不同;根据当前图像块的模板获得中心运动矢量的编码性能和各个边缘运动矢量的编码性能;根据中心运动矢量的编码性能和各个边缘运动矢量的编码性能,从中心运动矢量和各个边缘运动矢量中确定第一候选参考帧对应的候选运动矢量。
实施例十四
基于与上述方法同样的申请构思,本申请实施例还提出一种解码装置,应用于解码端,如图7所示,为所述装置的结构图,所述装置包括:
获取模块71,用于获取当前图像块的候选图像块的运动信息;根据所述候选图像块的运动信息获取所述当前图像块的模板;根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息;根据所述目标运动信息确定所述当前图像块的最终运动信息;
确定模块72,用于根据所述最终运动信息对所述当前图像块进行解码;将所述当前图像块对应的原始运动信息或者最终运动信息存储为所述当前图像块的运动信息。
在一个例子中,所述确定模块72将所述当前图像块对应的原始运动信息或者最终运动信息存储为所述当前图像块的运动信息时具体用于:在所述当前图像块的原始运动信息是根据空域相邻图像块的运动信息获得时,将所述当前图像块对应的原始运动信息存储为所述当前图像块的运动信息。
在一个例子中,所述确定模块72将所述当前图像块对应的原始运动信息或者最终运动信息存储为所述当前图像块的运动信息时具体用于:在所述当前图像块的原始运动信息不是根据空域相邻图像块的运动信息获得时,将所述当前图像块对应的最终运动信息存储为所述当前图像块的运动信息。
所述获取模块71还用于:接收来自编码端所述当前图像块对应的编码比特流,所述编码比特流携带第一指示信息,所述第一指示信息用于指示基于模板确定所述当前图像块的最终运动信息;根据所述第一指示信息,获取当前图像块的候选图像块的运动信息,根据所述候选图像块的运动信息获取所述当前图像块的模板。
所述获取模块71还用于:接收来自编码端所述当前图像块对应的编码比特流,所述编码比特流携带第二指示信息,所述第二指示信息用于指示基于当前图像块的原始运动信息确定所述当前图像块的最终运动信息;根据所述第二指示信息,获取所述当前图像块对应的原始运动信息;根据所述原始运动信息,确定所述当前图像块的最终运动信息。
所述获取模块71还用于:根据本地预设的第一策略信息,获取当前图像块的候选图像块的运动信息,根据所述候选图像块的运动信息获取所述当前图像块的模板;所述第一策略信息用于指示基于模板确定所述当前图像块的最终运动信息。或者,所述获取模块71还用于:根据本地预设的第二策略信息,获取所述当前图像块对应的原始运动信息;根据所述原始运动信息,确定所述当前图像块的最终运动信息;所述第二策略 信息用于指示基于当前图像块对应的原始运动信息确定所述当前图像块的最终运动信息。或者,所述获取模块71还用于:根据本地预设的第三策略信息,参考所述当前图像块的相邻图像块所采用的策略信息,确定当前图像块是基于所述原始运动信息还是所述目标运动信息被编码,其中,所述第三策略信息表示采用与当前图像块的相邻图像块相同的策略信息确定所述当前图像块的最终运动信息。
在一个例子中,所述原始运动信息包括原始运动矢量;所述获取模块71还用于:接收来自编码端所述当前图像块对应的的编码比特流,所述编码比特流携带原始运动矢量在运动矢量列表中的索引值;从运动矢量列表中选取与所述索引值对应的运动矢量;将选取的所述运动矢量确定为所述当前图像块对应的原始运动矢量;或者,将所述当前图像块的候选图像块的运动矢量确定为所述当前图像块对应的原始运动矢量。
在一个例子中,所述候选图像块包括M个第一候选图像块和N个第二候选图像块,M为大于或等于1的自然数,N为大于或等于0的自然数,或者,M为大于或等于0的自然数,N为大于或等于1的自然数;所述第一候选图像块为所述当前图像块上侧的候选图像块,所述第二候选图像块为所述当前图像块左侧的候选图像块;所述获取模块71根据所述候选图像块的运动信息获取所述当前图像块的模板时具体用于:根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板;根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板;将所述第一模板确定为所述当前图像块的模板;或者,将所述第二模板确定为所述当前图像块的模板;或者,将所述第一模板和所述第二模板拼接之后确定为所述当前图像块的模板。
在一个例子中,所述第一候选图像块包括所述当前图像块上侧的相邻图像块和/或次邻图像块;所述相邻图像块的预测模式为帧间模式或者帧内模式;所述次邻图像块的预测模式为帧间模式;所述第二候选图像块包括所述当前图像块左侧的相邻图像块和/或次邻图像块;所述相邻图像块的预测模式为帧间模式或者帧内模式;所述次邻图像块的预测模式为帧间模式。
在一个例子中,当所述M大于1时,所述第一模板包括M个子模板或者P个子模板,且由所述M个子模板或所述P个子模板拼接而成,所述P为帧间模式的第一候选图像块的数量;当M等于1时,所述第一模板包括第一子模板,所述第一子模板是根据所述当前图像块上侧的一个候选图像块的运动矢量预测模式和运动信息确定的。
在一个例子中,所述运动信息包括所述第一候选图像块的运动矢量和参考帧索引,所述获取模块71根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板时具体用于:针对所述M个第一候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧间模式时,根据所述参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块的运动矢量相匹配;根据所述参考图像块,获取尺寸为第一横向长度和第一纵向长度的图像块作为所述第一模板包括的第i个子模板。
所述获取模块71根据M个第一候选图像块的运动矢量预测模式和运动信息, 确定第一模板时具体用于:针对所述M个第一候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧内模式时,将所述第i个候选图像块按照默认值填充,并获取尺寸为第一横向长度和第一纵向长度的图像块作为所述第一模板包括的第i个子模板;或者,根据所述第i个候选图像块对应的参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块对应的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块对应的运动矢量相匹配;根据确定的所述参考图像块,获取尺寸为第一横向长度和第一纵向长度的图像块作为所述第一模板包括的第i个子模板;其中,所述第i个候选图像块对应的参考帧索引和运动矢量,是所述第i个候选图像块的相邻图像块的参考帧索引和运动矢量。
所述第一横向长度与所述第一候选图像块的横向长度满足第一比例关系,或者与所述当前图像块的横向长度满足第二比例关系,或者等于第一预设长度;所述第一纵向长度与所述第一候选图像块的纵向长度满足第三比例关系,或者与所述当前图像块的纵向长度满足第四比例关系,或者等于第二预设长度。
在一个例子中,当所述N大于1时,所述第二模板包括N个子模板或者R个子模板,且由所述N个子模板或所述R个子模板拼接而成,所述R为帧间模式的第二候选图像块的数量;当N等于1时,所述第二模板包括第二子模板,所述第二子模板是根据所述当前图像块左侧的一个候选图像块的运动矢量预测模式和运动信息确定的。
所述运动信息包括所述第二候选图像块的运动矢量和参考帧索引,所述获取模块71根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板时具体用于:针对所述N个第二候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧间模式时,根据所述参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块的运动矢量相匹配;根据所述参考图像块,获取尺寸为第二横向长度和第二纵向长度的图像块作为所述第二模板包括的第i个子模板。
所述获取模块71根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板时具体用于:针对所述N个第二候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧内模式时,将所述第i个候选图像块按照默认值填充,并获取尺寸为第二横向长度和第二纵向长度的图像块作为所述第二模板包括的第i个子模板;或者,根据所述第i个候选图像块对应的参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块对应的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块对应的运动矢量相匹配;根据确定的所述参考图像块,获取尺寸为第二横向长度和第二纵向长度的图像块作为所述第一模板包括的第i个子模板;其中,所述第i个候选图像块对应的参考帧索引和运动矢量,是所述第i个候选图像块的相邻图像块的参考帧索引和运动矢量。
所述第二横向长度与所述第二候选图像块的横向长度满足第五比例关系,或者 与所述当前图像块的横向长度满足第六比例关系,或者等于第三预设长度;所述第二纵向长度与所述第二候选图像块的纵向长度满足第七比例关系,或者与所述当前图像块的纵向长度满足第八比例关系,或者等于第四预设长度。
所述获取模块71根据所述候选图像块的运动信息获取所述当前图像块的模板时具体用于:当所述当前图像块对应多个运动信息时,获取每个运动信息对应的模板;获取每个运动信息对应的权重,根据每个运动信息对应的权重和所述运动信息对应的模板获取所述当前图像块的模板。
所述原始运动信息包括原始运动矢量,所述目标运动信息包括目标运动矢量,所述获取模块71根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息时具体用于:将所述原始运动矢量确定为中心运动矢量;确定与所述中心运动矢量对应的各个边缘运动矢量,所述各个边缘运动矢量与所述中心运动矢量不同;根据所述模板获得所述中心运动矢量的编码性能和所述各个边缘运动矢量的编码性能;根据所述中心运动矢量的编码性能和所述各个边缘运动矢量的编码性能,从所述中心运动矢量和所述各个边缘运动矢量中确定所述目标运动矢量。
所述原始运动信息包括原始运动矢量和原始参考帧,所述目标运动信息包括目标运动矢量和目标参考帧,所述获取模块71根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息时具体用于:基于所述当前图像块的所述模板,根据所述原始运动矢量获取所述原始参考帧对应的候选运动矢量;针对多个候选帧中的每个候选帧,根据所述原始运动矢量获取所述候选参考帧对应的初始运动矢量;根据所述候选参考帧的所述初始运动矢量获取所述候选参考帧对应的候选运动矢量;从所述原始参考帧对应的候选运动矢量以及所述多个候选参考帧各自对应的候选运动矢量中选择编码性能最优的候选运动矢量作为目标运动矢量;将所述目标运动矢量对应的参考帧确定为所述目标参考帧。
基于与上述方法同样的申请构思,本申请实施例还提出一种编码装置,应用于编码端,参见图8所示,为所述装置的结构图,所述装置包括:
获取模块81,用于获取当前图像块的候选图像块的运动信息;根据所述候选图像块的运动信息获取所述当前图像块的模板;根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息;
处理模块82,用于根据所述原始运动信息和所述目标运动信息确定所述当前图像块的最终运动信息;根据所述最终运动信息对所述当前图像块进行编码,得到所述当前图像块对应的编码比特流;将所述当前图像块对应的原始运动信息或者最终运动信息存储为所述当前图像块的运动信息。
所述处理模块82将所述当前图像块对应的原始运动信息或者最终运动信息存储为所述当前图像块的运动信息时具体用于:在所述当前图像块的原始运动信息是根据空域相邻图像块的运动信息获得时,将所述当前图像块对应的原始运动信息存储为所述当前图像块的运动信息。
所述处理模块82将所述当前图像块对应的原始运动信息或者最终运动信息存储 为所述当前图像块的运动信息时具体用于:在所述当前图像块的原始运动信息不是根据空域相邻图像块的运动信息获得时,将所述当前图像块对应的最终运动信息存储为所述当前图像块的运动信息。
在一个例子中,所述候选图像块包括M个第一候选图像块和N个第二候选图像块,M为大于或等于1的自然数,N为大于或等于0的自然数,或者,M为大于或等于0的自然数,N为大于或等于1的自然数;所述第一候选图像块为所述当前图像块上侧的候选图像块,所述第二候选图像块为所述当前图像块左侧的候选图像块;所述获取模块81根据所述候选图像块的运动信息获取所述当前图像块的模板时具体用于:根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板;根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板;将所述第一模板确定为所述当前图像块的模板;或者,将所述第二模板确定为所述当前图像块的模板;或者,将所述第一模板和所述第二模板拼接之后确定为所述当前图像块的模板。
在一个例子中,所述第一候选图像块包括所述当前图像块上侧的相邻图像块和/或次邻图像块;所述相邻图像块的预测模式为帧间模式或者帧内模式;所述次邻图像块的预测模式为帧间模式;所述第二候选图像块包括所述当前图像块左侧的相邻图像块和/或次邻图像块;所述相邻图像块的预测模式为帧间模式或者帧内模式;所述次邻图像块的预测模式为帧间模式。
在一个例子中,当所述M大于1时,所述第一模板包括M个子模板或者P个子模板,且由所述M个子模板或所述P个子模板拼接而成,所述P为帧间模式的第一候选图像块的数量;当M等于1时,所述第一模板包括第一子模板,所述第一子模板是根据所述当前图像块上侧的首个候选图像块的运动矢量预测模式和运动信息确定的。
在一个例子中,所述运动信息包括所述第一候选图像块的运动矢量和参考帧索引,所述获取模块81根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板时具体用于:针对所述M个第一候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧间模式时,根据所述参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块的运动矢量相匹配;根据所述参考图像块,获取尺寸为第一横向长度和第一纵向长度的图像块作为所述第一模板包括的第i个子模板。
所述获取模块81根据M个第一候选图像块的运动矢量预测模式和运动信息,确定第一模板时具体用于:针对所述M个第一候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧内模式时,将所述第i个候选图像块按照默认值填充,并获取尺寸为第一横向长度和第一纵向长度的图像块作为所述第一模板包括的第i个子模板;或者,根据所述第i个候选图像块对应的参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块对应的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块对应的运动矢量相匹配;根据 所述参考图像块,获取尺寸为第一横向长度和第一纵向长度的图像块作为所述第一模板包括的第i个子模板;其中,所述第i个候选图像块对应的参考帧索引和运动矢量,是所述第i个候选图像块的相邻图像块的参考帧索引和运动矢量。
所述第一横向长度与所述第一候选图像块的横向长度满足第一比例关系,或者与所述当前图像块的横向长度满足第二比例关系,或者等于第一预设长度;所述第一纵向长度与所述第一候选图像块的纵向长度满足第三比例关系,或者与所述当前图像块的纵向长度满足第四比例关系,或者等于第二预设长度。
在一个例子中,当所述N大于1时,所述第二模板包括N个子模板或者R个子模板,且由所述N个子模板或所述R个子模板拼接而成,所述R为帧间模式的第二候选图像块的数量;当N等于1时,所述第二模板包括第二子模板,所述第二子模板是根据所述当前图像块左侧的首个候选图像块的运动矢量预测模式和运动信息确定的。
所述运动信息包括所述第二候选图像块的运动矢量和参考帧索引,所述获取模块81根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板时具体用于:针对所述N个第二候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧间模式时,根据所述参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块的运动矢量相匹配;根据所述参考图像块,获取尺寸为第二横向长度和第二纵向长度的图像块作为所述第二模板包括的第i个子模板。
所述获取模块81根据N个第二候选图像块的运动矢量预测模式和运动信息,确定第二模板时具体用于:针对所述N个第二候选图像块中的第i个候选图像块,当确定所述第i个候选图像块的运动矢量预测模式为帧内模式时,将所述第i个候选图像块按照默认值填充,并获取尺寸为第二横向长度和第二纵向长度的图像块作为所述第二模板包括的第i个子模板;或者,根据所述第i个候选图像块对应的参考帧索引确定所述第i个候选图像块对应的参考帧图像;根据所述第i个候选图像块对应的运动矢量,从所述参考帧图像中确定所述第i个候选图像块对应的参考图像块,所述参考图像块与所述第i个候选图像块的相对位移与所述第i个候选图像块对应的运动矢量相匹配;根据所述参考图像块,获取尺寸为第二横向长度和第二纵向长度的图像块作为所述第一模板包括的第i个子模板;其中,所述第i个候选图像块对应的参考帧索引和运动矢量,是所述第i个候选图像块的相邻图像块的参考帧索引和运动矢量。
所述第二横向长度与所述第二候选图像块的横向长度满足第五比例关系,或者与所述当前图像块的横向长度满足第六比例关系,或者等于第三预设长度;所述第二纵向长度与所述第二候选图像块的纵向长度满足第七比例关系,者与所述当前图像块的纵向长度满足第八比例关系,或者等于第四预设长度。
所述获取模块81根据所述候选图像块的运动信息获取所述当前图像块的模板时具体用于:当所述当前图像块对应多个运动信息时,根据每个运动信息获取该运动信息对应的模板;获取每个运动信息对应的权重,根据每个运动信息对应的权重和所述运动信息对应的模板获取所述当前图像块的模板。
所述原始运动信息包括原始运动矢量,所述目标运动信息包括目标运动矢量,所述获取模块81根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息时具体用于:将所述原始运动矢量确定为中心运动矢量;确定与所述中心运动矢量对应的各个边缘运动矢量,所述各个边缘运动矢量与所述中心运动矢量不同;根据所述模板获得所述中心运动矢量的编码性能和所述各个边缘运动矢量的编码性能;根据所述中心运动矢量的编码性能和所述各个边缘运动矢量的编码性能,从所述中心运动矢量和所述各个边缘运动矢量中确定所述目标运动矢量。
所述原始运动信息包括原始运动矢量和原始参考帧,所述目标运动信息包括目标运动矢量和目标参考帧,所述获取模块81根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息时具体用于:基于所述当前图像块的所述模板,根据所述原始运动矢量获取所述原始参考帧对应的候选运动矢量;针对多个候选帧中的每个候选帧,根据所述原始运动矢量获取所述候选参考帧对应的初始运动矢量;根据所述候选参考帧的所述初始运动矢量获取所述候选参考帧对应的候选运动矢量;从所述原始参考帧对应的候选运动矢量以及所述多个候选参考帧各自对应的候选运动矢量中选择编码性能最优的候选运动矢量作为目标运动矢量;将所述目标运动矢量对应的参考帧确定为所述目标参考帧。
所述原始运动信息包括原始运动矢量,所述目的运动信息包括目标运动矢量,所述处理模块82所述原始运动信息包括原始运动矢量,所述目的运动信息包括目标运动矢量时具体用于:获取所述原始运动矢量的编码性能和所述目标运动矢量的编码性能;当所述目标运动矢量的编码性能优于所述原始运动矢量的编码性能时,确定所述当前图像块的最终运动矢量为所述目标运动矢量;其中,所述当前图像块对应的编码比特流携带第一指示信息,所述第一指示信息用于指示基于模板确定所述当前图像块的最终运动信息。
所述原始运动信息包括原始运动矢量,所述目的运动信息包括目标运动矢量,所述处理模块82根据所述原始运动信息和所述目标运动信息确定所述当前图像块的最终运动信息时具体用于:获取所述原始运动矢量的编码性能和所述目标运动矢量的编码性能;当所述原始运动矢量的编码性能优于所述目标运动矢量的编码性能时,确定所述当前图像块的最终运动矢量为所述原始运动矢量;其中,所述当前图像块对应的编码比特流携带第二指示信息,所述第二指示信息用于指示基于当前图像块对应的原始运动信息确定所述当前图像块的最终运动信息。
所述处理模块82获取原始运动矢量在运动矢量列表中的索引值;所述当前图像块对应的编码比特流携带所述索引值。
所述处理模块82获取所述原始运动矢量的编码性能时具体用于:根据所述当前图像块的模板的参数信息和第一目标参考块的参数信息,确定所述原始运动矢量的编码性能,其中,所述第一目标参考块为所述模板对应的参考图像块基于所述原始运动矢量进行偏移之后获得的图像块;所述处理模块82获取所述目标运动矢量的编码性能时具体用于:根据所述当前图像块的模板的参数信息和第二目标参考块的参数信息,确定所述目标运动矢量的编码性能,其中,所述第二目标参考块为所述模板对应的参考图像块 基于所述目标运动矢量进行偏移之后获得的图像块。
本申请实施例提供的解码端设备,从硬件层面而言,其硬件架构示意图具体可以参见图9所示。包括:处理器91和机器可读存储介质92,其中:所述机器可读存储介质92存储有能够被所述处理器91执行的机器可执行指令;所述处理器91用于执行机器可执行指令,以实现上述示例公开的解码方法。
基于与上述方法同样的申请构思,本申请实施例还提供一种机器可读存储介质,所述机器可读存储介质上存储有若干计算机指令,所述计算机指令被处理器执行时,能够实现本申请上述示例公开的解码方法。
本申请实施例提供的编码端设备,从硬件层面而言,其硬件架构示意图具体可以参见图10所示。包括:处理器93和机器可读存储介质94,所述机器可读存储介质94存储有能够被所述处理器93执行的机器可执行指令;所述处理器93用于执行机器可执行指令,以实现上述示例公开的编码方法。
基于与上述方法同样的申请构思,本申请实施例还提供一种机器可读存储介质,所述机器可读存储介质上存储有若干计算机指令,所述计算机指令被处理器执行时,能够实现本申请上述示例公开的编码方法。
其中,上述机器可读存储介质可以是任何电子、磁性、光学或其它物理存储装置,可以包含或存储信息,如可执行指令、数据,等等。例如,机器可读存储介质可以是:RAM(Radom Access Memory,随机存取存储器)、易失存储器、非易失性存储器、闪存、存储驱动器(如硬盘驱动器)、固态硬盘、任何类型的存储盘(如光盘、dvd等),或者类似的存储介质,或者它们的组合。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机,计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可以由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其它可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其它可编程数据处理设备的处理器执行的 指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
而且,这些计算机程序指令也可以存储在能引导计算机或其它可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或者多个流程和/或方框图一个方框或者多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其它可编程数据处理设备上,使得在计算机或者其它可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其它可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (43)

  1. 一种解码方法,应用于解码端,包括:
    获取所述当前图像块的候选图像块的运动信息;
    根据所述候选图像块的运动信息获取所述当前图像块的模板;
    根据所述当前图像块对应的原始运动信息和获取的所述模板,得到基于所述模板的目标运动信息;
    根据所述目标运动信息确定所述当前图像块的最终运动信息;
    根据所述最终运动信息对所述当前图像块进行解码。
  2. 根据权利要求1所述的方法,其特征在于,根据所述最终运动信息对所述当前图像块进行解码之后,所述方法还包括:
    将所述当前图像块对应的原始运动信息或者最终运动信息存储为所述当前图像块的运动信息。
  3. 根据权利要求2所述的方法,其特征在于,将所述当前图像块对应的原始运动信息或者最终运动信息存储为所述当前图像块的运动信息,包括:
    在所述当前图像块的原始运动信息是根据空域相邻图像块的运动信息获得时,将所述当前图像块对应的原始运动信息存储为所述当前图像块的运动信息;或者,
    在所述当前图像块的原始运动信息不是根据空域相邻图像块的运动信息获得时,将所述当前图像块对应的最终运动信息存储为所述当前图像块的运动信息。
  4. The method according to claim 1, wherein before obtaining the motion information of the candidate image block of the current image block, the method further comprises:
    receiving, from an encoding end, an encoded bitstream corresponding to the current image block;
    wherein the encoded bitstream carries first indication information, and the first indication information is used to indicate that the final motion information of the current image block is determined based on the template;
    according to the first indication information, obtaining the motion information of the candidate image block of the current image block, and obtaining the template of the current image block according to the motion information of the candidate image block.
  5. The method according to claim 1, wherein the method further comprises:
    receiving, from an encoding end, an encoded bitstream corresponding to the current image block;
    wherein the encoded bitstream carries second indication information, and the second indication information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block;
    according to the second indication information, obtaining the original motion information corresponding to the current image block;
    determining the final motion information of the current image block according to the original motion information.
  6. The method according to claim 1, wherein the method further comprises:
    according to locally preset first policy information, obtaining the motion information of the candidate image block of the current image block, and obtaining the template of the current image block according to the motion information of the candidate image block, wherein the first policy information is used to indicate that the final motion information of the current image block is determined based on the template; or,
    according to locally preset second policy information, obtaining the original motion information corresponding to the current image block, and determining the final motion information of the current image block according to the original motion information, wherein the second policy information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block; or,
    according to locally preset third policy information, determining, with reference to the policy information adopted by an adjacent image block of the current image block, whether the current image block is coded based on the original motion information or the target motion information, wherein the third policy information indicates that the final motion information of the current image block is determined by adopting the same policy information as the adjacent image block of the current image block.
  7. The method according to any one of claims 1 to 6, wherein the original motion information comprises an original motion vector, and before obtaining, according to the original motion information corresponding to the current image block and the obtained template, the target motion information based on the template, the method further comprises:
    receiving, from an encoding end, an encoded bitstream corresponding to the current image block, wherein the encoded bitstream carries an index value of the original motion vector in a motion vector list; selecting, from the motion vector list, a motion vector corresponding to the index value;
    determining the selected motion vector as the original motion vector corresponding to the current image block.
  8. The method according to any one of claims 1 to 6, wherein the original motion information comprises an original motion vector, and before obtaining, according to the original motion information corresponding to the current image block and the obtained template, the target motion information based on the template, the method further comprises:
    determining a motion vector of the candidate image block of the current image block as the original motion vector corresponding to the current image block.
  9. The method according to any one of claims 1 to 8, wherein
    the candidate image block comprises M first candidate image blocks and N second candidate image blocks, where M is a natural number greater than or equal to 1 and N is a natural number greater than or equal to 0, or M is a natural number greater than or equal to 0 and N is a natural number greater than or equal to 1;
    the first candidate image blocks are candidate image blocks on the upper side of the current image block,
    and the second candidate image blocks are candidate image blocks on the left side of the current image block;
    obtaining the template of the current image block according to the motion information of the candidate image block comprises:
    determining a first template according to motion vector prediction modes and motion information of the M first candidate image blocks;
    determining a second template according to motion vector prediction modes and motion information of the N second candidate image blocks;
    determining the first template as the template of the current image block, or
    determining the second template as the template of the current image block, or
    splicing the first template and the second template and determining the result as the template of the current image block.
  10. The method according to claim 9, wherein
    the first candidate image blocks comprise an adjacent image block and/or a second-adjacent image block on the upper side of the current image block, wherein the prediction mode of the adjacent image block is inter mode or intra mode, and the prediction mode of the second-adjacent image block is inter mode;
    the second candidate image blocks comprise an adjacent image block and/or a second-adjacent image block on the left side of the current image block, wherein the prediction mode of the adjacent image block is inter mode or intra mode, and the prediction mode of the second-adjacent image block is inter mode.
  11. The method according to claim 9, wherein
    when M is greater than 1, the first template comprises M sub-templates or P sub-templates and is formed by splicing the M sub-templates or the P sub-templates, where P is the number of first candidate image blocks in inter mode, and P is a natural number less than or equal to M;
    when M is equal to 1, the first template comprises a first sub-template, and the first sub-template is determined according to the motion vector prediction mode and motion information of one candidate image block on the upper side of the current image block.
  12. The method according to claim 11, wherein the motion information comprises a motion vector and a reference frame index of the first candidate image block, and determining the first template according to the motion vector prediction modes and motion information of the M first candidate image blocks comprises:
    for the i-th candidate image block among the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is inter mode,
    determining a reference frame image corresponding to the i-th candidate image block according to the reference frame index;
    determining, from the reference frame image, a reference image block of the i-th candidate image block according to the motion vector of the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block;
    obtaining, according to the reference image block, an image block with a size of a first horizontal length and a first vertical length as the i-th sub-template comprised in the first template.
  13. The method according to claim 11, wherein determining the first template according to the motion vector prediction modes and motion information of the M first candidate image blocks comprises:
    for the i-th candidate image block among the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    filling the i-th candidate image block with a default value, and
    obtaining an image block with a size of a first horizontal length and a first vertical length as the i-th sub-template comprised in the first template;
    or, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    determining a reference frame image corresponding to the i-th candidate image block according to a reference frame index corresponding to the i-th candidate image block;
    determining, from the reference frame image, a reference image block corresponding to the i-th candidate image block according to a motion vector corresponding to the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block;
    obtaining, according to the determined reference image block, an image block with a size of the first horizontal length and the first vertical length as the i-th sub-template comprised in the first template;
    wherein the reference frame index and the motion vector corresponding to the i-th candidate image block are the reference frame index and the motion vector of an adjacent image block of the i-th candidate image block.
  14. The method according to claim 12 or 13, wherein
    the first horizontal length satisfies a first proportional relationship with the horizontal length of the first candidate image block, or satisfies a second proportional relationship with the horizontal length of the current image block, or is equal to a first preset length;
    the first vertical length satisfies a third proportional relationship with the vertical length of the first candidate image block, or satisfies a fourth proportional relationship with the vertical length of the current image block, or is equal to a second preset length.
  15. The method according to claim 9, wherein
    when N is greater than 1, the second template comprises N sub-templates or R sub-templates and is formed by splicing the N sub-templates or the R sub-templates, where R is the number of second candidate image blocks in inter mode, and R is a natural number less than or equal to N;
    when N is equal to 1, the second template comprises a second sub-template, and the second sub-template is determined according to the motion vector prediction mode and motion information of one candidate image block on the left side of the current image block.
  16. The method according to claim 15, wherein the motion information comprises a motion vector and a reference frame index of the second candidate image block, and determining the second template according to the motion vector prediction modes and motion information of the N second candidate image blocks comprises:
    for the i-th candidate image block among the N second candidate image blocks,
    when it is determined that the motion vector prediction mode of the i-th candidate image block is inter mode, determining a reference frame image of the i-th candidate image block according to the reference frame index;
    determining, from the reference frame image, a reference image block of the i-th candidate image block according to the motion vector of the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block;
    obtaining, according to the reference image block, an image block with a size of a second horizontal length and a second vertical length as the i-th sub-template comprised in the second template.
  17. The method according to claim 15, wherein determining the second template according to the motion vector prediction modes and motion information of the N second candidate image blocks comprises:
    for the i-th candidate image block among the N second candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    filling the i-th candidate image block with a default value, and
    obtaining an image block with a size of a second horizontal length and a second vertical length as the i-th sub-template comprised in the second template;
    or, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    determining a reference frame image corresponding to the i-th candidate image block according to a reference frame index corresponding to the i-th candidate image block;
    determining, from the reference frame image, a reference image block corresponding to the i-th candidate image block according to a motion vector corresponding to the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block;
    obtaining, according to the determined reference image block, an image block with a size of the second horizontal length and the second vertical length as the i-th sub-template comprised in the second template;
    wherein the reference frame index and the motion vector corresponding to the i-th candidate image block are the reference frame index and the motion vector of an adjacent image block of the i-th candidate image block.
  18. The method according to claim 16 or 17, wherein
    the second horizontal length satisfies a fifth proportional relationship with the horizontal length of the second candidate image block, or satisfies a sixth proportional relationship with the horizontal length of the current image block, or is equal to a third preset length;
    the second vertical length satisfies a seventh proportional relationship with the vertical length of the second candidate image block, or satisfies an eighth proportional relationship with the vertical length of the current image block, or is equal to a fourth preset length.
  19. The method according to any one of claims 1 to 18, wherein obtaining the template of the current image block according to the motion information of the candidate image block comprises:
    when the current image block corresponds to a plurality of pieces of motion information, obtaining a template corresponding to each piece of motion information;
    obtaining a weight corresponding to each piece of motion information, and
    obtaining the template of the current image block according to the weight corresponding to each piece of motion information and the template corresponding to the piece of motion information.
  20. The method according to claim 1, wherein the original motion information comprises an original motion vector, the target motion information comprises a target motion vector, and obtaining, according to the original motion information corresponding to the current image block and the obtained template, the target motion information based on the template comprises:
    determining the original motion vector as a center motion vector;
    determining edge motion vectors corresponding to the center motion vector, wherein each edge motion vector is different from the center motion vector;
    obtaining, according to the template, the coding performance of the center motion vector and the coding performance of each edge motion vector;
    determining the target motion vector from the center motion vector and the edge motion vectors according to the coding performance of the center motion vector and the coding performance of each edge motion vector.
  21. The method according to claim 1, wherein the original motion information comprises an original motion vector and an original reference frame, the target motion information comprises a target motion vector and a target reference frame, and obtaining, according to the original motion information corresponding to the current image block and the obtained template, the target motion information based on the template comprises:
    obtaining, based on the template of the current image block, a candidate motion vector corresponding to the original reference frame according to the original motion vector;
    for each candidate reference frame of a plurality of candidate reference frames,
    obtaining an initial motion vector corresponding to the candidate reference frame according to the original motion vector;
    obtaining a candidate motion vector corresponding to the candidate reference frame according to the initial motion vector of the candidate reference frame;
    selecting, from the candidate motion vector corresponding to the original reference frame and the candidate motion vectors corresponding to the respective candidate reference frames, the candidate motion vector with the best coding performance as the target motion vector;
    determining the reference frame corresponding to the target motion vector as the target reference frame.
  22. An encoding method, applied to an encoding end, comprising:
    obtaining motion information of a candidate image block of a current image block;
    obtaining a template of the current image block according to the motion information of the candidate image block;
    obtaining, according to original motion information corresponding to the current image block and the obtained template, target motion information based on the template;
    determining final motion information of the current image block according to the original motion information and the target motion information;
    encoding the current image block according to the final motion information to obtain an encoded bitstream corresponding to the current image block.
  23. The method according to claim 22, wherein after encoding the current image block according to the final motion information to obtain the encoded bitstream corresponding to the current image block, the method further comprises:
    storing the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block.
  24. The method according to claim 23, wherein storing the original motion information or the final motion information corresponding to the current image block as the motion information of the current image block comprises:
    when the original motion information of the current image block is obtained according to motion information of a spatially adjacent image block, storing the original motion information corresponding to the current image block as the motion information of the current image block; or,
    when the original motion information of the current image block is not obtained according to motion information of a spatially adjacent image block, storing the final motion information corresponding to the current image block as the motion information of the current image block.
  25. The method according to claim 22, wherein
    the candidate image block comprises M first candidate image blocks and N second candidate image blocks, where M is a natural number greater than or equal to 1 and N is a natural number greater than or equal to 0, or M is a natural number greater than or equal to 0 and N is a natural number greater than or equal to 1;
    the first candidate image blocks are candidate image blocks on the upper side of the current image block,
    and the second candidate image blocks are candidate image blocks on the left side of the current image block;
    obtaining the template of the current image block according to the motion information of the candidate image block comprises:
    determining a first template according to motion vector prediction modes and motion information of the M first candidate image blocks;
    determining a second template according to motion vector prediction modes and motion information of the N second candidate image blocks;
    determining the first template as the template of the current image block, or
    determining the second template as the template of the current image block, or
    splicing the first template and the second template and determining the result as the template of the current image block.
  26. The method according to claim 25, wherein
    the first candidate image blocks comprise an adjacent image block and/or a second-adjacent image block on the upper side of the current image block, wherein the prediction mode of the adjacent image block is inter mode or intra mode, and the prediction mode of the second-adjacent image block is inter mode;
    the second candidate image blocks comprise an adjacent image block and/or a second-adjacent image block on the left side of the current image block, wherein the prediction mode of the adjacent image block is inter mode or intra mode, and the prediction mode of the second-adjacent image block is inter mode.
  27. The method according to claim 25, wherein
    when M is greater than 1, the first template comprises M sub-templates or P sub-templates and is formed by splicing the M sub-templates or the P sub-templates, where P is the number of first candidate image blocks in inter mode, and P is a natural number less than or equal to M;
    when M is equal to 1, the first template comprises a first sub-template, and the first sub-template is determined according to the motion vector prediction mode and motion information of one candidate image block on the upper side of the current image block.
  28. The method according to claim 27, wherein the motion information comprises a motion vector and a reference frame index of the first candidate image block, and determining the first template according to the motion vector prediction modes and motion information of the M first candidate image blocks comprises:
    for the i-th candidate image block among the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is inter mode,
    determining a reference frame image corresponding to the i-th candidate image block according to the reference frame index;
    determining, from the reference frame image, a reference image block of the i-th candidate image block according to the motion vector of the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block;
    obtaining, according to the reference image block, an image block with a size of a first horizontal length and a first vertical length as the i-th sub-template comprised in the first template.
  29. The method according to claim 27, wherein determining the first template according to the motion vector prediction modes and motion information of the M first candidate image blocks comprises:
    for the i-th candidate image block among the M first candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    filling the i-th candidate image block with a default value, and
    obtaining an image block with a size of a first horizontal length and a first vertical length as the i-th sub-template comprised in the first template;
    or, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    determining a reference frame image corresponding to the i-th candidate image block according to a reference frame index corresponding to the i-th candidate image block;
    determining, from the reference frame image, a reference image block corresponding to the i-th candidate image block according to a motion vector corresponding to the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block;
    obtaining, according to the reference image block, an image block with a size of the first horizontal length and the first vertical length as the i-th sub-template comprised in the first template;
    wherein the reference frame index and the motion vector corresponding to the i-th candidate image block are the reference frame index and the motion vector of an adjacent image block of the i-th candidate image block.
  30. The method according to claim 28 or 29, wherein
    the first horizontal length satisfies a first proportional relationship with the horizontal length of the first candidate image block, or satisfies a second proportional relationship with the horizontal length of the current image block, or is equal to a first preset length;
    the first vertical length satisfies a third proportional relationship with the vertical length of the first candidate image block, or satisfies a fourth proportional relationship with the vertical length of the current image block, or is equal to a second preset length.
  31. The method according to claim 25, wherein
    when N is greater than 1, the second template comprises N sub-templates or R sub-templates and is formed by splicing the N sub-templates or the R sub-templates, where R is the number of second candidate image blocks in inter mode, and R is a natural number less than or equal to N;
    when N is equal to 1, the second template comprises a second sub-template, and the second sub-template is determined according to the motion vector prediction mode and motion information of one candidate image block on the left side of the current image block.
  32. The method according to claim 31, wherein the motion information comprises a motion vector and a reference frame index of the second candidate image block, and determining the second template according to the motion vector prediction modes and motion information of the N second candidate image blocks comprises:
    for the i-th candidate image block among the N second candidate image blocks,
    when it is determined that the motion vector prediction mode of the i-th candidate image block is inter mode, determining a reference frame image of the i-th candidate image block according to the reference frame index;
    determining, from the reference frame image, a reference image block of the i-th candidate image block according to the motion vector of the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector of the i-th candidate image block;
    obtaining, according to the reference image block, an image block with a size of a second horizontal length and a second vertical length as the i-th sub-template comprised in the second template.
  33. The method according to claim 31, wherein determining the second template according to the motion vector prediction modes and motion information of the N second candidate image blocks comprises:
    for the i-th candidate image block among the N second candidate image blocks, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    filling the i-th candidate image block with a default value, and
    obtaining an image block with a size of a second horizontal length and a second vertical length as the i-th sub-template comprised in the second template;
    or, when it is determined that the motion vector prediction mode of the i-th candidate image block is intra mode,
    determining a reference frame image corresponding to the i-th candidate image block according to a reference frame index corresponding to the i-th candidate image block;
    determining, from the reference frame image, a reference image block corresponding to the i-th candidate image block according to a motion vector corresponding to the i-th candidate image block, wherein the relative displacement between the reference image block and the i-th candidate image block matches the motion vector corresponding to the i-th candidate image block;
    obtaining, according to the reference image block, an image block with a size of the second horizontal length and the second vertical length as the i-th sub-template comprised in the second template;
    wherein the reference frame index and the motion vector corresponding to the i-th candidate image block are the reference frame index and the motion vector of an adjacent image block of the i-th candidate image block.
  34. The method according to claim 32 or 33, wherein
    the second horizontal length satisfies a fifth proportional relationship with the horizontal length of the second candidate image block, or satisfies a sixth proportional relationship with the horizontal length of the current image block, or is equal to a third preset length;
    the second vertical length satisfies a seventh proportional relationship with the vertical length of the second candidate image block, or satisfies an eighth proportional relationship with the vertical length of the current image block, or is equal to a fourth preset length.
  35. The method according to any one of claims 22 to 34, wherein obtaining the template of the current image block according to the motion information of the candidate image block comprises:
    when the current image block corresponds to a plurality of pieces of motion information, obtaining, according to each piece of motion information, a template corresponding to the piece of motion information;
    obtaining a weight corresponding to each piece of motion information, and
    obtaining the template of the current image block according to the weight corresponding to each piece of motion information and the template corresponding to the piece of motion information.
  36. The method according to claim 22, wherein the original motion information comprises an original motion vector, the target motion information comprises a target motion vector, and obtaining, according to the original motion information corresponding to the current image block and the obtained template, the target motion information based on the template comprises:
    determining the original motion vector as a center motion vector;
    determining edge motion vectors corresponding to the center motion vector, wherein each edge motion vector is different from the center motion vector;
    obtaining, according to the template, the coding performance of the center motion vector and the coding performance of each edge motion vector;
    determining the target motion vector from the center motion vector and the edge motion vectors according to the coding performance of the center motion vector and the coding performance of each edge motion vector.
  37. The method according to claim 22, wherein the original motion information comprises an original motion vector and an original reference frame, the target motion information comprises a target motion vector and a target reference frame, and obtaining, according to the original motion information corresponding to the current image block and the obtained template, the target motion information based on the template comprises:
    obtaining, based on the template of the current image block, a candidate motion vector corresponding to the original reference frame according to the original motion vector;
    for each candidate reference frame of a plurality of candidate reference frames,
    obtaining an initial motion vector corresponding to the candidate reference frame according to the original motion vector;
    obtaining a candidate motion vector corresponding to the candidate reference frame according to the initial motion vector of the candidate reference frame;
    selecting, from the candidate motion vector corresponding to the original reference frame and the candidate motion vectors corresponding to the respective candidate reference frames, the candidate motion vector with the best coding performance as the target motion vector;
    determining the reference frame corresponding to the target motion vector as the target reference frame.
  38. The method according to claim 22, wherein
    the original motion information comprises an original motion vector and the target motion information comprises a target motion vector,
    determining the final motion information of the current image block according to the original motion information and the target motion information comprises:
    obtaining the coding performance of the original motion vector and the coding performance of the target motion vector;
    when the coding performance of the target motion vector is better than the coding performance of the original motion vector, determining that the final motion vector of the current image block is the target motion vector;
    and encoding the current image block according to the final motion information to obtain the encoded bitstream corresponding to the current image block comprises:
    carrying, in the encoded bitstream corresponding to the current image block, first indication information, wherein the first indication information is used to indicate that the final motion information of the current image block is determined based on the template.
  39. The method according to claim 22, wherein
    the original motion information comprises an original motion vector and the target motion information comprises a target motion vector,
    determining the final motion information of the current image block according to the original motion information and the target motion information comprises:
    obtaining the coding performance of the original motion vector and the coding performance of the target motion vector;
    when the coding performance of the original motion vector is better than the coding performance of the target motion vector, determining that the final motion vector of the current image block is the original motion vector;
    and encoding the current image block according to the final motion information to obtain the encoded bitstream corresponding to the current image block comprises:
    carrying, in the encoded bitstream corresponding to the current image block, second indication information, wherein the second indication information is used to indicate that the final motion information of the current image block is determined based on the original motion information corresponding to the current image block.
  40. The method according to claim 22, wherein the method further comprises: obtaining an index value of the original motion vector in a motion vector list, wherein the encoded bitstream carries the index value.
  41. The method according to claim 38 or 39, wherein
    obtaining the coding performance of the original motion vector comprises:
    determining the coding performance of the original motion vector according to parameter information of the template of the current image block and parameter information of a first target reference block, wherein the first target reference block is an image block obtained by shifting the reference image block corresponding to the template by the original motion vector;
    and obtaining the coding performance of the target motion vector comprises:
    determining the coding performance of the target motion vector according to the parameter information of the template of the current image block and parameter information of a second target reference block, wherein the second target reference block is an image block obtained by shifting the reference image block corresponding to the template by the target motion vector.
  42. A decoding-end device, comprising: a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions that can be executed by the processor, and the processor is configured to execute the machine-executable instructions to implement the method steps of any one of claims 1-21.
  43. An encoding-end device, comprising: a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions that can be executed by the processor, and the processor is configured to execute the machine-executable instructions to implement the method steps of any one of claims 22-41.
PCT/CN2019/094433 2018-07-06 2019-07-02 Decoding and encoding method and device WO2020007306A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810738280.4A CN110691247B (zh) 2018-07-06 2023-04-28 Decoding and encoding method and device
CN201810738280.4 2018-07-06

Publications (1)

Publication Number Publication Date
WO2020007306A1 true WO2020007306A1 (zh) 2020-01-09

Family

ID=69060771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/094433 WO2020007306A1 (zh) 2018-07-06 2019-07-02 Decoding and encoding method and device

Country Status (2)

Country Link
CN (1) CN110691247B (zh)
WO (1) WO2020007306A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0723366A2 (en) * 1995-01-17 1996-07-24 Graphics Communications Laboratories Motion estimation method and apparatus for calculating a motion vector
CN101686393A (zh) Fast motion search method and apparatus applied to template matching
CN102640495A (zh) Motion vector encoding/decoding method and apparatus, and image encoding/decoding method and apparatus using the same
CN102611887A (zh) Method and apparatus for rounding coordinate values of a motion vector at a non-integer pixel position
CN102611886A (zh) Motion prediction or compensation method

Also Published As

Publication number Publication date
CN110691247A (zh) 2020-01-14
CN110691247B (zh) 2023-04-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19830673; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19830673; Country of ref document: EP; Kind code of ref document: A1
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
    Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/07/2021)