CN110505485B - Motion compensation method, motion compensation device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110505485B
CN110505485B (application CN201910785583.6A)
Authority
CN
China
Prior art keywords
block
processed
sub
image
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910785583.6A
Other languages
Chinese (zh)
Other versions
CN110505485A (en)
Inventor
陈宇聪
郑云飞
闻兴
陈敏
黄跃
王晓楠
赵明菲
于冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910785583.6A priority Critical patent/CN110505485B/en
Publication of CN110505485A publication Critical patent/CN110505485A/en
Application granted granted Critical
Publication of CN110505485B publication Critical patent/CN110505485B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The method inputs the characteristics of an image block to be processed into a first target model. The first target model virtually segments the image block to be processed and outputs any sub-block whose size is larger than a preset size; the image block to be processed is then segmented according to the size of that sub-block, so that the size of each target sub-block obtained after segmentation is larger than the preset size. The number of target sub-blocks segmented from the image block to be processed is therefore controllable: when the number of target sub-blocks is small, the encoding and decoding end performs motion compensation fewer times and reads memory fewer times, so the consumption of memory bandwidth can be reduced.

Description

Motion compensation method, motion compensation device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a motion compensation method and apparatus, a computer device, and a storage medium.
Background
As computer technology has gradually matured, video encoding and decoding technology has also developed greatly. When an image frame is encoded or decoded, the encoding and decoding end first performs sampling processing on any image to obtain a luminance block and two chrominance blocks; second, it divides each luminance block and each chrominance block to obtain a plurality of luminance sub-blocks and a plurality of chrominance sub-blocks; third, it obtains the motion vector of each luminance sub-block and of each chrominance sub-block; finally, it performs motion compensation on each luminance sub-block according to that sub-block's motion vector, and on each chrominance sub-block according to that sub-block's motion vector.
In the above image processing, to ensure the accuracy of the sub-block motion vectors, the luminance block and the chrominance blocks are generally divided into a large number of sub-blocks, and for each sub-block the encoding and decoding end needs to perform motion compensation once. Each motion-compensation operation reads reference data from memory, so a large number of sub-blocks leads to many memory reads and large memory-bandwidth consumption.
Disclosure of Invention
The present disclosure provides a motion compensation method, a motion compensation apparatus, a computer device, and a storage medium, so as to at least solve the problem of large memory bandwidth consumption caused by a large number of motion compensation times in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a motion compensation method, including:
acquiring the characteristics of an image block to be processed in an image;
inputting the characteristics into a first target model, virtually segmenting the image block to be processed according to the characteristics by the first target model, and outputting the size of any sub-block after virtual segmentation, wherein the size of any sub-block is larger than a preset size;
according to the size of any sub-block, the image block to be processed is segmented to obtain at least one target sub-block, and the size of each target sub-block is equal to that of any sub-block;
motion compensation is performed on the at least one target sub-block.
Optionally, before the motion compensating the at least one target sub-block, the method further includes:
obtaining a motion vector of the at least one target sub-block;
the motion compensating the at least one target sub-block comprises:
and performing motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
Optionally, the obtaining the motion vector of the at least one target sub-block comprises:
and when the image block to be processed is a chrominance block, obtaining the motion vector of the at least one target sub-block according to the motion vector of the control point of the image block to be processed, wherein the control point comprises a pixel point at the upper left corner of the image block to be processed and a pixel point at the upper right corner of the image block to be processed.
Optionally, the obtaining the motion vector of the at least one target sub-block comprises:
when the image block to be processed is a chrominance block, determining a luminance block corresponding to the image block to be processed according to the corresponding relation between a luminance block and the chrominance block when any image is sampled;
obtaining a motion vector of at least one brightness sub-block in the corresponding brightness block;
and obtaining the motion vector of the at least one target sub-block according to the motion vector of the at least one brightness sub-block and the corresponding relation between the brightness sub-block and the target sub-block.
Optionally, the obtaining the motion vector of the at least one target sub-block comprises:
when the image block to be processed is a chrominance block and the size of the image block to be processed is equal to the size of the target sub-block, inputting the characteristics to a second target model, performing characteristic extraction on the image block to be processed according to the characteristics by the second target model, and outputting a motion vector of the at least one target sub-block, wherein the motion vector of each target sub-block is used for indicating one motion characteristic of the image block to be processed.
According to a second aspect of the embodiments of the present disclosure, there is provided a motion compensation apparatus including:
the acquisition unit is configured to acquire the characteristics of the image blocks to be processed in the image;
the processing unit is configured to input the features into a first target model, the first target model performing virtual segmentation on the image block to be processed according to the features and outputting the size of any sub-block after the virtual segmentation, wherein the size of any sub-block is larger than a preset size;
the segmentation unit is configured to perform segmentation on the image block to be processed according to the size of any sub-block to obtain at least one target sub-block, wherein the size of each target sub-block is equal to the size of any sub-block;
a compensation unit configured to perform motion compensation on the at least one target sub-block.
Optionally, the obtaining unit is further configured to perform obtaining a motion vector of the at least one target sub-block;
the compensation unit is further configured to perform motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
Optionally, the obtaining unit is further configured to perform:
obtaining a motion vector of the at least one target sub-block;
the motion compensating the at least one target sub-block comprises:
and performing motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
Optionally, the obtaining unit is further configured to perform:
and when the image block to be processed is a chrominance block, obtaining the motion vector of the at least one target sub-block according to the motion vector of the control point of the image block to be processed, wherein the control point comprises a pixel point at the upper left corner of the image block to be processed and a pixel point at the upper right corner of the image block to be processed.
Optionally, the obtaining unit is further configured to perform:
when the image block to be processed is a chrominance block, determining a luminance block corresponding to the image block to be processed according to the corresponding relation between a luminance block and the chrominance block when any image is sampled;
obtaining a motion vector of at least one brightness sub-block in the corresponding brightness block;
and obtaining the motion vector of the at least one target sub-block according to the motion vector of the at least one brightness sub-block and the corresponding relation between the brightness sub-block and the target sub-block.
Optionally, the obtaining unit is further configured to perform:
when the image block to be processed is a chrominance block and the size of the image block to be processed is equal to the size of the target sub-block, inputting the characteristics to a second target model, performing characteristic extraction on the image block to be processed according to the characteristics by the second target model, and outputting a motion vector of the at least one target sub-block, wherein the motion vector of each target sub-block is used for indicating one motion characteristic of the image block to be processed.
According to a third aspect of embodiments of the present disclosure, there is provided a computer device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform:
acquiring the characteristics of an image block to be processed in an image;
inputting the characteristics into a first target model, virtually segmenting the image block to be processed according to the characteristics by the first target model, and outputting the size of any sub-block after virtual segmentation, wherein the size of any sub-block is larger than a preset size;
according to the size of any sub-block, the image block to be processed is segmented to obtain at least one target sub-block, and the size of each target sub-block is equal to that of any sub-block;
motion compensation is performed on the at least one target sub-block.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of a computer device, enable the computer device to perform a motion compensation method, the method comprising:
acquiring the characteristics of an image block to be processed in an image;
inputting the characteristics into a first target model, virtually segmenting the image block to be processed according to the characteristics by the first target model, and outputting the size of any sub-block after virtual segmentation, wherein the size of any sub-block is larger than a preset size;
according to the size of any sub-block, the image block to be processed is segmented to obtain at least one target sub-block, and the size of each target sub-block is equal to that of any sub-block;
motion compensation is performed on the at least one target sub-block.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising one or more instructions executable by a processor of a computer device to perform the method steps of the motion compensation method provided in the above embodiments, the method steps may include:
acquiring the characteristics of an image block to be processed in an image;
inputting the characteristics into a first target model, virtually segmenting the image block to be processed according to the characteristics by the first target model, and outputting the size of any sub-block after virtual segmentation, wherein the size of any sub-block is larger than a preset size;
according to the size of any sub-block, the image block to be processed is segmented to obtain at least one target sub-block, and the size of each target sub-block is equal to that of any sub-block;
motion compensation is performed on the at least one target sub-block.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of inputting the characteristics of an image block to be processed into a first target model, virtually segmenting the image block to be processed by the first target model, outputting any subblock larger than a preset size, segmenting the image block to be processed according to the size of any subblock, and segmenting the image block to be processed according to the size of any subblock, wherein the size of the target subblock obtained after segmentation is larger than the preset size, so that the number of the target subblocks segmented from the image block to be processed is controllable, and when the number of the target subblocks is small, the number of times of motion compensation performed by a coding and decoding end is small, so that the number of times of memory reading is small, and the consumption of memory bandwidth can be reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow chart illustrating a method of motion compensation according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of motion compensation according to an example embodiment.
FIG. 3 is a diagram illustrating a pending image block according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a motion compensation apparatus according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating a configuration of a computer device, according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a motion compensation method according to an exemplary embodiment. The motion compensation method is used in an encoding and decoding end and, as shown in Fig. 1, includes the following steps.
In step S11, the features of the image blocks to be processed in the image are acquired.
In step S12, the feature is input into a first target model; the first target model performs virtual segmentation on the image block to be processed according to the feature and outputs the size of any sub-block after the virtual segmentation, where the size of any sub-block is greater than a preset size.
In step S13, the to-be-processed image block is segmented according to the size of any sub-block to obtain at least one target sub-block, and the size of each target sub-block is equal to the size of any sub-block.
In step S14, motion compensation is performed on the at least one target sub-block.
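The segmentation in step S13 can be sketched in a few lines. This is a minimal illustration, not the disclosed implementation: the function name and the (x, y, width, height) tuple representation are hypothetical, and the model call that chooses the sub-block size (step S12) is assumed to have already run.

```python
def split_into_target_subblocks(block_w, block_h, sub_w, sub_h):
    """Step S13 (sketch): split the image block into target sub-blocks,
    each equal in size to the sub-block size output by the model."""
    # Assumes the model-chosen sub-block size divides the block size evenly,
    # since every target sub-block must equal that size.
    assert block_w % sub_w == 0 and block_h % sub_h == 0
    return [(x, y, sub_w, sub_h)
            for y in range(0, block_h, sub_h)
            for x in range(0, block_w, sub_w)]
```

Fewer, larger sub-blocks mean fewer motion-compensation passes in step S14: a 16 × 16 block split into 8 × 8 sub-blocks needs 4 passes instead of the 16 needed with 4 × 4 sub-blocks.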
Optionally, before the motion compensating the at least one target sub-block, the method further includes:
obtaining a motion vector of the at least one target sub-block;
the motion compensating the at least one target sub-block comprises:
and performing motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
Optionally, the obtaining the motion vector of the at least one target sub-block comprises:
and when the image block to be processed is a chrominance block, obtaining the motion vector of the at least one target sub-block according to the motion vector of the control point of the image block to be processed, wherein the control point comprises a pixel point at the upper left corner of the image block to be processed and a pixel point at the upper right corner of the image block to be processed.
Optionally, the obtaining the motion vector of the at least one target sub-block comprises:
when the image block to be processed is a chrominance block, determining a luminance block corresponding to the image block to be processed according to the corresponding relation between a luminance block and the chrominance block when any image is sampled;
obtaining a motion vector of at least one brightness sub-block in the corresponding brightness block;
and obtaining the motion vector of the at least one target sub-block according to the motion vector of the at least one brightness sub-block and the corresponding relation between the brightness sub-block and the target sub-block.
Optionally, the obtaining the motion vector of the at least one target sub-block comprises:
when the image block to be processed is a chrominance block and the size of the image block to be processed is equal to the size of the target sub-block, inputting the characteristics to a second target model, performing characteristic extraction on the image block to be processed according to the characteristics by the second target model, and outputting a motion vector of the at least one target sub-block, wherein the motion vector of each target sub-block is used for indicating one motion characteristic of the image block to be processed.
According to the method provided by the embodiment of the disclosure, the characteristics of the image block to be processed are input into the first target model, the first target model performs virtual segmentation on the image block to be processed and outputs any sub-block with a size larger than the preset size, and the image block to be processed is segmented according to the size of that sub-block, so that the number of target sub-blocks, and hence the number of motion-compensation operations and memory reads, is controllable and the consumption of memory bandwidth can be reduced.
After the encoding and decoding end segments any image block, the motion vector of the sub-block of the image block may also be obtained, so that the encoding and decoding end performs motion compensation on the image block based on the motion vector of the sub-block of the image block, and for further explaining a process of obtaining the motion vector of the sub-block, refer to a flowchart of a motion compensation method shown in fig. 2 according to an exemplary embodiment.
In step S21, the codec acquires the features of the image block to be processed.
The image block to be processed may be a result of sampling any image, and may be a chrominance block or a luminance block. An image of any frame may include a plurality of image areas, and "any image" refers to the image of any one of the plurality of image areas.
Before this step S21, the encoding and decoding end may perform chrominance sampling and luminance sampling on any image. In some embodiments, the chrominance sampling rate is half the luminance sampling rate in each dimension, so the encoding and decoding end obtains two chrominance blocks and one luminance block, where each chrominance block has a size of N × N and the luminance block has a size of 2N × 2N, N being a positive integer greater than 0; that one luminance block then corresponds to the two chrominance blocks.
To further illustrate the correspondence between the luminance block and the two chrominance blocks: any image has three image channels, Y, U and V. On channel Y, the encoding and decoding end performs luminance sampling on the image to obtain a luminance block; on channels U and V, it performs chrominance sampling to obtain one chrominance block each. If the coordinates in the image are (x, y), the coordinates of the luminance block may be (x, y) and the coordinates of each chrominance block may be (x/2, y/2). The correspondence between the luminance block and a chrominance block is therefore given by the mapping from (x, y) to (x/2, y/2); that is, if the coordinates of the luminance block are (x, y), the coordinates of the corresponding chrominance block can be determined directly as (x/2, y/2).
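The coordinate correspondence just described can be sketched as follows. The function names are hypothetical illustrations, and the mapping assumes the half-rate chrominance sampling described above:

```python
def luma_to_chroma_coord(x, y):
    """Map luminance coordinates (x, y) to the co-located chrominance
    coordinates (x/2, y/2) under half-rate chroma sampling (sketch)."""
    return (x // 2, y // 2)

def chroma_to_luma_coord(cx, cy):
    """Inverse direction: the top-left luminance coordinate co-located
    with chrominance coordinate (cx, cy)."""
    return (cx * 2, cy * 2)
```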
When the image block to be processed is a chrominance block, the features of the image block to be processed may include a motion vector of the control point, a motion vector of the chrominance sub-block, pixel values of the chrominance block, and the like; when the image block to be processed is a luminance block, the features may include a motion vector of the control point, a motion vector of the luminance sub-block, and pixel values of the luminance block. Of course, the image block to be processed may also include other features; the specific content of the features of the image block is not limited in the embodiments of the present disclosure.
The image block to be processed may comprise 2 control points. For example, in the schematic diagram of an image block to be processed shown in Fig. 3 according to an exemplary embodiment, image block 1 includes control point 1 at the upper-left corner and control point 2 at the upper-right corner, where the motion vector of control point 1 is v0 and the motion vector of control point 2 is v1. Certainly, the image block to be processed may instead include 3 control points; besides the 2 control points above, the third is the pixel point at the lower-left corner of the image block to be processed. For example, image block 2 in Fig. 3 includes control points 1-3, where the motion vector of control point 3 is v2.
The encoding and decoding end can obtain the motion vector of the control point of the image block to be processed according to the motion vector of the encoded and decoded image block around the image block to be processed.
In some embodiments, the to-be-processed image block has default subblocks, the encoding and decoding end may use a motion vector of the default subblock as a feature of the to-be-processed image block, and when the number of the default subblocks is too large, the encoding and decoding end may perform re-segmentation on the to-be-processed image block through the processes shown in the following steps S22-S23, so that the number of subblocks of the to-be-processed image block after re-segmentation is reduced, and thus the number of times of motion compensation performed on the subblocks by the subsequent encoding and decoding end is reduced, and memory consumption may be reduced.
When the image block to be processed is a luminance block, the encoding end may determine the motion vector of each luminance sub-block from the motion vectors of the control points. Specifically, the control point at the upper-left corner of the image block to be processed is taken as the origin, the direction from the upper-left control point to the upper-right control point as the positive direction of the x-axis, and the direction from the upper-left control point to the lower-left control point as the positive direction of the y-axis. For the sub-block that is i-th along the positive x-axis and j-th along the positive y-axis (with i and j counted from 0), let (x(i,j), y(i,j)) denote the coordinates of the center point of the (i, j)-th sub-block on the image block to be processed, where (x(i,j), y(i,j)) can be determined by equation (1).
x_{(i,j)} = M·i + M/2,  y_{(i,j)} = N·j + N/2    (1)
Where M is the width of a sub-block and N is the height of a sub-block. When the image block to be processed includes two control points, for any sub-block in the image block to be processed, a motion vector of any sub-block may be determined by the following equation (2).
mv_x = ((mv_{1x} − mv_{0x}) / W)·x − ((mv_{1y} − mv_{0y}) / W)·y + mv_{0x}
mv_y = ((mv_{1y} − mv_{0y}) / W)·x + ((mv_{1x} − mv_{0x}) / W)·y + mv_{0y}    (2)
Wherein (x, y) are the coordinates of the center of any sub-block, (mv_x, mv_y) is the motion vector of any sub-block, (mv_{1x}, mv_{1y}) is the motion vector of the control point at the upper-right corner of the image block to be processed, (mv_{0x}, mv_{0y}) is the motion vector of the control point at the upper-left corner of the image block to be processed, and W is the width of the image block to be processed.
For another example, when the image block to be processed has 3 control points, the motion vector of any sub-block can be obtained by the following formula (3).
mv_x = ((mv_{1x} − mv_{0x}) / W)·x + ((mv_{2x} − mv_{0x}) / H)·y + mv_{0x}
mv_y = ((mv_{1y} − mv_{0y}) / W)·x + ((mv_{2y} − mv_{0y}) / H)·y + mv_{0y}    (3)
Wherein (mv_{2x}, mv_{2y}) is the motion vector of the control point at the lower-left corner of the image block to be processed, and H is the height of the image block to be processed. Thus, the motion vector of each sub-block can be determined from equations (1) and (2), or from equations (1) and (3).
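A minimal sketch of equations (1)-(3), assuming zero-based sub-block indices and the variable meanings given above; the function names are hypothetical illustrations, not part of the disclosure:

```python
def subblock_center(i, j, M, N):
    """Equation (1): center of the (i, j)-th sub-block,
    with sub-block width M and height N, i and j counted from 0."""
    return (M * i + M / 2, N * j + N / 2)

def affine_mv(x, y, W, H, mv0, mv1, mv2=None):
    """Equations (2)/(3): motion vector of the sub-block centered at (x, y).

    mv0, mv1 are the upper-left and upper-right control-point MVs; if mv2
    (lower-left control point) is None the 2-control-point model (2) is
    used, otherwise the 3-control-point model (3). W and H are the width
    and height of the image block to be processed."""
    mv0x, mv0y = mv0
    mv1x, mv1y = mv1
    if mv2 is None:
        mvx = (mv1x - mv0x) / W * x - (mv1y - mv0y) / W * y + mv0x
        mvy = (mv1y - mv0y) / W * x + (mv1x - mv0x) / W * y + mv0y
    else:
        mv2x, mv2y = mv2
        mvx = (mv1x - mv0x) / W * x + (mv2x - mv0x) / H * y + mv0x
        mvy = (mv1y - mv0y) / W * x + (mv2y - mv0y) / H * y + mv0y
    return (mvx, mvy)
```

At the origin (the upper-left control point) both models return mv0, and when all control-point MVs are equal every sub-block gets the same vector, i.e. pure translation.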
When the image block to be processed is a chroma block, the encoding and decoding end may first determine a luminance block corresponding to the chroma block, then determine each luminance sub-block of the luminance block corresponding to the chroma block according to a splitting principle of the luminance block, then determine at least one luminance sub-block corresponding to each chroma sub-block of the chroma block, and finally, the encoding and decoding end obtains a motion vector of each chroma sub-block according to a motion vector of at least one luminance sub-block corresponding to each chroma sub-block.
For example, suppose the chrominance block has a size of 2 × 2 in units of sub-blocks, i.e. 2 rows and 2 columns of chrominance sub-blocks, and the luminance block corresponding to the chrominance block has a size of 4 × 4, i.e. 4 rows and 4 columns of luminance sub-blocks. The chrominance sub-block in row 1, column 1 then corresponds to the luminance sub-blocks in rows 1-2 and columns 1-2 of the luminance block. The encoding and decoding end may obtain the motion vectors of these 4 luminance sub-blocks, calculate the average, median, or another value of those motion vectors, and use the calculated value as the motion vector of the chrominance sub-block in row 1, column 1. Performing this process for each chrominance sub-block yields the motion vector of every chrominance sub-block in the chrominance block.
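The averaging variant of the example above can be sketched as follows, assuming each chrominance sub-block corresponds to a 2 × 2 group of co-located luminance sub-blocks and the average is the chosen statistic; the function name and the row-major list-of-lists layout are hypothetical illustrations:

```python
def chroma_subblock_mv(luma_mvs, ci, cj):
    """Sketch: derive the MV of the chrominance sub-block at column ci,
    row cj by averaging the 2x2 co-located luminance sub-block MVs.
    luma_mvs is a row-major grid of (mv_x, mv_y) tuples."""
    group = [luma_mvs[2 * cj + dy][2 * ci + dx]
             for dy in (0, 1) for dx in (0, 1)]
    return (sum(v[0] for v in group) / len(group),
            sum(v[1] for v in group) / len(group))
```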
It should be noted that the other values and the process of acquiring the pixel values of the image block are not specifically limited in the embodiments of the present disclosure.
In step S22, the encoding/decoding end inputs the feature into a first target model, and the first target model performs virtual segmentation on the image block to be processed according to the feature and outputs the size of any sub-block after the virtual segmentation, where the size of any sub-block is larger than a preset size.
The first target model may be a decision tree, a Support Vector Machine (SVM), a neural network, or the like; the first target model is not specifically limited in the embodiments of the present disclosure. The term "any sub-block" refers to any sub-block obtained by virtually segmenting the image block to be processed with the first target model.
Inputting the feature into the first target model is equivalent to inputting the image block to be processed into the first target model. To ensure that the size of any sub-block virtually segmented from the image block to be processed by the first target model is larger than the preset size, the first target model adjusts the size of any sub-block by taking a preset sub-block as a reference until a sub-block whose size is larger than the preset size is output, where the preset sub-block is a preset area in the image block to be processed. In one possible implementation manner, the first target model obtains a first prediction block of the preset sub-block according to the preset sub-block and the feature, among the features of the image block to be processed, corresponding to the preset sub-block. The first target model then increases the preset sub-block by a target size to obtain a first target sub-block, where the size of the first target sub-block is equal to the sum of the size of the preset sub-block and the target size, and obtains a second prediction block of the first target sub-block according to the first target sub-block and the corresponding feature. When the SSE (sum of squared errors) among the preset sub-block, the first prediction block, the first target sub-block, and the second prediction block is smaller than a first target value, the first target model takes the first target sub-block as any sub-block after the image block to be processed is virtually segmented and outputs the size of the first target sub-block. Otherwise, the first target model increases the first target sub-block by the target size to obtain a new first target sub-block, and when the SSE among the preset sub-block, the first prediction block, the new first target sub-block, and the second prediction block of the new first target sub-block is smaller than the first target value, the first target model takes the new first target sub-block as any sub-block after the image block to be processed is virtually segmented and outputs the size of the new first target sub-block.
It should be noted that, when the SSE between the preset sub-block, the first prediction block, the first target sub-block, and the second prediction block is smaller than the target value, the size of the first target sub-block output by the first target model is larger than the preset size, that is, the size of any sub-block is larger than the preset size. The first target model determines a first target sub-block, that is, performs virtual segmentation on the image block to be processed.
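The iterative size adjustment described above can be sketched as follows. This is a minimal sketch in which `sse(size)` abstracts the model's internal computation of the SSE between the candidate sub-blocks and their prediction blocks; all names and the concrete stopping policy are illustrative assumptions.

```python
def grow_subblock(preset_size, step, max_size, sse, threshold):
    """Starting from the preset sub-block, repeatedly grow the candidate
    sub-block by `step` until the SSE criterion falls below `threshold`;
    the returned size is the size of "any sub-block" output by the model.
    `max_size` caps the search at the size of the image block itself."""
    size = preset_size
    while size < max_size:
        size += step
        if sse(size) < threshold:
            return size
    return max_size
```

For instance, with a synthetic error that shrinks as the sub-block grows, `grow_subblock(4, 4, 32, lambda s: 100 - s, 90)` stops at the first candidate whose error drops under the threshold.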
If the size of any sub-block output by the first target model were too small, the encoding/decoding end would obtain more target sub-blocks when segmenting the image block to be processed based on that size, so the number of motion compensation operations would not be reduced and the consumption of memory bandwidth still could not be reduced. It should be noted that the preset size and the first target value are not specifically limited in the embodiments of the present disclosure.
The first target model is capable of virtual segmentation because it is trained on the features of sample image blocks and the set optimal sub-block sizes of those sample image blocks; the first target model therefore learns the capability of determining the optimal sub-block size from the features of an image block. Before step S22, the first target model may be trained in advance so that it has the capability of determining the optimal sub-block size according to the feature of the image block to be processed. When the encoding/decoding end performs encoding and decoding, the feature of the image block to be processed may be directly input into the first target model, and the first target model outputs the optimal sub-block size (the size of any sub-block) of the image block to be processed according to this learned capability.
In some embodiments, when the to-be-processed image block is a chroma block, the codec side may not obtain the size of any sub-block through the processes shown in steps S21-S22, and may obtain the size of any sub-block of the to-be-processed image block according to the variance of the motion vector of at least one luma sub-block in the luma block. In one possible implementation, see the process shown in steps A-B below.
And step A, when the image block to be processed is a chrominance block, the coding and decoding end obtains the motion vector of at least one luminance sub-block in the luminance block corresponding to the image block to be processed.
In a possible implementation manner, the encoding/decoding end may implement step A through the process shown in the following steps A1-A2.
Step A1, when the image block to be processed is a chrominance block, the encoding/decoding end determines the luminance block corresponding to the image block to be processed according to the correspondence between luminance blocks and chrominance blocks when sampling any image.
The manner of determining the luminance block corresponding to the chrominance block is introduced in step S21, and step A1 is not described in detail in the embodiments of the present disclosure.
Step A2, the encoding/decoding end obtains the motion vector of at least one luminance sub-block in the luminance block corresponding to the image block to be processed.
The process of obtaining the motion vector of at least one luminance sub-block in the corresponding luminance block is introduced in step S21, and step A2 is not described in detail in the embodiments of the present disclosure.
Step B, the encoding/decoding end determines the sub-block size of the image block to be processed according to the variance of the motion vectors of the at least one luminance sub-block.
The encoding/decoding end may first obtain the variance according to the motion vector of the at least one luminance sub-block, and then perform the step B.
In some embodiments, the encoding and decoding end may preset a plurality of different variance ranges, each variance range corresponds to one subblock size, so that the encoding and decoding end may determine, according to a variance range to which an obtained variance belongs, a subblock size corresponding to the variance range to which the obtained variance belongs, and determine the corresponding subblock size as the subblock size of the image block to be processed.
For example, with the plurality of different variance ranges stored in Table 1, when the variance obtained by the encoding/decoding end falls in Range 1, the sub-block size is 3*3; when it falls in Range 2, the sub-block size is 2*2; and when it falls in Range 3, the sub-block size is 1*1.
TABLE 1
Serial number    Range interval                                    Sub-block size
Range 1          Greater than or equal to 0 and less than 0.2      3*3
Range 2          Greater than or equal to 0.2 and less than 0.4    2*2
Range 3          Greater than or equal to 0.4 and less than 0.6    1*1
Therefore, the size of the sub-block in the chroma block can be determined directly according to the variance of the luminance sub-block, and the calculation is simpler.
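The lookup of Table 1 can be sketched as follows. The function name is illustrative, and variances at or above 0.6 fall outside the preset ranges of the example, so the sketch raises an error for them.

```python
def subblock_size_from_variance(variance):
    """Map the variance of the luma sub-block motion vectors to a chroma
    sub-block size using the ranges of Table 1."""
    ranges = [
        (0.0, 0.2, (3, 3)),  # Range 1
        (0.2, 0.4, (2, 2)),  # Range 2
        (0.4, 0.6, (1, 1)),  # Range 3
    ]
    for low, high, size in ranges:
        if low <= variance < high:
            return size
    raise ValueError("variance outside the preset ranges")
```

A larger variance means the luma sub-block motions disagree more, so a finer chroma sub-block size is chosen.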
In step S23, the encoding/decoding end performs segmentation on the image block to be processed according to the size of any sub-block to obtain at least one target sub-block, where the size of each target sub-block is equal to the size of any sub-block.
The encoding/decoding end may segment the image block to be processed by taking the size of any sub-block output by the first target model as the size of each target sub-block. When the size of the target sub-block is equal to that of the image block to be processed, the encoding/decoding end does not segment the image block to be processed, and the image block to be processed is itself a target sub-block; when the size of the target sub-block is smaller than that of the image block to be processed, the encoding/decoding end segments the image block to be processed to obtain at least two target sub-blocks.
Through the processes shown in the above steps S21-S23, the number of finally obtained target sub-blocks may be small, which may affect the accuracy of the image block to be processed obtained after motion compensation. The encoding/decoding end may therefore perform the following step S24 to obtain the motion vector of at least one target sub-block, so as to ensure the accuracy of the image block to be processed obtained after motion compensation.
In step S24, the codec obtains the motion vector of the at least one target sub-block.
When the image block to be processed is a brightness block, the encoding and decoding end obtains the motion vector of at least one target sub-block of the brightness block. It should be noted that, in step S21, a process of obtaining the motion vector of the luminance sub-block by the encoding/decoding end is introduced, and therefore, the embodiment of the present disclosure does not repeat this step S24.
When the image block to be processed is a chroma block, the present disclosure provides the following 3 ways to obtain the motion vector of at least one target sub-block.
In the first manner, when the image block to be processed is a chrominance block, the encoding/decoding end obtains the motion vector of the at least one target sub-block according to the motion vectors of the control points of the image block to be processed, where the control points include the pixel point at the upper left corner of the image block to be processed and the pixel point at the upper right corner of the image block to be processed.
Specifically, the motion vector of each target sub-block may be obtained by equations (1) and (2), or may be obtained by equations (1) and (3).
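Formulas (1) and (2) are not reproduced in this excerpt. As an assumed sketch, the widely used two-control-point (4-parameter) affine model derives a sub-block motion vector from the top-left and top-right control-point vectors as follows; the function name is illustrative.

```python
def affine_mv_4param(cp0, cp1, W, x, y):
    """Two-control-point (4-parameter) affine model: derive the motion
    vector at position (x, y) from the top-left control point
    cp0 = (mv0x, mv0y) and the top-right control point cp1 = (mv1x, mv1y)
    of a block of width W."""
    mv0x, mv0y = cp0
    mv1x, mv1y = cp1
    mvx = (mv1x - mv0x) / W * x - (mv1y - mv0y) / W * y + mv0x
    mvy = (mv1y - mv0y) / W * x + (mv1x - mv0x) / W * y + mv0y
    return (mvx, mvy)
```

With two control points the model can represent translation, rotation, and zoom; the three-control-point model of formula (3) additionally captures independent vertical scaling.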
In the second manner, the encoding/decoding end obtains the motion vector of each target sub-block in the chrominance block according to the motion vector of at least one luminance sub-block of the luminance block corresponding to the chrominance block.
In a possible implementation manner, the encoding/decoding end may implement the second manner through the process shown in the following steps 1-3.
Step 1, when the image block to be processed is a chrominance block, the encoding/decoding end determines the luminance block corresponding to the image block to be processed according to the correspondence between luminance blocks and chrominance blocks when any image is sampled.
The manner of determining the luminance block corresponding to the chrominance block is introduced in step S21, and step 1 is not described in detail in the embodiments of the present disclosure.
Step 2, the encoding/decoding end obtains the motion vector of at least one luminance sub-block in the luminance block corresponding to the image block to be processed.
The process of obtaining the motion vector of at least one luminance sub-block in the luminance block is introduced in step S21, and step 2 is not described in detail in the embodiments of the present disclosure.
And 3, the coding and decoding end acquires the motion vector of the at least one target sub-block according to the motion vector of the at least one brightness sub-block and the corresponding relation between the brightness sub-block and the target sub-block.
When the image block to be processed is a chroma block, the target sub-block is also a chroma sub-block, and the correspondence between the luma sub-block and the chroma sub-block is introduced in step S21, which is not described in detail in the embodiments of the present disclosure, and a process of obtaining the motion vector of the at least one chroma sub-block according to the motion vector of the at least one luma sub-block is introduced in step S21, which is not described in detail in step 3 in the embodiments of the present disclosure.
It should be noted that, when the image block to be processed itself serves as a target sub-block, the motion vector of this target sub-block is the motion vector of the center of the image block to be processed. In this case, when the encoding/decoding end performs motion compensation on the image block to be processed, only one motion compensation is performed rather than multiple motion compensations, so the consumption of memory bandwidth can be reduced to the maximum extent.
In the third manner, when the image block to be processed is a chrominance block and the size of the image block to be processed is equal to the size of the target sub-block, the feature is input into a second target model, and the second target model performs feature extraction on the image block to be processed according to the feature and outputs the motion vector of the at least one target sub-block, where the motion vector of each target sub-block is used to indicate one motion feature of the image block to be processed.
The second target model may be a decision tree, an SVM, a neural network, or the like; the second target model is not specifically limited in the embodiments of the present disclosure. The encoding/decoding end may first perform model training to obtain the second target model, so that the second target model has the capability of performing feature extraction on an image block according to the features of the image block. When the encoding/decoding end inputs the feature of the image block to be processed into the second target model, the second target model can directly perform feature extraction on the image block to be processed according to the feature and output the motion vector of the at least one target sub-block. It should be noted that, when the size of the target sub-block is equal to the size of the image block to be processed, the image block to be processed is a sub-block of itself, and the second target model outputs the motion vector of one target sub-block, that is, the motion vector of the whole image block to be processed.
The process of performing model training on the second target model may be as follows: the encoding/decoding end inputs the features of a sample image block and a preset motion vector into the second target model, where the preset motion vector is a motion vector preset for the sample image block; the second target model searches near the preset motion vector based on the features of the sample image block to obtain at least one target motion vector, and obtains the at least one prediction block indicated by the at least one target motion vector. When the SSE between some prediction block of the at least one prediction block and the sample image block is smaller than a second target value, the second target model takes the prediction block with the smallest SSE relative to the sample image block as the optimal prediction block and takes the target motion vector corresponding to the optimal prediction block as the optimal motion vector of the sample image block, and the training of the second target model is then completed.
The trained second target model can learn from the preset motion vectors, the optimal motion vectors, and the features of the sample image blocks. The features of an image block include the motion vectors of its control points, its pixel values, and the like, and each target motion vector of a sample image block is related to the motion vectors of the control points of that sample image block. The second target model can therefore correct the motion vectors of the control points according to the texture and relative position characteristics between the pixel values of the sample image blocks and the image block to be processed, so that the finally output motion vector is superior to the motion vector directly derived from the control points. That is, the trained second target model can perform feature extraction on the image block to be processed according to the feature and output the motion vector of at least one target sub-block, and the encoding/decoding end can subsequently input the feature of the image block to be processed directly into the trained second target model to obtain the motion vector of the target sub-block of the image block to be processed.
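The training-time selection described above can be sketched as follows. `predict(mv)` abstracts motion-compensated prediction, the blocks are flat lists of pixel values, and all names are illustrative assumptions; the point is picking, among candidate motion vectors searched near the preset one, the candidate whose prediction block has the smallest SSE below the threshold.

```python
def select_best_mv(candidates, predict, target_block, sse_threshold):
    """Return the candidate motion vector whose prediction block has the
    smallest SSE against the sample block, provided that SSE is below the
    threshold; otherwise return None (no candidate qualifies)."""
    def sse(mv):
        pred = predict(mv)
        return sum((p - t) ** 2 for p, t in zip(pred, target_block))
    best = min(candidates, key=sse)
    return best if sse(best) < sse_threshold else None
```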
It should be noted that, in all three of the above ways of obtaining the motion vector of a target sub-block, the obtained motion vector is the motion vector of the pixel point at the center of the target sub-block. Relative to the motion vectors of the other pixel points in the target sub-block, the motion vector of the center pixel point better reflects the motion of the whole target sub-block; using it as the motion vector of the target sub-block therefore yields higher precision for the compensated target sub-block obtained by performing motion compensation according to that motion vector. In addition, the third manner can correct the motion vectors of the control points of the target sub-block, so that the finally obtained motion vector of the target sub-block is optimal, which can further improve the precision of the compensated target sub-block. The process of performing motion compensation on a target sub-block according to its motion vector is shown in step S25.
In step S25, the codec performs motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
For any target sub-block in the at least one target sub-block, after determining the motion vector of the target sub-block, the encoding/decoding end obtains the residual of the target sub-block according to the image block indicated by the motion vector of the target sub-block and the target sub-block itself, where the residual is the pixel difference between the image block indicated by the motion vector and the target sub-block. The encoding/decoding end can then complete the motion compensation process for the target sub-block according to the residual of the target sub-block and the image block indicated by the motion vector of the target sub-block.
If the above-mentioned process of motion compensation for any target sub-block is performed for the at least one target sub-block, motion compensation for the at least one target sub-block can be implemented.
It should be noted that the process shown in step S25 is also a process of performing motion compensation on the at least one target sub-block.
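The per-sub-block compensation of step S25 can be sketched as follows. This minimal sketch assumes integer motion vectors and row-major lists of pixel rows (real codecs use sub-pixel interpolation); the function name and parameter layout are illustrative.

```python
def motion_compensate(reference, block, top, left, mv, h, w):
    """Fetch the h x w prediction block that the (integer) motion vector
    mv = (dy, dx) indicates in the reference frame for the sub-block
    anchored at (top, left), and compute the residual as the pixel
    difference between the current sub-block and that prediction."""
    dy, dx = mv
    pred = [[reference[top + dy + i][left + dx + j] for j in range(w)]
            for i in range(h)]
    residual = [[block[i][j] - pred[i][j] for j in range(w)]
                for i in range(h)]
    return pred, residual
```

When the motion vector points exactly at the matching region, the residual is all zeros, which is what the encoder hopes to approach for efficient coding.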
According to the method provided by the embodiments of the present disclosure, the feature of the image block to be processed is input into the first target model, the first target model performs virtual segmentation on the image block to be processed and outputs the size of any sub-block, which is larger than the preset size, and the image block to be processed is segmented according to the size of any sub-block. Because the size of each target sub-block is larger than the preset size, fewer target sub-blocks are obtained, so fewer motion compensation operations are needed and the consumption of memory bandwidth is reduced. In addition, relative to the motion vectors of the other pixel points in a target sub-block, the motion vector of the pixel point at the center of the target sub-block better reflects the motion of the whole target sub-block, so using the center motion vector as the motion vector of the target sub-block yields higher precision for the compensated target sub-block obtained by performing motion compensation according to that motion vector.
Fig. 4 is a block diagram illustrating a motion compensation apparatus according to an exemplary embodiment. Referring to fig. 4, the apparatus includes an acquisition unit 401, a processing unit 402, a slicing unit 403, and a compensation unit 404.
The acquiring unit 401 is configured to perform acquiring features of image blocks to be processed in an image;
the processing unit 402 is configured to input the feature into a first target model, where the first target model performs virtual segmentation on the image block to be processed according to the feature, and outputs the size of any virtually-segmented sub-block, where the size of any sub-block is greater than a preset size;
the segmentation unit 403 is configured to perform segmentation on the to-be-processed image block according to the size of any sub-block to obtain at least one target sub-block, where the size of each target sub-block is equal to the size of any sub-block;
a compensation unit 404 configured to perform motion compensation on the at least one target sub-block.
Optionally, the obtaining unit 401 is further configured to perform obtaining a motion vector of the at least one target sub-block;
the compensation unit 404 is further configured to perform motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
Optionally, the obtaining unit 401 is further configured to perform:
and when the image block to be processed is a chrominance block, obtaining the motion vector of the at least one target sub-block according to the motion vector of the control point of the image block to be processed, wherein the control point comprises a pixel point at the upper left corner of the image block to be processed and a pixel point at the upper right corner of the image block to be processed.
Optionally, the obtaining unit 401 is further configured to perform:
when the image block to be processed is a chrominance block, determining a luminance block corresponding to the image block to be processed according to the corresponding relation between a luminance block and the chrominance block when any image is sampled;
obtaining a motion vector of at least one brightness sub-block in the corresponding brightness block;
and obtaining the motion vector of the at least one target sub-block according to the motion vector of the at least one brightness sub-block and the corresponding relation between the brightness sub-block and the target sub-block.
Optionally, the obtaining unit 401 is further configured to perform:
when the image block to be processed is a chrominance block and the size of the image block to be processed is equal to the size of the target sub-block, inputting the characteristics to a second target model, performing characteristic extraction on the image block to be processed according to the characteristics by the second target model, and outputting a motion vector of the at least one target sub-block, wherein the motion vector of each target sub-block is used for indicating one motion characteristic of the image block to be processed.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 5 is a schematic structural diagram of a computer device 500 according to an exemplary embodiment. The computer device 500 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 501 and one or more memories 502, where the memory 502 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 501 to implement the methods provided by the above method embodiments. Of course, the computer device 500 may further have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the computer device 500 may further include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, there is also provided a storage medium, such as a memory, comprising instructions executable by a processor in a computer device to perform the motion compensation method in the above embodiments. For example, the storage medium may be a ROM (read-only memory), a RAM (random access memory), a CD-ROM (compact disc read-only memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Illustratively, there is also provided a computer program product comprising one or more instructions executable by a processor of a computer device to perform the method steps of the motion compensation method provided in the above embodiments, which method steps may comprise:
acquiring the characteristics of an image block to be processed in an image;
inputting the characteristics into a first target model, virtually segmenting the image block to be processed according to the characteristics by the first target model, and outputting the size of any subblock after the virtual segmentation, wherein the size of any subblock is larger than a preset size;
according to the size of any sub-block, the image block to be processed is segmented to obtain at least one target sub-block, and the size of each target sub-block is equal to that of any sub-block;
motion compensation is performed on the at least one target sub-block.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A method of motion compensation, comprising:
acquiring the characteristics of an image block to be processed in an image, wherein the characteristics comprise a motion vector and a pixel value of a control point of the image block to be processed;
inputting the characteristics into a first target model, virtually segmenting the image block to be processed according to the characteristics by the first target model, and outputting the size of any subblock after the virtual segmentation, wherein the size of any subblock is larger than a preset size;
according to the size of any sub-block, the image block to be processed is segmented to obtain at least one target sub-block, and the size of each target sub-block is equal to that of any sub-block;
when the image block to be processed is a chrominance block and the size of the image block to be processed is equal to the size of the target sub-block, inputting the characteristics to a second target model, correcting the motion vector of the control point by the second target model according to the texture and relative position characteristics between the pixel values of the sample image block and the image block to be processed, and outputting the motion vector of the at least one target sub-block, wherein the motion vector of each target sub-block is used for indicating one motion characteristic of the image block to be processed;
and performing motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
2. The motion compensation method according to claim 1, wherein the method further comprises:
and when the image block to be processed is a chrominance block, obtaining the motion vector of the at least one target sub-block according to the motion vector of the control point of the image block to be processed, wherein the control point comprises a pixel point at the upper left corner of the image block to be processed and a pixel point at the upper right corner of the image block to be processed.
3. The motion compensation method according to claim 1, wherein the method further comprises:
when the image block to be processed is a chrominance block, determining a luminance block corresponding to the image block to be processed according to the corresponding relation between the luminance block and the chrominance block when the image is sampled;
obtaining a motion vector of at least one brightness sub-block in the corresponding brightness block;
and obtaining the motion vector of the at least one target sub-block according to the motion vector of the at least one brightness sub-block and the corresponding relation between the brightness sub-block and the target sub-block.
4. A motion compensation apparatus, comprising:
the image processing device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is configured to acquire the characteristics of an image block to be processed in an image, and the characteristics comprise a motion vector and a pixel value of a control point of the image block to be processed;
the processing unit is configured to input the features into a first target model, the first target model performs virtual segmentation on the image block to be processed according to the features, and outputs the size of any sub-block after the virtual segmentation, wherein the size of any sub-block is larger than a preset size;
the segmentation unit is configured to perform segmentation on the image block to be processed according to the size of any sub-block to obtain at least one target sub-block, wherein the size of each target sub-block is equal to the size of any sub-block;
the obtaining unit is further configured to input the feature to a second target model when the to-be-processed image block is a chrominance block and the size of the to-be-processed image block is equal to the size of the target sub-block, the second target model corrects the motion vector of the control point according to texture and relative position characteristics between pixel values of a sample image block and the to-be-processed image block, and outputs a motion vector of the at least one target sub-block, wherein the motion vector of each target sub-block is used for indicating one motion feature of the to-be-processed image block;
a compensation unit configured to perform motion compensation on the at least one target sub-block according to the motion vector of the at least one target sub-block.
5. The motion compensation apparatus of claim 4, wherein the acquisition unit is further configured to perform:
when the image block to be processed is a chrominance block, obtaining the motion vector of the at least one target sub-block according to the motion vector of the control point of the image block to be processed, wherein the control point comprises a pixel point at the upper-left corner of the image block to be processed and a pixel point at the upper-right corner of the image block to be processed.
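Claim 5 derives sub-block motion vectors from two control points (upper-left and upper-right) but does not state the formula. A common formulation is the 4-parameter affine model used in codecs such as VVC, sketched below as an assumption rather than the patent's own derivation:

```python
def affine_sub_block_mv(mv0, mv1, block_w, x, y):
    """4-parameter affine motion model: derive the motion vector at
    position (x, y) inside the block from the upper-left control point
    mv0 = (mv0x, mv0y) at (0, 0) and the upper-right control point
    mv1 = (mv1x, mv1y) at (block_w, 0)."""
    dx = (mv1[0] - mv0[0]) / block_w   # horizontal gradient (zoom/rotate)
    dy = (mv1[1] - mv0[1]) / block_w   # vertical gradient (rotate)
    mvx = dx * x - dy * y + mv0[0]
    mvy = dy * x + dx * y + mv0[1]
    return (mvx, mvy)
```

When both control points carry the same vector the model degenerates to pure translation, so every sub-block inherits that vector unchanged.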
6. The motion compensation apparatus of claim 4, wherein the acquisition unit is further configured to perform:
when the image block to be processed is a chrominance block, determining a luminance block corresponding to the image block to be processed according to the correspondence between luminance blocks and chrominance blocks when the image is sampled;
obtaining a motion vector of at least one luminance sub-block in the corresponding luminance block; and
obtaining the motion vector of the at least one target sub-block according to the motion vector of the at least one luminance sub-block and the correspondence between luminance sub-blocks and target sub-blocks.
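Claim 6 maps each chrominance sub-block onto luminance sub-blocks via the sampling correspondence. Under 4:2:0 sampling one chroma sub-block covers a 2×2 group of luma sub-blocks, and one common scheme (again an assumption, modelled on practice in codecs such as VVC) averages that group's motion vectors:

```python
def chroma_sub_block_mv(luma_mvs, ci, cj):
    """Derive the motion vector of chroma sub-block (ci, cj) from the
    corresponding luma sub-blocks under 4:2:0 sampling: each chroma
    sub-block maps onto the 2x2 group of luma sub-blocks at
    (2*ci..2*ci+1, 2*cj..2*cj+1), whose vectors are averaged."""
    group = [luma_mvs[2 * ci + di][2 * cj + dj]
             for di in (0, 1) for dj in (0, 1)]
    mvx = sum(mv[0] for mv in group) / len(group)
    mvy = sum(mv[1] for mv in group) / len(group)
    return (mvx, mvy)
```

Averaging keeps the chroma prediction consistent with the luma motion field even when the four underlying luma sub-blocks carry slightly different affine-derived vectors.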
7. A computer device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the motion compensation method of any of claims 1 to 3.
8. A storage medium having instructions that, when executed by a processor of a computer device, enable the computer device to perform a motion compensation method as claimed in any one of claims 1 to 3.
CN201910785583.6A 2019-08-23 2019-08-23 Motion compensation method, motion compensation device, computer equipment and storage medium Active CN110505485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910785583.6A CN110505485B (en) 2019-08-23 2019-08-23 Motion compensation method, motion compensation device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110505485A CN110505485A (en) 2019-11-26
CN110505485B true CN110505485B (en) 2021-09-17

Family

ID=68589201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910785583.6A Active CN110505485B (en) 2019-08-23 2019-08-23 Motion compensation method, motion compensation device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110505485B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111327901B (en) * 2020-03-10 2023-05-30 北京达佳互联信息技术有限公司 Video encoding method, device, storage medium and encoding equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103155566A (en) * 2010-11-02 2013-06-12 松下电器产业株式会社 Movie image encoding method and movie image encoding device
CN105594212A (en) * 2013-07-24 2016-05-18 三星电子株式会社 Method for determining motion vector and apparatus therefor
CN106375659A (en) * 2016-06-06 2017-02-01 中国矿业大学 Electronic image stabilization method based on multi-resolution gray projection

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060002474A1 (en) * 2004-06-26 2006-01-05 Oscar Chi-Lim Au Efficient multi-block motion estimation for video compression
KR101555327B1 (en) * 2007-10-12 2015-09-23 톰슨 라이센싱 Methods and apparatus for video encoding and decoding geometrically partitioned bi-predictive mode partitions
FR2980942A1 (en) * 2011-09-30 2013-04-05 France Telecom IMAGE ENCODING AND DECODING METHOD, IMAGE ENCODING AND DECODING DEVICE AND CORRESPONDING COMPUTER PROGRAMS



Similar Documents

Publication Publication Date Title
EP3488388B1 (en) Video processing method and apparatus
US10034005B2 (en) Banding prediction for video encoding
CN111988611B (en) Quantization offset information determining method, image encoding device and electronic equipment
JP5075861B2 (en) Image processing apparatus and image processing method
US11216910B2 (en) Image processing system, image processing method and display device
CN104023225B (en) Video quality evaluation without reference method based on Space-time domain natural scene statistical nature
EP3051822B1 (en) Color table generation for image coding
US20200374526A1 (en) Method, device, apparatus for predicting video coding complexity and storage medium
WO2015183450A1 (en) Block-based static region detection for video processing
CN110324617B (en) Image processing method and device
Kim et al. Deep blind image quality assessment by employing FR-IQA
CN113099067B (en) Reversible information hiding method and system based on pixel value sequencing prediction and diamond prediction
US9294676B2 (en) Choosing optimal correction in video stabilization
CN114255187A (en) Multi-level and multi-level image optimization method and system based on big data platform
CN113012073A (en) Training method and device for video quality improvement model
CN110505485B (en) Motion compensation method, motion compensation device, computer equipment and storage medium
CN111754429A (en) Motion vector post-processing method and device, electronic device and storage medium
CN114222133B (en) Content self-adaptive VVC intra-frame coding rapid dividing method based on classification
CN103871035B (en) Image denoising method and device
CN111246221B (en) AVS3 intra-frame rapid division method, system and storage medium
CN112085667A (en) Deblocking effect removing method and device based on pseudo-analog video transmission
CN113111770B (en) Video processing method, device, terminal and storage medium
CN112634224B (en) Focus detection method and device based on target image
JPH04297962A (en) Method and apparatus for emphasizing image
CN110807483B (en) FPGA-based template matching implementation device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant