CN104202603B - Motion vector field generation method applied to video frame rate up-conversion - Google Patents

Motion vector field generation method applied to video frame rate up-conversion

Info

Publication number
CN104202603B
CN104202603B CN201410489709.2A
Authority
CN
China
Prior art keywords
block
motion vector
image
motion
particle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410489709.2A
Other languages
Chinese (zh)
Other versions
CN104202603A (en)
Inventor
陈卫刚 (Chen Weigang)
时佳佳 (Shi Jiajia)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Otto Electric Co ltd
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN201410489709.2A priority Critical patent/CN104202603B/en
Publication of CN104202603A publication Critical patent/CN104202603A/en
Application granted granted Critical
Publication of CN104202603B publication Critical patent/CN104202603B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Television Systems (AREA)

Abstract

The invention relates to the field of video image processing, in particular to a motion vector field generation method applied to video frame rate up-conversion. The method comprises: first, adaptively determining the block size under a particle-filtering framework, using larger blocks in regions of single motion to reduce the influence of image noise on the matching search, and smaller blocks in regions of complex motion to capture that motion, which favors the correct and reliable construction of the motion vector field used for motion-compensated interpolation; second, for regions of the video image with uniform gray level and little texture, after the motion vectors of the remaining parts of the image have been estimated, searching for the best match within a small range in a two-pass scan that takes the motion vectors of neighboring blocks as candidate vectors; third, where the video image contains exposed areas, taking the two consecutive frames I1 and I2 that follow the interpolated frame and determining the motion vectors at the corresponding positions of the interpolated frame from the vectors pointing from I1 to I2, which effectively prevents erroneous vectors from occurring in the exposed areas.

Description

Motion vector field generation method applied to video frame rate up-conversion
Technical Field
The present invention relates to the field of video image processing, and in particular, to a motion vector field generation method applied to video frame rate up-conversion.
Background
Frame rate up-conversion periodically inserts new frames into lower-frame-rate video to increase its frame rate. In applications such as normal-frame-rate playback of video coded at a low frame rate and video format conversion, it can effectively improve the viewing experience.
Among existing frame rate up-conversion algorithms, motion-compensation-based algorithms perform better than simple frame repetition or frame averaging. Because video images often contain dynamic regions introduced by moving objects, a motion-compensation-based algorithm tries to interpolate each pixel along its motion trajectory; by closely combining motion estimation with neighborhood filtering, it can avoid the inter-frame jitter introduced by simple repetition and the motion blur introduced by frame averaging.
For motion-compensation-based algorithms, two elements determine the quality of the interpolated frame: (1) how to estimate the true motion of each pixel so as to restore its motion trajectory; (2) how to construct an interpolated frame in which objects visually exhibit continuous, smooth motion in the presence of more or less unreliable motion vectors. For the first element, the difficulty stems from the fact that motion estimation is itself an ill-posed problem, and most existing motion estimation techniques are designed for video coding: they aim to minimize the compensation residual and reduce the coding rate, so the estimated motion vectors do not always reflect the true motion of objects.
It should be noted that, for block matching algorithms, the block size affects the reliability and precision of the motion vector. When a region exhibits consistent motion, a larger block helps reduce the influence of image noise on the matching search; conversely, at the edges of moving objects and in other such regions, a smaller block helps capture the complex motion. For a block located in a gray-uniform area, the block matching search is easily disturbed by image noise; for such an area, taking the motion vectors of neighboring blocks as reference vectors after the motion vectors of the other areas have been determined makes it easier to obtain a motion vector reflecting the area's real motion and also reduces the computational cost. In addition, for an exposed area, the motion vector obtained by block matching between the frames immediately before and after the interpolated frame is unreliable, and the two consecutive frames after the interpolated frame are needed to estimate, in reverse, the motion vector at the corresponding position of the interpolated frame.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a motion vector field generation method applied to video frame rate up-conversion. According to the temporal motion characteristics of the video image, it adaptively uses larger blocks in regions of single motion and smaller blocks in regions of complex motion for block matching motion estimation, effectively avoiding the erroneous motion vectors introduced when pixels from different moving objects fall within one block. According to spatial texture information, a larger block size is used in regions of uniform gray level lacking texture detail, and the motion vectors of adjacent blocks with clear texture information serve as candidate vectors for estimating the motion vectors of such blocks, so as to avoid unreliable motion vectors.
To achieve the above object, the present invention provides a motion vector field generating method applied to video frame rate up-conversion, comprising the steps of:
step one, reading the frame that is temporally adjacent to and precedes the frame to be interpolated, denoted I0, and the two frames immediately after the frame to be interpolated, denoted I1 and I2; with I0 as the current frame and I1 as the reference frame, estimating a motion vector field MF_0 in a multi-resolution, variable-block-size manner, where each vector corresponds to a block of I0 and gives, in vector form, the relative offset between that block and its best matching block in the reference image;
step two, judging whether each motion vector in the motion vector field MF_0 is reliable;
step three, motion vector projection: according to the time interval between the interpolated frame IΔt and I0, each reliable motion vector of the vector field MF_0 is assigned by projection to pixels of the interpolated frame, yielding a vector field MF_X of the same size as IΔt;
step four, for the areas of the vector field MF_X to which no motion vector has been assigned, starting from the interpolated frame IΔt, searching through I1 and I2 and taking as the motion vector the vector that yields the smallest matching cost between I1 and I2.
Further, step one estimates the motion vector field by multi-resolution, variable-block-size matching. The process of adaptively dividing the image into sub-blocks of different sizes is modeled as a dynamic system described by the state vector s = [s1, s2, ..., sN]^T, where N is the total number of blocks and si = (xi, yi, wi) represents the i-th particle, corresponding to an image block: (xi, yi) are the coordinates of the block's upper-left corner and wi is the block size, in pixels, in the width and height directions. Under the particle-filtering framework, driven by an importance density function q, the motion vector field estimation is realized iteratively, comprising the following steps:
step 1, constructing an L-layer image pyramid, where layer 0 corresponds to the original image and layer L-1 to the minimum-resolution image; if the size of the layer-k image is H × W, the size of the layer-(k+1) image is H/2 × W/2, and each of its pixels equals the mean of 4 pixels of the layer-k image;
step 2, taking the layer-0 image as input, computing the gradient strength of the image and scanning the gradient strength map with a window; if the number of pixels in the window with large gradient values is below a pre-specified threshold, the window is judged to lie in a gray-uniform area and is not processed in the subsequent steps 3 to 7;
step 3, judging whether the block corresponding to the particle was split quadtree-style from a block of the previous layer, or whether the layer-(L-1) image is currently being processed; if so, using a fast block matching algorithm with minimization of the sum of absolute frame differences as the matching criterion, searching for the best match in the reference image, and taking the relative offset (dxi, dyi) of the two as the motion vector; otherwise, determining the initial search position from the motion vector obtained by the previous layer's search, and searching for the best match within a smaller range;
step 4, computing the observation metric from the matching error and the consistency of the motion vector distribution of the neighboring blocks; specifically, first computing for each particle according to the following formula:
where bk is the block size of the current layer, SAD is the sum of absolute frame differences of the best match obtained in the matching of step 3, and eV ∈ [0, 1] is a parameter that takes a smaller value if the current block and its neighboring blocks have consistent motion, and a larger value otherwise;
next, computing an observation metric ok,i for each particle as follows:
step 5, from the observation metrics, computing the observation likelihood density function pk(z|s) of the k-th iteration according to the following formula:
where N(·) denotes a Gaussian distribution function, uk,i denotes the block center position corresponding to the i-th particle at the k-th iteration, Σk,i is a covariance matrix related to the block size, and z corresponds to the observation metric related to the partition;
step 6, updating the importance density function according to the following formula
where ak ∈ (0, 1] is the overwrite coefficient of the k-th iteration;
step 7, sampling according to the importance density function q to generate the particle set for the next iteration. The k-th iteration works on the layer-(L-k) image, in which a bk × bk block corresponds to a 2bk × 2bk block of layer L-k-1. If a particle has a large q value, its corresponding block is divided quadtree-style into four sub-blocks at layer L-k-1 in the next iteration, each sub-block corresponding to one particle of the (k+1)-th cycle's particle set; otherwise the block corresponding to the particle is not divided, and in the next iteration a 2bk × 2bk block is taken and, by the method of step 3, the motion vector estimated at the previous layer serves as the reference vector for a best-match search within a smaller range;
step 8, if L loops have been completed, proceeding to step 9; otherwise returning to step 3 to continue the loop;
step 9, estimating motion vectors for the blocks of I0 located in gray-uniform regions. If the block corresponding to a particle was detected in step 2 as lying in a gray-uniform region, it is ignored in steps 3 to 7; after the above loop finishes, the motion vectors of such blocks are estimated in a two-pass scan. The first pass proceeds from top to bottom and left to right: for a block to be processed, the adjacent blocks above, above-left, above-right and to the left have already had their motion vectors estimated; these motion vectors serve as candidate vectors, and the vector producing the smallest sum of absolute frame differences is selected from the candidate set as the first-pass result. The second pass proceeds from bottom to top and right to left: for a block to be processed, the adjacent blocks below, below-left, below-right and to the right have already had their motion vectors estimated; these vectors, together with the vector estimated in the first pass, serve as candidate vectors, and the vector producing the smallest sum of absolute frame differences is selected from the candidate set as the final result.
Further, in step four, for the areas of the vector field MF_X to which no motion vector has been assigned, starting from the interpolated frame IΔt, searching through I1 and I2 and taking as the motion vector the vector yielding the smallest matching cost between I1 and I2 comprises: limiting the block matching search to a certain range, and for each possible vector (ux, uy) in the search range, calculating the matching error for different i and j:
where i and j are small integers; for all i and j, computing according to the following formula:
Finally, the motion vector of the block is determined as
The beneficial technical effects of the invention are as follows: under the particle-filtering framework, the block size is determined adaptively, with larger blocks in regions of single motion and smaller blocks in regions of complex motion, which favors the correct and reliable construction of the motion vector field for motion-compensated interpolation. For exposed areas of the video image, no motion vector pointing from the frame before the interpolated frame to the frame after it can exist in such areas; the two consecutive frames I1 and I2 that follow the interpolated frame on the time axis are therefore used, and the motion vectors at the corresponding positions of the interpolated frame are determined from the vectors pointing from I1 to I2, effectively avoiding erroneous interpolation in the exposed areas.
Drawings
FIG. 1 is a block diagram of an embodiment of the present invention for generating a motion vector field for use in video frame rate up-conversion;
FIG. 2 is a flow diagram of a multi-resolution variable block size motion vector field estimation;
FIG. 3 is a schematic diagram of a two-step search algorithm for searching for a location;
FIG. 4 is a schematic 4-neighborhood diagram of an image block;
FIG. 5 is a schematic diagram of processing blocks located in a gray uniform region in a two-pass scan;
FIG. 6 is a schematic diagram of different regions assigned different numbers of motion vectors;
FIG. 7 is a diagram illustrating the block matching of two subsequent frames to determine the motion vector of the hollow region in the interpolated frame.
Detailed Description
Specific embodiments of the method provided by the invention are described below with reference to the figures. Let the three consecutive input frames be I0, I1 and I2, and let the frame to be estimated be the interpolated frame IΔt (0 < Δt < 1). Fig. 1 shows a block diagram of an embodiment of the invention, comprising the following steps:
step 101, in a multi-resolution, variable-block-size manner, with I0 as the current frame and I1 as the reference frame, estimating a motion vector field MF_0, where each vector corresponds to a block of I0 and gives, in vector form, the relative offset between that block and its best matching block in the reference image;
step 102, judging whether each motion vector in MF_0 is reliable;
step 103, motion vector projection: according to the time interval between the interpolated frame IΔt and I0, each reliable motion vector of the vector field MF_0 is assigned by projection to pixels of the interpolated frame; the result is a vector field MF_X of the same size as IΔt;
step 104, for the areas of the vector field MF_X to which no motion vector has been assigned, starting from the interpolated frame IΔt, searching through I1 and I2 and taking as the motion vector the vector yielding the smallest matching cost between I1 and I2.
In step 101, the motion vector field MF_0 is estimated with a multi-resolution, variable-block-size matching algorithm. The performance of motion-compensated frame rate up-conversion depends largely on whether the estimated motion vectors reflect the true motion of the individual pixels. For a block matching algorithm, the block size affects the reliability and precision of the motion vector: when a region exhibits consistent motion, a larger block helps reduce the influence of image noise on the matching search; conversely, in regions such as the edges of moving objects, a smaller block helps capture the complex motion. Therefore the block size is determined adaptively from the motion characteristics of the image, with larger blocks in regions of single motion and smaller blocks in regions of complex motion, which favors the correct and reliable construction of the motion vector field for motion-compensated interpolation.
The invention models the process of adaptively dividing an image into blocks of different sizes as a dynamic system described by the state vector s = [s1, s2, ..., sN]^T, where N is the total number of blocks and si = (xi, yi, wi) represents the i-th particle, corresponding to an image block and containing its position and size: (xi, yi) are the coordinates of the sub-block's upper-left corner and wi is the block size, in pixels, in the width and height directions. In the particle-filtering framework, driven iteratively by an importance density function q, larger blocks are used in background regions while regions located at the edges of moving objects or exhibiting complex motion are divided into smaller sub-blocks, realizing multi-resolution, variable-block-size motion vector field estimation; Fig. 2 shows the specific steps.
Step 200, construct an L-layer image pyramid, where layer 0 corresponds to the original image and layer L-1 to the minimum-resolution image; if the size of the layer-k image is H × W, the size of the layer-(k+1) image is H/2 × W/2, and each of its pixels equals the mean of 4 pixels of the layer-k image. A preferred embodiment of the invention takes L = 3 for images with resolution less than 1920 × 1280 and L = 4 for images with resolution greater than or equal to it.
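The pyramid construction of step 200 can be sketched as follows. This is an illustrative rendering only (the function and variable names are this sketch's, not the patent's), using plain Python lists where a real implementation would use numpy or OpenCV:

```python
def build_pyramid(img, levels):
    """Build an image pyramid as in step 200.

    img: 2-D list of pixel values (H x W, both divisible by 2**(levels-1));
    level k+1 halves the resolution of level k, each of its pixels being
    the mean of the corresponding 2x2 block one level below.
    """
    pyr = [img]
    for _ in range(1, levels):
        prev = pyr[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        nxt = [[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                 prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4.0
                for x in range(w)] for y in range(h)]
        pyr.append(nxt)
    return pyr
```

With L = 3 or L = 4 as in the embodiment, the smallest layer is 1/4 or 1/8 of the original in each dimension.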
The layer-(L-1) image of minimum resolution is uniformly divided into b0 × b0 blocks, each corresponding to a particle; an embodiment of the invention takes b0 = 8. The initial importance density function q0 is defined as a uniform distribution, and the particle set is formed by all particles, where N0 equals the total number of blocks of the layer-(L-1) image.
Step 201, taking the layer-0 image as input, detect gray-uniform areas. One embodiment of the invention convolves the image with the gradient operators [-1, 0, +1] and [-1, 0, +1]^T to compute the gradient images Ix and Iy in the X and Y directions respectively, and computes the gradient strength as follows:
Step 203, scan the gradient strength map with fixed-size, non-overlapping windows; if the number of gradient strength values in the window greater than the threshold Te is less than a given number Tn, the corresponding block is judged to lie in a gray-uniform region and is not processed in the subsequent steps 204 to 207. One embodiment of the invention takes a window size of 32 × 32, Te = 6 and Tn = 4.
The idea behind these steps is as follows: if a block lies in a gray-uniform area, its block matching search is easily disturbed by image noise; there are often several positions in the search whose matching error is very close to the minimum, and the position of the minimum matching error may not reflect the real motion. For such a region, taking the motion vectors of neighboring blocks as candidate vectors after the motion vectors of the other regions have been determined makes it easier to obtain motion vectors reflecting the region's true motion, and also reduces the computational cost.
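Steps 201 and 203 can be sketched as below. The exact gradient-strength formula is not reproduced in this text, so the sketch assumes an L1 combination |Ix| + |Iy|; the window size and the thresholds Te, Tn follow the embodiment (a smaller window is used in the toy example):

```python
def gradient_strength(img):
    # Convolve with [-1, 0, +1] and its transpose (replicated borders),
    # then combine; the L1 combination here is this sketch's assumption.
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            g[y][x] = abs(gx) + abs(gy)
    return g

def uniform_windows(img, win, t_e, t_n):
    # Scan non-overlapping win x win windows; a window is gray-uniform
    # when fewer than t_n pixels have gradient strength above t_e.
    g = gradient_strength(img)
    flags = {}
    for wy in range(0, len(img) - win + 1, win):
        for wx in range(0, len(img[0]) - win + 1, win):
            strong = sum(1 for y in range(wy, wy + win)
                           for x in range(wx, wx + win) if g[y][x] > t_e)
            flags[(wy, wx)] = strong < t_n   # True => gray-uniform
    return flags
```

Blocks flagged True are skipped by steps 204-207 and handled later by the two-pass scan of step 213.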
Step 204, judge whether the block corresponding to the particle was split quadtree-style from a block of the previous layer, or whether the image currently being processed is the layer-(L-1) image.
Step 205, for the image block corresponding to a particle judged eligible in step 204, use a fast matching algorithm with minimization of the sum of absolute frame differences as the matching criterion, search for the best match in the reference image, and take the relative offset (dxi, dyi) of the two as the motion vector. Specifically, one embodiment of the invention uses a diamond search algorithm, with the sum of absolute frame differences (SAD), computed as follows, as the matching error:
where (x0, y0) are the coordinates of the upper-left corner of the sub-block corresponding to particle si, and bk is the block size of layer L-k.
Step 206, if the condition of step 204 does not hold, determine the initial search position from the motion vector obtained by the previous layer's search: specifically, let the motion vector of the previous layer be (dx, dy) and the upper-left corner of the block be (x0, y0); take (x0 + 2dx, y0 + 2dy) as the initial search position and search for the best match with a two-step search algorithm. Assuming (1, 1) is the previous layer's motion vector, Fig. 3 shows the search positions of the first and second steps: the first step examines the 9 positions marked by solid circles, and if the point at the right-hand position corresponds to the minimum matching error, the second step examines the 8 points marked by hollow circles.
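A minimal sketch of the refinement of step 206, under stated assumptions: the search starts at (x0 + 2dx, y0 + 2dy), uses step sizes 2 then 1 (the exact pattern of Fig. 3 is only approximated), and `sad` is a helper defined for this sketch:

```python
def sad(cur, ref, x0, y0, bx, by, b):
    # SAD between the b x b block of `cur` at (x0, y0) and the block of
    # `ref` at (bx, by); out-of-range candidates get an infinite cost.
    h, w = len(ref), len(ref[0])
    if bx < 0 or by < 0 or bx + b > w or by + b > h:
        return float('inf')
    return sum(abs(cur[y0+j][x0+i] - ref[by+j][bx+i])
               for j in range(b) for i in range(b))

def two_step_search(cur, ref, x0, y0, b, dx, dy):
    # (dx, dy): previous-layer motion vector, doubled for this layer.
    cx, cy = x0 + 2 * dx, y0 + 2 * dy
    for step in (2, 1):          # two refinement passes
        best = min(((sad(cur, ref, x0, y0, cx + i, cy + j, b), i, j)
                    for i in (-step, 0, step) for j in (-step, 0, step)),
                   key=lambda t: t[0])
        cx, cy = cx + best[1], cy + best[2]
    return cx - x0, cy - y0      # motion vector at this layer
```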
Step 207, determining an observation metric from the matching error and the motion vector distribution consistency of the neighborhood blocks, specifically, first calculating for each particle according to the following formula:
where eV ∈ [0, 1] is a parameter reflecting the degree of motion consistency between the current block and its neighboring blocks: if the current block moves consistently with its neighbors, the parameter is given a smaller value, otherwise a larger value. Referring to Fig. 4, let the current block be Bm,n and its four neighboring blocks be Bm-1,n, Bm+1,n, Bm,n-1 and Bm,n+1; one embodiment of the invention determines the parameter eV as follows:
where udx is the mean of the X components of the motion vectors of the four neighboring blocks Bm-1,n, Bm+1,n, Bm,n-1 and Bm,n+1, and udy is the mean of the Y components; min(a, b) and max(a, b) denote the smaller and larger of a and b, respectively.
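Since the formula for eV is not reproduced in this text, the following sketch illustrates only one plausible reading: the deviation of the current vector from the neighborhood means (udx, udy), clipped into [0, 1]; the `scale` constant is an assumption of this sketch:

```python
def consistency_ev(v, neighbors, scale=8.0):
    # v: (vx, vy) of the current block; neighbors: motion vectors of the
    # 4-neighborhood blocks.  Small result => consistent motion, as the
    # description requires; the exact patented formula is elided.
    udx = sum(n[0] for n in neighbors) / len(neighbors)
    udy = sum(n[1] for n in neighbors) / len(neighbors)
    dev = abs(v[0] - udx) + abs(v[1] - udy)
    return min(1.0, dev / scale)
```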
Next, compute an observation metric ok,i for each particle as follows:
Step 208, judge whether all particles in the current particle set have been processed; if so, execute step 209, otherwise return to step 203 to process the next particle in the set.
Step 209, from the observation metrics, compute the observation likelihood density function pk(z|s) of the k-th iteration as follows:
where N(·) denotes a Gaussian distribution function, uk,i denotes the block center position corresponding to the i-th particle at the k-th iteration, and Σk,i is a covariance matrix related to the block size; one embodiment of the invention takes
Step 210, update the importance density function according to the following formula
where ak ∈ (0, 1] is the overwrite coefficient of the k-th iteration; one embodiment of the invention takes the constant ak = 0.75.
Step 211, sample according to the importance density function q to generate the particle set for the next iteration. The k-th iteration works on the layer-(L-k) image, in which a bk × bk block corresponds to a 2bk × 2bk block of layer L-k-1. If a particle has a large q value, its corresponding block is divided quadtree-style into four sub-blocks at layer L-k-1 in the next iteration, each sub-block corresponding to one particle of the (k+1)-th cycle's particle set; otherwise the block corresponding to the particle is not divided, and in the next iteration a 2bk × 2bk block is taken and its motion vector estimated with the two-step search method described in step 206.
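The resampling of step 211 can be sketched as follows; the particle representation (x, y, w) follows the state-vector description above, while the thresholding of q and the function name are this sketch's own:

```python
def resample(particles, q, threshold):
    # particles: list of (x, y, w) blocks at layer L-k; q[i] is the
    # importance density value of particle i.  At the next (finer) layer
    # every block covers a 2w x 2w area: high-q particles are split
    # quadtree-style into four w x w sub-blocks, the rest remain a single
    # 2w x 2w block (coordinates double between layers).
    out = []
    for (x, y, w), qi in zip(particles, q):
        if qi > threshold:
            for dy in (0, w):
                for dx in (0, w):
                    out.append((2 * x + dx, 2 * y + dy, w))
        else:
            out.append((2 * x, 2 * y, 2 * w))
    return out
```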
Step 212, determining whether the layer 0 image has been processed, if yes, turning to step 213, otherwise, turning to step 202 to continue the next iteration.
Step 213, estimate motion vectors for the blocks of I0 located in gray-uniform regions. If the block corresponding to a particle was detected in step 201 as lying in a gray-uniform area, it was ignored in the subsequent steps; such blocks are marked with white boxes in Fig. 5. The invention estimates the motion vectors of these blocks in a two-pass scan. The first pass proceeds from top to bottom and left to right: let the block to be processed be A; the adjacent blocks above, above-left, above-right and to the left have already had their motion vectors estimated, these motion vectors serve as candidate vectors, and the vector producing the smallest SAD is selected from the candidate set as the first-pass result. The second pass proceeds from bottom to top and right to left: let the block to be processed be B; the motion vectors of the adjacent blocks below, below-left, below-right and to the right have already been estimated, as has the block's own vector from the first pass; these vectors serve as candidate vectors, and the vector producing the smallest SAD is selected from the candidate set as the second-pass result.
After the above steps 200 to 213 are completed, the motion vector field MF_0 is output, in which each vector corresponds to one block of I0.
In step 102, it is judged whether each motion vector of MF_0 is reliable. Since the motion vectors were estimated with the multi-resolution, variable-block-size matching search described above, and the blocks of gray-uniform regions were estimated in two passes using the motion vectors of neighboring blocks as candidates, the unreliable motion vectors of MF_0 come mostly from occluded and exposed regions. Note that a motion vector of MF_0 may correspond to a block of the layer-0 image as large as 64 × 64 or as small as 8 × 8; for convenience of processing, MF_0 is normalized so that each vector corresponds to an 8 × 8 block. Specifically, if a vector corresponds to a 2^r × 2^r block of the layer-0 image, where r is an integer and 4 ≤ r ≤ 6, the block is divided into 2^(r-3) sub-blocks in each of the horizontal and vertical directions, and the vector is assigned to all the sub-blocks. One embodiment of the invention judges the reliability of a motion vector in the following two steps:
Step one, let the motion vector of block Bm,n be (vx, vy) and its four neighboring blocks be Bm-1,n, Bm,n-1, Bm,n+1 and Bm+1,n; compute the maximum absolute difference between the motion vector of Bm,n and those of its four neighbors by the following formula:
Δvmax=max{|vx(m,n)-vx(m+i,n+j)|+|vy(m,n)-vy(m+i,n+j)|} (9)
where i, j = 0, ±1 and |i| ≠ |j|. If Δvmax < ε1, block Bm,n is judged to lie in a motion-consistent region and the judgment of step two below is unnecessary; otherwise go to step two. Here ε1 is a small threshold; one embodiment of the invention takes ε1 = 2.
Step two, let the relative offset between block Bm,n of the current frame I0 and its best matching block in the reference frame I1 be (vx, vy). In the reference frame, take the image block of the same size as Bm,n whose upper-left corner has coordinates (m + vx, n + vy), find that block's best match in frame I0, and write the resulting motion vector as (ux, uy). If the following formula holds, (vx, vy) is a reliable motion vector; otherwise it is unreliable:
|vx + ux| + |vy + uy| ≤ ε2   (10)
where ε2 is an experimentally determined threshold; a preferred embodiment of the invention takes ε2 = 2.
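The two-step reliability test of step 102 can be sketched directly from eqs. (9) and (10), with ε1 = ε2 = 2 as in the embodiment:

```python
def dv_max(vfield, m, n):
    # eq. (9): maximum L1 difference between the vector of block (m, n)
    # and its 4-neighborhood vectors
    vx, vy = vfield[m][n]
    return max(abs(vx - vfield[m+i][n+j][0]) + abs(vy - vfield[m+i][n+j][1])
               for i, j in ((-1, 0), (1, 0), (0, -1), (0, 1)))

def is_reliable(v_fwd, v_bwd, dvmax, eps1=2, eps2=2):
    # v_fwd: (vx, vy) from I0 to I1 for block B_{m,n}; v_bwd: (ux, uy)
    # found by matching the compensated I1 block back into I0 (step two).
    if dvmax < eps1:          # motion-consistent region: accept directly
        return True
    vx, vy = v_fwd
    ux, uy = v_bwd
    return abs(vx + ux) + abs(vy + uy) <= eps2   # eq. (10)
```

For a truly reliable vector, the backward match roughly reverses the forward one, so vx + ux and vy + uy are near zero.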
In step 103, motion vector projection: according to the time interval between the interpolated frame IΔt and I0, each reliable motion vector of the vector field MF_0 is assigned by projection to pixels of the interpolated frame, yielding a vector field MF_X of the same size as IΔt. Specifically, let the vector of MF_0 at position (x0, y0) be M(x0, y0) = (vx, vy), corresponding to the N × N block of frame I0 whose upper-left corner has coordinates (x0·N, y0·N); then all positions of the N × N block of MF_X whose upper-left corner has coordinates (x0·N + round(vx·Δt), y0·N + round(vy·Δt)) are assigned the motion vector (vx·Δt, vy·Δt), where round(·) denotes the rounding operation.
Fig. 6 shows a schematic diagram of MF_X, in which different positions receive different numbers of motion vectors through the above projection: each position of area A is assigned a unique motion vector; area B forms a hole area of the interpolated frame, whose elements are assigned no motion vector; and area C forms an overlap area of the interpolated frame, in which each position corresponds to two or more motion vectors.
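The projection of step 103 can be sketched as follows; the sketch also accumulates the per-pixel count matrix C that the embodiment later uses to detect holes and overlaps (the dict-based MF_0 layout is an assumption of this sketch):

```python
def project(mf0, N, dt, H, W):
    # mf0: {(x0, y0): (vx, vy)} reliable vectors, one per N x N block of
    # I0 in block-grid coordinates; returns MF_X as per-pixel candidate
    # lists plus the hit-count matrix C.
    mfx = [[[] for _ in range(W)] for _ in range(H)]
    C = [[0] * W for _ in range(H)]
    for (x0, y0), (vx, vy) in mf0.items():
        px = x0 * N + round(vx * dt)      # projected upper-left corner
        py = y0 * N + round(vy * dt)
        for y in range(py, py + N):
            for x in range(px, px + N):
                if 0 <= x < W and 0 <= y < H:
                    mfx[y][x].append((vx * dt, vy * dt))
                    C[y][x] += 1          # 0 => hole, >=2 => overlap
    return mfx, C
```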
If a pixel (x, y) of the interpolated frame is assigned a unique motion vector (dx, dy), its value is copied directly from frame I0 when generating the interpolated frame; specifically,
IΔt(x, y) = I0(x - dx, y - dy)    (11)
If n motion vectors are assigned to the pixel (x, y) of the interpolated frame, its value is calculated with a weighted sum of the following form when generating the interpolated frame:
where wk is the weight corresponding to the motion vector Mk. Specifically, the matching error is first calculated as follows:
Next, the weight is calculated according to the following formula:
where E is a small number; an embodiment of the invention takes E = 0.5, and e is calculated as follows:
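For the overlap regions, the blending can be sketched as below. Formulas (12)-(15) are rendered as images in this text, so the inverse-error weighting used here is an assumption; it keeps only the stated constant E = 0.5 and the requirement that weights fall with the matching error:

```python
import numpy as np

def blend_pixel(i0, x, y, vectors, errors, E=0.5):
    """Overlap handling sketch: when n vectors reach pixel (x, y) of the
    interpolated frame, blend the n candidate source pixels from I0.
    ASSUMPTION: weights are taken inversely proportional to the matching
    error plus the small constant E, then normalised to sum to 1; the
    patent's exact formulas (12)-(15) are not reproduced in the source."""
    w = 1.0 / (np.asarray(errors, dtype=np.float64) + E)
    w /= w.sum()                      # normalise so the weights sum to 1
    val = 0.0
    for wk, (dx, dy) in zip(w, vectors):
        # each candidate copies from I0 against its own vector, as in Eq. (11)
        val += wk * i0[y - int(round(dy)), x - int(round(dx))]
    return val
```

Equal errors reduce the blend to a plain average of the candidate source pixels.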
Due to unreliable motion vectors in MF_0 and to differences in direction and magnitude between adjacent motion vectors, some regions of the interpolated frame will not be assigned motion vectors. To detect these regions, one embodiment of the invention uses, during motion vector projection, a matrix C of the same size as the interpolated frame IΔt to record the number of motion vectors assigned to each pixel, i.e.
C(x,y)=a (16)
where a = 0, 1, 2, ... is the number of motion vectors passing through the pixel. After the projection process is complete, C is scanned with a 3 × 3 window; if the detection expression holds when the window center is located at (x0, y0), step 104 is executed; otherwise scanning continues at the next position.
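The hole scan over the count matrix C can be sketched as follows. Because the trigger expression itself is not reproduced in the text, "any pixel in the window with C = 0" is an assumed condition consistent with the surrounding description:

```python
import numpy as np

def find_holes(count):
    """Scan the hit-count matrix C with a 3x3 window and report window
    centres whose window covers at least one pixel with no assigned vector
    (C == 0), i.e. the hole regions handed to step 104. ASSUMPTION: the
    exact trigger expression is not reproduced in the source; 'any zero
    inside the window' is used here."""
    H, W = count.shape
    holes = []
    for y0 in range(1, H - 1):
        for x0 in range(1, W - 1):
            if (count[y0 - 1:y0 + 2, x0 - 1:x0 + 2] == 0).any():
                holes.append((x0, y0))
    return holes
```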
In step 104, referring to FIG. 7, a block is centered at the scanning-window center (x0, y0); one embodiment of the invention takes a block size of N = 8 and a search range of -17 to +17. For each possible vector (ux, uy) in the search range, the matching error is calculated at different i and j:
where i and j are integers with -2 ≤ i, j ≤ 2. For all i and j, the following formula is applied:
Finally, the motion vector of the block containing the scanning window is determined as follows:
and, based on this vector, the value of the corresponding location of the interpolated frame is determined from the pixel values of frame I1.
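Step 104 can be sketched as a brute-force candidate search. Since the matching-error formula is rendered as an image here, the sketch assumes a single-offset SAD between the block of I1 a candidate points to and the block of I2 it extrapolates to (consistent with claim 3), rather than the sum over the offsets i, j:

```python
import numpy as np

def fill_hole_vector(i1, i2, x0, y0, N=8, srange=17):
    """Step 104 sketch: around the hole centre (x0, y0), try every
    candidate vector in [-srange, +srange] and keep the one whose N x N
    block in I1 best matches the extrapolated block in I2 (SAD).
    ASSUMPTION: single-offset SAD; the source's exact cost, summed over
    offsets i, j in [-2, 2], is not reproduced in the text."""
    H, W = i1.shape
    best, best_cost = (0, 0), np.inf
    half = N // 2
    for uy in range(-srange, srange + 1):
        for ux in range(-srange, srange + 1):
            ax, ay = x0 + ux - half, y0 + uy - half          # block in I1
            bx, by = x0 + 2 * ux - half, y0 + 2 * uy - half  # extrapolated block in I2
            if min(ax, ay, bx, by) < 0 or max(ax, bx) + N > W or max(ay, by) + N > H:
                continue  # candidate falls outside both-frame bounds
            cost = np.abs(i1[ay:ay + N, ax:ax + N].astype(np.int64)
                          - i2[by:by + N, bx:bx + N]).sum()
            if cost < best_cost:
                best, best_cost = (ux, uy), cost
    return best
```

On a pair of frames where I2 is a pure translation of I1, the search recovers that translation, which is exactly the behaviour the exposure-region handling relies on.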
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any modification or replacement within the spirit and principle of the present invention should be covered within the scope of the present invention.

Claims (5)

1. A method for generating a motion vector field for use in video frame rate up-conversion, comprising: estimating the motion vector field by multi-resolution, variable-size block matching; modeling the process of adaptively dividing the image into blocks of different sizes as a dynamic system described by the state vector s = [s1, s2, ..., sN]T, where N is the total number of blocks and si = (xi, yi, wi) represents the ith particle corresponding to an image block, (xi, yi) being the upper-left corner coordinates of the block and wi the block size, in pixels, in both the width and height directions; and, under the framework of particle filtering, driving the block sizes by an importance density function and realizing the motion vector field estimation iteratively, comprising the steps of:
(1) constructing an L-level image pyramid, wherein level 0 corresponds to the original image and level L-1 to the lowest-resolution image in the pyramid; if the size of the level-k image is H × W, the size of the level-(k+1) image is (H/2) × (W/2);
(2) taking the level-0 image as input, calculating the gradient intensity of the image and examining the gradient-intensity image with a scanning window; if the number of pixels in the window whose gradient intensity exceeds a pre-specified threshold Te is less than a pre-specified threshold Tn, judging that the window lies in a uniform-gray region and leaving that region unprocessed in subsequent steps (3)-(7);
(3) for each particle, judging whether the block corresponding to the particle was split from a block of the previous layer in quadtree form, or whether the layer-(L-1) image is currently being processed; if so, searching for the best match in the reference image with a fast block matching algorithm, taking minimization of the sum of absolute frame differences as the matching criterion, and taking the relative offset (dx, dy) between the two blocks as the motion vector; otherwise, determining an initial search position from the motion vector obtained by the previous-layer search, and searching for the best match within -3 to +3 of that position in both directions;
(4) determining an observation metric from the matching error and the motion vector consistency of the neighboring blocks, first calculating for each particle as follows:
where bk is the block size of the current layer, SAD is the sum of absolute frame differences corresponding to the best match obtained in the matching of step (3), and eV ∈ [0, 1] is a parameter reflecting the degree of motion consistency between the current block Bm,n and its neighboring blocks, calculated according to the following formula:
where udx is the mean of the X components of the motion vectors of the four-neighborhood blocks Bm-1,n, Bm+1,n, Bm,n-1 and Bm,n+1, udy is the mean of their Y components, and min(a, b) and max(a, b) denote the smaller and larger of a and b, respectively; next, calculating an observation metric ok,i for each particle as follows:
(5) from the observation metrics, calculating the observation likelihood density function pk(z|s) of the kth iteration as:
where N(·) denotes a Gaussian distribution function, uk,i denotes the center position of the block corresponding to the ith particle at the kth iteration, and Σk,i is a covariance matrix related to the block size, calculated as follows:
(6) updating the importance density function as follows:
where ak ∈ (0, 1] is the update coefficient of the kth iteration;
(7) sampling according to the importance density function q to generate the particle set for the next iterative calculation, and re-executing steps (3)-(7) until the Lth loop has been executed.
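Step (1) of the claim, pyramid construction, can be sketched as follows; the claim fixes only the per-level sizes, so the 2 × 2 block-mean downsampling used here is an assumption:

```python
import numpy as np

def build_pyramid(img, L):
    """L-level pyramid: level 0 is the original image; each level k+1 is
    (H/2) x (W/2) of level k. ASSUMPTION: a 2x2 block mean is used for
    downsampling; the claim fixes the sizes but not the filter."""
    levels = [np.asarray(img, dtype=np.float64)]
    for _ in range(1, L):
        p = levels[-1]
        # crop to even dimensions so the 2x2 blocks tile exactly
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
        p = p[:h, :w]
        levels.append((p[0::2, 0::2] + p[1::2, 0::2]
                       + p[0::2, 1::2] + p[1::2, 1::2]) / 4.0)
    return levels
```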
2. The method of claim 1, wherein the motion vector field is calculated with the first frame I0 of two consecutively input frames as the current frame and the second frame I1 as the reference frame, I0 being located before the interpolated frame and I1 after it.
3. The method of claim 1, further comprising:
(8) performing motion vector projection according to the interval Δt between the interpolated frame IΔt and I0: let the motion vector field estimated from the layer-0 image be MF_0, of size (Height/b) × (Width/b), where Width, Height and b denote the width and height of the layer-0 image and the block size; each vector of MF_0 corresponds to an image block of frame I0, the block corresponding to the vector at (x, y) having its upper-left corner at (xb, yb); if MF_0(x, y) = (dx(x, y), dy(x, y)), then after motion vector projection all positions of the b × b block of the interpolated frame IΔt whose upper-left corner is at (xb + round(Δt·dx(x, y)), yb + round(Δt·dy(x, y))) are assigned the motion vector (Δt·dx(x, y), Δt·dy(x, y)), where round(·) denotes the rounding operation;
(9) scanning the projected motion vector field; if the scanning window contains pixels to which no motion vector is assigned, finding the vector that starts from the interpolated frame IΔt, passes through I1 and the frame I2 following it, and yields the smallest matching cost between I1 and I2, and taking it as the motion vector.
4. The method of claim 1, wherein in step (7) the kth iteration operates on the layer-(L-k) image, in which a bk × bk block corresponds to a 2bk × 2bk block of layer (L-k-1); if a particle has a larger q value, its corresponding block is divided in quadtree form into four sub-blocks at layer (L-k-1) in the next iteration, each sub-block of size bk × bk and each corresponding to one particle of the (k+1)th-cycle particle set; otherwise the block corresponding to the particle is not divided, and its corresponding 2bk × 2bk block corresponds to only one particle of the (k+1)th-cycle particle set.
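The quadtree refinement described in this claim can be sketched as follows; the explicit threshold on q is an assumption, since the claim says only "a larger q value":

```python
def refine_particles(particles, q_values, q_thresh):
    """particles: list of (x, y, w) blocks at the current pyramid level.
    At the next finer level every coordinate doubles; a block whose
    importance value q exceeds the threshold (ASSUMPTION: explicit
    threshold) is split into four w x w children in quadtree form,
    otherwise it is kept as a single 2w x 2w block."""
    next_set = []
    for (x, y, w), q in zip(particles, q_values):
        if q > q_thresh:
            # quadtree split: four children of the original size
            for oy in (0, w):
                for ox in (0, w):
                    next_set.append((2 * x + ox, 2 * y + oy, w))
        else:
            # no split: one particle covering the doubled block
            next_set.append((2 * x, 2 * y, 2 * w))
    return next_set
```

A block kept whole thus stays one particle at the next cycle, while a split block contributes four, matching the claim's accounting.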
5. The method of claim 3, wherein the estimation of step (9) proceeds as follows: the block matching search is limited to a certain range, and for each possible vector (ux, uy) in the search range the matching error is calculated at different i and j:
where (x0, y0) is the center position of the scanning window in step (9), and i and j are integers with -2 ≤ i, j ≤ 2; for all i and j, calculation is performed according to the following formula:
finally, the motion vector of the block is determined as:
CN201410489709.2A 2014-09-23 2014-09-23 Motion vector field generation method applied to video frame rate up-conversion Expired - Fee Related CN104202603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410489709.2A CN104202603B (en) 2014-09-23 2014-09-23 Motion vector field generation method applied to video frame rate up-conversion


Publications (2)

Publication Number Publication Date
CN104202603A CN104202603A (en) 2014-12-10
CN104202603B true CN104202603B (en) 2017-05-24

Family

ID=52087821


Country Status (1)

Country Link
CN (1) CN104202603B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796721B (en) * 2015-04-24 2018-03-16 宏祐图像科技(上海)有限公司 The method for carrying out estimation to image light scene change using MEMC technologies
CN105376584B (en) * 2015-11-20 2018-02-16 信阳师范学院 Turn evidence collecting method in video motion compensation frame per second based on noise level estimation
CN106131567B (en) * 2016-07-04 2019-01-08 西安电子科技大学 Ultraviolet aurora up-conversion method of video frame rate based on Lattice Boltzmann
CN108810317B (en) * 2017-05-05 2021-03-09 展讯通信(上海)有限公司 True motion estimation method and device, computer readable storage medium and terminal
TWI750486B (en) 2018-06-29 2021-12-21 大陸商北京字節跳動網絡技術有限公司 Restriction of motion information sharing
CN111174782B (en) * 2019-12-31 2021-09-17 智车优行科技(上海)有限公司 Pose estimation method and device, electronic equipment and computer readable storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102523419A (en) * 2011-12-31 2012-06-27 上海大学 Digital video signal conversion method based on motion compensation
CN102685438A (en) * 2012-05-08 2012-09-19 清华大学 Up-conversion method of video frame rate based on time-domain evolution
CN103220488A (en) * 2013-04-18 2013-07-24 北京大学 Up-conversion device and method of video frame rate
CN103260024A (en) * 2011-12-22 2013-08-21 英特尔公司 Complexity scalable frame rate up-conversion
CN103702128A (en) * 2013-12-24 2014-04-02 浙江工商大学 Interpolation frame generating method applied to up-conversion of video frame rate

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP4645736B2 (en) * 2008-12-22 2011-03-09 ソニー株式会社 Image processing apparatus, image processing method, and program


Non-Patent Citations (4)

Title
Bo-Won Jeon et al., "Coarse-to-fine frame interpolation for frame rate up-conversion using pyramid structure," IEEE Transactions on Consumer Electronics, vol. 49, no. 3, 2003, pp. 499-508. *
Byeong-Doo Choi et al., "Motion-Compensated Frame Interpolation Using Bilateral Motion Estimation and Adaptive Overlapped Block Motion Compensation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 4, 2007, pp. 407-416. *
R. Castagno et al., "A method for motion adaptive frame rate up-conversion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 5, 1996, pp. 436-446. *
Yen-Kuang Chen et al., "Frame-rate up-conversion using transmitted true motion vectors," Proceedings of the 1998 IEEE Second Workshop on Multimedia Signal Processing, 1998. *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210114

Address after: No.477 Yongle Road, Wangdian Town, Xiuzhou District, Jiaxing City, Zhejiang Province

Patentee after: Zhejiang Otto Electric Co.,Ltd.

Address before: No.18 Xuezheng street, Xiasha Economic Development Zone, Hangzhou City, Zhejiang Province, 310018

Patentee before: ZHEJIANG GONGSHANG University

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170524