CN105141963A - Image motion estimation method and device - Google Patents

Publication number: CN105141963A (application CN201410228500.0A); granted as CN105141963B
Authority: CN (China)
Prior art keywords: image block, pixel, value, current reference, coordinate
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: Ye Jian (叶健), Huang Peng (黄鹏), Cheng Ximin (成喜民), Liu Yi (刘屹)
Current and original assignee: Shanghai Beizhuo Intelligent Technology Co Ltd
Application filed by Shanghai Beizhuo Intelligent Technology Co Ltd
Classification (Landscapes): Image Analysis (AREA)

Abstract

The invention provides an image motion estimation method and device. For each pixel in a current input image, motion estimation is performed between a first image block that contains the currently processed pixel and satisfies a first preset condition, and each current reference image block within a second image block in the reference image that satisfies a second preset condition. A position is randomly selected in the first image block, the identical position is taken in the current reference image block, and the same preset local texture feature extraction method is applied at both positions to extract local features for local texture analysis. Because the comparison positions are random rather than fixed, mis-judgements caused by dissimilar image blocks that happen to share statistical or texture features at a fixed position are avoided, which improves the accuracy of the pixel similarity and motion vector decisions for the image.

Description

Image motion estimation method and device
Technical field
The present invention relates to the field of digital image and video processing, and in particular to an image motion estimation method and device.
Background art
In the many multimedia processing systems that involve a display device, the quality of image motion estimation is decisive for a series of image processing algorithms and techniques such as noise reduction, de-interlacing, and super-resolution reconstruction.
The prevailing approach is block matching: motion compensation is used to find the most similar blocks between temporally adjacent images in a video, the blocks are then compared with one another, and the pixel similarity and the corresponding motion vectors are derived from that comparison. When blocks are compared, local texture and detail analysis is usually performed at fixed positions within each block.
However, analysing local texture at fixed positions within a block easily produces mis-judgements for the many block pairs that are not similar overall but happen to be similar at those fixed positions, degrading the accuracy of the resulting pixel similarity and motion vector decisions.
Summary of the invention
The problem solved by the embodiments of the present invention is how to improve the accuracy of the pixel similarity and motion vector decisions for an image.
To solve this problem, an embodiment of the present invention provides an image motion estimation method, comprising:
obtaining a current input image and a reference image of the current input image, the reference image and the current input image being two-dimensional matrices of the same size;
for the first image block in the current input image that contains the currently processed pixel and satisfies a first preset condition, and for each current reference image block within the second image block in the reference image that satisfies a second preset condition: based on a randomly selected position in the first image block and the identical position in the current reference image block, applying the same preset local texture feature extraction method to extract local texture features from each, performing local texture analysis, and comparing the analysis results, to obtain the motion similarity between the currently processed pixel and its most similar pixel, and the motion vector between the currently processed pixel and that most similar pixel; and repeating until, for every pixel in the current input image, the motion similarity of its most similar pixel and the motion vector between the two have been obtained; wherein the second image block contains a third image block located at the same position as the first image block, and each current reference image block is an image block satisfying the first preset condition;
outputting, in the same arrangement as the pixels of the current input image, a similarity matrix formed from the similarity corresponding to each pixel of the current input image, and a motion vector matrix formed from the motion vector corresponding to each pixel.
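The two output matrices described above can be pictured as arrays with the same shape as the input image. The sketch below is a minimal illustration, assuming each pixel's search yields a (similarity, dy, dx) tuple; the function name and layout are hypothetical, not from the patent:

```python
import numpy as np

def assemble_outputs(h, w, per_pixel):
    """Arrange per-pixel results in the same layout as the input image:
    a similarity matrix, and a motion vector matrix holding (vertical,
    horizontal) components.  `per_pixel[p][q]` is assumed to be a tuple
    (similarity, dy, dx) produced by the search for pixel (p, q)."""
    sim = np.zeros((h, w))
    mv = np.zeros((h, w, 2))  # [..., 0] = vertical, [..., 1] = horizontal
    for p in range(h):
        for q in range(w):
            s, dy, dx = per_pixel[p][q]
            sim[p, q] = s
            mv[p, q] = (dy, dx)
    return sim, mv
```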
Optionally, the first preset condition comprises: the first image block is rectangular, of size A×B; the second preset condition comprises: the second image block is rectangular, of size M×N.
Optionally, processing the first image block against each current reference image block in the second image block as described above, to obtain the motion similarity of the currently processed pixel and the motion vector between it and its most similar pixel, comprises:
generating random numbers i and j, where 1 ≤ i ≤ A and 1 ≤ j ≤ B;
extracting the first image block of size A×B centred on the coordinates (p, q) of the currently processed pixel;
extracting, in a preset order, each current reference image block of size A×B from the second image block, and recording the coordinates (w, k) of the centre point of the current reference image block;
using the coordinates corresponding to the random numbers i and j as the randomly selected position in the first image block; based on that position in the first image block and the identical position in the current reference image block, applying the same preset local texture feature extraction method to each to obtain a first set of feature values and a second set of feature values; comparing corresponding feature values of the two sets and combining the comparison results in a first operation according to a preset rule, to obtain the current reference motion estimation value of the current reference image block centred on (w, k) relative to the first image block; and traversing every reference image block in the second image block, performing motion estimation against the first image block in the same way, until the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block have been obtained; wherein the local texture feature extraction method depends on i, on j, or on the combination of i and j, the first set of feature values characterises the local texture of the first image block, and the second set characterises the local texture of the current reference image block;
taking the minimum of the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block as the motion similarity of the currently processed pixel (p, q) of the current input image; recording the centre coordinates (w0, k0) of the current reference image block corresponding to that minimum; and obtaining the vertical and horizontal motion vectors corresponding to the currently processed pixel (p, q).
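The per-pixel search over the second image block can be sketched as follows. This is an illustration only: plain sum-of-absolute-differences stands in for the patent's randomised feature comparison, and all names, default sizes, and the interior-pixel assumption are ours:

```python
import numpy as np

def motion_estimate_pixel(cur, ref, p, q, A=5, B=5, M=15, N=15):
    """For pixel (p, q): compare the A x B first image block in `cur`
    against every A x B candidate whose centre lies inside the M x N
    search window of `ref`, and return (similarity, vertical MV,
    horizontal MV).  SAD stands in for the randomised texture
    comparison; (p, q) is assumed far enough from the borders."""
    ha, hb = A // 2, B // 2
    block = cur[p - ha:p + ha + 1, q - hb:q + hb + 1]
    best, best_wk = None, (p, q)
    # Candidate centres (w, k) range over the search window interior.
    for w in range(p - (M - A) // 2, p + (M - A) // 2 + 1):
        for k in range(q - (N - B) // 2, q + (N - B) // 2 + 1):
            cand = ref[w - ha:w + ha + 1, k - hb:k + hb + 1]
            score = float(np.abs(block - cand).mean())
            if best is None or score < best:
                best, best_wk = score, (w, k)
    w0, k0 = best_wk
    # Vertical MV = p - w0, horizontal MV = q - k0, as in the claims.
    return best, p - w0, q - k0
```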
Optionally, generating the random numbers i and j comprises: taking as input the value of p, the value of q, or both values from the coordinates (p, q) of the currently processed pixel, or at least one of the frame number of the current input image and the size of the image, and generating the random numbers i and j from that input.
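One way to realise this option is to seed a generator from (p, q) and the frame number; the seeding scheme below is an assumption, not the patent's:

```python
import numpy as np

def pick_random_position(p, q, frame_no, A, B):
    """Derive the random in-block position (i, j), 1 <= i <= A and
    1 <= j <= B, from the current pixel's coordinates and the frame
    number, as the patent allows.  Seeding from (p, q, frame_no) makes
    the choice reproducible for a given pixel while still varying it
    across pixels and frames."""
    rng = np.random.default_rng([p, q, frame_no])
    i = int(rng.integers(1, A + 1))
    j = int(rng.integers(1, B + 1))
    return i, j
```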
Optionally, before the first image block in the current input image and the reference image blocks in the reference image are processed with the same preset local texture feature extraction method for local texture analysis, the method further comprises: applying two-dimensional low-pass filtering to the first image block and to all current reference image blocks.
Optionally, obtaining the first and second sets of feature values from the randomly selected position in the first image block and the identical position in the current reference image block, comparing corresponding feature values, and combining the comparison results in the first operation, to obtain the current reference motion estimation value of the current reference image block centred on (w, k) in the second image block, comprises:
taking the absolute difference between the pixel value at the centre of the low-pass-filtered first image block and the pixel value at the centre of the low-pass-filtered current reference image block, to obtain a block-level difference value;
taking the absolute difference between each pixel value of the low-pass-filtered first image block and the pixel value at the same coordinates in the low-pass-filtered current reference image block, then normalising the sum of the absolute differences over all coordinates, to obtain an overall pixel difference value;
taking, in turn, the absolute difference between each pixel value in row i and column j of the low-pass-filtered first image block and the pixel value at the same coordinates in row i and column j of the low-pass-filtered current reference image block; dividing the sum of the absolute differences along row i by the width B of the first image block to obtain a horizontal pixel difference value; and dividing the sum of the absolute differences along column j by the height A of the first image block to obtain a vertical pixel difference value;
combining the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value in a linear or nonlinear operation according to a preset rule, to obtain the current reference estimation value, for the currently processed pixel (p, q) of the current input image, of the current reference image block centred on (w, k).
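The four difference values and one admissible combination can be sketched as follows; the equal weights in the final superposition are illustrative, and indices are 0-based here rather than the 1-based i, j of the claims:

```python
import numpy as np

def reference_estimate(block, cand, i, j):
    """The four difference values of the embodiment, computed on
    pre-filtered A x B blocks `block` (from the input image) and `cand`
    (a candidate from the search window), with random row i and random
    column j (0-based).  The weights in the combination are a guess."""
    A, B = block.shape
    b = block.astype(float)
    c = cand.astype(float)
    # Block-level difference: |centre pixel - centre pixel|.
    d_block = abs(b[A // 2, B // 2] - c[A // 2, B // 2])
    # Overall pixel difference: absolute differences normalised by area.
    d_all = np.abs(b - c).sum() / (A * B)
    # Horizontal difference: row-i absolute differences over width B.
    d_row = np.abs(b[i, :] - c[i, :]).sum() / B
    # Vertical difference: column-j absolute differences over height A.
    d_col = np.abs(b[:, j] - c[:, j]).sum() / A
    # One admissible "first operation": an equally weighted superposition.
    return 1.0 * d_block + 1.0 * d_all + 1.0 * d_row + 1.0 * d_col
```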
Optionally, the linear or nonlinear operation comprises: forming a weighted superposition of the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value.
Optionally, obtaining the vertical and horizontal motion vectors corresponding to the currently processed pixel (p, q) comprises: when the minimum is unique, the vertical motion vector corresponding to (p, q) is (p − w0), and the horizontal motion vector is (q − k0).
Optionally, obtaining the vertical and horizontal motion vectors corresponding to (p, q) comprises: when more than one candidate attains the minimum, performing a second operation on the centre coordinates (w0, k0) of the current reference image blocks corresponding to all the minima to obtain a final coordinate (w_avg, k_avg); the vertical motion vector corresponding to (p, q) is then (p − w_avg), and the horizontal motion vector is (q − k_avg).
Optionally, the second operation on the centre coordinates (w0, k0) corresponding to all the minima comprises: computing the Euclidean distance between each such coordinate (w0, k0) and (p, q), and taking the coordinate with the smallest Euclidean distance as the final coordinate (w_avg, k_avg).
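The Euclidean tie-break can be sketched in a few lines; `break_tie` is a hypothetical helper name:

```python
import math

def break_tie(p, q, candidates):
    """When several candidate centres (w0, k0) share the minimum
    estimation value, keep the one closest to (p, q) in Euclidean
    distance, i.e. the one implying the smallest motion."""
    return min(candidates, key=lambda wk: math.hypot(wk[0] - p, wk[1] - q))
```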
Optionally, the first preset condition comprises: the first image block is circular with radius R1; the second preset condition comprises: the second image block is circular with radius R2.
To solve the same problem, an embodiment of the present invention further provides an image motion estimation device, the device comprising:
an acquiring unit, configured to obtain a current input image and a reference image of the current input image, the reference image and the current input image being two-dimensional matrices of the same size;
a motion estimation unit, configured to, for the first image block in the current input image that contains the currently processed pixel and satisfies a first preset condition, and for each current reference image block within the second image block in the reference image that satisfies a second preset condition: based on a randomly selected position in the first image block and the identical position in the current reference image block, apply the same preset local texture feature extraction method to extract local texture features, perform local texture analysis, and compare the analysis results, to obtain the motion similarity between the currently processed pixel and its most similar pixel, and the motion vector between them, until the motion similarity and the motion vector have been obtained for every pixel in the current input image; wherein the second image block contains a third image block located at the same position as the first image block, and each current reference image block is an image block satisfying the first preset condition;
an output unit, configured to output, in the same arrangement as the pixels of the current input image, a similarity matrix formed from the similarity corresponding to each pixel of the current input image, and a motion vector matrix formed from the motion vector corresponding to each pixel.
Optionally, the first preset condition comprises: the first image block is rectangular, of size A×B; the second preset condition comprises: the second image block is rectangular, of size M×N.
Optionally, the motion estimation unit comprises:
a random number generating subunit, configured to generate random numbers i and j, where 1 ≤ i ≤ A and 1 ≤ j ≤ B;
a first extraction subunit, configured to extract the first image block of size A×B centred on the coordinates (p, q) of the currently processed pixel;
a second extraction subunit, configured to extract, in a preset order, each current reference image block of size A×B from the second image block, and to record the coordinates (w, k) of its centre point;
a motion estimation subunit, configured to use the coordinates corresponding to the random numbers i and j as the randomly selected position in the first image block; based on that position in the first image block and the identical position in the current reference image block, apply the same preset local texture feature extraction method to each to obtain a first set of feature values and a second set of feature values; compare corresponding feature values of the two sets and combine the comparison results in a first operation according to a preset rule, to obtain the current reference motion estimation value of the current reference image block centred on (w, k) relative to the first image block; traverse every reference image block in the second image block, performing motion estimation against the first image block in the same way, until the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block have been obtained; take the minimum of those values as the motion similarity of the currently processed pixel (p, q) of the current input image, record the centre coordinates (w0, k0) of the current reference image block corresponding to that minimum, and obtain the vertical and horizontal motion vectors corresponding to (p, q); wherein the local texture feature extraction method depends on i, on j, or on the combination of i and j, the first set of feature values characterises the local texture of the first image block, and the second set characterises the local texture of the current reference image block.
Optionally, the random number generating subunit is configured to take as input the value of p, the value of q, or both values from the coordinates (p, q) of the currently processed pixel, or at least one of the frame number of the current input image and the size of the image, and to generate the random numbers i and j from that input.
Optionally, the motion estimation unit further comprises a low-pass filtering subunit, configured to apply two-dimensional low-pass filtering to the first image block and to all current reference image blocks, to obtain the corresponding low-pass first image block and low-pass current reference image blocks, and to output them to the motion estimation subunit, so that all current reference motion estimation values of the current reference image blocks centred on (w, k) in the second image block can be obtained.
Optionally, the motion estimation subunit comprises:
a first computing module, configured to take the absolute difference between the pixel value at the centre of the low-pass first image block and the pixel value at the centre of the low-pass current reference image block, obtaining a block-level difference value;
a second computing module, configured to take the absolute difference between each pixel value of the low-pass first image block and the pixel value at the same coordinates in the low-pass current reference image block, and to normalise the sum of the absolute differences over all coordinates, obtaining an overall pixel difference value;
a third computing module, configured to take, in turn, the absolute difference between each pixel value in row i of the low-pass first image block and the pixel value at the same coordinates in row i of the low-pass current reference image block, and to divide the sum of the absolute differences along row i by the width B of the first image block, obtaining a horizontal pixel difference value;
a fourth computing module, configured to take, in turn, the absolute difference between each pixel value in column j of the low-pass first image block and the pixel value at the same coordinates in column j of the low-pass current reference image block, and to divide the sum of the absolute differences along column j by the height A of the first image block, obtaining a vertical pixel difference value;
a fifth computing module, configured to combine the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value in a linear or nonlinear operation according to a preset rule, obtaining the current reference estimation value, for the currently processed pixel (p, q) of the current input image, of the current reference image block centred on (w, k).
Optionally, the fifth computing module is configured to form a weighted superposition of the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value, obtaining the current reference estimation value, for the currently processed pixel (p, q), of the current reference image block centred on (w, k).
Optionally, the motion estimation subunit comprises:
a motion vector computing module, configured to, when the minimum is unique, compute the vertical motion vector (p − w0) and the horizontal motion vector (q − k0) corresponding to the currently processed pixel (p, q).
Optionally, the motion vector computing module is further configured to, when more than one candidate attains the minimum, perform a second operation on the centre coordinates (w0, k0) of the current reference image blocks corresponding to all the minima, obtain a final coordinate (w_avg, k_avg), and then compute the vertical motion vector (p − w_avg) and the horizontal motion vector (q − k_avg) corresponding to (p, q).
Compared with the prior art, the technical scheme of the embodiments of the present invention has the following advantages:
For each pixel in the current input image, motion estimation is performed between the first image block that contains the currently processed pixel and satisfies the first preset condition, and each current reference image block within the second image block of the reference image that satisfies the second preset condition. Because the same preset local texture feature extraction method is applied at a randomly selected position in the first image block and at the identical position in the current reference image block, that is, because the position at which local texture features are extracted is random rather than fixed, mis-judgements caused by image blocks that are not similar overall but share statistical or texture features at a fixed position are avoided, so the accuracy of the pixel similarity and motion vector decisions is improved.
Further, because the extraction method used for each value in the first and second sets of feature values depends on the abscissa i of the randomly selected position, on the ordinate j, or on both i and j, the extracted local texture features are still more strongly randomised. This avoids even more of the mis-judgements caused by fixed-position statistical or texture similarity between dissimilar blocks, and further improves the accuracy of the pixel similarity and motion vector decisions.
In addition, applying two-dimensional low-pass filtering to the first image block and to all current reference image blocks, and only then comparing the resulting low-pass first image block with the low-pass current reference image blocks, reduces image noise and further improves the accuracy of the pixel similarity and motion vector decisions.
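A concrete low-pass filter is not specified by the patent; a 3×3 box blur with replicated borders is one plausible stand-in:

```python
import numpy as np

def lowpass2d(img):
    """Simple 3 x 3 box blur as a stand-in for the patent's unspecified
    two-dimensional low-pass filter, applied to a block before matching.
    Borders are handled by replicating the edge pixels."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    # Sum the nine shifted copies of the padded image, then average.
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0
```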
Brief description of the drawings
Fig. 1 is a flow chart of an image motion estimation method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the relationship between the first image block in the current input image and the second image block in the reference image in an embodiment of the present invention;
Fig. 3 is a flow chart of another image motion estimation method in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the randomly selected position within the first image block in an embodiment of the present invention;
Fig. 5 is a flow chart of a method of obtaining one current reference motion estimation value in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an image motion estimation device in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a motion estimation unit in an embodiment of the present invention.
Detailed description of the embodiments
In existing block-matching methods of image motion estimation, local texture and detail analysis is performed at fixed positions within a block, which easily produces mis-judgements for the many block pairs that are not similar overall but are similar at those fixed positions, degrading the accuracy of the resulting pixel similarity and motion vector decisions. In the embodiments of the present invention, for each pixel in the current input image, motion estimation is performed between the first image block that contains the currently processed pixel and satisfies the first preset condition, and each current reference image block within the second image block of the reference image that satisfies the second preset condition. Because the same preset local texture feature extraction method is applied at a randomly selected position in the first image block and at the identical position in the current reference image block, mis-judgements caused by dissimilar image blocks that share statistical or texture features at a fixed position are avoided, and the accuracy of the pixel similarity and motion vector decisions is improved.
The image motion estimation method and device of the embodiments of the present invention are suitable for the many multimedia processing systems that involve a display device, including but not limited to digital televisions, set-top boxes, intelligent terminals, video monitoring systems, mobile phones, tablet computers and palmtop computers. The display device of the multimedia processing system may use, without limitation, cathode-ray-tube, plasma or liquid-crystal display technology.
To make the above objects, features and advantages of the present invention more apparent, specific embodiments of the invention are described in detail below with reference to the accompanying drawings.
With reference to the flow chart of an image motion estimation method in an embodiment of the present invention shown in Fig. 1, the method is described in detail through the following concrete steps:
S101: obtain a current input image and a reference image of the current input image, the reference image and the current input image being two-dimensional matrices of the same size.
In a specific implementation, the multimedia processing system may obtain the current input image and its reference image in several ways. For example, the current input image may be obtained by taking a photograph, capturing video, or previewing a photograph or video. The reference image may be obtained in the same ways, or by reading a memory device such as internal memory, a cache, or an external storage device. For brevity, the reference image of the current input image is hereinafter simply called the reference image.
In a specific implementation, the reference image may be the image immediately preceding the current input image on the time axis, an image obtained by applying some processing to the current input image, or any image of the same size as the current input image.
S102, first pre-conditioned the first image block is met to comprise when pre-treatment pixel by described current input image, each current reference image block in second pre-conditioned the second image block is met with described reference picture, based on the random selecting position in described first image block, and the position identical with described random selecting position in described current reference image block, adopt default same Local textural feature extracting method respectively, carry out analysis of local regions, and analysis result is compared, obtain the kinematic similitude degree to the described pixel the most similar when pre-treatment pixel, and it is described when pre-treatment pixel and described motion vector between the pixel that pre-treatment pixel is the most similar, until obtain the kinematic similitude degree of the most similar pixel of each pixel in described current input image, and the motion vector between each pixel pixel the most similar to it.
Wherein, described second image block comprises three image block identical with described first image block position, and described current reference image block is meet described first pre-conditioned image block.
In a specific implementation, the shape and size of the first image block and the second image block can be set as needed. The first and second image blocks may be circular, rectangular, elliptical, or any other regular or irregular shape, with the size of the second image block greater than that of the first. The constraints on the shape, size and so on of the two blocks are referred to below as the first preset condition and the second preset condition, respectively. For example, the first preset condition may be that the first image block is circular with radius R1; correspondingly, the second preset condition may be that the second image block is circular with radius R2, where R2 ≥ R1. Alternatively, the first preset condition may be that the first image block is rectangular with size A×B, and the second preset condition that the second image block is rectangular with size M×N, where M ≥ A and N ≥ B. In embodiments of the present invention, sizes may be expressed in pixels, millimeters, centimeters, and so on; in the examples below, unless otherwise specified, image and image block sizes are in pixels.
Fig. 2 is a schematic diagram of the relationship between the first image block in the current input image and the second image block in the reference image according to an embodiment of the present invention. Every pixel in the current input image 1 undergoes motion estimation by the same method. For example, the coordinate of the currently processed pixel may be denoted (p, q). The first image block 11 containing the currently processed pixel (p, q) is rectangular with size A×B, where A is the height and B the width of the first image block. In a specific implementation, A and B may be odd numbers or integer multiples of odd numbers. In the reference image 2 corresponding to the current input image 1, the third image block 22 occupies the same position in the whole image as the first image block 11 and also has size A×B; if the currently processed pixel (p, q) is the center of the first image block 11, the center coordinate of the third image block is likewise (p, q). During image motion estimation, the region searched for current reference image blocks in the reference image 2 is the second image block 21, of size M×N, where M is its height and N its width, M ≥ A and N ≥ B, and M and N may each be odd numbers or integer multiples of odd numbers.
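The block geometry described above can be illustrated with a minimal sketch. The helper name `extract_block` and the concrete sizes are our own choices for illustration, not from the patent, and border handling at the image edges is ignored:

```python
import numpy as np

def extract_block(img, p, q, height, width):
    """Extract a height x width block centered at (p, q); (p, q) is (row, col)."""
    half_h, half_w = height // 2, width // 2
    return img[p - half_h : p + half_h + 1,
               q - half_w : q + half_w + 1]

# Example: a 5x7 first block and an 11x15 search window around pixel (p, q).
img = np.arange(100 * 100, dtype=np.float64).reshape(100, 100)
p, q = 50, 60
A, B = 5, 7      # first-block height and width (odd)
M, N = 11, 15    # second-block (search window) height and width, M >= A, N >= B

first_block = extract_block(img, p, q, A, B)     # like block 11 in Fig. 2
search_window = extract_block(img, p, q, M, N)   # like block 21 in Fig. 2
assert first_block.shape == (A, B)
assert search_window.shape == (M, N)
```

In the reference image, the same `extract_block(ref, p, q, M, N)` call would give the second image block 21, since blocks 11 and 22 share the center (p, q).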
S103: Output, in the same arrangement as the pixels of the current input image, a similarity matrix formed by the similarity corresponding to each pixel of the current input image, and a motion vector matrix formed by the motion vector corresponding to each pixel.
In the embodiment above, for every pixel of the current input image, the motion estimation compares the first image block (containing the currently processed pixel and satisfying the first preset condition) with each current reference image block inside the second image block of the reference image (satisfying the second preset condition) by applying the same preset local texture feature extraction method at a randomly selected position in the first image block and at the identical position in the current reference image block. In other words, the position at which local texture features are extracted within the blocks is random. This avoids misjudgments caused by two dissimilar image blocks happening to have similar statistics or texture features at a fixed position, and therefore improves the accuracy of the pixel similarity and motion vector decisions.
Fig. 3 is a flow chart of the image motion estimation method according to an embodiment of the present invention, showing only the motion estimation process. Fig. 4 is a schematic diagram of the randomly selected position within the first image block. The image motion estimation process is described in detail below through one embodiment, with reference to Figs. 2, 3 and 4:
S301: Generate random numbers i and j, where 1 ≤ i ≤ A and 1 ≤ j ≤ B.
In a specific implementation, the random numbers i and j may be generated from at least one of the following inputs: the value of p, the value of q, or both p and q from the coordinate (p, q) of the currently processed pixel; the frame number of the current input image; or the size of the image. If the first image block has size A×B, then 1 ≤ i ≤ A and 1 ≤ j ≤ B. In one embodiment, the first image block has size 5×7 and i and j are two random numbers with 1 ≤ i ≤ 5 and 1 ≤ j ≤ 7. It will be appreciated that the first image block can be of any size greater than 0×0.
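The patent only requires that i and j be derived deterministically from inputs such as p, q or the frame number; the particular generator below, which hashes the inputs and reduces modulo the block dimensions, is one hypothetical realization:

```python
import hashlib

def random_position(p, q, A, B, frame=0):
    """Derive a deterministic pseudo-random position (i, j) inside an AxB block
    from the pixel coordinate (p, q) and an optional frame number.
    Returns 1-based indices with 1 <= i <= A and 1 <= j <= B."""
    digest = hashlib.sha256(f"{p},{q},{frame}".encode()).digest()
    i = digest[0] % A + 1
    j = digest[1] % B + 1
    return i, j

i, j = random_position(50, 60, A=5, B=7)
assert 1 <= i <= 5 and 1 <= j <= 7
# Deterministic: the same (p, q, frame) always yields the same position,
# matching the requirement that i and j stay fixed across all candidates.
assert random_position(50, 60, A=5, B=7) == (i, j)
```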
If the first and second image blocks are circular, with the radius of the first image block R1 and the radius of the second image block R2, then 0 ≤ i ≤ R1 and 0 ≤ j ≤ R1.
In an embodiment of the present invention, the random numbers i and j are produced by a pseudo-random binary sequence structure that takes p and q as inputs, with i and j each following a one-dimensional uniform distribution. It will be appreciated that, in a specific implementation, any other way of producing i and j from any input, obeying any probability distribution, also falls within the scope of the present invention.
It will be appreciated that, in a specific implementation, the random numbers i and j may be generated separately or together; in other words, i and j may be mutually independent or correlated.
S302: Centered on the coordinate (p, q) of the currently processed pixel, extract the first image block of size A×B.
S303: Extract each current reference image block of size A×B from the second image block in turn according to a preset order, and record the center point (w, k) of the current reference image block.
Here the second image block may be rectangular with size M×N, where M ≥ A and N ≥ B, and M, N, A and B may each be odd numbers or integer multiples of odd numbers. The center point of the current reference image block, expressed in the coordinates of the current input image, is denoted (w, k). In a specific implementation, the A×B current reference image blocks within the second image block can be traversed in any of several orders. For example, starting from the top-left corner of the second image block, image blocks of size A×B may be extracted one by one, left to right and then top to bottom; each A×B image block currently extracted from the second image block is called the current reference image block.
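The raster-order traversal of S303 can be sketched as follows; the generator name and the window sizes are illustrative, and the search window is assumed to be an M×N numpy array:

```python
import numpy as np

def iterate_reference_blocks(search_window, A, B):
    """Yield every AxB candidate block of an MxN search window in raster order
    (left to right, then top to bottom), together with the (row, col) offset
    of the candidate's center point inside the window."""
    M, N = search_window.shape
    for top in range(M - A + 1):
        for left in range(N - B + 1):
            block = search_window[top : top + A, left : left + B]
            center = (top + A // 2, left + B // 2)
            yield center, block

win = np.zeros((11, 15))
count = sum(1 for _ in iterate_reference_blocks(win, 5, 7))
assert count == (11 - 5 + 1) * (15 - 7 + 1)   # 7 * 9 = 63 candidates
```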
S304: Take the coordinate corresponding to the random numbers i and j as the randomly selected position in the first image block. Based on this position in the first image block and on the identical position in the current reference image block, apply the same preset local texture feature extraction method to each, obtaining a first group of feature values and a second group of feature values. Compare the corresponding feature values of the two groups, and combine the comparison results in a first operation according to a preset rule to obtain the current reference motion estimation value, relative to the first image block, of the current reference image block centered at coordinate (w, k) in the second image block. Traverse every reference image block in the second image block and perform motion estimation against the first image block in the same way, until the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block have been obtained.
Here the first group of feature values characterizes the local texture features of the first image block, and the second group of feature values characterizes the local texture features of the current reference image block.
In an embodiment of the present invention, the local texture feature extraction method is related to i, to j, or to the combination of i and j. In other words, the method adopted may be related only to the abscissa i of the randomly selected position, only to the ordinate j, or to both i and j simultaneously. In a specific implementation, several local texture feature extraction algorithms can be adopted as one group, with the resulting series of local texture feature values forming one group of feature values. The algorithms in the group may all be related to the abscissa i of the randomly selected position, all related to the ordinate j, or all related to both i and j. It will be appreciated that the specific operations adopted by the group of algorithms may also be such that only some of the algorithms are related to the random position.
In an embodiment of the present invention, motion estimation can be carried out as follows. Take the coordinate corresponding to the random numbers i and j as the randomly selected position in the first image block. Then, based on this position in the first image block, apply n preset local texture feature extraction algorithms to obtain a first group of n feature values characterizing the local texture features of the first image block; based on the identical position in the current reference image block, apply the same n preset algorithms to obtain a second group of n feature values characterizing the local texture features of the current reference image block. Compare the first group of n feature values with the corresponding feature values in the second group to obtain n comparison results, and combine the n results in a first operation according to a preset rule to obtain the current reference motion estimation value of the current reference image block centered at coordinate (w, k) in the second image block.
Every possible reference image block in the second image block is compared with the first image block in this manner, until the current reference motion estimation values of all current reference image blocks in the second image block have been obtained.
It should be noted that the random numbers i and j remain unchanged across all possible reference image blocks within the second image block.
S305: Take the minimum of the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block as the motion similarity of the currently processed pixel at coordinate (p, q) of the current input image; record the coordinate (w0, k0) of the center point of the current reference image block corresponding to that minimum; and obtain the vertical motion vector and horizontal motion vector corresponding to the currently processed pixel at coordinate (p, q).
In the embodiment above, a pair of random numbers i and j is generated, and during motion estimation the coordinate corresponding to i and j is taken as the randomly selected position in the first image block. Applying the same preset local texture feature extraction method at that position in the first image block yields a first group of feature values characterizing its local texture features, and applying it at the identical position in the current reference image block yields a second group of feature values characterizing that block's local texture features; the first group is then compared with the corresponding feature values in the second group of each current reference image block in the second image block. Throughout the motion estimation process, the position at which local texture features are taken is random, and the extracted features are related to that random position. This avoids, as far as possible, the situation in which the first image block and a selected reference image block are dissimilar yet have similar statistics or texture features at a fixed chosen position, and therefore improves the accuracy of the pixel similarity and motion vector decisions.
In a specific implementation, the embodiment above can be further extended or optimized to improve the image motion estimation. For example, before the first image block is compared with each current reference image block for motion estimation, two-dimensional low-pass filtering may first be applied to the extracted first image block and to all current reference image blocks; the resulting low-pass first image block and low-pass current reference image blocks are then compared to obtain the current reference motion estimation values of the current reference image blocks centered at (w, k) in the second image block. Low-pass filtering the first image block and all current reference image blocks before motion estimation weakens or removes image noise, and so further improves the accuracy of the pixel similarity and motion vector decisions.
The two-dimensional low-pass filtering can be realized as follows: the first image block is convolved (two-dimensional linear convolution) with a two-dimensional low-pass filter to obtain the low-pass first image block, and each current reference image block is convolved with the same filter to obtain the corresponding low-pass current reference image block.
In a specific implementation, the two-dimensional low-pass filter may be a two-dimensional Gaussian low-pass filter. It will be appreciated that any other linear or nonlinear filter may also be used to low-pass filter the first image block and the current reference image blocks; these are not enumerated one by one.
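A Gaussian low-pass filter applied by two-dimensional linear convolution can be sketched as below. The kernel size, sigma, and edge-replicated border handling are assumptions for illustration; the patent leaves these choices open:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """A size x size 2-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    kernel = np.outer(g, g)
    return kernel / kernel.sum()

def lowpass(block, kernel):
    """2-D linear convolution of a block with a symmetric low-pass kernel
    ('same'-size output, edge-replicated borders)."""
    k = kernel.shape[0] // 2
    padded = np.pad(block, k, mode="edge")
    out = np.zeros_like(block, dtype=np.float64)
    for r in range(block.shape[0]):
        for c in range(block.shape[1]):
            out[r, c] = np.sum(padded[r : r + kernel.shape[0],
                                      c : c + kernel.shape[1]] * kernel)
    return out

blk = np.random.default_rng(0).normal(size=(5, 7))
smooth = lowpass(blk, gaussian_kernel(3, 1.0))
assert smooth.shape == blk.shape
```

A constant block passes through unchanged, while i.i.d. noise is attenuated, which is the intended noise-suppression effect.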
Referring to Fig. 5, one embodiment is described in detail showing how local texture analysis is performed on the low-pass filtered image blocks to obtain the current reference motion estimation value between the currently processed pixel (p, q) of the current input image and the current reference image block centered at (w, k) in the second image block.
S501: Take the absolute value of the difference between the pixel value at the center of the low-pass first image block and the pixel value at the center of the low-pass current reference image block, obtaining a block-level difference value.
S502: For each pixel value of the low-pass first image block, take the absolute value of its difference from the pixel value at the same coordinate in the low-pass current reference image block, then normalize the absolute differences obtained over all coordinates to produce an overall pixel difference value.
In a specific implementation, the absolute differences over all coordinates can be normalized as follows: sum the absolute differences over all coordinates, then divide the sum by the height A and by the width B of the first image block (that is, by A×B) to obtain the overall pixel difference value.
S503: For each pixel value on the i-th row and the j-th column of the low-pass first image block, take the absolute value of its difference from the pixel value at the same coordinate on the i-th row and the j-th column of the low-pass current reference image block. Divide the sum of the absolute differences on the i-th row by the width B of the first image block to obtain a horizontal pixel difference value, and divide the sum of the absolute differences on the j-th column by the height A of the first image block to obtain a vertical pixel difference value.
S504: Combine the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value in a linear or nonlinear operation according to a preset rule, obtaining the current reference estimation value between the currently processed pixel (p, q) of the current input image and the current reference image block centered at (w, k).
In a specific implementation, the block-level difference value, overall pixel difference value, horizontal pixel difference value and vertical pixel difference value can be combined linearly or nonlinearly under various preset rules to obtain the current reference estimation value between the currently processed pixel (p, q) of the current input image and the current reference image block centered at (w, k). For example, the four difference values can be superposed with weights.
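Steps S501–S504 for a single candidate block can be sketched as below. The function name and the unit weights of the final linear combination are illustrative assumptions; the patent specifies only that the combination follows a preset rule:

```python
import numpy as np

def reference_estimation_value(first, cand, i, j, weights=(1.0, 1.0, 1.0, 1.0)):
    """Compare a low-pass first block with one low-pass candidate block of the
    same AxB shape per S501-S504. i and j are the 1-based random row/column
    indices; the weights of the combination are illustrative, not prescribed."""
    A, B = first.shape
    # S501: block-level difference -- absolute difference of the center pixels
    d_block = abs(first[A // 2, B // 2] - cand[A // 2, B // 2])
    # S502: overall pixel difference -- sum of absolute differences over A*B
    d_overall = np.abs(first - cand).sum() / (A * B)
    # S503: horizontal difference on row i (over B), vertical on column j (over A)
    d_horiz = np.abs(first[i - 1, :] - cand[i - 1, :]).sum() / B
    d_vert = np.abs(first[:, j - 1] - cand[:, j - 1]).sum() / A
    # S504: linear combination of the four difference values (weighted sum)
    return float(np.dot(weights, (d_block, d_overall, d_horiz, d_vert)))

first = np.ones((5, 7))
assert reference_estimation_value(first, first.copy(), i=2, j=3) == 0.0
cand = first + 1.0
assert reference_estimation_value(first, cand, i=2, j=3) == 4.0  # 1+1+1+1
```

A lower value means a better match, so the candidate minimizing this value over the search window is selected in S305.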
In the embodiment above, the pixel value at the center of the first image block, each pixel value of the first image block, and each pixel value on its i-th row and j-th column serve as the first group of feature values; in the second image block, the pixel value at the center of the current reference image block, each pixel value of the current reference image block, and each pixel value on its i-th row and j-th column serve as the second group of feature values. The local texture feature extraction algorithm adopted is simply taking absolute differences, or taking absolute differences and normalizing. The block-level difference value, overall pixel difference value, horizontal pixel difference value and vertical pixel difference value are the comparison results obtained.
It will be appreciated that other local texture feature extraction methods and local texture estimation methods can also be adopted as needed.
For example, based on the randomly selected position in the first image block and the identical position in the current reference image block, the horizontal and vertical edge strengths of the first image block and the horizontal and vertical edge strengths of the current reference image block can be obtained; the horizontal edge strengths of the two blocks are compared with each other, and likewise the vertical edge strengths, and the comparison results are combined according to a preset rule, which equally yields the current reference motion estimation values of the current reference image blocks centered at coordinate (w, k) in the second image block. To further improve the image motion estimation effect, the mean brightness of the first image block and of the current reference image block can also be compared as a texture feature value during the local texture analysis. For ease of computation, those skilled in the art can construct a corresponding local texture feature function for the different feature values chosen; this is not repeated here.
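The patent does not fix the edge operator, so as one hypothetical choice, the edge strengths at the random position can be taken as central differences; the function name and the clamping at block borders are our own:

```python
import numpy as np

def edge_strengths(block, i, j):
    """Horizontal and vertical edge strength at 1-based position (i, j),
    using central differences (one illustrative choice of edge operator),
    clamped at the block borders."""
    r, c = i - 1, j - 1
    h = abs(block[r, min(c + 1, block.shape[1] - 1)]
            - block[r, max(c - 1, 0)]) / 2.0      # horizontal gradient
    v = abs(block[min(r + 1, block.shape[0] - 1), c]
            - block[max(r - 1, 0), c]) / 2.0      # vertical gradient
    return h, v

ramp = np.tile(np.arange(7.0), (5, 1))   # brightness rises left to right
h, v = edge_strengths(ramp, i=3, j=4)
assert h == 1.0 and v == 0.0             # a purely horizontal edge
```

Comparing these per-block edge strengths (e.g. by absolute difference) would then play the same role as the pixel-difference features of S501–S504.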
With the above method, the first image block in the current input image is matched against each current reference image block in the second image block of the reference image, thereby obtaining all current reference motion estimation values of the first image block within the second image block.
After all current reference motion estimation values of the first image block within the second image block have been obtained, the minimum among them can be taken as the motion similarity of the currently processed pixel (p, q) in the current input image, while the coordinate (w0, k0), in the current input image's coordinates, of the center point of the current reference image block corresponding to that minimum is recorded. The vertical motion vector and horizontal motion vector corresponding to the currently processed pixel at coordinate (p, q) can then be obtained, as follows:
When the minimum is unique, the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) is (p − w0), and the horizontal motion vector is (q − k0).
When there is more than one minimum, a second operation is performed on the center coordinates (w0, k0) of the current reference image blocks in the second image block corresponding to all the minima, obtaining a final coordinate (w_avg, k_avg); the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) is then (p − w_avg), and the horizontal motion vector is (q − k_avg).
Correspondingly, after the image motion estimation process is completed, the output motion vector matrix comprises a vertical motion vector matrix in the vertical direction and a horizontal motion vector matrix in the horizontal direction.
In a specific implementation, the second operation can be realized by computing the Euclidean distance between each coordinate (w0, k0) corresponding to a minimum and (p, q): the coordinate (w0, k0) with the smallest Euclidean distance is taken as the final coordinate (w_avg, k_avg).
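The minimum selection of S305, with the Euclidean-distance tie-break just described, can be sketched as follows; the function name and the dict-based interface are illustrative:

```python
import math

def best_motion_vector(estimates, p, q):
    """estimates: dict mapping a candidate center (w, k) to its current
    reference motion estimation value. Returns (similarity, vertical_mv,
    horizontal_mv). Ties on the minimum value are broken by choosing the
    center with the smallest Euclidean distance to (p, q)."""
    best = min(estimates.values())
    minima = [c for c, v in estimates.items() if v == best]
    w0, k0 = min(minima, key=lambda c: math.hypot(c[0] - p, c[1] - q))
    return best, p - w0, q - k0

est = {(49, 58): 3.0, (50, 61): 1.0, (52, 63): 1.0}
sim, v_mv, h_mv = best_motion_vector(est, p=50, q=60)
assert sim == 1.0
assert (v_mv, h_mv) == (0, -1)   # nearest tied center is (50, 61)
```

When the minimum is unique the tie-break is a no-op and the result reduces to (p − w0, q − k0), as in the unique-minimum case above.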
It will be appreciated that any other operation method capable of obtaining the final coordinate (w_avg, k_avg) also falls within the scope of the present invention.
It should be noted that, in a specific implementation, the low-pass filtering of the current input image and the reference image may also be omitted, with the local texture feature analysis carried out directly.
To help those skilled in the art better understand and implement the present invention, an embodiment of the present invention also provides an image motion estimation device corresponding to the method embodiment above.
Referring to the structural diagram of the image motion estimation device shown in Fig. 6, the image motion estimation device comprises an acquiring unit 61, a motion estimation unit 62 and an output unit 63, wherein:
the acquiring unit 61 is adapted to obtain a current input image and a reference image of the current input image, the reference image and the current input image being two-dimensional matrices of the same size;
the motion estimation unit 62 is adapted to: for the first image block in the current input image that contains the currently processed pixel and satisfies a first preset condition, and for each current reference image block within the second image block in the reference image that satisfies a second preset condition, based on the pixel values at a randomly selected position in the first image block and at the same position in the current reference image block, apply the same preset local texture feature extraction method to extract local texture features, perform texture analysis, and compare the analysis results to obtain the motion similarity between the currently processed pixel and the pixel most similar to it, together with the motion vector between the two, until the motion similarity and the corresponding motion vector have been obtained for every pixel in the current input image; here the second image block contains a third image block located at the same position as the first image block, and the current reference image block is an image block satisfying the first preset condition;
the output unit 63 is adapted to output, in the same arrangement as the pixels of the current input image, a similarity matrix formed by the similarity corresponding to each pixel of the current input image, and a motion vector matrix formed by the motion vector corresponding to each pixel.
With the image motion estimation device above, for every pixel of the current input image, the motion estimation compares the first image block (containing the currently processed pixel and satisfying the first preset condition) with each current reference image block inside the second image block of the reference image (satisfying the second preset condition) by extracting local texture features, with the same preset extraction method, from the pixel values at a randomly selected position in the first image block and at the same position in the current reference image block, and then performing texture analysis. This avoids misjudgments caused by two dissimilar image blocks happening to have similar statistics or texture features at a fixed position, and therefore improves the accuracy of the pixel similarity and motion vector decisions.
In an embodiment of the present invention, the first preset condition may comprise: the first image block is rectangular with size A×B; correspondingly, the second preset condition comprises: the second image block is rectangular with size M×N.
In another embodiment of the present invention, the first preset condition comprises: the first image block is circular with radius R1; correspondingly, the second preset condition comprises: the second image block is circular with radius R2.
In a specific implementation, referring to the structural diagram of the motion estimation unit shown in Fig. 7, the motion estimation unit 62 can comprise:
a random number generation subunit 621, adapted to generate a pair of random numbers i and j, where 1 ≤ i ≤ A and 1 ≤ j ≤ B;
a first extraction subunit 622, adapted to extract the first image block of size A×B centered on the coordinate (p, q) of the currently processed pixel;
a second extraction subunit 623, adapted to extract each current reference image block of size A×B from the second image block in turn according to a preset order, and to record the center point (w, k) of the current reference image block;
a motion estimation subunit 624, adapted to: take the coordinate corresponding to the random numbers i and j as the randomly selected position in the first image block; based on this position in the first image block and on the identical position in the current reference image block, apply the same preset local texture feature extraction method to each, obtaining a first group of feature values and a second group of feature values; compare the corresponding feature values of the two groups and combine the comparison results in a first operation according to a preset rule, obtaining the current reference motion estimation value, relative to the first image block, of the current reference image block centered at coordinate (w, k) in the second image block; traverse every reference image block in the second image block and perform motion estimation against the first image block in the same way, until the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block have been obtained; take the minimum of these values as the motion similarity of the currently processed pixel at coordinate (p, q) of the current input image, record the coordinate (w0, k0) of the center point of the current reference image block corresponding to that minimum, and obtain the vertical motion vector and horizontal motion vector corresponding to the currently processed pixel at coordinate (p, q); here the local texture feature extraction method is related to i, to j, or to the combination of i and j, the first group of feature values characterizes the local texture features of the first image block, and the second group of feature values characterizes the local texture features of the current reference image block.
In a specific implementation, the random number generation subunit 621 may generate the random numbers i and j from at least one of the following inputs: the value of p, the value of q, or both p and q from the coordinate (p, q) of the currently processed pixel; the frame number of the current input image; or the size of the image.
The motion estimation unit 62 can further comprise a low-pass filtering subunit 625, adapted to apply two-dimensional low-pass filtering to the first image block and to all current reference image blocks respectively, obtain the corresponding low-pass first image block and low-pass current reference image blocks, and output them to the motion estimation subunit for comparison, so as to obtain all current reference motion estimation values of the current reference image blocks centered at (w, k) in the second image block.
In a specific implementation, the low-pass filtering subunit can be realized with a two-dimensional low-pass filter, for example a two-dimensional Gaussian low-pass filter. By using the low-pass filtering subunit 625 to apply two-dimensional low-pass filtering to the first image block and all current reference image blocks, image noise can be weakened or removed, further improving the accuracy of the pixel similarity and motion vector decisions.
In a specific implementation, the estimation subelement 624 may comprise:
First computing module 6241, configured to take the absolute value of the difference between the pixel value at the center of the low-pass first image block and the pixel value at the center of the low-pass current reference image block, obtaining a block-level difference value;
Second computing module 6242, configured to take, for each pixel value of the low-pass first image block, the absolute value of its difference from the pixel value at the same coordinate in the low-pass current reference image block, and then normalize the absolute differences obtained over all coordinates, obtaining an overall pixel difference value;
Third computing module 6243, configured to take, for each pixel value in row i of the low-pass first image block in turn, the absolute value of its difference from the pixel value at the same coordinate in row i of the low-pass current reference image block, and divide the sum of the absolute differences over row i by the width B of the first image block, obtaining a horizontal pixel difference value;
Fourth computing module 6244, configured to take, for each pixel value in column j of the low-pass first image block in turn, the absolute value of its difference from the pixel value at the same coordinate in column j of the low-pass current reference image block, and divide the sum of the absolute differences over column j by the height A of the first image block, obtaining a vertical pixel difference value;
Fifth computing module 6245, configured to combine the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value by a linear or nonlinear operation according to a preset rule, obtaining the current reference estimation value, for the currently processed pixel (p, q) of the current input image, of the current reference image block centered on point (w, k).
For example, the fifth computing module 6245 may form a weighted sum of the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value, obtaining the current reference estimation value, for the currently processed pixel (p, q) of the current input image, of the current reference image block centered on point (w, k).
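The four difference values and their weighted combination can be sketched as follows; this assumes 0-based row/column indices for (i, j) and uses illustrative equal weights, which are not values prescribed by the patent:

```python
import numpy as np

def reference_estimate(first, ref, i, j, weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the block-level, overall, horizontal and vertical difference
    values into one estimation value for a single candidate reference block.
    `first` and `ref` are the (already low-pass-filtered) A x B blocks."""
    first = np.asarray(first, dtype=float)
    ref = np.asarray(ref, dtype=float)
    A, B = first.shape
    cy, cx = A // 2, B // 2
    # block-level difference: absolute difference of the two center pixels
    block_diff = abs(first[cy, cx] - ref[cy, cx])
    # overall pixel difference: sum of absolute differences, normalized by block area
    overall_diff = np.abs(first - ref).sum() / (A * B)
    # horizontal difference: absolute differences over row i, divided by the width B
    horiz_diff = np.abs(first[i] - ref[i]).sum() / B
    # vertical difference: absolute differences over column j, divided by the height A
    vert_diff = np.abs(first[:, j] - ref[:, j]).sum() / A
    w1, w2, w3, w4 = weights
    return w1 * block_diff + w2 * overall_diff + w3 * horiz_diff + w4 * vert_diff
```

Identical blocks score 0, and the candidate with the smallest value is taken as the best match.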
In the image motion estimation device of the above embodiment, a pair of random numbers i and j is generated by random number generation subelement 621. During motion estimation this largely avoids the situation where the first image block and the selected reference image block are dissimilar yet happen to match at a fixed chosen position, thereby further improving the accuracy of the pixel similarity and motion vector decisions.
In a specific implementation, estimation subelement 624 may comprise a motion vector computation module 6246, operable, when the minimum value is unique, to compute the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) as (p-w0) and the horizontal motion vector as (q-k0).
Motion vector computation module 6246 is also operable, when the number of minimum values is greater than 1, to perform a second operation on the center coordinates (w0, k0) of the current reference image blocks in the second image block corresponding to all the minimum values, obtaining a final coordinate (w_avg, k_avg), and then to compute the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) as (p-w_avg) and the horizontal motion vector as (q-k_avg).
Correspondingly, the motion vector matrix output by output unit 63 comprises a vertical motion vector matrix for the vertical direction and a horizontal motion vector matrix for the horizontal direction.
In a specific implementation, motion vector computation module 6246 may compute the Euclidean distance between each coordinate (w0, k0) corresponding to a minimum value and (p, q), and take the coordinate (w0, k0) with the smallest Euclidean distance as the final coordinate (w_avg, k_avg).
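The motion vector computation, including this Euclidean-distance tie-break when several candidates share the minimum value, can be sketched as follows (the function name and argument layout are illustrative assumptions):

```python
import numpy as np

def motion_vector(estimates, centers, p, q):
    """Given the estimation value and center coordinate (w, k) of every
    candidate reference block, return (vertical_mv, horizontal_mv) for the
    pixel at (p, q).  Ties for the minimum are broken by choosing the
    center closest in Euclidean distance to (p, q)."""
    estimates = np.asarray(estimates, dtype=float)
    centers = np.asarray(centers, dtype=float)
    min_val = estimates.min()
    candidates = centers[estimates == min_val]  # all centers achieving the minimum
    if len(candidates) == 1:
        w0, k0 = candidates[0]
    else:
        # Euclidean distance from each tied center to the current pixel
        d = np.hypot(candidates[:, 0] - p, candidates[:, 1] - q)
        w0, k0 = candidates[np.argmin(d)]
    return p - w0, q - k0
```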
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the above embodiments may be completed by related hardware under the instruction of a program, which may be stored in a computer-readable storage medium; the storage medium may include: ROM, RAM, a magnetic disk, an optical disc, or the like.
Although the present invention is disclosed as above, it is not limited thereto. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention; the protection scope of the present invention shall therefore be defined by the claims.

Claims (20)

1. An image motion estimation method, characterized by comprising:
obtaining a current input image and a reference image of the current input image, the reference image and the current input image being two-dimensional matrices of the same size;
for a first image block of the current input image that contains the currently processed pixel and satisfies a first preset condition, and each current reference image block within a second image block of the reference image that satisfies a second preset condition, based on a randomly selected position in the first image block and the identical position in the current reference image block, applying the same preset local texture feature extraction method to each to extract local texture features, performing local region analysis, and comparing the analysis results, so as to obtain the motion similarity of the pixel most similar to the currently processed pixel and the motion vector between the currently processed pixel and that most similar pixel, until the motion similarity of the most similar pixel for every pixel in the current input image, and the motion vector between every pixel and its most similar pixel, are obtained; wherein the second image block contains an image block whose position is identical to that of the first image block, and the current reference image block is an image block satisfying the first preset condition;
outputting, in the same arrangement as the pixels of the current input image, a similarity matrix formed by the similarity corresponding to each pixel of the current input image, and a motion vector matrix formed by the motion vector corresponding to each pixel.
2. The image motion estimation method of claim 1, characterized in that the first preset condition comprises: the first image block is rectangular with size A×B; and the second preset condition comprises: the second image block is rectangular with size M×N.
3. The image motion estimation method of claim 2, characterized in that said taking a first image block of the current input image that contains the currently processed pixel and satisfies the first preset condition, and each current reference image block within a second image block of the reference image that satisfies the second preset condition, and, based on a randomly selected position in the first image block and the identical position in the current reference image block, applying the same preset local texture feature extraction method to each to extract local texture features, performing local region analysis, and comparing the analysis results, so as to obtain the motion similarity of the pixel most similar to the currently processed pixel and the motion vector between the currently processed pixel and that most similar pixel, comprises:
generating random numbers i and j, wherein 1≤i≤A and 1≤j≤B;
extracting a first image block of size A×B centered on the coordinate (p, q) of the currently processed pixel;
extracting, in a preset order, each current reference image block of size A×B from the second image block in turn, and recording the coordinate (w, k) of the center of the current reference image block;
selecting the coordinates corresponding to the random numbers i and j as the randomly selected position in the first image block; based on the randomly selected position in the first image block and the identical position in the current reference image block, applying the same preset local texture feature extraction method to each, obtaining a first set of feature values and a second set of feature values; comparing the corresponding feature values of the first set and the second set; performing a first operation on the comparison results according to a preset rule, obtaining the current reference motion estimation value, relative to the first image block, of the current reference image block centered on coordinate (w, k) in the second image block; and traversing each reference image block in the second image block, performing motion estimation against the first image block in the above manner, until the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block are obtained; wherein the local texture feature extraction method is related to i, to j, or to the combination of i and j, the first set of feature values characterizes the local texture feature of the first image block, and the second set of feature values characterizes the local texture feature of the current reference image block;
obtaining the minimum of the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block as the motion similarity of the currently processed pixel at coordinate (p, q) of the current input image, recording the coordinate (w0, k0) of the center of the current reference image block corresponding to the minimum, and obtaining the vertical motion vector and horizontal motion vector corresponding to the coordinate (p, q) of the currently processed pixel.
4. The image motion estimation method of claim 3, characterized in that said generating random numbers i and j comprises: taking as input at least one of: the value of p, the value of q, or both values p and q from the coordinate (p, q) of the currently processed pixel; the frame number of the current input image; or the size of the image, and generating the random numbers i and j from that input.
5. The image motion estimation method of claim 3, characterized in that, before applying the same preset local texture feature extraction method respectively to the first image block of the current input image and the reference image blocks of the reference image to extract texture features and perform local region analysis, the method further comprises: applying two-dimensional low-pass filtering to the first image block and to all current reference image blocks.
6. The image motion estimation method of claim 5, characterized in that said obtaining, based on the randomly selected position in the first image block and the identical position in the current reference image block, a first set of feature values and a second set of feature values by applying the same preset local texture feature extraction method to each, comparing the corresponding feature values of the first set and the second set, and performing a first operation on the comparison results according to a preset rule to obtain the current reference motion estimation value of the current reference image block centered on coordinate (w, k) in the second image block, comprises:
taking the absolute value of the difference between the pixel value at the center of the low-pass first image block and the pixel value at the center of the low-pass current reference image block, obtaining a block-level difference value;
taking, for each pixel value of the low-pass first image block, the absolute value of its difference from the pixel value at the same coordinate in the low-pass current reference image block, and then normalizing the absolute differences obtained over all coordinates, obtaining an overall pixel difference value;
taking, for each pixel value in row i and column j of the low-pass first image block in turn, the absolute value of its difference from the pixel value at the same coordinate in row i and column j of the low-pass current reference image block, dividing the sum of the absolute differences over row i by the width B of the first image block to obtain a horizontal pixel difference value, and dividing the sum of the absolute differences over column j by the height A of the first image block to obtain a vertical pixel difference value;
combining the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value by a linear or nonlinear operation according to a preset rule, obtaining the current reference estimation value, for the currently processed pixel (p, q) of the current input image, of the current reference image block centered on point (w, k).
7. The image motion estimation method of claim 6, characterized in that said combining the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value by a linear or nonlinear operation according to a preset rule comprises: forming a weighted sum of the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value.
8. The image motion estimation method of claim 3, characterized in that said obtaining the vertical motion vector and horizontal motion vector corresponding to the coordinate (p, q) of the currently processed pixel comprises: when the minimum value is unique, the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) is (p-w0) and the horizontal motion vector is (q-k0).
9. The image motion estimation method of claim 3, characterized in that said obtaining the vertical motion vector and horizontal motion vector corresponding to the coordinate (p, q) of the currently processed pixel comprises: when the number of minimum values is greater than 1, performing a second operation on the center coordinates (w0, k0) of the current reference image blocks in the second image block corresponding to all the minimum values, obtaining a final coordinate (w_avg, k_avg), whereupon the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) is (p-w_avg) and the horizontal motion vector is (q-k_avg).
10. The image motion estimation method of claim 9, characterized in that said performing a second operation on the center coordinates (w0, k0) of the current reference image blocks in the second image block corresponding to all the minimum values comprises: computing the Euclidean distance between each coordinate (w0, k0) corresponding to a minimum value and (p, q), and taking the coordinate (w0, k0) with the smallest Euclidean distance as the final coordinate (w_avg, k_avg).
11. The image motion estimation method of claim 1, characterized in that the first preset condition comprises: the first image block is circular with radius R1; and the second preset condition comprises: the second image block is circular with radius R2.
12. An image motion estimation device, characterized by comprising:
an acquiring unit, for obtaining a current input image and a reference image of the current input image, the reference image and the current input image being two-dimensional matrices of the same size;
a motion estimation unit, for taking a first image block of the current input image that contains the currently processed pixel and satisfies a first preset condition, and each current reference image block within a second image block of the reference image that satisfies a second preset condition, and, based on a randomly selected position in the first image block and the identical position in the current reference image block, applying the same preset local texture feature extraction method to each to extract local texture features, performing local region analysis, and comparing the analysis results, so as to obtain the motion similarity of the pixel most similar to the currently processed pixel and the motion vector between the currently processed pixel and that most similar pixel, until the motion similarity of the most similar pixel for every pixel in the current input image, and the motion vector between every pixel and its most similar pixel, are obtained; wherein the second image block contains an image block whose position is identical to that of the first image block, and the current reference image block is an image block satisfying the first preset condition;
an output unit, for outputting, in the same arrangement as the pixels of the current input image, a similarity matrix formed by the similarity corresponding to each pixel of the current input image, and a motion vector matrix formed by the motion vector corresponding to each pixel.
13. The image motion estimation device of claim 12, characterized in that the first preset condition comprises: the first image block is rectangular with size A×B; and the second preset condition comprises: the second image block is rectangular with size M×N.
14. The image motion estimation device of claim 13, characterized in that the motion estimation unit comprises:
a random number generation subelement, for generating random numbers i and j, wherein 1≤i≤A and 1≤j≤B;
a first extraction subelement, for extracting a first image block of size A×B centered on the coordinate (p, q) of the currently processed pixel;
a second extraction subelement, for extracting, in a preset order, each current reference image block of size A×B from the second image block in turn, and recording the coordinate (w, k) of the center of the current reference image block;
an estimation subelement, configured to: select the coordinates corresponding to the random numbers i and j as the randomly selected position in the first image block; based on the randomly selected position in the first image block and the identical position in the current reference image block, apply the same preset local texture feature extraction method to each, obtaining a first set of feature values and a second set of feature values; compare the corresponding feature values of the first set and the second set; perform a first operation on the comparison results according to a preset rule, obtaining the current reference motion estimation value, relative to the first image block, of the current reference image block centered on coordinate (w, k) in the second image block; traverse each reference image block in the second image block, performing motion estimation against the first image block in the above manner, until the current reference motion estimation values of all current reference image blocks in the second image block relative to the first image block are obtained; obtain the minimum of those values as the motion similarity of the currently processed pixel at coordinate (p, q) of the current input image; record the coordinate (w0, k0) of the center of the current reference image block corresponding to that minimum; and obtain the vertical motion vector and horizontal motion vector corresponding to the coordinate (p, q) of the currently processed pixel; wherein the local texture feature extraction method is related to i, to j, or to the combination of i and j, the first set of feature values characterizes the local texture feature of the first image block, and the second set of feature values characterizes the local texture feature of the current reference image block.
15. The image motion estimation device of claim 14, characterized in that the random number generation subelement is configured to take as input at least one of: the value of p, the value of q, or both values p and q from the coordinate (p, q) of the currently processed pixel; the frame number of the current input image; or the size of the image, and to generate the random numbers i and j from that input.
16. The image motion estimation device of claim 14, characterized in that the motion estimation unit further comprises a low-pass filtering subelement, for applying two-dimensional low-pass filtering to the first image block and to all current reference image blocks, obtaining the corresponding low-pass first image block and low-pass current reference image blocks, and outputting them to the estimation subelement, so as to obtain the current reference motion estimation values of all current reference image blocks centered on point (w, k) in the second image block.
17. The image motion estimation device of claim 16, characterized in that the estimation subelement comprises:
a first computing module, for taking the absolute value of the difference between the pixel value at the center of the low-pass first image block and the pixel value at the center of the low-pass current reference image block, obtaining a block-level difference value;
a second computing module, for taking, for each pixel value of the low-pass first image block, the absolute value of its difference from the pixel value at the same coordinate in the low-pass current reference image block, and then normalizing the absolute differences obtained over all coordinates, obtaining an overall pixel difference value;
a third computing module, for taking, for each pixel value in row i of the low-pass first image block in turn, the absolute value of its difference from the pixel value at the same coordinate in row i of the low-pass current reference image block, and dividing the sum of the absolute differences over row i by the width B of the first image block, obtaining a horizontal pixel difference value;
a fourth computing module, for taking, for each pixel value in column j of the low-pass first image block in turn, the absolute value of its difference from the pixel value at the same coordinate in column j of the low-pass current reference image block, and dividing the sum of the absolute differences over column j by the height A of the first image block, obtaining a vertical pixel difference value;
a fifth computing module, for combining the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value by a linear or nonlinear operation according to a preset rule, obtaining the current reference estimation value, for the currently processed pixel (p, q) of the current input image, of the current reference image block centered on point (w, k).
18. The image motion estimation device of claim 17, characterized in that the fifth computing module is configured to form a weighted sum of the block-level difference value, the overall pixel difference value, the horizontal pixel difference value and the vertical pixel difference value, obtaining the current reference estimation value, for the currently processed pixel (p, q) of the current input image, of the current reference image block centered on point (w, k).
19. The image motion estimation device of claim 14, characterized in that the estimation subelement comprises:
a motion vector computation module, for computing, when the minimum value is unique, the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) as (p-w0) and the horizontal motion vector as (q-k0).
20. The image motion estimation device of claim 19, characterized in that the motion vector computation module is further configured to perform, when the number of minimum values is greater than 1, a second operation on the center coordinates (w0, k0) of the current reference image blocks in the second image block corresponding to all the minimum values, obtaining a final coordinate (w_avg, k_avg), and then to compute the vertical motion vector corresponding to the currently processed pixel coordinate (p, q) as (p-w_avg) and the horizontal motion vector as (q-k_avg).
CN201410228500.0A 2014-05-27 2014-05-27 Picture motion estimating method and device Active CN105141963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410228500.0A CN105141963B (en) 2014-05-27 2014-05-27 Picture motion estimating method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410228500.0A CN105141963B (en) 2014-05-27 2014-05-27 Picture motion estimating method and device

Publications (2)

Publication Number Publication Date
CN105141963A true CN105141963A (en) 2015-12-09
CN105141963B CN105141963B (en) 2018-04-03

Family

ID=54727150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410228500.0A Active CN105141963B (en) 2014-05-27 2014-05-27 Picture motion estimating method and device

Country Status (1)

Country Link
CN (1) CN105141963B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1625900A (en) * 2001-07-02 2005-06-08 月光无线有限公司 Method and apparatus for motion estimation between video frames
CN101835037A (en) * 2009-03-12 2010-09-15 索尼株式会社 Method and system for carrying out reliability classification on motion vector in video
US20140133569A1 (en) * 2012-11-14 2014-05-15 Samsung Electronics Co., Ltd. Method for selecting a matching block


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728746A (en) * 2019-09-23 2020-01-24 清华大学 Modeling method and system for dynamic texture
CN110728746B (en) * 2019-09-23 2021-09-21 清华大学 Modeling method and system for dynamic texture
CN111147891A (en) * 2019-12-31 2020-05-12 杭州威佩网络科技有限公司 Method, device and equipment for acquiring information of object in video picture
CN111147891B (en) * 2019-12-31 2022-09-13 杭州威佩网络科技有限公司 Method, device and equipment for acquiring information of object in video picture
CN113645466A (en) * 2021-06-29 2021-11-12 深圳市迪威码半导体有限公司 Image removal block based on random probability
CN113645466B (en) * 2021-06-29 2024-03-08 深圳市迪威码半导体有限公司 Image deblocking algorithm based on random probability

Also Published As

Publication number Publication date
CN105141963B (en) 2018-04-03

Similar Documents

Publication Publication Date Title
CN107301402B (en) Method, device, medium and equipment for determining key frame of real scene
US9779324B2 (en) Method and device for detecting interest points in image
CN103366170A (en) Image binarization processing device and method thereof
US20140376882A1 (en) Computing device with video analyzing function and video analyzing method
US9781382B2 (en) Method for determining small-object region, and method and apparatus for interpolating frame between video frames
CN104253929B (en) Vedio noise reduction method and its system
US20240078680A1 (en) Image segmentation method, network training method, electronic equipment and storage medium
WO2021012965A1 (en) Image processing method and apparatus, mobile terminal video processing method and apparatus, device and medium
CN109313806A (en) Image processing apparatus, image processing system, image processing method and program
CN104574331A (en) Data processing method, device, computer storage medium and user terminal
CN103440664A (en) Method, system and computing device for generating high-resolution depth map
CN110263699A (en) Method of video image processing, device, equipment and storage medium
Xu et al. Dynamic obstacle detection based on panoramic vision in the moving state of agricultural machineries
Yao et al. Object based video synopsis
CN105141963A (en) Image motion estimation method and device
US11256949B2 (en) Guided sparse feature matching via coarsely defined dense matches
CN113628259A (en) Image registration processing method and device
CN113177941A (en) Steel coil edge crack identification method, system, medium and terminal
CN104602096A (en) Detecting method and device for video subtitle area
CN116431857A (en) Video processing method and system for unmanned scene
KR101920159B1 (en) Stereo Matching Method and Device using Support point interpolation
CN113706639B (en) Image compression method and device based on rectangular NAM, storage medium and computing equipment
CN111754417B (en) Noise reduction method for video image, video matting method, device and electronic system
US9432690B2 (en) Apparatus and method for video processing
CN113191210A (en) Image processing method, device and equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant