CN106210449A - Frame rate up-conversion motion estimation method and system based on multi-information fusion - Google Patents

Frame rate up-conversion motion estimation method and system based on multi-information fusion

Info

Publication number
CN106210449A
CN106210449A
Authority
CN
China
Prior art keywords
motion vector
vector
optical flow
matching
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610657029.6A
Other languages
Chinese (zh)
Other versions
CN106210449B (en)
Inventor
张小云
鲁国
包文博
高志勇
陈立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201610657029.6A priority Critical patent/CN106210449B/en
Publication of CN106210449A publication Critical patent/CN106210449A/en
Application granted granted Critical
Publication of CN106210449B publication Critical patent/CN106210449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H04N5/145 Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a frame rate up-conversion motion estimation method and system based on multi-information fusion. The steps of the method are: read the previous and next frames; down-sample each of them; perform optical-flow-based motion estimation on the down-sampled images; perform block-matching motion estimation on the original images; extract the SIFT features of the two frames, compute feature descriptors and perform feature matching to obtain the feature-matching motion vectors; fuse the block-matching motion vectors, feature-matching motion vectors and optical-flow motion vectors; and propagate the motion vectors. Compared with traditional block-matching-based motion estimation algorithms, the present invention improves accuracy markedly. Compared with general optical flow estimation algorithms, it performs better on small-object motion vectors and motion-boundary vectors.

Description

Frame rate up-conversion motion estimation method and system based on multi-information fusion
Technical field
The present invention relates to the field of video frame rate up-conversion, and in particular to a frame rate up-conversion motion estimation method and system based on multi-information fusion.
Background art
As a key parameter of motion video, the video frame rate affects how smoothly viewers perceive moving objects. In low-frame-rate video, pixel-quality fluctuation between adjacent frames easily causes flickering, and in scenes with consistent motion, such as sports broadcasts, motion jittering is readily observed. High-frame-rate video substantially improves these visual experiences, bringing better overall visual quality and picture fluency, and has become a development direction and important requirement for future video applications.
Frame rate up-conversion (FRUC), an important video post-processing technology, has been widely studied in computer vision, digital television and related fields. Current mainstream practical FRUC methods mainly exploit the correlation between consecutive frames of a video sequence and interpolate along object motion trajectories; they are therefore also called motion-compensated FRUC (MC-FRUC). Existing research centers on two key points, motion estimation and motion-compensated interpolation, and a series of results have been achieved in improving the accuracy of the motion vector field and enhancing the reliability of the interpolated image. According to how the motion vectors are produced, common motion estimation methods can be divided into five classes: block matching, feature matching, region-based, gradient-based and transform-domain methods. Among them, block matching is widely used in practice because it is simple, efficient and easy to implement in hardware. However, the smoothness of block-matching results is poor; against monotonous backgrounds in particular, block-matching motion vectors are very noisy. Moreover, most block-matching methods struggle to estimate large motion vectors accurately, so in practice block-matching-based methods often make mistakes at motion boundaries and on dynamic backgrounds, reducing the visual quality of the interpolated video. To address these problems, researchers have proposed a variety of methods, including multi-resolution motion estimation and directly or indirectly applied smoothness constraints, to further improve picture quality. Even so, the quality of interpolated images still needs further improvement.
Optical flow methods, especially variational optical flow based on Horn-Schunck, have been developed further in recent years. Among them, the method of "High Accuracy Optical Flow Estimation Based on a Theory for Warping", published by Brox et al. at ECCV, is widely used in practical applications. This method adopts a hierarchical framework and smooths the vector field by global optimization. Experiments show that the vector fields produced by optical flow are clearly better than those of block matching in both accuracy and smoothness, and can be applied to motion estimation for frame rate up-conversion. However, traditional optical flow methods also have shortcomings. First, the hierarchical framework accelerates the convergence of the motion vectors, but the motion vectors of small objects are easily lost at the top layers; this is particularly evident when an object's size is small relative to its motion vector. In addition, because of the global smoothness constraint, optical flow is inaccurate for large motion vectors, and errors easily appear and cluster near objects with large motion.
Feature extraction and feature matching are widely used in image processing, with SIFT and SURF features being the most common. SIFT features have the following characteristics. First, SIFT is a local image feature: it is invariant to rotation, scaling and brightness changes, and maintains a degree of stability under viewpoint changes, affine transformations and noise. Second, SIFT features are distinctive and information-rich, suitable for fast and accurate matching in massive feature databases. In addition, SIFT features are abundant: even a few objects can yield a large number of SIFT feature vectors. The SIFT method mainly comprises the following steps: scale-space extremum detection, keypoint localization, orientation assignment and keypoint description. By computing and matching the keypoints of two consecutive frames, the motion vectors of corresponding keypoints between the two frames can be obtained. Owing to the properties of SIFT, the motion vectors obtained by feature matching estimate both complex motion and large motion vectors well. However, SIFT features are usually sparse and the positions of the keypoints are unpredictable. Moreover, since SIFT matching is point matching, the motion vectors it yields are scarce, or error-prone, in regions with simple texture.
A search found Chinese invention application CN104915966A (application number CN201510233587.5), which discloses a Kalman-filtering-based frame rate up-conversion motion estimation method and system. The method comprises the following steps: first, the parameters and initial state of the Kalman filter model are set so that the model matches the real system; then a motion vector observation is obtained by first performing unidirectional motion estimation and then mapping it to the interpolated frame; finally, a Kalman filtering method with time-varying gain updates the measurement vector, thus obtaining a more accurate motion vector.
However, this kind of frame rate up-conversion motion estimation method is not accurate enough for small-object motion vectors and large motion vectors, and is not robust enough for complex motion vectors.
Summary of the invention
In view of the defects of the prior art, the object of the present invention is to provide a video frame rate up-conversion motion estimation method and system which fuse optical-flow motion vectors, feature-matching motion vectors and block-matching motion vectors, combined with motion post-processing means, so as to provide high-performance motion vectors and improve the visual quality of interpolated frames.
According to a first aspect of the present invention, a frame rate up-conversion motion estimation method based on multi-information fusion is provided, the method comprising the following steps:
Step 1: read two consecutive frames from a high-definition video stream;
Step 2: down-sample the image data of Step 1;
Step 3: perform optical-flow-based motion estimation on the two down-sampled frames of Step 2 to obtain the optical-flow motion vectors;
Step 4: perform block-matching-based motion estimation on the two frames of Step 1 to obtain the block-matching motion vectors;
Step 5: extract SIFT features from the two frames of Step 1, compute a feature descriptor for each feature, and perform feature matching to obtain the corresponding feature-matching motion vectors;
Step 6: fuse the block-matching motion vectors obtained in Step 4, the feature-matching motion vectors obtained in Step 5 and the optical-flow motion vectors obtained in Step 3 to obtain the fused motion vector field;
Step 7: refine the fused motion vector field of Step 6 by means of reliable motion vector propagation;
Step 8: return to Step 1 and read the next two frames.
Preferably, in Step 2, the image is down-sampled by taking one point out of every 4 in both the horizontal and vertical directions.
Preferably, in Step 3, the optical flow method is the classical variational optical flow method.
Preferably, in Step 4, the block matching algorithm is three-dimensional recursive search.
Preferably, in Step 6, the feature-matching motion vectors, block-matching motion vectors and optical-flow motion vectors are fused as follows:
S1: obtain the optical-flow motion vector field vo produced by the optical flow method, the feature-matching motion vector field vf produced by feature matching and the block-matching motion vector field vb, and create the fused vector field vm;
S2: scan the fused motion vector field vm from left to right, top to bottom;
S3: judge whether a feature-matching motion vector exists at the current position of the feature-matching motion vector field vf; if not, go to S4, otherwise go to S6;
S4: compute the pixel difference SADB of the motion vector of the block-matching motion vector field vb at the current position, and the pixel difference SADO of the motion vector of the optical-flow motion vector field vo at the current position;
S5: from SADB and SADO, select the motion vector corresponding to the minimum value as the motion vector of the fused vector field vm at the current position; go to S8;
S6: compute the pixel difference SADO of the motion vector of the optical-flow motion vector field vo at the current position, the pixel difference SADF of the motion vector of the feature-matching motion vector field vf at the current position, and the pixel difference SADB of the motion vector of the block-matching motion vector field vb at the current position;
S7: from SADB, SADO and SADF, select the motion vector corresponding to the minimum value as the motion vector of the fused vector field vm at the current position; go to S8;
S8: if the fused motion vector field vm has been fully scanned, exit and obtain the motion vector field vm; otherwise return to S2.
More preferably, in S4 and S6, the pixel difference of a motion vector is computed as follows:
obtain the current pixel F(x) and the reference-frame pixel RF(x+v), where x is the current coordinate position, v is the motion vector corresponding to the current position, and RF is the reference frame;
SAD = |F(x) - RF(x+v)|, where SAD is the required pixel difference.
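As a minimal sketch of the pixel-difference criterion above (the function name and single-channel frames are assumptions for illustration, not part of the patent):

```python
import numpy as np

def pixel_sad(f, rf, x, v):
    """SAD = |F(x) - RF(x + v)| for coordinate x = (px, py) and vector v = (vx, vy)."""
    px, py = x
    vx, vy = v
    return abs(int(f[py, px]) - int(rf[py + vy, px + vx]))

f = np.array([[10, 20], [30, 40]], dtype=np.uint8)
rf = np.array([[10, 20], [30, 40]], dtype=np.uint8)
print(pixel_sad(f, rf, (0, 0), (1, 1)))  # |F(0,0) - RF(1,1)| = |10 - 40| = 30
```

In practice the difference would be taken over a block of pixels rather than a single one, as step 7.3 of the detailed embodiment does.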
Preferably, in Step 7, the motion vectors are propagated as follows:
let vm be the fused vector field obtained from the optical-flow motion vector field vo, the feature-matching motion vector field vf and the block-matching motion vector field vb;
S701: scan the fused vector field vm from left to right, top to bottom;
S702: for the current position (px, py), where px is the horizontal coordinate and py the vertical coordinate, vm(px, py) is the vector of the current position; obtain in turn the left vector vm(px-1, py), the upper vector vm(px, py-1) and the upper-left vector vm(px-1, py-1);
S703: compute the pixel differences between the previous and next frames for the four vectors of S702, and update the vector of the fused field vm at the current position with the vector of minimum pixel difference;
S704: if the whole image has not been scanned, return to S701; otherwise go to S705;
S705: scan the fused vector field vm from right to left, bottom to top;
S706: for the current position (px, py), where px is the horizontal coordinate and py the vertical coordinate, vm(px, py) is the vector of the current position; obtain in turn the right vector vm(px+1, py), the lower vector vm(px, py+1) and the lower-right vector vm(px+1, py+1);
S707: compute the pixel differences between the previous and next frames for the four vectors of S706, and update vm at the current position with the vector of minimum pixel difference;
S708: if the whole image has not been scanned, return to S705; otherwise exit.
According to a second aspect of the present invention, a frame rate up-conversion motion estimation system based on multi-information fusion is provided, comprising:
an image input module, which reads two consecutive frames from a high-definition video stream;
a down-sampling module, which down-samples the image data from the image input module;
an optical-flow motion vector module, which performs optical-flow-based motion estimation on the two down-sampled frames to obtain the optical-flow motion vectors;
a block-matching motion vector module, which performs block-matching-based motion estimation on the two frames from the image input module to obtain the block-matching motion vectors;
a feature motion vector module, which extracts SIFT features from the two frames from the image input module, computes a feature descriptor for each feature and performs feature matching to obtain the corresponding feature-matching motion vectors;
a motion vector field fusion module, which fuses the block-matching motion vectors, feature-matching motion vectors and optical-flow motion vectors obtained by the above modules to obtain the fused motion vector field;
a vector propagation module, which refines the fused motion vector field from the motion vector field fusion module by means of reliable motion vector propagation.
Compared with the prior art, the present invention has the following beneficial effects:
Because the present invention adopts a fusion of multiple sources of information (in particular, the feature matching method characterizes large motion vectors and complex motion well, and the block matching method estimates small objects well), its motion estimation performance is better than that of CN104915966A.
Further, the present invention:
1. ensures, by using the optical flow method, that the obtained optical-flow motion vector field is essentially smooth;
2. uses block-matching-based motion estimation, which has a positive effect on ensuring the accuracy of the motion vectors of small objects;
3. uses the feature-matching motion vectors, which estimate large motion vectors and complex motion vectors well and can markedly improve frame rate up-conversion performance in complex scenes;
4. makes full use of the advantages of each motion estimation method in the fusion process: by comparing the pixel differences corresponding to the multiple motion vectors at each position, the optimal motion vector of the current position is selected, improving the reliability of the overall vector field;
5. propagates motion vectors by selecting the optimal vector of the current position according to the surrounding vectors, which on the one hand spreads correct vectors and on the other hand ensures the smoothness of the overall motion vector field;
6. performs the motion propagation twice in different directions, so that correct vectors can be sufficiently propagated and omissions are avoided.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is a flow chart of the method of one embodiment of the present invention;
Fig. 2 is a schematic diagram of motion vector propagation in one embodiment of the present invention;
Fig. 3 is a system architecture diagram of one embodiment of the present invention.
Detailed description of the invention
The present invention is described in detail below in conjunction with specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the inventive concept; these all fall within the protection scope of the present invention.
As shown in Fig. 1, a flow chart of one embodiment of the frame rate up-conversion motion estimation method based on multi-information fusion of the present invention, this embodiment specifically comprises the following steps:
1. Read the two consecutive frames of the video sequence and store them as f1 and f2.
2. Image down-sampling.
For HD video (resolution 1920x1080), the frames are down-sampled in the horizontal and vertical directions to reduce the complexity of subsequent computation: one point out of every four is selected in the horizontal direction, and likewise one out of every four in the vertical direction, as the pixels of the down-sampled image. In this manner the down-sampled images f1_s and f2_s are obtained, each with resolution 480x270. For other resolutions, a suitable down-sampling mode can be selected in view of motion estimation performance and computing capability.
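This decimation can be sketched as plain stride slicing in NumPy (a minimal sketch; the function name is assumed):

```python
import numpy as np

def downsample(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    # keep one pixel out of every `factor` in both directions
    return frame[::factor, ::factor]

f1 = np.zeros((1080, 1920), dtype=np.uint8)  # stand-in for an HD luma frame
f1_s = downsample(f1)
print(f1_s.shape)  # (270, 480)
```

Note that no low-pass filtering is applied before decimation here; the patent text does not mention any, though a practical implementation might pre-filter to limit aliasing.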
3. Optical-flow motion estimation, obtaining the corresponding optical flow vector field.
Optical flow estimation can be realized with the prior art, for example the method of the paper "High Accuracy Optical Flow Estimation Based on a Theory for Warping" published by Brox et al. at ECCV. The input images are the down-sampled f1_s and f2_s, and the obtained motion vectors vo point from f1_s to f2_s.
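The Brox method itself is involved; as an illustrative stand-in from the same family of global variational optical flow, the classical Horn-Schunck iteration below shows the structure the text refers to: image derivatives, a neighbourhood average, and a global smoothness weight (here called alpha, an assumed name).

```python
import numpy as np

def horn_schunck(f1, f2, alpha=1.0, iters=50):
    """Classical Horn-Schunck optical flow (illustrative only; the embodiment
    uses the more accurate variational method of Brox et al.)."""
    f1 = f1.astype(np.float64)
    f2 = f2.astype(np.float64)
    fx = np.gradient((f1 + f2) / 2, axis=1)   # horizontal derivative
    fy = np.gradient((f1 + f2) / 2, axis=0)   # vertical derivative
    ft = f2 - f1                              # temporal derivative
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)

    def avg(a):  # 4-neighbour average with replicated borders
        p = np.pad(a, 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4

    for _ in range(iters):
        ubar, vbar = avg(u), avg(v)
        common = (fx * ubar + fy * vbar + ft) / (alpha**2 + fx**2 + fy**2)
        u = ubar - fx * common   # global smoothness pulls u toward ubar
        v = vbar - fy * common
    return u, v

# a horizontal ramp shifted right by one pixel: the flow converges to u = 1, v = 0
f1 = np.tile(np.arange(16, dtype=np.float64), (16, 1))
f2 = f1 - 1.0
u, v = horn_schunck(f1, f2)
```

The global smoothness term is exactly what the background section criticizes: it regularizes the field but penalizes large, localized motion.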
4. Block-based motion estimation, obtaining the block-matching motion vector field.
The block matching method is three-dimensional recursive search. Specifically:
4.1 The two consecutive input frames are f1 and f2.
4.2 The whole frame is divided into blocks of size NxN, where N should be consistent with the down-sampling factor of the image. In this embodiment N=4 is taken.
4.3 Each NxN block is traversed from the top-left corner, from left to right and top to bottom.
4.4 For the block at position (x, y), the candidate vectors come from vectors already computed: the motion vector vleft at the left position (x-1, y), the motion vector vabove at the upper position (x, y-1), and the zero vector v0. The SAD value of each candidate motion vector is computed. For a vector vt with horizontal component vtx and vertical component vty, its SAD value equals the absolute difference between the block f1(x, y) and the block f2(x+vtx, y+vty), both of size NxN. The candidate with the minimum SAD value is selected and denoted vc.
4.5 For the obtained motion vector vc, with horizontal and vertical components vcx and vcy, the updated candidate vectors are (vcx-1, vcy), (vcx+1, vcy), (vcx, vcy-1) and (vcx, vcy+1). Their SAD values are computed in the manner above, and the vector with the minimum SAD value is taken as the final block-matching motion vector.
4.6 The whole frame is scanned, yielding the block-matching motion vector field vb.
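Steps 4.1-4.6 can be sketched as follows (single-channel frames are assumed, and candidates whose displaced block leaves the frame are rejected, a boundary policy the patent does not specify):

```python
import numpy as np

def block_sad(f1, f2, x, y, v, n=4):
    """SAD between the n x n block (x, y) of f1 and its block displaced by v in f2."""
    h, w = f1.shape
    px, py = x * n + int(v[0]), y * n + int(v[1])
    if px < 0 or py < 0 or px + n > w or py + n > h:
        return np.inf  # displaced block leaves the frame
    b1 = f1[y * n:(y + 1) * n, x * n:(x + 1) * n].astype(np.int64)
    b2 = f2[py:py + n, px:px + n].astype(np.int64)
    return np.abs(b1 - b2).sum()

def three_d_recursive_search(f1, f2, n=4):
    """3DRS: spatial candidates (left, above, zero) plus a one-pixel update step."""
    h, w = f1.shape
    by, bx = h // n, w // n
    vb = np.zeros((by, bx, 2), dtype=np.int64)
    for y in range(by):
        for x in range(bx):
            cands = [(0, 0)]                       # zero vector v0
            if x > 0:
                cands.append(tuple(vb[y, x - 1]))  # vleft
            if y > 0:
                cands.append(tuple(vb[y - 1, x]))  # vabove
            vc = min(cands, key=lambda v: block_sad(f1, f2, x, y, v, n))
            # step 4.5: refine the best spatial candidate by one pixel
            updates = [vc, (vc[0] - 1, vc[1]), (vc[0] + 1, vc[1]),
                       (vc[0], vc[1] - 1), (vc[0], vc[1] + 1)]
            vb[y, x] = min(updates, key=lambda v: block_sad(f1, f2, x, y, v, n))
    return vb

f1 = np.tile(np.arange(64, dtype=np.uint8), (16, 1))   # horizontal ramp
f2 = np.roll(f1, 2, axis=1)                            # shifted right by 2
vb = three_d_recursive_search(f1, f2)
print(vb[2, 8])  # [2 0]
```

On this ramp the recursion locks onto the true vector (2, 0) after the first few blocks, illustrating how the spatial candidates propagate a good vector across the frame.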
5. Feature extraction, feature matching and computation of the feature motion vector field.
The features used are SIFT features. Specifically:
5.1 The two consecutive input frames are f1 and f2.
5.2 The feature points of images f1 and f2 are extracted respectively and denoted k1 and k2.
5.3 k1 and k2 are matched, yielding the corresponding feature-matching motion vector field v.
5.4 To keep the size consistent with the block-matching motion vector field and the optical-flow motion vector field, the vector field v is down-sampled in the aforementioned manner, yielding the vector field vf.
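Step 5.4 converts a sparse set of keypoint matches into a grid aligned with the down-sampled vector fields. A minimal sketch, assuming the matches are already available as (keypoint, displacement) pairs at full resolution (in practice they could come from a SIFT implementation such as OpenCV's); grid cells with no feature are marked invalid with NaN:

```python
import numpy as np

def sparse_to_grid(matches, height, width, factor=4):
    """Place sparse feature-match vectors onto the down-sampled grid.

    `matches` is a list of ((x, y), (vx, vy)) pairs in full-resolution
    coordinates; cells with no feature stay NaN ("no vector here")."""
    gh, gw = height // factor, width // factor
    vf = np.full((gh, gw, 2), np.nan)
    for (x, y), (vx, vy) in matches:
        gx, gy = int(x) // factor, int(y) // factor
        if 0 <= gx < gw and 0 <= gy < gh:
            vf[gy, gx] = (vx, vy)
    return vf

matches = [((10.0, 8.0), (6.0, -2.0))]  # one hypothetical SIFT match at (10, 8)
vf = sparse_to_grid(matches, 1080, 1920)
print(vf[2, 2])  # [ 6. -2.]  (grid cell 10//4, 8//4)
```

The NaN marker is one possible convention for "no feature at this position"; the fusion step then skips such cells when gathering candidates.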
6. Fusion of the feature-matching motion vectors, block-matching motion vectors and optical-flow motion vectors. Specifically:
6.1 The optical flow vector field vo of size 480x270, the block-matching vector field vb and the feature-matching vector field vf are read, and a fused vector field vm of the same size is created.
6.2 Scanning proceeds block by block from the top left, from left to right and top to bottom.
6.3 Let the current scan position be p. Because the feature-matching motion vectors are sparse, it is necessary to judge whether the feature-matching vector field vf has a corresponding motion vector at the current position. If it does not, step 6.4 is performed; otherwise step 6.5.
6.4 The absolute errors corresponding to the block-matching motion vector vb and the optical-flow motion vector vo at the current position are computed. This embodiment uses the minimum sum of absolute differences as the criterion here, but those skilled in the art will readily understand that other criteria, such as the minimum mean-square error, can also be chosen. For a motion vector v, the absolute error is sad = |f1(p) - f2(p+v)|. The vector with the minimum absolute error is the optimal motion vector of the current position and is stored at the corresponding position of vm.
6.5 The absolute errors corresponding to the block-matching motion vector vb, the optical-flow motion vector vo and the feature-matching vector vf at the current position are computed, with the absolute error defined as in 6.4. The motion vector with the minimum absolute error is the optimal motion vector and is stored at the corresponding position of vm.
6.6 All positions are scanned, yielding the fused motion vector field vm.
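Steps 6.1-6.6 reduce to: at each position, gather the available candidate vectors and keep the one with the smallest matching error. A per-pixel sketch under stated assumptions (single-channel frames, the NaN convention for missing feature vectors, out-of-frame candidates skipped):

```python
import numpy as np

def fuse_fields(f1, f2, vo, vb, vf):
    """Pick, per position, the candidate with minimum |f1(p) - f2(p + v)|.

    vo, vb: dense optical-flow and block-matching fields, shape (h, w, 2);
    vf: sparse feature field with NaN where no feature exists."""
    h, w = f1.shape
    vm = np.zeros_like(vo)
    for y in range(h):
        for x in range(w):
            cands = [vo[y, x], vb[y, x]]
            if not np.isnan(vf[y, x]).any():
                cands.append(vf[y, x])        # a feature vector exists here
            best, best_err = cands[0], np.inf
            for v in cands:
                xx, yy = x + int(v[0]), y + int(v[1])
                if not (0 <= xx < w and 0 <= yy < h):
                    continue                   # displaced position leaves the frame
                err = abs(float(f1[y, x]) - float(f2[yy, xx]))
                if err < best_err:
                    best, best_err = v, err
            vm[y, x] = best
    return vm
```

Where vf is NaN this reproduces step 6.4 (two candidates); elsewhere it reproduces step 6.5 (three candidates).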
7. Motion vector propagation.
The purpose of motion vector propagation is to diffuse correct motion vectors, improving the accuracy of the motion vector field while ensuring the smoothness of the overall vector field. The propagation is carried out in two passes with different directions, as shown in Fig. 2.
7.1 The fused vector field vm is scanned from left to right, top to bottom.
7.2 For the current position p, with horizontal and vertical coordinates px and py, the corresponding motion vector is v0. The vector at its left position (px-1, py) is v1, the motion vector at the upper position (px, py-1) is v2, and the motion vector at the upper-left position (px-1, py-1) is v3. Under the current scan order, v1, v2 and v3 have already been processed.
7.3 The pixel differences corresponding to v0, v1, v2 and v3 are computed; absolute error or a similar criterion can be used here. For a vector v with horizontal displacement vx and vertical displacement vy, the corresponding position on the original image must be computed, and, to increase robustness, the pixel difference of the blocks along the motion trajectory in the previous and next frames is used. The down-sampled coordinates are px and py, and the extension margin used is N=2; thus the horizontal coordinate range of the corresponding block at the original resolution is h = [-N+4*px, N+4*(px+1)-1], and the vertical coordinate range is v = [-N+4*py, N+4*(py+1)-1]. In this embodiment the pixel difference uses the absolute error sad = |f1(h, v) - f2(h+vx, v+vy)| as the criterion. The vector with the minimum sad among v0, v1, v2 and v3 is selected as the current optimal motion vector, and the fused vector field vm is updated.
7.4 All positions of the fused vector field vm are scanned.
7.5 The updated motion vector field vm is scanned again, position by position, from right to left and bottom to top.
7.6 For the current position p, with horizontal and vertical coordinates px and py, the corresponding motion vector is v0. The vector at its right position (px+1, py) is v1, the motion vector at the lower position (px, py+1) is v2, and the motion vector at the lower-right position (px+1, py+1) is v3. Under the current scan order, v1, v2 and v3 have already been processed.
7.7 As in 7.3, the pixel differences corresponding to v0, v1, v2 and v3 are computed, and the motion vector with the minimum pixel difference is selected as the optimal motion vector of the fused vector field vm at the current position.
7.8 All positions of the fused vector field vm are scanned.
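The two propagation passes can be sketched as follows. For brevity the cost is a per-pixel absolute error on the down-sampled grid rather than the patent's extended 8x8 block on the original frames; the scan orders and neighbour sets follow 7.1-7.8:

```python
import numpy as np

def cost(f1, f2, x, y, v):
    """Absolute error of vector v at grid position (x, y); a simplification of
    the extended-block SAD used in step 7.3."""
    h, w = f1.shape
    xx, yy = x + int(v[0]), y + int(v[1])
    if not (0 <= xx < w and 0 <= yy < h):
        return np.inf
    return abs(float(f1[y, x]) - float(f2[yy, xx]))

def propagate(vm, f1, f2):
    h, w = vm.shape[:2]
    # pass 1: left-to-right, top-to-bottom; left / upper / upper-left neighbours
    for y in range(h):
        for x in range(w):
            cands = [vm[y, x]]
            if x > 0: cands.append(vm[y, x - 1])
            if y > 0: cands.append(vm[y - 1, x])
            if x > 0 and y > 0: cands.append(vm[y - 1, x - 1])
            vm[y, x] = min(cands, key=lambda v: cost(f1, f2, x, y, v))
    # pass 2: right-to-left, bottom-to-top; right / lower / lower-right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            cands = [vm[y, x]]
            if x < w - 1: cands.append(vm[y, x + 1])
            if y < h - 1: cands.append(vm[y + 1, x])
            if x < w - 1 and y < h - 1: cands.append(vm[y + 1, x + 1])
            vm[y, x] = min(cands, key=lambda v: cost(f1, f2, x, y, v))
    return vm
```

A position's vector is only replaced when a neighbour's vector matches the image data strictly better, so correct vectors spread while the field stays smooth, which is the stated purpose of the two passes.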
As shown in Fig. 3, based on the above method, a frame rate up-conversion motion estimation system based on multi-information fusion realizing the method is provided, comprising:
an image input module, which reads two consecutive frames from a high-definition video stream;
a down-sampling module, which down-samples the image data from the image input module;
an optical-flow motion vector module, which performs optical-flow-based motion estimation on the two down-sampled frames to obtain the optical-flow motion vectors;
a block-matching motion vector module, which performs block-matching-based motion estimation on the two frames from the image input module to obtain the block-matching motion vectors;
a feature motion vector module, which extracts SIFT features from the two frames from the image input module, computes a feature descriptor for each feature and performs feature matching to obtain the corresponding feature-matching motion vectors;
a motion vector field fusion module, which fuses the block-matching motion vectors, feature-matching motion vectors and optical-flow motion vectors obtained by the above modules to obtain the fused motion vector field;
a vector propagation module, which refines the fused motion vector field from the motion vector field fusion module by means of reliable motion vector propagation.
The techniques implemented by each of the above modules are identical to the corresponding steps of the frame rate up-conversion motion estimation method based on multi-information fusion, and are not repeated here.
The present invention performs frame rate up-conversion motion estimation by multi-information fusion, exploiting the respective advantages of multiple sources of information to improve the accuracy of the motion vector field. The present invention ensures the smoothness of the motion vector field and, at the same time, is remarkably effective at improving the estimation of small-object motion vectors and the estimation in complex scenes.
Although the content of the present invention has been described in detail through the above preferred embodiments, it should be recognized that the above description should not be considered a limitation of the present invention. After those skilled in the art have read the foregoing, various modifications and substitutions of the present invention will be apparent. Therefore, the protection scope of the present invention should be defined by the appended claims.

Claims (7)

1. A multi-information fusion frame rate up-conversion motion estimation method, characterized by comprising the following steps:
Step 1: read two consecutive frames from a high-definition video stream;
Step 2: downsample the image data of step 1;
Step 3: perform optical-flow-based motion estimation on the two consecutive frames of step 2 to obtain optical flow motion vectors;
Step 4: perform block-matching-based motion estimation on the two consecutive frames of step 1 to obtain block matching motion vectors;
Step 5: extract SIFT features from the two consecutive frames of step 1, compute the feature vector of each feature, and perform feature matching to obtain the corresponding feature motion vectors;
Step 6: fuse the block matching motion vectors obtained in step 4, the feature motion vectors obtained in step 5 and the optical flow motion vectors obtained in step 3 to obtain the fused motion vector field;
Step 7: refine the fused motion vector field obtained in step 6 by means of reliable motion vector propagation;
Step 8: return to step 1 and read the next two frames.
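As a loose illustration only, the eight steps of claim 1 can be sketched as a loop over consecutive frame pairs. The arguments `flow_est`, `block_est`, `feat_est`, `fuse` and `refine` are hypothetical placeholders for the optical-flow, block-matching, SIFT-matching, fusion and propagation stages; none of these names come from the patent.

```python
import numpy as np

def upconvert_motion_estimation(frames, flow_est, block_est, feat_est, fuse, refine):
    """One pass over a video: steps 1-8 of claim 1, with each stage injected
    as a callable (illustrative placeholders, not the patent's own code)."""
    fields = []
    for prev, cur in zip(frames, frames[1:]):    # steps 1 and 8: consecutive frame pairs
        small_prev = prev[::4, ::4]              # step 2: keep one pixel in four
        small_cur = cur[::4, ::4]
        vo = flow_est(small_prev, small_cur)     # step 3: optical flow vectors
        vb = block_est(prev, cur)                # step 4: block matching vectors
        vf = feat_est(prev, cur)                 # step 5: SIFT feature-matching vectors
        vm = fuse(vo, vb, vf)                    # step 6: fusion of the three fields
        fields.append(refine(vm))                # step 7: reliable-vector propagation
    return fields
```

Injecting the stages as callables keeps the skeleton independent of any particular optical-flow or feature-matching library.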
2. The method according to claim 1, characterized in that in step 2 the image is downsampled by keeping one pixel out of every four in both the horizontal and vertical directions.
3. The method according to claim 1, characterized in that in step 4 the block matching algorithm uses 3-D recursive search.
4. The method according to claim 1, characterized in that in step 6 the feature motion vectors, the block matching motion vectors and the optical flow motion vectors are fused by the following steps:
S1: obtain the optical flow motion vector field vo produced by the optical flow method, the feature motion vectors vf produced by feature matching and the block matching motion vector field vb, and create the fused vector field vm;
S2: scan the fused motion vector field vm in order from left to right and from top to bottom;
S3: determine whether a feature motion vector exists at the current position in the feature matching motion vector field vf; if not, go to S4, otherwise go to S6;
S4: compute the pixel difference SADB of the motion vector of the block matching motion vector field vb at the current position, and the pixel difference SADO of the motion vector of the optical flow motion vector field vo at the current position;
S5: select the motion vector corresponding to the minimum of SADB and SADO as the motion vector of the fused vector field vm at the current position; go to S8;
S6: compute the pixel difference SADO of the motion vector of the optical flow motion vector field vo at the current position, the pixel difference SADF of the feature motion vector vf at the current position, and the pixel difference SADB of the motion vector of the block matching motion vector field vb at the current position;
S7: select the motion vector corresponding to the minimum of SADB, SADO and SADF as the motion vector of the fused vector field vm at the current position; go to S8;
S8: if the scan of the fused motion vector field vm is complete, exit with the motion vector field vm; otherwise return to S2.
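A minimal sketch of the S1-S8 fusion scan, assuming the sparse feature field is given as a `{(x, y): vector}` dictionary and the pixel-difference computation of claim 5 is supplied as a `cost` callable; both representations are illustrative assumptions, not part of the claim.

```python
import numpy as np

def fuse_fields(vo, vb, vf_sparse, cost):
    """Fusion scan of S1-S8 (sketch). Where no feature vector exists
    (S3 -> S4/S5) the candidates are {vo, vb}; where one exists
    (S3 -> S6/S7) they are {vo, vb, vf}. Minimum cost wins."""
    h, w = vo.shape[:2]
    vm = np.zeros_like(vo)                          # S1: fused field
    for y in range(h):                              # S2: top-to-bottom,
        for x in range(w):                          #     left-to-right scan
            cands = [vo[y, x], vb[y, x]]            # S4/S6: flow + block-match candidates
            feat = vf_sparse.get((x, y))            # S3: feature vector at this position?
            if feat is not None:
                cands.append(np.asarray(feat))      # S6: add the feature candidate
            costs = [cost(x, y, v) for v in cands]
            vm[y, x] = cands[int(np.argmin(costs))] # S5/S7: minimum pixel difference wins
    return vm                                       # S8: full scan done
```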
5. The method according to claim 4, characterized in that in S4 and S6 the pixel difference of a motion vector is computed as follows:
obtain the current pixel F(x) and the reference-frame pixel RF(x+v), where x is the current coordinate position, v is the motion vector corresponding to the current position, and RF is the reference frame;
SAD = |F(x) − RF(x+v)|, where SAD is the required pixel difference.
6. The method according to any one of claims 1-5, characterized in that in step 7 the motion vectors are propagated by the following steps:
obtain the optical flow motion vector field vo produced by the optical flow method, the feature motion vectors vf produced by feature matching and the block matching motion vector field vb, the fused vector field vm having been created;
S701: scan the fused vector field vm in order from left to right and from top to bottom;
S702: for the current position (px, py), where px is the horizontal coordinate, py is the vertical coordinate and vm(px, py) is the vector of the current position, obtain in turn the left vector vm(px-1, py), the upper vector vm(px, py-1) and the upper-left vector vm(px-1, py-1);
S703: compute the pixel differences between the previous and next frames for the four vectors of S702, and update the vector of vm at the current position with the vector of minimum pixel difference;
S704: if the full frame has not been scanned, return to S701; otherwise go to S705;
S705: scan the fused vector field vm in order from right to left and from top to bottom;
S706: for the current position (px, py), where px is the horizontal coordinate, py is the vertical coordinate and vm(px, py) is the vector of the current position, obtain in turn the right vector vm(px+1, py), the lower vector vm(px, py+1) and the lower-right vector vm(px+1, py+1);
S707: compute the pixel differences between the previous and next frames for the four vectors of S706, and update the vector of vm at the current position with the vector of minimum pixel difference;
S708: if the full frame has not been scanned, return to S705; otherwise exit.
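The two-pass propagation of S701-S708 might be sketched as follows, with the previous/next-frame pixel-difference computation abstracted into a `cost` callable (an illustrative assumption, as above):

```python
import numpy as np

def propagate(vm, cost):
    """Two-pass reliable-vector propagation (sketch of S701-S708).
    Pass 1 scans left-to-right, top-to-bottom and compares each position
    against its left, upper and upper-left neighbours; pass 2 scans
    right-to-left against the right, lower and lower-right neighbours."""
    h, w = vm.shape[:2]
    for y in range(h):                                  # S701-S704: forward pass
        for x in range(w):
            cands = [vm[y, x]]
            if x > 0:           cands.append(vm[y, x - 1])      # left
            if y > 0:           cands.append(vm[y - 1, x])      # upper
            if x > 0 and y > 0: cands.append(vm[y - 1, x - 1])  # upper-left
            costs = [cost(x, y, v) for v in cands]
            vm[y, x] = cands[int(np.argmin(costs))]     # S703: keep the cheapest
    for y in range(h):                                  # S705-S708: backward pass
        for x in range(w - 1, -1, -1):
            cands = [vm[y, x]]
            if x < w - 1:               cands.append(vm[y, x + 1])      # right
            if y < h - 1:               cands.append(vm[y + 1, x])      # lower
            if x < w - 1 and y < h - 1: cands.append(vm[y + 1, x + 1])  # lower-right
            costs = [cost(x, y, v) for v in cands]
            vm[y, x] = cands[int(np.argmin(costs))]     # S707: keep the cheapest
    return vm
```

Because each pass reuses vectors already corrected earlier in the scan, a single reliable vector can propagate across a whole region of outliers, which is the point of the two complementary scan directions.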
7. A multi-information fusion frame rate up-conversion motion estimation system implementing the method of any one of claims 1-6, characterized by comprising:
an image input module: reads two consecutive frames from a high-definition video stream;
a downsampling module: downsamples the image data input by the image input module;
an optical flow motion vector acquisition module: performs optical-flow-based motion estimation on the two consecutive downsampled frames to obtain optical flow motion vectors;
a block matching motion vector acquisition module: performs block-matching-based motion estimation on the two consecutive frames input by the image input module to obtain block matching motion vectors;
a feature motion vector acquisition module: extracts SIFT features from the two consecutive frames input by the image input module, computes the feature vector of each feature and performs feature matching to obtain the corresponding feature motion vectors;
a motion vector field fusion module: fuses the block matching motion vectors, the feature motion vectors and the optical flow motion vectors obtained by the above modules to obtain the fused motion vector field;
a vector transfer module: refines the fused motion vector field obtained by the motion vector field fusion module by means of reliable motion vector propagation.
CN201610657029.6A 2016-08-11 2016-08-11 Multi-information fusion frame rate up-conversion motion estimation method and system Active CN106210449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610657029.6A CN106210449B (en) 2016-08-11 2016-08-11 Multi-information fusion frame rate up-conversion motion estimation method and system


Publications (2)

Publication Number Publication Date
CN106210449A true CN106210449A (en) 2016-12-07
CN106210449B CN106210449B (en) 2020-01-07

Family

ID=57514586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610657029.6A Active CN106210449B (en) 2016-08-11 2016-08-11 Multi-information fusion frame rate up-conversion motion estimation method and system

Country Status (1)

Country Link
CN (1) CN106210449B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316275A (en) * 2017-06-08 2017-11-03 宁波永新光学股份有限公司 A kind of large scale Microscopic Image Mosaicing algorithm of light stream auxiliary
CN108574844A (en) * 2017-03-13 2018-09-25 信阳师范学院 A kind of more tactful video frame rate method for improving of space-time remarkable perception
CN108833920A (en) * 2018-06-04 2018-11-16 四川大学 A kind of DVC side information fusion method based on light stream and Block- matching
WO2019191889A1 (en) * 2018-04-02 2019-10-10 北京大学 Method and device for video processing
CN110446107A (en) * 2019-08-15 2019-11-12 电子科技大学 A kind of video frame rate upconversion method suitable for scaling movement and light and shade variation
CN110555805A (en) * 2018-05-31 2019-12-10 杭州海康威视数字技术股份有限公司 Image processing method, device, equipment and storage medium
CN111277863A (en) * 2018-12-05 2020-06-12 阿里巴巴集团控股有限公司 Optical flow frame interpolation method and device
CN111405316A (en) * 2020-03-12 2020-07-10 北京奇艺世纪科技有限公司 Frame insertion method, electronic device and readable storage medium
CN111741304A (en) * 2019-03-25 2020-10-02 四川大学 Method for combining frame rate up-conversion and HEVC (high efficiency video coding) based on motion vector refinement
CN112511859A (en) * 2020-11-12 2021-03-16 Oppo广东移动通信有限公司 Video processing method, device and storage medium
CN112954454A (en) * 2021-02-08 2021-06-11 北京奇艺世纪科技有限公司 Video frame generation method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325108A (en) * 2013-05-27 2013-09-25 浙江大学 Method for designing monocular vision odometer with light stream method and feature point matching method integrated
CN105023278A (en) * 2015-07-01 2015-11-04 中国矿业大学 Movable target tracking method and system based on optical flow approach
CN105488812A (en) * 2015-11-24 2016-04-13 江南大学 Motion-feature-fused space-time significance detection method
CN105590327A (en) * 2014-10-24 2016-05-18 华为技术有限公司 Motion estimation method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Yin, Li Liangfu, et al., "Research on optical flow target tracking based on scale-invariant features", Computer Engineering and Applications (《计算机工程与应用》) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108574844B (en) * 2017-03-13 2021-09-28 信阳师范学院 Multi-strategy video frame rate improving method for space-time significant perception
CN108574844A (en) * 2017-03-13 2018-09-25 信阳师范学院 A kind of more tactful video frame rate method for improving of space-time remarkable perception
CN107316275A (en) * 2017-06-08 2017-11-03 宁波永新光学股份有限公司 A kind of large scale Microscopic Image Mosaicing algorithm of light stream auxiliary
WO2019191889A1 (en) * 2018-04-02 2019-10-10 北京大学 Method and device for video processing
CN110555805A (en) * 2018-05-31 2019-12-10 杭州海康威视数字技术股份有限公司 Image processing method, device, equipment and storage medium
CN110555805B (en) * 2018-05-31 2022-05-31 杭州海康威视数字技术股份有限公司 Image processing method, device, equipment and storage medium
CN108833920A (en) * 2018-06-04 2018-11-16 四川大学 A kind of DVC side information fusion method based on light stream and Block- matching
CN108833920B (en) * 2018-06-04 2022-02-11 四川大学 DVC side information fusion method based on optical flow and block matching
CN111277863A (en) * 2018-12-05 2020-06-12 阿里巴巴集团控股有限公司 Optical flow frame interpolation method and device
CN111277863B (en) * 2018-12-05 2022-06-14 阿里巴巴集团控股有限公司 Optical flow frame interpolation method and device
CN111741304A (en) * 2019-03-25 2020-10-02 四川大学 Method for combining frame rate up-conversion and HEVC (high efficiency video coding) based on motion vector refinement
CN110446107A (en) * 2019-08-15 2019-11-12 电子科技大学 A kind of video frame rate upconversion method suitable for scaling movement and light and shade variation
CN110446107B (en) * 2019-08-15 2020-06-23 电子科技大学 Video frame rate up-conversion method suitable for scaling motion and brightness change
CN111405316A (en) * 2020-03-12 2020-07-10 北京奇艺世纪科技有限公司 Frame insertion method, electronic device and readable storage medium
CN112511859A (en) * 2020-11-12 2021-03-16 Oppo广东移动通信有限公司 Video processing method, device and storage medium
CN112954454A (en) * 2021-02-08 2021-06-11 北京奇艺世纪科技有限公司 Video frame generation method and device
CN112954454B (en) * 2021-02-08 2023-09-05 北京奇艺世纪科技有限公司 Video frame generation method and device

Also Published As

Publication number Publication date
CN106210449B (en) 2020-01-07

Similar Documents

Publication Publication Date Title
CN106210449A (en) The frame rate up-conversion method for estimating of a kind of Multi-information acquisition and system
CN104219533B (en) A kind of bi-directional motion estimation method and up-conversion method of video frame rate and system
Jeong et al. Motion-compensated frame interpolation based on multihypothesis motion estimation and texture optimization
CN106600536A (en) Video imager super-resolution reconstruction method and apparatus
Li et al. Multi-scale 3D scene flow from binocular stereo sequences
Yu et al. Multi-level video frame interpolation: Exploiting the interaction among different levels
Hsu et al. Accurate computation of optical flow by using layered motion representations
Seyid et al. FPGA-based hardware implementation of real-time optical flow calculation
CN102263957B (en) Search-window adaptive parallax estimation method
KR100987412B1 (en) Multi-Frame Combined Video Object Matting System and Method Thereof
Yang et al. Global auto-regressive depth recovery via iterative non-local filtering
CN103051857A (en) Motion compensation-based 1/4 pixel precision video image deinterlacing method
CN106204456A (en) Panoramic video sequences estimation is crossed the border folding searching method
CN104980726B (en) A kind of binocular video solid matching method of associated movement vector
Irani et al. Direct recovery of planar-parallax from multiple frames
CN107767393B (en) Scene flow estimation method for mobile hardware
JP2004356747A (en) Method and apparatus for matching image
US10432962B1 (en) Accuracy and local smoothness of motion vector fields using motion-model fitting
Lin et al. Depth map enhancement on rgb-d video captured by kinect v2
CN114943911A (en) Video object instance segmentation method based on improved hierarchical deep self-attention network
CN106331729A (en) Method of adaptively compensating stereo video frame rate up conversion based on correlation
CN107124617A (en) The generation method and system of random vector in motion estimation motion compensation
Schreer et al. Hybrid recursive matching and segmentation-based postprocessing in real-time immersive video conferencing
Boltz et al. Randomized motion estimation
Seo et al. Robust 3D object tracking using an elaborate motion model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant