CN101216941B: Motion estimation method under violent illumination variation based on corner matching and optic flow method
Abstract
A motion estimation method based on corner matching and the optical flow method under violent illumination change, in the field of computer vision technology, comprises the following steps: first, corner detection is performed on the current frame; next, rotation-invariant normalized corner matching is performed between the current frame and the previous frame; then, the matched corners are divided into blocks and the affine transformation parameters within each block are computed, a block-wise global motion vector estimate of the current frame is obtained from these parameters, and the previous frame is motion-compensated with that estimate; next, block-wise linear illumination compensation is applied to the motion-compensated previous frame; finally, the global motion vector of the next frame is estimated by optical flow computation between the linearly illumination-compensated image and the current image frame. By overcoming the failure of the optical flow method under adverse conditions such as violent illumination change and large rotation, the invention achieves higher accuracy.
Description
Technical field
The present invention relates to a method in the technical field of image processing, specifically a motion estimation method based on corner matching and the optical flow method under violent illumination variation.
Background art
The optical flow method is an important tool for analyzing object motion in image sequences and has extremely important applications in video image analysis. When an object moves in front of a camera, or a camera moves through an environment, the image changes; the velocity distribution of this image motion is called optical flow, and the flow at every image point forms the optical flow field. However, the assumed condition of constant brightness between consecutive frames greatly restricts its range of application. Computing the optical flow field faces two main problems: 1) motion discontinuities caused by relative motion; 2) violent illumination variation, which violates the optical flow constancy equation. When an object moves at a speed different from the background, the resulting motion discontinuity breaks the basic assumption of the Horn-Schunck optical flow method.
A search of the existing technical literature found that Hulya Yalcin et al., in "Background Estimation under Rapid Gain Change in Thermal Imagery" (Second IEEE Workshop on Object Tracking and Classification in and Beyond the Visible Spectrum, OTCBVS'05, June 2005), addressed the case of large gain changes in infrared imagery by proposing a linear illumination compensation model that predicts the current frame I_t from the motion-compensated previous frame I^s_{t-1} through the relation I_t = m · I^s_{t-1} + b. This model, however, is rather coarse and has the following deficiencies: 1) because its images are infrared, the illumination gain is implicitly assumed to change in the same manner for every pixel, and linearly; this clearly does not match the non-uniform changes in image illumination intensity that occur when an ordinary video surveillance camera scans from a bright area to a dark one; 2) because it matches with a normalized cross-correlation function, which is strongly affected by rotation, affine transformation, and similar factors, it is unsuitable for situations where the motion is relatively violent or nonlinear; 3) for the case where many corners lie on the moving object, the method proposes no effective way to eliminate the misjudgment of motion vectors on the object as the global motion vector.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention proposes a motion estimation method based on corner matching and the optical flow method under violent illumination variation, which both improves the precision of the optical flow method in the general case and provides a reasonably reliable motion estimate under extremely harsh external lighting conditions, so that it can serve subsequent applications such as video coding/decoding, foreground extraction, and electronic image stabilization.
The present invention is achieved through the following technical solution and comprises the following steps:
1. Corner detection is performed on the current frame;
2. Rotation-invariant normalized corner matching is performed between the current frame and the previous frame;
3. The matched corners are divided into blocks and the affine transformation parameters within each divided block are computed; block-wise global motion vector estimation of the current frame is performed from the affine transformation parameters, and the previous frame is motion-compensated with the vector estimate;
4. Block-wise linear illumination compensation is applied to the motion-compensated previous frame, block by block;
5. Optical flow is computed between the linearly illumination-compensated image and the current image frame, and the global motion vector of the next frame is estimated.
Said rotation invariance means that, after the corners in the image frame have been made explicit by a corner detection method that maximizes the gradient or curvature of object surfaces in the image, methods that do not depend on rotation are used both when extracting the features near each corner and when selecting each corner's domain: specifically, the image is filtered with a rotation-insensitive Laplacian-of-Gaussian filter to select features, and a circle of radius r is adopted as the corner domain, where r is 3 to 10 pixels.
Said rotation-invariant normalized corner matching means choosing rotation-invariant features and corner domains and matching on these features. For each corner region, four values are obtained: the total filtered image pixel value, the mean filtered image pixel value, the total image pixel value before filtering, and the mean image pixel value; together these four values constitute the four features characterizing each corner of the image. These four features are then normalized to eliminate the influence of sudden illumination changes, forming a normalized feature vector for each corner point. This feature vector is used to search among the corners within the nearby region of this corner in the previous frame, and the corner whose feature vector has the minimum squared error with respect to this corner's feature vector is chosen as its match.
Said computing of the affine transformation parameters within each divided block means dividing the entire image frame into 16 blocks, each time arbitrarily choosing three pairwise non-adjacent blocks and randomly choosing one matched point pair within each block. Using this block-wise selection variant of the random sample consensus (RANSAC) method, the image affine motion parameters can be computed even for frames where the corners on the foreground moving object account for up to 1/3 of all corners; the six affine coordinate parameters are obtained, and the motion vector is then calculated from these six parameters.
Said block-wise linear illumination compensation means: the entire image is divided into n blocks; for each corresponding pair of blocks in the previous frame and the motion-compensated current frame, a pointwise comparison is made over all pixels whose gray value is nonzero, and the compensation coefficients are obtained from the linear regression formula

I_i(u, v) = m_i · I'_i(u, v) + b_i

where I_i represents the compensated image gray value, I'_i the pixel gray value before compensation, i denotes the i-th block with i < n, (u, v) a point position in the previous frame, and m_i, b_i the linear compensation coefficients. The regression formula gives the light intensity at an arbitrary point (u, v), and the illumination jumps appearing at the block seams are eliminated by a local neighborhood averaging method.
Said optical flow computation means: the motion between the block-wise illumination-compensated previous frame image and the current pixel frame is assumed to be an affine motion described by a six-parameter model; according to the optical flow constancy equation and the energy minimization principle, and according to the affine motion model

u' = a_1 u + a_2 v + b_1,  v' = a_3 u + a_4 v + b_2,

the six affine motion parameters a_1, a_2, a_3, a_4, b_1, b_2 are computed, where (u, v) denotes a point position in the previous frame and (u', v') the corresponding position in the current frame.
Compared with the prior art, the present invention has the following beneficial effects: before computing motion vectors with the optical flow method, the invention first applies the rotation-invariant normalized corner matching method to perform illumination compensation on the image, overcoming the failure of the optical flow method caused by adverse conditions such as violent illumination variation and large rotation, and achieving a higher accuracy rate. Experiments verify that, under the adverse conditions described, the present invention reduces the false detection rate by 10% to 35% relative to the common optical flow method. The invention also performs motion compensation block-wise, which overcomes the motion vector misjudgment caused by many corners lying on the moving object, and performs illumination compensation block-wise, which makes the invention robust when an ordinary camera scans from a bright area to a dark one. The present invention thus combines the good noise-suppression robustness of the normalized corner matching method with the high computational precision of the optical flow method, and can be applied in technical fields such as video surveillance, video coding/decoding, electronic image stabilization, and video segmentation.
Description of drawings
Fig. 1 is an image frame from a video containing a moving automobile under weak illumination variation and a small motion scale;
Fig. 2 is the image frame 5 frames after Fig. 1 in the same video;
Fig. 3 is an image frame from a large-motion-scale video containing a pedestrian on a bicycle;
Fig. 4 is the image frame 5 frames after Fig. 3 in the same video;
Fig. 5 is an image frame from a video containing pedestrians under strong illumination variation;
Fig. 6 is the image frame 5 frames after Fig. 5 in the same video.
Embodiment
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. This embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and concrete operating procedures are given, but the protection scope of the present invention is not limited to the following embodiment.
The source video sequences adopted in this embodiment come from a multi-object video library shot with sudden illumination changes.
This embodiment comprises the following concrete steps:
Step 1: the Harris corner detection method is adopted. First- or second-order differences of the image are computed, and the mean-square gradient matrix of each pixel is calculated:

M(x) = G(σ) * [ I_u²  I_u I_v ; I_u I_v  I_v² ]

where I_u and I_v are respectively the partial derivatives of the image frame I(x) with respect to u and v, obtained by convolving I(x) with the differentials of a Gaussian kernel of standard deviation σ; x = (u, v) is a two-dimensional vector, and the entries of M(x) are the Gaussian-smoothed products of the partial derivatives of the pixel values in the u and v directions, including the mixed term in the u, v directions.
From the eigenvalues of this matrix, the corner response is computed as R = det(M) − k · tr(M)², and a point is judged to be a corner when R exceeds a set threshold. In this embodiment the threshold is adjusted automatically and in an interlocked manner according to the corner count: a threshold th = 40000 is set in advance together with lock variables s = 0 and p = 0, and the target corner count is 300 to 400. If the count is outside this range, the threshold is automatically either replaced by its square root or raised to the power 3/2. The first time taking the square root of the threshold yields more than 300 corners, s is set to 1; the first time raising the threshold to the power 3/2 yields fewer than 400 corners, p is set to 1. The threshold is adjusted iteratively until the corner count lies in 300 to 400, or until both operations have triggered, i.e. s = 1 and p = 1. Here det is the matrix determinant, tr is the matrix trace, and k is an empirical value in 0.04 to 0.06; in this example k = 0.05.
Step 2: since a video surveillance camera may rotate to some degree to follow a target, and affine transformations can also cause common matching methods to produce misjudgments, rotation-invariant normalized corner matching is adopted.
First, the features of the neighborhood of each corner are extracted: a circle of radius 5 pixels around each corner constitutes the corner domain, and the image is filtered with a rotation-insensitive Laplacian-of-Gaussian filter, yielding two feature values per corner, namely the total filtered image pixel value and the mean filtered image pixel value; together with the total image pixel value and the mean image pixel value of the corner domain before filtering, these constitute the four features characterizing each corner of the image.
Next, these four features are normalized to form a feature vector for each corner point, and this feature vector is used to search all corners within the 20×20 region around this corner in the previous frame; the corner whose feature vector has the minimum squared error with respect to this corner's feature vector is chosen as its match.
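A minimal sketch of this matching step, under stated assumptions: the descriptor layout (order of the four features) and the use of unit-length normalization are illustrative, since the patent says "normalized" without fixing the norm; the helper names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def corner_descriptor(img, img_log, x, y, r=5):
    """Four rotation-invariant features of the circular domain around (x, y):
    sum and mean of LoG-filtered pixels, sum and mean of raw pixels.
    The vector is normalized (here: unit length) to cancel sudden
    illumination changes."""
    ys, xs = np.ogrid[-r:r + 1, -r:r + 1]
    mask = xs ** 2 + ys ** 2 <= r ** 2          # circular corner domain
    patch = img[y - r:y + r + 1, x - r:x + r + 1][mask]
    patch_log = img_log[y - r:y + r + 1, x - r:x + r + 1][mask]
    f = np.array([patch_log.sum(), patch_log.mean(), patch.sum(), patch.mean()])
    n = np.linalg.norm(f)
    return f / n if n > 0 else f

def match_corner(desc, prev_corners, prev_descs, x, y, win=10):
    """Best match = previous-frame corner inside the 20x20 search window
    whose normalized descriptor has minimum squared error to `desc`."""
    best, best_err = None, np.inf
    for (px, py), pd in zip(prev_corners, prev_descs):
        if abs(px - x) <= win and abs(py - y) <= win:
            err = float(((desc - pd) ** 2).sum())
            if err < best_err:
                best, best_err = (px, py), err
    return best
```

The Laplacian-of-Gaussian filter itself (`ndimage.gaussian_laplace`) is rotationally symmetric, which is what makes the sum/mean features insensitive to rotation of the corner neighborhood.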
Step 3: block-wise affine motion parameter calculation is performed using the matched corners, and the resulting parameter estimates are used to estimate and compensate the global motion vector of the current frame.
Since the global motion of a surveillance camera involves only rotation and translation, a six-parameter affine coordinate model is adopted to simulate it. Let (u, v) denote a point position in the previous frame and (u', v') the corresponding position in the current frame; the affine relation is:

u' = a_1 u + a_2 v + b_1
v' = a_3 u + a_4 v + b_2

where a_1, a_2, a_3, a_4, b_1, b_2 are the affine motion parameters.
From the corners successfully matched in step 2, points are selected with the random sample consensus (RANSAC) method to estimate these parameters.
In this embodiment RANSAC repeatedly draws random samples of the data to compute the affine parameters; the random sample consensus method is an effective and robust method.
Said random sample consensus method proceeds as follows:
First step: the required number of iterations K is estimated from the empirical formula for the number of random selections,

K = log(1 − z) / log(1 − wˢ),

where z is the desired probability of a correct result, w is the probability that a selected point is an inlier (a "good" point), and s is the number of points drawn per sample. This embodiment requires z = 0.99 and w = 0.5, i.e. at least half the points are good; the text obtains K = 17 from this formula, and to guarantee correctness the method adopts K = 30.
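The iteration-count formula can be checked numerically. One caveat, flagged as an assumption: the standard formula depends on the sample size s, and the patent's K = 17 matches s = 2 (z = 0.99, w = 0.5), while its three-point samples would give K = 35; either way the embodiment's K = 30 is a rounded safety margin, and the sketch below simply implements the general formula.

```python
import math

def ransac_iterations(z=0.99, w=0.5, s=3):
    """Empirical RANSAC iteration count: smallest K giving probability z of
    drawing at least one all-inlier sample, when each point is an inlier
    with probability w and each sample draws s points.
    K = log(1 - z) / log(1 - w**s), rounded up."""
    return math.ceil(math.log(1.0 - z) / math.log(1.0 - w ** s))
```

For example, `ransac_iterations(0.99, 0.5, 2)` reproduces the text's 17, while `ransac_iterations(0.99, 0.5, 3)` gives 35.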
Second step: three matched corner pairs are taken out at random each time to determine the affine model parameters, i.e. the affine parameters a_1, a_2, a_3, a_4, b_1, b_2 are fitted from the formula above.
Third step: a threshold t for judging whether a point fits the model well is chosen. For every point outside the sample, the distance between the point and the position predicted for it by the six-parameter model fitted in the second step is compared with t: if the distance is less than t, the point is deemed close enough and classified as good; if greater than t, it is classified as bad. The number d of good point pairs for this sample is recorded.
The second and third steps are iterated K = 30 times in this way, the three matched corner pairs with the largest good-point count d are found, and these corners serve as the candidate points from which the six affine model parameters a_1, a_2, a_3, a_4, b_1, b_2 are computed.
The RANSAC method can fairly accurately find and compute the affine parameters of the background, which occupies most of the image frame. However, for frames with larger foreground objects, the corners on the foreground moving object may account for a large fraction of all corners; in that case a fully random selection might treat foreground information entirely as background, leading to erroneous results. Therefore, this embodiment adopts a block-wise selection method: the entire image is divided into 16 blocks, and each time three pairwise non-adjacent blocks are chosen arbitrarily and one matched point pair is randomly chosen within each block. This greatly reduces the probability that all chosen points fall inside the same moving object, and thus also reduces the probability of miscalculation.
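The block-wise selection can be sketched as follows. One assumption is labeled in the code: the patent does not define "non-adjacent" precisely, so Chebyshev distance ≥ 2 on the 4×4 block grid (ruling out diagonal neighbors too) is one plausible reading.

```python
import random

def pick_nonadjacent_blocks(grid=4, k=3, rng=None):
    """Randomly pick k blocks of a grid x grid partition such that no two
    are adjacent.  'Non-adjacent' is taken here as Chebyshev distance >= 2,
    which also excludes diagonal neighbors (an assumption, not patent text).
    Returns block indices as (row, col) pairs; one matched corner pair would
    then be drawn at random inside each chosen block."""
    rng = rng or random.Random()
    cells = [(r, c) for r in range(grid) for c in range(grid)]
    while True:                      # rejection sampling; valid triples exist
        blocks = rng.sample(cells, k)
        ok = all(max(abs(a[0] - b[0]), abs(a[1] - b[1])) >= 2
                 for i, a in enumerate(blocks) for b in blocks[i + 1:])
        if ok:
            return blocks
```

For a 4×4 grid, e.g. (0,0), (0,2), (2,0) is a valid triple, so the rejection loop terminates quickly in practice.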
From the six computed affine coordinate parameters, the motion vector is calculated as

(du, dv) = (u' − u, v' − v) = (a_1 u + a_2 v + b_1 − u, a_3 u + a_4 v + b_2 − v),

and the current frame is globally motion-compensated accordingly, so that every point in the current frame is stabilized to its position in the previous frame; the gray values of newly exposed areas are set to zero.
Step 4: block-wise linear illumination compensation is carried out between the motion-compensated previous frame and the current frame, both divided into blocks.
This embodiment assumes the light source is a single point source. For any chosen standard reference patch of consecutive frames, the sudden change in light intensity produced by the relative change between the camera and the light source can be represented by a linear function; but for reference points on the frame far from that standard reference patch, the differing distances cause different light intensity variations, so a single such model would be imprecise. This step therefore again solves block-wise.
Said block-wise linear illumination compensation method is specifically: the entire image is divided into nine blocks; for each corresponding pair of blocks in the motion-compensated previous frame and the current frame, a pointwise comparison is made over all pixels whose gray value is nonzero, and the compensation coefficients are obtained from the linear regression formula

I_i(u, v) = m_i · I'_i(u, v) + b_i

where I_i represents the compensated image gray value, I'_i the pixel gray value before compensation, and i denotes the i-th block. The regression formula yields the light intensity at any point (u, v) on the frame. Since brightness differences may appear at the block junctions, local averaging of the four-vertex neighborhoods is applied at the seams to eliminate this phenomenon, giving a smoother brightness result.
Step 5: using the previous-frame image with illumination effects eliminated and the current image frame, the optical flow method estimates the global motion vector of the next frame. Specifically, this global motion is assumed to be an affine motion with a six-parameter model. According to the optical flow constancy equation

I_u x + I_v y + I_t = 0,

where I_u, I_v, I_t are respectively the partial derivatives of the light intensity with respect to the u and v directions of the image plane and with respect to time, and (x, y) is the motion vector, and according to the energy minimization principle

E = ∬ (I_u x + I_v y + I_t)² du dv → min,

where (x, y)ᵀ is the vector representation of the motion vector, the affine motion model

x = a_1 u + a_2 v + b_1 − u,  y = a_3 u + a_4 v + b_2 − v

is transformed into its matrix form, and the six affine motion parameters a_1, a_2, a_3, a_4, b_1, b_2 are computed. From these parameters the motion vector is obtained, which is the desired motion estimate.
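Minimizing that energy over the affine parameters is a linear least-squares problem: each pixel contributes one equation in six unknowns. The sketch below solves it directly with `lstsq`; as noted in the comments, it parametrizes the displacement (du, dv) directly, whereas the patent's a_i absorb the identity part (du = (a_1 − 1)u + a_2 v + b_1), which is an equivalent reparametrization.

```python
import numpy as np

def affine_flow(Ix, Iy, It):
    """Least-squares solve for six affine-flow parameters from the optical
    flow constancy equation Ix*du + Iy*dv + It = 0, with the displacement
    modeled as du = p1*u + p2*v + p3, dv = p4*u + p5*v + p6.
    (The patent's a_i include the identity: du = (a1-1)*u + a2*v + b1.)"""
    h, w = Ix.shape
    v, u = np.mgrid[0:h, 0:w].astype(float)
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()
    uu, vv = u.ravel(), v.ravel()
    # One row per pixel: [Ix*u, Ix*v, Ix, Iy*u, Iy*v, Iy] @ params = -It
    A = np.stack([ix * uu, ix * vv, ix, iy * uu, iy * vv, iy], axis=1)
    params, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return params          # p1, p2, p3, p4, p5, p6
```

The per-pixel motion vector is then recovered as (du, dv) = (p1·u + p2·v + p3, p4·u + p5·v + p6), which plays the role of the global motion estimate for the next frame.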
Compared with the prior art, the beneficial effect of this embodiment is: the motion vector estimation method proposed here combines the corner matching method with the optical flow method and, as verified by extensive experiments, estimates motion vectors better than previous methods.
Figs. 1 and 2 are two image frames 5 frames apart in an ordinary video. Figs. 1(a) and 2(a) are the current image frames; Figs. 1(b) and 2(b) show the inter-frame difference computed after compensating the previous frame with the motion vector obtained by this embodiment's method, for accuracy comparison. As Figs. 1 and 2 show, the result of this embodiment's method accurately detects the moving automobile under weak illumination variation and a small motion scale.
Figs. 3 and 4 are two images 5 frames apart in a large-motion-scale video. Figs. 3(a) and 4(a) are the current image frames; Figs. 3(b) and 4(b) show the inter-frame difference computed after compensating the previous frame with the motion vector obtained by this embodiment's method, for accuracy comparison. As Figs. 3 and 4 show, the result accurately detects the pedestrian on the bicycle under weak illumination variation and a large motion scale.
Figs. 5 and 6 are two images 5 frames apart in a strong-illumination-variation video. Figs. 5(a) and 6(a) are the current image frames; Figs. 5(b) and 6(b) show the inter-frame difference computed after compensating the previous frame with the motion vector obtained by this embodiment's method, for accuracy comparison. As Figs. 5 and 6 show, the result accurately detects the pedestrians under strong illumination variation and a large motion scale, with few false detections.
Claims (4)
1. A motion estimation method based on corner matching and the optical flow method under violent illumination variation, characterized by comprising the steps of:
(1) performing corner detection on the current frame;
(2) performing rotation-invariant normalized corner matching between the current frame and the previous frame: choosing rotation-invariant features and corner domains and matching on these features; obtaining for each corner region the total filtered image pixel value, the mean filtered image pixel value, the total image pixel value before filtering, and the mean image pixel value, these four values constituting the four features characterizing each corner of the image; normalizing these four features to eliminate the influence of sudden illumination changes and form a normalized feature vector for each corner point; using this feature vector to search among the corners within the nearby region of this corner in the previous frame, and choosing the corner whose feature vector has the minimum squared error with respect to this corner's feature vector as its match;
(3) dividing the matched corners into blocks and computing the affine transformation parameters within each divided block; performing block-wise global motion vector estimation of the current frame from the affine transformation parameters, and motion-compensating the current frame with the vector estimate; said computing of the affine transformation parameters within each divided block means dividing the entire image frame into 16 blocks, each time arbitrarily choosing three pairwise non-adjacent blocks and randomly choosing one matched point pair within each block, and using this block-wise selection variant of the random sample consensus method to compute the image affine motion parameters even for frames where the corners on the foreground moving object account for up to 1/3 of all corners, obtaining the six affine coordinate parameters and calculating the motion vector from these six parameters;
(4) performing block-wise linear illumination compensation between the motion-compensated previous frame and the current frame, both divided into blocks;
(5) performing optical flow computation between the linearly illumination-compensated previous image frame and the current image frame, and estimating the global motion vector of the next frame.
2. The motion estimation method based on corner matching and the optical flow method under violent illumination variation according to claim 1, characterized in that said rotation invariance means that, after the corners in the image frame have been made explicit by a corner detection method that maximizes the gradient or curvature of object surfaces in the image, the image is filtered with a Laplacian-of-Gaussian filter to select features, and a circle of radius r is adopted as the corner domain, where r is 3 to 10 pixels.
3. The motion estimation method based on corner matching and the optical flow method under violent illumination variation according to claim 1, characterized in that said block-wise linear illumination compensation means: the entire image is divided into n blocks; for each corresponding pair of blocks in the previous frame and the motion-compensated current frame, a pointwise comparison is made over all pixels whose gray value is nonzero, and the compensation coefficients are obtained from the linear regression formula I_i(u, v) = m_i · I'_i(u, v) + b_i, where I_i represents the compensated image gray value, I'_i the pixel gray value before compensation, i denotes the i-th block with i < n, (u, v) a point position in the previous frame, and m_i, b_i the linear compensation coefficients; the regression formula gives the light intensity at an arbitrary point (u, v), and the illumination jumps appearing at the block seams are eliminated by a local neighborhood averaging method.
4. The motion estimation method based on corner matching and the optical flow method under violent illumination variation according to claim 1, characterized in that said optical flow computation means: the motion between the block-wise illumination-compensated previous frame image and the current pixel frame is assumed to be an affine motion with a six-parameter model; according to the optical flow constancy equation and the energy minimization principle, and according to the affine motion model u' = a_1 u + a_2 v + b_1, v' = a_3 u + a_4 v + b_2, the six affine motion parameters a_1, a_2, a_3, a_4, b_1, b_2 are computed, where (u, v) denotes a point position in the previous frame and (u', v') the corresponding position in the current frame.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN2008100327412A CN101216941B (en)  20080117  20080117  Motion estimation method under violent illumination variation based on corner matching and optic flow method 
Publications (2)
Publication Number  Publication Date 

CN101216941A CN101216941A (en)  20080709 
CN101216941B true CN101216941B (en)  20100421 
Families Citing this family (25)
Publication number  Priority date  Publication date  Assignee  Title 

CN102074017B (en) *  20091123  20121128  北京工业大学  Method and device for detecting and tracking barbell central point 
CN101794446B (en) *  20100211  20111214  东南大学  Line search type detection method of image corner point 
CN101789127B (en) *  20100226  20120118  成都三泰电子实业股份有限公司  Method for extracting target from video image 
CN101894278B (en) *  20100716  20120627  西安电子科技大学  Human motion tracing method based on variable structure multimodel 
CN102074034B (en) *  20110106  20131106  西安电子科技大学  Multimodel human motion tracking method 
CN102156991B (en) *  20110411  20130501  上海交通大学  Quaternion based object optical flow tracking method 
CN102156985A (en) *  20110411  20110817  上海交通大学  Method for counting pedestrians and vehicles based on virtual gate 
US8553943B2 (en) *  20110614  20131008  Qualcomm Incorporated  Content-adaptive systems, methods and apparatus for determining optical flow 
CN102289670B (en) *  20110831  20130320  长安大学  Image characteristic extraction method with illumination robustness 
FR2985065B1 (en) *  20111221  20140110  Univ Paris Curie  Optical flow estimation method from an asynchronous light sensor 
CN103428407B (en) *  20120525  20170825  信帧机器人技术（北京）有限公司  A method for detecting fights in video 
CN102880634B (en) *  20120730  20160720  成都西可科技有限公司  Intelligent human-face recognition and retrieval method based on cloud 
CN103810692B (en) *  20121108  20161221  杭州海康威视数字技术股份有限公司  Method for video tracking by video monitoring equipment, and the video monitoring equipment 
CN103079037B (en) *  20130205  20150610  哈尔滨工业大学  Self-adaptive electronic image stabilization method based on long-range view and close-range view switching 
WO2015192372A1 (en) *  20140620  20151223  Mediatek Singapore Pte. Ltd.  A simplified method for illumination compensation in multiview and 3d video coding 
CN105818746A (en) *  20150105  20160803  上海纵目科技有限公司  Calibration method and system of panoramic advanced driver assistance system 
CN104853064B (en) *  20150410  20180417  海视英科光电（苏州）有限公司  Electronic image stabilization method based on thermal infrared imager 
CN104881645B (en) *  20150526  20180914  南京通用电器有限公司  Vehicle front target detection method based on feature point mutual information and optical flow method 
CN105100771A (en) *  20150714  20151125  山东大学  Single-viewpoint video depth obtaining method based on scene classification and geometric dimension 
CN105261042A (en) *  20151019  20160120  华为技术有限公司  Optical flow estimation method and apparatus 
CN107705320A (en) *  20160808  20180216  佳能株式会社  The method and apparatus for tracking the boundary point of the object in video 
CN107230220B (en) *  20170526  20200221  深圳大学  Novel space-time Harris corner detection method and device 
CN109040522A (en) *  20170608  20181218  奥迪股份公司  Image processing system and method 
CN107808388A (en) *  20171019  20180316  中科创达软件股份有限公司  Image processing method, device and electronic equipment comprising moving target 
CN110378930B (en) *  20190911  20200131  湖南德雅坤创科技有限公司  Moving object extraction method and device, electronic equipment and readable storage medium 
Citations (1)
Publication number  Priority date  Publication date  Assignee  Title 

CN101072355A (en) *  20060512  20071114  中国科学院计算技术研究所  Weighted prediction motion compensation method 

History: 2008-01-17, CN application CN2008100327412A, patent CN101216941B; status: not active (IP right cessation).
NonPatent Citations (4)
Title 

吴剑英, 艾斯卡尔. A robust estimation and compensation technique for global image motion. Journal of Hebei Institute of Technology, 29(1), 2007, pp. 56-62. * 
李亚桥. Feature point extraction and matching techniques in photogrammetry of traffic accident scenes. Master's thesis, Jilin University, 2006, pp. 43-45. * 
王旸. An improved algorithm for full-search block matching. Journal of Continuing Education of Shaanxi Normal University (Xi'an), 24(1), 2007, pp. 110-112. * 
Similar Documents
Publication  Publication Date  Title 

Zhou et al.  Efficient road detection and tracking for unmanned aerial vehicle  
JP6095018B2 (en)  Detection and tracking of moving objects  
JP6586430B2 (en)  Estimation of vehicle position  
US9424486B2 (en)  Method of image processing  
Yang et al.  Fusion of median and bilateral filtering for range image upsampling  
Qiu et al.  Deeplidar: Deep surface normal guided depth prediction for outdoor scene from sparse lidar data and single color image  
Yang et al.  Colorguided depth recovery from RGBD data using an adaptive autoregressive model  
Hu et al.  Single and multiple object tracking using logEuclidean Riemannian subspace and blockdivision appearance model  
US9864927B2 (en)  Method of detecting structural parts of a scene  
CN102999759B (en)  Vehicle motion state estimation method based on optical flow  
CN103455797B (en)  Detection and tracking method of moving small target in aerial shot video  
Veeraraghavan et al.  Computer vision algorithms for intersection monitoring  
Wang et al.  Lane detection using spline model  
Kang et al.  Handling occlusions in dense multiview stereo  
CN103325112B (en)  Quick detection method for moving targets in dynamic scene  
Mittal et al.  Motionbased background subtraction using adaptive kernel density estimation  
Park et al.  Comparative study of vision tracking methods for tracking of construction site resources  
Brown et al.  Advances in computational stereo  
US20170316569A1 (en)  Robust Anytime Tracking Combining 3D Shape, Color, and Motion with Annealed Dynamic Histograms  
Brasnett et al.  Sequential Monte Carlo tracking by fusing multiple cues in video sequences  
US7778446B2 (en)  Fast human pose estimation using appearance and motion via multidimensional boosting regression  
US8447069B2 (en)  Apparatus and method for moving object detection  
US8045783B2 (en)  Method for moving cell detection from temporal image sequence model estimation  
Stein et al.  A robust method for computing vehicle egomotion  
Leotta et al.  Vehicle surveillance with a generic, adaptive, 3d vehicle model 
Legal Events
Date  Code  Title  Description 

PB01  Publication  
C06  Publication  
C10  Entry into substantive examination  
SE01  Entry into force of request for substantive examination  
GR01  Patent grant  
C14  Grant of patent or utility model  
CF01  Termination of patent right due to nonpayment of annual fee 
Granted publication date: 20100421 Termination date: 20130117 