CN103905826A - Self-adaptation global motion estimation method - Google Patents

Self-adaptation global motion estimation method

Info

Publication number
CN103905826A
CN103905826A (application CN201410144161.8A)
Authority
CN
China
Prior art keywords
frame image
offset
moment
feature point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410144161.8A
Other languages
Chinese (zh)
Inventor
张会清
高琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201410144161.8A priority Critical patent/CN103905826A/en
Publication of CN103905826A publication Critical patent/CN103905826A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

A global-motion-estimation indoor visual positioning method comprises the following steps. A camera collects images in real time. The i-th frame of the input image sequence is extracted. When i ≤ 5, the offset of frame i+1 relative to frame i is calculated, and the camera motion track at the moment of frame i+1 is drawn in real time from that offset. When i > 5, the offset of frame i relative to frame i-1 is first predicted with a Kalman filtering algorithm; the overlapping region of frames i and i-1 is obtained from the predicted offset; a feature point set is computed on the overlapping region; a four-dimensional descriptor vector is then computed for each feature point in the set, yielding matched descriptor pairs over the overlapping region; the offset of frame i relative to frame i-1 is solved from the matched pairs; and the camera motion track at the moment of frame i is drawn in real time.

Description

An adaptive global motion estimation method
Technical field:
The invention belongs to the field of image processing. It is an indoor visual positioning method realized with image acquisition, computer, digital image processing, and optical technologies. The method automatically analyzes video captured by a camera and determines the magnitude and direction of the camera's displacement.
Background technology:
Global motion estimation is a model-based motion analysis method. Its core principle is to solve for optimal model parameters by iterative optimization, and it is now widely used in fields such as video coding and image segmentation.
For global motion estimation, experts and scholars at home and abroad have worked on solving the parameters of the best estimation model. Yosi Keller et al. proposed a gradient-based multi-feature-point algorithm that reduces the computation of global motion estimation by a factor of about 20. Barfoot used SIFT feature matching to solve the three-dimensional motion estimation problem, but the method relies on pre-positioned references such as landmarks and is slow. Konrad et al. proposed the LM algorithm, which removes noise with a residual histogram, but its computational load is heavy, its results are disturbed by noise, and real-time operation is hard to guarantee. Li proposed an algorithm that improves the precision and speed of global motion estimation through background extraction, but it does not consider real-time operation, and the computational load is still large.
In recent years, the main problem behind the slow positioning and poor real-time performance of many global motion estimation methods has been their heavy computational load, which slows parameter estimation, limits their application, and makes it hard to meet the constantly growing demand for location-based services. Improving real-time performance is therefore an urgent problem.
Summary of the invention:
To address slow positioning, the invention proposes an adaptive global motion estimation method. After the fifth match, a Kalman filtering algorithm predicts the overlapping region of the two images to be matched, and feature points are then detected and matched only within that region. Building on the strengths of the SUSAN and SURF algorithms, a SUSAN-SURF algorithm extracts feature points, effectively combining the efficiency of SURF with the excellent contour information of SUSAN; a KNN method then accelerates image matching. The magnitude and direction of the camera displacement are estimated from the matching result with a six-parameter affine model, and the true motion track of the camera is drawn in real time on the host computer interface. The method comprises the following steps in order:
(1) The camera collects images in real time, yielding the input image sequence;
(2) Extract the i-th frame of the input image sequence, where i starts at 1; when i ≤ 5, go to step (3); when i > 5, go to step (5);
(3) Calculate the offset of frame i+1 relative to frame i. Specifically: compute the feature point sets of frames i and i+1 with the SUSAN algorithm; then compute the four-dimensional descriptor vector of each feature point in each set with the SURF algorithm; next, from the descriptor vectors, obtain the matched descriptor pairs between frames i and i+1; finally, from the matched pairs, solve the offset of frame i+1 relative to frame i with the six-parameter affine model;
(4) From the offset of frame i+1 relative to frame i, draw in real time on the host computer the camera motion track at the moment of frame i+1; go to step (9);
(5) Use the Kalman filtering algorithm to predict the offset of frame i relative to frame i-1, then continue to the next step; the prediction comprises the following sub-steps:
1. From the offset estimate $\hat{X}_{k-1}$ at moment k-1, obtain the predicted offset $\hat{X}_k^-$ at moment k: $\hat{X}_k^- = a\hat{X}_{k-1} + w_{k-1}$, where $\hat{X}_{k-1}$, the offset estimate at moment k-1, is the estimated offset of frame i-1 relative to frame i-2; $w_{k-1}$ is the white Gaussian noise sample at moment k-1; a is the system parameter; the initial value of k is 6; and $\hat{X}_5$ is the offset of the 5th frame relative to the 4th frame obtained in step (3);
2. Compute the variance $P_k^-$ of the predicted offset $\hat{X}_k^-$ at moment k: $P_k^- = A P_{k-1} A^T + Q_{k-1}$, where $P_{k-1}$, the variance of the offset estimate at moment k-1, is the variance of the estimated offset of frame i-1 relative to frame i-2; $Q_{k-1}$ is the system noise variance at moment k-1; A is the system parameter matrix and $A^T$ its transpose;
3. Update the offset estimate $\hat{X}_k$ at moment k: $\hat{X}_k = \hat{X}_k^- + K_k (Z_k - C\hat{X}_k^-)$, where $Z_k$ is the measurement (location) matrix of frame i at moment k, $K_k = P_k^- C^T [C P_k^- C^T + R_k]^{-1}$, C is the observation matrix of the noisy measurement, and $R_k$ is the covariance matrix of the measurement noise of frame i at moment k;
4. Compute the variance $P_k$ of the offset estimate at moment k: $P_k = (I - K_k C) P_k^-$, where I is the identity matrix and $P_5 = 1$;
(6) From the predicted offset, determine the overlapping region of frames i and i-1, and extract the overlapping regions $A_i$ and $A_{i-1}$ from frames i and i-1 respectively;
(7) Compute the feature point sets of $A_i$ and $A_{i-1}$ with the SUSAN algorithm; then compute the four-dimensional descriptor vector of each feature point with the SURF algorithm; next obtain the matched descriptor pairs between $A_i$ and $A_{i-1}$; finally, from the matched pairs, solve the offset of frame i relative to frame i-1 with the six-parameter affine model;
(8) From the offset of frame i relative to frame i-1, draw in real time on the host computer the camera motion track at the moment of frame i;
(9) Judge whether to continue positioning; if so, return to step (2); otherwise, finish.
Compared with the prior art, the invention achieves fast positioning under global motion and overcomes the slow positioning of traditional global motion methods. By predicting the overlapping region with Kalman filtering and performing feature point detection and matching only inside that region, it markedly improves the real-time performance of positioning, avoids extracting feature points over the entire image, and reduces the computational load of the algorithm. The overall loop is summarized in the sketch below.
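A minimal Python sketch of this adaptive loop, assuming hypothetical helper callables (match_offset for the SUSAN-SURF matching plus the six-parameter fit, kalman_predict for the filter, crop_overlap for the overlap extraction) that stand in for the steps detailed in the embodiment; it is an illustration, not the patented implementation itself:

```python
# Minimal sketch of the adaptive global motion estimation loop.
# The helper callables are hypothetical stand-ins for the steps
# detailed in the embodiment, not part of the patent text.
from typing import Callable, List, Tuple

import numpy as np

Frame = np.ndarray
Offset = Tuple[float, float]

def adaptive_gme(frames: List[Frame],
                 match_offset: Callable[[Frame, Frame], Offset],
                 kalman_predict: Callable[[Offset], Offset],
                 crop_overlap: Callable[[Frame, Offset], Frame],
                 warmup: int = 5) -> List[Offset]:
    """Return the accumulated camera track, one point per frame."""
    track: List[Offset] = [(0.0, 0.0)]
    last: Offset = (0.0, 0.0)
    for i in range(1, len(frames)):
        if i <= warmup:
            # First five matches: features over the whole frames.
            dx, dy = match_offset(frames[i - 1], frames[i])
        else:
            # Later frames: Kalman-predict the offset, then match
            # only inside the predicted overlap of the two frames.
            pred = kalman_predict(last)
            dx, dy = match_offset(crop_overlap(frames[i - 1], pred),
                                  crop_overlap(frames[i], pred))
        last = (dx, dy)
        track.append((track[-1][0] + dx, track[-1][1] + dy))
    return track
```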
Brief description of the drawings:
Fig. 1 General flow chart of the operation of the system
Fig. 2 Structural block diagram of the application system
in which: 1 wireless camera, 2 computer, 3 display interface
Fig. 3 Flow chart of feature point extraction with the SUSAN algorithm
Fig. 4 USAN template for feature point detection
Fig. 5 Flow chart of Kalman filter prediction
Fig. 6 Comparison of overlapping regions in Kalman-predicted image matching
in which Fig. 6(a) is the 4th frame and Fig. 6(b) the 5th frame,
Fig. 6(c) is the 8th frame and Fig. 6(d) the 9th frame
Fig. 7 Trajectory of the camera motion
Embodiment:
The invention is described further below with reference to the drawings and an embodiment. The concrete implementation flow of the system of the invention is shown in Fig. 1.
The structural block diagram of the system is shown in Fig. 2; it consists of hardware and software. The hardware is a wireless camera with 640 × 480 pixels, 96 dpi resolution, and a video frame rate of 15 fps. The software runs on Windows 7; the host computer interface is developed with the Microsoft Foundation Classes under Microsoft Visual Studio 2008, and the algorithm is implemented on the OpenCV library. The host computer window of this software can simulate positioning on recorded video and can also be applied to real-time positioning in a real environment.
The specific implementation proceeds as follows:
1. The camera collects images in real time, yielding the input image sequence;
2. Extract the i-th frame of the input image sequence, where i starts at 1; when i ≤ 5, go to step 3; when i > 5, go to step 5;
3. Calculate the offset of frame i+1 relative to frame i.
(1) Compute the feature point sets R of frames i and i+1 with the SUSAN algorithm; the extraction flow is shown in Fig. 3. The invention uses the classical 7 × 7 SUSAN template for feature point detection; the USAN template is shown in Fig. 4, in which the solid-line part is the USAN template, the light grey area is the image edge, and the dark grey area is the USAN region.
1. For the i-th frame of the input image sequence, compare the grey value of every pixel inside the current template with that of the nucleus:
$c(\vec{r}, \vec{r}_0) = \begin{cases} 1, & |I(\vec{r}) - I(\vec{r}_0)| \le t \\ 0, & |I(\vec{r}) - I(\vec{r}_0)| > t \end{cases}$
where $\vec{r}$ is a point of the USAN template other than the nucleus, $I(\vec{r})$ is the grey value of that point, $\vec{r}_0$ is the nucleus of the USAN template, $I(\vec{r}_0)$ is the grey value of the nucleus, and t is the grey-difference threshold.
2. Accumulate the comparison values of all points in the template to obtain the sum for the current nucleus pixel: $n(\vec{r}_0) = \sum_{\vec{r}} c(\vec{r}, \vec{r}_0)$.
3. Obtain the feature point set $R = (r_1, r_2, \ldots, r_k)$ on frame i; whether a point enters the set is decided by
$R(\vec{r}_0) = \begin{cases} g - n(\vec{r}_0), & n(\vec{r}_0) < g \\ 0, & \text{otherwise} \end{cases}$
where $n(\vec{r}_0)$ is the accumulated sum of the k-th nucleus pixel and g is the geometric threshold. A minimal code sketch of these three sub-steps follows.
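A minimal NumPy sketch of the three sub-steps above. The circular 37-pixel mask inside the 7 × 7 window, the threshold values t = 27 and g = 18, and the brute-force loops are illustrative assumptions of this sketch, not the patent's tuned implementation:

```python
# Minimal SUSAN corner sketch following sub-steps 1-3 above.
# Mask shape and thresholds t, g are illustrative assumptions.
import numpy as np

def susan_corners(gray: np.ndarray, t: int = 27, g: float = 18.0):
    """gray: 2-D uint8 image. Returns a list of (row, col) points."""
    yy, xx = np.mgrid[-3:4, -3:4]
    circle = (xx ** 2 + yy ** 2) <= 3.4 ** 2   # 37-pixel USAN mask
    circle[3, 3] = False                        # exclude the nucleus
    offsets = np.argwhere(circle) - 3           # (dy, dx) of 36 points
    img = gray.astype(np.int32)
    h, w = img.shape
    corners = []
    for r in range(3, h - 3):                   # brute force, for clarity
        for c in range(3, w - 3):
            nucleus = img[r, c]
            # Sub-steps 1-2: count mask pixels similar to the nucleus,
            # c(r, r0) = 1 iff |I(r) - I(r0)| <= t, accumulated to n(r0).
            n = int(np.sum(np.abs(
                img[r + offsets[:, 0], c + offsets[:, 1]] - nucleus) <= t))
            # Sub-step 3: corner response R = g - n(r0) when n(r0) < g.
            if n < g:
                corners.append((r, c))
    return corners
```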
(2) For each feature point in the set, compute the four-dimensional descriptor vector V with the SURF algorithm: $V = (\Sigma d_x, \Sigma d_y, \Sigma|d_x|, \Sigma|d_y|)$, where $\Sigma d_x$ is the sum of the wavelet responses along the feature point's x axis, $\Sigma d_y$ the sum along the y axis, $\Sigma|d_x|$ the sum of the absolute responses along x, and $\Sigma|d_y|$ the sum of the absolute responses along y. The feature point set of frame i+1 is described in the same way (a sketch follows).
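The descriptor computation can be sketched as below; using Sobel responses in place of SURF's Haar wavelets, a fixed 21 × 21 window, and unit normalisation are simplifying assumptions of this sketch, not details from the patent:

```python
# Sketch of the 4-D descriptor V = (sum dx, sum dy, sum|dx|, sum|dy|)
# around each feature point. Sobel responses stand in for SURF's
# Haar wavelets; the window half-size is an illustrative assumption.
import cv2
import numpy as np

def describe_points(gray: np.ndarray, points, half: int = 10) -> np.ndarray:
    dx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    dy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    descriptors = []
    for r, c in points:
        wx = dx[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
        wy = dy[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
        v = np.array([wx.sum(), wy.sum(),
                      np.abs(wx).sum(), np.abs(wy).sum()], dtype=np.float32)
        descriptors.append(v / (np.linalg.norm(v) + 1e-9))  # normalise
    return (np.vstack(descriptors) if descriptors
            else np.empty((0, 4), dtype=np.float32))
```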
(3) From the four-dimensional descriptor vectors, obtain the matched descriptor pairs between frames i and i+1. The KNN matching method is used: the FLANN library is called to build a 4-dimensional search tree, and for each descriptor of frame i the matching descriptor of frame i+1 is retrieved in the tree, yielding the matched descriptor pairs (a FLANN-based sketch follows).
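With OpenCV this can be sketched with FlannBasedMatcher over the 4-D vectors. The KD-tree parameters and the 0.7 ratio test are illustrative assumptions; the patent itself only specifies KNN retrieval in a 4-dimensional search tree:

```python
# KNN matching of the 4-D descriptors with a FLANN KD-tree. The tree
# count, check count and 0.7 ratio test are illustrative assumptions.
import cv2
import numpy as np

FLANN_INDEX_KDTREE = 1

def knn_match(desc_a: np.ndarray, desc_b: np.ndarray, ratio: float = 0.7):
    matcher = cv2.FlannBasedMatcher(
        dict(algorithm=FLANN_INDEX_KDTREE, trees=4), dict(checks=32))
    pairs = matcher.knnMatch(desc_a.astype(np.float32),
                             desc_b.astype(np.float32), k=2)
    # Keep a match only when it is clearly better than the runner-up.
    return [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
```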
(4) Solve the offset of frame i+1 relative to frame i with the six-parameter affine model. The offset comprises the offset Δx along the x axis and the offset Δy along the y axis.
The six-parameter affine model describing the displacement of the camera is:
$\begin{bmatrix} x_i \\ y_i \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_4 & a_5 \end{bmatrix} \begin{bmatrix} x_{i+1} \\ y_{i+1} \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} k\cos\theta & -k\sin\theta \\ k\sin\theta & k\cos\theta \end{bmatrix} \begin{bmatrix} x_{i+1} \\ y_{i+1} \end{bmatrix} + \begin{bmatrix} c \\ d \end{bmatrix} \quad (1)$
where $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ are the pixel coordinates of a matched point pair in the two adjacent frames, the output value of $b_1$ is the camera offset Δx along the x axis, the output value of $b_2$ is the camera offset Δy along the y axis, and $a_1, a_2, a_4, a_5$ are the scaling and rotation components of the image motion. A least-squares sketch of this fit follows.
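Given the matched coordinates, formula (1) is linear in the six parameters and can be fitted by least squares. The sketch below uses NumPy's lstsq, which is one common solver; the patent does not name a particular one, so this choice is an assumption:

```python
# Least-squares fit of the six-parameter affine model (1). The
# translation terms b1, b2 are the camera offsets (dx, dy).
import numpy as np

def fit_affine(pts_i: np.ndarray, pts_next: np.ndarray):
    """pts_i, pts_next: (N, 2) matched (x, y) coordinates, N >= 3."""
    n = len(pts_next)
    A = np.zeros((2 * n, 6))
    b = np.empty(2 * n)
    A[0::2, 0:2] = pts_next   # x_i = a1*x_{i+1} + a2*y_{i+1} + b1
    A[0::2, 4] = 1.0
    A[1::2, 2:4] = pts_next   # y_i = a4*x_{i+1} + a5*y_{i+1} + b2
    A[1::2, 5] = 1.0
    b[0::2] = pts_i[:, 0]
    b[1::2] = pts_i[:, 1]
    a1, a2, a4, a5, b1, b2 = np.linalg.lstsq(A, b, rcond=None)[0]
    offset = (b1, b2)                        # (dx, dy) of the camera
    linear = np.array([[a1, a2], [a4, a5]])  # scale/rotation part
    return offset, linear
```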
4. From the offset of frame i+1 relative to frame i, draw in real time on the host computer the camera motion track at the moment of frame i+1; go to step 9.
5. Use the Kalman filtering algorithm to predict the offset of frame i relative to frame i-1, then continue to the next step. The flow chart of the method is shown in Fig. 5. The prediction comprises the following sub-steps (a scalar code sketch follows the list):
1. From the offset estimate $\hat{X}_{k-1}$ at moment k-1, obtain the predicted offset $\hat{X}_k^-$ at moment k: $\hat{X}_k^- = a\hat{X}_{k-1} + w_{k-1}$, where $\hat{X}_{k-1}$, the offset estimate at moment k-1, is the estimated offset of frame i-1 relative to frame i-2; $w_{k-1}$ is the white Gaussian noise sample at moment k-1; a is the system parameter; the initial value of k is 6; and $\hat{X}_5$ is the offset of the 5th frame relative to the 4th frame obtained in step 3;
2. Compute the variance $P_k^-$ of the predicted offset $\hat{X}_k^-$ at moment k: $P_k^- = A P_{k-1} A^T + Q_{k-1}$, where $P_{k-1}$, the variance of the offset estimate at moment k-1, is the variance of the estimated offset of frame i-1 relative to frame i-2; $Q_{k-1}$ is the system noise variance at moment k-1; A is the system parameter matrix and $A^T$ its transpose;
3. Update the offset estimate $\hat{X}_k$ at moment k: $\hat{X}_k = \hat{X}_k^- + K_k (Z_k - C\hat{X}_k^-)$, where $Z_k$ is the measurement (location) matrix of frame i at moment k, $K_k = P_k^- C^T [C P_k^- C^T + R_k]^{-1}$, C is the observation matrix of the noisy measurement, and $R_k$ is the covariance matrix of the measurement noise of frame i at moment k;
4. Compute the variance $P_k$ of the offset estimate at moment k: $P_k = (I - K_k C) P_k^-$, where I is the identity matrix and $P_5 = 1$;
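Sub-steps 1-4 are the standard Kalman recursion applied per axis. The sketch below implements them as scalars; taking a = A = C = 1 and the noise variances q, r are illustrative assumptions, while the initial variance follows $P_5 = 1$ from the patent:

```python
# Scalar Kalman recursion per axis, following sub-steps 1-4 above.
# a = A = C = 1 and the noise variances q, r are assumptions; the
# initial variance p0 = 1 matches P5 = 1 in the patent.
class OffsetKalman:
    def __init__(self, x0: float, p0: float = 1.0,
                 q: float = 0.01, r: float = 0.1,
                 a: float = 1.0, c: float = 1.0):
        self.x, self.p = x0, p0
        self.q, self.r, self.a, self.c = q, r, a, c
        self.x_pred, self.p_pred = x0, p0

    def predict(self) -> float:
        # Sub-step 1: X_k^- = a * X_{k-1}  (the noise term has zero mean)
        self.x_pred = self.a * self.x
        # Sub-step 2: P_k^- = A * P_{k-1} * A^T + Q_{k-1}
        self.p_pred = self.a * self.p * self.a + self.q
        return self.x_pred

    def update(self, z: float) -> float:
        # Sub-step 3: K_k = P^- C [C P^- C + R]^-1, then correct X
        k = self.p_pred * self.c / (self.c * self.p_pred * self.c + self.r)
        self.x = self.x_pred + k * (z - self.c * self.x_pred)
        # Sub-step 4: P_k = (I - K_k C) P_k^-
        self.p = (1.0 - k * self.c) * self.p_pred
        return self.x
```

In use, one filter per axis would be initialised with the offset measured at the 5th frame; each later frame first calls predict() to obtain the offset used for cropping the overlap region, then update() with the offset actually measured from the matched features.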
6. From the predicted offset, determine the overlapping region of frames i and i-1, and extract the overlapping regions $A_i$ and $A_{i-1}$ from frames i and i-1 respectively. The overlapping region is defined as follows: as the camera moves uniformly, the computer uses the Kalman filter state model and the offsets along the x and y axes output for the 5th frame to predict, for the next moment, the offset Δx of frame i relative to frame i-1 along the x axis and the offset Δy along the y axis; the area they enclose is $S = (X - \Delta x)(Y - \Delta y)$, where X is the length of the input image and Y its width. A cropping sketch follows.
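The crop implied by $S = (X - \Delta x)(Y - \Delta y)$ can be sketched as below; the handling of negative (leftward or upward) shifts and the choice of which side of each frame to crop are assumptions of this sketch:

```python
# Crop the predicted overlap of two adjacent frames shifted by
# (dx, dy); its area is (X - |dx|) * (Y - |dy|). The sign handling
# for leftward/upward motion is an assumption.
import numpy as np

def crop_overlap_pair(prev: np.ndarray, cur: np.ndarray,
                      dx: float, dy: float):
    h, w = cur.shape[:2]
    dx, dy = int(round(dx)), int(round(dy))
    # Matching strips of the previous and current frames.
    if dx >= 0:
        cx_prev, cx_cur = (dx, w), (0, w - dx)
    else:
        cx_prev, cx_cur = (0, w + dx), (-dx, w)
    if dy >= 0:
        cy_prev, cy_cur = (dy, h), (0, h - dy)
    else:
        cy_prev, cy_cur = (0, h + dy), (-dy, h)
    a_prev = prev[cy_prev[0]:cy_prev[1], cx_prev[0]:cx_prev[1]]
    a_cur = cur[cy_cur[0]:cy_cur[1], cx_cur[0]:cx_cur[1]]
    return a_prev, a_cur
```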
7. Compute the feature point sets of $A_i$ and $A_{i-1}$ with the SUSAN algorithm; then compute the four-dimensional descriptor vector of each feature point with the SURF algorithm; next obtain the matched descriptor pairs between $A_i$ and $A_{i-1}$; finally, from the matched pairs, solve the offset of frame i relative to frame i-1 with the six-parameter affine model;
The prediction results are shown in Figs. 6(a)-6(d). The camera captured images of the ground; to make its movement easy to see, five cards numbered 1-5 were laid horizontally in order along a floor-tile line, and the camera was moved horizontally from left to right along the cards from 1 to 5. The black "+" marks are the extracted feature points. Because the first five frames do not use the Kalman filtering algorithm to predict the overlapping region, Figs. 6(a) and 6(b) show black marks covering the whole image, i.e. feature points extracted over the entire image. In Figs. 6(c) and 6(d) the extraction no longer takes the entire image as its region but is carried out inside the predicted overlapping region, so the black "+" marks are distributed over a sub-region of the image, indicating that the overlapping region predicted by the Kalman filter is effective.
8. From the offset of frame i relative to frame i-1, draw in real time on the host computer the camera motion track at the moment of frame i; the drawn track is shown in Fig. 7. The figure shows the host computer interface written for the invention: the window in the lower right corner shows the ground image captured by the camera during real movement; the grey area is the camera motion track drawn after carrying out the steps stated above; the solid black rectangle at the head of the track is the simulated camera. The ground image in the lower-right window clearly shows floor-tile lines inclined at about 45°, indicating that the camera is moving in a straight line toward the northeast at 45°; the track drawn by the host computer agrees with the actual path, showing that the invention is effective and feasible.
9. Judge whether to continue positioning; if so, return to step 2; otherwise, finish.

Claims (1)

1. An adaptive global motion estimation method, characterized by comprising the following steps:
(1) The camera collects images in real time, yielding the input image sequence;
(2) Extract the i-th frame of the input image sequence, where i starts at 1; when i ≤ 5, go to step (3); when i > 5, go to step (5);
(3) Calculate the offset of frame i+1 relative to frame i. Specifically: compute the feature point sets of frames i and i+1 with the SUSAN algorithm; then compute the four-dimensional descriptor vector of each feature point in each set with the SURF algorithm; next, from the descriptor vectors, obtain the matched descriptor pairs between frames i and i+1; finally, from the matched pairs, solve the offset of frame i+1 relative to frame i with the six-parameter affine model;
(4) From the offset of frame i+1 relative to frame i, draw in real time on the host computer the camera motion track at the moment of frame i+1; go to step (9);
(5) Use the Kalman filtering algorithm to predict the offset of frame i relative to frame i-1, then continue to the next step; the prediction comprises the following sub-steps:
1. From the offset estimate $\hat{X}_{k-1}$ at moment k-1, obtain the predicted offset $\hat{X}_k^-$ at moment k: $\hat{X}_k^- = a\hat{X}_{k-1} + w_{k-1}$, where $\hat{X}_{k-1}$, the offset estimate at moment k-1, is the estimated offset of frame i-1 relative to frame i-2; $w_{k-1}$ is the white Gaussian noise sample at moment k-1; a is the system parameter; the initial value of k is 6; and $\hat{X}_5$ is the offset of the 5th frame relative to the 4th frame obtained in step (3);
2. Compute the variance $P_k^-$ of the predicted offset $\hat{X}_k^-$ at moment k: $P_k^- = A P_{k-1} A^T + Q_{k-1}$, where $P_{k-1}$, the variance of the offset estimate at moment k-1, is the variance of the estimated offset of frame i-1 relative to frame i-2; $Q_{k-1}$ is the system noise variance at moment k-1; A is the system parameter matrix and $A^T$ its transpose;
3. Update the offset estimate $\hat{X}_k$ at moment k: $\hat{X}_k = \hat{X}_k^- + K_k (Z_k - C\hat{X}_k^-)$, where $Z_k$ is the measurement (location) matrix of frame i at moment k, $K_k = P_k^- C^T [C P_k^- C^T + R_k]^{-1}$, C is the observation matrix of the noisy measurement, and $R_k$ is the covariance matrix of the measurement noise of frame i at moment k;
4. Compute the variance $P_k$ of the offset estimate at moment k: $P_k = (I - K_k C) P_k^-$, where I is the identity matrix and $P_5 = 1$;
(6) From the predicted offset, determine the overlapping region of frames i and i-1, and extract the overlapping regions $A_i$ and $A_{i-1}$ from frames i and i-1 respectively;
(7) Compute the feature point sets of $A_i$ and $A_{i-1}$ with the SUSAN algorithm; then compute the four-dimensional descriptor vector of each feature point with the SURF algorithm; next obtain the matched descriptor pairs between $A_i$ and $A_{i-1}$; finally, from the matched pairs, solve the offset of frame i relative to frame i-1 with the six-parameter affine model;
(8) From the offset of frame i relative to frame i-1, draw in real time on the host computer the camera motion track at the moment of frame i;
(9) Judge whether to continue positioning; if so, return to step (2); otherwise, finish.
CN201410144161.8A 2014-04-10 2014-04-10 Self-adaptation global motion estimation method Pending CN103905826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410144161.8A CN103905826A (en) 2014-04-10 2014-04-10 Self-adaptation global motion estimation method

Publications (1)

Publication Number Publication Date
CN103905826A true CN103905826A (en) 2014-07-02

Family

ID=50996948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410144161.8A Pending CN103905826A (en) 2014-04-10 2014-04-10 Self-adaptation global motion estimation method

Country Status (1)

Country Link
CN (1) CN103905826A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102202164A (en) * 2011-05-20 2011-09-28 长安大学 Motion-estimation-based road video stabilization method
CN102629329A (en) * 2012-02-28 2012-08-08 北京工业大学 Personnel indoor positioning method based on adaptive SIFI (scale invariant feature transform) algorithm
CN103198491A (en) * 2013-01-31 2013-07-10 北京工业大学 Indoor visual positioning method
CN103426182A (en) * 2013-07-09 2013-12-04 西安电子科技大学 Electronic image stabilization method based on visual attention mechanism

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
R. E. KALMAN: "A New Approach to Linear Filtering and Prediction Problems", JOURNAL OF BASIC ENGINEERING *
CUI JI et al.: "Research on multi-moving-target tracking algorithms based on Kalman filtering", IMAGE TECHNOLOGY *
PENG DINGCONG: "The basic principle and application of Kalman filtering", SOFTWARE GUIDE *
CAO LUGUANG: "Research on an indoor visual positioning method based on adaptive global motion estimation", CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY *
ZHU JIANJUN et al.: "A dynamic landslide model integrating geological and mechanical information with monitoring data", ACTA GEODAETICA ET CARTOGRAPHICA SINICA *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107734335A (en) * 2014-09-30 2018-02-23 华为技术有限公司 Image prediction method and relevant apparatus
US10827194B2 (en) * 2014-09-30 2020-11-03 Huawei Technologies Co., Ltd. Picture prediction method and related apparatus
CN107734335B (en) * 2014-09-30 2020-11-06 华为技术有限公司 Image prediction method and related device
CN104634342A (en) * 2015-01-16 2015-05-20 梁二 Indoor person navigation positioning system and method based on camera shooting displacement
CN104915966A (en) * 2015-05-08 2015-09-16 上海交通大学 Frame rate up-conversion motion estimation method and frame rate up-conversion motion estimation system based on Kalman filtering
CN104915966B (en) * 2015-05-08 2018-02-09 上海交通大学 Frame rate up-conversion method for estimating and system based on Kalman filtering
CN111091025A (en) * 2018-10-23 2020-05-01 阿里巴巴集团控股有限公司 Image processing method, device and equipment
CN111091025B (en) * 2018-10-23 2023-04-18 阿里巴巴集团控股有限公司 Image processing method, device and equipment

Similar Documents

Publication Publication Date Title
CN110125928B (en) Binocular inertial navigation SLAM system for performing feature matching based on front and rear frames
CN109544636B (en) Rapid monocular vision odometer navigation positioning method integrating feature point method and direct method
CN107481270B (en) Table tennis target tracking and trajectory prediction method, device, storage medium and computer equipment
US10546387B2 (en) Pose determination with semantic segmentation
CN103325112B (en) Moving target method for quick in dynamic scene
US9888235B2 (en) Image processing method, particularly used in a vision-based localization of a device
CN101593022B (en) Method for quick-speed human-computer interaction based on finger tip tracking
CN102629329B (en) Personnel indoor positioning method based on adaptive SIFI (scale invariant feature transform) algorithm
Li et al. Vision-aided inertial navigation for resource-constrained systems
CN102915545A (en) OpenCV(open source computer vision library)-based video target tracking algorithm
CN101916446A (en) Gray level target tracking algorithm based on marginal information and mean shift
CN102063727B (en) Covariance matching-based active contour tracking method
CN109708658B (en) Visual odometer method based on convolutional neural network
CN104794737A (en) Depth-information-aided particle filter tracking method
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
CN103905826A (en) Self-adaptation global motion estimation method
CN110533661A (en) Adaptive real-time closed-loop detection method based on characteristics of image cascade
CN112529962A (en) Indoor space key positioning technical method based on visual algorithm
CN103679740B (en) ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
CN103500454A (en) Method for extracting moving target of shaking video
CN112652021A (en) Camera offset detection method and device, electronic equipment and storage medium
CN115376034A (en) Motion video acquisition and editing method and device based on human body three-dimensional posture space-time correlation action recognition
CN115619826A (en) Dynamic SLAM method based on reprojection error and depth estimation
Zhu et al. PairCon-SLAM: Distributed, online, and real-time RGBD-SLAM in large scenarios

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140702

RJ01 Rejection of invention patent application after publication