CN104463859B - A real-time video stitching method based on tracking specified points - Google Patents

A real-time video stitching method based on tracking specified points

Info

Publication number
CN104463859B
CN104463859B (application CN201410709348.8A)
Authority
CN
China
Prior art keywords: point, image, video, splicing, trace
Prior art date
Legal status
Active
Application number
CN201410709348.8A
Other languages
Chinese (zh)
Other versions
CN104463859A (en)
Inventor
向永红
张国勇
姜梁
孙浩惠
马祥森
郭茜
王家星
Current Assignee
China Academy of Aerospace Electronics Technology Co Ltd
Original Assignee
China Academy of Aerospace Electronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Academy of Aerospace Electronics Technology Co Ltd
Priority to CN201410709348.8A priority Critical patent/CN104463859B/en
Publication of CN104463859A publication Critical patent/CN104463859A/en
Application granted granted Critical
Publication of CN104463859B publication Critical patent/CN104463859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30181: Earth observation

Abstract

The invention provides a real-time video stitching method based on tracking specified points. The method selects pixels directly from the image as trace points, and controls the computation time and the quality of the stitching through the number and distribution of the trace points. Compared with the prior art, the invention has two advantages: there is no feature point extraction step, since pixels are chosen directly from the image as trace points; and the number and distribution of the trace points can be controlled, which gives partial control over computation time and stitching quality.

Description

A real-time video stitching method based on tracking specified points
Technical field
The invention belongs to the field of video processing, and in particular to a method that computes the image registration matrix by tracking specified points, and to the real-time video stitching realised with it.
Background technology
Video stitching joins video frames together into one large image, so that a panorama of the area covered by the video can be seen. UAV video stitching is one of the basic steps of UAV intelligence processing and the basis of follow-on steps such as map product generation. Video stitching constantly faces a trade-off between stitching quality and real-time performance: methods that achieve good stitching quality are hard to run in real time, while fast methods stitch continuously only for a short time and produce poor results. Small UAVs are easily disturbed by the external environment, so their attitude changes violently; at the same time the cameras they carry produce imagery of relatively low quality. Real-time video stitching for small UAVs has therefore always been a difficult problem.
Frequency-domain stitching methods, such as the Fourier-Mellin algorithm and phase correlation, transform the image into the frequency domain with the Fourier transform before stitching. The mainstream registration algorithms today work in the spatial domain and fall into two broad classes: intensity-based and feature-based. The main steps of feature-point-based stitching are: image preprocessing, feature point extraction, feature point matching, mismatch removal, image registration (computing the transformation matrix), image fusion, and generation of the stitched image. The core step is image registration, i.e. computing the transformation matrix of the new frame relative to the reference frame. Feature point algorithms are usually required to be insensitive to noise and invariant under translation, rotation, scale change, and affine transformation. Typical feature point detectors include Harris corners, SIFT, and SURF. Good feature detectors tend to be computationally expensive and hard to run in real time. For this reason, some researchers have proposed stitching algorithms based on improved Harris or SIFT matching, or have accelerated the computation with GPUs; others have proposed stitching by tracking corners, SIFT features, or SURF features to perform the registration computation.
Summary of the invention
The technical problem solved by the invention: compared with traditional feature point registration, registration by tracking feature points narrows the search range of feature matching and reduces computational complexity. Even so, real-time stitching remains difficult when feature points are tracked, chiefly because feature point extraction (searching the image for salient points) still demands a large amount of computation. By designating points at fixed positions in the image as trace points, the feature extraction step is removed, the computation is greatly reduced, and real-time stitching is finally achieved.
The technical solution of the invention: the main workflow (see Fig. 1) is as follows. (1) Specify the trace points, choosing a fixed number of points uniformly and automatically from the image. (2) Compute the positions of the specified trace points in the current frame with the LK sparse optical flow method, then compute the positions of the resulting points back in the already-stitched image with the same method. (3) Compute the distance between each specified point and the point obtained by backward tracking, sort the distances in ascending order, select the first n points, and compute the homography matrix from them. (4) Fuse the images and display the stitched result.
Compared with the prior art and with existing stitching methods, the proposed stitching method has two distinguishing features:
(1) there is no feature point extraction step; pixels are chosen directly from the image as trace points;
(2) the number and distribution of the trace points can be controlled, which gives partial control over computation time and stitching quality.
Experiments on UAV videos shot with many different airframes and payloads, in different places and at different times of day, show that the proposed method achieves real-time stitching with good quality.
Brief description of the drawings
Fig. 1 is the basic computation flow of the video stitching method of the invention;
Fig. 2 illustrates the trace point designation scheme of the invention;
Fig. 3 compares stitching quality: (a) SURF, (b) SIFT, (c) the proposed specified-point tracking method.
Specific embodiment
The invention is described further below with reference to the drawings, covering trace point designation, LK sparse optical flow tracking, homography computation, image fusion, and comparative analysis of the experimental results.
1. Trace point designation
Let the image width be w and the height h, and choose m × m points uniformly from the image as trace points (see Fig. 2; the crossing points shown there are the designated trace points). Since boundary points are not used as trace points, the set of selected points is {(i·w/(m+1), j·h/(m+1)) | i, j = 1, ..., m}: neighbouring trace points are w/(m+1) apart horizontally and h/(m+1) apart vertically. Compared with the usual computed feature points, this designation spreads the trace points more widely, so that they never lie too close together or cluster too densely, which favours an accurate registration matrix; it also removes the complex computation of feature selection.
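For illustration, the m × m grid of trace points described above can be generated in a few lines (a NumPy sketch; the function name is mine, not from the patent):

```python
import numpy as np

def specify_trace_points(w, h, m):
    """Uniform m x m grid of trace points, excluding the image border.

    Neighbouring points are w/(m+1) apart horizontally and h/(m+1)
    apart vertically, as stated in the text.
    """
    xs = np.arange(1, m + 1) * w / (m + 1)
    ys = np.arange(1, m + 1) * h / (m + 1)
    gx, gy = np.meshgrid(xs, ys)          # default 'xy' indexing
    return np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float32)

# 28 x 28 points on a 576 x 384 frame, sizes used in the experiments below
pts = specify_trace_points(576, 384, 28)
```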
Tests on videos of 576 × 384 and 1024 × 768 pixels showed that tracking with 26 × 26 or fewer trace points is not satisfactory, while with 30 × 30 or more trace points the tracking is good but the computation takes too long. The experiments indicate that 28 × 28 trace points is a good compromise.
2. LK sparse optical flow tracking
Optical flow computation generally falls into four classes: gradient-based methods (such as Horn-Schunck and Lucas-Kanade, i.e. the LK algorithm), matching-based methods, energy-based methods, and phase-based methods. Optical flow methods rest on the following assumptions:
(1) Brightness constancy: a trace point has the same colour value in frame f_{i-1} and frame f_i (for a grey-scale image, the same brightness). That is, if a point p = (x, y)^T on the image has grey value I(x, y, t_{i-1}) at time t = t_{i-1} (frame f_s is taken at time t_s), then after the interval Δt = t_i - t_{i-1} the grey value of the corresponding point is I(x + Δx, y + Δy, t + Δt), so I(x, y, t) = I(x + Δx, y + Δy, t + Δt). Let u = dx/dt and v = dy/dt denote the components of the optical flow of the point in the x and y directions. Expanding the right-hand side with Taylor's formula and dropping terms of second order and above yields the optical flow constraint equation

I_x u + I_y v + I_t = 0

where I_x = ∂I/∂x, I_y = ∂I/∂y, and I_t = ∂I/∂t are the partial derivatives of the image grey value with respect to x, y, and t.
(2) Small displacement: the pixel displacement between the two images is small.
(3) Spatial consistency: neighbouring pixels move consistently.
(4) Motion perpendicular to the local gradient cannot be recovered: the optical flow constraint equation contains the two unknowns u and v, which obviously cannot be determined uniquely from a single equation. This is the aperture problem; to solve it, an additional constraint is needed.
Because UAV video has a high overlap ratio, it largely satisfies the four assumptions above, so LK tracking is reasonable here. The LK algorithm adds a local constraint: it assumes that every point in a small neighbourhood Ω centred on the point p has the same optical flow, and gives different weights to different points in the region. Computing the flow then reduces to minimising

Σ_{(x,y)∈Ω} W²(x, y) · (∇I(x, y) · V + I_t)²

where Ω is a small neighbourhood centred on p, ∇ is the gradient operator, V = (u, v) is the optical flow, and the window function W(x, y) is the weight of the point (x, y) in the region: the closer to p, the higher the weight.
To compute the optical flow field of videos with more violent motion, two approaches are available: (1) enlarge the search range Ω; (2) combine the LK method with a Gaussian pyramid. The first is easy to implement but introduces a large amount of computation and cannot meet the real-time requirement. The second uses a coarse-to-fine strategy: the image is decomposed into several resolutions, and the result obtained at the coarse scale serves as the initial value for the next scale. It is an effective technique for handling large motion speeds and large attitude changes. Three Gaussian pyramid levels were used in the experiments, which handles fast motion well.
Write the m × m trace points designated in the previous frame f_{i-1} as two-dimensional column vectors (x_s, y_s)^T with s ∈ {1, 2, ..., m × m}. Using the pyramid layering combined with LK, the flow in f_i of the trace point (x_s, y_s)^T of f_{i-1} is computed, starting from the point (x_s, y_s)^T in f_i. Let the resulting coordinates of the trace point in f_i be (x̂_s, ŷ_s)^T. In the same way, taking the point (x̂_s, ŷ_s)^T in f_i as the trace point, its flow in f_{i-1} is computed, giving after this backward tracking the coordinates (x̃_s, ỹ_s)^T of the point in f_{i-1}. If the tracking is accurate, the Euclidean distance between (x_s, y_s)^T and (x̃_s, ỹ_s)^T will be small.
3. Homography computation
From the trace points (x_s, y_s)^T designated in the previous frame f_{i-1} and the corresponding points (x̃_s, ỹ_s)^T in f_{i-1} obtained after forward and backward tracking, the error of each point can be computed as

error_s = ||(x_s, y_s)^T - (x̃_s, ỹ_s)^T||

The smaller error_s is, the more accurately the point was tracked; it also measures how well the trace point (x_s, y_s)^T of f_{i-1} matches the trace point (x̂_s, ŷ_s)^T of f_i.
The points are sorted by their error values, and the n matched pairs with the smallest error are selected to compute the homography matrix between the two frames. Sorting by error effectively rejects mismatched points.
Error sorting rejects a large number of mismatched points, but to obtain a more accurate homography matrix the matched pairs are additionally filtered with the Random Sample Consensus (RANSAC) algorithm. RANSAC remains effective when many mismatches are present; its drawbacks are heavy computation and low speed. The resulting homography matrix describes the transformation from image f_i to image f_{i-1} and is denoted H_i^{i-1}.
The current image f_i is transformed into the viewpoint of the first frame f_1, completing a panorama stitched from the first viewpoint, so that the viewer sees the panorama as seen from the first frame. The transformation H_i^1 between the first frame f_1 and the current image f_i is computed as follows.
In principle, all the matrices computed so far could simply be multiplied together to obtain the transformation from image f_i to image f_1. However, every matrix multiplication runs into the precision limit of double-precision floating point, so accumulating many transformation matrices produces a large error. To eliminate the computation error brought by floating point precision, the transformation between f_i and f_1 is computed as follows: H_{i-1}^1 is obtained from the correspondence between the four corner coordinates of f_{i-1} and f_1 in the global coordinate system; H_i^{i-1} is computed as described above; and then H_i^1 = H_{i-1}^1 · H_i^{i-1}.
With H_i^1 the image f_i can be warped onto the panorama. This method effectively eliminates the error brought by floating point precision, so that many more frames can be stitched continuously.
4. Image fusion
After the trace point matching between f_i and f_{i-1} is complete and the transformation matrix H_i^1 has been obtained, a suitable image fusion algorithm must be chosen to blend the two images in order to obtain a seamless stitched image. Image fusion is a special kind of data fusion: image data about the same target, collected through multiple source channels, are combined by image processing and computer techniques into one image that carries more information, increasing the amount of image information. There are many fusion methods, such as direct averaging, weighted averaging, distance weighting, and multi-resolution fusion; the invention realises image fusion with weighted averaging. To eliminate holes in the mosaic, the corresponding pixel is usually looked up backwards, which requires computing the inverse matrix (H_i^1)^{-1} of H_i^1. The basic steps of image fusion are:
(1) use H_i^1 to compute the coordinates of the four corners of f_i in g (the large mosaic stitched so far), giving the bounding box B of f_i in g;
(2) for each pixel (x, y)^T in B, use (H_i^1)^{-1} to compute the corresponding pixel (x', y')^T in f_i; because the resulting coordinates are not necessarily integers, the invention takes the value of the nearest pixel;
(3) if the brightness value I_B(x, y) of a point (x, y) in B satisfies I_B(x, y) ≠ 0, fuse by updating I_B(x, y) to α·I_{f_i}(x', y') + β·I_B(x, y), with α = 0.4 and β = 0.6 in the experiments; if I_B(x, y) = 0, update I_B(x, y) to I_{f_i}(x', y').
In step (2), linear interpolation or more elaborate methods could also be used, but they would increase the computation time. Nearest-neighbour sampling may cause colour changes in the stitched image; the fusion of step (3) compensates for this to some extent.
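Steps (1)-(3) can be illustrated for a grey-scale mosaic with a direct reference loop (names are mine; a real implementation would vectorise this):

```python
import numpy as np

def fuse_into_mosaic(mosaic, frame, H_inv, bbox, alpha=0.4, beta=0.6):
    """Backward mapping with nearest-neighbour sampling and the
    weighted-average rule of step (3): blend where the mosaic already
    has a value, copy where it is still empty.  A slow grey-scale
    reference loop, not an optimised implementation."""
    x0, y0, x1, y1 = bbox                  # bounding box B of f_i in the mosaic
    fh, fw = frame.shape
    for y in range(y0, y1):
        for x in range(x0, x1):
            p = H_inv @ np.array([x, y, 1.0])
            xf = int(round(p[0] / p[2]))   # nearest pixel in the frame
            yf = int(round(p[1] / p[2]))
            if 0 <= xf < fw and 0 <= yf < fh:
                if mosaic[y, x] != 0:
                    mosaic[y, x] = alpha * frame[yf, xf] + beta * mosaic[y, x]
                else:
                    mosaic[y, x] = frame[yf, xf]
    return mosaic
```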
5. Comparative analysis of experimental results
The proposed specified-point tracking method was compared with stitching methods based on SIFT and SURF (using the OpenCV library) on Windows 7 with an Intel i7 CPU and 4 GB of memory, giving the results shown in Table 1.
As the table shows, with the given parameters the computation time of specified-point tracking stitching is about 1/4 that of SIFT and 1/6 that of SURF, reaching about 17 frames per second. With frame skipping, stable real-time stitching over long periods is achieved. Because the inter-frame overlap of UAV video is high, taking every other frame has essentially no effect on the stitching result.
Fig. 3 compares the mosaics obtained with the three registration algorithms; the mosaics in the left column are good, while those in the right column all show problems. The registration matrix obtained with SURF (Fig. 3(a)) has a large error, so the same road forks after stitching (circle 1). The SIFT mosaic (Fig. 3(b)) shows splitting (circle 2) and ghosting (circle 4), and part of a road disappears (circle 3). The mosaic based on specified-point tracking (Fig. 3(c)) shows a break (circle 5) but is acceptable overall. More importantly, because the invention adopts the new homography computation strategy, it does not suffer the severe late-stage deformation of the SURF and SIFT mosaics.
Although the invention tracks designated points rather than selected salient feature points, and uses a rather simple fusion algorithm, it still works well because: (1) the number of trace points can be specified, and after RANSAC it is usually reduced further; with about the 40 best trace points, a fairly good registration matrix can be computed; (2) specified-point tracking greatly increases the computation speed, so during real-time stitching fewer frames need to be skipped as registration becomes faster; two or more consecutive frames are generally never dropped, so the inter-frame displacement stays small, which reduces the RANSAC iterations and yields a better registration matrix. Good real-time video stitching is thereby achieved.
Table 1

Claims (8)

1. A real-time video stitching method based on tracking specified points, characterised in that the video stitching method directly selects pixels from the image as trace points, and controls the computation time and the quality of the stitching through the number and distribution of the trace points,
the method comprising the following steps:
1) specifying the trace points: a certain number of points are chosen automatically and uniformly from the image as trace points;
2) computing the positions of the specified trace points in the current image frame with the LK sparse optical flow method, and computing the positions of the resulting points in the already-stitched image with LK sparse optical flow backward tracking;
3) computing the distance between each specified point and the corresponding point obtained by backward tracking, sorting the distances in ascending order, selecting the first n points, and computing the homography matrix from them;
4) fusing the images and displaying the stitched result.
2. The video stitching method according to claim 1, characterised in that in step 1): if the image width is w and the height is h, m × m points are chosen uniformly from the image as trace points, the horizontal distance between neighbouring trace points being w/(m+1) and the vertical distance h/(m+1).
3. The video stitching method according to claim 2, characterised in that the number of trace points is 28 × 28.
4. The video stitching method according to claim 1, characterised in that step 2) is specifically: the m × m trace points designated in the previous frame f_{i-1} are written as two-dimensional column vectors (x_s, y_s)^T with s ∈ {1, 2, ..., m × m}; using the pyramid layering combined with LK, the flow in f_i of the trace point (x_s, y_s)^T of f_{i-1} is computed, starting from the point (x_s, y_s)^T in f_i; the resulting coordinates of the trace point in f_i are (x̂_s, ŷ_s)^T; in the same way, taking the point (x̂_s, ŷ_s)^T in f_i as the trace point, its flow in f_{i-1} is computed, giving after backward tracking the coordinates (x̃_s, ỹ_s)^T of the point in f_{i-1}.
5. The video stitching method according to claim 1, characterised in that step 3) is specifically:
from the trace points (x_s, y_s)^T designated in the previous frame f_{i-1} and the corresponding points (x̃_s, ỹ_s)^T in f_{i-1} obtained after forward and backward tracking, the error of each point is computed as error_s = ||(x_s, y_s)^T - (x̃_s, ỹ_s)^T||;
the points are sorted by their error values, and the n matched pairs with the smallest error are selected to compute the homography matrix between the two frames.
6. The video stitching method according to claim 5, characterised in that the matched point pairs are further filtered with the Random Sample Consensus (RANSAC) algorithm, and the error brought by floating point precision is eliminated.
7. The video stitching method according to claim 6, characterised in that the method of eliminating the error brought by floating point precision is: H_{i-1}^1 is computed from the correspondence between the four corner coordinates of f_{i-1} and f_1 in the global coordinate system; H_i^{i-1} is computed; and thus H_i^1 = H_{i-1}^1 · H_i^{i-1}.
8. The video stitching method according to claim 7, characterised in that the basic steps of the image fusion are:
(a) using H_i^1, compute the coordinates of the four corners of f_i in the large mosaic g stitched so far, giving the bounding box B of f_i in g;
(b) for each pixel (x, y)^T in B, use (H_i^1)^{-1} to compute the corresponding pixel (x', y')^T in f_i, taking the value of the nearest pixel;
(c) if the brightness value I_B(x, y) of a point (x, y) in the bounding box B satisfies I_B(x, y) ≠ 0, fuse by updating I_B(x, y) to α·I_{f_i}(x', y') + β·I_B(x, y), with α = 0.4 and β = 0.6; if I_B(x, y) = 0, update I_B(x, y) to I_{f_i}(x', y'),
where (H_i^1)^{-1} denotes the inverse matrix of H_i^1.
CN201410709348.8A 2014-11-28 2014-11-28 A real-time video stitching method based on tracking specified points Active CN104463859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410709348.8A CN104463859B (en) 2014-11-28 2014-11-28 A real-time video stitching method based on tracking specified points


Publications (2)

Publication Number Publication Date
CN104463859A CN104463859A (en) 2015-03-25
CN104463859B true CN104463859B (en) 2017-07-04

Family

ID=52909841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410709348.8A Active CN104463859B (en) 2014-11-28 2014-11-28 A real-time video stitching method based on tracking specified points

Country Status (1)

Country Link
CN (1) CN104463859B (en)


Also Published As

Publication number Publication date
CN104463859A (en) 2015-03-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant