CN110390685A - Feature point tracking method based on event camera - Google Patents

Feature point tracking method based on event camera

Info

Publication number
CN110390685A
CN110390685A
Authority
CN
China
Prior art keywords
characteristic point
module
event
moment
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910672162.2A
Other languages
Chinese (zh)
Other versions
CN110390685B (en)
Inventor
史殿习
李凯月
李睿豪
伽晗
王明坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201910672162.2A priority Critical patent/CN110390685B/en
Publication of CN110390685A publication Critical patent/CN110390685A/en
Application granted granted Critical
Publication of CN110390685B publication Critical patent/CN110390685B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Abstract

The invention discloses a feature point tracking method based on an event camera, aiming at improving the tracking accuracy of feature points. The technical scheme is to construct an event-camera-based feature point tracking system consisting of a data acquisition module, an initialization module, an event set selection module, a matching module, a feature point update module and a template edge update module. The initialization module extracts feature points and an edge map from image frames; the event set selection module selects an event set S for each feature point from the event stream around that feature point; the matching module matches S against the template edges around the feature points to compute G_k, the optical flow set of the n feature points at time t_k; the feature point update module computes from G_k the position set FD_{k+1} of the n feature points at time t_{k+1}; the template edge update module uses IMU data to update PBD_k, obtaining PBD_{k+1}, the position set of the template edges corresponding to the n feature points at time t_{k+1}. The method improves the accuracy of tracking feature points on an event stream and prolongs the average tracking time of the feature points.

Description

Feature point tracking method based on an event camera
Technical field
The present invention relates to the field of computer image processing, and in particular to tracking feature points in images with an event camera.
Background art
SLAM (Simultaneous Localization and Mapping) has in recent years received extensive research attention as an important branch of robotics. SLAM attempts to solve the following problem: as a robot moves through an unknown environment, how can it determine its own trajectory from observations of the environment while simultaneously building a map of that environment? SLAM technology is the sum of the many techniques involved in achieving this goal. A complete SLAM system mainly comprises a front-end visual odometry part and a back-end optimization part. The visual odometry part estimates the robot state, and estimation methods fall broadly into two categories: feature-point methods and direct methods. Feature-point methods are currently the mainstream approach to robot state estimation: feature points are first extracted in the images, the feature points are matched between different frames, and operations are then performed on the matched feature point pairs to estimate the pose of the camera. Commonly used point features include Harris corners and SIFT, SURF, ORB and HOG features. Unlike feature-point methods, direct methods skip the feature extraction step and estimate the robot state directly from the grayscale information in the images, but these methods are still immature and their robustness is poor.
However, whether feature-point methods or direct methods are used, standard cameras still suffer from accuracy and robustness problems in extreme environments. Extreme cases mainly include two situations. The first is high-speed camera motion: when images are acquired with a standard camera and the camera moves too fast, the captured images exhibit motion blur. The second is high-dynamic-range scenes: the illumination in the scene varies strongly and the brightness change between consecutive frames is large. In these extreme cases, using a standard camera seriously degrades the performance of both direct and feature-point methods. In addition, a standard camera cannot provide an accurate feature point trajectory between frames, and in static scenes it generates redundant data, which not only wastes storage resources but also consumes a large amount of additional computation in subsequent image processing.
The emergence of the biologically inspired event camera overcomes the above limitations of the standard camera. A typical representative of the event camera is the DVS (Dynamic Vision Sensor), proposed by Patrick Lichtsteiner et al. in the article "A 128 × 128 120dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor" published in IEEE Journal of Solid-State Circuits, vol. 43, no. 2, pp. 566-576, 2008, and produced by iniVation AG of Switzerland. Unlike a standard camera, an event camera outputs only pixel-level brightness changes: in the pixel array, when the intensity change at a pixel exceeds a threshold, that pixel independently generates an output, called an "event". Thus, unlike a standard camera, the data output by an event camera is a spatio-temporal event stream. Because of its low latency, the event camera has a great advantage in fast-moving scenes, and it also avoids recording redundant data in slowly changing scenes. In 2014 a new event camera, DAVIS (Dynamic and Active-pixel Vision Sensor), was proposed by Christian Brandli et al. in the article "A 240 × 180 130dB 3 μs Latency Global Shutter Spatiotemporal Vision Sensor" published in IEEE Journal of Solid-State Circuits, vol. 49, no. 10, pp. 2333-2344, and is likewise produced by iniVation AG of Switzerland. DAVIS combines a standard camera with the event camera DVS and can output image frames and an event stream simultaneously.
Regarding feature point tracking with an event camera, Zhu et al., in the article "Event-based feature tracking with probabilistic data association" (2017, published at the International Conference on Robotics and Automation (ICRA), pp. 4465-4470), proposed a method that extracts feature points directly from the event stream and computes the feature point optical flow from the event stream, thereby realizing feature point tracking. Because this method uses only the event stream to compute the optical flow, its tracking accuracy is not high. Kueng et al., in the article "Low-Latency Visual Odometry using Event-based Feature Tracks" (2016, published at the 2016 International Conference on Intelligent Robots and Systems (IROS), Inspec accession number 16504091), proposed a feature point tracking method that combines image frames and the event stream, performing geometric registration between the information in the image frames and the event stream to track feature points. With this method the position of a feature point is updated every time an event is received, which increases the computational load, and the method introduces additional errors, so the feature point tracking accuracy degrades and long-duration tracking cannot be achieved.
Therefore, existing methods for feature point tracking with an event camera still suffer from low tracking accuracy and short tracking duration.
Summary of the invention
The technical problem to be solved by the present invention is to provide a feature point tracking method based on an event camera (the event camera used is DAVIS) that both improves the accuracy of tracking feature points on the event stream and prolongs the average feature point tracking time.
To solve this problem, the invention proposes a feature point tracking method based on an event camera (the event camera used is DAVIS). The invention matches the edges in the image frames against the events in the event stream to estimate the optical flow of the feature points; because information of both forms is used simultaneously, the computed optical flow is more accurate, which improves the feature point tracking accuracy. The invention also introduces an IMU (Inertial Measurement Unit) to update the positions of the edges, so that the edge positions remain more accurate during tracking, which prolongs the average feature point tracking time.
Specific technical solution is:
In the first step, the feature point tracking system based on the event camera is constructed. The system consists of a data acquisition module, an initialization module, an event set selection module, a matching module, a feature point update module and a template edge update module.
The data acquisition module is connected to the initialization module, the event set selection module and the template edge update module. It downloads data from the public event camera dataset "The Event-Camera Dataset and Simulator" of the University of Zurich (the dataset was collected with a DAVIS and contains image frames, an event stream and IMU data), sends the acquired image frames to the initialization module, sends the event stream to the event set selection module, and sends the IMU data to the template edge update module.
The initialization module is connected to the data acquisition module, the event set selection module and the matching module. It receives image frames from the data acquisition module, extracts feature points and an edge map from the image frames, and obtains the positions of the feature points and the positions of the template edges around the feature points (the edges around a feature point in the edge map are called the template edge of the corresponding feature point). It sends the feature point positions to the event set selection module and the positions of the template edges around the feature points to the matching module.
The event set selection module is connected to the data acquisition module, the initialization module, the feature point update module and the matching module. It receives the event stream from the data acquisition module, receives the feature point positions from the initialization module (on the first iteration) or from the feature point update module (from the second iteration onwards), and receives the feature point optical flow from the feature point update module (from the second iteration onwards). It selects an event set for each feature point from the event stream around that feature point, and sends the position of each feature point and the corresponding event set to the matching module.
The matching module is connected to the initialization module, the event set selection module, the feature point update module and the template edge update module. It receives the position and the corresponding event set of each feature point from the event set selection module, and receives the positions of the template edges around the feature points from the initialization module (on the first iteration) or from the template edge update module (from the second iteration onwards). It matches each feature point's event set against the template edge around that feature point, computes the optical flow of each feature point, and sends the position of each feature point, the positions of the template edges around the feature points, and the optical flow of the feature points to the feature point update module.
The feature point update module is connected to the matching module, the template edge update module and the event set selection module. It receives the feature point positions, the positions of the template edges around the feature points, and the feature point optical flow from the matching module, and computes the new feature point positions from the optical flow. It sends the template edge positions and the new feature point positions to the template edge update module, sends the optical flow and the new feature point positions to the event set selection module, and outputs the new feature point positions (i.e., the feature point tracking result) for the user to inspect.
The template edge update module is connected to the data acquisition module, the feature point update module and the matching module. It receives the positions of the template edges around the feature points and the new feature point positions from the feature point update module, receives IMU data from the data acquisition module, updates the positions of the template edges around the feature points using the IMU data, and sends the updated template edge positions to the matching module.
In the second step, the data acquisition module obtains image frames, the event stream and IMU data from the event camera dataset "The Event-Camera Dataset and Simulator".
In the third step, the feature point tracking system based on the event camera tracks the feature points in the event stream obtained by the data acquisition module over the time interval from t_0 to t_N. During tracking, the interval [t_0, t_N] is divided into a series of sub-intervals [t_0, t_1], ..., [t_k, t_{k+1}], ..., [t_{N-1}, t_N]; the (k+1)-th sub-interval is denoted [t_k, t_{k+1}]. N is the number of sub-intervals; its value is determined by the time span of the event stream, with N ≥ 1. The tracking process is as follows:
3.1 Let the time index k = 0;
3.2 The initialization module initializes. The specific steps are as follows:
3.2.1 The initialization module extracts feature points from the image frame obtained from the data acquisition module using the Harris corner detection method (proposed in 1988 by C. G. Harris et al. in the article "A combined corner and edge detector" published at the Alvey Vision Conference, vol. 15, p. 50). The extracted feature points are put into a set FD = {f_1, ..., f_i, ..., f_n}, where f_i is the i-th detected feature point and n is the number of feature points. The position of a feature point is regarded as a function of time; the positions of the n feature points in FD at time t_0 are put into the feature point position set FD_0 = {f_1(t_0), ..., f_i(t_0), ..., f_n(t_0)}, where f_i(t_0) is the position of feature point f_i at time t_0. FD_0 is sent to the event set selection module.
3.2.2 The initialization module extracts an edge map from the image frame obtained from the data acquisition module using the Canny edge detection method (proposed by John Canny et al. in the article "A computational approach to edge detection", Readings in Computer Vision, 1987, pp. 184-203); each image frame has a corresponding edge map.
3.2.3 The initialization module selects the template edge corresponding to each of the n feature points in FD (a template edge is represented as the set of all its template points), and sends PBD_0, the position set at time t_0 of the template edges corresponding to the n feature points in FD, to the matching module. The method is as follows:
3.2.3.1 Let i = 1;
3.2.3.2 Centered on f_i(t_0), the position of feature point f_i at time t_0, select a rectangular region H_{f_i} around f_i(t_0) of size s × s, i.e. H_{f_i} has length and width s, where s ranges from 20 to 30 pixels. The edges of the edge map detected in 3.2.2 that lie inside H_{f_i} serve as the template edge of f_i, and the pixels on the template edge are the template points of f_i. Define the set of template points inside H_{f_i} as PB_{f_i} = {p_1, ..., p_j, ..., p_{m_i}}, which is the template edge corresponding to f_i, and put PB_{f_i} into the template edge set PB (formula (1)). Here p_j is the j-th template point of f_i; the position of p_j is regarded as a function of time, with p_j(t_0) denoting the position of p_j at time t_0, which lies inside the rectangular region H_{f_i}; m_i is the number of template points in PB_{f_i}.
3.2.3.3 Let i = i + 1;
3.2.3.4 If i ≤ n, go to 3.2.3.2. Otherwise the template edges corresponding to the n feature points in FD have been obtained, forming the template edge set PB = {PB_{f_1}, ..., PB_{f_i}, ..., PB_{f_n}}. Define PBD_{f_i}(t_0) = {p_1(t_0), ..., p_j(t_0), ..., p_{m_i}(t_0)}, the set of positions at time t_0 of the template points in PB_{f_i}, and let PBD_0 = {PBD_{f_1}(t_0), ..., PBD_{f_n}(t_0)}, the position set at time t_0 of the template points of the n template edges in PB. Send PBD_0 to the matching module and go to the fourth step.
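The template-point selection of steps 3.2.3.1 to 3.2.3.4 can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the edge map is assumed to be already available as a collection of edge-pixel coordinates (e.g. from a Canny detector), and the function simply keeps the edge pixels inside the s × s window H_{f_i} around each feature point.

```python
def select_template_points(edge_pixels, feature_positions, s=25):
    """For each feature point, collect the edge pixels inside the s-by-s
    window centred on it; these pixels are that feature's template points.

    edge_pixels: iterable of (x, y) edge-map coordinates.
    feature_positions: list of (x, y) feature point positions at t_0.
    Returns PB, a list with one template-point list per feature point.
    """
    half = s / 2.0
    pb = []
    for fx, fy in feature_positions:
        pts = [(x, y) for (x, y) in edge_pixels
               if abs(x - fx) <= half and abs(y - fy) <= half]
        pb.append(pts)
    return pb
```

The number of template points m_i is then simply the length of each inner list.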
In the fourth step, the event set selection module receives the event stream from the data acquisition module and, depending on the value of k, receives data from either the initialization module or the feature point update module. It selects an event set for each of the n feature points from the event stream around that feature point, forming S, and sends FD_k and the event set S to the matching module. The method is:
4.1 If k = 0, the event set selection module receives the event stream from the data acquisition module and receives from the initialization module FD_k, the position set of the n feature points at time t_k, FD_k = {f_1(t_k), ..., f_i(t_k), ..., f_n(t_k)} (at this point FD_0 = {f_1(t_0), ..., f_i(t_0), ..., f_n(t_0)}). Let t_1 = t_0 + 1, with time in seconds, and go to 4.3;
4.2 If k ≥ 1, the event set selection module receives the event stream from the data acquisition module and receives from the feature point update module FD_k, the position set of the n feature points at time t_k, and G_{k-1}, the optical flow set of the n feature points at time t_{k-1}, G_{k-1} = {v_1(t_{k-1}), ..., v_i(t_{k-1}), ..., v_n(t_{k-1})}, where v_i(t_{k-1}) denotes the optical flow of feature point f_i over the sub-interval [t_{k-1}, t_k], obtained from the feature point update module. The size of the sub-interval [t_k, t_{k+1}] is estimated according to formula (2):

t_{k+1} = t_k + 3 / ((1/n) Σ_{i=1}^{n} ||v_i(t_{k-1})||)    (2)

where the denominator is the average optical flow magnitude of all feature points over the sub-interval [t_{k-1}, t_k]. The physical meaning of formula (2) is that the time the feature points needed, on average, to move 3 pixels in the previous sub-interval is used as the estimate of the size of the current sub-interval [t_k, t_{k+1}]. Go to 4.3;
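The interval estimate of formula (2) can be sketched as below. This is a hedged reconstruction that assumes optical flow is given in pixels per second and uses the 3-pixel travel budget stated in the description; the function name and signature are illustrative, not from the patent.

```python
import math

def estimate_next_time(t_k, prev_flows, travel_px=3.0):
    """Estimate t_{k+1} as the time at which features moving at the average
    speed observed over [t_{k-1}, t_k] have covered travel_px pixels.

    prev_flows: list of (vx, vy) flow vectors in pixels/second.
    """
    mean_speed = sum(math.hypot(vx, vy) for vx, vy in prev_flows) / len(prev_flows)
    return t_k + travel_px / mean_speed
```

Fast motion (large mean flow) thus shortens the sub-interval, while slow motion lengthens it, keeping the expected per-interval displacement roughly constant.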
4.3 For each feature point position in FD_k, the position set of the n feature points at time t_k, select the corresponding event set around that position.
4.3.1 Let i = 1;
4.3.2 Select from the event stream the events satisfying formula (3) and put them into S_{f_i}(t_k), the event set corresponding to feature point f_i at time t_k, and put S_{f_i}(t_k) into the event set S:

S_{f_i}(t_k) = { e_d | x_d ∈ H_{f_i}, t_k ≤ t_d ≤ t_{k+1} }    (3)

S_{f_i}(t_k) denotes the set of events in a three-dimensional spatio-temporal region whose spatial extent is the rectangular region H_{f_i} around f_i(t_k), the position of f_i at time t_k, and whose temporal extent is the interval [t_k, t_{k+1}]. e_d denotes the d-th event in S_{f_i}(t_k); e_d comes from the event stream and lies in the spatio-temporal region specified in formula (3), with d = 1, 2, ..., z_i, where z_i is the number of events in S_{f_i}(t_k). An event e_d is represented as e_d = (x_d, t_d), where x_d is the coordinate of e_d in the pixel coordinate system and t_d is the time at which e_d occurred; x_d ∈ H_{f_i} means the pixel coordinate of e_d lies inside H_{f_i}.
4.3.3 Let i = i + 1;
4.3.4 If i ≤ n, go to 4.3.2. Otherwise the event sets corresponding to the n feature points in FD at time t_k have been obtained, S = {S_{f_1}(t_k), ..., S_{f_n}(t_k)}; send FD_k and S to the matching module and go to the fifth step.
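The spatio-temporal selection of formula (3) amounts to a window filter over the event stream. A minimal sketch, assuming events are (x, y, t) tuples and the window H_{f_i} is an axis-aligned square of side s centred on the feature:

```python
def select_event_set(events, feature_pos, t_k, t_k1, s=25):
    """Keep the events whose pixel lies inside the s-by-s window around the
    feature and whose timestamp falls in [t_k, t_k1] (cf. formula (3))."""
    fx, fy = feature_pos
    half = s / 2.0
    return [(x, y, t) for (x, y, t) in events
            if t_k <= t <= t_k1 and abs(x - fx) <= half and abs(y - fy) <= half]
```

In practice this filter would be applied once per feature point to build the set S described above.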
In the fifth step, the matching module receives FD_k and S from the event set selection module. If k = 0, it receives PBD_k, the position set of the template edges corresponding to the n feature points, from the initialization module and goes to the sixth step; if k ≥ 1, it receives PBD_k from the template edge update module and goes to the sixth step.
In the sixth step, the matching module matches S against the template edges around the feature points and computes G_k, the optical flow set of the n feature points at time t_k. It sends the feature point position set FD_k, the template edge position set PBD_k and the optical flow set G_k to the feature point update module. The specific method is:
6.1 Let i = 1;
6.2 Construct a matching error function for feature point f_i.

For each event e_d in S_{f_i}(t_k), correct the position of e_d to time t_k according to formula (4):

x'_d = x_d - v_i(t_k)(t_d - t_k)    (4)

where x'_d is the computed position of e_d at time t_k, called the motion-corrected position of e_d, and v_i(t_k) is the optical flow of feature point f_i over the interval [t_k, t_{k+1}]. (The multiplication symbol is omitted below where no ambiguity arises.)

The matching error function is constructed as follows:

ε = Σ_{d=1}^{z_i} Σ_{j=1}^{m_i} r_dj ||x'_d - p_j(t_k)||²    (5)

where ε is the error and p_j(t_k) is the position of template point p_j at time t_k. r_dj is the probability that event e_d was generated by template point p_j, i.e. the probability that e_d matches p_j, and is used here as a weight: the larger r_dj is, the larger the share of the total error contributed by the distance between e_d and p_j at time t_k. Double vertical bars denote the modulus of the enclosed vector, likewise below. r_dj is computed as follows:

r_dj = ||x'_d - p_j(t_k)||² / Σ_{j=1}^{m_i} ||x'_d - p_j(t_k)||²    (6)

The physical meaning of this formula is that the numerator is the squared distance between the motion-corrected position of e_d and template point p_j, the denominator is the sum of the squared distances between the motion-corrected position of e_d and the m_i template points corresponding to f_i, and their quotient is taken as the probability that e_d matches template point p_j.
6.3 Minimize the matching error function constructed in step 6.2 using the EM-ICP algorithm (a matching algorithm proposed in 2002 by Sébastien Granger et al. in the article "Multi-scale EM-ICP: A fast and robust approach for surface registration" published at the European Conference on Computer Vision, pp. 418-432) and solve for the optimal optical flow v_i(t_k). The solution procedure is as follows:
6.3.1 Initialize the optical flow v_i(t_k);
6.3.2 Compute r_dj by formula (6);
6.3.3 Update the optical flow according to formula (7), i.e. re-estimate v_i(t_k) by minimizing the matching error function (5) with the weights r_dj held fixed.
6.3.4 Compute Δv, the change in the optical flow, take the updated value as v_i(t_k), and put v_i(t_k) into G_k, the optical flow set at time t_k.
6.3.5 If Δv ≤ σ, the final optimal result v_i(t_k) has been obtained; go to 6.4. If Δv > σ, go to 6.3.2. Here σ is the threshold on the change in the optical flow; its value ranges from 0.01 to 0.1, in pixels per second (pixel/s).
6.4 Let i = i + 1;
6.5 If i ≤ n, go to 6.2. Otherwise the optical flow of the n feature points at time t_k has been obtained, i.e. the optical flow set G_k = {v_1(t_k), ..., v_i(t_k), ..., v_n(t_k)}. Send FD_k, PBD_k and G_k to the feature point update module and go to the seventh step.
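The alternation of steps 6.3.2 to 6.3.5 can be illustrated with the simplified sketch below. It is not the patent's exact solver: instead of the soft weights r_dj of formula (6) it uses a hard nearest-template-point assignment (a common EM-ICP simplification), and the flow update is the closed-form least-squares minimizer of the resulting error. Events are assumed to be (x, y, t) tuples and template points (x, y) positions at time t_k.

```python
import math

def estimate_flow(events, template_pts, t_k, v0=(0.0, 0.0), sigma=0.05, max_iter=100):
    """Alternate (a) motion-correcting events to t_k with the current flow
    and assigning each to its nearest template point, and (b) refitting the
    flow by least squares, until the flow changes by at most sigma (px/s)."""
    vx, vy = v0
    for _ in range(max_iter):
        num_x = num_y = den = 0.0
        for (x, y, t) in events:
            tau = t - t_k
            cx, cy = x - vx * tau, y - vy * tau       # motion-corrected position
            px, py = min(template_pts,
                         key=lambda p: (cx - p[0]) ** 2 + (cy - p[1]) ** 2)
            num_x += tau * (x - px)                   # terms of the closed form
            num_y += tau * (y - py)                   # v = sum(tau*(x-p)) / sum(tau^2)
            den += tau * tau
        new_vx, new_vy = num_x / den, num_y / den
        change = math.hypot(new_vx - vx, new_vy - vy)
        vx, vy = new_vx, new_vy
        if change <= sigma:
            break
    return vx, vy
```

The refit minimizes Σ_d ||(x_d - v·τ_d) - p_{j(d)}||² over v with the assignments fixed, which is the role formula (7) plays in the patent's soft-assignment version.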
In the seventh step, the feature point update module receives from the matching module FD_k, the position set of the n feature points at time t_k, PBD_k, the position set of the template edges corresponding to the n feature points at time t_k, and G_k, the optical flow set of the n feature points. It computes from the optical flow FD_{k+1}, the position set of the n feature points at time t_{k+1}, sends G_k and FD_{k+1} to the event set selection module, and sends FD_{k+1} and PBD_k to the template edge update module. The method is:
7.1 Let i = 1;
7.2 Compute f_i(t_{k+1}), the position of f_i at time t_{k+1}, according to formula (8), and put f_i(t_{k+1}) into the set FD_{k+1}:

f_i(t_{k+1}) = f_i(t_k) + v_i(t_k)(t_{k+1} - t_k)    (8)

The optical flow multiplied by the elapsed time is the distance moved by feature point f_i from time t_k to time t_{k+1}.
7.3 Let i = i + 1;
7.4 If i ≤ n, go to 7.2. Otherwise the positions of the n feature points at time t_{k+1} have been obtained, giving the set FD_{k+1} = {f_1(t_{k+1}), ..., f_i(t_{k+1}), ..., f_n(t_{k+1})}. Send G_k and FD_{k+1} to the event set selection module, and send FD_{k+1} and PBD_k to the template edge update module; also display FD_{k+1}, the position set of the n feature points at time t_{k+1}, or store it in a destination file for the user to inspect. Go to the eighth step.
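Step 7.2 is a plain Euler step: position plus flow times elapsed time. As a sketch (positions and flows as (x, y) tuples, one flow per feature):

```python
def advance_features(positions, flows, t_k, t_k1):
    """FD_{k+1} from FD_k and G_k, per formula (8):
    f_i(t_{k+1}) = f_i(t_k) + v_i(t_k) * (t_{k+1} - t_k)."""
    dt = t_k1 - t_k
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(positions, flows)]
```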
In the eighth step, the template edge update module receives IMU data from the data acquisition module and receives from the feature point update module FD_{k+1}, the position set of the n feature points at time t_{k+1}, and PBD_k, the position set of the template edges around the feature points at time t_k. It updates the positions in PBD_k using the IMU data, obtains PBD_{k+1}, the position set of the template edges corresponding to the n feature points at time t_{k+1}, and sends PBD_{k+1} to the matching module. The specific method is as follows:
8.1 Let i = 1;
8.2 Update the position at time t_{k+1} of the template edge corresponding to feature point f_i. The method is:
8.2.1 Let j = 1;
8.2.2 Define F as the point in three-dimensional space corresponding to f_i, and P_j as the point in three-dimensional space corresponding to template point p_j; F and P_j are expressed as three-dimensional coordinates (x, y, z). Template point p_j is expressed in homogeneous form (x, y, 1), where the first two dimensions are the abscissa and ordinate of p_j in the pixel coordinate system. Then the following equations hold.

At time t_k:

λ_{P_j}(t_k) K⁻¹ p_j(t_k) = P_j,  λ_F(t_k) K⁻¹ f_i(t_k) = F    (9)

At time t_{k+1}:

λ_{P_j}(t_{k+1}) K⁻¹ p_j(t_{k+1}) = R P_j + t,  λ_F(t_{k+1}) K⁻¹ f_i(t_{k+1}) = R F + t    (10)

Here K is the intrinsic matrix of the event camera, a parameter supplied with the camera; R is the rotation matrix of the event camera from time t_k to time t_{k+1}, and t is its translation vector over the same interval, both computed from the acquired IMU data; λ_{P_j}(t_k) and λ_{P_j}(t_{k+1}) are the depths of P_j in the camera coordinate system at times t_k and t_{k+1}, and λ_F(t_k) and λ_F(t_{k+1}) are the corresponding depths of F.
8.2.3 Subtracting the two equations in formula (10) from each other gives the position of template point p_j relative to f_i at time t_{k+1}; the calculation formula is:

λ_{P_j}(t_{k+1}) K⁻¹ p_j(t_{k+1}) - λ_F(t_{k+1}) K⁻¹ f_i(t_{k+1}) = R (P_j - F)    (11)

Because template point p_j lies in a close neighbourhood of feature point f_i in the pixel coordinate system, the corresponding point P_j in space is equally close to F; in this field, in such a situation P_j and F can be considered to have the same depth in the camera coordinate system at time t_{k+1}, i.e. λ_{P_j}(t_{k+1}) = λ_F(t_{k+1}). Substituting formula (9) into formula (11) and simplifying gives:

λ_F(t_{k+1}) K⁻¹ (p_j(t_{k+1}) - f_i(t_{k+1})) = R (λ_{P_j}(t_k) K⁻¹ p_j(t_k) - λ_F(t_k) K⁻¹ f_i(t_k))    (12)

Considering that p_j and f_i are expressed in homogeneous coordinates, further simplification of formula (12) gives formula (13) for the relative position of the template point at time t_{k+1}. The symbol Nor(·) in formula (13) denotes homogeneous normalization, i.e. the coordinates in the brackets are converted to homogeneous form. Formula (13) yields Δp_j(t_{k+1}), the updated position of template point p_j relative to f_i at time t_{k+1}, and formula (14),

p_j(t_{k+1}) = f_i(t_{k+1}) + Δp_j(t_{k+1})    (14)

gives p_j(t_{k+1}), the position of template point p_j at time t_{k+1}, which is put into PBD_{f_i}(t_{k+1}), the position set at time t_{k+1} of the template edge around feature point f_i.
8.2.4 Let j = j + 1;
8.2.5 If j ≤ m_i, go to 8.2.2. Otherwise PBD_{f_i}(t_{k+1}) = {p_1(t_{k+1}), ..., p_j(t_{k+1}), ..., p_{m_i}(t_{k+1})} has been obtained; put PBD_{f_i}(t_{k+1}) into PBD_{k+1}, the position set of the template edges around the n feature points at time t_{k+1}, and go to 8.3;
8.3 Let i = i + 1;
8.4 If i ≤ n, go to 8.2. Otherwise PBD_{k+1} = {PBD_{f_1}(t_{k+1}), ..., PBD_{f_n}(t_{k+1})} has been obtained; send PBD_{k+1} to the matching module and go to the ninth step.
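The IMU-based template update of steps 8.2.2 and 8.2.3 warps each template point with the camera intrinsics K and the inter-interval rotation R. The sketch below implements only the rotation part of the warp, p' = Nor(K R K⁻¹ p); the translation term of formula (10) is dropped because it requires the unknown depth, so this is a hedged simplification rather than the patent's full formulas (13) and (14). Matrices are plain 3 × 3 nested lists to keep the example dependency-free.

```python
def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def warp_template_point(p, K, K_inv, R):
    """Rotation-only warp of a pixel p = (x, y): Nor(K R K^{-1} [x, y, 1])."""
    H = mat_mul(mat_mul(K, R), K_inv)   # rotation homography
    x, y, w = mat_vec(H, [p[0], p[1], 1.0])
    return (x / w, y / w)               # Nor(): homogeneous normalization
```

With R equal to the identity the warp leaves the pixel unchanged, which is a convenient sanity check before feeding in an IMU-integrated rotation.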
In the ninth step, let k = k + 1.
In the tenth step, if k < N, go to the fourth step; otherwise terminate.
The following technical effects can be achieved using the present invention:
1. The invention simultaneously uses the two data modalities output by DAVIS, image frames and the event stream. It first extracts feature points with the Harris corner detection method and extracts an edge map with the Canny edge detection method, then selects a spatio-temporal window from the event stream and matches the events in the window against the template points on the edge map using the EM-ICP algorithm, estimates the optical flow of the feature points, and then tracks the feature points by the optical flow. Such tracking makes full use of the edge information in the image frames while exploiting the asynchrony of the event stream, improving the accuracy of feature point tracking while guaranteeing asynchronous tracking.
2. the present invention uses the position of IMU more new template point in template renewal, template side is improved during tracking Tracking accuracy, thus improve calculate light stream accuracy, further improve the tracking accuracy of characteristic point.
Event camera data collection " the The Event-Camera Dataset and that the present invention is issued in University of Zurich The characteristic point for having carried out experimental verification on Simulator " (event camera data collection and simulator), and having been proposed with Zhu et al. with The method that track method, Kueng et al. are proposed has done comparative experiments, the experimental results showed that the present invention not only increases feature point tracking Precision, and extend the feature point tracking time.
Description of the Drawings
Fig. 1 is the overall flow chart of the present invention;
Fig. 2 is the logical structure diagram of the event-camera-based feature point tracking system constructed in the first step of the present invention;
Fig. 3 is the comparison experiment of average tracking error between the present invention and existing tracking methods;
Fig. 4 is the comparison experiment of average tracking time between the present invention and existing tracking methods.
Specific Embodiments
Fig. 1 is the overall flow chart of the present invention. As shown in Fig. 1, the present invention comprises the following steps:
First step: construct the feature point tracking system based on the event camera. As shown in Fig. 2, the feature point tracking system consists of a data acquisition module, an initialization module, an event set selection module, a matching module, a feature point update module and a template edge update module.
The data acquisition module is connected with the initialization module, the event set selection module and the template edge update module. The data acquisition module downloads data from the public event camera dataset "The Event-Camera Dataset and Simulator" of the University of Zurich, sends the acquired image frames to the initialization module, sends the event stream to the event set selection module, and sends the IMU data to the template edge update module.
The initialization module is connected with the data acquisition module, the event set selection module and the matching module. The initialization module receives image frames from the data acquisition module, extracts feature points and an edge map from the image frames, obtains the positions of the feature points and the positions of the template edges around the feature points, sends the positions of the feature points to the event set selection module, and sends the positions of the template edges around the feature points to the matching module.
The event set selection module is connected with the data acquisition module, the initialization module, the feature point update module and the matching module. The event set selection module receives the event stream from the data acquisition module, receives the positions of the feature points from the initialization module (in the first loop) or from the feature point update module (from the second loop on), receives the optical flows of the feature points from the feature point update module (from the second loop on), selects the event set of each feature point from the event stream around that feature point, and sends the position of each feature point together with the corresponding event set to the matching module.
The matching module is connected with the initialization module, the event set selection module, the feature point update module and the template edge update module. The matching module receives the positions of the feature points and the corresponding event sets from the event set selection module, receives the positions of the template edges around the feature points from the initialization module (in the first loop) or from the template edge update module (from the second loop on), matches the event set of each feature point against the template edge around that feature point, finds the optical flow of each feature point, and sends the positions of the feature points, the positions of the template edges around the feature points and the optical flows of the feature points to the feature point update module.
The feature point update module is connected with the matching module, the template edge update module and the event set selection module. The feature point update module receives the positions of the feature points, the positions of the template edges around the feature points and the optical flows of the feature points from the matching module, computes the new positions of the feature points from their optical flows, sends the positions of the template edges around the feature points and the new positions of the feature points to the template edge update module, sends the optical flows and the new positions of the feature points to the event set selection module, and outputs the new positions of the feature points (i.e. the feature point tracking result) for the user to inspect.
The template edge update module is connected with the data acquisition module, the feature point update module and the matching module. The template edge update module receives the positions of the template edges around the feature points and the new positions of the feature points from the feature point update module, receives the IMU data from the data acquisition module, updates the positions of the template edges around the feature points with the IMU data, and sends the updated template edge positions to the matching module.
Second step: the data acquisition module obtains image frames, the event stream and IMU data from the event camera dataset "The Event-Camera Dataset and Simulator".
Third step: use the feature point tracking system based on the event camera to track the feature points in the event stream obtained by the data acquisition module over the period from time t_0 to time t_N. During tracking, the interval [t_0, t_N] is divided into a series of sub-intervals [t_0, t_1], ..., [t_k, t_k+1], ..., [t_N-1, t_N]; the (k+1)-th sub-interval is denoted [t_k, t_k+1]. N is the number of sub-intervals; it is determined by the time span of the event stream, with N ≥ 1. The tracking process is as follows:
3.1 Let the time index k = 0;
3.2 The initialization module initializes, with the following specific steps:
3.2.1 The initialization module extracts feature points from the image frame obtained by the data acquisition module using the Harris corner detection method, and puts the extracted feature points into a set FD = {f_1, ..., f_i, ..., f_n}, where f_i is the i-th detected feature point and n is the number of feature points. Regarding the position of a feature point as a function of time, the positions of the n feature points in FD at time t_0 are placed in the feature position set FD_0 = {f_1(t_0), ..., f_i(t_0), ..., f_n(t_0)}, where f_i(t_0) is the position of feature point f_i at time t_0. FD_0 is sent to the event set selection module.
3.2.2 The initialization module extracts an edge map from the image frame obtained by the data acquisition module using the Canny edge detection method; each image frame corresponds to one edge map.
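For illustration only, the corner-response part of step 3.2.1 can be sketched in NumPy as below. The function name, the 3×3 smoothing window and the constant k = 0.04 are assumptions of this sketch, not part of the patented method; in practice library routines (e.g. OpenCV's Harris detector for 3.2.1 and Canny operator for 3.2.2) would be used.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response via finite differences (illustrative stand-in
    for the Harris corner detection of step 3.2.1)."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # gradients along rows (y) and columns (x)

    def box(a):
        # 3x3 box smoothing of the structure-tensor entries
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    # Harris response: large positive at corners, negative on edges, ~0 on flat areas
    return det - k * trace ** 2
```

Feature points would then be taken as local maxima of the response above a threshold; Canny edge pixels of the same frame supply the template-point candidates of step 3.2.3.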
3.2.3 The initialization module selects the template edges corresponding to the n feature points in FD (a template edge is represented by the set of all template points on it), and sends the position set PBD_0 of the template edges of the n feature points in FD at time t_0 to the matching module. The method is as follows:
3.2.3.1 Let i = 1;
3.2.3.2 Centered on the position f_i(t_0) of feature point f_i at time t_0, select a rectangular region B_i around f_i(t_0) of size s × s, i.e. the length and width of B_i are both s; the range of s is 20 to 30 pixels. The edges inside B_i in the edge map detected in 3.2.2 serve as the template edge of f_i, and the pixels on the template edge are the template points of f_i. The set PB_i of template points inside B_i is the template edge of f_i; PB_i is put into the template edge set PB.
Here p_j is the j-th template point of f_i. Regarding the position of p_j as a function of time, p_j(t_0) denotes the position of p_j at time t_0 and lies inside the rectangular region B_i at time t_0; m_i denotes the number of template points in PB_i.
3.2.3.3 Let i = i + 1;
3.2.3.4 If i ≤ n, go to 3.2.3.2; otherwise the template edge set PB made up of the template edges of the n feature points in FD has been obtained. Define PBD_i^0 as the set of positions of the template points of PB_i at time t_0, and let PBD_0 be the set of positions at time t_0 of the template points of the n template edges in PB. Send PBD_0 to the matching module and go to the fourth step.
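A minimal sketch of the template-point selection of step 3.2.3, assuming the edge map is a binary array whose nonzero pixels are Canny edges; the helper name and the clipping behavior at image borders are assumptions of this sketch:

```python
import numpy as np

def select_template_points(edge_map, feature_pos, s=25):
    """Collect edge pixels inside the s-by-s window B_i around a feature
    position as its template points p_j (step 3.2.3.2, hypothetical helper)."""
    h = s // 2
    x, y = int(round(feature_pos[0])), int(round(feature_pos[1]))
    # Clip the window to the image bounds
    x0, x1 = max(x - h, 0), min(x + h + 1, edge_map.shape[1])
    y0, y1 = max(y - h, 0), min(y + h + 1, edge_map.shape[0])
    ys, xs = np.nonzero(edge_map[y0:y1, x0:x1])
    # Template points in full-image pixel coordinates, as an (m_i, 2) array
    if len(xs) == 0:
        return np.empty((0, 2), int)
    return np.stack([xs + x0, ys + y0], axis=1)
```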
Fourth step: the event set selection module receives the event stream from the data acquisition module and, depending on the value of k, receives further data from the initialization module or from the feature point update module; it selects the event set S of the feature points from the event stream around the n feature points and sends FD_k and the event set S of the feature points to the matching module. The method is:
4.1 If k = 0, the event set selection module receives the event stream from the data acquisition module and receives the position set FD_0 of the n feature points at time t_k from the initialization module; let t_1 = t_0 + 1 (unit: seconds) and go to 4.3;
4.2 If k ≥ 1, the event set selection module receives the event stream from the data acquisition module, and receives from the feature point update module the position set FD_k of the n feature points at time t_k and the optical flow set G_k-1 of the n feature points at time t_k-1, where g_i^{k-1} denotes the optical flow of feature point f_i at time t_k-1; it estimates the size of the sub-interval [t_k, t_k+1] and computes t_k+1 according to formula (2):
Here g_i^{k-1} denotes the optical flow of feature point f_i on the sub-interval [t_k-1, t_k], obtained from the feature point update module, and the average of these flows is the average optical flow of all feature points on [t_k-1, t_k]. The physical meaning of formula (2) is that the time the feature points need in order to move 3 pixels on average at the speed of the previous sub-interval is taken as the estimate of the size of the current sub-interval [t_k, t_k+1]. Go to 4.3;
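The elided formula (2) can, from its stated physical meaning, be reconstructed as t_k+1 = t_k + 3 / (average optical-flow speed of the previous sub-interval). A sketch under that assumption (the function name and `pixels` parameter are hypothetical):

```python
import numpy as np

def estimate_next_time(t_k, flows, pixels=3.0):
    """Estimate t_{k+1} as the time in which the features, moving at the
    average optical-flow speed of the previous sub-interval, travel
    `pixels` pixels (reconstruction of the elided formula (2))."""
    speeds = np.linalg.norm(np.asarray(flows, float), axis=1)  # |g_i^{k-1}| in px/s
    mean_speed = speeds.mean()
    return t_k + pixels / mean_speed
```

Note that this makes the sub-interval length adaptive: fast motion shortens the interval, slow motion lengthens it, which keeps the number of events per window roughly stable.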
4.3 For each position in the position set FD_k of the n feature points at time t_k, select the corresponding event set around it:
4.3.1 Let i = 1;
4.3.2 Select from the event stream the events that satisfy formula (3), put these events into the event set S_i^k corresponding to feature point f_i at time t_k, and put S_i^k into the event set S:
S_i^k denotes the set of events in a three-dimensional spatio-temporal region whose spatial range is the rectangular region B_i around the position f_i(t_k) of feature point f_i at time t_k and whose time range is the interval [t_k, t_k+1]. e_d denotes the d-th event in S_i^k, d = {1, 2, ..., z_i}, where z_i is the number of events in S_i^k. An event e_d is represented as (x_d, t_d), where x_d is the coordinate of event e_d in the pixel coordinate system, t_d is the time at which event e_d occurred, and the pixel coordinate x_d of e_d lies inside B_i.
4.3.3 Let i = i + 1;
4.3.4 If i ≤ n, go to 4.3.2; otherwise the event set S corresponding to the n feature points in FD at time t_k has been obtained; send FD_k and S to the matching module and go to the fifth step.
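The spatio-temporal windowing of formula (3) can be sketched as follows, assuming events are stored as rows (x, y, t); the helper name and array layout are assumptions of this sketch:

```python
import numpy as np

def select_event_set(events, feature_pos, t_k, t_k1, s=25):
    """Step 4.3.2: keep events whose pixel coordinate lies in the s-by-s
    window B_i around the feature and whose timestamp lies in [t_k, t_{k+1}]."""
    xy = events[:, :2]    # event pixel coordinates x_d
    ts = events[:, 2]     # event timestamps t_d
    half = s / 2.0
    in_window = np.all(np.abs(xy - feature_pos) <= half, axis=1)
    in_time = (ts >= t_k) & (ts <= t_k1)
    return events[in_window & in_time]
```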
Fifth step: the matching module receives FD_k and S from the event set selection module. If k = 0, it receives the position set PBD_k of the template edges of the n feature points from the initialization module and goes to the sixth step; if k ≥ 1, it receives the position set PBD_k of the template edges of the n feature points from the template edge update module and goes to the sixth step.
Sixth step: the matching module matches S against the template edges around the feature points, finds the optical flow set G_k of the n feature points at time t_k, and sends the position set FD_k of the n feature points, the position set PBD_k of the template edges of the n feature points and the optical flow set G_k to the feature point update module. The specific method is:
6.1 Let i = 1;
6.2 Construct the matching error function for feature point f_i:
For an event e_d in S_i^k, correct the position of e_d back to time t_k by formula (4):
Here x'_d denotes the computed position of event e_d at time t_k, called the motion-corrected position of event e_d, and g_i^k denotes the optical flow of feature point f_i over the interval [t_k, t_k+1]. The symbol · denotes multiplication, as it does in the formulas below; where no ambiguity arises, the symbol is omitted in some of the following formulas. For convenience of notation a shorthand symbol is also defined.
The matching error function is constructed as formula (5):
Here ε denotes the error and p_j(t_k) is the position of template point p_j at time t_k. r_dj denotes the probability that event e_d was generated by template point p_j, i.e. the probability that e_d matches p_j; r_dj serves as a weight. The calculation formula of r_dj is formula (6):
6.3 Minimize the matching error function constructed in step 6.2 with the EM-ICP algorithm; the solution yields the optimal optical flow g_i^k. The solution procedure is:
6.3.1 Initialize the optical flow;
6.3.2 Compute r_dj by formula (6);
6.3.3 Update the optical flow by formula (7);
6.3.4 Compute the change of the optical flow between iterations, take the updated flow as current, and put g_i^k into the optical flow set G_k at time t_k;
6.3.5 If the change of the optical flow is smaller than σ, the final optimal result g_i^k has been obtained; go to 6.4. Otherwise go to 6.3.2. σ is the change threshold of the optical flow; its value range is 0.01 to 0.1, in pixels per second.
6.4 Let i = i + 1;
6.5 If i ≤ n, go to 6.2; otherwise the optical flows of the n feature points at time t_k have been obtained, i.e. the optical flow set G_k at time t_k; send FD_k, PBD_k and G_k to the feature point update module and go to the seventh step.
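The EM-ICP loop of step 6.3 can be sketched as below. Formulas (4)-(7) are elided in the text, so the motion correction x'_d = x_d - g·(t_d - t_k), the Gaussian soft-assignment weight r_dj and the closed-form weighted least-squares flow update used here are standard reconstructions that may differ in detail from the patented formulas; the function name and the `bandwidth` parameter are assumptions of this sketch.

```python
import numpy as np

def estimate_flow_em_icp(events, template_pts, t_k, sigma_stop=0.05,
                         bandwidth=2.0, max_iter=50):
    """EM-ICP-style optical-flow estimation for one feature (step 6.3 sketch).
    events: (N, 3) rows (x, y, t); template_pts: (M, 2) template points p_j."""
    xd, td = events[:, :2], events[:, 2]
    dt = (td - t_k)[:, None]                 # per-event elapsed time t_d - t_k
    g = np.zeros(2)                          # 6.3.1: initialize the flow
    for _ in range(max_iter):
        xc = xd - g * dt                     # motion-corrected event positions x'_d
        # E-step: r_dj = probability that event e_d matches template point p_j
        d2 = ((xc[:, None, :] - template_pts[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        r = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
        # M-step (6.3.3): weighted least-squares update of the flow
        diff = xd[:, None, :] - template_pts[None, :, :]   # x_d - p_j
        num = (r[:, :, None] * diff * dt[:, :, None]).sum(axis=(0, 1))
        den = (r * dt ** 2).sum()
        g_new = num / max(den, 1e-12)
        if np.linalg.norm(g_new - g) < sigma_stop:  # 6.3.5: convergence test
            return g_new
        g = g_new
    return g
```

With a single template point the weights collapse to 1 and the update is an exact least-squares fit, which makes the behavior easy to check.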
Seventh step: the feature point update module receives from the matching module the position set FD_k of the n feature points at time t_k, the position set PBD_k of the template edges of the n feature points at time t_k, and the optical flow set G_k of the n feature points; it computes the position set FD_k+1 of the n feature points at time t_k+1 from the optical flow, sends G_k and FD_k+1 to the event set selection module, and sends FD_k+1 and PBD_k to the template edge update module. The method is:
7.1 Let i = 1;
7.2 Compute the position f_i(t_k+1) of f_i at time t_k+1 and put f_i(t_k+1) into the set FD_k+1:
The optical flow multiplied by the elapsed time gives the distance moved by feature point f_i from time t_k to time t_k+1.
7.3 Let i = i + 1;
7.4 If i ≤ n, go to 7.2; otherwise the positions of the n feature points at time t_k+1 have been obtained, forming the set FD_k+1 = {f_1(t_k+1), ..., f_i(t_k+1), ..., f_n(t_k+1)}. Send G_k and FD_k+1 to the event set selection module and send FD_k+1 and PBD_k to the template edge update module; also display the position set FD_k+1 of the n feature points at time t_k+1 or store it in a result file for the user to inspect, then go to the eighth step.
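The position update of step 7.2 is simply the optical flow multiplied by the elapsed sub-interval; a one-function sketch (the helper name is hypothetical):

```python
def update_feature_position(f_tk, flow, t_k, t_k1):
    """Step 7.2: advance a feature by its optical flow (px/s) times the
    elapsed sub-interval, giving f_i(t_{k+1})."""
    dt = t_k1 - t_k
    return (f_tk[0] + flow[0] * dt, f_tk[1] + flow[1] * dt)
```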
Eighth step: the template edge update module receives the IMU data from the data acquisition module, receives from the feature point update module the position set FD_k+1 of the n feature points at time t_k+1 and the position set PBD_k of the template edges around the feature points at time t_k, updates the positions in PBD_k with the IMU data, obtains the position set PBD_k+1 of the template edges of the n feature points at time t_k+1, and sends PBD_k+1 to the matching module. The specific method is as follows:
8.1 Let i = 1;
8.2 Update the position at time t_k+1 of the template edge corresponding to feature point f_i. The method is:
8.2.1 Let j = 1;
8.2.2 Define F as the point in three-dimensional space corresponding to f_i, and P_j as the point in three-dimensional space corresponding to template point p_j; F and P_j are expressed as three-dimensional coordinates (x, y, z). Template point p_j is expressed in homogeneous coordinate form (x, y, 1), the first two dimensions being the abscissa and ordinate of p_j in the pixel coordinate system. Then the following equations hold:
At time t_k, formula (9);
At time t_k+1, formula (10).
Here K is the intrinsic matrix of the event camera, a parameter supplied with the event camera. R is the rotation matrix of the event camera from time t_k to time t_k+1, and t is the translation vector of the event camera from time t_k to time t_k+1; both are computed from the acquired IMU data. Z_Pj^k is the depth of P_j in the camera coordinate system at time t_k, Z_Pj^{k+1} is the depth of P_j in the camera coordinate system at time t_k+1, Z_F^k is the depth of F in the camera coordinate system at time t_k, and Z_F^{k+1} is the depth of F in the camera coordinate system at time t_k+1.
8.2.3 Subtracting the two equations of formula (10) gives formula (11) for the relative position of template point p_j with respect to f_i at time t_k+1:
Because template point p_j lies in a small neighborhood of feature point f_i in the pixel coordinate system, the corresponding spatial point P_j is likewise close to F. In this field it is therefore reasonable to assume that P_j and F have the same depth in the camera coordinate system at time t_k+1, i.e. Z_Pj^{k+1} = Z_F^{k+1}. Substituting formula (9) into formula (11) and simplifying formula (11) gives formula (12).
Considering that p_j and f_i are expressed in homogeneous coordinates, further simplification of formula (12) yields the formula (13) for the relative position of the template point at time t_k+1.
The symbol Nor() denotes homogeneous normalization, i.e. it converts the coordinates in the brackets into homogeneous coordinates. Formula (13) gives the updated relative position of template point p_j with respect to f_i at time t_k+1, and formula (14) then gives the position p_j(t_k+1) of template point p_j at time t_k+1; p_j(t_k+1) is put into the position set of the template edge around feature point f_i at time t_k+1.
8.2.4 Let j = j + 1;
8.2.5 If j ≤ m_i, go to 8.2.2; otherwise the updated positions at time t_k+1 of all m_i template points of f_i have been obtained; put them into the position set PBD_k+1 of the template edges around the n feature points at time t_k+1 and go to 8.3;
8.3 Let i = i + 1;
8.4 If i ≤ n, go to 8.2; otherwise PBD_k+1 has been obtained; send PBD_k+1 to the matching module and go to the ninth step.
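Under the equal-depth assumption of steps 8.2.2-8.2.3, the template-point update reduces (in this reconstruction) to mapping the relative position of p_j with respect to f_i through K·R·K^{-1} and re-anchoring it at the new feature position. The elided formulas (11)-(14) are reconstructed here from the surrounding derivation and may differ in detail from the patented formulas; the helper names are hypothetical.

```python
import numpy as np

def nor(v):
    """Nor(): homogeneous normalization, i.e. divide a 3-vector by its z entry."""
    return v / v[2]

def update_template_point(p_j, f_i_tk, f_i_tk1, K, R):
    """Step 8.2 sketch: rotate the relative position of template point p_j
    w.r.t. feature f_i through K R K^{-1} (equal-depth assumption
    Z_Pj = Z_F), then re-anchor it at the feature's position at t_{k+1}."""
    Kinv = np.linalg.inv(K)
    f_h = np.append(np.asarray(f_i_tk, float), 1.0)          # feature, homogeneous
    p_h = np.append(np.asarray(p_j, float), 1.0)             # template point, homogeneous
    H = K @ R @ Kinv                                          # rotation-induced pixel map
    # Relative position after rotation (reconstruction of formula (13))
    rel_new = nor(H @ p_h)[:2] - nor(H @ f_h)[:2]
    # Re-anchor at the tracked feature position (reconstruction of formula (14))
    return np.asarray(f_i_tk1, float) + rel_new
```

Note that the feature's own translation is carried by the tracked position f_i(t_k+1) from the seventh step, so only the rotation R from the IMU enters the template update here.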
Ninth step: let k = k + 1.
Tenth step: if k < N, go to the fourth step; otherwise terminate.
Fig. 3 is the comparison experiment of average tracking error between the present invention and existing tracking methods. The results were obtained by testing the present invention on "The Event-Camera Dataset and Simulator" dataset. The experimental environment was a notebook with an i7 2.8 GHz CPU and 8 GB RAM. The evaluation index of this experiment is the average tracking error of the feature points, in pixels. In the figure, the left column gives the names of the data sequences in the dataset, and to the right are the average tracking errors of the feature points. The three columns of experimental data are the results of the method of the present invention, the method of Zhu et al. and the method of Kueng et al., tested on the same test data sequences in the same experimental environment. The results show that, compared with the other two methods, the present invention has a lower average tracking error on all test data sequences. "×" in the figure indicates no data.
Fig. 4 is the comparison experiment of average tracking time between the present invention and existing tracking methods. The test dataset and experimental environment are the same as those of the experiment corresponding to Fig. 3. The evaluation index of this experiment is the average tracking time of the feature points, in seconds. In the figure, the left column gives the names of the data sequences in the dataset, and to the right are the average tracking times of the feature points. The three columns of experimental data are the results of this system, the system of Zhu et al. and the system of Kueng et al., tested on the same test data sequences in the same experimental environment. The results show that, compared with the other two methods, the present invention achieves longer tracking on all data sequences except "boxes_translation" and "boxes_6dof".

Claims (5)

1. A feature point tracking method based on an event camera, characterized by comprising the following steps:
First step: construct a feature point tracking system based on the event camera, the feature point tracking system consisting of a data acquisition module, an initialization module, an event set selection module, a matching module, a feature point update module and a template edge update module;
The data acquisition module is connected with the initialization module, the event set selection module and the template edge update module; the data acquisition module downloads data from an event camera dataset, sends the acquired image frames to the initialization module, sends the event stream to the event set selection module, and sends the IMU data, i.e. inertial measurement unit data, to the template edge update module;
The initialization module is connected with the data acquisition module, the event set selection module and the matching module; the initialization module receives image frames from the data acquisition module, extracts feature points and an edge map from the image frames, obtains the positions of the feature points and the positions of the template edges around the feature points, sends the positions of the feature points to the event set selection module, and sends the positions of the template edges around the feature points to the matching module;
The event set selection module is connected with the data acquisition module, the initialization module, the feature point update module and the matching module; the event set selection module receives the event stream from the data acquisition module, receives the positions of the feature points from the initialization module or the feature point update module, receives the optical flows of the feature points from the feature point update module, selects the event set of each feature point from the event stream around that feature point, and sends the position of each feature point together with the corresponding event set to the matching module;
The matching module is connected with the initialization module, the event set selection module, the feature point update module and the template edge update module; the matching module receives the positions of the feature points and the corresponding event sets from the event set selection module, receives the positions of the template edges around the feature points from the initialization module or the template edge update module, matches the event set of each feature point against the template edge around that feature point, finds the optical flow of each feature point, and sends the positions of the feature points, the positions of the template edges around the feature points and the optical flows of the feature points to the feature point update module;
The feature point update module is connected with the matching module, the template edge update module and the event set selection module; the feature point update module receives the positions of the feature points, the positions of the template edges around the feature points and the optical flows of the feature points from the matching module, computes the new positions of the feature points from their optical flows, sends the positions of the template edges around the feature points and the new positions of the feature points to the template edge update module, sends the optical flows and the new positions of the feature points to the event set selection module, and outputs the new positions of the feature points;
The template edge update module is connected with the data acquisition module, the feature point update module and the matching module; the template edge update module receives the positions of the template edges around the feature points and the new positions of the feature points from the feature point update module, receives the IMU data from the data acquisition module, updates the positions of the template edges around the feature points with the IMU data, and sends the updated template edge positions to the matching module;
Second step: the data acquisition module obtains image frames, the event stream and IMU data from the event camera dataset;
Third step: use the feature point tracking system based on the event camera to track the feature points in the event stream obtained by the data acquisition module over the period from time t_0 to time t_N; divide the interval [t_0, t_N] into a series of sub-intervals [t_0, t_1], ..., [t_k, t_k+1], ..., [t_N-1, t_N], where N is the number of sub-intervals, determined by the time span of the event stream, with N ≥ 1; the (k+1)-th sub-interval is denoted [t_k, t_k+1]; the tracking process is as follows:
3.1 Let the time index k = 0;
3.2 If k = 0, the initialization module initializes, the method being:
3.2.1 The initialization module extracts feature points from the image frame obtained by the data acquisition module using the Harris corner detection method and puts the extracted feature points into a set FD = {f_1, ..., f_i, ..., f_n}, where f_i is the i-th detected feature point and n is the number of feature points; regarding the position of a feature point as a function of time, the positions of the n feature points in FD at time t_0 are placed in the feature position set FD_0 = {f_1(t_0), ..., f_i(t_0), ..., f_n(t_0)}, where f_i(t_0) is the position of feature point f_i at time t_0; FD_0 is sent to the event set selection module;
3.2.2 The initialization module extracts an edge map from the image frame obtained by the data acquisition module using the Canny edge detection method, each image frame corresponding to one edge map;
3.2.3 The initialization module selects the template edges corresponding to the n feature points in FD and sends the position set PBD_0 of the template edges of the n feature points in FD at time t_0 to the matching module, the method being:
3.2.3.1 Let i = 1;
3.2.3.2 Centered on the position f_i(t_0) of feature point f_i at time t_0, select a rectangular region B_i around f_i(t_0) of size s × s, i.e. the length and width of the rectangle are both s; the edges inside B_i in the edge map detected in 3.2.2 serve as the template edge of f_i, and the pixels on the template edge are the template points of f_i; the set PB_i of template points inside B_i is the template edge of f_i; put PB_i into the template edge set PB;
Here p_j is the j-th template point of f_i; regarding the position of p_j as a function of time, p_j(t_0) denotes the position of p_j at time t_0 and lies inside the rectangular region B_i at time t_0; m_i denotes the number of template points in PB_i;
3.2.3.3 Let i = i + 1;
3.2.3.4 If i ≤ n, go to 3.2.3.2; otherwise the template edge set PB made up of the template edges of the n feature points in FD has been obtained; define PBD_i^0 as the set of positions of the template points of PB_i at time t_0, and let PBD_0 be the set of positions at time t_0 of the template points of the n template edges in PB; send PBD_0 to the matching module and go to the fourth step;
4th step, event set choose module and receive flow of event from data acquisition module, according to different k values respectively from initialization mould Block or characteristic point update module receive different data, around n characteristic point from flow of event selected characteristic point event set S, by FDkIt is sent to matching module with the event set S of characteristic point, method is:
If 4.1 k=0, event set chooses module and receives flow of event from data acquisition module, receives t from initialization modulekMoment n The location sets FD of characteristic point0, enable t1=t0+ 1, unit is the second, turns 4.3;
If 4.2 k >=1, event set chooses module and receives flow of event from data acquisition module, receives t from characteristic point update modulekWhen Carve the location sets FD of n characteristic pointkAnd tk-1The light stream set G of n characteristic point of momentk-1, WhereinIndicate tk-1Moment characteristic point fiLight stream, estimate sub- time interval [tk,tk+1] size, according to formula (2) calculate tk+1:
WhereinIt indicates in sub- time interval [tk-1,tk] on characteristic point fiLight stream;It indicates in sub- time interval [tk-1, tk] on all characteristic points average light stream, turn 4.3;
4.3 in tkThe location sets FD of n characteristic point of momentkIn choose the corresponding event of each characteristic point around each position Collection, method is:
4.3.1 i=1 is enabled;
These events are put into characteristic point f by the event that coincidence formula (3) requirement is 4.3.2 selected from flow of eventiIn tkMoment pair The event set answeredIn, and willIt is put into event set S:
Indicate that the set of the event in a three-dimensional space-time region, spatial dimension are characterized point fiIn tkThe position f at momenti (tk) around rectangular areaTime range is section [tk,tk+1];edIt representsIn d-th of event, d=1, 2,...,zi, ziIt indicatesThe number of interior event;xdExpression event edCoordinate in pixel coordinate system, Expression event edThe time of generation,Indicate edPixel coordinate existIt is interior;
4.3.3 i=i+1 is enabled;
If 4.3.4 i≤n turns 4.3.2;Otherwise, illustrate to have obtained tkThe corresponding event set S of n characteristic point in moment FD,By FDkIt is sent to matching module with S, turns the 5th step;
Fifth step: the matching module receives FD_k and S from the event set selection module; if k = 0, it receives the position set PBD_k of the template edges of the n feature points from the initialization module and goes to the sixth step; if k ≥ 1, it receives the position set PBD_k of the template edges of the n feature points from the template edge update module and goes to the sixth step;
Sixth step: the matching module matches S against the template edges around the feature points, finds the optical flow set G_k of the n feature points at time t_k, and sends the position set FD_k of the n feature points, the position set PBD_k of the template edges of the n feature points and the optical flow set G_k to the feature point update module, the specific method being:
6.1 Let i = 1;
6.2 Construct the matching error function for feature point f_i, the method being:
Correct the position of event e_d at time t_k by formula (4):
Here x'_d denotes the computed position of event e_d at time t_k, called the motion-corrected position of event e_d, and g_i^k denotes the optical flow of feature point f_i over the interval [t_k, t_k+1]; the symbol · denotes multiplication, and a shorthand symbol is defined for convenience;
The matching error function is constructed as formula (5):
Here ε denotes the error and p_j(t_k) is the position of template point p_j at time t_k; r_dj denotes the probability that event e_d was generated by template point p_j, i.e. the probability that e_d matches p_j; the double vertical lines in the formula denote taking the modulus of the vector between them; the calculation formula of r_dj is formula (6):
6.3 Minimize the matching error function with the EM-ICP algorithm and solve for the optimal optical flow g_i^k. The method is:
6.3.1 Initialize g_i^k;
6.3.2 Compute r_dj by formula (6);
6.3.3 Update the optical flow g_i^k by minimizing ε with the weights r_dj held fixed;
6.3.4 Compute the change Δg of the optical flow, let g_i^k take the updated value, and put g_i^k into the optical-flow set G_k for time t_k;
6.3.5 If Δg ≤ σ, the final optimal result g_i^k has been obtained; go to 6.4. If Δg > σ, go to 6.3.2. σ is the threshold on the optical-flow change;
6.4 Let i = i + 1;
6.5 If i ≤ n, go to 6.2; otherwise the optical-flow set G_k at time t_k has been obtained. Send FD_k, PBD_k, and G_k to the feature-point update module and go to the seventh step;
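The EM-ICP loop of steps 6.2-6.5 can be sketched for a single feature point as follows. This is only an illustration, not the patent's exact formulation: since formulas (4)-(7) are not reproduced in the text, the weights r_dj are modeled here as row-normalized Gaussian weights exp(−‖x′_d − p_j‖²) and the flow update is the corresponding closed-form weighted least-squares step; all function names and parameters are assumptions.

```python
import numpy as np

def estimate_flow(events, t_events, template, t_k, sigma=0.05, max_iter=50):
    """EM-ICP sketch for one feature point: find the flow g that best
    aligns motion-corrected events with the template edge.

    events   : (D, 2) event positions x_d around the feature point
    t_events : (D,)   event timestamps t_d, with t_d >= t_k
    template : (M, 2) template-edge points p_j(t_k)
    """
    g = np.zeros(2)                               # 6.3.1: initialize the flow
    dt = (t_events - t_k)[:, None]                # elapsed time per event
    for _ in range(max_iter):
        x_corr = events - dt * g                  # motion-correct events to t_k
        # 6.3.2: soft assignments r_dj (Gaussian weights, each row sums to 1;
        # subtracting the row minimum keeps the exponentials from underflowing)
        d2 = ((x_corr[:, None, :] - template[None, :, :]) ** 2).sum(axis=2)
        r = np.exp(-(d2 - d2.min(axis=1, keepdims=True)))
        r /= r.sum(axis=1, keepdims=True)
        # 6.3.3: closed-form weighted least-squares flow update
        matched = r @ template                    # per-event weighted match point
        g_new = ((events - matched) * dt).sum(axis=0) / (dt ** 2).sum()
        # 6.3.4 / 6.3.5: convergence test on the flow change
        if np.linalg.norm(g_new - g) <= sigma:
            return g_new
        g = g_new
    return g
```

With noise-free synthetic events generated from a known flow, the loop recovers that flow in a couple of iterations.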
Seventh step: the feature-point update module receives FD_k, PBD_k, and G_k from the matching module, computes from the optical flow the location set FD_{k+1} of the n feature points at time t_{k+1}, sends G_k and FD_{k+1} to the event-set selection module, and sends FD_{k+1} and PBD_k to the template-edge update module. The method is:
7.1 Let i = 1;
7.2 Compute the position f_i(t_{k+1}) of f_i at time t_{k+1}:
f_i(t_{k+1}) = f_i(t_k) + g_i^k · (t_{k+1} − t_k)
that is, the optical flow multiplied by the elapsed time gives the distance feature point f_i moves from time t_k to time t_{k+1}; put f_i(t_{k+1}) into the set FD_{k+1};
7.3 Let i = i + 1;
7.4 If i ≤ n, go to 7.2; otherwise the positions of the n feature points at time t_{k+1} have been obtained, giving the set FD_{k+1} = {f_1(t_{k+1}), ..., f_i(t_{k+1}), ..., f_n(t_{k+1})}. Send G_k and FD_{k+1} to the event-set selection module, send FD_{k+1} and PBD_k to the template-edge update module, and display or store the location set FD_{k+1} of the n feature points at time t_{k+1} in the result file. Go to the eighth step;
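The position update of step 7.2 amounts to one multiply-add per feature point; a minimal sketch (the function name and array shapes are assumptions):

```python
import numpy as np

def propagate_features(fd_k, g_k, t_k, t_k1):
    """Step 7: compute FD_{k+1} from FD_k and the flow set G_k via
    f_i(t_{k+1}) = f_i(t_k) + g_i^k * (t_{k+1} - t_k)."""
    fd_k = np.asarray(fd_k, dtype=float)   # (n, 2) feature positions at t_k
    g_k = np.asarray(g_k, dtype=float)     # (n, 2) optical flows, pixels/s
    return fd_k + g_k * (t_k1 - t_k)
```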
Eighth step: the template-edge update module receives IMU data from the data acquisition module, receives FD_{k+1} and PBD_k from the feature-point update module, updates the positions in PBD_k using the IMU data to obtain the location set PBD_{k+1} of the template edges corresponding to the n feature points at time t_{k+1}, and sends PBD_{k+1} to the matching module. The method is as follows:
8.1 Let i = 1;
8.2 Update the position at time t_{k+1} of the template edge corresponding to feature point f_i. The method is:
8.2.1 Let j = 1;
8.2.2 Define F as the point in three-dimensional space corresponding to f_i and P_j as the point in three-dimensional space corresponding to template point p_j; F and P_j are expressed in three-dimensional coordinates (x, y, z). Express template point p_j in homogeneous coordinate form (x, y, 1), where the first two dimensions are the abscissa and ordinate of p_j in the pixel coordinate system. The following equations are obtained:
At time t_k:
Z_{P_j}^{t_k} · p_j(t_k) = K · P_j,  Z_F^{t_k} · f_i(t_k) = K · F  (9)
At time t_{k+1}:
Z_{P_j}^{t_{k+1}} · p_j(t_{k+1}) = K · (R · P_j + t),  Z_F^{t_{k+1}} · f_i(t_{k+1}) = K · (R · F + t)  (10)
where K is the intrinsic parameter matrix of the event camera, a parameter carried by the event camera; R is the rotation matrix of the event camera from time t_k to time t_{k+1}, and t is the translation vector of the event camera from time t_k to time t_{k+1}, both computed from the acquired IMU data; Z_{P_j}^{t_k} is the depth of P_j in the camera coordinate system at time t_k, Z_{P_j}^{t_{k+1}} is the depth of P_j in the camera coordinate system at time t_{k+1}, Z_F^{t_k} is the depth of F in the camera coordinate system at time t_k, and Z_F^{t_{k+1}} is the depth of F in the camera coordinate system at time t_{k+1};
8.2.3 Subtract the two equations in formula (10) from each other to obtain the position of template point p_j relative to f_i at time t_{k+1} (formula (11)).
P_j and F have the same depth in the camera coordinate system at time t_{k+1}, i.e., Z_{P_j}^{t_{k+1}} = Z_F^{t_{k+1}}. Substitute formula (9) into formula (11) and simplify formula (11) to obtain formula (12).
Simplifying formula (12) yields formula (13) for the relative position of the template point at time t_{k+1}.
The symbol Nor() denotes the homogenization operation, i.e., converting the coordinates in the brackets into homogeneous coordinates. The position p_j(t_{k+1}) of template point p_j at time t_{k+1} is obtained by formula (14); put p_j(t_{k+1}) into the location set of the template edge around feature point f_i at time t_{k+1};
8.2.4 Let j = j + 1;
8.2.5 If j ≤ m_i, go to 8.2.2; otherwise the location set of the template edge around f_i at time t_{k+1} has been obtained; put it into the location set PBD_{k+1} of the template edges around the n feature points at time t_{k+1} and go to 8.3;
8.3 Let i = i + 1;
8.4 If i ≤ n, go to 8.2; otherwise PBD_{k+1} has been obtained. Send PBD_{k+1} to the matching module and go to the ninth step;
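One plausible reading of steps 8.2.2-8.2.5 can be sketched as follows. Because the patent's formulas (9)-(14) are not reproduced in the text, this sketch makes an explicit assumption: under the equal-depth assumption the translation t cancels out of the relative position, so the template edge is warped from t_k to t_{k+1} with the pure-rotation homography K·R·K⁻¹ and re-anchored at the feature's new position. Function names and the exact anchoring are illustrative, not the patent's expression.

```python
import numpy as np

def nor(v):
    """Nor(): homogenization, i.e. divide by the last (third) coordinate."""
    return v / v[2]

def update_template_edge(template_pts, f_k, f_k1, K, R):
    """Warp the template edge around one feature from t_k to t_{k+1}
    using the IMU-derived rotation R (translation assumed to cancel)."""
    H = K @ R @ np.linalg.inv(K)          # pure-rotation homography
    f_warp = nor(H @ np.array([f_k[0], f_k[1], 1.0]))
    new_pts = []
    for p in template_pts:
        p_warp = nor(H @ np.array([p[0], p[1], 1.0]))
        rel = p_warp[:2] - f_warp[:2]     # warped relative position
        new_pts.append(np.asarray(f_k1, dtype=float) + rel)
    return new_pts
```

With R equal to the identity, the warp is the identity and each template point keeps its offset from the feature, simply re-anchored at f_i(t_{k+1}).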
Ninth step: let k = k + 1;
Tenth step: if k < N, go to the fourth step; otherwise, terminate.
2. The event-camera-based feature point tracking method according to claim 1, wherein the event camera dataset refers to "The Event-Camera Dataset and Simulator", acquired with a DAVIS; the event camera dataset contains image frames, an event stream, and IMU data; DAVIS refers to the Dynamic and Active-pixel Vision Sensor.
3. The event-camera-based feature point tracking method according to claim 1, wherein a template edge is represented by the set of all template points on that edge.
4. The event-camera-based feature point tracking method according to claim 1, wherein the range of s in step 3.2.3.2 is 20 to 30 pixels.
5. The event-camera-based feature point tracking method according to claim 1, wherein the value range of σ in step 6.3.5 is 0.01 to 0.1, in units of pixels per second.
CN201910672162.2A 2019-07-24 2019-07-24 Feature point tracking method based on event camera Active CN110390685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910672162.2A CN110390685B (en) 2019-07-24 2019-07-24 Feature point tracking method based on event camera


Publications (2)

Publication Number Publication Date
CN110390685A true CN110390685A (en) 2019-10-29
CN110390685B CN110390685B (en) 2021-03-09

Family

ID=68287327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910672162.2A Active CN110390685B (en) 2019-07-24 2019-07-24 Feature point tracking method based on event camera

Country Status (1)

Country Link
CN (1) CN110390685B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062337A (en) * 2019-12-19 2020-04-24 北京迈格威科技有限公司 People flow direction detection method and device, storage medium and electronic equipment
CN113066104A (en) * 2021-03-25 2021-07-02 三星(中国)半导体有限公司 Angular point detection method and angular point detection device
CN113160218A (en) * 2021-05-12 2021-07-23 深圳龙岗智能视听研究院 Method for detecting object motion intensity based on event camera
WO2023061187A1 (en) * 2021-10-14 2023-04-20 华为技术有限公司 Optical flow estimation method and device
CN116188533A (en) * 2023-04-23 2023-05-30 深圳时识科技有限公司 Feature point tracking method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808407A (en) * 2017-10-16 2018-03-16 亿航智能设备(广州)有限公司 Binocular-camera-based UAV visual SLAM method, UAV, and storage medium
US20190065885A1 (en) * 2017-08-29 2019-02-28 Beijing Samsung Telecom R&D Center Object detection method and system
CN109697726A (en) * 2019-01-09 2019-04-30 厦门大学 An end-to-end target motion estimation method based on an event camera
WO2019099337A1 (en) * 2017-11-14 2019-05-23 Kaban Technologies Llc Event camera-based deformable object tracking
CN109934862A (en) * 2019-02-22 2019-06-25 上海大学 A binocular vision SLAM method combining point and line features


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALEX ZIHAO ZHU ET AL.: "Event-based feature tracking with probabilistic data association", 《2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)》 *
BEAT KUENG ET AL.: "Low-latency visual odometry using event-based feature tracks", 《2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS)》 *
IGNACIO ALZUGARAY ET AL.: "Asynchronous Corner Detection and Tracking for Event Cameras in Real Time", 《IEEE ROBOTICS AND AUTOMATION LETTERS 》 *
XIE Zhen: "Research on Scene Perception Methods Based on UAV Vision", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II *


Also Published As

Publication number Publication date
CN110390685B (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN110390685A (en) Feature point tracking method based on event camera
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN108416840B (en) Three-dimensional scene dense reconstruction method based on monocular camera
CN109387204B (en) Simultaneous localization and mapping method for mobile robots facing indoor dynamic environments
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
Cvišić et al. Stereo odometry based on careful feature selection and tracking
CN108682027A (en) VSLAM implementation method and system based on point and line feature fusion
CN111862213A (en) Positioning method and device, electronic equipment and computer readable storage medium
CN109974743B (en) Visual odometry based on GMS feature matching and sliding-window pose graph optimization
CN113108771B (en) Movement pose estimation method based on closed-loop direct sparse visual odometry
CN108615246A (en) Method for improving visual odometry system robustness and reducing algorithm computational cost
CN111882602B (en) Visual odometry implementation method based on ORB feature points and GMS matching filter
Fanani et al. Predictive monocular odometry (PMO): What is possible without RANSAC and multiframe bundle adjustment?
Chen et al. A stereo visual-inertial SLAM approach for indoor mobile robots in unknown environments without occlusions
CN111914756A (en) Video data processing method and device
CN106709870A (en) Close-range image straight-line segment matching method
CN110070578A (en) A loop closure detection method
CN114494150A (en) Design method of monocular visual odometry based on the semi-direct method
CN112767546A (en) Binocular image-based visual map generation method for mobile robot
CN108765326A (en) A simultaneous localization and mapping method and device
CN111160362B (en) FAST feature homogenizing extraction and interframe feature mismatching removal method
Jo et al. Mixture density-PoseNet and its application to monocular camera-based global localization
Xie et al. Hierarchical quadtree feature optical flow tracking based sparse pose-graph visual-inertial SLAM
CN109816709B (en) Monocular camera-based depth estimation method, device and equipment
CN108694348B (en) Tracking registration method and device based on natural features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant