CN113724297A - Event camera-based tracking method - Google Patents

Info

Publication number
CN113724297A
CN113724297A (application CN202111011203.7A)
Authority
CN
China
Prior art keywords
target
event
events
cluster
marking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111011203.7A
Other languages
Chinese (zh)
Inventor
吕恒毅
韩诚山
冯阳
张以撒
赵宇宸
孙铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202111011203.7A priority Critical patent/CN113724297A/en
Publication of CN113724297A publication Critical patent/CN113724297A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details

Abstract

The invention relates to an event camera-based tracking method, belongs to the technical field of image tracking, and solves the problems of existing image-based target tracking methods: motion blur when imaging a high-speed moving target, data redundancy in the computation process, low sensor resolution, and low event-stream output rate. The method first denoises the original event stream, slices it by fixed time, and extracts target event clusters together with target position and time information; finally, the newly obtained targets are matched with the targets in the tracker to obtain the new target positions and complete the tracking of the target trajectories. The invention processes the events of one time period in batch, obtains the positions of the event clusters, marks labels, and then detects target positions in the next time period; this approach is better suited to scan-read event-stream data.

Description

Event camera-based tracking method
Technical Field
The invention relates to the technical field of image tracking, in particular to a tracking method based on an event camera.
Background
Target detection methods based on conventional area-array image sensors locate the target by globally searching with an existing target model, which requires a large amount of computation. To reduce the computation, the target position can be predicted before localization to narrow the search range, as in Kalman filtering, particle filtering, and mean-shift methods. These methods determine the target position with a single mathematical model: they use only the target's feature information and ignore the background information in the image, which limits them. When the target undergoes rotational deformation, illumination change, or motion blur, the position search result suffers. With the rapid development of computer technology in recent years, machine learning and deep learning methods such as support vector machines, random forests, and deep convolutional neural networks have been successfully applied to machine-vision tasks such as target detection, tracking, and classification, achieving high accuracy. Correlation filtering methods have also been applied to the target tracking problem and can offer higher computation speed than deep learning algorithms.
Dynamic vision sensors offer a high dynamic range, high response speed, and other advantages; they effectively avoid overexposure and motion blur in extreme scenes and can observe objects moving at high speed across wider scenes. These unique advantages have led researchers to use them in the tracking field. However, unlike a conventional image sensor, a dynamic vision sensor outputs an event stream rather than image frames. Because this data form differs from the matrix form of conventional images, traditional target tracking methods cannot be used directly on event streams, so a target tracking method for event streams is needed.
In view of this need, many related solutions exist at home and abroad. Most existing target tracking methods based on dynamic vision sensors extract target position information by clustering events. A cluster is judged by the distance between events and the number of nearby events: when the inter-event distance is below a threshold and the number of events exceeds a threshold, a target is defined.
A cluster-based method inspired by the traditional mean-shift method was proposed in the document "Embedded vision system for real-time target tracking using an asynchronous transient vision sensor" to track the arm of a robotic football goalkeeper. Further work on tracking moving targets with cluster-based methods appears in the paper "Spatio-temporal clustering method for event-based 3D visual real-time motion analysis".
The two methods above differ in how events are assigned to clusters. Newly generated events are assigned according to the 3D Manhattan distance in space and time between the event and the cluster; compared with the traditional Euclidean distance, this clustering suppresses noise. Cluster-based approaches suit embedded vision systems because of their low memory footprint, but the cluster size must be tuned to the specific target, so these approaches fit only specific scenes.
Methods that cluster events with a Gaussian mixture model have also been developed. A K-Gaussian clustering method was proposed in "Multi-person spatio-temporal tracking based on a dynamic vision sensor", in which events are modeled by Gaussian clusters. Later, the document "Simultaneous event-based multi-kernel algorithm for high-speed visual feature tracking" improved on this approach by modeling the spatial distribution of events with bivariate Gaussians, again drawing inspiration from the mean-shift algorithm. The clusters are updated by determining which cluster each event in the event stream belongs to.
The document "high-speed target tracking using dynamic vision sensor" proposes a position event correlation detection algorithm, which divides an event stream into 32 or 64 blocks according to space and time, respectively detects the position event correlation and extracts an event cluster therein, and then matches a newly found target with an existing target. However, when the geometric size of the target is large and is on the boundary line of the space, one target may be mistakenly divided into a plurality of targets, which affects the tracking effect.
Several other methods obtain target positions from an event stream. With event-by-event processing, clusters can be assigned to newly arriving events by Euclidean distance to track vehicles; lines can be tracked with a Hough-transform-based method; corner events can be tracked with a data-association method constrained in space, time, and velocity direction; initial slip, slip, and vibration can be detected by observing the object contact area with dynamic and active-pixel vision sensors; iterative-closest-point and particle-filter methods have also been used.
The methods above use sensors with relatively low resolution and low event-stream output rates, so event-by-event processing needs few resources. With the introduction of sensors of higher resolution and faster output rate, however, such as Prophesee's Gen4 CD with 1280 × 720 pixels, CelePixel's CeleX-V with 1280 × 800 pixels, and Samsung's DVS-Gen4 with 1280 × 960 pixels, the line-by-line-refresh event output mode makes event-by-event processing difficult to apply.
A conventional image sensor must extract the target position in every frame of a video and match it with the target at the previous moment to form a continuous motion trajectory, thereby achieving target tracking. Target tracking algorithms based on conventional images are very mature and reach high accuracy in low- and medium-speed scenes with a low dynamic range. However, limited by the imaging principle of the conventional image sensor, overexposure occurs when imaging scenes with a high dynamic range, and motion blur occurs when imaging high-speed moving objects. Furthermore, conventional cameras must store and process all pixels, most of which are unchanged from the previous frame; this data redundancy adversely affects computation, latency, and memory consumption.
Disclosure of Invention
The invention provides an event camera-based tracking method to solve the problems of existing image-based target tracking methods: motion blur when imaging a high-speed moving target, data redundancy in the computation process, low sensor resolution, and low event-stream output rate.
An event camera-based tracking method, the method comprising event-stream target detection and target tracking; the method is realized by the following steps:
Step 1: detecting event-cluster information in the event stream to obtain the events contained in each event cluster, the centroid of the event cluster, the radius of the event cluster, and the target's last update timestamp; the specific process is as follows:
Step 1.1: denoising the original event stream, buffering the denoised event stream, and retaining an event-stream segment of fixed duration;
Step 1.2: computing the correlation of each event traversed in the event-stream segment, and determining the supporting events of each event by threshold comparison;
Step 1.3: after traversal is completed, taking the event clusters whose supporting-event sets contain more elements than the event-count threshold as target event clusters, and marking their target event cluster IDs;
Step 2: computing the spatio-temporal centroid of each target event cluster and the maximum lateral and longitudinal radii of the target event cluster;
Step 3: matching the detected targets with the existing targets in the tracker by comparing the two-dimensional Euclidean distances between centroids to obtain complete target motion trajectories; then updating the targets in the tracker with the unmatched targets as new targets, and finally deleting the targets that have not been updated for a long time.
The invention has the beneficial effects that:
1. Events within one time period are processed in batch to obtain the positions of the event clusters and label them, and target positions are then detected in the next time period.
2. The correlation of events is quantified by a kernel function; substituting the temporal and spatial distances between events into the kernel function reduces the number of parameters and simplifies the subsequent decision method.
3. The marking rule for target event cluster IDs is based on supporting events, and the whole method operates on the event stream without conversion to a gray image or an event virtual frame, saving storage space. The marking method not only extracts targets but also filters out some noise.
Drawings
FIG. 1 is a schematic diagram of an event camera based tracking method according to the present invention;
FIG. 2 is a flow chart of target detection in an event camera based tracking method according to the present invention;
FIG. 3 is a flowchart of target tracking in an event camera based tracking method according to the present invention;
FIG. 4 is a diagram illustrating the effect of an original event stream in a tracking method based on an event camera according to the present invention;
fig. 5 is an effect diagram of a target event stream in the event camera-based tracking method according to the present invention.
Detailed Description
This embodiment is described with reference to figs. 1 to 5. The event camera-based tracking method is divided into two stages, event-stream target detection and target tracking, as shown in fig. 1. The original event stream is shown on the left of the figure, and the target event clusters are shown in the box on the right. First, the original event stream is denoised to remove the noise points that affect target detection; the event stream is then sliced by fixed time, the target event clusters are extracted, and information such as target position and time is acquired; finally, the newly obtained targets are matched with the targets in the tracker to obtain the new target positions and complete the tracking of the target trajectories. The method is realized by the following steps:
Step 1: this step is described with reference to figs. 1 and 2. The target detection algorithm aims to detect event-cluster information in the event stream: the events contained in each event cluster, the centroid of the cluster, the radius of the cluster, and time information.
Firstly, the original event stream is denoised, the denoised event stream is buffered, and an event-stream segment of fixed duration is retained;
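For illustration only, this buffering and slicing step might be sketched in Python as follows; the event layout (rows of x, y, t with microsecond timestamps), the 10 ms slice length, and all names are assumptions of this sketch, not part of the patent.

    import numpy as np

    def slice_event_stream(events: np.ndarray, slice_us: int = 10_000):
        """Split a denoised event array (columns: x, y, t) into consecutive
        fixed-duration segments of slice_us microseconds."""
        t = events[:, 2]
        edges = np.arange(t.min(), t.max() + slice_us, slice_us)
        for lo, hi in zip(edges[:-1], edges[1:]):
            yield events[(t >= lo) & (t < hi)]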
then, the correlation of each event traversed in the event-stream segment is computed by formula (1) below, and the supporting events of each event are determined by threshold comparison;
when detecting events included in an event cluster, firstly, a Gaussian mixture model is used to quantify the correlation among the events in turn, and event correlation indexes p of the event ei (xi, yi, ti) and other events ej (xj, yj, tj)ijComprises the following steps:
Figure BDA0003238535480000054
where N represents the total number of events in each event stream slice, dijIs the spatial distance between events, τijσ 1 and σ 2 are the variances of the spatial distance and the temporal distance between events respectively;
Figure BDA0003238535480000051
Figure BDA0003238535480000052
τij=|ti-tj| (4)
when p isij>And f, marking the event ej (xj, yj, tj) as a supporting event of ei (xi, yi, ti), and marking a target event cluster ID, wherein the target event cluster ID is used for distinguishing events of different targets in the event stream slice.
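A minimal sketch of this computation, under the kernel form of formula (1) as reconstructed above; sigma1, sigma2, and the threshold gamma are assumed to be supplied externally, and the dense N × N matrix is chosen for clarity rather than efficiency (memory grows as O(N²) per slice).

    import numpy as np

    def support_sets(ev: np.ndarray, sigma1: float, sigma2: float, gamma: float):
        """For each event e_i in a slice (rows: x, y, t), return the indices j
        of its supporting events, i.e. those with p_ij > gamma."""
        xy, t = ev[:, :2], ev[:, 2]
        d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=-1)   # d_ij^2
        tau2 = (t[:, None] - t[None, :]) ** 2                        # tau_ij^2
        p = np.exp(-(d2 / (2 * sigma1 ** 2) + tau2 / (2 * sigma2 ** 2)))
        np.fill_diagonal(p, 0.0)   # an event is not its own supporter
        return [np.flatnonzero(row > gamma) for row in p]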
Finally, after traversal is completed, the event clusters whose supporting-event sets contain more than n elements are taken as target event clusters, and target event cluster IDs are marked for the events according to the flow of fig. 2.
As shown in fig. 2: the correlation index $p_{ij}$ of each event $e_i$ with every other event $e_j$ is computed in turn, the set I of target event cluster IDs of the events with $p_{ij} > \gamma$ is obtained, and whether $I = \varnothing$ is judged. If so, the event is marked with a new target event cluster ID. If not, whether the number of elements in I is 1 is judged: if so, the unmarked events are marked with the element of I as their target event cluster ID; if not, the unmarked events are marked Max{I}, the events in the event-stream slice already marked with other elements of I are re-marked Max{I}, and the procedure ends.
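The three marking rules can be rendered as the following sketch; `supports` is the output of the correlation step above, and all names are illustrative assumptions.

    def label_clusters(supports, n_events):
        """Assign target event cluster IDs: a new ID when the supporters carry
        no ID, inheritance when they carry exactly one, a merge onto Max{I}
        when they carry several."""
        ids = [None] * n_events
        next_id = 0
        for i, sup in enumerate(supports):
            I = {ids[j] for j in sup if ids[j] is not None}
            if not I:                         # rule 1: I empty -> brand-new ID
                for j in (i, *sup):
                    ids[j] = next_id
                next_id += 1
            elif len(I) == 1:                 # rule 2: one ID -> unmarked events inherit it
                (only,) = I
                for j in (i, *sup):
                    if ids[j] is None:
                        ids[j] = only
            else:                             # rule 3: several IDs -> merge onto Max{I}
                m = max(I)
                for j in (i, *sup):
                    if ids[j] is None:
                        ids[j] = m
                for j in range(n_events):     # re-mark events carrying other IDs from I
                    if ids[j] in I:
                        ids[j] = m
        return ids

Clusters whose final number of supporting events does not exceed n would then be discarded before the centroid computation, as described next.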
Step 2: compute the spatio-temporal centroid $[X_c, Y_c, T_c]^T$ of each event cluster and the maximum lateral and longitudinal radii $[X_R, Y_R]^T$ of the target.

Each event in the event-stream slice is labeled in turn. After traversal, the event clusters whose number of supporting events (the number of elements in the set $I_i$) exceeds a threshold n are regarded as targets, and the spatio-temporal centroid $[X_c, Y_c, T_c]^T$ of each such event cluster and the maximum lateral and longitudinal radii $[X_R, Y_R]^T$ of the target are computed, namely:

$$[X_c, Y_c, T_c]^T = \frac{1}{N_{objk}} \sum_{i=1}^{N_{objk}} \left[x_{objk,i},\; y_{objk,i},\; t_{objk,i}\right]^T \tag{5}$$

$$[X_R, Y_R]^T = \left[\max_i \left|x_{objk,i} - X_c\right|,\; \max_i \left|y_{objk,i} - Y_c\right|\right]^T \tag{6}$$

where $N_{objk}$ is the number of events that make up target k, and max(·) is the maximum-value function.
After target detection within the time-slice events is completed, the events contained in each target, the spatio-temporal centroid of the target, the maximum lateral and longitudinal radii of the target, and the target's latest update timestamp are obtained.
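Under equations (5) and (6) as reconstructed above, the per-cluster statistics reduce to a column mean and two maxima; a minimal sketch (the array layout is an assumption):

    import numpy as np

    def cluster_stats(ev: np.ndarray):
        """Spatio-temporal centroid [Xc, Yc, Tc] (eq. 5) and maximum lateral
        and longitudinal radii [XR, YR] (eq. 6) of one target event cluster
        (rows: x, y, t)."""
        xc, yc, tc = ev.mean(axis=0)       # eq. (5): per-column mean
        xr = np.abs(ev[:, 0] - xc).max()   # eq. (6): max |x - Xc|
        yr = np.abs(ev[:, 1] - yc).max()   # eq. (6): max |y - Yc|
        return (xc, yc, tc), (xr, yr)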
Step 3: match the newly acquired targets with the existing targets in the tracker by comparing the two-dimensional Euclidean distances between centroids; update the tracker with the unmatched targets as new targets;
and finally delete the targets that have not been updated for a long time.
This part of the embodiment is described with reference to fig. 3. Target tracking matches the newly detected targets with the existing targets in the tracker, thereby obtaining complete target motion trajectories.
During imaging with a dynamic vision sensor, events exist only at the pixels where a moving target is located, and the temporal resolution of events reaches the microsecond level, giving extremely high spatial and temporal continuity. Event-stream slices over extremely short periods are therefore easy to obtain, and within a shorter time the target displacement is smaller; with extremely short time slices, and provided no occlusion occurs, targets can be matched by the spatial position relationship between them.
The target tracking method comprises the following three steps: first, newly acquired targets are matched with the existing targets in the tracker by comparing the two-dimensional Euclidean distances between centroids; then the tracker is updated with the unmatched targets as new targets; finally, targets not updated for a long time are deleted. The specific steps are described as follows:

First, define the two-dimensional Euclidean distance between the centroids of targets from two different time periods as $D_{centroid}$, and the critical two-dimensional Euclidean distance between the centroids at which the targets' spatial extents touch as $D_{radius}$, namely:

$$D_{centroid} = \sqrt{\left(X_{c\,trk(p)} - X_{c\,dtc(q)}\right)^2 + \left(Y_{c\,trk(p)} - Y_{c\,dtc(q)}\right)^2} \tag{7}$$

$$D_{radius} = \sqrt{\left(X_{R\,trk(p)} + X_{R\,dtc(q)}\right)^2 + \left(Y_{R\,trk(p)} + Y_{R\,dtc(q)}\right)^2} \tag{8}$$

where trk(p) is the p-th target already in the tracker and dtc(q) is the q-th target newly detected by the detector.
The matching principle is as follows: dtc(q) is compared in turn with each target trk(p) in the tracker using $D_{centroid}$ and $D_{radius}$. When $D_{centroid} \le D_{radius}$, trk(p) and dtc(q) are matched: dtc(q) is marked with the same target ID as trk(p), and the target timestamp is updated. When $D_{centroid} > D_{radius}$, the two targets cannot be matched.
After the targets in the detector have been matched against the existing targets in the tracker, each target that was not successfully matched is marked with a new target ID as a brand-new target, and the tracker is updated.
Once the above process is complete, the targets in the tracker that have not been matched for a long time are regarded as having disappeared and are deleted.
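The three tracking steps might be sketched as follows; the greedy first-match order, the dictionary layout, and the form of D_radius (the distance at which the two extents just touch) are assumptions of this sketch layered on the matching principle above, and the deletion of stale targets is reduced to a comment.

    import math

    def match_and_update(tracker, detections):
        """Match detector targets dtc(q) against tracker targets trk(p) and
        update the tracker. Each target: {'id', 'centroid': (Xc, Yc),
        'radius': (XR, YR), 'stamp': last update timestamp}."""
        next_id = max((t["id"] for t in tracker), default=-1) + 1
        for dtc in detections:
            for trk in tracker:
                d_centroid = math.dist(trk["centroid"], dtc["centroid"])
                d_radius = math.hypot(trk["radius"][0] + dtc["radius"][0],
                                      trk["radius"][1] + dtc["radius"][1])
                if d_centroid <= d_radius:    # match: inherit ID, refresh state
                    dtc["id"] = trk["id"]
                    trk.update(dtc)
                    break
            else:                             # unmatched: brand-new target
                dtc["id"] = next_id
                next_id += 1
                tracker.append(dtc)
        # targets whose 'stamp' has not advanced for a long time would be
        # deleted here (threshold omitted from this sketch)
        return tracker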
In the target detection part of this embodiment, the event stream is processed in event batches: the correlation between events is computed, each event's supporting events are extracted, and the composition of the target event clusters is determined. This comprises the following three parts.
a. Event correlation is computed by the method of formula (1): the temporal and spatial distances between events are substituted into a Gaussian kernel function to quantify the event correlation. Not only a Gaussian kernel but also other isotropic kernel functions can be used here.
b. After the event correlation is computed as in a, the number of elements in the existing event cluster ID sets is judged, and the newly arrived event is labeled with an ID.
c. Event cluster ID labeling rules: when the event cluster ID sets I of all supporting events of $e_i(x_i, y_i, t_i)$ are empty, the events are marked with a uniform new event cluster ID; when the event cluster ID set I of all supporting events of $e_i(x_i, y_i, t_i)$ contains exactly one element, the unlabeled events are marked with that same event cluster ID; when the event cluster ID set I of all supporting events of $e_i(x_i, y_i, t_i)$ contains more than one element, the unmarked supporting events are marked with the maximum element Max{I}, and the events in the event-stream slice marked with non-maximum IDs from the set I are re-marked Max{I}.
The method of this embodiment can be applied on a host computer or in an embedded system.
The event-correlation quantification method in this embodiment may also adopt other isotropic kernel functions, achieving the same purpose.

Claims (5)

1. The event camera-based tracking method is characterized in that: the method comprises event-stream target detection and target tracking; the method is realized by the following steps:
Step 1: detecting event-cluster information in the event stream to obtain the events contained in each event cluster, the centroid of the event cluster, the radius of the event cluster, and the target's last update timestamp; the specific process is as follows:
Step 1.1: denoising the original event stream, buffering the denoised event stream, and retaining an event-stream segment of fixed duration;
Step 1.2: computing the correlation of each event traversed in the event-stream segment, and determining the supporting events of each event by threshold comparison;
Step 1.3: after traversal is completed, taking the event clusters whose supporting-event sets contain more elements than the event-count threshold as target event clusters, and marking their target event cluster IDs;
Step 2: computing the spatio-temporal centroid of each target event cluster and the maximum lateral and longitudinal radii of the target event cluster;
Step 3: matching the detected targets with the existing targets in the tracker by comparing the two-dimensional Euclidean distances between centroids to obtain complete target motion trajectories; then updating the targets in the tracker with the unmatched targets as new targets, and finally deleting the targets that have not been updated for a long time.
2. The event camera-based tracking method of claim 1, wherein: the specific process of step 1.2 is as follows:
the correlation between the events in the slice is computed: the event correlation index $p_{ij}$ of event $e_i(x_i, y_i, t_i)$ and event $e_j(x_j, y_j, t_j)$, where $x_i, y_i$ are the spatial coordinates of the i-th event and $t_i$ is the timestamp of the i-th event,
is given by:

$$p_{ij} = \exp\left(-\left(\frac{d_{ij}^2}{2\sigma_1^2} + \frac{\tau_{ij}^2}{2\sigma_2^2}\right)\right)$$

wherein N is the total number of events in each event-stream slice, $d_{ij}$ is the spatial distance between events, $\tau_{ij}$ is the temporal distance between events, and $\sigma_1$ and $\sigma_2$ are the variances of the spatial and temporal distances between events, respectively;

$$d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$$

$$\sigma_1^2 = \frac{1}{N}\sum_{j=1}^{N}\left(d_{ij} - \bar{d}_i\right)^2, \qquad \sigma_2^2 = \frac{1}{N}\sum_{j=1}^{N}\left(\tau_{ij} - \bar{\tau}_i\right)^2$$

$$\tau_{ij} = |t_i - t_j|$$

when $p_{ij}$ is greater than the set threshold γ, event $e_j(x_j, y_j, t_j)$ is marked as a supporting event of event $e_i(x_i, y_i, t_i)$, and a target event cluster ID is marked; the target event cluster ID is used to distinguish the events of different targets in the event-stream slice.
3. The event camera-based tracking method of claim 1, wherein: in step 1.3, the rule for marking the target event cluster ID is as follows:
when the target event cluster ID sets I of all supporting events of event $e_i(x_i, y_i, t_i)$ are empty, all the events whose ID set I is empty are marked with a new event cluster ID;
when the target event cluster ID set I of all supporting events of event $e_i(x_i, y_i, t_i)$ contains exactly one element, the unmarked events are marked with that same target event cluster ID;
when the target event cluster ID set I of all supporting events of event $e_i(x_i, y_i, t_i)$ contains more than one element, the unmarked events are marked Max{I}, and the events in the event-stream slice marked with non-maximum IDs from the target event cluster ID set I are re-marked Max{I}.
4. The event camera-based tracking method of claim 1, wherein: in step 2, the spatio-temporal centroid $[X_c, Y_c, T_c]^T$ of each target event cluster and the maximum lateral and longitudinal radii $[X_R, Y_R]^T$ of the target event cluster are computed, namely:

$$[X_c, Y_c, T_c]^T = \frac{1}{N_{objk}} \sum_{i=1}^{N_{objk}} \left[x_{objk,i},\; y_{objk,i},\; t_{objk,i}\right]^T$$

$$[X_R, Y_R]^T = \left[\max_i \left|x_{objk,i} - X_c\right|,\; \max_i \left|y_{objk,i} - Y_c\right|\right]^T$$

in the formula, $N_{objk}$ is the number of events that make up target k, max(·) is the maximum-value function, $x_{objk}$ and $y_{objk}$ are the spatial coordinates of the events that make up target k, and $t_{objk}$ is the timestamp of an event of target k.
5. The event camera-based tracking method of claim 1, wherein: the specific process of step 3 is as follows:
first, the two-dimensional Euclidean distance between the centroids of targets from two different time periods is defined as $D_{centroid}$, and the critical two-dimensional Euclidean distance between the centroids at which the targets' spatial extents touch is defined as $D_{radius}$, namely:

$$D_{centroid} = \sqrt{\left(X_{c\,trk(p)} - X_{c\,dtc(q)}\right)^2 + \left(Y_{c\,trk(p)} - Y_{c\,dtc(q)}\right)^2}$$

$$D_{radius} = \sqrt{\left(X_{R\,trk(p)} + X_{R\,dtc(q)}\right)^2 + \left(Y_{R\,trk(p)} + Y_{R\,dtc(q)}\right)^2}$$

in the formula, $X_{c\,trk(p)}$ and $Y_{c\,trk(p)}$ are the centroid abscissa and ordinate of the p-th target in the tracker, and $X_{c\,dtc(q)}$ and $Y_{c\,dtc(q)}$ are the centroid abscissa and ordinate of the q-th target in the detector;
the matching principle is as follows: dtc(q) is compared in turn with each target trk(p) in the tracker using $D_{centroid}$ and $D_{radius}$; when $D_{centroid} \le D_{radius}$, trk(p) and dtc(q) are matched, dtc(q) is marked with the same target ID as trk(p), and the target timestamp is updated; when $D_{centroid} > D_{radius}$, the two targets cannot be matched;
after the targets in the detector have been matched with the existing targets of the tracker, each target that was not successfully matched is marked with a new target ID as a brand-new target, and the tracker is updated;
after the above process is completed, the targets in the tracker that have not been matched for a long time are regarded as disappeared targets and are deleted.
CN202111011203.7A 2021-08-31 2021-08-31 Event camera-based tracking method Pending CN113724297A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111011203.7A CN113724297A (en) 2021-08-31 2021-08-31 Event camera-based tracking method


Publications (1)

Publication Number Publication Date
CN113724297A (en) 2021-11-30

Family

ID=78679660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111011203.7A Pending CN113724297A (en) 2021-08-31 2021-08-31 Event camera-based tracking method

Country Status (1)

Country Link
CN (1) CN113724297A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957060A (en) * 2016-04-22 2016-09-21 天津师范大学 Method for dividing TVS events into clusters based on optical flow analysis
CN109785365A (en) * 2019-01-17 2019-05-21 西安电子科技大学 Address events drive the real-time modeling method method of unstructured signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
冯阳: "Research on event stream processing methods for dynamic vision sensors" (动态视觉传感器事件流处理方法研究), China Doctoral Dissertations Full-text Database, Information Science and Technology series, pages 138-7 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114723776A (en) * 2022-04-01 2022-07-08 深圳市九天睿芯科技有限公司 Target tracking method and device
CN114842045A (en) * 2022-04-01 2022-08-02 深圳市九天睿芯科技有限公司 Target tracking method and device
CN114842045B (en) * 2022-04-01 2024-04-16 深圳市九天睿芯科技有限公司 Target tracking method and device
CN114723776B (en) * 2022-04-01 2024-04-19 深圳市九天睿芯科技有限公司 Target tracking method and device
CN114777764A (en) * 2022-04-20 2022-07-22 中国科学院光电技术研究所 High-dynamic star sensor star point extraction method based on event camera
CN114777764B (en) * 2022-04-20 2023-06-30 中国科学院光电技术研究所 High-dynamic star sensor star point extraction method based on event camera
CN116957973A (en) * 2023-07-25 2023-10-27 上海宇勘科技有限公司 Data set generation method for event stream noise reduction algorithm evaluation
CN116957973B (en) * 2023-07-25 2024-03-15 上海宇勘科技有限公司 Data set generation method for event stream noise reduction algorithm evaluation
CN116994075A (en) * 2023-09-27 2023-11-03 安徽大学 Small target rapid early warning and identifying method based on compound eye event imaging
CN116994075B (en) * 2023-09-27 2023-12-15 安徽大学 Small target rapid early warning and identifying method based on compound eye event imaging

Similar Documents

Publication Publication Date Title
CN113724297A (en) Event camera-based tracking method
CN109472820B (en) Monocular RGB-D camera real-time face reconstruction method and device
Chen et al. Asynchronous tracking-by-detection on adaptive time surfaces for event-based object tracking
CN112785628B (en) Track prediction method and system based on panoramic view angle detection tracking
CN111383244B (en) Target detection tracking method
CN110827321B (en) Multi-camera collaborative active target tracking method based on three-dimensional information
CN110287907B (en) Object detection method and device
CN111445497B (en) Target tracking and following method based on scale context regression
US20210350705A1 (en) Deep-learning-based driving assistance system and method thereof
CN112949440A (en) Method for extracting gait features of pedestrian, gait recognition method and system
Tsintotas et al. DOSeqSLAM: Dynamic on-line sequence based loop closure detection algorithm for SLAM
CN111144213A (en) Object detection method and related equipment
CN115619826A (en) Dynamic SLAM method based on reprojection error and depth estimation
CN113256731A (en) Target detection method and device based on monocular vision
CN111105436B (en) Target tracking method, computer device and storage medium
CN112598743B (en) Pose estimation method and related device for monocular vision image
CN107274477B (en) Background modeling method based on three-dimensional space surface layer
US20200258237A1 (en) Method for real time surface tracking in unstructured environments
CN115564798A (en) Intelligent robot vision tracking method based on deep learning
CN112348853B (en) Particle filter tracking method based on infrared saliency feature fusion
CN114882363A (en) Method and device for treating stains of sweeper
CN111161304B (en) Remote sensing video target track tracking method for rapid background estimation
Ren et al. An improved ORB-SLAM2 algorithm based on image information entropy
CN111414827B (en) Depth image human body detection method and system based on sparse coding features
Pan et al. Learning to Track by Bi-Directional Long Short-Term Memory Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination