CN113888607A - Target detection and tracking method and system based on event camera and storage medium - Google Patents

Target detection and tracking method and system based on event camera and storage medium

Info

Publication number
CN113888607A
CN113888607A (application CN202111024841.2A)
Authority
CN
China
Prior art keywords
target
time
event
kalman filtering
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111024841.2A
Other languages
Chinese (zh)
Inventor
高爽
徐庶
刘庆杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanhu Research Institute Of Electronic Technology Of China
Original Assignee
Nanhu Research Institute Of Electronic Technology Of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanhu Research Institute Of Electronic Technology Of China filed Critical Nanhu Research Institute Of Electronic Technology Of China
Priority to CN202111024841.2A priority Critical patent/CN113888607A/en
Publication of CN113888607A publication Critical patent/CN113888607A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target detection and tracking method, system, and storage medium based on an event camera. The method comprises the following steps: reading a DVS event sequence and performing background noise reduction; setting a time window; initializing the Kalman filter parameters; converting the DVS event sequence within the time window into a normalized average time surface; denoising target events in the normalized average time surface; detecting the target position while also predicting it with Kalman filtering, the initial position being determined by detection; calculating the distance between the target position detected at the current time and the target position at the previous time, and balancing the detected position against the Kalman-filter prediction according to this distance to determine the target position at the current time; smoothing the trajectory; and moving to the next time window and repeating these steps until the DVS event sequence ends. The method offers low latency, resists noise interference, compensates for inaccurate detection positions, and prevents target-box jitter.

Description

Target detection and tracking method and system based on event camera and storage medium
Technical Field
The invention relates to the field of computer vision, in particular to target detection and tracking based on computer vision.
Background
Although many solutions exist for detection and tracking in current computer vision tasks, they are inadequate for high-speed moving targets, rapid dynamic changes, and varying lighting conditions. A conventional camera shoots at a fixed frame rate, so an object moving at high speed produces motion blur, and in dim or changing light the object's appearance changes and becomes hard to distinguish from the background. Likewise, if the background is colorful or similar in color to the object, the object is hard to find against it.
A Dynamic Vision Sensor (DVS) is also known as an event-based camera. Dynamic vision sensors capture dynamic changes in a scene in an event-driven manner. Unlike a conventional camera, which acquires a complete image at a specified frame rate (e.g., 30 fps), an event camera such as a DVS has no concept of frame rate: each pixel responds asynchronously and independently to brightness changes in the scene. The output of an event camera is a variable-rate sequence of "events" or "pulses"; each event represents a change in light intensity, and a pulse is generated when the intensity change since the last instant exceeds a threshold.
In video shot by a conventional camera, a moving target is difficult to detect against a complex background. A DVS camera mimics the pulse events of a biological membrane potential: only moving targets trigger events, so a DVS can quickly locate a moving target.
The prior art CN112927261A discloses a target tracking method that fuses position prediction and correlation filtering. In that method, a Kalman filter is introduced on top of a kernelized correlation filter to correct the tracking result. At the start of tracking, the initial coefficients of the kernelized correlation filter are computed in the initial video frame from the target position and size given in advance by the video sequence's label. In subsequent frames, a search area is formed by expanding the target region 2.5 times around the previous frame's target position; the correlation-filter response is computed, the position with the maximum response is taken as the initial target result, this result is used as the observation to correct the tracking with the Kalman filter, the correlation-filter coefficients are updated, and the process repeats until tracking ends. This prior art is still based on a conventional RGB camera and must extract image features of the target, so it cannot exploit the asynchronous, low-latency characteristics of a DVS.
The prior art CN112949512A discloses a dynamic gesture recognition method, gesture interaction method, and interaction system, comprising: a Dynamic Vision Sensor (DVS) adapted to trigger events based on relative motion between objects in the field of view and the sensor and to output an event data stream to a hand detection module; a hand detection module adapted to process the event data stream to determine an initial hand position; and a hand tracking module adapted to determine, starting from the initial hand position, a series of state vectors indicating hand motion states in the event data stream using Kalman filtering. Target detection and tracking in this prior art is easily disturbed by noise, and the Kalman-filter predicted position easily drifts when the target stops or turns.
In summary, prior art based on conventional RGB video must extract image features of the target rather than using event information directly, and therefore cannot exploit the asynchronous, low-latency characteristics of a DVS; the static background also participates in the computation, which is costly and unsuitable for low-power scenarios. DVS-based detection and tracking in the prior art is easily disturbed by noise at the target position and yields inaccurate target boxes. When a Kalman filter is used, its prediction easily drifts when the target stops or turns: in the prior art, a template near the Kalman-predicted position updates the detector, and the detected position in turn updates the Kalman parameters, so errors easily accumulate in complex scenes until the target is lost. The prior art also requires the target position to be given in the initial frame, adding labor cost, and many scenarios have no one watching the video to supply that initial position. Finally, in subsequent frames the prior art must search a window 2.5 times the size of the previous frame's target box, which is computationally expensive, and once the target leaves the search range it is lost.
Disclosure of Invention
To overcome the above deficiencies of the prior art, the present invention provides a target detection and tracking method, system, and storage medium based on an event camera. The method offers low latency, resists noise interference, compensates for inaccurate detection positions, and prevents target-box jitter.
The invention provides a target detection and tracking method based on an event camera, which is characterized by comprising the following steps:
step 1, reading a DVS event sequence and carrying out background noise reduction processing; setting a time window; initializing parameters of Kalman filtering;
step 2, converting the DVS event sequence in the time window into a normalized average time surface;
step 3, denoising the target event in the normalized average time plane; detecting the position of a target, and meanwhile, predicting the position of the target by adopting Kalman filtering, wherein the initial position is determined by detection;
step 4, calculating the distance between the target position detected at the current moment and the target position at the last moment, and balancing the detection position and the position predicted by Kalman filtering according to the distance information so as to determine the position of the target at the current moment;
step 5, smoothing the track; moving to the next time window, repeating steps 2-5 until the DVS event sequence is over.
The invention provides a target detection and tracking system based on an event camera, which is characterized by comprising the following components:
the preprocessing module, used for reading the DVS event sequence, performing background noise reduction, setting the time window, and initializing the Kalman filter parameters;
the normalized average time surface module, used for converting the DVS event sequence within the time window into a normalized average time surface;
the target position prediction module, used for denoising target events in the normalized average time surface, detecting the target position, and predicting the target position with Kalman filtering, the initial position being determined by detection;
the target position updating module, used for calculating the distance between the target position detected at the current time and the target position at the previous time, and balancing the detected position against the Kalman-filter prediction according to this distance to determine the target position at the current time;
the trajectory smoothing module, used for smoothing the trajectory, then moving to the next time window and continuing processing until the DVS event sequence ends.
The invention provides a computer-readable storage medium, which is characterized by comprising a stored program, wherein when the program runs, a device where the computer-readable storage medium is located is controlled to execute the target detection and tracking method based on the event camera.
With this scheme, the invention uses event information directly, avoiding computation on the redundant information of conventional images, and thus has low-latency and asynchronous characteristics. Background denoising and target-event denoising prevent noise from disturbing the target position. Combining detection with Kalman filtering avoids both inaccurate detection positions and drift in the Kalman-predicted position. A trajectory smoothing strategy effectively prevents target-box jitter.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of background denoising effect.
Fig. 3 is a visualization of a normalized mean time surface.
Fig. 4 is a diagram of the effect of object detection.
FIG. 5 is a diagram of the effect of balancing the detected position against the Kalman-filter prediction, where "detection" denotes the detected position and "KCF" denotes the Kalman-filter prediction.
FIG. 6 is a diagram of the trajectory smoothing effect, where "detection" denotes the detected position and "KCF" denotes the Kalman-filter prediction.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Interpretation of related terms:
Event camera: a camera whose asynchronous sensor samples light according to scene dynamics. While a standard camera acquires full images at a specified frame rate (e.g., 30 fps), an event camera has no concept of frame rate; each pixel responds asynchronously and independently to brightness changes in the scene. The output is a variable-rate sequence of "events" or "pulses"; each event represents a change in light intensity, and a pulse is generated when the intensity change since the last instant exceeds a threshold. An event contains the location, the polarity (light getting stronger or weaker), and the current time. It is based on the pulse mechanism of biological vision.
DVS: dynamic vision sensor, also known as a neuromorphic vision sensor; a sensor that outputs positive and negative (brightening or dimming) events.
Event: a single "event" or "pulse", generated when the light intensity at a location in the event camera changes beyond a certain threshold. An event contains position, time, and polarity (light becoming stronger or weaker).
Time surface: a two-dimensional map representing the location and time information of events, with the time information as the "pixel value" at each position.
Target detection: finding out all interested objects in the image, comprising two subtasks of object positioning and object classification, and determining the category and the position of the object.
Target tracking: the method is divided into single target tracking and multi-target tracking. Tracking a single target: given an object, the location of the object is tracked. Multi-target tracking: the positions of a plurality of targets are tracked.
Membrane potential: the membrane potential generally refers to the potential difference generated between two solutions separated by a membrane. Generally refers to the electrical phenomenon accompanying the cell's vital movement, which is the potential difference existing across the cell membrane. Membrane potential plays an important role in the process of neuronal cell communication.
Complex background: for example, a background that is similar in color to the target, or that has many occlusions, or that appears as a moving object similar to the target.
FIG. 1 is a flow chart of a method according to an embodiment of the invention.
In one embodiment, the present invention provides a target detection and tracking method based on an event camera, comprising:
step 1, reading a DVS event sequence and carrying out background noise reduction processing; setting a time window; initializing parameters of Kalman filtering;
step 2, converting the DVS event sequence in the time window into a normalized average time surface;
step 3, denoising the target event in the normalized average time plane; detecting the position of a target, and meanwhile, predicting the position of the target by adopting Kalman filtering, wherein the initial position is determined by detection;
step 4, calculating the distance between the target position detected at the current moment and the target position at the last moment, and balancing the detection position and the position predicted by Kalman filtering according to the distance information so as to determine the position of the target at the current moment;
step 5, smoothing the track; moving to the next time window, repeating steps 2-5 until the DVS event sequence is over.
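The windowed processing loop of steps 2-5 can be sketched as follows. This is a minimal illustration, not the patent's own code; the event layout (t, x, y, p) per row and the helper name `iter_time_windows` are assumptions:

```python
import numpy as np

def iter_time_windows(events, window_us=5000):
    """Split an event array of rows (t, x, y, p) into consecutive fixed
    time windows (the 5 ms window of step 1 is the assumed default).
    Yields the sub-array of events falling in each non-empty window."""
    if len(events) == 0:
        return
    t = events[:, 0]
    start, end = t[0], t[-1]
    n = int((end - start) // window_us) + 1
    edges = start + window_us * np.arange(n + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t < hi)
        if mask.any():
            yield events[mask]
```

Each yielded window would then be converted into a normalized average time surface (step 2) and fed to the detection/tracking steps before moving on to the next window.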
Optionally, step 1 includes:
step 1a, define a time interval and judge, for each event at position (x, y), whether any event occurred within that interval in its 8-neighborhood; if not, the event is considered background noise and is filtered out;
step 1b, setting a time window and initializing Kalman filtering parameters as follows;
setting a state transition matrix A = [A_s, 0; 0, A_s];
here, in [A_s, 0] the 0 denotes an all-zero matrix of the same size as A_s placed beside A_s, in [0, A_s] the 0 denotes an all-zero matrix of the same size as A_s placed before A_s, and the semicolon starts the next block row;
setting a prediction noise covariance matrix Q = [Q_s, 0; 0, Q_s], with the same block convention;
setting an observation matrix H = [H_s, 0; 0, H_s], with the same block convention;
setting an observation noise covariance matrix R;
setting a state covariance matrix P;
setting the motion state of the target as x = [p_col, v_col, a_col, p_row, v_row, a_row]^T, where (p_row, p_col) is the center coordinate of the target, (v_row, v_col) is the target velocity, and (a_row, a_col) is the target acceleration.
Optionally, in step 1b,
the time window is 5 ms;
optionally, in step 1b,
A_s = [1, 1, 0.5; 0, 1, 1; 0, 0, 1];
optionally, in step 1b,
Q_s = diag([15, 15, 10]), where diag creates a diagonal matrix;
optionally, in step 1b,
H_s = [1, 0, 0];
optionally, in step 1b,
the observation noise covariance matrix is R = [25, 0; 0, 25];
optionally, in step 1b,
the state covariance matrix is P = diag([10^5, 10^5, 10^5, 10^5, 10^5, 10^5]);
Optionally, in step 1b,
the initial value of the speed of the target is (0, 0), and the initial value of the acceleration of the target is (0, 0).
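The optional parameter values above can be assembled into concrete matrices as below. This is a sketch using the patent's stated values; the function name `init_kalman` and the NumPy layout are illustrative assumptions, not part of the patent:

```python
import numpy as np

def init_kalman():
    """Build the Kalman matrices from the patent's optional values."""
    As = np.array([[1.0, 1.0, 0.5],
                   [0.0, 1.0, 1.0],
                   [0.0, 0.0, 1.0]])
    Z3 = np.zeros((3, 3))
    A = np.block([[As, Z3], [Z3, As]])        # state transition [A_s, 0; 0, A_s]
    Qs = np.diag([15.0, 15.0, 10.0])
    Q = np.block([[Qs, Z3], [Z3, Qs]])        # prediction noise covariance
    Hs = np.array([[1.0, 0.0, 0.0]])
    Z13 = np.zeros((1, 3))
    H = np.block([[Hs, Z13], [Z13, Hs]])      # observation matrix, 2x6
    R = np.array([[25.0, 0.0], [0.0, 25.0]])  # observation noise covariance
    P = np.diag([1e5] * 6)                    # state covariance
    x = np.zeros((6, 1))                      # [p_col, v_col, a_col, p_row, v_row, a_row]^T
    return A, Q, H, R, P, x
```

The block-diagonal structure means the column and row components of position, velocity, and acceleration evolve independently under the same constant-acceleration model.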
FIG. 2 shows the background denoising effect of step 1 in the present invention.
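A minimal sketch of the step 1a background filter follows: an event survives only if one of its 8 neighbors fired within a recent time interval. The interval `dt` and the function name are assumptions for illustration; the patent does not fix the interval's value:

```python
import numpy as np

def denoise_background(events, width, height, dt=2000):
    """Keep an event (t, x, y, p) only if some 8-neighbor pixel fired
    within the last dt microseconds (dt is an assumed interval)."""
    last = np.full((height + 2, width + 2), -np.inf)  # padded last-timestamp map
    keep = []
    for t, x, y, p in events:
        xi, yi = int(x) + 1, int(y) + 1
        patch = last[yi - 1:yi + 2, xi - 1:xi + 2].copy()
        patch[1, 1] = -np.inf                 # exclude the pixel itself
        if (t - patch.max()) <= dt:
            keep.append((t, x, y, p))
        last[yi, xi] = t
    return keep
```

Isolated events with no recent neighbor activity are treated as background noise and dropped, matching the effect shown in FIG. 2.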
Optionally, step 2 includes:
Step 2a, calculate the average time surface: the time of each event at position (i, j) within the time window is t, and the accumulated event count is I_{i,j}; the average time surface is then
T_{i,j} = (1 / I_{i,j}) · Σ t
Step 2b, calculate the normalized average time surface, defined as
N_{i,j} = (T_{i,j} − min_{(i,j)∈T} T_{i,j}) / (max_{(i,j)∈T} T_{i,j} − min_{(i,j)∈T} T_{i,j})
where (i, j) ∈ T denotes the event positions on the average time surface.
Fig. 3 is a visualization of the normalized average time plane of step 2 of the present invention.
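The formulas of steps 2a-2b can be sketched as follows, accumulating per-pixel mean timestamps and min-max normalizing over active pixels. The function name and the (t, x, y, p) event layout are illustrative assumptions:

```python
import numpy as np

def normalized_avg_time_surface(events, width, height):
    """Average event time per pixel (step 2a), min-max normalized over
    the pixels that received events (step 2b)."""
    t_sum = np.zeros((height, width))
    count = np.zeros((height, width))
    for t, x, y, p in events:
        t_sum[int(y), int(x)] += t
        count[int(y), int(x)] += 1
    active = count > 0
    T = np.zeros((height, width))
    T[active] = t_sum[active] / count[active]   # T_{i,j} = (1/I_{i,j}) * sum(t)
    vals = T[active]
    if vals.size and vals.max() > vals.min():
        T[active] = (vals - vals.min()) / (vals.max() - vals.min())
    return T, active
```

Recent events map to values near 1 and older ones near 0, which is what FIG. 3 visualizes.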
Optionally, step 3 includes:
Usually a moving target carries more time information than background noise, so a threshold λ is set to filter noise.
Step 3a, target detection: the target is O = {(i, j) | N_{i,j} > λ}; the target detection position at time k is z_{d,k} = mean(O), where mean is the averaging operation;
Step 3b, target tracking: predict the state at time k,
x̂_k^- = A x̂_{k-1}
where x̂_{k-1} is the actual state value at time k−1 and x̂_k^- is the predicted value at time k; predict the state covariance matrix at time k,
P_k^- = A P_{k-1} A^T + Q
where P_{k-1} is the state covariance matrix at time k−1.
Optionally, in step 3a, λ may be 0.2; to filter out noise, the lowest 20% and the highest 20% of the values in the target set O may be removed.
Fig. 4 is a diagram showing the target detection effect in step 3 of the present invention.
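Steps 3a-3b can be sketched together: threshold the normalized surface with λ = 0.2, trim the lowest and highest 20% of values as the optional note suggests, average to get the detected center, and run the standard Kalman prediction. The function name and trimming-by-quantile choice are illustrative assumptions:

```python
import numpy as np

def detect_and_predict(N, active, x, P, A, Q, lam=0.2):
    """Step 3a: detect the target centre on the normalised time surface N.
    Step 3b: Kalman prediction of state x and covariance P."""
    rows, cols = np.nonzero(active & (N > lam))
    z_d = None
    if rows.size:
        v = N[rows, cols]
        # trim lowest/highest 20% of surface values to suppress noise
        keep = (v >= np.quantile(v, 0.2)) & (v <= np.quantile(v, 0.8))
        if keep.any():
            rows, cols = rows[keep], cols[keep]
        z_d = np.array([cols.mean(), rows.mean()])  # detected centre (col, row)
    x_pred = A @ x                                  # x_k^- = A x_{k-1}
    P_pred = A @ P @ A.T + Q                        # P_k^- = A P_{k-1} A^T + Q
    return z_d, x_pred, P_pred
```

Detection and prediction run side by side, as the method requires, with the detection supplying the initial position.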
Optionally, step 4 includes:
Step 4a, from the target detection position z_{d,k} obtained in step 3a, calculate the Euclidean distance between the current position at time k and the position at time k−1:
d_1 = ‖z_{d,k} − z_{k-1}‖
Define a threshold thres. When d_1 ≤ thres, the detected target position is considered correct, and the Kalman filter parameters are updated as follows:
compute the Kalman gain at time k,
K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
update the state at time k,
x̂_k = x̂_k^- + K_k (z_{d,k} − H x̂_k^-)
update the state covariance matrix at time k,
P_k = (I − K_k H) P_k^-
When d_1 > thres, compute the Kalman-filter predicted position at the current time k:
z_{p,k} = H x̂_k^-
And 4 b: calculating the Euclidean distance between the target position predicted by Kalman filtering at the current k moment and the target position at the k-1 moment
Figure BDA0003242996420000107
And 4 c: counting the number of events in a target frame where a Kalman filtering prediction position is located and a target frame where a detection position is located, and taking a central point with a large number of events as a current target position zk
Optionally, in step 4a, thres may take 30.
FIG. 5 shows the effect of balancing the detected position with the predicted position by Kalman filtering in step 4 of the present invention.
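The gating-and-update logic of step 4a can be sketched as below: accept the detection when it lies within thres of the previous position and run the Kalman update with it, otherwise fall back to the prediction. The function name and return convention are illustrative assumptions; thres = 30 follows the patent's optional value:

```python
import numpy as np

def fuse_detection_and_kalman(z_d, z_prev, x_pred, P_pred, H, R, thres=30.0):
    """Step 4a sketch: gate the detection by distance to the previous
    position; Kalman-update if accepted, else keep the prediction."""
    if z_d is not None and np.linalg.norm(np.asarray(z_d) - np.asarray(z_prev)) <= thres:
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain K_k
        x_new = x_pred + K @ (np.reshape(z_d, (2, 1)) - H @ x_pred)
        P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
        return (H @ x_new).ravel(), x_new, P_new
    return (H @ x_pred).ravel(), x_pred, P_pred  # fall back to prediction
```

This balances the two position sources: nearby detections correct the filter, while far-off (likely noisy) detections are ignored in favor of the prediction.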
Optionally, step 5 includes:
When the target moves slowly, fewer events are generated and the currently predicted target position may deviate far from the position at the previous moment, making the target box jitter; to solve this problem, a trajectory smoothing strategy is adopted.
If d_2 > thres in step 4b, the target position at the current time k is updated as z_k ← ω_1 · z_k + ω_2 · z_{k-1}, where ω_1 and ω_2 are weights.
Optionally, ω_1 = 0.5 and ω_2 = 0.5.
Fig. 6 shows the effect of track smoothing in step 5 of the present invention.
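Step 5 reduces to a small weighted blend; a sketch with the patent's optional weights ω_1 = ω_2 = 0.5 (the function name is an illustrative assumption):

```python
def smooth_position(z_k, z_prev, d2, thres=30.0, w1=0.5, w2=0.5):
    """Step 5 sketch: when the prediction jumped more than thres from the
    previous position (d2 > thres), blend current and previous positions."""
    if d2 > thres:
        return tuple(w1 * a + w2 * b for a, b in zip(z_k, z_prev))
    return tuple(z_k)
```

With equal weights this halves any sudden jump, which is what suppresses the target-box jitter shown in FIG. 6.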
In another embodiment, the present invention provides an object detecting and tracking system based on an event camera, comprising:
the preprocessing module, used for reading the DVS event sequence, performing background noise reduction, setting the time window, and initializing the Kalman filter parameters;
the normalized average time surface module, used for converting the DVS event sequence within the time window into a normalized average time surface;
the target position prediction module, used for denoising target events in the normalized average time surface, detecting the target position, and predicting the target position with Kalman filtering, the initial position being determined by detection;
the target position updating module, used for calculating the distance between the target position detected at the current time and the target position at the previous time, and balancing the detected position against the Kalman-filter prediction according to this distance to determine the target position at the current time;
the trajectory smoothing module, used for smoothing the trajectory, then moving to the next time window and continuing processing until the DVS event sequence ends.
In another embodiment, the present invention provides a computer-readable storage medium storing a program; when the program runs, it controls the device where the storage medium resides to load and execute the above target detection and tracking method based on an event camera.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An object detection and tracking method based on an event camera is characterized by comprising the following steps:
step 1, reading a DVS event sequence and carrying out background noise reduction processing; setting a time window; initializing parameters of Kalman filtering;
step 2, converting the DVS event sequence in the time window into a normalized average time surface;
step 3, denoising the target event in the normalized average time plane; detecting the position of a target, and meanwhile, predicting the position of the target by adopting Kalman filtering, wherein the initial position is determined by detection;
step 4, calculating the distance between the target position detected at the current moment and the target position at the last moment, and balancing the detection position and the position predicted by Kalman filtering according to the distance information so as to determine the position of the target at the current moment;
step 5, smoothing the track; moving to the next time window, repeating steps 2-5 until the DVS event sequence is over.
2. The method according to claim 1, wherein step 1 comprises:
step 1a, defining a time interval and judging, for each event position, whether any event occurred within the time interval in its 8-neighborhood; if not, the event is considered background noise and is filtered out;
step 1b, setting a time window and initializing Kalman filtering parameters as follows;
setting a state transition matrix A = [A_s, 0; 0, A_s];
setting a prediction noise covariance matrix Q = [Q_s, 0; 0, Q_s];
setting an observation matrix H = [H_s, 0; 0, H_s];
Setting an observation noise covariance matrix R;
setting a state covariance matrix P;
setting the motion state of the target as x = [p_col, v_col, a_col, p_row, v_row, a_row]^T, where (p_row, p_col) is the center coordinate of the target, (v_row, v_col) is the target velocity, and (a_row, a_col) is the target acceleration.
3. The method of claim 1, wherein step 2 comprises:
step 2a, calculating the average time surface: the time of an event at position (i, j) in the time window is t, and the accumulated event count is I_{i,j}; the average time surface is then
T_{i,j} = (1 / I_{i,j}) · Σ t
step 2b, calculating the normalized average time surface, defined as
N_{i,j} = (T_{i,j} − min_{(i,j)∈T} T_{i,j}) / (max_{(i,j)∈T} T_{i,j} − min_{(i,j)∈T} T_{i,j})
where (i, j) ∈ T denotes the event positions on the average time surface.
4. The method according to claim 1, wherein step 3 comprises:
step 3a, target detection: setting a threshold λ for filtering noise, the target being O = {(i, j) | N_{i,j} > λ}; the target detection position at time k is z_{d,k} = mean(O), where mean is the averaging operation;
step 3b, target tracking: predicting the state at time k,
x̂_k^- = A x̂_{k-1}
where x̂_{k-1} is the actual state value at time k−1 and x̂_k^- is the predicted value at time k; predicting the state covariance matrix at time k,
P_k^- = A P_{k-1} A^T + Q
where P_{k-1} is the state covariance matrix at time k−1.
5. The method of claim 4, wherein in step 3a λ is 0.2, and the lowest 20% and the highest 20% of the values in the target set O are removed to filter out noise.
6. The method according to claim 4, wherein step 4 comprises:
step 4a, from the target detection position z_{d,k} obtained in step 3a, calculating the Euclidean distance between the current position at time k and the position at time k−1:
d_1 = ‖z_{d,k} − z_{k-1}‖
defining a threshold thres; when d_1 ≤ thres, the detected target position is considered correct, and the Kalman filter parameters are updated as follows:
computing the Kalman gain at time k,
K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
updating the state at time k,
x̂_k = x̂_k^- + K_k (z_{d,k} − H x̂_k^-)
updating the state covariance matrix at time k,
P_k = (I − K_k H) P_k^-
when d_1 > thres, computing the Kalman-filter predicted position at the current time k:
z_{p,k} = H x̂_k^-
And 4 b: calculating the Euclidean distance between the target position predicted by Kalman filtering at the current k moment and the target position at the k-1 moment
Figure FDA0003242996410000037
And 4 c: counting the number of events in a target frame where a Kalman filtering prediction position is located and a target frame where a detection position is located, and taking a central point with a large number of events as a current target position zk
7. The method of claim 6, wherein step 5 comprises:
if d_2 > thres in step 4b, the target position at the current time k is updated as z_k ← ω_1 · z_k + ω_2 · z_{k-1}, where ω_1 and ω_2 are weights.
8. The method of claim 7, wherein ω_1 = 0.5 and ω_2 = 0.5.
9. An event camera based object detection and tracking system, comprising:
the preprocessing module is used for reading the DVS event sequence, performing background noise reduction processing, setting a time window and initializing parameters of Kalman filtering;
a normalized average time plane module, configured to convert the DVS event sequence in the time window into a normalized average time plane;
the target position prediction module is used for denoising the target events in the normalized average time plane, detecting the position of the target, and predicting the target position by Kalman filtering, with the initial position determined by detection;
the target position updating module is used for calculating the distance between the target position detected at the current moment and the target position at the last moment, and balancing the detected position and the position predicted by Kalman filtering according to the distance information so as to determine the position of the target at the current moment;
the track smoothing module is used for smoothing the track; processing then moves to the next time window and the above modules continue until the DVS event sequence ends.
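One plausible reading of the "normalized average time plane" used by the second module is a per-pixel mean of event timestamps, normalized into [0, 1] over the window. The page does not spell the formula out, so this interpretation is an assumption for illustration only.

```python
import numpy as np

# Hypothetical sketch of a normalized average time plane: for each pixel,
# the mean of its event timestamps, scaled into [0, 1] within the window.
def normalized_average_time_plane(events, t_start, t_end, height, width):
    """events: iterable of (t, x, y, polarity) tuples inside the window."""
    t_sum = np.zeros((height, width))
    count = np.zeros((height, width))
    for t, x, y, _pol in events:
        t_sum[y, x] += (t - t_start) / (t_end - t_start)  # timestamp in [0, 1]
        count[y, x] += 1
    plane = np.zeros((height, width))
    mask = count > 0
    plane[mask] = t_sum[mask] / count[mask]               # per-pixel mean
    return plane

# Two events at pixel (1, 1) and one at (2, 2), window [0, 1] seconds.
events = [(0.0, 1, 1, 1), (1.0, 1, 1, 1), (1.0, 2, 2, -1)]
plane = normalized_average_time_plane(events, t_start=0.0, t_end=1.0,
                                      height=4, width=4)
```

Pixels with recent events get values near 1, pixels with only old events get values near 0, and empty pixels stay at 0, which is what makes moving targets stand out in such a plane.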
10. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any of claims 1-8.
CN202111024841.2A 2021-09-02 2021-09-02 Target detection and tracking method and system based on event camera and storage medium Pending CN113888607A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111024841.2A CN113888607A (en) 2021-09-02 2021-09-02 Target detection and tracking method and system based on event camera and storage medium


Publications (1)

Publication Number Publication Date
CN113888607A true CN113888607A (en) 2022-01-04

Family

ID=79012049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111024841.2A Pending CN113888607A (en) 2021-09-02 2021-09-02 Target detection and tracking method and system based on event camera and storage medium

Country Status (1)

Country Link
CN (1) CN113888607A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115996320A (en) * 2023-03-22 2023-04-21 深圳市九天睿芯科技有限公司 Event camera adaptive threshold adjustment method, device, equipment and storage medium
CN116958142A (en) * 2023-09-20 2023-10-27 安徽大学 Target detection and tracking method based on compound eye event imaging and high-speed turntable
CN116958142B (en) * 2023-09-20 2023-12-15 安徽大学 Target detection and tracking method based on compound eye event imaging and high-speed turntable

Similar Documents

Publication Publication Date Title
US11823429B2 (en) Method, system and device for difference automatic calibration in cross modal target detection
CN107516321B (en) Video multi-target tracking method and device
US6400830B1 (en) Technique for tracking objects through a series of images
US20170248971A1 (en) Method for detecting target object, detection apparatus and robot
US6556708B1 (en) Technique for classifying objects within an image
US10748294B2 (en) Method, system, and computer-readable recording medium for image object tracking
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
US20130070105A1 (en) Tracking device, tracking method, and computer program product
JP7272024B2 (en) Object tracking device, monitoring system and object tracking method
CN113723190A (en) Multi-target tracking method for synchronous moving target
CN113888607A (en) Target detection and tracking method and system based on event camera and storage medium
CN110647836B (en) Robust single-target tracking method based on deep learning
CN109033955B (en) Face tracking method and system
CN111127519B (en) Dual-model fusion target tracking control system and method thereof
CN113608663B (en) Fingertip tracking method based on deep learning and K-curvature method
US20220321792A1 (en) Main subject determining apparatus, image capturing apparatus, main subject determining method, and storage medium
CN112184767A (en) Method, device, equipment and storage medium for tracking moving object track
KR102434397B1 (en) Real time multi-object tracking device and method by using global motion
CN109978908B (en) Single-target rapid tracking and positioning method suitable for large-scale deformation
US6184858B1 (en) Technique for updating a background image
CN107665495B (en) Object tracking method and object tracking device
CN110930436B (en) Target tracking method and device
CN112131991A (en) Data association method based on event camera
CN115439771A (en) Improved DSST infrared laser spot tracking method
JP6555940B2 (en) Subject tracking device, imaging device, and method for controlling subject tracking device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination