CN114612507A - High-speed target tracking method based on pulse sequence type image sensor - Google Patents

High-speed target tracking method based on pulse sequence type image sensor Download PDF

Info

Publication number
CN114612507A
CN114612507A CN202210185846.1A
Authority
CN
China
Prior art keywords
target
pulse
pixel
value
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210185846.1A
Other languages
Chinese (zh)
Inventor
徐江涛
孙硕
高志远
高静
聂凯明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202210185846.1A priority Critical patent/CN114612507A/en
Publication of CN114612507A publication Critical patent/CN114612507A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering

Abstract

The invention relates to the fields of optics, image sensor imaging and image processing, and provides a high-speed target tracking method for a pulse-type image sensor that improves the precision and stability of tracking high-speed targets in a scene. To this end, the invention adopts the technical scheme of a high-speed target tracking method based on a pulse sequence type image sensor, comprising the following steps: (1) acquiring pulse intervals and pulse frequencies from the pulse data; (2) performing motion detection on the moving target and removing ghosts and holes that appear during motion detection; (3) locking and tracking the target; (4) after finding the real target area, reconstructing the scene and the target using bilateral filtering. The invention is mainly applied to image sensor design and manufacture.

Description

High-speed target tracking method based on pulse sequence type image sensor
Technical Field
The invention relates to the fields of optics, image sensor imaging and image processing, in particular to a high-speed target tracking method based on a pulse sequence type image sensor.
Background
Image sensors have long been a focus of research. As a neuromorphic vision sensor, the bionic pulse sequence type image sensor offers high frame frequency and low data throughput, meeting the requirements of high-speed imaging. The circuit structure of a pixel of the pulse sequence type image sensor, its equivalent model and its working principle are shown in fig. 1; the pixel circuit mainly comprises a photodiode, a reset transistor, a comparator, a self-reset unit and an in-pixel readout circuit unit. As the equivalent model shows, under illumination the integrator continuously integrates the photo-generated current I_D, producing a photo-generated charge Q_D. When the charge reaches a threshold Q_ref, the integrator resets, restarts integration, and simultaneously generates a pulse datum 1, which is transmitted off-chip when the synchronous read signal arrives. Since the read signal arrives once per frame period, if no pulse datum 1 is generated within a frame period, a pulse datum 0 is output off-chip when the read signal arrives. The output of the pulse-type image sensor is therefore a single-bit data sequence containing only 0s and 1s.
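The integrate-and-fire behaviour described above can be sketched in a few lines of Python. This is an illustrative model only: the function name, the per-frame time step and the constant photocurrent are assumptions of the sketch, not details from the patent.

```python
def simulate_pixel(photocurrent, q_ref, n_frames):
    """Integrate-and-fire model of one pulse sequence type pixel.

    Photo-generated charge accumulates at rate `photocurrent` per frame;
    whenever it crosses the threshold q_ref the integrator self-resets and
    flags a pulse, which is read out as a 1 at the next frame boundary
    (0 otherwise). Names and the per-frame time step are illustrative.
    """
    charge = 0.0
    pending_pulse = False
    frames = []
    for _ in range(n_frames):
        charge += photocurrent          # integrate the photo-generated current
        if charge >= q_ref:             # threshold Q_ref crossed: self-reset
            charge -= q_ref
            pending_pulse = True
        frames.append(1 if pending_pulse else 0)
        pending_pulse = False
    return frames

bright = simulate_pixel(photocurrent=0.75, q_ref=1.0, n_frames=10)
dark = simulate_pixel(photocurrent=0.25, q_ref=1.0, n_frames=10)
# brighter light -> more 1s, i.e. higher pulse frequency in the single-bit output
```

The brighter pixel emits pulses in most frames while the darker one fires only occasionally, which is exactly the property the tracking method exploits when it converts pulse data to intervals and frequencies.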
The imaging system and the analysis algorithm are interdependent, inseparable components of machine vision. High-speed target tracking is an important branch of machine vision; in a high-speed tracking system the image sensor serves as the imaging system and the target tracking algorithm as the analysis algorithm. Although target tracking algorithms have been studied extensively over the last decades and have reached satisfactory reliability and accuracy, most of the literature is based on video image sequences captured by conventional cameras, and studies of bionic pulse-type image sensors remain scarce. On the one hand, pulse image sensor data are single-bit and do not directly contain gray-scale information, so conventional target tracking algorithms cannot be applied to them directly. On the other hand, applying a conventional target tracking method requires reconstructing an image sequence from the single-bit data; although some studies have proposed image reconstruction algorithms for pulse image sensors that can recover the gray-scale information of a scene from single-bit pulse data, the errors and noise introduced during image reconstruction severely affect the accuracy and stability of any conventional target tracking algorithm applied to the reconstructed image sequence.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a high-speed target tracking method for a pulse image sensor, which can effectively improve the precision and stability of the pulse image sensor in tracking a high-speed target in a scene. Therefore, the invention adopts the technical scheme that the high-speed target tracking method based on the pulse sequence type image sensor comprises the following steps:
(1) acquiring pulse intervals and pulse frequencies according to the pulse data;
(2) carrying out motion detection on the moving target, and removing ghosts and holes that appear during motion detection;
(3) locking and tracking the target;
(4) after finding the real target area, carrying out image reconstruction of the scene and the target using bilateral filtering.
The detailed steps are as follows:
(1) acquiring the pulse interval and pulse frequency from the pulse data: for a pixel (i, j) at time t, first find the output time Z_max of the most recent pulse (data 1) at or before time t and the output time Z_min of the first pulse (data 1) after time t, as shown in formula (1):

Z_max = max{ z : I_z(i, j) = 1, z ≤ t },  Z_min = min{ z : I_z(i, j) = 1, z > t }   (1)

where I_z(i, j) is the pulse data of pixel (i, j) at time z; the original pulse interval P_t(i, j) of the pixel at time t is then the difference between these two times, as shown in formula (2):

P_t(i, j) = Z_min − Z_max   (2)

formula (2) converts the pulse data into a pulse interval, and the pulse frequency is then found from the converted pulse interval, that is:

f_t(i, j) = 1 / P_t(i, j)   (3)
(2) detecting the moving target: motion detection is performed with the pulse frequency as the value of each pixel, and the background model is initialized with the pixel values of the first A frames, as shown in formula (4):

M_A(i, j) = { f_1(i, j), f_2(i, j), …, f_A(i, j) }   (4)

then the Euclidean distance between the current pixel value and each sample of the background model is calculated and compared with a set threshold R_D: the comparison result r_k is 1 below the threshold R_D and 0 above it; the comparison results are then counted, and if the count is greater than a given threshold R_B, the detection result B_t(i, j) of the pixel is background with binary gray value 0, otherwise it is a moving object with binary gray value 255, that is:

r_k = 1 if |f_t(i, j) − f_k(i, j)| < R_D, else r_k = 0;  B_t(i, j) = 0 (background) if Σ_{k = 1 … A} r_k > R_B, else B_t(i, j) = 255 (moving object)   (5)

meanwhile, if the pixel is detected as background, the background model is updated: the current pixel value randomly replaces one sample of the pixel's own background model and one sample of the background model of a random pixel in its neighborhood, as shown in formula (6):

f_an(i, j) ← f_t(i, j),  f_an(i_an, j_an) ← f_t(i, j)   (6)

where f_an(i, j) is an arbitrary sample of the background model M_A(i, j) of pixel (i, j), and f_an(i_an, j_an) is an arbitrary sample of the background model of an arbitrary pixel (i_an, j_an) in the neighborhood of pixel (i, j);
ghosts and holes produced during motion detection are then removed: for pixel (i, j), the pulse signals in its neighborhood R_ij are counted over a fixed time period t_h and the neighborhood pulse density d_t(i, j) is calculated:

d_t(i, j) = ( Σ_{(x, y) ∈ R_ij} Σ_{τ = t − t_h + 1 … t} H_τ(x, y) ) / (N_R · t_h)   (7)

where H_τ(x, y) is the pulse data of pixel (x, y) in the neighborhood R_ij at time τ and N_R is the number of pixels in R_ij; the pulse density range at an actual moving object is R_H, while at a ghost the binary gray value is 255 but the pulse density does not lie within R_H, so ghosts are eliminated accordingly, namely:

B_t(i, j) = 0, if B_t(i, j) = 255 and d_t(i, j) ∉ R_H   (8)

similarly, a hole inside the target has binary gray value 0 but the same pulse density as the actual target, so holes are compensated accordingly, that is:

B_t(i, j) = 255, if B_t(i, j) = 0 and d_t(i, j) ∈ R_H   (9)
(3) locking and tracking the target: first, the target to be tracked is locked from among all moving objects found by motion detection and its region is marked. The current target region is taken as the target model and the pulse-interval probability density of the region is calculated. After one or several frames, the regions of all detected moving objects are marked again, the pulse-interval probability density of each moving-object region is calculated and compared with that of the target model, and the moving object whose probability density is most similar to the target model within a certain distance range is the target to be tracked. Finally, this moving object becomes the new target model and its regional probability density becomes the probability density of the target model, so that the target model is updated for target tracking in the following frame or frames;
(4) after the real target area is found, the scene and the target are reconstructed as an image using bilateral filtering, as shown in formula (17):

P'_t(i, j) = (1 / W_p) Σ_{(x, y) ∈ S} G_σs(‖(i, j) − (x, y)‖) · G_σr(|P_t(i, j) − P_t(x, y)|) · P_t(x, y)   (17)

where P'_t(i, j) is the filtered pulse interval, P_t(i, j) is the original pulse interval, W_p is the normalization coefficient, G_σs is the spatial kernel, G_σr is the pixel (range) kernel, S is the spatial domain around pixel (i, j), and σ_s and σ_r are the standard deviations of the spatial and pixel kernels; image reconstruction is then performed as shown in formula (18):

G_t(i, j) = K / P'_t(i, j),  K = 255 × P_min   (18)

where K is the conversion factor from pulse interval to gray value and P_min is the minimum pulse interval in the pulse-interval sequence;
in step (3), first, at an initial t0Locking a target to be tracked at any time, marking the area of the target through a connected domain, taking the target area at the moment as a target model and naming the target area as RQThe nth pixel point in the region is named as pnWith a pulse interval of znThe center pixel of the region is named as p0With a pulse interval z0As shown in formula (10):
Figure BDA0003522811360000032
then calculating the pulse interval probability density Q of the target modelu(p0) As follows:
Figure BDA0003522811360000033
where K is the Epanechnikov kernel function, h is the window size of the kernel function, and C is a normalization coefficient, such that
Figure BDA0003522811360000034
b(zn) Pulse interval z for pixels in a regionnNbin is the maximum quantization range, u is the feature value, δ (b (z)n) U) is a Delta function for determining the quantized value b (z)n) Whether it belongs to the characteristic value u, NumAfter the number of all pixel points in the region is one frame or several frames, namely at the time of t, the regions where all moving objects are located are detected by the connected domain mark motion again, and are named as RP1,RP2,RP3,…,RPm…, suppose that the m-th moving object region RPmThe nth pixel point in (1) is pnmWith a pulse interval znmThe center pixel of the model is p0mWith a pulse interval z0mAs shown in formula (12):
Figure BDA0003522811360000041
the pulse-interval probability density of every moving-object region is then calculated; the probability density Q_um(p_0m) of the m-th moving object is shown in formula (13):

Q_um(p_0m) = C Σ_{n = 1 … Num_m} K(‖(p_0m − p_nm) / h‖²) δ(b(z_nm) − u)   (13)

the similarity between the target model and all moving objects R_P1, R_P2, R_P3, …, R_Pm, … is measured by the Bhattacharyya coefficient, from which the true target R_P is found. In addition, since the pulse image sensor has a high frame rate, the position of the target cannot change much between two or more frames, i.e. the target must remain within a certain distance range L_d; the Bhattacharyya coefficient is therefore adjusted as follows:

ρ_m = Σ_{u = 1 … Nbin} √( Q_u(p_0) · Q_um(p_0m) ), if ‖p_0 − p_0m‖ ≤ L_d;  ρ_m = 0 otherwise   (14)

where ρ_m is the adjusted Bhattacharyya coefficient value of the m-th candidate model. Furthermore, L_d can be adjusted adaptively and iteratively: L_d is first given a small initial value and then adjusted according to ρ_m; if every ρ_m is 0, L_d is increased by 1 until some ρ_m is non-zero, as shown in formula (15):

L_d ← L_d + 1, if max_m ρ_m = 0   (15)
the real target R_P is then the moving object that maximizes ρ_m, namely:

R_P = R_Pm, if ρ_m = max_m(ρ_m), m ∈ {1, 2, 3, …}   (16).

The target model is then updated, i.e. R_Q = R_P.
The invention has the characteristics and beneficial effects that:
a high-speed target tracking method is provided for a pulse sequence type image sensor, and the method can effectively improve the precision and stability of the pulse sequence type image sensor in tracking the high-speed target in a scene.
Description of the drawings:
fig. 1 shows a circuit configuration, an equivalent model and an operation principle of an image sensor pixel.
Fig. 2 pulse data to pulse interval conversion process.
FIG. 3 locks and marks the target.
Fig. 4 labels all moving objects.
Fig. 5 Overall flow of the algorithm.
Fig. 6 motion detection of the high speed turntable.
(a) Scene of the high-speed turntable; (b) before ghost and hole removal; (c) after ghost and hole removal.
The tracking effect of the algorithm of fig. 7 on a high-speed turntable.
Detailed Description
The technical scheme of the invention is as follows:
(1) First, the pulse interval and pulse frequency are acquired from the pulse data. Taking the pixel (i, j) at time t as an example, first find the output time Z_max of the most recent pulse (data 1) at or before time t and the output time Z_min of the first pulse (data 1) after time t, as follows:

Z_max = max{ z : I_z(i, j) = 1, z ≤ t },  Z_min = min{ z : I_z(i, j) = 1, z > t }   (1)

where I_z(i, j) is the pulse data of pixel (i, j) at time z. The original pulse interval P_t(i, j) of the pixel at time t is then obtained as the difference between these two times:

P_t(i, j) = Z_min − Z_max   (2)

Formula (2) converts the pulse data into a pulse interval; fig. 2 illustrates this conversion process. The pulse frequency is then found from the converted pulse interval, namely:

f_t(i, j) = 1 / P_t(i, j)   (3)
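Formulas (1) to (3) can be sketched for a single pixel's binary spike train as follows. The function name and the use of list indices as frame times are illustrative assumptions of this sketch.

```python
def pulse_interval(spikes, t):
    """Pulse interval P_t at index t of one pixel's binary spike train.

    Z_max is the time of the last 1 at or before t and Z_min the time of
    the first 1 after t (formula (1)); the interval is their difference
    (formula (2)). Returns None when no bracketing pulses exist.
    """
    z_max = max((z for z in range(t + 1) if spikes[z] == 1), default=None)
    z_min = next((z for z in range(t + 1, len(spikes)) if spikes[z] == 1), None)
    if z_max is None or z_min is None:
        return None
    return z_min - z_max

spikes = [0, 1, 0, 0, 1, 0, 1]
P = pulse_interval(spikes, t=2)   # Z_max = 1, Z_min = 4
f = 1.0 / P                       # pulse frequency, formula (3)
```

A short interval (high frequency) corresponds to a bright pixel, which is what makes the frequency usable as a pixel value in the motion-detection step.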
(2) Motion detection of the moving target. Since a higher light intensity produces a higher pulse frequency, moving-object detection can be performed with the pulse frequency as the value of each pixel; the background model is initialized here with the pixel values of the first A frames, as follows:

M_A(i, j) = { f_1(i, j), f_2(i, j), …, f_A(i, j) }   (4)

The Euclidean distance between the current pixel value and each sample of the background model is then calculated and compared with a set threshold R_D: the comparison result r_k is 1 below the threshold R_D and 0 above it. The comparison results are then counted; if the count is greater than a given threshold R_B, the detection result B_t(i, j) of the pixel is background with binary gray value 0, otherwise it is a moving object with binary gray value 255, that is:

r_k = 1 if |f_t(i, j) − f_k(i, j)| < R_D, else r_k = 0;  B_t(i, j) = 0 (background) if Σ_{k = 1 … A} r_k > R_B, else B_t(i, j) = 255 (moving object)   (5)

Meanwhile, if the pixel is detected as background, the background model is updated: the current pixel value randomly replaces one sample of the pixel's own background model and one sample of the background model of a random pixel in its neighborhood, as follows:

f_an(i, j) ← f_t(i, j),  f_an(i_an, j_an) ← f_t(i, j)   (6)

where f_an(i, j) is an arbitrary sample of the background model M_A(i, j) of pixel (i, j), and f_an(i_an, j_an) is an arbitrary sample of the background model of an arbitrary pixel (i_an, j_an) in the neighborhood of pixel (i, j).
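The sample-set test of formula (5) and the random update of formula (6) can be sketched as follows. The nested-list layout of the model and the simplified choice of neighbour (which may coincide with the pixel itself) are assumptions of this sketch; the defaults R_D = 10 and R_B = 2 follow the values quoted later in the embodiment.

```python
import random

def detect_and_update(model, freq, i, j, R_D=10, R_B=2, rng=random.Random(0)):
    """ViBe-style test of pixel (i, j) against its background sample set.

    model[i][j] holds the A past frequency samples of formula (4); the pixel
    is background (0) when more than R_B samples lie within distance R_D of
    the current frequency `freq`, otherwise a moving object (255).
    Background pixels refresh one random sample of their own set and one of
    a randomly chosen neighbour's set, as in formula (6).
    """
    samples = model[i][j]
    matches = sum(1 for f in samples if abs(freq - f) < R_D)
    if matches > R_B:                         # background: update the models
        samples[rng.randrange(len(samples))] = freq
        ni = i + rng.choice((-1, 0, 1))       # simplified neighbourhood pick
        nj = j + rng.choice((-1, 0, 1))
        if 0 <= ni < len(model) and 0 <= nj < len(model[0]):
            model[ni][nj][rng.randrange(len(model[ni][nj]))] = freq
        return 0
    return 255

model = [[[50.0] * 20 for _ in range(4)] for _ in range(4)]
label_bg = detect_and_update(model, 52.0, 1, 1)    # near the stored samples
label_fg = detect_and_update(model, 200.0, 1, 1)   # far from the samples
```

A frequency close to the stored samples is classified as background (and refreshes the model), while a clearly different frequency is flagged as a moving object.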
Ghosts and holes produced during motion detection are then removed. Taking pixel (i, j) as an example, the pulse signals in its neighborhood R_ij are counted over a fixed time period t_h and the neighborhood pulse density d_t(i, j) is calculated:

d_t(i, j) = ( Σ_{(x, y) ∈ R_ij} Σ_{τ = t − t_h + 1 … t} H_τ(x, y) ) / (N_R · t_h)   (7)

where H_τ(x, y) is the pulse data of pixel (x, y) in the neighborhood R_ij at time τ and N_R is the number of pixels in R_ij. Because the light intensity at a ghost differs from that at the actual moving object, their pulse densities also differ. Assuming the pulse density range at the actual moving object is R_H, a ghost has binary gray value 255 but a pulse density outside R_H, so ghosts can be eliminated accordingly, namely:

B_t(i, j) = 0, if B_t(i, j) = 255 and d_t(i, j) ∉ R_H   (8)

Similarly, a hole inside the target has binary gray value 0 but the same pulse density as the actual target, so holes can be compensated, that is:

B_t(i, j) = 255, if B_t(i, j) = 0 and d_t(i, j) ∈ R_H   (9)

This completes the motion detection.
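The corrections of formulas (8) and (9) can be sketched as a single pass over the binary mask. The concrete density range R_H used below is illustrative only, since the patent leaves its value scene-dependent.

```python
def fix_ghosts_and_holes(B, density, R_H=(0.2, 0.8)):
    """Post-process a binary motion mask using local pulse density.

    Foreground pixels (255) whose neighbourhood pulse density falls outside
    the expected range R_H are ghosts and are cleared (formula (8));
    background pixels (0) whose density lies inside R_H are holes inside
    the target and are filled (formula (9)). R_H here is an assumed
    illustrative interval.
    """
    lo, hi = R_H
    out = [row[:] for row in B]
    for i, row in enumerate(B):
        for j, v in enumerate(row):
            in_range = lo <= density[i][j] <= hi
            if v == 255 and not in_range:
                out[i][j] = 0        # ghost: density wrong for a real target
            elif v == 0 and in_range:
                out[i][j] = 255      # hole inside the target: fill it
    return out

B = [[255, 0], [0, 255]]
density = [[0.05, 0.5], [0.5, 0.5]]
mask = fix_ghosts_and_holes(B, density)
```

In this toy mask the low-density foreground pixel is removed as a ghost, while the two background pixels whose density matches the target are filled in.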
(3) Locking and tracking the target. First, the target to be tracked is locked from among all moving objects found by motion detection and its region is marked; the target region at this moment is taken as the target model and the pulse-interval probability density of the region is calculated. After one or several frames, the regions of all detected moving objects are marked again, the pulse-interval probability density of each moving-object region is calculated and compared with that of the target model, and the moving object whose probability density is most similar to the target model within a certain distance range is the target to be tracked. Finally, this moving object becomes the new target model and its regional probability density becomes the probability density of the target model, updating the target model for tracking in the following frame or frames. Taking tracking of the turntable pattern as an example, as shown in fig. 3, suppose the airplane pattern on the turntable is locked as the target to be tracked at an initial time t_0 and its region is marked by connected-domain labeling. The target region at this moment is taken as the target model and named R_Q; the n-th pixel point in the region is named p_n with pulse interval z_n, and the center pixel of the region is named p_0 with pulse interval z_0, as follows:

R_Q = { (p_n, z_n) : n = 1, 2, …, Num },  with center (p_0, z_0)   (10)

The pulse-interval probability density Q_u(p_0) of the target model is then calculated as follows:

Q_u(p_0) = C Σ_{n = 1 … Num} K(‖(p_0 − p_n) / h‖²) δ(b(z_n) − u)   (11)

where K is the Epanechnikov kernel function, h is the window size of the kernel function, and C is a normalization coefficient such that

Σ_{u = 1 … Nbin} Q_u(p_0) = 1

b(z_n) is the quantized value of the pulse interval z_n of a pixel in the region, Nbin is the maximum quantization range, u is the feature value, δ(b(z_n) − u) is a delta function that determines whether the quantized value b(z_n) belongs to the feature value u, and Num is the number of pixel points in the region. After one or several frames, as shown in fig. 4, the regions of all motion patterns on the turntable are detected again at time t by connected-domain labeling and named R_P1, R_P2, R_P3, …, R_Pm, …; suppose the n-th pixel point in the m-th moving-object region R_Pm is p_nm with pulse interval z_nm, and the center pixel of that model is p_0m with pulse interval z_0m, as follows:

R_Pm = { (p_nm, z_nm) : n = 1, 2, …, Num_m },  with center (p_0m, z_0m)   (12)
The pulse-interval probability density of every motion-pattern region is then calculated; the probability density Q_um(p_0m) of the m-th turntable pattern is as follows:

Q_um(p_0m) = C Σ_{n = 1 … Num_m} K(‖(p_0m − p_nm) / h‖²) δ(b(z_nm) − u)   (13)

The similarity between the target model and all turntable patterns R_P1, R_P2, R_P3, …, R_Pm, … is measured by the Bhattacharyya coefficient, from which the true target R_P is found. In addition, since the pulse image sensor has a high frame frequency, the position of the target cannot change much between two or more frames, i.e. the target must remain within a certain distance range L_d; the Bhattacharyya coefficient is therefore adjusted as follows:

ρ_m = Σ_{u = 1 … Nbin} √( Q_u(p_0) · Q_um(p_0m) ), if ‖p_0 − p_0m‖ ≤ L_d;  ρ_m = 0 otherwise   (14)

where ρ_m is the adjusted Bhattacharyya coefficient value of the m-th candidate model. In addition, L_d can be adjusted adaptively and iteratively: L_d is first given a small initial value and then adjusted according to ρ_m; if every ρ_m is 0, L_d is increased by 1 until some ρ_m is non-zero, as follows:

L_d ← L_d + 1, if max_m ρ_m = 0   (15)
The real target R_P is then the turntable pattern that maximizes ρ_m, namely:

R_P = R_Pm, if ρ_m = max_m(ρ_m), m ∈ {1, 2, 3, …}   (16)

The target model is then updated, i.e. R_Q = R_P.
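The gated Bhattacharyya matching of formulas (14) to (16), including the adaptive growth of L_d from formula (15), can be sketched as follows. Histograms are assumed to be already normalised, the candidate representation is simplified to (center, histogram) pairs, and all names are illustrative.

```python
import math

def bhattacharyya(q, p):
    """Bhattacharyya coefficient between two normalised histograms."""
    return sum(math.sqrt(a * b) for a, b in zip(q, p))

def pick_target(q_model, center, candidates, L_d=5):
    """Distance-gated matching in the spirit of formulas (14)-(16).

    `candidates` is a list of (center, histogram) pairs; candidates farther
    than L_d from the previous target center score 0, and L_d grows by 1
    whenever every gated score is 0 (formula (15)). Returns the index of
    the best candidate and the final L_d. Assumes at least one candidate
    histogram has a non-zero coefficient with the model.
    """
    while True:
        scores = [bhattacharyya(q_model, hist)
                  if math.dist(center, c) <= L_d else 0.0
                  for c, hist in candidates]
        if max(scores) > 0:
            return scores.index(max(scores)), L_d
        L_d += 1    # adaptive enlargement of the search radius

q = [0.5, 0.5]
cands = [((10.0, 0.0), [0.5, 0.5]),   # identical histogram, but too far away
         ((1.0, 0.0), [0.9, 0.1])]    # nearby candidate
idx, radius = pick_target(q, (0.0, 0.0), cands, L_d=2)
```

The gate prefers the nearby candidate even though a farther one matches the histogram better, which is exactly the high-frame-rate locality assumption behind formula (14); when no candidate is in range, the radius grows until one is.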
(4) After the real target area is found, the scene and the target must also be reconstructed as an image. To reconstruct the target and its scene clearly, the original pulse intervals are filtered; bilateral filtering is used here, as follows:

P'_t(i, j) = (1 / W_p) Σ_{(x, y) ∈ S} G_σs(‖(i, j) − (x, y)‖) · G_σr(|P_t(i, j) − P_t(x, y)|) · P_t(x, y)   (17)

where P'_t(i, j) is the filtered pulse interval, P_t(i, j) is the original pulse interval, W_p is the normalization coefficient, G_σs is the spatial kernel, G_σr is the pixel (range) kernel, S is the spatial domain around pixel (i, j), and σ_s and σ_r are the standard deviations of the spatial and pixel kernels. Image reconstruction is then performed as follows:

G_t(i, j) = K / P'_t(i, j),  K = 255 × P_min   (18)

where K is the conversion factor from pulse interval to gray value and P_min is the minimum pulse interval in the pulse-interval sequence. Fig. 5 shows the overall flow of the algorithm.
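Formulas (17) and (18) can be sketched with a one-dimensional bilateral filter. The patent filters a 5 × 5 × 5 spatio-temporal window; the 1-D window used here is a deliberate simplification to keep the sketch short.

```python
import math

def bilateral_1d(P, sigma_s=5.0, sigma_r=5.0, radius=2):
    """Bilateral filter over a 1-D sequence of pulse intervals.

    Each output is a weighted average whose weights combine spatial
    closeness (G_sigma_s) and interval similarity (G_sigma_r), as in
    formula (17), so edges between regions of different brightness are
    preserved while noise within each region is smoothed.
    """
    def g(x, sigma):
        return math.exp(-(x * x) / (2.0 * sigma * sigma))

    out = []
    for i, p in enumerate(P):
        w_sum = v_sum = 0.0
        for k in range(max(0, i - radius), min(len(P), i + radius + 1)):
            w = g(i - k, sigma_s) * g(p - P[k], sigma_r)   # space * range
            w_sum += w
            v_sum += w * P[k]
        out.append(v_sum / w_sum)     # division by W_p normalises the weights
    return out

P = [10, 10, 10, 40, 40, 40]          # short interval = bright, long = dark
P_f = bilateral_1d(P)
G = [255 * min(P) / p for p in P_f]   # grey reconstruction, formula (18)
```

Because the range kernel strongly penalises the 10-versus-40 interval difference, the step edge survives the filtering almost untouched, and formula (18) then maps the shortest interval to full brightness.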
The parameter A in formula (4) is generally 20; in formula (5), R_D is 10 and R_B is 2; the neighborhood R_ij in formula (7) is typically a circle of radius 2 centered on pixel (i, j). The parameter Nbin in formulas (11) and (13) is preferably 8 to 20. For formula (17), bilateral filtering typically uses a 5 × 5 × 5 matrix window sequence centered on the pixel to be filtered, and the standard deviations σ_s and σ_r are preferably 5. Fig. 6 shows the motion detection of the high-speed turntable; it can be seen that the method effectively removes ghosts and holes and improves the completeness of the motion detection. Fig. 7 shows the tracking result of the pulse image sensor on the high-speed turntable pattern, in which the blue box is the tracking result of the proposed algorithm and the red and green boxes are those of other algorithms; the proposed algorithm shows higher tracking accuracy and stability than the others and is better suited to the pulse sequence type image sensor.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (3)

1. A high-speed target tracking method based on a pulse sequence type image sensor is characterized by comprising the following steps:
(1) acquiring pulse intervals and pulse frequencies according to the pulse data;
(2) carrying out motion detection on the moving target, and removing ghosts and holes that appear during motion detection;
(3) locking and tracking the target;
(4) after finding the real target area, carrying out image reconstruction of the scene and the target using bilateral filtering.
2. The method of claim 1, wherein the detailed steps are as follows:
(1) acquiring the pulse interval and pulse frequency from the pulse data: for a pixel (i, j) at time t, first find the output time Z_max of the most recent pulse (data 1) at or before time t and the output time Z_min of the first pulse (data 1) after time t, as shown in formula (1):

Z_max = max{ z : I_z(i, j) = 1, z ≤ t },  Z_min = min{ z : I_z(i, j) = 1, z > t }   (1)

where I_z(i, j) is the pulse data of pixel (i, j) at time z; the original pulse interval P_t(i, j) of the pixel at time t is then the difference between these two times, as shown in formula (2):

P_t(i, j) = Z_min − Z_max   (2)

formula (2) converts the pulse data into a pulse interval, and the pulse frequency is then found from the converted pulse interval, that is:

f_t(i, j) = 1 / P_t(i, j)   (3)
(2) detecting the moving target: motion detection is performed with the pulse frequency as the value of each pixel, and the background model is initialized with the pixel values of the first A frames, as shown in formula (4):

M_A(i, j) = { f_1(i, j), f_2(i, j), …, f_A(i, j) }   (4)

then the Euclidean distance between the current pixel value and each sample of the background model is calculated and compared with a set threshold R_D: the comparison result r_k is 1 below the threshold R_D and 0 above it; the comparison results are then counted, and if the count is greater than a given threshold R_B, the detection result B_t(i, j) of the pixel is background with binary gray value 0, otherwise it is a moving object with binary gray value 255, that is:

r_k = 1 if |f_t(i, j) − f_k(i, j)| < R_D, else r_k = 0;  B_t(i, j) = 0 (background) if Σ_{k = 1 … A} r_k > R_B, else B_t(i, j) = 255 (moving object)   (5)

meanwhile, if the pixel is detected as background, the background model is updated: the current pixel value randomly replaces one sample of the pixel's own background model and one sample of the background model of a random pixel in its neighborhood, as shown in formula (6):

f_an(i, j) ← f_t(i, j),  f_an(i_an, j_an) ← f_t(i, j)   (6)

where f_an(i, j) is an arbitrary sample of the background model M_A(i, j) of pixel (i, j), and f_an(i_an, j_an) is an arbitrary sample of the background model of an arbitrary pixel (i_an, j_an) in the neighborhood of pixel (i, j);
ghosts and holes produced during motion detection are then removed: for pixel (i, j), the pulse signals in its neighborhood R_ij are counted over a fixed time period t_h and the neighborhood pulse density d_t(i, j) is calculated:

d_t(i, j) = ( Σ_{(x, y) ∈ R_ij} Σ_{τ = t − t_h + 1 … t} H_τ(x, y) ) / (N_R · t_h)   (7)

where H_τ(x, y) is the pulse data of pixel (x, y) in the neighborhood R_ij at time τ and N_R is the number of pixels in R_ij; the pulse density range at an actual moving object is R_H, while at a ghost the binary gray value is 255 but the pulse density does not lie within R_H, so ghosts are eliminated accordingly, namely:

B_t(i, j) = 0, if B_t(i, j) = 255 and d_t(i, j) ∉ R_H   (8)

similarly, a hole inside the target has binary gray value 0 but the same pulse density as the actual target, so holes are compensated accordingly, that is:

B_t(i, j) = 255, if B_t(i, j) = 0 and d_t(i, j) ∈ R_H   (9)
(3) locking and tracking the target: first, the target to be tracked is locked from among all moving objects found by motion detection and its region is marked. The current target region is taken as the target model and the pulse-interval probability density of the region is calculated. After one or several frames, the regions of all detected moving objects are marked again, the pulse-interval probability density of each moving-object region is calculated and compared with that of the target model, and the moving object whose probability density is most similar to the target model within a certain distance range is the target to be tracked. Finally, this moving object becomes the new target model and its regional probability density becomes the probability density of the target model, so that the target model is updated for target tracking in the following frame or frames;
(4) after the real target area is found, the scene and the target are reconstructed as an image using bilateral filtering, as shown in formula (17):

P'_t(i, j) = (1 / W_p) Σ_{(x, y) ∈ S} G_σs(‖(i, j) − (x, y)‖) · G_σr(|P_t(i, j) − P_t(x, y)|) · P_t(x, y)   (17)

where P'_t(i, j) is the filtered pulse interval, P_t(i, j) is the original pulse interval, W_p is the normalization coefficient, G_σs is the spatial kernel, G_σr is the pixel (range) kernel, S is the spatial domain around pixel (i, j), and σ_s and σ_r are the standard deviations of the spatial and pixel kernels; image reconstruction is then performed as shown in formula (18):

G_t(i, j) = K / P'_t(i, j),  K = 255 × P_min   (18)

where K is the conversion factor from pulse interval to gray value and P_min is the minimum pulse interval in the pulse-interval sequence.
3. The high-speed object tracking method based on pulse sequence image sensor as claimed in claim 2, wherein in step (3), the initial stage is first performedBeginning t0Locking a target to be tracked at any time, marking the area of the target through a connected domain, taking the target area at the moment as a target model and naming the target area as RQThe nth pixel point in the region is named as pnWith a pulse interval znThe center pixel of the region is named as p0With a pulse interval z0As shown in formula (10):
Figure FDA0003522811350000031
then calculating the pulse interval probability density Q of the target modelu(p0) As follows:
Figure FDA0003522811350000032
where K is the Epanechnikov kernel function, h is the window size of the kernel function, and C is a normalization coefficient, such that
Figure FDA0003522811350000033
b(zn) Pulse interval z for pixels in a regionnNbin is the maximum quantization range, u is the feature value, δ (b (z)n) U) is a Delta function for determining the quantized value b (z)n) Whether it belongs to the characteristic value u, NumAfter the number of all pixel points in the region is one frame or several frames, namely at the time of t, the regions where all moving objects are located are detected by the connected domain mark motion again, and are named as RP1,RP2,RP3,…,RPm…, suppose that the m-th moving object region RPmThe nth pixel point in (1) is pnmWith a pulse interval znmThe center pixel of the model is p0mWith a pulse interval of z0mAs shown in formula (12):
[formula (12): image FDA0003522811350000034 in the original filing]
then the pulse interval probability densities of all moving object regions are calculated, where the probability density Q_um(p_0m) of the m-th moving object is as shown in formula (13):
Q_um(p_0m) = C · Σ_{n=1..Num} K(‖(p_0m − p_nm)/h‖²) · δ(b(z_nm) − u)  (13)
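The kernel-weighted pulse-interval histogram of formulas (11) and (13) can be sketched as follows. The uniform quantizer b(z) = floor(z · Nbin / z_max) and the Epanechnikov profile k(x) = 1 − x for x ≤ 1 are assumptions for illustration; the claim names the kernel and the quantization but not their exact parameters:

```python
import numpy as np

def interval_histogram(pixels, intervals, center, h, nbin, z_max):
    """Kernel-weighted pulse-interval histogram Q_u of a region (a sketch).

    pixels:    (N, 2) array of pixel coordinates p_n
    intervals: (N,) array of pulse intervals z_n
    center:    region center p_0
    h:         kernel window size
    nbin:      number of quantization bins (Nbin)
    z_max:     assumed maximum pulse interval, used by the quantizer b(z)
    """
    pixels = np.asarray(pixels, float)
    d2 = np.sum(((pixels - np.asarray(center, float)) / h) ** 2, axis=1)
    w = np.maximum(1.0 - d2, 0.0)           # Epanechnikov profile k(||.||^2)
    bins = np.minimum((np.asarray(intervals) * nbin) // z_max,
                      nbin - 1).astype(int)  # quantized value b(z_n)
    Q = np.zeros(nbin)
    np.add.at(Q, bins, w)                    # sum of w_n * delta(b(z_n) - u)
    s = Q.sum()
    return Q / s if s > 0 else Q             # C normalizes sum_u Q_u = 1
```

The same function serves both the target model (formula (11)) and each moving-object candidate (formula (13)), only the pixel set and center change.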
the similarity between the target model and all moving objects R_P1, R_P2, R_P3, …, R_Pm, … is measured by the Bhattacharyya coefficient, and the true target R_P is found from among them; in addition, since the pulse image sensor has a very high frame rate, the position of the target cannot change greatly between two or several frames, that is, the target must remain within a certain distance range L_d between frames, so the Bhattacharyya coefficient can be adjusted as follows:
ρ_m = Σ_u √(Q_u(p_0) · Q_um(p_0m)),  if ‖p_0 − p_0m‖ ≤ L_d;  ρ_m = 0 otherwise  (14)
where ρ_m is the adjusted Bhattacharyya coefficient value of the m-th candidate model; in addition, L_d can be adjusted adaptively and iteratively: L_d is first given a small initial value and then adjusted according to ρ_m, i.e. if every ρ_m is 0, L_d is increased by 1 until some ρ_m is not 0, as shown in equation (15):
L_d = L_d + 1,  if max(ρ_m) = 0  (15)
while the real target R_P is the moving object that maximizes ρ_m, namely:
R_P = R_Pm,  if ρ_m = max(ρ_m),  m ∈ [1, 2, 3, …]  (16)
the target model is then updated, i.e. R_Q = R_P.
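The distance-gated Bhattacharyya matching of equations (14) to (16), including the adaptive growth of L_d in equation (15), can be sketched as below; the function signature and the candidate tuple layout (center, histogram) are assumptions for illustration:

```python
import numpy as np

def select_target(Q_model, model_center, candidates, Ld0=1):
    """Pick the true target R_P among moving-object candidates R_Pm.

    Q_model:      pulse-interval histogram of the target model R_Q
    model_center: last known target center p_0
    candidates:   list of (center p_0m, histogram Q_um) pairs
    Returns the index of the best candidate and the final L_d.
    """
    Qq = np.asarray(Q_model, float)
    c0 = np.asarray(model_center, float)

    def gated_rhos(Ld):
        # rho_m = sum_u sqrt(Q_u * Q_um), zeroed beyond distance L_d (eq. 14)
        rhos = []
        for center, Qm in candidates:
            dist = np.linalg.norm(np.asarray(center, float) - c0)
            rho = float(np.sum(np.sqrt(Qq * np.asarray(Qm, float)))) if dist <= Ld else 0.0
            rhos.append(rho)
        return rhos

    Ld = Ld0
    rhos = gated_rhos(Ld)
    while max(rhos) == 0.0:          # eq. (15): grow L_d until some rho > 0
        Ld += 1
        rhos = gated_rhos(Ld)
    return int(np.argmax(rhos)), Ld  # eq. (16): candidate maximizing rho
```

Note that if every candidate histogram were orthogonal to the model, this loop would never terminate; a real implementation would cap L_d at the sensor's field of view.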
CN202210185846.1A 2022-02-28 2022-02-28 High-speed target tracking method based on pulse sequence type image sensor Pending CN114612507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210185846.1A CN114612507A (en) 2022-02-28 2022-02-28 High-speed target tracking method based on pulse sequence type image sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210185846.1A CN114612507A (en) 2022-02-28 2022-02-28 High-speed target tracking method based on pulse sequence type image sensor

Publications (1)

Publication Number Publication Date
CN114612507A true CN114612507A (en) 2022-06-10

Family

ID=81858542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210185846.1A Pending CN114612507A (en) 2022-02-28 2022-02-28 High-speed target tracking method based on pulse sequence type image sensor

Country Status (1)

Country Link
CN (1) CN114612507A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115169387A (en) * 2022-06-20 2022-10-11 脉冲视觉(北京)科技有限公司 Foreground detection method and device of pulse signal, electronic equipment and storage medium
CN116012833A (en) * 2023-02-03 2023-04-25 脉冲视觉(北京)科技有限公司 License plate detection method, device, equipment, medium and program product
CN116012833B (en) * 2023-02-03 2023-10-10 脉冲视觉(北京)科技有限公司 License plate detection method, device, equipment, medium and program product

Similar Documents

Publication Publication Date Title
Benedek et al. Bayesian foreground and shadow detection in uncertain frame rate surveillance videos
Ye et al. Foreground–background separation from video clips via motion-assisted matrix restoration
Cevher et al. Compressive sensing for background subtraction
Huang An advanced motion detection algorithm with video quality analysis for video surveillance systems
CN114612507A (en) High-speed target tracking method based on pulse sequence type image sensor
CN104598883B (en) Target knows method for distinguishing again in a kind of multiple-camera monitoring network
CN110046659B (en) TLD-based long-time single-target tracking method
CN111462012A (en) SAR image simulation method based on conditional generative adversarial network
CN116309781B (en) Cross-modal fusion-based underwater visual target ranging method and device
KR101906796B1 (en) Device and method for image analyzing based on deep learning
CN110598613B (en) Expressway agglomerate fog monitoring method
CN111985314B (en) Smoke detection method based on ViBe and improved LBP
CN110414558A (en) Feature point matching method based on event camera
TWI394097B (en) Detecting method and system for moving object
CN113688741A (en) Motion training evaluation system and method based on cooperation of event camera and visual camera
Chen et al. A residual learning approach to deblur and generate high frame rate video with an event camera
CN111160100A (en) Lightweight depth model aerial photography vehicle detection method based on sample generation
Angelo A novel approach on object detection and tracking using adaptive background subtraction method
CN107220945A (en) Restoration method for extremely blurred images with multiple degradations
CN107301652B (en) Robust target tracking method based on local sparse representation and particle swarm optimization
Shao et al. Hyper RPCA: joint maximum correntropy criterion and Laplacian scale mixture modeling on-the-fly for moving object detection
Zhang et al. Dehazing with improved heterogeneous atmosphere light estimation and a nonlinear color attenuation prior model
CN108573217B (en) Compressive tracking method combining local structural information
Szwoch et al. Detection of moving objects in images combined from video and thermal cameras
Crnojević et al. Optimal wavelet differencing method for robust motion detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination