CN113034542B - Moving target detection tracking method - Google Patents

Moving target detection tracking method

Info

Publication number
CN113034542B
CN113034542B (application CN202110255763.0A)
Authority
CN
China
Prior art keywords
pulse
motion
space
information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110255763.0A
Other languages
Chinese (zh)
Other versions
CN113034542A (en)
Inventor
黄铁军 (Huang Tiejun)
郑雅菁 (Zheng Yajing)
余肇飞 (Yu Zhaofei)
田永鸿 (Tian Yonghong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Priority claimed from CN202110255763.0A
Publication of CN113034542A
Application granted
Publication of CN113034542B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/061: Physical realisation using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application relates to the technical field of motion detection, and in particular to a method for detecting and tracking moving targets. The method comprises the following steps: acquiring space-time signals of a monitored area to generate a space-time pulse array; performing motion detection on the space-time pulse array and generating a corresponding pulse coding sequence; inputting the space-time pulse array into a spiking neural network, which clusters the array according to its firing pattern; combining the pulse coding sequence and the clustering result to obtain the motion information and position information of each target; and performing state prediction from the motion and position information and feeding the prediction back to the clustering spiking neural network to help correct prediction errors. The method can extract motion information such as the position, size, motion direction and motion speed of a moving object, thereby detecting and tracking different moving targets and predicting their motion trajectories.

Description

Moving target detection tracking method
Technical Field
The application relates to the technical field of motion detection, and in particular to a method for detecting and tracking moving targets.
Background
Unlike a traditional camera, which produces pictures with a fixed exposure time, a neuromorphic vision sensor imitates the way the biological retina produces pulse signals in response to the external illumination: it generates pulse signals based on changes in the light sampled from the monitored area. The temporal resolution of the pulse signals can be held within tens of microseconds, and this sampling mechanism makes it easier to capture the changing visual information in a scene. The sensor is therefore naturally better suited than a traditional camera to acquiring information about moving objects; for example, when the camera or the target moves too fast, the pictures produced by a traditional camera suffer from motion blur.
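As an illustration of this sampling principle (a minimal sketch under assumed parameters, not the sensor's actual circuit), each pixel can be modeled as an integrator that accumulates light intensity and emits a pulse whenever a threshold is crossed:

    import numpy as np

    def integrate_and_fire_pixel(intensity, threshold=1.0):
        """Illustrative spike-camera pixel: accumulate intensity samples and
        emit a pulse ('1') whenever the accumulator crosses the threshold."""
        accumulator = 0.0
        pulses = []
        for sample in intensity:
            accumulator += sample
            if accumulator >= threshold:
                pulses.append(1)
                accumulator -= threshold  # reset by subtraction
            else:
                pulses.append(0)
        return pulses

    # A brighter input fires more often than a dim one.
    print(integrate_and_fire_pixel(np.full(10, 0.6)))
    print(integrate_and_fire_pixel(np.full(10, 0.15)))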
However, different moving objects in the monitored area may occlude one another, and camera motion itself can generate a large number of pulse signals, so the pulse signals belonging to individual moving objects are hard to separate; distinguishing the visual information of different moving objects from that of the camera's self-motion is a challenging task. In addition, the output of a neuromorphic sensor takes the form of events or pulses, which is naturally suited as input to a spiking neural network, a low-power, low-latency model. However, no existing method based on a spiking neural network can detect and track moving objects from the information in a pulse sequence.
Accordingly, the present application proposes a moving object detection and tracking method based on a spiking neural network to at least partially solve the above technical problems.
Disclosure of Invention
The application mainly provides a moving target detection and tracking method that can be applied to the pulse arrays generated by neuromorphic vision chips. The method exploits the space-time characteristics of the pulse array obtained by a high-frequency retina-like camera and imitates biological visual characteristics: it takes the visual pulse sequence directly as input, distinguishes the visual information of different moving objects from the camera's self-motion, and extracts the information corresponding to each motion, such as the position, size, motion direction and motion speed of the moving objects, thereby detecting and tracking different moving objects and predicting their motion trajectories.
First, the different moving objects are clustered according to their motion states; the clustering method uses a motion-state generator that learns the distributions of the hidden states in the detected motion information. After clustering, the motion information corresponding to each category is obtained; the motion state of each object at the next time step is predicted from the per-class motion models produced by the motion-state generator and used to initialize the object categories for the pulse array at the next time step; finally, the motion-model parameters are updated from the difference between the motion detection result for the next pulse array and the predicted values.
To achieve this technical purpose, the application provides a moving object detection and tracking method comprising the following steps (a minimal sketch of the pipeline follows the list):
acquiring space-time signals of a monitored area to generate a space-time pulse array;
performing motion detection on the space-time pulse array and generating a corresponding pulse coding sequence;
inputting the space-time pulse array into a spiking neural network, the spiking neural network clustering according to the firing pattern of the space-time pulse array;
combining the pulse coding sequence and the clustering result to obtain the motion information and the position information of the target;
and performing state prediction according to the motion information and position information of the target, and feeding the prediction result back to the clustering spiking neural network to help correct errors arising in prediction.
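A minimal sketch of how these five steps could be chained is given below; every function is a stub with hypothetical names and toy return values, standing in for the components the steps describe:

    def motion_detect(pulses):
        # step 2: per-pixel motion detection -> pulse coding sequence (toy value)
        return {"speed": 1.0, "direction": 0.0}

    def cluster(pulses):
        # step 3: SNN clustering by firing pattern (toy value)
        return [{"id": 0, "position": (5, 5)}]

    def combine(codes, clusters):
        # step 4: attach the detected motion to each clustered target
        return [dict(c, **codes) for c in clusters]

    def predict(targets):
        # step 5: extrapolate each target one time step along its motion
        return [dict(t, position=(t["position"][0] + t["speed"],
                                  t["position"][1])) for t in targets]

    pulse_array = None                      # stands in for a space-time pulse array
    targets = combine(motion_detect(pulse_array), cluster(pulse_array))
    predictions = predict(targets)          # fed back to the SNN to correct errors
    print(targets)
    print(predictions)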
Specifically, performing motion detection on the space-time pulse array comprises: comparing the space-time signals at each local spatial position in the time domain to obtain the motion information of each local spatial position, the motion information including speed and direction.
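For illustration, one way such a temporal comparison could be realized (an assumed scheme; the text leaves the exact comparison open) is to compare each pixel's latest firing time with its neighbours' and read off speed and direction from the time differences:

    import numpy as np

    def local_motion(last_fire_time, y, x):
        """Estimate direction and speed at interior pixel (y, x) by comparing
        firing times with the four neighbours (a simplistic time-of-travel
        scheme; border handling is omitted)."""
        t0 = last_fire_time[y, x]
        best = None
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            dt = t0 - last_fire_time[y + dy, x + dx]
            if dt > 0 and (best is None or dt < best[0]):
                best = (dt, (dy, dx))
        if best is None:
            return None                      # no motion evidence at this pixel
        dt, (dy, dx) = best
        speed = 1.0 / dt                     # pixels per time step
        # motion points from the earlier-firing neighbour toward this pixel
        direction = np.degrees(np.arctan2(-dy, -dx)) % 360
        return speed, direction

    # Toy example: an edge sweeping left-to-right, one pixel per time step.
    t = np.array([[1, 2, 3],
                  [1, 2, 3],
                  [1, 2, 3]], dtype=float)
    print(local_motion(t, 1, 1))             # -> (1.0, 0.0): rightward, 1 px/step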
Specifically, performing motion detection on the space-time pulse array and generating a corresponding pulse coding sequence comprises: encoding the motion information of each local spatial position according to the similarity of the motion modes, or encoding by combining the spatial information and the motion information simultaneously.
Further, when encoding according to the similarity of motion modes, codes with similar motion modes are placed in the same pulse array, and different pulse arrays represent different motion information; when encoding by combining spatial information and motion information simultaneously, codes whose motion modes are similar and whose positions are close are placed in the same pulse array, and different pulse arrays each represent a particular motion mode at a particular location.
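A sketch of the first encoding variant, under the assumption of eight direction channels (the channel count and binning are illustrative, not fixed by the text):

    import numpy as np

    def encode_by_motion_mode(directions, n_channels=8):
        """Place each pixel's pulse into the channel whose direction bin matches
        its motion, so similar motion modes share one pulse array.
        `directions` holds degrees, NaN meaning no detected motion."""
        h, w = directions.shape
        channels = np.zeros((n_channels, h, w), dtype=np.uint8)
        bin_width = 360.0 / n_channels
        ys, xs = np.nonzero(~np.isnan(directions))
        for y, x in zip(ys, xs):
            c = int(directions[y, x] // bin_width) % n_channels
            channels[c, y, x] = 1
        return channels

    d = np.full((4, 4), np.nan)
    d[1, 1:3] = 0.0      # two pixels moving right (channel 0)
    d[2, 2] = 90.0       # one pixel moving up (channel 2)
    chans = encode_by_motion_mode(d)
    print(chans[0].sum(), chans[2].sum())   # -> 2 1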
Specifically, the spiking neural network used for clustering comprises a salient-region extraction layer, an input layer and an output layer. The salient-region extraction layer receives the space-time pulse array, extracts the salient regions, discards the non-salient regions, and outputs the space-time pulse arrays corresponding to the salient regions. The input layer receives and encodes the pulse arrays output by the salient-region extraction layer, each pulse array having a corresponding input-layer neuron. The output neurons of the output layer are fully connected to the input-layer neurons, each output neuron corresponds to one category, and the output layer outputs the corresponding pulse coding sequence.
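The following sketch shows the structural skeleton of such a network; the spiking dynamics are simplified to a weighted sum with a winning category, and all names, thresholds and the saliency criterion are illustrative assumptions:

    import numpy as np

    class ClusteringSNN:
        """Structural sketch of the three-layer clustering network."""

        def __init__(self, n_inputs, n_classes, seed=0):
            rng = np.random.default_rng(seed)
            # full connection between input-layer and output-layer neurons
            self.weights = rng.random((n_classes, n_inputs))

        @staticmethod
        def salient_region(pulse_array, min_pulses=2):
            """Salient-region extraction layer: keep the rows and columns that
            carry enough pulses and discard the non-salient remainder
            (one simple criterion among many possible)."""
            rows = pulse_array.sum(axis=1) >= min_pulses
            cols = pulse_array.sum(axis=0) >= min_pulses
            return pulse_array[np.ix_(rows, cols)]

        def classify(self, input_pulses):
            """Output layer: each output neuron corresponds to one category;
            the most strongly driven neuron wins."""
            drive = self.weights @ input_pulses.ravel()
            return int(np.argmax(drive))

    net = ClusteringSNN(n_inputs=16, n_classes=3)
    patch = np.ones((4, 4), dtype=np.uint8)     # a 4x4 salient-region pulse array
    print(net.classify(patch))                  # index of the winning category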
Preferably, the weights of the full connection between the input layer and the output layer are adjusted in real time according to the input pulse array; if the characteristics of a target change gradually during its motion, the features extracted by the output neuron corresponding to that target are adjusted accordingly.
Preferably, feeding the prediction result back to the clustering spiking neural network comprises: feeding the prediction result back to the input layer of the spiking neural network as an auxiliary input.
Preferably, the output layer further includes a suppression term so that each output neuron corresponds to at most one target and each target as a whole corresponds to only one output neuron; the suppression term comprises lateral inhibition between neurons or a global inhibitory neuron for control.
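A small sketch of the lateral-inhibition variant of the suppression term (threshold and inhibition strength are assumed values):

    import numpy as np

    def lateral_inhibition(membrane, threshold=1.0, inhibition=0.5):
        """The most active output neuron suppresses its competitors, so each
        target maps to at most one output neuron; wiring a single global
        inhibitory neuron is the alternative mentioned in the text."""
        winner = int(np.argmax(membrane))
        suppressed = membrane - inhibition
        suppressed[winner] = membrane[winner]   # the winner is not suppressed
        return (suppressed >= threshold).astype(int)

    print(lateral_inhibition(np.array([1.2, 1.3, 0.9])))  # -> [0 1 0]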
Preferably, when a target is not detected at several consecutive time steps, the target is assumed to have left the monitored area; no prediction is performed for it, and the corresponding output neuron can be reassigned to a new moving target.
Further, combining the pulse coding sequence and the clustering result to obtain the motion information and position information of the target comprises: obtaining, from the position of a clustered target, the motion information detected within the corresponding region, and taking the average of the motion information over the target region as the overall motion speed and motion direction of that target.
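A sketch of this averaging step, assuming per-pixel speed and direction maps from the motion-detection stage; the circular mean for directions is an implementation choice, not mandated by the text:

    import numpy as np

    def target_motion(speed_map, direction_map, target_mask):
        """Average the per-pixel motion inside a clustered target's region to
        get the target's overall speed and direction (degrees)."""
        speeds = speed_map[target_mask]
        dirs = np.radians(direction_map[target_mask])
        mean_dir = np.degrees(np.arctan2(np.sin(dirs).mean(),
                                         np.cos(dirs).mean())) % 360
        return speeds.mean(), mean_dir

    speed = np.array([[1.0, 1.2], [0.8, 5.0]])
    direction = np.array([[10.0, 350.0], [0.0, 90.0]])
    mask = np.array([[True, True], [True, False]])   # the clustered region
    print(target_motion(speed, direction, mask))     # ~ (1.0, 0.0)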
Further, performing state prediction according to the motion information and position information of the target and feeding the prediction result back to the clustering spiking neural network comprises the following (sketched in code after this list):
after the position information and motion information of a moving object are obtained, predicting the position the moving object will reach at the next time step from its motion speed; if the object is not detected at the next time step, or the difference between the detected position and the previously predicted position exceeds a preset threshold, adopting the prediction from the previous time step;
and feeding the predicted position information back to the clustering spiking neural network to augment the information of the output neurons corresponding to the different targets; if a target was located by adopting the prediction from the previous time step, the connection weights of the output neuron corresponding to the predicted target are kept unchanged from the previous time step.
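A minimal sketch of this predict-and-fall-back rule (the threshold `max_jump` is an assumed parameter):

    def predict_next(position, velocity):
        """Constant-velocity extrapolation of the target position."""
        return (position[0] + velocity[0], position[1] + velocity[1])

    def update_track(predicted, detected, max_jump=5.0):
        """If the target is missed, or the detection disagrees with the
        prediction by more than the preset threshold, keep the prediction.
        Returns (state, prediction_was_used)."""
        if detected is None:
            return predicted, True
        dist = ((detected[0] - predicted[0]) ** 2 +
                (detected[1] - predicted[1]) ** 2) ** 0.5
        if dist > max_jump:
            return predicted, True
        return detected, False

    pred = predict_next((10.0, 20.0), (1.0, -0.5))
    print(update_track(pred, None))              # missed -> keep prediction
    print(update_track(pred, (11.2, 19.4)))      # consistent -> accept detection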
Preferably, the clustering process comprises the following steps (a compact sketch follows the list):
the neuromorphic camera outputting a pulse array;
extracting the salient-region pulse arrays from the pulse array;
inputting the salient-region pulse arrays into the pulse-array generative model;
and obtaining the posterior probabilities corresponding to the different regions, thereby obtaining the motion information corresponding to each category.
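A compact sketch of one clustering round, with `extract` and `generator` as hypothetical callables standing in for the salient-region step and the generative model:

    def clustering_round(camera_frame_pulses, extract, generator):
        """One round of the clustering flow above: salient regions are cut out,
        the generative model yields p(z | e) per region, and each region is
        assigned the class with the largest posterior."""
        regions = extract(camera_frame_pulses)          # salient-region pulse arrays
        posteriors = [generator(r) for r in regions]    # p(z | e) per region
        labels = [max(range(len(p)), key=p.__getitem__) for p in posteriors]
        return labels, posteriors

    # Toy stand-ins: two regions, three motion classes.
    extract = lambda frame: frame
    generator = lambda region: [0.1, 0.7, 0.2] if sum(region) > 2 else [0.6, 0.3, 0.1]
    print(clustering_round([[1, 1, 1, 1], [1, 0, 0, 0]], extract, generator))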
The beneficial effects of the application are as follows:
the moving target detection tracking method provided by the application can be applied to pulse arrays generated by neuromorphic vision chips. The method aims at utilizing the space-time characteristics of a pulse array obtained by a high-frequency retina camera, simulating biological visual characteristics, directly taking a visual pulse sequence as input, distinguishing visual information of different moving objects and camera self-motion, analyzing and obtaining information corresponding to the motions respectively, such as the position, the size, the motion direction, the motion speed and the like of the moving objects, and further realizing detection tracking of different moving targets and prediction of motion trajectories.
Drawings
FIG. 1 shows a schematic flow diagram of the method of Embodiment 1 of the present application;
FIG. 2 shows a schematic diagram of the clustering process of Embodiment 1 of the present application;
FIG. 3 is a schematic diagram showing the detection and tracking effect of Embodiment 1 of the present application;
FIG. 4 shows a schematic flow diagram of the method of Embodiments 2 and 3 of the present application.
Detailed Description
Hereinafter, embodiments of the present application will be described with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the application. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present application. It will be apparent to one skilled in the art that the present application may be practiced without one or more of these details. In other instances, well-known features have not been described in detail in order to avoid obscuring the application.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise. Furthermore, it will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Exemplary embodiments according to the present application will now be described in more detail with reference to the accompanying drawings. These exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The figures are not drawn to scale, wherein certain details may be exaggerated and certain details may be omitted for clarity of presentation. The shapes of the various regions, layers and relative sizes, positional relationships between them shown in the drawings are merely exemplary, may in practice deviate due to manufacturing tolerances or technical limitations, and one skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions as actually required.
Embodiment 1:
This embodiment implements a moving object detection and tracking method which, as shown in FIG. 1, comprises the following steps:
acquiring space-time signals of a monitored area to generate a space-time pulse array;
performing motion detection on the space-time pulse array and generating a corresponding pulse coding sequence;
inputting the space-time pulse array into a spiking neural network, the spiking neural network clustering according to the firing pattern of the space-time pulse array;
combining the pulse coding sequence and the clustering result to obtain the motion information and the position information of the target;
and performing state prediction according to the motion information and position information of the target, and feeding the prediction result back to the clustering spiking neural network to help correct errors arising in prediction.
Performing motion detection on the space-time pulse array comprises: comparing the space-time signals at each local spatial position in the time domain to obtain the motion information of each local spatial position, the motion information including speed and direction.
Performing motion detection on the space-time pulse array and generating a corresponding pulse coding sequence comprises: encoding the motion information of each local spatial position according to the similarity of the motion modes, or encoding by combining the spatial information and the motion information simultaneously.
Further, when encoding according to the similarity of motion modes, codes with similar motion modes are placed in the same pulse array, and different pulse arrays represent different motion information; when encoding by combining spatial information and motion information simultaneously, codes whose motion modes are similar and whose positions are close are placed in the same pulse array, and different pulse arrays each represent a particular motion mode at a particular location.
Specifically, the spiking neural network used for clustering comprises a salient-region extraction layer, an input layer and an output layer: the salient-region extraction layer receives the space-time pulse array, extracts the salient regions, discards the non-salient regions, and outputs the space-time pulse arrays corresponding to the salient regions; the input layer receives and encodes the pulse arrays output by the salient-region extraction layer, each pulse array having a corresponding input-layer neuron; the output neurons of the output layer are fully connected to the input-layer neurons, each output neuron corresponds to one category, and the output layer outputs the corresponding pulse coding sequence.
Preferably, the weights of the full connection between the input layer and the output layer are adjusted in real time according to the input pulse array; if the characteristics of a target change gradually during its motion, the features extracted by the output neuron corresponding to that target are adjusted accordingly.
Preferably, feeding the prediction result back to the clustering spiking neural network comprises: feeding the prediction result back to the input layer of the spiking neural network as an auxiliary input.
Preferably, the output layer further includes a suppression term so that each output neuron corresponds to at most one target and each target as a whole corresponds to only one output neuron; the suppression term comprises lateral inhibition between neurons or a global inhibitory neuron for control.
Preferably, when a target is not detected at several consecutive time steps, the target is assumed to have left the monitored area; no prediction is performed for it, and the corresponding output neuron can be reassigned to a new moving target.
Further, combining the pulse coding sequence and the clustering result to obtain the motion information and position information of the target comprises: obtaining, from the position of a clustered target, the motion information detected within the corresponding region, and taking the average of the motion information over the target region as the overall motion speed and motion direction of that target.
Further, performing state prediction according to the motion information and position information of the target and feeding the prediction result back to the clustering spiking neural network comprises:
after the position information and motion information of a moving object are obtained, predicting the position the moving object will reach at the next time step from its motion speed; if the object is not detected at the next time step, or the difference between the detected position and the previously predicted position exceeds a preset threshold, adopting the prediction from the previous time step;
and feeding the predicted position information back to the clustering spiking neural network to augment the information of the output neurons corresponding to the different targets; if a target was located by adopting the prediction from the previous time step, the connection weights of the output neuron corresponding to the predicted target are kept unchanged from the previous time step.
As shown in FIG. 2, the clustering process comprises: the neuromorphic camera outputs a pulse array; the salient-region pulse arrays are extracted from the pulse array; the salient-region pulse arrays are input into the pulse-array generative model; and the posterior probabilities corresponding to the different regions are obtained, giving the motion information corresponding to each category. The clustering method uses a motion-state generator that learns the distributions of the hidden states in the detected motion information. After clustering, the motion information corresponding to each category is obtained; the motion state of each object at the next time step is predicted from the per-class motion models produced by the motion-state generator and used to initialize the object categories for the pulse array at the next time step; finally, the motion-model parameters are updated from the difference between the motion detection result for the next pulse array and the predicted values.
FIG. 3 shows the rotating trajectories of five moving digits, "5", "6", "7", "8" and "9", each rendered in a different gray level; following the steps described above, the tracking achieves the corresponding effect at each spatial position. The method of this embodiment can thus extract motion information such as the position, size, motion direction and motion speed of a moving object, detecting and tracking the different moving targets and predicting their motion trajectories.
Embodiment 2:
This embodiment provides an implementation of the moving object detection and tracking method; the overall flow is shown in FIG. 4. Moving objects are detected and tracked directly from the three-dimensional space-time pulse sequence output by the neuromorphic vision sensor: x and y denote the spatial position of a pulse signal, and t denotes the time at which pulses are generated at the different spatial positions. The binary pulse sequence over all spatial positions at a given time instant forms a pulse array, for example:
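(The original array illustration is not reproduced in this text; an array of the kind described might look as follows, with assumed values and brackets standing in for the bold marks described below:)

    0 0 0 0 0 0 0 0
    0 [1] [1] 0 0 0 1 0
    0 [1] [1] 0 0 0 0 0
    0 0 0 0 0 0 0 0
    1 0 0 0 0 0 1 0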
Here, "1" indicates that a pulse signal is present at the corresponding position at that time instant, and "0" indicates that no pulse signal is present there. A highlighted "1" (bold in the original illustration, bracketed in the sketch above) denotes a pulse generated by a moving object, while a plain "1" denotes a pulse generated by a stationary or background region. The steps of motion detection and tracking on the input pulse array are as follows:
step 1, motion detection is performed on an input pulse array, and the detection method can be an optical flow method based on pulse signals or a method based on a pulse neural network. And outputting various motion states existing in the pulse array, such as motion directions, motion speeds and the like, using different pulse arrays to represent different motion states, and if eight motion directions corresponding to 0-360 degrees are adopted, and the motion speeds are all states of 1 pixel per moment, outputting a pulse sequence consisting of eight pulse arrays after motion detection, wherein the pulse sequence is used for respectively representing the motion states existing in a detection area.
Step 2: assume the pulse array e (e ∈ {0,1}^{w×h}, where w and h are the width and height of the pulse array) and that there are k motions in total (including moving objects and camera motion), denoted by the variable z; the generated pulse signals are the joint result of these different motions. The pulse arrays generated by the different motions must be separated; a flow chart of the clustering process that yields the different moving objects is shown in FIG. 2. First, following a salient-region extraction method, the regions of the pulse array e with denser pulses are taken as salient regions, yielding one or more salient-region pulse arrays e_i (e_i ∈ {0,1}^{w1×h1}, with w1 < w, h1 < h and i ≤ k). There are two salient regions in the following array:
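(Illustrative array with assumed values; brackets mark the densely pulsed, salient positions:)

    0 0 0 0 0 0 0 0
    0 [1] [1] 0 0 [1] [1] 0
    0 [1] [1] 0 0 [1] [1] 0
    0 0 0 0 0 0 0 0
    0 1 0 0 0 0 0 0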
The bracketed regions indicate the detected salient regions (bold in the original illustration); the two salient regions correspond to the pulse arrays e_1 and e_2, respectively, as follows:
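(Continuing the illustrative values above:)

    e_1 = [1 1    e_2 = [1 1
           1 1]          1 1]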
if the relation between different classes z and pulse arrays e corresponding to different significant areas in the generator is expressed by a parameter theta, then the distribution condition p of motion states in the input pulse arrays is analyzed * (e) The model p (e, z|θ) can be generated by learning to find one θ * The values are such that the edge probability distribution of the variable eAs close as possible to p * (e) A. The application relates to a method for producing a fibre-reinforced plastic composite In solving this problem, the maximum expectation value method (Exception Maximization) in statistical mathematics, that is, the EM algorithm, can be used to solve for θ, and the solving process can be used to make the objective function E according to the gradient-increasing method {p*} [logp(e|θ)]The maximum value of (2) representing the motion profile p of the pulse array at the input * (e) The posterior probability distribution for a given θ is the largest expected.
Step 3: after the relation between the current pulse array and the object classes has been solved, the position of each moving target can be obtained from the posterior probability p(z | e, θ); the pulse array of each region is assigned to the variable z with the maximum posterior probability, which is its output class. The position to be reached at the next time step can then be predicted in combination with the motion speed; if the target is not detected at the next time step, or the detected position differs greatly from the previously predicted one, the prediction from the previous time step is adopted.
Step 4: the predicted position information is fed back to the spiking neural network used for clustering, augmenting the information of the output neurons corresponding to the different targets; if a target z_i was located by adopting the prediction from the previous time step, the connection weights of the output neuron corresponding to that predicted target are kept unchanged from the previous time step.
Step 5: when a target z_i is not detected at several consecutive time steps, the target is assumed to have left the monitored area; no prediction is performed, and the corresponding variable z_i can be reassigned to a new moving target.
Embodiment 3:
This embodiment implements the moving object detection and tracking method with the overall flow again as shown in FIG. 4. Moving objects are detected and tracked directly from the three-dimensional space-time pulse sequence output by the neuromorphic vision sensor; when detecting and tracking from the pulse array, motion detection is first performed on the pulse array in the same way as in step 1 of Embodiment 2, yielding the existing motion states. In the moving-object clustering process, however, the graphical model is realized with a spiking neural network: the optimization goal of the spiking neural network is likewise to find the parameter θ and determine the class z to which each motion belongs, such that the objective function E_{p*}[log p(e | θ)] is maximized under the current pulse array. The solving process does not use gradient ascent but spike-timing-dependent plasticity (STDP) from neural computation, and the output variable z is modeled by the output neurons of the network.
In the established spiking neural network, the salient-region pulse arrays described in Embodiment 2 are the input; the output neurons of the network represent the classes z to which the different states belong (k output neurons); θ in the model is the set of connection weights between the input neurons and the output neurons, and the parameter θ is updated by the STDP rule. Given the input pulse sequence e and the network weights θ, obtaining the firing state of the output neurons with a leaky integrate-and-fire (LIF) model is equivalent to the process of solving p(z | e, θ) in step 3 of Embodiment 2; likewise, computing the output neurons' firing when the pulse array of the next time step is input is equivalent to solving p(z^{t+1} | e^{t+1}, θ). After the firing states of the output neurons of the spiking neural network are obtained, the location of the salient region of the input pulse array that caused an output neuron to fire gives the position of the target it tracks. The remaining steps are the same as steps 4 and 5 of Embodiment 2: the predicted position information is fed back to the clustering spiking neural network to augment the information of the output neurons corresponding to the different targets; if a target z_i was located by adopting the prediction from the previous time step, the connection weights of the output neuron corresponding to the predicted target are kept unchanged from the previous time step; and when a target z_i is not detected at several consecutive time steps, the target is assumed to have left the monitored area, no prediction is performed, and the corresponding variable z_i can be reassigned to a new moving target.
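The following sketch illustrates this combination of LIF dynamics and an STDP-style update of θ; it is a pair-based simplification with assumed constants, since the text does not spell out the exact LIF and STDP parameters:

    import numpy as np

    def lif_step(v, input_current, tau=10.0, v_thresh=1.0, v_reset=0.0):
        """One leaky integrate-and-fire update for the output neurons."""
        v = v + (-v + input_current) / tau
        fired = v >= v_thresh
        v = np.where(fired, v_reset, v)
        return v, fired

    def stdp_update(w, pre_spikes, post_fired, lr=0.01):
        """Simplified STDP: strengthen weights from inputs that spiked just
        before a firing output neuron and weaken the rest."""
        for j in np.nonzero(post_fired)[0]:
            w[j] += lr * (pre_spikes - 0.5)   # +lr/2 active, -lr/2 silent inputs
        return np.clip(w, 0.0, 1.0)

    rng = np.random.default_rng(0)
    n_in, n_out = 16, 3
    w = rng.uniform(0.2, 0.8, (n_out, n_in))  # theta: input-to-output weights
    v = np.zeros(n_out)
    for _ in range(100):                      # repeated presentation of patterns
        pre = (rng.random(n_in) < 0.3).astype(float)
        v, fired = lif_step(v, w @ pre)
        w = stdp_update(w, pre, fired)
    print(w.round(2))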
The present application is not limited to the above embodiments; any changes or substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed herein are intended to fall within the scope of the application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A moving target detection and tracking method, characterized by comprising the following steps:
acquiring space-time signals of a monitored area to generate a space-time pulse array;
performing motion detection on the space-time pulse array and generating a corresponding pulse coding sequence;
wherein performing motion detection on the space-time pulse array comprises: comparing the space-time signals at each local spatial position in the time domain to obtain the motion information of each local spatial position in the time domain, the motion information including speed and direction;
encoding the motion information of each local spatial position according to the similarity of the motion modes, or encoding by combining the spatial information and the motion information simultaneously;
inputting the space-time pulse array into a spiking neural network, the spiking neural network clustering according to the firing pattern of the space-time pulse array;
wherein the spiking neural network used for clustering comprises a salient-region extraction layer, an input layer and an output layer, the salient-region extraction layer receiving the space-time pulse array, extracting the salient regions, discarding the non-salient regions and outputting the space-time pulse arrays corresponding to the salient regions;
the input layer receiving and encoding the pulse arrays output by the salient-region extraction layer, each pulse array having a corresponding input-layer neuron;
the output neurons of the output layer being fully connected to the input-layer neurons, each output neuron corresponding to one category, and the output layer outputting the corresponding pulse coding sequence;
combining the pulse coding sequence and the clustering result to obtain the motion information and the position information of the target;
and performing state prediction according to the motion information and position information of the target, and feeding the prediction result back to the spiking neural network used for clustering.
2. The moving target detection and tracking method according to claim 1, wherein, when the motion information of each local spatial position is encoded according to the similarity of motion modes, codes with similar motion modes are placed in the same pulse array and different pulse arrays represent different motion information; when encoding combines spatial information and motion information simultaneously, codes whose motion modes are similar and whose positions are close are placed in the same pulse array, and different pulse arrays each represent a particular motion mode at a particular location.
3. The moving target detection and tracking method according to claim 1, wherein the weights of the full connection between the input layer and the output layer are adjusted in real time according to the input pulse array, and, if the characteristics of a target change gradually during its motion, the features extracted by the output neuron corresponding to that target are adjusted accordingly.
4. The moving target detection and tracking method according to claim 1, wherein feeding the prediction result back to the spiking neural network used for clustering comprises: feeding the prediction result back to the input layer of the spiking neural network as an auxiliary input.
5. The moving target detection and tracking method according to claim 1, wherein the output layer further includes a suppression term so that each output neuron corresponds to at most one target and each target as a whole corresponds to only one output neuron, the suppression term comprising lateral inhibition between neurons or a global inhibitory neuron for control.
6. The moving target detection and tracking method according to claim 1, wherein, when a target is not detected at several consecutive time steps, the target is assumed to have left the monitored area, no prediction is performed, and the corresponding output neuron is reassigned to a new moving target.
7. The moving target detection and tracking method according to claim 1, wherein combining the pulse coding sequence and the clustering result to obtain the motion information and position information of the target comprises: obtaining, from the position of a clustered target, the motion information detected within the corresponding region, and taking the average of the motion information over the target region as the overall motion speed and motion direction of that target.
8. The moving target detection and tracking method according to claim 1, wherein performing state prediction according to the motion information and position information of the target and feeding the prediction result back to the spiking neural network used for clustering comprises:
after the position information and motion information of a moving object are obtained, predicting the position the moving object will reach at the next time step from its motion speed; if the object is not detected at the next time step, or the difference between the detected position and the previously predicted position exceeds a preset threshold, adopting the prediction from the previous time step;
and feeding the predicted position information back to the clustering spiking neural network to augment the information of the output neurons corresponding to the different targets; if a target was located by adopting the prediction from the previous time step, the connection weights of the output neuron corresponding to the predicted target are kept unchanged from the previous time step.
9. The moving object detecting and tracking method according to claim 1, wherein the clustering process includes:
outputting a pulse array by the neuromorphic camera;
extracting the salient-region pulse arrays from the pulse array;
inputting the salient-region pulse arrays into the spiking neural network;
and obtaining the posterior probabilities corresponding to the different regions, and obtaining the motion information corresponding to each category.
CN202110255763.0A 2021-03-09 2021-03-09 Moving target detection tracking method Active CN113034542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110255763.0A CN113034542B (en) 2021-03-09 2021-03-09 Moving target detection tracking method


Publications (2)

Publication Number Publication Date
CN113034542A CN113034542A (en) 2021-06-25
CN113034542B 2023-10-10

Family

ID=76467322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110255763.0A Active CN113034542B (en) 2021-03-09 2021-03-09 Moving target detection tracking method

Country Status (1)

Country Link
CN (1) CN113034542B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115048954A * 2022-05-23 2022-09-13 Peking University Retina-imitating target detection method and device, storage medium and terminal
CN115687911B (en) * 2022-06-13 2023-06-02 北京融合未来技术有限公司 Signal lamp detection method, device and system based on pulse signals

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709967A (en) * 2019-10-28 2020-09-25 北京大学 Target detection method, target tracking device and readable storage medium
WO2021012752A1 (en) * 2019-07-23 2021-01-28 中建三局智能技术有限公司 Spiking neural network-based short-range tracking method and system


Also Published As

Publication number Publication date
CN113034542A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
Lim et al. Foreground segmentation using convolutional neural networks for multiscale feature encoding
CN108154118B (en) A kind of target detection system and method based on adaptive combined filter and multistage detection
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN107122736B (en) Human body orientation prediction method and device based on deep learning
US20200160535A1 (en) Predicting subject body poses and subject movement intent using probabilistic generative models
CN113034542B (en) Moving target detection tracking method
CN110097028B (en) Crowd abnormal event detection method based on three-dimensional pyramid image generation network
CN110728698B (en) Multi-target tracking system based on composite cyclic neural network system
CN109993770B (en) Target tracking method for adaptive space-time learning and state recognition
CN108108688B (en) Limb conflict behavior detection method based on low-dimensional space-time feature extraction and topic modeling
CN112288776B (en) Target tracking method based on multi-time step pyramid codec
CN113112521B (en) Motion detection method based on pulse array
Xu et al. Face expression recognition based on convolutional neural network
CN114638408A (en) Pedestrian trajectory prediction method based on spatiotemporal information
CN109272036B (en) Random fern target tracking method based on depth residual error network
CN114694261A (en) Video three-dimensional human body posture estimation method and system based on multi-level supervision graph convolution
Douillard et al. A spatio-temporal probabilistic model for multi-sensor object recognition
Chandrapala et al. Invariant feature extraction from event based stimuli
Ruan et al. Automatic recognition of radar signal types based on CNN-LSTM
Andrade et al. Characterisation of optical flow anomalies in pedestrian traffic
Du et al. Infrared small target detection and tracking method suitable for different scenes
Xu et al. An intra-frame classification network for video anomaly detection and localization
Liu et al. Robust hand tracking with Hough forest and multi-cue flocks of features
Deng et al. BEmST: Multi-frame Infrared Small-dim Target Detection using Probabilistic Estimation of Sequential Backgrounds
CN113177920B (en) Target re-identification method and system of model biological tracking system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant