CN109509213B - Harris corner detection method applied to asynchronous time domain vision sensor - Google Patents


Info

Publication number
CN109509213B
Authority
CN
China
Prior art keywords
time
event
point
optical flow
speed
Prior art date
Legal status
Active
Application number
CN201811250135.8A
Other languages
Chinese (zh)
Other versions
CN109509213A (en)
Inventor
胡燕翔
Current Assignee
Tianjin Normal University
Original Assignee
Tianjin Normal University
Priority date
Filing date
Publication date
Application filed by Tianjin Normal University filed Critical Tianjin Normal University
Priority to CN201811250135.8A priority Critical patent/CN109509213B/en
Publication of CN109509213A publication Critical patent/CN109509213A/en
Application granted granted Critical
Publication of CN109509213B publication Critical patent/CN109509213B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20112 - Image segmentation details
    • G06T 2207/20164 - Salient point detection; Corner detection

Abstract

The invention discloses a corner detection method based on a novel optoelectronic imaging device, the asynchronous time domain vision sensor, and provides its specific implementation steps. The novel imaging device is sensitive only to light intensity changes in the captured scene, so the output visual information is real-time, has extremely low redundancy and a small data size, and the computation and storage requirements are greatly reduced compared with traditional frame-sampling imaging, making the device better suited to high-speed vision applications. The core idea of the invention is first to construct a continuously changing 'speed surface' from the 'asynchronous event stream' output by the vision sensor, and then to realize event-driven corner detection on the speed surface using the Harris corner detection principle.

Description

Harris corner detection method applied to asynchronous time domain vision sensor
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a Harris corner detection method applied to an asynchronous time domain vision sensor.
Background
Of all sources of information available to humans, image information accounts for over 80%. Currently, silicon-based semiconductor image sensors (including CCD and CMOS image sensors) have completely replaced silver halide film and become the dominant optoelectronic imaging devices. Based on the digital images they produce, computer vision and machine vision are increasingly widely applied and have become an important component of artificial intelligence.
According to the imaging principle, the images currently used are all generated based on a "frame sampling" approach:
1. All pixels are reset simultaneously and begin to sense light (collect photoelectric charge), and stop sensing once the set exposure time is reached;
2. The photoelectric charge collected by each pixel is read out in turn and converted into a voltage;
3. The voltage is converted into a digital quantity by analog-to-digital conversion, then output and stored. This digital quantity is the brightness value of that pixel, and the two-dimensional matrix formed by the brightness values of all pixels is the captured image.
In a machine vision system using the above-described "frame-sampled" image sensor (camera), a computer sequentially processes a sequence of images taken by the camera (typically 30 frames/second), extracts features, objects, and performs various recognition, analysis, and understanding in the images using various image processing algorithms.
The above-described "frame sampling" approach, when applied to high-speed vision processing, suffers from the following significant drawbacks:
(1) Data redundancy. There is a great deal of redundant information between two adjacent frames: background regions identical to the previous frame are repeatedly sampled and read out. This redundant information also places great pressure on the processing and storage of the system;
(2) High latency. Changes occurring in the scene can only be sampled and output frame by frame. This high delay between 'change' and 'perception' is clearly detrimental to the identification and tracking of high-speed moving objects, and if a high-speed camera is used instead, the processing and storage pressure described in (1) becomes even greater.
There is no concept of 'frames' in the biological vision system: visual photoreceptor cells are sensitive only to changes and transmit those changes in the form of nerve impulses to the visual cortex of the brain for processing. In recent years, researchers in neuromorphic engineering have proposed implementing bionic vision sensors with very large scale integration (VLSI) technology based on the 'change sampling' principle of biological vision. The working principle comprises the following points:
(1) Only 'change events (AE)' in the scene are sensed, sampled and output. According to their nature, AEs can be classified into two types: spatial changes (changes in the brightness relation between a pixel and its surrounding pixels) and temporal changes (changes in the brightness of a pixel itself). One of the most important classes of current biomimetic vision sensors is the asynchronous time domain vision sensor (Asynchronous Temporal Vision Sensor, ATVS);
(2) Each ATVS pixel autonomously detects changes in the illumination it receives: if there is no change (i.e. the change is smaller than the set threshold), it produces no output; otherwise the AE generated by the pixel is output asynchronously over the chip-level serial bus, and the pixels operate independently of one another. An AE is denoted AE = (x, y, P), where (x, y) is the address of the pixel in the sensor's pixel array and P denotes the polarity of the AE, e.g. '1' for an intensity increase and '0' for a decrease. This method of representing AEs by their addresses is called Address-Event Representation (AER). For moving objects, AEs are mainly generated by object boundaries (contours).
(3) Since all pixels of an ATVS share the same serial bus for outputting AEs, an arbiter is needed to decide the output order of simultaneous AEs. This means that the AEs belonging to one moving object are not necessarily output consecutively, i.e. two adjacently output AEs may not belong to the same moving object.
(4) Each AE output by the ATVS is assigned a timestamp T by the external interface controller, indicating the specific time at which the event was output, so an AE is fully represented as AE = (x, y, P, T).
In summary, the ATVS is sensitive only to changes in the scene, so it has the characteristics of small data size and real-time response. It is very suitable for machine vision applications such as target localization, tracking, speed measurement and shape analysis, and greatly reduces the processing load and memory requirements of the system at the same resolution.
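As a concrete illustration of the AER format just described, the following sketch models an address event as a small record and filters a stream by polarity. It is an illustration only; the class and field names are not taken from the patent, and Python is used merely as notation.

    from dataclasses import dataclass

    @dataclass
    class AddressEvent:
        """One AER event: pixel address, polarity and output timestamp."""
        x: int   # column address in the pixel array
        y: int   # row address in the pixel array
        p: int   # polarity: 1 = light intensity increase, 0 = decrease
        t: int   # output timestamp in microseconds, assigned by the interface controller

    # A short event stream as it might arrive over the shared serial AER bus.
    stream = [AddressEvent(12, 7, 1, 1003), AddressEvent(13, 7, 1, 1004),
              AddressEvent(40, 22, 0, 1010)]
    positive_events = [e for e in stream if e.p == 1]   # keep only "increase" events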
The traditional method of event-domain corner detection using the Harris principle computes, for each arriving event, the temporal gradient on a time surface (a spatial surface formed by the arrival times of events) and from it the corner response. Its problem is that, when objects occlude one another, it cannot distinguish the false corners produced where boundaries cross, which is detrimental to accurate target tracking.
Disclosure of Invention
In view of the above, the present invention aims to provide a Harris corner detection method applied to an asynchronous time domain vision sensor, so that the output visual information is real-time, has extremely low redundancy and a small data size; the processing time and resource requirements are greatly reduced compared with traditional full-frame imaging, making the method more suitable for high-speed vision applications.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
a Harris corner detection method applied to an asynchronous time domain vision sensor specifically comprises the following steps:
(1) Constructing a time surface TS and speed surface VS data structure;
(2) Reading a specified number of events for initializing TS and VS;
(3) Calculating the optical flow value at the event point and recording the value and the time t at the corresponding location of the VS;
(4) Updating the optical flow values at the other positions of the VS;
(5) Performing corner detection by using VS;
(6) If there is a subsequent event, returning to step (2) to continue; otherwise, ending.
Further, in step (1), assuming that the pixel array of the ATVS has size M×N, an event AE(x, y, P, t) indicates that the pixel with address (x, y) generates a change of attribute P (P = 1 or 0, i.e. the light intensity increases or decreases) and that the change is output at time t; wherein:
time surface TS: a two-dimensional array of the same size as the ATVS pixel array, recording the time at which each pixel last generated an event of the specified type (increase or decrease); when the difference between the time of a pixel's new event and the time of its last event is smaller than the refractory period threshold T_ref, the new event is discarded;
speed surface VS: a two-dimensional array of the same size as the ATVS pixel array, recording the most recently calculated optical flow value of each pixel, including the magnitude and the direction angle θ.
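A minimal sketch of the two data structures defined above, assuming NumPy arrays and timestamps in microseconds; the array size, the helper name and the use of a single event type are illustrative choices, and the 100 µs refractory period follows the reference value given later in the text.

    import numpy as np

    M, N = 128, 128              # assumed ATVS pixel array size
    T_REF = 100                  # refractory period threshold, in microseconds (reference value)

    TS = np.zeros((M, N))        # time surface: last event time of each pixel (one event type)
    VS_mag = np.zeros((M, N))    # speed surface: last optical-flow magnitude of each pixel
    VS_ang = np.zeros((M, N))    # speed surface: last optical-flow direction angle of each pixel

    def accept_event(x, y, t):
        """Refractory-period test: update TS unless the event came too soon."""
        if TS[y, x] != 0 and (t - TS[y, x]) < T_REF:
            return False         # discard: within the refractory period of the previous event
        TS[y, x] = t             # record the output time on the time surface
        return True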
Further, the step (2) specifically comprises the following steps,
(21) All positions of TS and VS arrays are set to 0;
(22) If the number of read-in events is smaller than the specified value (reference value M×N×10), read in the next AE; otherwise go to step (24);
(23) If the time difference between the output time of the read-in event and the time the same pixel last generated an event of the same type attribute is smaller than the refractory period threshold T_ref (reference value 100 µs), discard the event and return to step (22); otherwise record its output time at the corresponding position of the TS in units of 100 µs and return to step (22);
(24) Calculate the optical flow value of each non-edge point on the VS using the TS and record it in the VS.
Further, the step (3) specifically includes the following steps:
at time t, the optical flow values of the position points (x, y) are calculated using TS as follows:
(31) On the TS, taking (x, y) as the center, search its 8-neighborhood for the set of points whose recorded times differ from t by less than 1 ms:
PS = {(m, n, TS(m, n)) : |m - x| ≤ 1, |n - y| ≤ 1, t - TS(m, n) < 1 ms}   (1)
(32) If the number of points in PS is less than 3, end; otherwise go to (33);
(33) Solve the spatial fitting plane of PS using the least squares method or an eigenvalue decomposition method:
L: ax + by + ct + d = 0,  s.t. min(distance(PS, L))   (2)
i.e. the sum of the distances from all points in PS to plane L is minimized;
(34) The unit normal vector of plane L is:
n = (a, b, c) / sqrt(a^2 + b^2 + c^2)   (3)
The projection of this normal vector onto the (X-Y) plane is:
v = (a, b) / sqrt(a^2 + b^2 + c^2)   (4)
which is the optical flow at point (x, y); its direction angle is:
θ = arctan(b / a)   (5)
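Steps (31)-(34) can be sketched as the least-squares/eigen-decomposition plane fit below, assuming NumPy and a time surface TS whose entries use the same time unit as t. The function name, the border handling and the magnitude convention of taking the projection of the unit normal onto the x-y plane as in equation (4) are illustrative choices, not text from the patent.

    import numpy as np

    def local_optical_flow(TS, x, y, t, recency=1000.0):
        """Fit a plane a*x + b*y + c*t + d = 0 to the recent 8-neighborhood events of
        (x, y) and return the optical-flow magnitude and direction angle theta.
        `recency` is 1 ms expressed in the time unit of TS (1000 if microseconds)."""
        pts = []
        for dy in (-1, 0, 1):                 # collect the point set PS of step (31)
            for dx in (-1, 0, 1):
                m, n = x + dx, y + dy
                if 0 <= n < TS.shape[0] and 0 <= m < TS.shape[1]:
                    if TS[n, m] > 0 and (t - TS[n, m]) < recency:
                        pts.append((m, n, TS[n, m]))
        if len(pts) < 3:
            return None                       # step (32): too few points to fit a plane
        P = np.asarray(pts, dtype=float)
        P -= P.mean(axis=0)                   # center the points; the offset d is absorbed
        _, _, vt = np.linalg.svd(P)           # smallest right singular vector = plane normal
        a, b, c = vt[-1]
        norm = np.sqrt(a * a + b * b + c * c)
        v = np.hypot(a, b) / norm             # projection of the unit normal on the x-y plane, eq. (4)
        theta = np.arctan2(b, a)              # direction angle, theta = arctan(b / a), eq. (5)
        return v, theta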
further, in the step (4), after the optical flow values at the points (x, y) are updated, the magnitudes of the optical flow values at the other positions are updated according to the following formula, and the direction angle θ is unchanged:
VS(m, n) = VS(m, n) * e^(-k(t - t0)),  s.t. m ≠ x & n ≠ y   (6)
where k is an attenuation speed parameter; when t and t0 are expressed in units of 0.1 ms, its reference value is 10-20.
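The decay of equation (6) is a single vectorised operation when the magnitudes are held in an array. The sketch below assumes t and t0 are expressed in units of 0.1 ms so that the reference range of k applies, and the caller overwrites the freshly computed point afterwards; the function name is illustrative.

    import numpy as np

    def decay_speed_surface(VS_mag, t, t0, k=15.0):
        """Attenuate all stored optical-flow magnitudes by e^(-k*(t - t0)), eq. (6);
        the newly updated point (x, y) is written by the caller after this call."""
        return VS_mag * np.exp(-k * (t - t0))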
Further, in step (5), after the event AE(x, y, P, t) is received and TS and VS have been updated in sequence, for the 3×3 small window w = VS[x-1:x+1, y-1:y+1] centered on (x, y) on the VS:
(51) Speed direction angle consistency check:
Let the optical flow at point (x, y) be (V, θ), where V is the optical flow magnitude and θ the velocity direction angle; check whether the velocity direction angles of the 8 neighborhood points around (x, y) are consistent with it:
if |VS(m, n, θ) - VS(x, y, θ)| ≥ T_θ then VS(m, n) = 0, for all (m, n) in the 8-neighborhood of (x, y)   (7)
where T_θ is the speed direction angle consistency threshold; reference value 20°.
(52) Performing Gaussian smoothing convolution:
w = w * G   (8)
where G denotes a two-dimensional Gaussian convolution kernel and * denotes convolution; the purpose of the above formula is to smooth the velocity values of the points in w and reduce the interference of noise or missing points.
(53) The horizontal gradient wx and the vertical gradient wy for each point within w are calculated:
wx = ∂w/∂x,  wy = ∂w/∂y   (9)
(54) Constructing a gradient autocorrelation matrix for each point within w as follows:
w_self-cor(x, y) = [ wx^2    wx*wy ]
                   [ wx*wy   wy^2  ]   (10)
(55) Calculating the eigenvalue of each point gradient autocorrelation matrix:
R(x, y) = |w_self-cor(x, y)| - k * tr^2(w_self-cor(x, y))   (11)
where |w_self-cor(x, y)| is the determinant of the gradient autocorrelation matrix at point (x, y), tr(w_self-cor(x, y)) denotes its trace, and k is taken as 0.05;
(56) When the R value of a point in the small window is larger than the set threshold T_Corner, it is marked as a corner point; the reference value of T_Corner is 0.7.
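Steps (51)-(56) can be sketched as follows, assuming NumPy/SciPy, direction angles stored in radians, and 3×3 Gaussian and Sobel kernels (the text does not fix the exact kernels, so these are assumptions). The gradient products are accumulated over the window, which is the usual Harris formulation; since the text states the autocorrelation matrix per point, this accumulation is an interpretation rather than the patent's literal wording.

    import numpy as np
    from scipy.ndimage import convolve

    GAUSS_3x3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

    def harris_response(VS_mag, VS_ang, x, y, t_theta=np.deg2rad(20.0), k=0.05):
        """Harris response R for the 3x3 speed-surface window centered on (x, y);
        assumes (x, y) is not on the array border."""
        w = VS_mag[y - 1:y + 2, x - 1:x + 2].copy()
        ang = VS_ang[y - 1:y + 2, x - 1:x + 2]
        # (51) zero out neighbors whose direction angle deviates from the center by T_theta or more
        w[np.abs(ang - VS_ang[y, x]) >= t_theta] = 0.0
        # (52) Gaussian smoothing of the window
        w = convolve(w, GAUSS_3x3, mode='constant')
        # (53) horizontal and vertical gradients
        wx = convolve(w, SOBEL_X, mode='constant')
        wy = convolve(w, SOBEL_X.T, mode='constant')
        # (54)-(55) gradient autocorrelation entries accumulated over the window, then R
        A, B, C = (wx * wx).sum(), (wy * wy).sum(), (wx * wy).sum()
        return (A * B - C * C) - k * (A + B) ** 2

    # (56): mark (x, y) as a corner when harris_response(...) exceeds T_Corner (reference value 0.7).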
Compared with the prior art, the Harris corner detection method applied to the asynchronous time domain vision sensor has the following advantages:
(1) The ATVS adopts the imaging principles of bionic-vision change-driven sampling, asynchronous autonomous output and address-event representation, so it has the particular advantages of extremely low data redundancy and high real-time performance and time resolution, and is very suitable for vision applications in high-speed motion scenes;
(2) The data size of the ATVS of the present invention is typically only 5-10% of that of a 'frame sampled' image sensor, thus greatly reducing the computational and memory requirements of the back-end computer system;
(3) The invention adopts the working principle of 'change sampling + asynchronous pixel output', so changes in the scene can be perceived and output with microsecond delay, equivalent to thousands to tens of thousands of frames per second under frame sampling, giving very high real-time performance;
(4) Since the edges of moving objects are the main cause of light intensity changes, the computation required for basic machine vision tasks such as edge detection and target contour extraction is greatly reduced;
(5) The invention performs corner detection on an event-driven speed surface (optical flow field); because different moving objects have different speeds, the corner features of a moving object can be effectively distinguished from the false corners generated by mutual occlusion between objects, providing a more robust basis for accurate target tracking.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of comparing frame sampling imaging with variation sampling imaging according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an ATVS structure according to an embodiment of the present invention;
FIG. 3 is a conceptual illustration of optical flow according to embodiments of the invention;
fig. 4 is a schematic view of a corner point according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the relationship among a moving object, an event accumulation map, a time surface and a speed surface according to the embodiment of the present invention;
fig. 6 is a schematic diagram of an event-based corner detection principle according to an embodiment of the present invention;
fig. 7 is a flow chart of a Harris corner detection method applied to an asynchronous time domain vision sensor according to an embodiment of the invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention will be described in detail below with reference to the drawings in connection with embodiments.
The invention discloses a corner detection method based on a novel optoelectronic imaging device, the asynchronous time domain vision sensor, and provides its specific implementation steps. The novel imaging device is sensitive only to light intensity changes in the captured scene, so the output visual information is real-time, has extremely low redundancy and a small data size; the processing time and resource requirements are greatly reduced compared with traditional full-frame imaging, making the device better suited to high-speed vision applications. The core idea of the invention is first to compute a continuously changing 'speed surface' from the 'asynchronous event stream' output by the vision sensor, and then to realize event-driven corner detection on the speed surface using the computational principle of Harris corner detection in frame images.
Fig. 1 shows a comparison of frame sample imaging with varying sample imaging. In the conventional "frame sampling" imaging mode, all pixels use the integrated amount of photoelectric conversion in a fixed period as an output value, which has the advantage of containing all spatial information and the disadvantage of losing time-domain continuity at fixed inter-frame intervals. The change sampling is only sensitive to the light intensity change, outputs the position, attribute and occurrence time of the change in the form of an event, has extremely short change perception delay and small data volume, but lacks spatial information between pixels (intensity contrast between pixels).
Fig. 2 shows the structural composition of the ATVS. The photosensitive module of an ATVS pixel converts the illumination intensity into a photocurrent, and the quantization and comparison module measures, through integration, the amplitude of the change of the photo-voltage. When the magnitude of the change exceeds a predetermined proportion (typically set to 15%) of the previous integrated value, the pixel issues an output request to the arbitration module. The peripheral interface circuit combines the coordinates of the outputting pixel, the polarity of the change and the actual output time into an 'event' for output.
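To make the relative-change rule concrete, the toy model below emits an event whenever a pixel's input deviates from the last memorised level by more than the set proportion. It is an illustration of the sampling behaviour only, not of the patent's pixel circuit, and the default threshold is the 15% figure quoted above.

    def pixel_events(samples, threshold=0.15):
        """Emit (polarity, index) events when the input deviates from the last
        memorised value by more than `threshold` (relative change)."""
        events, ref = [], samples[0]
        for i, v in enumerate(samples[1:], start=1):
            if abs(v - ref) > threshold * ref:
                events.append((1 if v > ref else 0, i))   # 1 = increase, 0 = decrease
                ref = v                                   # memorise the new level
        return events

    # Example: pixel_events([1.0, 1.05, 1.2, 1.1, 0.9]) returns [(1, 2), (0, 4)]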
Fig. 3 gives a conceptual illustration of optical flow. Optical flow is the projection of the velocity of an object moving in space onto the plane of the sensor pixel array. Optical-flow-based object detection and recognition is one of the main methods of visual object detection and recognition. Panel (a) is an image containing a rotating object; the optical flow of its points is shown in panel (b): the direction of each small arrow is the direction of motion and its length indicates the speed.
Fig. 4 shows a schematic view of corner points. In a frame image, corner points are detected from the gray-scale differences between pixels. In (a), each point on the straight line has only a vertical gradient while its horizontal gradient is 0, so there is no corner point. In (b), the pixel at the intersection of the two straight lines has non-zero horizontal and vertical gradients and can be judged to be a corner point. In (c), the response R of the autocorrelation matrix (equation 6) at the line-segment intersection is smaller: of its two orthogonal eigenvalues one is large and the other small, and whether the point is judged to be a corner depends on the setting of the decision threshold.
Fig. 5 shows the relationship among a moving object, an event accumulation map, a time surface and a speed surface in the present invention. (a) is a disk containing a black dot; as it rotates clockwise it generates change events. (b) is a 'display frame' that accumulates the events generated by the ATVS within 20 ms. It can be seen that only the rotating dot generates events; the light-facing side produces positive events and the backlit side negative events. (c) is the space-time diagram of the events. Events are generated continuously along the time axis, and the trajectory of the rotating dot in the space-time diagram is continuous; the 'event frame image' in (b) corresponds to compressing the events of a period of time together. (d) is a schematic of the event-driven speed surface. A dot (represented as four pixels) rotates clockwise at constant speed from a previous time to the current time. The direction of motion at the current moment is horizontal and its speed value is the largest; the speed directions at past moments differ from the current one, and their speed magnitudes 'decay' over time: the longer ago an event occurred, the more its recorded speed has decayed.
Fig. 6 shows the principle of event-based corner detection. The corner in (a), formed by four points, generates events because of motion. The speed surface after the first three points (1) to (3) arrive is shown in (b); no corner appears. When point (4) arrives, the speed surface is as shown in (c) and the points constitute a corner.
Fig. 7 shows the main calculation steps of the present invention, including the steps of initialization, time surface update, optical flow calculation, velocity surface update, velocity direction angle consistency check, corner detection, etc.
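The flow of Fig. 7 can be summarised by the driver loop below. It reuses the illustrative helpers sketched elsewhere in this description (accept_event, local_optical_flow, decay_speed_surface, harris_response); all of these names are assumptions introduced for illustration, and the time units are assumed consistent across the helpers.

    def run(event_stream, TS, VS_mag, VS_ang, t_corner=0.7):
        """Event-driven Harris corner detection: one pass over an AER event stream."""
        corners, t_prev = [], None
        for e in event_stream:                              # each event carries (x, y, p, t)
            if not accept_event(e.x, e.y, e.t):             # refractory-period test, updates TS
                continue
            flow = local_optical_flow(TS, e.x, e.y, e.t)
            if flow is None:
                continue                                    # too few neighbors to fit a plane
            if t_prev is not None:
                VS_mag[:] = decay_speed_surface(VS_mag, e.t, t_prev)   # attenuate old speeds
            VS_mag[e.y, e.x], VS_ang[e.y, e.x] = flow       # write the fresh optical flow
            t_prev = e.t
            if 1 <= e.x < VS_mag.shape[1] - 1 and 1 <= e.y < VS_mag.shape[0] - 1:
                if harris_response(VS_mag, VS_ang, e.x, e.y) > t_corner:
                    corners.append((e.x, e.y, e.t))         # mark as a corner point
        return corners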
Description of principle:
1. definition of velocity surface
The asynchronous event stream generated by the ATVS contains the change information of the scene, not in the form of a spatial distribution of light intensity (frames) but as continuously produced position-time-polarity tuples (events). Since a single event cannot reflect the overall spatial change, a spatial state that is continuously updated by the input events must be constructed, and feature detection and tracking is carried out on this state:
1. The polarity of an event indicates the direction of the intensity change (increase or decrease) and is related to the direction of motion and the position of the light source: the light-facing side generates positive events and the backlit side negative events. Since positive and negative events occur in pairs, only one type needs to be processed. The location at which an event occurs coincides with its location in the spatial state;
2. Event time: the time of an event's output is not the exact time of its occurrence. Since the events contained in the asynchronous event stream are output serially, the time sequence contained in it is monotonically increasing, but this does not mean that the events necessarily occurred in that order, for the following reasons:
(1) All change events caused by the edge of a moving object are generated simultaneously but output serially, so their output times differ;
(2) Owing to contention on the serial output, two events whose occurrence times are very close together may be output in the reverse of their order of occurrence.
In summary, the time at which each pixel last generated an event constitutes a 'time surface', i.e. the time-domain distribution of the event stream is treated as a spatial-domain distribution, and this spatial surface is continuously updated as new events arrive. From the shape of the surface, shape features in the scene can be analyzed. The traditional method uses this concept of a time surface to realize event-driven corner detection based on the Harris corner detection principle. As with an intensity surface, the problem of using the time surface for feature/target detection is that when moving targets in the scene occlude one another, the attribution of boundary points to targets cannot be resolved, which makes feature-point-based target identification and tracking difficult.
One idea for solving this problem is to use the position and time information of the events in the asynchronous event stream generated by the ATVS to obtain the speed of motion of the objects in the scene. The spatial distribution of the velocities (optical flow) of all pixels can be regarded as a 'velocity surface'. The advantage of object/feature recognition on the velocity surface is that, when objects occlude one another, the pixels belonging to the boundary of a given moving object can easily be distinguished according to their speed information.
2. Method for structuring a speed surface
1. Event-driven optical flow computation
The optical flow calculation of the ATVS is event driven: whenever the ATVS generates a new event, the optical flow at that position is calculated. The invention adopts a local time-surface fitting method, which regards the times of the asynchronous event stream generated by a moving object as a continuous spatial surface; within a small range the surface is continuous and smooth, the normal direction of the small facet is the spatial velocity direction of its center point, and the projection of the unit normal onto the pixel plane gives the velocity of the facet's center point. The main calculation steps are as follows: (1) Read an event E(i, j, P, tc) of the specified type (P indicates an increase or decrease event) and compute tc = tc / S, where S is a scale factor that adjusts the time precision. The time precision of the ATVS output is generally 1 µs; to reduce the amount of optical flow computation, the precision is reduced to 100 µs or 1 ms, corresponding to S = 100 or 1000;
(2) If the difference between the time T0 at which pixel (i, j) last generated an event of the same type and the time tc of the current event is smaller than the refractory period threshold T, the event is discarded; otherwise it is recorded at the event location on the time surface. The purpose of the 'refractory period threshold' is to prevent certain pixels from being triggered too frequently by large intensity changes;
(3) On the time surface, taking this position as the center, fit a plane within the surrounding 3×3 small window, and let the resulting plane equation be:
ax+by+ct+d=0
wherein x and y are event position coordinates, and t is a time coordinate. The unit normal vector of the facet is:
n = (a, b, c) / sqrt(a^2 + b^2 + c^2)
the projection of the normal vector of the normal on the (X-Y) plane is:
Figure GDA0004093924790000112
2. construction of speed surfaces
The optical flow of each point is calculated event by event as above; however, for spatial feature detection the optical flows of the points in a region must be judged together to determine the spatial shape they represent, so an event set covering a certain duration needs to be constructed as the basis for spatial feature judgment. Event sets are commonly constructed in the following ways:
(1) A fixed time period. The events are divided into segments according to a fixed length of time T, with events within one segment being considered to occur simultaneously. Because events generated by one moving object are serially output, if T is too small, the events belonging to one moving object are divided into different sections, and the spatial relevance is lost; if T is too large, the timeliness of the change response is lost;
(2) The number of events is fixed. A fixed number of events is taken as a set of simultaneous events. Since the number of change events varies greatly in different scenarios, there is the same problem as a fixed time segment.
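For comparison, the fixed-time-segment grouping of method (1) can be sketched as below; the segment length T is an assumed parameter, and the shortcomings noted above (splitting one object's events across segments, or losing timeliness) follow directly from the choice of T.

    def group_by_time(events, T):
        """Split a time-ordered event stream into segments of fixed duration T;
        events inside one segment are treated as simultaneous (method (1) above)."""
        segments, current, t_start = [], [], None
        for e in events:
            if t_start is None or e.t - t_start >= T:
                if current:
                    segments.append(current)
                current, t_start = [], e.t    # open a new segment starting at this event
            current.append(e)
        if current:
            segments.append(current)
        return segments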
To overcome the shortcomings of these two methods, the invention constructs the speed surface by borrowing the concept of the neuron model. Neuromorphic engineering research shows that biological neurons have the characteristics of memory and temporal decay: the membrane potential of a neuron rises when spike excitations arrive at its input synapses and decays exponentially over time between arrivals. The update process of the velocity surface is therefore as follows:
(1) Upon receipt of event E (i, j, P, t) and calculation of its velocity V (magnitude and direction angle) using planar fitting, the velocity surface VS is updated as:
VS(i,j,t)=V(i,j,t)
(2) While the other positions of VS attenuate as:
VS(m, n) = VS(m, n) * e^(-k(t - t0)),  s.t. m ≠ i & n ≠ j
where t0 is the last update time of VS and k is the decay intensity coefficient.
3. Event-driven Harris corner detection method
When an event AE(i, j, P, t) is received, the velocity (V, θ) is calculated and the velocity surface VS is updated; then whether a corner exists at position (i, j) at time t is detected according to the following steps:
1. Speed consistency check: for the points within an n×n small window of VS (n = 3 or 5) centered on (i, j), if their direction differs from the direction of the center point by more than a limit value, they are not considered to be generated by the same moving object:
if |VS(m, n, θ) - VS(i, j, θ)| ≥ T_θ then VS(m, n) = 0,  for all m ≠ i & n ≠ j
where T_θ is the direction consistency threshold.
2. Apply Gaussian smoothing to the small window processed in step 1;
3. On the smoothed small window, calculate the spatial gradients and apply Gaussian smoothing according to the Harris operator, calculate the local extremum, and determine whether the point is a corner.
The specific implementation process of the invention is as follows:
Assuming that the pixel array of the ATVS has size M×N, an event AE(x, y, P, t) indicates that the pixel with address (x, y) generates a change of attribute P (P = 1 or 0, i.e. the light intensity increases or decreases) that is output at time t. Harris corner detection is carried out according to the following steps:
1. the following data structure is constructed:
(1) Time surface TS: a two-dimensional array of the same size as the ATVS pixel array, recording the time at which each pixel last generated an event of the specified type (increase or decrease). When the difference between the time of a pixel's new event and the time of its last event is smaller than the refractory period threshold T_ref, the new event is discarded;
(2) Speed surface VS: a two-dimensional array of the same size as the ATVS pixel array, recording the most recently calculated optical flow value of each pixel, including the magnitude and the direction angle θ;
2. the whole algorithm flow is as follows:
(1) The algorithm starts, reading in a specified number of events for initializing TS and VS (see step 3);
(2) Read in a new event; if the time difference between the output time of the read-in event and the time the same pixel last generated an event of the same type attribute is smaller than the refractory period threshold T_ref (reference value 100 µs), discard the event and continue with this step; otherwise record its output time at the corresponding position of the TS in units of 100 µs and proceed to the next step;
(3) Calculating optical flow values at the event points (see step 4) and recording the values and time t at the corresponding positions of the VS, updating the optical flow values at other positions of the VS (see step 5);
(4) Corner detection using VS (see step 6);
(5) If there is a subsequent event, return to step (2) to continue; otherwise, end.
3. TS and VS initialization:
(1) All positions of TS and VS arrays are set to 0;
(2) If the number of read-in events is smaller than the specified value (reference value M×N×10), read in the next AE; otherwise go to step (4);
(3) If the time difference between the output time of the read-in event and the time the same pixel last generated an event of the same type attribute is smaller than the refractory period threshold T_ref (reference value 100 µs), discard the event and return to step (2); otherwise record its output time at the corresponding position of the TS in units of 100 µs and return to step (2);
(4) Calculate the optical flow value of each non-edge point on the VS using the TS and record it in the VS.
4. Optical flow calculation
At time t, the optical flow values of the position points (x, y) are calculated using TS as follows:
(1) On the TS, taking (x, y) as the center, search its 8-neighborhood for the set of points whose recorded times differ from t by less than 1 ms:
PS = {(m, n, TS(m, n)) : |m - x| ≤ 1, |n - y| ≤ 1, t - TS(m, n) < 1 ms}   (I)
(2) If the number of points in PS is less than 3, end; otherwise go to (3);
(3) Solve the spatial fitting plane of PS using the least squares method or an eigenvalue decomposition method:
L: ax + by + ct + d = 0,  s.t. min(distance(PS, L))   (II)
i.e. the sum of the distances from all points in PS to plane L is minimized;
(4) The unit normal vector of plane L is:
n = (a, b, c) / sqrt(a^2 + b^2 + c^2)   (III)
The projection of this normal vector onto the (X-Y) plane is:
v = (a, b) / sqrt(a^2 + b^2 + c^2)   (IV)
which is the optical flow at point (x, y); its direction angle is:
θ = arctan(b / a)   (V)
5. VS update
When the optical flow values at the points (x, y) are updated, the optical flow values at other positions are updated according to the following formula, and the direction angle θ is unchanged:
VS(m, n) = VS(m, n) * e^(-k(t - t0)),  s.t. m ≠ x & n ≠ y   (VI)
where k is an attenuation speed parameter; when t and t0 are expressed in units of 0.1 ms, its reference value is 10-20.
6. Harris corner detection
When event AE(x, y, P, t) is received and TS and VS have been updated in sequence, for the 3×3 small window w = VS[x-1:x+1, y-1:y+1] centered on (x, y) on the VS:
(1) Speed direction angle consistency check:
Let the optical flow at point (x, y) be (V, θ), where V is the optical flow magnitude and θ the velocity direction angle; check whether the velocity direction angles of the 8 neighborhood points around (x, y) are consistent with it:
if |VS(m, n, θ) - VS(x, y, θ)| ≥ T_θ then VS(m, n) = 0, for all (m, n) in the 8-neighborhood of (x, y)   (VII)
where T_θ is the speed direction angle consistency threshold; reference value 20°.
(2) Performing Gaussian smoothing convolution:
w = w * G   (VIII)
where G denotes a two-dimensional Gaussian convolution kernel and * denotes convolution; the purpose of the above formula is to smooth the velocity values of the points in w and reduce the interference of noise or missing points.
(3) The horizontal gradient wx and the vertical gradient wy for each point within w are calculated:
wx = ∂w/∂x,  wy = ∂w/∂y   (IX)
(4) Constructing a gradient autocorrelation matrix for each point within w as follows:
w_self-cor(x, y) = [ wx^2    wx*wy ]
                   [ wx*wy   wy^2  ]   (X)
(5) Calculating the eigenvalue of each point gradient autocorrelation matrix:
R(x, y) = |w_self-cor(x, y)| - k * tr^2(w_self-cor(x, y))   (XI)
where |w_self-cor(x, y)| is the determinant of the gradient autocorrelation matrix at point (x, y), tr(w_self-cor(x, y)) denotes its trace, and k is taken as 0.05.
(6) When the R value of a point in the small window is larger than the set threshold T_Corner, it is marked as a corner point. The reference value of T_Corner is 0.7.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (1)

1. A Harris corner detection method applied to an asynchronous time domain vision sensor is characterized in that: the method specifically comprises the following steps:
(1) Constructing a time surface TS and speed surface VS data structure;
(2) Reading a specified number of events for initializing TS and VS;
(3) Calculating the optical flow value at the event point and recording the value and the time t at the corresponding location of the VS;
(4) Updating the optical flow values at the other positions of the VS;
(5) Performing corner detection by using VS;
(6) If there is a subsequent event, returning to step (2) to continue; otherwise, ending;
in step (1), the pixel array of the ATVS is set to have size M×N, and the event AE(x, y, P, t) indicates that the pixel with address (x, y) generates a change of attribute P, where P = 1 or 0 corresponds to an increase or decrease of the light intensity, and the change is output at time t; wherein:
time surface TS: recording the time at which each pixel last generated an increase or decrease event; when the difference between the time of a pixel's new event and the time of its last event is smaller than the refractory period threshold T_ref, the new event is discarded;
speed surface VS: a two-dimensional array of the same size as the ATVS pixel array, recording the most recently calculated optical flow value of each pixel, including the magnitude and the direction angle θ;
the step (2) specifically comprises the following steps,
(21) All positions of TS and VS arrays are set to 0;
(22) If the number of read-in events is smaller than the specified reference value M x N x 10, reading in the next AE; otherwise go to step (24);
(23) If the time difference between the output time of the read-in event and the time the same pixel last generated an event of the same type attribute is smaller than the refractory period threshold T_ref, the refractory period threshold T_ref being 100 µs, discard the event and return to step (22); otherwise record its output time at the corresponding position of the TS in units of 100 µs and return to step (22);
(24) Calculating the optical flow value of each non-edge point on the VS by using the TS, and recording the value into the VS;
the step (3) specifically comprises the following steps:
at time t, the optical flow values of the position points (x, y) are calculated using TS as follows:
(31) On the TS, taking (x, y) as the center, search its 8-neighborhood for the set of points whose recorded times differ from t by less than 1 ms:
PS = {(m, n, TS(m, n)) : |m - x| ≤ 1, |n - y| ≤ 1, t - TS(m, n) < 1 ms}   (1)
(32) If the number of points in PS is less than 3, end; otherwise go to (33);
(33) Solve the spatial fitting plane of PS using the least squares method or an eigenvalue decomposition method:
L: ax + by + ct + d = 0,  s.t. min(distance(PS, L))   (2);
minimizing the sum of the distances from all points in PS to plane L;
(34) The unit normal vector of plane L is:
n = (a, b, c) / sqrt(a^2 + b^2 + c^2)   (3)
The projection of this normal vector onto the (X-Y) plane is:
v = (a, b) / sqrt(a^2 + b^2 + c^2)   (4)
which is the optical flow at point (x, y); its direction angle is:
θ = arctan(b / a)   (5);
in the step (4), after the optical flow values at the points (x, y) are updated, the magnitudes of the optical flow values at the other positions are updated according to the following formula, and the direction angle θ is unchanged:
VS(m, n) = VS(m, n) * e^(-k(t - t0)),  s.t. m ≠ x & n ≠ y   (6)
where k is an attenuation speed parameter; when t and t0 are expressed in units of 0.1 ms, its reference value is 10-20;
in step (5), after the event AE(x, y, P, t) is received and TS and VS have been updated in sequence, for the 3×3 small window w = VS[x-1:x+1, y-1:y+1] centered on (x, y) on the VS:
(51) Speed direction angle consistency check:
Let the optical flow at point (x, y) be (V, θ), where V is the optical flow magnitude and θ the velocity direction angle; check whether the velocity direction angles of the 8 neighborhood points around (x, y) are consistent with it:
if |VS(m, n, θ) - VS(x, y, θ)| ≥ T_θ then VS(m, n) = 0, for all (m, n) in the 8-neighborhood of (x, y)   (7)
where T_θ is the speed direction angle consistency threshold, reference value 20°;
(52) Performing Gaussian smoothing convolution:
w = w * G   (8)
where G denotes a two-dimensional Gaussian convolution kernel and * denotes convolution; the purpose of the above formula is to smooth the velocity values of the points in w and reduce the interference of noise or missing points;
(53) The horizontal gradient wx and the vertical gradient wy for each point within w are calculated:
wx = ∂w/∂x,  wy = ∂w/∂y   (9)
(54) Constructing a gradient autocorrelation matrix for each point within w as follows:
w_self-cor(x, y) = [ wx^2    wx*wy ]
                   [ wx*wy   wy^2  ]   (10)
(55) Calculating the eigenvalue of each point gradient autocorrelation matrix:
R(x, y) = |w_self-cor(x, y)| - k * tr^2(w_self-cor(x, y))   (11)
where |w_self-cor(x, y)| is the determinant of the gradient autocorrelation matrix at point (x, y), tr(w_self-cor(x, y)) denotes its trace, and k is taken as 0.05;
(56) When the R value of a point in the small window is larger than the set threshold T_Corner, it is marked as a corner point; the reference value of T_Corner is 0.7.
CN201811250135.8A 2018-10-25 2018-10-25 Harris corner detection method applied to asynchronous time domain vision sensor Active CN109509213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811250135.8A CN109509213B (en) 2018-10-25 2018-10-25 Harris corner detection method applied to asynchronous time domain vision sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811250135.8A CN109509213B (en) 2018-10-25 2018-10-25 Harris corner detection method applied to asynchronous time domain vision sensor

Publications (2)

Publication Number Publication Date
CN109509213A CN109509213A (en) 2019-03-22
CN109509213B true CN109509213B (en) 2023-06-27

Family

ID=65745971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811250135.8A Active CN109509213B (en) 2018-10-25 2018-10-25 Harris corner detection method applied to asynchronous time domain vision sensor

Country Status (1)

Country Link
CN (1) CN109509213B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110176028B (en) * 2019-06-05 2020-12-15 中国人民解放军国防科技大学 Asynchronous corner detection method based on event camera
CN110399908B (en) * 2019-07-04 2021-06-08 西北工业大学 Event-based camera classification method and apparatus, storage medium, and electronic apparatus
CN110536083B (en) * 2019-08-30 2020-11-06 上海芯仑光电科技有限公司 Image sensor and image acquisition system
CN113066104B (en) * 2021-03-25 2024-04-19 三星(中国)半导体有限公司 Corner detection method and corner detection device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6545705B1 (en) * 1998-04-10 2003-04-08 Lynx System Developers, Inc. Camera with object recognition/data output
US7961925B2 (en) * 2006-11-14 2011-06-14 Siemens Aktiengesellschaft Method and system for dual energy image registration
WO2012006578A2 (en) * 2010-07-08 2012-01-12 The Regents Of The University Of California End-to-end visual recognition system and methods
CN104766342A (en) * 2015-03-30 2015-07-08 天津师范大学 Moving target tracking system and speed measuring method based on temporal vision sensor
CN105160703B (en) * 2015-08-25 2018-10-19 天津师范大学 A kind of optical flow computation method using time-domain visual sensor

Also Published As

Publication number Publication date
CN109509213A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN109509213B (en) Harris corner detection method applied to asynchronous time domain vision sensor
TWI685798B (en) Object detection system, autonomous vehicle, and object detection method thereof
CN106462976B (en) Method for tracking shape in scene observed by asynchronous sensor
US20190065885A1 (en) Object detection method and system
CN109461173B (en) Rapid corner detection method for time domain vision sensor signal processing
CN111476827B (en) Target tracking method, system, electronic device and storage medium
JP2018522348A (en) Method and system for estimating the three-dimensional posture of a sensor
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN104766342A (en) Moving target tracking system and speed measuring method based on temporal vision sensor
CN107238727B (en) Photoelectric type rotation speed sensor based on dynamic vision sensor chip and detection method
JP2014229303A (en) Method of detection of object in scene
CN105160703A (en) Optical flow computation method using time domain visual sensor
CN116309781B (en) Cross-modal fusion-based underwater visual target ranging method and device
CN112669344A (en) Method and device for positioning moving object, electronic equipment and storage medium
JP2021061573A (en) Imaging system, method for imaging, imaging system for imaging target, and method for processing intensity image of dynamic scene acquired using template, and event data acquired asynchronously
CN106128121A (en) Vehicle queue length fast algorithm of detecting based on Local Features Analysis
US11501536B2 (en) Image processing method, an image processing apparatus, and a surveillance system
CN105721772A (en) Asynchronous time domain visual information imaging method
WO2021175281A1 (en) Infrared temperature measurement method, apparatus, and device, and storage medium
CN115375581A (en) Dynamic visual event stream noise reduction effect evaluation method based on event time-space synchronization
Ma et al. Dynamic gesture contour feature extraction method using residual network transfer learning
CN105203045B (en) A kind of shape of product integrity detection system and inspection method based on asynchronous time domain visual sensor
Zuo et al. Accurate depth estimation from a hybrid event-RGB stereo setup
Lu et al. Event camera point cloud feature analysis and shadow removal for road traffic sensing
CN112734794B (en) Moving target tracking and positioning method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant