CN113689461A - Self-adaptive cutting method based on bionic visual sensor space-time data stream - Google Patents

Self-adaptive cutting method based on bionic visual sensor space-time data stream

Info

Publication number
CN113689461A
CN113689461A
Authority
CN
China
Prior art keywords: event, space, data stream, time data, cutting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110973961.0A
Other languages
Chinese (zh)
Other versions
CN113689461B (en)
Inventor
吕恒毅
韩诚山
张以撒
冯阳
赵宇宸
孙铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202110973961.0A
Publication of CN113689461A
Application granted
Publication of CN113689461B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/20 — Analysis of motion
    • G06T 7/215 — Motion-based segmentation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/70 — Denoising; Smoothing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An adaptive cutting method based on the space-time data stream of a bionic vision sensor, relating to the technical field of image processing. It solves the problems that existing data-cutting methods produce large errors when cutting a space-time data stream and have poor noise robustness, and that the confidence interval used in the calculation is never updated. The method reduces noise during data cutting, lessening the influence of noise on the cut and improving noise robustness. The calculation parameters are continuously and adaptively updated according to the scene to cope with changes in target speed or target count, so the method suits a variety of complex scenes. Using a past-event elimination mechanism, a virtual frame with clear, sharp edges can be obtained from the space-time data stream. Finally, the space-time data stream is cut adaptively: the cut space-time segment retains complete target information without smearing, improving the accuracy of subsequent target-motion estimation.

Description

Self-adaptive cutting method based on bionic visual sensor space-time data stream
Technical Field
The invention relates to the technical field of image processing, in particular to a self-adaptive cutting method based on a bionic vision sensor space-time data stream.
Background
The mainstream imaging devices today are CCD and CMOS image sensors, which output images in a frame-based mode; their output is intuitive and well suited to human viewing. However, frame-based transmission cannot support high-speed readout and real-time processing of large data volumes. This motivated the bionic event sensor. Owing to its special pixel structure, such a sensor images only where the light intensity changes, giving it a large dynamic range and a low data volume. It is therefore widely used in machine vision, even though its output is not well suited to human viewing. In machine-vision processing of this sensor's data, a fixed time window or a fixed event count is usually used to cut the stream. In complex scenes, however, neither method cuts well, and neither reduces the noise in the space-time data stream. When the target's motion speed changes drastically or the number of targets changes significantly, a cutting time window that is too long, or an event count (events are generated by the bionic vision sensor's pixels and together form the space-time data stream) that is too large, causes smearing. For slowly moving targets, a time window that is too short or an event count that is too small loses target information.
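For concreteness, the following minimal sketch (Python; all names are illustrative and not from the patent) shows one conventional way to represent such an event stream, where each event is a tuple (x, y, t) kept sorted by timestamp:

```python
# A minimal, assumed representation of the space-time data stream: each pixel
# of the bionic vision sensor emits an event (x, y, t) when the local light
# intensity changes; the time-sorted events form the stream to be cut.
import numpy as np

event_dtype = np.dtype([("x", np.int32), ("y", np.int32), ("t", np.float64)])

def make_stream(events):
    """Pack (x, y, t) tuples into a time-sorted space-time data stream."""
    stream = np.array(events, dtype=event_dtype)
    return np.sort(stream, order="t")

# Example: three events at pixel coordinates (10,12), (11,12), (10,13).
stream = make_stream([(10, 12, 0.5e-3), (11, 12, 0.9e-3), (10, 13, 1.4e-3)])
```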
At present, several solutions to these problems exist at home and abroad.
Prior art 1, an asynchronous-event target-tracking method based on adaptive time surfaces, proposes an adaptive cutting method for space-time data streams: several virtual frames with clear target edges are selected, their information entropy is computed, a confidence interval is derived from the entropy, and the space-time data stream is then cut; if the information entropy of the virtual frame formed from the cut stream lies in the confidence interval, cutting is complete, otherwise events continue to accumulate before cutting again. However, the confidence interval in this method is never updated, so it too is unsuitable for complex scenes.
Prior art 2, adaptive time-pool target detection based on a dynamic vision sensor, uses a neural-network approach and introduces the concept of an "adaptive time pooling pool": different motion characteristics of a target are pooled in a pooling layer. This method, however, must first extract target features, so it is unsuitable for simple preprocessing of a space-time data stream.
These methods can, to a certain extent, mitigate the smearing or target-information loss caused by improper cutting of the space-time data stream when the target's motion speed changes violently in a complex scene. However, because of the bionic vision sensor's working principle, the space-time data stream contains a large amount of background noise, which strongly affects the computation of the virtual frame's information entropy; the cutting therefore has large errors and poor noise robustness. The confidence interval used in the calculation is not updated, so the methods cannot handle targets entering or leaving the scene and lack generality. Moreover, cutting the space-time data stream is a preprocessing step for bionic vision sensor data, so the existing approach of first extracting target features and then pooling them is unsuitable for data preprocessing.
Disclosure of Invention
The invention aims to solve the problems that existing data-cutting methods produce large errors when cutting a space-time data stream and have poor noise robustness, and that the confidence interval used in the calculation is never updated, making them unsuitable for scenes in which targets appear or disappear and thus lacking generality. To this end, an adaptive cutting method based on the space-time data stream of a bionic vision sensor is provided.
The self-adaptive cutting method based on the bionic visual sensor space-time data stream is realized by the following steps:
Step one: acquire an ideal frame image.
An ideal frame image is acquired by accumulating T ms of events and eliminating past events; specifically:
(1) Construct a 3×3 sliding window centered on the event (x, y, t5); the eight neighborhood events of the central event (x, y, t5) are (x-1, y+1, t1), (x, y+1, t2), (x+1, y+1, t3), (x-1, y, t4), (x+1, y, t6), (x-1, y-1, t7), (x, y-1, t8), and (x+1, y-1, t9); if no eight-neighborhood event exists around the central event, the event is defined as noise and cleared;
(2) Acquire the vector relations between the eight neighborhood events and the central event (x, y, t5) to obtain the motion direction:
[Eight formulas, shown only as images in the original, give the vector relation between each neighborhood event and the central event.]
(3) Preliminarily obtain the motion vector v(x,y,t5) of the central event by vector synthesis, expressed by the following formula:
[Formula shown only as an image in the original: the synthesis of the eight neighborhood vectors.]
(4) Determine the final motion direction of the central event using the principle of local consistency, obtaining the motion vectors of the eight neighborhood events of the central event by the same method:
[Formulas shown only as images in the original: the eight neighborhood motion vectors.]
Then synthesize all nine event motion vectors within the 3×3 sliding window to obtain the final motion vector of the central event:
[Formula shown only as an image in the original: the synthesis of the nine motion vectors.]
(5) From the obtained motion vector of the central event, identify the past events of (x, y, t5) and eliminate them, obtaining the ideal frame image.
Step two: construct virtual frame images.
Accumulate several segments of space-time data, remove the time information from each accumulated segment, and retain only the position information, completing the construction of several virtual frame images.
Step three: calculate the image similarity.
Compute the similarity between the ideal frame image obtained in step one and each virtual frame image obtained in step two, yielding a group of similarity values.
Step four: calculate the confidence interval [a, b] from the group of similarity values obtained in step three.
Step five: judge whether the similarities obtained in step three have a maximum value SN within the confidence interval [a, b]; if so, execute step six; if not, set T = T + 1 and return to step one.
Step six: take the space-time data corresponding to the maximum value SN as the cut space-time data stream.
The invention has the following beneficial effects. Using probability theory, the invention provides an adaptive cutting method based on the space-time data stream. Compared with existing cutting methods, it has strong applicability (suiting a variety of complex motion scenes), reduces the noise of the space-time data stream during cutting, and can obtain a virtual frame image with clear, sharp edges at any time. The cut space-time segment contains complete target information without smearing. Specifically:
(1) Noise can be reduced during data cutting, lessening the influence of noise on the cut and improving noise robustness;
(2) The calculation parameters are continuously and adaptively updated to cope with targets decreasing or increasing, suiting a variety of complex scenes;
(3) A past-event elimination mechanism is provided, so a virtual frame with clear, sharp edges can be obtained from the space-time data stream at any moment;
(4) The space-time data stream is cut adaptively; the cut space-time segment contains complete target information without smearing, improving the accuracy of subsequent target-motion estimation.
Drawings
FIG. 1 is a schematic diagram of the 3×3 sliding window in the adaptive cutting method based on the bionic visual sensor space-time data stream of the present invention;
FIG. 2 is a schematic diagram of a local consistency calculation;
FIG. 3 is a flow chart of the adaptive cutting method based on the bionic visual sensor spatiotemporal data stream.
Detailed Description
This embodiment is described with reference to FIGS. 1 to 3. The adaptive cutting method based on the bionic visual sensor space-time data stream first uses a past-event elimination mechanism to obtain an ideal frame with clear, sharp edges while reducing noise; it then computes the similarity between this ideal frame and virtual frames formed by event accumulation, and derives a confidence interval [a, b] from the similarities. It then judges whether the similarities have a maximum within the confidence interval; if so, the space-time data with the maximum similarity is taken as the cut data; if not, the confidence interval is updated.
The method of the embodiment comprises the following specific steps:
Firstly, eliminate past events to obtain an ideal frame image with clear, sharp edges;
(1) A 3×3 sliding window is constructed centered on (x, y, t5), as shown in FIG. 1; the eight neighborhood events are (x-1, y+1, t1), (x, y+1, t2), (x+1, y+1, t3), (x-1, y, t4), (x+1, y, t6), (x-1, y-1, t7), (x, y-1, t8), and (x+1, y-1, t9). If no eight-neighborhood event exists around the central event, the event is defined as noise and cleared.
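A minimal sketch of this noise test follows (Python; the per-pixel timestamp surface `ts_surface` and the recency window `dt` are assumptions, since the patent does not specify how neighborhood events are stored or how recent they must be):

```python
import numpy as np

def is_noise(ts_surface, x, y, t, dt=10e-3):
    """True if no eight-neighborhood event exists around the central event.

    ts_surface holds the latest event timestamp per pixel (-inf if none);
    a neighbor "exists" here if it fired within dt seconds of t (assumed).
    """
    h, w = ts_surface.shape
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue  # skip the central event itself
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and t - ts_surface[ny, nx] <= dt:
                return False  # a recent neighborhood event exists
    return True  # isolated event: defined as noise and cleared

# Example: a lone event on an otherwise empty surface is flagged as noise.
surface = np.full((260, 346), -np.inf)
print(is_noise(surface, x=100, y=100, t=0.5e-3))  # True
```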
(2) The vector relations between the eight neighborhood events and the central event (x, y, t5) are acquired to obtain the motion direction:
[Eight formulas, shown only as images in the original, give the vector relation between each neighborhood event and the central event.]
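Because the eight vector-relation formulas survive only as images, the sketch below encodes one plausible reading (an assumption, not the patent's verified formula): each neighborhood event (xi, yi, ti) contributes the spatial displacement toward the center scaled by the time gap, vi = (x − xi, y − yi) / (t5 − ti):

```python
import numpy as np

def neighborhood_vectors(ts_surface, x, y, t5, eps=1e-9):
    """Assumed vector relations: the displacement from each eight-neighborhood
    event toward the central event (x, y, t5), divided by the time difference.
    Interior pixels are assumed (no bounds check)."""
    vectors = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            ti = ts_surface[y + dy, x + dx]          # neighbor timestamp
            vectors.append(np.array([-dx, -dy]) / (t5 - ti + eps))
    return vectors
```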
(3) The motion vector v(x,y,t5) of the central event is preliminarily acquired by vector synthesis:
[Formula shown only as an image in the original: the synthesis of the eight neighborhood vectors.]
(4) A single event cannot represent the motion direction of the whole target, but roughly 90% of the events a target generates locally move in a consistent direction, so that direction can represent the target's motion. The final motion direction of the event is therefore determined by the principle of local consistency: the motion vectors of the eight neighborhood events of the central event are obtained by the same method:
[Formulas shown only as images in the original: the eight neighborhood motion vectors.]
All event motion vectors in the 3×3 sliding window are then synthesized to obtain the final motion vector of the central event:
[Formula shown only as an image in the original: the synthesis of the nine motion vectors.]
(5) The past events of (x, y, t5) are derived from the acquired motion vector and eliminated, thereby obtaining the ideal frame image.
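Steps (3)–(5) can then be sketched as follows (Python; this reuses the assumed `neighborhood_vectors` helper above, and the reading of a "past event" as one lying behind the center against the motion direction and earlier in time is itself an assumption):

```python
import numpy as np

def preliminary_motion(vectors):
    """Step (3): vector synthesis of the eight neighborhood vectors."""
    return np.sum(vectors, axis=0)

def consistent_motion(ts_surface, x, y, t5):
    """Step (4): synthesize the motion vectors of all nine events in the
    3x3 sliding window (local consistency)."""
    total = np.zeros(2)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            ti = t5 if dx == dy == 0 else ts_surface[y + dy, x + dx]
            total += preliminary_motion(
                neighborhood_vectors(ts_surface, x + dx, y + dy, ti))
    return total

def drop_past_events(events, x, y, t5, motion):
    """Step (5), one plausible reading: discard earlier events lying behind
    the central event against its motion direction."""
    kept = []
    for ex, ey, et in events:
        behind = et < t5 and np.dot([ex - x, ey - y], motion) < 0
        if not behind:
            kept.append((ex, ey, et))
    return kept
```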
Secondly, construct the virtual frame images;
Accumulate 15 segments of space-time data of X ms, X+1 ms, X+2 ms, …, X+14 ms; remove the time information from the 15 accumulated segments and retain only the position information to build two-dimensional images, completing the construction of 15 virtual frame images;
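A minimal sketch of this accumulation (Python; the 260×346 sensor resolution and the millisecond bookkeeping are assumptions, and `stream` is the structured array sketched earlier):

```python
import numpy as np

def virtual_frames(stream, start_ms, n_frames=15, shape=(260, 346)):
    """Build 15 virtual frames from event slices of X, X+1, ..., X+14 ms:
    accumulate the events, drop the timestamps, keep only (x, y)."""
    frames = []
    for k in range(n_frames):
        horizon = (start_ms + k) * 1e-3          # X+k milliseconds, in seconds
        chunk = stream[stream["t"] <= horizon]   # accumulated space-time data
        img = np.zeros(shape, dtype=np.uint8)
        img[chunk["y"], chunk["x"]] = 1          # position kept, time removed
        frames.append(img)
    return frames
```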
Thirdly, calculate the image similarity;
Compute the similarity between the ideal frame image from step one and each of the 15 virtual frame images from step two, obtaining a group of similarity values [S1, S2, …, S15]; calculate the confidence interval [a, b] from these 15 similarity values;
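The patent names neither the similarity metric nor the interval formula; the sketch below assumes intersection-over-union between binary frames and a Gaussian-style interval of mean ± 1.96·std over the 15 similarity values:

```python
import numpy as np

def similarity(ideal, virtual):
    """Assumed metric: intersection-over-union of two binary frames."""
    inter = np.logical_and(ideal, virtual).sum()
    union = np.logical_or(ideal, virtual).sum()
    return inter / union if union else 0.0

def confidence_interval(sims, z=1.96):
    """Assumed interval [a, b]: sample mean +/- z * sample std."""
    sims = np.asarray(sims, dtype=float)
    m, s = sims.mean(), sims.std(ddof=1)
    return m - z * s, m + z * s
```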
Fourthly, adaptively cut the space-time data stream;
Judge whether the similarities obtained in the third step have a maximum value SN within the confidence interval [a, b]. If so, the space-time data stream of the N ms period corresponding to SN is the cut space-time data stream; then set T = T + 1 and return to the first step to continue cutting the remaining space-time data stream. If not, judge whether other space-time data streams still need to be cut; if so, set T = T + 1 and return to step one; if not, end the process.
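Tying the sketches above together, the cutting decision can be outlined as below (control flow and names are illustrative; `virtual_frames`, `similarity`, and `confidence_interval` are the assumed helpers from the previous sketches):

```python
def adaptive_cut(stream, ideal, start_ms, n_frames=15):
    """Return the cut space-time segment, or None if no maximum similarity
    falls inside the confidence interval (set T = T + 1 and retry)."""
    frames = virtual_frames(stream, start_ms, n_frames)
    sims = [similarity(ideal, f) for f in frames]
    a, b = confidence_interval(sims)
    inside = [(s, k) for k, s in enumerate(sims) if a <= s <= b]
    if not inside:
        return None                    # no maximum S_N in [a, b]
    s_n, k = max(inside)               # maximum S_N within [a, b]
    horizon = (start_ms + k) * 1e-3    # the corresponding N ms period
    return stream[stream["t"] <= horizon]
```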
The method of this embodiment reduces noise during data cutting, lessening the influence of noise on the cut and improving noise robustness. The calculation parameters are continuously and adaptively updated according to the scene to cope with changes in target speed or count, suiting a variety of complex scenes. Using the past-event elimination mechanism, an ideal frame with clear, sharp edges can be obtained from the space-time data stream at any time. Finally, the space-time data stream is cut adaptively: the cut space-time segment retains complete target information without smearing, improving the accuracy of subsequent target-motion estimation.
Noise robustness: when the vector relations between the eight neighborhood events and the central event (x, y, t5) are computed, a central event with no eight-neighborhood events around it is defined as noise and eliminated, so it does not participate in subsequent calculations; this achieves noise reduction and improves the method's robustness to noise.
The elimination of past events is used to obtain an ideal frame image with sharp edges.
The motion direction of an event is preliminarily obtained by computing the vector relations between the central event of the 3×3 sliding window and its eight neighbors and synthesizing the vectors. Because a single event cannot represent the target's motion direction, the motion direction of every event in the 3×3 window is computed by the same method; the motion vectors of the nine events in the window are finally synthesized to determine the central event's motion vector, from which its past events are identified and eliminated. Only the events generated at the target's current moment are retained, yielding a virtual frame with clear, sharp edges.
The confidence interval among the calculation parameters is adaptively updated, so the method adapts to different motion scenes; the adaptive cutting of the space-time data stream causes neither target smearing nor loss of moving-target information.

Claims (2)

1. A self-adaptive cutting method based on the bionic visual sensor space-time data stream, characterized in that the method is realized by the following steps:
Step one: acquire an ideal frame image.
An ideal frame image is acquired by accumulating T ms of events and eliminating past events; specifically:
(1) Construct a 3×3 sliding window centered on the event (x, y, t5); the eight neighborhood events of the central event (x, y, t5) are (x-1, y+1, t1), (x, y+1, t2), (x+1, y+1, t3), (x-1, y, t4), (x+1, y, t6), (x-1, y-1, t7), (x, y-1, t8), and (x+1, y-1, t9); if no eight-neighborhood event exists around the central event, the event is defined as noise and cleared;
(2) Acquire the vector relations between the eight neighborhood events and the central event (x, y, t5) to obtain the motion direction:
[Eight formulas, shown only as images in the original, give the vector relation between each neighborhood event and the central event.]
(3) Preliminarily obtain the motion vector v(x,y,t5) of the central event by vector synthesis, expressed by the following formula:
[Formula shown only as an image in the original: the synthesis of the eight neighborhood vectors.]
(4) Determine the final motion direction of the central event using the principle of local consistency, obtaining the motion vectors of the eight neighborhood events of the central event by the same method:
[Formulas shown only as images in the original: the eight neighborhood motion vectors.]
Then synthesize all nine event motion vectors within the 3×3 sliding window to obtain the final motion vector of the central event:
[Formula shown only as an image in the original: the synthesis of the nine motion vectors.]
(5) From the obtained motion vector of the central event, identify the past events of (x, y, t5) and eliminate them, obtaining the ideal frame image.
Step two: construct virtual frame images.
Accumulate several segments of space-time data, remove the time information from each accumulated segment, and retain only the position information, completing the construction of several virtual frame images.
Step three: calculate the image similarity.
Compute the similarity between the ideal frame image obtained in step one and each virtual frame image obtained in step two, yielding a group of similarity values.
Step four: calculate the confidence interval [a, b] from the group of similarity values obtained in step three.
Step five: judge whether the similarities obtained in step three have a maximum value SN within the confidence interval [a, b]; if so, execute step six; if not, set T = T + 1 and return to step one.
Step six: take the space-time data stream corresponding to the maximum value SN as the cut space-time data stream.
2. The adaptive cutting method based on the bionic visual sensor space-time data stream according to claim 1, characterized in that: in step six, judge whether other space-time data streams still need to be cut; if so, set T = T + 1 and return to step one; if not, end the process.
CN202110973961.0A 2021-08-24 2021-08-24 Self-adaptive cutting method based on space-time data flow of bionic visual sensor Active CN113689461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110973961.0A CN113689461B (en) 2021-08-24 2021-08-24 Self-adaptive cutting method based on space-time data flow of bionic visual sensor


Publications (2)

Publication Number Publication Date
CN113689461A true CN113689461A (en) 2021-11-23
CN113689461B CN113689461B (en) 2023-12-26

Family

ID=78581811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110973961.0A Active CN113689461B (en) 2021-08-24 2021-08-24 Self-adaptive cutting method based on space-time data flow of bionic visual sensor

Country Status (1)

Country Link
CN (1) CN113689461B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105027550A * 2012-11-06 2015-11-04 Alcatel-Lucent System and method for processing visual information for event detection
US20180098082A1 (en) * 2016-09-30 2018-04-05 Intel Corporation Motion estimation using hybrid video imaging system
CN111770290A (en) * 2020-07-29 2020-10-13 中国科学院长春光学精密机械与物理研究所 Noise reduction method for dynamic vision sensor output event stream


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MYO TUN AUNG et al., "Event-based Plane-fitting Optical Flow for Dynamic Vision Sensors in FPGA", 2018 IEEE International Symposium on Circuits and Systems (ISCAS) *
NATALIA NEVEROVA et al., "A Multi-scale Approach to Gesture Detection and Recognition", 2013 IEEE International Conference on Computer Vision Workshops *

Also Published As

Publication number Publication date
CN113689461B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN106846359B (en) Moving target rapid detection method based on video sequence
US10769480B2 (en) Object detection method and system
US20220417590A1 (en) Electronic device, contents searching system and searching method thereof
CN113034634B (en) Adaptive imaging method, system and computer medium based on pulse signal
CN110580472B (en) Video foreground detection method based on full convolution network and conditional countermeasure network
CN106331723B (en) Video frame rate up-conversion method and system based on motion region segmentation
Zheng et al. Deep learning for event-based vision: A comprehensive survey and benchmarks
CN105913404A (en) Low-illumination imaging method based on frame accumulation
CN114245007B (en) High-frame-rate video synthesis method, device, equipment and storage medium
CN108280844B (en) Video target positioning method based on area candidate frame tracking
CN108492245B (en) Low-luminosity image pair fusion method based on wavelet decomposition and bilateral filtering
Poulose Literature survey on image deblurring techniques
CN111798395B (en) Event camera image reconstruction method and system based on TV constraint
CN114885074A (en) Event camera denoising method based on space-time density
CN113689461B (en) Self-adaptive cutting method based on space-time data flow of bionic visual sensor
Zhang et al. Dehazing with improved heterogeneous atmosphere light estimation and a nonlinear color attenuation prior model
CN110852335B (en) Target tracking system based on multi-color feature fusion and depth network
CN112927171A (en) Single image deblurring method based on generation countermeasure network
CN105069764B (en) A kind of image de-noising method and system based on Edge track
CN109711417B (en) Video saliency detection method based on low-level saliency fusion and geodesic
CN116977208A (en) Low-illumination image enhancement method for double-branch fusion
CN115512263A (en) Dynamic visual monitoring method and device for falling object
Zheng et al. Badminton action recognition based on improved I3D convolutional neural network
CN114155425A (en) Weak and small target detection method based on Gaussian Markov random field motion direction estimation
CN115690190A (en) Moving target detection and positioning method based on optical flow image and small hole imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant