CN112987026A - Event field synthetic aperture imaging algorithm based on hybrid neural network - Google Patents

Event field synthetic aperture imaging algorithm based on hybrid neural network

Info

Publication number
CN112987026A
CN112987026A CN202110244649.8A
Authority
CN
China
Prior art keywords
event
camera
neural network
scene
synthetic aperture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110244649.8A
Other languages
Chinese (zh)
Inventor
余磊
张翔
廖伟
杨文
夏桂松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110244649.8A priority Critical patent/CN112987026A/en
Publication of CN112987026A publication Critical patent/CN112987026A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/90 Lidar systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an event-field synthetic aperture imaging algorithm based on a hybrid neural network, comprising the steps of constructing an event data set, refocusing the event stream, constructing and training the hybrid neural network, and reconstructing a visual image. After the brightness information of a scene is captured from multiple viewing angles by moving the event camera, the collected event stream is refocused so that the effective event points generated by the target are aligned on a spatio-temporal plane while the noise event points generated by the occluder remain out of focus. A hybrid neural network (SNN-CNN) model composed of a spiking neural network and a convolutional neural network is then constructed and trained to recover a high-quality, occlusion-free target image from the refocused event field. The invention combines the low latency and high dynamic range of the event camera with the spatio-temporal data processing capability of the hybrid neural network, achieves target reconstruction under dense occlusion and extreme illumination, and produces a better visual effect.

Description

Event field synthetic aperture imaging algorithm based on hybrid neural network
Technical Field
The invention belongs to the field of image processing, and particularly relates to a method for realizing synthetic aperture imaging by using an event camera.
Background
Synthetic aperture imaging (SAI) is an important branch of light-field computational imaging that addresses the problem that an occluded target cannot be effectively imaged from a single viewing angle. By mapping and synthesizing the frame images captured by a camera at multiple viewing angles, synthetic aperture imaging is equivalent to imaging with a virtual camera of large aperture and small depth of field, so that occluders far from the focal plane are blurred out and the occluded target can be imaged. The technique has great application value in occlusion removal, target recognition and tracking, three-dimensional scene reconstruction, and other areas.
Current synthetic aperture imaging methods are mostly based on frame-based optical cameras. When the occluder is too dense, the effective target information in the frame images captured by an ordinary optical camera decreases while the occluder interference increases, which seriously degrades the sharpness and contrast of the imaging result and often introduces blur and noise. In addition, because the dynamic range of an ordinary optical camera is low, traditional synthetic aperture imaging inevitably suffers from overexposure or underexposure under extreme illumination such as very bright or very dark scenes, so that the target cannot be effectively imaged.
Event cameras, which are based on a biological visual perception mechanism, offer a way to overcome these problems. Unlike conventional optical cameras, an event camera only perceives log-domain brightness changes of the scene and replaces the conventional frame representation with an asynchronous event stream; it therefore features low latency, high dynamic range, low bandwidth requirements and low power consumption. Compared with a traditional optical camera, an event camera can respond to transient changes of scene brightness with extremely low latency. Hence, in a densely occluded environment, the event camera can continuously perceive the scene and the target, capturing sufficient effective target information and improving imaging quality. The high dynamic range of the event camera also makes reconstruction of the target possible under extreme illumination.
Disclosure of Invention
Based on the above analysis, the present invention aims to provide a synthetic aperture imaging algorithm based on an event camera, which exploits the low latency and high dynamic range of the event camera to realize synthetic aperture imaging under dense occlusion and extreme illumination. The spatio-temporal processing capability of a hybrid neural network is used to remove noise from the input event data and to reconstruct an occlusion-free, high-quality visual image from the pure event stream, achieving a see-through effect.
The synthetic aperture imaging algorithm based on the event camera provided by the invention comprises the following specific steps:
Step 1: capture scene light information; capture scene information at multiple viewing angles with an event camera and output an event stream;
Step 2: event refocusing; map the event data captured at the multiple viewing angles to the synthetic aperture imaging plane through the multi-view geometry of the camera;
Step 3: construct and train a hybrid neural network; train the hybrid neural network with the refocused event data and the matching non-occluded visual images;
Step 4: reconstruct the visual image; reconstruct an unoccluded target image from the pure event data with the trained hybrid neural network.
In the above synthetic aperture imaging algorithm based on the event camera, in step 1 the event camera is used to capture data of a densely occluded scene at multiple viewing angles; an event camera array, a moving event camera, or similar setups can be used. When the data set is constructed, an ordinary optical camera is additionally used to capture a non-occluded image to form a training sample pair.
In step 1, the scene event data sets at the multiple viewing angles are:
event_s(i), s ∈ [1, S], i ∈ [1, C_s]
where event_s is the scene event data set at the s-th view, e_s^i = (x_s^i, y_s^i, t_s^i, p_s^i) is the i-th event point of the scene at the s-th view, p_s^i is its polarity, t_s^i is the timestamp of its generation time, and x_s^i and y_s^i indicate that its generation position is the x_s^i-th row and y_s^i-th column of the camera imaging plane; T is the total capture duration of the scene event data; S is the number of viewing angles; C_s is the total number of event points collected at the s-th view; M is the number of imaging plane rows; N is the number of imaging plane columns;
In step 1, the scene non-occluded image data set at the multiple viewing angles is:
frame_s(u_s, v_s), s ∈ [1, S], u_s ∈ [1, M], v_s ∈ [1, N]
where frame_s is the non-occluded image of the scene at the s-th view and frame_s(u_s, v_s) is the pixel in the u_s-th row and v_s-th column of the non-occluded image collected at the s-th view; S is the number of viewing angles; M is the number of imaging plane rows; N is the number of imaging plane columns.
In the above event-camera-based synthetic aperture imaging algorithm, in step 2 the event points e_s^i = (x_s^i, y_s^i, t_s^i, p_s^i) generated at the s-th view in the multi-view scene event data set are mapped one by one to the imaging plane of the camera at the reference view r, as follows:
[x'_s^i, y'_s^i, 1]^T ∝ K·(R_s^r·K^{-1}·[x_s^i, y_s^i, 1]^T + T_s^r / d)
where (x'_s^i, y'_s^i) is the pixel position of the i-th event point at view s after mapping to the reference view r (the homogeneous coordinates above are normalized by their last component), K is the intrinsic matrix of the camera, R_s^r is the rotation matrix of camera view s relative to the reference view r, T_s^r is the translation vector of camera view s relative to the reference view r, and d is the synthetic aperture focal length, i.e. the distance from the occluded target to the camera plane;
the refocused event point generated at the s-th view is expressed as:
e'_s^i = (x'_s^i, y'_s^i, t_s^i, p_s^i)
the refocused event data set at the s-th view is represented as:
event'_s = {e'_s^i | i ∈ [1, C_s]}
and the event data set obtained after mapping the event data captured at all views to the reference view r is represented as:
event_r = ∪_{s=1}^{S} event'_s.
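As an illustration of the refocusing in step 2, the following minimal NumPy sketch warps event pixel coordinates from one view to the reference view using the mapping above; the function name, the example intrinsic matrix and the depth value are assumptions for illustration, not values from the patent.

```python
import numpy as np

def refocus_events(events, K, R_sr, T_sr, d):
    """Warp event pixel coordinates from view s to the reference view r.

    events : (C, 4) array with columns (x, y, t, p)
    K      : (3, 3) camera intrinsic matrix
    R_sr   : (3, 3) rotation of view s relative to the reference view r
    T_sr   : (3,)   translation of view s relative to the reference view r
    d      : synthetic aperture focal length (distance of the occluded
             target from the camera plane)
    """
    xy1 = np.column_stack([events[:, 0], events[:, 1], np.ones(len(events))])
    # Back-project to the plane at depth d, move to the reference view,
    # and re-project with the pinhole model (homogeneous coordinates).
    warped = (K @ (R_sr @ np.linalg.inv(K) @ xy1.T + T_sr.reshape(3, 1) / d)).T
    warped = warped[:, :2] / warped[:, 2:3]        # normalize homogeneous coordinates
    refocused = events.copy()
    refocused[:, 0:2] = warped                     # timestamps and polarity unchanged
    return refocused

# Example: an identity pose leaves the event coordinates unchanged.
K = np.array([[320.0, 0.0, 173.0], [0.0, 320.0, 130.0], [0.0, 0.0, 1.0]])
ev = np.array([[100.0, 80.0, 0.01, 1.0], [101.0, 80.0, 0.02, -1.0]])
print(refocus_events(ev, K, np.eye(3), np.zeros(3), d=1.5))
```

Because timestamps and polarities are carried along unchanged, the warped array can be fed directly to the accumulation step described in the detailed description.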
in the above synthetic aperture imaging algorithm based on the event camera, in step 3, the neural network is a hybrid network composed of a pulse neural network and a convolution neural network, wherein the pulse neural network needs to be composed of neurons with leakage mechanisms, such as Leaky integral-and-fire (LIF) neurons; the input of the hybrid network model is the event data set event refocused to the reference view at the multiple views in step 2rAnd output as reconstructed visual image IreconThe image and the scene captured in the step one are subjected to non-occlusion image framerAnd calculating the loss function and then performing back propagation to finish the training of the network model.
In the above synthetic aperture imaging algorithm based on the event camera, the event data input in step 4 is first mapped to the synthetic aperture imaging plane through the multi-view geometric relationship of the camera as in step 2, yielding the event data set refocused from the multiple views to the reference view; this data set is then fed into the trained neural network to obtain the corresponding visual image.
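To make the hybrid network of step 3 concrete, the sketch below shows one plausible PyTorch layout of an SNN encoder followed by a CNN decoder; the layer sizes, the single LIF stage, the hard threshold without a surrogate gradient, and all names are illustrative assumptions rather than the patent's actual architecture.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """Spiking convolutional layer with leaky integrate-and-fire dynamics (inference sketch)."""
    def __init__(self, in_ch, out_ch, alpha=0.8, u_th=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.alpha, self.u_th = alpha, u_th

    def forward(self, frames):                 # frames: (B, N, C, H, W)
        u = torch.zeros_like(self.conv(frames[:, 0]))
        o = torch.zeros_like(u)
        spikes = []
        for n in range(frames.shape[1]):       # iterate over the N time bins
            u = self.alpha * u * (1 - o) + self.conv(frames[:, n])
            o = (u > self.u_th).float()        # hard threshold (no surrogate gradient here)
            spikes.append(o)
        return torch.stack(spikes, dim=1)      # (B, N, out_ch, H, W)

class HybridSAINet(nn.Module):
    """SNN encoder + CNN decoder reconstructing one image from N event frames."""
    def __init__(self, num_bins=8, base=16):
        super().__init__()
        self.snn = LIFLayer(2, base)
        self.cnn = nn.Sequential(
            nn.Conv2d(num_bins * base, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):                 # frames: (B, N, 2, H, W)
        s = self.snn(frames)                   # (B, N, base, H, W)
        s = s.flatten(1, 2)                    # stack time bins as channels
        return self.cnn(s)

model = HybridSAINet(num_bins=8)
x = torch.rand(1, 8, 2, 64, 64)                # 8 event frames with 2 polarity channels
print(model(x).shape)                          # torch.Size([1, 1, 64, 64])
```

Training the spiking part as described in the detailed description would replace the hard threshold with a surrogate-gradient method such as STBP or BPTT.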
The invention provides an event-field synthetic aperture imaging algorithm based on a hybrid neural network, which comprehensively exploits the mechanism advantages of the event camera, realizes image reconstruction under dense occlusion and extreme illumination, and greatly expands the applicable range of synthetic aperture imaging. Moreover, the spatio-temporal processing capability of the hybrid neural network is used to denoise the input event points along the time dimension, which greatly improves the quality of image reconstruction.
Drawings
FIG. 1 is a schematic diagram of the experimental scene, including an event camera mounted on a programmable slide rail, a dense wooden fence, and the occluded targets.
Fig. 2 is a flowchart of the synthetic aperture imaging algorithm proposed by the present invention.
Fig. 3 is a schematic diagram of the event camera moving-shooting process.
Fig. 4 is a schematic diagram of the working mechanism of an LIF spiking neuron.
Fig. 5 is a schematic diagram of the neural network structure, with a spiking neural network encoder at the front end and a convolutional neural network decoder at the back end; event frames of different time intervals are input and a visual image is output.
Fig. 6 compares the results of different synthetic aperture imaging algorithms. From left to right, the first column is the reference image, the second column is the synthetic aperture imaging algorithm based on a traditional optical camera (F-SAI), the third column is the synthetic aperture imaging algorithm based on a traditional optical camera and a convolutional neural network (F-SAI+CNN), the fourth column is the synthetic aperture imaging algorithm based on an event camera and an accumulation method (E-SAI+ACC), the fifth column is the synthetic aperture imaging algorithm based on an event camera and a convolutional neural network (E-SAI+CNN), and the sixth column is the synthetic aperture imaging algorithm based on an event camera and a hybrid neural network (E-SAI+Hybrid). From top to bottom, the first to fourth rows are reconstruction results under dense occlusion, and the fifth and sixth rows are reconstruction results in over-bright and over-dark environments.
Fig. 7 shows the comparison result after enlarging the details.
Fig. 8a is a reference image captured under good lighting conditions.
Fig. 8b is a SAI reconstruction result based on a conventional frame.
Fig. 8c is the SAI reconstruction result of an event camera based on the present invention.
Detailed Description
In order that the present invention may be more clearly understood, the following detailed description is provided.
Multi-view shooting of the occluded target is achieved with an event camera mounted on a programmable slide rail (see Fig. 1). After the occluded target has been captured with the event camera, a non-occluded image of the target is captured with an ordinary optical camera as the reference image and paired with the event stream data to construct the data set. However, since the amount of data that can be captured in the field is limited, the samples must be expanded by data augmentation. Deep learning is a data-driven method: the larger the training data set, the stronger the generalization ability of the trained model. In practice, however, it is difficult to cover all scenes during data collection, and collecting data is costly, so the training set is limited. If a variety of training data can be generated from the existing data, the collected data are used far more efficiently, which is the purpose of data augmentation. Although event stream data has no frame structure, it can be transformed according to the pixel position of each event point to obtain an augmented event stream. Common data augmentation techniques are:
(1) turning: the flipping includes a horizontal flipping and a vertical flipping.
(2) Rotating: rotation is clockwise or counter-clockwise, and it is noted that rotation is preferably 90-180 ° during rotation, otherwise dimensional problems may occur.
(3) Zooming: the image may be enlarged or reduced. When enlarged, the size of the enlarged image will be larger than the original size. Most image processing architectures crop the enlarged image to its original size.
(4) Cutting: the region of interest of the picture is cut, and different regions are cut out randomly and are expanded to the original size again usually during training.
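As noted above, these transforms can be applied directly to event coordinates rather than to frames. Below is a minimal NumPy sketch, assuming an (x, y, t, p) event array and a 346 x 260 sensor; the function name and the mode strings are illustrative assumptions, not part of the patent.

```python
import numpy as np

def augment_events(events, height, width, mode="hflip"):
    """Apply a simple geometric augmentation to an event stream.

    events: (C, 4) array with columns (x, y, t, p); x is the column index
            and y the row index on an image plane of size height x width.
    """
    aug = events.copy()
    x, y = aug[:, 0], aug[:, 1]
    if mode == "hflip":            # horizontal flip
        aug[:, 0] = (width - 1) - x
    elif mode == "vflip":          # vertical flip
        aug[:, 1] = (height - 1) - y
    elif mode == "rot180":         # 180-degree rotation keeps the frame size
        aug[:, 0] = (width - 1) - x
        aug[:, 1] = (height - 1) - y
    elif mode == "rot90":          # 90-degree rotation swaps height and width
        aug[:, 0] = (height - 1) - y
        aug[:, 1] = x
    return aug

ev = np.array([[10.0, 5.0, 0.001, 1.0], [20.0, 7.0, 0.002, -1.0]])
print(augment_events(ev, height=260, width=346, mode="hflip"))
```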
Event point data produced by an event camera can be represented as e = (p, x, t), where p ∈ {+1, −1} is the event point polarity, x is the pixel position of the event point, and t is the event point generation time. Since the event stream data obtained during shooting are generated at different viewing angles, the event points need to be refocused. Taking the camera pose at which the reference image is captured as the reference pose θ_ref, the event point e_i = (p_i, x_i, t_i) captured at camera pose θ_i is mapped to the imaging plane of the reference camera pose; using the multi-view geometric principle of the camera and the pinhole imaging model, the mapping formula is:
[x'_i, 1]^T ∝ K·(R_i·K^{-1}·[x_i, 1]^T + T_i / d)
where x'_i is the mapped event point pixel location (the homogeneous coordinates above are normalized by their last component), K is the camera's intrinsic matrix, R_i and T_i are the rotation matrix and translation vector between the two camera poses, and d is the synthetic aperture focal length, namely the distance from the occluded target to the camera plane. The event point obtained after refocusing is e'_i = (p_i, x'_i, t_i).
Through this event point refocusing process, the effective target information in the event stream is aligned in space and time, while the noise event points generated by the occluder remain defocused, which already yields a preliminary de-occlusion effect.
In order to reconstruct a high-quality visual image from the refocused event data, a hybrid neural network consisting of a spiking neural network and a convolutional neural network is constructed to process the event data. The framework uses the spatio-temporal processing capability of the spiking neural network to further reduce the interference of noise event points and improve the robustness of the model, and uses the strong learning ability of the convolutional neural network to perform high-quality visual image reconstruction and guarantee the overall performance of the model. To handle the interference of noise event points effectively, the spiking neural network must be composed of neurons with a leakage mechanism. Taking LIF neurons as an example (the working mechanism is shown in Fig. 4), an LIF neuron is not activated immediately when it receives an external stimulus; instead, it converts the external input into a current that charges its membrane potential U(t), and the neuron is activated only when the membrane potential exceeds the firing threshold, i.e. when U(t) > U_th. When a neuron fires, its membrane potential is immediately reset to the resting potential U_rest (usually U_rest = 0) and a spike is emitted that acts on other neurons. When no new spike arrives, the membrane potential of the LIF neuron leaks toward the resting potential according to a fixed rule. The working mechanism of the LIF neuron is expressed mathematically as follows:
U_n^l(t) = α·U_n^l(t−1)·(1 − O_n^l(t−1)) + Σ_m w_mn·O_m^{l−1}(t)
where U_n^l(t) is the membrane potential of the n-th LIF neuron in the l-th spiking layer at time t, α is the leakage rate of the neuron, w_mn is the synaptic weight connecting the m-th and n-th LIF neurons, and O_n^l(t) is the output of the n-th LIF neuron in the l-th spiking layer at time t, defined as
O_n^l(t) = 1 if U_n^l(t) > U_th, and O_n^l(t) = 0 otherwise,
where U_th is the firing threshold of the spiking neuron.
Wherein U isthRepresenting the firing threshold of the spiking neuron. First term in formula
Figure BDA0002963640020000058
Two pieces of information are implied:
(1) LIF neurons leak their own membrane potential gradually with a leak rate of α.
(2) When pulsing by itself, i.e.
Figure BDA0002963640020000059
When this occurs, the membrane potential itself is reset.
The second term, Σ_m w_mn·O_m^{l−1}(t), describes how the spikes generated by other LIF neurons affect this neuron. Since a spiking neuron only fires when its membrane potential exceeds the threshold, the influence of temporally isolated event points is easily leaked away by the spiking neuron and therefore has no effect (as shown in Fig. 4). After the event points are refocused, the effective target event points are aligned in time and space while the noise event points are scattered, so a spiking neuron with a leakage mechanism can further suppress noise events and filter out interfering information. The output of the spiking neural network is then fed into the convolutional neural network, which learns the mapping between the event data and the visual image, thereby reconstructing a low-noise, high-quality visual image.
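A minimal NumPy sketch of the LIF update above is given below; the layer size, leak rate, threshold and the random input spikes are illustrative assumptions.

```python
import numpy as np

def lif_layer_step(U_prev, O_prev, spikes_in, W, alpha=0.8, U_th=1.0):
    """One time step of a layer of LIF neurons, following the update above.

    U_prev    : (N,) membrane potentials at time t-1
    O_prev    : (N,) binary outputs of the same layer at time t-1
    spikes_in : (M,) binary spikes from the previous layer at time t
    W         : (M, N) synaptic weights w_mn
    """
    # Leak the previous potential, reset neurons that fired at t-1,
    # then add the weighted input spikes.
    U = alpha * U_prev * (1.0 - O_prev) + spikes_in @ W
    O = (U > U_th).astype(np.float32)       # fire when the threshold is exceeded
    return U, O

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 3))
U = np.zeros(3); O = np.zeros(3)
for t in range(5):
    U, O = lif_layer_step(U, O, rng.integers(0, 2, size=4).astype(np.float32), W)
    print(t, U.round(2), O)
```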
After the hybrid neural network has been built, the refocused event data are divided into N time intervals of duration Δt, and the event points within each interval are accumulated into an event frame of size 2 × H × W (2 denotes the positive and negative polarities, H and W are the height and width of the event frame). The event frames are then fed into the spiking neural network in temporal order. When all N event frames have been input, the spiking neural network integrates the output spikes of each time interval into a multi-channel tensor that is fed into the convolutional neural network. The loss function is computed between the image output by the whole hybrid neural network and the pre-collected non-occluded reference image, and the loss is back-propagated to realize joint supervised training of the hybrid neural network. Because the spiking activations are not differentiable, surrogate-function-based methods such as spatio-temporal back-propagation (STBP) or back-propagation through time (BPTT) can be used during back-propagation training.
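The accumulation of refocused events into N event frames of size 2 × H × W can be sketched as follows; the binning by normalized timestamp, the array layout and the function name are illustrative assumptions.

```python
import numpy as np

def events_to_frames(events, num_bins, height, width):
    """Accumulate a refocused event stream into num_bins event frames,
    each of size 2 x H x W (channel 0: positive events, channel 1: negative).

    events: (C, 4) array with columns (x, y, t, p), p in {+1, -1}.
    """
    frames = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    t = events[:, 2]
    t0, t1 = t.min(), t.max()
    bins = np.clip(((t - t0) / (t1 - t0 + 1e-9) * num_bins).astype(int), 0, num_bins - 1)
    xs = np.clip(events[:, 0].astype(int), 0, width - 1)
    ys = np.clip(events[:, 1].astype(int), 0, height - 1)
    ch = (events[:, 3] < 0).astype(int)            # 0 = positive, 1 = negative
    np.add.at(frames, (bins, ch, ys, xs), 1.0)     # count events per pixel and time bin
    return frames

ev = np.array([[10, 5, 0.001, 1], [10, 5, 0.004, -1], [20, 7, 0.009, 1]], dtype=np.float32)
print(events_to_frames(ev, num_bins=3, height=260, width=346).sum(axis=(1, 2, 3)))
```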
Figs. 6 and 7 show the synthetic aperture imaging results of the method under dense occlusion and extreme illumination. Several synthetic aperture imaging algorithms are compared:
(1) F-SAI: a synthetic aperture imaging algorithm based on a traditional optical camera and an accumulation method.
(2) F-SAI + CNN: a synthetic aperture imaging algorithm based on a conventional optical camera and a convolutional neural network.
(3) E-SAI + ACC: synthetic aperture imaging algorithm based on event camera and accumulation method.
(4) E-SAI + CNN: synthetic aperture imaging algorithms based on event cameras and convolutional neural networks.
(5) E-SAI + Hybrid (ours): a synthetic aperture imaging algorithm based on an event camera and a hybrid neural network. The numerical metrics are measured on the same data set:
table 1 model test results
Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used as metrics in experiments with reference images; they are defined as follows:
PSNR = 10·log10( 255² / mean((X − Y)²) )
SSIM = [L(X, Y)]^a × [C(X, Y)]^b × [S(X, Y)]^c
where the luminance, contrast and structure terms are
L(X, Y) = (2·μ_X·μ_Y + C1) / (μ_X² + μ_Y² + C1)
C(X, Y) = (2·σ_X·σ_Y + C2) / (σ_X² + σ_Y² + C2)
S(X, Y) = (σ_XY + C3) / (σ_X·σ_Y + C3)
Here μ_X and μ_Y are the means of X and Y, σ_X and σ_Y are their standard deviations, σ_XY is their covariance, and C1, C2 and C3 are small constants that stabilize the divisions. The higher the PSNR and SSIM values, the better the reconstruction. Since an effective reference image cannot be acquired under extreme illumination, the reference-free image entropy is also used:
Entropy = −Σ_{i=1}^{m} p(i)·log2 p(i)
where m is the total number of distinct pixel values in the image and p(i) is the normalized probability of the i-th pixel value in the image. A higher image entropy indicates a greater amount of information in the image. In addition, the standard deviation (STD) is used to measure image contrast; the higher the STD value, the stronger the contrast.
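A small NumPy sketch of the PSNR and image-entropy computations defined above is given below (SSIM is omitted here and is typically taken from a standard image-processing library); the example images are synthetic and the function names are illustrative.

```python
import numpy as np

def psnr(x, y):
    """PSNR = 10 * log10(255^2 / MSE) for 8-bit images x and y."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def image_entropy(img):
    """Entropy = -sum_i p(i) * log2 p(i) over the normalized pixel-value histogram."""
    values, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
recon = np.clip(ref.astype(int) + rng.integers(-5, 6, size=ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, recon):.2f} dB, entropy = {image_entropy(recon):.2f} bits")
```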

Claims (5)

1. An event field synthetic aperture imaging algorithm based on a hybrid neural network is characterized by comprising the following steps:
step 1: capture scene light information; capture scene information at multiple viewing angles with an event camera and output an event stream;
step 2: event refocusing; map the event data captured at the multiple viewing angles to the synthetic aperture imaging plane through the multi-view geometry of the camera;
step 3: construct and train a hybrid neural network; train the hybrid neural network with the refocused event data and the matching non-occluded visual images;
step 4: reconstruct the visual image; reconstruct an unoccluded target image from the pure event data with the trained hybrid neural network.
2. The event camera-based synthetic aperture imaging algorithm of claim 1, wherein:
in step 1, an event camera is used to capture data of a densely occluded scene at multiple viewing angles; an event camera array, a moving event camera, or similar setups can be used; when the data set is constructed, an ordinary optical camera is additionally used to capture a non-occluded image to form a training sample pair;
in step 1, the scene event data sets at the multiple viewing angles are:
event_s(i), s ∈ [1, S], i ∈ [1, C_s]
where event_s is the scene event data set at the s-th view, e_s^i = (x_s^i, y_s^i, t_s^i, p_s^i) is the i-th event point of the scene at the s-th view, p_s^i is its polarity, t_s^i is the timestamp of its generation time, and x_s^i and y_s^i indicate that its generation position is the x_s^i-th row and y_s^i-th column of the camera imaging plane; T is the total capture duration of the scene event data; S is the number of viewing angles; C_s is the total number of event points collected at the s-th view; M is the number of imaging plane rows; N is the number of imaging plane columns;
in step 1, the scene non-occluded image data set at the multiple viewing angles is:
frame_s(u_s, v_s), s ∈ [1, S], u_s ∈ [1, M], v_s ∈ [1, N]
where frame_s is the non-occluded image of the scene at the s-th view and frame_s(u_s, v_s) is the pixel in the u_s-th row and v_s-th column of the non-occluded image collected at the s-th view; S is the number of viewing angles; M is the number of imaging plane rows; N is the number of imaging plane columns.
3. The event camera-based synthetic aperture imaging algorithm of claim 1, wherein:
in step 2, the event points e_s^i = (x_s^i, y_s^i, t_s^i, p_s^i) generated at the s-th view in the multi-view scene event data set are mapped one by one to the imaging plane of the camera at the reference view r, as follows:
[x'_s^i, y'_s^i, 1]^T ∝ K·(R_s^r·K^{-1}·[x_s^i, y_s^i, 1]^T + T_s^r / d)
where (x'_s^i, y'_s^i) is the pixel position of the i-th event point at view s after mapping to the reference view r (the homogeneous coordinates above are normalized by their last component), K is the intrinsic matrix of the camera, R_s^r is the rotation matrix of camera view s relative to the reference view r, T_s^r is the translation vector of camera view s relative to the reference view r, and d is the synthetic aperture focal length, i.e. the distance from the occluded target to the camera plane;
the refocused event point generated at the s-th view is expressed as:
e'_s^i = (x'_s^i, y'_s^i, t_s^i, p_s^i)
the refocused event data set at the s-th view is represented as:
event'_s = {e'_s^i | i ∈ [1, C_s]}
and the event data set obtained after mapping the event data captured at all views to the reference view r is represented as:
event_r = ∪_{s=1}^{S} event'_s.
4. the event camera-based synthetic aperture imaging algorithm of claim 1, wherein:
in step 3, the neural network is a hybrid network composed of a spiking neural network and a convolutional neural network, wherein the spiking neural network must be built from neurons with a leakage mechanism, such as leaky integrate-and-fire (LIF) neurons; the input of the hybrid network model is the event data set event_r refocused from the multiple views to the reference view in step 2, and the output is the reconstructed visual image I_recon; the loss function is computed between this image and the non-occluded scene image frame_r captured in step 1, and back-propagation is then performed to complete the training of the network model.
5. The event camera-based synthetic aperture imaging algorithm of claim 1, wherein:
the event data input in step 4 is first mapped to the synthetic aperture imaging plane through the multi-view geometric relationship of the camera as in step 2, yielding the event data set refocused from the multiple views to the reference view; this data set is then fed into the trained neural network to obtain the corresponding visual image.
CN202110244649.8A 2021-03-05 2021-03-05 Event field synthetic aperture imaging algorithm based on hybrid neural network Pending CN112987026A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110244649.8A CN112987026A (en) 2021-03-05 2021-03-05 Event field synthetic aperture imaging algorithm based on hybrid neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110244649.8A CN112987026A (en) 2021-03-05 2021-03-05 Event field synthetic aperture imaging algorithm based on hybrid neural network

Publications (1)

Publication Number Publication Date
CN112987026A (en) 2021-06-18

Family

ID=76353013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110244649.8A Pending CN112987026A (en) 2021-03-05 2021-03-05 Event field synthetic aperture imaging algorithm based on hybrid neural network

Country Status (1)

Country Link
CN (1) CN112987026A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408671A (en) * 2021-08-18 2021-09-17 成都时识科技有限公司 Object identification method and device, chip and electronic equipment
CN114862732A (en) * 2022-04-21 2022-08-05 武汉大学 Synthetic aperture imaging method fusing event camera and traditional optical camera
WO2023083121A1 (en) * 2021-11-09 2023-05-19 华为技术有限公司 Denoising method and related device
CN117097876A (en) * 2023-07-07 2023-11-21 天津大学 Event camera image reconstruction method based on neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667442A (en) * 2020-05-21 2020-09-15 武汉大学 High-quality high-frame-rate image reconstruction method based on event camera
CN111798513A (en) * 2020-06-16 2020-10-20 武汉大学 Synthetic aperture imaging method and system based on event camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667442A (en) * 2020-05-21 2020-09-15 武汉大学 High-quality high-frame-rate image reconstruction method based on event camera
CN111798513A (en) * 2020-06-16 2020-10-20 武汉大学 Synthetic aperture imaging method and system based on event camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIANG ZHANG et al.: "Event-based Synthetic Aperture Imaging", HTTPS://ARXIV.ORG/PDF/2103.02376V1.PDF *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408671A (en) * 2021-08-18 2021-09-17 成都时识科技有限公司 Object identification method and device, chip and electronic equipment
WO2023083121A1 (en) * 2021-11-09 2023-05-19 华为技术有限公司 Denoising method and related device
CN114862732A (en) * 2022-04-21 2022-08-05 武汉大学 Synthetic aperture imaging method fusing event camera and traditional optical camera
CN114862732B (en) * 2022-04-21 2024-04-26 武汉大学 Synthetic aperture imaging method integrating event camera and traditional optical camera
CN117097876A (en) * 2023-07-07 2023-11-21 天津大学 Event camera image reconstruction method based on neural network
CN117097876B (en) * 2023-07-07 2024-03-08 天津大学 Event camera image reconstruction method based on neural network

Similar Documents

Publication Publication Date Title
Zhang et al. Deep image deblurring: A survey
CN112987026A (en) Event field synthetic aperture imaging algorithm based on hybrid neural network
Godard et al. Deep burst denoising
Raghavendra et al. Comparative evaluation of super-resolution techniques for multi-face recognition using light-field camera
CN110580472B (en) Video foreground detection method based on full convolution network and conditional countermeasure network
CN112102182B (en) Single image reflection removing method based on deep learning
Duan et al. EventZoom: Learning to denoise and super resolve neuromorphic events
CN114897752A (en) Single-lens large-depth-of-field calculation imaging system and method based on deep learning
Aakerberg et al. Rellisur: A real low-light image super-resolution dataset
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
CN112651911A (en) High dynamic range imaging generation method based on polarization image
Zhang et al. Hybrid deblur net: Deep non-uniform deblurring with event camera
Yang et al. Learning event guided high dynamic range video reconstruction
CN112819742B (en) Event field synthetic aperture imaging method based on convolutional neural network
AU2020408599B2 (en) Light field reconstruction method and system using depth sampling
CN115984124A (en) Method and device for de-noising and super-resolution of neuromorphic pulse signals
Wan et al. Progressive convolutional transformer for image restoration
CN115661452A (en) Image de-occlusion method based on event camera and RGB image
Wang et al. PMSNet: Parallel multi-scale network for accurate low-light light-field image enhancement
CN114612305A (en) Event-driven video super-resolution method based on stereogram modeling
Yan et al. Single Image Reflection Removal From Glass Surfaces Via Multi-Scale Reflection Detection
Li et al. High-speed large-scale imaging using frame decomposition from intrinsic multiplexing of motion
Shi The application of image processing in the criminal investigation
CN114066751B (en) Vehicle card monitoring video deblurring method based on common camera acquisition condition
Duan et al. NeuroZoom: Denoising and super resolving neuromorphic events and spikes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210618)