CN111798395A - Event camera image reconstruction method and system based on TV constraint - Google Patents


Info

Publication number
CN111798395A
CN111798395A
Authority
CN
China
Prior art keywords
image
event
camera
double
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010620396.5A
Other languages
Chinese (zh)
Other versions
CN111798395B (en)
Inventor
余磊 (Yu Lei)
江盟 (Jiang Meng)
王碧杉 (Wang Bishan)
杨文 (Yang Wen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202010620396.5A priority Critical patent/CN111798395B/en
Publication of CN111798395A publication Critical patent/CN111798395A/en
Application granted granted Critical
Publication of CN111798395B publication Critical patent/CN111798395B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/70
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • H04N23/70 Circuitry for compensating brightness variation in the scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Operations Research (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an event camera image reconstruction method and system based on a TV (total variation) constraint. An event camera simultaneously outputs an intensity image sequence and an event stream, and the event stream within the exposure time is extracted according to the timestamp of each image frame output by the event camera. An event double-integral model is established, which relates the images captured by the camera, the event data, and the sharp instantaneous intensity images. On the basis of the event double-integral model, a piecewise-smooth prior in the form of a TV regularization term is introduced as a spatial constraint on the reconstructed image, yielding a variational model for image reconstruction. Based on this variational model, a high-quality grayscale image sequence is reconstructed by joint optimization. The invention addresses motion blur and noise in event camera image reconstruction and recovers a high-frame-rate, high-quality intensity image sequence.

Description

Event camera image reconstruction method and system based on TV constraint
Technical Field
The invention relates to the field of image processing, in particular to an event camera image reconstruction method and system based on TV constraint.
Background
An event camera is a bio-inspired vision sensor modeled on the human retina: its chip mimics the way the biological retina perceives changes in external light intensity. Unlike the image frames output by a conventional camera, an event camera senses scene brightness changes asynchronously and outputs a stream of asynchronous events, where each event comprises pixel coordinates, a time, and a polarity: e = {x, y, t, p}, with x, y the pixel coordinates, t the timestamp, and p = ±1 the polarity (+1 for a brightness increase, −1 for a brightness decrease), as shown in Fig. 1. Compared with conventional cameras, event cameras offer high temporal resolution, low latency (1 μs), high dynamic range (>120 dB), and low power consumption (10 mW), which gives the sensor extremely broad application prospects in research fields such as high-speed robot localization and target tracking and recognition.
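The event representation above can be made concrete with a small sketch (illustrative only; the tuple layout and function name are ours, not from the patent): summing the polarities of a pixel's events, scaled by the contrast threshold c, tracks that pixel's log-intensity change.

```python
import math

# Illustrative sketch (not the patent's code): each event is a tuple
# (x, y, t, p) with polarity p = +1 or -1. With contrast threshold c,
# each event marks a log-intensity step of p * c at its pixel.
def log_intensity_change(events, x, y, c):
    """Accumulated log-intensity change at pixel (x, y) implied by events."""
    return c * sum(p for (ex, ey, t, p) in events if (ex, ey) == (x, y))

events = [(0, 0, 0.001, +1), (0, 0, 0.002, +1), (1, 0, 0.003, -1)]
# Intensity ratio I(t)/I(0) at pixel (0, 0) implied by two positive events:
ratio = math.exp(log_intensity_change(events, 0, 0, c=0.1))
```

The exponential converts the accumulated log-domain steps back to a linear intensity ratio, which is the operation the double-integral model below relies on.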
In event camera research, because the output of an event camera differs from that of a conventional optical camera, many mature frame-based vision methods cannot be applied to it directly; moreover, the event stream output by the event camera lacks scene texture and detail and contains substantial noise. Therefore, to apply the event camera effectively to vision tasks, besides developing new computer vision methods tailored to event cameras, image reconstruction from the event stream is required so that mature vision methods can subsequently be applied. Event-camera-based image reconstruction provides, on the one hand, an effective scene representation that makes it easy to establish correspondences between events and the scene. On the other hand, image reconstruction is the basis for applying existing image processing and analysis techniques to an event camera: once image frames are reconstructed from the event stream, the reconstructed images or videos can be analyzed and processed with classical image processing methods such as target detection, tracking, and recognition.
Existing event cameras such as the DAVIS (Dynamic and Active-pixel Vision Sensor) can output intensity images and an event stream simultaneously, but the frame rate of the brightness images captured by such cameras is low and their latency is high (5 ms or more), and when the camera records highly dynamic scenes the brightness images suffer from motion blur and noise. The high temporal resolution and high dynamic range of the event camera offer a new way to address motion blur, saturated exposure, and other problems of conventional optical images, enabling target imaging in extreme environments. Studying an event-camera-based image reconstruction technique that exploits the high temporal resolution and high dynamic range of event data, combined with the conventional low-frame-rate intensity images, to reconstruct sharp, high-dynamic-range, high-frame-rate images or video therefore yields an effective representation of the real scene and is of great value for applying event cameras in practical scenarios.
Disclosure of Invention
In order to overcome the problems of motion blur and noise in event camera image reconstruction, the invention provides an image reconstruction scheme based on a TV constraint to reconstruct sharp intensity images.
The technical scheme adopted by the invention is an event camera image reconstruction method based on a TV constraint, comprising the following steps:
step 1, an event camera simultaneously outputs an intensity image sequence and an event stream;
step 2, for each frame image Y_i output by the event camera, according to the timestamp t_i of the frame, extracting the event stream within the exposure time [t_i − T/2, t_i + T/2], wherein T is the length of the exposure time;
step 3, establishing an event double-integral model that relates the image captured by the camera, the event data, and the sharp instantaneous intensity image, implemented as follows:
constructing a linear mathematical relationship between the image Y_i captured by the event camera, the reconstructed grayscale image sequence, and the event stream within the exposure time [t_i − T/2, t_i + T/2]:
Y_i = I(f) · J_i(f),  with  J_i(f) = (1/T) ∫_{t_i−T/2}^{t_i+T/2} exp( ∫_f^t e(τ) dτ ) dt
wherein I(f) is the image intensity at any time f ∈ [t_i − T/2, t_i + T/2] within the exposure time of the image captured by the camera, J_i(f) denotes the double-integral image signal computed at time f from the event stream within the exposure time corresponding to the i-th frame image, τ is the integration variable, exp(·) denotes the exponential function, and e(·) denotes the event;
step 4, introducing a piecewise-smooth prior in the form of a TV regularization term on the basis of the event double-integral model as a spatial constraint on the reconstructed image, and establishing the variational model of image reconstruction
{Î(f), J̃_i(f)} = argmin_{I(f), J̃_i(f)} ‖Y_i − I(f)·J̃_i(f)‖₂² + λ‖∇I(f)‖₁ + β‖∇J̃_i(f)‖₁
wherein ∇ denotes the spatial gradient, J̃_i(f) denotes the denoised double-integral image, and λ and β respectively denote the weights of the regularization terms;
step 5, reconstructing a high-quality grayscale image sequence by joint optimization based on the variational model of image reconstruction obtained in step 4.
In step 5, the reconstruction of the high-quality grayscale image sequence by joint optimization is realized by alternating iterative minimization based on split Bregman iteration.
Furthermore, the iterative process is implemented as follows:
(1) first fix J̃_i^k(f); the image I^{k+1}(f) is updated as
I^{k+1}(f) = argmin_{I(f)} ‖Y_i − I(f)·J̃_i^k(f)‖₂² + λ‖∇I(f)‖₁
wherein J̃_i^k(f) is the double-integral image of the k-th iteration;
(2) fix the image I^{k+1}(f); the double-integral image of the (k+1)-th iteration J̃_i^{k+1}(f) is updated as
J̃_i^{k+1}(f) = argmin_{J̃_i(f)} ‖Y_i − I^{k+1}(f)·J̃_i(f)‖₂² + β‖∇J̃_i(f)‖₁
and an intensity image of higher quality is reconstructed when the iteration converges.
Moreover, image reconstruction can be carried out for any frame image Y_i, and the reconstructed frame rate can reach the trigger rate of the event camera.
The invention also provides an event camera image reconstruction system based on TV constraint, which is used for executing the event camera image reconstruction method based on TV constraint.
The method addresses motion blur and noise in event camera image reconstruction. By combining conventional image frames with the event stream of the event camera, fusing their complementary information, and introducing a total-variation spatial smoothness constraint into the reconstruction process, it removes motion blur while suppressing noise in the reconstructed image, recovering a high-frame-rate, high-quality intensity image sequence.
Drawings
Fig. 1 is a comparison graph of conventional camera and event camera data.
FIG. 2 is a flow chart of event camera image reconstruction based on TV constraints according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific examples described herein are intended to be illustrative only and are not intended to be limiting.
The invention fuses the information of conventional images and the event stream to accomplish image reconstruction: an event double-integral model is derived from the blur generation model of the image and the event generation model, event data and image data are fused on this basis, an image spatial smoothness constraint is introduced, and an image reconstruction method based on a TV constraint is provided. First, the blurred-image generation model of the camera is used to model the camera's image formation process along the time direction. Then, from the mathematical model of the event camera, combined with the image blur generation model, a simple and effective event double-integral model is derived. This model makes full use of the high temporal resolution of the event stream to relate the image captured by the camera, the event data, and the sharp high-frame-rate latent images: the blurred image can be regarded as an integral of a high-frame-rate latent image sequence, and the events represent the intensity changes between the latent images. Finally, a piecewise-smooth prior in the form of a TV regularization term is introduced on the basis of the event double-integral model as a spatial constraint on the reconstructed image, thereby establishing a variational model of image reconstruction, and a high-quality grayscale image sequence is reconstructed by joint optimization.
As shown in fig. 2, an event camera reconstruction method based on TV space constraint according to an embodiment of the present invention includes the following specific implementation steps:
Step 1, the event camera simultaneously outputs an intensity image sequence {Y_i}_{i=1}^{l} and an event stream {e_m}_{m=1}^{M}, wherein Y_i denotes the i-th frame image output by the event camera, l is the number of frames in the intensity image sequence, e_m denotes an event point output by the event camera, and M is the number of event points in the event stream.
Step 2, outputting each frame image Y to the event cameraiAccording to the time stamp t of the frameiExtracting the exposure time [ t ]i-T/2,ti+T/2]An event stream within;
wherein T is the length of the exposure time.
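Step 2 amounts to a timestamp filter over the event stream. A minimal sketch (the function name and tuple layout are ours, for illustration only):

```python
# Keep only events whose timestamp falls inside the exposure window
# [t_i - T/2, t_i + T/2]; events are (x, y, t, p) tuples.
def events_in_exposure(events, t_i, T):
    lo, hi = t_i - T / 2, t_i + T / 2
    return [e for e in events if lo <= e[2] <= hi]

stream = [(0, 0, 0.00, +1), (0, 0, 0.04, -1), (0, 0, 0.10, +1)]
window = events_in_exposure(stream, t_i=0.05, T=0.05)  # window [0.025, 0.075]
```

Only the middle event lies inside the window, so `window` holds a single event.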
Step 3, establishing a double-integration model of the event, wherein the model establishes the relationship among the image shot by the camera, the event data and the clear instantaneous intensity image:
the image generation model is defined as the accumulation of the instantaneous intensity image I (t) at time t within the exposure time
Figure BDA0002562854410000043
According to the generation model of the event log (I (t)) -log (I (f)) > p.c and the continuous time model of the event e (t) ═ pc (t-t)e) P ═ 1 denotes polarity (+1 denotes brightness enhancement, -1 denotes brightness reduction), c denotes contrast threshold, e () denotes event, t denoteseRepresenting the time instant at which the event is triggered, a mathematical model relating the successive potential image sequences i (t) to the event signal can be derived, namely:
Figure BDA0002562854410000044
where log () denotes the logarithm, exp () denotes the exponential function, () is the dirac function, τ is the sign of the integral,
the model combines an image generation model to derive a double integral model of an event, and an image Y captured by an event camera is constructediWith the reconstructed gray-scale image sequence I (f) and the exposure time ti-T/2,ti+T/2]Linear mathematical relationship between event streams e (t) in (c):
Figure BDA0002562854410000045
wherein I (f) is any time f e [ t ] within the exposure time of the image shot by the current camerai-T/2,ti+T/2]Intensity of the image of (J)i(f) Representing the double-integrated image signal calculated at time f from the event stream within the exposure time corresponding to the ith frame image.
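The double-integral image J_i(f) can be approximated numerically from a per-pixel event list. A minimal single-pixel sketch (our own illustration, not the patent's implementation): events are stored as (timestamp, p·c) impulses and the outer integral is sampled on a uniform grid.

```python
import math

def double_integral(events_t_pc, f, t_lo, t_hi, n=1000):
    """Discrete approximation of J_i(f) = (1/T) ∫ exp(∫_f^t e(τ)dτ) dt
    for one pixel; events_t_pc is a list of (timestamp, p*c) impulses."""
    T = t_hi - t_lo
    total = 0.0
    for k in range(n):
        t = t_lo + (k + 0.5) * T / n
        # Inner integral ∫_f^t e(τ)dτ: signed sum of impulses between f and t.
        if t >= f:
            E = sum(pc for (te, pc) in events_t_pc if f < te <= t)
        else:
            E = -sum(pc for (te, pc) in events_t_pc if t < te <= f)
        total += math.exp(E)
    return total / n

# One positive event (p*c = 0.2) in mid-exposure; the blurred observation
# is Y = I(f) * J, and dividing by J recovers the sharp intensity I(f).
events = [(0.5, 0.2)]
J = double_integral(events, f=0.0, t_lo=0.0, t_hi=1.0)
Y = 1.0 * J          # simulated blurred value for true I(f) = 1.0
I_rec = Y / J        # double-integral-model inversion
```

With half of the exposure at log-intensity 0 and half at 0.2, J is the average of 1 and e^0.2, and the division recovers I(f) exactly in this noise-free case.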
Step 4, introducing a segmented smooth prior of a TV regular term on the basis of an event double-integral model as space constraint of a reconstructed image, thereby establishing a variation model of image reconstruction:
both the original image signal captured by the event camera and the double integrated image signal generated from the event stream are noisy. In order to solve the problems, a Total Variation (TV) spatial smoothing regular term is introduced on the basis of the double-integral model in the step 3 to suppress noise, penalize spatial fluctuation and simultaneously reserve edges, and convert the event camera image reconstruction problem into an energy minimization problem:
Figure BDA0002562854410000051
wherein the content of the first and second substances,
Figure BDA0002562854410000052
the spatial gradient is shown to be solved,
Figure BDA0002562854410000053
the denoised double-integral image is represented, λ and β respectively represent the weight of the corresponding regularization term, and a preferred value according to an experimental suggestion is λ ═ 0.01, and β ═ 0.005.
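As a sketch, the energy being minimized can be written out directly. The data term below reflects our reading of the model (the exact formula appears in the source only as an image, so this form is an assumption), with the suggested weights λ = 0.01 and β = 0.005:

```python
# Illustrative energy for the TV-constrained model (our reading of the text):
#   E(I, Jt) = ||Y - I*Jt||^2 + lam * TV(I) + beta * TV(Jt)
def tv(img):
    """Anisotropic total variation of a 2-D list-of-lists image."""
    h, w = len(img), len(img[0])
    s = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                s += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                s += abs(img[y + 1][x] - img[y][x])
    return s

def energy(Y, I, Jt, lam=0.01, beta=0.005):
    data = sum((Y[y][x] - I[y][x] * Jt[y][x]) ** 2
               for y in range(len(Y)) for x in range(len(Y[0])))
    return data + lam * tv(I) + beta * tv(Jt)

Y  = [[1.0, 1.0], [1.0, 1.0]]
I  = [[1.0, 1.0], [1.0, 1.0]]
Jt = [[1.0, 1.0], [1.0, 1.0]]
E0 = energy(Y, I, Jt)  # perfect fit with flat images: zero energy
```

Any deviation from the blurred observation or any spatial fluctuation in I or J̃ increases the energy, which is what drives the joint optimization toward a sharp, piecewise-smooth estimate.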
And 5, reconstructing a high-quality gray image sequence through joint optimization solution:
and (4) solving the optimization problem in the step (4) by using an alternative iteration minimization method based on split Bregman iteration. In an iterative process, initialization
Figure BDA0002562854410000054
First, fix the double integral image
Figure BDA0002562854410000055
Updating the estimated image I (f) using a split Bregman method, and then estimating a double-integrated image based on the estimated updated image I (f) also using a split Bregman method
Figure BDA0002562854410000056
The iteration is repeated to achieve the optimal image estimate i (f), the process is as follows:
(3) update of I (f): first fix it
Figure BDA0002562854410000057
Grayscale image Ik+1(f) The updates of (2) are as follows:
Figure BDA0002562854410000058
wherein the content of the first and second substances,
Figure BDA0002562854410000059
is the double-integrated image of the kth iteration.
(4) Updating
Figure BDA00025628544100000510
Fixed image Ik+1(f) Double-integral image of the (k + 1) th iteration
Figure BDA00025628544100000511
The updates of (2) are as follows:
Figure BDA00025628544100000512
the higher quality intensity image I (f) is generally reconstructed converging at the iteration number k ≦ 5.
In specific implementation, the method can be realized as an automatic process using computer software technology, and a corresponding system device implementing the method process also falls within the protection scope of the invention.
Further, the image reconstruction steps are identical for any frame image Y_i, and the image I(f) at any time f can be estimated; theoretically, the reconstructed frame rate can reach the trigger rate (events per second) of the event camera.
To facilitate understanding of the technical effects of the invention, the following example implements reconstruction using the above embodiment process:
(1) the event camera simultaneously outputs an intensity image sequence {Y_i}_{i=1}^{l} and an event stream {e_m}_{m=1}^{M};
(2) for each new frame Y_i, according to the timestamp t_i of the frame, extract the event stream within the exposure time [t_i − T/2, t_i + T/2];
(3) for any time f = t_i − T/2 : Δt : t_i + T/2 within the exposure time, with the sampling interval Δt controlling the reconstructed frame rate (here Δt = T/20), compute the double-integral image J_i(f) at time f according to the event double-integral model;
(4) initialize J̃_i^0(f) = J_i(f), λ = 0.01, β = 0.005, and solve the energy minimization problem by alternately iterating I(f) and J̃_i(f) to obtain the high-quality image I(f).
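The sampling of reconstruction times f in step (3) can be sketched as follows (function name is ours); with Δt = T/20 the grid contains 21 latent frames per exposure window.

```python
# Reconstruction time grid: sample f across the exposure window
# [t_i - T/2, t_i + T/2] at interval dt = T/steps (steps = 20 here).
def recon_times(t_i, T, steps=20):
    dt = T / steps
    return [t_i - T / 2 + k * dt for k in range(steps + 1)]

times = recon_times(t_i=0.05, T=0.02)  # 21 times spanning [0.04, 0.06]
```

Each entry of `times` is one latent instant f at which J_i(f) is computed and a sharp frame I(f) is reconstructed, which is how the method turns one blurred exposure into a short high-frame-rate sequence.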
This event camera reconstruction example compares the images captured by the camera, the event stream, and the reconstruction result, showing that the method removes image motion blur, reduces image noise, and reconstructs sharp images.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (5)

1. An event camera image reconstruction method based on TV constraints is characterized by comprising the following steps:
step 1, an event camera simultaneously outputs an intensity image sequence and an event stream;
step 2, for each frame image Y_i output by the event camera, according to the timestamp t_i of the frame, extracting the event stream within the exposure time [t_i − T/2, t_i + T/2], wherein T is the length of the exposure time;
step 3, establishing an event double-integral model that relates the image captured by the camera, the event data, and the sharp instantaneous intensity image, implemented as follows:
constructing a linear mathematical relationship between the image Y_i captured by the event camera, the reconstructed grayscale image sequence, and the event stream within the exposure time [t_i − T/2, t_i + T/2]:
Y_i = I(f) · J_i(f),  with  J_i(f) = (1/T) ∫_{t_i−T/2}^{t_i+T/2} exp( ∫_f^t e(τ) dτ ) dt
wherein I(f) is the image intensity at any time f ∈ [t_i − T/2, t_i + T/2] within the exposure time of the image captured by the camera, J_i(f) denotes the double-integral image signal computed at time f from the event stream within the exposure time corresponding to the i-th frame image, τ is the integration variable, exp(·) denotes the exponential function, and e(·) denotes the event;
step 4, introducing a piecewise-smooth prior in the form of a TV regularization term on the basis of the event double-integral model as a spatial constraint on the reconstructed image, and establishing the variational model of image reconstruction
{Î(f), J̃_i(f)} = argmin_{I(f), J̃_i(f)} ‖Y_i − I(f)·J̃_i(f)‖₂² + λ‖∇I(f)‖₁ + β‖∇J̃_i(f)‖₁
wherein ∇ denotes the spatial gradient, J̃_i(f) denotes the denoised double-integral image, and λ and β respectively denote the weights of the regularization terms;
step 5, reconstructing a high-quality grayscale image sequence by joint optimization based on the variational model of image reconstruction obtained in step 4.
2. A TV-constraint-based event camera image reconstruction method as defined in claim 1, characterized in that: in step 5, the reconstruction of the high-quality grayscale image sequence by joint optimization is realized by alternating iterative minimization based on split Bregman iteration.
3. A TV-constraint-based event camera image reconstruction method as defined in claim 2, characterized in that the iterative process is implemented as follows:
(1) first fix J̃_i^k(f); the image I^{k+1}(f) is updated as
I^{k+1}(f) = argmin_{I(f)} ‖Y_i − I(f)·J̃_i^k(f)‖₂² + λ‖∇I(f)‖₁
wherein J̃_i^k(f) is the double-integral image of the k-th iteration;
(2) fix the image I^{k+1}(f); the double-integral image of the (k+1)-th iteration J̃_i^{k+1}(f) is updated as
J̃_i^{k+1}(f) = argmin_{J̃_i(f)} ‖Y_i − I^{k+1}(f)·J̃_i(f)‖₂² + β‖∇J̃_i(f)‖₁
and an intensity image of higher quality is reconstructed when the iteration converges.
4. A TV-constraint-based event camera image reconstruction method according to claim 1, 2 or 3, characterized in that: image reconstruction can be carried out for any frame image Y_i, and the reconstructed frame rate reaches the trigger rate of the event camera.
5. An event camera image reconstruction system based on TV constraints, characterized in that it is configured to perform the TV-constraint-based event camera image reconstruction method as claimed in any one of claims 1 to 4.
CN202010620396.5A 2020-06-30 2020-06-30 Event camera image reconstruction method and system based on TV constraint Active CN111798395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010620396.5A CN111798395B (en) 2020-06-30 2020-06-30 Event camera image reconstruction method and system based on TV constraint


Publications (2)

Publication Number Publication Date
CN111798395A 2020-10-20
CN111798395B CN111798395B (en) 2022-08-30

Family

ID=72810782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010620396.5A Active CN111798395B (en) 2020-06-30 2020-06-30 Event camera image reconstruction method and system based on TV constraint

Country Status (1)

Country Link
CN (1) CN111798395B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894550A (en) * 2016-03-31 2016-08-24 浙江大学 Method for synchronously reconstructing dynamic PET image and tracer kinetic parameter on the basis of TV and sparse constraint
WO2018037079A1 (en) * 2016-08-24 2018-03-01 Universität Zürich Simultaneous localization and mapping with an event camera
CN108182670A (en) * 2018-01-15 2018-06-19 清华大学 A kind of resolution enhancement methods and system of event image
CN110148159A (en) * 2019-05-20 2019-08-20 厦门大学 A kind of asynchronous method for tracking target based on event camera
US20190279379A1 (en) * 2018-03-09 2019-09-12 Samsung Electronics Co., Ltd. Method and apparatus for performing depth estimation of object
CN110428477A (en) * 2019-06-24 2019-11-08 武汉大学 A kind of drawing methods for the event camera not influenced by speed
CN110503686A (en) * 2019-07-31 2019-11-26 三星(中国)半导体有限公司 Object pose estimation method and electronic equipment based on deep learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GOTTFRIED MUNDA: "Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation", International Journal of Computer Vision *
JIANG MENG ET AL.: "Event camera denoising algorithm under low-dimensional manifold constraints" (低维流形约束下的事件相机去噪算法), Journal of Signal Processing (信号处理) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232356A (en) * 2020-11-19 2021-01-15 中国人民解放军战略支援部队航天工程大学 Event camera denoising method based on cluster degree and boundary characteristics
CN112232356B (en) * 2020-11-19 2023-09-22 中国人民解放军战略支援部队航天工程大学 Event camera denoising method based on group degree and boundary characteristics
CN113067979A (en) * 2021-03-04 2021-07-02 北京大学 Imaging method, device, equipment and storage medium based on bionic pulse camera
CN113269699A (en) * 2021-04-22 2021-08-17 天津(滨海)人工智能军民融合创新中心 Optical flow estimation method and system based on fusion of asynchronous event flow and gray level image
CN113269699B (en) * 2021-04-22 2023-01-03 天津(滨海)人工智能军民融合创新中心 Optical flow estimation method and system based on fusion of asynchronous event flow and gray level image

Also Published As

Publication number Publication date
CN111798395B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
Rebecq et al. High speed and high dynamic range video with an event camera
CN111798395B (en) Event camera image reconstruction method and system based on TV constraint
Baldwin et al. Time-ordered recent event (TORE) volumes for event cameras
Pan et al. High frame rate video reconstruction based on an event camera
US11928792B2 (en) Fusion network-based method for image super-resolution and non-uniform motion deblurring
CN111539884B (en) Neural network video deblurring method based on multi-attention mechanism fusion
Li et al. LightenNet: A convolutional neural network for weakly illuminated image enhancement
Zhang et al. Learning to restore hazy video: A new real-world dataset and a new method
CN111798370A (en) Manifold constraint-based event camera image reconstruction method and system
CN114245007B (en) High-frame-rate video synthesis method, device, equipment and storage medium
CN106251297A (en) A kind of estimation based on multiple image fuzzy core the rebuilding blind super-resolution algorithm of improvement
Zhao et al. High-speed motion scene reconstruction for spike camera via motion aligned filtering
Haoyu et al. Learning to deblur and generate high frame rate video with an event camera
Zhao et al. Super resolve dynamic scene from continuous spike streams
Xiang et al. Learning super-resolution reconstruction for high temporal resolution spike stream
Zhong et al. Real-world video deblurring: A benchmark dataset and an efficient recurrent neural network
Rai et al. Robust face hallucination algorithm using motion blur embedded nearest proximate patch representation
Wang et al. Joint framework for single image reconstruction and super-resolution with an event camera
Xin et al. Video face super-resolution with motion-adaptive feedback cell
CN114494050A (en) Self-supervision video deblurring and image frame inserting method based on event camera
Tang et al. Structure-embedded ghosting artifact suppression network for high dynamic range image reconstruction
Wang et al. Uneven image dehazing by heterogeneous twin network
CN116579945B (en) Night image restoration method based on diffusion model
Cui et al. Multi-stream attentive generative adversarial network for dynamic scene deblurring
CN115984124A (en) Method and device for de-noising and super-resolution of neuromorphic pulse signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant