CN112053290A - Unsupervised event camera denoising method and unsupervised event camera denoising device based on convolution denoising self-encoder


Info

Publication number
CN112053290A
CN112053290A
Authority
CN
China
Prior art keywords
denoising
encoder
unsupervised
self
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010698124.7A
Other languages
Chinese (zh)
Inventor
索津莉
张志宏
戴琼海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202010698124.7A
Publication of CN112053290A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unsupervised event camera denoising method and device based on a convolutional denoising autoencoder, wherein the method comprises the following steps: acquiring a noisy event sequence containing time, space, and polarity information with an event camera; dividing the event sequence by a fixed step length into event sequence slices; mapping the events in each slice to two dimensions according to their spatial position coordinates to form a two-dimensional image, and mapping a plurality of consecutive slices in sequence to obtain reconstructed video frames; constructing a denoising autoencoder; pre-denoising the reconstructed video frames with a preset denoising algorithm to generate a simulated ground truth; training the denoising autoencoder with the reconstructed video frames and the ground truth to obtain an unsupervised convolutional denoising autoencoder; and denoising video frames with the unsupervised convolutional denoising autoencoder. The method can simultaneously accomplish the two-dimensional visualization and denoising of an event camera sequence.

Description

Unsupervised event camera denoising method and unsupervised event camera denoising device based on convolution denoising self-encoder
Technical Field
The invention relates to the technical field of signal processing and image denoising, in particular to an unsupervised event camera denoising method and device based on a convolution denoising autoencoder.
Background
An event camera is a new type of asynchronous imaging camera based on the principle of neuromorphic vision, also known as the "silicon retina". The camera mimics the imaging mechanism of the human retina and, compared with conventional cameras, offers low power consumption, low latency, and a large dynamic range, giving it great application value in drone visual navigation, autonomous driving, high-speed target detection, and other areas. However, the raw event sequence output by an event camera is of poor signal quality, which severely limits its performance in real scenes. In addition, because the raw output sequence has no two-dimensional visual form, it must often be reconstructed into video frames for display or subsequent processing. It is therefore necessary to design an algorithm that can simultaneously reconstruct and denoise event camera video frames.
The event camera is a bionic asynchronous imaging camera. During imaging, the camera attends only to pixels whose brightness changes and has no concept of a frame, which fundamentally avoids the high data redundancy of conventional cameras and saves transmission bandwidth. In addition, the asynchronous refresh mechanism frees the event camera from frame-rate limits, giving it low latency: it can accurately capture and track high-speed moving targets without producing motion blur. For brightness detection, the event camera uses a logarithmic response, which greatly widens its imaging dynamic range and improves its performance in extreme environments, giving it great application value in autonomous driving, target detection, security monitoring, and other fields.
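The event-generation mechanism described above can be illustrated with a toy model: a pixel emits an event only when its log-brightness change exceeds a contrast threshold, with polarity given by the sign of the change. This sketch, including the function name and the threshold value, is illustrative and not taken from the patent.

```python
import numpy as np

def emit_events(prev_intensity, curr_intensity, t, threshold=0.2):
    """Toy event-generation model for an event camera.

    A pixel emits an event only when its log-brightness change since
    the previous reading exceeds a contrast threshold; the event's
    polarity is the sign of the change.  Unchanged pixels produce
    nothing, which is why static scene content costs no bandwidth.
    """
    eps = 1e-6                                 # avoid log(0)
    delta = np.log(curr_intensity + eps) - np.log(prev_intensity + eps)
    ys, xs = np.nonzero(np.abs(delta) >= threshold)
    # each event is a (timestamp, x, y, polarity) tuple
    return [(t, int(x), int(y), 1 if delta[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]
```

A real sensor applies this test per pixel asynchronously; the frame-pair form here only conveys the thresholded logarithmic response.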
Image denoising is one of the basic problems in image processing. It aims to remove noise from an image by exploiting properties such as smoothness, low rank, sparsity, self-similarity, and the randomness of noise, so as to obtain a higher-quality image. Traditional denoising algorithms include nearest neighbor filtering, bilateral filtering, and non-local means, but these algorithms are usually limited in adaptivity and running speed.
In recent years, with the rise of machine learning and computer vision, denoising algorithms based on deep learning have received increasing attention. Deep learning, as a data-driven and powerful fitting tool, has shown excellent performance in many tasks including denoising, substantially exceeding the best results of traditional algorithms. Among deep learning models, the convolutional neural network and the denoising autoencoder are two common architectures in image processing; they can deeply mine the structural information of an image to accomplish tasks such as denoising and super-resolution.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the present invention is to provide an unsupervised event camera denoising method based on a convolution denoising autoencoder, which can simultaneously implement two-dimensional visualization and denoising tasks of an event camera sequence.
The invention also aims to provide an unsupervised event camera denoising device based on the convolution denoising autoencoder.
In order to achieve the above object, an embodiment of the present invention provides an unsupervised event camera denoising method based on a convolutional denoising autoencoder, including:
acquiring a noisy event sequence containing time, space, and polarity information with an event camera, dividing the event sequence by a fixed step length to obtain event sequence slices, mapping the events in each slice to two dimensions according to their spatial position coordinates to form a two-dimensional image, and mapping a plurality of consecutive event sequence slices in sequence to obtain reconstructed video frames;
the method comprises the steps of constructing a denoising self-encoder, carrying out pre-denoising on a reconstructed video frame through a preset denoising algorithm to generate a simulation truth value, training the denoising self-encoder by using the reconstructed video frame and the truth value to obtain an unsupervised convolution denoising self-encoder, and denoising the video frame by using the unsupervised convolution denoising self-encoder.
According to the unsupervised event camera denoising method based on the convolutional denoising autoencoder, the original video frames reconstructed from the event sequence are used as input, and the result of passing them through the pre-denoising module is used as the ground truth for training. A stable, well-performing unsupervised convolutional denoising autoencoder can thus be obtained, finally achieving the two-dimensional visualization and denoising of the event camera sequence at the same time.
In addition, the unsupervised event camera denoising method based on the convolution denoising autoencoder according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the denoising self-encoder is a U-shaped denoising self-encoder including 8 convolutional layers with jump connection.
Further, in one embodiment of the present invention, the output of the unsupervised convolution denoising autoencoder is a denoised video frame, and the denoised video frame is stored or displayed as a two-dimensional picture.
Further, in an embodiment of the present invention, the preset denoising algorithm includes a nearest neighbor filtering algorithm.
In order to achieve the above object, an embodiment of another aspect of the present invention provides an unsupervised event camera denoising apparatus based on a convolutional denoising autoencoder, including:
the processing module is used for acquiring a noisy event sequence containing time, space, and polarity information with an event camera, dividing the event sequence by a fixed step length to obtain event sequence slices, mapping the events in each slice to two dimensions according to their spatial position coordinates to form a two-dimensional image, and mapping a plurality of consecutive event sequence slices in sequence to obtain reconstructed video frames;
the de-noising module is used for constructing a de-noising self-encoder, pre-de-noising is carried out on the reconstructed video frame through a preset de-noising algorithm to generate a simulation true value, the de-noising self-encoder is trained by using the reconstructed video frame and the true value to obtain an unsupervised convolution de-noising self-encoder, and the unsupervised convolution de-noising self-encoder is used for de-noising the video frame.
According to the unsupervised event camera denoising device based on the convolutional denoising autoencoder, the original video frames reconstructed from the event sequence are used as input, and the result of passing them through the pre-denoising module is used as the ground truth for training. A stable, well-performing unsupervised convolutional denoising autoencoder can thus be obtained, finally achieving the two-dimensional visualization and denoising of the event camera sequence at the same time.
In addition, the unsupervised event camera denoising device based on the convolution denoising autoencoder according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the denoising self-encoder is a U-shaped denoising self-encoder including 8 convolutional layers with jump connection.
Further, in one embodiment of the present invention, the output of the unsupervised convolution denoising autoencoder is a denoised video frame, and the denoised video frame is stored or displayed as a two-dimensional picture.
Further, in an embodiment of the present invention, the preset denoising algorithm includes a nearest neighbor filtering algorithm.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart of an unsupervised event camera denoising method based on a convolutional denoising autoencoder according to an embodiment of the present invention;
FIG. 2 is a block diagram of a method for denoising an unsupervised event camera based on a convolutional denoising auto-encoder according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a basic structure of a U-shaped convolution denoising autoencoder according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an event camera denoising result obtained by using simulation data according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an unsupervised event camera denoising apparatus based on a convolution denoising auto-encoder according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes an unsupervised event camera denoising method and apparatus based on a convolution denoising autoencoder according to an embodiment of the present invention with reference to the accompanying drawings.
First, a method for denoising an unsupervised event camera based on a convolutional denoising autoencoder according to an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a flowchart of an unsupervised event camera denoising method based on a convolutional denoising auto-encoder according to an embodiment of the present invention.
FIG. 2 is a block diagram of a method for denoising an unsupervised event camera based on a convolutional denoising auto-encoder according to an embodiment of the present invention.
As shown in fig. 1 and fig. 2, the unsupervised event camera denoising method based on the convolution denoising self-encoder includes the following steps:
s1, collecting noise-containing event sequences containing time, space and polarity information by using an event camera, dividing the event sequences according to a fixed step length to obtain event sequence slices, mapping the events in the event sequence slices to two dimensions according to corresponding space position coordinates to form a two-dimensional image, and mapping a plurality of continuous event sequence slices in sequence to obtain a reconstructed video frame.
It will be appreciated that the original event sequence collected by the event camera, containing temporal, spatial, and polarity information, is reconstructed into consecutive video frames.
Data acquisition requires an event camera; the code runs on a computer with a CPU or GPU; and data storage or display requires a corresponding hard disk or monitor.
The code uses TensorFlow as the main programming framework and Python as the main programming language. The trained model can be run directly on a machine configured with the corresponding environment to complete the corresponding task.
Training may use either a simulated dataset or a real captured dataset. A real dataset is captured with the event camera in the target environment; compared with simulated data, real data fits and generalizes better to the actual noise.
S2, constructing a denoising autoencoder, pre-denoising the reconstructed video frames with a preset denoising algorithm to generate a simulated ground truth, training the denoising autoencoder with the reconstructed video frames and the ground truth to obtain an unsupervised convolutional denoising autoencoder, and denoising video frames with the unsupervised convolutional denoising autoencoder.
Specifically, video frames reconstructed from the original event sequence are used as input, and the result of pre-denoising those frames with the pre-denoising module's algorithm is used as the ground truth to train the denoising autoencoder, yielding a stable, well-performing unsupervised convolutional denoising autoencoder. The two-dimensional visualization and denoising of the event camera sequence are thereby achieved at the same time. Because the signal-to-noise ratio of event cameras currently on the market is generally low, and because the acquired event sequence must be mapped to two dimensions to obtain image information, the method of the embodiment accomplishes both tasks at once, effectively exploits the high speed and large dynamic range of the event camera, and has great application value.
The algorithm proposed above is unsupervised; no ground truth is required for the dataset.
The pre-denoising module uses nearest neighbor filtering to generate a simulated ground truth from the original video frames reconstructed from the input sequence. The pre-denoising algorithm is not limited to nearest neighbor filtering; other denoising algorithms can serve as the pre-denoising module instead. When training on a single 1050Ti GPU, roughly 50 to 100 epochs are needed, generally taking no more than 10 hours.
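The pre-denoising step can be sketched as follows. The patent names nearest neighbor filtering but does not spell out its exact form; the version below, which discards pixels with too few active 8-neighbors, is one simple frame-level reading of it, and the `support` threshold is an arbitrary illustrative choice.

```python
import numpy as np

def nn_filter_frame(frame, support=1):
    """Pre-denoising: a simple frame-level reading of nearest
    neighbor filtering.  A pixel survives only if at least `support`
    of its 8 spatial neighbors are also active; isolated activations
    are treated as noise and zeroed.  The output serves as the
    simulated ground truth for training the autoencoder."""
    active = (frame != 0).astype(np.int32)
    padded = np.pad(active, 1)          # zero border so rolls wrap in zeros
    # count active 8-neighbors for every pixel
    neighbors = sum(
        np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )[1:-1, 1:-1]
    return np.where(neighbors >= support, frame, 0.0)
```

Training then pairs each reconstructed frame (input) with its filtered version (target); any other conventional denoiser could produce the target instead.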
In the embodiment of the invention, the constructed denoising autoencoder is a U-shaped convolutional denoising autoencoder with 8 convolutional layers and skip connections.
As shown in fig. 3, arrows of different colors represent different neural network operations, blocks of different colors represent the feature maps output by those operations, and the size of each feature map is noted above it. Joined blocks represent a concatenation of the two feature maps, and the dotted arrows indicate the computation of the network loss function.
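Since fig. 3 cannot be reproduced here, the U shape it depicts can be traced as plain shape bookkeeping: 4 stride-2 encoder convolutions, then 4 upsampling decoder convolutions, each decoder stage concatenating the matching encoder feature map before its convolution. The channel widths and the stride-2 downsampling factor are assumptions; the patent fixes only the 8-layer U-shaped structure with skip connections.

```python
def unet_shape_trace(h, w, in_ch=1, base=32):
    """Trace feature-map shapes through a U-shaped denoising
    autoencoder with 8 convolutional layers: 4 stride-2 encoder
    convolutions, then 4 upsampling decoder convolutions, each
    decoder stage concatenating the matching encoder feature map
    (the skip connection) before its convolution."""
    trace = [('input', h, w, in_ch)]
    skips = []
    ch = base
    for i in range(4):                  # encoder: each conv halves H and W
        h, w = h // 2, w // 2
        trace.append((f'enc{i + 1}', h, w, ch))
        if i < 3:
            skips.append((h, w, ch))    # saved for the decoder concat
        ch *= 2
    for i in range(4):                  # decoder: each stage doubles H and W
        h, w = h * 2, w * 2
        if skips:
            sh, sw, sc = skips.pop()
            assert (sh, sw) == (h, w)   # skip map must match spatially
            ch = sc                     # conv after concat restores skip width
        else:
            ch = in_ch                  # final conv maps back to the image
        trace.append((f'dec{i + 1}', h, w, ch))
    return trace
```

The trace makes the two defining properties visible: the output spatial size equals the input size, and every decoder stage has an encoder stage of matching resolution to concatenate.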
The output of the trained autoencoder is a denoised video frame, which can be directly stored or displayed as a two-dimensional image.
As shown in fig. 4, which compares results with and without denoising, the noise in the image produced by the network is clearly removed relative to the original two-dimensional image reconstructed by mapping the event sequence, and the visual quality is markedly improved.
According to the unsupervised event camera denoising method based on the convolutional denoising autoencoder provided by the embodiment of the invention, the original video frames reconstructed from the event sequence are used as input, and the result of passing them through the pre-denoising module is used as the ground truth for training. A stable, well-performing unsupervised convolutional denoising autoencoder can thus be obtained, finally achieving the two-dimensional visualization and denoising of the event camera sequence at the same time.
Next, an unsupervised event camera denoising apparatus based on a convolution denoising autoencoder according to an embodiment of the present invention will be described with reference to the drawings.
FIG. 5 is a schematic structural diagram of an unsupervised event camera denoising apparatus based on a convolution denoising auto-encoder according to an embodiment of the present invention.
As shown in fig. 5, the unsupervised event camera denoising apparatus based on the convolution denoising auto-encoder includes: a processing module 100 and a denoising module 200.
The processing module 100 is configured to acquire a noisy event sequence containing time, space, and polarity information with an event camera, divide the event sequence by a fixed step length to obtain event sequence slices, map the events in each slice to two dimensions according to their spatial position coordinates to form a two-dimensional image, and map a plurality of consecutive event sequence slices in sequence to obtain reconstructed video frames.
The denoising module 200 is configured to construct a denoising autoencoder, pre-denoise the reconstructed video frames with a preset denoising algorithm to generate a simulated ground truth, train the denoising autoencoder with the reconstructed video frames and the ground truth to obtain an unsupervised convolutional denoising autoencoder, and denoise video frames with the unsupervised convolutional denoising autoencoder.
Further, in one embodiment of the present invention, the denoising autoencoder is a U-shaped denoising autoencoder with skip connections, comprising 8 convolutional layers.
Further, in one embodiment of the present invention, the output of the unsupervised convolution denoising autoencoder is a denoised video frame, and the denoised video frame is stored or displayed as a two-dimensional picture.
Further, in one embodiment of the present invention, the predetermined denoising algorithm includes a nearest neighbor filtering algorithm.
It should be noted that the foregoing explanation of the embodiment of the unsupervised event camera denoising method based on the convolution denoising autoencoder is also applicable to the apparatus of the embodiment, and is not repeated here.
According to the unsupervised event camera denoising device based on the convolutional denoising autoencoder provided by the embodiment of the invention, the original video frames reconstructed from the event sequence are used as input, and the result of passing them through the pre-denoising module is used as the ground truth for training. A stable, well-performing unsupervised convolutional denoising autoencoder can thus be obtained, finally achieving the two-dimensional visualization and denoising of the event camera sequence at the same time.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (8)

1. An unsupervised event camera denoising method based on a convolutional denoising autoencoder, characterized by comprising the following steps:
acquiring a noisy event sequence containing time, space, and polarity information with an event camera, dividing the event sequence by a fixed step length to obtain event sequence slices, mapping the events in each slice to two dimensions according to their spatial position coordinates to form a two-dimensional image, and mapping a plurality of consecutive event sequence slices in sequence to obtain reconstructed video frames;
the method comprises the steps of constructing a denoising self-encoder, carrying out pre-denoising on a reconstructed video frame through a preset denoising algorithm to generate a simulation truth value, training the denoising self-encoder by using the reconstructed video frame and the truth value to obtain an unsupervised convolution denoising self-encoder, and denoising the video frame by using the unsupervised convolution denoising self-encoder.
2. The unsupervised event camera denoising method based on the convolutional denoising autoencoder according to claim 1, wherein the denoising autoencoder is a U-shaped denoising autoencoder with skip connections, comprising 8 convolutional layers.
3. The unsupervised event camera denoising method based on the convolutional denoising autoencoder according to claim 1, wherein the output of the unsupervised convolutional denoising autoencoder is a denoised video frame, and the denoised video frame is stored or displayed as a two-dimensional picture.
4. The unsupervised event camera denoising method based on the convolutional denoising autoencoder according to claim 1, wherein the preset denoising algorithm comprises a nearest neighbor filtering algorithm.
5. An unsupervised event camera denoising apparatus based on a convolution denoising auto-encoder, comprising:
the processing module is used for acquiring a noisy event sequence containing time, space, and polarity information with an event camera, dividing the event sequence by a fixed step length to obtain event sequence slices, mapping the events in each slice to two dimensions according to their spatial position coordinates to form a two-dimensional image, and mapping a plurality of consecutive event sequence slices in sequence to obtain reconstructed video frames;
the denoising module is used for constructing a denoising autoencoder, pre-denoising the reconstructed video frames with a preset denoising algorithm to generate a simulated ground truth, training the denoising autoencoder with the reconstructed video frames and the ground truth to obtain an unsupervised convolutional denoising autoencoder, and denoising video frames with the unsupervised convolutional denoising autoencoder.
6. The unsupervised event camera denoising device based on the convolutional denoising autoencoder according to claim 5, wherein the denoising autoencoder is a U-shaped denoising autoencoder with skip connections, comprising 8 convolutional layers.
7. The unsupervised event camera denoising apparatus based on the convolutional denoising autoencoder of claim 5, wherein the output of the unsupervised convolutional denoising autoencoder is a denoised video frame, and the denoised video frame is stored or displayed as a two-dimensional picture.
8. The unsupervised event camera denoising device based on the convolutional denoising autoencoder according to claim 5, wherein the preset denoising algorithm comprises a nearest neighbor filtering algorithm.
CN202010698124.7A 2020-07-20 2020-07-20 Unsupervised event camera denoising method and unsupervised event camera denoising device based on convolution denoising self-encoder Pending CN112053290A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010698124.7A CN112053290A (en) 2020-07-20 2020-07-20 Unsupervised event camera denoising method and unsupervised event camera denoising device based on convolution denoising self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010698124.7A CN112053290A (en) 2020-07-20 2020-07-20 Unsupervised event camera denoising method and unsupervised event camera denoising device based on convolution denoising self-encoder

Publications (1)

Publication Number Publication Date
CN112053290A true CN112053290A (en) 2020-12-08

Family

ID=73601067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010698124.7A Pending CN112053290A (en) 2020-07-20 2020-07-20 Unsupervised event camera denoising method and unsupervised event camera denoising device based on convolution denoising self-encoder

Country Status (1)

Country Link
CN (1) CN112053290A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108073857A (en) * 2016-11-14 2018-05-25 Beijing Samsung Telecommunications Technology Research Co., Ltd. Method and device for processing dynamic vision sensor (DVS) events
CN107610069A (en) * 2017-09-29 2018-01-19 Xidian University DVS visualization video denoising method based on shared K-SVD dictionaries
CN109559823A (en) * 2018-11-29 2019-04-02 Sichuan University DVS data processing method facilitating sperm motility analysis
CN110321777A (en) * 2019-04-25 2019-10-11 Chongqing University of Technology Face recognition method based on stacked convolutional sparse denoising autoencoder

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Yao et al., "Research on infrared image denoising method based on residual convolutional autoencoder (RCAE)", Information Technology and Informatization *
Luo Yuetong et al., "Weak defect detection method for chip surfaces based on convolutional denoising autoencoder", Computer Science *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810611A (en) * 2021-09-17 2021-12-17 北京航空航天大学 Data simulation method and device for event camera
CN113810611B (en) * 2021-09-17 2022-06-07 北京航空航天大学 Data simulation method and device for event camera
CN116016064A (en) * 2023-01-12 2023-04-25 西安电子科技大学 Communication signal noise reduction method based on U-shaped convolution denoising self-encoder
CN116757926A (en) * 2023-05-22 2023-09-15 华南师范大学 Super-resolution SIM-FRET imaging method and system based on self-supervision learning image denoising
CN116757926B (en) * 2023-05-22 2024-04-05 华南师范大学 Super-resolution SIM-FRET imaging method and system based on self-supervision learning image denoising

Similar Documents

Publication Publication Date Title
Zhang et al. Deep image deblurring: A survey
Xu et al. Quadratic video interpolation
Dong et al. Multi-scale boosted dehazing network with dense feature fusion
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
US20190005360A1 (en) Method and apparatus for joint image processing and perception
WO2021164731A1 (en) Image enhancement method and image enhancement apparatus
Mostafavi et al. Learning to reconstruct HDR images from events, with applications to depth and flow prediction
JP2020027659A (en) Method for training convolutional recurrent neural network, and inputted video semantic segmentation method using trained convolutional recurrent neural network
CN111445418A (en) Image defogging method and device and computer equipment
Zheng et al. Deep learning for event-based vision: A comprehensive survey and benchmarks
Liu et al. Self-supervised linear motion deblurring
EP1026634A2 Estimating targets using statistical properties of observations of known targets
CN112053290A (en) Unsupervised event camera denoising method and unsupervised event camera denoising device based on convolution denoising self-encoder
CN111652921A (en) Generation method of monocular depth prediction model and monocular depth prediction method
CN115393191A (en) Method, device and equipment for reconstructing super-resolution of lightweight remote sensing image
CN114463218A (en) Event data driven video deblurring method
Vitoria et al. Event-based image deblurring with dynamic motion awareness
Basak et al. Monocular depth estimation using encoder-decoder architecture and transfer learning from single RGB image
CN117408916A Image deblurring method based on multi-scale residual Swin Transformer and related product
CN111696034A (en) Image processing method and device and electronic equipment
CN114119428B (en) Image deblurring method and device
CN110675320A (en) Method for sharpening target image under spatial parameter change and complex scene
Wan et al. Progressive convolutional transformer for image restoration
CN114549361A (en) Improved U-Net model-based image motion blur removing method
Li et al. High-speed large-scale imaging using frame decomposition from intrinsic multiplexing of motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201208