CN112767274A - Light field image rain stripe detection and removal method based on transfer learning - Google Patents


Info

Publication number
CN112767274A
Authority
CN
China
Prior art keywords
rain
depth map
real scene
module
network
Prior art date
Legal status
Pending
Application number
CN202110094780.0A
Other languages
Chinese (zh)
Inventor
Yan Tao (晏涛)
Li Mingyue (李明悦)
Jing Huahua (井花花)
Current Assignee
Jiangnan University
Original Assignee
Jiangnan University
Priority date
Filing date
Publication date
Application filed by Jiangnan University
Priority: CN202110094780.0A
Publication: CN112767274A
Legal status: Pending

Classifications

    • G06T5/73: Deblurring; Sharpening (under G06T5/00 Image enhancement or restoration)
    • G06N3/044: Recurrent networks, e.g. Hopfield networks (under G06N3/04 Architecture)
    • G06N3/045: Combinations of networks (under G06N3/04 Architecture)
    • G06N3/08: Learning methods (under G06N3/02 Neural networks)
    • G06T5/77: Retouching; Inpainting; Scratch removal (under G06T5/00 Image enhancement or restoration)
    • G06T2207/10052: Images from lightfield camera (under G06T2207/10 Image acquisition modality)
    • G06T2207/20081: Training; Learning (under G06T2207/20 Special algorithmic details)
    • G06T2207/20084: Artificial neural networks [ANN] (under G06T2207/20 Special algorithmic details)


Abstract

The invention discloses a light field image rain stripe detection and removal method based on transfer learning. The method first obtains a depth map with a depth map calculation module, then detects the rain stripe map of synthetic data with a rain stripe detection module while a Gaussian process module detects the rain stripe map of real scene data in a self-supervised manner. Finally, the rainy 3D EPI volume block is concatenated with the obtained depth map and the extracted rain stripe map and fed into a 3D recurrent generative adversarial network to remove the rain, and training is repeated until a high-quality rain-free map is obtained. Compared with work of the same type, the method builds a self-supervised network through transfer learning, extracts real-scene rain stripes more accurately, obtains high-quality rain-free images, and has good generalization ability.

Description

Light field image rain stripe detection and removal method based on transfer learning
Technical Field
The invention relates to a light field image rain stripe detection and removal method based on transfer learning, and belongs to the technical field of computer image processing.
Background
The task of image rain removal has long been both a difficulty and a hot spot in the field of computer vision. Recent deep-learning-based image de-raining methods exhibit excellent performance in both reconstruction error and visual quality. However, because of the many challenges in acquiring real-scene de-raining datasets, most of these methods train only on synthesized data and are difficult to generalize to real-scene images.
In addition, existing rain removal methods are mainly applied to single images or frame sequences. On both, however, it is difficult to exploit depth information or the positional relationship of raindrops to detect and remove rain effectively, and results on real scenes are poor.
A light field image (LFI) captures multiple sub-aperture views at once, records rich structural and texture information of the target scene, and makes depth estimation easier; moreover, the positions of the rain stripes in its sub-aperture views are highly correlated spatially.
Therefore, to enhance the generalization ability of the network while exploiting these advantages of the LFI, the invention designs a light field image rain stripe detection and removal method based on transfer learning.
Disclosure of Invention
The invention aims to provide a light field image rain stripe detection and removal method based on transfer learning, which uses transfer learning to construct a self-supervised network so as to extract real-scene rain stripes more accurately and obtain high-quality rain-free images, thereby enhancing the generalization ability of the network.
In order to achieve the purpose, the invention adopts the following technical scheme:
a light field image rain stripe detection and removal method based on transfer learning comprises a depth map calculation module, a rain stripe detection module and a rain removal module;
the depth map calculation module obtains a depth map through calculation;
the rain stripe detection module detects the rain stripe map of synthetic data and, at the same time, detects the rain stripe map of real scene data with a self-supervised network based on a Gaussian process;
the rain removal module concatenates the rainy 3D EPI volume block, the depth map obtained by the depth map calculation module and the rain stripe map extracted by the rain stripe detection module, feeds the result into a 3D recurrent generative adversarial network to remove the rain and repair the background, and repeats training until a high-quality rain-free map is obtained.
Furthermore, the depth map calculation module firstly extracts information of the light field sub-viewpoints from different directions by using the multi-stream neural network, and then enters the fusion network to calculate the depth map.
Furthermore, the depth map comprises a synthetic data depth map and a real scene depth map; the synthetic data depth map is obtained by entering synthetic data into a depth map calculation module; the real scene depth map is obtained by entering real scene data into a depth map calculation module.
Further, the rain streak detection module takes as input a 3D EPI volume block formed by stacking a row of sub-viewpoints of a light field image and extracts rain streak features with a residual network; in each convolution layer, the extracted synthetic data features and real scene data features are modeled by a Gaussian process to obtain a pseudo true value of the real scene data, which is further used to supervise the network's extraction of real scene data features.
Further, the rain stripe map comprises a synthetic data rain stripe map and a real scene rain stripe map; the synthetic data rain stripe map is obtained by feeding synthetic data into the rain stripe detection module; the real scene rain stripe map is obtained by feeding real data into the rain stripe detection module.
Furthermore, the rain removal module is provided with an LSTM layer and a discriminator structure.
Further, the LSTM layer is used to propagate features between convolution layers at each iteration.
Further, each iteration transfers features through a long short-term memory network.
Further, the loss function of the discriminator is:

    L_gan = -log D(y_gt) - log(1 - D(y))

where y is the clean image output by the generator, y_gt is the corresponding true value, and D(·) is the discriminator network convolution operation.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) the rain streak detection module obtains a pseudo true value of the real scene data by constructing a 3D residual network based on a Gaussian process; this pseudo true value further supervises the network's extraction of real scene data features, so the real scene data can be used to improve the generalization ability of the network;
(2) the rain removal module of the invention is a 3D recurrent generative adversarial network; by introducing an LSTM layer, features can be propagated between convolution layers at each iteration, which helps remove the rain stripes; the discriminator structure added at the end of the network guides the generator in removing rain and alleviates the blurred details of the rain-free maps output by the generator;
(3) compared with work of the same type, the method builds a self-supervised network through transfer learning, extracts real-scene rain stripes more accurately, obtains high-quality rain-free images, and has good generalization ability.
Drawings
FIG. 1 is a flow chart of the light field image rain stripe detection and removal method based on transfer learning;
FIG. 2 is a depth map calculation flow chart;
FIG. 3 is a diagram of the rain stripe detection network architecture;
FIG. 4 is a diagram of the rain removal network architecture;
FIG. 5 shows experimental results on synthetic data, with PSNR/SSIM as evaluation indices, in which column (a) is the input light field rain image; column (b) is the rain-free map obtained by the method; column (c) is the true-value map of the light field image; column (d) is the rain stripe map obtained by the method; column (e) is the true-value map of the rain stripes; column (f) is the depth map obtained by the method;
FIG. 6 shows experimental results on real scene data, in which column (a) is the input light field rain image; column (b) is the rain-free map obtained by the method; column (c) is the rain stripe map obtained by the method.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and preferred embodiments. The embodiments are implemented on the premise of the technical solution of the present invention and give detailed implementations and procedures, but the scope of the present invention is not limited to the following embodiments.
FIG. 1 shows the flow chart of the light field image rain stripe detection and removal method based on transfer learning of the present invention. The method comprises the following three modules:
1. depth map calculation module
The depth map calculation module of the present invention uses the existing depth map calculation network EPINET; FIG. 2 shows the depth map calculation flow chart. EPINET takes as input 3D EPI volume blocks in four directions (horizontal, vertical, left diagonal and right diagonal). A multi-stream network first extracts features in each direction; the features from the four directions are then concatenated and fed into a fusion network to calculate the depth map.
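The multi-stream-plus-fusion idea above can be sketched as follows. This is a simplified illustration with hypothetical layer counts and channel widths, not the actual EPINET configuration: one small convolutional stream per EPI direction, features concatenated and fused into a single-channel depth map.

```python
import torch
import torch.nn as nn

class MultiStreamDepthNet(nn.Module):
    """Illustrative EPINET-style sketch: 4 directional streams + fusion."""
    def __init__(self, views_per_direction=9, feat=16):
        super().__init__()
        in_ch = 3 * views_per_direction  # a line of RGB sub-views stacked channel-wise
        def stream():
            return nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
        # four directions: horizontal, vertical, left diagonal, right diagonal
        self.streams = nn.ModuleList([stream() for _ in range(4)])
        self.fusion = nn.Sequential(
            nn.Conv2d(4 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1),  # single-channel depth map
        )

    def forward(self, epi_stacks):
        # epi_stacks: list of 4 tensors, each (B, 3*views, H, W)
        feats = [s(x) for s, x in zip(self.streams, epi_stacks)]
        return self.fusion(torch.cat(feats, dim=1))
```

A forward pass over four directional stacks of nine 3-channel sub-views yields one depth map per sample.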
Further, the depth map calculation module takes the synthetic data and the real scene data in turn as input to obtain the synthetic data depth map D_s and the real scene data depth map D_r:

    D_s = f_EPINET(B_EPI),    D_r = f_EPINET(I_EPI)

where f_EPINET(·) denotes the EPINET network convolution operation and B_EPI is the rain-free true value of the synthetic data; since the real scene has no rain-free true value, the rainy 3D EPI volume block I_EPI is used directly as input.
2. Rain strip detection module
2.1 network architecture
In the rain streak detection module, the invention constructs a 3D residual network based on a Gaussian process (GP); FIG. 3 shows the rain streak detection network structure. The input I has size (3, 9, 128, 128), where 3 is the number of channels, 9 is the depth, i.e. the number of sub-views, and 128 × 128 is the sub-view size. The middle part of the network consists of 5 layers of standard residual blocks; the convolution kernel size is set to 3 × 3, the activation function is the rectified linear unit (ReLU), and the activation function of the last layer is the Sigmoid function. Each residual block is followed by a GP module, which models the extracted synthetic data features and real scene data features to obtain a pseudo true value of the real scene data; the pseudo true value is further used to supervise the network's extraction of real scene data features.
In FIG. 3, I_s is the input synthetic data, I_r the input real scene data, R_s the output synthetic data rain stripe map, and R_r the output real scene data rain stripe map; F_zs[i] is the matrix storing all intermediate feature vectors z_s^i extracted at the i-th layer.
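The 3D residual detector described above can be sketched as follows: input of shape (3, 9, 128, 128), five residual blocks of 3D convolutions with ReLU, and a Sigmoid on the final layer. The per-block GP modules are omitted, and the internal channel width is a hypothetical choice.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Standard residual block of two 3D convolutions with ReLU."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class RainStreakDetector(nn.Module):
    """Sketch of the 3D residual rain-streak detector (GP modules omitted)."""
    def __init__(self, feat=8):
        super().__init__()
        self.head = nn.Conv3d(3, feat, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock3D(feat) for _ in range(5)])
        self.tail = nn.Conv3d(feat, 3, 3, padding=1)

    def forward(self, x):  # x: (B, 3, num_views, H, W)
        # Sigmoid on the last layer keeps the rain-streak map in [0, 1]
        return torch.sigmoid(self.tail(self.body(self.head(x))))
```

The detector maps a rainy EPI volume to a rain-streak volume of the same shape, one streak map per sub-view.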
2.2 synthetic data training phase
At this stage, the rain streak detection network is trained using the synthetic data by minimizing the supervised loss function:

    L_sup = L_1 + λ_p · L_p        (2-1)

where λ_p is a constant, L_1 is the l1 loss and L_p is the perceptual loss, defined as:

    L_1 = || R_s − R_gt ||_1        (2-2)

    L_p = || VGG(R_s) − VGG(R_gt) ||_2^2        (2-3)

where R_gt is the rain streak true value and VGG(·) is a pre-trained VGG-16 network convolution operation.
In addition to minimizing the loss function, all intermediate feature vectors z_s^i of the synthetic data are stored during training in the matrix F_zs[i], i.e. F_zs[i] = [z_s^i(1), …, z_s^i(N_s)], where N_s is the total number of synthetic data samples.
2.3 real scene training phase
After learning the weights using the synthetic data, the weights are updated using the real scene data, so that the real scene data improves the generalization ability of the network. Specifically, the synthetic data F_zs and the real scene data z_r are first jointly modeled with the GP; supervision for the latent space in the middle of the network is then provided by minimizing the error between z_r and a pseudo true value z_r,pseudo.
Modeling process: the observed sample of real scene data in the k-th training step is first represented with the synthetic data z_s:

    z_r^k = Σ_n α_n · z_s^n + ε        (2-4)

where α_n is a constant and ε is additive noise, ε ~ N(0, σ_ε²). The distributions of the synthetic data and the real scene data are jointly modeled by the GP using equation (2-4). For the real scene data, the conditional joint probability distribution forms the following conditional multivariate Gaussian distribution:

    p(z_r^k | F_zs) ~ N(μ_r, Σ_r)        (2-5)

The pseudo true value of the real scene data features in the k-th training step is obtained from the predictive mean μ_r, and the GP module connected behind each layer of residual blocks yields a layer-wise pseudo true value z_r,pseudo^i. The Gaussian distribution parameters obtained by the GP modules in the shallow layers of the network are retained, and the pseudo true values they produce are propagated backwards through the network.
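The pseudo-true-value step can be sketched numerically by treating the GP module as standard Gaussian-process regression over the stored synthetic feature matrix F_zs: the pseudo true value of a real-scene feature vector is the GP predictive mean conditioned on the synthetic features. The RBF kernel and the noise level below are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A (m, d) and B (n, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_pseudo_label(F_zs, z_r, noise=1e-2):
    """F_zs: (N_s, d) stored synthetic feature vectors; z_r: (d,) real-scene
    feature vector. Returns the GP predictive mean used as pseudo true value."""
    K = rbf_kernel(F_zs, F_zs) + noise * np.eye(len(F_zs))  # (N_s, N_s)
    k_star = rbf_kernel(z_r[None, :], F_zs)                 # (1, N_s)
    z_pseudo = k_star @ np.linalg.solve(K, F_zs)            # predictive mean, (1, d)
    return z_pseudo[0]
```

When the real-scene feature lies close to a stored synthetic feature, the predictive mean stays close to that stored vector, which is what makes it usable as a supervision target.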
thus, the auto-supervised loss function used by the real scene training process is defined as follows:
Figure BDA00029135843900000510
2.4 network Total loss function
The overall loss function for training the network is defined as follows:

    L = L_sup + λ_r · L_unsup        (2-8)

where λ_r is a predefined weight that adjusts the proportions of L_sup and L_unsup in the total loss.
3. Rainwater removal module
3.1 network architecture
The rain removal module is a 3D recurrent generative adversarial network; FIG. 4 shows the rain removal network structure. The generator employs a recursive residual network (ResNet) with 4 iterations; each residual layer (ResBlock) contains 2 convolution layers followed by ReLU activation. All convolution kernels have size 3 × 3 × 3, and no downsampling or upsampling operations are required. A long short-term memory (LSTM) layer is introduced; all of its convolutions have 32 input channels and 32 output channels, and it propagates features between the convolution layers of each iteration, which helps remove the rain stripes. Meanwhile, a discriminator structure is added at the end of the network to guide the generator in removing rain and to alleviate the blurred details of the rain-free images output by the generator.
In FIG. 4, I is the input rainy 3D EPI volume block, D is the depth map obtained by the depth map calculation module, and R is the rain stripe map extracted by the rain stripe detection module; I, D and R each include both synthetic data and real data.
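The recurrent generator described above can be sketched as follows: the concatenated input (3-channel EPI + 1-channel depth + 3-channel rain stripes = 7 channels) passes through a residual body 4 times, with a convolutional LSTM cell carrying the hidden state across iterations. The 32-channel width follows the text; the cell wiring and residual update are simplified assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell3D(nn.Module):
    """Minimal convolutional LSTM cell over 3D feature volumes."""
    def __init__(self, ch=32):
        super().__init__()
        self.gates = nn.Conv3d(2 * ch, 4 * ch, 3, padding=1)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class RecurrentGenerator(nn.Module):
    """Sketch of the 4-iteration recurrent rain-removal generator."""
    def __init__(self, in_ch=7, ch=32, iterations=4):
        super().__init__()
        self.iterations = iterations
        self.head = nn.Conv3d(in_ch, ch, 3, padding=1)
        self.lstm = ConvLSTMCell3D(ch)
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.tail = nn.Conv3d(ch, 3, 3, padding=1)

    def forward(self, x):  # x: (B, 7, D, H, W) = EPI(3) + depth(1) + streaks(3)
        feat = self.head(x)
        h = torch.zeros_like(feat)
        c = torch.zeros_like(feat)
        for _ in range(self.iterations):
            h, c = self.lstm(feat, h, c)  # LSTM propagates features per iteration
            feat = feat + self.body(h)    # residual refinement
        return self.tail(feat)
```

Each iteration refines the same feature volume, with the LSTM state linking successive refinements.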
3.2 loss function
Existing image rain removal methods adopt an additive model, i.e. a rain image x is regarded as the superposition of a clean image y and a rain component r:

    x = y + r        (3-1)

so y = x − r follows from formula (3-1). The rain streak detection module provides the detected rain streak map r; because the real scene data has no true value, the input rain image x minus the rain streak map r, after normalization, serves as the pseudo true value of the clean image for the real scene data.
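The pseudo-clean-image step can be sketched in a few lines; min-max normalization back to [0, 1] is an assumption about the unspecified normalization.

```python
import numpy as np

def pseudo_clean(x, r):
    """Pseudo true value of the clean image for real-scene data:
    subtract the detected rain-streak map, then normalize to [0, 1]."""
    y = x - r                      # y = x - r from the additive model x = y + r
    lo, hi = y.min(), y.max()
    return (y - lo) / (hi - lo + 1e-8)
```

This gives the rain removal module a supervision target for real-scene inputs that lack ground truth.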
After the rainy 3D EPI volume block I, the depth map D obtained by the depth map calculation module and the rain stripe map R extracted by the rain stripe detection module are concatenated, the result is fed into the generator to remove the rain. The loss function of the generator combines a structural similarity (SSIM) loss and a perceptual loss, given by formulas (3-2) and (3-3) respectively:

    L_SSIM = 1 − SSIM(y, y_gt)        (3-2)

    L_p,g = || VGG(y) − VGG(y_gt) ||_2^2        (3-3)

where y is the clean image output by the generator and y_gt is the corresponding true value.
The discriminator receives the output of the generator and evaluates whether it resembles a real, clear, rain-free scene. The discriminator loss function is:

    L_gan = −log D(y_gt) − log(1 − D(y))        (3-4)

where D(·) is the discriminator network convolution operation.
In summary, the overall loss function for training the network is defined as follows:

    L = L_SSIM + λ_p,g · L_p,g + λ_gan · L_gan        (3-5)

where λ_p,g and λ_gan are the weights of L_p,g and L_gan in the total loss.
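A numerical sketch of the combined generator objective (3-5) follows. The SSIM term here is a simplified global variant rather than the usual windowed SSIM, the feature extractor is a stand-in for VGG-16, and the default weights match those reported in the experiments; all three simplifications are assumptions.

```python
import numpy as np

def ssim_loss(y, y_gt, c1=1e-4, c2=9e-4):
    """1 - SSIM computed globally over the whole image (simplified)."""
    mu_y, mu_g = y.mean(), y_gt.mean()
    var_y, var_g = y.var(), y_gt.var()
    cov = ((y - mu_y) * (y_gt - mu_g)).mean()
    ssim = ((2 * mu_y * mu_g + c1) * (2 * cov + c2)) / (
        (mu_y ** 2 + mu_g ** 2 + c1) * (var_y + var_g + c2))
    return 1.0 - ssim

def total_generator_loss(y, y_gt, features, d_of_y, lam_p=0.04, lam_gan=0.01):
    """L = L_SSIM + lam_p * L_perc + lam_gan * L_gan, as in (3-5).
    `features` stands in for the VGG-16 feature extractor; `d_of_y` is the
    discriminator's score for the generated image."""
    l_ssim = ssim_loss(y, y_gt)
    l_perc = np.mean((features(y) - features(y_gt)) ** 2)
    l_gan = -np.log(d_of_y + 1e-8)   # generator's adversarial term
    return l_ssim + lam_p * l_perc + lam_gan * l_gan
```

With a perfect reconstruction and a fully fooled discriminator, every term vanishes, which is the sanity check one expects of the combined objective.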
To fully illustrate the effect of the method of the present invention on rain streak detection and removal, the following specific experiments are given:
The angular resolution of the input LFI is 9 × 9 and the spatial resolution is 512 × 512. Training is performed on a PC equipped with an i7-7700K CPU, 24 GB of memory and an NVIDIA RTX TITAN GPU with 24 GB. Due to the memory limit of the GPU, the rainy LFI is cut into small blocks of size 128 × 128 with stride 64. The network is implemented with PyTorch 1.5.0 and Python 3.6.0, with the number of iterations set to 500 and the batch size set to 5002. Adam optimization is adopted to train the network; the learning rate of the rain streak detection module is set to 0.0002, the learning rate of the rain removal module's generator to 0.0001, and that of the discriminator to 0.0004. λ_p in formula (2-1) is set to 0.04; λ_r in formula (2-8) is set to 0.015; λ_p,g and λ_gan in formula (3-5) are set to 0.04 and 0.01, respectively.
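The optimizer setup matching the hyper-parameters above can be sketched as follows: Adam throughout, with learning rate 0.0002 for the rain-streak detector, 0.0001 for the generator and 0.0004 for the discriminator. The dummy parameters stand in for the actual networks.

```python
import torch

# Placeholder parameters; in practice these come from the three networks.
detector_params = [torch.nn.Parameter(torch.zeros(1))]
generator_params = [torch.nn.Parameter(torch.zeros(1))]
discriminator_params = [torch.nn.Parameter(torch.zeros(1))]

opt_detect = torch.optim.Adam(detector_params, lr=2e-4)   # rain-streak detection module
opt_gen = torch.optim.Adam(generator_params, lr=1e-4)     # rain-removal generator
opt_disc = torch.optim.Adam(discriminator_params, lr=4e-4)  # discriminator
```

Giving the discriminator a larger learning rate than the generator is a common GAN training choice, consistent with the rates quoted in the text.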
A depth map is first obtained by the depth map calculation module. The rain stripe detection module then detects the synthetic data rain stripe map while the Gaussian process module detects the real scene data rain stripe map in a self-supervised manner. Finally, in the rain removal module, the rainy 3D EPI volume block is concatenated with the depth map obtained by the depth map calculation module and the rain stripe map extracted by the rain stripe detection module and fed into the 3D recurrent generative adversarial network to remove the rain, and training is repeated until a high-quality rain-free map is obtained.
For the synthetic data, the light field image rain stripe detection and removal method based on transfer learning is evaluated quantitatively with two typical indices, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM); FIG. 5 shows the synthetic data experimental results. As can be seen from FIG. 5, even when the input in column (a) contains dense rain stripes and dense fog, the method accurately extracts the rain stripes, as shown in column (d). The rain removal results in column (b) show that the rain-free images obtained by the invention have higher PSNR and SSIM values; the method effectively removes rain stripes and repairs the background to obtain high-quality rain-free images.
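PSNR, as used in the quantitative evaluation above, can be computed directly; the sketch below assumes images scaled to [0, 1].

```python
import numpy as np

def psnr(y, y_gt, peak=1.0):
    """Peak signal-to-noise ratio between a de-rained result and its truth."""
    mse = np.mean((y - y_gt) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means lower mean-squared error between the de-rained output and the rain-free ground truth.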
For real scene data, the proposed method is evaluated qualitatively; FIG. 6 shows the real scene data experimental results. As can be seen from columns (b) and (c) of FIG. 6, the method accurately extracts the rain stripes of the real scene and obtains a clean rain-free map, demonstrating its good generalization ability.
The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto. All equivalent changes, simplifications and modifications which do not depart from the spirit and scope of the invention are intended to be covered by the scope of the invention.

Claims (9)

1. A light field image rain stripe detection and removal method based on transfer learning, characterized by comprising a depth map calculation module, a rain stripe detection module and a rain removal module;
the depth map calculation module obtains a depth map through calculation;
the rain stripe detection module detects the rain stripe map of synthetic data and, at the same time, detects the rain stripe map of real scene data with a Gaussian process module in a self-supervised manner;
the rain removal module concatenates the rainy 3D EPI volume block, the depth map obtained by the depth map calculation module and the rain stripe map extracted by the rain stripe detection module, feeds the result into a 3D recurrent generative adversarial network to remove the rain and repair the background, and repeats training until a high-quality rain-free map is obtained.
2. The light field image rain stripe detection and removal method based on transfer learning according to claim 1, wherein the depth map calculation module first extracts information of the light field sub-viewpoints from different directions with a multi-stream neural network and then feeds it into a fusion network to calculate the depth map.
3. The method according to claim 2, wherein the depth map comprises a synthetic data depth map and a real scene depth map; the synthetic data depth map is obtained by feeding synthetic data into the depth map calculation module; the real scene depth map is obtained by feeding real scene data into the depth map calculation module.
4. The method according to claim 1, wherein the rain stripe detection module takes as input a 3D EPI volume block formed by stacking a row of sub-viewpoints of the light field image and extracts rain stripe features with a residual network; in each convolution layer, the extracted synthetic data features and real scene data features are modeled by a Gaussian process to obtain a pseudo true value of the real scene data, which is further used to supervise the network's extraction of real scene data features.
5. The method according to claim 1, wherein the rain stripe map comprises a synthetic data rain stripe map and a real scene rain stripe map; the synthetic data rain stripe map is obtained by feeding synthetic data into the rain stripe detection module; the real scene rain stripe map is obtained by feeding real data into the rain stripe detection module.
6. The method according to claim 1, wherein the rain removal module is provided with an LSTM layer and a discriminator structure.
7. The method according to claim 6, wherein the LSTM layer is used to propagate features between convolution layers at each iteration.
8. The method according to claim 7, wherein each iteration transfers features through a long short-term memory network.
9. The method according to claim 6, wherein the loss function of the discriminator is:

    L_gan = -log D(y_gt) - log(1 - D(y))

where y is the clean image output by the generator, y_gt is the corresponding true value, and D(·) is the discriminator network convolution operation.
CN202110094780.0A (priority date 2021-01-25, filing date 2021-01-25): Light field image rain stripe detection and removal method based on transfer learning. Status: Pending. Published as CN112767274A.

Priority Applications (1)

CN202110094780.0A (priority date 2021-01-25, filing date 2021-01-25): Light field image rain stripe detection and removal method based on transfer learning


Publications (1)

CN112767274A, published 2021-05-07

Family

ID=75707003

Family Applications (1)

CN202110094780.0A (priority date 2021-01-25, filing date 2021-01-25): Light field image rain stripe detection and removal method based on transfer learning, published as CN112767274A

Country Status (1)

Country Link
CN (1) CN112767274A (en)

Citations (4)

* Cited by examiner, † Cited by third party

    • US20200226769A1 (Tata Consultancy Services Limited, priority 2019-01-11, published 2020-07-16) *: Dynamic multi-camera tracking of moving objects in motion streams
    • CN110738605A (Shandong University, priority 2019-08-30, published 2020-01-31) *: Image denoising method, system, device and medium based on transfer learning
    • CN111445465A (Jiangnan University, priority 2020-03-31, published 2020-07-24) *: Light field image snowflake or rain strip detection and removal method and device based on deep learning
    • CN112085680A (Tencent Technology (Shenzhen) Co., Ltd., priority 2020-09-09, published 2020-12-15) *: Image processing method and device, electronic equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MINGHAN LI et al.: "Video Rain Streak Removal by Multiscale Convolutional Sparse Coding", IEEE, 16 December 2018 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination