CN113763261A - Real-time detection method for far and small targets under sea fog meteorological condition - Google Patents

Real-time detection method for far and small targets under sea fog meteorological condition

Info

Publication number
CN113763261A
CN113763261A (application CN202110724402.6A)
Authority
CN
China
Prior art keywords
image
fog
far
layer
real
Prior art date
Legal status
Granted
Application number
CN202110724402.6A
Other languages
Chinese (zh)
Other versions
CN113763261B (en)
Inventor
刘开周
马海亮
赵宝德
崔健
王银欢
曹哲
Current Assignee
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS
Priority to CN202110724402.6A
Publication of CN113763261A
Application granted
Publication of CN113763261B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/73: Deblurring; Sharpening
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the fields of image processing and computer vision, and in particular to a real-time detection method for far and small targets under sea fog meteorological conditions. The invention provides a real-time defogging method based on a convolutional neural network, composed of a K-module estimation network and a fog-free image restoration part: the K parameters are obtained through the K-module estimation network, and the restoration part then recovers a clear fog-free image. A multi-scale target detection model, Dense-YOLOv4, which fuses dense connection blocks, is also designed, with a dense connection block added to the Neck layer of the YOLOv4 network at the 76 × 76 scale. The method can markedly enhance image clarity and target detail information under sea fog weather conditions, thereby improving the detector's precision on far and small targets, reducing the miss rate, and offering important application value for navigation safety in sea fog weather.

Description

Real-time detection method for far and small targets under sea fog meteorological condition
Technical Field
The invention relates to the fields of image processing and computer vision, and in particular to a method for real-time detection of far and small targets under sea fog meteorological conditions.
Background
In recent years, with the convergence of computer vision and deep learning, object detection has become increasingly important as a basic research topic in computer vision. The marine environment is complex and changeable, and sea fog weather occurs frequently; atmospheric scattering particles in sea fog severely degrade the quality of video images collected by marine equipment and greatly interfere with tasks such as target detection. At present, detection of far and small targets at sea does not consider the influence of sea fog weather: offshore targets are mainly far and small, so little detail information is available for detection in the first place, and the added interference of sea fog markedly reduces the final detection precision. It is therefore crucial for marine equipment to improve the detection accuracy of far and small targets under sea fog weather conditions.
Disclosure of Invention
Aiming at the shortcomings of existing methods in detecting far and small targets under real sea fog weather conditions, the invention provides a far and small target detection method for sea fog weather. Using image processing and computer vision techniques, a defogging method based on a convolutional neural network (SFD-Net, Sea Fog Defogging Network) and a multi-scale target detection network fusing dense connection blocks, Dense-YOLOv4, are proposed. The SFD-Net method significantly improves the clarity of far and small target images and the detail information of targets under sea fog weather conditions; detection is then performed with the Dense-YOLOv4 network, which improves the detection precision for far and small targets under sea fog weather and reduces the detector's miss rate for such targets.
The technical scheme adopted by the invention to achieve this purpose is as follows:
A real-time detection method for far and small targets under sea fog meteorological conditions comprises the following steps:
constructing a many-to-one data set of foggy images and fog-free images;
constructing a real-time defogging network model according to the data set and the sea fog defogging task, and training the real-time defogging network model with the data set;
collecting various images of far and small targets at sea, establishing a far small target detection data set, and preprocessing the images in the far small target detection data set;
training the real-time detection model Dense-YOLOv4 with the preprocessed far small target detection data set to obtain a trained detection model;
acquiring an original image under sea fog meteorological conditions, defogging it with the trained real-time defogging network model, and inputting the defogged image into the real-time detection model Dense-YOLOv4 for far small target detection, thereby obtaining the detected far and small targets.
The far small target image is an image that contains a ship target in which the number of target pixels does not exceed 100.
Constructing the many-to-one data set of foggy and fog-free images specifically comprises:
taking the color images in the NYU data set, together with their corresponding depth maps, as fog-free images; adjusting the global atmospheric light value and the atmospheric scattering coefficient to generate, from each color image and its depth map, foggy images under different global atmospheric light values and atmospheric scattering coefficients; and associating the multiple foggy images with their single fog-free image to form the many-to-one data set.
Constructing the real-time defogging network model comprises the following steps:
the K-module estimation network performs feature extraction and fusion on the foggy images in the data set to obtain the K parameter module;
the K parameter module is then used to restore the foggy image, realizing reconstruction and restoration of the defogged image.
The K-module estimation network includes a downsampling layer and a feature fusion layer, wherein:
the down-sampling layer is used for extracting the characteristics of the foggy image;
the feature fusion layer is used for fusing image features extracted by different channels in the down-sampling layer.
The down-sampling layer comprises 6 convolutional layers divided into 3 convolutional layer modules; each module consists of a 1 × 1 convolution kernel and a 3 × 3 convolution kernel, and each convolutional layer uses the Mish activation function.
The feature fusion layer comprises 3 feature splicing structures and a convolution layer: the first feature splicing structure fuses the features extracted by the first and second convolutional layers of the down-sampling layer; the second fuses the features extracted by the second and fourth convolutional layers; the third fuses the features extracted by the first, second, fourth, and sixth convolutional layers; finally, the features obtained from the third splice undergo feature dimension reduction through a convolution layer, and the result is taken as the K parameter module.
Using the K parameter module to restore the foggy image specifically comprises:
J(x)=ReLU[K(x)I(x)-K(x)+1]
where J(x) represents the fog-free image, K(x) represents the K parameter module, and I(x) represents the input foggy image; the fog-free image is obtained through the ReLU activation function.
The preprocessing of the images in the far small target detection data set specifically comprises: calibrating the far small target images in the data set and adjusting their size.
The real-time detection model Dense-YOLOv4 is specifically as follows:
a dense connection block replaces the Neck layer of the YOLOv4 network at the 76 × 76 scale, namely: five convolutional layers and two residual modules form a dense connection block; the five convolutional layers consist of three 128-dimensional 1 × 1 convolution kernels and two 256-dimensional 3 × 3 convolution kernels; the first residual module fuses the input features of the dense connection block with the features extracted by the second convolutional layer, and the second residual module fuses the input features of the dense connection block with the features extracted by the second and fourth convolutional layers; after the second residual module, the extracted features are transformed with a 128-dimensional 1 × 1 convolution kernel so that feature dimensions match in subsequent layers.
The invention has the following beneficial effects and advantages:
the invention realizes the real-time defogging requirement in the defogging aspect of the sea fog image by using the SFD-Net method, and the defogging result of the method has richer target detail information, thereby effectively avoiding the color distortion and the Halo phenomenon in the defogging result and providing a clearer image for the subsequent target detection. The defogged image is input into a Dense-YOLOv4 detection network, the network integrates Dense connecting blocks, the problem that the Neck layer of the detection network is reversely propagated and gradient disappears is solved, and the characteristic multiplexing capability of the network is improved. In addition, the network reasoning time is not obviously increased by only adding one dense connecting block in the Neck layer, the detection precision of a far small target is greatly improved, and the network has the performance of real-time target detection.
Drawings
FIG. 1 is a flow chart of the method of the invention;
FIG. 2 is a diagram of the K-module estimation network of the SFD-Net method;
FIG. 3 compares the effect before and after SFD-Net defogging (a: before defogging; b: after defogging);
FIG. 4 is a diagram of the Dense-YOLOv4 network architecture;
FIG. 5 compares the detection results of Dense-YOLOv4 before and after defogging (a: before defogging; b: after defogging).
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
In order to make the aforementioned objects, features, and advantages of the present invention comprehensible, embodiments accompanied by figures are described in detail below. In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; modifications within the spirit and scope of the appended claims remain covered.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
As shown in FIG. 1, the invention provides a real-time detection method for far and small targets under sea fog weather conditions, which comprises the following steps:
step 1: and establishing a fog image and fog-free image many-to-one data set according to the defogging requirement.
Step 2: use the established data set to train the SFD-Net model; FIG. 2 shows the K-module estimation network structure of the model. The model outputs the defogged image; the defogging effect is shown in FIG. 3, where FIG. 3a is a sea fog image and FIG. 3b is the image after defogging with the SFD-Net method.
Step 3: acquire images of far and small targets at sea, establish a far small target detection data set according to the target detection requirements, calibrate and resize the data, and finally adjust all input images uniformly to 608 × 608.
Step 4: use the established target detection data set to train the Dense-YOLOv4 network, whose structure is shown in FIG. 4, obtaining the trained target detection model.
In FIG. 4, Conv denotes a convolutional layer, BN denotes batch normalization, Mish denotes the Mish activation function, Leaky ReLU denotes the Leaky ReLU activation function, Concat denotes feature concatenation, and add denotes feature addition.
Step 5: input a sea fog image collected under sea fog weather conditions into the SFD-Net network for defogging, input the resulting clear fog-free image into the Dense-YOLOv4 network for far small target detection, and output the detection result. The far small target detection results are shown in FIG. 5: FIG. 5a shows direct detection without defogging, and FIG. 5b shows detection after defogging.
Here, a far small target image refers to an image that contains a ship target in which the number of target pixels does not exceed 100.
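For illustration, the following Python/PyTorch sketch chains the two trained models into the step-5 inference pipeline. The function and weight handling shown here are hypothetical glue code, not a disclosed interface: sfd_net is an SFD-Net instance as described below, and dense_yolov4 is assumed to map an image tensor to boxes, scores, and class ids.

import torch
import torchvision.transforms.functional as TF
from PIL import Image

def detect_far_small_targets(image_path, sfd_net, dense_yolov4, device="cuda"):
    img = Image.open(image_path).convert("RGB")
    x = TF.to_tensor(TF.resize(img, [608, 608])).unsqueeze(0).to(device)
    with torch.no_grad():
        defogged = sfd_net(x)                # J(x) = ReLU[K(x)I(x) - K(x) + 1]
        detections = dense_yolov4(defogged)  # boxes, scores, class ids (assumed)
    return defogged, detections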
The many-to-one data set of foggy and fog-free images is specifically built as follows:
using the color images and depth information in the NYU data set according to the following atmospheric scattering model:
I(x)=J(x)t(x)+A(1-t(x))
t(x)=e^(-βd(x))
where I(x) represents the input foggy image, J(x) the clear fog-free image, t(x) the atmospheric transmittance, A the global atmospheric light value, d(x) the distance between the target and the imaging device in the sea fog scene, and β the atmospheric scattering coefficient.
Selecting global atmospheric light values A ∈ [0.5, 0.9] and atmospheric scattering coefficients β ∈ [0.4, 1.6] yields 27531 foggy images from the 1449 fog-free images, realizing the many-to-one association between foggy and fog-free images.
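As a concrete illustration, the following Python sketch synthesizes foggy/fog-free pairs from an NYU color image J and its depth map d using the two formulas above. Only the formulas and the ranges A ∈ [0.5, 0.9], β ∈ [0.4, 1.6] come from the text (whose counts imply 19 (A, β) pairs per clear image); the depth normalization and the grid spacing below are illustrative assumptions.

import numpy as np

def synthesize_fog(J, d, A, beta):
    # I(x) = J(x)t(x) + A(1 - t(x)), with t(x) = e^(-beta * d(x)).
    # J: HxWx3 float image in [0, 1]; d: HxW depth map (normalization assumed).
    t = np.exp(-beta * d)[..., None]   # per-pixel transmittance, broadcast over RGB
    return J * t + A * (1.0 - t)

def many_to_one_pairs(J, d):
    # Many foggy variants associated with one clear image (illustrative grid).
    for A in (0.5, 0.7, 0.9):
        for beta in np.linspace(0.4, 1.6, 7):
            yield synthesize_fog(J, d, A, beta), J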
The collection of far and small targets at sea specifically comprises:
in the invention, far and small target images are collected with a pan-tilt-zoom dome camera mounted on an unmanned surface vehicle, and far and small targets of different types are obtained by screening the collected images.
The SFD-Net network structure proposed by the invention comprises two parts: the K-module estimation network and fog-free image restoration. The K-module estimation network consists of two modules. The first is a down-sampling layer containing 6 convolutional layers, which can be divided into 3 convolutional layer modules; each module consists of a 1 × 1 convolution kernel and a 3 × 3 convolution kernel, and each convolutional layer uses the Mish activation function. The second is a feature fusion layer comprising 3 feature splicing structures and a convolution layer: the first splicing structure fuses the features extracted by the first and second convolutional layers; the second fuses the features extracted by the second and fourth convolutional layers; the third fuses the features extracted by the first, second, fourth, and sixth convolutional layers; finally, the spliced features from the third splice pass through a convolution layer, and the result serves as the K parameter module.
A clear fog-free image is recovered from the network-estimated K parameter module according to the following restoration formula:
J(x)=ReLU[K(x)I(x)-K(x)+1]
where J(x) represents the clear fog-free image, K(x) the K parameter module estimated by the network, and I(x) the input foggy image; the clear fog-free image is obtained through the ReLU activation function.
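A minimal PyTorch sketch of this structure follows. The (1 × 1, 3 × 3) module pattern, the Mish activations, the three splice points, and the restoration formula are taken from the description; the 3-channel layer width and the exact wiring from each splice into the next convolution module are not specified in the text and are assumptions here.

import torch
import torch.nn as nn

class SFDNet(nn.Module):
    def __init__(self, ch=3):  # layer width is an assumption
        super().__init__()
        def conv(in_ch, k):    # one convolutional layer with a Mish activation
            return nn.Sequential(nn.Conv2d(in_ch, ch, k, padding=k // 2), nn.Mish())
        # Down-sampling layer: 3 modules, each a 1x1 conv followed by a 3x3 conv.
        self.conv1, self.conv2 = conv(3, 1), conv(ch, 3)
        self.conv3, self.conv4 = conv(2 * ch, 1), conv(ch, 3)
        self.conv5, self.conv6 = conv(2 * ch, 1), conv(ch, 3)
        self.reduce = nn.Conv2d(4 * ch, 3, 1)  # feature dimension reduction -> K(x)

    def forward(self, I):
        c1 = self.conv1(I)
        c2 = self.conv2(c1)
        c3 = self.conv3(torch.cat([c1, c2], 1))          # splice 1: conv1 + conv2
        c4 = self.conv4(c3)
        c5 = self.conv5(torch.cat([c2, c4], 1))          # splice 2: conv2 + conv4
        c6 = self.conv6(c5)
        K = self.reduce(torch.cat([c1, c2, c4, c6], 1))  # splice 3: conv1/2/4/6
        return torch.relu(K * I - K + 1)  # J(x) = ReLU[K(x)I(x) - K(x) + 1]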
The calibration and size adjustment of the target detection data set specifically comprise the following steps (a sketch follows the list):
Step 3.1: select 9000 images containing far and small targets, and calibrate the far and small targets with the labelImg software;
Step 3.2: divide the calibrated images into a training set of 8100 images and a test set of 900 images;
Step 3.3: resize both training set and test set images to 608 × 608.
The Dense-YOLOv4 network proposed by the invention adds a dense connection block to the Neck layer of the YOLOv4 network at the 76 × 76 scale, namely: five convolutional layers and two residual modules form a dense connection block; the five convolutional layers consist of three 128-dimensional 1 × 1 convolution kernels and two 256-dimensional 3 × 3 convolution kernels; the first residual module fuses the input features of the dense connection block with the features obtained by the second convolutional layer, and the second residual module fuses the input features of the dense connection block with the features obtained by the second and fourth convolutional layers; after the second residual module, the extracted features are transformed with a 128-dimensional 1 × 1 convolution kernel so that feature dimensions match in subsequent layers.
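The following PyTorch sketch shows one way to realize this block. The text calls the two fusion points "residual modules" while FIG. 4 shows Concat operations, so channel concatenation is assumed here; the 128-channel input at the 76 × 76 scale follows the standard YOLOv4 Neck layout and is likewise an assumption.

import torch
import torch.nn as nn

def conv_bn_act(in_ch, out_ch, k):
    # Conv + BN + Leaky ReLU, as labeled in FIG. 4.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1))

class DenseConnectionBlock(nn.Module):
    # Five conv layers: three 128-dim 1x1 (conv1, conv3, conv5) and
    # two 256-dim 3x3 (conv2, conv4), with two fusion points.
    def __init__(self, ch=128):
        super().__init__()
        self.conv1 = conv_bn_act(ch, ch, 1)
        self.conv2 = conv_bn_act(ch, 2 * ch, 3)
        self.conv3 = conv_bn_act(3 * ch, ch, 1)   # after fusion 1: 128 + 256
        self.conv4 = conv_bn_act(ch, 2 * ch, 3)
        self.conv5 = conv_bn_act(5 * ch, ch, 1)   # after fusion 2: 128 + 256 + 256

    def forward(self, x):
        c2 = self.conv2(self.conv1(x))
        c3 = self.conv3(torch.cat([x, c2], 1))        # fusion 1: input + conv2
        c4 = self.conv4(c3)
        return self.conv5(torch.cat([x, c2, c4], 1))  # fusion 2: input + conv2 + conv4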
The Dense-YOLOv4 network proposed by the invention takes the AP value (Average Precision) and the F1 score as evaluation indices of the detection model; compared with the network without the dense connection block structure, the AP value improves by 0.1% and the F1 score by 2%.
AP definition:
AP = ∫₀¹ precision(recall) d(recall)
F1 score definition:
F1 = (2 × precision × recall) / (precision + recall)
where precision denotes the detection precision and recall denotes the recall rate.
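As a sketch, the two indices can be computed as follows; trapezoidal integration of the precision-recall curve is one common convention for AP, which the text does not pin down.

import numpy as np

def average_precision(precisions, recalls):
    # AP = integral over [0, 1] of precision(recall) d(recall).
    order = np.argsort(recalls)
    return float(np.trapz(np.asarray(precisions)[order],
                          np.asarray(recalls)[order]))

def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2.0 * precision * recall / (precision + recall)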

Claims (10)

1. A real-time detection method for far and small targets under sea fog meteorological conditions, characterized by comprising the following steps:
constructing a many-to-one data set of foggy images and fog-free images;
constructing a real-time defogging network model according to the data set and the sea fog defogging task, and training the real-time defogging network model with the data set;
collecting various images of far and small targets at sea, establishing a far small target detection data set, and preprocessing the images in the far small target detection data set;
training the real-time detection model Dense-YOLOv4 with the preprocessed far small target detection data set to obtain a trained detection model;
acquiring an original image under sea fog meteorological conditions, defogging it with the trained real-time defogging network model, and inputting the defogged image into the real-time detection model Dense-YOLOv4 for far small target detection, thereby obtaining the detected far and small targets.
2. The method of claim 1, wherein the far small target image is an image that contains a ship target in which the number of target pixels does not exceed 100.
3. The method according to claim 1, wherein constructing the many-to-one data set of foggy and fog-free images comprises:
taking the color images in the NYU data set, together with their corresponding depth maps, as fog-free images; adjusting the global atmospheric light value and the atmospheric scattering coefficient to generate, from each color image and its depth map, foggy images under different global atmospheric light values and atmospheric scattering coefficients; and associating the multiple foggy images with their single fog-free image to form the many-to-one data set.
4. The method for detecting far and small targets under sea fog meteorological conditions according to claim 1, wherein constructing the real-time defogging network model comprises the following steps:
the K-module estimation network performs feature extraction and fusion on the foggy images in the data set to obtain the K parameter module;
the K parameter module is then used to restore the foggy image, realizing reconstruction and restoration of the defogged image.
5. The method of claim 4, wherein the K-module estimation network comprises a down-sampling layer and a feature fusion layer, wherein:
the down-sampling layer is used for extracting the characteristics of the foggy image;
the feature fusion layer is used for fusing image features extracted by different channels in the down-sampling layer.
6. The method as claimed in claim 5, wherein the down-sampling layer comprises 6 convolutional layers divided into 3 convolutional layer modules, each module consisting of a 1 × 1 convolution kernel and a 3 × 3 convolution kernel, and each convolutional layer using the Mish activation function.
7. The method for detecting far and small targets under sea fog meteorological conditions according to claim 5, wherein the feature fusion layer comprises 3 feature splicing structures and a convolution layer: the first feature splicing structure fuses the features extracted by the first and second convolutional layers of the down-sampling layer; the second fuses the features extracted by the second and fourth convolutional layers; the third fuses the features extracted by the first, second, fourth, and sixth convolutional layers; finally, the features obtained from the third splice undergo feature dimension reduction through a convolution layer, and the result is taken as the K parameter module.
8. The method according to claim 4, wherein using the K parameter module to restore the foggy image specifically comprises:
J(x)=ReLU[K(x)I(x)-K(x)+1]
where J(x) represents the fog-free image, K(x) the K parameter module, and I(x) the input foggy image; the fog-free image is obtained through the ReLU activation function.
9. The method according to claim 1, wherein preprocessing the images in the far small target detection data set comprises: calibrating the far small target images in the data set and adjusting their size.
10. The method for real-time detection of far and small targets under sea fog weather conditions according to claim 1, wherein the real-time detection model Dense-YOLOv4 is specifically as follows:
a dense connection block replaces the Neck layer of the YOLOv4 network at the 76 × 76 scale, namely: five convolutional layers and two residual modules form a dense connection block; the five convolutional layers consist of three 128-dimensional 1 × 1 convolution kernels and two 256-dimensional 3 × 3 convolution kernels; the first residual module fuses the input features of the dense connection block with the features extracted by the second convolutional layer, and the second residual module fuses the input features of the dense connection block with the features extracted by the second and fourth convolutional layers; after the second residual module, the extracted features are transformed with a 128-dimensional 1 × 1 convolution kernel so that feature dimensions match in subsequent layers.
CN202110724402.6A 2021-06-29 2021-06-29 Real-time detection method for far small target under sea fog weather condition Active CN113763261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110724402.6A CN113763261B (en) 2021-06-29 2021-06-29 Real-time detection method for far small target under sea fog weather condition

Publications (2)

Publication Number Publication Date
CN113763261A 2021-12-07
CN113763261B CN113763261B (en) 2023-12-26

Family

ID=78787509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110724402.6A Active CN113763261B (en) 2021-06-29 2021-06-29 Real-time detection method for far small target under sea fog weather condition

Country Status (1)

Country Link
CN (1) CN113763261B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109928107A (en) * 2019-04-08 2019-06-25 江西理工大学 A kind of automatic classification system
WO2020246834A1 (en) * 2019-06-04 2020-12-10 주식회사 딥엑스 Method for recognizing object in image
CN111161360A (en) * 2019-12-17 2020-05-15 天津大学 Retinex theory-based image defogging method for end-to-end network
CN111461291A (en) * 2020-03-13 2020-07-28 西安科技大学 Long-distance pipeline inspection method based on YO L Ov3 pruning network and deep learning defogging model
CN112949389A (en) * 2021-01-28 2021-06-11 西北工业大学 Haze image target detection method based on improved target detection network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG FEI; LIU MENGTING; LIU XUEQIN; QIN ZHILIANG; MA BENJUN; ZHENG YI: "Real-time detection of marine vessels under sea fog meteorological conditions based on YOLOv3 deep learning", Marine Sciences (海洋科学), no. 08

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645287A (en) * 2023-05-22 2023-08-25 北京科技大学 Diffusion model-based image deblurring method
CN116645287B (en) * 2023-05-22 2024-03-29 北京科技大学 Diffusion model-based image deblurring method

Also Published As

Publication number Publication date
CN113763261B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN112184577B (en) Single image defogging method based on multiscale self-attention generation countermeasure network
CN108960261B (en) Salient object detection method based on attention mechanism
CN111524135A (en) Image enhancement-based method and system for detecting defects of small hardware fittings of power transmission line
CN111768388A (en) Product surface defect detection method and system based on positive sample reference
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN112465759A (en) Convolutional neural network-based aeroengine blade defect detection method
CN110910456B (en) Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching
CN113066025B (en) Image defogging method based on incremental learning and feature and attention transfer
CN110827218A (en) Airborne image defogging method based on image HSV transmissivity weighted correction
CN112258537B (en) Method for monitoring dark vision image edge detection based on convolutional neural network
CN112767267A (en) Image defogging method based on simulation polarization fog-carrying scene data set
CN111104532B (en) RGBD image joint recovery method based on double-flow network
CN117392496A (en) Target detection method and system based on infrared and visible light image fusion
CN112288031A (en) Traffic signal lamp detection method and device, electronic equipment and storage medium
CN116596792A (en) Inland river foggy scene recovery method, system and equipment for intelligent ship
CN112884795A (en) Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
Babu et al. An efficient image dehazing using Googlenet based convolution neural networks
CN113763261B (en) Real-time detection method for far small target under sea fog weather condition
Pazhani et al. A novel haze removal computing architecture for remote sensing images using multi-scale Retinex technique
CN116433822B (en) Neural radiation field training method, device, equipment and medium
CN110827375B (en) Infrared image true color coloring method and system based on low-light-level image
CN116704309A (en) Image defogging identification method and system based on improved generation of countermeasure network
CN115578256A (en) Unmanned aerial vehicle aerial insulator infrared video panorama splicing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant