CN115601657A - Method for detecting and identifying ship target in severe weather

Method for detecting and identifying ship target in severe weather

Info

Publication number
CN115601657A
CN115601657A (application CN202211270809.7A)
Authority
CN
China
Prior art keywords
model
defogging
branch
detection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211270809.7A
Other languages
Chinese (zh)
Inventor
董立泉
易伟超
刘明
蔡博雍
赵跃进
惠梅
孔令琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202211270809.7A
Publication of CN115601657A
Legal status: Pending

Classifications

    • G06V 20/13 Satellite images (Scenes; scene-specific elements; terrestrial scenes)
    • G06N 3/08 Learning methods (computing arrangements based on biological models; neural networks)
    • G06V 10/30 Noise filtering (image preprocessing)
    • G06V 10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V 2201/07 Target detection (indexing scheme relating to image or video recognition or understanding)


Abstract

The invention relates to the fields of image defogging and target detection, and provides a method for detecting and identifying ship targets in severe weather, addressing the difficulty and low accuracy of ship detection under such conditions. The method comprises two stages: first, the hazy image is converted into a clear, haze-free image by a defogging model; second, an improved detection network performs the target detection task on the processed clear input, identifying and locating the ship targets of interest. The defogging model consists of a CNN branch, a Transformer branch, and a fusion branch: the CNN branch is responsible for local feature extraction, the Transformer branch captures long-range global feature dependencies, and the fusion branch performs adaptive feature fusion. The detection model is based on the original YOLOv5 framework, with a multi-branch convolution structure replacing the original feature extraction module to improve detection performance. The method alleviates the low detection accuracy of ship targets in severe weather and has high practical value.

Description

Method for detecting and identifying ship target in severe weather
Technical Field
The invention relates to the technical fields of intelligent image processing, deep learning, and target detection, and provides a method for detecting and identifying ship targets in severe weather by combining an image defogging model with a target detection model.
Background
Target detection is an important branch of computer vision and has gradually become a key research topic at universities and scientific research institutions. In recent years, detection algorithms based on single-stage and two-stage strategies have flourished, achieved excellent detection results, and been applied effectively in military and civilian fields such as border security, crowd analysis, and traffic flow estimation. Existing detection techniques, however, mainly take a clear, clean image as input; that is, the image fed to the detection network is assumed to be free of noise interference, guaranteeing high image quality. In practice, under complex and severe weather conditions, the remote sensing images acquired for ship detection and identification are easily occluded and degraded by cloud, haze, and other atmospheric obscurants, which greatly reduces image quality and causes false detections and missed detections in subsequent tasks. How to improve ship detection in severe weather and strengthen the robustness of detection algorithms against such conditions has therefore become a key problem in urgent need of a solution.
Disclosure of Invention
To meet the needs of target detection algorithms in foggy weather, to address the low detection rate and high miss rate caused by cloud and fog occlusion, and to achieve better ship target detection performance, the invention provides a method for detecting and identifying ship targets in severe weather.
The method for detecting and identifying ship targets in severe weather comprises the following steps: in the first stage, the acquired hazy remote sensing image is clarified, that is, converted into a clear haze-free image by a defogging model; in the second stage, the resulting clear image serves as the image input of the detection network, that is, an improved detection network performs the target detection task on the processed clear image, identifying and locating the ship targets of interest.
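Conceptually, the two-stage cascade can be sketched as follows; this is a minimal illustration in which the defogging network and the detector are placeholder modules, not the exact implementations of the invention.

```python
# Minimal sketch of the two-stage cascade: stage 1 defogs, stage 2 detects.
# dehaze_net and detector are placeholder nn.Modules (assumptions).
import torch
import torch.nn as nn

class TwoStagePipeline(nn.Module):
    def __init__(self, dehaze_net: nn.Module, detector: nn.Module):
        super().__init__()
        self.dehaze_net = dehaze_net   # stage 1: hazy image -> clear image
        self.detector = detector       # stage 2: clear image -> ship classes + boxes

    def forward(self, hazy: torch.Tensor):
        clear = self.dehaze_net(hazy)      # defogged image, same spatial size
        detections = self.detector(clear)  # class and localization predictions
        return clear, detections           # clear output also feeds the defogging loss
```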
The first stage comprises: constructing a hazy-clear data set based on the atmospheric scattering model, establishing a defogging model that combines CNN and Transformer structures, and designing a module that fuses global and local features; together these realize the defogging of hazy images.
The second stage comprises: collecting and labeling remote sensing ship data, performing the ship target detection task with an improved YOLOv5, and training, testing, and deploying the detection model.
In addition, the defogging model of the first stage and the detection model of the second stage are trained simultaneously in a cascaded, end-to-end manner; that is, their loss functions are added together to optimize the whole model. Specifically, the total model loss is

$$L_{total} = L_h + L_d \qquad (1.1)$$

where $L_{total}$ denotes the total model loss, $L_h$ the loss function of the defogging model, and $L_d$ the loss function of the target detection model.
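In code, the joint objective of Eq. (1.1) reduces to a single scalar optimized in one backward pass. The sketch below assumes an L1 dehazing term and a generic detection loss; the individual loss terms are not spelled out at this point in the text.

```python
# Hedged sketch of Eq. (1.1): L_total = L_h + L_d, optimized end to end.
import torch.nn.functional as F

def total_loss(clear_pred, clear_gt, det_pred, det_target, det_loss_fn):
    l_h = F.l1_loss(clear_pred, clear_gt)    # defogging loss L_h (assumed L1)
    l_d = det_loss_fn(det_pred, det_target)  # detection loss L_d (e.g. YOLO-style)
    return l_h + l_d                         # one scalar -> one backward pass
```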
The technical scheme adopted by the invention, a method for detecting and identifying ship targets in severe weather, is implemented according to the following steps.
First stage: the defogging model described above.
step 1: and constructing a fog-clear data set based on the atmospheric scattering model. In the existing defogging algorithm research, an atmospheric scattering model shown as the following is often adopted,
i (x) = J (x) t (x) + a (1-t (x)) (1.2) wherein I (x) represents a fogging image, J (x) represents a clear fogging-free image, t (x) represents a transmission matrix, and a represents an atmospheric scattering coefficient. From the parameter analysis it can be known that: when a clear fog-free image is known, the corresponding fog-free image can be generated in a simulation manner by setting the corresponding atmospheric scattering coefficient and the corresponding projection coefficient by utilizing the depth map of the clear fog-free image. Therefore, the acquired remote sensing image is subjected to fogging processing in the formula, and a fogging-clear remote sensing image data set for training and testing is constructed.
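The synthesis can be illustrated as follows. The sketch assumes a per-pixel depth map is available and uses the common choice t(x) = exp(-beta * d(x)) for the transmission, which the text implies but does not state; beta and A are free parameters.

```python
# Illustrative haze synthesis from Eq. (1.2); beta and A are set per image.
import numpy as np

def add_haze(clear: np.ndarray, depth: np.ndarray, beta: float, A: float) -> np.ndarray:
    """clear: HxWx3 float image in [0,1]; depth: HxW map normalized to [0,1]."""
    t = np.exp(-beta * depth)[..., None]  # transmission t(x) = exp(-beta * d(x))
    hazy = clear * t + A * (1.0 - t)      # I(x) = J(x) t(x) + A (1 - t(x))
    return np.clip(hazy, 0.0, 1.0)

# example: hazy = add_haze(img, depth_map, beta=0.8, A=0.9)  # dense haze, bright airlight
```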
Step 2: and establishing a defogging model combining the CNN and the Transformer structure. The method benefits from the local feature perception capability of the convolutional neural network CNN, so that the method is effectively applied to various image recovery tasks. However, the CNN structure lacks the dependency of capturing long distance, and the existing method mainly alleviates the problem by increasing the number of layers of the network, but the simple and naive idea easily causes network redundancy and loses more local detail information. The problem is solved well by the proposal of Transformer, and the characteristic global dependency relationship is well described by a self-attention mechanism. Therefore, the proposed defogging model is composed of CNN branches and Transformer branches, and the advantages of the respective structures are fully utilized to obtain a more excellent defogging effect. In addition, the fusion branch is utilized to perform superposition fusion on the obtained features so as to realize stronger feature representation capability.
Step 3: design the global and local feature fusion module. In the fusion process, the effective features extracted by the CNN and Transformer structures are used simultaneously, making the fused features more compact and the network more expressive. The specific operations include a channel attention module, a spatial attention module, conventional convolution operations, and residual connections.
Second stage: the target detection model described above.
step 1: and (4) acquiring and labeling remote sensing ship data. Firstly, a remote sensing image containing a ship target is intercepted on a satellite map, and original image data is obtained. And secondly, labeling the ship target in the acquired image data by using a labeling tool to obtain corresponding image characteristics. Finally, the image and the marked target position information are in one-to-one correspondence, and a remote sensing ship target data set is established;
step 2: and carrying out a ship target detection task by using the improved YOLOv 5. In order to meet the requirement of the algorithm for subsequently deploying airborne hardware, the simplified design adjustment is carried out on the original YOLOv5 characteristic backbone extraction network. And replacing the original characteristic extraction module by using a multi-branch convolution structure based on the topological structure paradigm. Meanwhile, operators of the whole backbone network are adjusted, so that efficient reasoning on hardware is realized, and meanwhile, efficient multi-scale feature fusion capability is kept;
and step 3: and (4) training and testing deployment of the detection model. And training the detection model by using the labeled ship data set, and obtaining corresponding training weight after completing model training. To speed up the inference speed of the model, a TensorRT configuration operation is performed. The TensorRT can be regarded as a deep learning framework only with forward propagation reasoning, the framework can analyze network models of Caffe and TensorFlow, then one-to-one mapping is carried out on the network models and corresponding layers in the TensorRT, and models of other frameworks are uniformly converted into the TensorRT, so that deployment acceleration is carried out.
The invention is also characterized in that:
and the defogging model and the ship target detection model adopt a joint training mode. Firstly, the weight of the defogging subnetwork is initialized by using the Gaussian random variable. Secondly, the target detection subnet does not perform random initialization weight operation, but performs class down-sampling fine adjustment on a model obtained by pre-training on the COCO data set to realize weight initialization. And finally, the integral model carries out end-to-end training on the constructed data set, and simultaneously learns the aims of image defogging enhancement, ship target classification and positioning.
In summary, the main contributions of the present invention are:
(1) The invention provides a method for detecting and identifying ship targets in severe weather in which the defogging model and the detection model are connected in cascade: the hazy image is the input of the defogging model, the clear output of the defogging model is the input of the next-stage detection model, and the output gives the class and localization results of the ship targets.
(2) The invention provides a method for detecting and identifying ship targets in severe weather whose defogging model combines CNN and Transformer structures and comprises three branches: a CNN branch, a Transformer branch, and a fusion branch. The structural characteristics of CNN and Transformer are fully fused and exploited to achieve a better defogging effect.
(3) The invention provides a method for detecting and identifying ship targets in severe weather whose detection model is lightened with the topological structure paradigm, reducing network memory; in addition, the trained model is deployed with TensorRT, accelerating its inference on hardware.
Drawings
FIG. 1 is a schematic structural diagram of a method for detecting and identifying a ship target in severe weather according to the present invention;
FIG. 2 is a schematic view of the overall structure of a defogging method model according to the present invention;
FIG. 3 is a schematic diagram of a feature fusion architecture according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments. It should be noted that the described embodiments are only intended to facilitate the understanding of the present invention, and do not impose any limitation thereon. The accompanying drawings, which are in a simplified form and are not to scale, are included for purposes of illustrating embodiments of the invention in a clear and concise manner and are incorporated in and constitute a part of the specification. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a method for detecting and identifying ship targets in severe weather. The overall model, shown in Figure 1, comprises two stages: in the first stage, the acquired hazy remote sensing image is clarified, that is, converted into a clear haze-free image by the defogging model; in the second stage, the clear image produced by the defogging model serves as the image input of the detection network, that is, an improved detection network performs the target detection task on the processed clear input, identifying and locating the ship targets of interest. Through these technical steps, ship targets are detected and located intelligently in severe weather.
The defogging stage restores the hazy remote sensing image to a clear haze-free image.
The physical mechanism of the atmospheric scattering model is analyzed, and hazy-clear image pairs for training and testing are generated by simulation. This mainly involves two steps, original remote sensing image acquisition and hazy image construction. Original image acquisition: remote sensing images of interest are cropped from a satellite remote sensing map, taking care that they contain ship targets as far as possible, which facilitates the later construction of the ship target detection data set. Hazy image construction: a depth map is computed for each acquired remote sensing image, the corresponding atmospheric light and transmission coefficients are set, and the corresponding hazy image is obtained by simulation based on the atmospheric scattering model. To better mimic real haze scenes, the parameter settings cover the interval from 0 to 1 as fully as possible, matching remote sensing images with different haze densities.
A defogging model is constructed to defog the images. As shown in Figure 2, the proposed model comprises three branches: a CNN branch, a Transformer branch, and a fusion branch.
Specifically, in the CNN branch, features are accumulated by stacking N conventional residual blocks, each preceded by a pooling layer with a downsampling factor of 2 that reduces the feature size. The features extracted by the CNN mainly describe local properties of the image, such as contours and boundaries, and cannot perceive its global information well; the Transformer branch is therefore needed to strongly complement these features.
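A minimal rendering of this branch, with the channel width and block count as illustrative assumptions:

```python
# Sketch of the CNN branch: N residual blocks, each preceded by a stride-2 pool.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))  # local features: contours, boundaries

class CNNBranch(nn.Module):
    def __init__(self, channels: int = 64, n_blocks: int = 3):
        super().__init__()
        stages = [nn.Conv2d(3, channels, 3, padding=1)]
        for _ in range(n_blocks):
            stages += [nn.AvgPool2d(2), ResidualBlock(channels)]  # halve size, then residual
        self.stages = nn.Sequential(*stages)

    def forward(self, x):
        return self.stages(x)
```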
Specifically, the Transformer branch follows the classical encoder-decoder structure. First, in the encoder, the input image $x \in \mathbb{R}^{H \times W \times 3}$ is divided into $N$ image patches of size $S \times S$, with $S$ set to 16, changing the image shape accordingly ($N = HW/S^2$). The patches are then flattened and fed into an embedding layer, which outputs an embedding vector $E \in \mathbb{R}^{N \times D}$, where $D$ denotes the embedding dimension. To preserve the spatial prior of the image, a learnable position-encoding vector of the same size as the image embedding is added, and their sum is the common input to the encoder. The encoder is composed of multi-head attention modules and multi-layer perceptrons, where the self-attention operation at the core of the Transformer can be expressed as

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

In addition to the operations above, an instance normalization operation is also added. Finally, the encoder outputs the features $F_e \in \mathbb{R}^{N \times D}$. In the decoder, a stage-wise upsampling scheme is followed, that is, the feature size is increased gradually: the features are first reshaped back to the spatial size $\tfrac{H}{S} \times \tfrac{W}{S} \times D$, the spatial resolution is then improved step by step with upsampling convolutional layers, and finally the obtained features are passed, at the required size, to the fusion module of the next stage.
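A sketch of the encoder just described; the patch size matches the text (S = 16), while the depth, width, and PyTorch's built-in encoder layer are stand-in assumptions.

```python
# Sketch of the Transformer branch encoder: patchify, add learnable positions,
# run multi-head self-attention blocks, return a spatial feature map.
import torch
import torch.nn as nn

class TransformerBranchEncoder(nn.Module):
    def __init__(self, img_size: int = 256, patch: int = 16, dim: int = 256,
                 depth: int = 4, heads: int = 8):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))          # learnable positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                   # x: (B, 3, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, D)
        feats = self.encoder(tokens + self.pos)             # long-range global features
        b, n, d = feats.shape
        side = int(n ** 0.5)                                # assumes a square patch grid
        return feats.transpose(1, 2).reshape(b, d, side, side)  # (B, D, H/S, W/S)
```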
Specifically, the fusion branch proceeds as shown in Figure 3. The two feature streams obtained from the CNN and Transformer branches are denoted $F_c \in \mathbb{R}^{H \times W \times C}$ and $F_t \in \mathbb{R}^{H \times W \times C}$, respectively, where $H$, $W$, and $C$ are the height, width, and number of channels of the features. First, since $F_c$ contains only local feature information, a spatial attention module is used to strengthen its pixel-level feature dependencies; since $F_t$ contains more global dependency information, a channel attention module is used to strengthen the interaction of its features across channels. Second, the two feature streams are multiplied at the pixel level to explore the interaction between features, and a residual module performs feature enhancement. Finally, the three feature streams are concatenated along the channel dimension to obtain the final fusion result.
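The steps above might be rendered as follows; the exact attention designs are assumptions in the spirit of common CBAM-style modules.

```python
# Hedged sketch of the fusion branch: spatial attention on F_c, channel
# attention on F_t, pixel-level interaction with residual refinement, concat.
import torch
import torch.nn as nn

class FusionBranch(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // 4, c, 1), nn.Sigmoid())
        self.residual = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1))
        self.proj = nn.Conv2d(3 * c, c, 1)  # merge the three concatenated streams

    def forward(self, f_c, f_t):             # both (B, C, H, W)
        stats = torch.cat([f_c.mean(1, keepdim=True),
                           f_c.amax(1, keepdim=True)], dim=1)
        f_c = f_c * self.spatial(stats)       # pixel-level dependence, local stream
        f_t = f_t * self.channel(f_t)         # cross-channel interaction, global stream
        inter = f_c * f_t                     # pixel-level multiplicative interaction
        inter = inter + self.residual(inter)  # residual feature enhancement
        return self.proj(torch.cat([f_c, f_t, inter], dim=1))
```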
A detection model is constructed to identify and locate ship targets. This involves three steps: collecting and labeling the remote sensing ship data; performing the ship target detection task with the improved YOLOv5; and training, testing, and deploying the detection model.
In particular, collecting and labeling the remote sensing ship data relies mainly on manual work. Remote sensing images containing ship targets are cropped from the open-source Google satellite map and all saved at the same resolution. The ship targets in the obtained remote sensing images are then annotated with the labeling software labelme, producing XML files with position and size information.
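Converting such annotations into the normalized format the detector trains on could look like this; the Pascal-VOC-style tag layout is an assumption based on the XML files the text describes.

```python
# Illustrative conversion of a VOC-style XML annotation into YOLO's
# normalized "class cx cy w h" text format.
import xml.etree.ElementTree as ET

def voc_xml_to_yolo(xml_path: str, class_names: list) -> list:
    root = ET.parse(xml_path).getroot()
    w_img = float(root.find("size/width").text)
    h_img = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = class_names.index(obj.find("name").text)  # e.g. class_names = ["ship"]
        box = obj.find("bndbox")
        x1, y1 = float(box.find("xmin").text), float(box.find("ymin").text)
        x2, y2 = float(box.find("xmax").text), float(box.find("ymax").text)
        cx, cy = (x1 + x2) / 2 / w_img, (y1 + y2) / 2 / h_img  # normalized center
        w, h = (x2 - x1) / w_img, (y2 - y1) / h_img            # normalized size
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines
```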
Specifically, in the target detection stage, the method performs the ship target detection task with the improved YOLOv5 and simplifies the design of the detection head. The head of the original YOLOv5 shares fused classification and regression branches; although this improves detection accuracy, it increases network latency to a certain extent and slows inference. The method therefore weighs the representational power of the relevant operators against their computational cost on hardware and adopts a Hybrid Channels strategy to redesign a more efficient decoupled head, reducing latency while maintaining accuracy, relieving the extra latency that conventional convolutions bring to a standard decoupled head, and raising the running speed of the network.
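The redesign might be sketched as below: one cheap shared stem per scale followed by lightweight separate classification and regression convolutions. This is an assumption about the Hybrid Channels layout, which the text does not fully specify.

```python
# Hedged sketch of an efficiency-oriented decoupled detection head.
import torch
import torch.nn as nn

class EfficientDecoupledHead(nn.Module):
    def __init__(self, in_ch: int, num_classes: int, num_anchors: int = 1):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, in_ch, 1), nn.SiLU(inplace=True))
        self.cls = nn.Conv2d(in_ch, num_anchors * num_classes, 1)  # class scores
        self.reg = nn.Conv2d(in_ch, num_anchors * (4 + 1), 1)      # box + objectness

    def forward(self, x):
        x = self.stem(x)                 # single shared conv keeps latency low
        return self.cls(x), self.reg(x)  # decoupled classification / regression
```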
Specifically, for the training and test deployment of the ship target detection model, the proposed detector is ported to TensorRT. Deployment is divided into two phases, a model preprocessing phase and a model inference phase, with the following general flow: 1) export the detection network definition and the trained weights; 2) re-parse the network definition and the weights; 3) build an optimal execution plan for the graphics card operators in use; 4) serialize the execution plan and store it on the graphics card; 5) deserialize the stored plan for target detection; 6) run the forward inference of the ship detection model.
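One common route through steps 1) to 4) is to export the trained model to ONNX and build a serialized engine with NVIDIA's trtexec tool; the paths, input shape, and flags below are placeholders.

```python
# Hedged sketch of the export step feeding TensorRT deployment.
import torch

def export_onnx(model: torch.nn.Module, onnx_path: str = "ship_det.onnx"):
    model.eval()
    dummy = torch.zeros(1, 3, 640, 640)  # assumed detector input resolution
    torch.onnx.export(model, dummy, onnx_path,
                      input_names=["images"], output_names=["preds"],
                      opset_version=12)

# Engine build + serialization (steps 3-4), e.g.:
#   trtexec --onnx=ship_det.onnx --saveEngine=ship_det.engine --fp16
# At run time the engine is deserialized and executed for forward inference
# (steps 5-6).
```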
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A method for detecting and identifying ship targets in severe weather, characterized by comprising the following steps:
step 1: cascading the defogging model and the detection model, where the hazy image is the input of the defogging model, the clear output of the defogging model is the input of the next-stage detection model, and the output gives the class and localization results of the ship targets;
step 2: the defogging model combines CNN and Transformer structures and comprises three branches: a CNN branch, a Transformer branch, and a fusion branch; the structural characteristics of CNN and Transformer are fully fused and exploited to achieve a better defogging effect;
step 3: the detection model is lightened with the topological structure paradigm, reducing network memory; in addition, the trained model is deployed with TensorRT, accelerating its inference on hardware.
2. The cascading of the defogging model and the detection model according to step 1, characterized in that: in the joint training mode, the weights of the defogging subnetwork are initialized with Gaussian random variables; the target detection subnetwork is not randomly initialized, but is fine-tuned, with the classes subsampled, from a model pre-trained on the COCO data set to initialize its weights; the whole model is trained end to end on the constructed data set, learning image defogging and ship target classification and localization simultaneously.
3. The defogging model construction according to step 2, characterized in that: in the CNN branch, N conventional residual modules are stacked to accumulate features, mainly characterizing local feature information of the image such as contours and boundaries.
4. The defogging model construction according to step 2, characterized in that: in the Transformer branch, built on an encoder-decoder structure, operations such as multi-head self-attention modules, multi-layer perceptrons, and upsampling convolutional layers realize a long-range global feature representation that complements the features of the CNN branch for better defogging performance.
5. The defogging model construction according to step 2, characterized in that: in the feature fusion branch, based on a feature-adaptive fusion strategy, a channel attention module, a spatial attention module, conventional convolutions, and residual connections are employed, using the effective features extracted by the CNN and Transformer structures simultaneously.
6. The ship target detection model construction according to step 3, characterized in that: a multi-branch convolution structure based on the topological structure paradigm replaces the original feature extraction module; at the same time, the operators of the whole backbone network are adjusted while retaining the efficient multi-scale feature fusion capability.
CN202211270809.7A 2022-10-17 2022-10-17 Method for detecting and identifying ship target in severe weather Pending CN115601657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211270809.7A CN115601657A (en) 2022-10-17 2022-10-17 Method for detecting and identifying ship target in severe weather

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211270809.7A CN115601657A (en) 2022-10-17 2022-10-17 Method for detecting and identifying ship target in severe weather

Publications (1)

Publication Number Publication Date
CN115601657A true CN115601657A (en) 2023-01-13

Family

ID=84846270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211270809.7A Pending CN115601657A (en) 2022-10-17 2022-10-17 Method for detecting and identifying ship target in severe weather

Country Status (1)

Country Link
CN (1) CN115601657A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557477A (en) * 2024-01-09 2024-02-13 浙江华是科技股份有限公司 Defogging recovery method and system for ship
CN117557477B (en) * 2024-01-09 2024-04-05 浙江华是科技股份有限公司 Defogging recovery method and system for ship
CN117789041A (en) * 2024-02-28 2024-03-29 浙江华是科技股份有限公司 Ship defogging method and system based on atmospheric scattering priori diffusion model
CN117789041B (en) * 2024-02-28 2024-05-10 浙江华是科技股份有限公司 Ship defogging method and system based on atmospheric scattering priori diffusion model
CN117952865A (en) * 2024-03-25 2024-04-30 中国海洋大学 Single image defogging method based on cyclic generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination