CN116091331A - Haze removing method and device for vehicle-mounted video of high-speed railway - Google Patents


Info

Publication number
CN116091331A
CN116091331A (application CN202211153204.XA)
Authority
CN
China
Prior art keywords
haze
image
loss
unpaired
images
Prior art date
Legal status
Pending
Application number
CN202211153204.XA
Other languages
Chinese (zh)
Inventor
刘俊博
裴艳婷
王凡
王胜春
黄雅平
顾子晨
王昊
戴鹏
王宁
杜馨瑜
Current Assignee
China Academy of Railway Sciences Corp Ltd CARS
Infrastructure Inspection Institute of CARS
Beijing IMAP Technology Co Ltd
Original Assignee
China Academy of Railway Sciences Corp Ltd CARS
Infrastructure Inspection Institute of CARS
Beijing IMAP Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Academy of Railway Sciences Corp Ltd CARS, Infrastructure Inspection Institute of CARS, Beijing IMAP Technology Co Ltd
Priority to CN202211153204.XA
Publication of CN116091331A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention provides a method and a device for removing haze from vehicle-mounted video of a high-speed railway. The method comprises the following steps: acquiring haze images and unpaired clear images; constructing a haze removal network from a cycle generative adversarial network; performing unpaired training of the haze removal network on the haze images and the unpaired clear images, and determining haze-removed images; performing supervised training of the unpaired-trained haze removal network using the haze-removed images, and determining a trained haze removal network; and removing haze-containing images from the high-speed railway vehicle-mounted video with the trained haze removal network, and determining the haze-free vehicle-mounted video. By performing unpaired training and supervised training of the haze removal network, the method improves the haze removal effect on real, as-captured haze images.

Description

Haze removing method and device for vehicle-mounted video of high-speed railway
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a device for removing haze from vehicle-mounted video of a high-speed railway.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Vehicle-mounted video of a high-speed railway is often used for operating-environment safety inspection, but picture quality is easily degraded by weather along the line, such as haze and sandstorms, which is very unfavorable for automated inspection. Haze in the video frames therefore needs to be removed to improve image quality. Existing image haze removal methods mainly target synthesized haze images, which have corresponding clear images and can therefore be trained in a supervised manner.
However, a model trained on synthesized haze images performs poorly when applied to real haze images, and a real haze image usually has no corresponding clear image, so supervised training cannot be performed.
Therefore, how to provide a new solution to the above problem is a technical problem to be solved in the art.
Disclosure of Invention
The embodiment of the invention provides a method for removing haze from vehicle-mounted video of a high-speed railway, which improves the haze removal effect on real, as-captured haze images by performing unpaired training and supervised training of a haze removal network, and which comprises the following steps:
acquiring haze images and unpaired clear images;
constructing a haze removal network from a cycle generative adversarial network;
performing unpaired training of the haze removal network on the haze images and the unpaired clear images, and determining haze-removed images;
performing supervised training of the unpaired-trained haze removal network using the haze-removed images, and determining a trained haze removal network;
and removing haze-containing images from the high-speed railway vehicle-mounted video with the trained haze removal network, and determining the haze-free vehicle-mounted video.
The embodiment of the invention also provides a device for removing haze from vehicle-mounted video of a high-speed railway, which comprises:
an image acquisition module for acquiring haze images and unpaired clear images;
a haze removal network construction module for constructing a haze removal network from a cycle generative adversarial network;
an unpaired training module for performing unpaired training of the haze removal network on the haze images and the unpaired clear images, and determining haze-removed images;
a supervised training module for performing supervised training of the unpaired-trained haze removal network using the haze-removed images, and determining a trained haze removal network;
and a video haze removal module for removing haze-containing images from the high-speed railway vehicle-mounted video with the trained haze removal network, and determining the haze-free vehicle-mounted video.
The embodiment of the invention also provides computer equipment comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, wherein the processor implements the above haze removal method for vehicle-mounted video of a high-speed railway when executing the computer program.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above haze removal method for vehicle-mounted video of a high-speed railway.
The embodiment of the invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the above haze removal method for vehicle-mounted video of a high-speed railway.
The embodiment of the invention provides a method and a device for removing haze from vehicle-mounted video of a high-speed railway, comprising: acquiring haze images and unpaired clear images; constructing a haze removal network from a cycle generative adversarial network; performing unpaired training of the haze removal network on the haze images and the unpaired clear images, and determining haze-removed images; performing supervised training of the unpaired-trained haze removal network using the haze-removed images, and determining a trained haze removal network; and removing haze-containing images from the high-speed railway vehicle-mounted video with the trained haze removal network, and determining the haze-free vehicle-mounted video. In the unpaired stage, the cycle generative adversarial network generates haze-removed images, which serve as the corresponding clear images of the haze images, and the features of the haze-removed images are made more consistent with those of the unpaired clear images, improving dehazing performance. In the supervised stage, the haze images paired with the haze-removed images generated by unpaired training are used for supervised training, and intrinsic properties of clear images further improve dehazing performance. By performing unpaired training and supervised training of the haze removal network, the haze removal effect on real, as-captured haze images is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. In the drawings:
fig. 1 is a schematic diagram of a method for removing haze from a vehicle-mounted video of a high-speed railway according to an embodiment of the invention.
Fig. 2 is a flowchart of a method for removing haze from a vehicle-mounted video of a high-speed railway according to an embodiment of the invention.
Fig. 3 is a schematic diagram of an unpaired training process of a high-speed railway vehicle-mounted video haze removal method according to an embodiment of the invention.
Fig. 4 is a schematic diagram of a supervised training process of a vehicle-mounted video haze removal method for a high-speed railway according to an embodiment of the invention.
Fig. 5 is a schematic diagram of a computer device for running a method for haze removal of a vehicle-mounted video of a high-speed railway.
Fig. 6 is a schematic diagram of a vehicle-mounted video haze removing device for a high-speed railway according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present invention and their descriptions herein are for the purpose of explaining the present invention, but are not to be construed as limiting the invention.
Fig. 1 is a schematic diagram of a method for removing haze from vehicle-mounted video of a high-speed railway according to an embodiment of the present invention. As shown in Fig. 1, the method performs unpaired training and supervised training of a haze removal network, which improves the haze removal effect on real, as-captured haze images, and includes:
step 101: acquiring haze images and unpaired clear images;
step 102: constructing a haze removal network from a cycle generative adversarial network;
step 103: performing unpaired training of the haze removal network on the haze images and the unpaired clear images, and determining haze-removed images;
step 104: performing supervised training of the unpaired-trained haze removal network using the haze-removed images, and determining a trained haze removal network;
step 105: removing haze-containing images from the high-speed railway vehicle-mounted video with the trained haze removal network, and determining the haze-free vehicle-mounted video.
The embodiment of the invention thus provides a method for removing haze from vehicle-mounted video of a high-speed railway, comprising the steps above. The cycle generative adversarial network generates haze-removed images during unpaired training; these serve as the corresponding clear images of the haze images, and their features are made more consistent with those of the unpaired clear images, improving dehazing performance. Supervised training on the haze images paired with the haze-removed images generated by unpaired training then exploits intrinsic properties of clear images to further improve performance. Performing unpaired training and supervised training of the haze removal network improves the haze removal effect on real, as-captured haze images.
Haze degrades not only the visual quality of images but also subsequent high-level vision tasks such as object detection and image classification. Earlier image dehazing methods relied mainly on the atmospheric scattering model. At present, most image dehazing methods are based on deep learning and achieve good dehazing results, but they mainly target synthesized haze images, which have corresponding clear images and can be trained in a supervised manner. In many practical applications, however, such as vehicle-mounted video of a high-speed railway, it is difficult to acquire large numbers of paired haze and clear images. Dehazing real haze images that lack corresponding clear images is therefore an important and challenging problem.
To alleviate the problem that real haze images lack corresponding clear images and cannot be trained in a supervised manner, Cycle-Dehaze enhances the CycleGAN method with cycle-consistency and perceptual losses, and PID uses a decoupling network for image dehazing trained only on unpaired data. These methods demonstrate the great potential of unpaired dehazing for improving performance on real haze images. However, neither fully exploits the available information, and there remains substantial room to improve dehazing performance on real haze images.
In addition, most existing image dehazing methods target synthesized haze images; although they achieve good results on such images, their performance degrades on real haze images. Addressing these shortcomings, the invention provides an unpaired image dehazing method for real haze images, comprising two aspects: unpaired training and supervised training. For unpaired training, a cycle generative adversarial network first generates haze-removed images; a perceptual loss then makes the features of the haze-removed images more consistent with those of the unpaired clear images, improving dehazing performance; finally, intrinsic properties of clear images are exploited to further improve performance. For supervised training, a pseudo-label scheme is designed: the haze-removed images generated by the unpaired training module serve as pseudo clear images, enabling supervised training that further improves dehazing performance.
Fig. 2 is a flowchart of a method for removing haze from vehicle-mounted video of a high-speed railway according to an embodiment of the present invention. As shown in Fig. 2, in one embodiment the method includes:
acquiring haze images and unpaired clear images;
constructing a haze removal network from a cycle generative adversarial network;
performing unpaired training of the haze removal network on the haze images and the unpaired clear images, and determining haze-removed images;
performing supervised training of the unpaired-trained haze removal network using the haze-removed images, and determining a trained haze removal network;
and removing haze-containing images from the high-speed railway vehicle-mounted video with the trained haze removal network, and determining the haze-free vehicle-mounted video.
The invention provides an unpaired image dehazing method whose overall flow is shown in Fig. 2. The method comprises two modules. The first, unpaired training, generates a pseudo clear image that serves as the corresponding clear image of a haze image for subsequent supervised training. The second, supervised training, trains on each haze image paired with the haze-removed image generated by unpaired training, further improving dehazing performance. Experimental results on public haze image datasets and on a real haze and sandstorm image dataset collected by the inventors demonstrate the effectiveness of the proposed method.
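Under the pseudo-label scheme just described, the supervised stage can apply an ordinary paired loss between the network output and the pseudo clear image. A minimal sketch, assuming an L1 pixel loss (the passage does not name the exact supervised loss, and the function name is illustrative):

```python
def pseudo_label_supervised_loss(dehazed, pseudo_clear):
    """Supervised loss on pseudo labels: the haze-removed image produced by
    the unpaired stage is reused as the "pseudo clear" target for its haze
    image, so a plain pixelwise L1 loss applies as in paired training.
    Images are flattened to lists of pixel values for this sketch."""
    return sum(abs(p - q) for p, q in zip(dehazed, pseudo_clear)) / len(dehazed)
```

In a full system this loss would be evaluated on the output of the generator for each haze image against the stored pseudo clear image from the unpaired module.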
Fig. 3 is a schematic diagram of the unpaired training process of the haze removal method according to an embodiment of the present invention. In one embodiment, performing unpaired training of the haze removal network on the haze images and the unpaired clear images includes:
step 301: taking the haze images and the unpaired clear images as the training set and feeding them into the haze removal network, which comprises a first generator and a second generator, a first discriminator and a second discriminator, and a first mapping and a second mapping; the first generator generates haze-removed images and the second generator generates pseudo haze images;
step 302: determining a first adversarial loss from the first discriminator, the haze image, the unpaired clear image, the haze-removed image and the first mapping; the first adversarial loss is used to train the haze removal network;
step 303: determining a second adversarial loss from the second discriminator, the haze image, the unpaired clear image, the pseudo haze image and the second mapping; the second adversarial loss is used to train the haze removal network;
step 304: determining a cycle-consistency loss from the second generator, the haze-removed image and the haze image, and from the first generator, the pseudo haze image and the unpaired clear image; the cycle-consistency loss constrains the haze removal network;
step 305: determining a cyclic perceptual-consistency loss from features extracted by selected network layers from the haze image, the haze-removed image and the second generator, and from the unpaired clear image, the pseudo haze image and the first generator; the cyclic perceptual-consistency loss is used to train the haze removal network;
step 306: determining a total-variation loss from the horizontal and vertical gradients of the haze-removed image; the total-variation loss is used to train the haze removal network;
step 307: determining a dark channel prior from the color channels and a local neighborhood centered on each pixel; the dark channel prior is used to train the haze removal network;
step 308: determining a dark-channel loss from the dark channel prior and the haze-removed image; the dark-channel loss is used to train the haze removal network;
step 309: determining an unpaired total loss function from the first adversarial loss, the second adversarial loss, the cycle-consistency loss, the cyclic perceptual-consistency loss, the total-variation loss and the dark-channel loss;
step 310: performing unpaired training of the haze removal network according to the dark channel prior and the unpaired total loss function, completing the unpaired training, and outputting the haze-removed images.
In this embodiment, the real haze images belong to domain X and the unpaired clear images to domain Y; the goal of image dehazing is to map X to Y to obtain haze-removed images. A cycle generative adversarial network (CycleGAN) is used as the haze removal network. Suppose the haze images are x_i ∈ X (i = 1, …, M), where M is the number of haze images, and the unpaired clear images are y_i ∈ Y (i = 1, …, N), where N is the number of clear images. The model includes two generators, a first generator G and a second generator F; two discriminators, a first discriminator D_Y and a second discriminator D_X; and two mappings, the first mapping G: X → Y and the second mapping F: Y → X. D_X distinguishes the haze image x from the generated pseudo haze image F(y); D_Y distinguishes the clear image y from the haze-removed image G(x).
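The wiring of the two generators and two discriminators can be illustrated with toy stand-ins. The constant "haze offset" and every function body below are illustrative only, not the patented networks; real generators and discriminators are convolutional neural networks:

```python
# Toy stand-ins for the four networks: G maps domain X (hazy) to domain Y
# (clear), F maps Y back to X, and D_X / D_Y are the discriminators.
# These one-liners (an assumed constant "haze offset") only show the wiring.

HAZE = 0.2

def G(x):                      # first generator, X -> Y: dehaze
    return [v - HAZE for v in x]

def F(y):                      # second generator, Y -> X: re-haze
    return [v + HAZE for v in y]

def D_X(img):                  # discriminator for domain X (stub score)
    return 0.5

def D_Y(img):                  # discriminator for domain Y (stub score)
    return 0.5

x = [0.8, 0.9]                 # a "hazy" image (flattened pixels)
dehazed = G(x)                 # haze-removed image G(x)
cycled = F(dehazed)            # round trip F(G(x)) should approximate x
```

The round trip F(G(x)) ≈ x is exactly the property the cycle-consistency loss below enforces for the real networks.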
When the method for removing haze from vehicle-mounted video of a high-speed railway provided by the embodiment of the invention is implemented, in one embodiment the first adversarial loss is determined as:

L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]   (1)

where L_GAN(G, D_Y, X, Y) is the first adversarial loss; D_Y is the first discriminator; x_i ∈ X (i = 1, …, M) are the haze images, M being the number of haze images; y_i ∈ Y (i = 1, …, N) are the unpaired clear images, N being the number of clear images; G(x) is the haze-removed image; G: X → Y is the first mapping; and E_{y~p_data(y)}[·] and E_{x~p_data(x)}[·] denote expectations over the data distributions of y and x, respectively.
The foregoing expression for the first adversarial loss is given by way of example; it will be understood by those skilled in the art that the formula may be modified, other parameters or data may be added, or other specific formulas may be provided as needed, and such modifications fall within the scope of the present invention.
In an embodiment, the haze removal network is trained with the first adversarial loss for the mapping G: X → Y and the discriminator D_Y, as in equation (1): the first generator G tries to generate images resembling domain Y, while the first discriminator D_Y tries to distinguish G(x) from y.
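As a concrete illustration, the batch-averaged form of the adversarial loss in equation (1) can be sketched in plain Python. The name `gan_loss` and its argument shapes are illustrative; a real implementation would operate on discriminator outputs from a neural network:

```python
import math

def gan_loss(d_real, d_fake):
    """Adversarial loss of Eq. (1), averaged over a batch.

    d_real: discriminator scores D_Y(y) on real clear images, each in (0, 1).
    d_fake: discriminator scores D_Y(G(x)) on dehazed images, each in (0, 1).
    The discriminator maximizes this value; the generator minimizes the
    second term by pushing D_Y(G(x)) toward 1.
    """
    term_real = sum(math.log(s) for s in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - s) for s in d_fake) / len(d_fake)
    return term_real + term_fake
```

The same function yields the second adversarial loss of equation (2) when given D_X scores on real haze images x and on pseudo haze images F(y).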
When the method for removing haze from vehicle-mounted video of a high-speed railway provided by the embodiment of the invention is implemented, in one embodiment the second adversarial loss is determined as:

L_GAN(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 - D_X(F(y)))]   (2)

where L_GAN(F, D_X, Y, X) is the second adversarial loss; D_X is the second discriminator; x_i ∈ X (i = 1, …, M) are the haze images, M being the number of haze images; y_i ∈ Y (i = 1, …, N) are the unpaired clear images, N being the number of clear images; F(y) is the pseudo haze image; F: Y → X is the second mapping; and the expectations are over the data distributions of x and y, respectively.
The foregoing expression for the second adversarial loss is given by way of example; it will be understood by those skilled in the art that the formula may be modified, other parameters or data may be added, or other specific formulas may be provided as needed, and such modifications fall within the scope of the present invention.
In an embodiment, for the mapping F: Y → X and the discriminator D_X, the adversarial loss can be expressed as equation (2): the second generator F tries to generate images resembling domain X, while the second discriminator D_X tries to distinguish F(y) from x.
When the method for removing haze from vehicle-mounted video of a high-speed railway provided by the embodiment of the invention is implemented, in one embodiment the cycle-consistency loss is determined as:

L_Cycle = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]   (3)

where L_Cycle is the cycle-consistency loss; G and F are the first and second generators; G(x) is the haze-removed image; F(y) is the pseudo haze image; x_i ∈ X (i = 1, …, M) are the haze images and y_i ∈ Y (i = 1, …, N) the unpaired clear images; and the expectations are over the data distributions of x and y, respectively.
The foregoing expression for the cycle-consistency loss is given by way of example; it will be understood by those skilled in the art that the formula may be modified, other parameters or data may be added, or other specific formulas may be provided as needed, and such modifications fall within the scope of the present invention.
In an embodiment, the GAN network is constrained with the cycle-consistency loss, as expressed in equation (3).
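For a single batch, equation (3) reduces to L1 distances between each image and its round-trip reconstruction. A minimal sketch with flat lists standing in for images (function names are illustrative):

```python
def l1(a, b):
    """Mean absolute difference between two equally sized flattened images."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(x, f_g_x, y, g_f_y):
    """Cycle-consistency loss of Eq. (3): each image should survive the
    round trip, F(G(x)) ~ x and G(F(y)) ~ y, measured with an L1 norm.
    f_g_x is the precomputed F(G(x)); g_f_y is the precomputed G(F(y))."""
    return l1(f_g_x, x) + l1(g_f_y, y)
```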
When the method for removing haze from vehicle-mounted video of a high-speed railway provided by the embodiment of the invention is implemented, in one embodiment the cyclic perceptual-consistency loss is determined as:

L_Per = ||φ(x) - φ(F(G(x)))||_2^2 + ||φ(y) - φ(G(F(y)))||_2^2   (4)

where L_Per is the cyclic perceptual-consistency loss; G and F are the first and second generators; G(x) is the haze-removed image and F(y) the pseudo haze image; x_i ∈ X (i = 1, …, M) are the haze images and y_i ∈ Y (i = 1, …, N) the unpaired clear images; and φ extracts features from the relu1_2, relu2_2 and relu3_3 layers of VGG-16.
The foregoing expression for the cyclic perceptual-consistency loss is given by way of example; it will be understood by those skilled in the art that the formula may be modified, other parameters or data may be added, or other specific formulas may be provided as needed, and such modifications fall within the scope of the present invention.
In an embodiment, to maintain the sharpness of the haze-removed image, the model is trained with the cyclic perceptual-consistency loss of equation (4).
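A sketch of equation (4), with `phi` as a stand-in for the VGG-16 feature extractor; any callable returning a list of feature vectors fits this sketch, and all names are illustrative:

```python
def squared_l2(a, b):
    """Squared L2 distance between two flat feature vectors."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

def cycle_perceptual_loss(phi, x, f_g_x, y, g_f_y):
    """Cyclic perceptual-consistency loss of Eq. (4).

    `phi` maps an image to a list of feature vectors; in the described
    method it would return the relu1_2, relu2_2 and relu3_3 activations
    of VGG-16. f_g_x and g_f_y are the reconstructions F(G(x)), G(F(y))."""
    loss = 0.0
    for fa, fb in zip(phi(x), phi(f_g_x)):
        loss += squared_l2(fa, fb)
    for fa, fb in zip(phi(y), phi(g_f_y)):
        loss += squared_l2(fa, fb)
    return loss
```

With a real VGG-16, `phi` would run the frozen network and collect the three named activation maps.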
An image is well dehazed if it has properties similar to those of a clear image. Therefore, to make the haze-removed images share those properties, intrinsic properties of clear images, such as the total variation and the dark channel prior, are introduced to further train the image dehazing network.
When the method for removing haze from vehicle-mounted video of a high-speed railway provided by the embodiment of the invention is implemented, in one embodiment the total-variation loss is determined as:

L_T = ||∂_h G(x)||_1 + ||∂_v G(x)||_1   (5)

where L_T is the total-variation loss; G(x) is the haze-removed image; and ∂_h and ∂_v are the horizontal and vertical gradient operators.
The foregoing expression for the total-variation loss is given by way of example; it will be understood by those skilled in the art that the formula may be modified, other parameters or data may be added, or other specific formulas may be provided as needed, and such modifications fall within the scope of the present invention.
In an embodiment, the total-variation loss is an L1-regularized gradient prior on the predicted image, as expressed in equation (5).
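Equation (5) can be computed directly with finite differences. This sketch treats a single-channel image as a list of rows; the function name is illustrative:

```python
def total_variation_loss(img):
    """Total-variation loss of Eq. (5): L1 norm of the horizontal and
    vertical gradients of the dehazed image. `img` is a 2-D grid
    (list of rows of pixel values) for a single channel."""
    h_grad = sum(abs(row[j + 1] - row[j])             # horizontal differences
                 for row in img for j in range(len(row) - 1))
    v_grad = sum(abs(img[i + 1][j] - img[i][j])       # vertical differences
                 for i in range(len(img) - 1) for j in range(len(img[0])))
    return h_grad + v_grad
```

A perfectly flat image has zero total variation, so minimizing this term suppresses noise while haze-free structure is preserved by the other losses.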
When the method for removing haze of the vehicle-mounted video of the high-speed railway is implemented, in one embodiment, a dark channel prior is determined according to the following mode:
Figure SMS_11
wherein ,Jdark (i) Is dark channel prior; c is a color channel, r, g, b represents red, green and blue; Ω (i) is a local neighborhood centered on pixel i; j is the pixel coordinates of the image.
The foregoing expression for determining the dark channel prior is given by way of example, and it will be understood by those skilled in the art that the above formula may be modified and added with other parameters or data in a certain manner or other specific formulas may be provided as needed in the practice, and these modifications shall fall within the protection scope of the present invention.
In an embodiment, the dark channel prior may be expressed as equation (6) above.
When the method for removing haze of the vehicle-mounted video of the high-speed railway is implemented, in one embodiment, dark channel loss is determined according to the following mode:
L_Dark = ||J_dark(G(x))||_1   (7)

where L_Dark is the dark channel loss; G(x) is the haze-removed image; J_dark is the dark channel prior.
The foregoing expression for determining the dark channel loss is given by way of example. Those skilled in the art will appreciate that, as needed in practice, the above formula may be modified, other parameters or data may be added, or other specific formulas may be provided; such modifications are within the scope of the present invention.
In an embodiment, the luminance of most pixels of a dark channel image is zero or near zero. The dark channel loss therefore constrains the dark channel of the haze-removed image to be consistent with the dark channel of a clear image, so that the haze-removed image better matches a clear image. The dark channel loss can be expressed as equation (7) above.
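Equations (6) and (7) can be sketched together: the dark channel is the patch-wise minimum over the three color channels, and the dark channel loss is the L1 norm of the dark channel of the haze-removed image. The patch size and edge padding below are illustrative assumptions:

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel prior of Eq. (6): for each pixel i, the minimum
    intensity over the r, g, b channels within a neighborhood Omega(i)."""
    h, w, _ = img.shape
    # Per-pixel minimum over the color channels.
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            # Minimum over the patch centered on (y, x).
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def dark_channel_loss(dehazed: np.ndarray) -> float:
    """Dark channel loss of Eq. (7): L1 norm of the dark channel of the
    haze-removed image G(x); near zero for haze-free images."""
    return float(np.abs(dark_channel(dehazed)).sum())
```

For a haze-free image at least one channel is dark in every patch, so the loss stays near zero; residual haze lifts all three channels and is penalized.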
When the method for removing haze of the vehicle-mounted video of the high-speed railway provided by the embodiment of the invention is implemented, in one embodiment, the unpaired total loss function is determined according to the following mode:
L = λ_GAN (L_GAN(G, D_Y, X, Y) + L_GAN(G, D_X, Y, X)) + λ_Cycle L_Cycle + λ_Per L_Per + λ_T L_T + λ_Dark L_Dark   (8)

where L is the unpaired total loss function; L_GAN(G, D_Y, X, Y) is the first adversarial loss; L_GAN(G, D_X, Y, X) is the second adversarial loss; L_Cycle is the cycle consistency loss; L_Per is the cyclic perceptual consistency loss; L_T is the total variation loss; L_Dark is the dark channel loss; λ_GAN, λ_Cycle, λ_Per, λ_T and λ_Dark are balance parameters.
The foregoing expression for determining the unpaired total loss function is given by way of example. Those skilled in the art will appreciate that, as needed in practice, the foregoing equation may be modified, other parameters or data may be added, or other specific equations may be provided, while still falling within the scope of the present invention.
In an embodiment, for unpaired training, the total loss function L may be expressed as equation (8) above.
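As a sketch of how equation (8) combines the individual terms, the helper below forms the weighted sum of the loss values; the default λ values are placeholders for illustration only, not values taken from the text:

```python
def unpaired_total_loss(l_gan_xy, l_gan_yx, l_cycle, l_per, l_tv, l_dark,
                        lam_gan=1.0, lam_cycle=10.0, lam_per=1.0,
                        lam_tv=1e-4, lam_dark=1e-2):
    """Weighted combination of Eq. (8):
    L = lam_gan * (L_GAN(G,D_Y,X,Y) + L_GAN(G,D_X,Y,X))
        + lam_cycle * L_Cycle + lam_per * L_Per
        + lam_tv * L_T + lam_dark * L_Dark.
    The lambda defaults are illustrative balance parameters."""
    return (lam_gan * (l_gan_xy + l_gan_yx)
            + lam_cycle * l_cycle + lam_per * l_per
            + lam_tv * l_tv + lam_dark * l_dark)
```

In practice the balance parameters trade off adversarial realism against the cycle, perceptual, and prior-based constraints.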
Fig. 4 is a schematic diagram of the supervised training process of the high-speed railway vehicle-mounted video haze removal method according to an embodiment of the present invention. As shown in fig. 4, when the method for high-speed railway vehicle-mounted video haze removal provided by the embodiment of the invention is implemented, in one embodiment, performing supervised training, according to the haze-removed image, on the haze removal network that has completed the unpaired training, and determining the trained haze removal network, includes:
step 401: determining a mean square error loss according to the haze image and the haze removal image; the mean square error loss is used for performing supervised training;
Step 402: according to the haze image and the haze removal image, extracting features from the combination layer, and determining perception loss; the perception loss is used for performing supervised training;
step 403: determining a supervised total loss function according to the mean square error loss and the perceived loss;
step 404: performing supervised training, according to the supervised total loss function, on the haze removal network that has completed the unpaired training, and determining the trained haze removal network.
In this embodiment, the real haze image x has no corresponding clear image, so supervised training cannot be performed directly; how to perform supervised training on the model is therefore a challenging problem, and the embodiment of the invention provides a supervised training scheme for it. Specifically, the haze-removed images G(x) obtained from the unpaired training are used as pseudo-clear images for supervised training, so that the image haze-removal performance is further improved.
When the haze removing method for the vehicle-mounted video of the high-speed railway is implemented, in one embodiment, the mean square error loss is determined according to the following mode:
L_MSE = (1/M) Σ_{i=1}^{M} ||G(x_i) − ỹ_i||_2^2   (9)

where L_MSE is the mean square error loss; x is the haze image, x_i ∈ X (i = 1, …, M), M being the number of haze images and x_i the i-th haze image; G(x_i) is the haze-removed image; ỹ_i is the pseudo-clear image of x_i generated by the unpaired training.
The foregoing expressions for determining the mean square error loss are given by way of example, and it will be understood by those skilled in the art that the above-described expressions may be modified and other parameters or data may be added as needed or other specific expressions may be provided, and such modifications are intended to fall within the scope of the present invention.
In an embodiment, the supervised training using the mean square error loss function may be expressed as equation (9) above, where M is the number of haze images.
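A minimal sketch of the supervised-stage mean square error loss of equation (9), assuming the targets are the pseudo-clear images produced by the unpaired stage and that the squared error is averaged over the M training images (both assumptions, since the original formula image is not reproduced in the text):

```python
import numpy as np

def mse_loss(dehazed: np.ndarray, pseudo_clear: np.ndarray) -> float:
    """Eq. (9) sketch: mean over the M images (first axis) of the
    squared L2 distance between the network outputs G(x_i) and the
    pseudo-clear targets from the unpaired stage."""
    m = dehazed.shape[0]  # number of haze images M
    return float(((dehazed - pseudo_clear) ** 2).sum() / m)
```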
When the method for removing haze of the vehicle-mounted video of the high-speed railway provided by the embodiment of the invention is implemented, in one embodiment, the perceived loss is determined according to the following mode:
L_Per′ = (1/M) Σ_{i=1}^{M} ||φ(G(x_i)) − φ(ỹ_i)||_2^2   (10)

where L_Per′ is the perceptual loss; x is the haze image, x_i ∈ X (i = 1, …, M), M being the number of haze images and x_i the i-th haze image; G(x_i) is the haze-removed image; ỹ_i is the pseudo-clear image of x_i generated by the unpaired training; φ denotes features extracted from the relu1_2, relu2_2 and relu3_3 layers of VGG-16.
The foregoing expressions for determining the perceived loss are given by way of example, and it will be understood by those skilled in the art that the foregoing expressions may be modified and other parameters or data may be added as desired or other specific expressions may be provided, and such modifications are intended to fall within the scope of the invention.
In an embodiment, in addition to the mean square error loss function, the perceptual loss function is also used for the supervised training of the image haze-removal model; the perceptual loss can be expressed as equation (10) above.
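The perceptual loss of equation (10) compares feature representations rather than raw pixels. The sketch below substitutes a simple average-pooling stand-in for the VGG-16 relu1_2/relu2_2/relu3_3 feature extractor φ, purely to make the computation concrete; a real implementation would use the pretrained VGG-16 layers:

```python
import numpy as np

def pooled_features(img: np.ndarray, k: int = 2) -> np.ndarray:
    """Stand-in for the VGG-16 feature maps phi: plain k x k average
    pooling of a single-channel image. Illustrative assumption only."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def perceptual_loss(dehazed: np.ndarray, pseudo_clear: np.ndarray) -> float:
    """Eq. (10) sketch: squared L2 distance between the feature
    representations of G(x) and the pseudo-clear target."""
    diff = pooled_features(dehazed) - pooled_features(pseudo_clear)
    return float((diff ** 2).sum())
```

The design point is that feature-space distances tolerate small pixel shifts while still penalizing structural differences, which pixel-wise MSE does not.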
When the method for removing haze of the vehicle-mounted video of the high-speed railway provided by the embodiment of the invention is implemented, in one embodiment, a supervised total loss function is determined according to the following mode
L_s = L_MSE + λ_Sup L_Per′   (11)

where L_s is the supervised total loss function; L_MSE is the mean square error loss; L_Per′ is the perceptual loss; λ_Sup is a balance parameter.
The foregoing expression for determining the supervised total loss function is given by way of example. Those skilled in the art will appreciate that, as needed in practice, the above formula may be modified, other parameters or data may be added, or other specific formulas may be provided; such modifications are within the scope of the present invention.
In an embodiment, for supervised training, the total loss function L_s can be expressed as equation (11) above.
The method for removing haze of the vehicle-mounted video of the high-speed railway provided by the embodiment of the invention is briefly described below by combining with a specific scene:
The invention provides an unpaired image haze-removal method. The method comprises two aspects: one is unpaired training, which generates a pseudo-clear image as the corresponding clear image of a haze image; the other is supervised training, which uses the paired haze and pseudo-clear images generated by the unpaired training. Experimental results on a public haze image dataset and on the real haze and sandstorm image datasets we collected demonstrate the effectiveness of the proposed method.
The invention provides an unpaired image haze-removal method to solve the haze-removal problem of real haze images. The method comprises two training steps: unpaired training and supervised training. For unpaired training, haze-removed images are first generated with a cycle-generative adversarial network; then the perceptual loss is used to make the features of the haze-removed images and the unpaired clear images more consistent, improving the image haze-removal performance; finally, the intrinsic properties of clear images are exploited to further improve the haze-removal performance. For supervised training, a pseudo-label scheme is designed: the haze-removed images generated by the unpaired training are used as pseudo-clear images, and supervised training is carried out to further improve the haze-removal performance.
For the experimental study, a real high-speed railway haze image dataset was collected and organized. Firstly, haze videos were captured on a high-speed railway in haze weather; then 2000 training images and 500 test images were selected, and the dataset was named the HRHI (High-speed Railway Hazy Images) dataset. In addition, 2000 unpaired clear images were collected for the experiments. The dataset thus mainly comprises haze images and unpaired clear images.
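The train/test carve-out described above can be sketched as follows; the frame names and the random-sampling selection strategy are illustrative assumptions, since the text does not state how the 2000/500 images were chosen:

```python
import random

def split_dataset(frames, n_train=2000, n_test=500, seed=0):
    """Hypothetical sketch of carving an HRHI-style train/test split
    out of extracted video frames. The counts follow the text; the
    random selection strategy and file names are assumptions."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    frames = list(frames)
    rng.shuffle(frames)
    assert len(frames) >= n_train + n_test, "not enough frames"
    return frames[:n_train], frames[n_train:n_train + n_test]

train, test = split_dataset(f"frame_{i:05d}.png" for i in range(3000))
```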
In addition, in order to verify the generalization of the proposed method, a number of experiments were performed on the D-HAZY haze image dataset, which is synthesized from the NYU-Depth and Middlebury datasets. The NYU-Depth portion of D-HAZY has 1449 pairs of synthetic haze images and sharp images. Since the D-HAZY dataset has paired haze and sharp images, following the prior art the order of the images is randomly shuffled to simulate unpaired images. The dataset thus provides haze images and unpaired sharp images.
To verify the versatility of the proposed method, a real High-speed Railway Sandstorm Images (HRSI) dataset was also collected. Similar to the haze image acquisition, videos were first captured on a high-speed railway in sandstorm weather, and then 1700 training images and 200 test images were cropped. 2000 sandstorm-free images were also collected for unpaired training; the dataset thus includes sandstorm images and unpaired sandstorm-free images.
The invention provides an unpaired image haze-removal method to solve the haze-removal problem of real haze images: unpaired training is used to generate haze-removed images as pseudo-clear images, the generated pseudo labels are then used as the corresponding clear images of the haze images for supervised training, and competitive results are obtained in haze removal of real haze images.
First, experimental results of real haze images
The method of the embodiment of the invention is compared with other classical image haze-removal methods, including the semi-supervised methods DA and PSD, the unsupervised method DCP, the unpaired image translation method CycleGAN, and the unpaired image haze-removal method CycleDehaze. Since the semi-supervised methods DA and PSD cannot be trained on a real haze image dataset, their pre-trained models are used directly on the test set (the same applies to tables 3 and 5). For the unpaired methods CycleGAN and CycleDehaze, the code was re-implemented and the models were re-trained on the training set and then tested on the test set. The experimental results on the HRHI dataset are shown in table 1. Although the method of the embodiment of the invention trains the model with unpaired rather than paired clear images, it outperforms the semi-supervised methods DA and PSD and is far superior to the unsupervised method DCP. Based on the BLIINDS evaluation index, the proposed method is also superior to the unpaired methods CycleGAN and CycleDehaze. The experimental results demonstrate the effectiveness of the proposed method.
Table 1 experimental results on HRHI dataset
To verify the effectiveness of the components of the proposed method, ablation experiments were performed on the HRHI dataset; the results are shown in table 2. GAN_Cycle refers to training the unpaired module with the GAN loss and the cycle consistency loss. GAN_Cycle_Perceptual refers to training the unpaired module with the GAN loss, the cycle consistency loss, and the perceptual consistency loss. GAN_Cycle_Perceptual_TV additionally uses the total variation loss. GAN_Cycle_Perceptual_TV_DC additionally uses the dark channel loss. GAN_Cycle_Perceptual_TV_DC+Supervised refers to supervised training with the mean square error loss and the perceptual loss after the unpaired module has been trained. As can be seen from table 2, the total variation loss, the dark channel prior loss, and the supervised training module all contribute to image haze removal.
Table 2 ablation study on HRHI dataset
In the qualitative comparison on the HRHI dataset, it can be seen that the haze-removed images generated by the semi-supervised methods DA and PSD exhibit color distortion; the unsupervised method DCP suffers from color distortion and poor haze removal in sky regions; and the haze-removed images generated by the proposed method are clearer and more natural.
Second, experimental results of general haze images
In order to verify the generalization ability of the method of this embodiment, experimental verification was performed on the D-HAZY dataset. Since this dataset has paired sharp images, PSNR and SSIM can be used for evaluation. The method of the embodiment of the invention is compared with other classical image haze-removal methods, including the supervised methods CAP, MSCNN and DehazeNet, the semi-supervised methods DA and PSD, the unsupervised method DCP, the unpaired image translation method CycleGAN, and the unpaired image haze-removal method CycleDehaze. Except for the DA and PSD methods, the experimental results of the other methods are taken from the CycleDehaze literature. The results are shown in table 3. As can be seen from table 3, the method of this embodiment is superior to the supervised methods CAP, MSCNN and DehazeNet, the unsupervised method DCP, the semi-supervised method PSD, and the unpaired methods CycleGAN and CycleDehaze. Compared with the semi-supervised DA method and the unpaired PID method, our method obtains the best results on the PSNR evaluation index and competitive results on the SSIM evaluation index.
Table 3 quantitative comparison on the D-HAZY dataset.
The ablation experiments on the D-HAZY dataset are shown in table 4. It can be seen that, on the D-HAZY dataset as well, the total variation loss, the dark channel prior loss, and the supervised training module all contribute to image haze removal. The qualitative comparison on the D-HAZY dataset includes the synthesized haze images, the haze-removed images generated by the DA and PSD methods, and the ground-truth images. It can be seen that the haze-removed images generated by the proposed method have a better visual effect and are closer to the ground-truth images.
Table 4 ablation study on the D-HAZY dataset
Method PSNR↑ SSIM↑
GAN_Cycle 14.68 0.60
GAN_Cycle_Perceptual 15.78 0.70
GAN_Cycle_Perceptual_TV 15.88 0.73
GAN_Cycle_Perceptual_TV_DC 16.57 0.76
GAN_Cycle_Perceptual_TV_DC+Supervised 16.63 0.76
Third, experimental results of the sand storm image
In order to verify the generality of the proposed method, experiments were performed on the sandstorm images of the HRSI dataset; the results are shown in table 5 and indicate the effectiveness and generality of the method. In the qualitative comparison on the HRSI dataset, the sandstorm-removal effect of the DA method is not obvious, the PSD method changes the colors of the image, and the images generated by the method of the embodiment of the invention are more natural.
Table 5 ablation study on HRSI sand storm dataset
Method BLIINDS↓ NIQE↓
DA 15.06 5.11
PSD 17.73 5.13
GAN_Cycle 16.09 5.22
GAN_Cycle_Perceptual 11.96 5.20
GAN_Cycle_Perceptual_TV 11.75 5.12
GAN_Cycle_Perceptual_TV_DC 14.37 5.08
GAN_Cycle_Perceptual_TV_DC+Supervised 10.78 5.23
Fig. 5 is a schematic diagram of a computer device for running a method for haze removal of a vehicle-mounted video of a high-speed railway, and as shown in fig. 5, an embodiment of the invention further provides a computer device 500, including a memory 510, a processor 520, and a computer program 530 stored in the memory and capable of running on the processor, where the processor implements the method for haze removal of a vehicle-mounted video of a high-speed railway when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the haze removing method for the vehicle-mounted video of the high-speed railway when being executed by a processor.
The embodiment of the invention also provides a computer program product, which comprises a computer program, and the computer program realizes the haze removing method for the vehicle-mounted video of the high-speed railway when being executed by a processor.
The embodiment of the invention also provides a device for removing haze of the vehicle-mounted video of the high-speed railway, which is described in the following embodiment. Because the principle of the device for solving the problems is similar to that of a high-speed railway vehicle-mounted video haze removing method, the implementation of the device can be referred to the implementation of the high-speed railway vehicle-mounted video haze removing method, and repeated parts are not repeated.
Fig. 6 is a schematic diagram of a vehicle-mounted video haze removal device for a high-speed railway according to an embodiment of the present invention, and as shown in fig. 6, the embodiment of the present invention further provides a vehicle-mounted video haze removal device for a high-speed railway.
When the haze removing device for the vehicle-mounted video of the high-speed railway provided by the embodiment of the invention is implemented, in one embodiment, the haze removing device comprises:
an image acquisition module 601, configured to acquire a haze image and an unpaired clear image;
the haze removal network construction module 602 is configured to construct a haze removal network by using the cyclic generation countermeasure network;
the unpaired training module 603 is configured to perform unpaired training on the haze removal network according to the haze image and the unpaired clear image, and determine a haze removal image;
the supervised training module 604 is configured to perform supervised training on the haze-removed network after the unpaired training is completed according to the haze-removed image, and determine a trained haze-removed network;
the high-speed railway vehicle-mounted video haze removal module 605 is used for removing images containing haze in the high-speed railway vehicle-mounted video according to the trained haze removal network, and determining the haze-free high-speed railway vehicle-mounted video.
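The five modules above can be sketched as one pipeline; the callable-based structure below is a hypothetical illustration of how the modules compose, not the patented implementation:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class HazeRemovalDevice:
    """Hypothetical sketch of the device: each of the five modules
    (601-605) is modelled as a callable stage."""
    acquire_images: Callable[[], Any]            # module 601
    build_network: Callable[[], Any]             # module 602
    unpaired_train: Callable[[Any, Any], Any]    # module 603
    supervised_train: Callable[[Any, Any], Any]  # module 604
    dehaze_video: Callable[[Any, Any], Any]      # module 605

    def run(self, video: Any) -> Any:
        # Acquire data, build and train the network, then dehaze the video.
        data = self.acquire_images()
        net = self.build_network()
        dehazed = self.unpaired_train(net, data)
        trained = self.supervised_train(net, dehazed)
        return self.dehaze_video(trained, video)
```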
In summary, the method and the device for removing haze from vehicle-mounted video of a high-speed railway provided by the embodiments of the invention comprise: acquiring haze images and unpaired clear images; constructing a haze removal network using a cycle-generative adversarial network; performing unpaired training on the haze removal network according to the haze images and the unpaired clear images, and determining haze-removed images; performing supervised training, according to the haze-removed images, on the haze removal network that has completed the unpaired training, and determining a trained haze removal network; and removing haze from images containing haze in the high-speed railway vehicle-mounted video according to the trained haze removal network, and determining a haze-free high-speed railway vehicle-mounted video. In the invention, the cycle-generative adversarial network generates haze-removed images during unpaired training, and the haze-removed images serve as the corresponding clear images of the haze images; making the features of the haze-removed images and the unpaired clear images more consistent improves the image haze-removal performance. The haze removal network is then supervised-trained on the paired haze images and the haze-removed images generated by the unpaired training, and the intrinsic properties of clear images are used to further improve the image haze-removal performance. By performing both unpaired training and supervised training on the haze removal network, the haze-removal effect on actually acquired haze images can be improved.
According to the technical scheme, the data acquisition, storage, use, processing and the like all meet the relevant regulations of national laws and regulations, and various types of data such as personal identity data, operation data, behavior data and the like related to individuals, clients, crowds and the like acquired by the method are authorized.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (18)

1. The haze removing method for the vehicle-mounted video of the high-speed railway is characterized by comprising the following steps of:
acquiring haze images and unpaired clear images;
constructing a haze removal network by using a circularly generated countermeasure network;
according to the haze images and unpaired clear images, unpaired training is carried out on the haze removal network, and haze removal images are determined;
according to the haze-removed image, performing supervised training on the haze removal network that has completed the unpaired training, and determining a trained haze removal network;
and removing images containing haze in the high-speed railway vehicle-mounted video according to the trained haze-removing network, and determining the haze-free high-speed railway vehicle-mounted video.
2. The method of claim 1, wherein the unpaired training of the haze removal network to determine the haze removal image based on the haze image and the unpaired clear image comprises:
taking the haze image and the unpaired clear image as the training set, and importing them into the haze removal network; the haze removal network comprises: a first generator and a second generator, a first discriminator and a second discriminator, and a first mapping and a second mapping; the first generator is used for generating the haze-removed image; the second generator is used for generating a pseudo haze image;
determining a first adversarial loss according to the first discriminator, the haze image, the unpaired clear image, the haze-removed image and the first mapping; the first adversarial loss is used for training the haze removal network;
determining a second adversarial loss according to the second discriminator, the haze image, the unpaired clear image, the pseudo haze image and the second mapping; the second adversarial loss is used for training the haze removal network;
determining a cycle consistency loss according to the second generator, the haze-removed image and the haze image, and the first generator, the pseudo haze image and the unpaired clear image; the cycle consistency loss is used for constraining the haze removal network;
determining a cyclic perceptual consistency loss according to the haze image, the haze-removed image, the second generator, the unpaired clear image, the pseudo haze image and the first generator, with features extracted from the combination layer; the cyclic perceptual consistency loss is used for training the haze removal network;
determining a total variation loss according to the haze-removed image in combination with the horizontal gradient and the vertical gradient; the total variation loss is used for training the haze removal network;
determining a dark channel prior according to the color channels and a local neighborhood centered on each pixel; the dark channel prior is used for training the haze removal network;
determining a dark channel loss according to the dark channel prior and the haze-removed image; the dark channel loss is used for training the haze removal network;
determining an unpaired total loss function based on the first adversarial loss, the second adversarial loss, the cycle consistency loss, the cyclic perceptual consistency loss, the total variation loss and the dark channel loss; and
performing unpaired training on the haze removal network according to the dark channel prior and the unpaired total loss function, completing the unpaired training, and outputting the haze-removed image.
3. The method of claim 2, wherein the first countermeasures loss is determined as follows:
L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 − D_Y(G(x)))]

wherein L_GAN(G, D_Y, X, Y) is the first adversarial loss; D_Y is the first discriminator; x is the haze image, x_i ∈ X (i = 1, …, M), where M is the number of haze images; y is the unpaired clear image, y_i ∈ Y (i = 1, …, N), where N is the number of unpaired clear images; G(x) is the haze-removed image; G: X→Y is the first mapping; E_{y~p_data(y)}[·] is the expectation over the probability distribution p_data(y); E_{x~p_data(x)}[·] is the expectation over the probability distribution p_data(x).
4. The method of claim 2, wherein the second countermeasures against loss is determined as follows:
L_GAN(G, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 − D_X(F(y)))]

wherein L_GAN(G, D_X, Y, X) is the second adversarial loss; D_X is the second discriminator; x is the haze image, x_i ∈ X (i = 1, …, M), where M is the number of haze images; y is the unpaired clear image, y_i ∈ Y (i = 1, …, N), where N is the number of unpaired clear images; F(y) is the pseudo haze image; F: Y→X is the second mapping; E_{x~p_data(x)}[·] is the expectation over the probability distribution p_data(x); E_{y~p_data(y)}[·] is the expectation over the probability distribution p_data(y).
5. The method of claim 2, wherein the loop consistency loss is determined as follows:
L_Cycle = E_{x~p_data(x)}[||F(G(x)) − x||_1] + E_{y~p_data(y)}[||G(F(y)) − y||_1]

wherein L_Cycle is the cycle consistency loss; F is the second generator; G is the first generator; G(x) is the haze-removed image; F(y) is the pseudo haze image; x is the haze image, x_i ∈ X (i = 1, …, M), where M is the number of haze images; y is the unpaired clear image, y_i ∈ Y (i = 1, …, N), where N is the number of unpaired clear images; E_{x~p_data(x)}[·] is the expectation over the probability distribution p_data(x); E_{y~p_data(y)}[·] is the expectation over the probability distribution p_data(y).
6. The method of claim 2, wherein the loop-aware consistency loss is determined as follows:
L_Per = ||φ(x) − φ(F(G(x)))||_2^2 + ||φ(y) − φ(G(F(y)))||_2^2

wherein L_Per is the cyclic perceptual consistency loss; F is the second generator; G is the first generator; G(x) is the haze-removed image; F(y) is the pseudo haze image; x is the haze image, x_i ∈ X (i = 1, …, M), where M is the number of haze images; y is the unpaired clear image, y_i ∈ Y (i = 1, …, N), where N is the number of unpaired clear images; φ denotes features extracted from the relu1_2, relu2_2 and relu3_3 layers of VGG-16.
7. The method of claim 2, wherein the total variation loss is determined as follows:
L_T = ||α_h G(x)||_1 + ||α_v G(x)||_1

wherein L_T is the total variation loss; G(x) is the haze-removed image; α_h is the horizontal gradient operator; α_v is the vertical gradient operator.
8. The method of claim 2, wherein the dark channel priors are determined in the following manner:
J_dark(i) = min_{j∈Ω(i)} ( min_{c∈{r,g,b}} J^c(j) )

wherein J_dark(i) is the dark channel prior; c is a color channel, with r, g, b denoting red, green and blue; Ω(i) is a local neighborhood centered on pixel i; j is a pixel coordinate of the image; J^c(j) is the intensity of color channel c at pixel j.
9. The method of claim 2, wherein the dark channel loss is determined as follows:
L_Dark = ||J_dark(G(x))||_1

wherein L_Dark is the dark channel loss; G(x) is the haze-removed image; J_dark is the dark channel prior.
10. The method of claim 2, wherein the unpaired total loss function is determined in the following manner:
L = λ_GAN (L_GAN(G, D_Y, X, Y) + L_GAN(G, D_X, Y, X)) + λ_Cycle L_Cycle + λ_Per L_Per + λ_T L_T + λ_Dark L_Dark

wherein L is the unpaired total loss function; L_GAN(G, D_Y, X, Y) is the first adversarial loss; L_GAN(G, D_X, Y, X) is the second adversarial loss; L_Cycle is the cycle consistency loss; L_Per is the cyclic perceptual consistency loss; L_T is the total variation loss; L_Dark is the dark channel loss; λ_GAN, λ_Cycle, λ_Per, λ_T and λ_Dark are balance parameters.
11. The method of claim 2, wherein performing supervised training on the unpaired-trained haze removal network according to the haze-removed image, to determine the trained haze removal network, comprises:

determining a mean square error loss according to the haze image and the haze-removed image, the mean square error loss being used for the supervised training;

extracting features from combined layers according to the haze image and the haze-removed image, and determining a perceptual loss, the perceptual loss being used for the supervised training;

determining a supervised total loss function according to the mean square error loss and the perceptual loss; and

performing supervised training on the unpaired-trained haze removal network according to the supervised total loss function, to determine the trained haze removal network.
12. The method of claim 11, wherein the mean square error loss is determined as follows:

L_MSE = (1/M) Σ_{i=1}^{M} ||x_i − G(x_i)||_2^2

wherein, L_MSE is the mean square error loss; x is a haze image, x_i ∈ X (i = 1, …, M), where M is the number of haze images and x_i is the i-th haze image; G(x) is the haze-removed image.
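The mean square error term averages the squared L2 distance over the M image pairs; a minimal NumPy sketch:

```python
import numpy as np

def mse_loss(x_batch, g_batch):
    """L_MSE = (1/M) * sum_i ||x_i - G(x_i)||_2^2 over M image pairs."""
    m = len(x_batch)
    return sum(np.sum((x - g) ** 2) for x, g in zip(x_batch, g_batch)) / m
```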
13. The method of claim 11, wherein the perceptual loss is determined as follows:

L_Per′ = (1/M) Σ_{i=1}^{M} ||φ(x_i) − φ(G(x_i))||_2^2

wherein, L_Per′ is the perceptual loss; x is a haze image, x_i ∈ X (i = 1, …, M), where M is the number of haze images and x_i is the i-th haze image; G(x) is the haze-removed image; φ extracts features from the relu1_2, relu2_2 and relu3_3 layers of VGG-16.
14. The method of claim 11, wherein the supervised total loss function is determined as follows:

L_s = L_MSE + λ_Sup L_Per′

wherein, L_s is the supervised total loss function; L_MSE is the mean square error loss; L_Per′ is the perceptual loss; λ_Sup is a balance parameter.
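The supervised stage therefore optimizes a two-term objective; as a sketch (the λ_Sup value shown is illustrative, not disclosed by the patent):

```python
def supervised_total_loss(l_mse, l_per, lam_sup=0.8):
    """L_s = L_MSE + lam_Sup * L_Per'. The default lam_sup is a placeholder."""
    return l_mse + lam_sup * l_per
```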
15. A haze removal device for vehicle-mounted video of a high-speed railway, characterized by comprising:

an image acquisition module, used for acquiring haze images and unpaired clear images;

a haze removal network construction module, used for constructing a haze removal network based on a cycle generative adversarial network;

an unpaired training module, used for performing unpaired training on the haze removal network according to the haze images and the unpaired clear images, to determine haze-removed images;

a supervised training module, used for performing supervised training on the unpaired-trained haze removal network according to the haze-removed images, to determine a trained haze removal network; and

a high-speed railway vehicle-mounted video haze removal module, used for removing haze from images containing haze in the high-speed railway vehicle-mounted video according to the trained haze removal network, to determine a haze-free high-speed railway vehicle-mounted video.
16. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 14 when executing the computer program.
17. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1 to 14.
18. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the method of any of claims 1 to 14.
CN202211153204.XA 2022-09-19 2022-09-19 Haze removing method and device for vehicle-mounted video of high-speed railway Pending CN116091331A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211153204.XA CN116091331A (en) 2022-09-19 2022-09-19 Haze removing method and device for vehicle-mounted video of high-speed railway

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211153204.XA CN116091331A (en) 2022-09-19 2022-09-19 Haze removing method and device for vehicle-mounted video of high-speed railway

Publications (1)

Publication Number Publication Date
CN116091331A true CN116091331A (en) 2023-05-09

Family

ID=86205226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211153204.XA Pending CN116091331A (en) 2022-09-19 2022-09-19 Haze removing method and device for vehicle-mounted video of high-speed railway

Country Status (1)

Country Link
CN (1) CN116091331A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563169A (en) * 2023-07-07 2023-08-08 成都理工大学 Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning
CN116563169B (en) * 2023-07-07 2023-09-05 成都理工大学 Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning

Similar Documents

Publication Publication Date Title
Guarnera et al. Preliminary forensics analysis of deepfake images
Li et al. No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features
Oh et al. Blind deep S3D image quality evaluation via local to global feature aggregation
CN101610425B (en) Method for evaluating stereo image quality and device
CN106250555B (en) Vehicle retrieval method and device based on big data
Goebel et al. Detection, attribution and localization of gan generated images
CN109801232A A single image defogging method based on deep learning
CN104751485B (en) GPU adaptive foreground extracting method
CN106295501A Deep learning based personal identification method using lip movement
CN114638767B (en) Laparoscope image smoke removal method based on generation of countermeasure network
KR102605692B1 (en) Method and system for detecting anomalies in an image to be detected, and method for training restoration model there of
Chen et al. Geo-defakehop: High-performance geographic fake image detection
CN116091331A (en) Haze removing method and device for vehicle-mounted video of high-speed railway
CN114972016A (en) Image processing method, image processing apparatus, computer device, storage medium, and program product
CN114119420A (en) Fog image defogging method in real scene based on fog migration and feature aggregation
CN116033279B (en) Near infrared image colorization method, system and equipment for night monitoring camera
CN117252936A (en) Infrared image colorization method and system adapting to multiple training strategies
Poibrenski et al. Towards a methodology for training with synthetic data on the example of pedestrian detection in a frame-by-frame semantic segmentation task
Emeršič et al. Towards accessories-aware ear recognition
Jiang et al. Haze relevant feature attention network for single image dehazing
Zhang et al. Depth combined saliency detection based on region contrast model
CN115953312A (en) Joint defogging detection method and device based on single image and storage medium
CN116258867A (en) Method for generating countermeasure sample based on low-perceptibility disturbance of key region
Feng A survey on video dehazing using deep learning
CN111461091A (en) Universal fingerprint generation method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination